
Creating a Culture of Experimentation

28 April 2025

How to make space for testing, learning, and responsible AI use at work.

Tags: AI · AI at Work · Employee upskilling

Why it matters

For AI to be genuinely useful in a business setting, there needs to be a culture of experimentation. That’s not something most organisations naturally have.

Too often, we see a cycle like this:

  • A new tool gets introduced
  • It’s expected to deliver immediate results
  • If it doesn’t, it’s written off as a failure

But AI doesn’t work like that. It’s still evolving. Use cases aren’t always obvious, and success depends on people being able to try things, reflect, and adjust. That means we need to normalise experimentation - and support it in practice, not just in theory.


Three ways to support experimentation with AI

  1. Encourage curiosity

    People need time and permission to explore tools. Not every tool will be right for your team or your workflows, but exploration is how you find out what’s possible.

    When someone on your team spends an afternoon testing something new - even if it doesn’t lead anywhere - that’s still valuable learning. They’re building knowledge and confidence that will carry over into the next project or tool.

    Make it normal to ask “what does this do?” or “could this help with something we already do?” Curiosity drives adoption and long-term capability.

  2. Embrace failure

    Not every experiment will work. That’s the point.

    Teams need to feel safe trying things that might not succeed. If a tool or use case isn’t the right fit - that’s useful information. It tells you what not to pursue, or what might work better with a different setup or in a few months' time.

    Encourage people to share what didn’t work, not just what did. It helps others avoid wasted time, and it builds a shared understanding of where the tech is actually helpful.

  3. Create guardrails

    Experimentation works best when people understand the boundaries.

    That’s where a clear, simple AI usage policy can help. It doesn’t need to be complicated - just a short set of principles that cover:

    • What data is okay to use
    • Which tools are approved
    • Where it’s fine to experiment, and where to be cautious
    • Who to talk to if you’re not sure

    This creates psychological safety. It gives people a framework to explore without worrying they’re doing something wrong. And it ensures that your organisation stays aligned on privacy, compliance, and risk.
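    If it helps to make the policy concrete, those four principles can even be written down as simple structured data that anyone on the team can read at a glance. The sketch below is a hypothetical illustration in Python; the tool names, data categories, and the check_usage helper are all invented for the example, not a prescription for how your policy should look.

    ```python
    # A hypothetical AI usage policy expressed as structured data.
    # Tool names, data categories, and the contact address are
    # invented for illustration only.

    APPROVED_TOOLS = {"ChatGPT Team", "Microsoft Copilot"}  # example entries

    DATA_RULES = {
        "public": "ok",            # e.g. marketing copy, published docs
        "internal": "caution",     # ask before pasting into external tools
        "personal": "prohibited",  # customer or employee personal data
    }

    POLICY_CONTACT = "ai-questions@yourcompany.example"  # who to ask if unsure


    def check_usage(tool: str, data_category: str) -> str:
        """Return 'ok', 'caution', or 'prohibited' for a proposed experiment."""
        if tool not in APPROVED_TOOLS:
            return f"'{tool}' is not on the approved list; contact {POLICY_CONTACT}"
        # Unknown data categories default to caution rather than approval.
        return DATA_RULES.get(data_category, "caution")


    if __name__ == "__main__":
        print(check_usage("ChatGPT Team", "internal"))  # -> caution
        print(check_usage("SomeNewTool", "public"))     # -> not-approved message
    ```

    The point isn't the code itself; it's that a policy short enough to fit in a snippet like this is one people will actually remember and follow.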


In practice

We often support companies by training both leadership and teams. That dual approach works well - it builds shared language, shared expectations, and space for experimentation across the business.

This kind of culture isn’t about being overly cautious or endlessly testing things for the sake of it. It’s about structured exploration. About trying things with intention, evaluating them thoughtfully, and sharing the results - even when they’re inconclusive.

That’s how you build real capability with AI.

Want More Resources Like This?

Sign up for our Thoughts by Humans newsletter to get the latest AI and data resources delivered straight to your inbox.