Here’s the pattern I see most often: the AI training was excellent. Participants left motivated. Three months later, AI adoption is sitting at 20% and the L&D team is quietly wondering what went wrong. The training didn’t fail. The culture did.
Let me be specific about what I mean — because “AI-positive culture” sounds like one of those phrases that means something different to everyone who uses it, which means it often means nothing at all.
An AI-positive culture is not one where everyone is enthusiastic about AI. That’s not realistic and it’s not the goal. It’s not a culture where AI is mentioned at every all-hands, or where the CEO puts “AI-first” in the company strategy deck. It’s more specific and, in some ways, more demanding than that.
In an AI-positive culture: people feel safe to try AI tools without judgment when they fail — and they will fail, because that’s how learning works. Managers ask about AI use in 1:1s as a matter of genuine interest, not as a surveillance mechanism. Mistakes made with AI are treated as learning rather than liability. AI wins are shared openly without people feeling like they’re showing off. And people are actually given time to explore and experiment — not just told to “be more innovative” while their workloads remain unchanged.
That last point matters more than people usually acknowledge. You cannot build an AI-positive culture by adding an expectation to a full plate. If people are already overwhelmed, they will not experiment with new tools — they will do what they know, because what they know works and the clock is running. The culture has to include structural space for learning, not just permission in principle.
In my experience working with organisations across Singapore and the region, there are four cultural patterns that reliably suppress AI adoption even after good training: rules about AI use that nobody has written down, wins that stay invisible to colleagues, permission to fail that is only ever implicit, and the expectation that culture will shift on a training-session timescale. It's worth naming each one directly, along with its fix.
If there’s one structural change that has the highest leverage for AI adoption, it’s this: publish a clear, simple internal AI use policy. One page. Plain English. Answer the questions that are actually making people hesitant.
- What tools are approved for use with company data? Be specific: "AI tools generally" is not an answer.
- What categories of data are appropriate to use with AI, and what should never go into an AI prompt?
- What types of outputs require human review before they're used?
- What's off-limits entirely?
Most employees are risk-averse. They will default to non-use if the policy is unclear, and they will feel anxious about using AI even after training if they don’t know where the lines are. Writing the policy, sharing it at a team meeting, answering questions about it — this removes a significant psychological barrier that training alone cannot remove.
The first version of the policy doesn't have to be perfect. It has to exist. You can update it as you learn more about how your organisation is using AI. A clear but imperfect policy is significantly better than an absent one. Share it, discuss it, revisit it at six months. That cadence signals that this is a living commitment, not a checkbox exercise.
One of the lowest-cost, highest-impact things an organisation can do to build AI culture is create a channel — a Slack channel, a Teams channel, a WhatsApp group, whatever the organisation actually uses — specifically for sharing AI wins. Not mandated. Not monitored. Just a space for “I built this today and it saved me an hour, here’s how.”
This does two things. First, it shows people what’s possible — in their organisation, using their actual tools, for work that looks like theirs. That’s far more motivating than any external case study. Second, it signals that AI use is a valued behaviour. When someone shares a win and gets three “this is great” responses from colleagues, they’ll do it again. When they share it and nothing happens, they won’t.
The channel works best when someone in leadership — ideally the direct manager — participates occasionally. Not as a monitor, but as a genuine contributor: “I tried this for my week-ahead planning and it saved me 40 minutes.” That kind of participation changes the channel from a team initiative to an organisational norm.
Explicit permission to fail sounds obvious. In practice, it's rarely given. Managers need to say it out loud, in plain language: "If you try something with AI and it doesn't work, that's fine. Tell me about it."
The psychological safety research is consistent on this point: people take more risks — and learn faster — when they are explicitly told that failure is acceptable. Implicit tolerance is not the same as explicit permission. Many employees will not assume they have permission to fail unless someone they respect says it directly. In organisations where failure has historically been penalised, this is even more important.
For AI specifically, failure is almost guaranteed in the early stages of adoption. Prompts don’t produce what you expected. The output needs significant editing. You built something that turned out not to be useful. These are not signs that AI doesn’t work — they’re how people learn to use it effectively. If the culture doesn’t support that learning process, people won’t go through it. They’ll stay in safe, shallow usage — using AI for low-stakes tasks they could have done themselves in the same time — or they’ll drop off entirely.
I’ve had participants tell me, months after a session, that the single most useful thing that came out of the training was when I said: “Whatever you build today, even if it’s terrible, that’s a win.” It gave them permission they’d been waiting for.
Here is the honest version of AI culture change, which not enough training providers will tell you: it takes 12 to 18 months of consistent signal-sending from management before AI use genuinely feels normal in most organisations. Not universal — normal. A training session is a catalyst. It cannot do the work of cultural change by itself.
The organisations I've seen reach genuine AI maturity — where AI use is embedded in day-to-day work, not a special initiative — have been building this culture intentionally for over a year. They have leaders who use AI themselves and talk about it honestly. They have a clear policy. They have AI champions in each team. They have a cadence of recognition for AI use. They have made psychological safety an explicit value, not just a poster in the meeting room.
They also have L&D teams who understand that their job doesn’t end when the training session ends. The training creates capability. The culture creates permission. Both are necessary. Right now, most organisations are investing heavily in capability and almost nothing in permission — and then wondering why adoption numbers plateau.
The organisations that are furthest ahead on AI adoption in Singapore right now didn’t get there because they had better tools or a bigger budget. They got there because they started earlier and they treated culture as seriously as they treated training. That head start is compounding. The gap between them and organisations that are only now starting to think about AI culture is not months — it’s years of embedded habits, confidence, and process improvement.
Building AI culture is slow enough that it's easy to lose confidence mid-process without measurement. Two metrics I recommend tracking on a monthly basis:

- The percentage of the team who used AI for at least one real work task in the past month.
- The percentage who agree with the statement "I feel supported to experiment with AI, even when it doesn't work."
These are simple enough to embed in a monthly pulse survey or team check-in without creating survey fatigue. The trend over six months is more useful than any single data point. If the "feel supported" number is flat or declining while the CHRO is asking for adoption improvements, you have a culture problem, not a training problem — and knowing that is the first step to fixing it.
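If your survey tool can export responses to a CSV file, tracking that trend takes only a few lines of scripting. Here's a minimal sketch in Python, assuming a hypothetical export with `month`, `used_ai_past_month`, and `feel_supported` columns holding yes/no answers; the column names and file name are placeholders, so adjust them to whatever your tool actually produces.

```python
import csv
from collections import defaultdict

def pulse_trends(path):
    """Tally the two monthly culture metrics from a pulse-survey CSV export.

    Assumed (hypothetical) columns: month, used_ai_past_month, feel_supported,
    with each answer recorded as "yes" or "no".
    """
    counts = defaultdict(lambda: {"n": 0, "used": 0, "supported": 0})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            c = counts[row["month"]]
            c["n"] += 1
            c["used"] += row["used_ai_past_month"].strip().lower() == "yes"
            c["supported"] += row["feel_supported"].strip().lower() == "yes"

    # Print percentages oldest-first; "YYYY-MM" month labels sort correctly.
    for month in sorted(counts):
        c = counts[month]
        print(f"{month}: adoption {100 * c['used'] / c['n']:.0f}%, "
              f"feel supported {100 * c['supported'] / c['n']:.0f}%")

pulse_trends("pulse_survey.csv")
```

The script matters less than the habit: once the two percentages are cheap to produce every month, the six-month trend becomes something you can put in front of leadership rather than a feeling you have to argue for.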
Training gives people the skill. Culture gives people permission. You need both. Right now, most organisations have one.
If you’re an L&D professional reading this and nodding, I’d genuinely encourage you to share it with whoever in your organisation owns culture change. The conversation between L&D and whoever is responsible for psychological safety, policy, and leadership behaviour is one that most organisations aren’t having — and it’s the conversation that determines whether AI training investment compounds or evaporates.