
You Did the AI Training. Now How Do You Get People to Actually Use It?


The workshop was great. Energy was high. People built things, had lightbulb moments, asked good questions. Three weeks later, 80% of the team is back to their old workflow and the training budget feels wasted. This is the most common AI training story in Singapore right now. Here's how to change it.

Why the Two-Week Drop-Off Happens

The two-week drop-off isn't a training quality problem — it's a habit formation problem. And it's entirely predictable, which is why it's so frustrating when organisations are surprised by it.

The neurological reality: habits form through repetition, not through a single exposure, no matter how well-designed that exposure is. One session — even a genuinely excellent one — is not enough to rewire a working pattern that has been in place for years. The session creates awareness, demonstrates capability, and ideally produces some genuine excitement. What it cannot do, by itself, is create the repeated loops of behaviour that eventually become automatic.

The structural reality is more actionable. People return from training to a desk where nothing has changed. The same default tools are open in their browser. The same manager is sending the same kinds of tasks. No one in the next meeting asks what they built or how they're using AI. The new skill sits unused not because people don't want to use it, but because the environment hasn't been redesigned to prompt it, support it, or reward it.

Training gave them a new skill. Nobody gave them permission, structure, or reminders to use it. That's the problem — and it's fixable.

The Role of the Manager

The single highest-leverage intervention available to any organisation trying to improve AI adoption is remarkably low-tech: the direct manager asks about AI use in 1:1s.

"How did you use AI this week?" Four words. Not "did you use AI?" (yes/no, easy to deflect), but "how did you use it?" — which presupposes usage and invites the kind of specific, reflective answer that reinforces the behaviour.

The question doesn't need to be punitive or evaluative. Curious is better than demanding. The goal isn't accountability in the negative sense — it's making AI use a normal, expected, openly discussed part of the team's working life. When a manager asks about it weekly, two things happen: the people who are using it feel validated and share what's working, and the people who aren't feel a low-stakes nudge to try something before next week.

We consistently see this single change — a manager who genuinely asks about AI use in regular 1:1s — have more impact on 90-day adoption than any amount of follow-up training resources. The mechanism is simple: it makes the behaviour visible and socially expected, which are the two conditions that turn a skill into a habit.

Designate an AI Champion

Managers can't be everywhere, and many are still figuring out their own AI workflows. The next most effective structure is designating one person in the team as the internal AI champion — someone whose informal role is to keep the energy alive between formal training touchpoints.

A champion does four things that no L&D programme can do on its own: they share what they're building (making AI use visibly normal), they answer colleagues' questions without judgment (reducing the friction of not knowing), they document use cases (building institutional knowledge), and they feed adoption barriers back to leadership (so the right things can be fixed).

Critically, the champion role doesn't require technical expertise. It requires curiosity, trust from peers, and the willingness to be openly enthusiastic about something new. If you want to build this more systematically, we've designed a full structure for it: the AI Champions Programme. But even informally — identifying one person in the team who's genuinely bought in and giving them permission to play that role — moves the adoption needle.

Create Space for Small Wins

Vague instructions don't build habits. "Use AI more in your work" is the kind of directive that sits in people's heads as a mild background intention that never becomes action. The habit-formation literature is clear on this: specific implementation intentions ("I will do X in situation Y") are dramatically more likely to result in the target behaviour than general intentions.

In the week immediately following training, give people one specific, low-stakes, concrete task to complete using AI. For example:

- Summarise the notes from one meeting they attended this week.
- Draft a first-pass version of one routine email or status update.
- Turn a rough bullet list into a polished paragraph for a document they're already writing.

These tasks are small enough that the cost of failure is zero, specific enough that there's no ambiguity about what "done" looks like, and directly connected to real work the person is already doing. Small wins do two things: they demonstrate that AI is useful for this person's actual job (which strengthens motivation), and they create the first repetitions of the habit loop.

Do this at week one, week two, and week four. Each time, make the task slightly more open-ended. By week four, you're asking people to identify their own use case. By that point, many of them will have already done so.

Reduce the Friction

Most AI non-adoption is not reluctance — it's friction. And friction is mostly invisible until you look for it. In organisations that have done AI training but are seeing low adoption, the most common friction points are predictable: no one is sure which tool is approved, licences or logins were never provisioned, the data policy is ambiguous enough that people are afraid to paste anything work-related, and there are no saved prompts or templates to start from.

Auditing for these friction points takes about an hour. Fixing them often takes less than a day. The payoff in adoption is disproportionate.

Track Usage — Lightly

You don't need to monitor every prompt or build a surveillance infrastructure around AI use. That would be counterproductive. But some signal on adoption is necessary if you want to know whether the training investment is working and where to intervene.

A simple monthly pulse check is enough: "Have you used an AI tool in your work this week?" Yes or no. Plus an optional open field: "If yes, what for? If no, what's getting in the way?"

This gives you a directional read on adoption rates over time, and — crucially — it tells you where adoption isn't happening. If a specific team or department is consistently at low adoption, something specific to their context is blocking them. Find out what it is and address it directly, rather than assuming the training needs to be repeated.
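As an illustrative sketch only (the article doesn't prescribe any tooling, and the team names and 60% threshold below are assumptions for the example), the per-team adoption read from a monthly pulse check can be computed in a few lines:

```python
from collections import defaultdict

# Each pulse-check response: (team, answered_yes_to_"used AI this week?")
# Illustrative data only — not from any real survey.
responses = [
    ("Marketing", True), ("Marketing", True), ("Marketing", False),
    ("Finance", False), ("Finance", False), ("Finance", True),
    ("Ops", True), ("Ops", True), ("Ops", True),
]

def adoption_by_team(responses):
    """Return {team: fraction of respondents reporting AI use}."""
    used = defaultdict(int)
    total = defaultdict(int)
    for team, used_ai in responses:
        total[team] += 1
        used[team] += int(used_ai)
    return {team: used[team] / total[team] for team in total}

rates = adoption_by_team(responses)

# Flag teams below a chosen threshold (0.6 here is an assumption)
# as candidates for a targeted look at what's blocking them.
low_adoption = sorted(t for t, r in rates.items() if r < 0.6)
```

The point of the sketch is the shape of the analysis, not the code: a yes/no question per person per month is enough to surface which teams need a targeted intervention rather than a repeat of the training.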

The act of asking also has an effect: people who know adoption is being tracked (in a light-touch, non-punitive way) are more likely to try. The question itself is a prompt.

Accept That 100% Adoption Isn't the Goal

Some people will take longer. Some may genuinely not have high-leverage AI use cases in their current role — at least not yet. Chasing 100% adoption six months post-training is a good way to exhaust your political capital on a target that isn't achievable and isn't what success actually looks like.

A realistic target: 60–70% of trained staff actively using AI tools at least weekly, measured at 90 days after training. That's a meaningful shift. It means more than half the organisation has built a new working habit. The remaining 30–40% will likely follow as the social norm shifts around them.

If you're at 30% at 90 days, something in the adoption structure needs to change — but it's not necessarily the training that needs to be repeated. Use your pulse-check data to diagnose whether the problem is friction, permission, peer visibility, or manager behaviour. The intervention should match the actual barrier, not default to more training spend.

Training gives people the skill. Adoption is a culture and structure problem. You need both halves to see ROI.

Soh Wan Wei

Wan Wei is the founder of ANCHR AI Labs, Singapore's AI training company for non-technical professionals. She designs and delivers AI training for teams across Singapore and SEA — and spends a lot of time thinking about the gap between what happens in a workshop and what happens at someone's desk three weeks later.

Read next
AI Champions Programme Build internal advocates who sustain adoption long-term
Why AI Training Fails The structural reasons ROI evaporates — and what to do about it
How We Train The ANCHR approach to training design
AI Training ROI How to measure whether AI training is working