
AI Training for Managers: What You Actually Need to Know

Leadership · AI Adoption

The manager who hasn’t used AI is in an awkward position. Their team may know more than they do. The vendor pitches they’re evaluating are full of claims they can’t verify. The CHRO is asking them to “drive AI adoption” without being specific about what that means. This post is for that manager.

Three Different Things Managers Actually Need

One of the clearest mistakes I see in AI training programmes is treating managers as just a more senior version of the IC (individual contributor) cohort. They get the same session, maybe with slightly different examples. The thinking is: if they know what their team knows, they can lead the change.

That’s wrong. Managers need three things from AI training, and they’re distinct: personal fluency with the tools, the credibility to model AI use for their team, and the skills to lead people through the change, including the resistance that comes with it.

A good AI training programme for managers addresses all three. Most only address the first, briefly, before moving on to general prompting exercises that apply equally to everyone in the building.

Personal Fluency First

I want to be honest about what personal fluency does and doesn’t mean. It doesn’t mean becoming a prompt engineer. It doesn’t mean knowing how large language models work under the hood. It means using AI tools enough — for real tasks, on real work — that you develop genuine judgment about when AI outputs are good, when they’re plausible-sounding nonsense, and when they need significant human review before they go anywhere near a client or a board paper.

The practical threshold, in my experience, is about three to four hours of genuine practice applied to the manager’s own job. Not a training exercise. Their actual emails, their actual meeting prep, their actual performance review drafts, their actual data analysis. Something they would have spent time on anyway, done with AI assistance instead. That three-to-four hours builds more usable judgment than almost any amount of training about AI in the abstract.

The managers I’ve trained who come in having done this — even informally, before the session — are a categorically different cohort in a workshop. They have real questions. They have opinions. They’ve already hit a wall and figured out how to get around it. They can speak to their teams with authority because they have actually done the thing.

How to Model AI Use Without Being Performative

Here’s a pattern I see regularly in organisations that have just done a big AI training push: managers who have been to one workshop and have come back evangelical. Every meeting they reference AI. Their status updates mention AI. They’re posting about AI on the company intranet. The team can tell within about forty-eight hours that this is performative rather than real, and it does more damage than if they’d said nothing at all.

The alternative isn’t silence — it’s credibility. Credibility comes from specificity. “I used Claude to prep my notes for this meeting and it saved me about forty minutes — I also had to correct two things it got wrong about the client’s history” is credible. It’s honest about both the value and the limitations. It’s something a team member can learn from. “AI is going to transform how we work” is not.

The best thing a manager can do after AI training: pick two or three specific tasks in their own role where they’ll use AI consistently for the next month. Do those tasks with AI. Mention it when it’s relevant, honestly, including when it doesn’t work as expected. That’s modelling. It doesn’t require enthusiasm. It requires practice and candour.

The 1:1 Question That Changes Everything

If there’s one thing I’d ask every manager to do after AI training, it’s this: add one question to their standing 1:1 agenda. “How did you use AI this week?”

Not as surveillance. Not as a KPI. As genuine curiosity — and as a signal about what the organisation values. When a manager asks this question regularly, it communicates that AI use is normal and expected. When they don’t ask, the message is equally clear: this isn’t actually important.

The downstream effects are significant. Team members who know they’ll be asked are more likely to try things and bring them to the conversation. Managers who ask regularly start to hear patterns — what’s working, where people are stuck, what support is needed — that they won’t get from an aggregate adoption dashboard. And it creates a natural cadence for sharing wins that doesn’t require anyone to set up a new Slack channel or run another all-hands.

I’ve had L&D managers tell me that this single practice — adding that question to 1:1s — moved adoption metrics more than anything else they tried in the six months after training. It costs nothing and takes ten minutes a week.

How to Handle Team Resistance

Every cohort has the sceptic, the anxious person, and the quietly resistant person. Managers need a different response to each, and lumping them together as “resistant to change” doesn’t help.

The sceptic is often right about some things. Their scepticism is usually based on a bad experience with a previous technology rollout, or a genuine observation that AI outputs in their specific domain are unreliable. The worst response is to argue with them. The best response: give them one specific task where AI demonstrably saves time, let them try it in private, and let the experience do the persuading. Don’t make them convert publicly. They often become excellent AI champions once they’re convinced, precisely because they’re rigorous.

The anxious person is worried about their job, their competence, or both. They need scaffolding and small supervised wins, not cheerleading about AI’s potential. The message they need to hear is: “You’re not behind. Here’s something specific you can do right now.” Progress on a concrete task reduces anxiety faster than any amount of reassurance.

The quietly resistant person often has something real they’re worried about that they haven’t said out loud yet — a concern about data privacy, a worry about being evaluated on AI output they don’t trust, a suspicion that this is the first step toward headcount reduction. Asking them directly — “What’s your hesitation?” in a private conversation — is almost always more productive than trying to override their resistance with enthusiasm.

What Managers Don’t Need

I’ll be direct about this because I see training programmes overloading managers unnecessarily. Managers do not need to become prompt engineers. They do not need to track every new AI tool release or have an opinion about model architecture. They do not need to become the AI expert for their team — that’s what AI champions are for.

What they need is enough fluency to lead — to ask good questions, to make credible decisions, to model behaviour authentically, and to support their team through a transition that is real and ongoing. That’s a much more achievable bar, and it’s the right bar. Training programmes that try to turn managers into technical AI users often fail both objectives: the managers don’t become technical users, and they don’t become better leaders of AI change either.

The most important thing a manager can do for AI adoption isn’t training their team. It’s using AI themselves — and being honest about what they’re learning.

That honesty — about what works, what doesn’t, what surprised them — is more valuable to a team than any amount of polished enthusiasm. It creates permission for the team to be honest too. And honest feedback is how organisations actually improve their AI practice, rather than reporting adoption numbers that look good in the quarterly update but don’t reflect what’s actually happening on the ground.

Soh Wan Wei — Founder, ANCHR AI Labs

AI trainer, keynote speaker, and builder — all without writing a single line of code. Wan Wei runs AI corporate training for sales, marketing, HR, and leadership teams across Singapore and Malaysia. ANCHR is pronounced “anchor” ⚓ — because being grounded is a core value.
