Your L&D calendar has a slot for AI training. You have three providers shortlisted. Your CHRO wants a name by Friday. Here's what to actually ask — not the questions on every generic vendor checklist, but the ones that reveal whether a provider can genuinely move your team forward.
The AI training market in Singapore has exploded over the past two years. Every consulting firm, every bootcamp provider, every freelance trainer with a LinkedIn presence now offers AI workshops. The SkillsFuture-funded options alone have multiplied several times over. And at first glance, many of them look similar on paper: half-day sessions, hands-on exercises, takeaway resources, decent testimonials.
Most of these workshops are fine for surface-level awareness. People leave knowing what a large language model is, having tried a few prompts, and feeling reasonably optimistic about AI. What they typically don't leave with is a changed working pattern. Very few AI training programmes in Singapore are designed to produce lasting behaviour change — the kind where, three months after the session, your team is genuinely working differently.
The problem is that the providers who produce surface-level awareness and the providers who produce genuine capability change use almost identical language to describe their work. Both say "hands-on." Both say "customised." Both say "practical." The difference is in the specifics — and you have to ask for the specifics.
Ask this directly: "When was this curriculum last updated?" If the answer is more than three months ago, probe further. AI capabilities are changing at a pace that has no precedent in the L&D world. A course designed around what GPT-4 could do in mid-2025 is preparing people for a fundamentally different landscape from the one your team will face when they sit down at their desks after the session.
Good providers update their content continuously — not because of process discipline, but because the trainers are using these tools daily and the gap between what they know and what's in the curriculum becomes uncomfortable quickly. Ask: "What changed in your curriculum in the last six weeks?"
A specific red flag worth calling out: a provider who leads with SkillsFuture approval as a primary quality credential. SkillsFuture approval requires a fixed, reviewable syllabus, meaning the same curriculum that made it through the approval process months or years ago. It's not a mark against a provider, but it does signal a constraint: a syllabus locked in for regulatory approval cannot also be one that was updated in the last three months.
What participants actually build during the session is the clearest indicator of whether a workshop will produce lasting capability. Ask specifically: "What does an individual participant produce during the session, using the tools, applied to a real-world task?"
If the answer is "they try some prompts," "they follow along with live demos," or "we run group exercises," keep asking. Watching someone else use AI builds zero personal capability. Following along closely builds very little. The only thing that builds real skill is independently producing an output with the tool, on a problem that resembles the participant's actual work.
Ask to see an example build challenge. A good one is specific, time-bounded, and set up so that participants have to make real decisions about how to approach it, not just fill in blanks. If the training doesn't include an independent build phase where participants work alone or in small groups without guidance, retention will be weak and behaviour change will be minimal.
Generic AI training uses generic examples. Marketing metaphors for finance teams. Generic email-writing exercises for operations teams who spend their days in spreadsheets. The content might be technically accurate, but if participants spend the day solving problems that don't look like their work, they leave without a clear path to applying what they've learned.
Ask: "What's your process for customising the build challenges and examples to our specific workflows?" A strong answer involves a pre-session conversation about what the team actually does day-to-day, how that shapes the session design, and what the build challenge will specifically look like for your team.
A weak answer is "we run different industry versions" — meaning they have a finance deck and a marketing deck and they pick the closest one. That's a packaging decision, not a customisation process. Your team deserves the latter.
A single workshop is a starting gun, not a finish line. The gap between "I understand how to use this tool" and "I actually use this tool in my daily work" is bridged by structure, repetition, and support — none of which a single session can provide on its own.
Ask what the provider offers post-session: resources for self-directed practice, follow-up check-ins at two weeks and thirty days, access to a cohort community, support for internal AI champions. If the answer is a PDF summary and a link to their website, that tells you they've thought about the session and not about the outcome.
This doesn't mean you need a six-month retainer. But some form of structured post-session support is a meaningful differentiator between providers who are optimising for a good workshop experience and providers who are optimising for teams that actually change how they work.
Testimonials are easy to produce. References are harder to fake. Ask for a direct introduction to an L&D lead or people manager at a previous client — someone whose team went through the training and can tell you what happened in the months afterwards.
Specifically, ask that reference: "What was different about how your team worked three months after the session?" Not "was it good" — that answer is always yes. What actually changed, and what stayed the same?
Any provider confident in their outcomes should welcome this request immediately. Hesitation, deflection, or an offer of written testimonials instead of a live conversation is informative.
Some AI training providers are genuinely excellent for technical teams — developers, data analysts, product managers with coding backgrounds. Those same providers often struggle with complete beginners: people who have never deliberately thought about how language models work, who are uncomfortable with ambiguity, and who may have pre-existing anxiety about being "replaced by AI."
If your team has limited or no AI exposure, ask specifically about their approach with total beginners. Listen to how they talk about it. Providers who are good at this will describe specific design decisions they make to reduce psychological friction — how they frame the opening of the session, how they sequence the exercises, how they handle the participant who openly says they don't think AI is useful for their role.
Condescension — even subtle condescension — is a red flag. It surfaces as talking about "hand-holding" beginners, describing non-technical learners as "resistant," or framing their role as overcoming participants' scepticism rather than earning their engagement.
The most honest metric is follow-through rate: what percentage of the people who attend a session are still actively using AI tools in their work ninety days later? This is different from "satisfaction" (did people enjoy the session) and different from "capability" (can people demonstrate a skill immediately after training). Follow-through rate measures whether the training changed anything durable.
Most providers won't have this number — partly because it's hard to measure, and partly because many providers haven't designed their programmes with long-term adoption as the primary success metric. But asking the question signals that you're evaluating for outcomes, not activities. And the response tells you a great deal: providers who have genuinely thought about this will either have the data or will have a clear, honest explanation of why they don't and how they're working towards it.
No provider — including us — can guarantee adoption. Training is a necessary condition for capability change, not a sufficient one. The organisation's culture, the direct manager's behaviour, the availability of appropriate tools, the clarity of internal AI policy: all of these sit outside the training room and all of them affect whether what happens in the training room actually sticks.
What a good training provider can guarantee is that the design is sound — that it gives participants the best possible starting point for building new habits. But if you book excellent training and then send participants back to an environment where AI use is never discussed, never modelled by leadership, and never structurally reinforced, the training will underperform. That's not the provider's failure. It's a shared responsibility.
The best AI training in Singapore isn't the most approved. It's the one that actually changes how your team works on Monday morning.