That's not an oversight. It's a deliberate design choice — and it's one of the reasons our training stays relevant.
Talk to us about training ↗

The first question most L&D managers ask us is: "Is this SkillsFuture-eligible?" We understand why. It's the standard procurement filter, it's what the spreadsheet demands, and it's what your finance team knows how to approve. We don't take it personally.
But the honest answer is no — our AI training is not SkillsFuture-funded. And after you read why, we think you'll understand that this is a feature, not a bug.
The SkillsFuture framework was built for a different era of skills training — one where course content was stable, certifications aged well, and the tools taught in year one were still the tools in use in year three. AI training in 2026 is none of those things. The pace of change in AI tooling has made the standard approval-and-lock approach structurally incompatible with delivering training that's actually useful.
We're not anti-SkillsFuture as a concept. We're anti-teaching outdated tools with false confidence. Those two things happen to conflict right now, in this moment, in this industry. So we made a choice.
SkillsFuture course approval typically takes 9–12 months from submission to listing, and in practice many providers report the process taking well over a year. That timeline is not unusual for traditional skills training: it's long, but acceptable when the skills being taught are stable.
In AI training, a 12-month lag is disqualifying.
Consider what happened between early 2024 and mid-2026: Claude 3 launched, then Claude 3.5 Sonnet, then Claude 3.7 Sonnet, then Claude 4. OpenAI released GPT-4o and then o3. Perplexity went from niche to mainstream. Cursor and Claude Code reshaped how developers work. Agents went from demos to deployable tools. The average non-technical professional's AI toolkit in early 2024 looked almost nothing like it does in mid-2026.
A course submitted for approval in Q1 2025 could not have anticipated Claude 4, Claude Code, or any of the workflow integrations that are now the most relevant things to teach. If that course was approved and listed today, it would be teaching tools that have been significantly superseded, mental models that no longer match how the models actually behave, and workflows that have been replaced by simpler, more powerful alternatives.
We update our curriculum every time a meaningful Anthropic update drops. Sometimes that's monthly. We redesign session formats when better approaches emerge. We retire content when it stops being the right answer. None of that is possible with a fixed, approved syllabus that requires re-submission and re-approval for any meaningful change.
Here's the structural problem that doesn't get talked about enough: SkillsFuture-approved courses must teach to the approved syllabus. Any meaningful change to content or scope requires a re-submission. Which means providers face a choice — keep the curriculum current and risk being out of compliance, or teach to the approved spec and risk being out of date.
Most choose compliance. That's rational. But it creates a predictable pattern in the approved AI courses we've seen and sat through: the tools covered are 12–18 months old. The use cases are generic enough to survive a committee review. The trainers are good people working within a document they filed before the last three major model releases.
This isn't a failure of intent. It's a failure of structure. The approval process was not designed for a domain where the best practice from 14 months ago might actively mislead people today.
We'd rather be wrong and current than right and stale. If we teach something that turns out to be superseded in six months, we update it. If a better workflow emerges, we replace the old one. We're not locked into a document we filed when Claude 3.5 was the latest thing. That flexibility is the product.
There's a dynamic that L&D professionals know but rarely say out loud: when training is free, it's often treated as free.
Not universally. Not always. But often enough to matter. Participants show up less prepared. Attendance is treated as a checkbox rather than a commitment. Follow-through after the session is lower. The stakes feel different when there's no real investment on the table — from either side.
We're not making a moral argument about deserving training. We're making a practical observation about behaviour in rooms we've actually been in. When a company has made a genuine financial decision to invest in AI training, the people who show up are different. They've been selected, prepared, and briefed. There's a manager who wants to see outcomes. There's accountability in the room.
That dynamic produces better training. Not because we perform better with a paying audience, but because the participants are more engaged, more honest about their actual problems, and more likely to follow through. That's what makes the difference between a workshop people remember and one that doesn't survive contact with Monday morning.
We want clients who are investing because they believe the outcome is worth it. That's the relationship where we can do our best work. It also means we have to keep earning it — which is exactly the incentive we want.
Let's be direct about something the "free training" framing obscures: outdated AI skills training isn't just low-value. It carries a real cost, and the net return is negative.
If your team spends half a day learning a workflow that has been superseded, they leave with false confidence. They go back to their desks and try to apply what they were taught. The tools don't behave the way the training said they would. The prompt structure is for a model that's been replaced. The integrations they were shown require subscriptions or setups that no longer exist. So they give up, and they leave with a mental model of AI as something complicated and unreliable — exactly the wrong conclusion.
That's worse than not training them at all. At least with no training, you have a blank slate. With bad training, you have an active barrier to adoption: someone who thinks they tried AI, it didn't work, and now it's a conversation they're closed to.
The "free" workshop is not free. It costs staff time, it costs credibility with your team, and if the content is wrong, it costs you the adoption momentum that the training was meant to generate. We think that calculation is worth making before selecting a training provider based primarily on subsidy eligibility.
If your organisation does need to find funding for AI training, here's an honest rundown of what's actually available and relevant:
If your L&D brief has funding requirements, let's figure out what's actually possible together. Most constraints have a workaround — or a better question behind them.
WhatsApp us to talk ↗