Methodology: Output-First

We Don't Do Slide Decks.

Our training is designed around one outcome: participants leave with something they built, and a habit they can repeat. Here's how we get there.

See it in action ↗
The starting point

The Problem With Most AI Training

Most corporate AI training is designed around one thing: awareness. The goal — explicit or implicit — is for participants to leave knowing that AI tools exist, that they're impressive, and that they should probably start using them. The format follows the goal: slides, a live demo, maybe a "try it yourself for five minutes" segment, a Q&A, then lunch.

Participants leave knowing AI is impressive. They return to their desks and open Excel. Nothing changes.

This isn't cynicism about people. It's a problem of design. Awareness does not produce behaviour change. You can watch a YouTube video about swimming and still not know how to swim. You can sit through a demo of Claude and still have no idea how to apply it to the specific report you're supposed to write on Thursday. The gap between "I've seen this work" and "I can do this" is the gap that almost all corporate AI training fails to close.

We've run enough sessions, been in enough rooms, and spoken to enough L&D managers who've already been burned by awareness-only workshops to know exactly what the pattern looks like. It's one reason we write about why AI training fails as openly as we do. The industry has a design problem, and acknowledging it is the first step to building something better.


Our design principle

Output First. Always.

Every ANCHR session is built around a single design constraint: every participant leaves with something real they built.

Not a worksheet. Not a reflection exercise. Not a list of "next steps" they'll review and promptly forget. An actual working thing — a Claude workflow, an automated report structure, a prototype tool, a prompt template — applied to a problem from their real job, built by them during the session, ready to use on Monday morning.

This principle isn't harder to design for than awareness training. In some ways it's simpler: when the session goal is "everyone builds a thing," every design decision is in service of that. You don't need to explain why AI is useful in abstract terms if you're demonstrating it on a live problem right now. You don't need to convince people to adopt it when they've already used it successfully once in the room. The output is the argument.

There's also a subtler effect that L&D practitioners will recognise: when someone builds something themselves, they own it in a way that watching a demo never produces. They remember the moment they got it working. They know how to replicate it. They have something concrete to show a colleague. That's the beginning of a habit, not just a memory of a workshop.


Before we step in the room

The Discovery Call

Every group training programme starts before the first participant enters the room. It starts with a 20–30 minute call with the L&D lead, people manager, or whoever owns the training brief.

We ask specific questions.

From that conversation, we design a custom build challenge specific to the team's actual work. HR teams get HR use cases — drafting job descriptions, summarising exit interview notes, building onboarding frameworks. Finance teams get finance workflows — variance analysis summaries, board report structures, vendor comparison write-ups. Operations teams work on the operational challenges that are actually eating their week.

Nobody in an ANCHR session is role-playing being a marketer to understand prompting. Nobody is using a generic "write me a blog post" exercise to learn a skill they'll apply to something completely different. The connection between the session and the job is direct and immediate — because we built the session around the job.

This takes more time to design than pulling out a standard deck. It's worth it. Generic training produces generic outcomes. Specific training produces specific behaviour change.


Inside the session

The Three-Block Structure

Our sessions run on a consistent three-block structure, regardless of length or format. The proportions are deliberate and non-negotiable — particularly the last block, which is the one most training programmes cut when they run short on time.

Block 1: Context (20–30% of session time)
What the tool is. How it thinks. The one mental model that unlocks everything. We cover this quickly and with intention — enough for participants to understand what they're working with and why, not so much that we're still on slides when we should be building. Most AI tools have a relatively small set of principles that explain most of their behaviour. We teach those principles, not a catalogue of features. Features change; the principles are durable.
Block 2: Guided Build (30–35% of session time)
The trainer builds something live, narrating every decision out loud. Not a polished demo — a real build, including the moments where it needs adjustment, where the prompt gets revised, where the output needs refinement. Participants follow along on their own devices. Questions are actively encouraged because they reveal the genuine confusion points, which are often more instructive than the clean demonstration. This block demystifies the process: participants see that it's not magic, it's method, and they can replicate the method.
Block 3: Independent Build (35–40% of session time)
Participants work on their own real problem. Not a practice exercise — their actual task, their actual workflow, their actual output. The trainer circulates, helps people who are stuck, and adapts on the fly based on what's coming up. This is where learning consolidates. This is where false assumptions surface and get corrected. This is where the "I can actually do this" moment happens. It's the block most training programmes cut. It's the block that determines whether anything changes on Monday morning.

The ratio matters. If you spend 80% of a session in context and guided build and only 10 minutes on independent work, you've designed an event, not a training. We hold the independent build block firmly because we know what happens when it gets squeezed: adoption drops, the skill doesn't transfer, and the ROI calculation never materialises.


After the session

Reinforcement: What Keeps It Working

A single workshop does not change habits. This is not a controversial statement — it's one of the most thoroughly documented findings in adult learning research. The question isn't whether reinforcement matters. It's what kinds of reinforcement are practical, and how to build them into the programme design without requiring a six-month coaching engagement.

Here's what we include as standard in every ANCHR engagement, and what we offer on extended programmes:

Resources pack — included in every session
Every tool, prompt, workflow, and framework used in the session, documented and sent within 24 hours. Not a slide deck — a working reference guide that participants can open while they're at their desk trying to replicate what they built. Structured so they can find what they need in under 30 seconds.
Follow-up check-in call — available on extended programmes
A 30-minute call 2–3 weeks post-session for the L&D lead or team manager. We go through what's being used, what isn't, and why. This conversation surfaces the adoption barriers that weren't visible during the session and lets us course-correct before the four-week behaviour window closes.
AI champion designation — available on extended programmes
One person in the team is identified as the go-to for AI questions and accountability. We give them additional support, context, and resources so they can sustain peer-level help within the team long after the training ends. Champions don't need to be the most senior person — they need to be curious, respected, and willing to answer questions.
ANCHR alumni community — available on extended programmes
Participants get access to a shared community of ANCHR programme alumni where they can ask questions, share workflows that are working, and stay current as tools evolve. This is less about structured learning and more about maintaining the connection to a peer group doing the same work — which turns out to be one of the most reliable adoption reinforcers we've found.

We're direct about which elements are standard and which require extended programme scope. We don't promise a six-touch reinforcement system in a quote for a half-day workshop. What we do promise: every session is designed from the start with reinforcement in mind, not bolted on afterwards.


Side by side

Typical AI Training vs. The ANCHR Approach

Here's the comparison L&D managers usually want to see. We think it's more honest to put it in a table than to describe it in paragraphs where we control the framing.

Content type
Typical: Generic prompting exercises; standard use cases not tied to your industry.
ANCHR: Real workflows from your team's actual jobs, built in a pre-session discovery call.

Session format
Typical: Lecture + demo, with a brief "try it" segment.
ANCHR: Context block + guided build + independent build.

Time spent building
Typical: Often under 10% of session time.
ANCHR: 35–40% of session time on independent builds.

What you leave with
Typical: Notes, a slide deck, and a vague intention to "try this".
ANCHR: A working prototype you built, applied to your real task.

Customisation
Typical: None, or cosmetic (logo on the slide deck).
ANCHR: Pre-session discovery call; custom build challenge per team.

Post-session support
Typical: None, or a generic resources PDF.
ANCHR: Working resources pack; optional follow-ups and champion support.

Curriculum currency
Typical: Last updated when the course was approved or the deck was last refreshed.
ANCHR: Updated every time a meaningful Anthropic or tool update drops.

Adoption design
Typical: Assumed (participants are responsible for following through).
ANCHR: Built into the session structure and post-session touchpoints.

The honest caveat: our approach takes more of your L&D lead's time upfront. The discovery call is a real meeting. The customisation requires input from your side. If you need a training programme that requires zero preparation from your team, our model isn't it. But if you need training that actually changes how your team works — we'd argue the upfront investment is the point, not the obstacle.


What we skip

What We Don't Do

A training methodology is as much about what you refuse to do as what you commit to. Here's what you won't find in an ANCHR session:


Fit check

Who This Works For

Not every training context is the right fit for our approach. We'd rather be clear about this upfront than oversell and underdeliver.

Our methodology works best for teams that have a specific workflow problem. Not "we should be doing AI." A concrete thing: "our team writes five client reports a week and it takes four hours each, and that's unsustainable." That specificity is what the discovery call is designed to unlock, and it's what makes the build challenge land.

It works well for managers who want to see measurable change, not just activity. If your success criterion is "people attended," we'll deliver that. If your success criterion is "people are using AI tools differently six weeks later," we'll design for that — but it requires a different set of commitments from both sides.

It works particularly well for L&D leaders who've already been burned by one-off AI awareness sessions and are looking for an approach with a structural reason to produce different results. We've had this conversation many times. If you're in it, we'll be direct about what the evidence says and what our design does differently.

Where it's a harder fit: teams where leadership hasn't bought in and attendance is mandated rather than chosen. We can work with mixed enthusiasm — and we frequently do — but sessions where a significant portion of participants actively resent being there require a different opening, and we need to know that going in.

See Our Methodology in Action.

The fastest way to understand how we train is to have a 15-minute conversation about your team's actual context. No pitch deck. Just a real discussion about what you need and whether we're the right fit.

WhatsApp us to talk ↗
Common questions

Frequently Asked Questions

Can you train a group with mixed skill levels?
Yes, and we do it regularly. The build challenge is designed to have a floor and a ceiling — a version that works for someone who has never used AI tools, and a version that challenges someone who's been using Claude daily for six months. The independent build block is particularly useful here because participants self-select their level of complexity. The trainer circulates and calibrates. In practice, mixed-level groups are often more productive than homogeneous ones: the more experienced participants end up helping their peers, which deepens their own understanding and builds momentum in the room.
What if our team is resistant to AI?
Resistance usually comes from one of two places: genuine fear about what AI means for their job, or bad experiences with hyped tools that didn't deliver. Both are addressable. We spend the first 10–15 minutes of every session naming the concern directly rather than papering over it. The most effective antidote to AI anxiety is using AI to solve your own problem before lunch — when the thing you built is actually useful, the conversation changes. We can't guarantee everyone converts in a single session, but we've rarely left a room where the majority didn't shift.
How do you ensure people actually use AI tools after training?
The honest answer is: we can't force it, and anyone who says they can guarantee post-session adoption is overselling. What we can do is design for it — and that design runs through everything: the specificity of the build challenge (so participants have something immediately applicable), the independent build block (so they leave having done it once), the resources pack (so they can replicate it without help), and the follow-up structures. Adoption is a probability game. Good design increases the probability significantly. No design makes it certain.
How long is a standard ANCHR session?
Our most common format is a half-day workshop (3–3.5 hours) for teams of 10–25 people. Full-day sessions are available for larger teams or when the build complexity warrants it. We've designed for 90-minute executive briefings at the high-compression end, and multi-day capability programmes for organisations doing serious AI transformation work. The three-block structure scales — the proportions stay constant, the depth and complexity of the build adjusts to the time available.
What tools do you primarily train on?
Our core tools are Claude (via Claude.ai and the API), Claude Cowork for team-based collaboration workflows, and Claude Code for non-technical professionals who want to build lightweight automations and tools without needing to write code. We update our curriculum every time there's a meaningful Anthropic release — which is one reason we've deliberately stayed outside the SkillsFuture approval process, as explained on our SkillsFuture page. If your team uses other specific tools, we'll scope those in during the discovery call.

Keep reading

Related Pages

Why AI Training Fails The structural reasons most workshops don't stick
AI Training ROI How to measure and justify the investment
Enterprise AI Training Singapore Multi-cohort and large-team programmes
Half-Day AI Workshop Singapore Our most popular format, explained