One of my most memorable training participants was 58 years old, had never opened ChatGPT, and built a working report automation tool inside two hours. That didn’t happen by accident — and it didn’t happen because she was secretly tech-savvy. It happened because the training was designed for someone exactly like her.
The term “digital immigrant” was coined by Marc Prensky back in 2001, and it’s aged imperfectly — but the underlying idea still holds for AI training contexts. A digital immigrant, in practical terms, is anyone who learned to work before the current generation of digital tools became the default. They built their skills, their habits, and in many cases their professional identity around ways of working that didn’t involve AI — or even much software at all.
This isn’t just about age. Yes, a 58-year-old senior manager who has been running teams since before smartphones clearly fits the profile. But so does the 45-year-old specialist who is extraordinarily deep in her domain — finance, legal, operations — and has successfully resisted every new tool rollout for the past decade because, frankly, she’s been doing fine without them. And so does the 35-year-old who is warm, collaborative, and excellent at her job, but who freezes up a bit whenever someone mentions a new platform, and who has learned to route around technology rather than through it.
What these people share is not incompetence. They often have 15 to 25 years of hard-won expertise. They’re the ones who actually know how things work. They are, in many organisations, the most valuable people in the room. The mistake is to design AI training as though their experience is a liability rather than the most powerful resource you have.
Most AI training inadvertently signals contempt for digital immigrants. The trainer is often young — sometimes younger than the participants’ children. The example use cases are about writing LinkedIn posts, generating social media content, or automating a side hustle. The exercises assume a certain comfort level with making mistakes in public, trying something that might not work, and looking slightly confused in front of colleagues.
For someone with 20 years of professional credibility, that context is hostile. They didn’t build a career by looking confused. They built it by knowing things. And when a trainer who wasn’t alive when they started working confidently explains that “AI is just like a really smart assistant,” the 55-year-old regional director sitting in the back row can tell — immediately — that this session wasn’t designed for her.
The result is disengagement that looks like resistance. L&D teams often interpret this as a people problem: “Some employees are just not open to change.” In my experience, it’s almost always a design problem. The training didn’t respect where the participant was starting from. It didn’t honour what they already know. And it didn’t make them feel like capable adults.
Design for dignity. That’s the whole principle.
I’ve run AI training for hundreds of non-technical professionals across Singapore and the region, and I’ve made plenty of mistakes along the way. The approach that works consistently for digital immigrants comes down to four principles:
The most direct example I can point to is documented in detail in our case study on the 58-year-old who built an AI workflow in two hours. I’m not going to restate all of it here, but the core design decision is worth calling out.
When she arrived at the session, I didn’t start with “here’s what AI can do.” I started with “tell me about the most tedious thing you do every week.” She described her weekly operations report: pulling data from three sources, formatting it consistently, writing the same summary narrative with minor variations each time. It took her about four hours. She hated it.
We spent the session building an AI-assisted workflow specifically for that task. By the end, she had a draft process that cut four hours to under forty-five minutes. She didn’t leave with a theoretical understanding of AI prompting. She left with a tool she could use on Monday morning. That’s the difference.
The key was not starting with AI’s capabilities. The key was starting with her expertise — her deep knowledge of what the report needed to contain, what the readers cared about, what language worked for her organisation. AI was the accelerant. Her expertise was the fuel.
I’ve tested a lot of approaches that don’t land for digital immigrants, and it’s worth being specific about them so L&D managers don’t inadvertently book a session that reinforces the wrong things:
Here’s something I’ve noticed in cohort after cohort: the gap between “engaged during the session” and “still using AI six weeks later” is significantly wider for digital immigrants than for tech-comfortable participants. The reason is simple: when they hit a wall after the session — a prompt that doesn’t work, an AI output that seems wrong, a use case their training didn’t cover — they have no safety net. And without a safety net, the path of least resistance is to go back to the old way.
The organisations I’ve seen achieve real adoption with this group have three things in common. First, a follow-up check-in at two to three weeks — not a full training session, just a touchpoint. Second, an AI champion whom participants know personally and feel comfortable asking “dumb questions” without judgment. Third, a safe channel for ongoing support — a WhatsApp group, a Slack channel, something informal and low-stakes.
This isn’t complicated. But it requires someone to own it. The training provider can run the session; the organisation has to run the follow-up. If there’s no plan for what happens after the training day, the training day is likely to have a half-life of about three weeks.
The best outcome isn’t the participant who builds the most impressive thing in the session. It’s the 58-year-old who opens AI every morning and uses it for something real — for the next year, and the year after that.
That’s the bar worth optimising for. Not the session feedback scores, not the impressive demo at the end of the day — the quiet, sustained habit change that compounds over time. For digital immigrants, that outcome is absolutely achievable. It just requires training that was designed for them.