How one of technology’s most familiar tools can help demystify generative AI adoption
If you’ve struggled to explain how large language models work to colleagues, clients, or stakeholders who don’t have technical backgrounds, you’re not alone. The challenge isn’t just about understanding the technology; it’s about finding the right mental models that make abstract concepts tangible.
One of the most effective analogies I’ve developed over months of explaining AI systems to business leaders, journalists, and everyday users is surprisingly simple: voice recognition. The irony isn’t lost on me that voice recognition itself uses AI extensively; it, too, operates probabilistically, though through different mechanisms. But this familiar technology provides an invaluable framework for understanding how to work productively with generative AI.
The Labor Shift: Physical to Cognitive
When you use voice recognition, the first thing you notice is the reduction in what we might call “physical labour”: the heavy lifting of typing. Speaking your thoughts requires less mechanical effort than typing them, which makes this technology invaluable to some users, moderately useful to others, and genuinely problematic (whether the problems are real or perceived) to a third group.
This mirrors exactly what happens with generative AI. The “mental labour” shifts from researching, drafting, coding, or creating from scratch to prompting, directing, editing, and iterating. For some business applications, this shift is transformative. For others, it’s incrementally helpful. And for certain use cases, it introduces more friction than it removes.
Understanding which category your use case falls into is the first step toward productive AI adoption.
The Mental Reframe: Speaking vs. Writing vs. Prompting
The second transformation voice recognition forces is more subtle but equally important: it changes the kind of mental labour you perform. Speaking engages different cognitive processes than typing. Most people find that they compose thoughts differently when speaking aloud versus writing them out character by character.
Working with large language models requires a similar cognitive shift. You’re not writing content or code directly; you’re articulating instructions, context, and desired outcomes. This is a fundamentally different mental function that requires practice and skill development. Just as some people never fully adapt to dictating emails instead of typing them, some professionals will find the prompting paradigm less natural than direct creation. It comes down to how your brain transforms thought into words, and the adjustment depends on your own neuroplasticity: how easily you can adapt to change.
The key insight for business leaders is that this represents a genuine transition cost, not just resistance to change. Training and adaptation time should be factored into AI adoption timelines.
The Approximation Problem: When Good Enough Isn’t
Here’s where the analogy becomes particularly illuminating. Both voice recognition and LLMs take your input and attempt to reflect your instructions faithfully, but neither can do so perfectly. Voice recognition can’t recognize every word, every pronunciation variant, every proper noun; it’s simply not possible, so it approximates based on probability.
This is exactly what a probabilistic language model does, although through different mechanisms and with very different weighting systems. Voice recognition primarily deals with acoustic uncertainty (did you say “there” or “their”?), while LLMs handle semantic and contextual uncertainty, with creative output rather than accurate rendition as the goal, across much more complex domains.
Both systems are trying to guess your intent from incomplete information. Both will make mistakes. Both require that you understand their limitations in order to work with them effectively.
The Verification Imperative: Different Problems, Same Necessity
After using voice recognition, you must review your output. You fix misrecognized words, add punctuation, correct grammar and syntax, fix proper nouns, and insert line breaks. Only then does your dictated text become usable in written form.
Working with LLMs requires analogous but distinct verification work. You must check factual accuracy, ensure logical consistency, verify that instructions were understood correctly, and confirm that output meets your requirements. The verification challenge differs significantly: with voice recognition, verification is relatively straightforward (you can see obvious errors), while with LLM output, verification can be the most demanding part of the process, especially for technical, legal, or high-stakes applications.
Critically, for both technologies, the smaller and more focused the input, the easier the verification process becomes. This has enormous implications for how businesses should structure AI workflows.
The Starting Point Changes Everything
What makes this analogy so powerful is that it illustrates a fundamental truth about working with generative AI: it’s not necessarily less intellectual labour. It’s labour from a different starting point, requiring different skills.
When you dictate instead of type, you haven’t eliminated cognitive work; you’ve shifted it from the mechanics of typing to the clarity of speech, then to the diligence of verification. When you prompt an LLM instead of writing from scratch, you haven’t eliminated expertise requirements: you’ve shifted them from direct creation to effective instruction, then to rigorous validation.
This reframe helps explain why AI adoption success varies so dramatically across different contexts. The technology isn’t universally “easier” or “harder”; it changes the nature of the work, and those changes benefit some workflows while complicating others.
Practical Implications for Business Leaders
Understanding this parallel offers several actionable insights:
First, recognize that AI tools require genuine skill development. Just as effective dictation requires practice in articulating thoughts clearly and completely, effective AI use requires developing prompting skills, understanding model limitations, and building verification protocols.
Second, factor transition costs into adoption planning. The shift from direct creation to AI-assisted workflows isn’t instantaneous, just as shifting from typing to dictation isn’t seamless for most users.
Third, match tools to tasks based on where the cognitive load actually falls in your workflows. Voice recognition excels when physical input is the bottleneck. LLMs excel when scale, speed, or initial draft generation is the constraint—but only when verification capabilities are strong.
Fourth, invest in verification systems before scaling AI adoption. The verification challenge is often the hidden cost in AI implementations, just as editing dictated text can sometimes take longer than typing would have. Yet what feels like rework can produce highly productive, unexpected outcomes, as terrible as it can feel in the moment; think of Green Day losing an entire album and going on to make American Idiot in its place. It’s a difficult lesson, but an invaluable one.
Analogies as Accessibility Tools
The real value of this analogy lies in its accessibility. Most business professionals have used voice recognition or at least understand how it works. By connecting an unfamiliar technology (generative AI) to a familiar one (voice recognition), we create a mental bridge that makes the abstract concrete.
When I use this framework to explain AI capabilities and limitations to non-technical stakeholders, I consistently see recognition register. The lights go on. Suddenly, the discussion shifts from hype and fear to practical questions about workflow adaptation, skill development, and appropriate use cases.
That shift, from mystification to practical engagement, is exactly what the business world needs as we navigate the genuine transformation AI represents. It is a kind of revolution, a slow but significant disruption: a massive and not entirely welcome evolution that goes beyond how we use technology to how we use it to be more productive at tasks we have always done in all parts of our lives: communication, planning, structure, moving ideas forward. It forces us to think about how technology enables what we do, at which stage of our productive output it intervenes, and, if you want to think more deeply about it, where we sit on the continuum of human-machine collaboration.
And sometimes, the best way to understand the future is by paying attention to the present we’ve already adapted to, using technology we barely noticed.