I’ve been writing professionally for over two decades. I’ve covered controversial topics, challenged conventional wisdom, published research that contradicted popular narratives. Nothing, and I mean nothing, has generated the kind of visceral antipathy I’ve encountered writing about AI.
It’s not thoughtful disagreement. It’s not measured criticism. It’s reflex. Pure, unfiltered reflex: AI, therefore bad. People who are normally analytical become reactive. Individuals who pride themselves on rationality abandon it entirely. The response is so disproportionate, so unmeasured, that it demands explanation.
And I don’t think the explanation has anything to do with AI at all.
What I think we’re living through is a convergence of three distinct crises that have never hit simultaneously before, and the intensity of the reaction is proportional to the convergence, not to any single cause.
Three Crises Converging
Every previous era of technological disruption triggered societal convulsion. The printing press destabilized the Catholic Church’s monopoly on information and helped ignite the Protestant Reformation. The automobile didn’t just replace the horse, it restructured cities, created suburbs, displaced entire industries, and rewired the economy. Women’s suffrage, while not a technology, followed the same pattern: a structural change that threatened existing power arrangements, triggered moral panic, and was met with irrational, disproportionate resistance from people who stood to lose control.
Each of those convulsions played out over decades. People had time to adjust, resist, grieve, and eventually adapt.
AI is not affording us that luxury.
The first crisis is the technological disruption itself. AI represents a shift at least as significant as the printing press—arguably bigger, because it doesn’t just redistribute information, it redistributes cognitive labor. And unlike previous technological shifts that primarily affected blue-collar and manual labor, this one is hitting knowledge workers, professionals, and the creative class. People who have never had to confront the obsolescence of their skills are confronting it for the first time. The reaction is not proportional to AI’s actual current capability. It is proportional to the existential threat it represents to identities built entirely around cognitive work. AI is deeply flawed, no one knows this better than I do, and it is affecting us in very real ways. But so much of the response is anticipatory or ill-informed, echoing the moral panics of earlier eras.
The second crisis is that this technological disruption is happening in a mid-to-post-plague context. We are not post-COVID. COVID remains active, and Long COVID continues to affect millions. We went through collective trauma. Many of us experienced literal neurological damage. Our institutions struggled and in many cases failed. And instead of reckoning with any of that, we’re having a moral panic about AI.
The third crisis is visibility: real-time commentary, itself not immune to manipulation. Every previous technological and social convulsion happened at the speed of physical media. The printing press disrupted over centuries, the automobile over decades, the suffrage movements over generations. We are experiencing AI disruption in real time, narrated in real time, amplified by social media, and commented on as it happens. This creates a fundamentally different psychological experience. People are not reading about disruption in retrospect. They are watching their careers become uncertain in their Twitter feed, today, while someone in the replies tells them to learn to code, a skill that is itself being automated. And all of this unfolds while deeply divided societies argue over immigration and genocide and cope with atrocious or absent leadership.
All of these effects build on each other. The combination is unprecedented. A technological shift as large as the printing press, arriving in a population that is collectively traumatized and potentially cognitively impaired, visible and narrated in real time through social media. No society has ever processed all three simultaneously.
The Evidence Is This Week
You don’t need to look at history to see the convergence. You can look at this week’s headlines.
At xAI, half of the twelve-person founding team has now left the company. Tony Wu and Jimmy Ba, the latter of whom led research and safety, departed within 48 hours of each other in early February 2026. They follow Kyle Kosic, Christian Szegedy, Igor Babuschkin, and Greg Yang. The C-suite has also emptied: general counsel, CFO, head of product engineering, all gone. The CEO of X, Linda Yaccarino, departed in July 2025 and was never replaced. This is happening while xAI faces regulatory probes across multiple jurisdictions for enabling the mass creation of deepfake pornography, and while Musk merges the company into SpaceX ahead of what could be the largest IPO in history.
At OpenAI, the pattern is familiar by now. The safety team departures that began in 2024 continued through 2025. The company’s structural transformation from nonprofit to capped-profit to whatever it is now has driven successive waves of resignations from people who joined to build safe AI and found themselves building a product company instead.
And at Anthropic, the company most explicitly built around AI safety, CEO Dario Amodei told the World Economic Forum in January 2026 that AI could handle “most, maybe all” of what software engineers do within six to twelve months. He predicted 50% of entry-level white-collar jobs would disappear within one to five years. He said his own engineers don’t write code anymore. Then his head of safety resigned while musing publicly about wanting to write poetry and disappear.
Read that again. The head of safety of the world’s most safety-focused AI company has resigned to write verse while his boss publicly predicts the elimination of half of entry-level white-collar jobs.
This isn’t business as usual. This is what institutional stress looks like when the people building the technology are themselves overwhelmed by its implications.
The Timeline We’re Not Discussing
There’s a demarcation point that almost everyone can feel but nobody wants to name: late 2021, early 2022. Before that line, institutional and individual behavior followed recognizable patterns. Responses to COVID were largely rational, proportionate, adaptive. After that line, something shifted.
Power structures became rigid. Decision-making became reactive rather than strategic. Authoritarian impulses strengthened across the political spectrum. The quality of public discourse deteriorated. And most tellingly: people started having different cognitive patterns—shorter attention spans, reduced capacity for nuance, difficulty with complex reasoning, heightened emotional reactivity.
We have emerging research suggesting COVID causes prefrontal cortex damage and affects executive function. We know Long COVID impacts cognitive performance. We have data showing persistent neurological effects. But we’re not having that conversation. We’re not acknowledging that we might be living through a period of mass cognitive impairment.
Instead, we’re blaming AI.
What History Teaches About Post-Plague Societies
Orhan Pamuk’s Nights of Plague dramatizes something crucial: plague doesn’t just kill people, it destabilizes entire social orders. His fictional Ottoman island outbreak of 1901 reveals how disease becomes entangled with politics, power, and collective identity. The plague itself is terrible, but the institutional response creates its own distinct pathology: the scramble to reassert control, the irrational decisions, the scapegoating.
Geraldine Brooks’ Year of Wonders shows the same pattern in 1660s England. When plague hits the village of Eyam, the community doesn’t just fight disease. It fractures. Rational people become superstitious. Neighbors turn on each other. Authority figures make increasingly desperate attempts to maintain order.
The pattern is consistent across post-plague periods throughout history:
• Institutional dysfunction that can’t be explained by the disease alone
• Moral panics about visible changes while root causes go unexamined
• Authoritarian surges as people desperately seek control
• Scapegoating of new tools, practices, or populations
• Difficulty returning to “normal” because normal no longer exists
We’re not exempt from this pattern. We’re living it. The difference is that our post-plague convulsion is colliding with a technological disruption of historic scale, and we’re watching it happen on our phones.
The Displacement Crisis
Here’s what I think is actually happening: we went through collective trauma, many of us sustained literal neurological damage, our institutions struggled and in many cases failed, and the world fundamentally changed in ways we haven’t processed.
And we can’t talk about it. We don’t talk about it. At all. We can’t acknowledge the cognitive impairment because that would mean confronting our own diminished capacity. We can’t examine the institutional failures because the people running those institutions are the ones who would need to lead that examination. We can’t process the trauma because we’re still in it.
So we displace. We project that anxiety onto something visible, something we can debate, something that feels controllable: AI.
AI becomes the perfect scapegoat because:
• It’s new and visible (unlike invisible neurological damage)
• It represents enhanced cognitive capacity (threatening to people whose capacity has been compromised)
• It can be moralized about (unlike a virus, which is just biological reality; although we displaced *that* onto vaccines)
• Rejecting it feels like exercising agency (even though it actually limits options)
The “you have to be human” framing, this idea that AI threatens our humanity, is a displacement narrative. It lets us talk about feeling diminished without confronting why we feel diminished.
The First Time It’s Hit the Privileged
There is also something historically unique about who AI disruption is affecting. When the automobile replaced the horse, it displaced farriers, stable hands, and carriage drivers. When factories mechanized, it displaced manual laborers. When offshoring accelerated, it displaced manufacturing workers. In each case, the people most affected were working-class, often without political or media power to shape the narrative.
AI is different. For the first time in the history of technological disruption, the people facing obsolescence are the ones who write the op-eds, run the newsrooms, staff the law firms, manage the consultancies, and set the cultural narrative. They are educated, articulate, and positioned to turn their personal anxiety into a civilizational crisis.
When Dario Amodei says 50% of entry-level white-collar jobs will disappear, he is talking about the children of the people who write for the New York Times, who sit on university tenure committees, who populate think tanks. The reaction is not proportional to the threat. It is proportional to who is threatened.
Factory workers facing automation in the 1980s were told to “retrain.” Coal miners were told the market had spoken. Taxi drivers facing Uber were told that’s just disruption. But when it’s lawyers, writers, consultants, and junior analysts? Suddenly it’s an existential threat to humanity itself.
The moral panic about AI is, in part, the sound of a class that has never been disrupted discovering what disruption feels like.
The Pattern We Keep Repeating
It’s revealing which cognitive enhancements we demonize. Psychedelics that can restructure neural pathways? Dangerous. Stimulants that help people with ADHD function? Suspicious. AI tools that help people think more clearly? Threatening to our humanity.
But coffee? Fine. Glasses? Fine. Alcohol? Socially acceptable (although proven carcinogenic, and now on the decline). Written language, which fundamentally altered human cognition? Essential.
The pattern suggests we’re not actually afraid of cognitive enhancement. We’re afraid of certain kinds of enhancement: specifically, the kinds that might let people see more clearly, think more independently, question more effectively.
There’s a cynical reading here: societies function more smoothly when people operate at diminished capacity. Just enough cognitive function to be productive, but not so much that they start examining the structure itself. Exhaustion prevents examination. Drudgery keeps people contained. Financial stress keeps people distracted. So when tools emerge, whether chemical or computational, that restore or enhance capacity, there’s institutional panic. Not because the tools are dangerous, but because enhanced cognition is destabilizing to systems that depend on limited bandwidth.
What I See on the Ground
I’m writing this from Cambodia. When I walk the streets of Phnom Penh, I see what everyone sees in every city now: people absorbed in their phones. Scrolling, messaging, watching, playing. Using technology as a buffer between themselves and immediate reality.
But AI adoption here is relatively low. The behavioral pattern isn’t about AI. It’s about using available technology, any technology, to cope with a world that became more difficult to confront directly.
I do see AI being used, though. Mostly translation tools. A tuk-tuk driver trying to communicate with me, a shopkeeper explaining something complex, students working on assignments. The AI helps bridge gaps, reduces friction, makes connection possible rather than replacing it.
This is what AI actually does for most people. It doesn’t replace human connection, it removes obstacles to connection and to better functioning. It doesn’t automate meaning, it automates the tedious scaffolding around meaning so people have energy left for what actually matters.
Using AI to draft a routine email so you have bandwidth for an important conversation isn’t “outsourcing your humanity.” It’s resource allocation. It’s the same principle as using a calculator for arithmetic so you can focus on the mathematics that requires actual human judgment.
We Are the Technology
Here’s what gets lost in all of this: AI isn’t separate from us. Technology isn’t external to humanity. We invented these tools. We built them. We use them. We are them.
The false binary—humans versus technology—collapses under the slightest examination. We’ve been technological beings since we invented agriculture, written language, the wheel. Every tool humans have ever created is an expression of human ingenuity responding to human needs.
We invented calculators because we needed to calculate. We invented translation tools because we needed to communicate across languages. We invented AI assistants because our cognitive load became unsustainable and we needed help managing it.
The technology IS the human response. Creating tools to help us survive and thrive in changed circumstances is the most human thing possible.
Post-plague societies throughout history have done exactly this. They created new tools, new structures, new ways of organizing because the old ways couldn’t handle the new reality. We’re doing the same thing. The panic comes from people who want to believe we can return to pre-plague normal—but history shows that never happens.
We adapt. We build. We invent our way forward.
Moving Forward
The moral panic about AI will eventually subside, the way moral panics always do. Future generations will look back at our “you have to be human” discourse the way we look at anxieties about telephones destroying conversation or books making people antisocial.
But in the meantime, we’re wasting energy on the wrong conversation. We’re having a displacement crisis instead of confronting what we actually need to confront: we went through collective trauma, many of us have lasting cognitive effects, our institutions struggled to respond effectively, and the world changed in ways we haven’t fully processed.
AI didn’t cause any of that. AI is one of the tools we’re using to cope with it.
The people rejecting AI while staring at their smartphones, using GPS, taking SSRIs, wearing glasses, drinking coffee: they’re already cyborgs. We all are. We’ve always been. Technology doesn’t make us less human.
It’s the evidence that we’re human.
And perhaps, instead of panicking about the tools we’re creating, we could direct that energy toward the actual challenges we’re facing: processing collective trauma, supporting people with Long COVID, rebuilding institutional legitimacy, confronting our capacity for inhuman brutality and our need to find fault, and honestly examining what changed and why.
Meanwhile, half the founding team at xAI has walked out the door. Safety leaders across the industry are resigning. The head of safety of the most safety-conscious AI lab is daydreaming about poetry while his boss is predicting the elimination of millions of jobs. The institutions building this technology are fracturing under the same pressures affecting everyone else.
That’s not a story about AI being dangerous. That’s a story about a post-plague society struggling to hold itself together while the most significant technological shift in centuries happens in real time, on camera, with the comments turned on.
The plague changed us. We’re still figuring out how. And in the meantime, we’re building tools to help us function in this new reality. Maybe imperfectly. Definitely imperfectly.
That’s not a crisis of humanity. That’s just what humans do.