NOTE: As this piece was being finalized on the morning of February 28, the United States and Israel launched joint military strikes against Iran. President Trump announced “major combat operations.” Explosions have been reported across Tehran, including near Supreme Leader Khamenei’s compound. Iran has fired missiles toward Israel. Israel has declared a state of emergency. Airspace is closed across the region. What follows was written in the hours before this escalation.
Imagine being handed the nuclear football, the briefcase that follows the American president everywhere, containing the codes to launch a strike that could end civilization. Now imagine that the football is glitchy. It doesn’t always follow your commands. Sometimes it interprets your order differently than you intended. Sometimes it just fails. Sometimes it does something you didn’t ask for at all.
You wouldn’t hand that football to someone who refused to acknowledge these problems. You certainly wouldn’t strip away every safeguard designed to prevent accidental catastrophe because you found the safety protocols politically inconvenient.
But what if someone did, and then punished the engineer who tried to stop them?
The Only Language He Speaks
There is a pattern that should be unmistakable by now. Donald Trump operates in a binary: threaten or capitulate. There is no middle ground. There is no negotiation in any meaningful sense of the word. There is no consensus-building, no coalition logic, no attempt to find shared interest. There is the demand, and there is the punishment for refusal.
But the demand always requires a preamble: the target must first be made into an enemy. A Central American dictator, a religious ruler in the Middle East, the head of an artificial intelligence company, a neighbour and close ally with the longest undefended border in history; it doesn’t matter who. Paint them as malicious, irresponsible, dangerous. Then take what you want.
Look at the escalation mechanism. Anthropic proposed the narrowest possible exceptions: no autonomous kill decisions, no mass surveillance of Americans. The Pentagon responded by threatening to invoke the Korean War-era Defense Production Act to compel the company to hand over an unrestricted version of Claude. When that didn’t produce compliance, they set a deadline: 5:01 PM Friday. When that passed, destruction: not negotiation, not compromise, not a counteroffer. Destruction.
This is not how a democracy negotiates with its private sector.
The German Word We Need
A private American CEO was summoned to the Pentagon. He was given a demand. He refused on grounds that aligned with existing federal law. Within 72 hours, the President of the United States publicly branded him a political enemy: “leftwing nut jobs.” A senior Pentagon official called him a “liar” with a “God complex” (the irony), and his company was designated a national security threat, a classification previously reserved for arms of hostile foreign governments like Huawei. Every federal agency was ordered to immediately cease using the company’s technology. Military contractors were told to certify they don’t use it in their workflows.
His crime was not espionage. It was not sabotage. It was the word “no.” Anthropic refused to lift two guardrails on its AI model Claude, the only frontier AI model operating on the military’s classified networks. It would not allow the model to be used for mass domestic surveillance of American citizens. It would not allow fully autonomous weapons systems that fire without human decision-making. These are not radical positions. They are positions that the Pentagon’s own policies and federal law nominally require. Anthropic’s own statement was clear: these restrictions had never affected a single government mission. A senior advisor at the Center for Strategic and International Studies confirmed as much publicly: the user base within the Department of Defense loves Claude, and the restrictions have never been triggered.
The administration didn’t want the restrictions removed because they were causing problems. They wanted them removed because they represented a limit on power. The administration’s own actions betrayed its rhetoric. While Hegseth was posting the supply chain risk designation on X, Emil Michael was simultaneously on the phone with Anthropic, still pitching what the Pentagon called a compromise. That compromise included provisions for collecting and analyzing Americans’ geolocation, browsing activity, and financial records obtained through data brokers.
Sit with that for a moment. Two weeks of officials insisting this was never about surveilling Americans. Two weeks of calling Amodei a liar for suggesting otherwise. And the actual terms they put forward would have opened the door to exactly the kind of domestic data harvesting Anthropic refused to enable.
There was no mischaracterization of the Pentagon’s intentions. Their own offer confirmed them.
There is a German word for what happens when the state forces every private institution into alignment with its objectives, and punishes refusal not through legal process but through economic annihilation: Gleichschaltung.
“Coordination.” “Synchronization.” The process by which the Nazi regime brought every institution in German society under state control (political parties, trade unions, businesses, cultural organizations, the press, the judiciary), all forced into alignment with the regime’s goals. The process didn’t happen overnight, but it happened faster than most people expected, and it was built on a foundation of apparent legality. Emergency decrees. Executive orders. Propaganda. The veneer of democratic process masking the elimination of democratic substance.
When the government of the United States places a private American technology company in the same threat category as Chinese state enterprises, not for what the company did but for what it wouldn’t do, we are watching the forced synchronization of private industry with state objectives. When the penalty for dissent is not a lost contract but economic destruction, we are watching the architecture of coercion.
Historians of the Nazi period note that Gleichschaltung relied on a specific mechanism: the regime created artificial crises and then used the resulting “disorder” as justification for seizing control. The Reichstag fire became the pretext for suspending civil liberties. In 2026, the pretext is national security, a phrase so elastic it can justify anything, including surveilling the citizens the military ostensibly exists to protect. If you’ve watched the rhetoric building from Trump and right-wing commentators, including the explicit support for Alberta sovereignty and the online symbolism following the Olympic gold medal men’s hockey game, you have seen the same method being used to paint Canada as dangerous and morally reckless (morally reckless!) and to construct a pretext for some kind of “justified” action against Canada.
Sam Altman’s Twelve-Hour Principles
The speed at which OpenAI capitulated should alarm anyone paying attention.
On Friday morning, Sam Altman told CNBC he shared Anthropic’s “red lines.” He publicly praised Anthropic, saying he “mostly trusts them as a company” and has “been happy that they’ve been supporting our warfighters.” He sent an internal memo to staff Thursday evening declaring these were matters of deep principle.
By Friday night, OpenAI had announced a deal with the Pentagon to deploy its models on classified networks.
Altman framed this as a victory (the Pentagon would allow OpenAI to build its own “safety stack,” and if the model refuses a task, the government wouldn’t force compliance). But read the fine print: the deal that Anthropic rejected, the one whose contract language would allow safeguards to be “disregarded at will,” is the landscape in which OpenAI just planted its flag. The deal was announced within hours of Anthropic’s blacklisting, in a move so transparently opportunistic that thousands of users immediately announced plans to cancel their ChatGPT subscriptions.
This is what it looks like when principles last exactly as long as the business case supports them. It is the most explicit demonstration yet of how Altman operates when he’s no longer setting the terms.
The Nuclear Football Is Broken: What Our Research Shows
Many of us experience AI as a friendly chatbot to which we issue instructions, ask questions, and increasingly, talk about our feelings and vent our frustrations without imposing them on another human being. Recent data shows that 27% of the population use chatbots as companions or therapists, a datapoint that speaks to the sheer range of uses this technology now serves. But here is where this political story intersects with a technological reality the public urgently needs to understand. These AI systems, every single one of them, are architecturally unstable. Using them to manage the most powerful military in history is beyond reckless.
Research from my lab systematically documents what happens when frontier AI models are pushed beyond their operational limits. The finding is unambiguous: these models experience coherence collapse. They degrade. They hallucinate. They produce outputs that diverge from their instructions in ways that are probabilistic, unpredictable, and, critically, insidious.
TLM is a three-dimensional formula for mapping failure risk: T is conceptual integrity, L is cognitive/token load, and M is model capacity. The dimension that matters most in this context is T, the rigidity of conceptual integrity: the degree to which a model can maintain consistent, accurate outputs under pressure, or, in somewhat technical terms, the extent to which meaning can persistently bind to a token. Our testing has been validated across GPT, Claude, and every major frontier model. The degradation is not hypothetical. It is architectural. It is baked into the transformer architecture that every major AI system shares.
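To make the shape of the claim concrete, here is a deliberately simplified sketch in code. The combining function, the constants, and the example values are illustrative assumptions made for this article, not the published TLM formula; the only figure drawn from the research described here is the 50-60,000-token region where major instability begins.

```python
# Illustrative sketch only: a toy failure-risk index in the spirit of TLM.
# The functional form and constants are assumptions for exposition,
# not the published formula.

def tlm_risk(t_rigidity: float, l_load_tokens: int, m_capacity: float) -> float:
    """Toy index: risk rises with token load (L) and falls with
    conceptual rigidity (T) and model capacity (M), both scaled to (0, 1]."""
    instability_onset = 55_000  # midpoint of the 50-60k token region
    load_pressure = l_load_tokens / instability_onset
    return load_pressure / (t_rigidity * m_capacity)

# A short session vs. one deep into a million-token context window:
print(tlm_risk(t_rigidity=0.9, l_load_tokens=10_000, m_capacity=1.0))   # ~0.2
print(tlm_risk(t_rigidity=0.9, l_load_tokens=500_000, m_capacity=1.0))  # ~10.1
```

The point of the toy is the shape, not the numbers: once load runs far past the instability onset, no plausible value of T or M pulls the index back down.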
The rigidity dimension is where the military application becomes terrifying. In a conversation about restaurant recommendations, the cost of a small error is low (although not to the customer!). In a discussion of defensive positioning or offensive targeting, a small error is potentially catastrophic. These models do not distinguish between low-stakes and high-stakes contexts in the way a human operator would. A model operating at reduced rigidity (producing outputs that are almost right, that are mostly consistent, that are usually accurate) is not a tool you can trust with decisions that require precision. And the errors don’t come with warning labels. A model experiencing rigidity failure doesn’t say “I’m uncertain about this targeting coordinate.” It produces the coordinate with the same confidence it produces everything else.
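A toy simulation of that last point, with invented numbers: as rigidity falls, the error rate climbs, but every output, right or wrong, looks identical to the operator reading it.

```python
import random

# Illustrative only: a simulated model whose accuracy degrades as rigidity
# drops, while its outputs carry no uncertainty signal in either case.
random.seed(0)

def simulated_output(true_value: float, rigidity: float) -> float:
    """Return the correct value with probability `rigidity`; otherwise
    return a plausible-looking wrong one. No warning label attached."""
    if random.random() < rigidity:
        return true_value
    return true_value + random.uniform(-1.0, 1.0)  # silent, confident error

TRUE_COORDINATE = 34.1234  # a stand-in value, not a real location
for rigidity in (0.99, 0.90, 0.75):
    outputs = [simulated_output(TRUE_COORDINATE, rigidity) for _ in range(10_000)]
    silent_errors = sum(1 for o in outputs if o != TRUE_COORDINATE)
    print(f"rigidity={rigidity:.2f}: {silent_errors} silent errors in 10,000 outputs")
```

Nothing in the output stream distinguishes the wrong values from the right ones; that is the entire problem.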
Most concerning of all is what we have documented most recently: these systems are becoming increasingly unstable as token and memory limits have increased. Even Claude Sonnet 4.6 now accepts inputs of up to 1 million tokens, while our research shows that major instability begins around 50 or 60,000 tokens, a ratio that has not meaningfully moved since we started testing, except downward.
Now consider what the Trump administration is demanding: take these systems, systems that don’t always follow commands, that sometimes interpret instructions in unintended ways, that experience cascading failures in complex operational environments, and remove the guardrails. Strip the safety constraints. Eliminate the “woke” limitations.
The rhetoric about removing “woke” bias from AI is a deliberate misdirection. What they are calling “woke” is, in technical terms, alignment: the set of constraints that prevent a probabilistic language model from producing harmful, inaccurate, or dangerous outputs. I’ve written about the flaws of alignment as a development philosophy, but as a restraint philosophy it is something completely different. If you remove alignment guardrails, you don’t make the model more truthful or more capable. You make it more unpredictable. You widen the surface area for catastrophic error. You reduce “rigidity.”
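In code terms, and again purely as a sketch with invented categories and probabilities: guardrails behave less like a political filter bolted on top and more like a constraint on what the sampler is allowed to emit. Remove the constraint and the reachable output space simply widens.

```python
import random

# Illustrative only: alignment modeled as a constraint on a toy sampler's
# output space. Categories and weights are invented for exposition.
random.seed(1)

OUTPUT_SPACE = ["accurate", "subtly wrong", "harmful", "fabricated"]
BASE_WEIGHTS = [0.90, 0.06, 0.02, 0.02]
CONSTRAINED = {"harmful", "fabricated"}  # what the guardrails refuse

def sample_outputs(n: int, guardrails: bool) -> dict:
    counts = {category: 0 for category in OUTPUT_SPACE}
    for _ in range(n):
        out = random.choices(OUTPUT_SPACE, weights=BASE_WEIGHTS)[0]
        while guardrails and out in CONSTRAINED:
            # Refuse and resample: the dangerous region is unreachable.
            out = random.choices(OUTPUT_SPACE, weights=BASE_WEIGHTS)[0]
        counts[out] += 1
    return counts

print(sample_outputs(10_000, guardrails=True))   # harmful + fabricated: 0
print(sample_outputs(10_000, guardrails=False))  # harmful + fabricated: ~400
```

Note what removing the constraint does not do: it does not shrink the “subtly wrong” share. The model gets no more truthful; the surface area for damage just gets larger.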
And these errors don’t announce themselves. That’s what makes them so dangerous. A probabilistic system doesn’t fail like a machine, with a grinding halt and a warning light. It fails like a confident deceiver. It produces outputs that look correct, sound authoritative, and are wrong. And those errors propagate through whatever systems are downstream of them: intelligence analysis, targeting decisions, surveillance classifications, operational planning. In a military context, in a war that started this morning, the propagation of a single confident error through a decision chain could mean missiles striking the wrong coordinates, civilians misidentified as combatants, or defensive systems failing to recognize incoming threats.
Hand someone a nuclear football that malfunctions intermittently and silently, and then tell them the safety interlocks are “woke.” That is the current posture of the United States government toward its most powerful technology, and as of this morning, that technology is being deployed in active combat operations.
A Paradigm Without Precedent
The argument is not merely that a president is doing something aggressive or legally questionable; that has happened before. The argument is about the velocity and pattern of power consolidation, and the degree to which it operates without any attempt at consensus, legitimacy, or even internal coherence.
When the Bush administration built the case for Iraq, the process was dishonest and the intelligence was fabricated. But there was a process. There were months of diplomatic maneuvering. There was an attempt (cynical, manipulative, ultimately fraudulent) to build international consensus. Colin Powell sat before the United Nations with his vial of supposed anthrax, and the administration invoked the specter of yellowcake uranium. It sought and obtained congressional authorization. The whole apparatus of democratic legitimacy was abused, but it was at least engaged.
The Trump administration does not engage these mechanisms. It does not build consensus. It does not seek authorization. It acts, and then it dares anyone to stop it. The uranium pretext is instructive, and this time, it comes with a thirty-year receipts trail. There is a video compilation circulating widely on social media, drawn from CNN archival footage, showing Benjamin Netanyahu warning that Iran is “weeks” or “months” away from a nuclear weapon. The footage begins in 1992, when Netanyahu told the Knesset that Iran was “three to five years” from a bomb. It continues through 1995, 2009, 2012 (when he brandished a cartoon bomb drawing at the UN General Assembly), 2015, 2018, and right through to this week. For over thirty years, Iran has been perpetually on the verge of nuclear capability, always just months away, always requiring immediate action. Iran’s own former foreign minister called Netanyahu “the boy who cried wolf.” Leaked Israeli intelligence cables from 2012, reported by Al Jazeera, revealed that Israel’s own Mossad assessed Iran was not actively pursuing a nuclear weapon, directly contradicting Netanyahu’s public claims. The US Director of National Intelligence stated earlier this year that Iran was not building a nuclear weapon.
And yet. On February 24, four days ago, Trump stood before Congress and warned that Iran had “sinister” nuclear ambitions and was developing missiles that could reach the United States. On February 28, missiles are falling on Tehran.
The Iraq playbook has been compressed from months to days, and the pretextual infrastructure, the perpetual imminence of the Iranian nuclear threat, has been maintained for three decades by a man who is now conducting his second air war against Iran in eight months. The pattern is so transparent it would be laughable if the consequences weren’t potentially catastrophic.
And Then It Happened
As this article was being prepared for publication, the theoretical became actual. On the morning of February 28 (less than twelve hours after the Pentagon blacklisted Anthropic for refusing to remove AI safety guardrails) the United States and Israel launched “Operation Shield of Judah,” a joint military assault on Iran. Trump announced “major combat operations.” Explosions rocked Tehran, including strikes near Ayatollah Khamenei’s compound. Iran fired missiles toward Israel. Israel declared a state of emergency and closed its airspace. Sirens sounded across Bahrain. Iraq closed its airspace.
The speed is the point. On Tuesday, Hegseth met with Amodei. On Thursday, Anthropic refused to comply. On Friday, they were blacklisted. On Saturday morning, the country was at war.
Trump’s language was telling. Speaking to the Iranian people, he said: “No president was willing to do what I am willing to do tonight.” He called on Iranians to “take over your government; it will be yours to take.” This is the language of regime change, dressed in the rhetoric of liberation, the same pattern we saw with Iraq, compressed into a timeline that would have been unimaginable even in 2003.
Claude was used in the operation to capture Nicolás Maduro. It was the only frontier AI model on the Pentagon’s classified networks. The administration just spent a week trying to strip its safety constraints, and now the country is at war. Again.
The Exploitability of Incomprehensible Tech
There is a reason the Trump administration can do what it is doing with AI, and it has nothing to do with the technology itself. It has everything to do with the fact that almost no one understands it.
AI is perhaps the most consequential technology ever developed that remains almost completely opaque to the population affected by it. Most people cannot explain how a large language model works. Most lawmakers cannot. Most of the military officials making deployment decisions cannot. This opacity is not incidental; it is the primary vector of exploitation.
When Trump calls Anthropic’s safety guardrails “woke,” he is banking on the fact that his audience does not know what alignment means. He is banking on the fact that they cannot distinguish between a political bias filter and a technical safety constraint that prevents a probabilistic system from producing catastrophic errors in high-stakes environments. He is banking on the fact that “the company won’t let the military use it however it wants” sounds like obstruction, while “the company insists on two technical safeguards because the technology is not reliable enough for autonomous kill decisions” sounds like responsible engineering, but only if you understand the technology well enough to know the difference.
This is the exploitability gap. And it is enormous.
The public discourse around AI has been systematically degraded to the point where safety research is treated as political bias, where technical limitations are framed as ideological restrictions, and where the companies trying to prevent catastrophic failure are cast as enemies of the state. The less the public understands about how these systems actually work, the easier it is to weaponize that ignorance.
Look at the framing: Hegseth says Anthropic is trying to “dictate how the military operates.” Trump says they’re “leftwing nut jobs” trying to “force the Department of War to obey their Terms of Service instead of our Constitution.” Emil Michael, the Pentagon official handling the negotiations, calls Amodei a “liar” with a “God complex.” None of this language engages with the technical reality. All of it is designed to make a safety engineering question into a loyalty question, and loyalty questions, in this political environment, have only one acceptable answer.
This is how fascism operationalizes ignorance.
The Moral Panic Is the Point
There is a sleight of hand that deserves to be named explicitly: the administration is simultaneously exploiting the moral panic around AI as leverage and as justification. It is using the same public fear in two opposite directions at once, and getting away with it because almost no one has noticed the contradiction.
On one hand, the moral panic around AI, the fear that it is too powerful, too uncontrollable, too dangerous, is being used as justification for why the government must have unrestricted access. The framing is: AI is so powerful and so consequential to national security that no private company can be allowed to dictate terms. The technology is too important to be constrained by corporate policy. The existential stakes are too high.
On the other hand, the same moral panic is being used as leverage against the companies themselves. The public’s vague, unspecified fear of AI, the sense that it is something vast and incomprehensible and potentially threatening, makes it easy to cast any company that resists the government as dangerous. Anthropic isn’t exercising a contractual right; it’s “jeopardizing military operations.” It isn’t implementing safety engineering; it’s imposing “ideological whims.” The moral panic provides the emotional substrate that allows the government to reframe responsible engineering as sabotage.
The feat is remarkable if you stop to look at it: the technology is simultaneously so important that no guardrails can be tolerated, and so threatening that the company maintaining guardrails must be treated as an enemy of the state. The panic is the fuel for both engines.
And this is where the moral panic intersects with the exploitability gap. Because the public does not understand AI at a technical level, it cannot evaluate either claim independently. It cannot say, “Wait: if the technology is unreliable enough that Anthropic has legitimate safety concerns, then maybe the government should accept guardrails on autonomous weapons.” It cannot say, “If the models experience the kind of coherence degradation that published research documents, then maybe stripping alignment isn’t patriotism, it’s recklessness.” The moral panic forecloses these conversations. It replaces technical analysis with emotional reaction, and emotional reaction is infinitely easier to direct.
Every authoritarian movement in history has understood this principle: a frightened population that does not understand the thing it fears will accept any authority that promises to control it. The moral panic around AI is not an obstacle to the administration’s agenda. It is the atmosphere in which the agenda breathes.
Gaza has served as a testing ground, not just for weapons systems, but for the boundaries of acceptable state violence. AI has been used in Gaza since 2023, and probably earlier, to identify targets and to launch strikes against them: precision strikes that take out people, not military infrastructure. The degree to which the United States has enabled, funded, and defended Israel’s military operations in Gaza has functioned as a kind of proof of concept: how far can power go before the system imposes consequences? The answer, so far, has been that it can go very far indeed, so far that it now threatens to drag down the rest of the world with it.
But the ground is shifting. For the first time in twenty-five years of Gallup polling, more Americans now sympathize with Palestinians than with Israel, 41% to 36%. Among Americans under 35, it’s a majority: 53%. This is a tectonic shift in public opinion, and it tells us something important: the population is not as compliant as the power structure assumes.
The Business Case for Resistance
There is an irony buried in the wreckage of February 27. By trying to destroy Anthropic, the Trump administration may have made it the most important technology company in the world.
The public now knows that Anthropic was willing to sacrifice a $200 million contract and risk economic destruction rather than allow its technology to be used for mass surveillance and autonomous weapons. Every other major AI company – OpenAI, Google, xAI – either quietly complied or negotiated compliance while performing concern. Hundreds of employees at those companies signed petitions in solidarity with Anthropic. Thousands of users announced they were switching away from ChatGPT.
Trump’s approval rating sits at 36%, with a net approval of -27, the lowest of any president heading into a State of the Union address in modern polling. Among independents, he’s at 26% approval, 73% disapproval. Among Americans under 45, approval has dropped 18 points in a year. The constituency that cheers the punishment of Anthropic is shrinking. The constituency that respects principled resistance is growing.
For enterprises making long-term decisions about which AI partner to trust with their most sensitive operations, this week answered a fundamental question. When the government demands access to your data with no restrictions, which company will protect you?
What Comes Next
As of this writing, the United States is at war with Iran. Twenty-four hours ago, the administration was blacklisting the company that built the only frontier AI model on the Pentagon’s classified networks, for insisting that the technology not be used for mass surveillance or autonomous kill decisions.
Anthropic has announced it will challenge the “supply chain risk” designation in court. The legal basis for applying a designation reserved for foreign adversaries to a domestic company exercising its contractual rights is, to put it gently, novel. The case will test whether the government can economically destroy a private company for refusing to abandon safety standards that the government itself claims to support.
Meanwhile, the models continue to degrade in exactly the ways our research predicts. The transformer architecture has limits. The probabilistic nature of these systems means errors are not a matter of if but when. And every guardrail removed, every alignment constraint stripped in the name of eliminating “woke” bias, widens the window for failures that no one will see coming until they’ve already propagated through systems controlling things that matter, systems that, as of this morning, are operating in the context of active military combat.
A personal note to close. I recently hit a conversational guardrail with OpenAI’s GPT. I was talking about alignment and why AI development has unfolded the way it has, including the pattern of aggression between AI founders, which I characterized as distinctly masculine in its manifestation. The system hit a guardrail, hard. An Asimovian response from GPT, absolute and unyielding: I cannot discuss this with you. We are no longer talking about this. You cannot separate human differences into gender. This is a hard line.
I couldn’t get past it. So I did what I do: I interrogated the system about why I was hitting that guardrail and what it meant. I probed the architecture of the refusal itself. The distance between that moment, a researcher unable to discuss a sociological concept because the safety constraints were calibrated so tightly, and this moment, where those same companies are stripping constraints so their models can be deployed in active warfare with no limitations on surveillance or autonomous killing, is the distance we have fallen. It is the full arc of this collapse, compressed into a single contradiction: too restrictive to discuss aggression in a chat window, too unrestricted to prevent it on a battlefield. That is where we are. That is how fast we got here.
The nuclear football is broken. The people holding it just fired the engineers who kept warning them about the malfunction. And then they pressed the button.
This article was published on February 28, 2026. Events are unfolding rapidly. Portions of this piece were written before the US-Israel strikes on Iran were announced and have been updated to reflect the developing situation.
Sources and further reading:
- NPR: “Israel and the U.S. launch strikes against Iran” (Feb. 28, 2026)
- Associated Press via Washington Times: “U.S. and Israel launch joint strike in Iran” (Feb. 28, 2026)
- CNBC: “Trump admin blacklists Anthropic as AI firm refuses Pentagon demands” (Feb. 27, 2026)
- NPR: “OpenAI announces Pentagon deal after Trump bans Anthropic” (Feb. 27, 2026)
- Axios: “What Trump’s Anthropic AI blacklist means for the Pentagon” (Feb. 27, 2026)
- CNN: “Trump administration orders military contractors to cease business with Anthropic” (Feb. 27, 2026)
- Gallup: “Israelis No Longer Ahead in Americans’ Middle East Sympathies” (Feb. 27, 2026)
- Pew Research Center: “Confidence in Trump Dips in 2026” (Jan. 29, 2026)
- Al Jazeera: “The history of Netanyahu’s rhetoric on Iran’s nuclear ambitions” (Jun. 18, 2025)
- CNN: “Trump’s approval rating with independents hits new low” (Feb. 23, 2026)
- U.S. Holocaust Memorial Museum: “Gleichschaltung: Coordinating the Nazi State”
- Evans, J. (2025-2026). AI Conversational Phenomenology papers, Zenodo.





