
Project Glasswing and the Extraordinary Power Paradox We’ve Never Faced Before

Last updated on April 10th, 2026 at 08:24 am

For the past several months, across a series of research papers and analyses, I’ve been asking the same question in different forms: Who controls the AI infrastructure that governments, businesses, and citizens depend on? Who holds the power? Who has the keys?


The answer keeps getting more complicated, and paradoxically, more starkly simple.

What started as a sovereignty question (which country controls the compute, the models, the data pipelines) has become something more fundamental. We are watching public power transfer to private power at a speed and scale that has no historical precedent. Not through revolution or conquest, but through capability. The entities that can do the most consequential things in the digital world are no longer governments. They are companies. And the gap is widening.


This transfer looks benevolent. Some of it is. The AI tools available to ordinary people today are genuinely empowering, and some of the companies providing them are acting in good faith. But let’s not be naive about the full picture. AI is already being used by the Israeli military, through systems called Lavender and The Gospel, to generate kill lists and automate targeting decisions in Gaza. Intelligence officers report that human review of AI-selected targets sometimes lasted twenty seconds, and that the military authorized killing up to 15-20 civilians per junior operative identified by the system. AI is being used by U.S. Immigration and Customs Enforcement, through Palantir’s ELITE and ImmigrationOS platforms, Clearview AI facial recognition, and a $1.2 billion network of private skip-tracing contractors, to build what NPR has documented as a mass surveillance apparatus targeting immigrants and American citizens who protest ICE’s activities. That apparatus draws data from Medicaid records, social media monitoring, license plate readers, iris scanners, and spyware capable of intercepting encrypted messages. These are not edge cases or hypothetical misuses. They are current, documented, state-sponsored deployments of AI capability against civilian populations.

The benevolence is selective. We know that humans have exercised restraint with nuclear power since 1945, despite massive proliferation and arsenals across the world. But this is a different kind of power in a different form of availability. Will Mythos be made available to the general public? Will it be available to governments? Corporations? Specific actors? Investors? How will access be determined? As many AI thinkers have postulated, some for decades, the existential question is real, and it is now here. As Simon Chesterman of the National University of Singapore put it just weeks ago: “Sovereignty – understood as the authority to set rules, allocate resources, and shape collective futures – is migrating from public institutions to private actors. The danger is not that machines will rule humanity. It is that those who control them increasingly shape the conditions under which humanity governs itself.”

Benevolence itself is discretionary. It is not governed by regulation, treaty, or democratic mandate. It persists at the pleasure of the companies and the investors behind them. The MIT Center for International Studies frames the core of this dynamic as a dilemma in which private, unaccountable authority is accorded a form of legitimacy that enables non-state actors to establish rules and pursue private interests, not the interests of the state or even of the market. Foreign Affairs called it the “AI Power Paradox,” the observation that both the U.S. and China see AI development as a zero-sum game, but neither can govern the private actors who actually control the capabilities.

This is a problem that has no precedent in human history: a technology exists that, if made widely available, puts catastrophic power into too many hands, but if kept concentrated, puts that same catastrophic power into too few hands. There is no safe point on the distribution spectrum.

We have never confronted this before. Not really.

Nuclear weapons are the closest analogy, and while the parallel isn’t exact, because individuals rarely possess nuclear authority, it’s instructive because of the governance frameworks built around nuclear capability. The Nuclear Non-Proliferation Treaty and the arms control regime around it are collapsing in real time. New START expired on February 5, 2026 with no replacement. The U.S.-Israeli strikes on Iran have, according to the Bulletin of the Atomic Scientists, raised serious doubts that the NPT can hold as a central pillar of international security. The Trump administration is simultaneously pursuing a nuclear cooperation deal with Saudi Arabia that weakens nonproliferation provisions, threatening to resume nuclear testing, and letting the last binding limits on the world’s two largest arsenals lapse. The lesson of Iran, Iraq, Libya, and Ukraine (contrasted with North Korea) is becoming conventional wisdom: if you don’t want a nuclear power to attack you, get nuclear weapons.

The nonproliferation framework was the best answer humanity produced for the last version of this problem. It took decades to build and it is failing. Now we have a new version of the problem, moving at a pace measured in months, not decades, and we have no framework at all.

On April 8, 2026, Anthropic announced Project Glasswing.

What Just Happened

Project Glasswing is a cybersecurity initiative built around an unreleased AI model called Claude Mythos Preview. This model can autonomously find exploitable vulnerabilities in essentially any software system on earth. It has already found thousands of zero-day flaws (previously unknown security holes) in every major operating system and every major web browser. Some of these bugs survived decades of human review and millions of automated security tests.

The specifics are striking. Mythos found a 27-year-old vulnerability in OpenBSD, one of the most security-hardened operating systems in the world, used to run firewalls and critical infrastructure. It found a 16-year-old flaw in FFmpeg, a multimedia library embedded in countless applications, in a line of code that automated tools had tested five million times without catching it. It autonomously chained together multiple Linux kernel vulnerabilities to escalate from ordinary user access to full system control.

It did most of this without human guidance.

The partner list for Project Glasswing reads like a who’s who of global digital infrastructure: Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. Anthropic is committing up to $100 million in usage credits. More than 40 additional organizations that build or maintain critical software will get access.

On its face, this looks like a good thing. Finding and patching vulnerabilities before attackers exploit them is defensive work that benefits everyone. Anthropic deserves credit for choosing this approach rather than quietly selling offensive capabilities to the highest bidder.

But “good thing, done by well-intentioned people, right now” is not a governance framework. And the distinction matters more than many business leaders currently understand.

The Competitive Consolidation Problem


There is another dimension to the Glasswing arrangement that deserves scrutiny: what exactly are partners getting access to, and what does it mean for market concentration?


Mythos is described as a “general-purpose frontier model.” It is not a narrow security tool. It is the most capable coding, reasoning, and agentic AI system ever benchmarked, scoring higher than any model ever tested on software engineering, mathematical reasoning, and autonomous computer use. The stated purpose of partner access is defensive security: vulnerability detection, penetration testing, endpoint hardening. But the model itself is a general-purpose engine delivered through the same API infrastructure – Claude API, Amazon Bedrock, Google Vertex AI, Microsoft Foundry – that these companies use for everything else. There is no public indication of technical guardrails that restrict partner use to security tasks only, and the benchmarks Anthropic published emphasize general capability, not narrow security function. Without public testing, we can’t know the limitations or flaws of the model itself, and there is no mechanism for accountability.
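To make the point concrete, here is a minimal sketch using Anthropic’s public Python SDK. The model identifier is hypothetical (no partner-facing identifier has been published), but the request shape is the same one used for any Claude model: nothing in the API itself distinguishes a security audit from any other task, so any restriction to defensive use would have to be contractual rather than technical.

```python
# Minimal sketch using the public Anthropic Python SDK (pip install anthropic).
# The model identifier "claude-mythos-preview" is hypothetical; partner access
# details have not been published.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-mythos-preview",  # hypothetical identifier
    max_tokens=1024,
    messages=[
        # Nothing in this request shape marks it as a security task. The same
        # call could carry a vulnerability audit, a trading strategy, or a
        # competitor analysis.
        {"role": "user", "content": "Audit this C parser for memory-safety bugs: ..."},
    ],
)
print(response.content[0].text)
```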


This means the Glasswing partners, a list that already includes the largest cloud providers, the largest chipmaker, the largest cybersecurity firms, and one of the largest financial institutions on earth, now have access to a model that is materially more capable than anything available to their competitors, customers, or the public. The competitive advantage this confers extends well beyond security. A model that scores 93.9% on software engineering benchmarks and can autonomously chain complex multi-step technical tasks is far more than a vulnerability scanner; it is an operational advantage across every dimension of software development, infrastructure management, and technical decision-making.


Over 99% of the vulnerabilities Mythos has discovered remain unpatched. The partners scanning their own systems know where the holes are. Their competitors do not. Their customers do not. The governments whose infrastructure depends on the same software do not. This is an information asymmetry with direct competitive and geopolitical consequences, concentrated in the hands of companies that were already the most powerful actors in the digital economy.


And the security of the arrangement itself is not beyond question. During the lead-up to the Glasswing announcement, a misconfigured content management system left a draft Mythos blog post and roughly 3,000 internal Anthropic assets publicly searchable. Days later, a packaging error briefly exposed the complete original source code of Claude Code – 512,000 lines – to anyone running a routine software install. Neither incident compromised model weights or training infrastructure. But for an organization asking the world to trust it as custodian of an autonomous cyber-offense capability, operational lapses of this kind illustrate exactly the fragility beneath Acton’s warning about concentrated power: human systems fail, and the more consequential the system, the more catastrophic the failure.

The name itself is revealing. “Mythos” comes from the Ancient Greek for “utterance” or “narrative,” the system of stories through which civilizations made sense of the world. Anthropic chose to name their most dangerous model after the human need for meaning-making. That’s either remarkable self-awareness or remarkable hubris, and possibly both.

The Distribution Paradox

Democratize this technology and you hand autonomous offensive cyber capability to every state actor, criminal organization, and lone operator with API access. Concentrate it and you create an asymmetry so profound that whoever controls it effectively holds a skeleton key to global digital infrastructure, with no democratic accountability governing how they use it.

This is not a spectrum with a comfortable middle. It is a genuine paradox, and it’s worth asking whether it has a structural resemblance to a tautology, whether the very act of creating this capability guarantees an ungovernable outcome regardless of distribution choices. If every point on the access spectrum is dangerous, the problem isn’t access. The problem is existence.

That may sound abstract. It is not. Anthropic themselves frame the urgency plainly: these capabilities will proliferate beyond responsible actors, and the timeline is measured in months. The recent New Yorker profile of Sam Altman asks disturbing questions about his ethics in handling extreme power. We must ask these questions, because concentration of power demands Spider-Man-like responsibility from people who may not have demonstrated it. The recurring superhero references are intentional. We are talking about human beings who now have superhero-like powers, not physically, but through the application of technology. Are they up to the task? How do we plan to find out?

Political philosophy and governance theory have grappled with versions of this problem, but never this version.

The Collingridge Dilemma (1980) holds that technology’s social consequences can’t be predicted early enough to control, and by the time they become apparent, the technology is too entrenched to change. Glasswing is a live demonstration: the capability exists, proliferation is coming, and governance is already lagging both.

The guardianship problem, quis custodiet ipsos custodes, who watches the watchmen, goes back to Plato’s Republic and was taken up again in Alan Moore’s Watchmen, which addressed this exact question. The recent film Bugonia went further with (spoiler) its apocalyptic scenario. Existential power that is discretionary cannot safely sit in a few hands, governed by nothing but personality and character. But it is also unsafe when distributed. Who do you trust with power that can’t be safely distributed?

Plato’s answer was philosopher-kings with special training and no property. Moore’s Dr. Manhattan, holding such power, opts out, only to watch himself, off-planet, become a tool, an ostensible adversary, an excuse. He does not choose to destroy the world; he withdraws so as not to influence the outcome, with a restraint humans have historically been loath to demonstrate. But his existence still shapes an existential crisis, and Ozymandias ultimately uses his image, and the geopolitical vacuum around him, to stage a mass-casualty event meant to avert total apocalypse.

But the question isn’t really who watches the watchmen anymore.

There are no watchmen.

Anthropic’s answer is a public benefit corporation with a trust document. There’s a company based in San Francisco that built something, and a trust document that says they’ll be careful with it. Will their partners be as careful?

The offense-defense balance from security studies describes what happens when offensive capability becomes cheap and widely available while defense remains expensive and centralized: structural instability. Nuclear weapons were the previous case, but they required state-level resources and decades of development. Mythos-class capabilities could emerge from a well-funded lab in months.

Elinor Ostrom’s commons governance showed that communities can self-govern shared resources without privatization or state control — but her framework assumed the resource was something people wanted to sustain. What we have here is an anti-commons: something where access creates harm, but exclusion creates different harm.

The dual-use research dilemma from biosecurity is the closest practical precedent. After the H5N1 gain-of-function controversy, the scientific community confronted exactly this: publishing research enables defense but also offense, while classifying it concentrates power in whoever controls access. They created review boards and voluntary moratoria — institutional speed bumps. They never fully resolved it. The debate is still active fifteen years later.

The honest answer from theory is that there is no known stable solution to this class of problem. Every historical approach has been a managed compromise: nonproliferation treaties (leaky but better than nothing), biosafety review boards (slow but functional), classification regimes (effective but anti-democratic). All of them depend on institutional trust that erodes over time. All of them are, at this moment, visibly eroding. What we also know from nuclear technology is that secrets spread. Capabilities spread, even the most complex and dangerous in the world.

Follow the Money

To understand why the institutional constraints on Mythos are thinner than they appear, follow the capital.

Anthropic has raised approximately $67 billion across 17 funding rounds, with a valuation around $380 billion. Amazon is the largest single investor at roughly $8 billion, though it holds no controlling stake. Google owns 14 percent, a figure revealed only through court documents, but has no voting rights and no board seats. The most recent $30 billion Series G round brought in NVIDIA ($10 billion), Microsoft ($5 billion), and sovereign wealth funds from Singapore (GIC), the UAE (MGX), and Qatar (QIA). There are now over 230 investors on the cap table.

The company is structured as a public benefit corporation with a Long-Term Benefit Trust designed to prevent any single investor from overriding the safety mission. This is a thoughtful and unusual governance mechanism. It is also, fundamentally, a legal document enforced within the American corporate law system, a system that is currently experiencing its own stress test.

The company burns approximately $19 billion a year and is not yet profitable. That creates continuous dependency on external capital, and every fundraising round is a negotiation in which new parties gain access and influence. This is not a flaw in the design. It is the design.

The Threat Surface

Here is where business leaders need to stop skimming and pay close attention, because the assumptions underlying Project Glasswing’s benevolent framing are all contingent.

The capability is real. The constraints are institutional.

The Long-Term Benefit Trust works as long as the legal system that enforces it remains stable, as long as the people inside the organization remain committed to the mission, and as long as no external actor applies sufficient pressure to override it.

Nationalization or forced access. A model that can autonomously find exploitable flaws in every major operating system is, by any reasonable definition, a dual-use technology with direct national security implications. The U.S. government has the Defense Production Act, IEEPA, CFIUS, and executive orders at its disposal. Anthropic’s page notes they are in “ongoing discussions with US government officials” about Mythos. Today those discussions are collaborative. Under a different political temperature, or the same administration on a different Tuesday, they could be coercive. We are watching an administration that attacked Iran’s nuclear facilities, let the last arms control treaty lapse, and is offering Saudi Arabia enrichment capability while threatening to resume nuclear testing. The idea that such an administration would approach an autonomous cyber-offense tool with measured institutional respect is, to be charitable, optimistic.

Espionage. A model that has already mapped thousands of zero-day vulnerabilities in every major operating system and browser is, from an intelligence perspective, one of the most valuable targets on earth. The vulnerability database alone, before you even consider the model itself, represents an attack surface map of global digital infrastructure that every intelligence agency in the world would pay enormously to access. With 230+ investors across multiple sovereign jurisdictions, with sovereign wealth funds from Singapore, the UAE, and Qatar on the cap table, and with a company of 2,500 employees handling the most sensitive offensive security data ever assembled in one place, the espionage surface is vast. You don’t need to steal the model. You need to steal its output. And the number of people and organizations with legitimate access to that output is growing, not shrinking. Did Jonathan Pollard steal nuclear secrets from the US and give them to Israel? The jury is still out on that question, but one thing we know is true: Israel has nuclear capabilities and is the most aggressive power on the planet at the moment.

Insider compromise. A company burning $19 billion a year depends on continuous infusions of external capital. Every funding round is a pressure point. With investors spanning multiple sovereign jurisdictions and geopolitical interests, the surface area for influence, overt or covert, is enormous. You don’t need to nationalize a company if you can compromise its decision-making through capital dependency. You don’t need to blackmail the CEO if you can recruit a mid-level engineer with access to the vulnerability database. The history of intelligence operations tells us that the question is not whether such attempts will be made, but whether they have already been made.

Proliferation. Anthropic built Mythos. Others will build equivalents. The Glasswing announcement is itself an acknowledgment that the defensive window is closing. The issue isn’t whether these capabilities spread but whether defenders patch faster than attackers exploit. That race is now permanent, and it has no finish line.

The Sovereignty Gap

Conspicuously absent from the Glasswing partner list: any Canadian organization. Any European government agency. Any institution from the countries whose critical infrastructure runs on the same software Mythos just proved is vulnerable.

If you’re a Canadian government executive running OpenBSD on your firewalls, Anthropic found and patched a 27-year-old hole in it. Your government wasn’t in the room when they decided what to do about it, when to disclose it, or who to tell first. That’s not malice. It’s structural asymmetry, the kind that my ongoing research into AI sovereignty dependency chains has been mapping for months.

The Glasswing partner list is a precise illustration of what I’ve called Layer 5 dependency: the actual security of your infrastructure is now determined by a private company’s choices about when and how to share vulnerability data, governed by a trust document in San Francisco, funded by capital from Singapore, Abu Dhabi, Doha, and Redmond. Every democratic state that isn’t in that room is a downstream consumer of security decisions they had no role in making.
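To make that structure concrete, here is one way to sketch such a dependency chain in code. This is an illustration under the article’s framing only; the layers, actors, and decisions below are hypothetical placeholders, not the actual model from the research series.

```python
# Hypothetical sketch of a layered dependency chain, following the article's
# framing. Entries are illustrative, not the research series' actual model.
from dataclasses import dataclass

@dataclass
class DependencyLayer:
    level: int
    actor: str
    decision_they_control: str

chain = [
    DependencyLayer(1, "Canadian government agency", "which software runs its infrastructure"),
    DependencyLayer(2, "Software vendor (e.g. the OpenBSD project)", "when a patch ships"),
    DependencyLayer(3, "Glasswing partners", "which systems get scanned and hardened first"),
    DependencyLayer(4, "Anthropic", "what gets disclosed, to whom, and when"),
    DependencyLayer(5, "Investors and the trust document", "what pressures Anthropic can resist"),
]

# Walk the chain: every layer above you makes security decisions on your behalf.
for layer in chain:
    print(f"Layer {layer.level}: {layer.actor} decides {layer.decision_they_control}")
```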

Canada has no seat at this table. Neither does the EU. Neither does any government in the developing world. The defensive benefit flows outward from a private American company at a pace and priority order determined by that company’s partnerships and commercial relationships. If your country isn’t a partner, you learn about the holes in your infrastructure when Anthropic decides you learn, or when an attacker finds them first. In the face of this kind of power, conventional military decisions, like the Carney government’s moves toward NATO investment compliance, look anachronistic, even foolish: moves from a different era.

What Business Leaders Need to Understand

The average executive reading this needs to internalize one thing: the security of your digital infrastructure is now subject to decisions made by AI systems and the private institutions that control them.

This isn’t theoretical. It’s already happening. Mythos is already scanning. Vulnerabilities are already being patched, or not, based on Anthropic’s disclosure timeline, their partner agreements, and their internal triage decisions. Your exposure is a function of your proximity to that decision-making, and unless you’re on the partner list, your proximity is zero.

The relevant questions for any business leader or government executive:

What software are you running that might be affected? If the answer is “any major operating system or browser” (and it is), you’re in scope.

What’s your patch latency? When Anthropic discloses a vulnerability to a software vendor, how long does it take that fix to reach your systems? Days? Weeks? The gap between discovery and your patch is your actual risk window (see the sketch after this list).

Who is making security decisions on your behalf, and what governs them? Not what product you bought. Who is actually deciding when you learn about threats to your infrastructure? A PBC trust document in San Francisco? A vendor’s responsible disclosure policy? A partner agreement you’re not party to?

What is your espionage exposure? How many of the organizations with access to Mythos output intersect with your supply chain, your cloud provider, your security vendor? Each intersection is a potential intelligence pathway.

What happens when the political weather changes? The PBC structure, the LTBT, the collaborative framing of Glasswing – these all assume a stable operating environment. We are watching nuclear nonproliferation collapse in real time under an administration that treats international frameworks as suggestions. What does your risk profile look like when the same political instincts are applied to AI?
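The patch-latency question above is worth making concrete. Here is a minimal sketch of the arithmetic, using hypothetical dates for a single vulnerability; none of these figures are real, and actual exposure would be computed per system from your own asset inventory and disclosure timelines.

```python
# A minimal sketch of the risk-window arithmetic, using hypothetical dates.
from datetime import date

# Hypothetical disclosure-to-deployment timeline for one vulnerability.
disclosed_to_vendor = date(2026, 4, 8)     # researcher -> software vendor
vendor_patch_released = date(2026, 4, 20)  # vendor ships a fix
patch_deployed = date(2026, 5, 15)         # fix reaches your systems

vendor_latency = (vendor_patch_released - disclosed_to_vendor).days
deployment_latency = (patch_deployed - vendor_patch_released).days
risk_window = (patch_deployed - disclosed_to_vendor).days

print(f"Vendor latency:     {vendor_latency} days")
print(f"Deployment latency: {deployment_latency} days")
print(f"Total risk window:  {risk_window} days")
```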

The Existential Question

Project Glasswing is, in the most literal sense, a demonstration that AI has reached the point where it can autonomously map and compromise the digital infrastructure of civilization. Anthropic chose to use that capability for defense. That choice is protected by corporate governance, not democratic accountability. And the capability will spread regardless.

The nuclear parallel is not a metaphor. It is a warning. We built a nonproliferation regime over decades, with treaties, inspections, and international institutions — and it is failing right now, under pressure from the very state that created it. The AI capability governance problem is harder, faster, and starting from zero.

Every previous framework for managing dangerous capability (treaties, review boards, classification systems, voluntary moratoria) has been a managed compromise that depends on institutional trust and erodes over time. We are now being asked to trust that a corporate governance mechanism, in a single private company, in a single country, will hold against the combined pressures of geopolitical competition, intelligence operations, capital dependency, and political volatility.

This is the question that matters now: Who holds the keys to your digital infrastructure, and what mechanism (not who, but what mechanism) ensures they keep using them responsibly?

If the answer is “goodwill and a trust document,” we need better answers. The nuclear nonproliferation regime took decades to build and is failing. We don’t have decades. Anthropic says we have months.


Jen Evans is the founder of B2BNN and Pattern Pulse AI. Her ongoing research series “Whose AI Runs the Government?” examines AI sovereignty dependency chains in Canadian federal and provincial infrastructure. She is the originator of Evans’ Law and the Nudgment framework.


Jennifer Evans, https://www.b2bnn.com
Principal, @patternpulseai. Author, THE CEO GUIDE TO INDUSTRY AI. Former chair @technationCA, founder @b2bnewsnetwork. #basicincome activist. Machine learning since 2009.