Friday, April 17, 2026

Dominance to Drift in Three Months: The Fall of OpenAI

Update #1: From Copilot to Cowork

Today, Microsoft CEO Satya Nadella announced Copilot Cowork. This is Microsoft’s declaration of independence from OpenAI in product form, and an acknowledgement that Copilot was failing at the user level.


The headline feature is the shift from single-turn assistance to multi-step autonomous delegation; you describe an outcome, the agent builds its own plan and executes across apps over hours. That’s genuinely different from the original Copilot, which was essentially GPT in a sidebar.


Rapid product evolution is far from a new strategy for Microsoft. In fact, it has essentially been the company's product-management philosophy since inception: launch a version of something a competitor has proven works, push it across the distribution ecosystem, find the user-level flaws, and iterate, iterate, iterate. But the buried lede is the second dimension: Microsoft is now running Claude under the hood for the hard parts. It moved to Anthropic's models specifically because Claude proved more reliable for the complex instruction-following that agentic workflows demand: navigating file systems and executing multi-step logic without hallucinating steps. That is Microsoft publicly acknowledging, through its engineering choices, that OpenAI's models aren't reliable enough for the product Microsoft needs to ship.


Read that against everything below.

Microsoft’s flagship enterprise AI product, the thing that’s supposed to justify the entire AI investment to shareholders, is now powered by the competitor that refused the Pentagon deal, that’s eating OpenAI’s enterprise market share, and that just hit number one on the App Store. Microsoft owns 27% of OpenAI and is building its marquee product on Anthropic’s technology.


The Work IQ layer is organizational memory that understands relationship graphs, project priority, and individual preferences. That’s signal infrastructure that our (still publicly unpublished) Nudgment framework describes, implemented at the individual productivity level. It’s pattern recognition across previously siloed data.


The strategic implication is stark: Microsoft doesn’t need OpenAI anymore. It has its own frontier models coming, it has Claude for reliability-critical agentic work, and it has the distribution through Office 365 that no AI lab can replicate. OpenAI just lost another door.

Original Post:


Imagine for a moment an alternate universe, one that may exist somewhere, in which Sam Altman stayed out. In November 2023, OpenAI’s board fired Altman. Within days, under intense pressure from Microsoft and a staff revolt, they brought him back. It was framed as a correction: the adults had panicked, the visionary returned, and the AI revolution would continue uninterrupted. The dynamics of this drama, seemingly about governance and development direction, have never been fully explained publicly.

In retrospect, it was a fulcrum moment. It’s arguable that OpenAI has only lost ground steadily since then. Altman could have stayed fired: stayed home, hung out with his kids, something he’s talked about recently almost longingly.

Instead, the company today finds itself in an almost impossible position. It’s lost its enterprise lead to Anthropic. The Microsoft relationship has visibly chilled. It’s fighting a consumer war against a competitor with seemingly unlimited pockets in Google. It’s burning through cash. It’s dealing with a boycott after seemingly selling its soul for an ugly deal with the Trump Pentagon, a deal initially offered to Anthropic. It’s losing key staff. It’s cancelling high-profile projects like Stargate. Its investors are cutting back. To be blunt, it’s running out of strategic options. What does the future hold for Altman and OpenAI? What will it take to turn the negative trajectory around? Twenty-eight months later, it is worth asking the uncomfortable question: what if the board was right?


Not necessarily about the specific reasons, which were muddled and poorly communicated. But about the underlying signal: that OpenAI, under Altman’s leadership, was building narrative faster than it was building substance. That the gap between what was being promised and what was being delivered was widening, not closing. That the velocity of ambition was outpacing the organization’s capacity to execute.
Because in March 2026, that gap has become a chasm. And the signals are no longer weak. They are screaming.


The Compound Fracture

On paper, things don’t look so terrible. Yes, the knives are out and the critics are vocal, but that has been the case for a long time. Sure, competitors are gnawing at its dominance, but wasn’t that inevitable? The company still leads the industry in revenue and subscribers, right? Right. But where and how the attrition is happening signals a problem that is now very difficult to address.

What is happening to OpenAI right now is not a single crisis. It is a compound fracture: multiple breaks occurring simultaneously across every structural dimension of the company, each one amplifying the others. Infrastructure. Capital. Talent. Product. Politics. Partnerships. Any one of these in isolation would be manageable. Together, they describe something closer to organizational coherence collapse.
The infrastructure is fracturing.

The Stargate expansion at Abilene, Texas, the flagship data center campus that was supposed to anchor OpenAI’s $500 billion AI infrastructure vision, exemplifies the issues. It is dead. Oracle and OpenAI cancelled the planned expansion from 1.2 gigawatts to 2 gigawatts after negotiations broke down over financing and, critically, over OpenAI’s inability to forecast its own demand. The partners couldn’t agree because OpenAI kept changing its mind about how much compute it needed. Meta is already circling the excess capacity. Nvidia put down a $150 million deposit to broker the deal (not to save OpenAI’s position, but to ensure its own chips would power whoever moved in next).


The broader Stargate project, announced at the White House alongside President Trump in January 2025 with fanfare suggesting the Manhattan Project had returned, has been plagued by squabbles between stakeholders over site ownership, system control, and who gets to decide what. Liquid cooling infrastructure failed during winter weather, forcing buildings offline for days. The $500 billion headline was narrative. The reality is partner disputes, reliability failures, and a company that cannot commit to its own capacity projections.


The capital partner is pulling back. Nvidia CEO Jensen Huang said publicly at the Morgan Stanley conference last week that the opportunity to invest $100 billion in OpenAI is “probably not in the cards.” The actual investment shrank from the September 2025 headline of $100 billion to $30 billion, and Huang suggested even that may be Nvidia’s last. The circular financing concern that analysts have flagged for over a year — Nvidia invests in OpenAI, OpenAI spends it on Nvidia chips, both book the transaction as growth — is becoming impossible to ignore. When the world’s most valuable semiconductor company starts publicly walking back its commitment to you, that is not a negotiating tactic. That is a signal.


The talent is bleeding out. Vice-president of research Jerry Tworek, who spent seven years at OpenAI and led reasoning research, left in January after his appeals for resources were repeatedly denied. Model policy researcher Andrea Vallone joined Anthropic after being handed what she described as an impossible task, protecting the mental health of users becoming emotionally dependent on ChatGPT. Economist Tom Cunningham departed. Teams behind Sora and DALL-E felt neglected as resources were redirected to ChatGPT under Altman’s December 2025 “code red” (a panicked response to Google’s Gemini 3 outperforming on key benchmarks). OpenAI’s mission alignment team was disbanded entirely.


Then, just this weekend, robotics leader Caitlin Kalinowski resigned over the Pentagon deal, writing: “Surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.”
The pattern is unmistakable. Safety people left. Research people left. Now principles-based people are leaving. What remains is a product organization optimizing a chatbot.


The product advantage is eroding on both flanks. In the enterprise market, Anthropic now leads with 40% market share versus OpenAI’s 27%, according to Menlo Ventures. In the consumer market, Google’s Gemini has surged to 650 million monthly users, up from 450 million in July 2025, backed by a company that never needs to raise another dollar and never needs to monetize Gemini aggressively, because it’s funded by search advertising. There are clear arguments to be made that recent versions have been regressing. GPT-5.2 was underwhelming. The thinking models and fast models that constitute 5.4 are derivative, variations on architectural approaches that every major lab is pursuing. The sound you didn’t hear was the normal reaction to an OpenAI model release: the static of hyperbole from developers on X, proclamations of dominance over leaderboard positions and evals, think pieces on METR implications. None of it happened. The silence around 5.4 was shocking. Open-source models like Qwen are eating the bottom of the market where “good enough” is sufficient and free.


The moat that ChatGPT’s early lead was supposed to create has not materialized. Users are switching. Developers are diversifying. Enterprise customers are hedging.


And the boycott is real. The QuitGPT movement claims over 2.5 million participants, people who have cancelled subscriptions, deleted the app, or shared the boycott. Protesters gathered outside OpenAI’s Mission Bay headquarters with chalk-covered sidewalks and picket signs. The trigger was what happened on February 27 and 28, 2026, and it tells you everything you need to know about where the market’s values actually sit.


The founding partnership is dissolving. Microsoft AI chief Mustafa Suleyman declared a mission of “true AI self-sufficiency” and confirmed that Microsoft will build its own frontier models. Microsoft has added Claude to its Office 365 offerings alongside ChatGPT. OpenAI’s Codex is evolving into a product that competes directly with GitHub Copilot, Microsoft’s crown jewel developer tool. The company that owes its existence to Microsoft’s $13.75 billion is now building tools that compete with Microsoft’s products, while Microsoft is building models that compete with OpenAI’s. The joint statement released February 27 insisting the partnership “remains unchanged” reads like a communiqué issued by allies who are already negotiating the terms of separation. Microsoft CFO Amy Hood’s reported concern, that catering to OpenAI’s increasingly expensive demands could harm Microsoft if the servers built to run AI don’t turn a profit, is not a partnership concern. It is an exit calculation.


The Pentagon Chose Anthropic. OpenAI Was the Fallback.



The Department of Defense did not go to OpenAI first. It went to Anthropic. Anthropic was the Pentagon’s preferred AI partner. And Anthropic said no.
Specifically, Anthropic sought legal guarantees that its technology would not be used for mass surveillance of Americans or for fully autonomous weapons systems. The Trump administration declined to agree to those terms. Anthropic CEO Dario Amodei wrote: “I cannot in good conscience accede to the Pentagon’s request.” The administration responded by declaring Anthropic a “supply chain risk” and ordering all federal agencies to phase out its products.


Within hours, OpenAI accepted the deal. Same Pentagon. Same terms that Anthropic had refused. Sam Altman, who had initially expressed support for Anthropic’s position, posted to X that the technology would not be “intentionally used for domestic surveillance,” a hastily issued clarification that satisfied almost nobody.


And then the market spoke. Claude shot to number one on Apple’s App Store. Anthropic reported record signups. OpenAI got a boycott, a robotics leader resignation, and chalk on its sidewalks.
The company that positions itself as the most important AI company in the world got the government contract not because it was the first choice, but because the first choice had principles it wasn’t willing to compromise. The market rewarded the company that said no and punished the company that said yes. That is not a PR problem. That is a signal about where value actually resides in this market — and OpenAI either can’t read it or won’t.



The Narrative Machine


How did the company that launched the generative AI revolution arrive here? The answer is not complicated. It is a failure mode that every enterprise strategist should recognize: narrative overcommitment.
OpenAI has been telling a story since the moment Altman returned from his brief exile: that it was the most important company in the world, building the most consequential technology in history, at a scale that required unprecedented capital, unprecedented partnerships, and unprecedented trust. The story was compelling. It attracted $13.75 billion from Microsoft, billions more from other investors, a $500 billion infrastructure announcement at the White House, and a valuation that reached $730 billion before a single quarter of profitability.


But the story became the strategy. Each new partnership was evaluated against the narrative rather than against independent market analysis. Each new product launch was framed as a chapter in the story rather than as an offering that needed to justify itself on its own terms. The narrative said “we need more compute than has ever been assembled,” so Stargate was announced. The narrative said “we are the AI company for the United States government,” so the Pentagon deal was accepted without the guardrails that a more discerning organization would have demanded. The narrative said “we are the indispensable platform,” so enterprise customers were pursued even as the product team was being gutted to optimize a consumer chatbot.


The internal “code red” that Altman issued in December 2025 (redirecting all resources toward ChatGPT after Google’s Gemini 3 outperformed on key benchmarks) is the clearest evidence that the narrative is being managed rather than the business. A company with genuine strategic depth does not panic-redirect its entire organization in response to a competitor’s product launch. A company running on narrative does, because the narrative cannot accommodate the possibility that the competitor might be ahead.


An Organization That Moved So Fast on Narrative That It Ran Out of Strategic Space


Here is the question that every investor, every partner, and every enterprise customer should be asking: what is OpenAI’s durable competitive advantage?


It is not model superiority. GPT-5 did not create separation; it fell behind as Claude ascended. Multiple labs are producing comparable performance. The architecture is not proprietary; transformers are an open research artifact.


It is not data. OpenAI is being sued by the New York Times and others over its training data. Its data advantage is a legal liability, not a moat.


It is not infrastructure. The Stargate vision is fragmenting. Microsoft is building independence. Oracle negotiations collapsed. The company cannot forecast its own compute needs.


It is not trust or safety. The Pentagon deal alienated the consumer base. The senior departures alienated the research community. The political donations alienated progressive users. The for-profit conversion alienated the original mission constituency. The company is being sued over clear user harm. The team that formed Anthropic left over concerns about these very principles.


It is not profitability. OpenAI does not expect to be profitable until 2029. It is burning through cash at a rate that requires continuous new investment — $115 billion in projected spend on the road to profitability, $80 billion more than originally projected.


What OpenAI still has is scale. 900 million weekly active users. 50 million paid subscribers. 2.5 billion queries per day. Those numbers are real. But scale without margin, without trust, without a defensible product moat, and without organizational stability is a fixed cost base that must be fed regardless of whether the revenue model works. It is a vulnerability masquerading as a strength.


The Pickle


OpenAI’s strategic options have not just narrowed. They have narrowed in a way where every available move closes another door.
The API infrastructure play (become the model provider that everyone builds on) runs directly into Anthropic, which is winning enterprise customers on trust and reliability, and into open-source models like Qwen, which are eating the bottom of the market where “good enough” is free. OpenAI would be fighting for the middle of a market being compressed from both ends.


The consumer super-app play (make ChatGPT the everything platform, which is what the code red was about) runs into Google. Google owns Android, Chrome, and Search. That’s distribution OpenAI cannot buy at any price. And Google doesn’t need to monetize Gemini aggressively. OpenAI has to make ChatGPT profitable. Google just has to make Gemini good enough to keep users in the Google ecosystem. That is an asymmetric fight OpenAI cannot win.


The government and defense play (which they just lunged at) is costing them the consumer base in real time and alienating the talent pipeline. You cannot be the Pentagon’s AI provider and the progressive creative class’s favorite tool simultaneously. They chose, and the market is responding.


The IPO play: get to public markets before the compound fracture becomes visible in the financials, let retail investors absorb the risk. Huang hinted that this is coming. It may be the only move left. But post-IPO scrutiny is brutal, and every quarter of losses against a $730 billion valuation generates exactly the kind of signal-reading that the market does well when analyst reports replace press releases.


The Microsoft acquisition play: let the founding partner absorb the company. Microsoft already owns 27%. But full absorption triggers antitrust scrutiny that neither company wants, and Microsoft is actively building independence because its own CFO is worried that OpenAI’s demands could harm Microsoft’s margins.


Or they could do what they are currently doing: continue to run on narrative, continue to announce partnerships that don’t fully materialize, continue to redirect resources based on competitive panic, continue to lose the people who built the technology in the first place, and hope that scale alone is enough to survive until profitability arrives in 2029.


This is the classic arrogance of assuming that because you are so far ahead of everyone else, your position will last forever. And then: Anthropic eating your enterprise lunch. Google eating your consumer lunch. Open source eating your developer lunch. The military not even wanting you, wanting the company you helped create by pushing out your original safety-focused co-founders.


From Twenty-Five Years to Three Months


IBM took twenty-five years to traverse from strategic dominance to strategic drift. Five business units, same governance, radically different trajectories, and the whole thing played out across a quarter century.
OpenAI is doing it in real time. The compression is the point. In a probabilistic, high-velocity environment where competitive disruption can come from anywhere, from an open-source model in China, from a principled refusal in San Francisco, from a solo developer in Vienna, the time between reading a signal and losing the window to act on it has collapsed from years to months.


You can go from dominating the world to being an also-ran in three months.


That is not hyperbole. It is a description of what is happening right now to a $730 billion company that had every resource, every partnership, every talent pool, and every first-mover advantage imaginable, and that moved so fast on narrative that it ran out of strategic space.


The Board Was Reading Signals


The board that fired Sam Altman in November 2023 could not articulate what they saw. They stumbled through the communication, failed to build a coalition, and lost the political battle within days. But the signal they were responding to, that the gap between narrative and reality was becoming dangerous, has only widened since.


Every dimension of the current crisis traces back to narrative outpacing execution. The $500 billion Stargate announcement that couldn’t survive contact with financing reality. The “code red” that gutted research to optimize a chatbot. The Pentagon deal that was accepted without guardrails because the narrative demanded a government relationship. The political donations that alienated users because the narrative required proximity to power. The Microsoft partnership that is dissolving because both partners are building toward independence while publicly insisting nothing has changed.


Maybe the board was right. Maybe what OpenAI needed in November 2023 was not a visionary who could tell the most compelling story about artificial intelligence. Maybe what it needed was a leader with the discernment to know when the story and the reality had diverged, and the judgment to close the gap before it became structural.


That kind of leadership cannot be installed through a boardroom coup or a staff revolt. It can only be cultivated through sustained, consequential attention to the signals that matter. It is the organizational capacity to recognize, prioritize, and act on what the data is telling you rather than what the narrative needs you to believe.


OpenAI had all the signals. It read none of them. And now the compound fracture is setting.


Jennifer Evans
https://www.b2bnn.com
principal, @patternpulseai. author, THE CEO GUIDE TO INDUSTRY AI. former chair @technationCA, founder @b2bnewsnetwork #basicincome activist. Machine learning since 2009.