Friday, April 17, 2026

THE INVERTED AI BUBBLE

Last updated on March 27th, 2026 at 07:31 am

Why AI’s Real Crisis Is Underpricing, Not Overvaluation

UPDATE 3/27: A brilliant management decision by Apple, reading the token market signals.


Apple’s revised approach to AI assistants reinforces a core dynamic behind the inverted AI bubble: the decoupling of perceived capability from actual economic cost. Apple has abandoned both its Gemini integration and its OpenAI partnership, and rather than relying, like everyone else, on continuous large-model inference, it is pushing execution down to the device layer through structured intents, local models, and tightly controlled system integrations.

The result is an experience that appears fully featured to the user while generating little to no marginal token cost for the provider. This stands in direct contrast to the prevailing model across much of the industry, where each interaction incurs compute overhead that scales with usage. In effect, Apple is demonstrating that many high-frequency assistant tasks do not require frontier reasoning at all, only reliable orchestration. The model market has become so dense so quickly that the orchestration layer is now a product in and of itself.

At the same time, this architecture exposes the limits of what current AI can sustainably deliver. Anthropic’s approach, by contrast, is full management, including remote execution. Apple’s system is not a fully agentic, cross-application intelligence layer; it is a bounded orchestration framework that resolves intent and selectively hands off to external models, including integrations with multiple providers when needed.

This is the kind of constraint-driven design that the inverted bubble necessitates and predicts: capability is preserved by narrowing scope, not expanding it. As enterprises evaluate where AI creates durable value, this shift matters. Systems that minimize token dependency, reduce exposure to long-chain reasoning failures, and rely on deterministic execution paths will prove more economically viable than those optimized for maximum generative breadth. Apple’s design does not eliminate the need for large models, but it reframes them as escalation layers rather than default infrastructure, a subtle but significant inversion of how the market has been positioning AI to date. And it saves the company from having to engage in the token war.

ORIGINAL POST

It’s not a bubble. Every bubble in economic history has shared the same basic pathology: speculative demand drives prices above intrinsic value until the fiction collapses. Tulips, dot-com stocks, subprime housing: the pattern is always the same. People pay too much for something that isn’t worth what they think it is.

What is happening in artificial intelligence right now is the opposite. If you remember Jeremy Irons’ famous soliloquy from the movie Margin Call, about the music stopping, we are not watching the music stop.

We’re watching it speed up so fast that it’s hard to discern the notes.

The demand is real. Enterprises are supply-constrained and backlogged. Cloud providers report that AI capacity sells out as fast as it comes online. The technology is integrated into search, logistics, advertising, retail, customer service, software development, and an expanding list of critical business functions. This is operational dependency. And it is destroying the companies that provide it.

The five largest hyperscalers – Amazon, Alphabet/Google, Microsoft, Meta, and Oracle – are projected to spend nearly $700 billion on AI infrastructure in 2026 alone, a 60% increase over 2025’s already historic levels. For the first time in history, these companies hold more debt than cash. Their free cash flow is collapsing. Amazon is projected to go negative by as much as $28 billion. Alphabet’s free cash flow is expected to plummet 90%, from $73.3 billion to $8.2 billion. These were the greatest cash-generating machines ever built. Now they are borrowing to keep the lights on in data centers that are already insufficient.

This is not a bubble. It is an inverted bubble – something the global economy has never experienced at this scale. And if you’re waiting for a traditional correction, you are looking in the wrong direction.

The Numbers That Should Concern Everyone

The scale of what is happening is genuinely unprecedented. There is no historical parallel for an industry simultaneously experiencing explosive real demand, collapsing unit economics, and accelerating capital expenditure funded by debt.

According to Bank of America, the hyperscalers would need to spend 94% of their operating cash flow to fund their AI buildouts in 2025 and 2026, up from 76% in 2024. UBS puts the number even higher, approaching 100% of operating cash flows, against a 10-year average of 40%. These are companies whose entire investment thesis was built on the idea that they generated more cash than they could spend.

The Free Cash Flow Collapse

| Company | 2025 FCF | 2026 FCF (Projected) | Change |
|---|---|---|---|
| Amazon | $7.7 billion | Negative $17B–$28B | Negative swing of $25B–$36B |
| Alphabet | $73.3 billion | $8.2 billion | Down ~90% |
| Meta | Declining | Near zero; negative by 2027–28 | Accelerating decline |
| Microsoft | Under pressure | Significant decline expected | Capex up >60% |
| Oracle | Already negative | Negative through 2029 | Deepest structural deficit |

Amazon’s situation is particularly striking. The company announced plans to spend $200 billion on AI infrastructure in 2026, exceeding analyst expectations by more than $50 billion. Morgan Stanley projects the company will burn through $17 billion in negative free cash flow this year; Bank of America sees a deficit hitting $28 billion. Amazon’s trailing twelve-month free cash flow had already collapsed 71% in 2025, to $11.2 billion from $38.2 billion the prior year. In a quiet SEC filing, Amazon signaled it may tap equity and debt markets to fuel the buildout. Its stock fell 12% in February 2026, the worst month since December 2022.

The Debt Explosion

The Big Five raised $121 billion in bonds in 2025 alone, up from an average of $28 billion per year between 2020 and 2024. In 2026, that figure is projected to reach $159–$175 billion. Morgan Stanley and JP Morgan project the technology sector may need to issue as much as $1.5 trillion in new debt over the next few years to finance AI and data center infrastructure.

Morgan Stanley’s analysis is specific: of $2.9 trillion in total global data center spending projected through 2028, only $1.4 trillion can be covered by corporate self-funding. The remaining $1.5 trillion represents a financing gap that credit markets must fill, a sum larger than the entire high-yield bond market and leveraged loan market combined.

That gap will be filled by a combination of $800 billion in private credit, $200 billion in corporate debt, $150 billion in asset-backed and commercial mortgage-backed securities, and $350 billion across private equity, venture capital, and bank lending. The collateral underlying all of this debt is AI infrastructure whose useful life is measured in years, not decades.
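The arithmetic of the financing gap can be sanity-checked in a few lines. This is a minimal sketch using only the round projections quoted above; all figures are approximate analyst estimates, not hard data:

```python
# Morgan Stanley's projected data center financing picture, in billions of USD.
# Figures are the approximate projections quoted in the text.
total_spend = 2900   # global data center spending through 2028
self_funded = 1400   # coverable by corporate self-funding
gap = total_spend - self_funded

# The channels expected to fill the gap
sources = {
    "private credit": 800,
    "corporate debt": 200,
    "ABS / CMBS": 150,
    "PE, VC, bank lending": 350,
}

assert gap == 1500
assert sum(sources.values()) == gap  # the four channels account for the full gap

print(f"Financing gap: ${gap}B across {len(sources)} channels")
```

The point of the check is simply that the $1.5 trillion gap and its component channels are internally consistent: the externally financed share is more than half of all projected spending.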

As one Mirabaud Asset Management portfolio manager put it, this movement toward the bond market is fundamentally altering the relationship between hyperscalers and investors. These companies broke what investors describe as the “unwritten agreement” that kept speculative AI investments disengaged from debt markets. Capital intensity now reaches 45–57% of revenue, ratios that resemble industrial utilities, not technology companies.

Meanwhile, investors are hedging by trading more credit default swaps, aka insurance policies against bond defaults, on individual tech companies. Alphabet issued a century bond maturing in 2126. It was the first by a tech company since Motorola in 1997. Whether Alphabet will exist to repay it is an open question.

Why Prices Aren’t Going Up

This is the central paradox of the inverted bubble. In any normal market, when costs explode and demand surges, prices rise. That is how economics works. In AI, prices are falling, or at best, holding flat, while the cost of delivering the product spirals beyond the capacity of the wealthiest companies in human history to absorb it.

There are at least five interlocking reasons why costs are not being passed through.

1. The Competitive Death Spiral

Nobody wants to be the first to raise prices because the market reads it as weakness. The pricing war among AI providers has intensified throughout 2025 and into 2026. OpenAI has repeatedly reduced costs across model generations. Google bundles free features and token allowances. xAI prices aggressively to undercut everyone. Anthropic offers tiered pricing that compresses margins at every level. Each generation of models gets cheaper per token even as the infrastructure to run them gets more expensive.

This is a classic race-to-the-bottom dynamic, but with a critical difference: the floor keeps dropping. When Meta open-sources Llama for free, it sets a price anchor of zero that every commercial provider must contend with. Meta can afford this because its revenue model is advertising, not API fees. But the existence of a free frontier model means every commercial model is competing against a free alternative, which limits pricing across the entire industry.

2. The Enterprise Lock-In Play

Cloud providers are treating AI as a loss leader. AWS, Azure, and Google Cloud Platform are absorbing AI inference costs because AI is the customer acquisition engine for their broader cloud ecosystems. The AI product itself is a promotional rate designed to lock enterprises into multi-year cloud commitments. The price the market sees is a fiction, a subsidized introductory offer on a product that does not have sustainable unit economics at current rates.

The Axios comparison to the “millennial lifestyle subsidy” era is apt. Venture capital once underwrote cheap Uber rides and DoorDash deliveries. Eventually, all of those companies had to charge enough to cover costs and make a profit. The AI version of this dynamic is orders of magnitude larger and involves the core infrastructure of global enterprise computing.

3. Self-Consumption as Demand

There is a critical distinction between two kinds of circularity in the AI economy, and the market is conflating them.


The first is financial circularity: Nvidia invests in OpenAI, OpenAI buys Nvidia chips, and the money goes in a circle. Paul Kedrosky and Michael Burry have documented this. It’s important, but it’s a financing problem.


The second is something different and, I would argue, more dangerous: operational circularity. The hyperscalers are not just building AI infrastructure for customers. They are the customers. They are eating too much of their own dog food. Google consumes enormous quantities of its own AI inference to run search ranking, ad targeting, YouTube recommendations, Gmail filtering, Maps routing, and Android features. Amazon runs AI across logistics, warehouse robotics, product recommendations, Alexa, and fraud detection. Microsoft embeds it across Office, GitHub Copilot, Bing, and Teams. Meta uses it for content ranking, ad optimization, content moderation, and Reels.


This internal consumption is real. It burns real compute on real GPUs drawing real electricity. And it shows up in the capex numbers as “AI infrastructure investment,” which gets cited by analysts and executives as evidence of surging AI demand. When Satya Nadella says “we have more demand than supply,” he is telling the truth. But a significant portion of that demand is Microsoft consuming its own product to maintain revenue streams (Office subscriptions, Azure contracts, advertising) that already existed before AI.


This is not a financing trick. It is a demand signal that is being fundamentally misread. The market sees hyperscaler AI consumption and interprets it as market growth. But when the provider is also the largest customer, what you are measuring is not new economic value. You are measuring defensive spending, the cost of staying competitive in adjacent businesses. The incremental return on that AI investment is marginal. The cost is enormous. And it inflates the demand signal that justifies further infrastructure investment, further borrowing, and further below-cost pricing to attract the external customers who are supposed to eventually make the economics work.


Strip out the self-consumption, and the real external demand picture – the actual paying customers generating actual new revenue – looks very different from the $700 billion headline.

4. The Accounting Subsidy

There is a hidden subsidy embedded in the financial statements of every major AI provider, and it is keeping prices artificially low.

The chips at the heart of AI infrastructure have a functional lifespan of one to three years due to rapid technological obsolescence and physical wear. But companies depreciate them over five to six years, spreading the cost across a longer period than the economic reality warrants. A Princeton CITP analysis documented how this accounting mismatch allows companies to subsidize application-layer pricing, expand capacity faster than true economics justify, and report profitability metrics that attract capital on more favorable terms than their operational reality warrants.

Consider the scale: Microsoft’s roughly $80 billion annual AI infrastructure spend, if half goes to computing hardware with a true three-year lifespan, creates actual replacement costs of approximately $13 billion per year. By depreciating over six years, reported annual depreciation is only $6.5 billion, an apparent $6.5 billion annual cushion that enables Microsoft to subsidize OpenAI’s infrastructure costs during the critical years when customer relationships are being formed.

Michael Burry (of Big Short fame) has warned that hyperscalers have systematically extended the useful years rating for their servers, allowing them to frontload expenditure and report higher profits even if revenue doesn’t materialize. Nvidia CEO Jensen Huang was characteristically blunt about the reality of hardware obsolescence: when discussing the Blackwell chip, he quipped that once it shipped in volume, “you couldn’t give Hoppers away.”

And here is the number that crystallizes the problem: the five hyperscalers plan to add approximately $2 trillion of AI-related assets to their balance sheets by 2030. Given that AI assets typically depreciate at around 20% per year, this implies an annual depreciation expense of $400 billion, more than their combined profits in 2025.
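The depreciation arithmetic above reduces to a few lines. This is an illustrative sketch under the text's stated assumptions (half of a roughly $80 billion annual spend going to hardware, a three-year true life versus a six-year book life, and $2 trillion of industry assets depreciating at 20% per year); the article rounds the per-company figures to roughly $13 billion and $6.5 billion:

```python
# Depreciation-cushion arithmetic, straight-line depreciation assumed.
# All figures in billions of USD; illustrative round numbers from the text.

hardware_capex = 80 / 2      # assume half of ~$80B/yr capex buys hardware
true_life_years = 3          # realistic economic life of AI accelerators
book_life_years = 6          # life assumed on the books

true_annual_cost = hardware_capex / true_life_years   # ~13.3 (article: ~$13B)
reported_depr = hardware_capex / book_life_years      # ~6.7  (article: ~$6.5B)
cushion = true_annual_cost - reported_depr            # understated cost per year

# Industry-wide version: ~$2T of AI assets depreciating at ~20%/yr
industry_assets = 2000
industry_annual_depr = industry_assets * 0.20         # 400

print(f"Per-company cushion: ~${cushion:.1f}B/yr")
print(f"Industry depreciation at 20%/yr: ${industry_annual_depr:.0f}B/yr")
```

Halving the book life roughly doubles reported depreciation, which is exactly why the text argues an industry-wide move to honest three-year schedules would compress margins across the board.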

What makes this an industry-wide problem rather than a company-specific risk is that every hyperscaler is running some version of the same play, and none of them can stop. Amazon shortened its AI infrastructure depreciation to 5 years and took a $700 million hit to operating income, effectively an admission that the 6-year number was wrong. But Meta moved in the opposite direction, extending to 5.5 years across three successive adjustments, adding billions to reported profits while its free cash flow collapsed from $54 billion to $20 billion. Google and Oracle book 6 years. Microsoft hedges with a 2-to-6-year range in its SEC filings, an extraordinary spread that amounts to an admission that they genuinely do not know how long their own infrastructure will be worth anything. CoreWeave books 6 years on hardware that Nvidia replaces annually. Even the more conservative neoclouds, Lambda Labs at 5 years, Nebius at 4, are booking longer than the realistic economic life of the equipment.


This is not individual risk management. It is an industry-wide fiction maintained by mutual agreement not to tell the truth about how fast the hardware dies. If the entire industry moved to realistic 3-year depreciation tomorrow, reported annual depreciation would roughly double. Margins would compress across the board. Earnings estimates would be slashed. Stock prices would correct. And the cushion that enables below-cost AI pricing would vanish overnight, forcing the repricing event described in Scenario 1.

The reason nobody moves is that it would be unilateral disarmament. The first company to adopt honest depreciation looks worse on paper than every competitor still booking fantasy schedules. So the fiction holds, until an external force breaks it. An SEC inquiry into depreciation practices. An auditor revolt. A high-profile default that exposes the gap between book value and economic reality. Or enough institutional investors demanding answers that the cost of maintaining the fiction exceeds the cost of telling the truth.


This is the load-bearing wall of the inverted bubble. If the depreciation fiction breaks, everything else breaks with it: the pricing, the debt assumptions, the earnings, and the competitive equilibrium that prevents correction.

5. The Architecture Ceiling

Every model company is searching for efficiency gains because the compute ceiling is real. Models are being compressed, distilled, and templated to squeeze maximum output from constrained infrastructure. The explosion of “small” and “nano” model tiers (GPT-5.4 Nano at $0.20 per million input tokens, Gemini 2.5 Flash-Lite at $0.10) is not just a product strategy. It is a survival mechanism. Companies cannot keep scaling infrastructure fast enough to meet demand at current quality levels, so they are creating lower-resource alternatives to manage the load.

But even these efficiency gains cannot keep pace with the physics of the problem. Nvidia’s GB200 NVL72 systems require approximately 120–132 kilowatts per rack, vastly outstripping the traditional 6–10 kilowatt standard. These racks weigh 3,000 pounds and exceed the structural ratings of most raised data center floors. Power transformer lead times have stretched to 128 weeks. The International Energy Agency projects that global data center electricity consumption will double to 945 terawatt-hours by 2030. Microsoft’s CEO has admitted that GPUs sit idle in inventory because the company lacks the electricity to install them.

The infrastructure ceiling is only half the constraint. The other half is inside the models themselves.


There is a telling silence in the industry right now. Twelve months ago, even three months ago, every model release was accompanied by benchmark charts, arena leaderboard rankings, massive token limit increases, and breathless announcements about new capability thresholds. That arms race appears to be over. The recent release cycle has not been about who can reason better or score higher on MMLU. It has been about who can run cheaper. GPT-5.4 Nano. Gemini 2.5 Flash-Lite. Claude Haiku 4.5. The naming conventions tell the story: the frontier has stalled, and the competition has shifted from capability to cost. Nobody is talking about context window breakthroughs or reasoning benchmarks anymore because the constraint is no longer what the models can do, but what the infrastructure can afford to let them do.


Every efficiency gain available at the model level has been exploited. Distillation, quantization, speculative decoding, prompt caching, mixture-of-experts routing: the engineering teams at every major lab have been working this problem relentlessly for two years. The result is visible in the product tiers: the explosion of “nano,” “mini,” “flash,” and “lite” models is not innovation. It is triage. These are degraded versions of existing capabilities being offered to the same markets at prices the infrastructure can survive.


The harder truth is that efficiency gains at the frontier have begun to trade directly against output quality. We documented this in client advisories in December, February, and March. Models across every major provider began exhibiting increasing destabilization, even regression: templating, heuristic shortcutting, and reduced depth of source engagement. These are not model failures. They are cost management strategies implemented at the inference layer. When a model generates a templated response instead of reasoning through a novel problem, it uses fewer tokens, less compute, and less time on the GPU. The user gets a worse answer. The provider saves money.


This is the mechanism that connects the architecture ceiling to the quality collapse. The labs have run out of room to cut costs without cutting quality. Every remaining efficiency gain is a quality tradeoff. And none of it is being disclosed to the enterprises building workflows on top of these models.

The efficiency race is a sign of desperation. And astoundingly, none of these efficiency gains are resulting in price increases. They are being passed through as further price reductions, feeding the competitive death spiral described above.

What Makes This Unprecedented

Traditional bubbles have a clear structure. Speculative demand inflates prices beyond intrinsic value. Eventually, the market recognizes the gap. Prices correct. Losses are absorbed. The cycle resets.

The inverted bubble has a different structure entirely:

| | Traditional Bubble | Inverted Bubble (AI 2024–2026) |
|---|---|---|
| Demand type | Speculative / fictitious | Real / operational |
| Price direction | Inflating above value | Compressed below cost |
| Correction mechanism | Price crash | Price explosion or supply crisis |
| Who gets hurt first | Speculators | Providers |
| Financing model | Leverage on inflated assets | Leverage on deflated pricing |
| Historical precedent | Tulips, dot-com, housing | None at this scale |

The housing crisis analogy works, but inverted. In 2007, banks were lending against inflated asset values. In 2026, hyperscalers are borrowing against deflated product pricing: the collateral is the assumption that they will eventually figure out how to charge what it costs. If they cannot, the debt does not get serviced. The bonds flooding credit markets are backed by infrastructure whose useful life may be shorter than the debt maturity, built to deliver a product whose price does not cover its cost.

Goldman Sachs notes that consensus capex estimates have proven too low for two years running. At the start of both 2024 and 2025, consensus implied 20% capex growth; actual spending exceeded 50% in both years. The spending is accelerating faster than anyone predicted, and the revenue to justify it remains a matter of faith.

A Tomasz Tunguz analysis puts the bet in stark terms: at 60% gross margins and 5% borrowing costs, a five-year payback on $431 billion in AI capex requires $180 billion in annual AI revenue. Current AI revenue across all providers is approximately $35 billion. The industry is underwriting 5x growth in five years against hardware that may need replacing in three.
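The Tunguz payback requirement reduces to simple algebra. A sketch under the stated assumptions (60% gross margin, 5% borrowing cost, five-year payback on $431 billion), with interest modeled on the full principal each year as a simplification:

```python
# Back-of-envelope reconstruction of the ~$180B annual revenue requirement.
# Assumptions as stated in the text; interest is charged on the full
# principal each year for simplicity (a deliberate approximation).

capex = 431.0          # AI capex to recover, $B
years = 5
gross_margin = 0.60
rate = 0.05            # cost of debt

annual_principal = capex / years         # ~86.2
annual_interest = capex * rate           # ~21.6 (simplified)
gross_profit_needed = annual_principal + annual_interest
revenue_needed = gross_profit_needed / gross_margin   # ~180

current_revenue = 35.0  # approximate current industry AI revenue, $B
multiple = revenue_needed / current_revenue           # ~5.1x

print(f"Required annual AI revenue: ~${revenue_needed:.0f}B "
      f"({multiple:.1f}x current)")
```

The required multiple on current revenue lands just above 5x, matching the "5x growth in five years" framing, against hardware that may not survive the payback window.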


Why the Jevons Paradox Does Not Apply

The Jevons Paradox is the observation, first documented by the English economist William Stanley Jevons in 1865, that when a resource becomes more efficient to use, total consumption increases rather than decreases. Jevons was studying coal: James Watt’s improvements to the steam engine made coal-powered machinery far more efficient per unit of work, but instead of Britain using less coal, consumption exploded, because efficiency made steam power economical for applications that hadn’t been viable before. Bloomberg Opinion columnist Gautam Mukunda recently applied this framework to AI, arguing that the technology has crossed the “hidden threshold” on its S-curve where demand goes vertical, citing Epoch AI data showing the cost of running a model at a given performance level dropping at a median rate of 50x per year.

But that statistic obscures a more uncomfortable reality. The cost of useful AI inference is not falling. It is rising. Reasoning models, the architecture every major lab is now pursuing to improve performance, work by generating enormous chains of internal computation before producing an answer. They consume orders of magnitude more tokens per task than their predecessors. The lightweight models whose falling token prices populate the efficiency charts are not the models enterprises depend on for complex work. The headline price reductions come from distilled, quantized, and compressed tiers that trade capability for cost: a thinner, weaker product under the same brand. Meanwhile, the frontier models that enterprises actually need are getting hungrier, not leaner, and the infrastructure required to run them is getting more expensive, not less.


Jevons also had the luxury of studying a resource whose supply chain was domestic. Britain sat on coal. The AI supply chain enjoys no such advantage. Advanced semiconductor fabrication is concentrated almost entirely in Taiwan, where TSMC produces the overwhelming majority of the world’s leading-edge chips. The rare earth minerals essential to chip manufacturing flow predominantly through China. Export controls, tariff escalation, and the ongoing militarization of the Taiwan Strait mean that the physical inputs to AI infrastructure are subject to disruption risks that have no parallel in Jevons’ coal economy. A single geopolitical escalation, whether a blockade, a sanctions expansion, or a retaliatory export ban, could constrain the supply of GPUs for years, not months. Power transformer lead times are already at 128 weeks. The assumption embedded in every AI investment thesis is that supply will scale to meet demand. Geopolitics says otherwise.


And the vulnerability no longer stops at the supply chain. As we documented in earlier analysis, data centers themselves have become military targets. On March 1, Iranian drones struck three AWS facilities in the UAE and Bahrain, the first confirmed military attack on a hyperscale cloud provider. The IRGC explicitly claimed the strikes were legitimate because the facilities hosted AI systems used by the US military. An IRGC-affiliated outlet subsequently published a target list of 29 tech facilities across four countries, naming AWS, Microsoft, Google, Oracle, Nvidia, IBM, and Palantir. The era of the data center as neutral civilian infrastructure is over. The $1.5 trillion in debt being issued to finance AI infrastructure is collateralized against physical assets that are now, for the first time, sitting in the targeting packages of state militaries. Every valuation model, every bond covenant, every depreciation schedule assumes these facilities will operate uninterrupted for the duration of their useful life. That assumption died on March 1.

Jevons assumed that efficiency gains were real, that the resource maintained its character as consumption scaled, and that the mines would keep producing.

Jevons was also not studying an industry seeking nuclear power while producing steam power. The push for AGI (artificial general intelligence) and superintelligence also changes the economics.

In AI, the demand explosion is real, but the efficiency that supposedly enables it is partly a statistical artifact, a function of measuring cost at a fixed performance level while the performance level the market actually requires keeps climbing. The supply chain runs through the most contested geopolitical corridors on earth. And the infrastructure itself has been reclassified, by an adversary in an active war, from commercial real estate to military objective. Victorian coal miners profited from rising volume because the coal still burned the same, the mines were under their feet, and nobody was bombing them. The AI hyperscalers enjoy none of these advantages. The Jevons Paradox describes what is happening to demand. The inverted bubble describes what is happening to the companies meeting it.


There are three historical parallels that rhyme with what’s happening, but none of them fully match — and the differences are where this thesis gets its originality.

The American Railroads (1870s–1890s): The Closest Match


This is the strongest parallel and it’s striking how closely it maps. Rate wars among railroads threatened to demoralize the financial structures of rail and water carriers, driven by overbuilding, unregulated competition, and the peculiar ability of railroads in impaired financial condition to cause and exaggerate the effects of rate-cutting contests. Real demand for freight transport was enormous: the entire agricultural and industrial economy depended on it. But by the end of the century, average bond yields had sunk to 3.3% and dividends to 3.5%, a virtually 50% drop, and only 30–40% of railroad stock paid any dividends at all.


The competitive dynamics were almost identical to what we’re seeing: the Grand Trunk, weak and perpetually teetering on the edge of bankruptcy, claimed that prices should only be high enough to cover operating costs, ignoring dividends and interest, which is essentially Meta’s strategy of giving Llama away for free to anchor prices at zero. Three railroads built parallel tracks two miles apart along the same 200-mile route in Kansas, which Charles Francis Adams called “the maddest specimen of railroad construction” he’d ever heard. And then his own railroad built tracks into the same territory. Sound familiar?


The resolution was exactly what Scenario 4 predicts: competition among railroads led some into bankruptcy, sunk others heavily in debt, and ignited bitter rate wars, ultimately requiring J.P. Morgan to step in and reorganize entire rail systems as the price for extending credit. Consolidation through exhaustion, followed by eventual pricing power for survivors.


But here’s the critical difference: railroad infrastructure didn’t depreciate. Steel rails laid in 1880 were still functional in 1920. The tracks were the tracks. AI infrastructure depreciates in three years and the debt matures in five to ten. The railroads had a stranded capital problem; AI has a stranded capital problem plus a technological obsolescence cycle that the railroads never faced.

The Telecom Fiber Bust (1996–2002): The Cautionary Cousin


The telecom parallel is the one everybody reaches for, and the numbers are eerily similar. In the five years after the Telecommunications Act of 1996, telecommunications companies invested more than $500 billion, mostly financed with debt, into laying fiber optic cable. Telecom stocks lost $2 trillion in market value. Twenty-three telecom companies went bankrupt. Altogether, the industry owed a trillion dollars, “much of which will never be repaid.”
By the mid-2000s, around 85% of the fiber optic cable laid in the late ’90s remained unused. Bandwidth prices plummeted roughly 90%. And the accounting fraud was not an Enron or WorldCom problem; it was industry-wide. Almost every major telecom company had to restate its 1999 and 2000 numbers.


Again, though, there is a critical difference that makes the inverted bubble thesis distinct: the telecom bubble was a traditional bubble. The growth in capacity vastly outstripped the growth in demand. They built fiber nobody needed. In AI, the hyperscalers report they are supply-constrained. Microsoft has GPUs sitting in inventory because it lacks the electricity to power them. The demand is real. The problem isn’t that they built too much; it’s that what they’re building costs more than they can charge for it.


The telecom aftermath is instructive: the long-term winners were not the builders of railroads or fiber, but instead their customers, the early adopters of these technologies who avoided the risk of large speculative capital outlays while still benefiting from gains provided by the new technology. Netflix rode telecom’s subsidized infrastructure to a larger valuation than any individual telecom or cable company. The AI equivalent would be the application-layer companies that build on subsidized inference.

Post-Deregulation Airlines (1978–2001): The Chronic Price War


This one is less cited but might be the most structurally relevant. After the 1978 Airline Deregulation Act, eight major carriers and more than 100 smaller airlines went bankrupt or were liquidated. The mechanism was exactly what we’re describing: real demand (passenger volumes tripled), but many airlines overexpanded, faced overcapacity, and therefore had to sell their product at low prices, suffering declining profits as a result.


Pan Am, TWA, Eastern, Braniff: fierce competition drove fares down, and passengers flocked to airports in record numbers. The product was genuinely underpriced relative to what it cost to deliver. The correction took decades and looked exactly like the consolidation scenario: eventually four airlines controlled 80% of the market and could finally set sustainable prices.


So what’s genuinely new?


Three things make the AI inverted bubble unprecedented:


The depreciation mismatch. Railroads had durable infrastructure. Telecom fiber still sits in the ground twenty-five years later, now carrying the modern internet. Airline fleets last decades. AI hardware obsolesces in one to three years, but is financed with five-to-ten-year debt and depreciated over six years on the books. There is no historical example of an infrastructure boom where the underlying assets became worthless faster than the financing instruments matured.
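The arithmetic behind that mismatch can be sketched with straight-line depreciation. All figures below are hypothetical and illustrative, not any company's actual accounting:

```python
def annual_depreciation(cost: float, useful_life_years: float) -> float:
    """Straight-line depreciation: equal expense in each year of useful life."""
    return cost / useful_life_years

fleet_cost = 30e9  # hypothetical $30B GPU purchase

# Expense if the books matched a realistic three-year hardware life:
realistic = annual_depreciation(fleet_cost, 3)   # $10B per year

# Expense under the six-year schedule actually used on the books:
booked = annual_depreciation(fleet_cost, 6)      # $5B per year

# Stretching the schedule defers half the expense, flattering
# reported earnings by the difference every single year:
print(f"Annual earnings flattery: ${(realistic - booked) / 1e9:.1f}B")
```

The deferred expense does not disappear; it collides with reality whenever the hardware must actually be replaced, which in this sketch happens twice before the books say the first fleet is worthless.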


Self-consumption. Neither railroads, nor telecoms, nor airlines were simultaneously their own largest customers. The hyperscalers are. Google building AI infrastructure to protect search advertising revenue is fundamentally different from a railroad building track to serve external freight customers. The demand signal is circular in a way that has no parallel.


The scale of the accounting subsidy. The Princeton CITP analysis makes clear that the extended depreciation schedules are functioning as a competitive weapon: they enable below-cost pricing during the customer acquisition phase. This has elements of the telecom fraud era, but it’s technically legal. It’s GAAP-compliant earnings manipulation in service of market capture, at a scale that dwarfs anything the telecoms attempted.
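A stylized unit-economics sketch shows how a stretched schedule becomes a pricing weapon. Every number here is hypothetical, chosen only to make the mechanism visible:

```python
# Hypothetical unit economics for an inference service.
hardware_cost = 1_000_000.0   # one server cluster
tokens_per_year = 50e9        # tokens that cluster can serve annually
other_cost_per_mtok = 2.00    # power, networking, staff per million tokens

def cost_per_million_tokens(depreciation_years: float) -> float:
    """All-in cost per million tokens under a given depreciation schedule."""
    hw_per_year = hardware_cost / depreciation_years
    hw_per_mtok = hw_per_year / (tokens_per_year / 1e6)
    return hw_per_mtok + other_cost_per_mtok

true_cost = cost_per_million_tokens(3)    # realistic hardware life
booked_cost = cost_per_million_tokens(6)  # schedule on the books

# A price set just above booked cost looks profitable on paper
# while sitting well below the true, replacement-adjusted cost:
price = booked_cost * 1.05
print(f"true ${true_cost:.2f} vs booked ${booked_cost:.2f} vs price ${price:.2f}")
```

Under these assumptions the provider reports a margin on every token while losing money against the real replacement cycle, which is precisely what makes the pricing GAAP-compliant and unsustainable at the same time.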


So the honest answer to the question is that the components have all appeared before: ruinous price competition (railroads), debt-financed infrastructure overbuilding (telecom), chronic below-cost pricing that destroys providers while serving customers (airlines). But the combination of all three, plus the depreciation bomb, plus self-consumption, plus the sheer dollar scale ($1.5 trillion in projected debt), is genuinely novel.

How It Breaks: The Correction Scenarios

If the inverted bubble corrects, it will not look like a traditional crash. Here are the plausible scenarios, in order of likelihood.

Scenario 1: The Price Explosion

The most likely correction is a sudden repricing of AI services to reflect actual costs. This would be triggered when one or more hyperscalers can no longer sustain the subsidy, either because debt markets tighten, because the accounting subsidy unwinds (through depreciation schedule corrections or regulatory action), or because the competitive dynamics shift enough that raising prices becomes viable.

In this scenario, every enterprise that built AI into its operations at subsidized rates suddenly faces real pricing. The impact would be enormous. Companies that integrated AI into core workflows (customer service, content moderation, code generation, logistics optimization) would face a choice between absorbing dramatically higher costs or ripping out capabilities that have become operationally essential. This is the inverse of a market crash: instead of asset prices falling, input costs explode.

Early indicators suggest this scenario is already approaching. AI API spend has become one of the fastest-growing line items for engineering teams, and it often stays invisible until the bill arrives. OpenAI is reportedly exploring new monetization strategies beyond per-token fees, and an IPO. The millennial lifestyle subsidy always ends. This one will end too.

Scenario 2: The Capacity Crunch

If prices cannot rise, demand gets rationed. This scenario is already partially underway. Microsoft’s $80 billion Azure backlog stems from power constraints, not demand softness. GPU inventory sits idle because there is not enough electricity to install it. Power transformer lead times of 128 weeks mean that capacity ordered today will not be operational for two and a half years.

In a full capacity crunch, AI becomes a rationed resource. Access is allocated by contract size, strategic importance, or willingness to pay premium rates, creating a de facto price increase through scarcity rather than explicit repricing. Small and medium enterprises are priced out first. Startups building on AI APIs discover that their infrastructure provider cannot guarantee the capacity they need. The “democratization of AI” narrative reverses, and AI becomes a competitive advantage available primarily to companies large enough to secure dedicated capacity.

Scenario 3: The Debt Crisis

The most dangerous scenario is a propagating failure in AI infrastructure financing that spills into broader credit markets. The $1.5 trillion debt projection is not hypothetical: it represents real bonds being purchased by insurance companies, sovereign wealth funds, pension funds, endowments, and retail investors. If the AI infrastructure those bonds finance becomes stranded (because hardware obsolesces faster than expected, because demand plateaus, because pricing never reaches sustainable levels), the losses propagate through the financial system.

The $13.3 billion in data-center-backed asset-backed securities issued in 2025, a 55% increase over 2024, is a direct echo of the mortgage-backed securities that amplified the 2008 crisis. These are complex financial instruments backed by physical assets whose value depends on assumptions about technology lifecycle and revenue generation that have no historical precedent. Morgan Stanley itself acknowledges that the monetization speed of AI is a key micro risk to the entire financing framework.

A Massachusetts Institute of Technology study found that 95% of organizations are getting zero return from generative AI projects. If that statistic is even directionally correct, the revenue assumptions backing $1.5 trillion in debt are catastrophically optimistic. Bond buyers are already hedging: credit default swap activity on individual tech companies is rising. JPMorgan strategists warn that a flood of data center financing could cause supply indigestion across dollar-denominated credit markets. It is admittedly risky to bank everything on such predictions: AI has, for example, been generating billions for the advertising industry for well over a decade, and the MIT study and its ilk are narrow analyses. But they could be enough to cause a chill.

Scenario 4: Consolidation Through Exhaustion

Some players simply exit. Oracle is already the most visible stress case: negative free cash flow projected through 2029, $156 billion in total capex commitments, and a leverage profile that has spooked Wall Street. CoreWeave (a key player in Canadian data sovereignty) carries $18.8 billion in debt against hardware that depreciates in two to three years while booked for six. Its stock has fallen 57% from its peak.

In this scenario, the AI infrastructure market consolidates to two or three survivors with the deepest balance sheets, and those survivors finally gain pricing power. The correction happens not through market forces but through attrition. Companies that cannot sustain the losses exit, and the remaining players inherit enough market share to begin charging sustainable prices. The cost of this consolidation is measured in stranded assets, defaulted bonds, and eliminated competition.

Scenario 5: The Quality Collapse

There is a final scenario that is already visible to anyone paying close attention.

Under extreme cost pressure, providers degrade the product instead of raising the price. We are experiencing this and have been since December. Models get templated, responses become more generic, reasoning capabilities are reduced to save compute. The product that enterprises integrated at peak quality is silently replaced by a cheaper version of itself.

This is the scenario in which the inverted bubble does not pop; it slowly deflates. Prices stay low, but what you get for those prices steadily diminishes. Enterprises that built workflows around AI capabilities discover that those capabilities are being eroded by the same cost pressures that created them. The promise of AI is maintained in marketing; the reality is managed through quiet degradation.

What Comes Next

The inverted bubble is not a prediction; it describes current conditions. The hyperscalers are borrowing against future revenue from a product they are currently selling below cost, to build infrastructure whose useful life may be shorter than the debt maturity, using accounting assumptions that mask the true economics. The competitive dynamics prevent any individual actor from correcting the pricing, and the demand signal, inflated by self-consumption, justifies further escalation.

The question is not whether this resolves, but how. A price explosion redistributes costs to enterprises. A capacity crunch rations access. A debt crisis propagates through credit markets. Consolidation eliminates competition. Quality collapse betrays the value proposition.

Most likely, several of these scenarios unfold simultaneously and reinforce each other. Prices rise, some players exit, quality diverges between providers, and capacity constraints create a tiered market in which the cost of AI correlates directly with the size of the buyer.

For business leaders, the implication is clear: every AI integration decision made today is being made at subsidized prices that will not last. The strategic question is whether an organization can absorb the true cost when the subsidy ends, and what you will do if the infrastructure your business depends on is owned by a company that can no longer afford to provide it.

We have never seen an industry founder under the weight of its own success. We are watching the signals mount that it is happening now.

Jen Evans is the founder of PatternPulse AI. She is the author of Evans’ Law, the Nudgment framework, and the AI Sovereignty Maturity Model.

SOURCES

Bank of America Securities, AI Capex Analysis, 2025–2026

UBS Credit Strategy, Hyperscaler Capex Projections, February 2026

Morgan Stanley, “Credit Markets’ Role in AI Financing Gap,” August 2025

Morgan Stanley / JP Morgan, $1.5 Trillion Debt Projection, November 2025

CNBC, “Tech AI Spending Approaches $700 Billion in 2026,” February 6, 2026

Bloomberg / Yahoo Finance, Amazon Free Cash Flow Projections, March 2026

Pivotal Research, Alphabet FCF Forecast, February 2026

IEEE ComSoc Technology Blog, Hyperscaler Capex > $600B, December 2025

Goldman Sachs, “Why AI Companies May Invest More than $500 Billion in 2026,” December 2025

Princeton CITP, “Lifespan of AI Chips: The $300 Billion Question,” October 2025

Tomasz Tunguz, “The 12x Bet on AI,” March 2026

Axios, “AI Companies Like OpenAI, Google Cover Costs. But Not Forever,” March 2026

Introl Blog, Hyperscaler Capex $690B Analysis, February 2026

Mellon Investments, “Record-Breaking AI-Related Debt Issuance,” November 2025

Brandywine Global / Morgan Stanley IM, “Brave New World of AI Capex,” 2025

CoStar, “Hyperscalers’ $680 Billion AI Capital Expenditure,” March 2026
