What Happens When the AI Infrastructure Your Country Relies On Breaks?
Paper 3 in the “Whose AI Runs the Government?” series asks what happens to countries that built public services on AI infrastructure they don’t own, and introduces a planning framework for cost instability that no institution is currently using.
In Paper 1 of the “Whose AI Runs the Government?” series, we mapped eight national approaches to AI sovereignty using the Sovereign AI Maturity Model. In Paper 2, summarized here as “The Inverted AI Bubble,” we established that AI’s real crisis is not overvaluation but underpricing: the hyperscalers are borrowing against future revenue from a product they sell below cost, to build infrastructure whose useful life is shorter than the debt maturity, using accounting assumptions that mask the true economics.
Paper 3 asks the question that follows: what happens to countries that depend on that infrastructure when it breaks?
The answer, for Canada, is that the institutional mechanisms meant to detect and respond to infrastructure dependency failures are structurally incompatible with the rate of change. The instruments are too slow. The thresholds are undefined. In several cases, the critical point has already been passed.
The Four Triggers

Infrastructure dependency (the snapshots here are based on publicly available information and are in constant evolution) does not fail in one way. It fails along four distinct vectors, each of which changes the relationship between a country and the AI infrastructure it relies on. These triggers are not mutually exclusive; in practice, they cascade.
Change of Ownership. The entity that owns the infrastructure changes hands through acquisition, merger, distress sale, or private-equity takeover. The data, the contracts, and the jurisdictional obligations do not automatically transfer intact. No Canadian institution is consulted. No review is triggered.
Change in Pricing. The cost of AI services reprices to reflect actual economics. As the inverted bubble thesis establishes, current AI pricing is subsidized. When the subsidy ends, every organization that built AI into core operations at promotional rates faces a choice between absorbing dramatically higher costs or ripping out capabilities that have become operationally essential.
Change in Temperature. The infrastructure provider changes its security posture, privacy policies, encryption standards, or terms of service. This is the trigger nobody is monitoring. A company can change its temperature overnight. There is no treaty, no regulation, and no collective bargaining agreement that governs it.
In the April 2026 context, “Change in Temperature” is no longer abstract. It is the lived risk: a U.S. administration issues an executive order or policy framework on AI “bias,” “truth-seeking,” or procurement standards, and the providers everyone in Canada actually uses (OpenAI, Anthropic, Google, Microsoft, and others) adjust their security posture, content filters, model behavior, or terms of service overnight to stay eligible for the massive U.S. federal market. Canadian institutions get the downstream effects with zero consultation, zero notice, and zero recourse.
Change in Capability. The product degrades. Models get templated, reasoning capabilities are compressed, and response quality diminishes as providers manage cost pressure through silent quality reduction rather than explicit price increases. This is Scenario 5 from the inverted bubble analysis: the product that enterprises and governments integrated at peak quality is quietly replaced by a cheaper version of itself.
The Three Cost Layers
Each trigger must be assessed across three cost layers: financial cost (direct monetary impact), failure cost (what breaks when the trigger fires), and sovereignty cost (the jurisdictional, strategic, and democratic implications). Financial costs show up on a balance sheet. Failure costs are non-linear: a health records system offline for a week produces months of backlog and a potentially incalculable human cost. Sovereignty costs are the hardest to quantify and the most consequential.
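The four-trigger, three-layer assessment described above can be sketched as a simple scoring matrix. The trigger and layer names come from the paper; the 0–5 scale, the scoring method, and the example entries are illustrative assumptions, not a proposed methodology.

```python
from dataclasses import dataclass, field

# The paper's four triggers and three cost layers
TRIGGERS = ["ownership", "pricing", "temperature", "capability"]
LAYERS = ["financial", "failure", "sovereignty"]

@dataclass
class ExposureAssessment:
    """Scores each (trigger, layer) cell on an illustrative 0-5 scale."""
    system: str
    scores: dict = field(default_factory=dict)  # (trigger, layer) -> 0..5

    def set_score(self, trigger: str, layer: str, score: int) -> None:
        assert trigger in TRIGGERS and layer in LAYERS and 0 <= score <= 5
        self.scores[(trigger, layer)] = score

    def worst_trigger(self) -> str:
        """Trigger with the highest summed exposure across all three layers."""
        totals = {t: sum(self.scores.get((t, l), 0) for l in LAYERS)
                  for t in TRIGGERS}
        return max(totals, key=totals.get)

# Hypothetical scoring of a health-records deployment
a = ExposureAssessment("ontario-health-records")
a.set_score("pricing", "financial", 4)
a.set_score("ownership", "sovereignty", 5)
a.set_score("temperature", "sovereignty", 5)
print(a.worst_trigger())  # -> "ownership" (ties resolve in TRIGGERS order)
```

The point of the sketch is only that the assessment is a matrix, not a list: a trigger that looks cheap on the financial layer can still dominate on the sovereignty layer.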
The Scenarios
The paper models four specific, plausible scenarios against all four trigger types and all three cost layers:
CoreWeave Acquired by Foreign Private Equity. CoreWeave carries $21.4 billion in total debt, a debt-to-equity ratio of 894%, and its stock has fallen more than 50% from its peak. It operates the Ontario data center where Cohere (Canada’s most prominent AI company) is the anchor tenant in a facility that received $240 million in Canadian federal funding. Bell Canada has announced a $12 billion data center in Saskatchewan with CoreWeave and Cerebras as tenants. There is no domestic alternative at equivalent scale. No pre-qualified replacement exists. No data portability requirements were embedded in the original contracts. And no Canadian institution has a mandate to monitor CoreWeave’s financial stability as a sovereignty-relevant event.
Cohere Collapses, Pivots, or Gets Acquired. Cohere is valued at $7 billion with $240 million in annual recurring revenue, competing against companies with 10–50x that revenue. Its compute infrastructure is provided by CoreWeave. Its “Canadianness” is a positioning advantage, but it runs on American infrastructure and is funded largely by international capital. There is no second Cohere. No other Canadian AI company operates at comparable scale.
Hyperscaler Repricing or Capacity Rationing. Canada has no domestic hyperscaler. When the subsidy ends (whether through explicit repricing, capacity rationing, or tiered access based on contract size) Canada has no alternative. Foreign companies would determine which Canadian government functions can afford AI and which cannot. Pricing decisions made in Seattle, Mountain View, and Redmond would directly shape Canadian public service capacity.
Ontario Health Records on Compromised Infrastructure. In March 2026, Ontario announced a provincewide medical records digitization initiative. No timeline was given. No funding was committed. No public discussion has occurred regarding whose infrastructure it will run on. It is a real-time example of a province making infrastructure-dependent decisions without the institutional mechanisms to assess the sovereignty implications. Meanwhile, Ontario’s energy planning for data centers is being shaped by American nuclear deregulation priorities rather than an independent assessment of Canadian infrastructure needs.
The Cost Instability Problem
This is the new section in Paper 3, and it may be the most operationally useful for anyone making AI integration decisions right now.
Every scenario in the paper assumes that at some point, cost changes. But AI cost movement in 2026 is fundamentally different from any infrastructure cost that institutions have previously had to govern. There are at least five distinct cost vectors operating simultaneously, all pulling in different directions:
Commodity token deflation. Unit costs for lightweight models are falling. This is the number in headlines and board decks. In March 2026, a Gartner headline predicted inference costs will fall 90% by 2030 (although Gartner’s own underlying analysis does not fully bear this out). Bloomberg Opinion columnist Gautam Mukunda framed this through the Jevons Paradox, the observation that when a resource becomes more efficient, total consumption increases rather than decreases. He is correct about the demand pattern. But the Jevons framing leads institutions to a dangerous conclusion: that falling token costs mean falling total costs. They do not.
Frontier inference cost escalation. The models enterprises actually need for complex work are getting hungrier, not leaner. Reasoning models consume orders of magnitude more compute per task. Gartner’s own analysis, buried beneath the 90% headline, acknowledges that agentic models require 5–30x more tokens per task and that “overall inference costs are expected to increase.” The accompanying research note is titled “Frontier Scale Models Threaten Software Margins and Solvency.” We are relying on time-strapped people to read past the headline.
Infrastructure cost explosion. Nearly $700 billion in hyperscaler capex in 2026. $1.5 trillion in projected debt. Free cash flow collapsing across all five major providers. These costs are not visible in current AI pricing. They are deferred.
Artificial price compression. Current pricing is held below cost by competitive dynamics, accounting subsidies, and loss-leader strategies. In March 2026, OpenAI’s head of ChatGPT described current pricing as “accidental.” Twenty-four percent of tracked AI models changed prices in March alone. A Forrester survey found 70% of CIOs cite “AI cost unpredictability” as their top barrier to adoption.
Silent quality degradation. Providers are managing cost pressure by serving thinner models under the same brand names. The invoice looks the same. The output is worse. This is a cost increase that never appears on a balance sheet.
The Planning Range
Based on these dynamics, the paper proposes that institutions abandon point-estimate cost projections and adopt range-based planning:
Base case: 25–55% increase in total AI operational costs over 24 months, assuming no major disruption and a gradual repricing correction.
Stress case: 100–300% increase, triggered by any combination of hyperscaler repricing, capacity rationing, provider insolvency, or geopolitical supply chain disruption.
Tail risk: service discontinuity, in which the cost question becomes moot because the infrastructure is no longer available at any price.
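The range-based planning above can be expressed as a small budget calculator. The multipliers come directly from the paper’s base and stress cases; the baseline spend figure and function shape are placeholders for illustration.

```python
def plan_ai_budget(current_annual_spend: float) -> dict:
    """24-month range-based AI cost planning.

    Multipliers reflect the paper's cases: base +25% to +55%,
    stress +100% to +300%. Tail risk is discontinuity, not a number.
    """
    return {
        "base_case": (current_annual_spend * 1.25,
                      current_annual_spend * 1.55),
        "stress_case": (current_annual_spend * 2.00,
                        current_annual_spend * 4.00),
        "tail_risk": "service discontinuity: budget a migration path, not a price",
    }

# Hypothetical department spending $10M/year on AI services
ranges = plan_ai_budget(10_000_000)
print(ranges["base_case"])    # (12500000.0, 15500000.0)
print(ranges["stress_case"])  # (20000000.0, 40000000.0)
```

A department that budgets the point estimate ($10M flat, or $1M after a “90% cost decline”) has no plan for the $20M–$40M stress band, which the paper argues is not an outlier scenario.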
Any Canadian government department using Gartner’s headline number to plan its AI integration budget is building on a foundation that appears reassuring but that Gartner’s own analysis shows to be false.
The Sovereign Exposure Registry
The paper proposes a standing operational registry of sovereign exposure, building on the EU AI Act’s registration framework but addressing its three critical gaps: the EU registers AI systems but not infrastructure dependency; it does not monitor for trigger events; and it does not require replacement readiness. Canada has the opportunity to build what the EU did not: a registry that tracks not just what AI systems are deployed in public services, but who owns the infrastructure they run on, what trigger exposures exist, and what happens when the ground shifts.
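A minimal sketch of what one registry record might contain, organized around the three gaps identified above. All field names and the flagging logic are illustrative assumptions, not a proposed standard.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RegistryEntry:
    """One public-service AI deployment tracked for sovereign exposure.

    Fields map to the three gaps the paper identifies in the EU approach:
    infrastructure dependency, trigger monitoring, replacement readiness.
    """
    system_name: str
    infrastructure_owner: str                 # who owns the compute (gap 1)
    owner_jurisdiction: str                   # ISO country code of owner
    trigger_watchlist: List[str] = field(default_factory=list)  # gap 2
    replacement_ready: bool = False           # pre-qualified alternative (gap 3)
    data_portable: bool = False               # contractual portability mandate

    def flags(self) -> List[str]:
        """Conditions that should surface for review."""
        out = []
        if self.owner_jurisdiction != "CA":
            out.append("foreign-owned infrastructure")
        if not self.replacement_ready:
            out.append("no pre-qualified replacement")
        if not self.data_portable:
            out.append("no data portability mandate")
        return out

# Illustrative entry based on facts cited in the paper
entry = RegistryEntry(
    system_name="cohere-anchored-workloads",
    infrastructure_owner="CoreWeave",
    owner_jurisdiction="US",
    trigger_watchlist=["ownership", "pricing", "temperature", "capability"],
)
print(entry.flags())
```

The value of a registry in this form is that the flags are computable: a change-of-ownership filing or a terms-of-service update can be matched against the watchlist automatically, rather than discovered after the fact.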
The Question This Paper Poses
How close are we to discovering that anti-sovereignty elements already exist in Canadian AI infrastructure?
The difficult answer is that we are already there. Canadian federal money is invested in infrastructure owned by a company carrying $21.4 billion in debt with an 894% debt-to-equity ratio. Canada’s primary domestic AI company runs on that same company’s infrastructure. Ontario is digitizing health records without asking who owns the compute. And no Canadian institution has a mandate to monitor any of this as a sovereignty-relevant event.
The scenarios in this paper are not worst cases. They are base cases. If there is a single actionable test of whether Canada is serious about AI sovereignty, it is this: the Ontario health records initiative should not proceed one step further without infrastructure sovereignty requirements, data portability mandates, contingency architecture, and a tested migration path embedded in its design. Every day it advances without these is a day closer to irreversible dependency over the most sensitive data a province holds.
The full paper, edited for client disclosure and including detailed scenario tables, decision paths, and the complete cost instability analysis, will be available next week on ResearchGate and Zenodo.
Jen Evans is the founder of PatternPulse AI. She is the author of Evans’ Law, the Nudgment framework, and the AI Sovereignty Maturity Model. This is Paper 3 in the “Whose AI Runs the Government?” series.

