Thursday, May 14, 2026

Canadian AI Sovereignty Paper 9: The Procurement-Strategy-Policy Gap


Three Announcements, Two Days, Two Levels of Government, One Pattern


By Jen Evans, Principal, Pattern Pulse AI; co-founder, Tech Reset Canada; publisher, B2BNN


Paper 9 in the “Whose AI Runs the Government?” series.

UPDATE, May 13: Within hours of this paper’s publication, the federal government made a third announcement in three days. At Web Summit Vancouver, AI Minister Evan Solomon announced $66 million through the AI Compute Access Fund to subsidize compute access for 44 Canadian small and medium-sized companies. Asked directly whether the TELUS data centres announced 48 hours earlier would provide the compute for the 44 funded projects, the minister said they would not. “They’re not tied together exactly,” Solomon told reporters, framing sovereign compute as an eventual goal rather than a present condition of the funding. The two announcements are not coordinated. The minister confirmed it on the record.


The subsidy structure tells the rest of the story. The fund pays 67 percent of eligible costs for Canadian cloud-based compute and 50 percent for non-Canadian. The 17-point preference is real and is not nothing. It is also not a sovereignty instrument. The structure routes public money to whatever compute the 44 companies are already using, which at the scale these companies require is overwhelmingly AWS, Azure, and Google Cloud. The fund is a demand-side consumption subsidy for foreign hyperscalers with a Canadian-preference rounding error. It does not build sovereign compute. It does not require sovereign compute. It does not coordinate with the sovereign compute commitment announced 48 hours earlier. By the time the BC clusters are operational, the funded companies will be on multi-year cloud contracts with whoever could serve them now.
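The subsidy arithmetic can be made concrete with a back-of-envelope sketch. The subsidy rates below are from the announcement; the dollar figures are hypothetical placeholders for illustration only, not numbers from the fund.

```python
# Back-of-envelope: out-of-pocket cost under the AI Compute Access Fund's
# two subsidy rates (67% for Canadian cloud compute, 50% for non-Canadian).
# The prices used below are hypothetical placeholders, not announced figures.

CANADIAN_SUBSIDY = 0.67   # fund pays 67% of eligible Canadian cloud costs
FOREIGN_SUBSIDY = 0.50    # fund pays 50% of eligible non-Canadian costs

def out_of_pocket(list_price: float, subsidy_rate: float) -> float:
    """Company's share of an eligible compute bill after the subsidy."""
    return list_price * (1 - subsidy_rate)

# Break-even premium: how much more expensive Canadian compute can be
# while still costing the funded company less out of pocket.
break_even_premium = (1 - FOREIGN_SUBSIDY) / (1 - CANADIAN_SUBSIDY) - 1
print(f"Canadian compute can carry a ~{break_even_premium:.0%} price premium")

# Hypothetical $1.00M hyperscaler bill vs a $1.40M Canadian bill (in $M):
print(out_of_pocket(1.00, FOREIGN_SUBSIDY))   # hyperscaler share: 0.50
print(out_of_pocket(1.40, CANADIAN_SUBSIDY))  # Canadian share: ~0.462
```

Under these rates the preference only bites while Canadian compute prices sit within roughly 50 percent of hyperscaler pricing; beyond that gap the fund’s dollars flow to the cheaper foreign option, which is the structural point above.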


The announcement is silent on the layer above compute. What models are these 44 companies using? Cohere is the domestic model layer the sovereignty rhetoric implies. OpenAI, Anthropic, Google, and Meta are the model layers the economics imply. The fund does not require Canadian models. It does not reference Cohere. It does not describe a model sovereignty framework at all. The compute substrate has been treated, partially, as a sovereignty question. The model layer that runs on top of the substrate has not been treated as a sovereignty question at the federal procurement level in any of the three announcements this week.

Original post:



It has not been a good 24 hours for Canadian AI sovereignty, which has limped out of several gates with little strategic coherence, little clarity, and some immediate, visible failures.

There is a hole at the centre of it all, made up of gaps in policy, technology, budgeting, planning, strategy, enforcement, and expertise. It is not a small hole. Today it has become clear just how big a hole we have dug, and how much political heat the federal and Ontario governments are under: not just to demonstrate progress on AI deployment, but to demonstrate any control over, or cohesive strategy for, its use. Three announcements in two days show what that pressure has produced. When, according to the Auditor General, less than 5 percent of Ontario public sector workers have taken training on the tools they use; when only 6 percent of public sector employees are using approved tools; when three sovereign AI data centres are being built in B.C. without a national strategy released and without information on where they fit into the architecture; when AI tools in use for medical records management are hallucinating (as is their wont, especially around proper nouns), we are not only coming up short, we are leaving the very semblance of AI security and sovereignty in the dust.


On May 11, the federal government and TELUS announced a $2 billion commitment over five years to build “sovereign AI infrastructure” across three British Columbia data centres. The compute stack is NVIDIA Vera Rubin and Grace Blackwell. The networking fabric is NVIDIA Quantum InfiniBand and Spectrum-X. The systems software is NVIDIA AI Enterprise. TELUS is positioned as the first North American service provider to become an official NVIDIA Cloud Partner. The buildings will be in Canada. The renewable energy is Canadian. The heat recycling will warm 150,000 Vancouver homes. The technology stack inside the buildings is American.


BNN Bloomberg’s coverage placed the word sovereign in scare quotes. The scare quotes are the appropriate journalistic posture.


On May 12, Ontario’s Auditor General Shelley Spence released a report finding that the province’s AI strategy “still lacks several key components.” Among the findings: as of August 2025, 3 percent of Ontario Public Service staff had completed the Ministry’s Responsible Use of AI training. Of 400 AI websites accessed by OPS staff between April and August 2025, 244 (about 60 percent) were flagged as unsafe and unsecured by the government’s own Microsoft Defender cybersecurity tool. The Ministry had not blocked access to those sites on OPS-provided devices and had not implemented controls to prevent staff from uploading Ontarians’ personal information or sensitive corporate data to them. Sixty percent of Supply Ontario-approved AI medical scribes recorded a different drug than what was prescribed in clinical testing. Nine of the 20 approved scribes fabricated information and made unsolicited treatment suggestions.


Three announcements. Two levels of government. The same pattern.


The Procurement-Strategy Gap


This series has been arguing across eight papers that sovereignty rhetoric and sovereignty architecture are not the same thing. The TELUS announcement and the Ontario auditor general’s report show the gap operating in real time.


The federal government is committing $2 billion to a procurement frame that anchors Canadian “sovereign AI” to a single foreign vendor’s product roadmap, before the coordination architecture exists, before provinces have aligned, before the workforce framework that would implement deployment has been bargained, and before the federal AI strategy that is nine months past its original timeline has been released. The Ontario government has deployed AI to 15,000 weekly Copilot users while training 3 percent of them, has left 60 percent of accessed AI sites unblocked, has approved medical AI scribes that hallucinate drug prescriptions at a 60 percent rate, and has announced a provincewide medical records initiative that the auditor general has now documented is operating without the strategy components that would make any of it safe.
Procurement is moving faster than strategy at every level. The political imperative to show progress on AI is producing announcements that the underlying policy work cannot defend.


What the Federal Announcement Does Not Answer


The TELUS announcement provides marketing language and renewable energy figures. It does not provide answers to the questions a sovereign AI procurement decision would have to answer.
How does this cluster relate to existing Canadian AI infrastructure? Bell AI Fabric was announced with substantially similar sovereign framing. The existing TELUS Rimouski capacity sold out, according to the announcement. Is the new BC cluster complementary to Bell AI Fabric or duplicative of it? Does it integrate with the load-bearing triumvirate of Cohere, CoreWeave, and Palantir that the series identified as Canada’s current de facto AI architecture? CoreWeave is the largest North American NVIDIA reseller. TELUS is now an official NVIDIA Cloud Partner. The relationship between the two is not described. Cohere is presumably the domestic model layer the infrastructure would serve. Whether Cohere has access to these clusters and under what terms is not described either.


What workloads will the cluster serve? Government workloads, commercial workloads, academic workloads, or some combination? Will federal departments use this infrastructure for sensitive workloads under the security framework that does not yet exist? Will provincial systems be able to use it under data sovereignty terms that have not been published? The Ontario provincewide medical records initiative announced in March is a live procurement decision that needs sovereign infrastructure. The auditor general’s report makes clear it also needs a strategy, training, and security controls that do not exist. Is the TELUS cluster intended to serve Ontario health data, and if so under what governance? Whose? What is Palantir’s role?

What is the actual stack control architecture? Every meaningful technical layer of the compute substrate is NVIDIA. The buildings are Canadian. The cooling system is Canadian. The electricity is Canadian. The chips, the networking, the systems software, the orchestration, and the upgrade path are not. The framework from Paper 1 distinguishes capability sovereignty from hosting sovereignty. The announcement is hosting sovereignty being marketed as capability sovereignty. Owning the building is not controlling the stack.


Why are these announcements so deliberately vague? Sovereign infrastructure does not require exposing operational details. Sovereign infrastructure does require explaining what is being built, why, against what alternative, and how it advances a national strategy. The TELUS announcement explains none of these things. The vagueness is a political feature, not a security feature. It protects the announcement from the analysis the framework would otherwise apply.

Under what labour framework is the sovereign AI partner deploying AI on its own workforce? On April 30, less than two weeks before the announcement, the Canadian Telecommunications Workers Alliance testified before the House of Commons Standing Committee on Industry and Technology. The alliance represents 32,000 workers across Bell, Rogers, and TELUS. Its testimony described AI being used to monitor telecom workers and to disguise the accents of offshore call centre agents. The alliance asked the federal government for restrictions on AI use in the sector and for a permanent federal working group on artificial intelligence bringing government, industry, and civil society together. The federal government named TELUS as Canada’s sovereign AI partner less than two weeks later. The April 30 testimony is not referenced in the announcement. The sovereign AI partner is deploying AI on its workforce under terms its own workers are asking the federal government to restrict. The procurement announcement does not address this.

Three Orders, One Failure


The constitutional architecture from Paper 8 explains why these two announcements produce a single integrated failure rather than two parallel ones.
Federal jurisdiction over AI procurement at the federal layer, provincial jurisdiction over AI deployment in health, education, transit, and most public service, and municipal operations under provincial legislative authority are three layers of government that do not automatically coordinate with each other. The federal government cannot direct provincial AI procurement. The province can direct municipal AI procurement. None of the three orders is currently operating under shared standards, shared compliance reporting, shared incident reporting, shared workforce frameworks, or shared analytical coordination. The five elements Paper 8 identified as the architecture sovereignty requires are absent at every layer and absent in the connections between layers.


The TELUS announcement is the federal layer making a procurement decision that the federal government has the constitutional authority to make. The Ontario auditor general’s report is the provincial layer documenting that the province’s AI deployment is operating outside the policy framework the province has the constitutional authority to enforce. The municipal layer, which is not visible in either document, is where Ontario municipalities operating under the same provincial legislation are presumably accessing the same unsafe AI sites that the OPS staff are using, with even less training and even less oversight, because most municipalities do not have AI directives of their own. The pattern compounds as it moves down the constitutional architecture.


Coordination across these layers is not consolidation. Provinces retain procurement authority. Municipalities operate under provincial direction within provincial constitutional jurisdiction. The federal government retains authority over trade and national security. The architecture Paper 8 specified would align standards across the orders while preserving authority within each. None of the alignment exists. Neither of the announcements this week reflects awareness that the alignment is required.


What the Auditor General’s Reports Document


The Ontario report makes the operational consequences of the procurement-strategy gap visible at the provincial layer.


Three percent of OPS staff trained on responsible AI use as of August 2025. Sixty percent of AI sites accessed by staff flagged as unsafe by the government’s own Microsoft Defender. No controls to prevent upload of citizen personal information or sensitive corporate data. The Ministry knew which sites were unsafe and did not block them. The province has been celebrating that OPS has the highest Copilot use in Canada. It has not been celebrating that 94 percent of OPS AI use is outside the Copilot agreement that keeps data in Canada, as Jack Hauen of The Trillium reported from the auditor’s findings. Six percent of civil servant AI use is on the approved tool with a Canadian data agreement. Ninety-four percent is on tools without one.


The medical AI scribes are a separate operational failure with clinical consequences. Sixty percent of approved scribes recorded a different drug than what was prescribed. Seventeen of 20 missed key mental health details in at least one of the two simulated patient conversations Supply Ontario used to test them. Nine of 20 fabricated information and made unsolicited treatment suggestions. These are tools that Supply Ontario approved and that family doctors and other healthcare professionals are using in clinical examinations now. The procurement decision was made before the testing produced these results, or in spite of them. Every AI tool will hallucinate when pushed beyond the conditions and limits of a probabilistic system. Training is what teaches users where those conditions end. A 3 percent training rate is, in practice, untrained deployment, and these results are what untrained deployment of any AI tool produces.


This is the workforce framework gap from Paper 8 showing up at the operational layer. It is the absence of a strategy showing up at the deployment layer. It is the procurement-before-policy pattern producing exactly the failure modes the framework predicted.

The Energy Question


The renewable energy and heat recycling components of the TELUS announcement are real, are good, and are unrelated to sovereignty. Climate values and AI sovereignty are separate policy questions. The announcement conflates them. A foreign-controlled compute substrate powered by 98 percent renewable electricity is still a foreign-controlled compute substrate. The renewables do not make the substrate Canadian.


The energy footprint is also worth treating on its own terms. The International Energy Agency projects global data centre electricity demand will more than double by 2030, reaching nearly 1,000 TWh, roughly the power use of a major industrialized country, driven largely by AI workloads. Canadian regulators and utilities have already warned that AI-led data centre growth could put significant strain on provincial grids, particularly in Ontario, Quebec, and Alberta. The BC cluster is one input into a much larger pressure on Canadian grids that no announcement has addressed at the architecture level.


Why Are We Announcing Investment Ahead of a Delayed National Strategy?

The federal AI strategy is nine months past its original timeline. The CUSMA review opens July 1. The coordination architecture from Paper 8 does not exist. The federal IT agreement that would carry the workforce framework expired in December 2025 and is in active bargaining at PIPSC. The federal government has just committed $2 billion to a procurement frame that anchors the entire federal AI architecture before any of these foundational elements are in place.

PSAC, which represents 245,000 federal public servants, has been at the bargaining table for five months attempting to add 15 AI clauses to its agreement with the federal government, including one establishing that AI cannot be a substitute for public service employees. Those negotiations are at an impasse. Ottawa’s chief data officer Stephen Burt has publicly stated that AI adoption will result in job cuts in the public service. The federal AI strategy continues to claim that automation will free public servants to focus on higher-value work. The procurement decision was made into the gap between what the strategy says about workers and what the government’s own chief data officer says will happen to them, against active bargaining that is refusing the workforce protections the chief data officer’s statement makes necessary.


Procurement before strategy is the inverse of what sovereignty requires. A country building sovereign AI capability would publish the strategy first, build the coordination architecture next, bargain the workforce framework alongside, and procure the infrastructure last, against the architecture the strategy specified. Canada is doing the inverse. The procurement decision is being made first, in the absence of the architecture, anchored to a vendor whose product roadmap will define what the architecture has to accommodate afterwards.


The three levers Paper 8 identified are trade-law obligations, federal spending power, and voluntary intergovernmental coordination. The TELUS announcement is the federal government using its most flexible lever, federal spending power, to entrench foreign-vendor dependency rather than to build domestic capability. The spending power is the lever that operates with the least trade-law constraint. The government is spending the lever on which it has the most domestic flexibility, on the procurement frame that gives it the least sovereignty.

What Partnership Has to Look Like


Nothing in this analysis argues for a fully domestic AI stack. A fully domestic stack is not available to Canada at any timeline relevant to the policy decisions being made this week. The frontier model layer, the leading-edge chip layer, and the hyperscale cloud layer are American. Partnership with American providers, and with European, Japanese, and other allied providers, is a structural requirement of any Canadian AI strategy that intends to operate in this decade. The question is not whether to partner. The question is on what terms.


Partnership terms are where sovereignty either exists or does not. A procurement decision that hands a foreign vendor a long-dated commitment, a single point of dependency, an upgrade path the vendor controls, and a workload concentration the vendor can leverage is a partnership in name and a dependency in substance. A procurement decision structured around portability, multi-vendor architecture, exit rights, data residency enforceable under Canadian law, contractual protection for citizen data under Canadian jurisdiction, audit rights against the vendor’s stack, and the capacity to respond when terms or geopolitics change is a partnership that preserves the country’s ability to manage its own affairs.


The trust environment makes the structure harder, not optional. American AI providers are operating under an administration that has signalled willingness to use AI infrastructure, AI procurement, and AI export controls as instruments of foreign policy. European providers are operating under a regulatory regime that does not always align with Canadian priorities. Chinese providers are not currently realistic partners for federal procurement, for many reasons, even if they are increasingly the option for startups. No vendor in this market can be relied upon to behave the same way in 2028 that they behave in 2026. The procurement structure has to assume that vendor behaviour will change, that policy environments will shift, and that the terms negotiated today will be tested under conditions that did not exist when they were signed.


The instruments that protect Canadian autonomy under partnership are not exotic. Contractual data residency with Canadian-law enforcement, not vendor assurance. Multi-vendor architecture at every layer where concentration creates leverage. Exit rights with enforceable timelines and cost ceilings. Workload portability requirements written into procurement standards. Audit rights that survive vendor consolidation, acquisition, or strategic pivot. Continuity-of-service obligations that bind across changes in vendor ownership or jurisdiction. Compliance reporting that the Canadian government, not the vendor, controls. These are the terms that turn dependency into partnership. None of them appear in the announcements of the last three days.


The Policy Gap


What is missing across all three orders of government is enforceable policy on AI use, deployment, procurement, and accountability.

AI internal usage policy is not window dressing. It defines how government workers will apply a wide-reaching, poorly understood, extremely powerful, and rapidly evolving technology.
Strategy says what a government intends to do. Policy says what a government requires to be done and what happens when it is not. Ontario has a Responsible Use of AI directive that 3 percent of public service staff have been trained on. The directive exists. The training infrastructure to make the directive operative does not. The security controls the directive specifies have not been implemented. The auditor function the directive references has now reported that the directive is functionally inoperative across the public service it is supposed to govern. A directive that 97 percent of relevant staff have not been trained on and that the responsible ministry has not enforced is not policy. It is aspirational language.


The federal government has neither the strategy nor the policy. The AI strategy is nine months past its original timeline. There is no federal AI directive applicable across federal departments with bargained workforce terms, no published procurement standards for AI tools used in federal operations, no incident reporting framework for AI failures in federal deployment, and no compliance audit function for AI use in federal departments and agencies. The TELUS announcement was made into this absence. The $2 billion commitment will be governed by whatever the eventual policy turns out to be, against a procurement frame the policy did not specify.


What enforceable AI policy across orders of government would require is not mysterious. Mandatory training before access. Blocked access to flagged AI sites on government devices. Controls preventing upload of personal or sensitive data to unapproved AI tools. Procurement standards for AI tools deployed in service of citizens. Accuracy testing before clinical deployment and continuous testing after. Incident reporting requirements for AI failures. Audit functions to verify compliance. Consequences for noncompliance that fall on the institutions deploying the tools, not on the citizens whose data is exposed by the deployments. None of these elements are exotic. All of them exist in adjacent policy domains like medical devices, financial services, and aviation. The Ontario auditor general’s report documents the absence of all of them at the provincial layer. The TELUS announcement was made in the absence of all of them at the federal layer.


The political pressure to show progress on AI is producing announcements that look like progress without doing the policy work that progress would require. The pattern will continue to produce announcements until the policy work catches up. The two announcements this week show what the absence of policy costs in real procurement decisions and real citizen data exposure. They are not anomalies. They are the predictable output of the system operating without the policy architecture that would govern it.

The Architecture That Is Missing


The reversed sequence this series has argued for since Paper 1 is now glaringly absent. Publish the federal AI strategy first. Build the five-element coordination architecture next. Bargain the workforce framework alongside it. Only then procure the infrastructure. Instead, governments are procuring first and hoping the rest materialises later.
Paper 5 set out the five minimum instruments a functioning sovereignty regime actually requires. None are in place for the TELUS announcement or the Ontario deployments:

  • Sovereign Exposure Registry – a mandatory, public, machine-readable log of every AI contract, vendor dependency, and data-flow risk.
  • Sovereignty Trigger Framework – binding tests that automatically escalate any procurement exceeding defined foreign-control thresholds.
  • Contingency Architecture Requirement – every vendor must deliver and test a fully documented, air-gappable migration path before signing.
  • Tempo-Matched Monitoring Function – real-time capability tracking that matches the speed of vendor roadmaps, not annual audits.
  • Provincial Sovereignty Backstop – statutory language that prevents federal deals from pre-empting provincial data and governance rules.
Paper 8 added the five-element coordination architecture (shared standards, compliance reporting, incident reporting, workforce frameworks, and analytical coordination) plus the three operational levers that make it enforceable across federal, provincial, and municipal lines. None of those elements or levers appear in the TELUS press release or the Ontario medical-scribes rollout.

What changes tomorrow is therefore simple and non-negotiable. Any new AI infrastructure or tool contract, federal or provincial, must, at signing:
  1. Enter the Sovereign Exposure Registry.
  2. Include a tested contingency architecture.
  3. Be scored on genuine capability sovereignty, not just hosting.
  4. Carry mandatory accuracy, incident, and hallucination reporting before clinical or public-service use.

These are not long-term aspirations. They are the minimum operational requirements for the procurement decisions being made this month. The $2 billion TELUS cluster and Ontario’s medical AI program are not exceptions; they are the rule until the architecture exists. The hole at the centre is no longer theoretical. It now has a price tag and a delivery date.

In the TELUS/NVIDIA case, this scoring would require explicit disclosure of single-vendor control over the entire compute, networking, and software stack, and a binding plan to maintain Canadian operational sovereignty even if U.S. policy, pricing, or technology roadmaps shift.

CUSMA Looming


The CUSMA review opens July 1. The federal position that has to be formed for the review now has to account for two new facts. The TELUS announcement has anchored considerable federal AI procurement to a single foreign vendor before the trade-law architecture has been formed. The Ontario auditor general’s report has documented that provincial AI deployment is operating without the policy framework that would make any of it defensible at the trade-law layer.


CUSMA does not just structure Canadian procurement. It also rewards procurement that operates inside defensible policy frameworks under the national security exception. A federal government invoking the national security exception to justify sovereign AI procurement has to demonstrate that the procurement operates under a coherent national security policy framework. Canada cannot currently make that demonstration because the framework does not exist in published form. The Ontario findings make the demonstration harder because they show that even where partial policy frameworks exist at the provincial level, the enforcement infrastructure does not exist.


Paper 10 will take up the CUSMA architecture under which the federal position has to be formed, including how the three levers operate inside CUSMA Articles 19.11, 19.12, 19.16, and 32.2, how the national security exception applies to procurement decisions that the announcements of the last two days have already made, and what the federal position would have to include about enforceable policy across all three orders of government to be defensible at the trade-law layer.


The window for refining the position is narrowing. The procurement defaults that will define what the position has to defend are being set in the window. The work is policy. The work has not been done.

Jennifer Evans (https://www.b2bnn.com)
Principal, @patternpulseai. Author, THE CEO GUIDE TO INDUSTRY AI. Former chair @technationCA, founder @b2bnewsnetwork. #basicincome activist. Machine learning since 2009.