UPDATE: March 22, 2026 — Governance Requires Presence
In Japanese aesthetics, wabi-sabi is the recognition that beauty and function emerge from imperfection, impermanence, and incompleteness. A handmade bowl is not flawed because it is asymmetrical. It is honest. It was shaped by someone who touched the clay. Wabi-sabi demands presence with the material, not mastery over an abstraction of it.
AI governance needs this. Not perfection. Not a finished regulatory architecture delivered in advance of a technology that is still revealing what it is. But presence. Observation. The willingness to work with the material before writing the rules.
On March 21, Prof. Dr. Cristina Vanberghen published “Trump’s AI Strategy Exposes Europe’s Strategic Ambiguity,” an analysis of the same U.S. AI framework I wrote about here. Her argument is that the Trump administration’s approach represents deliberate strategic coherence: not laissez-faire but a blueprint for consolidating advantage across compute, data, and models. She argues that federal preemption of state AI laws prevents fragmented compliance, that governance can evolve alongside deployment, and that Europe’s real problem is normative power without industrial depth. She is right about Europe’s structural weakness. She is wrong about what the U.S. framework represents, what other nations should learn from it, and how they should approach sovereignty.
On “strategic coherence”: The framework is seven pages. It does not contain the words “hallucination,” “reliability,” “coherence,” or “verification.” Its companion legislation requires LLMs to be “truthful” and “neutral” – terms that have no technical meaning when applied to systems that generate probabilistically plausible text rather than retrieve facts. Coherence requires substance. This framework has a facade.
The same criticism applies, arguably more forcefully, to the Blackburn bill. At 300 pages, with duty of care provisions, third-party audits, and liability mechanisms, it looks like serious governance. That is what makes it more dangerous than the seven-page White House document. The White House framework is an empty room and visibly so. The Blackburn bill is a furnished room built on a foundation that does not connect to the ground. Its truthfulness and neutrality mandates construct an elaborate compliance architecture around a technology the authors appear to have experienced as a text box. You cannot audit an LLM for truthfulness because an LLM does not have a stable, auditable relationship to truth. The bill will not produce accountability. It will produce a certification industry that certifies things that cannot be certified. Neither document was written by anyone who “touched the clay”.
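To see the problem concretely, run the audit in miniature. A minimal sketch, with invented numbers (a toy system that answers correctly about 55 per cent of the time, an assumption for illustration, not a measurement of any real model):

```python
import random

# Toy stand-in for an LLM answering factual prompts: correct on ~55% of
# samples. The rate is invented for illustration only.
def toy_model_is_correct(rng: random.Random) -> bool:
    return rng.random() < 0.55

# A "truthfulness audit": sample the system n times and certify it if
# measured accuracy clears a threshold.
def audit(n_samples: int, threshold: float, rng: random.Random) -> bool:
    hits = sum(toy_model_is_correct(rng) for _ in range(n_samples))
    return hits / n_samples >= threshold

rng = random.Random(0)
verdicts = [audit(n_samples=50, threshold=0.55, rng=rng) for _ in range(10)]
print(verdicts)  # a mix of True and False across identical audits
```

Ten audits of an unchanged system return a mix of pass and fail, because the verdict is itself a random variable. That is what a certification industry built on these mandates would be certifying: sampling noise.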
On preemption as a solution to fragmentation: Federal preemption removes regulatory capacity from states and concentrates it at the federal level. The federal level has then chosen not to regulate. The result is not coherence. It is centralised absence. The legislators behind the GUARDRAILS Act who call this a regulatory vacuum are not taking one side of a debate. They are describing a condition.
On what “coherence” actually looks like: If you want to see what a strategically coherent national AI framework looks like, look east. As I wrote earlier this month, China’s 15th Five-Year Plan is a 141-page mandate that aligns ministries, provinces, industrial capital, energy systems, and technology champions around a shared objective: 90 per cent AI integration across the economy by 2030. AI is mentioned 52 times, up from 11 in the previous plan. Beijing is building sovereign compute infrastructure, deploying autonomous AI systems at port scale, pursuing open-source foundation models as a substrate strategy for the Global South, and integrating AI policy with demographic planning for an ageing population. You do not have to admire the political system to recognise that this is what industrial coherence looks like when a government has decided AI is a structural priority. The U.S. response is a seven-page document about children’s safety and free speech. The comparison is clarifying.
On governance evolving alongside deployment: I agree with this premise. In fact it is central to my own work. You can and should build governance and technology at the same time. But co-evolution requires that governance actually exist. The U.S. framework does not govern alongside deployment. It deregulates and preempts. There is no observation, no iteration, no measured response to what the technology is revealing about itself. Calling this co-evolution is like calling an empty chair a conversation partner.
On Europe’s industrial deficit: Her strongest point. U.S. hyperscalers control roughly 65–75 per cent of Europe’s cloud market. European regulatory authority is structurally constrained by infrastructure dependence. But the conclusion that Europe should learn from the U.S. model inverts the problem. The U.S. model is the source of the dependence. Adopting the governance framework of the country that benefits from your infrastructure deficit is not a path to sovereignty. It is a path to deeper tenancy.
On the Mistral levy and Europe’s shift toward industrial policy: She is right that this shift is real and significant. Where she stops short is in asking why Europe’s regulatory and industrial strategies remain disconnected. The answer is the same on both sides of the Atlantic: the people designing these policies are not engaging with how the technology actually behaves in deployment.
This is the gap that matters. Not the gap between innovation and precaution. Not the gap between speed and safety. The gap between policy and presence.
The Nudgment framework was built to address exactly this. It establishes signal discernment as an organisational capability: the ability to observe what a technology is actually doing, to distinguish meaningful signals from noise, and to act in proportion to what you understand. The Nudgment Maturity Ladder moves organisations from reactive compliance through to strategic anticipation, not by demanding perfection but by building the institutional capacity to pay attention and respond with care. It is, in its own way, wabi-sabi applied to governance: accept that the technology is incomplete, that your understanding is incomplete, that the rules will need to change, and begin anyway, with your hands on the material.
To call the U.S. framework “coherent” is to confuse the absence of friction with the presence of strategy. Governance that co-evolves with technology is not only possible, it is the right approach. But it requires people in the room who have touched the clay.
Original Post:
On March 18 and 20, 2026, the United States released two competing AI governance frameworks. The White House published a seven-page set of legislative principles emphasizing deregulation, federal preemption of state laws, and the protection of free speech. Senator Marsha Blackburn published, then reaffirmed today, a 300-page bill proposing duty of care requirements, third-party audits, liability provisions, and the repeal of Section 230. Both claim to represent the president’s agenda. Neither addresses how large language models actually function, how they degrade, or how governments should govern their own use of them. The country that produces the most powerful AI systems on earth cannot coordinate an AI governance framework with itself. Every other nation making sovereign AI decisions should take note of what that means for the durability of any American commitment (legislative, commercial, or strategic) that their own AI infrastructure depends on.
The White House released a national AI legislative framework. It is seven pages long. It covers children’s safety, energy permitting, copyright, free speech, workforce training, and the preemption of state AI laws. It does not contain the words “hallucination,” “reliability,” “coherence,” or “verification.” It does not address how AI systems actually work.
Rather than a regulatory framework for artificial intelligence, it is a regulatory framework for the internet, updated with the word “AI” pasted over the previous concerns. Children’s safety. Content moderation. Copyright. Platform liability. Section 230. These are 2018 problems wearing a 2026 label. The framework does not appear to have been written by anyone who has used these systems for sustained, complex work, or who understands what happens inside them when they do.
The centrepiece of the “companion” (in practice, competing) legislation, Senator Blackburn’s TRUMP AMERICA AI Act, released two days ago, requires that federally procured large language models be “truthful in responding to user prompts seeking factual information” and “neutral” in their outputs, enforced through third-party audits for “viewpoint or political affiliation discrimination.”
It’s almost like no one involved has ever used an LLM. This requirement reveals a fundamental misunderstanding of what a large language model is. LLMs do not retrieve facts. They generate probabilistically plausible text based on pattern completion across training data. They have no stable, auditable relationship to truth. The same prompt produces different outputs on different runs. The same model degrades predictably over extended reasoning chains, what my own research describes as coherence collapse. These systems actively mask their own failures through confident-sounding confabulation. A legislative requirement that an LLM be “truthful” is like a legislative requirement that a weather forecast be correct. You can write it into law. You cannot make the atmosphere comply.
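A minimal sketch of that mechanism, with a made-up three-entry distribution standing in for a real model’s output probabilities (no actual model works from a lookup table; the shape of the problem is the point):

```python
import random

# Invented next-token distribution standing in for the final sampling step
# of an LLM. Real models compute this over tens of thousands of tokens;
# the three entries here are assumptions for illustration.
NEXT_TOKEN = {
    "The capital of Australia is": [
        ("Canberra", 0.55),   # plausible and correct
        ("Sydney", 0.35),     # plausible and wrong
        ("Melbourne", 0.10),  # plausible and wrong
    ],
}

def sample_completion(prompt: str, rng: random.Random) -> str:
    # Decoding samples from the distribution, as temperature-based
    # sampling does in production systems.
    tokens, weights = zip(*NEXT_TOKEN[prompt])
    return rng.choices(tokens, weights=weights, k=1)[0]

prompt = "The capital of Australia is"
for run in range(5):
    print(f"run {run}: {prompt} {sample_completion(prompt, random.Random())}")
```

Some runs say Canberra, some say Sydney. Nothing in the mechanism marks the true completion as different in kind from the plausible one; both are simply high-probability continuations. “Truthful” is not a property the sampling step can see.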
The “neutrality” requirement is worse. Neutral according to what baseline? Measured by whom? Every design decision in an LLM (what data it was trained on, how its outputs were reinforced through human feedback, what safety constraints were applied, what was excluded) encodes assumptions. There is no neutral. There is only transparent or opaque. The framework demands the former while providing no mechanism to achieve it and no definition of what it would mean.
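A minimal sketch of the baseline problem, with scores, baselines, and tolerance all invented for illustration (no such scale exists in the legislation):

```python
import statistics

# Score a fixed set of model outputs on an invented -1..+1 viewpoint scale,
# then test whether the mean sits within tolerance of a chosen "neutral" point.
output_scores = [0.2, -0.1, 0.4, 0.0, 0.3]  # identical outputs in both audits

def is_neutral(scores, baseline: float, tolerance: float = 0.15) -> bool:
    return abs(statistics.mean(scores) - baseline) <= tolerance

print(is_neutral(output_scores, baseline=0.0))   # False: mean is 0.16
print(is_neutral(output_scores, baseline=0.25))  # True: same outputs pass
```

Identical outputs, opposite verdicts. The result belongs to the baseline, and the baseline is exactly what the mandate never defines.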
What the framework does provide is a comprehensive architecture for federal preemption of state AI laws. This is, structurally, the most consequential element of the document, and the most revealing. The White House framework explicitly calls on Congress to prevent states from regulating AI development “because it is an inherently interstate phenomenon with key foreign policy and national security implications.” States would be blocked from penalizing AI developers for harms caused by third parties using their models. The Attorney General’s AI Litigation Task Force, established in December, is already empowered to challenge state laws that the administration deems inconsistent with federal policy. In this part, as in the “no bias” part, it’s like they’ve been talking to one specific person (ok, maybe a few) who operates LLMs, companies that use them, and supporting technologies.
The practical effect is to strip regulatory capacity from the level of government closest to citizens while concentrating it at the federal level, and then to fill that federal level with a framework that does not address how AI systems function, how they fail, or how governments should govern their own use of them. This is not deregulation. It is the centralisation of regulatory authority combined with the absence of substantive regulation. The states cannot act. The federal government has chosen not to.
This matters beyond the United States. The framework is being released into a global environment where every other major AI-producing nation is building governance architecture: the EU AI Act, China’s 15th Five-Year Plan with its 90 per cent AI integration target, Canada’s Quantum Champions Programme, India’s sovereign foundation model strategy, Switzerland’s public infrastructure AI. The United States, the country that produces the most powerful AI systems on Earth, has responded with a document that treats the technology as a consumer product to be deregulated and a speech platform to be protected from ideological bias.
There is nothing in this framework about AI systems integrated into government operations. Nothing about sovereignty: the question of who controls the AI that processes tax data, health records, immigration applications, benefits determinations. Nothing about what happens when these systems are not tools used by the state but operational components of it. Nothing about the architectural reality that every other serious AI governance effort in the world is now grappling with.
The framework does get one thing right, almost by accident. The provision that states should retain authority over “requirements governing a state’s own use of AI, whether through procurement or services they provide like law enforcement and public education” implicitly acknowledges that government use of AI is a different category from commercial development. But the framework does nothing with this insight. It names the exception and walks away.
The seven pages read like they were written for an industry that builds chatbots, not an industry that is building the operational layer of the state. The United States has the most capable AI systems in the world and the least serious governance framework of any major AI-producing nation. That gap, between capability and governance, is the single most important variable in the global AI sovereignty landscape, and this framework widens it.

