The biggest cultural event of 2023 wasn’t planned or foreseen by anyone, and that’s the point.
My favourite Nudgment example is Barbenheimer, because everyone gets it immediately.
On July 21, 2023, two films opened on the same day. One was a neon-pink comedy about a fashion doll having an existential crisis. The other was a three-hour historical drama about the man who built the atomic bomb. Neither studio planned what happened next.
What happened was Barbenheimer.
Within hours of the release-date announcement, the internet had fused two completely unrelated cultural products into a single phenomenon that was larger, louder, and more commercially powerful than either film could have been alone. People dressed in pink to see a double feature. They bought double-feature tickets. When was the last time that happened? They made memes that crossed aesthetic boundaries no marketing team would have attempted. The combined opening weekend topped half a billion dollars globally. It became the defining cultural moment of the summer, arguably of the year.
And nobody whose job it was to predict these things saw it coming.
This is a story about what happens when signals emerge at the convergence of unrelated domains, and the organizations responsible for detecting patterns are not looking there at all.
The Signal Was in the Collision
Warner Bros. knew Barbie was going to be big. Universal knew Oppenheimer had prestige momentum. Both studios had sophisticated analytics, market research, audience segmentation, and decades of institutional knowledge about how films perform. What neither studio’s apparatus was built to detect was the emergent pattern that only became visible when you stopped looking at each film as an independent data point and started looking at the space between them.
The cultural signal wasn’t Barbie. It wasn’t Oppenheimer. It was the juxtaposition: the absurd, delightful collision of pink and apocalypse, frivolity and gravity, Margot Robbie and Cillian Murphy. The signal existed in the relationship between two unrelated inputs, and it was invisible to any system designed to analyze one domain at a time.
The internet saw it immediately. Not because the internet is smarter than studio analytics, but because the internet is a cross-domain pattern detection system by nature. It processes culture, politics, aesthetics, humour, and timing simultaneously, without departmental boundaries. It doesn’t care which studio owns which film. It sees everything at once and surfaces the anomalous convergence before anyone in a silo can.
These were both enormous releases, so the internet did what the internet does and said: let’s make them happen together. It was hardly the first time two massive films debuted on the same day. What made the combination a phenomenon was the juxtaposition of the completely unrelated and absurd, each the antidote to the other.
The way it spread is what signal detection looks like when it works.
Now Imagine the Stakes Are Higher Than Box Office
In the research framework I have been developing, what I call the Nudgment framework, the capacity to detect weak and emergent signals before they harden into crises depends on exactly the kind of cross-domain pattern recognition that the internet performed accidentally with Barbenheimer. The difference is that in government, the domains aren’t film studios. They are health data, infrastructure systems, immigration flows, economic indicators, environmental sensors, and social services patterns.
The signals that matter most in governance are almost never visible within a single ministry or department. A localized spike in emergency room visits means one thing to a health ministry. A regional infrastructure failure means something to a public works department. A supply chain disruption means something to an economic ministry.
But the combination of all three, happening simultaneously, in the same geography, at the same time, might mean something that none of them can see individually. It might be the early warning of a cascading failure. It might be nothing. The point is that without cross-domain visibility, you cannot even ask the question.

This is what we call Layer 6 in the sovereign AI architecture model I published as part of the “Whose AI Runs the Government?” series. Layer 6 (Signal Intelligence) is a hypothetical sovereign capability that continuously monitors across domains, identifies anomalous patterns, assesses their significance, and surfaces weak signals to human decision-makers before they harden into crises.
No country has built it. It barely even exists in corporations.
A Version of This Happens Every Quarter
Barbenheimer was visible because it was cultural. But the same architectural failure (signals siloed by domain, invisible at the point of convergence) plays out inside organizations constantly, with consequences that are quieter and more expensive.
In the Nudgment framework, this is the departmentalism problem. Signals don’t respect organizational boundaries. A shift in accounts receivable aging is a finance metric. A hiring slowdown is an HR metric. A production bottleneck is an operations metric. A growing order backlog is a sales metric. In most organizations, four departments are looking at four dashboards and seeing four unrelated problems.
What none of them sees, unless the correlation layer has been designed, is that they are looking at the same signal expressed four different ways through four different systems. Finance sees the cash flow tightening. HR sees the hiring freeze. Operations sees the timeline slipping. Sales sees the backlog building. The pattern is only visible when you read across domains, and the organizational architecture is designed to prevent exactly that.
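As a thought experiment, here is a minimal sketch of what that correlation layer could look like. The metric names, the data, and the thresholds are all hypothetical: the point is only that each department’s series looks unremarkable on its own dashboard, and the shared anomaly only appears when every stream is standardized and read in the same reporting window.

```python
# Hypothetical correlation-layer sketch: four departmental metric streams,
# each standardized to z-scores, flagged only when all of them deviate
# in the same reporting window. All names and values are illustrative.
from statistics import mean, stdev

def zscores(series):
    """Standardize one department's metric so departments become comparable."""
    m, s = mean(series), stdev(series)
    return [(x - m) / s for x in series]

def converging_windows(streams, threshold=1.5):
    """Return the window indices where EVERY stream is anomalous at once."""
    standardized = {name: zscores(vals) for name, vals in streams.items()}
    n = len(next(iter(standardized.values())))
    hits = []
    for t in range(n):
        deviations = [z[t] for z in standardized.values()]
        if all(abs(d) > threshold for d in deviations):
            hits.append(t)
    return hits

# Each department sees one modest wobble; only the joint view exposes
# that window 5 is the same signal expressed four different ways.
streams = {
    "finance_receivable_aging_days": [30, 31, 29, 30, 32, 45, 31],
    "hr_open_requisitions":          [12, 11, 13, 12, 11, 3, 12],
    "ops_schedule_slip_days":        [1, 0, 1, 1, 0, 7, 1],
    "sales_order_backlog_units":     [100, 98, 102, 99, 101, 160, 100],
}

print(converging_windows(streams))  # -> [5]
```

Note that the hiring freeze deviates downward while the other three deviate upward, which is why the check uses the magnitude of the deviation rather than its direction: a cross-domain signal rarely points the same way on every dashboard.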
This is what we call systemic hallucination: the condition in which an enterprise is making high-speed, AI-informed decisions across multiple functions simultaneously, but without the cross-functional signal discernment to detect that the inputs are degraded, the outputs are misaligned, or the decisions are compounding into a trajectory that no single department can see. The organization isn’t lying. It isn’t incompetent. It is producing confident, internally consistent narratives that diverge from reality, at a rate proportional to the speed of its AI adoption and inversely proportional to its capacity to read signals across boundaries.
The Salesforce Agentforce case is a textbook example. Salesforce detected the AI signal correctly: agentic AI was strategically important. What it missed were the secondary signals: that its customer base wasn’t ready, that its pricing model was misaligned, that forcing adoption through narrative momentum would produce backlash rather than buy-in. Each of those signals was visible within a different function. Customer success saw the readiness gap. Finance saw the pricing confusion. Community teams saw the skills shortage. The pattern they formed when read together, a product being pushed faster than the ecosystem could absorb, was invisible because no system was reading across all three simultaneously.
ADP provides the counterexample. A payroll company processing data for 41 million workers doesn’t get to hallucinate. A payroll error means someone can’t pay rent. The tolerance for confident-but-wrong is zero. So ADP built what most organizations haven’t: a cross-domain signal detection architecture where anomaly data from payroll, HR, compliance, and compensation feeds into a unified system, with human judgment at both ends and AI in the middle. It’s unglamorous. It doesn’t generate headlines about the year of agentic AI. It just catches errors before they reach the person whose grocery money depends on the answer being right.
The Barbenheimer lesson for enterprises is the same one it teaches about culture: the signal that matters most is often the one that lives in the space between your departments, in the relationship between data points that no single team owns. The organizations that see it are the ones that have built the architecture to look across boundaries. The ones that don’t are the ones producing confident quarterly narratives that sound coherent internally and bear less and less resemblance to what’s actually happening.
Why the Studios Missed It
It was not a failure of intelligence. Both Warner Bros. and Universal had access to more data, better analytics, and deeper market research than the average person posting memes on Twitter. The failure was architectural. Their signal detection systems were designed to analyze one product in one market through one lens. Barbie’s team was tracking Barbie metrics. Oppenheimer’s team was tracking Oppenheimer metrics. There was no system looking at the relationship between the two, because the two existed in completely separate organizational, competitive, and analytical structures.
The pattern was invisible not because it was hidden, but because the architecture was not built to see it.
This is precisely the failure mode that faces governments deploying AI across operational functions. Tax administration has its own systems. Immigration has its own systems. Public health has its own systems. Each department optimizes within its own domain. When something goes wrong in one domain, the department responsible investigates within its silo. When something goes wrong across domains, when the signal only becomes legible by combining data from health, infrastructure, and immigration simultaneously, nobody is watching.
The Barbenheimer case is charming because the stakes were low. Nobody was harmed by studios failing to anticipate a meme. But the architectural failure is identical to the one that governments face when they deploy AI across operational functions without building cross-domain signal detection. The signals that predict crises don’t announce themselves within departmental boundaries. They emerge in the spaces between.
The Sovereignty Dimension
There is another layer to this, and it is the one that connects Barbenheimer to the sovereignty question.
If a government did build a cross-domain signal detection capability, a system that monitored across health, infrastructure, economic, and social services data to surface weak signals before they became crises, that system would, by definition, require access to the most sensitive data the state produces. Not just the data itself, but the pattern layer above the data: what the government is watching, what anomalies it is tracking, what it is worried about before it knows it is worried about it.
A foreign provider operating that system would have better situational awareness of the state’s emerging vulnerabilities than the state’s own intelligence services. This is not a data breach. It is a structural intelligence advantage.
This is also why sovereign AI infrastructure matters. Not because of abstract nationalism, and not because foreign providers are malicious. It matters because the highest-value cognitive function a state can build, the ability to see what is coming before it arrives, is also the function that is most dangerous to outsource.
Barbenheimer was the internet doing signal detection for fun. The question is what happens when the stakes are national security, public health, and democratic governance, and the system that detects the pattern is controlled by someone who doesn’t answer to your citizens.
And unlike Barbenheimer, the stakes are not box office receipts. They are population-scale behavioral change that is reshaping entire industries while the people who need the signal most don’t know it exists.
GLP-1 receptor agonists, the class of drugs that includes Ozempic and Mounjaro, were developed for diabetes management and weight loss. That’s the single-domain story. But the emerging cross-domain signal is far larger: large-scale observational research, including a 2026 study of US veterans published in the BMJ, has found associations between GLP-1 medications and reduced substance use across multiple categories, including alcohol, tobacco, cannabis, opioids, and stimulants. Some users also report reduced gambling behavior, reduced interest in romance, and reduced dependence on external stimulus in general. The plausible mechanism is that these drugs affect reward pathways in the brain, not just appetite. If these associations hold up in clinical trials, we are looking at a pharmacological intervention that is altering human consumption and impulse patterns at population scale, in ways that have nothing to do with weight loss.
Now ask yourself: who owns this signal?
The pharmaceutical companies see drug sales data. The restaurant industry sees declining same-store traffic and changing portion preferences. The alcohol industry sees volume declines it is attributing to generational trends. The gambling sector sees shifts in engagement patterns. Grocery retailers see basket composition changing: fewer impulse categories, different product mix. Insurance companies see claims data shifting. Public health agencies see substance use metrics moving. Each of these industries and institutions is looking at its own data, in its own silo, and constructing its own explanation for what it is seeing.
Nobody is reading the convergence.
This is Barbenheimer at industrial scale, except instead of two films creating a cultural moment, a single pharmacological intervention is generating secondary signals across restaurants, alcohol, gambling, grocery, insurance, and public health simultaneously, and the pattern is visible only if you read across all of them at once. A restaurant chain attributing declining traffic to the economy is looking at a GLP-1 signal through the wrong lens. An alcohol company attributing volume decline to Gen Z preferences is constructing a narrative that may be partially or entirely wrong. A public health agency celebrating declining substance use rates may be measuring a pharmaceutical effect rather than a policy success, and if the prescriptions stop or become unaffordable, the improvements reverse.
The questions this should provoke are not abstract:
What would it look like if someone were actually synthesizing the GLP-1 signal across domains? What if restaurant traffic data, alcohol sales, gambling revenue, grocery basket composition, insurance claims, and substance use metrics were being read together, as a system, rather than interpreted in six different silos by six different industries with six different explanatory narratives?
Who should be doing this? Is it a public health function? A central statistics agency? A regulatory body? An insurance consortium? A cross-industry research initiative? Right now the answer is: nobody. The signal is accruing in fragments across industries that do not share data, do not share interpretive frameworks, and in most cases do not know they are looking at the same underlying phenomenon.
How would you build the infrastructure to detect this kind of signal before it hardens into obvious data? That is the Nudgment question. Not “what does the data say?” The individual sectors already have the data. The question is: who is reading across the boundaries? Who has the architecture to see that a diabetes drug is reshaping the restaurant industry, the alcohol market, the gambling sector, and the public health landscape simultaneously?
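One hypothetical answer to the “how would you build it” question: test each sector’s trend against a shared driver rather than explaining each decline in isolation. The sketch below does this in the crudest possible way, with Pearson correlation against a prescription-volume proxy. Every series, name, and threshold here is synthetic and illustrative; real infrastructure would need lagged effects, confounders, and causal checks, not a raw correlation.

```python
# Synthetic sketch: correlate each sector's trend against a shared
# driver proxy (prescription volume) instead of interpreting each
# series in its own silo. All data and names are illustrative.
from statistics import mean

def pearson(x, y):
    """Plain Pearson correlation coefficient between two series."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def shared_driver_report(driver, sectors, threshold=0.8):
    """Flag sectors whose trend tracks the driver, with the correlation."""
    return {name: round(pearson(driver, series), 2)
            for name, series in sectors.items()
            if abs(pearson(driver, series)) >= threshold}

glp1_prescriptions = [10, 14, 20, 28, 39, 53]  # synthetic adoption curve

sectors = {
    "restaurant_traffic_index":  [100, 99, 96, 92, 87, 81],
    "alcohol_volume_index":      [100, 100, 97, 93, 88, 82],
    "gambling_engagement_index": [100, 98, 95, 91, 85, 78],
    "hardware_sales_index":      [100, 103, 99, 104, 101, 105],  # unrelated control
}

report = shared_driver_report(glp1_prescriptions, sectors)
# Three "declines" turn out to co-move with the driver; the control does not.
```

Run on six silos at once, even a toy layer like this would reframe the question from “why is our traffic down?” to “what single driver are we all downstream of?”, which is the reframing the essay argues nobody currently owns.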
And this is where the Nudgment framework matters most: this is not a failure. Nobody dropped the ball. There is no ball to drop, because no organization, no industry, and no government agency is responsible for reading across these domains simultaneously. The signal is emergent. It is accruing in the spaces between industries that have no shared data infrastructure, no common interpretive framework, and in most cases no awareness that they are observing different facets of the same phenomenon. With AI, we now have the ability to do pattern detection at scale, early. But very few organizations have set themselves up to detect these kinds of signals.
This is the core argument of the Nudgment paper: the judgment to read a signal like this cannot be engineered, purchased, or installed. It can only be cultivated and detected, and it can only emerge in conditions where the signal is set up to be detectable in the first place. What can be built is the detection infrastructure: in this case, the correlation layer that makes cross-domain signals visible so that when judgment does develop, it has something to work with.
The GLP-1 case is a real-time illustration of what that means in practice. The data exists. The signal is coherent enough to warrant attention. What doesn’t exist is the architecture that would allow anyone: a public health agency, a cross-industry consortium, a central statistics body, to see the restaurant data next to the alcohol data next to the gambling data next to the substance use data and recognize that they are looking at one signal, not six. Building that architecture is not the same as building judgment. It is building the conditions in which judgment can emerge. That is the first act of cultivation, and right now, for the GLP-1 signal, nobody has taken it.
And perhaps most importantly: what are the consequences of not reading it? If the GLP-1 effect on reward pathways is real and durable, then every industry exposed to discretionary, impulse-driven, or addiction-adjacent consumption is operating on demand models that are already wrong. The organizations that detect this first and adjust will have an asymmetric advantage. The ones that construct reassuring single-domain narratives, “our customers are just being cautious,” “this is a temporary trend,” “Gen Z drinks less,” are in the early stages of systemic hallucination. They are telling themselves a coherent story that has stopped updating.
This is what Nudgment looks like as a live question. Not a historical case study. Not a framework applied retrospectively. A real signal, accruing right now, across domains that don’t talk to each other, with nobody at the controls. The data exists. The pattern is visible, if you know where to stand. The question is whether anyone will build the architecture to see it before the single-domain narratives harden into strategies that are already obsolete.
The Signals We Need to Watch For
Governments are currently in the same position, except the signals they are failing to detect are not going to announce themselves as cheerfully as a pink-and-mushroom-cloud meme. The signals that predict cascading public health failures, infrastructure breakdowns, or economic disruptions are quiet. They emerge slowly. They cross domains that don’t talk to each other. And by the time they are visible to everyone, they are no longer signals. They are crises.
The capability to see them early exists as a theoretical possibility. No government has built it. Most governments are not even asking the question. And the infrastructure they are building their AI operations on (the foreign cloud platforms, the foreign orchestration layers, the foreign models) is not designed to enable this kind of cross-domain visibility. It is designed to optimize within silos, because silos are how platforms sell services.
Barbenheimer was a gift. It showed us what cross-domain signal detection looks like when it happens by accident, in a low-stakes environment, with delightful results. Will governments and organizations build the architecture to do it on purpose, with sovereign infrastructure and highly calibrated management, before the next signal arrives: the one that is not a meme, not fun, and not something you can catch up to after the fact?

