Geoffrey Moore, Systems of Record, AI and the Inverted Value Layer
Geoffrey Moore is more than a marketing legend: he invented the dominant characterization of the technology adoption cycle in the ubiquitous tech marketing bible Crossing the Chasm. In a recent interview with Diginomica, he argues that the end product of enterprise AI is a deterministic action, and that the universe of deterministic actions lives in systems of record and systems of engagement. SAP has them, Oracle has them, Salesforce has them. OpenAI does not. Therefore, the value lives where the actions live.
He is half right. Systems of record are operationally indispensable. But calling them the locus of value confuses the execution layer with the intelligence layer. Systems of record are, in the end, repositories. They store, enforce, and process.
The value is not in the action. It is in knowing which action to take, when, and why. That is the significance of AI’s pattern recognition intelligence: seeing invisible patterns in data and applying them to future strategy, execution, and action. Moore’s framework systematically underweights it.
This is not an oversight specific to Moore, but he is a highly visible example of it. What is genuinely difficult to understand is that in 2026, after years of deployment, experimentation, and billions in investment, pattern recognition remains virtually absent from the mainstream enterprise AI conversation. It is not a contested idea that lost an argument. It is a critical, underutilized capability with enormous value that has simply never entered the room.
This is not a minor omission. It reflects a structural bias in how the enterprise technology world has thought about value for decades, and it matters enormously for how organizations invest in AI going forward.
The execution fallacy
Enterprise technology has long been organized around a simple chain: data is captured, processes are defined, actions are executed, and value is realized. Systems of record sit at the center of this model, enforcing determinism, consistency, and auditability. For decades, competitive advantage came from optimizing these systems: faster processing, cleaner workflows, better reporting. AI was initially absorbed into this worldview as an enhancement layer, promising incremental efficiency gains without challenging the underlying structure.
Moore’s framework preserves this hierarchy. In his telling, agentic AI will prove its worth when it can produce whole products for specific business processes: deterministic, repeatable, auditable outcomes. The systems of record remain sovereign. AI earns its place by serving them.
But pattern recognition does not fit neatly into this model. It introduces an interpretive step between data and action, one that cannot be fully specified in advance, and one that I’ve argued the near-future enterprise and public-sector organization will be structured around. Recognizing a meaningful pattern is not the same as retrieving a record, executing a rule, or automating a task. It involves judgment under uncertainty, weighing weak signals, and deciding which anomalies deserve attention and which can be ignored. That is precisely why it is so valuable, and why Moore’s framework has no natural place for it.
Why pattern recognition is hard to own and easy to ignore
One reason pattern recognition is downplayed is that it is hard to buy and difficult to operationalize. Enterprises are comfortable purchasing software that can be installed, certified, audited, and placed on a roadmap. Pattern recognition is emergent. It does not behave deterministically, and its successes are often invisible: it surfaces opportunities and flags risks that must then be acted upon, rather than resolving them itself. Noticing a risk earlier than competitors, connecting signals across silos, or reframing a problem before it becomes expensive rarely shows up cleanly in quarterly metrics. These are counterfactual wins, and traditional ROI frameworks struggle to accommodate them.
Moore would likely point out that this is exactly why the chasm exists: pragmatic buyers need whole products with measurable outcomes, not emergent capabilities with ambiguous returns. Fair enough.
But that is a description of the adoption challenge, not a statement about where value actually resides. Confusing the two is how enterprises end up investing heavily in automation while underinvesting in interpretive capacity. AI becomes a tool for doing the same things faster, rather than for seeing different things sooner.
Pattern recognition is not the missing model capability
It is tempting to assume that enterprise AI’s failure to capitalize on pattern recognition reflects a technical limitation. It does not. Modern AI models are exceptionally good at detecting statistical and structural patterns across vast datasets. Pattern recognition is not a scarce capability at the model level; it is abundant, commoditized, and improving rapidly.
The failure lies elsewhere.
What enterprises lack is pattern recognition as an organizational capability: the ability to notice emerging signals across systems, interpret their significance over time, and act on them with authority. Model-level pattern detection and enterprise-level pattern recognition are not the same thing. One is computational. The other is institutional.
Most enterprise AI deployments stop at execution. They optimize for determinism, auditability, repeatability, and narrow task success. These qualities are valuable, but they crowd out a very different kind of intelligence: interpretive judgment under uncertainty. Pattern recognition at the enterprise level requires tolerance for ambiguity, cross-domain synthesis, and the willingness to act before outcomes are fully measurable. These are precisely the traits most organizations are structurally designed to suppress.
When everything works and nothing is learned
This is how pattern recognition blindness persists even in “successful” AI programs.
An enterprise can deploy AI across customer support, sales forecasting, churn prediction, and supply-chain optimization, and still miss the early signal that a product category is structurally failing, a regulatory risk is quietly consolidating, or a market has shifted beneath its feet. Each system performs well in isolation. KPIs are met. Dashboards glow green. The organization learns nothing.
Local optimization masks global blindness. Patterns that matter do not belong to a single workflow or function; they emerge across time, systems, and domains. When AI is confined to task execution, those patterns are never surfaced as strategic signals. They remain fragmented, inert, and ultimately ignored.
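The dynamic is easy to demonstrate with numbers. Below is a minimal sketch in Python, using entirely hypothetical data and system names and a standard evidence-combination rule (Stouffer's method): three siloed systems each stay under the local alert threshold their own dashboard applies, so every KPI is green, yet the same weeks are mildly elevated in all three, and only a cross-system view surfaces the shared pattern.

```python
import math

# Illustrative sketch -- all data and system names are hypothetical.
# Three siloed systems each report a weekly anomaly z-score. Individually,
# each stays below the alert threshold its own dashboard uses, so every
# KPI is "green". But the same weeks are mildly elevated in all three,
# and combining the evidence across systems crosses the threshold.

THRESHOLD = 2.0  # z-score alert threshold each silo applies locally

signals = {
    "support_tickets": [0.2, 0.4, 1.5, 1.6, 1.4],
    "churn_model":     [0.1, 0.3, 1.4, 1.7, 1.5],
    "supply_chain":    [0.0, 0.2, 1.3, 1.5, 1.6],
}

# Each silo in isolation: nothing to report.
for name, zs in signals.items():
    assert max(zs) < THRESHOLD, f"{name} would have alerted on its own"

# Cross-system view: combine each week's evidence with Stouffer's method
# (sum of z-scores divided by the square root of the number of systems).
weeks = len(next(iter(signals.values())))
combined = [
    sum(zs[w] for zs in signals.values()) / math.sqrt(len(signals))
    for w in range(weeks)
]

alerts = [w for w, z in enumerate(combined) if z >= THRESHOLD]
print("combined z per week:", [round(z, 2) for z in combined])
print("weeks that alert only in the cross-system view:", alerts)
```

In a real deployment the combination rule, thresholds, and signal definitions would all be contested. The structural point stands regardless: the alert exists only at the layer that sees across silos.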
Pattern recognition without authority is just observation
Even when patterns are detected, enterprises often lack the governance structures to act on them. There is rarely a clear owner for cross-system insight, no formal forum where weak signals can interrupt execution, and no decision rights attached to interpretive findings. Pattern discovery becomes advisory at best, something to be noted, not something empowered to change direction.
This is why enterprises do not suffer from a shortage of data or signals. They suffer from a shortage of interpretive capacity: the ability to determine which patterns matter, when they matter, and who is authorized to respond. Without that layer, AI remains a sophisticated automation engine rather than a strategic intelligence system.
Until enterprises elevate pattern recognition from a background model capability to a first-class organizational function, embedded in governance, decision-making, and leadership accountability, AI will continue to execute brilliantly while failing to see what matters most.
Pattern recognition, in other words, should become enterprise signal detection.
An uncomfortable reclassification
There is a deeper narrative tension that Moore’s framework avoids. If pattern recognition is acknowledged as the primary source of intelligence-driven value, then systems of record are reclassified. They remain essential, but as execution substrates rather than strategic differentiators. The center of gravity shifts upward, away from transaction processing and toward sense-making.
For vendors and organizations alike, this is an uncomfortable move. It challenges long-standing power structures and business models built around controlling workflows rather than interpreting reality. When Moore says that SAP and Oracle and Salesforce have the library of deterministic actions and OpenAI does not, he is describing a real operational fact. But he is also, perhaps inadvertently, defending an incumbency model by treating the execution layer as the value layer. The question is not whether enterprises need systems of record. Of course they do. The question is whether those systems are where competitive and operational intelligence enters the picture, drawn from existing datasets, or merely where the resulting actions get carried out.
Trapped value, revisited
Moore’s concept of trapped value is genuinely useful. Every organization has processes where productivity is constrained by legacy systems, fragmented data, or manual workarounds. His instinct (find the constraint, apply technology there) is sound.
But consider what “trapped” actually means in most cases. The data is in the system of record. It has been there for years. The trap is not that the data is inaccessible. The trap is that no one can see the pattern. Twenty-five instances of SAP across sixteen acquired companies, to use Moore’s own example, do not lack for data or deterministic actions. What they lack is an interpretive layer that can make sense of the whole. That is a pattern recognition problem, not an execution problem, and no amount of optimizing the systems of record will solve it.

This should be obvious by now. The fact that it is not, that the enterprise world’s most respected adoption theorist can discuss trapped value at length without once naming the capability most likely to release it, tells you how deep the blind spot runs.
This is the gap in Moore’s framing. He correctly identifies where trapped value accumulates (in fragmented, over-layered enterprise systems) but then points to those same systems as the answer. The systems are the terrain. Pattern recognition, freeing this trapped data, is the map.
Governance and the sense-making question
Pattern recognition also raises governance questions that Moore’s framework sidesteps. Who decides which patterns constitute important insight, and which matter enough to act on? Who sets the priors that shape interpretation? Who is accountable when an inferred pattern leads to the wrong indicator, the wrong decision? The answers demand judgement and accountability, and they carry both opportunity and risk. These are not product questions. They are institutional ones. They touch on authority, responsibility, and trust.
As long as AI is framed as an assistant or a copilot, safe metaphors that emphasize support and execution rather than judgment, these questions can be deferred. Once AI is acknowledged as a sense-making layer, they become unavoidable. Moore’s preference for metaphors that embed relationship, such as the tutor or the concierge, gestures toward this but does not go far enough. A tutor helps you learn what is already known. Pattern recognition surfaces what is not yet understood. And when deployed properly, it is *the* competitive advantage of AI in the enterprise.
The real hierarchy
None of this diminishes the importance of systems of record or deterministic workflows. Enterprises run on them, and they will continue to do so. Moore is right that no one is ripping out SAP, and the so-called SaaSpocalypse is a fantasy. Half a century of business acumen encoded in enterprise software is not going anywhere.
But treating systems of record as the primary locus of intelligence is increasingly misaligned with reality. In a world saturated with data, the scarce resource is not information or action capacity. It is attention, interpretation, and the ability to recognize emerging patterns before they harden into crises or missed opportunities.
The irony is that everyone already relies on pattern recognition. Senior leaders do it informally, drawing on experience and intuition. Analysts do it manually, stitching together reports from disparate systems. What AI offers is not the replacement of these human capabilities, but their amplification and scaling. Moore’s framework acknowledges none of this. It sees the world from the systems of record outward, and treats everything upstream of execution as ancillary.
Respectfully, that has the hierarchy upside down. Systems of record are where actions are executed. Pattern recognition is where value is created. Until enterprise AI strategy reflects that distinction, organizations will continue to optimize the execution layer while starving the intelligence layer, and wonder why the technology that was supposed to be transformative keeps delivering incremental results.
Moore’s chasm model remains indispensable for understanding how technologies get adopted. But his answer to where the value lives needs inverting. The deterministic action is the last mile. The first mile, the mile that determines whether the action is the right one, is pattern recognition. That is the capability enterprise AI continues to sleep on, ironically the kind it pays consultants billions to generate externally, and it is the one that will separate the organizations that merely adopt AI from those that are actually transformed by it.
Applying pattern recognition to owned corporate data is not an emerging insight waiting for its moment. It is an enormous competitive advantage, and a strange and costly failure of recognition that has persisted for years, continuing every quarter. Organizations pour resources into optimizing infrastructure and deploying actions without investing in understanding which actions matter. At some point, the question stops being “why isn’t anyone talking about this?” and becomes “what is it about corporate structure that (aside from a couple of examples) makes it so resistant to seeing where its own value comes from?”





