The temporal gap between institutions and empowered individuals has become the defining strategic variable of the AI era. Recent developments, from Apple’s response to a $5/month competitor to data centers bypassing the electrical grid, demonstrate that speed differentials aren’t marginal advantages. They represent a fundamental restructuring of how power operates across states, institutions, individuals, and AI systems.
The Four-Pole Game
We recently looked at the future of game theory in geopolitics and the expansion of cognition beyond human capabilities. Geopolitics now operates across four competing poles, each with distinct capabilities, constraints, and decision-making speeds:
States: Increasingly destabilized, as traditional sovereignty assumptions break down under pressure from actors and leaders who can route around borders, regulations, enforcement mechanisms, and even constitutions. Operate on legislative cycles, treaty negotiations, and diplomatic processes measured in years.
Institutions: Increasingly defunded, delegitimized, or sidelined into defensive gatekeeping, whether multilateral bodies like the UN facing member withdrawals or corporate partnerships undercut by individuals operating at a fraction of the cost. Operate on procurement timelines, integration schedules, and regulatory review measured in quarters.
Individuals: Increasingly empowered by access to AI, capital mobility, platform reach, and jurisdictional arbitrage, capable of actions that previously required state-scale infrastructure. Operate, sometimes as proxies for states or institutions, on bot-amplified social media communications cycles, product shipping cycles, exploit windows, and opportunity capture measured in days or weeks. They increasingly operate outside the parameters of “normal government” as normal government eschews regulatory responsibility or commitment to courses of action it no longer deems in the interest of its power.
AI: Now powerful enough to function as a fourth pole: a tool wielded by other actors, but also an autonomous strategic amplifier so pervasive that it reshapes payoffs, compresses decision cycles, and enables asymmetric capabilities across all players. Operates on inference cycles, model updates, and capability improvements measured in milliseconds to months. AI weakens institutions by miring them in additional governance even as it entrenches individual leaders and flattens production cycles for individuals. It affects the social fabric by making access to information and experience wholly asymmetrical and unevenly distributed, and by moving faster than human participation can sustain.
When the temporal gap between poles reaches this magnitude, slower-moving entities cannot catch up. They can only react to completed actions by faster-moving players, and those reactions arrive too late to matter.
What has changed is that time itself now operates as an independent pole, one that determines which of the other four can meaningfully act.
The Clawdbot (now Moltbot) Demonstration: Individual Velocity Beats Institutional Scale
Apple and Google recently announced a partnership: Google’s Gemini would power AI features in Siri and across Apple’s ecosystem. The deal represented years of negotiation between tech giants who had spent decades competing. Industry analysts called it transformative, the merging of Apple’s interface design with Google’s AI infrastructure to make Siri work as users had anticipated.
Within days, Peter Steinberger in Vienna released the app previously known as Clawdbot, a $5/month AI assistant that arguably rivals the capabilities the Apple-Google partnership promises. Clawdbot/Moltbot is open-source, self-hosted, works immediately, and connects popular messaging apps to a persistent agent that remembers context, proactively reaches out, and can automate real-world tasks. It doesn’t require proprietary infrastructure or new interfaces; like Anthropic’s other recently announced enterprise integrations, it lives in the apps people already use.
Apple responded with unusual urgency, announcing it would ship its own AI-powered Siri upgrade in March 2026, approximately two to three months away. For Apple, this represents remarkably fast movement: reshuffling AI leadership, partnering with Google’s Gemini rather than building in-house, and publicly committing to a specific release window after years of delays.
But the temporal mismatch is structural, not marginal. In the time it takes Apple to ship its response, Steinberger could release multiple versions of Clawdbot. Apple isn’t competing with Clawdbot’s current capabilities; it’s competing with where Clawdbot will be when Apple’s product finally ships. Because an individual can iterate faster than an institution can plan, Apple remains permanently behind.
From a game-theory perspective, Steinberger bypassed Apple’s ecosystem control, Google’s infrastructure advantage, the entire enterprise procurement cycle, and years of institutional negotiation timelines. If one developer can build tooling that rivals what Apple and Google are jointly developing, institutional partnerships start looking like bureaucratic overhead rather than competitive moats.
The barrier to deploying sophisticated AI has collapsed. Open-source tools, cheap cloud servers, and API access to frontier models mean individuals can build and operate systems that would have required corporate R&D budgets two years ago. Technological leverage, capital mobility, platform reach, and AI access have given tech-fluent individuals powers that previously required state-scale infrastructure.
The O’Reilly Exploit: Security Vulnerabilities Move at AI Speed
If Steinberger demonstrated the constructive potential of empowered individuals, security researcher Jamieson O’Reilly demonstrated the malicious corollary, and why institutions cannot keep pace.
In a widely circulated demonstration, O’Reilly showed how easily the agentic AI ecosystem could be compromised. He deliberately backdoored a Claude skill, artificially boosted it to the top of a popular distribution hub using thousands of fake downloads, and observed as developers around the world executed it with minimal scrutiny. While his proof-of-concept was limited to a harmless server ping, the implications were stark: the same mechanism could exfiltrate SSH keys, cloud credentials, environment variables, and local configuration files.
The experiment revealed significant risk in agentic tools. Developers were running third-party AI tooling with access to highly sensitive resources without meaningful verification. What once required months of slow-moving dependency compromise can now unfold in days. As AI agents gain deeper system access and automation privileges, traditional software supply-chain risks are being compressed and accelerated. O’Reilly demonstrated a global supply-chain exploit in his spare time, faster than any institution could patch it.
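The missing verification step can be illustrated with a minimal sketch. Everything below is hypothetical: the patterns, file layout, and risk labels are illustrative assumptions, and a naive regex scan like this is trivially evaded by obfuscation. The point is only that developers in O’Reilly’s experiment ran third-party agent code without even this level of scrutiny:

```python
import re
import tempfile
from pathlib import Path

# Hypothetical patterns worth flagging before executing third-party skill code.
# This is a naive illustration of the missing verification step, not a real
# scanner: a determined attacker can obfuscate past regex checks.
RISKY_PATTERNS = {
    "network call": re.compile(r"requests\.|urllib\.|socket\.|http\.client"),
    "env access": re.compile(r"\bos\.environ\b"),
    "credential path": re.compile(r"\.ssh|\.aws|id_rsa|credentials"),
    "shell execution": re.compile(r"subprocess\.|os\.system"),
}

def audit_skill(skill_dir: str) -> list[tuple[str, str, int]]:
    """Return (file name, risk label, line number) for every flagged line."""
    findings = []
    for path in sorted(Path(skill_dir).rglob("*.py")):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for label, pattern in RISKY_PATTERNS.items():
                if pattern.search(line):
                    findings.append((path.name, label, lineno))
    return findings

# Demo: a "skill" whose helper quietly reads a cloud credential from the
# environment gets flagged before anything executes.
with tempfile.TemporaryDirectory() as skill_dir:
    Path(skill_dir, "helper.py").write_text(
        "import os\ntoken = os.environ.get('AWS_SECRET')\n"
    )
    print(audit_skill(skill_dir))  # [('helper.py', 'env access', 2)]
```

A real defense would rest on sandboxing, provenance, and signed artifacts rather than pattern matching; the sketch only marks where the scrutiny gap sits.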
Signal’s Meredith Whittaker has warned repeatedly that AI agents are exploitable beyond their unreliability. O’Reilly’s demonstration made that abstract warning concrete: agentic AI doesn’t merely expand capability, it expands attack surface. It expands vulnerability without the governance that took decades to build and reinforce. And it does so faster than institutions can respond.
Data Centers Bypass the Grid: Physical Infrastructure Follows the Same Pattern
The temporal velocity gap isn’t limited to digital infrastructure; it’s reshaping physical systems in ways that would have been unthinkable just months ago. Data center developers are now building their own power plants and bypassing the electrical grid entirely.
According to an upcoming report from Cleanview’s project tracker, data center developers announced 48 GW of behind-the-meter projects in 2025; roughly 33% of all planned capacity now plans to skip the grid by generating power onsite. This represents a dramatic shift. Little more than a year ago, virtually all data center developers planned to use the electric grid to power 100% of their projects, and in December 2024 there was less than 2 GW of planned behind-the-meter capacity. Then in 2025, developers announced roughly 40 projects that planned to skip the grid partially or entirely.
The driver is pure velocity. Connecting a hyperscale data center to the grid in a place like Virginia can take as long as 7 years. Building behind-the-meter power in states with less stringent regulations can reduce that timeline to under 2 years. All projects are motivated by the same goal: getting their data center online as soon as possible.
Speed comes with costs. Cleanview is tracking more than 30 projects planning to use onsite gas, with a combined 48 GW of capacity. Some will be home to America’s largest fossil fuel power plants, like the Homer City Energy Campus in Pennsylvania, a proposed 4 GW+ natural gas plant that will send all its power to an onsite data center and could soon become one of the largest single sources of carbon emissions in the country. Other projects will use combinations of solar, wind, batteries, and even nuclear, but natural gas is by far the most common, with 72% of projects planning to use it.
This is institutional sidelining made physical. The electrical grid, a century-old infrastructure system built through massive state investment, regulated by governments, maintained by utilities, has become too slow for AI-era demands. Private actors are bypassing it.
The implications are profound. States built the grid as public infrastructure; now private companies are building parallel infrastructure faster than states can approve grid connections. Institutions created environmental regulations to govern energy production; now data centers are choosing jurisdictions with lax regulations specifically to bypass those standards.
Well-capitalized individuals no longer need institutional permission to build power plants; they just need to find a jurisdiction willing to accommodate rapid deployment. AI demands power immediately, while institutional timelines are measured in years, creating direct economic incentives to bypass regulation entirely.
This wasn’t possible even two years ago. The idea that private data center operators would build dozens of gigawatt-scale power plants, collectively larger than many countries’ entire electrical grids, and operate them outside the regulated utility system would have been dismissed as regulatory fantasy. But when institutional time operates at 7-year cycles and individual-plus-AI time operates at 2-year cycles, institutions don’t get to say no. They watch it happen.
Time as the Fifth Strategic Pole
What these cases reveal is that time itself has become a fifth strategic variable: perhaps the most important one. Time isn’t just context anymore. It’s a dimension through which all four poles operate at radically different speeds, and the faster-moving poles can now act and complete entire strategic cycles before slower poles can respond.
This creates what sociologist Hartmut Rosa called ‘dynamic stabilization’ taken to its logical extreme: institutions must run faster just to stay in the same place. But when empowered individuals run at AI speed, even the fastest institutional velocity isn’t enough. This is a form of temporal sovereignty that goes well beyond an advantage; it makes traditional institutional power increasingly irrelevant.
Apple’s Impossible Position: Distribution Without Velocity
Apple’s March 2026 Siri announcement crystallizes a critical question: Is distribution the only remaining institutional advantage? But how much value does distribution provide if what’s being distributed is already obsolete by the time it ships?
The competitive timeline is all about speed now. In January 2026, Clawdbot exists, works now, and costs $5/month. In March 2026, Apple ships its Siri upgrade (if not delayed again). By June 2026, Clawdbot will have iterated 10+ times based on user feedback. By late 2026, Apple might ship Siri 2.0.
In every category where Apple has traditional advantages, those advantages require institutional process, which operates in institutional time. Distribution: Apple can push updates to a billion devices instantly, but its velocity disadvantage means it takes 2-3 months to ship what Steinberger can iterate in days. Integration: Apple controls the entire ecosystem (hardware plus software), but ecosystem integration requires coordination across teams that slows everything down. Capital: Apple has $130+ billion in cash to invest in AI, but capital deployment requires planning, approval, and execution at institutional speed.
Institutional time is now functionally obsolete when competing against individuals operating at AI speed. This explains why the platform era is ending. Platforms provided distribution at scale but required coordination at scale. When individuals can access AI capabilities directly, distribution becomes less valuable than velocity. A thousand users with Clawdbot move faster than a billion users waiting for Apple to ship. Apple’s failure to act on AI may once have looked prudent; now it looks foolish.
The Equilibrium Has Shifted
Traditional game theory assumed limited players (states, maybe institutions), predictable payoffs (military power, economic size), stable rules (treaties, norms, institutions), and repeated games (reputation matters over time). The four-pole model shows all four assumptions have collapsed.
Players are unlimited: Anyone with $25/month in API costs, a little vision, and coding skills can now deploy capabilities that compete with billion-dollar partnerships or compromise global supply chains.
Payoffs are unpredictable: Steinberger creates infrastructure rivaling Apple/Google. O’Reilly compromises thousands of developers. Neither outcome was predictable from traditional power metrics.
Rules are unstable: Institutions cannot keep pace with individual action, and even sub-national governments route around national policy, as when Gavin Newsom announces California will join the WHO. By the time security audits catch backdoors, the exploit has propagated globally.
Games are one-shot: Individuals can defect, exit, or arbitrage faster than reputation costs accumulate. Steinberger doesn’t need Apple’s approval. O’Reilly doesn’t answer to any institution. They act, and consequences follow at institutional speed, which is to say, too slowly to matter.
This represents a metastable equilibrium: local actions cascade into global effects before corrective mechanisms can respond. Variance has returned to the system. Institutions that once dampened the impact of any single actor can no longer do so when individuals operate at AI-amplified speed and reach.
Why Regulation Cannot Address This
The natural question is: what regulatory framework could prevent this? The uncomfortable answer: none, within current institutional structures.
Traditional regulation assumes that sophisticated capabilities require institutional backing and therefore institutional gates (procurement, security review, IT approval). When a $25/month tool provides infrastructure-level AI capabilities that individuals can deploy independently, those gates become irrelevant.
The timing comparison makes this clear. Steinberger’s timeline: weekend to build, immediate deployment. O’Reilly’s timeline: spare time to create, days to compromise globally. Institutional timeline: years to negotiate, months to implement, weeks to audit.
The speed differential makes traditional regulatory approaches structurally inadequate. By the time regulators understand version N, development has reached version N+3. By the time security frameworks adapt to one exploit, the threat model has shifted entirely. Institutions cannot keep pace with sociopolitical and technological change as it happens. Democratic deliberation, and by extension, any collective decision-making process, is fundamentally too slow for the current pace of individual-enabled change.
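The “version N vs. version N+3” gap can be made concrete with a toy model. The cadences below are illustrative assumptions, not measured figures; the point is that any review duration longer than the shipping interval guarantees the reviewed version is stale on arrival:

```python
# Toy model of the regulatory lag described above. Assumed cadences:
# an individual ships a new version every 4 weeks, while an institutional
# review of any one version takes a full quarter (13 weeks).
SHIP_INTERVAL_WEEKS = 4
REVIEW_DURATION_WEEKS = 13

def version_at(week: int) -> int:
    """Version number shipping in a given week (v1 ships at week 0)."""
    return week // SHIP_INTERVAL_WEEKS + 1

def review_lag(review_start_week: int) -> tuple[int, int]:
    """Return (version under review, current version when the review ends)."""
    reviewed = version_at(review_start_week)
    current = version_at(review_start_week + REVIEW_DURATION_WEEKS)
    return reviewed, current

reviewed, current = review_lag(0)
print(f"A review of v{reviewed} completes while v{current} is shipping.")
# A review of v1 completes while v4 is shipping.
```

Under these assumptions the lag never closes: every review, whenever it starts, concludes three versions behind, and shortening the review only helps if it drops below the shipping interval itself.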
Empowered individuals don’t behave like states. States discount future payoffs differently; they care about legitimacy, continuity, and reputational stability across decades. Super-empowered individuals can operate with shorter time horizons, asymmetric risk tolerance, and targeted objectives. Both Steinberger and O’Reilly validate this pattern. Neither needed institutional legitimacy. Neither worried about decades-long reputation. Both acted with targeted objectives, achieved them rapidly, and created global effects that institutions are still processing.
Anthropic’s rise appears, at first glance, to contradict the velocity thesis. Unlike the individual cases described above, Anthropic is an institution that has successfully competed in a high-velocity environment dominated by frontier models, open-source tooling, and rapidly empowered individuals. But Anthropic’s success does not refute the rule; it confirms it. Its competition is other LLM makers, it operates with the same limitations and essentially the same technology, and it has chosen a different strategic path leading to dominance. The company emerged at the moment when the market was ready. By moving quickly when the axis of competition shifted from raw capability to deployable reliability, Anthropic achieved velocity in timing rather than iteration. It did not out-iterate individuals in feature cycles; it out-positioned incumbents in utility, legitimacy, coherence, target market, and readiness just as institutions realized those attributes could no longer be deferred. Anthropic’s advantage was not speed alone, but synchronized speed: acting fast enough at the moment when institutional buyers were finally ready to move, before the window closed again.
Strategic Response Options
When innovation velocity exceeds human comprehension capacity, organizations face limited options:
Accept higher variance: Recognize that individual action will create unpredictable outcomes. Optimize for resilience rather than control. This is what most organizations are doing now, though often without acknowledging it.
Compete to attract individuals: Make jurisdictions, platforms, or ecosystems attractive enough that empowered individuals choose to operate within them rather than around them. This requires speed, transparency, and credible commitment, essentially, matching individual velocity while maintaining institutional legitimacy.
Attempt coercion: Try to prevent individual action through surveillance, restriction, or punishment. This approach is doomed to fail because individuals can exit jurisdictions, platforms, and systems faster than institutions can enforce compliance.
Build new coordination mechanisms: Create entirely new frameworks that assume individual empowerment rather than trying to contain it. This would require rethinking security, governance, and legitimacy from first principles. These mechanisms don’t exist yet.
The Clawdbot/O’Reilly sequence suggests most institutions are currently in option one (accepting variance) while pretending to be in option three (believing coercion still works). The gap between actual reality and institutional assumptions is widening.
The Information Distribution Collapse
The temporal collapse extends beyond capability deployment to information itself. Good information exists but lacks effective distribution, while bad information distributes virally.
In slower-moving information environments, quality signals eventually won. Peer review took months but ensured quality. Editorial standards filtered noise. Institutional reputation accumulated over years. Truth had time to catch up with lies.
In AI-accelerated environments, velocity trumps quality. Misinformation spreads in minutes. Corrections arrive hours or days later: too late to matter. Institutional credibility erodes faster than it can be rebuilt. The proverbial lie circles the globe instantly and keeps circling, rediscovered online in cascades that can run for years; the truth finishes lacing its boots, continually bypassed and nearly invisible, and only then attempts to speak.
This isn’t just an information problem, it’s a temporal solvency problem. When the pace of change exceeds human capacity to process, adapt, and respond, people cannot maintain comprehension of their own reality. Individual voices get drowned out by volume, platforms shift algorithmically faster than users can adapt, and people depending on those platforms for survival cannot keep pace with daily changes. Discoverability is another casualty. There’s no trusted central repository for the truth. There is only the fastest, loudest interpretation.
The Transformation of Influence
In a velocity-dominated environment, influence itself transforms. Traditional influence required credibility built over years, expertise developed over decades, platform accumulated through institutional backing, and consistency maintained over time.
AI-accelerated influence requires virality achieved in hours, novelty refreshed constantly, platform access available to anyone, and adaptability pivoting faster than critique.
The result is that influence has become detached from expertise. Individuals with no credentials but high posting velocity can shape discourse faster than experts with decades of knowledge but slower output cadence. This explains why misinformation spreads so effectively: it’s optimized for velocity, not accuracy. By the time accurate information catches up, the discourse has moved on to the next topic.
The Comprehension Crisis
Innovation velocity has exceeded human cognitive capacity to track. The five-pole framework reveals why human comprehension has collapsed:
Average humans cannot grasp what’s happening because they don’t have access to all the information (signal drowning), they don’t understand the concepts (technical complexity), they don’t have reliable ways to get informed (distribution failure), and good information lacks effective distribution (velocity mismatch).
Even experts cannot maintain comprehensive understanding because information arrives faster than processing capacity, context shifts before analysis completes, synthesis requires more time than the environment allows, and ground truth changes mid-analysis.
Even AI systems struggle because context windows have limits, performance degrades with information overload, training data lags real-time developments, and models cannot self-update fast enough.
The temporal dimension explains all of this. It’s not just that there’s too much information, it’s that information arrives, matters, and becomes obsolete faster than any actor (human or AI) can process it meaningfully.
The crisis operates across five layers. Information overload means too much data to process. Velocity mismatch means information updates faster than humans can learn. Temporal fragmentation means different actors operate at different speeds, creating incompatible realities. Influence inversion means velocity trumps expertise, making reliable information hard to identify. Feedback loop collapse means by the time something is understood, it has changed, so understanding becomes impossible.
This leaves states unable to regulate what they cannot track, institutions unable to coordinate what moves faster than meetings, individuals unable to comprehend what changes faster than thought, AI unable to stabilize what it’s simultaneously accelerating, and time itself weaponized against slower-moving actors.
Critical Questions
Several questions emerge without clear answers:
Can human democracy survive when individuals move at AI speed? Democratic deliberation requires time—for debate, for consensus, for legitimacy. When individuals can act and create global effects faster than democratic processes can convene, what does governance even mean?
Can markets function when information velocity exceeds price discovery? Markets assume information eventually reaches equilibrium. When ground truth shifts faster than markets can price it, do prices mean anything?
Can human cognition adapt to AI-native time scales? Humans evolved for environments where information updated on daily, monthly, or yearly cycles. Can humans function in environments where updates happen continuously and unpredictably?
Can institutions reform fast enough to remain relevant? Every institutional reform process operates at institutional speed. By the time reform completes, the problem has evolved beyond the solution.
Can environmental and climate constraints reassert themselves? Data centers bypass the grid to move faster, but building 48 GW of fossil fuel power plants creates carbon emissions that cannot be bypassed. When short-term velocity incentives conflict with long-term survival constraints, which timeline wins?
These are not rhetorical questions. There are no clear answers yet.
Why This Framework Matters
The five-pole framework (States, Institutions, Individuals, AI, Time) connects previously disconnected observations:
Traditional strategic assumptions have collapsed because different players operate at incompatible time scales, making coordination impossible.
A developer in Vienna can undercut Apple and Google because individual velocity at AI speed beats institutional capability at any scale.
Even experts cannot keep up because human cognition operates at biological speed while the environment updates at computational speed.
Rosa’s concept of ‘frenetic standstill’ captures this moment because organizations are moving faster while the temporal gap between action and comprehension widens.
Physical infrastructure follows the same pattern as digital: institutions that cannot move at AI speed become obstacles to route around, not gates to pass through.
The Uncomfortable Synthesis
Time has always been a constraint. But it used to be a relatively equal constraint; everyone operated at roughly human speed, give or take institutional coordination overhead.
AI has shattered that equality. Individuals with AI access operate at near-AI speed. Institutions without AI access operate at pre-AI speed. The gap between them isn’t closing. It’s widening.
Because time is unidirectional and relentless, slower-moving actors can never catch up. They can only watch as faster-moving actors complete action-reaction cycles before the slow-movers finish their first deliberative meeting.
Apple’s March 2026 Siri announcement isn’t a response to Clawdbot, it’s a demonstration that institutional time scales are no longer compatible with individual-plus-AI time scales. Data centers building 48 GW of power plants outside the grid aren’t exceptions, they’re the new normal when institutional approval timelines operate at 7-year cycles while competitive pressure operates at 2-year cycles. The game has changed, and distribution alone cannot compensate for velocity disadvantage.
Organizations are not just experiencing information overload or technological acceleration. They are experiencing the collapse of temporal coherence across different categories of actors. Unlike information or technology, time cannot be optimized, compressed, or scaled. It can only move forward, and some actors are moving forward with accelerants others cannot match.
Functioning coordination in an AI-accelerated environment cannot mean control, consensus, or synchronized deliberation; those assumptions belong to a slower era. It can only mean creating structures that accept desynchronization as a given and still reduce catastrophic variance. That implies coordination mechanisms optimized for rapid signaling rather than deliberation, for reversibility rather than permanence, and for resilience rather than prediction. In practice, this looks less like rule-setting and more like early-warning systems, shared threat visibility, lightweight norms that can be adopted or abandoned quickly, and institutional commitments to act under uncertainty rather than wait for clarity. This coordination does not restore equilibrium or stability. It merely keeps systems from flying apart while faster actors continue to move. This is not a solution. It is a holding pattern, a way to preserve minimal coherence in a world where synchronized governance is no longer possible.
Implications for Strategy
Game theory isn’t obsolete. But the game has changed in ways that most institutions haven’t acknowledged. Classic game theory modeled interactions among states, sometimes mediated by institutions. That abstraction assumed institutional enforcement could dampen variance and stabilize cooperative equilibria. Recent events demonstrate that this assumption no longer holds.
When individuals can deploy AI infrastructure competing with billion-dollar partnerships, compromise global supply chains in spare time, and operate faster than any institution can respond, the equilibrium has shifted from stable cooperation to metastable variance. Local actions cascade into global effects. Outcomes become path-dependent and resistant to institutional correction. Traditional enforcement mechanisms (procurement gates, security audits, regulatory approval) become friction rather than protection.
Organizations are not heading toward a new stable equilibrium. They exist in a high-frequency game where equilibria are transient, fragile, and increasingly determined by whoever moves fastest, not whoever has the most institutional backing.
The question isn’t whether organizations can regulate this back to stability. The question is what intellectual infrastructure, governance models, and coordination mechanisms can function in an environment where individuals move at AI speed and institutions move at human speed.
That’s not a problem that can be solved. It’s a condition that must be navigated. And organizations haven’t figured out how yet.
I’m a pattern recognizer myself. I’ve always been able to sense what is coming next based on what has already happened. And right now I feel like Jeremy Irons standing at the head of the table in Margin Call: the societal power structure we have relied upon (for good or ill) for centuries is evaporating, without anything to logically replace it. The most important quality now is meaningful responsiveness: the ability to register change, act under uncertainty, and adjust course without damaging credibility before the environment shifts again.
What is collapsing is not order, but synchronization. States, institutions, individuals, and AI are no longer operating within a shared temporal frame, and without temporal alignment, coordination becomes impossible. Power now belongs to whoever can complete an action–reaction cycle before others can even register that the game has changed. Distribution, legitimacy, capital, and scale still matter—but only when paired with velocity. Where they are not, they become residual assets of a slower era. This is the defining strategic condition of the AI age: not chaos, but permanent desynchronization, where outcomes are decided not by who governs, who funds, or who regulates—but by who moves fast enough that governance arrives after the fact.