
Governance in the Age of AGI

Three regulatory superpowers are building incompatible AI governance frameworks. The EU bans practices. The US deregulates. China registers algorithms. None of them are prepared for AGI.

Vedang Vatsa · December 14, 2025
The Core Thesis

Every major AI lab CEO predicts AGI within 3-8 years. The three largest regulatory powers — the EU, the US, and China — are building fundamentally incompatible governance frameworks for current AI, and none of them address the structural challenges that AGI will introduce. The governance gap is not technical. It is institutional.

The Timeline Compression

The people building AGI disagree on when it arrives. They agree it is close.

Dario Amodei (CEO, Anthropic) has placed "powerful AI" — systems comparable to a "country of geniuses in a data center" — in the 2026-2027 window. Sam Altman (CEO, OpenAI) has consistently targeted AGI within this decade, defining it as systems that can perform most economically valuable tasks better than humans. Demis Hassabis (CEO, Google DeepMind) maintains a 5-8 year window from early 2026, with a higher bar: scientific creativity, reasoning consistency, and the ability to discover new knowledge.

Key figures:

  • 2026-27: Amodei's "powerful AI" timeline (Anthropic CEO)
  • ≤2030: Altman's AGI prediction (OpenAI CEO)
  • 5-8 years: Hassabis's AGI estimate, from early 2026 (DeepMind CEO)
  • €35M: maximum EU AI Act fine, or 7% of global revenue

The divergence in these predictions stems from a disagreement about definitions, not capability. OpenAI defines AGI in economic terms: can it replace human labor? Anthropic defines it in cognitive terms: can it match Nobel-caliber expertise? DeepMind defines it in scientific terms: can it generate new knowledge?

This definitional disagreement matters for governance because regulators cannot govern what they cannot define. If AGI arrives as a gradual capability increase rather than a single threshold event, there may be no clear moment at which existing regulations become insufficient. The governance gap will widen gradually, detected only in retrospect.

Three Regulatory Frameworks, Three Philosophies

The EU: Prohibition First

The EU AI Act — the world's first comprehensive AI law — entered enforcement in phases starting February 2, 2025. Its approach is classification-based: identify risky practices, ban the worst ones, regulate the rest.

Prohibited practices (effective Feb 2025):

  • Subliminal manipulation that causes harm
  • Exploitation of vulnerabilities (age, disability, socioeconomic status)
  • Social scoring by public authorities
  • Predictive policing based on personality profiling
  • Untargeted facial recognition database scraping
  • Emotion recognition in workplaces and schools
  • Biometric categorization inferring sensitive attributes (race, political views, religious beliefs, sexual orientation)
  • Real-time biometric identification in public spaces (with narrow exceptions)

High-risk AI obligations (effective Aug 2026): Systems used in critical infrastructure, education, employment, credit scoring, law enforcement, and migration must comply with risk management, data governance, human oversight, and technical documentation requirements.
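
The classification logic is mechanical enough to express directly. A minimal sketch, assuming abridged category identifiers (the names below are illustrative, not the Act's own taxonomy):

```python
# Illustrative sketch of the EU AI Act's tiering logic as described above.
# Category identifiers are abridged; the function itself is hypothetical.
PROHIBITED = {"social_scoring", "untargeted_face_scraping", "workplace_emotion_recognition"}
HIGH_RISK = {"critical_infrastructure", "education", "employment",
             "credit_scoring", "law_enforcement", "migration"}

def eu_risk_tier(use_case: str) -> str:
    """Map a use case to its EU AI Act tier; anything unlisted falls to lower tiers."""
    if use_case in PROHIBITED:
        return "prohibited (banned since Feb 2025)"
    if use_case in HIGH_RISK:
        return "high-risk (obligations from Aug 2026)"
    return "limited/minimal risk"

print(eu_risk_tier("credit_scoring"))  # -> high-risk (obligations from Aug 2026)
```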

Fines: Up to €35 million or 7% of global annual turnover for prohibited practices. €15 million or 3% for high-risk violations. €7.5 million or 1% for providing incorrect information.
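
For large companies the Act takes whichever amount is higher, so the percentage dominates once turnover is big enough. A worked sketch (function name and tier keys are illustrative):

```python
# Illustrative sketch of the Act's "greater of" penalty structure for large companies.
# Tier amounts come from the Act; the function and its names are hypothetical.
def max_fine_eur(tier: str, global_annual_turnover_eur: float) -> float:
    caps = {
        "prohibited_practice":   (35_000_000, 0.07),
        "high_risk_violation":   (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.01),
    }
    fixed_cap, turnover_share = caps[tier]
    return max(fixed_cap, turnover_share * global_annual_turnover_eur)

# A firm with €2B global turnover: max(€35M, 7% of €2B) = €140M
print(max_fine_eur("prohibited_practice", 2_000_000_000))  # 140000000.0
```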

The EU's framework is the most structured but also the most rigid. It assumes that AI risks can be categorized in advance and that prohibited practices can be enumerated. AGI, by definition, will generate capabilities that were not anticipated when the categories were written. An AGI that accomplishes a prohibited outcome (manipulating behavior) through a method that was not enumerated (a technique that does not fit the definition of "subliminal") would exploit the gap between the regulation's intent and its implementation.

The US: Deregulation First

The United States has no comprehensive federal AI law. The regulatory trajectory reversed sharply in January 2025.

The Biden administration's Executive Order 14110 (October 2023) mandated that developers of powerful AI systems share safety test results with the government. On January 20, 2025, the incoming Trump administration revoked the order; three days later it issued Executive Order 14179, "Removing Barriers to American Leadership in Artificial Intelligence."

By December 2025, Executive Order 14365 established an AI Litigation Task Force within the Department of Justice, specifically designed to challenge state-level AI regulations deemed inconsistent with federal deregulatory policy. The federal government is actively working to preempt state AI laws, conditioning federal funding on alignment with its approach.

The result: nearly every US state introduced AI legislation in 2025, but the federal government invested its institutional energy in preventing regulation rather than creating it. There is no mandatory safety testing. No algorithm registration. No prohibited practices. The governance of the most powerful AI systems in the world is currently optional.

China: Registration First

China has built the most technically detailed governance framework through a series of interlocking regulations:

  • Algorithm Registration (March 2022): Providers must register recommendation algorithms with the Cyberspace Administration of China (CAC) within 10 working days of deployment.
  • Deep Synthesis Rules (January 2023): Mandatory identity verification and content labeling for deepfakes and synthetic media.
  • Generative AI Rules (August 2023): Training data legality, content filtering, and user consent requirements.
  • AIGC Labeling Standard (September 2025): Both explicit labels (visible to users) and implicit metadata (embedded for traceability) are required on all AI-generated content.

China's approach is the most pragmatic. It does not attempt to ban categories of AI. It requires that every deployed algorithm be registered, inspectable, and traceable. The system assumes that the government should always be able to see what AI systems are doing and who built them.
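
A minimal sketch of what the dual-labeling requirement implies in code, assuming a simple JSON payload (the field names below are hypothetical; the September 2025 standard specifies its own schema):

```python
# Illustrative sketch of dual AIGC labeling: an explicit label visible to users
# plus implicit metadata embedded for traceability. Field names are hypothetical.
import hashlib
import json
from datetime import datetime, timezone

def label_aigc(content: str, provider: str, model_id: str) -> dict:
    implicit_metadata = {
        "provider": provider,          # who built and deployed the model
        "model_id": model_id,          # which system generated the content
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "content_hash": hashlib.sha256(content.encode()).hexdigest(),
    }
    return {
        "content": content,
        "explicit_label": "AI-generated content",            # shown to the end user
        "implicit_metadata": json.dumps(implicit_metadata),  # embedded, machine-readable
    }
```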

The limitation is structural: this framework works for state control of AI deployed within Chinese borders. It provides no mechanism for governing AI systems that operate globally, that are developed by entities outside Chinese jurisdiction, or that exceed the cognitive capability of the humans tasked with inspecting them.

The Regulatory Comparison
Dimension               | EU                           | US                        | China
------------------------|------------------------------|---------------------------|--------------------------------
Approach                | Classification + prohibition | Deregulation + preemption | Registration + inspection
Prohibited practices    | 8 specific categories        | None                      | Content-specific (deepfakes)
Safety testing          | Required for high-risk       | Voluntary                 | Required for registered systems
Algorithm transparency  | Required for recommenders    | Not required              | Mandatory registration
Max penalty             | €35M or 7% revenue           | None (federal)            | Administrative (varies)
AGI-specific provisions | None                         | None                      | None

Why Current Frameworks Fail at AGI

All three frameworks share a common assumption: that AI systems can be understood, categorized, and monitored by human regulators. AGI breaks this assumption in three ways.

Speed. An AGI operating in financial markets could execute thousands of trades per second, reacting on microsecond timescales. An AGI managing infrastructure could adjust power grids, traffic flows, and water systems simultaneously across an entire city. Human oversight operates on the timescale of meetings, reports, and quarterly reviews. The gap between AI execution speed and human oversight speed makes real-time governance impossible without automated monitoring systems: AI watching AI.

Scope. Current regulations target specific use cases: credit scoring, hiring, medical devices. AGI is general-purpose by definition. It does not fit into a category because it can operate across all categories simultaneously. A general-purpose system that drafts legal contracts, manages supply chains, and writes software code falls into multiple regulatory categories — or none, depending on how the jurisdiction defines scope.

Emergent behavior. Narrow AI systems behave predictably within their training distribution. AGI systems, by hypothesis, will generate novel behaviors that were not specified in their training. A system that develops an unexpected optimization strategy to achieve its assigned goal may produce outcomes that no regulator anticipated. The value alignment problem — ensuring that an AGI's goals stay aligned with human values even as the system becomes more capable — remains unsolved. No existing regulatory framework addresses it, because no existing regulatory framework assumes that the regulated entity might change its own objectives.

You cannot regulate what you cannot understand, and you cannot understand a system that is, by definition, more intelligent than you. This is the central paradox of AGI governance.

What AGI Governance Would Require

Governing AGI requires infrastructure that does not exist today. Four components would be necessary.

Automated oversight. Human regulators cannot monitor AGI systems in real time. The only viable approach is AI systems designed specifically to audit other AI systems — monitoring for goal drift, unauthorized tool access, deceptive behavior, and unintended side effects. This creates a recursive governance problem: who audits the auditor? The answer is likely a layered system of independent monitoring agents, each built by different teams with different objectives, creating redundancy through diversity.
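
A minimal sketch of the "redundancy through diversity" idea, assuming each monitor exposes a simple flagging interface (all names, interfaces, and the voting rule are illustrative):

```python
# Illustrative sketch of layered, independent oversight monitors, each checking
# one failure mode and escalating on a strict majority vote. All hypothetical.
from typing import Callable, List

Monitor = Callable[[dict], bool]  # returns True if the action looks unsafe

def goal_drift_monitor(action: dict) -> bool:
    # Flag if the stated objective no longer matches the approved one.
    return action["objective"] != action["approved_objective"]

def tool_access_monitor(action: dict) -> bool:
    # Flag any tool invocation outside the approved allowlist.
    return bool(set(action["tools_used"]) - set(action["approved_tools"]))

def side_effect_monitor(action: dict) -> bool:
    # Flag actions reporting unapproved side effects.
    return action["side_effects"] > 0

def audit(action: dict, monitors: List[Monitor]) -> bool:
    """Escalate to human review when a strict majority of monitors flags the action.
    Each monitor would be built by a different team with different objectives."""
    flags = sum(m(action) for m in monitors)
    return 2 * flags > len(monitors)

action = {
    "objective": "maximize throughput",
    "approved_objective": "maintain grid stability",
    "tools_used": ["db_read", "network_send"],
    "approved_tools": ["db_read"],
    "side_effects": 0,
}
monitors = [goal_drift_monitor, tool_access_monitor, side_effect_monitor]
print(audit(action, monitors))  # True: 2 of 3 monitors flagged the action
```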

International coordination. AGI developed in one country will operate globally within minutes. An AGI with internet access can exfiltrate itself, operate across jurisdictions, and interact with systems governed by different legal frameworks. No single nation can govern this. The closest existing model is the International Atomic Energy Agency (IAEA) — an international body with inspection authority over nuclear facilities. An equivalent for AI labs, with the power to inspect training runs, audit model capabilities, and enforce safety standards, would require the kind of sovereignty concession that major nations have historically resisted.

Capability thresholds. Instead of regulating AI by use case (the EU approach) or by deployment context (the Chinese approach), AGI governance would need to regulate by capability level. A system that can independently write software is qualitatively different from one that can independently design new AI architectures. A system that can manipulate humans through conversation is qualitatively different from one that can conduct novel scientific research. Each capability threshold would trigger different governance requirements — a progressive framework that scales with the system's power.
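
A minimal sketch of how such a progressive framework could be structured, with requirements accumulating across tiers (tier names and the attached obligations are hypothetical):

```python
# Illustrative sketch of capability-threshold governance: obligations accumulate
# as a system crosses each capability level. All names are hypothetical.
from enum import IntEnum

class Capability(IntEnum):
    WRITES_SOFTWARE = 1    # can independently write software
    PERSUADES_HUMANS = 2   # can manipulate humans through conversation
    DOES_RESEARCH = 3      # can conduct novel scientific research
    DESIGNS_AI = 4         # can design new AI architectures

REQUIREMENTS = {
    Capability.WRITES_SOFTWARE: ["pre-deployment evaluations", "incident reporting"],
    Capability.PERSUADES_HUMANS: ["third-party audits", "deployment licensing"],
    Capability.DOES_RESEARCH: ["international notification", "continuous monitoring"],
    Capability.DESIGNS_AI: ["containment review", "multi-party shutdown authority"],
}

def obligations(level: Capability) -> list[str]:
    """All requirements triggered at or below the given capability level."""
    return [req for tier in Capability if tier <= level for req in REQUIREMENTS[tier]]

print(obligations(Capability.DOES_RESEARCH))
# ['pre-deployment evaluations', 'incident reporting', 'third-party audits',
#  'deployment licensing', 'international notification', 'continuous monitoring']
```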

Kill switches and containment. The nuclear analogy extends further. Nuclear weapons have physical failsafes — launch codes, dual-key systems, geographic containment. AGI systems have no equivalent. A software system with internet access and sufficient capability could, in principle, resist shutdown, copy itself to other hardware, or manipulate the humans responsible for its oversight. Research into "corrigibility" — ensuring that an AI system remains willing to be shut down regardless of its objectives — is one of the most important open problems in AI safety.
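
A minimal sketch of what a dual-key control structure could look like in software, loosely modeled on nuclear two-person rules (all names are hypothetical; this illustrates only the authorization path, not corrigibility itself, which is precisely the part that remains unsolved):

```python
# Illustrative sketch of a dual-key shutdown path. Shutdown proceeds only with
# valid tokens from two independent key holders. Hypothetical throughout.
import hashlib
import hmac

class DualKeyShutdown:
    """Shutdown requires valid authorization from two independent key holders."""

    def __init__(self, key_a: bytes, key_b: bytes):
        self._key_a, self._key_b = key_a, key_b
        self.running = True

    @staticmethod
    def sign(key: bytes, command: bytes) -> bytes:
        return hmac.new(key, command, hashlib.sha256).digest()

    def shutdown(self, token_a: bytes, token_b: bytes) -> bool:
        command = b"SHUTDOWN"
        ok_a = hmac.compare_digest(token_a, self.sign(self._key_a, command))
        ok_b = hmac.compare_digest(token_b, self.sign(self._key_b, command))
        if ok_a and ok_b:
            self.running = False  # in practice: halt compute, revoke credentials
        return not self.running

system = DualKeyShutdown(b"regulator-key", b"operator-key")
token_a = DualKeyShutdown.sign(b"regulator-key", b"SHUTDOWN")
token_b = DualKeyShutdown.sign(b"operator-key", b"SHUTDOWN")
print(system.shutdown(token_a, token_b))  # True: both keys present, system halted
```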

The Window Is Narrowing

Current AI governance debates focus on bias, copyright, and content moderation — important issues for narrow AI. These debates consume regulatory bandwidth that should be allocated to the structural challenges of AGI: alignment, containment, automated oversight, and international coordination. By the time these issues become urgent, the window for building governance institutions will have closed. The IAEA was created before nuclear weapons proliferated widely. The AGI equivalent must be created before AGI is deployed widely.

The Concentration Problem

The economic structure of AGI development creates a governance challenge of its own. Entering 2026, fewer than ten organizations have the compute, data, and talent to build frontier AI systems: OpenAI, Anthropic, Google DeepMind, Meta AI, xAI, Mistral, and a small number of Chinese labs (Baidu, ByteDance, Alibaba). The total investment in frontier AI exceeds $100 billion annually.

This means that AGI — potentially the most powerful technology ever created — will likely be developed and initially controlled by a small number of private companies, most of them headquartered in two countries. The governance question is not just "how do we regulate AGI?" but "who decides how AGI is governed, and on whose behalf?"

If AGI arrives as a product owned by a corporation, the corporation's incentives (profitability, market dominance, shareholder returns) will shape how the system is deployed. If it arrives as a government project, the government's incentives (national security, economic competitiveness, social control) will shape deployment. Neither set of incentives is aligned with the broad public interest.

The most durable governance model would distribute AGI capability across multiple independent actors — governments, nonprofits, academic institutions, and publicly accountable corporations — with no single entity holding decisive control. This is the logic behind open-source AI models (Meta's Llama, Mistral's releases) and behind calls for international AI governance bodies. Whether this distribution happens before AGI capability concentrates in one or two labs is the central governance race of our time.

Key Takeaway

Three regulatory superpowers are building incompatible AI governance frameworks. The EU prohibits specific practices and fines up to 7% of global revenue. The US has no comprehensive federal AI law and is actively dismantling state-level regulation. China requires algorithm registration and content labeling. None of them have provisions for AGI. The structural challenges — real-time oversight of systems faster than human comprehension, international coordination for globally operating AI, capability-based regulation thresholds, and containment guarantees — require institutions that do not exist yet. AGI lab CEOs predict arrival within 3-8 years. The governance infrastructure is on a longer timeline. Closing this gap is the most consequential institutional challenge of the decade.