Chapter 32: Key Takeaways — Global AI Governance Frameworks
Core Concepts
1. The Jurisdiction Gap
AI systems cross borders effortlessly, while governance authority does not. An AI system can be developed in one country, hosted in a second, and harm users in a third, with no single national legal system having clear authority to address the problem. This jurisdiction gap is the foundational challenge of global AI governance, and it creates structural pressure for international coordination even among governments that prefer unilateral regulatory approaches.
2. The Race-to-the-Bottom Risk
Without international coordination, governments face competitive pressure to weaken AI governance standards in order to attract AI development activity. This "regulatory arbitrage" dynamic, in which companies locate sensitive activities in the most permissive jurisdiction, means that the governance floor tends to be set by the least responsible major actor. International minimum standards, if genuine and enforced, relieve this pressure by removing the competitive advantage of governance failure.
3. Soft Law Foundations
All current major international AI governance frameworks, including the OECD AI Principles, the UNESCO Recommendation, the G7 Hiroshima Code of Conduct, and the GPAI mandate, are soft law: normative frameworks that create political expectations but impose no legally binding obligations. Soft law can be valuable, establishing vocabulary, concepts, and normative expectations that may eventually harden into binding commitments. But it is insufficient on its own for governance challenges that involve powerful commercial interests and real costs of compliance.
4. The OECD AI Principles as Foundation
The OECD AI Principles (2019), the first intergovernmental AI ethics agreement, established five core principles: inclusive growth, human-centred values, transparency, robustness, and accountability. These have become the conceptual vocabulary of global AI governance. Subsequently endorsed by G20 nations, the principles lack enforcement mechanisms but have shaped national AI strategies and regulatory frameworks worldwide.
5. UNESCO's Breadth and Limits
The UNESCO Recommendation on AI Ethics (2021), adopted by all 193 UNESCO member states, represents the broadest normative consensus ever achieved in AI governance. Its coverage of environmental sustainability and gender equality was genuinely novel. But universal adoption required language general enough for governments with radically different AI practices to endorse, producing commitments that are simultaneously global and largely unenforceable.
6. The G7's Advanced AI Focus
The G7 Hiroshima AI Process (2023) was triggered by the rapid deployment of large language models and represents wealthy democratic nations' attempt to develop shared approaches to governing the most advanced AI systems. Its Guiding Principles and Code of Conduct create voluntary commitments for AI developers, but the process excludes China and Russia, limiting its effectiveness as a global governance instrument.
7. The Three-Bloc Dynamic
Three major governance models are competing to shape global AI norms: the EU's regulation-first approach (comprehensive, rights-based legal frameworks), the US's market-first approach (sector-specific regulation, voluntary standards, executive action), and China's state-control approach (heavy state involvement oriented toward party priorities rather than individual rights). Each shapes global norms through a different mechanism: law, market dominance, and technology export, respectively.
8. The Brussels Effect
The EU AI Act's extraterritorial scope, combined with the size of the EU market, creates a powerful mechanism for raising global AI governance standards: companies that want EU market access must meet EU AI Act requirements, and many find it cheaper to apply those requirements globally than to maintain different products for different markets. This "Brussels Effect" makes EU regulation de facto global regulation for companies serving global markets, a market-driven mechanism that may be more effective than formal international governance processes.
9. The Representation Gap
Current global AI governance systematically underrepresents the communities most vulnerable to AI harm: Global South nations, civil society organizations, workers subject to algorithmic management, and people in countries with weak rule of law. This representation gap shapes the substance of governance: frameworks developed by wealthy-country actors tend to address wealthy-country AI concerns while underaddressing the AI harms most acute in developing-country contexts.
10. The Role of Business
Business professionals operate in a global AI governance environment that is simultaneously incomplete and consequential. Organizations with EU market exposure face real compliance obligations under the EU AI Act. Organizations operating across multiple jurisdictions must navigate a complex patchwork of national requirements. And organizations that engage constructively in governance processes, contributing technical expertise, supporting civil society advocacy, and building genuine governance capacity rather than performing compliance, can help shape the regulatory environment they will operate in for decades.
Summary Points
- Global AI governance is necessary because AI systems cross borders while regulatory authority does not, creating a jurisdiction gap that enables regulatory arbitrage and a race to the bottom in governance standards.
- The major international AI governance frameworks (OECD AI Principles, UNESCO Recommendation, G7 Hiroshima Process, GPAI, UN AI Advisory Body) are all soft law instruments, normative without being legally binding, that have established important conceptual foundations but face significant implementation gaps.
- The three-bloc dynamic (EU regulation-first, US market-first, China state-control) shapes global AI norms through market power, technological diffusion, and diplomatic influence in ways that often exceed the reach of formal governance processes.
- The Brussels Effect, by which EU regulation becomes de facto global regulation through market mechanisms, is currently the most effective mechanism for raising global AI governance standards, though it is limited to market-connected contexts and reflects EU rather than universal priorities.
- Civil society and Global South inclusion in global AI governance is both a justice imperative and a practical requirement: governance frameworks that don't represent affected communities will consistently fail to address the most acute harms.
- The path toward effective global AI governance requires a portfolio approach: technical standards, mandatory incident reporting, regulatory cooperation agreements, binding national and regional law, and sustained civil society engagement, rather than waiting for comprehensive international treaty-making that remains politically out of reach.