
Chapter 32: Global AI Governance Frameworks

The Promise and the Gap

When 193 UNESCO member states adopted the Recommendation on the Ethics of Artificial Intelligence in November 2021, it was celebrated as a historic milestone. For the first time, virtually every government on Earth had agreed — in writing — to a set of principles governing how artificial intelligence should be developed and deployed. The ceremony at UNESCO headquarters in Paris carried the weight of genuine accomplishment. Diplomats, researchers, and civil society organizations that had spent years pushing for exactly this kind of global agreement allowed themselves a moment of satisfaction.

Within months, the contradictions became impossible to ignore. Several signatory governments were actively expanding facial recognition surveillance systems. Others were using AI-powered content moderation to suppress political opposition. China, a signatory, was deepening its social credit system. Russia, another signatory, was deploying AI for military purposes that international humanitarian law experts found deeply troubling. The United States, which had participated in drafting the recommendation, was simultaneously allowing tech companies to operate with governance frameworks far lighter than what the recommendation described. The gap between international AI commitments and national AI realities was not merely visible — it was vast.

This gap is not unique to AI. International environmental agreements have routinely been undermined by national interests. Arms control treaties have been signed and later abandoned. But AI presents its own distinctive challenges. It moves faster than diplomatic processes. Its impacts are often invisible or technical. The commercial interests bound up with AI development are enormous, and those interests are concentrated in a handful of powerful companies that operate across borders with ease. And unlike nuclear weapons or chemical agents, AI is not produced by governments — it emerges from labs and startups and corporate R&D departments in ways that make traditional arms-control-style governance models nearly impossible to apply directly.

Yet the need for coordination is real. An AI system developed in California operates in Germany, Kenya, and Indonesia simultaneously. A bias baked into a hiring algorithm in one country produces discrimination in dozens of others. A decision by one government to allow mass AI surveillance can destabilize neighboring democracies by providing templates and technology. The competitive dynamics between major AI powers create pressure to cut corners on safety and ethics — the classic "race to the bottom" that international governance is designed to prevent. The alternative to imperfect global governance is not no governance but ungoverned AI, and that prospect is genuinely alarming.

This chapter examines who is building global AI governance, what authority these bodies actually have, and what progress — however halting — is being made. It is not an optimistic chapter, but it is not a pessimistic one either. Global governance of transformative technology is extraordinarily difficult, and the honest story of AI governance in its first decade is one of incomplete frameworks, unfulfilled commitments, and real but insufficient progress — alongside genuine innovation in how international institutions are trying to adapt.


Learning Objectives

By the end of this chapter, you will be able to:

  1. Identify the major international AI governance frameworks, including the OECD AI Principles, the UNESCO Recommendation, the G7 Hiroshima AI Process, and the Global Partnership on AI, and describe what each accomplishes and what each lacks.

  2. Explain the fundamental coordination problem in global AI governance: why it is needed, why it is difficult, and what structural barriers make binding agreements hard to achieve.

  3. Analyze the "three-bloc dynamic" — the EU's regulation-first approach, the US market-oriented approach, and China's state-control model — and describe how each shapes global AI norms through different mechanisms.

  4. Evaluate the concept of "soft law" in AI governance: what voluntary frameworks can and cannot accomplish, and under what conditions binding agreements become necessary.

  5. Assess the "Brussels Effect" — the mechanism by which EU regulation becomes de facto global regulation — and its implications for businesses operating across jurisdictions.

  6. Identify the representation gaps in current global AI governance, including the underrepresentation of Global South nations, civil society organizations, and affected communities.

  7. Apply knowledge of global AI governance frameworks to evaluate a specific organization's exposure to international governance norms and its strategic options for engagement.

  8. Articulate what genuine, effective global AI governance would require, and identify realistic near-term steps toward that goal.


Section 1: Why AI Governance Needs Global Coordination

The case for global AI governance begins with a simple observation: AI systems do not respect national borders. A facial recognition system trained on data from one country and deployed by a company headquartered in a second country to screen citizens of a third country raises legal and ethical questions that no single national legal system can adequately address. The data was collected in one jurisdiction, the model was trained in another, the company is incorporated in a third, and the harm — if harm occurs — falls on people in a fourth. Which law applies? Which regulator has authority? Which court can hear a claim?

This is the jurisdiction gap, and it is not hypothetical. When Clearview AI scraped billions of photographs from social media platforms and sold facial recognition services to law enforcement agencies across the United States and several foreign governments, it illustrated precisely this problem. The company operated in ways that multiple national data protection authorities found illegal — Canada's Privacy Commissioner, Australia's Information Commissioner, the UK's Information Commissioner's Office, the French CNIL, and Italy's Garante all reached adverse findings or issued orders against the company. Yet the company continued operating, primarily because its most significant market — US law enforcement — had no equivalent prohibition. The patchwork of national enforcement produced confusion but no effective accountability.

The jurisdiction gap is compounded by what economists call regulatory arbitrage: the tendency of companies to locate their most sensitive activities in the most permissive jurisdiction. If one country prohibits a particular AI application and another permits it, companies face an incentive to structure their operations to take advantage of the permissive environment. This is not illegal or even necessarily unethical — it is simply rational commercial behavior in response to regulatory fragmentation. But it means that the standards of the most permissive country effectively become the floor for global operations.

This creates a competitive dynamic that has been described as a "race to the bottom" in governance standards, though the metaphor is imperfect. In practice, the dynamic is more complex. Companies often prefer clearer, more predictable rules over a patchwork of inconsistent national requirements, which is why some businesses actually support comprehensive regulation. And reputational concerns, civil society pressure, and the risk of being caught in a scandal provide some floor even in the absence of regulation. But the structural incentive toward the lowest common denominator in governance is real and documented.

The competitive dynamic extends to governments as well. Nations view AI as a strategic technology with enormous implications for economic competitiveness, military capability, and geopolitical influence. This creates pressure to prioritize the speed of AI development over the rigor of its governance — to be the place where AI companies want to operate, not the place where AI companies face the most scrutiny. The United States' historically light-touch approach to AI regulation reflected, in part, a desire not to handicap American companies relative to Chinese competitors. China's approach, while involving heavy state involvement in AI, focused that involvement on promoting national AI champions rather than constraining them. Both approaches sacrificed governance quality for competitive positioning.

The argument for global coordination, therefore, is not merely moral but strategic. Without coordination, every nation faces a dilemma: regulate AI rigorously and risk competitive disadvantage, or regulate lightly and accept the governance failures that follow. Global minimum standards, if genuine and enforced, relieve this dilemma by ensuring that no nation can gain competitive advantage through governance failure. They also create the predictability that businesses, paradoxically, often benefit from — a global regulatory floor is easier to comply with than 195 inconsistent national regimes.

The difficulty of achieving that coordination should not be understated. The international system is built around sovereign nation-states, each of which jealously guards its regulatory authority. Trade agreements, arms control treaties, and environmental accords have all required decades of negotiation and still face regular defection. AI presents additional challenges: it is changing faster than diplomatic processes can track; the technical details that matter most for governance are often opaque even to experts; the commercial interests arrayed against strong regulation are formidable; and the geopolitical tensions between the major AI powers — the United States and China in particular — make anything resembling US-China cooperation on AI governance exceptionally difficult to achieve in the current environment.

The result is a field of governance that is simultaneously necessary, insufficient, and actively developing. Understanding what global AI governance currently consists of, what it can realistically accomplish, and what would be required to make it more effective is essential for any business professional operating in a global environment.


Section 2: The OECD AI Principles (2019)

On May 22, 2019, the Council of the Organisation for Economic Co-operation and Development adopted the Recommendation of the Council on Artificial Intelligence — better known as the OECD AI Principles. This was the first intergovernmental agreement on AI ethics and governance, a milestone achieved through several years of careful diplomatic work. The principles were subsequently adopted by the G20 nations at their June 2019 summit in Osaka, effectively extending their reach to the world's largest economies, including major non-OECD members such as China, India, Brazil, and South Africa — though adoption by these nations carried different political weight than OECD membership.

The OECD AI Principles are organized around five value-based principles that apply to all actors involved in the AI lifecycle. They are: (1) inclusive growth, sustainable development and well-being — AI should benefit people and the planet; (2) human-centred values and fairness — AI systems should respect the rule of law, human rights, democratic values, and diversity; (3) transparency and explainability — there should be transparency and responsible disclosure about AI systems sufficient to enable oversight; (4) robustness, security and safety — AI systems should be robust, secure and safe; and (5) accountability — organizations and individuals developing and deploying AI should be accountable for the proper functioning of AI systems.

The principles also include five complementary recommendations for governments: investing in AI research and development; fostering a digital ecosystem for AI; shaping an enabling policy environment for AI; building human capacity and preparing for labor market transformation; and international cooperation for trustworthy AI.

Who adopted the principles matters for understanding their significance and their limitations. All OECD member states adopted them, as did non-member adherents including Argentina, Brazil, Colombia, Costa Rica, Peru, Romania, Ukraine, and Vietnam (Colombia and Costa Rica have since joined the OECD). The G20 endorsement extended nominal commitment to China, India, Indonesia, Saudi Arabia, and others. By 2024, 46 countries had formally adopted the principles or indicated adherence.

What the principles require, in practice, is less than their broad language might suggest. Because they are a "recommendation" rather than a binding legal instrument, they create no enforceable obligations. There is no compliance mechanism, no reporting requirement, no penalty for non-compliance. Individual governments may choose to translate the principles into domestic law — and many have cited the OECD AI Principles as the foundation for their national AI strategies and regulatory frameworks — but the international instrument itself imposes no requirements.

The implementation gap is substantial. Research examining how signatory governments have translated the OECD AI Principles into domestic policy found significant variation, with most governments adopting high-level strategic documents that echoed the principles' language while making few concrete regulatory commitments. The OECD's own monitoring of implementation revealed that progress was highly uneven, with EU member states making the most structural progress (largely through EU-level regulatory development) and many other signatory governments making primarily declaratory progress.

The OECD AI Policy Observatory (OECD.AI) is perhaps the principles' most concrete institutional legacy. Established to monitor and support implementation of the principles, the Observatory maintains a database of AI policy initiatives across member and partner countries, tracks the evolution of national AI strategies, and publishes comparative analyses of AI governance approaches. It serves a valuable epistemic function — creating shared understanding of how different governments are approaching similar problems — even if it lacks enforcement authority.

The OECD AI Principles also established the definitional and conceptual foundations that subsequent governance frameworks have built upon. Their risk-based approach — the idea that the level of oversight required should reflect the potential harm of the AI system — became the conceptual backbone of the EU AI Act. Their emphasis on explainability, accountability, and human oversight influenced how national regulatory bodies framed their guidance. The principles may have accomplished less in practice than their proponents hoped, but they helped establish the vocabulary and conceptual architecture of global AI governance in ways that have had lasting effect.


Section 3: UNESCO Recommendation on AI Ethics (2021)

If the OECD AI Principles represented the first diplomatic achievement in global AI governance, the UNESCO Recommendation on the Ethics of Artificial Intelligence represents the broadest. Adopted by acclamation of all 193 UNESCO member states in November 2021, the recommendation is the first global normative instrument on AI ethics — meaning it represents the formal position of effectively every government on Earth on a set of core AI ethics principles.

The Recommendation is substantially more comprehensive than the OECD AI Principles. It rests on ten principles — proportionality and do no harm; safety and security; fairness and non-discrimination; sustainability; right to privacy and data protection; human oversight and determination; transparency and explainability; responsibility and accountability; awareness and literacy; and multi-stakeholder and adaptive governance and collaboration — elaborated across eleven areas of policy action. Its treatment of environmental sustainability — recognizing that AI systems have significant energy and resource footprints that must be addressed — was genuinely novel for a governance instrument of this kind. So was its explicit attention to gender equality and the structural underrepresentation of women and gender minorities in AI development.

The Recommendation also goes further than the OECD Principles in its attention to human rights foundations, stating explicitly that "AI systems should be designed and developed, used, and decommissioned in ways that respect, protect and promote human rights and fundamental freedoms." This human rights framing, which positions AI governance as an extension of international human rights law, has been influential in civil society advocacy even if its implications for binding enforcement remain limited.

What is novel about the UNESCO Recommendation in terms of process is its ambition toward implementation. Recognizing that previous international AI ethics instruments had become largely symbolic, UNESCO developed a Readiness Assessment Methodology designed to help member states evaluate their current capacity to implement the Recommendation and identify gaps. The methodology covers dimensions including governance and regulation; culture and education; science, technology and innovation; economic context; infrastructure; and institutional framework.

The implementation challenge is real, however. The Recommendation is not a treaty — it creates no legally binding obligations. UNESCO itself has limited enforcement capacity; it is primarily a normative and technical assistance organization. And the gap between adopting a recommendation in a UNESCO general conference vote and actually changing domestic AI governance practices is enormous. Several governments that voted in favor of the Recommendation were simultaneously engaged in exactly the kinds of AI applications the Recommendation's principles would condemn: mass surveillance, social control, manipulation of information ecosystems.

Civil society critics of the Recommendation pointed to what they saw as a fundamental contradiction: the consensus required to get 193 governments to agree produced a document whose principles were simultaneously aspirational and vague enough to be endorsed by governments with radically different intentions. A government that uses AI to repress its population can endorse a recommendation calling for "human oversight and determination" without any tension — because the question of whose oversight and which humans' determination is left entirely to domestic discretion.

The Recommendation's defenders respond that this critique, while valid, understates the value of establishing global normative baselines. Even imperfect consensus documents create reference points that civil society, journalists, opposition political parties, and eventually courts can use to hold governments accountable to their stated commitments. The history of human rights law suggests that normative commitments, even without immediate enforcement mechanisms, can shift political dynamics over time in meaningful ways.


Section 4: The G7 Hiroshima AI Process (2023)

The deployment of GPT-4 in March 2023 changed the politics of global AI governance in ways that few anticipated. The capabilities demonstrated by large language models — reasoning, coding, creative writing, apparent common sense — alarmed policymakers in ways that more specialized AI systems had not. Calls for AI governance that had previously been considered niche concerns suddenly reached the highest levels of government. Within months, every major AI governance body was scrambling to address what had become an urgent policy priority.

The G7, meeting in Hiroshima in May 2023, established the Hiroshima AI Process — a working group tasked with developing common approaches to governance of advanced AI systems. By October 2023, the process had produced two key documents: the G7 Guiding Principles for Organizations Developing Advanced AI Systems, and the G7 Code of Conduct for Organizations Developing Advanced AI Systems. These were endorsed by G7 leaders — the heads of state and government of Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States, together with the European Union — giving them political weight even in the absence of legal force.

The Guiding Principles cover eleven areas: (1) taking appropriate measures to identify, evaluate, and mitigate risks; (2) identifying and mitigating vulnerabilities and security risks; (3) investing in cybersecurity and physical security; (4) implementing appropriate data input measures and protections; (5) developing and deploying reliable content authentication; (6) transparently reporting capabilities, limitations, and areas of appropriate and inappropriate use; (7) working to ensure advanced AI systems meet relevant technical and societal standards; (8) implementing appropriate data governance measures; (9) advancing research on societal risks; (10) prioritizing development of technical tools that address global challenges; and (11) developing and implementing governance frameworks for advanced AI.

The Code of Conduct translates these principles into more specific commitments for AI developers — though still at a high level of generality. It calls on organizations to commit to safety testing before release, to sharing information about safety incidents, to investing in cybersecurity, to developing technical mechanisms for content provenance, and to working toward international technical standards.

The voluntary nature of the Hiroshima AI Process is fundamental to understanding both what it can accomplish and what it cannot. The G7 cannot impose binding requirements on private AI developers; it can only create normative pressure and establish expectations. Major AI companies — Google, Microsoft, OpenAI, Anthropic, Meta, Mistral, and others — have engaged with the process and indicated commitment to its principles, though the specifics of how those commitments translate to actual practices remain largely opaque.

The Hiroshima process also illustrates the political dynamics that make global AI governance challenging. The G7 represents wealthy democratic nations, but not China or Russia, which are significant AI powers. An AI governance framework that excludes China has significant limitations — not because China should necessarily be setting the terms of global AI ethics, but because AI governance that China does not engage with will not constrain Chinese AI development or the global diffusion of Chinese AI systems. The Hiroshima process produced an important diplomatic achievement within its political constraints, but those constraints are real.


Section 5: The Global Partnership on AI (GPAI)

The Global Partnership on AI was established in June 2020 as an international, multi-stakeholder initiative to guide the responsible development and use of AI in a manner consistent with human rights, inclusion, diversity, innovation, and economic growth. Its founding members included Australia, Canada, France, Germany, India, Italy, Japan, Mexico, New Zealand, Republic of Korea, Singapore, Slovenia, the United Kingdom, the United States, and the European Union. The partnership has since grown to 29 member countries.

GPAI's structure reflects its multi-stakeholder design: it includes representatives from member governments, independent researchers, industry, international organizations, and civil society. It operates through four working groups focused on Responsible AI, Data Governance, the Future of Work, and Innovation and Commercialization. The partnership is supported by a secretariat hosted at the OECD in Paris, with two Centres of Expertise — one in Paris and one in Montréal — that provide research and analytical support.

GPAI has produced substantive research and policy analysis on topics including AI and COVID-19, responsible AI in the context of the climate crisis, AI and the future of work, and data governance frameworks. This research function is arguably GPAI's most distinctive contribution — providing a forum where AI experts from government, academia, industry, and civil society can engage on technical policy questions in a format that pure diplomatic processes cannot replicate.

The limitations of GPAI mirror those of other international AI governance institutions. It has no enforcement authority, no binding standard-setting power, and no mechanism to ensure that the positions of its working groups influence national policy. Its membership, while broader than the G7, still substantially underrepresents the Global South — Africa, with 54 countries, is represented through a handful of members and has limited influence over GPAI's agenda. Civil society organizations participate in GPAI processes but report that genuine influence over outcomes remains limited.

The relationship between GPAI and other international institutions has also been complicated. Initially conceived partly as a counterweight to Chinese influence in AI standard-setting processes at the International Telecommunication Union, GPAI has had to navigate questions about its relationship with the OECD AI Policy Observatory, UNESCO's AI work, and the AI governance activities of various UN bodies. The proliferation of international AI governance forums has itself become a governance problem — creating coordination costs, duplicating effort, and potentially allowing governments to dilute accountability by distributing commitments across many processes without deepening any of them.


Section 6: The UN AI Advisory Body (2024)

In October 2023, UN Secretary-General António Guterres announced the formation of a High-level Advisory Body on Artificial Intelligence, reflecting the rapid elevation of AI governance to the highest levels of international political attention. The body, comprising 39 members drawn from government, private sector, civil society, and academia across multiple regions, was tasked with producing recommendations for global AI governance by the time of the Summit of the Future in September 2024.

The Advisory Body's interim report, published in December 2023, identified five key gaps in existing global AI governance: governance gaps in monitoring and enforcing compliance; gaps in computing power access; talent and knowledge gaps; gaps in data access; and gaps in AI applications for sustainable development. Its final report, published in September 2024, recommended the establishment of a UN entity dedicated to AI governance — with options ranging from a new office within an existing UN body to a new standalone international agency — as well as development of an international panel on AI modeled on the Intergovernmental Panel on Climate Change.

The report generated significant debate. Proponents argued that AI governance requires the universal membership that only the UN system can provide, and that only a UN body could credibly represent the interests of countries in the Global South in governance of a technology being developed primarily in the Global North. Critics argued that existing UN bodies are too slow, too bureaucratic, and too captured by geopolitical dynamics to provide effective governance of fast-moving technology. The experience of the ITU with AI standard-setting — where Chinese government actors have sometimes used the standard-setting process to legitimize surveillance technologies — colored some Western governments' enthusiasm for more UN involvement.

Geopolitical tensions were visible throughout the Advisory Body's work. The deep US-China technology competition made genuine US-China cooperation on AI governance extremely difficult, and both powers viewed international AI governance through the lens of strategic competition. Russia's positions on AI governance were complicated by its illegal invasion of Ukraine and the associated international isolation. Small island states and least-developed countries argued, with considerable justification, that they were being asked to agree to governance frameworks over which they had minimal influence and that addressed their specific vulnerabilities only superficially.

The Summit of the Future produced the Global Digital Compact, which included commitments on AI governance and endorsed the principle of establishing an international scientific panel on AI. This represented real, if limited, progress in building the institutional architecture for global AI governance at the UN level.


Section 7: The Three-Bloc Dynamic

To understand global AI governance, one must understand the three major political blocs that are shaping it, often in competition with each other: the European Union, the United States, and China. Each bloc has a distinct governance philosophy that reflects its political system, economic interests, and strategic objectives. Their competition — and occasional cooperation — largely determines what global AI governance looks like in practice.

The European Union has pursued what can be characterized as a regulation-first approach: comprehensive, rights-based legal frameworks that apply to AI systems operating in the European market regardless of where they are developed. The GDPR, applied to AI applications since its 2018 implementation, established that data protection rights constrain what AI systems can do even when the AI is developed outside the EU. The EU AI Act, adopted in 2024, goes further — creating a comprehensive regulatory framework for AI based on a risk-tier approach. The EU's approach reflects its constitutional commitment to fundamental rights and human dignity, its historical experience with authoritarian technology deployment (notably in the Nazi and Soviet periods), and a political economy in which incumbent European companies compete with American and Chinese AI companies and have an interest in governance frameworks that impose compliance costs on all market participants.

The United States has historically pursued a market-first approach, emphasizing innovation and competitiveness over comprehensive regulation. The US approach has relied on sector-specific regulation by existing agencies (the FTC, EEOC, CFPB), voluntary frameworks (notably the NIST AI Risk Management Framework), and executive action, while Congress has struggled to pass comprehensive AI legislation. This approach reflects American political culture's skepticism of federal regulation, the political influence of the technology sector in Washington, and genuine concern about handicapping US companies relative to Chinese competitors. The approach has been shifting: the Biden administration's October 2023 Executive Order on AI represented a significant step toward more active federal involvement in AI governance, and the Trump administration's January 2025 executive order revoking Biden's AI order while simultaneously directing development of an AI Action Plan reflects ongoing but contested evolution.

China's approach involves deep state involvement in AI governance, but of a distinctive kind. The Chinese state is simultaneously the most aggressive promoter of Chinese AI development — directing enormous resources to national AI champions, building AI into national industrial policy — and the most comprehensive regulator of AI's social applications. China has produced regulations on algorithmic recommendation (2022), deep synthesis (2022), and generative AI (2023) that are in some respects more technically detailed than anything produced by Western regulators. But the purpose of Chinese AI regulation is not to protect individual rights from AI but to ensure that AI development aligns with the political priorities of the Chinese Communist Party — stability, security, and the party's continued dominance. The Chinese state uses AI as an instrument of social control in ways that its governance frameworks are designed to facilitate, not constrain.

Each of these blocs is attempting to shape global AI norms through different mechanisms. The EU does so primarily through the Brussels Effect — the process by which EU regulation becomes de facto global regulation because multinational companies find it easier to apply EU standards globally than to maintain different products for different markets. The US does so through the dominance of American AI companies in global markets and the influence of American technical standards, academic norms, and technology culture. China does so through the export of Chinese AI technology and infrastructure — particularly to the Global South — combined with engagement in international standards bodies.

The implications for global AI governance are significant. When these three blocs agree on something — as they all agree, at a high level, on the value of AI safety research — global progress becomes possible. When they disagree — as they disagree on the relative priority of individual rights versus state security in AI governance — global coordination becomes very difficult. And when they are actively competing, as the US and China are competing over AI infrastructure in Africa and Southeast Asia, governance considerations are often subordinated to geopolitical strategy.


Section 8: Soft Law vs. Hard Law

All of the global AI governance instruments described in the preceding sections are examples of soft law: normative frameworks that create expectations and exert political pressure but that are not legally binding and cannot be directly enforced. This is the normal condition of international AI governance, and it raises an important question: what can soft law accomplish, and when do we need binding law?

The case for soft law is not merely pragmatic — it is genuinely substantive. Soft law instruments can be agreed to more quickly than binding treaties, because governments are more willing to commit to non-binding frameworks than to enforceable legal obligations. They can be more flexible, adapting more easily to new circumstances than rigid treaty regimes. They can bring in actors — particularly the private sector — that would not be subject to formal treaty obligations. And they can establish normative expectations that may, over time, be translated into binding law, a process sometimes called "hardening."

The history of international human rights law provides instructive precedents. The Universal Declaration of Human Rights, adopted in 1948, was explicitly non-binding — a declaration of principles rather than a treaty. It had no direct enforcement mechanism. And yet it has shaped international law, domestic constitutions, judicial decisions, and political discourse for more than seven decades. The norms it established have been incorporated into binding treaties, national constitutions, and regional human rights systems in ways that the drafters could not have anticipated. The UDHR demonstrates that non-binding normative frameworks can have profound and lasting effects on how power is exercised and constrained.

The case against exclusive reliance on soft law is also compelling. Soft law instruments are routinely ignored by governments when compliance would be costly or politically inconvenient — which is precisely the situation in which enforcement is most needed. The history of voluntary corporate social responsibility commitments in other domains — environmental protection, labor rights, anti-corruption — is one of companies making commitments they do not keep and facing no meaningful consequence. Without binding enforcement, the most powerful actors have the weakest incentives to comply.

The analogy to climate governance is instructive, if somewhat disheartening. International climate governance spent decades in the soft law phase — voluntary commitments, aspirational targets, declaratory frameworks — before the Paris Agreement established nationally determined contributions as a formal legal framework. Even with the Paris Agreement's legal structure, enforcement remains limited and many countries have failed to meet their stated commitments. The lesson is not that soft law is useless but that soft law alone is insufficient for managing problems that involve powerful economic interests and genuine costs of compliance.

For AI, the treaty-making challenge is formidable. The diversity of national AI governance philosophies makes consensus on substantive commitments very difficult. The pace of technological change makes it hard to design treaty provisions that won't be obsolete within years of entry into force. The dual-use character of AI — the same technology that enables beneficial applications also enables harmful ones — makes categorical prohibition strategies largely ineffective. And the US-China geopolitical competition makes the kind of bilateral agreement that anchored nuclear arms control essentially impossible in the current environment.

Realistic progress in AI governance likely requires a portfolio approach: hard law at the level where it is achievable (EU regulation, national laws, sectoral frameworks), soft law norms at the international level where hard law is not yet possible, technical standards that create de facto requirements without formal legal status, multi-stakeholder processes that build shared understanding across sectors and geographies, and sustained civil society advocacy that holds governments and companies accountable to their stated commitments.


Section 9: Civil Society in Global AI Governance

The governance frameworks examined in this chapter share a structural weakness: they are almost entirely composed of governmental and corporate actors. The people most likely to be harmed by AI systems — historically marginalized communities, workers subject to algorithmic management, individuals in countries with weak rule of law, people affected by AI-enabled surveillance — have almost no voice in the international governance processes ostensibly designed to protect them.

This representation gap is not accidental. International governance processes are expensive to participate in. Diplomacy happens in expensive cities — Geneva, Paris, Brussels, Washington — and requires organizational capacity that most civil society groups lack. Language is a barrier: the dominant languages of international AI governance are English and French, with some Spanish, and the perspectives of communities speaking Arabic, Swahili, Hindi, Indonesian, or Portuguese are systematically underrepresented.

The civil society organizations that do participate in global AI governance are concentrated in a handful of wealthy countries and tend to reflect the priorities of their funders, which are often European or North American foundations. Organizations like Access Now, Algorithm Watch, AI Now Institute, the European Digital Rights network (EDRi), and the Centre for AI and Digital Policy do important work and have influenced major governance processes. But they represent a narrow slice of the affected public — primarily urban, educated, digitally connected, and from wealthy democracies.

The inclusion problem is not merely procedural — it shapes the substance of governance. When Global South communities are not represented in AI governance processes, the resulting frameworks tend to address the AI harms most visible in wealthy countries (e.g., bias in credit scoring, deepfakes, job displacement from automation) while underaddressing the harms most acute in developing countries (e.g., AI-enabled surveillance deployed by authoritarian governments, data extraction without benefit-sharing, AI systems that don't work for local populations). The priorities of governance are shaped by who is at the table.

Some reform efforts are underway. The UN AI Advisory Body made explicit efforts to include members from the Global South. Several civil society AI governance networks are explicitly designed around Southern perspectives. UNESCO's implementation methodology includes efforts to engage national civil society in assessing AI governance readiness. But these are incremental improvements to a structural problem. Genuine inclusion of affected communities in AI governance would require sustained financial support for Southern civil society AI work, translation and accessibility of governance documents, remote participation options, and genuine willingness by powerful actors to share agenda-setting power.


Section 10: The Path Forward

What would genuine, effective global AI governance require? The honest answer is: more than is currently politically achievable. But that answer should not license complacency. International governance of transformative technology always begins in inadequacy and works toward something better — imperfectly, slowly, with many failures, but with genuine progress over time.

The realistic near-term agenda for global AI governance includes several elements. Technical standards — agreed specifications for how AI systems should be documented, tested, and evaluated — represent perhaps the most achievable near-term goal. Standards bodies including ISO, IEC, and IEEE are actively developing AI standards, and the process of standards development, while imperfect, is more technical than political and therefore somewhat more tractable. Mandatory incident reporting — requirements that AI developers report significant failures, accidents, or misuse — would create the empirical database needed for evidence-based governance while remaining less politically contentious than substantive regulation. International regulatory cooperation — agreements between national AI regulatory authorities to share information, coordinate investigations, and avoid creating regulatory arbitrage opportunities — can build the institutional relationships needed for more ambitious coordination.
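The incident-reporting element of that agenda is easy to picture as a data schema. The sketch below is a hypothetical minimal report record, assuming fields (system name, incident type, severity, jurisdictions) that a reporting regime might plausibly require; no actual regulator's schema is being reproduced here.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIIncidentReport:
    """Hypothetical minimal fields for a mandatory AI incident report."""
    system_name: str
    developer: str
    incident_type: str                 # e.g. "failure", "accident", "misuse"
    severity: str                      # e.g. "low", "medium", "high"
    description: str
    affected_jurisdictions: list = field(default_factory=list)
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        """Serialize the report for submission to a (hypothetical) registry."""
        return json.dumps(asdict(self), indent=2)

report = AIIncidentReport(
    system_name="resume-screener-v2",
    developer="ExampleCorp",
    incident_type="failure",
    severity="high",
    description="Systematic score deflation for one applicant group.",
    affected_jurisdictions=["EU", "US"],
)
print(report.to_json())
```

Even a schema this simple illustrates why incident reporting is politically tractable: it asks developers to disclose facts, not to change behavior, while building the shared evidence base that substantive regulation would need.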

Over the medium term, the most promising pathway to effective global AI governance may run through the EU AI Act's extraterritorial effects. As the world's most comprehensive AI regulatory framework, applied to the world's largest single market, the EU AI Act creates compliance requirements for any AI company that wants access to European customers. Major AI companies — American, Chinese, and others — are already restructuring their governance processes to meet EU AI Act requirements. If those requirements are effectively designed and enforced, the Brussels Effect will raise global AI governance standards more effectively than any diplomatic process, because it operates through market mechanisms rather than political will.

The role of business in global AI governance is contested but important. Large technology companies have enormous resources and technical expertise that international governance processes lack. They also have obvious interests in governance outcomes and powerful incentives to shape governance to serve those interests. The history of industry participation in governance processes — in pharmaceutical regulation, financial services, environmental policy — is mixed at best: industry expertise has improved the technical quality of regulation, while industry lobbying has repeatedly weakened the protective effectiveness of regulation. The solution is not to exclude industry from governance but to design processes that incorporate industry expertise while structurally limiting industry's ability to capture governance outcomes.

For business professionals, the practical implications of global AI governance's current state are several. Organizations operating across jurisdictions need to understand that different markets have different governance requirements, and that the EU AI Act, in particular, has extraterritorial effects that apply to their products even outside the EU. They should engage constructively with governance processes — both because those processes will shape the rules they operate under and because genuine engagement can improve governance quality. They should resist the temptation of ethics washing — making public commitments to AI governance principles without backing them with meaningful organizational change — a practice that is becoming increasingly detectable and increasingly costly to reputation. And they should recognize that the most sustainable business strategy in an increasingly regulated global AI environment is not to minimize compliance but to build genuine governance capacity that can adapt as requirements evolve.

The gap between international AI commitments and national AI realities, with which this chapter began, is real and troubling. But it is not fixed. Governance gaps have been closed before — in nuclear safety, in aviation, in international trade — through sustained multilateral effort. The question for AI governance is not whether the gap can be closed but whether the will to close it can be built before the harms of ungoverned AI make the cost of waiting intolerably high.


Summary

Global AI governance is simultaneously necessary, insufficient, and actively developing. The major international frameworks — the OECD AI Principles (2019), the UNESCO Recommendation (2021), the G7 Hiroshima AI Process (2023), the Global Partnership on AI, and the UN AI Advisory Body — have established important normative foundations and created forums for governance deliberation. But they are all voluntary, their implementation is uneven, and they systematically underrepresent the communities most vulnerable to AI-related harm.

The three-bloc dynamic — EU regulation-first, US market-first, China state-control — shapes global AI norms through market power, technological diffusion, and political influence in ways that formal governance processes often cannot match. The Brussels Effect, through which EU regulation becomes de facto global regulation through market mechanisms, may be the most effective current mechanism for raising global AI governance standards. Soft law instruments play an important normative role but are insufficient alone; the path toward more effective governance requires a portfolio of technical standards, regulatory cooperation, binding national and regional law, and sustained civil society engagement.

The representation gap in current global AI governance — the systematic underrepresentation of Global South nations, civil society organizations, and affected communities — is not merely procedural. It shapes the substance of governance in ways that consistently disadvantage the most vulnerable. Addressing this gap is both a justice imperative and a practical requirement for governance that actually works.


This chapter continues with Case Study 1: The Brussels Effect, Case Study 2: AI Governance in Africa, Key Takeaways, Exercises, Quiz, and Further Reading.