
Learning Objectives

  • Analyze AI development as a geopolitical phenomenon shaped by national interests, values, and power structures
  • Compare AI strategies across major global powers and identify the assumptions each strategy embeds
  • Evaluate digital sovereignty and data colonialism as frameworks for understanding AI's global impact
  • Assess AI's impact on the Global South as both a site of extraction and a source of innovation
  • Formulate a position on equitable global AI governance grounded in evidence

"Technology is neither good nor bad; nor is it neutral." — Melvin Kranzberg, historian of technology


Chapter Overview

Imagine two children born on the same day in 2024 — one in Palo Alto, California, and one in Nairobi, Kenya. Both will grow up in a world saturated with artificial intelligence. But the AI systems that shape their lives will be designed in different places, governed by different rules, trained on different data, and optimized for different purposes. The child in Palo Alto will interact with AI products built largely by American companies operating under relatively light regulation. The child in Nairobi will increasingly encounter AI systems built in Silicon Valley, Beijing, or Bangalore — systems that may not reflect her language, her community's values, or her country's priorities, and over which her government has limited leverage.

This chapter is about that asymmetry — and about the many others like it playing out across the globe.

Up to this point in the book, we have largely discussed AI within a single frame: technology developed primarily in the United States and a handful of other wealthy nations, governed (or not) by their domestic policies. That framing is not wrong, but it is dangerously incomplete. AI is not just a technology. It is a geopolitical force — one that is reshaping power relationships between nations, concentrating wealth in new ways, and raising profound questions about who gets to shape the digital infrastructure on which modern societies increasingly depend.

You do not need to be a foreign policy expert to care about this. If you use a social media platform, a search engine, or a generative AI tool, you are already a participant in the global AI landscape. The question is whether you will be a passive participant or an informed one.


Learning Paths

Fast Track (60 minutes): Read sections 19.1, 19.4, 19.5, and 19.7. Complete the Global AI Governance Debate Framework and Project Checkpoint.

Deep Dive (3–3.5 hours): Read all sections, complete all Check Your Understanding prompts, explore both case studies, and add the global perspective layer to your AI Audit Report.


19.1 The AI Geopolitics Map: Who's Building What

If you wanted to understand the global automobile industry in 1960, you would study Detroit, Tokyo, and a handful of European cities. The rest of the world was largely a market for cars, not a maker of them. AI in the mid-2020s has a similar geography: development is dominated by a small number of countries and companies, though that dominance is increasingly challenged, contested, and complicated.

Let us start with a snapshot. As of 2025, AI research and development is concentrated in a remarkably small number of places:

  • The United States is home to most of the world's leading AI companies (Google DeepMind, OpenAI, Anthropic, Meta AI, Microsoft), the majority of top-tier AI research universities, and the world's largest pool of AI venture capital. The U.S. approach has generally prioritized innovation and market-driven development, with regulation arriving slowly and unevenly.

  • China has the world's second-largest (and in some measures, the largest) AI ecosystem. Companies like Baidu, Alibaba, Tencent, and ByteDance invest heavily in AI. The Chinese government has published explicit national AI strategies with specific targets. China has certain structural advantages: an enormous population generating vast amounts of data, a government willing to deploy AI at scale in public services and surveillance, and a growing pool of AI talent.

  • The European Union has positioned itself not as an AI builder but as an AI regulator. The EU's AI Act, finalized in 2024, is the most comprehensive AI regulation in the world. Europe's strategy bets that setting the rules of the game is as powerful as building the game itself.

  • Other significant players include the United Kingdom, Canada, Israel, South Korea, Japan, India, the United Arab Emirates, and Singapore — each with distinct strategies reflecting their resources, values, and strategic interests.

The concept we need here is techno-nationalism — the idea that a nation's technological capabilities are directly tied to its economic competitiveness, military power, and geopolitical influence. Techno-nationalism is not new (think of the Space Race), but AI has intensified it because AI is a "dual-use" technology: the same techniques that power a medical diagnostic tool can power a surveillance system or an autonomous weapon.

📊 Global Perspective: AI Investment by Region

| Region | Share of Global AI Private Investment (2024) | Number of Notable AI Companies | Primary Strategy |
|---|---|---|---|
| United States | ~50% | 4,600+ | Innovation-first, light regulation |
| China | ~25% | 1,400+ | State-directed, rapid deployment |
| European Union | ~8% | 900+ | Regulation-first, rights-based |
| United Kingdom | ~5% | 600+ | Innovation sandbox, balanced |
| Rest of World | ~12% | Varies widely | Highly varied, often import-dependent |

Source: Compiled from Stanford HAI AI Index Report, OECD AI Policy Observatory, and industry analyses. Figures are approximate and shift year to year.

This table reveals an uncomfortable truth: the vast majority of the world's population lives in countries that consume AI systems but have very little say in how those systems are designed, trained, or governed. This is not merely an economic issue. It is a question of power.

🔄 Check Your Understanding: Looking at the table above, what does the concentration of AI investment in just two countries suggest about who gets to shape the "rules" of how AI works? What might be the consequences for countries in the "Rest of World" category?


19.2 The U.S.-China AI Race: Competition and Implications

No discussion of global AI can avoid the relationship between the United States and China. Calling it a "race" is common in media coverage, though some experts argue the metaphor is misleading — it implies a single finish line when in reality these two countries are pursuing different goals in different ways.

The United States has historically relied on a model of private-sector innovation supported by government research funding (through agencies like DARPA, NSF, and NIH) and relatively permissive regulation. This model produced the internet, the smartphone, and most of the foundational AI breakthroughs of the past decade — including transformers, the architecture behind modern large language models. American AI strength is concentrated in a handful of companies with extraordinary resources: as of 2025, the five largest U.S. tech companies spend more on AI research annually than most countries' entire science budgets.

China's approach is different in kind, not just degree. The Chinese government published its "New Generation AI Development Plan" in 2017, explicitly targeting global AI leadership by 2030. The plan coordinates investment across government, state-owned enterprises, and private companies in a way that has no American equivalent. China's advantages include:

  • Scale of data. With over a billion internet users and widespread adoption of mobile payment, ride-hailing, and social media, Chinese companies have access to enormous datasets. Critically, Chinese data protection norms (at least historically) have allowed companies to collect and use data in ways that would face legal challenges in Europe or public backlash in the United States.
  • Speed of deployment. Chinese companies and government agencies have deployed AI systems in urban management, transportation, healthcare, and public security at a pace that outstrips Western counterparts.
  • Government alignment. When the Chinese government identifies AI as a national priority, resources flow in a coordinated way that market-driven systems cannot easily replicate.

But framing this as a simple "race" obscures several important nuances:

First, competition is selective. The U.S. and China are rivals in some AI domains (foundation models, autonomous systems, AI chips) but deeply intertwined in others. Many Chinese AI researchers trained at American universities. American companies manufacture products in Chinese factories using Chinese-made components. The relationship is more accurately described as "competitive interdependence."

Second, the race framing can be used to justify shortcuts. When policymakers argue that "we cannot afford to fall behind China," they sometimes use that urgency to resist regulation, downplay safety concerns, or increase surveillance capabilities. The race metaphor can become a tool for shutting down legitimate debates about how AI should be governed.

Third, other countries get squeezed. When two superpowers frame AI as a zero-sum competition, smaller nations are pressured to choose sides — to adopt Chinese-built infrastructure (like Huawei's 5G networks) or American-dominated platforms (like cloud computing services from Amazon, Google, and Microsoft). This pressure limits the policy space available to countries that might prefer a different approach entirely.

⚖️ Comparison: U.S. vs. China AI Strategies

| Dimension | United States | China |
|---|---|---|
| Primary driver | Private sector + venture capital | Government + state-directed enterprise |
| Regulatory approach | Light, patchwork, sector-specific | Strategic tolerance with state control |
| Key advantage | Talent, research institutions, capital | Data scale, deployment speed, coordination |
| Key vulnerability | Regulatory fragmentation, inequality | International trust, semiconductor access |
| Approach to AI ethics | Industry self-regulation + emerging laws | State-defined values, social stability focus |
| Export strategy | Platform dominance (Google, Meta, Microsoft) | Infrastructure export (smart cities, surveillance) |

Let us connect this to one of our anchor examples. CityScope Predict, the predictive policing system from Chapter 1, operates within a single American city. But the underlying technology — algorithmic prediction of human behavior based on historical data — is being exported globally. Chinese companies have sold "smart city" and "safe city" packages to governments across Africa, Southeast Asia, Latin America, and Central Asia. These systems integrate surveillance cameras, facial recognition, traffic management, and predictive analytics into a single platform. The selling point is efficiency and modernization. The concern is that governments with weak democratic institutions may use these tools to monitor dissent, suppress opposition, or entrench authoritarian control — and that the communities affected have little recourse.

💡 Intuition: Think of the U.S.-China AI relationship not as a sprint with a clear winner, but as two different rivers carving through the same landscape. They flow in different directions, fed by different sources — but they both reshape the terrain for everyone else living in that landscape.

🔄 Check Your Understanding: Why might framing U.S.-China AI relations as a "race" be misleading? Identify one way the race metaphor could lead to poor policy decisions.


19.3 Europe's Third Way: Regulation as Strategy

If the United States leads with innovation and China leads with state-directed deployment, Europe has chosen a third path: regulation as competitive strategy. The idea is deceptively simple. Europe may not have the venture capital of Silicon Valley or the data scale of China, but it does have something those competitors lack: the world's largest single consumer market (the EU has roughly 450 million people with high purchasing power) and a demonstrated willingness to use that market power to set global standards.

This strategy has a name: the Brussels effect. Coined by legal scholar Anu Bradford, it describes the phenomenon whereby EU regulations become de facto global standards because companies find it easier to design one product that complies with the strictest rules rather than maintaining different versions for different markets.

We have already seen the Brussels effect in action with data protection. The EU's General Data Protection Regulation (GDPR), which took effect in 2018, forced companies worldwide to rethink how they handle personal data — not because these companies wanted to, but because losing access to the European market was unthinkable. Apple, Google, and Meta all redesigned products to comply with GDPR, and those redesigned products are often the ones used globally.

The EU AI Act, finalized in 2024 and phasing into enforcement between 2025 and 2027, attempts to replicate this success for artificial intelligence. The Act takes a risk-based approach:

  • Unacceptable risk: Some AI applications are banned outright — including real-time remote biometric surveillance in public spaces (with narrow exceptions for law enforcement), social scoring systems, and AI designed to manipulate behavior.
  • High risk: AI systems used in critical areas like healthcare, education, employment, law enforcement, and migration must meet strict requirements for transparency, human oversight, data quality, and documentation.
  • Limited risk: Systems like chatbots must disclose that users are interacting with AI.
  • Minimal risk: Most AI applications (spam filters, video game AI, etc.) face no additional requirements.
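The tiered structure can be sketched as a simple lookup. Everything below is illustrative: the domain labels and the mapping are hypothetical stand-ins, and the Act's actual scoping turns on legal definitions, not keyword matches.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict transparency, oversight, and documentation duties"
    LIMITED = "disclosure obligations"
    MINIMAL = "no additional requirements"

# Hypothetical mapping from application domains to tiers -- a toy
# stand-in for the Act's legal criteria, not a compliance tool.
TIER_BY_DOMAIN = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "real-time public biometric surveillance": RiskTier.UNACCEPTABLE,
    "hiring": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(domain: str) -> RiskTier:
    """Return the illustrative tier for a domain (default: minimal)."""
    return TIER_BY_DOMAIN.get(domain, RiskTier.MINIMAL)

print(classify("hiring").name)       # HIGH
print(classify("spam filter").name)  # MINIMAL
```

Note that defaulting unknown domains to minimal risk is itself a design choice this sketch makes for simplicity; under the real Act, scope is determined case by case, not by fallback.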

The European approach reflects a specific set of values — particularly the idea that fundamental rights should constrain what technology is permitted to do, even if those constraints slow innovation. This is a genuine philosophical difference from the U.S. approach, which tends to treat innovation as the primary value and regulate only when clear harms emerge.

Critics of the European approach argue that:

  • It risks making Europe an AI consumer rather than an AI producer. If the most innovative companies are in the U.S. and China, Europe may end up regulating technologies it did not build and does not fully understand.
  • Compliance costs may be prohibitive for smaller companies. Large tech companies can absorb the cost of compliance; startups may not be able to.
  • Regulation freezes categories that technology outpaces. By the time the AI Act is fully enforced, the technology may have evolved in ways the legislation does not anticipate.

Defenders counter that:

  • The Brussels effect proves the model works. GDPR succeeded in raising global data protection standards. The AI Act could do the same for AI governance.
  • Innovation without guardrails is not automatically beneficial. The U.S. approach produced both remarkable AI breakthroughs and systems like CityScope Predict that raise serious civil liberties concerns.
  • Rights-based governance builds public trust, which is itself a competitive advantage. If people trust European AI systems more, European companies may benefit in the long run.

🗺️ Global Perspective: The EU's approach has already influenced AI governance proposals in Brazil, India, Canada, and several African nations. Whether this influence represents genuine "regulatory leadership" or a form of European soft power that imposes Western values on other contexts is itself a contested question.

🔄 Check Your Understanding: What is the Brussels effect, and how might it shape global AI governance even in countries outside the EU?


19.4 The Global South: AI Recipients, AI Innovators, or Both?

The term "Global South" is imperfect — it lumps together countries with vastly different economies, political systems, and technological capacities. Nigeria is not the same as Brazil, which is not the same as Indonesia, which is not the same as India. But the term captures something real: a shared experience of being on the receiving end of technological systems designed elsewhere.

For many countries in Africa, Latin America, Southeast Asia, and parts of the Middle East, AI arrives as a finished product. The recommendation algorithms, content moderation systems, credit-scoring tools, and facial recognition platforms that shape daily life were designed in San Francisco or Shenzhen, trained on data that may not represent local populations, and governed by terms of service written in English (or Mandarin) with no local input.

Consider ContentGuard, our content moderation anchor example. A system like ContentGuard operates globally — the same platform, the same moderation rules, applied across dozens of countries and hundreds of languages. But content moderation is inherently cultural. What counts as "hate speech" varies across legal systems and cultural contexts. Satire, political dissent, and religious criticism are treated very differently in the United States, India, Myanmar, and Nigeria. When a single AI system applies a single set of rules across all of these contexts, the results are predictable: the system performs best in the language and cultural context it was primarily trained on (usually English) and worst in the contexts most different from that baseline.

This is not a hypothetical problem. Research has documented that automated content moderation systems are significantly less accurate in Arabic, Burmese, Amharic, and many other languages compared to English. During the 2021 conflict in Ethiopia, Meta's content moderation systems struggled to identify hate speech and incitement in Amharic and Tigrinya — languages spoken by tens of millions of people — contributing to the spread of content that may have fueled real-world violence.
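The disparity the research describes is measurable in principle. As a sketch — using an invented evaluation format of (language, human label, model flag) triples — per-language recall on human-labeled harmful content exposes exactly this skew:

```python
from collections import defaultdict

def recall_by_language(examples):
    """Of posts human reviewers labeled harmful, what fraction did
    the model flag, broken out by language? `examples` yields tuples
    of (language_code, is_harmful, was_flagged)."""
    harmful = defaultdict(int)  # human-labeled harmful posts per language
    caught = defaultdict(int)   # of those, how many the model flagged
    for lang, is_harmful, was_flagged in examples:
        if is_harmful:
            harmful[lang] += 1
            caught[lang] += int(was_flagged)
    return {lang: caught[lang] / harmful[lang] for lang in harmful}

# Toy data shaped like the skew described above: the model catches
# most English harms but misses most Amharic ones.
data = [
    ("en", True, True), ("en", True, True), ("en", True, False),
    ("am", True, False), ("am", True, False), ("am", True, True),
]
print(recall_by_language(data))  # en ~0.67, am ~0.33
```

A real audit would also track false positives — over-removal of benign speech — which harms different communities than missed harmful content does.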

But the Global South is not merely a passive victim of others' technology. This framing, while sometimes accurate, misses important counter-stories:

Africa is producing genuinely innovative AI work. Research labs like the Masakhane NLP collective have built language models for African languages that major tech companies had neglected entirely. Kenya's M-Pesa mobile money system pioneered financial inclusion through technology. Startups in Lagos, Nairobi, Cape Town, and Accra are building AI applications tailored to local contexts — from agricultural advisory tools that use satellite imagery and local crop data to health screening systems designed for settings with limited medical infrastructure.

India has become a global AI services powerhouse, with a massive workforce engaged in data labeling, model training, and AI system maintenance. This positions India simultaneously as an AI producer, an AI consumer, and — critics argue — an AI labor supplier, performing the low-wage "ghost work" that makes high-profile AI systems function.

Latin American countries are developing distinctive regulatory approaches. Brazil's AI governance framework draws on European models but adapts them to local priorities around social inclusion and democratic participation.

The critical question is whether Global South innovation can develop on its own terms — reflecting local needs, values, and priorities — or whether it will be perpetually constrained by the infrastructure, platforms, and investment patterns controlled by wealthier nations.

🔍 Argument Map: The Global South and AI

Claim: Countries in the Global South are primarily consumers of AI, not producers, and this creates a dependency relationship.

Supporting evidence:

  • Most foundation models are developed in the U.S. and China
  • Training large AI models requires compute resources most countries cannot afford
  • Data flows tend to move from the Global South to companies headquartered in the Global North

Complicating evidence:

  • African, Indian, and Latin American researchers are producing innovative, locally relevant AI
  • Some Global South applications (mobile money, agricultural AI) are more advanced than Global North equivalents in their domain
  • The "AI consumer" framing can be paternalistic, erasing real agency

What additional evidence would you need to evaluate this claim fairly?


19.5 Data Colonialism and Digital Sovereignty

Of all the concepts in this chapter, data colonialism may be the most provocative — and the most important to understand, whether or not you ultimately agree with the framing.

The term draws an explicit analogy between historical colonialism and the current dynamics of the global data economy. Here is the argument:

During the colonial era, raw materials — rubber, minerals, agricultural products — were extracted from colonized territories, shipped to imperial centers for processing, and sold back to the colonies as finished goods at a profit. The colonies provided the raw inputs and the captive market; the imperial powers captured the value.

Data colonialism argues that a structurally similar process is happening with data. People in the Global South (and indeed, people everywhere) generate enormous amounts of data through their daily activities — browsing, shopping, communicating, moving through cities. That data flows to technology companies headquartered in a small number of wealthy countries. Those companies process the data into AI models, products, and services that are then sold or deployed back to the populations that generated the data in the first place. The data generators rarely consent to this process in any meaningful way, receive little or no compensation, and have virtually no control over how their data is used.

The analogy is not perfect, and critics raise legitimate objections:

  • The dynamics are different. Colonial extraction involved physical occupation, military force, and enslavement. Data extraction, while exploitative, operates through terms of service, platform dependency, and market dynamics. Conflating the two risks trivializing historical colonialism.
  • Data is non-rivalrous. Unlike a barrel of oil, data can be copied infinitely. The person who generated the data has not "lost" it when a company collects it (though they may have lost control over it).
  • People in wealthy countries are also subject to data extraction. This is not solely a North-South dynamic.

But defenders of the framework argue that:

  • The structural parallels are real and illuminating. The concentration of data processing and AI development in a few countries, the extraction of value from populations who do not benefit proportionally, and the creation of dependency relationships are all features shared with historical colonial patterns.
  • The material impacts are concrete. When a facial recognition system trained primarily on lighter-skinned faces is deployed in an African country, the people misidentified bear the cost of a system they did not design and cannot reform.
  • Naming the pattern is the first step to changing it. Without a framework for understanding these dynamics, they become invisible.

This is where digital sovereignty enters the conversation. Digital sovereignty is the claim that nations (or communities) should have meaningful control over the data generated within their borders, the digital infrastructure that serves their populations, and the AI systems that affect their citizens' lives.

In practice, digital sovereignty takes many forms:

  • Data localization laws require that data generated in a country be stored on servers within that country (or region). India, Russia, China, and several other nations have implemented versions of this.
  • National AI strategies that prioritize building domestic AI capacity rather than relying on imported systems.
  • Language and cultural technology initiatives that ensure AI systems work in local languages and reflect local norms — like the Masakhane project's work on African language AI.
  • Regulatory frameworks that give governments oversight over AI systems deployed within their borders, regardless of where those systems were developed.
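In engineering terms, a data localization requirement often reduces to a routing decision at storage time. A minimal sketch, with hypothetical region names and an invented rule table:

```python
# Hypothetical residency rules: jurisdictions with localization laws
# map to an in-country storage region; all others use a default.
RESIDENCY_RULES = {
    "IN": "in-mumbai",   # India
    "RU": "ru-moscow",   # Russia
    "CN": "cn-beijing",  # China
}
DEFAULT_REGION = "us-east"

def storage_region(jurisdiction: str) -> str:
    """Choose a storage region that satisfies residency rules, if any."""
    return RESIDENCY_RULES.get(jurisdiction, DEFAULT_REGION)

print(storage_region("IN"))  # in-mumbai
print(storage_region("KE"))  # us-east
```

The asymmetry in the default is the political point: data from jurisdictions without localization laws flows, by default, to infrastructure owned elsewhere.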

Digital sovereignty is not without tensions. Data localization can be used by authoritarian governments to control information flows and surveil their citizens. National AI strategies sometimes become vehicles for surveillance rather than inclusion. And in a globally interconnected digital economy, strict data localization can fragment the internet in ways that reduce the benefits of global information sharing.

⚠️ Critical Lens: The debate over data colonialism illustrates a pattern we have seen throughout this book: technology is never just technology. It is embedded in economic systems, power structures, and historical legacies. Understanding AI requires understanding these contexts — and that understanding is itself a form of AI literacy as a civic skill.

🔄 Check Your Understanding: In your own words, explain the concept of data colonialism. Then identify one strength and one weakness of using this framework to understand global AI dynamics.


19.6 Toward Global AI Governance

Here is a question with no easy answer: Who should govern AI on a global scale?

The challenge is immediately apparent. AI systems routinely cross borders. ContentGuard operates in over 100 countries. A large language model trained in San Francisco is used by people in Jakarta, Lagos, and Bogotá. An autonomous vehicle algorithm developed in one country may be deployed on roads in dozens of others. If each country regulates AI independently, companies face a patchwork of incompatible rules. If we try to create global rules, who writes them? Who enforces them? Whose values do they reflect?

This is the AI governance gap — the mismatch between the global reach of AI systems and the national (or at best, regional) scope of existing governance mechanisms.

Let us map the current landscape of global AI governance efforts:

The United Nations has established an AI Advisory Body and published reports calling for inclusive global AI governance. The UN's strength is its legitimacy and universal membership. Its weakness is a lack of enforcement power and the glacial pace of multilateral diplomacy. The 2024 UN General Assembly resolution on AI was a milestone in establishing principles, but principles without enforcement mechanisms remain aspirational.

The OECD published its AI Principles in 2019, endorsed by over 40 countries. These principles — including transparency, accountability, robustness, and human-centredness — have influenced national AI strategies worldwide. The OECD also maintains the AI Policy Observatory, which tracks AI policies across its member states. However, the OECD represents primarily wealthy nations, raising questions about whose perspectives are centered.

The G7 and G20 have both addressed AI governance, with the G7's Hiroshima AI Process (2023) establishing voluntary commitments for AI developers. These forums move faster than the UN but are even less representative.

Multi-stakeholder initiatives like the Global Partnership on AI (GPAI) bring together governments, industry, civil society, and researchers. These bodies can be nimble and innovative but lack formal authority.

Regional approaches like the EU AI Act, the African Union's AI strategy, and ASEAN's AI governance frameworks represent attempts to coordinate governance at a scale larger than the nation-state but smaller than the globe.

None of these mechanisms is sufficient on its own. The honest assessment is that global AI governance is in its infancy — struggling with the same fundamental tensions that have challenged international cooperation in other domains: sovereignty versus coordination, speed versus inclusiveness, and the persistent problem of enforcement.

🔵 Debate Framework: How Should Global AI Be Governed?

Position A: A binding international AI treaty — Similar to arms control agreements, a binding treaty would establish universal rules for AI development and deployment.

  • Strength: Provides clear, enforceable standards applicable everywhere.
  • Challenge: History shows that binding technology treaties are extremely difficult to negotiate, and major powers often resist constraints on their flexibility.

Position B: Regulatory competition and convergence — Let different countries and regions experiment with different approaches. Over time, the most effective approaches will be adopted more widely (like GDPR).

  • Strength: Allows for experimentation and adaptation to local contexts.
  • Challenge: Creates regulatory arbitrage — companies migrate to the jurisdiction with the weakest rules.

Position C: Industry self-regulation with government oversight — AI companies set their own standards through voluntary commitments, with governments intervening only when necessary.

  • Strength: Industry has the technical expertise to write practical rules.
  • Challenge: Industry incentives do not always align with public interest. Voluntary commitments lack enforcement.

Position D: Multi-stakeholder governance — Governance bodies that include governments, companies, civil society, affected communities, and researchers collaborate on shared standards.

  • Strength: Most inclusive; incorporates diverse perspectives.
  • Challenge: Slow, complex, and can be dominated by the best-resourced participants.

Your position: Which approach (or combination) do you find most promising? What values and assumptions underlie your choice?

Consider what meaningful global AI governance would need to address: the compute divide (the gap between countries that can afford the massive computing infrastructure needed to train frontier AI models and those that cannot), the question of whose values are embedded in AI systems that operate across cultural boundaries, the challenge of enforcing rules on companies more powerful than many national governments, and the need for governance mechanisms that can adapt as fast as the technology itself evolves.

There are no simple solutions here. But there is an important principle that connects this global challenge to the theme running through our entire book: AI literacy as a civic skill. Governance decisions about AI — whether made at the local, national, or international level — will be better if the people affected by those decisions understand enough about AI to participate meaningfully. That includes understanding the global dynamics we have explored in this chapter.

💡 Intuition: Think of global AI governance as being at the stage where international aviation was in the 1940s. Planes could cross borders, but the rules for how they did so were still being written. It took decades of negotiation to build the system of international aviation governance we take for granted today. AI governance may require similar patience — and similar ambition.


19.7 Chapter Summary

This chapter has taken us from the geopolitics of AI to the data centers of the Global South and back, covering a lot of terrain. Let us consolidate what we have learned.

AI development is geographically concentrated — and that concentration has consequences. The United States and China together account for roughly three-quarters of global AI investment. This means that the priorities, values, and blind spots of these two countries disproportionately shape the AI systems that the rest of the world uses.

The U.S.-China dynamic is more complex than a simple "race." The two countries have different models (market-driven vs. state-directed), different strengths (talent and capital vs. data scale and deployment speed), and different vulnerabilities. The race framing, while politically useful, can obscure important nuances and be used to justify shortcuts on safety and governance.

Europe's regulatory approach is a genuine strategic alternative. The Brussels effect demonstrates that setting standards can be as powerful as building technology. Whether the EU AI Act will succeed in shaping global norms remains to be seen, but the attempt is historically significant.

The Global South is not just a consumer of AI — it is also a site of genuine innovation. From African language AI to mobile financial inclusion to agricultural technology, Global South innovators are building systems tailored to local needs. But structural barriers — the compute divide, data flows, and investment patterns — constrain what is possible.

Data colonialism and digital sovereignty are frameworks for understanding global AI power dynamics. These concepts are contested and imperfect, but they illuminate real patterns of extraction, dependency, and unequal benefit that operate in the global data economy.

Global AI governance is in its infancy. Existing mechanisms — from the UN to the OECD to regional frameworks — are insufficient for governing AI systems that routinely cross borders. Closing the AI governance gap will require new institutions, new forms of cooperation, and the meaningful inclusion of communities that are currently excluded from decision-making.

📋 Key Concepts Introduced in This Chapter

| Concept | Definition |
| --- | --- |
| Techno-nationalism | The linking of a nation's technological capabilities to its economic competitiveness and geopolitical power |
| Brussels effect | The phenomenon whereby EU regulations become de facto global standards due to the EU's market power |
| Data colonialism | A framework arguing that global data extraction patterns mirror structural features of historical colonialism |
| Digital sovereignty | The principle that nations or communities should control the data, infrastructure, and AI systems affecting their citizens |

🔁 Spaced Review

Before moving on, let us revisit key ideas from earlier chapters:

From Chapter 9 (Bias and Fairness): How might the biases we discussed in Chapter 9 be amplified when an AI system trained in one cultural context is deployed in a very different one? Give a specific example.

From Chapter 13 (Governing AI): In Chapter 13, we explored governance frameworks at the national level. How does the challenge of governing AI change when we move from the national to the global scale? What new difficulties emerge?

From Chapter 17 (AI and Justice): Chapter 17 explored how AI intersects with justice within a single society. How do the justice concerns from that chapter manifest differently when we consider AI's impact across the Global South?


🎯 Project Checkpoint: AI Audit Report — Step 19

Your task: Analyze your chosen AI system's global reach and cross-cultural implications.

  1. Global footprint. Is your AI system deployed in multiple countries? If so, which ones? If you are not sure, research whether the company that operates it has an international presence.

  2. Cultural context. Was the system designed with a specific cultural, linguistic, or national context in mind? How might it perform differently in other contexts?

  3. Governance landscape. Which country's laws primarily govern how this system operates? Would the EU AI Act classify it as high-risk? Does it comply with GDPR? Are there countries where its use might be restricted or where it operates in a regulatory vacuum?

  4. Power analysis. Apply the data colonialism framework to your system. Does data flow from users in one country to a company headquartered elsewhere? Who captures the value? Who bears the risks?

  5. Recommendations. Based on your analysis, what changes would you recommend to make this system more equitable across different global contexts?

Add this global perspective section (400–600 words) to your AI Audit Report.