Case Study 1: The Brussels Effect — How the EU Shapes Global AI Regulation

The Phenomenon

In 2012, Columbia Law School professor Anu Bradford published an article in the Northwestern University Law Review arguing that a little-understood regulatory mechanism was making the European Union one of the most powerful rule-makers in the world. She called it the Brussels Effect. The core insight was this: because global companies find it cheaper to standardize their products and processes to the strictest applicable standard than to maintain multiple versions for different markets, the regulatory standards of the largest, strictest market tend to become de facto global standards. And the world's strictest major regulatory market, in domain after domain — product safety, food standards, chemicals, data protection — was the European Union.

Bradford's 2020 book, "The Brussels Effect: How the European Union Rules the World," demonstrated the phenomenon across multiple sectors. In data protection, her most famous example, the EU's General Data Protection Regulation effectively became a global standard not because other governments adopted GDPR but because multinational companies found it operationally simpler to apply GDPR standards to all users globally than to maintain different data handling practices for European and non-European users. The result was that billions of people outside the EU — Americans, Indians, Brazilians, Nigerians — received GDPR-standard data protections because of the market power of 450 million European consumers, not because their own governments required it.

The Brussels Effect in AI governance is now underway, accelerated by the EU AI Act's entry into force in August 2024. Understanding it is essential for any business professional navigating global AI compliance.

The EU AI Act: A Brief Structural Overview

To understand the Brussels Effect in AI, one needs to understand what the EU AI Act actually requires. The Act adopts a risk-tiered approach that classifies AI systems into four categories: prohibited AI practices, high-risk AI systems, limited-risk AI systems, and minimal-risk AI systems.

Prohibited AI practices — representing the Act's most aggressive position — include real-time remote biometric identification systems in publicly accessible spaces for law enforcement (with narrow exceptions), AI systems that manipulate persons through subliminal techniques or exploit vulnerabilities, social scoring systems (by public and private actors alike), and AI systems used to categorize persons based on biometric data to infer their race, political opinions, trade union membership, religious beliefs, or sexual orientation. These prohibitions took effect in February 2025.

High-risk AI systems — the Act's most demanding regulated category — include AI systems used in biometric identification and categorization, critical infrastructure, education and vocational training, employment and worker management (including CV sorting and performance evaluation), access to essential private and public services (including credit scoring), law enforcement, migration and border control, and administration of justice. Operators of high-risk AI systems must comply with requirements including mandatory conformity assessment, technical documentation, human oversight mechanisms, accuracy and robustness standards, and registration in a public EU database.

These requirements begin applying in August 2026 for most high-risk systems, though general-purpose AI model requirements began applying in August 2025. Penalties for non-compliance can reach €35 million or 7% of global annual turnover, whichever is higher, for violations of the prohibited-practices rules, and €15 million or 3% of global annual turnover, again whichever is higher, for most other violations.
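The penalty structure above can be made concrete with a short sketch. This is purely illustrative arithmetic (not legal advice): the Act caps fines at the higher of a fixed amount and a share of worldwide annual turnover, and the category names below are labels chosen for this example.

```python
# Illustrative sketch of the EU AI Act's penalty caps as described in the text.
# Actual fines are set case by case by regulators; these are only the maximums.

PENALTY_CAPS = {
    "prohibited_practice": (35_000_000, 0.07),  # violations of prohibited-AI rules
    "other_violation": (15_000_000, 0.03),      # most other obligations
}

def max_fine(category: str, global_turnover_eur: float) -> float:
    """Maximum possible fine in EUR: the HIGHER of the flat cap and the turnover share."""
    flat_cap, turnover_share = PENALTY_CAPS[category]
    return max(flat_cap, turnover_share * global_turnover_eur)

# For a company with EUR 2 billion in turnover, a prohibited-practice violation
# is capped at 7% of turnover (EUR 140 million), which exceeds the flat cap.
print(max_fine("prohibited_practice", 2_000_000_000))
# For a smaller firm with EUR 100 million in turnover, the EUR 15 million
# flat cap exceeds 3% of turnover, so the flat cap governs.
print(max_fine("other_violation", 100_000_000))
```

The "whichever is higher" structure means the turnover-based cap dominates for large firms, while the flat cap sets the ceiling for smaller ones.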

The Extraterritorial Scope

The critical feature of the EU AI Act for global business is its explicitly extraterritorial scope. The Act applies not only to providers and deployers of AI systems located in the EU but to any provider placing AI systems on the EU market regardless of where they are established, and any provider or deployer located outside the EU when the output produced by the AI system is used in the EU.

This scope is broader, in some respects, than the GDPR's already-significant extraterritorial reach. An American company that develops an AI hiring tool and sells it worldwide may be subject to EU AI Act requirements if some of its customers use the tool to screen candidates for EU-based positions. A Chinese company whose AI translation service is used by EU-based businesses is potentially subject to the Act's limited-risk transparency requirements. An Indian software firm that builds AI components that become part of high-risk systems deployed in the EU faces compliance obligations it may not have anticipated.
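The scope triggers described above can be sketched as a simple decision check. This is a hypothetical simplification for illustration: real applicability turns on detailed legal definitions of "provider," "deployer," and "placing on the market," and the field names here are invented for the example.

```python
# Hypothetical sketch of the EU AI Act's territorial-scope logic as summarized
# in the text: the Act may apply if ANY of these triggers is present.

from dataclasses import dataclass

@dataclass
class AISystemContext:
    provider_in_eu: bool = False        # provider established in the EU
    deployer_in_eu: bool = False        # deployer established in the EU
    placed_on_eu_market: bool = False   # system placed on the EU market
    output_used_in_eu: bool = False     # system's output is used in the EU

def act_may_apply(ctx: AISystemContext) -> bool:
    """True if any scope trigger described in the text is present."""
    return (
        ctx.provider_in_eu
        or ctx.deployer_in_eu
        or ctx.placed_on_eu_market
        or ctx.output_used_in_eu
    )

# A US hiring-tool vendor with no EU establishment, whose customers use the
# tool to screen candidates for EU-based roles:
us_vendor = AISystemContext(output_used_in_eu=True)
print(act_may_apply(us_vendor))
```

The "output used in the EU" trigger is what makes the last condition bite for companies with no EU presence at all, which is why the scope surprises firms that assume physical establishment determines jurisdiction.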

The practical effect of this extraterritorial scope — combined with the market power of the EU — is that companies serving global markets are effectively required to design their AI systems to EU standards if they want access to European customers. And because building two separate products — one EU-compliant and one not — is operationally complex and expensive, many companies will simply apply EU standards globally.

Evidence of the Brussels Effect in Action

The evidence that the Brussels Effect is operating in AI governance is already substantial, even before the EU AI Act's major requirements have fully taken effect.

Major AI developers restructured their governance well in advance of the Act's effective dates. By 2024, leading AI companies including Google, Microsoft, OpenAI, and Amazon had all hired dedicated EU AI Act compliance personnel, established internal AI governance functions, and begun developing documentation processes aligned with the Act's technical documentation requirements. These organizational changes are not limited to their EU operations — they are being implemented globally, because AI development processes are global.

Product design decisions are being made with EU AI Act requirements in mind. Several AI companies publicly disclosed that they had declined to release certain features in the EU due to compliance uncertainty under the Act — but the design decisions made during that process (about transparency features, human oversight mechanisms, and risk documentation) were applied to their global products. The tail is wagging the dog: EU compliance requirements are shaping product design decisions for global deployment.

The conformity assessment process required by the EU AI Act for high-risk systems is creating a new category of AI auditing infrastructure — independent conformity assessment bodies, AI audit firms, and AI governance documentation standards — that will serve global markets. KPMG, Deloitte, and several specialized AI audit firms have established EU AI Act practices that they are marketing to clients globally, helping companies build governance infrastructure that meets EU standards regardless of where those companies are headquartered or primarily operate.

Standard-setting bodies, including ISO and CEN-CENELEC, are developing AI standards that align with EU AI Act requirements. These standards, once published, will be cited in global procurement contracts, insurance requirements, and vendor due diligence processes regardless of whether the EU AI Act directly applies — the same dynamic that has caused ISO standards aligned with EU product safety requirements to become global baseline expectations.

The Limits of the Brussels Effect

Understanding the Brussels Effect requires understanding its limits as well as its reach. It is a powerful mechanism, but it is not omnipotent, and it operates unevenly across sectors and contexts.

The Brussels Effect requires that companies actually value EU market access. For the largest global AI companies — Google, Microsoft, OpenAI, Amazon, Meta — the EU market is large enough that they cannot credibly threaten to exit it. For smaller companies that primarily serve domestic markets in countries far from the EU, the Brussels Effect may be weak or nonexistent. A Chinese AI company that primarily serves Chinese customers, with some additional business in African and Southeast Asian markets, may have limited incentive to comply with EU AI Act standards. A Brazilian fintech primarily serving Brazilian consumers faces similarly limited exposure.

The Brussels Effect also operates more powerfully for consumer-facing products than for enterprise software or embedded systems. Companies making consumer AI products want access to EU consumers and will adapt their products accordingly. Companies selling AI components to other businesses face a more complex calculus, depending on whether those downstream businesses face EU AI Act obligations.

The Act's enforcement is uncertain. The Brussels Effect depends in part on the credible threat that non-compliance will be detected and sanctioned. EU data protection enforcement under GDPR was initially slow and inconsistent — the Irish Data Protection Commission, which regulates most major US tech companies because of their EU headquarters in Dublin, faced sustained criticism for slow enforcement. If EU AI Act enforcement is similarly slow or inconsistent, the compliance incentive weakens. The new AI Office within the European Commission is taking an active approach, but building enforcement capacity for a complex new regulatory framework takes time.

Finally, the Brussels Effect does not fully capture the governance decisions made by non-market mechanisms. Surveillance AI deployed by authoritarian governments, AI systems used by state security services, and AI applications in sectors where EU market access is not relevant are largely outside the Brussels Effect's reach. The EU AI Act may raise global standards for consumer AI while leaving AI-enabled authoritarianism and state surveillance largely unaffected.

Implications for Global Businesses

For businesses operating across multiple jurisdictions, the Brussels Effect in AI governance creates both obligations and strategic opportunities. The obligations are clear: any company with EU market exposure must take EU AI Act compliance seriously, including extraterritorial provisions that may apply to AI development activities outside the EU. The strategic opportunities are perhaps less obvious.

Companies that invest early and seriously in EU AI Act compliance build governance infrastructure that can adapt as other jurisdictions develop their own AI regulations. The EU AI Act's risk classification framework, technical documentation requirements, and human oversight standards are likely to influence other regulatory frameworks — Canada's proposed AIDA legislation, Brazil's AI bill, and emerging frameworks in Asia and Latin America are all showing EU influence. Building compliance capacity for the EU AI Act is, in a sense, building compliance capacity for the next decade of global AI regulation.

Companies that treat EU AI Act compliance as a governance investment rather than a compliance burden can use it to improve their AI systems in ways that create sustainable competitive advantage. The technical documentation the Act mandates forces organizations to understand their AI systems better — what data they were trained on, what they can and cannot do, where they fail, and how they can be monitored. Companies that develop this understanding systematically are better positioned to identify and address problems before they become reputational or legal crises.

The Brussels Effect also creates advocacy opportunities. Because EU regulation shapes global standards, companies that engage constructively with EU policy processes — contributing technical expertise, identifying practical implementation challenges, and helping regulators understand how AI systems actually work — have disproportionate influence over global regulatory trajectory. This advocacy role requires genuine engagement with substantive questions, not merely lobbying for weaker requirements, and it carries reputational risks if it is perceived as captured by industry interests. But thoughtful, transparent engagement with EU AI governance processes is both good citizenship and good business strategy.

The Limits of Market-Driven Governance

The Brussels Effect is real and significant, but it is worth being clear about what it is not. It is not a substitute for democratic governance — it is a mechanism by which the regulatory choices of one democratic jurisdiction shape the behavior of actors in other jurisdictions, without those actors having had democratic input into those regulatory choices. From a democratic legitimacy perspective, this is imperfect: billions of people in non-EU countries are de facto governed by EU AI regulations they had no role in shaping.

The Brussels Effect also depends on the EU making good regulatory choices. If EU AI Act requirements are poorly calibrated — too onerous in ways that impede beneficial AI applications, or too permissive in ways that allow harmful ones — those poor calibrations will be exported globally along with the Act's beneficial provisions. The stakes of EU regulatory quality are therefore higher than they would be if the EU were simply regulating its own market.

And the Brussels Effect is fundamentally a market mechanism: it works through corporate compliance with commercial requirements. This means it captures the AI governance challenges that matter to European consumers and regulators while potentially missing the challenges that most acutely affect people outside the EU's regulatory vision. The governance of AI-enabled authoritarianism, the ethics of AI data extraction from the Global South, and the governance of AI systems deployed in contexts far from European regulatory oversight are not the EU AI Act's primary concerns, and the Brussels Effect will not address them.

For all its limitations, however, the Brussels Effect represents perhaps the most effective mechanism currently operating to raise global AI governance standards. Until binding international agreements become achievable — a prospect that remains distant — the EU's combination of market power and regulatory ambition is doing more work in global AI governance than any formal international institution. Understanding it is not optional for anyone operating in the global AI landscape.


Discussion Questions:

1. How does the Brussels Effect compare to treaty-based mechanisms as a tool for global AI governance? What are the advantages and disadvantages of each?

2. A mid-sized AI company based in Singapore sells AI recruitment software to clients in Southeast Asia, the Middle East, and increasingly, Europe. At what point does EU AI Act compliance become a business priority? What factors should drive that determination?

3. Some critics argue that the Brussels Effect amounts to regulatory imperialism — the imposition of European values and regulatory preferences on the rest of the world without democratic consent. How would you evaluate this critique?