Case Study 1: The EU AI Act --- From Proposal to Law


Introduction

On April 21, 2021, the European Commission published a proposal for a regulation "laying down harmonised rules on artificial intelligence." What followed was one of the most consequential and contentious legislative processes in recent EU history --- a three-year journey that transformed a 108-page draft regulation into the world's first comprehensive AI law. The EU AI Act's legislative path reveals how democracies negotiate the tension between technological innovation and fundamental rights, and how lobbying, political compromise, and unexpected technological developments shape the rules that govern AI.

For business leaders, the story of the EU AI Act's passage is not academic history. It is a template for understanding how AI regulation develops, how corporate engagement shapes outcomes, and how to anticipate the regulatory frameworks that will govern AI systems for the next decade.


The Commission's Proposal (April 2021)

The European Commission's original proposal was grounded in two strategic objectives: protecting fundamental rights and promoting the EU as a hub for trustworthy AI. The proposal drew on several years of preparatory work, including the High-Level Expert Group on AI's Ethics Guidelines for Trustworthy AI (2019) and the Commission's White Paper on AI (2020).

The Risk-Based Architecture

The proposal's central innovation was its risk-based classification framework --- the four-tier pyramid that would become the Act's defining feature:

  • Unacceptable risk: A small number of AI practices banned outright, including social scoring and subliminal manipulation
  • High risk: AI systems in sensitive domains (healthcare, employment, credit scoring, law enforcement) subject to extensive requirements
  • Limited risk: Transparency obligations for chatbots and deepfakes
  • Minimal risk: No specific requirements for low-risk AI applications

This architecture was designed to be proportionate --- imposing the heaviest regulatory burden only on the AI systems most likely to cause serious harm. The Commission explicitly sought to avoid regulating AI as a monolithic technology, recognizing that a spam filter and a criminal justice algorithm pose fundamentally different risks.
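As a rough illustration, the four-tier triage can be sketched as a simple lookup. The practice and domain lists below are hypothetical simplifications invented for this sketch; the Act's actual scoping rules (the Article 5 prohibitions, the Annex III high-risk categories, and their exemptions) are far more detailed.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # extensive requirements (sensitive domains)
    LIMITED = "limited"            # transparency obligations (chatbots, deepfakes)
    MINIMAL = "minimal"            # no specific requirements

# Hypothetical, heavily simplified category lists for illustration only.
BANNED_PRACTICES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_DOMAINS = {"healthcare", "employment", "credit_scoring", "law_enforcement"}
TRANSPARENCY_ONLY = {"chatbot", "deepfake_generator"}

def classify(use_case: str) -> RiskTier:
    """Rough first-pass triage of an AI use case into the four tiers."""
    if use_case in BANNED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_ONLY:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("credit_scoring").value)  # high
print(classify("spam_filter").value)     # minimal
```

The point of the sketch is the proportionality logic: a spam filter and a credit-scoring algorithm fall through to very different obligations.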

What the Original Proposal Did Not Address

The 2021 proposal had notable gaps that would become the focus of intense debate:

  • General-purpose AI systems (foundation models, large language models) were not specifically addressed. The proposal focused on AI systems deployed for specific purposes, not on the underlying models that could be adapted for many uses.
  • Biometric surveillance exceptions were broader than civil society groups considered acceptable. The proposal permitted real-time remote biometric identification for law enforcement in certain circumstances.
  • Environmental impact of AI systems was not addressed.
  • Enforcement was largely delegated to member states, with limited Commission oversight.

The Parliamentary Debate (2022-2023)

The European Parliament's consideration of the AI Act was shaped by two simultaneous forces: the normal legislative process of committee review and amendment, and the extraordinary emergence of generative AI as a public phenomenon.

The ChatGPT Effect

When OpenAI released ChatGPT in November 2022, the EU AI Act was already deep in legislative review. ChatGPT's explosive popularity --- reaching 100 million users within two months --- forced legislators to confront a category of AI system that the original proposal had not adequately addressed: general-purpose AI models capable of performing a wide range of tasks.

The timing was both fortunate and challenging. Fortunate, because legislators could update the Act before its passage rather than amending it afterward. Challenging, because regulating a technology that was evolving in real-time required legislators to make judgments about capabilities and risks that were genuinely uncertain.

The Parliament's Internal Market Committee and Civil Liberties Committee, jointly responsible for the Act, spent months developing a framework for general-purpose AI. The result was a tiered approach:

  • All GPAI providers would face transparency requirements, including disclosure of training data summaries and compliance with copyright law
  • GPAI models posing systemic risk (initially defined by a cumulative training-compute threshold of 10^25 floating-point operations, or FLOPs) would face additional requirements: adversarial testing, systemic risk assessment, incident reporting, and cybersecurity obligations

This GPAI framework was not in the Commission's original proposal. It was created entirely in response to the emergence of ChatGPT and its successors.
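The two tiers can be illustrated with a back-of-envelope check. The "6 FLOPs per parameter per training token" estimate below is a widely used rule of thumb for training compute, not the Act's measurement method, and the helper names are invented for this sketch.

```python
SYSTEMIC_RISK_FLOPS = 1e25  # threshold in the final Act; the Commission may also designate models

def training_flops(n_params: float, n_tokens: float) -> float:
    # Rule-of-thumb estimate: ~6 FLOPs per parameter per training token.
    return 6.0 * n_params * n_tokens

def gpai_tier(flops: float, designated: bool = False) -> str:
    if designated or flops >= SYSTEMIC_RISK_FLOPS:
        # Additional duties: adversarial testing, systemic risk assessment,
        # incident reporting, cybersecurity obligations.
        return "GPAI with systemic risk"
    # Baseline duties: transparency, training-data summary, copyright compliance.
    return "GPAI"

# A hypothetical 70B-parameter model trained on 15 trillion tokens:
flops = training_flops(70e9, 15e12)   # ~6.3e24, just under the threshold
print(gpai_tier(flops))               # GPAI
print(gpai_tier(flops, designated=True))  # GPAI with systemic risk
```

Note how close a frontier-scale model can sit to the bright line, which is precisely why the fixed threshold proved controversial.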

The Biometric Surveillance Debate

The most politically charged debate concerned the use of real-time remote biometric identification (essentially, live facial recognition) in public spaces. The battle lines were clear:

For a complete ban: The European Parliament's negotiators, supported by a coalition of over 200 civil society organizations, argued that real-time facial recognition in public spaces was fundamentally incompatible with democratic freedoms. They cited evidence of racial bias in facial recognition systems, the chilling effect on free assembly and protest, and the disproportionate impact on marginalized communities.

For limited exceptions: EU member state governments (represented in the Council of the EU), particularly France, Italy, and Hungary, argued that law enforcement agencies needed the ability to use facial recognition for specific serious crimes --- terrorism, kidnapping, human trafficking. They proposed a framework with judicial authorization requirements and time limits.

The final compromise permitted real-time biometric identification only for three specific purposes: searching for victims of kidnapping, trafficking, or sexual exploitation; preventing a specific and imminent terrorist threat; and locating or identifying a person suspected of specific serious crimes listed in an annex. All uses require prior judicial authorization (except in genuinely urgent cases, where authorization must be sought within 24 hours). Many civil society organizations, which had campaigned for a complete ban, regarded this compromise as a significant concession to member-state governments.


The Lobbying Landscape

The EU AI Act attracted unprecedented lobbying activity from the technology sector. Corporate Europe Observatory and other transparency organizations documented extensive engagement by major technology companies, industry associations, and, to a lesser extent, civil society organizations.

Industry Positions

Large technology companies generally sought narrower definitions of high-risk AI, broader exceptions for general-purpose AI, and longer implementation timelines. Individual companies varied in their specific positions:

  • Some US-based AI companies argued that GPAI regulations would disadvantage European companies (an argument that puzzled observers, given that the largest GPAI developers were US-based)
  • European technology companies were divided: some welcomed harmonized rules that would replace fragmented national approaches; others feared competitive disadvantage relative to less-regulated jurisdictions
  • Chinese technology companies operating in the EU were notably quiet in public lobbying, though they participated through industry associations

Industry associations played a central coordinating role. DigitalEurope, the Computer & Communications Industry Association (CCIA), and the Information Technology Industry Council (ITI) submitted detailed position papers advocating for risk-proportionate regulation, regulatory sandboxes, and harmonization of standards.

Civil society organizations --- including Access Now, the European Digital Rights initiative (EDRi), Algorithm Watch, and the AI Now Institute --- consistently pushed for stronger protections, broader prohibitions, and more robust enforcement. They were generally better organized than in previous tech regulation debates, having learned from the GDPR experience.

The Foundation Model Debate

The most intense lobbying battle concerned the regulation of foundation models (GPAI with systemic risk). The key fault lines were:

Open-source exceptions. Some companies and researchers argued that open-source AI models should be exempt from GPAI requirements, on the grounds that open-source development promotes transparency and that imposing compliance costs on open-source projects would undermine the open-source ecosystem. Others argued that the risk posed by a model depends on its capabilities, not its licensing model, and that an open-source model capable of generating dangerous content is no less dangerous for being open-source.

The final Act exempted open-source GPAI models from most requirements unless they pose systemic risk or are used in high-risk applications. Open-source models that meet the systemic risk threshold (10^25 FLOPs or Commission designation) are not exempt.

The computational threshold. The 10^25 FLOPs threshold for systemic risk was controversial. Critics argued that a fixed computational threshold was a crude proxy for actual risk and would quickly become obsolete as training efficiency improved. Supporters argued that a bright-line threshold provided regulatory clarity. The final Act included both the computational threshold and the ability for the Commission to designate models based on other criteria, providing flexibility.

Downstream liability. A critical question was how liability should be allocated between GPAI providers (who build the model) and deployers (who use it in specific applications). The final Act imposed transparency and documentation obligations on providers, enabling deployers to assess risks and comply with high-risk requirements. This "supply chain" approach to AI governance was one of the Act's most innovative features.
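This supply-chain allocation can be pictured as a hand-off of documentation from provider to deployer. The data-model and field names below are invented for illustration and do not track the Act's annexes; they only show the mechanic of provider transparency enabling downstream risk assessment.

```python
from dataclasses import dataclass, field

@dataclass
class ProviderDocs:
    """Illustrative subset of what a GPAI provider passes downstream."""
    model_name: str
    training_data_summary: str
    known_limitations: list[str] = field(default_factory=list)
    evaluation_results: dict[str, float] = field(default_factory=dict)

@dataclass
class Deployment:
    """A deployer combines provider documentation with its own use-case assessment."""
    docs: ProviderDocs
    use_case: str
    high_risk: bool

    def missing_inputs(self) -> list[str]:
        # A high-risk deployer needs provider material to support its own
        # risk assessment and compliance obligations.
        gaps = []
        if self.high_risk and not self.docs.evaluation_results:
            gaps.append("provider evaluation results")
        if self.high_risk and not self.docs.known_limitations:
            gaps.append("documented model limitations")
        return gaps

d = Deployment(
    docs=ProviderDocs("some-model", "web corpus summary"),
    use_case="credit_scoring",
    high_risk=True,
)
print(d.missing_inputs())  # the deployer cannot assess risk without these
```

The design point: obligations attach at each link, so a gap upstream surfaces as a compliance gap downstream, which is what makes vendor contracts and procurement checklists part of AI governance.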


The Trilogue Negotiations (June-December 2023)

The final stage of the legislative process --- the "trilogue" negotiations between the European Parliament, the Council of the EU, and the European Commission --- took place between June and December 2023. The negotiations were marked by marathon sessions, with the final agreement reached on December 8, 2023, after roughly 36 hours of negotiation over the final days.

Key Compromises

Biometric surveillance: The compromise described above --- limited exceptions with judicial authorization.

GPAI tiering: The three institutions agreed on the two-tier approach (all GPAI + systemic risk GPAI), with the 10^25 FLOPs threshold and Commission designation authority.

SME provisions: The final Act included several provisions intended to reduce the burden on small and medium-sized enterprises: regulatory sandboxes, proportionate compliance requirements, and reduced penalties.

Implementation timeline: A phased implementation over 36 months, giving companies time to prepare. Prohibitions on unacceptable-risk practices applied first (6 months), followed by GPAI rules (12 months), then high-risk requirements (24-36 months).
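The Act entered into force on August 1, 2024, so the phased deadlines above can be roughly mapped onto the calendar with simple date arithmetic. This is a back-of-envelope sketch: the Act's final provisions fix the exact applicability dates, which differ slightly from naive month-counting.

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # the Act entered into force on 1 August 2024

def add_months(d: date, months: int) -> date:
    # Simple month arithmetic; safe here because all dates fall on the 1st.
    y, m = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + y, month=m + 1)

milestones = {
    "prohibitions on unacceptable-risk practices": add_months(ENTRY_INTO_FORCE, 6),
    "GPAI rules": add_months(ENTRY_INTO_FORCE, 12),
    "most high-risk requirements": add_months(ENTRY_INTO_FORCE, 24),
    "remaining high-risk requirements": add_months(ENTRY_INTO_FORCE, 36),
}
for name, when in milestones.items():
    print(f"{when.isoformat()}  {name}")
```

Running this places the prohibition phase in early 2025 and the last high-risk deadlines in 2027, which is the planning horizon compliance teams worked against.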


Implementation Challenges

With the Act adopted, the focus shifted to implementation --- and the challenges became apparent immediately.

Standards Development

The EU AI Act relies heavily on harmonized standards (technical specifications developed by European standardization organizations --- CEN, CENELEC, and ETSI --- that provide a presumption of conformity). As of early 2026, the standardization process was ongoing, with key questions unresolved:

  • How should "accuracy" and "robustness" be measured for different AI system types?
  • What constitutes adequate "technical documentation" for complex AI systems?
  • How should conformity assessment procedures work for AI systems that evolve through continuous learning?

Regulatory Capacity

The Act requires member states to establish or designate national competent authorities and market surveillance authorities with sufficient technical expertise to oversee AI compliance. Many member states faced challenges in attracting AI talent to government roles, where salaries are often a fraction of private-sector compensation.

Extraterritorial Application

The Act applies to any AI system placed on the EU market or whose output is used in the EU, regardless of where the provider is established. This extraterritorial reach creates enforcement challenges for systems developed and deployed outside the EU but affecting EU citizens --- a familiar challenge from GDPR enforcement.

The Innovation Question

Perhaps the most consequential uncertainty is whether the Act will accelerate or decelerate European AI development. Optimists point to the Act's potential to build consumer trust, create a premium market for trustworthy AI, and provide regulatory certainty. Pessimists argue that compliance costs will disproportionately burden European startups, that the GPAI provisions will drive foundation model development to less-regulated jurisdictions, and that the Act's prescriptive requirements will lock in current AI approaches at the expense of future innovation.


Lessons for Business Leaders

The EU AI Act's legislative journey offers several insights for business leaders navigating AI regulation worldwide:

1. Regulation takes years, but it does arrive. The EU AI Act was proposed in 2021 and adopted in 2024. Companies that dismissed it as hypothetical in 2021 had three years of warning. The same dynamic is playing out in the US, Canada, Brazil, and other jurisdictions. The time to prepare is before the law passes, not after.

2. Technology surprises shape regulation. The emergence of ChatGPT mid-legislative-process fundamentally altered the Act's scope. Companies should expect similar dynamics in other jurisdictions: unexpected technological developments will create regulatory responses that could not have been predicted at the start of the legislative process.

3. Engagement matters. Companies that participated constructively in public consultations, technical working groups, and standards development influenced the final text. Companies that simply lobbied against regulation --- without offering constructive alternatives --- were less effective.

4. The supply chain is regulated, not just the deployer. The EU AI Act's approach to GPAI regulation creates obligations throughout the AI value chain, from model developers to system integrators to end deployers. This has implications for vendor contracts, procurement processes, and liability allocation.

5. Standards are where the details live. The Act's high-level requirements will be translated into specific technical standards over the coming years. Companies that participate in standards development --- through CEN, CENELEC, ISO, or national standards bodies --- will help shape the practical meaning of compliance.

6. Regulatory coherence is a competitive advantage. The EU AI Act will influence AI regulation worldwide, just as GDPR influenced global data protection law. Companies that build compliance infrastructure now will be better positioned when other jurisdictions adopt similar frameworks --- which, based on the GDPR precedent, they likely will.


Discussion Questions

  1. Was the EU AI Act's three-year legislative process too slow, appropriate, or too fast given the pace of AI development? How should democratic legislatures balance deliberation with responsiveness to rapidly evolving technology?

  2. The "ChatGPT effect" forced legislators to add GPAI provisions that were not in the original proposal. What does this suggest about the durability of AI-specific legislation? Should AI laws be designed to be technology-neutral, or is technology-specific regulation necessary?

  3. The biometric surveillance compromise permits limited exceptions with judicial authorization. Is this the right balance? Would a complete ban be more appropriate? Would broader exceptions be?

  4. The open-source exemption for GPAI models was one of the most contested provisions. What are the strongest arguments for and against exempting open-source AI models from regulatory requirements?

  5. The Act relies on harmonized standards that have not yet been developed. What risks does this create? How should companies prepare for compliance when the specific technical requirements are still being defined?


This case study connects to Chapter 27's discussion of AI governance frameworks (which provide the organizational infrastructure for EU AI Act compliance), Chapter 25's examination of bias in AI systems (which motivated several of the Act's high-risk requirements), and Chapter 30's practical guidance on responsible AI implementation (which operationalizes the Act's requirements at the organizational level).