Case Study: The EU AI Act Negotiation
"The AI Act was not written by a single visionary. It was forged in the collision between competing visions of what artificial intelligence should be allowed to do." — Dragos Tudorache, MEP, co-rapporteur for the AI Act
Overview
The European Union's Artificial Intelligence Act — formally adopted in 2024 — was the product of three years of political negotiation, thousands of proposed amendments, unprecedented lobbying, and a last-minute scramble to address technologies that did not exist when the legislative process began. The final text represents not a single coherent philosophy but a layered compromise among institutions, member states, industry groups, and civil society organizations with fundamentally different priorities.
This case study traces the negotiation from the Commission's 2021 proposal through the trilogue to the final agreement, examining the political dynamics, key fault lines, and strategic choices that shaped the world's first comprehensive AI law.
Skills Applied:

- Analyzing the political economy of regulation
- Identifying how institutional interests shape legislative outcomes
- Evaluating the gap between a regulation's ambitions and its compromises
- Understanding how technological change disrupts legislative processes
Act I: The Commission Proposal (April 2021)
The Draft
On April 21, 2021, the European Commission published its "Proposal for a Regulation laying down harmonised rules on Artificial Intelligence" — better known as the AI Act proposal. The draft was the culmination of three years of policy development, building on the High-Level Expert Group's 2019 Ethics Guidelines and the 2020 White Paper on AI.
The proposal's central innovation was its risk-based classification system:
- Unacceptable risk: Certain AI practices would be prohibited outright — social scoring by governments, subliminal manipulation, exploitation of vulnerable groups.
- High risk: AI systems used in specified critical domains (biometric identification, critical infrastructure, education, employment, essential services, law enforcement, migration, justice) would face extensive requirements: conformity assessments, risk management systems, data governance, transparency, human oversight, and accuracy standards.
- Limited risk: AI systems with specific transparency risks (chatbots, emotion recognition, deepfakes) would be subject to disclosure requirements.
- Minimal risk: All other AI systems would face no mandatory requirements.
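The four tiers function as a top-down decision rule: a system falls into the first tier it matches, and each tier carries a distinct compliance consequence. The sketch below illustrates that logic only; the category labels, membership sets, and `classify` function are invented for this case study, and the Act's actual tests run to many pages of legal criteria.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment plus ongoing obligations"
    LIMITED = "disclosure duties"
    MINIMAL = "no mandatory requirements"

# Illustrative membership tests only, drawn from the examples in the text above.
PROHIBITED_PRACTICES = {"social scoring", "subliminal manipulation",
                        "exploitation of vulnerable groups"}
HIGH_RISK_DOMAINS = {"biometric identification", "critical infrastructure",
                     "education", "employment", "essential services",
                     "law enforcement", "migration", "justice"}
TRANSPARENCY_RISK_USES = {"chatbot", "emotion recognition", "deepfake"}

def classify(practice, domain=None, use=None):
    """Return the first tier the described system matches, checked top-down."""
    if practice in PROHIBITED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if use in TRANSPARENCY_RISK_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("CV screening", domain="employment"))   # RiskTier.HIGH
print(classify("customer assistant", use="chatbot"))   # RiskTier.LIMITED
```

Note that the ordering of the checks matters: a prohibited practice stays prohibited even if it also occurs in a high-risk domain, which mirrors the tiered structure of the proposal.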
The proposal was deliberately conservative in scope. It regulated AI applications, not AI technology. It focused on high-risk use cases rather than attempting to govern the development process itself. And it placed primary responsibility on "providers" — the entities that developed AI systems and placed them on the market — rather than on the far larger number of "deployers" who used AI systems in practice.
Initial Reactions
The proposal drew praise for its ambition and criticism for its gaps — sometimes from the same stakeholders.
Industry generally supported the risk-based approach (preferable to blanket regulation) but objected to the breadth of the high-risk category, arguing that many listed applications posed minimal real-world risk. The Business Software Alliance warned that the definition of "AI system" was too broad, potentially capturing conventional software.
Civil society organizations, led by groups like Access Now, EDRi (European Digital Rights), and AlgorithmWatch, argued the proposal did not go far enough. They demanded a complete ban on biometric mass surveillance, stronger provisions for affected individuals to challenge AI decisions, and mandatory fundamental rights impact assessments.
Member states split along predictable lines. France, with its ambitious AI industry policy, pushed for lighter regulation of innovation. Germany, with its stronger data protection tradition, supported stricter requirements. Eastern European states, including Hungary and Poland, resisted provisions they feared would constrain government use of surveillance technology.
Act II: The Parliament's Position (2022-2023)
Strengthening the Framework
The European Parliament, generally the most rights-protective of the EU institutions, adopted its negotiating position in June 2023. The Parliament's version significantly strengthened the Commission's proposal:
Biometric surveillance. The Parliament voted for a near-total ban on real-time remote biometric identification in publicly accessible spaces — going well beyond the Commission's proposal, which had allowed exceptions. Parliament members argued that facial recognition technology was fundamentally incompatible with the right to privacy in public spaces, regardless of the purpose.
Foundation models. In the most significant departure from the Commission's draft, the Parliament introduced a new regulatory category: "foundation models" (later renamed "general-purpose AI" (GPAI) models in the final text). The change was directly prompted by the explosion of large language models in late 2022 and early 2023: ChatGPT was released in November 2022, some nineteen months after the Commission published its proposal. The Parliament recognized that the Commission's application-focused approach could not adequately address models designed to be adapted for countless purposes.
Transparency and accountability. The Parliament strengthened transparency requirements, including provisions for individuals to be informed when they are subject to an AI decision and to receive explanations of the decision's logic. It also pushed for a database of high-risk AI systems to be publicly accessible.
Fundamental rights. The Parliament added a mandatory fundamental rights impact assessment for deployers of high-risk AI systems — a provision absent from the Commission's proposal.
The ChatGPT Disruption
The impact of ChatGPT on the negotiations cannot be overstated. When the Commission drafted its proposal in 2020-2021, the dominant AI policy concern was algorithmic decision-making in high-stakes contexts — hiring, criminal justice, credit scoring. Foundation models that could generate human-quality text, code, and images were not part of the policy landscape.
ChatGPT's launch in November 2022 changed everything. Suddenly, EU legislators faced questions the proposal was not designed to address: Who is responsible when a foundation model generates misinformation? How should copyright interact with AI training data? What obligations should apply to a model that can be used for virtually any purpose?
The Parliament scrambled to incorporate foundation model provisions. The process was compressed and imperfect — members acknowledged that they were writing rules for a technology they were still learning to understand. But the alternative — leaving foundation models entirely unregulated while the rest of the AI ecosystem was governed — was unacceptable.
Act III: The Council's Position
National Interest and Law Enforcement
The Council of the EU, representing member state governments, adopted a more industry-friendly and security-conscious position:
Biometric surveillance. The Council insisted on broader exceptions for law enforcement use of real-time biometric identification. Member state interior ministries, responsible for policing and national security, argued that a blanket ban would prevent legitimate crime-fighting uses. France, which deployed AI-assisted video surveillance at the 2024 Paris Olympics under a temporary national law, was particularly vocal.
Foundation models. The Council was skeptical of the Parliament's foundation model provisions, which some member states viewed as premature and potentially harmful to European AI companies. Several member states — particularly France, Germany, and Italy, home to Mistral AI, Aleph Alpha, and other European AI startups — argued that overly strict GPAI requirements would disadvantage European companies relative to US and Chinese competitors.
Enforcement flexibility. The Council generally favored giving member states more discretion in implementation and enforcement, resisting provisions that would standardize enforcement across the EU.
Act IV: The Trilogue (October-December 2023)
Thirty-Seven Hours
The trilogue — the three-way negotiation between the Commission, Parliament, and Council — began in October 2023 and culminated in the marathon December sessions. Three fault lines dominated:
Fault Line 1: Biometric surveillance. The Parliament's near-total ban versus the Council's broad law enforcement exceptions. The compromise: real-time remote biometric identification in public spaces is prohibited in principle, but law enforcement may use it for a specified list of serious offenses (including terrorism, trafficking, and sexual exploitation), subject to prior judicial or independent administrative authorization, temporal and geographic limitations, and notification of the relevant supervisory authorities.
Fault Line 2: Foundation models / GPAI. The debate centered on the stringency of obligations for GPAI providers. The Commission proposed transparency requirements for all GPAI models, with additional obligations for models posing "systemic risk." The Parliament wanted broad obligations; the Council wanted narrow ones. The compromise: all GPAI providers must comply with transparency requirements (including technical documentation, copyright compliance, and training data summaries). Models exceeding a specified compute threshold (10^25 FLOPs) are presumed to pose systemic risk and face additional obligations — adversarial testing, systemic risk assessment, incident reporting, and cybersecurity measures.
Fault Line 3: SME burden. Industry concerns about compliance costs for small and medium enterprises produced several concessions: reduced documentation requirements for SMEs, access to regulatory sandboxes, and extended implementation timelines.
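The 10^25 FLOP threshold in Fault Line 2 is a bright-line test on cumulative training compute. As a back-of-the-envelope illustration, training compute is often approximated with the common rule of thumb of roughly 6 FLOPs per model parameter per training token; this heuristic and the function names below are assumptions for illustration, not anything the Act prescribes, and an actual filing would rest on measured compute.

```python
# Systemic-risk presumption threshold from the trilogue compromise: 10^25 FLOPs.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params, n_tokens):
    """Rule-of-thumb estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params, n_tokens):
    """True if estimated training compute meets or exceeds the threshold."""
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# A 70B-parameter model trained on 2T tokens: 6 * 7e10 * 2e12 = 8.4e23 FLOPs.
print(presumed_systemic_risk(7e10, 2e12))    # False: below the threshold
# A 1.8T-parameter model trained on 10T tokens: ~1.1e26 FLOPs.
print(presumed_systemic_risk(1.8e12, 1e13))  # True: presumed systemic risk
```

The arithmetic shows why the threshold was contentious: in 2023 only the very largest frontier training runs crossed it, so where the line sat determined which providers faced the additional obligations.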
The 22-Hour Session
The final trilogue session, which began on the afternoon of December 8, 2023, lasted twenty-two hours — running through the night and into the following morning. Negotiators worked through the remaining contentious provisions paragraph by paragraph, fueled by coffee and the political pressure of reaching agreement before the end of the legislative term.
Key decisions made in the final hours included:

- The specific list of offenses justifying law enforcement use of real-time biometric identification
- The compute threshold for the systemic risk classification
- The timeline for the Act's phased implementation
- The structure of the enforcement architecture (a new European AI Office within the Commission, plus national authorities in each member state)
At approximately 11:00 a.m. on December 9, the co-rapporteurs announced a political agreement.
Assessment: What the Negotiation Reveals
The Limits of Legislative Foresight
The AI Act's most significant lesson may be that technology moves faster than legislation. A proposal designed for algorithmic decision-making systems had to be retrofitted — mid-negotiation — to address foundation models. The GPAI provisions, written under time pressure and technological uncertainty, are widely acknowledged as the Act's least mature component.
The Power of Institutional Position
Each EU institution brought its institutional DNA to the negotiation. The Commission sought a workable framework that would demonstrate EU leadership without provoking industry backlash. The Parliament championed fundamental rights, reflecting its role as the directly elected body. The Council protected member state prerogatives, particularly in law enforcement and national security. The final text bears the imprint of all three positions, and, inevitably, the internal contradictions that such layering produces.
The Lobbying Landscape
The AI Act attracted unprecedented lobbying. OpenAI's CEO Sam Altman visited Brussels multiple times. Industry associations representing US Big Tech, European AI startups, and traditional industries all made their cases. Civil society organizations mounted sustained advocacy campaigns. The final text shows traces of all these influences: provisions that reflect industry concerns (regulatory sandboxes, SME exemptions) alongside those reflecting civil society demands (the mandatory fundamental rights impact assessment, the publicly accessible high-risk database).
Discussion Questions
- Was the AI Act's risk-based classification system the right approach? Could the EU have achieved its goals with a simpler framework — for example, a prohibition-only approach (banning specific practices without regulating the rest) or a comprehensive-obligation approach (imposing baseline requirements on all AI systems)?
- The GPAI provisions were written under extreme time pressure to address a technology that legislators were still learning to understand. What are the risks of regulating a fast-moving technology through legislation rather than through more flexible instruments (guidelines, standards, codes of conduct)? What are the benefits?
- The biometric surveillance compromise reflects a genuine tension between security and privacy. France's deployment of AI-assisted video surveillance at the 2024 Olympics, authorized under a temporary national law that stopped short of facial recognition, offered a real-world test of how such safeguards work in practice. Research this deployment and evaluate: did the safeguards prove adequate?
- If you were advising a small European AI startup, how would you plan for compliance with the AI Act? What would you prioritize, and what risks would you accept?
Your Turn: Mini-Project
Option A: Research the positions of three specific stakeholders during the AI Act negotiations (e.g., Access Now, DigitalEurope, the French government). Write a one-page analysis for each, comparing their stated priorities to the final text. Which stakeholder's position is most reflected in the law?
Option B: The AI Act's phased implementation means different provisions take effect at different times. Create a detailed timeline of implementation milestones and, for each, identify the key compliance actions organizations must take. Assess whether the timeline provides sufficient preparation time.
Option C: Draft a 1,000-word memo to a hypothetical European AI company, explaining how the AI Act affects their operations and recommending a compliance strategy. Assume the company develops a high-risk AI system in one of the Annex III domains.
References
- European Commission. "Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)." COM(2021) 206 final, April 21, 2021.
- European Parliament. "Amendments adopted by the European Parliament on 14 June 2023." P9_TA(2023)0236.
- Veale, Michael, and Frederik Zuiderveen Borgesius. "Demystifying the Draft EU Artificial Intelligence Act." Computer Law Review International 22 (2021): 97–112.
- Engler, Alex. "The EU AI Act Will Have Global Impact, but a Lot of Uncertainty." Brookings Institution, June 2023.
- Bertuzzi, Luca. "AI Act: A Timeline." Euractiv, regularly updated.
- Access Now. "EU Artificial Intelligence Act: Analysis and Recommendations." Brussels, 2023.
- Floridi, Luciano. "The European Legislation on AI: A Brief Analysis of Its Philosophical Approach." Philosophy & Technology 34 (2021): 215–222.