Learning Objectives

  • Trace the legislative history of the EU AI Act from proposal through political negotiation to final text
  • Explain the risk-based classification system and assign AI applications to their appropriate risk tier
  • Identify the AI practices the Act prohibits outright and articulate the rationale for each prohibition
  • Describe the requirements imposed on providers and deployers of high-risk AI systems
  • Analyze the Act's treatment of general-purpose AI models and the concept of systemic risk
  • Evaluate the Brussels Effect as it applies to AI governance and assess the Act's global influence

Chapter 21: The EU AI Act and Risk-Based Regulation

"Artificial intelligence is not good or evil in itself. It all depends on how it is used. We want to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly." — European Commission, AI Act press release (2024)

Chapter Overview

In December 2023, after roughly thirty-seven hours of negotiation spread over three days — including a twenty-two-hour opening session that stretched through the night — the European Parliament and Council reached a political agreement on the Artificial Intelligence Act. The legislation, formally adopted in 2024, represents the most ambitious attempt by any jurisdiction to regulate artificial intelligence through binding law.

The AI Act is important not only for what it requires but for what it represents: a political decision that certain uses of AI are too dangerous to permit, that others must meet minimum standards before deployment, and that the companies building AI systems must be answerable for the consequences. Whether you agree with every provision or not, the Act sets terms that AI developers, deployers, and users worldwide must now reckon with.

This chapter examines the AI Act in depth — its structure, its requirements, its exceptions, and its implications. We'll pay particular attention to how the Act's risk-based approach works in practice and what it means for organizations like VitraMed that deploy AI in high-stakes domains.


21.1 The Road to the AI Act

21.1.1 The Policy Origins

The AI Act did not emerge from a vacuum. It reflects more than a decade of European policy development on artificial intelligence:

  • 2018: The European Commission published its Communication on AI, signaling that regulation was coming.
  • 2019: The High-Level Expert Group on AI published its Ethics Guidelines for Trustworthy AI, establishing seven key requirements: human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity and non-discrimination, societal and environmental well-being, and accountability.
  • 2020: The Commission released its White Paper on AI, proposing a risk-based regulatory approach and soliciting public comment.
  • April 2021: The Commission published its legislative proposal — the first draft of the AI Act.
  • 2022-2023: The European Parliament and Council developed their respective negotiating positions, with the Parliament pushing for stricter provisions (particularly on biometric surveillance and foundation models) and the Council seeking more flexibility for law enforcement and national security.
  • December 2023: Political agreement reached after final trilogue negotiations.
  • 2024: Formal adoption, publication in the Official Journal, and the beginning of a phased implementation timeline.

21.1.2 The Political Negotiation

The AI Act's final text reflects intense political compromise. Three fault lines dominated the negotiations:

Biometric surveillance. The Parliament pushed for a complete ban on real-time remote biometric identification in public spaces. The Council, representing member state governments, insisted on exceptions for law enforcement. The compromise permits real-time biometric identification by law enforcement only for a limited number of specified serious offenses, subject to prior judicial authorization and other safeguards.

Foundation models and general-purpose AI. When the Commission published its proposal in 2021, ChatGPT did not yet exist. The explosive growth of large language models and other foundation models in 2022-2023 forced negotiators to address a category of AI that the original proposal did not contemplate. The result was a new tier of obligations for "general-purpose AI models," with heightened requirements for models posing "systemic risk."

Innovation and competitiveness. Throughout the negotiations, industry voices warned that overly strict regulation would push AI development outside the EU, ceding competitive advantage to the US and China. The Act includes several provisions designed to address these concerns: regulatory sandboxes for AI testing, reduced obligations for small and medium enterprises, and a focus on "high-risk" applications rather than AI technology in general.

Power Asymmetry: The legislative process reveals a power asymmetry within democratic governance itself. Industry lobbying groups spent an estimated 100 million euros influencing the AI Act's provisions (Corporate Europe Observatory, 2024). Civil society organizations, with a fraction of those resources, had to fight for provisions protecting fundamental rights — and won some battles (transparency obligations, fundamental rights impact assessments) while losing others (law enforcement exceptions to the biometric ban).

Dr. Adeyemi, who followed the negotiations closely and submitted an expert opinion to the European Parliament's IMCO committee, offered her students a characteristically nuanced assessment: "The AI Act is neither the revolution its proponents promised nor the catastrophe its critics predicted. It is a political compromise — messy, imperfect, and incomplete. But it is also a decision. A democratic society decided that AI systems must meet minimum standards. That decision, however imperfect its implementation, matters enormously."


21.2 The Risk-Based Classification System

The AI Act's most distinctive feature is its risk-based approach. Rather than regulating all AI systems equally, the Act classifies AI applications into four risk tiers, with obligations increasing as risk increases.

21.2.1 The Four Tiers

┌──────────────────────────────────────────────┐
│        UNACCEPTABLE RISK (Prohibited)        │
│  Social scoring, manipulative AI, untargeted │
│  scraping for facial recognition databases   │
├──────────────────────────────────────────────┤
│                  HIGH RISK                   │
│  AI in critical infrastructure, education,   │
│  employment, law enforcement, migration,     │
│  health, democratic processes                │
├──────────────────────────────────────────────┤
│                 LIMITED RISK                 │
│  Chatbots, deepfakes, emotion recognition    │
│  (transparency obligations only)             │
├──────────────────────────────────────────────┤
│                 MINIMAL RISK                 │
│  AI-enabled video games, spam filters,       │
│  inventory management                        │
│  (no specific obligations)                   │
└──────────────────────────────────────────────┘

Minimal risk: The vast majority of AI systems — spam filters, AI-powered recommendation engines for content, AI in video games, AI-optimized inventory management — fall into this category and face no specific obligations under the Act. The Act explicitly states that it does not intend to regulate AI technology per se, only AI applications that pose defined risks.

Limited risk: AI systems that interact with individuals in ways that require transparency. The primary obligation is disclosure: users must be informed that they are interacting with an AI system (e.g., a chatbot), that content has been generated by AI (e.g., a deepfake), or that an emotion recognition or biometric categorization system is in operation.

High risk: AI systems deployed in domains where errors or biases can cause significant harm to fundamental rights, health, or safety. This is the Act's most detailed tier, and we'll examine it in Section 21.4.

Unacceptable risk: AI practices prohibited outright. These are examined in the next section.
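
Taken together, the tiers form a decision procedure: check the most severe category first and fall through to minimal risk. The sketch below illustrates that logic in Python; the use-case sets and names are hypothetical simplifications for teaching purposes, not the Act's legal tests.

```python
# Minimal sketch of the four-tier triage logic. The use-case sets below are
# hypothetical and non-exhaustive; real classification is a legal analysis.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high risk"
    LIMITED = "limited risk (transparency only)"
    MINIMAL = "minimal risk (no specific obligations)"

PROHIBITED_USES = {"social scoring", "untargeted face scraping",
                   "workplace emotion recognition"}
HIGH_RISK_USES = {"cv screening", "credit scoring", "patient risk scoring",
                  "exam proctoring", "predictive policing"}
TRANSPARENCY_USES = {"chatbot", "deepfake generation"}

def classify(use_case: str) -> RiskTier:
    """Check the most severe tier first, then fall through to minimal risk."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("patient risk scoring").value)  # high risk
print(classify("spam filtering").value)        # minimal risk (no specific obligations)
```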

21.2.2 The Logic of Risk-Based Regulation

Why organize regulation around risk rather than technology?

Proportionality. A one-size-fits-all approach would impose identical requirements on a spam filter and a medical diagnostic system. Risk-based regulation calibrates obligations to potential harm.

Technology neutrality. By regulating applications and uses rather than specific technologies, the Act avoids becoming obsolete as AI techniques evolve.

Political viability. A total ban on AI or a comprehensive licensing requirement for all AI systems would have been politically impossible. The risk-based approach allowed regulators to focus enforcement resources on the highest-stakes applications.

Limitations. Risk-based regulation requires accurate risk classification, which is itself a contested exercise. Where do you draw the line between "limited" and "high" risk? What happens when a minimal-risk system is repurposed for a high-risk application? These boundary questions are inherent to the approach.

Mira, reading the Act for a class assignment, immediately saw the relevance to VitraMed. "Dad's risk scoring system takes patient health data and generates a risk score that influences clinical decisions. If that's not high-risk AI, I don't know what is."


21.3 Prohibited Practices: The Unacceptable Risk Tier

The AI Act's most powerful provisions are its outright bans. Article 5 prohibits the following AI practices:

21.3.1 Social Scoring

The Act prohibits AI systems used for social scoring — the evaluation or classification of individuals over time based on their social behavior or personal characteristics, where the resulting score leads to detrimental treatment that is unjustified, disproportionate, or unrelated to the context in which the data was originally generated. The Commission's 2021 proposal limited this ban to public authorities; the final text extends it to private actors as well.

Rationale: Social scoring systems treat people as the sum of their data points, denying dignity and autonomy. They create chilling effects on behavior, as people modify their conduct not because of genuine conviction but because of fear that a data point might lower their score.

Scope: The prohibition is narrower than it may first appear. Context-specific scoring systems — credit scores, insurance risk scores, employer monitoring scores — are not prohibited per se, though many are regulated as high-risk AI systems when they fall within Annex III use-case categories.

Connection: This distinction between generalized social scoring and context-specific scoring connects to a broader debate: if scoring that follows people across contexts is unacceptable because it reduces individuals to data points and creates coercive behavior modification, why should functionally similar commercial scoring be acceptable? We examined this question in Chapter 13. The AI Act's answer — banning generalized scoring while regulating context-specific scoring as high-risk — reflects a political compromise as much as a principled distinction.

21.3.2 Manipulative and Deceptive AI

The Act prohibits AI systems that deploy subliminal techniques, manipulative or deceptive practices, or exploit vulnerabilities (age, disability, social or economic situation) to materially distort behavior in ways that cause significant harm.

Rationale: This provision targets AI systems designed to undermine autonomous decision-making — systems that manipulate rather than inform. It connects directly to the dark patterns discussion in Chapter 4: AI-powered persuasion systems that exploit psychological vulnerabilities to drive purchases, political behavior, or other actions against the individual's genuine interests.

21.3.3 Untargeted Scraping for Facial Recognition

The Act prohibits the creation of facial recognition databases through untargeted scraping of images from the internet or CCTV footage. This provision directly targets practices like those of Clearview AI, which built a database of billions of facial images scraped from social media without consent.

21.3.4 Emotion Recognition in Workplaces and Education

The Act prohibits the use of emotion recognition systems in workplace and educational settings, with narrow exceptions for medical or safety purposes.

21.3.5 Real-Time Remote Biometric Identification

The Act prohibits real-time remote biometric identification in publicly accessible spaces for law enforcement purposes — with exceptions. Law enforcement may use such systems for:

  • Searching for specific victims of kidnapping, trafficking, or sexual exploitation
  • Preventing a specific, substantial, and imminent threat to life or a foreseeable terrorist attack
  • Identifying a suspect in connection with a specific serious criminal offense

Each use requires prior judicial authorization (except in emergencies, where authorization must be sought within 24 hours), a fundamental rights impact assessment, registration in the EU database, and notification to the relevant market surveillance authority.
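Read mechanically, these safeguards are a conjunction of conditions that must all hold before a single use is lawful. The sketch below models them as boolean gates; the RequestContext fields are hypothetical names, and the Act's actual tests (what counts as "specific" or "imminent") are legal judgments, not boolean flags.

```python
# Sketch of the law-enforcement exception as a conjunction of gates.
# Field names are hypothetical; the real tests are legal, not mechanical.
from dataclasses import dataclass

ALLOWED_PURPOSES = {
    "search for victim of kidnapping, trafficking, or sexual exploitation",
    "prevent specific, substantial, imminent threat to life or terror attack",
    "identify suspect of a listed serious criminal offence",
}

@dataclass
class RequestContext:
    purpose: str
    judicial_authorization: bool     # prior authorization...
    emergency: bool                  # ...or emergency, authorization sought within 24h
    fria_completed: bool             # fundamental rights impact assessment
    registered_in_eu_database: bool
    authority_notified: bool         # market surveillance authority informed

def rbi_use_permitted(ctx: RequestContext) -> bool:
    """All safeguards must hold; any single failure blocks the use."""
    if ctx.purpose not in ALLOWED_PURPOSES:
        return False
    authorized = ctx.judicial_authorization or ctx.emergency
    return all([authorized, ctx.fria_completed,
                ctx.registered_in_eu_database, ctx.authority_notified])
```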

Eli read the exceptions with growing frustration. "So they banned it, except when they didn't ban it. Law enforcement always gets a carve-out, and the definition of 'specific' and 'imminent' is going to be interpreted as broadly as possible. That's what happens with every surveillance exception — it starts narrow and expands."

Sofia Reyes, in a DataRights Alliance briefing that Dr. Adeyemi shared with the class, offered a more measured assessment: "The exceptions are real and troubling. But the baseline ban is also real. Before the AI Act, there was no EU-wide restriction on real-time biometric surveillance at all. Now there's a presumptive prohibition with limited exceptions. That's a genuine structural change, even if the exceptions need constant vigilance."


21.4 High-Risk AI: The Core of the Act

The high-risk tier is where the AI Act does most of its regulatory work. It establishes detailed requirements for AI systems deployed in specified high-stakes domains.

21.4.1 What Qualifies as High-Risk?

High-risk AI systems are identified through two mechanisms:

Annex I: Product safety legislation. AI systems that serve as safety components of products covered by existing EU harmonized legislation (medical devices, machinery, toys, vehicles, etc.) are automatically classified as high-risk.

Annex III: Standalone high-risk use cases. AI systems in the following domains are classified as high-risk:

  • Biometrics: remote biometric identification (non-prohibited uses), biometric categorization, emotion recognition (non-prohibited uses)
  • Critical infrastructure: AI management of electricity, gas, water, heating, road traffic
  • Education and training: AI determining access to education, evaluating learning outcomes, monitoring cheating
  • Employment: AI screening CVs, evaluating candidates, making promotion/termination decisions, allocating tasks, monitoring performance
  • Essential services: AI assessing creditworthiness, setting insurance premiums, evaluating emergency service prioritization
  • Law enforcement: predictive policing, AI-assisted evidence evaluation, profiling
  • Migration and border control: risk assessment of migrants, document verification, visa/permit applications
  • Justice and democracy: AI assisting courts, influencing election outcomes

21.4.2 Requirements for High-Risk AI Systems

Providers of high-risk AI systems must comply with the following requirements:

Risk management system (Article 9). A continuous, iterative process that identifies, evaluates, and manages risks throughout the AI system's lifecycle. The risk management system must consider risks to health, safety, and fundamental rights, and must implement risk mitigation measures.

Data governance (Article 10). Training, validation, and testing datasets must be relevant, sufficiently representative, and as free of errors as possible. Data governance must address potential biases and ensure the data is appropriate for the system's intended purpose.

Connection: This data governance requirement connects directly to Chapter 22's treatment of data quality management. The AI Act doesn't just require good AI — it requires good data. As the principle from Chapter 14 reminds us: biased data in, biased decisions out.
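
One narrow, codeable slice of Article 10 is checking whether training data is representative of the population the system will serve. A minimal sketch follows; the 5% tolerance is a hypothetical threshold, since the Act sets no numeric cutoff.

```python
# Sketch: flag subgroups whose share of the training data deviates from a
# reference population. The tolerance is hypothetical; Article 10 sets no
# numeric threshold.
from collections import Counter

def representativeness_gaps(train_groups: list[str],
                            reference_shares: dict[str, float],
                            tolerance: float = 0.05) -> dict[str, tuple]:
    counts = Counter(train_groups)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = (round(observed, 3), expected)
    return gaps

train = ["A"] * 800 + ["B"] * 150 + ["C"] * 50
print(representativeness_gaps(train, {"A": 0.6, "B": 0.3, "C": 0.1}))
# {'A': (0.8, 0.6), 'B': (0.15, 0.3)} -- group C is within tolerance
```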

Technical documentation (Article 11). Comprehensive documentation must be prepared before the system is placed on the market. Documentation must include a general description of the system, design specifications, development methodology, performance metrics, and a description of the risk management measures implemented.

Record-keeping (Article 12). High-risk AI systems must include automatic logging capabilities to ensure traceability. Logs must record events relevant to identifying risks, facilitate post-market monitoring, and enable auditing.
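
In code, the spirit of Article 12 is a structured, timestamped event trail. A minimal sketch, assuming a hypothetical risk-scoring deployment; the event names and fields are illustrative, not prescribed by the Act.

```python
# Sketch of automatic, structured event logging for traceability.
# Event names and fields are illustrative, not prescribed by the Act.
import json
import logging
import time

logging.basicConfig(filename="ai_system_events.log",
                    level=logging.INFO, format="%(message)s")

def log_event(event_type: str, **details) -> None:
    """Append one timestamped, machine-readable record for later auditing."""
    record = {"ts": time.time(), "event": event_type, **details}
    logging.info(json.dumps(record))

# Each inference and each human intervention becomes an auditable record.
log_event("inference", model_version="1.4.2",
          input_ref="case-4821", risk_score=0.87)
log_event("override", model_version="1.4.2",
          input_ref="case-4821", original_score=0.87,
          overridden_by="clinician-314", reason="known comorbidity")
```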

Transparency and information for deployers (Article 13). Providers must ensure that their systems include clear instructions for deployers, covering intended purpose, level of accuracy, known limitations, specifications for human oversight, and maintenance requirements.

Human oversight (Article 14). High-risk AI systems must be designed to allow effective oversight by natural persons. Human oversight measures must enable the person overseeing the system to fully understand the system's capabilities and limitations, correctly interpret its output, decide not to use the system or override its output, and intervene or halt the system's operation.
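
Article 14's design requirements map naturally onto a wrapper that never acts on raw model output without a review step. A minimal sketch with hypothetical names; real oversight also demands training, authority, and time for the reviewer, none of which code can guarantee.

```python
# Sketch of a human-oversight wrapper: the reviewer can accept the output,
# substitute their own, or halt the system. All names are hypothetical.
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Optional

class Decision(Enum):
    ACCEPT = "accept"
    OVERRIDE = "override"
    HALT = "halt"

@dataclass
class ReviewedOutput:
    raw: float                 # what the model said
    final: Optional[float]     # what the organization acts on (None if halted)
    decision: Decision

def with_oversight(model: Callable[[dict], float],
                   review: Callable[[float, dict], tuple],
                   case: dict) -> ReviewedOutput:
    raw = model(case)
    decision, value = review(raw, case)
    if decision is Decision.HALT:
        return ReviewedOutput(raw, None, decision)   # do not act on the output
    if decision is Decision.OVERRIDE:
        return ReviewedOutput(raw, value, decision)  # human value replaces model's
    return ReviewedOutput(raw, raw, Decision.ACCEPT)
```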

Accuracy, robustness, and cybersecurity (Article 15). Systems must achieve appropriate levels of accuracy, robustness against errors and manipulation, and cybersecurity protection.

21.4.3 Conformity Assessment

Before a high-risk AI system can be placed on the EU market, it must undergo a conformity assessment — a structured evaluation to verify that the system meets all applicable requirements.

For most high-risk AI systems, providers may conduct a self-assessment (internal conformity assessment). However, AI systems involving biometric identification and categorization of natural persons generally require assessment by a notified body (an independent third-party organization designated by a member state), unless the provider has applied relevant harmonized standards in full.

Accountability Gap: The reliance on self-assessment for most high-risk AI systems has drawn significant criticism. Self-assessment places the assessment burden on the entity that built the system and has a financial interest in its deployment. While the Act includes post-market monitoring and market surveillance mechanisms to catch failures, the initial assessment remains largely in the provider's hands. Critics argue this creates an accountability gap comparable to the self-certification model in financial regulation that contributed to the 2008 crisis.

21.4.4 Deployer Obligations

The Act distinguishes between providers (who develop or commission high-risk AI systems) and deployers (who use them). Deployers of high-risk AI must:

  • Use the system in accordance with the provider's instructions
  • Assign human oversight to competent, trained, and authorized individuals
  • Monitor the system's operation for risks
  • Inform the provider and relevant authorities of incidents or malfunctions
  • Conduct a fundamental rights impact assessment (for public bodies and certain private entities)

21.5 General-Purpose AI Models

21.5.1 A Late Addition to the Framework

The treatment of general-purpose AI models (GPAI) — foundation models, large language models, and similar systems that can be adapted to many downstream tasks — was the most contentious element of the AI Act's final negotiations. The original 2021 proposal did not address these models because they did not fit neatly into the use-case-based classification scheme. A general-purpose model is not itself a "high-risk AI system" — it becomes one (or not) only when deployed in a specific context.

The solution was a new category with its own obligations, layered on top of the risk-based classification for specific deployments.

21.5.2 Obligations for All GPAI Models

All providers of general-purpose AI models must (as sketched in code after this list):

  • Prepare and maintain technical documentation, including training and testing processes and evaluation results
  • Provide information and documentation to downstream providers who integrate the model into their own AI systems
  • Establish a policy to comply with EU copyright law, including the text and data mining opt-out provisions
  • Publish a sufficiently detailed summary of the training data used
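
Structurally, these four baseline obligations can be tracked as a simple record with a completeness check. A minimal sketch; the field names are hypothetical, since the Act specifies content, not a schema.

```python
# Sketch: the four baseline GPAI obligations as a record plus a gap check.
# Field names are hypothetical; the Act specifies content, not a schema.
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class GPAIDocumentation:
    technical_documentation: Optional[str]   # training/testing process, evaluations
    downstream_provider_info: Optional[str]  # docs for integrators
    copyright_policy: Optional[str]          # incl. text-and-data-mining opt-outs
    training_data_summary: Optional[str]     # sufficiently detailed public summary

def missing_obligations(doc: GPAIDocumentation) -> list[str]:
    """Names of baseline obligations not yet documented."""
    return [f.name for f in fields(doc) if getattr(doc, f.name) is None]

doc = GPAIDocumentation("tech-report-v1.pdf", None, "copyright-policy.md", None)
print(missing_obligations(doc))
# ['downstream_provider_info', 'training_data_summary']
```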

21.5.3 GPAI Models with Systemic Risk

A GPAI model is classified as posing "systemic risk" if:

  • It has "high impact capabilities," presumed when the cumulative compute used for training reaches 10^25 floating-point operations (a rebuttable presumption), or
  • It is designated as posing systemic risk by the European Commission based on specified criteria
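
The compute presumption is easy to operationalize because training compute can be estimated before training begins. The sketch below uses the common rule of thumb from the scaling-law literature, FLOPs ≈ 6 × parameters × training tokens; that approximation comes from engineering practice, not from the Act itself.

```python
# Sketch of the rebuttable presumption. The 6*N*D rule of thumb is a common
# engineering approximation for dense transformer training, not the Act's.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25   # cumulative training compute

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# A hypothetical 70B-parameter model trained on 15 trillion tokens:
print(f"{estimated_training_flops(70e9, 15e12):.2e}")   # 6.30e+24
print(presumed_systemic_risk(70e9, 15e12))              # False -- below threshold
```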

Providers of GPAI models with systemic risk face additional obligations:

  • Perform model evaluations, including adversarial testing (red-teaming)
  • Assess and mitigate systemic risks, including risks related to major accidents, disruptions to critical sectors, and serious effects on public health, safety, security, or fundamental rights
  • Track, document, and report serious incidents to the AI Office and relevant national authorities
  • Ensure adequate cybersecurity protection for the model and its physical infrastructure

21.5.4 The Compute Threshold Debate

The 10^25 FLOPs threshold for systemic risk classification has been both praised and criticized. Proponents argue that a quantitative threshold provides clarity and predictability. Critics raise several concerns:

  • Obsolescence. Compute capabilities increase rapidly. A threshold set today may be trivially exceeded by routine models within a few years.
  • Compute is not risk. The amount of compute used to train a model is an imperfect proxy for the risks it poses. A highly capable model trained efficiently (using less compute) could pose greater risks than a less capable model trained profligately.
  • Game-ability. Providers might structure training to stay below the threshold, for example by splitting training across multiple runs.

Ray Zhao, who had been following the Act's implications for NovaCorp's AI-powered financial models, offered a corporate perspective during a guest lecture: "The compute threshold makes my compliance team happy because it's measurable. But it makes my engineering team nervous because it measures the wrong thing. We could build a model well below the threshold that causes real harm in financial markets, and we could build a model above the threshold that's completely benign. Risk is about what the model does, not how big it is."


21.6 Penalties and Enforcement

21.6.1 The Penalty Structure

The AI Act establishes a tiered penalty structure:

  • Prohibited AI practices: up to 35 million euros or 7% of global annual turnover, whichever is higher
  • Non-compliance with high-risk requirements: up to 15 million euros or 3% of global annual turnover
  • Supplying incorrect information to authorities: up to 7.5 million euros or 1% of global annual turnover

For SMEs and startups, penalties are calculated as the lower of the percentage-based or absolute amounts.
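
The arithmetic of the caps is simple to state in code: for most firms the cap is the higher of the fixed amount and the turnover percentage, while for SMEs it is the lower. A minimal sketch of that rule:

```python
# Sketch of the tiered penalty caps: higher of the two amounts for most
# firms, lower of the two for SMEs and startups.
PENALTY_TIERS = {
    "prohibited_practice":   (35_000_000, 0.07),
    "high_risk_violation":   (15_000_000, 0.03),
    "incorrect_information": (7_500_000,  0.01),
}

def penalty_cap(violation: str, global_turnover: float, sme: bool = False) -> float:
    fixed, pct = PENALTY_TIERS[violation]
    amounts = (fixed, pct * global_turnover)
    return min(amounts) if sme else max(amounts)

# A firm with 2 billion euros in global annual turnover:
print(penalty_cap("prohibited_practice", 2e9))            # 140000000.0 (7% > 35M)
print(penalty_cap("prohibited_practice", 2e9, sme=True))  # 35000000
```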

21.6.2 The Enforcement Architecture

Enforcement operates at two levels:

National level. Each member state designates one or more national competent authorities, including a market surveillance authority, to enforce the Act within its territory. These authorities have the power to conduct investigations, order corrective actions, impose penalties, and withdraw or recall non-compliant AI systems from the market.

EU level. The European AI Office, established within the European Commission, has primary responsibility for enforcing rules related to GPAI models. The AI Office monitors compliance, conducts investigations, and can impose penalties directly on GPAI providers.

21.6.3 Implementation Timeline

The Act's obligations phase in over time:

  • 6 months after entry into force: prohibited practices ban takes effect
  • 12 months: obligations for GPAI models take effect
  • 24 months: full application of high-risk requirements for Annex III systems
  • 36 months: full application for Annex I (product safety) systems

Common Pitfall: The phased implementation timeline means that companies cannot wait until full application to begin compliance preparations. Conformity assessment, documentation, and risk management system development take time. Organizations deploying high-risk AI should begin compliance work well before the deadlines arrive.
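
The timeline translates directly into calendar deadlines. A minimal sketch using only the standard library; seeding it with 2 August 2024 (the day after the Act's entry into force on 1 August 2024) reproduces the published application dates.

```python
# Sketch: turn the phase-in table into calendar deadlines. Seeding with
# 2 August 2024 (the day after entry into force) reproduces the Act's
# published application dates.
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months to a date (safe here: day-of-month is 2)."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years, month=month_index + 1)

PHASES = [
    ("Prohibited practices ban", 6),
    ("GPAI model obligations", 12),
    ("High-risk requirements, Annex III", 24),
    ("High-risk requirements, Annex I", 36),
]

start = date(2024, 8, 2)
for provision, months in PHASES:
    print(f"{provision}: {add_months(start, months).isoformat()}")
# Prohibited practices ban: 2025-02-02
# GPAI model obligations: 2025-08-02
# High-risk requirements, Annex III: 2026-08-02
# High-risk requirements, Annex I: 2027-08-02
```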


21.7 The Brussels Effect and Global AI Governance

21.7.1 Shaping Global Standards

Just as the GDPR influenced data protection laws worldwide (Chapter 20), the AI Act is expected to shape AI governance globally through the Brussels Effect:

De facto effect: Companies developing AI for global markets will find it efficient to build AI Act-compliant products as their baseline, rather than maintaining separate product versions for different jurisdictions.

De jure effect: Countries considering AI regulation will use the AI Act as a reference point — adopting, adapting, or deliberately departing from its framework. Canada's AIDA, Brazil's AI Bill, and various ASEAN AI governance frameworks have already drawn on elements of the EU approach.

Standard-setting influence: The technical standards that will operationalize the AI Act's requirements — developed by European standardization organizations CEN and CENELEC — will likely become international reference standards, just as ISO standards influenced by European regulatory requirements have done in other domains.

21.7.2 Alternative Approaches

Not every jurisdiction has followed the EU's path:

United States. The US approach to AI governance has relied primarily on executive orders (President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI, October 2023), voluntary commitments from AI companies, and sector-specific guidance from agencies like the FTC, FDA, and EEOC. No comprehensive federal AI legislation has been enacted, though various bills have been proposed.

United Kingdom. The UK adopted a "pro-innovation" approach in its 2023 AI white paper, relying on existing regulators to apply AI-relevant guidance within their domains rather than creating a new cross-sector regulatory framework.

China. China has enacted targeted AI regulations addressing specific concerns: algorithmic recommendations (2022), deepfakes (2023), and generative AI (2023). These regulations emphasize both user protection and state control over AI-generated content, requiring that AI outputs reflect "core socialist values."

International efforts. The OECD AI Principles (2019), the G7 Hiroshima AI Process (2023), and the inaugural AI Safety Summit at Bletchley Park (2023) represent multilateral efforts to establish common principles for AI governance, though they lack binding force.

21.7.3 The Race to Govern AI

Eli, characteristically, cut through the comparative analysis to the underlying power dynamics: "Everyone's talking about whether the EU or the US has the 'better' approach to AI regulation. But better for whom? The EU's approach protects fundamental rights but might slow deployment. The US approach enables rapid deployment but leaves people unprotected. China's approach gives the state control. In every case, the question is who has power over the technology — and who doesn't."


21.8 VitraMed Under the AI Act

If VitraMed proceeds with EU expansion, its patient risk scoring system would almost certainly be classified as a high-risk AI system under the AI Act. A clinical risk scoring tool is likely to qualify as a medical device under EU product safety legislation, bringing it within Annex I, and Annex III separately captures systems that prioritize access to essential services, including emergency healthcare patient triage. Either route leads to the same conclusion: a system that generates risk scores influencing clinical decisions meets the high-risk criteria.

21.8.1 Classification Analysis

VitraMed's risk scoring system:

  • Processes health data (special category under GDPR, high-risk domain under the AI Act)
  • Influences clinical decisions about patient care, including treatment prioritization
  • Operates at scale across multiple clinics and patient populations
  • Uses machine learning trained on historical patient data

This places it squarely in the high-risk category, whether through Annex I (as a safety component of a medical device) or Annex III (essential services, including healthcare prioritization).

21.8.2 Compliance Requirements for VitraMed

To deploy in the EU, VitraMed would need to (as sketched in code after this list):

  1. Implement a risk management system identifying potential harms — missed diagnoses, biased risk scores for underrepresented populations, incorrect prioritization of care
  2. Demonstrate data governance — showing that training data is representative, that biases have been identified and mitigated, and that the data pipeline meets quality standards
  3. Prepare technical documentation sufficient for conformity assessment
  4. Implement logging to enable post-deployment monitoring and incident investigation
  5. Ensure human oversight — clinicians must be able to understand, override, or halt the system's recommendations
  6. Undergo conformity assessment — potentially requiring third-party assessment if biometric data is involved
  7. Register in the EU database of high-risk AI systems
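
The seven steps above lend themselves to a running gap analysis. A minimal sketch, assuming a hypothetical status dictionary maintained by a compliance team:

```python
# Sketch: the seven compliance steps as a gap checklist. The status dict
# is hypothetical; missing keys count as gaps.
REQUIRED_STEPS = [
    "risk_management_system", "data_governance", "technical_documentation",
    "logging", "human_oversight", "conformity_assessment",
    "eu_database_registration",
]

def compliance_gaps(status: dict[str, bool]) -> list[str]:
    """Return the required steps not yet completed."""
    return [step for step in REQUIRED_STEPS if not status.get(step, False)]

status = {"risk_management_system": True, "technical_documentation": True,
          "logging": True, "data_governance": False}
print(compliance_gaps(status))
# ['data_governance', 'human_oversight', 'conformity_assessment',
#  'eu_database_registration']
```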

Mira, now deeply engaged in data ethics coursework, drafted an informal assessment for her father. "Dad, this isn't just a compliance checklist. The AI Act is asking whether your system is trustworthy. Is it accurate across different patient populations? Can doctors understand why it flags certain patients? Have you tested for bias against communities that were underrepresented in the training data? These aren't checkbox questions — they're questions about the integrity of the product."

Vikram, to his credit, took the assessment seriously. "If we can't answer those questions honestly, maybe we shouldn't be deploying the system in the EU. Or anywhere."

VitraMed Thread: This moment marks a turning point in the VitraMed arc. Vikram's response — recognizing that regulatory compliance might reveal genuine product weaknesses — foreshadows the maturity stage in Chapters 26-30, where VitraMed moves from reluctant compliance to genuine ethical reflection. Regulation, at its best, doesn't just constrain; it illuminates.


21.9 Chapter Summary

Key Concepts

  • The EU AI Act is the world's first comprehensive AI regulation, adopted in 2024 after years of legislative development and intense political negotiation.
  • The Act uses a risk-based classification system with four tiers: unacceptable (prohibited), high risk (detailed requirements), limited risk (transparency obligations), and minimal risk (no specific obligations).
  • Prohibited practices include social scoring, manipulative AI, untargeted facial recognition database creation, emotion recognition in workplaces and education, and real-time remote biometric identification (with law enforcement exceptions).
  • High-risk AI requirements include risk management systems, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy/robustness/cybersecurity standards.
  • General-purpose AI models face transparency and documentation obligations, with additional requirements for models posing systemic risk (including adversarial testing and incident reporting).
  • The Brussels Effect is expected to give the AI Act global influence, shaping AI governance standards worldwide.

Key Debates

  • Is the risk-based approach the right way to regulate AI, or does it leave too many harmful systems in the "minimal risk" category?
  • Are the law enforcement exceptions to the biometric surveillance ban appropriately narrow, or will they expand over time?
  • Is the compute threshold for systemic risk an adequate proxy, or does it measure the wrong thing?
  • Will the AI Act stifle European AI innovation, or create competitive advantage through trust and reliability?

Applied Framework

When assessing an AI system under the AI Act:

  1. Classify: What risk tier does the system fall into? Is it covered by Annex I (product safety) or Annex III (standalone use cases)?
  2. Identify obligations: What specific requirements apply at that risk tier?
  3. Assess compliance: Has the provider implemented risk management, data governance, documentation, logging, transparency, human oversight, and security?
  4. Evaluate the gap: Where does the system fall short, and what would compliance require?
  5. Consider intent vs. effect: Even if the system complies technically, does it achieve the Act's goal of protecting fundamental rights?


What's Next

In Chapter 22: Data Governance Frameworks and Institutions, we shift from external regulation to internal governance — the organizational frameworks, processes, and roles that enable responsible data management. We'll examine the DAMA-DMBOK framework, data governance councils, data quality management, and metadata management — and build a Python DataQualityAuditor class that demonstrates how data quality principles translate into code.

Before moving on, complete the exercises and quiz to solidify your understanding of the EU AI Act and risk-based AI regulation.


Chapter 21 Exercises → exercises.md

Chapter 21 Quiz → quiz.md

Case Study: The EU AI Act Negotiation: From Proposal to Law → case-study-01.md

Case Study: Social Credit in Practice: China's System Analyzed → case-study-02.md