Exercises: The EU AI Act and Risk-Based Regulation
These exercises progress from concept checks to challenging applications. Estimated completion time: 3-4 hours.
Difficulty Guide:
- ⭐ Foundational (5-10 min each)
- ⭐⭐ Intermediate (10-20 min each)
- ⭐⭐⭐ Challenging (20-40 min each)
- ⭐⭐⭐⭐ Advanced/Research (40+ min each)
Part A: Conceptual Understanding ⭐
Test your grasp of core concepts from Chapter 21.
A.1. List the four risk tiers in the EU AI Act's classification system (from Section 21.2). For each tier, provide one example of an AI application that falls within it and explain why.
A.2. The AI Act prohibits certain AI practices outright (Section 21.3). Name at least three prohibited practices and, for each, explain the fundamental right or value that the prohibition is designed to protect.
A.3. Explain the difference between a "provider" and a "deployer" under the AI Act (Section 21.4). Why does the Act assign different obligations to these two roles? Use VitraMed as an example to illustrate: in what circumstances might VitraMed be a provider, and in what circumstances might it be a deployer?
A.4. What is a "conformity assessment" as described in Section 21.4? Why does the Act require conformity assessments for high-risk AI systems, and what happens if a system fails one?
A.5. Define "general-purpose AI model" (GPAI) as the term is used in the AI Act (Section 21.5). How does the Act's treatment of GPAI differ from its treatment of domain-specific high-risk AI systems?
A.6. Explain the concept of "systemic risk" as it applies to GPAI models under the AI Act. What additional obligations does the Act impose on GPAI models classified as posing systemic risk?
Part B: Applied Analysis ⭐⭐
Analyze scenarios, arguments, and real-world situations using concepts from Chapter 21.
B.1. Classify the following AI applications under the EU AI Act's risk tiers. For each, identify the tier and explain your reasoning with reference to the Act's classification criteria:
- (a) A chatbot that provides automated customer service for an online clothing retailer.
- (b) An AI system used by a national police force to predict the likelihood of criminal recidivism and to inform sentencing recommendations.
- (c) An AI-powered toy that interacts with children and collects voice recordings to personalize responses.
- (d) VitraMed's predictive analytics system that identifies patients at high risk of cardiac events.
- (e) An AI system that generates music in the style of popular artists for a streaming platform.
- (f) A government-deployed AI system that assigns a behavioral score to citizens, restricting access to public services based on the score.
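For classification exercises like the one above, it can help to hold the tier structure in mind as a small data model. The sketch below is a conceptual aid only: the tier names match the Act's four-level scheme, but the obligation summaries are paraphrases, and mapping any real system to a tier is the legal analysis the exercise asks you to perform.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, from most to least restricted."""
    UNACCEPTABLE = 1  # prohibited practices
    HIGH = 2          # permitted, but heavily regulated
    LIMITED = 3       # transparency duties only
    MINIMAL = 4       # no new obligations

# Paraphrased summary of the regulatory consequence attached to each tier.
# These one-liners compress the Act's requirements; cite the actual
# provisions in your written answers.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "practice is prohibited outright",
    RiskTier.HIGH: "conformity assessment, data governance, human oversight, logging",
    RiskTier.LIMITED: "transparency duties (e.g., disclose that the user faces an AI)",
    RiskTier.MINIMAL: "no new obligations; voluntary codes of conduct",
}

def obligations_for(tier: RiskTier) -> str:
    """Look up the (paraphrased) obligations attached to a risk tier."""
    return OBLIGATIONS[tier]
```

Note that the model is deliberately flat: the hard part of Exercise B.1 is not representing the tiers but justifying which tier a given system falls into, and several of the systems listed are genuinely contestable.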
B.2. The AI Act requires that high-risk AI systems maintain "human oversight" capabilities (Section 21.4). Consider the following scenario:
A hospital deploys an AI diagnostic system that analyzes medical imaging to detect early-stage cancers. The system flags suspicious images for radiologist review. However, an internal study reveals that radiologists override the AI's recommendations only 3% of the time — even in cases where the AI's confidence score is below 60%.
Evaluate whether this system satisfies the AI Act's human oversight requirements. What is the difference between formal human oversight (a radiologist is in the loop) and meaningful human oversight (the radiologist exercises independent judgment)? What measures could the hospital implement to ensure genuine oversight?
B.3. Section 21.6 describes the "Brussels Effect" as it applies to AI regulation. A US-based AI company that does not operate in the EU argues: "The AI Act doesn't apply to us. We have no European customers and no European operations." Evaluate this claim. Under what circumstances might the AI Act still affect this company's practices, even without direct legal applicability?
B.4. The AI Act includes provisions for "regulatory sandboxes" — controlled environments in which AI systems can be tested under relaxed regulatory requirements (Section 21.7). Evaluate the trade-offs of this approach. What are the benefits of sandboxes for innovation? What are the risks for the individuals whose data is used in sandbox testing? How would you design a sandbox program that balances these concerns?
B.5. Eli learns that the City of Detroit is considering deploying an AI system to optimize the allocation of building inspection resources. The system would use property data, complaint histories, and demographic data to predict which buildings are most likely to have code violations. Classify this system under the AI Act's risk tiers and identify at least three specific compliance requirements that would apply if it were deployed in the EU.
B.6. The thirty-seven-hour negotiation that produced the AI Act's final text involved significant compromises, particularly on biometric surveillance and GPAI (Section 21.1.2). Select one of these compromises and evaluate it: Did the compromise produce a better outcome than either the Parliament's or the Council's original position? What values were sacrificed to achieve agreement?
Part C: Real-World Application Challenges ⭐⭐-⭐⭐⭐
These exercises ask you to engage directly with the AI Act's provisions.
C.1. ⭐⭐ Risk Classification Exercise. Identify five AI systems you interact with regularly (e.g., recommendation algorithms, voice assistants, navigation systems, spam filters, autocomplete features). For each, classify it under the AI Act's risk tiers. Write a brief justification for each classification. Were any classifications ambiguous? What does this tell you about the challenges of risk-based regulation?
C.2. ⭐⭐⭐ Conformity Assessment Simulation. Select one of the high-risk AI applications from Exercise B.1. Design a conformity assessment checklist for that application based on the requirements described in Section 21.4. Your checklist should cover: (a) data governance, (b) technical documentation, (c) transparency, (d) human oversight, (e) accuracy and robustness, and (f) cybersecurity. For each item, specify what evidence a provider would need to demonstrate compliance.
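One way to organize your answer to C.2 is to treat the checklist as structured data: each of the six required areas becomes a record holding the requirement and the evidence a provider would submit. The scaffold below is a starting point, not a compliance tool; the `requirement` and `evidence` fields are placeholders you would fill from Section 21.4 and the Act itself.

```python
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    area: str                 # one of the six areas named in C.2
    requirement: str          # what the Act requires (your paraphrase)
    evidence: list[str] = field(default_factory=list)  # proof of compliance

def blank_checklist() -> list[ChecklistItem]:
    """Return an empty checklist covering the six areas from Exercise C.2."""
    areas = [
        "data governance",
        "technical documentation",
        "transparency",
        "human oversight",
        "accuracy and robustness",
        "cybersecurity",
    ]
    return [ChecklistItem(area=a, requirement="TBD") for a in areas]
```

A completed item might, for instance, pair "human oversight" with evidence such as an override-rate audit like the one described in Exercise B.2.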
C.3. ⭐⭐⭐ Fundamental Rights Impact Assessment. The AI Act requires deployers of high-risk AI systems to conduct a fundamental rights impact assessment (FRIA) before deployment. Choose an AI system deployed by a government agency (real or hypothetical) and draft the outline of a FRIA. Your outline should identify: (a) the rights potentially affected, (b) the populations at risk, (c) the severity and likelihood of potential harms, and (d) mitigation measures.
C.4. ⭐⭐ Transparency Requirements. The AI Act requires that certain AI systems disclose to users that they are interacting with an AI. Research three AI-powered chatbots or virtual assistants you have used. For each, evaluate: Does the system clearly disclose that it is AI? If you generate content using it, does the output indicate it was AI-generated? How do these practices compare to the Act's transparency requirements?
Part D: Synthesis & Critical Thinking ⭐⭐⭐
These questions require you to integrate multiple concepts and think beyond the material presented.
D.1. The AI Act regulates AI systems based on their use (what they do) rather than their technology (how they work). Evaluate the strengths and weaknesses of this approach. What are the advantages of use-based regulation over technology-based regulation? What risks does it create? Could a technology-based approach be more effective in some contexts?
D.2. Dr. Adeyemi raises a concern about the AI Act: "It regulates the tool but not the power structures that determine how the tool is used." Develop this critique into a 300-400 word essay. How might the AI Act's focus on technical requirements obscure deeper questions about who controls AI systems, who benefits from their deployment, and who bears the risks?
D.3. Compare the AI Act's risk-based approach to the GDPR's rights-based approach (Chapter 20). Both are EU regulations, but they adopt fundamentally different regulatory strategies. Why might the EU have chosen a risk-based approach for AI when it chose a rights-based approach for data protection? What are the implications of this choice for individuals affected by AI systems?
D.4. The AI Act's treatment of biometric surveillance reflects a political compromise between the Parliament (which sought a complete ban) and the Council (which sought broader exceptions for law enforcement). Construct arguments for both positions. Then assess: Is the final compromise a stable equilibrium, or is it likely to shift — and in which direction?
D.5. Sofia Reyes argues that the AI Act, while significant, primarily protects against harms that regulators can foresee — but the most dangerous AI applications may be ones that have not yet been imagined. How should a risk-based regulatory framework deal with risks it cannot anticipate? Is the AI Act's classification system flexible enough to address novel AI applications, or does it need a mechanism for dynamic risk reassessment?
Part E: Research & Extension ⭐⭐⭐⭐
These are open-ended projects for students seeking deeper engagement. Each requires independent research beyond the textbook.
E.1. The AI Act's Implementation Timeline. The AI Act has a phased implementation timeline, with different provisions taking effect at different times. Research the current implementation status. Which provisions are already in force? Which are upcoming? How are member states establishing their national enforcement authorities? Write a 1,000-word status report as of the current date.
E.2. Comparing AI Regulation Globally. Research AI governance frameworks in at least three jurisdictions beyond the EU (e.g., the US Executive Order on AI, Canada's Artificial Intelligence and Data Act, China's AI regulations, the UK's pro-innovation approach, Brazil's AI Bill). Write a comparative analysis (1,000-1,200 words) examining: (a) the regulatory approach each jurisdiction takes, (b) how each defines and classifies AI risk, (c) the enforcement mechanisms, and (d) how each balances innovation and protection.
E.3. Foundation Models and the AI Act. The AI Act's treatment of general-purpose AI models was its most contested and least mature set of provisions. Research how major GPAI providers (OpenAI, Google DeepMind, Anthropic, Meta AI) are responding to the Act's requirements. Write a 1,000-word analysis of whether the GPAI provisions are workable, what compliance looks like in practice, and whether additional regulatory intervention is needed.
Solutions
Selected solutions are available in appendices/answers-to-selected.md.