Chapter 33: Exercises — Regulation and Compliance: GDPR, EU AI Act, and Beyond
Comprehension Exercises
Exercise 1: Regulatory Framework Mapping
For each of the following AI applications, identify which regulatory frameworks potentially apply and the primary compliance obligation under each framework:
(a) A chatbot that assists insurance customers in filing claims, deployed by a UK insurer with German and French customers
(b) An AI system used by a US bank to flag potentially fraudulent transactions
(c) An AI hiring tool used by a US tech company to screen applications for positions based in California, New York, and Germany
(d) An AI medical imaging analysis tool sold to hospitals in the US and France
Exercise 2: EU AI Act Risk Classification Practice
Classify each of the following AI systems under the EU AI Act's risk tier framework, citing the specific provision that drives your classification. For each system classified as high-risk, identify the three most challenging compliance requirements:
(a) A voice assistant that helps elderly patients schedule medical appointments
(b) An AI system that analyzes passport photographs to verify identity at border crossings
(c) An AI content recommendation algorithm for a streaming video service
(d) An AI system that assesses job candidates' suitability for positions in a financial institution
(e) An AI tool that assists lawyers in drafting contracts (not making court decisions)
(f) An AI system used by a social services agency to determine eligibility for government benefits
Exercise 3: GDPR Article 22 Analysis
For each of the following scenarios, analyze whether Article 22 of the GDPR applies, what obligations it creates if it does, and what compliance approach would be appropriate:
(a) A bank uses an AI model to generate a credit score, which a human loan officer then uses — along with other information — to make a lending decision
(b) An employer uses an AI screening tool to automatically filter job applications, with no human review of filtered-out candidates
(c) A marketing platform uses AI to segment customers and determine which promotional offers to display to each customer
(d) An insurance company uses an AI model to set premiums, with human review for cases where premiums exceed a defined threshold
Exercise 4: CFPB Adverse Action Compliance
A fintech lender uses a random forest model with 150 input variables to make credit decisions. The model has strong predictive performance (Gini coefficient of 0.72), but its outputs are difficult to explain in traditional terms.
(a) What does ECOA Regulation B require when the model generates an adverse action decision?
(b) What technical approaches could be used to generate compliant adverse action reasons for this model?
(c) What are the limitations of each approach you identify?
(d) If the model uses the applicant's neighborhood (zip code) as an input variable, what fair lending concerns does this raise and how should they be addressed?
Exercise 5: State vs. Federal Preemption
A company developing an AI hiring tool must comply with both New York City Local Law 144 (requiring annual bias audits and public disclosure of results) and the EEOC's technical guidance on AI in employment.
(a) What does each framework require?
(b) Where do the requirements overlap?
(c) Where do they create different or additional obligations?
(d) If Congress enacted comprehensive federal AI legislation, what questions would arise about whether it preempts state and local AI regulations?
Gap Analysis Exercises
Exercise 6: Compliance Gap Analysis — EU AI Act
Your company deploys an AI system that monitors employee performance in a call center, scoring each interaction and generating performance metrics used in performance reviews and promotion decisions. The system was developed in 2021, before the EU AI Act was adopted. Conduct a compliance gap analysis:
(a) What EU AI Act tier does this system fall into?
(b) What specific requirements does this tier impose?
(c) For each requirement, identify whether your company is likely to be compliant, likely to have a gap, or requires further investigation
(d) For each identified gap, describe what remediation would require
(e) Estimate a realistic timeline for achieving compliance, assuming resources are available
Exercise 7: Compliance Gap Analysis — GDPR and AI
A European e-commerce company uses an AI recommendation engine that predicts customer purchasing behavior and personalizes product recommendations. It also uses an AI-based fraud detection system that automatically blocks transactions it identifies as likely fraudulent. Analyze the GDPR compliance gaps for both systems:
(a) What GDPR provisions are most relevant to each system?
(b) Does either system trigger Article 22? If so, what is required?
(c) What data subject rights are implicated, and what technical mechanisms are needed to fulfill them?
(d) What privacy notice disclosures are required for each system?
(e) What documentation and accountability measures are required?
Exercise 8: Multi-Jurisdictional Compliance Analysis
A US-based financial services company with customers in the US, Germany, Canada, and Brazil uses an AI system for credit risk assessment. The system processes personal financial data including bank transactions, employment history, and income. Conduct a multi-jurisdictional compliance analysis identifying:
(a) The applicable regulatory frameworks in each jurisdiction
(b) The most demanding requirements across all jurisdictions
(c) Where requirements conflict or are inconsistent
(d) What "highest common denominator" compliance program would satisfy requirements across all jurisdictions
(e) Where jurisdiction-specific adjustments would still be needed
Applied and Practical Exercises
Exercise 9: Technical Documentation Draft
Draft an outline of the technical documentation required under the EU AI Act for a high-risk AI system of your choice. Your outline should:
- Identify the specific content requirements from the EU AI Act's Annex IV
- Organize those requirements into a logical documentation structure
- Identify what technical information would need to be gathered for each section
- Note any sections where the documentation process would likely reveal compliance gaps requiring remediation
Exercise 10: Vendor Due Diligence Checklist
Your company is evaluating three AI vendors for customer-facing applications: one that would automate responses to routine inquiries, one that would analyze customer sentiment and route complex cases to human agents, and one that would personalize marketing offers based on customer behavior. Develop a vendor due diligence questionnaire that addresses EU AI Act and GDPR compliance for each of these use cases. Your questionnaire should include at least 15 specific questions and identify what documentation you would request from each vendor.
Exercise 11: Human Oversight Design
A healthcare organization uses an AI system that analyzes electronic health records to identify patients at high risk of hospital readmission within 30 days of discharge. The system flags high-risk patients for follow-up outreach. Design a human oversight program for this system that would satisfy EU AI Act requirements and represent genuinely good clinical governance practice. Your design should address:
- What information clinicians receive about each flagged patient
- How clinicians are trained to use and question the system's outputs
- What documentation of clinician review is required
- How disagreements between clinician judgment and AI assessment are handled
- What monitoring occurs to assess the quality of both AI and human decision-making
Exercise 12: AI Inventory Exercise
Working in a group, select an industry (banking, healthcare, retail, or education) and develop a realistic AI inventory for a representative mid-sized organization in that industry. Your inventory should:
- Identify at least 10 plausible AI systems the organization might use
- Document each system's purpose, data inputs, and decision outputs
- Classify each system under the EU AI Act framework
- Identify the applicable regulatory frameworks for each system
- Flag the three highest-priority compliance risks
Exercise 13: Adverse Action Reason Generation — Practical Exercise
Obtain a publicly available dataset (e.g., the HMDA mortgage data or a Kaggle lending dataset) and, using a machine learning library of your choice:
(a) Train a classification model to predict loan approval
(b) Implement a SHAP-based explanation layer to generate the top factors for individual predictions
(c) Draft sample adverse action notices based on the explanations generated
(d) Evaluate whether the adverse action notices would satisfy Regulation B's specificity requirements
(e) Identify any cases where the SHAP explanations would not generate satisfactory adverse action reasons
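As a starting point for parts (a) through (c), the sketch below uses synthetic data rather than HMDA. It exploits the fact that for a linear model such as logistic regression, the SHAP value of feature i can be computed in closed form as coef_i × (x_i − mean(x_i)) (assuming feature independence), so no external SHAP library is needed for this illustration. The feature names, coefficients, and data are invented for the exercise.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["debt_to_income", "credit_history_months",
                 "late_payments_12m", "employment_tenure_months"]

# (a) Train on synthetic applicants: approval is likelier with low DTI,
# a long credit history, few late payments, and longer employment tenure.
X = rng.normal(size=(500, 4))
logits = -1.2 * X[:, 0] + 0.8 * X[:, 1] - 1.0 * X[:, 2] + 0.5 * X[:, 3]
y = (logits + rng.normal(scale=0.5, size=500) > 0).astype(int)
model = LogisticRegression().fit(X, y)

# (b) For a linear model, the exact SHAP value of feature i is
# coef_i * (x_i - mean(x_i)); no sampling-based explainer is required.
baseline = X.mean(axis=0)

def top_adverse_factors(x, n=4):
    """Return up to n features that pushed this applicant toward denial."""
    contrib = model.coef_[0] * (x - baseline)
    order = np.argsort(contrib)  # most negative (most adverse) first
    return [(feature_names[i], float(contrib[i]))
            for i in order[:n] if contrib[i] < 0]

denied = X[model.predict(X) == 0][0]   # first denied applicant
for name, value in top_adverse_factors(denied):
    print(f"{name}: {value:+.3f}")     # (c) raw material for notice drafting
```

On real lending data with a nonlinear model, you would substitute a library explainer (e.g., a tree-based SHAP explainer) for the closed-form contributions, but the downstream steps — ranking contributions and keeping only the adverse ones — are the same.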
Critical Analysis Exercises
Exercise 14: The Explainability Dilemma
A tension exists at the heart of AI lending compliance: the most accurate risk prediction models (complex neural networks, gradient boosting ensembles) are often the least explainable, while the most explainable models (logistic regression with few variables) may sacrifice predictive performance. ECOA requires explanation; greater accuracy benefits consumers through more precise risk assessment and pricing.
(a) How should regulators balance the explainability requirement against the accuracy benefits of complex models?
(b) Do post-hoc explanation methods (SHAP, LIME) adequately solve this dilemma, or do they create new problems?
(c) A credit applicant who was denied credit asks to see the SHAP values that drove their adverse action. What are the practical and legal challenges this raises?
Exercise 15: The GDPR "Right to Erasure" and AI Models
A consumer exercises their right to erasure (Article 17 GDPR) — the right to have their personal data deleted. Your company has used their data to train an AI model that is currently in production.
(a) What does the right to erasure require in this context?
(b) Is it technically feasible to remove a specific individual's contribution to a trained machine learning model? What are the "machine unlearning" approaches that might address this?
(c) What practical compliance approach would you recommend for organizations facing erasure requests for AI training data?
(d) How have EU data protection authorities addressed this issue in guidance or enforcement?
Exercise 16: Illinois BIPA and AI
A company operating retail stores in Illinois is considering deploying an AI-powered facial recognition system that would identify returning VIP customers as they enter the store, enabling personalized service.
(a) What does Illinois BIPA require for this system?
(b) What consent process would be required?
(c) What data governance requirements apply?
(d) Given BIPA's litigation history, what is the potential liability exposure if the company deploys without adequate compliance?
(e) Would your analysis change if the system was used for employee timekeeping rather than customer recognition?
Exercise 17: China's Generative AI Regulation vs. EU AI Act
China's Interim Measures for the Management of Generative Artificial Intelligence Services (2023) and the EU AI Act both regulate large language models and generative AI, but through very different frameworks and with very different objectives.
(a) What are the primary obligations that each framework imposes on providers of generative AI services?
(b) Where do the frameworks overlap in their requirements?
(c) Where do they create conflicting or incompatible requirements?
(d) For a global AI company that wants to operate in both China and the EU, what compliance challenges does this dual framework create?
Scenario and Simulation Exercises
Exercise 18: Regulatory Examination Simulation
Working in groups, simulate a CFPB supervisory examination of a fintech lender's AI underwriting model. Divide into two groups: CFPB examination team and fintech compliance team.
The CFPB team prepares examination questions covering: model development and validation; fair lending testing results; adverse action compliance; vendor management; and governance and oversight.
The fintech compliance team prepares documentation and responses, identifying gaps in their preparation.
After the simulation, debrief on: What questions revealed the most significant gaps? What documentation would have been most valuable? What organizational changes would improve examination readiness?
Exercise 19: EU AI Act Compliance Project Plan
Your organization has been informed by its legal team that its AI-assisted medical diagnosis system is likely high-risk under the EU AI Act and that the high-risk requirements will become applicable in August 2026. You have been asked to develop a compliance project plan.
Draft a project plan that includes:
- A gap assessment phase (identifying what requirements apply and where current gaps exist)
- A remediation planning phase (prioritizing and sequencing compliance work)
- A documentation development phase
- A conformity assessment phase (including whether third-party assessment is required)
- A registration and launch phase
- An ongoing monitoring and maintenance phase
For each phase, identify key activities, responsible parties, timeline, and potential obstacles.
Exercise 20: Compliance Program Design
Design a comprehensive AI compliance program for a large bank that uses AI in the following applications: customer service chatbots, credit underwriting, fraud detection, market risk modeling, and employee performance assessment. Your program design should address:
(a) Governance structure — who is responsible for AI compliance and how does it relate to other risk management functions?
(b) AI inventory and risk classification — how will the bank identify and classify all AI systems?
(c) Documentation standards — what documentation will be required for each risk tier?
(d) Testing and validation — what fair lending testing, model validation, and performance monitoring will be required?
(e) Human oversight — how will meaningful human oversight be implemented for high-risk applications?
(f) Training — what training will employees involved in AI development and deployment receive?
(g) Incident response — how will the bank detect, investigate, and report AI-related compliance incidents?
Exercise 21: The Ethics Washing Audit
Select a major financial services company, technology company, or healthcare organization that has published AI ethics commitments, responsible AI principles, or similar public documentation. Conduct an "ethics washing audit":
(a) What specific commitments has the company made?
(b) For each commitment, what observable organizational practices would demonstrate genuine compliance vs. performative compliance?
(c) What evidence is publicly available about the company's actual AI practices?
(d) Do you find evidence of a gap between stated commitments and actual practices?
(e) What specific changes would transform the company's commitments from ethics washing to genuine governance?
Exercise 22: Regulation B Adverse Action Notice
A consumer applies for a mortgage and is denied. The lender's AI underwriting model assigned a score of 42/100 (below the approval threshold of 60/100). The model's SHAP analysis identified the following top factors contributing to the below-threshold score (ranked by magnitude of impact):
1. Debt-to-income ratio (high)
2. Length of credit history (short)
3. Recent late payments (2 in past 12 months)
4. Employment tenure at current employer (under 1 year)
5. Property location (rural area)
Draft a Regulation B adverse action notice that:
- Provides the required specific reasons for adverse action
- Observes the Regulation B commentary's guidance that disclosing more than four reasons is generally not helpful to the applicant
- Accurately reflects the model's actual decision factors
- Is written in language accessible to a typical consumer
- Addresses what the applicant can do to improve their creditworthiness
Then identify any concerns about whether factor 5 (property location/rural area) is an appropriate adverse action reason and what fair lending analysis would be required before using it.
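The selection step in this exercise — ranked model factors in, at most four consumer-readable reasons out, with flagged factors held back for fair lending review — can be sketched as follows. The reason wording and factor names below are invented for illustration and are not taken from the Regulation B model forms; a real notice would use counsel-approved language.

```python
# Ranked adverse factors as they come out of the model's explanation layer.
RANKED_FACTORS = ["debt_to_income", "credit_history_length",
                  "recent_late_payments", "employment_tenure",
                  "property_location"]

# Hypothetical mapping from factor codes to consumer-readable reason text.
REASON_TEXT = {
    "debt_to_income": "Debt obligations are too high relative to income",
    "credit_history_length": "Length of credit history is insufficient",
    "recent_late_payments": "Recent delinquency on credit obligations",
    "employment_tenure": "Insufficient length of time at current employment",
    "property_location": "Value or location of collateral",
}

# Factors withheld from notices pending fair lending review (factor 5 here).
NEEDS_FAIR_LENDING_REVIEW = {"property_location"}

def notice_reasons(ranked, max_reasons=4):
    """Select up to max_reasons cleared factors, preserving model rank order."""
    cleared = [f for f in ranked if f not in NEEDS_FAIR_LENDING_REVIEW]
    return [REASON_TEXT[f] for f in cleared[:max_reasons]]

for i, reason in enumerate(notice_reasons(RANKED_FACTORS), 1):
    print(f"{i}. {reason}")
```

Note that holding a factor back from the notice does not resolve the underlying fair lending question the exercise asks about: if property location actually drove the decision, the notice must still accurately reflect the principal reasons, which is precisely the tension to analyze.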
Exercise 23: GDPR Data Protection Impact Assessment for AI
The GDPR requires a Data Protection Impact Assessment (DPIA) for processing that is "likely to result in a high risk to the rights and freedoms of natural persons" — a standard that many AI applications meet. Conduct a DPIA for an AI-powered employee monitoring system that tracks keystroke patterns, application usage, and communication frequency to generate employee productivity scores used in performance reviews.
Your DPIA should cover:
(a) Description of the processing and its purpose
(b) Assessment of necessity and proportionality
(c) Identification of risks to data subjects
(d) Measures to address identified risks
(e) Residual risks and whether they are acceptable
(f) Whether the DPIA's outcome supports proceeding with the system
Exercise 24: Regulatory Arbitrage Analysis
A US-based fintech company is expanding internationally. It currently uses an AI lending model that relies on alternative data (mobile app usage patterns, device data, and location history) that would face significant regulatory scrutiny under the EU AI Act and GDPR. The company is considering whether to:
Option A: Develop EU-compliant AI underwriting that avoids the most problematic alternative data and meets documentation and explanation requirements
Option B: Structure EU operations through a corporate entity that relies on human underwriters rather than the AI model, avoiding EU AI Act coverage
Option C: Enter EU markets only through a partnership with a local institution that takes credit risk and uses the US company's AI only for initial screening recommendations
Analyze each option's legal feasibility, compliance costs, and ethical implications. What would you recommend and why?
Exercise 25: The DPO and AI Compliance Officer — Role Design
Design the organizational structure and role responsibilities for AI compliance at a mid-sized European bank (3,000 employees, operations in Germany, France, and the Netherlands) that uses AI in retail banking, SME lending, and employee management. Address:
(a) Should the bank have a dedicated AI Compliance Officer role separate from the DPO, or should AI compliance be integrated into the DPO function?
(b) What qualifications and skills should each role require?
(c) How should these roles interact with IT/Technology, Legal, Risk Management, and Business Unit leadership?
(d) What committee structure should support AI governance?
(e) How should the bank handle AI compliance for AI systems developed by external vendors vs. internally?
(f) What reporting structure ensures that AI compliance concerns reach the Board level?