Chapter 33: Quiz — Regulation and Compliance: GDPR, EU AI Act, and Beyond
Instructions: Choose the best answer for each multiple-choice question. For short-answer questions, write 2–5 complete sentences. This quiz tests knowledge from the main chapter, both case studies, and the key takeaways.
Multiple Choice (Questions 1–15)
1. The EU AI Act entered into force on:
   - A) January 1, 2023
   - B) August 1, 2024
   - C) February 2, 2025
   - D) August 1, 2026

2. Under the EU AI Act, which of the following is classified as a prohibited AI practice?
   - A) AI systems used for CV screening in employment decisions
   - B) Chatbots that do not disclose they are AI
   - C) Social scoring systems by public authorities evaluating trustworthiness
   - D) AI systems used for credit scoring with disparate impact on protected groups
3. Which of the following AI applications falls into the EU AI Act's high-risk tier?
   - A) A spam filter that categorizes emails as junk or legitimate
   - B) An AI system that assists radiologists in identifying potential tumors in X-rays, deployed as a safety component of a regulated medical device
   - C) A generative AI system that writes marketing copy
   - D) A voice assistant that plays music on command
4. Under the EU AI Act, what is the maximum penalty for violations of the prohibited AI practices provisions?
   - A) €7.5 million or 1.5% of global annual turnover
   - B) €15 million or 3% of global annual turnover
   - C) €35 million or 7% of global annual turnover
   - D) Unlimited fine at the discretion of the national competent authority

5. The EU AI Act's requirement for "general-purpose AI models" with systemic risk designation applies primarily to:
   - A) All AI systems that process personal data
   - B) AI models trained using more than 10^25 floating-point operations
   - C) AI systems deployed in healthcare or law enforcement contexts
   - D) AI systems that have caused at least one serious incident in deployment

6. GDPR Article 22 applies when:
   - A) An AI system processes sensitive personal data such as health or biometric data
   - B) A decision is based solely on automated processing and produces legal or similarly significant effects
   - C) An AI system is used in cross-border data processing operations
   - D) An organization uses AI for employee monitoring purposes

7. The CFPB's position on AI credit underwriting models and the ECOA adverse action requirement is best described as:
   - A) Complex AI models are exempt from adverse action requirements because providing specific reasons is technically impossible
   - B) Creditors may use generic adverse action codes for AI-based decisions pending development of XAI standards
   - C) Model complexity does not excuse compliance with adverse action requirements; specific reasons must be provided
   - D) AI credit models require only that lenders disclose that an AI was used, without identifying specific decision factors

8. Illinois BIPA is significant for AI compliance primarily because:
   - A) It is the first US state law to regulate AI decision-making in employment
   - B) It regulates the collection and use of biometric data and includes a private right of action enabling class action litigation
   - C) It requires bias audits for any automated employment decision tool used in Illinois
   - D) It prohibits the use of facial recognition in employment contexts entirely

9. The NIST AI Risk Management Framework organizes AI risk management around which four core functions?
   - A) Identify, Protect, Detect, Respond
   - B) Plan, Do, Check, Act
   - C) Govern, Map, Measure, Manage
   - D) Assess, Mitigate, Monitor, Report

10. New York City Local Law 144 requires employers using automated employment decision tools to:
    - A) Obtain written consent from every candidate before using AEDT analysis
    - B) Conduct annual independent bias audits and publish the results publicly
    - C) Ensure that all AEDT-based decisions are reviewed by a licensed HR professional
    - D) Submit annual compliance reports to the NYC Commission on Human Rights

11. Under the EU AI Act, high-risk AI system providers must conduct a "conformity assessment" before placing their systems on the EU market. For most high-risk AI systems listed in Annex III, this assessment is:
    - A) Conducted by the European Commission's AI Office
    - B) A self-assessment by the provider, without mandatory third-party involvement
    - C) Conducted by a notified body designated by the member state
    - D) Waived if the provider is ISO 27001 certified

12. Under GDPR, an individual exercises their "right to erasure" regarding data the organization used to train an AI model currently in production. The most accurate statement about the organization's obligation is:
    - A) The right to erasure does not apply to data that has been used for AI training, as this constitutes legitimate data processing
    - B) The organization must retrain the AI model from scratch excluding the individual's data immediately upon receiving the erasure request
    - C) The organization must take reasonable steps to address the request, though "machine unlearning" raises complex technical questions on which regulators are still developing guidance
    - D) The right to erasure never applies to AI training data because training data is anonymized by definition

13. China's Interim Measures for the Management of Generative Artificial Intelligence Services (2023) require, among other things, that:
    - A) Generative AI providers obtain explicit opt-in consent from users before generating any content
    - B) Generative AI providers ensure their services do not generate content that "subverts state power" or "undermines national unity"
    - C) Generative AI providers conduct annual third-party audits and publish the results
    - D) Generative AI providers share model weights with Chinese government authorities upon request

14. The "disparate impact" standard in fair lending law means that:
    - A) Only AI models that demonstrably encode intentional racial bias violate ECOA
    - B) Lending models that use race as an explicit input variable violate ECOA
    - C) Lending practices that produce statistically significant adverse outcomes for protected groups may violate ECOA even without discriminatory intent
    - D) Fair lending law only applies to traditional credit bureau-based models, not AI alternatives

15. Under the EU AI Act, deployers of high-risk AI systems (as distinct from providers/developers) are required to:
    - A) Conduct their own conformity assessment in addition to the provider's assessment
    - B) Conduct fundamental rights impact assessments for certain systems, implement human oversight, and maintain logs of system operation
    - C) Register independently in the EU AI registry separate from the provider's registration
    - D) Obtain insurance coverage for AI-related harms before deployment
Short Answer (Questions 16–20)
16. Explain why "human oversight" under the EU AI Act must be meaningful rather than nominal. What does nominal human oversight look like in practice, and what would genuine human oversight require?
17. The case study on CFPB and algorithmic lending discusses the "adverse action requirement" under ECOA. Explain why this requirement creates a specific compliance challenge for complex AI models, and describe at least two technical approaches that have been developed to address this challenge.
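For illustration (not part of the quiz itself), the simplest form of one technical approach question 17 points at, attribution-based reason codes, can be sketched in a few lines. The feature names, weights, and values below are invented; production systems apply more sophisticated attribution methods (for example, SHAP values) to non-linear models, but the idea of ranking features by their negative contribution to a score is the same.

```python
# Hypothetical sketch: deriving adverse-action "reason codes" from a simple
# linear credit-scoring model. All feature names and numbers are invented.

# A toy scoring model: score contribution of each feature relative to a
# baseline (e.g., mean values among approved applicants).
WEIGHTS = {
    "debt_to_income": -2.0,       # higher DTI lowers the score
    "credit_history_years": 0.5,  # longer history raises the score
    "recent_delinquencies": -1.5, # delinquencies lower the score
    "income_thousands": 0.03,
}

BASELINE = {
    "debt_to_income": 0.30,
    "credit_history_years": 10.0,
    "recent_delinquencies": 0.2,
    "income_thousands": 65.0,
}

def adverse_action_reasons(applicant, top_n=2):
    """Rank features by how much each pulled the score below the baseline."""
    contributions = {
        f: WEIGHTS[f] * (applicant[f] - BASELINE[f]) for f in WEIGHTS
    }
    # The most negative contributions are the strongest reasons for denial.
    negative = sorted(
        (f for f in contributions if contributions[f] < 0),
        key=lambda f: contributions[f],
    )
    return negative[:top_n]

applicant = {
    "debt_to_income": 0.55,
    "credit_history_years": 2.0,
    "recent_delinquencies": 3,
    "income_thousands": 40.0,
}
print(adverse_action_reasons(applicant))
# → ['recent_delinquencies', 'credit_history_years']
```

The regulatory point is that the output must be specific, accurate reasons tied to this applicant's data, which is exactly what makes opaque models hard to square with the requirement.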
18. A company based in Singapore sells a B2B AI recruitment screening tool that is used by its clients to evaluate candidates for positions in Singapore, Indonesia, Vietnam, and Germany. What regulatory frameworks apply to this company's AI system, and what are the primary compliance obligations from each?
19. Describe the difference between "disparate treatment" and "disparate impact" in the context of AI anti-discrimination law. Give a specific example of how an AI credit underwriting model could produce disparate impact without any intentional discriminatory purpose.
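The disparate-impact scenario in question 19 often reduces to simple arithmetic on outcome rates. The sketch below, with invented numbers, shows the "four-fifths rule" heuristic, which originates in EEOC employment guidance but is commonly borrowed as a first-pass screen in lending analysis; it is a red flag, not a legal threshold.

```python
# Hypothetical sketch of a disparate-impact screen using the four-fifths
# heuristic: compare favorable-outcome rates across groups. Numbers invented.

def approval_rate(approved: int, total: int) -> float:
    return approved / total

def adverse_impact_ratio(protected_rate: float, reference_rate: float) -> float:
    """Ratio of the protected group's approval rate to the reference group's.
    A ratio below 0.8 is a common (non-statutory) red flag."""
    return protected_rate / reference_rate

ref = approval_rate(600, 1000)         # reference group: 60% approved
prot = approval_rate(420, 1000)        # protected group: 42% approved
air = adverse_impact_ratio(prot, ref)  # 0.42 / 0.60 = 0.70

print(f"adverse impact ratio: {air:.2f}")  # 0.70, below the 0.8 heuristic
```

Note that such a disparity can arise from facially neutral inputs (e.g., a feature correlated with a protected characteristic) with no discriminatory intent anywhere in the pipeline, which is precisely the disparate-treatment/disparate-impact distinction the question asks about.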
20. The chapter describes building an "AI inventory" as the foundation of an AI compliance program. What should an AI inventory document for each AI system, and why is maintaining such an inventory practically challenging for large organizations?
Answer Key: 1-B, 2-C, 3-B (note: medical-device AI qualifies as high-risk under Article 6(1) and Annex I, i.e., AI serving as a safety component of a product covered by EU harmonised legislation such as the Medical Device Regulation, rather than under Annex III), 4-C, 5-B, 6-B, 7-C, 8-B, 9-C, 10-B, 11-B, 12-C, 13-B, 14-C, 15-B. Short-answer responses should demonstrate understanding of the relevant concepts and engage with the specific scenarios presented.