Chapter 28 Quiz: AI Regulation --- Global Landscape
Multiple Choice
Question 1. Under the EU AI Act, which of the following AI applications is classified as "unacceptable risk" and therefore prohibited?
- (a) An AI system used by an employer to screen resumes for job applicants
- (b) A social scoring system used by a public authority to evaluate citizens' trustworthiness, resulting in detrimental treatment
- (c) A chatbot used by a bank to answer customer questions about account balances
- (d) An AI-powered recommendation engine used by an e-commerce platform
Question 2. Which of the following best describes the United States' approach to AI regulation as of early 2026?
- (a) A single comprehensive federal AI law modeled on the EU AI Act
- (b) A sector-specific, fragmented approach with no comprehensive federal legislation
- (c) A principles-based approach relying on voluntary industry commitments
- (d) A state-directed approach prioritizing government control of AI development
Question 3. An AI system used to screen job applications would be classified under the EU AI Act as:
- (a) Minimal risk --- no specific requirements
- (b) Limited risk --- transparency requirements only
- (c) High risk --- full conformity assessment required
- (d) Unacceptable risk --- prohibited
Question 4. Which country's AI regulations require that generative AI output reflect "socialist core values"?
- (a) Singapore
- (b) Japan
- (c) China
- (d) India
Question 5. The NIST AI Risk Management Framework is organized around four core functions. Which of the following is NOT one of them?
- (a) Govern
- (b) Enforce
- (c) Measure
- (d) Manage
Question 6. Under the EU AI Act, the maximum penalty for deploying a prohibited AI practice is:
- (a) EUR 7.5 million or 1% of global annual turnover, whichever is higher
- (b) EUR 15 million or 3% of global annual turnover, whichever is higher
- (c) EUR 35 million or 7% of global annual turnover, whichever is higher
- (d) EUR 50 million or 10% of global annual turnover, whichever is higher
Question 7. NYC Local Law 144 requires employers using automated employment decision tools (AEDTs) to:
- (a) Obtain written consent from each candidate before using the AEDT
- (b) Conduct an annual independent bias audit and publish the results
- (c) Submit the AEDT for government certification before deployment
- (d) Limit AEDT use to final-round candidates only
Question 8. Which of the following best describes the UK's approach to AI regulation?
- (a) Comprehensive legislation similar to the EU AI Act
- (b) Principles-based guidance applied by existing sector regulators
- (c) Application-specific regulations targeting individual AI technologies
- (d) No regulatory framework of any kind
Question 9. Under the EU AI Act, a general-purpose AI model (GPAI) with "systemic risk" is defined as one that:
- (a) Has been involved in a documented safety incident
- (b) Was trained with more than 10^25 floating-point operations, or is designated by the European Commission
- (c) Is used by more than one million users in the EU
- (d) Generates revenue exceeding EUR 100 million annually
Question 10. Singapore's Model AI Governance Framework is best described as:
- (a) A legally binding regulation with enforcement mechanisms and penalties
- (b) A voluntary, practical framework designed to be readily implementable by organizations
- (c) A sector-specific regulation applying only to financial services
- (d) A criminal statute imposing jail time for AI misuse
Question 11. In Athena's compliance analysis, which AI system was classified as "high risk" under the EU AI Act?
- (a) The churn prediction model
- (b) The product recommendation engine
- (c) The customer service chatbot
- (d) The HR screening model
Question 12. Which of the following is a limitation of industry self-regulation, as discussed in the chapter?
- (a) Self-regulation is always more expensive than government regulation
- (b) Voluntary commitments cannot address any form of AI risk
- (c) Companies most likely to cause harm are least likely to participate in self-regulatory initiatives
- (d) Self-regulation is illegal under EU competition law
Question 13. The "highest standard" compliance strategy recommends that multinational companies:
- (a) Build separate AI systems for each jurisdiction to meet each jurisdiction's specific requirements
- (b) Design for the most demanding regulatory requirements and adapt downward for less demanding jurisdictions
- (c) Comply only with regulations in their home country and accept the risk of non-compliance elsewhere
- (d) Wait until all jurisdictions converge on a single standard before investing in compliance
Question 14. Which of the following statements about China's AI regulatory approach is accurate?
- (a) China has adopted a comprehensive, risk-based framework modeled on the EU AI Act
- (b) China has enacted no AI-specific regulations, relying entirely on existing consumer protection laws
- (c) China has enacted targeted, application-specific regulations covering algorithmic recommendations, deep synthesis, and generative AI
- (d) China regulates AI exclusively through voluntary industry commitments
Question 15. The Colorado AI Act (SB 24-205), signed in May 2024, applies to AI systems used in:
- (a) Only government operations within Colorado
- (b) Consequential decisions in areas including education, employment, financial services, healthcare, housing, insurance, and legal services
- (c) All AI applications regardless of use case
- (d) Only autonomous vehicles and robotics
Short Answer
Question 16. Explain why Lena Park recommends the EU AI Act as the "floor" for global AI compliance, even for companies not primarily operating in the EU. Provide two specific reasons.
Question 17. The chapter identifies five market failures that justify AI regulation: information asymmetry, negative externalities, power concentration, accountability gaps, and democratic legitimacy concerns. Choose one and explain, in three to four sentences, how it manifests in a real-world AI application.
Question 18. Ravi argues that regulatory compliance is a "competitive moat" for Athena. In two to three sentences, explain the logic of this argument and identify one condition under which it could fail.
Question 19. Compare the EU AI Act's approach to general-purpose AI models (GPAI) with China's approach to regulating generative AI. Identify one key similarity and one key difference.
Question 20. The chapter discusses partial regulatory convergence on some AI governance principles. Identify two areas where global convergence is likely and one area where persistent divergence is expected. Briefly explain why.
True or False
Question 21. True or False: The NIST AI Risk Management Framework is a legally binding regulation that all US companies must follow.
Question 22. True or False: Under the EU AI Act, a customer service chatbot must disclose to EU users that they are interacting with an AI system.
Question 23. True or False: India has enacted comprehensive AI legislation comparable to the EU AI Act.
Question 24. True or False: The EU AI Act applies only to companies headquartered in EU member states.
Question 25. True or False: The UK's AI regulatory approach assigns AI oversight to existing sector regulators rather than creating a new AI-specific regulator.
Answer key available in Appendix B (Answers to Selected Exercises and Quiz Questions).