Quiz — Chapter 30: The EU AI Act and Algorithmic Accountability
14 questions. Select the single best answer for each question.
Multiple Choice
1. Which of the following AI practices is absolutely prohibited under Article 5 of the EU AI Act?
A) A credit scoring model that produces outputs correlated with a protected characteristic
B) An AI system deployed by a public authority to score citizens' social behavior and restrict their access to services
C) A fraud detection model that autonomously blocks payments without human review
D) A customer chatbot that does not disclose its AI nature
Answer: B
Explanation: Article 5 prohibits social scoring: AI systems that evaluate or classify natural persons based on their social behavior or personal characteristics, where the resulting score leads to detrimental treatment. In the final text the prohibition covers private as well as public actors, and it was inspired in part by concerns about China-style social credit systems. Option A describes a system that is potentially discriminatory but regulated as high-risk rather than absolutely prohibited. Option C describes a human-oversight gap. Option D describes a limited-risk transparency violation under Article 50.
2. Under Annex III of the EU AI Act, which of the following AI systems is explicitly classified as high-risk?
A) A spam filter using machine learning to classify incoming emails
B) A recommender system suggesting film titles to streaming platform subscribers
C) An AI system used to evaluate the creditworthiness of natural persons
D) A customer segmentation model used to allocate marketing campaigns
Answer: C
Explanation: Annex III(5)(b) explicitly classifies AI systems intended to evaluate the creditworthiness of natural persons or establish their credit score as high-risk. The other options describe minimal-risk applications not listed in Annex III.
3. Cornerstone Financial Group deploys its credit scoring model from servers in the United Kingdom to assess applications from EU-resident customers. Under the EU AI Act, which of the following is correct?
A) The Act does not apply because the model is operated from outside the EU
B) The Act applies only if Cornerstone has a legal entity established in the EU
C) The Act applies because the model's outputs are used by and affect persons in the EU
D) The Act applies only to the EU-based employees who review the model's outputs
Answer: C
Explanation: The EU AI Act has extraterritorial scope. It applies to providers and deployers of AI systems whose outputs are used in the EU, regardless of where the provider or deployer is established. A UK firm serving EU customers with AI credit scoring is subject to the Act for those specific deployments.
4. For most high-risk AI systems in Annex III — including credit scoring and employment AI — what is the required conformity assessment route?
A) Third-party assessment by an accredited notified body
B) Internal conformity assessment conducted by the provider
C) Assessment by the national competent authority of the EU member state where the deployer operates
D) Assessment by the European AI Office
Answer: B
Explanation: For most Annex III categories (including credit scoring, insurance pricing, and employment AI), providers conduct an internal conformity assessment under Annex VI; no third-party notified body is mandatory. Notified-body assessment applies only to the biometric systems in Annex III point 1, and even then only where harmonised standards or common specifications have not been fully applied.
5. Article 9 of the EU AI Act requires providers of high-risk AI systems to establish a risk management system. Which of the following best describes the temporal character of this requirement?
A) A one-time assessment completed before market placement
B) An annual review conducted by the Chief Risk Officer
C) A continuous iterative process running throughout the AI system's entire lifecycle
D) A pre-deployment audit repeated only when the model is substantially modified
Answer: C
Explanation: Article 9 specifies that the risk management system must be "a continuous iterative process" running throughout the AI system's entire lifecycle, not a one-time or periodic exercise. It must be maintained, updated, and reviewed as the AI system evolves and as post-market monitoring generates new information.
6. Article 14 of the EU AI Act requires human oversight of high-risk AI systems. Which of the following is NOT a requirement of Article 14?
A) The oversight person must be able to override or interrupt the AI system
B) The oversight person must be aware of the risk of automation bias
C) The oversight person must independently re-run the AI model's computations to verify outputs
D) The oversight person must understand the capacities and limitations of the AI system
Answer: C
Explanation: Article 14 does not require independent replication of model computations. It requires that designated oversight persons understand the system's capabilities and limitations, can monitor its operation, can intervene and override its outputs, and are aware of the tendency to over-rely on AI outputs (automation bias).
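To make the intervene-and-override capability concrete, here is a minimal illustrative sketch of how a decision pipeline might route low-confidence outputs to a designated reviewer who can uphold or overturn them. The names, confidence threshold, and flow are assumptions for the example; Article 14 does not prescribe any particular implementation.

```python
# Illustrative sketch of an Article 14-style override hook: the AI
# produces a recommendation, and a designated human can interrupt or
# overturn it. Names and threshold are assumptions, not drawn from
# the Act's text.
from dataclasses import dataclass


@dataclass
class AIRecommendation:
    decision: str      # e.g. "deny_credit"
    confidence: float  # model's own confidence estimate


def human_review(rec: AIRecommendation) -> str:
    # In a real system this would present the case, the model's known
    # limitations, and an explicit override control to a trained reviewer.
    override = input(f"AI suggests '{rec.decision}' "
                     f"(confidence {rec.confidence:.2f}). Override? [y/N] ")
    return "overridden_by_human" if override.lower() == "y" else rec.decision


def final_decision(rec: AIRecommendation, review_threshold: float = 0.9) -> str:
    """Route to human review unless confidence is high; the human may override."""
    if rec.confidence < review_threshold:
        return human_review(rec)
    return rec.decision


print(final_decision(AIRecommendation("deny_credit", 0.62)))
```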
7. Annex IV of the EU AI Act specifies the content of technical documentation for high-risk AI systems. Which of the following items is NOT required under Annex IV?
A) A description of the validation and testing procedures used during development
B) A list of all competitors' AI systems operating in the same market
C) Information about changes made after initial deployment
D) Cybersecurity measures taken to protect the AI system
Answer: B
Explanation: Annex IV requires comprehensive technical documentation about the AI system itself — its design, development, testing, monitoring, cybersecurity, and modifications. It does not require information about competitors' AI systems.
8. The EU AI Act establishes a compute threshold for classifying GPAI models as presenting "systemic risk." What is that threshold?
A) 10^20 floating-point operations (FLOPs)
B) 10^23 floating-point operations (FLOPs)
C) 10^25 floating-point operations (FLOPs)
D) 10^28 floating-point operations (FLOPs)
Answer: C
Explanation: The EU AI Act sets the systemic risk threshold for GPAI models at training computation exceeding 10^25 FLOPs. Models meeting this threshold face additional obligations including adversarial testing, incident reporting to the AI Office, and enhanced cybersecurity measures.
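For a sense of scale, a common heuristic from the scaling-law literature (not from the Act itself) estimates training compute as roughly 6 × parameters × training tokens. The sketch below uses that heuristic to check a hypothetical model against the 10^25 FLOPs presumption; all model figures are illustrative assumptions.

```python
# Rough training-compute estimate using the common 6*N*D heuristic.
# The AI Act does not prescribe an estimation method; this is only a
# back-of-the-envelope check. All model figures are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51(2) presumption


def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute as 6 * N * D FLOPs."""
    return 6 * n_parameters * n_training_tokens


# Hypothetical model: 70 billion parameters trained on 15 trillion tokens
flops = estimate_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed systemic risk" if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS
      else "Below the 10^25 FLOPs presumption threshold")
```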
9. Which body is primarily responsible for enforcing the EU AI Act in relation to GPAI models and systemic risk AI?
A) The European Data Protection Board (EDPB)
B) The European AI Office, established within the European Commission
C) The European Banking Authority (EBA)
D) The national competent authority of the member state where the AI provider's head office is located
Answer: B
Explanation: The EU AI Act established the European AI Office within the European Commission as the primary body responsible for supervising GPAI models and their systemic risk provisions. National competent authorities (designated by member states) have primary enforcement responsibility for other provisions, but the AI Office has lead authority for GPAI.
10. The EU AI Act's high-risk AI obligations in Annex III (covering credit scoring, employment AI, and related categories) apply from which date?
A) 1 August 2024
B) 2 February 2025
C) 2 August 2025
D) 2 August 2026
Answer: D
Explanation: The Act entered into force on 1 August 2024. Prohibited practices applied from 2 February 2025. GPAI obligations apply from 2 August 2025. The critical high-risk AI system obligations in Annex III apply from 2 August 2026 — the primary compliance deadline for financial institutions.
11. A US financial institution operates a credit scoring model for EU-resident customers from offices in New York. The model has been validated under SR 11-7 and passes the firm's internal model governance standards. Is this sufficient for EU AI Act compliance?
A) Yes, because SR 11-7 model risk management addresses the same substantive concerns as the EU AI Act
B) No, because the EU AI Act requires additional obligations (including CE marking, EU database registration, and Article 9–15 compliance) that SR 11-7 does not require
C) Yes, because the Act does not apply to firms established outside the EU
D) No, but only because the Act requires third-party conformity assessment that SR 11-7 does not require
Answer: B
Explanation: SR 11-7 addresses model risk management through model validation and governance — important foundations that align with several EU AI Act requirements. However, the Act requires additional specific obligations (CE marking, public EU database registration, formal risk management system under Article 9, human oversight framework under Article 14, Annex IV technical documentation, and logging under Article 12) that SR 11-7 does not mandate. The extraterritorial scope of the Act means the US firm is subject to EU requirements for its EU-customer-affecting systems.
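To illustrate one of those obligations, the sketch below shows a minimal automatic event-recording pattern in the spirit of Article 12 record-keeping. The field names, schema, and function are assumptions for the example; the Act requires that high-risk systems technically allow automatic logging of events but does not prescribe a format.

```python
# Minimal illustrative sketch of automatic event logging in the spirit
# of Article 12 (record-keeping). Field names and schema are assumptions;
# the Act does not prescribe a particular log format.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("high_risk_ai.events")


def log_scoring_event(applicant_id: str, model_version: str,
                      score: float, decision: str, reviewer: str | None) -> None:
    """Append one structured, timestamped record per model decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_id": applicant_id,    # pseudonymised in practice
        "model_version": model_version,  # supports traceability
        "score": score,
        "decision": decision,
        "human_reviewer": reviewer,      # links to the oversight trail
    }
    logger.info(json.dumps(record))


log_scoring_event("APP-001", "credit-v4.2", 0.71, "refer_to_human", "analyst_17")
```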
12. How does the UK's approach to AI regulation differ from the EU AI Act?
A) The UK has enacted a more restrictive AI Act that applies to all AI systems including minimal-risk ones
B) The UK has adopted a sector-specific, principles-based approach relying on existing regulators rather than horizontal AI legislation
C) The UK applies the EU AI Act by treaty but with a two-year implementation delay
D) The UK has enacted AI legislation identical to the EU AI Act but with lower financial penalties
Answer: B
Explanation: The UK Government's 2023 AI White Paper chose a sector-specific, principles-based approach, relying on existing regulators (FCA, PRA, ICO, CMA) to apply five cross-sector AI principles through their existing powers. The UK has not enacted horizontal AI legislation. UK firms serving EU customers remain subject to the EU AI Act for those activities.
13. The NIST AI Risk Management Framework (AI RMF 1.0) is described as the primary AI risk management framework in the US financial sector. Which of the following is correct?
A) The AI RMF is a binding federal regulation enforced by the SEC and CFTC
B) The AI RMF is mandatory for all federally insured US financial institutions
C) The AI RMF is a voluntary guidance framework that provides structure for AI risk management but creates no direct legal obligations
D) The AI RMF is an international treaty standard adopted by the G20 financial regulators
Answer: C
Explanation: The NIST AI RMF 1.0, published January 2023, is voluntary guidance — not a regulation, not a statute, and not the basis for enforcement action. Its influence derives from supervisory adoption as a reference framework and its practical utility. Financial institutions bear no legal obligation to adopt it, though doing so is strongly encouraged by regulatory expectations and aligns with SR 11-7 model risk management discipline.
14. Article 10 of the EU AI Act requires that training data for high-risk AI systems be examined for possible biases. What is the strongest characterization of this requirement?
A) It is a best-practice recommendation that firms should document in their model governance policies
B) It is a legal obligation, creating accountability for training data decisions that must be documented and maintained as part of technical documentation
C) It applies only to training data collected from EU residents
D) It requires firms to eliminate all bias from training data before a model may be deployed
Answer: B
Explanation: Article 10's bias examination requirement is a legal obligation, not a best-practice recommendation. It applies to all training data used in high-risk AI systems and requires documented examination — creating legal accountability for training data decisions. The requirement does not demand that bias be fully eliminated (which may be impossible), but it does require that biases be identified, evaluated, and addressed through appropriate mitigation measures.
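For illustration, here is a minimal sketch of one common training-data bias check: the disparate impact ratio between groups. The column names, toy data, and 0.8 threshold (the conventional four-fifths rule of thumb) are assumptions for the example; Article 10 requires examination for possible biases but does not mandate any particular metric.

```python
# Minimal sketch of one possible training-data bias check: the
# disparate impact ratio between demographic groups. Column names,
# data, and the 0.8 threshold are illustrative assumptions.
import pandas as pd


def disparate_impact_ratio(df: pd.DataFrame, group_col: str, label_col: str,
                           protected: str, reference: str) -> float:
    """Ratio of favorable-label rates: protected group vs reference group."""
    rate_protected = df.loc[df[group_col] == protected, label_col].mean()
    rate_reference = df.loc[df[group_col] == reference, label_col].mean()
    return rate_protected / rate_reference


# Hypothetical training labels: 1 = loan approved in the historical data
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})

ratio = disparate_impact_ratio(df, "group", "approved", protected="B", reference="A")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # conventional four-fifths rule of thumb
    print("Potential bias in historical labels: document, evaluate, mitigate")
```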
Answer Key Summary
| Q | Answer |
|---|---|
| 1 | B |
| 2 | C |
| 3 | C |
| 4 | B |
| 5 | C |
| 6 | C |
| 7 | B |
| 8 | C |
| 9 | B |
| 10 | D |
| 11 | B |
| 12 | B |
| 13 | C |
| 14 | B |