Chapter 17: Quiz
The Right to Explanation
20 multiple-choice questions. Select the best answer for each.
Question 1. The philosophical foundation of the right to explanation is BEST described as grounded in:
A) The economic efficiency of transparent markets
B) Autonomy (explanation enables self-direction), dignity (being treated as a person, not a data point), epistemic justice (the right to understand what affects you), and due process (the right to know the basis of decisions about you)
C) Technical requirements of explainable AI systems
D) The historical development of administrative law in European civil law systems
Question 2. Under GDPR Article 22, individuals have the right not to be subject to decisions based "solely on automated processing." The word "solely" is significant because:
A) It limits the right to decisions made entirely without any human involvement whatsoever, making the right extremely broad
B) It creates room for organizations to avoid Article 22 obligations through nominal human involvement — rubber-stamp review that does not constitute genuine human oversight
C) It means Article 22 applies only to fully autonomous AI systems, not to AI systems that assist human decision-makers
D) It restricts Article 22 to decisions about individuals, excluding decisions about organizations
Question 3. Which of the following decisions would MOST clearly trigger GDPR Article 22's protections?
A) A streaming service's AI recommends a movie to a subscriber based on their viewing history
B) An automated email marketing system decides when to send promotional emails
C) A bank's AI system automatically rejects a mortgage application, without human review, based on the applicant's credit profile
D) A factory's AI quality control system rejects defective products
Question 4. Goodman and Flaxman's 2017 paper argued that GDPR creates:
A) A narrowly limited right to see one's credit report
B) A broad right to explanation of specific AI decisions, requiring AI systems to be technically capable of explaining their outputs to affected individuals
C) No right to explanation, only a right to notification that automated processing occurred
D) A right to explanation only for government AI decisions, not private sector decisions
Question 5. Wachter, Mittelstadt, and Floridi's counter-argument held that GDPR Article 22 creates:
A) A right to explanation equivalent to the right to a full technical disclosure of the model's architecture and weights
B) A right to information about the general logic of automated decision-making processes provided in advance, not an individual's right to demand a post-hoc explanation of a specific decision
C) No explanation rights whatsoever; only a right to opt out of automated processing
D) Explanation rights identical to those in the US Fair Credit Reporting Act
Question 6. Edwards and Veale's contribution to the Article 22 debate focused primarily on:
A) Arguing that Article 22 should be amended to create a stronger right to explanation
B) Arguing that counterfactual explanations — "What would have changed the outcome?" — are practically more valuable for individuals than formal model explanations, regardless of how the legal debate is resolved
C) Defending Wachter, Mittelstadt, and Floridi's narrow reading of Article 22
D) Proposing that explanation rights should apply only to public sector AI, not private sector AI
Question 7. The EU AI Act's Article 86 is significant for the right to explanation because it:
A) Repeals GDPR Article 22 and replaces it with a narrower right
B) Creates an individual right to explanation of AI's role in high-risk AI decisions — applying regardless of whether the decision was "solely automated" — filling the gap left by Article 22's scope limitation
C) Requires all AI systems to be inherently interpretable
D) Establishes a new EU agency responsible for AI explanation enforcement
Question 8. The "faithfulness problem" in AI explanation refers to:
A) The obligation of AI systems to provide honest, non-manipulative explanations
B) The difficulty of translating technical AI explanations into plain language for non-technical recipients
C) The risk that post-hoc explanation methods produce approximations of model reasoning that do not accurately represent the model's actual decision logic
D) The requirement that AI systems faithfully disclose their training data
Question 9. Cynthia Rudin's argument in her 2019 Nature Machine Intelligence paper is that:
A) All AI explanation methods are inherently unfaithful and therefore explanation rights are unenforceable
B) In high-stakes decision domains, organizations should use inherently interpretable models (decision trees, logistic regression) that can genuinely explain themselves, rather than complex black-box models with post-hoc explanations
C) Only government AI systems require explanation capacity; private AI systems should be exempt
D) Explanation methods have become accurate enough that black-box models are now acceptable in high-stakes contexts
Question 10. The constitutional due process requirements for AI benefits decisions in the United States are BEST established by:
A) The Supreme Court's decision in AI-specific cases decided after 2015
B) GDPR, which applies to US government decisions affecting EU citizens
C) Goldberg v. Kelly (1970) and its progeny, establishing notice and hearing requirements for benefit terminations, applied to algorithmic decisions by lower courts including Ledgerwood v. Jobe (2016)
D) The Administrative Procedure Act's notice and comment rulemaking requirements
Question 11. The "gaming problem" in AI explanation refers to:
A) The use of AI for illegal gaming operations
B) The risk that known explanation requirements enable organizations to design AI systems that produce explanation-compliant outputs that do not accurately represent the model's actual decision logic
C) The difficulty of explaining AI decisions in the context of sports and gaming
D) The manipulation of explanation systems by adversarial users who game appeals processes
Question 12. Which US legal framework provides the CLOSEST analog to GDPR Article 22's right to explanation?
A) The First Amendment right to receive information
B) The Freedom of Information Act, which allows individuals to request government records
C) The Equal Credit Opportunity Act's adverse action notice requirements, which require lenders to provide specific reasons for adverse credit decisions
D) The Electronic Communications Privacy Act, which protects communications from government surveillance
Question 13. State v. Loomis (Wisconsin Supreme Court, 2016) is relevant to the right to explanation because:
A) The court held that defendants have an absolute right to know the algorithm behind any risk assessment tool
B) The court upheld the use of a proprietary COMPAS recidivism risk score in sentencing without full disclosure of the algorithm, raising unresolved due process questions about opacity in criminal justice AI
C) The court prohibited the use of all AI risk assessment tools in sentencing
D) The court required the state to publish COMPAS's complete algorithm in the public record
Question 14. The "audience problem" in AI explanation refers to:
A) The difficulty of getting the media to cover AI explanation issues accurately
B) The problem that what constitutes a "meaningful" explanation depends entirely on the recipient's background — the explanation appropriate for a data scientist differs fundamentally from the explanation appropriate for a loan applicant
C) The challenge of explaining AI to legislative audiences unfamiliar with technology
D) The limitation that Article 22 rights only apply to individual data subjects, not to class action plaintiffs
Question 15. Why are individual explanation rights insufficient for systemic AI accountability?
A) Individual rights are legally unenforceable in most jurisdictions
B) Individual explanations, even when accurate, do not reveal aggregate patterns of discriminatory or erroneous AI decision-making that only become visible in analysis of many decisions across populations
C) Individuals typically lack the technical capacity to evaluate explanations they receive
D) Individual explanation rights are too expensive for organizations to implement at scale
Question 16. The Dutch Tax Authority's toeslagenaffaire case is significant for Article 22 because it demonstrated that:
A) GDPR Article 22 has been fully effective in preventing algorithmic government discrimination
B) An algorithmic fraud detection system that discriminated by nationality and used nominal rather than genuine human review violated GDPR, resulting in significant political and legal consequences
C) The Dutch Data Protection Authority lacks enforcement authority over government agencies
D) Article 22's "solely automated" requirement prevents it from applying to government decision systems that involve any human review
Question 17. The EU AI Act's requirement for a publicly accessible database of high-risk AI systems represents what form of transparency?
A) Individual transparency — enabling specific affected individuals to access information about decisions made about them
B) Systemic transparency — enabling visibility into what AI systems are deployed and what they do at the system level, rather than only at the individual decision level
C) Technical transparency — enabling AI researchers to audit model architectures
D) Commercial transparency — enabling competitors to understand market-leading AI systems
Question 18. In the context of GDPR Article 22, "meaningful information about the logic involved" has been interpreted by the EDPB and ICO to include:
A) Complete disclosure of model weights, training data, and validation results
B) Information about the factors and criteria used in the decision, their relative significance, and ideally counterfactual information about what would have changed the outcome — specific to the individual's case
C) Only a statement that automated processing occurred, without details about its logic
D) A copy of the vendor's technical documentation for the AI system
Question 19. China's algorithmic transparency regulations (2022) differ from EU GDPR Article 22 primarily in that they:
A) Are more comprehensive and better enforced than GDPR Article 22
B) Focus primarily on algorithmic content recommendation systems and are shaped significantly by government interests in controlling information flows, rather than being grounded in liberal individual autonomy principles
C) Apply only to AI systems used by foreign companies, not Chinese domestic companies
D) Require full public disclosure of all algorithm source code
Question 20. Building genuine organizational explanation capacity requires:
A) Purchasing a post-hoc explanation tool and deploying it alongside existing models
B) A comprehensive approach that includes: designing explanation capacity into models from the start, building explanation interfaces tested with representative users, training staff to communicate AI explanations, creating genuine appeal processes with human override authority, and maintaining feedback loops from appeals back to model improvement
C) Legal compliance with the minimum requirements of GDPR Article 22
D) Commissioning an annual external audit of the organization's AI explanation practices
Answer Key
1. B
2. B
3. C
4. B
5. B
6. B
7. B
8. C
9. B
10. C
11. B
12. C
13. B
14. B
15. B
16. B
17. B
18. B
19. B
20. B