Chapter 17: Exercises
The Right to Explanation
Twenty-five exercises ranging from philosophical analysis to legal application to organizational design.
Part A: Comprehension and Conceptual Analysis (Exercises 1–8)
Exercise 1: Philosophical Foundations
Write a 600-word essay examining the relationship between the right to explanation and the concept of autonomy. Your essay should address: How does AI opacity threaten autonomy? What does autonomy require beyond the absence of coercion? Does the right to explanation protect autonomy, or does it protect something different? Draw on at least one philosophical source beyond this textbook (suggestions: Kant's categorical imperative; Mill's harm principle; Frankfurt's concept of free action; or a contemporary autonomy theorist).
Exercise 2: Article 22 Scope Analysis
For each of the following scenarios, determine whether GDPR Article 22 applies: (a) A bank uses an ML model to generate a credit score that a loan officer uses as one input among several in making a loan decision; (b) A parole board uses a recidivism risk score generated by an algorithm, but the board members state they exercise independent judgment; (c) A social media platform's algorithm decides which posts appear in a user's feed; (d) An HR system automatically rejects resumes that score below a threshold without human review; (e) An insurance company sets premiums using an algorithmic rating model, with a human underwriter approving each policy. For each, explain your reasoning, including how you apply the "solely automated" and "legal or similarly significant effects" criteria.
Exercise 3: Academic Debate Analysis
Read the abstracts and conclusions of all three foundational papers in the GDPR right to explanation debate: Goodman and Flaxman (2017), Wachter, Mittelstadt, and Floridi (2017), and Edwards and Veale (2017). Write a 700-word analytical summary that: accurately represents each paper's argument; identifies the points of disagreement between them; evaluates which argument is most persuasive on both legal and practical grounds; and identifies a question that none of the three papers satisfactorily answers.
Exercise 4: Faithfulness Problem
Research SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) as post-hoc explanation methods. For each, describe: how the method works; what it explains; the circumstances under which the method may produce explanations that are not faithful to the model's actual reasoning; and what evidence exists for unfaithful explanations in practice. Conclude with your assessment: given the faithfulness problem, can SHAP- and LIME-based explanations satisfy the legal standard of "meaningful information about the logic involved" under GDPR Article 22?
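As background for this exercise, LIME's core mechanics can be sketched without any ML library. The code below is a toy illustration, not the lime package itself: black_box, lime_style_explanation, and the two-feature credit model are all invented for this exercise. It also surfaces the faithfulness concern the exercise asks about: the surrogate's attributions depend on where the black box is probed.

```python
import math
import random

# Hypothetical "black box" credit model (invented for illustration).
# Its true logic contains an interaction term that no linear
# surrogate can represent globally.
def black_box(income, debt):
    return 1.0 if income - debt + 0.5 * income * debt > 1.0 else 0.0

def _solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return x

def lime_style_explanation(instance, n_samples=500, sigma=0.3):
    """LIME's core idea: fit a proximity-weighted linear surrogate to the
    black box's outputs on random perturbations around `instance`."""
    rng = random.Random(0)
    rows, targets, weights = [], [], []
    for _ in range(n_samples):
        x = [v + rng.gauss(0.0, sigma) for v in instance]
        d2 = sum((a - b) ** 2 for a, b in zip(x, instance))
        rows.append([1.0, x[0], x[1]])           # intercept, income, debt
        targets.append(black_box(*x))
        weights.append(math.exp(-d2 / (2 * sigma ** 2)))  # proximity kernel
    # Weighted least squares via the normal equations X'WX beta = X'Wy.
    A = [[sum(w * r[i] * r[j] for r, w in zip(rows, weights)) for j in range(3)]
         for i in range(3)]
    b = [sum(w * r[i] * y for r, y, w in zip(rows, targets, weights))
         for i in range(3)]
    return _solve3(A, b)  # [intercept, income attribution, debt attribution]

# Near the decision boundary the surrogate attributes a positive weight to
# income and a negative one to debt; far from the boundary, the same
# procedure sees an almost constant function and attributes nearly nothing
# to either feature, even though the model's logic has not changed.
near = lime_style_explanation([1.0, 0.0])
far = lime_style_explanation([3.0, 1.0])
```

The contrast between the two calls is the point: both "explanations" come from the same procedure applied to the same model, yet they tell very different stories, which is one concrete way post-hoc explanations can fail to be faithful.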
Exercise 5: Rudin Argument Evaluation
Research Cynthia Rudin's 2019 paper "Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead." Write a 600-word critical evaluation that: accurately summarizes her argument; identifies the strongest points of her case; identifies the strongest objections to her position; and arrives at your own assessment of when her argument is most compelling and when its limitations are most significant.
Exercise 6: Systemic vs. Individual Transparency
A class action lawsuit reveals that a major bank's ML credit model has approved loans for white applicants at a rate 23 percentage points higher than for Black applicants with identical financial profiles. The bank argues that it has provided adequate individual adverse action notices to all declined applicants, satisfying its ECOA obligations. Analyze: (a) Does adequate individual explanation resolve the systemic discriminatory pattern? (b) What information would be needed to identify the systemic pattern that individual explanations could not reveal? (c) What additional transparency mechanisms would be needed to make the systemic pattern visible and addressable?
Exercise 7: Gaming Problem Analysis
Describe a specific scenario in which a financial institution could design a credit scoring model to produce GDPR Article 22-compliant explanations while the model's actual decision logic remained opaque and unexplained. What would the "gamed" explanation system look like? What would a regulator need to examine to detect that the explanation was gaming the requirement rather than genuinely compliant? What regulatory approaches would be most resistant to this type of gaming?
Exercise 8: Global Comparison
Compare the right to explanation for credit decisions in: (a) the European Union, under GDPR Article 22 and the EU AI Act; (b) the United States, under ECOA adverse action notice requirements and state-level initiatives; (c) China, under its algorithmic recommendation provisions and credit regulation. For each, identify what an affected individual can demand, what information they can access, and what recourse they have. Which jurisdiction provides the most meaningful protection, and why?
Part B: Applied Legal Analysis (Exercises 9–16)
Exercise 9: GDPR Article 22 Compliance Audit
You have been hired as a GDPR compliance consultant by a European bank that uses an ML model for mortgage decisions. The bank currently provides the following when a mortgage is denied: a brief notice stating that the application was unsuccessful; a general description of the factors considered in mortgage decisions (income, debt level, credit history, property value); and information about the right to request human review. Audit the bank's current practice against Article 22 requirements and the EDPB guidance. Identify specific gaps. Recommend specific changes that would bring the bank into compliance with the regulation as the EDPB has interpreted it.
Exercise 10: Article 22 Enforcement Action
Draft a complaint to the Berlin Data Protection Authority (the Berliner Beauftragte für Datenschutz und Informationsfreiheit) alleging that a major German employer's AI resume screening system violates GDPR Article 22. Your complaint should: identify the specific Article 22 violation; describe the evidence you would cite; specify what remedy you are seeking; and address the likely arguments the employer would raise in response. Include an analysis of whether the "solely automated" and "significantly affects" criteria are met.
Exercise 11: US Federal Legislation Drafting
Draft the key provisions of a proposed federal Algorithmic Explanation Act for the United States. Your proposed statute should: define the scope of covered automated decisions (which sectors, which decision types, what threshold of automation triggers coverage); specify what explanation a covered person is entitled to request and what it must contain; establish enforcement mechanisms (who enforces, what penalties, private right of action?); and address the relationship to existing sector-specific requirements. For each major provision, explain the rationale and the tradeoffs you considered.
Exercise 12: Criminal Justice Application
A state court has used a COMPAS recidivism risk score in sentencing a defendant convicted of fraud. The defendant, through counsel, has requested: (a) the specific inputs used to calculate the score; (b) the specific weights assigned to each factor; (c) validation data showing the model's accuracy for defendants with the defendant's demographic characteristics; (d) access to an independent expert to examine the model. The vendor has refused to provide items (b), (c), and (d), citing trade secret protection. Write a legal memorandum analyzing the defendant's constitutional claims and the legal authority that supports and opposes them. Draw on the Loomis case and the due process framework.
Exercise 13: Healthcare AI Explanation
An oncology AI model recommends against further active treatment for a patient with advanced cancer. The patient, through her oncologist, requests: (a) what data the model used about her; (b) the main factors driving the recommendation; (c) how the model has performed for patients with her cancer type; (d) how often the model's recommendations differ from clinical outcomes. The hospital's AI vendor has provided only items (a) and (b). Under what legal frameworks, if any, is the patient entitled to items (c) and (d)? Under what ethical frameworks? What policy changes would be needed to ensure patients have access to this information?
Exercise 14: Employment AI Explanation Request
You represent a job applicant, Marcus, who applied for a position at a company in New York City and was rejected. The company used an automated resume screening tool (an AEDT under NYC Local Law 144). Marcus requests: (a) information about what criteria the AEDT used to evaluate his application; (b) why his application scored below the threshold for human review; (c) the results of the most recent bias audit of the AEDT. Advise Marcus: What is he entitled to under NYC Local Law 144? What additional rights might he have under other legal frameworks? What would he need to prove to bring a discrimination claim if the AEDT has been found to have disparate impact?
Exercise 15: Benefits Algorithm Challenge
A state Medicaid agency is using an automated tool to determine eligibility for home health aide services. Rosa, a quadriplegic recipient, received a letter reducing her weekly aide hours from 60 to 40, citing "updated assessment." She cannot understand from the letter why her hours changed. She wants to challenge the decision. Using the constitutional framework from the Arkansas Medicaid case and the due process principles from Goldberg v. Kelly, advise Rosa: What due process rights does she have? What information must the agency provide to satisfy constitutional notice requirements? What must the agency's appeal process look like to satisfy the requirement of a genuine opportunity to be heard? What legal remedies are available if the agency's current process is constitutionally inadequate?
Exercise 16: Vendor Contract Analysis
A county government is negotiating a contract to purchase an AI pretrial risk assessment tool from a private vendor. The vendor's proposed contract includes: a clause preventing the county from disclosing the algorithm's methodology without vendor consent; a provision allowing the vendor to update the model without prior notice; and a limitation on liability for decisions made using the tool. You are the county's chief legal officer. Identify all provisions that are constitutionally or legally problematic. Draft alternative language for each problematic provision that would allow the county to satisfy its due process obligations.
Part C: Organizational Design (Exercises 17–21)
Exercise 17: Explanation Capacity Assessment
Conduct a hypothetical assessment of a bank that uses an ML credit scoring model. For each of the following explanation capacity dimensions, describe what adequate capacity would look like and what gap assessment questions you would use to evaluate it: (a) model documentation sufficient for individual explanation generation; (b) post-hoc explanation tool selection and validation; (c) explanation interface design for loan officers; (d) explanation communication for denied applicants; (e) appeal process with genuine human review authority; (f) feedback loops from appeal outcomes to model improvement; (g) staff training on AI communication and explanation.
Exercise 18: Explanation System Design
Design a comprehensive explanation system for an AI-assisted parole decision tool used by a state parole board. Your design should address: (a) what information the parole board members receive about each parolee's risk score, including factors, confidence levels, and known limitations; (b) what explanation the parolee is entitled to receive about the score's role in the decision; (c) what explanation the parolee's attorney receives and can present to the board; (d) how the parolee can challenge the score's inputs or methodology; (e) what documentation is maintained for each parole decision; and (f) how errors identified through the appeal process are communicated to the model developer for correction.
Exercise 19: Interpretable Model Transition Plan
A regional bank currently uses a gradient boosting model for credit decisions, which requires post-hoc explanation. The bank's data science team has proposed transitioning to a logistic regression model with carefully selected features, which is inherently interpretable. The model risk team opposes the transition, arguing that the interpretable model would be less accurate. Design a project plan for evaluating and potentially implementing the transition. Your plan should include: methodology for comparing accuracy between the models on the bank's data; criteria for determining what accuracy reduction, if any, is acceptable to gain interpretability; stakeholder management for the model risk team's concerns; regulatory consultation; and implementation timeline.
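One concrete piece of the accuracy-comparison methodology can be sketched in a few lines. The example below is a hypothetical illustration, not the bank's actual data or code: the holdout results are simulated, and paired_bootstrap_diff is an invented helper that bootstraps a 95% confidence interval for the accuracy difference between the two models.

```python
import random

def paired_bootstrap_diff(correct_a, correct_b, n_boot=2000, seed=0):
    """95% bootstrap confidence interval for accuracy(A) - accuracy(B),
    resampling the same holdout cases for both models (paired design)."""
    rng = random.Random(seed)
    n = len(correct_a)
    diffs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample with replacement
        acc_a = sum(correct_a[i] for i in idx) / n
        acc_b = sum(correct_b[i] for i in idx) / n
        diffs.append(acc_a - acc_b)
    diffs.sort()
    return diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot)]

# Simulated per-applicant holdout outcomes (1 = correct decision), standing
# in for scoring both models on the bank's own data.
rng = random.Random(1)
gbm_correct = [1 if rng.random() < 0.82 else 0 for _ in range(1000)]    # gradient boosting
logit_correct = [1 if rng.random() < 0.80 else 0 for _ in range(1000)]  # logistic regression
low, high = paired_bootstrap_diff(gbm_correct, logit_correct)
```

If the resulting interval includes zero, the holdout data cannot distinguish the models' accuracy, which strengthens the case for adopting the interpretable one; the plan's acceptance criteria can then be stated as a bound on the interval rather than a comparison of point estimates.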
Exercise 20: Community Transparency Initiative
A city government uses AI systems in: pretrial risk assessment, predictive policing patrol allocation, automated building permit review, and social services benefit determination. Design a public transparency initiative that enables community members to understand and engage with these systems. Your initiative should include: a public AI register that describes each system in plain language; a community feedback mechanism for reporting concerns about specific AI decisions; an annual public report on AI system performance, including error rates and demographic disparities; a community advisory body with meaningful input into AI deployment and design; and a public portal for exercising individual explanation rights.
Exercise 21: Organizational Culture Assessment
Develop an assessment tool for evaluating whether an organization's culture genuinely supports the right to explanation or merely complies with it formally. Your assessment should include questions in five areas: leadership commitment (how do senior leaders talk about AI transparency?); incentive structures (are frontline workers rewarded for explaining decisions or for processing volume?); challenge acceptance (how does the organization respond when employees or customers challenge AI decisions?); transparency investment (what resources are allocated to explanation capacity?); and learning culture (how does the organization use explanation failures to improve?). For each area, include 3-4 specific questions and indicators of genuine vs. performative commitment.
Part D: Research and Extended Analysis (Exercises 22–25)
Exercise 22: GDPR Enforcement Research
Research GDPR enforcement actions that have specifically addressed or touched on Article 22 obligations, beyond those discussed in this chapter. Use GDPRhub (gdprhub.eu), DPA websites, and academic literature. Identify five enforcement actions that involved Article 22 issues. For each, summarize: the facts; the alleged violation; the DPA's finding; the sanction imposed; and what the case reveals about the current state of Article 22 enforcement. Write a 1,500-word synthesis that addresses the overall enforcement pattern and its implications.
Exercise 23: State Law Comparison
Conduct a systematic comparison of AI explanation and transparency requirements in five US states: California, Colorado, Connecticut, New York (specifically NYC Local Law 144), and Illinois. For each state, identify: the relevant statutes and regulations; their scope (what decisions, what AI systems, what organizations); what explanation or disclosure they require; what enforcement mechanisms exist; and what gaps remain. Write a 2,000-word comparative analysis that concludes with a recommendation for the state-level approach that best protects individuals' explanation rights.
Exercise 24: Technical Explanation Method Evaluation
Conduct a literature review of empirical research on the faithfulness and usefulness of post-hoc explanation methods for individual decisions. Focus on SHAP and LIME as the most commonly used methods. Your review should address: What evidence exists for unfaithfulness of these methods (i.e., cases where explanations do not accurately represent model reasoning)? What evidence exists for cases where the methods produce accurate explanations? Under what conditions are these methods more or less faithful? What alternative explanation methods have been proposed to address faithfulness concerns? Synthesize your findings into a 1,500-word assessment of whether SHAP and LIME explanations can satisfy meaningful legal explanation requirements.
Exercise 25: Policy Brief for a National Legislature
Write a policy brief addressed to a national legislature in a jurisdiction that currently lacks a right to explanation for AI decisions (suggestions: the US Congress, the UK Parliament, or the legislature of a non-EU country of your choice). Your brief should: document the current gap in explanation rights and specific harms it causes; analyze the EU GDPR and AI Act frameworks as models; propose specific legislative language creating an explanation right appropriate for your chosen jurisdiction; address likely objections from industry; and estimate the costs and benefits of the proposed legislation. The brief should be 2,000-2,500 words and should be pitched appropriately for a legislative audience with limited technical background.