Chapter 17: Further Reading

The Right to Explanation

An annotated bibliography of 18 essential sources.


Foundational Academic Papers

1. Goodman, Bryce, and Seth Flaxman. "European Union Regulations on Algorithmic Decision-Making and a 'Right to Explanation.'" AI Magazine 38, no. 3 (2017): 50-57. The paper that launched the GDPR right to explanation debate. Goodman and Flaxman argue that GDPR creates a meaningful individual right to explanation of AI decisions, examine the technical challenges this creates for machine learning systems, and call for development of explainable AI methods. Essential for understanding the debate's origins and the policy advocacy it inspired.

2. Wachter, Sandra, Brent Mittelstadt, and Luciano Floridi. "Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation." International Data Privacy Law 7, no. 2 (2017): 76-99. The essential counter-argument to Goodman and Flaxman. Wachter et al. conduct a close reading of GDPR text and recitals to argue that Article 22 creates only a right to prior information about decision logic, not a retrospective individual right to explanation. Required reading alongside the Goodman/Flaxman paper for understanding the legal debate.

3. Edwards, Lilian, and Michael Veale. "Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For." Duke Law and Technology Review 16 (2017): 18-84. A critical and practical synthesis of the Article 22 debate. Edwards and Veale argue that the legal question is less important than the practical one — what form of explanation would actually help individuals — and conclude that counterfactual explanations are the most useful format. Particularly readable for practitioners and policy professionals.
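To make concrete what a counterfactual explanation looks like in the spirit Edwards and Veale describe, here is a minimal sketch: find the smallest change to an input that flips an adverse decision. The toy credit model, feature names, and thresholds are illustrative assumptions, not anything from the paper.

```python
def approve(income, debt):
    """Toy credit model (illustrative): approve when income minus debt
    clears a fixed threshold."""
    return income - debt >= 30_000

def counterfactual_income(income, debt, step=500, limit=200_000):
    """Smallest income increase (in `step` increments) that would flip a
    denial into an approval, or None if no counterfactual is found."""
    if approve(income, debt):
        return None  # already approved; no counterfactual needed
    needed = income
    while needed <= limit:
        if approve(needed, debt):
            return needed - income
        needed += step
    return None

print(counterfactual_income(40_000, 20_000))  # → 10000
```

The output reads as a counterfactual statement of the kind Edwards and Veale favor: "you would have been approved had your income been $10,000 higher" — actionable for the individual without disclosing the model's internals.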

4. Rudin, Cynthia. "Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead." Nature Machine Intelligence 1 (2019): 206-215. The most influential argument for a fundamental rethinking of explainable AI. Rudin argues that in high-stakes domains, organizations should deploy inherently interpretable models rather than complex models with approximate explanations. Challenges the assumptions behind most AI explanation law and policy. Essential reading for anyone working in clinical, criminal justice, or financial services AI.

5. Fricker, Miranda. Epistemic Injustice: Power and the Ethics of Knowing. Oxford University Press, 2007. The philosophical foundation for the epistemic justice argument for explanation rights. Fricker's concept of hermeneutical injustice — the harm of lacking conceptual resources to understand one's experiences — provides the most sophisticated philosophical framework for why explanation matters beyond individual fairness. More demanding than most practitioners will engage with in full, but Chapter 7 on hermeneutical injustice is accessible and directly relevant.

6. Selbst, Andrew D., and Solon Barocas. "The Intuitive Appeal of Explainable Machines." Fordham Law Review 87, no. 3 (2018): 1085-1139. A rigorous examination of the assumptions behind explanation rights. Selbst and Barocas question whether explanation can deliver what its advocates promise — whether meaningful AI explanation is technically achievable and legally useful. Important counterpoint to the dominant enthusiasm for explanation rights.

7. Citron, Danielle Keats, and Frank Pasquale. "The Scored Society: Due Process for Automated Predictions." Washington Law Review 89 (2014): 1-33. A prescient analysis of the due process implications of algorithmic scoring systems, written before GDPR and before most current AI policy debates. Citron and Pasquale argue for procedural rights — including explanation rights — for individuals subject to algorithmic scores. Still one of the best frameworks for thinking about algorithmic due process.

8. Coglianese, Cary, and David Lehr. "Regulating by Robot: Administrative Decision Making in the Machine-Learning Era." Georgetown Law Journal 105 (2017): 1147-1223. A comprehensive analysis of how administrative law's reasoned explanation requirements apply to AI decision systems used by government agencies. Essential for understanding the constitutional and administrative law dimension of explanation rights in the public sector.

9. Yeung, Karen. "'Hypernudge': Big Data as a Mode of Regulation by Design." Information, Communication and Society 20, no. 1 (2017): 118-136. An analysis of how algorithmic systems regulate behavior through design rather than through transparent rules, and what implications this has for democratic legitimacy and accountability. Provides important context for understanding why the right to explanation matters for democratic governance, beyond individual fairness.


Technical Literature

10. Lipton, Zachary C. "The Mythos of Model Interpretability." Queue 16, no. 3 (2018): 1-27. A technically rigorous critique of the concept of interpretability in machine learning, arguing that the term encompasses multiple different desiderata that do not always cohere. Essential for understanding the technical complexity behind explanation requirements and why achieving "meaningful explanation" is harder than it might appear.

11. Lakkaraju, Himabindu, and Osbert Bastani. "'How Do I Fool You?': Manipulating User Trust via Misleading Black Box Explanations." Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020: 79-85. Empirical demonstration of the gaming problem: the authors show that machine learning models can be designed to produce benign-looking post-hoc explanations while behaving very differently in ways that would not produce those explanations. Essential reading for understanding why explanation requirements are not sufficient for algorithmic accountability.

12. Lundberg, Scott M., Gabriel Erion, Hugh Chen, et al. "From Local Explanations to Global Understanding with Explainable AI for Trees." Nature Machine Intelligence 2 (2020): 56-67. A significant expansion of the SHAP methodology with applications in clinical medicine. Demonstrates both the power of SHAP-based explanations for clinical decision support and the remaining challenges in communicating these explanations to clinicians. Important for understanding the state of the art in clinical AI explanation.
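For readers unfamiliar with the idea underlying SHAP, a self-contained sketch of exact Shapley-value attribution may help. This is a from-scratch illustration of the game-theoretic concept SHAP builds on, not Lundberg et al.'s efficient tree-specific algorithm; the toy model and baseline are assumptions for the example.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values attributing f(x) - f(baseline) across features.
    Features outside a coalition S are set to their baseline values."""
    n = len(x)

    def value(subset):
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += w * (value(set(S) | {i}) - value(set(S)))
        phis.append(phi)
    return phis

# Toy additive "score" model: attributions recover each term's contribution.
model = lambda z: 3.0 * z[0] + 2.0 * z[1]
print(shapley_values(model, [1.0, 1.0], [0.0, 0.0]))  # → [3.0, 2.0]
```

The exponential subset enumeration here is why practical tools need specialized algorithms such as the tree-model method the paper develops.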


Policy and Regulatory Sources

13. European Data Protection Board. "Guidelines 01/2022 on Data Subject Rights — Right of Access." EDPB, January 2022. While focused on access rights generally rather than Article 22 specifically, this guidance provides important context for how the EDPB interprets individuals' rights to information about automated processing. The most current authoritative EDPB statement on individuals' rights related to automated decision-making.

14. Information Commissioner's Office and Alan Turing Institute. "Explaining Decisions Made with AI: An Accessible Overview." ICO/ATI, May 2020. The most practically oriented official guidance on AI explanation anywhere in the world. The ICO and Alan Turing Institute produced a three-part guide covering what meaningful explanation requires, how to implement it for different AI system types, and how different audiences require different explanations. Required reading for practitioners implementing Article 22-compliant explanation systems.

15. AlgorithmWatch. "Automating Society Report." AlgorithmWatch, 2019 and subsequent updates. The most comprehensive documentation of how automated decision systems are used across European social institutions, and what their transparency and explanation practices look like in practice. The report provides empirical grounding for understanding the gap between legal requirements and actual practice. Available free online.


Case Studies and Investigative Reporting

16. Kirchner, Lauren, Matthew Goldstein, Julia Angwin, and Jeff Larson. "Federal Agencies Haul in Student Loan Data Covering 230 Million Americans." ProPublica, October 2020. A detailed investigation of automated decision systems in federal student loan servicing, including the opacity of these systems and the difficulty borrowers face in understanding and challenging adverse decisions. Illustrates how the US explanation gap manifests in practice for student loan borrowers.

17. Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. "Machine Bias." ProPublica, May 2016. The landmark investigation of COMPAS recidivism risk assessment, directly relevant to the discussion of criminal justice explanation rights. While primarily focused on bias, the investigation demonstrates the explanation and transparency problems of proprietary algorithmic tools in criminal justice.


Books

18. Dwork, Cynthia, and Aaron Roth. The Algorithmic Foundations of Differential Privacy. Now Publishers, 2014. Technical but foundational: differential privacy is one of the most promising approaches to enabling algorithmic accountability (including some forms of explanation) while protecting individual privacy. For readers with a technical background, this provides the mathematical foundation for privacy-preserving approaches to algorithmic transparency that are beginning to influence regulatory thinking.
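For readers who want a concrete feel for the core mechanism in Dwork and Roth's framework, here is a minimal sketch of the Laplace mechanism. The query, sensitivity, and epsilon values are illustrative assumptions; this is a conceptual sketch, not production privacy code.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Release true_value plus Laplace(sensitivity / epsilon) noise.
    Satisfies epsilon-differential privacy for a numeric query whose L1
    sensitivity (the most one individual's data can change the answer)
    is as given."""
    scale = sensitivity / epsilon
    # Inverse-CDF sampling of Laplace(0, scale) from a uniform draw.
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(max(1.0 - 2.0 * abs(u), 1e-300))
    return true_value + noise

# Example: releasing a count of affected individuals (sensitivity 1)
# under a privacy budget of epsilon = 0.5.
noisy_count = laplace_mechanism(100.0, sensitivity=1.0, epsilon=0.5)
```

Smaller epsilon means stronger privacy but noisier answers — the accuracy/privacy trade-off that makes this relevant to debates about how much algorithmic transparency is compatible with protecting the individuals in the training data.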