Chapter 15: Further Reading
Communicating AI Decisions to Stakeholders
Annotated bibliography of 18 essential sources.
Foundational Texts
1. Pasquale, Frank. The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press, 2015. The foundational text on algorithmic opacity and its social consequences. Pasquale examines how the opacity of algorithms used in finance, reputation, and search creates power asymmetries between institutions and individuals. Essential context for understanding why AI communication matters and what is at stake when it fails. Accessible to non-technical readers.
2. Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin's Press, 2018. A deeply reported examination of how automated decision systems in welfare, child protective services, and criminal justice affect poor and working-class Americans. Eubanks's account of the Arkansas benefits case and similar systems provides essential grounding for the communication challenges discussed in this chapter. Required reading for anyone working in public sector AI.
3. O'Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown, 2016. A broad survey of algorithmic systems that produce harm through opacity, scale, and misaligned incentives. Chapters on employment screening, credit scoring, and insurance pricing provide detailed examples of the communication failures this chapter addresses.
Legal and Regulatory Sources
4. Consumer Financial Protection Bureau. "Adverse Action Notification Requirements and the Equal Credit Opportunity Act's Requirement to State Specific Reasons for Adverse Action." CFPB Circular 2022-03. The CFPB's 2022 guidance clarifying that ECOA adverse action requirements apply to AI-based credit models, and that "principal reasons" must be specific enough to enable consumers to understand and respond to them. Essential for compliance professionals in financial services.
5. Article 29 Working Party (now European Data Protection Board). "Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679." WP251rev.01, 2018. The authoritative EU guidance on GDPR Article 22. Covers scope, the meaning of "solely automated," what "meaningful information about the logic involved" requires, and best practices for implementing the right to human review. Dense but essential for practitioners operating in EU contexts.
6. Information Commissioner's Office (UK). "Explaining Decisions Made with AI." ICO / Alan Turing Institute, 2020. A practical, well-organized guide to implementing meaningful explanation in AI systems, developed jointly by the UK data protection regulator and the Alan Turing Institute. Covers explanation for different audiences (technical, non-technical), different contexts, and different model types. One of the most practically useful documents in the field.
7. Federal Trade Commission. "Aiming for Truth, Fairness, and Equity in Your Company's Use of AI." FTC Blog, April 2021. The FTC's clearest statement of its view on AI transparency obligations and deceptive practices in AI deployment. Accessible summary of the FTC's position on disclosure, algorithmic accountability, and the limits of acceptable AI opacity.
8. Ledgerwood v. Jobe, No. 4:16-cv-00564 (E.D. Ark. 2016) and subsequent proceedings. The primary legal authority on constitutional due process requirements for algorithmic benefit determinations. The district court's ruling on notice adequacy and appeal requirements remains highly influential. Available through legal databases and discussed in numerous law review articles.
Technical and Scientific Literature
9. Lundberg, Scott M., and Su-In Lee. "A Unified Approach to Interpreting Model Predictions." Advances in Neural Information Processing Systems 30, 2017. The original paper introducing SHAP (SHapley Additive exPlanations), one of the most widely used post-hoc explanation methods for feature importance. Understanding SHAP is necessary for evaluating whether AI explanations based on it are accurate and what their limitations are. The paper itself is technical, but accessible introductions are available in blog posts by the authors.
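For readers who want intuition for what SHAP computes before reading the paper: SHAP is grounded in the game-theoretic Shapley value, which credits each feature with its average marginal contribution across all orderings of the features. The sketch below computes exact Shapley values for a tiny hypothetical model (the model, features, and baseline are invented for illustration; real SHAP libraries approximate this computation efficiently, since the exact version is exponential in the number of features).

```python
from itertools import combinations
from math import factorial

def shapley_values(features, model, baseline):
    """Exact Shapley values for a small feature set.

    `model` maps a feature dict to a prediction; features absent from
    a coalition are filled in from `baseline`. Exponential in the
    number of features, so only practical as a teaching example.
    """
    names = list(features)
    n = len(names)

    def value(subset):
        # Prediction with only `subset` set to the applicant's values.
        x = dict(baseline)
        for f in subset:
            x[f] = features[f]
        return model(x)

    phi = {}
    for f in names:
        others = [g for g in names if g != f]
        total = 0.0
        for k in range(len(others) + 1):
            for s in combinations(others, k):
                # Shapley weight for a coalition of size k.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(s + (f,)) - value(s))
        phi[f] = total
    return phi

# A toy, hypothetical credit score: rises with income, falls with debt.
model = lambda x: 2.0 * x["income"] - 1.0 * x["debt"]
applicant = {"income": 5.0, "debt": 3.0}
baseline = {"income": 4.0, "debt": 1.0}
print(shapley_values(applicant, model, baseline))
# For this linear model: income contributes 2*(5-4) = +2.0,
# debt contributes -1*(3-1) = -2.0.
```

Note the "efficiency" property visible here: the attributions sum exactly to the difference between the applicant's prediction and the baseline prediction, which is what makes SHAP-style outputs attractive for adverse-action explanations, and also what makes their baseline choice a substantive communication decision.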
10. Kahneman, Daniel, Olivier Sibony, and Cass R. Sunstein. Noise: A Flaw in Human Judgment. Little, Brown, 2021. A rigorous examination of the variability (noise) in human judgment in professional decision-making, including medicine, law, and finance. Provides important context for evaluating automation bias: human judgment is not a perfect baseline against which AI should be compared.
11. Cabitza, Federico, Raffaele Rasoini, and Gian Franco Gensini. "Unintended Consequences of Machine Learning in Medicine." JAMA 318, no. 6 (2017): 517-518. A concise clinical perspective on the risks of automation bias and automation complacency in AI-assisted medicine. Foundational for discussions of how AI communication design must account for predictable human response to algorithmic outputs.
Communication Theory and Practice
12. Tversky, Amos, and Daniel Kahneman. "Judgment Under Uncertainty: Heuristics and Biases." Science 185, no. 4157 (1974): 1124-1131. The seminal paper on cognitive biases in probabilistic judgment. Essential background for understanding why communicating AI uncertainty is difficult: the cognitive shortcuts people use to interpret probabilities often produce systematic errors. Understanding these biases is a prerequisite to designing communications that mitigate them.
13. Gigerenzer, Gerd, Wolfgang Gaissmaier, Elke Kurz-Milcke, Lisa M. Schwartz, and Steven Woloshin. "Helping Doctors and Patients Make Sense of Health Statistics." Psychological Science in the Public Interest 8, no. 2 (2007): 53-96. The definitive empirical study of how to communicate health statistics to non-technical audiences. Demonstrates the superiority of natural frequency formats and absolute risk over relative risk and percentages. Essential reading for anyone designing AI output communications, in healthcare or any other domain.
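The entry above names a concrete, implementable technique: stating risk as a natural frequency ("about 8 in 1,000 people") rather than a percentage or a relative-risk multiplier. A minimal sketch of the conversion (the function name and the numbers are mine, chosen only to illustrate the format):

```python
def natural_frequency(probability, reference=1000):
    """Render a probability as an 'about N in M' natural frequency,
    the format Gigerenzer et al. found easiest to interpret."""
    count = round(probability * reference)
    return f"about {count} in {reference:,}"

# Relative-risk framing obscures magnitude: "the model doubles the
# flagged risk." Natural-frequency framing makes it concrete:
print(natural_frequency(0.004))  # about 4 in 1,000
print(natural_frequency(0.008))  # about 8 in 1,000
```

The design point is that both statements describe the same numbers; only the second lets a reader see that "doubling" here means four additional people per thousand.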
14. Edwards, Lilian, and Michael Veale. "Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For." Duke Law & Technology Review 16 (2017): 18-84. A rigorous legal and technical critique of the claim that GDPR creates a meaningful right to explanation. Argues that counterfactual, individual-level explanations may be more practically useful than the "logic involved" disclosure GDPR appears to require. Influential in the academic debate, and readable by non-lawyers.
Case Studies and Investigative Reporting
15. Heaven, Will Douglas. "Hundreds of AI tools have been built to catch COVID. None of them helped." MIT Technology Review, July 2021. An investigation of the failure of dozens of clinical AI tools developed during the COVID-19 pandemic, most of which had significant methodological flaws and performed poorly in real-world clinical settings. Essential context for understanding why clinical AI communication must include honest performance disclosure.
16. Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. "Machine Bias." ProPublica, May 2016. The landmark investigation of COMPAS recidivism risk assessment scores and their racial disparities. While focused on bias rather than communication, the investigation illustrates the intersection of opacity and discrimination and the importance of community access to information about AI systems used in criminal justice.
17. Rajpurkar, Pranav, Emma Chen, Oishi Banerjee, and Eric J. Topol. "AI in Health and Medicine." Nature Medicine 28 (2022): 31-38. A state-of-the-field survey of clinical AI applications, including discussion of implementation challenges, performance variability, and communication requirements. Useful for understanding the clinical AI landscape beyond any single system or case.
Policy and Governance
18. New York City Automated Decision Systems Task Force. "Report of the Automated Decision Systems Task Force." City of New York, 2019. The report of New York City's first systematic attempt to inventory and evaluate AI systems used in city government. Though its recommendations have been only partially implemented, the report represents a milestone in governmental AI transparency and provides a practical model for community-level AI governance. Available free online from the NYC Mayor's Office.