Chapter 13: Further Reading and Annotated Sources


Foundational Academic Works

1. Rudin, Cynthia. "Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead." Nature Machine Intelligence 1 (2019): 206–215.

This is the most important scholarly challenge to the conventional assumption that opacity is an acceptable cost of AI performance. Rudin argues that in high-stakes tabular-data domains — criminal justice, healthcare, credit — the accuracy-interpretability trade-off is largely a myth. Post-hoc explanation methods like LIME and SHAP, she contends, are no substitute for genuine interpretability: they provide approximations that can give practitioners unwarranted confidence in their understanding of model behavior. Rudin advocates designing interpretable models from the ground up rather than building complex models and then trying to explain them. Essential reading for anyone involved in AI deployment decisions in consequential domains. Available open-access through various academic repositories.


2. Pasquale, Frank. The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press, 2015.

Pasquale's landmark book provides the intellectual architecture for understanding algorithmic opacity as a governance and power problem, not merely a technical challenge. Writing before the current AI boom, Pasquale documents how financial institutions and technology companies have weaponized opacity to avoid accountability, concentrating power while escaping scrutiny. His concept of the "black box society" describes a world where consequential decisions are made by systems designed to be illegible — not coincidentally, but deliberately. The book anticipates many of the concerns about COMPAS, social media algorithms, and credit scoring that have become central policy debates. Highly readable for a business audience. Recommended as background for Chapters 13, 16, and 18.


3. Breiman, Leo. "Statistical Modeling: The Two Cultures." Statistical Science 16, no. 3 (2001): 199–231.

The paper that introduced the concept of the "Rashomon effect" — the existence of many different models with similar predictive accuracy — into statistical modeling. Breiman describes two cultures of statistical practice: a data modeling culture focused on stochastic models of the data-generating process, and an algorithmic modeling culture focused on predictive accuracy with less regard for interpretability. This foundational paper is indispensable for understanding the intellectual roots of the debate about the accuracy-interpretability trade-off. Available through the Project Euclid open-access repository.


4. Doshi-Velez, Finale, and Been Kim. "Towards A Rigorous Science of Interpretable Machine Learning." arXiv preprint arXiv:1702.08608, 2017.

One of the most frequently cited formal treatments of what "interpretability" means in machine learning, and why it matters. The paper introduces a taxonomy of interpretability evaluation approaches (application-grounded, human-grounded, and functionally-grounded) and argues that the field lacks agreed standards for what counts as a satisfying explanation. This methodological clarity is essential for practitioners and policymakers trying to evaluate competing claims about model interpretability. Available on arXiv.


5. Lipton, Zachary C. "The Mythos of Model Interpretability." Queue 16, no. 3 (2018): 31–57.

A provocative challenge to vague uses of "interpretability" in machine learning discourse. Lipton distinguishes several distinct desiderata that are often conflated under the term "interpretability" — simulatability, decomposability, algorithmic transparency, post-hoc explanations — and argues that different stakeholders need different kinds of transparency for different purposes. Particularly useful for understanding the limits of post-hoc explanation methods and the ways in which the discourse around "explainable AI" can obscure more than it reveals. Available open-access through ACM Queue.


6. State v. Loomis, 881 N.W.2d 749 (Wis. 2016).

The Wisconsin Supreme Court's decision in the foundational COMPAS due process case. The full opinion is essential reading for understanding the legal framework that currently governs (and largely permits) the use of proprietary AI in criminal sentencing. The court's reasoning — particularly its handling of the individualized sentencing argument and the trade secrecy defense — and the concurrences all illuminate the tensions that the decision leaves unresolved. Available through Westlaw, Lexis, and free sources including the Wisconsin Courts website.


7. European Parliament and Council of the European Union. General Data Protection Regulation, Regulation (EU) 2016/679, Article 22 and Recitals 71–73.

Article 22 of GDPR is the foundational EU text on automated decision-making rights. Reading Article 22 alongside Recitals 71–73, which provide interpretive guidance, is essential for understanding what the "right to explanation" under GDPR actually requires — and where its limitations lie. The Article 29 Working Party's (now European Data Protection Board's) guidance on automated decision-making provides additional interpretive clarity. Available through the EUR-Lex database and GDPR.eu.


8. European Data Protection Board. Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679. Version 2.0, February 2018.

The authoritative guidance document on Article 22 of GDPR, providing detailed interpretation of what "solely automated decisions," "significant effects," and "meaningful information about the logic involved" mean in practice. The guidelines also address the Article 22 exceptions and the rights to human intervention and to contest decisions. An essential companion to the GDPR text for understanding the practical scope of the right to explanation in European law. Available through the EDPB website.


9. Board of Governors of the Federal Reserve System. Supervisory Guidance on Model Risk Management (SR 11-7), April 2011.

The foundational US financial regulatory document establishing expectations for model governance, validation, and oversight in banking institutions. Though predating modern machine learning, SR 11-7's framework for model documentation, independent validation, and ongoing monitoring has been interpreted to apply to AI and machine learning models by the OCC, FDIC, and Federal Reserve. Essential for understanding the regulatory context for AI transparency in financial services. Available through the Federal Reserve website.


Investigative Journalism and Research Reports

10. Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. "Machine Bias." ProPublica, May 23, 2016.

The landmark investigative report analyzing racial disparities in COMPAS scores in Broward County, Florida. This article is essential reading for anyone studying algorithmic bias in criminal justice — both for its substantive findings and as a case study in what external audit of AI systems through output data can and cannot establish. The report sparked an extensive academic debate about the appropriate definition of algorithmic fairness that is examined in Chapter 15. The underlying methodology is documented in a companion technical article. Available free at ProPublica.org.
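The audit logic at the heart of this kind of investigation — comparing error rates across groups using only observed scores and outcomes — can be illustrated with a small sketch. The records below are synthetic and purely illustrative; they are not COMPAS data, and the `false_positive_rate` helper is a hypothetical name, not part of ProPublica's published code.

```python
# Sketch of an output-data audit: comparing false positive rates
# across groups using only risk labels and observed outcomes.
# All records here are synthetic and illustrative.
records = [
    # (group, labeled_high_risk, reoffended)
    ("A", True,  False), ("A", True,  True),  ("A", False, False),
    ("A", True,  False), ("B", False, False), ("B", True,  True),
    ("B", False, True),  ("B", False, False),
]

def false_positive_rate(rows):
    # FPR: share of people who did NOT reoffend but were
    # nonetheless labeled high risk.
    negatives = [r for r in rows if not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives)

for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    print(group, round(false_positive_rate(rows), 2))
```

The point of the sketch is what it does not require: access to the model, its features, or its training data. This is exactly why output-data audits are possible from outside — and why, as Chapter 13 discusses, they can establish disparate error rates without explaining their cause.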


11. Kirchner, Lauren, et al. "Access Denied: Fairer Algorithms for Health Insurance Need More Transparency." The Markup, August 26, 2021. And: Martinez, Emmanuel, and Lauren Kirchner. "The Secret Bias Hidden in Mortgage-Approval Algorithms." The Markup, August 25, 2021.

Two landmark algorithmic accountability investigations by The Markup, demonstrating the power and limits of external audit from publicly available data. The mortgage lending investigation used HMDA data to document racial disparities in loan approval rates among large lenders, controlling for income and other financial factors. The health insurance investigation examined algorithmic underwriting practices. Both are models of responsible algorithmic accountability journalism and essential reading alongside Chapter 13's discussion of external audit methodology. Available free at TheMarkup.org.


12. Keane, Bernard, and Mary C. Martin. "Big Brother Incorporated: The Terrifying Hidden World of Australia's Facial Recognition AI." Crikey, March 2020.

Examines algorithmic opacity in the context of government use of facial recognition, providing a non-US case study in institutional opacity. Useful for the global variation theme and for understanding how opacity problems manifest across different democratic contexts and regulatory environments.


Books and Policy Reports

13. O'Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown, 2016.

A widely read and accessible survey of how algorithmic systems across many domains — college rankings, credit scoring, policing, hiring — have produced harmful outcomes, often in ways obscured by their mathematical veneer. O'Neil develops the concept of a "weapon of math destruction": a model that is important, opaque, and damaging. Particularly strong on the ways in which opacity and the authority of quantification combine to insulate harmful algorithms from accountability. Essential introductory reading for business audiences new to the ethics of AI.


14. Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin's Press, 2018.

Examines how algorithmic decision-making in public services — child welfare, public housing, healthcare access — disproportionately affects low-income and marginalized communities. Eubanks documents through detailed case studies how opacity in automated systems makes accountability nearly impossible for the people most affected — precisely those with the least power and fewest resources to challenge adverse decisions. Particularly relevant to Chapter 13's discussion of public benefits opacity and the accountability gap.


15. Selbst, Andrew D., and Solon Barocas. "The Intuitive Appeal of Explainable Machines." Fordham Law Review 87 (2018): 1085.

A careful legal and philosophical analysis of why explanations for AI decisions are demanded and whether they can deliver what they promise. The paper distinguishes several distinct purposes that AI explanations might serve — enabling meaningful challenge, satisfying curiosity, improving trust, enabling system improvement — and argues that no single explanation mechanism serves all these purposes equally. Essential for understanding the limits of explainability as a policy solution. Available through law review archives and SSRN.


16. Diakopoulos, Nicholas. Automating the News: How Algorithms Are Rewriting the Media. Harvard University Press, 2019.

Examines the opacity of algorithmic news curation and recommendation systems, with particular attention to accountability in journalism and media. Provides a thoughtful framework for evaluating when algorithmic editorial discretion requires accountability mechanisms. Relevant to the social media case study and to broader questions about algorithmic editorial power.


17. Raso, Jennifer. "AI in the Administration of Justice." AI and the Rule of Law Project, University of Toronto Faculty of Law, 2019.

A thorough review of AI use in legal and administrative decision-making in Canadian, US, and UK jurisdictions, with careful analysis of the rule-of-law implications of algorithmic opacity in government decision-making. Particularly strong on the FOIA and judicial review dimensions of government AI transparency. Available through the University of Toronto Faculty of Law website.


18. European Commission. Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act). April 21, 2021 (as subsequently amended and enacted 2024).

The foundational EU AI regulatory framework. The sections on high-risk AI — covering definitions, documentation requirements, transparency obligations, human oversight mandates, and conformity assessment procedures — are directly relevant to Chapter 13's discussion of sector-specific transparency obligations. The EU AI Act represents the most comprehensive attempt yet to impose governance requirements on AI systems in high-stakes domains. Available through EUR-Lex.


19. Wachter, Sandra, Brent Mittelstadt, and Luciano Floridi. "Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation." International Data Privacy Law 7, no. 2 (2017): 76–99.

A provocative academic argument that GDPR Article 22 does not actually establish a general right to explanation for automated decisions — that the "meaningful information about the logic" requirement is weaker than commonly claimed. This paper sparked significant debate about the scope and legal status of AI transparency rights in European law. Reading it alongside the EDPB guidance (source 8 above) gives a sense of the contested terrain around what transparency rights actually require. Available through Oxford Journals and SSRN.


20. Holstein, Kenneth, et al. "Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need?" In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI) 2019.

A qualitative study of what practitioners inside technology companies actually need to build fairer AI systems — and the obstacles they face. The findings reveal that opacity is often a problem not just for external stakeholders but for practitioners inside organizations: they often lack the tools and processes to evaluate fairness of systems they are responsible for. Provides a ground-level view of the organizational dimensions of the black box problem. Available through ACM Digital Library.


Digital and Interactive Resources

Shapley Value Interactive Explainer (Various Authors): Several high-quality interactive web tools help build intuition about how SHAP values work in practice. Search for "SHAP explainer" or "Shapley value calculator" to find accessible visual tools that complement Section 13.3's discussion of post-hoc explanation methods.
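For readers who prefer code to interactive widgets, the underlying idea can be computed exactly for a toy model: a feature's Shapley value is its average marginal contribution across all possible coalitions of the other features. The three-feature scoring function below is invented for illustration; real SHAP tooling approximates this computation because exact enumeration is exponential in the number of features.

```python
from itertools import combinations
from math import factorial

# Hypothetical linear score over three features (illustrative only).
def model(x):
    income, debt, history = x
    return 0.5 * income - 0.3 * debt + 0.2 * history

def shapley_values(model, x, baseline):
    """Exact Shapley values by enumerating every feature coalition.

    Features outside a coalition are set to their baseline value.
    Feasible only for a handful of features (2^n coalitions)."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if j in S or j == i else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

phi = shapley_values(model, x=[1, 1, 0], baseline=[0, 0, 0])
print(phi)
# Key property: the attributions sum to model(x) - model(baseline).
print(sum(phi), model([1, 1, 0]) - model([0, 0, 0]))
```

The "efficiency" property checked in the last line — attributions summing exactly to the gap between the prediction and the baseline — is what distinguishes Shapley-based methods from ad hoc feature-importance scores, and is the property the interactive explainers above let you see visually.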

ProPublica COMPAS Analysis GitHub Repository: ProPublica published its COMPAS analysis code and data on GitHub, enabling reproducibility and critique. Examining the code and data alongside the published article (source 10 above) is valuable for students interested in algorithmic audit methodology.

EU AI Act Tracker (Future of Life Institute): An ongoing tracker of the EU AI Act's requirements and implementation status, particularly useful for tracking how sector-specific transparency requirements are being operationalized. Available at the Future of Life Institute website.


Note: Page numbers and URLs omitted because digital availability and pagination vary by edition and access method. All cited works are verifiable through standard academic databases, legal research platforms, or the organizational websites indicated.