Chapter 18: Further Reading — Who Is Responsible When AI Fails?

Foundational Academic Works

  1. Thompson, Dennis F. (1980). "Moral Responsibility of Public Officials: The Problem of Many Hands." American Political Science Review, 74(4), 905–916. The foundational article articulating the many hands problem in political and bureaucratic contexts. Required reading for understanding the structural roots of AI accountability diffusion. Thompson's framework anticipates the AI accountability problem with remarkable precision.

  2. Nissenbaum, Helen. (1994). "Computing and Accountability." Communications of the ACM, 37(1), 72–80. An early and still-essential analysis of how computational systems create accountability gaps, identifying four barriers to accountability in computing: the problem of many hands, bugs as excuses, ownership without liability, and the computer as scapegoat. Directly applicable to AI accountability decades later.

  3. Cummings, Mary L. (2004). "Automation Bias in Intelligent Time Critical Decision Support Systems." AIAA 1st Intelligent Systems Technical Conference. A rigorous treatment of automation bias — the cognitive tendency to over-rely on automated systems — with implications for understanding why "human in the loop" does not automatically mean "meaningful human oversight."

  4. Collingridge, David. (1980). The Social Control of Technology. St. Martin's Press. The original articulation of the Collingridge dilemma: a technology is easiest to control early, before its consequences can be foreseen, and by the time those consequences are well understood, the technology is too entrenched to change easily. Essential background for understanding the structural challenges of proactive AI regulation.

Accountability in AI Practice

  1. Doshi-Velez, Finale, et al. (2017). "Accountability of AI Under the Law: The Role of Explanation." arXiv preprint arXiv:1711.01134. Examines the legal and ethical dimensions of AI explainability, focusing on how explanation requirements relate to accountability obligations. Connects technical explainability research to legal accountability frameworks.

  2. Raji, Inioluwa Deborah, and Joy Buolamwini. (2019). "Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products." Proceedings of AAAI/ACM Conference on AI, Ethics, and Society. Documents the real-world effects of public disclosure of AI system performance disparities — an empirical examination of public accountability mechanisms in AI.

  3. Selbst, Andrew D. (2021). "An Institutional View of Algorithmic Impact Assessments." Harvard Journal of Law and Technology, 35(1). A critical analysis of algorithmic impact assessments as accountability mechanisms, examining their design, limitations, and relationship to existing regulatory frameworks. Essential for evaluating impact assessment requirements.

  4. Cobbe, Jennifer, et al. (2021). "Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems." Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT). Proposes a framework for accountable automated decision-making centered on meaningful reviewability — a contribution to the "meaningful human review" debate central to this chapter.

Legal and Liability Frameworks

  1. Citron, Danielle Keats, and Frank Pasquale. (2014). "The Scored Society: Due Process for Automated Predictions." Washington Law Review, 89, 1–33. An early and influential legal analysis of automated scoring systems and due process rights. Sets the foundation for understanding the legal accountability obligations that arise when AI systems make consequential predictions about individuals.

  2. Landes, William M., and Richard A. Posner. (1987). The Economic Structure of Tort Law. Harvard University Press. The foundational economic analysis of tort law and liability. Chapter 5's treatment of negligence standards and products liability remains essential background for understanding how existing liability frameworks could be applied to AI.

  3. Pagallo, Ugo. (2013). The Laws of Robots: Crimes, Contracts, and Torts. Springer. An early and comprehensive analysis of how existing legal frameworks — contracts, torts, and criminal law — apply to robotic and autonomous systems. Though focused on robotics, the analysis extends directly to AI accountability.

  4. Article 29 Data Protection Working Party. (2018). "Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679." The official EU guidance on GDPR Article 22, which restricts automated decision-making that produces legal or similarly significant effects on individuals. Important for understanding the European legal framework for AI accountability.

Case Studies and Investigative Journalism

  1. National Transportation Safety Board. (2019). "Collision Between Vehicle Controlled by Developmental Automated Driving System and Pedestrian." NTSB/HAR-19/03. The full NTSB investigation report on the Uber/Herzberg fatality. Essential primary source for understanding the specific failures documented and the NTSB's accountability analysis.

  2. Wells, Georgia, Jeff Horwitz, and Deepa Seetharaman. (2021). "Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show." Wall Street Journal. The original Wall Street Journal reporting on Frances Haugen's disclosures, focusing on Instagram's effects on teenage mental health. The article that triggered congressional hearings and regulatory attention.

  3. Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. (2016). "Machine Bias." ProPublica. The investigative report documenting racial disparities in COMPAS recidivism scores. A landmark in AI accountability journalism and a model of external AI auditing from output data. Discussed extensively in Chapters 9 and 19.

Professional and Ethical Standards

  1. ACM Code of Ethics and Professional Conduct. (2018). Association for Computing Machinery. The professional ethics code for computing professionals, including AI developers. Essential reference for understanding the professional obligations that ground developer responsibility claims. Available at ethics.acm.org.

  2. IEEE Code of Ethics. (2020). Institute of Electrical and Electronics Engineers. The professional ethics code for electrical and electronics engineers, including those developing AI systems. Affirms the commitment to "hold paramount the safety, health, and welfare of the public."

  3. Reisman, Dillon, et al. (2018). "Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability." AI Now Institute. The most influential practical framework for algorithmic impact assessments, developed specifically for public agency contexts. Applicable to private-sector AI deployment with adaptation.

Comparative and Global Perspectives

  1. Calo, Ryan. (2017). "Artificial Intelligence Policy: A Primer and Roadmap." UC Davis Law Review, 51, 399–435. A comprehensive overview of AI policy issues organized around the distinctive features of AI as a regulatory subject. Essential background for understanding the regulatory accountability landscape.

  2. Diakopoulos, Nicholas. (2016). "Accountability in Algorithmic Decision Making." Communications of the ACM, 59(2), 56–62. Develops a framework for algorithmic accountability centered on auditability, specifically examining investigative journalism as an accountability mechanism for AI systems.