Chapter 20: Further Reading — Liability Frameworks for AI
Foundational Tort and Liability Theory
- Calabresi, Guido. (1970). The Costs of Accidents: A Legal and Economic Analysis. Yale University Press. The foundational law and economics analysis of accident law, developing the framework of deterrence and compensation as the twin goals of tort liability. Essential for understanding why liability rules matter for safety incentives and what "optimal" tort liability would look like for AI.
- Restatement (Third) of Torts: Products Liability. (1998). American Law Institute. The authoritative statement of U.S. products liability law. The Restatement's framework — manufacturing defect, design defect, failure to warn — is the analytical starting point for products liability analysis of AI systems. Available at ali.org.
- Restatement (Third) of Torts: Liability for Physical and Emotional Harm. (2010). American Law Institute. The authoritative statement of U.S. negligence law, defining duty, breach, causation, and damages. Essential background for applying negligence to AI cases.
AI Liability Scholarship
- Doshi-Velez, Finale, et al. (2017). "Accountability of AI Under the Law: The Role of Explanation." arXiv preprint arXiv:1711.01134. Examines how AI accountability relates to legal requirements for explanation in various legal contexts — employment, lending, criminal justice, and healthcare — identifying where existing law requires AI systems to explain their decisions.
- Calo, Ryan. (2015). "Robotics and the Lessons of Cyberlaw." California Law Review, 103, 513–563. An early and influential analysis of how the regulatory lessons of early internet law apply to robotics and autonomous systems. Introduces the "emergence" problem — autonomous systems behave in ways their designers did not specify — as a central liability challenge.
- Vladeck, David C. (2014). "Machines Without Principals: Liability Rules and Artificial Intelligence." Washington Law Review, 89, 117–150. An early legal analysis of AI liability, focusing on the "principal problem" — when AI systems act without human direction, who bears liability? Essential historical reading in AI liability scholarship.
- Selbst, Andrew D. (2020). "Negligence and AI's Human Users." Boston University Law Review, 100, 1315. A comprehensive analysis of negligence doctrine applied to AI cases, focusing particularly on operator and user liability for automation bias and for following AI recommendations without adequate review.
Civil Rights and Anti-Discrimination
- Barocas, Solon, and Andrew D. Selbst. (2016). "Big Data's Disparate Impact." California Law Review, 104, 671–732. The foundational law review article analyzing how data mining and machine learning systems produce discriminatory outcomes that fall within the scope of Title VII's disparate impact doctrine. Essential for understanding the civil rights framework for AI discrimination claims.
- EEOC. (2021). "Artificial Intelligence and Algorithmic Fairness Initiative." Equal Employment Opportunity Commission. The EEOC's formal policy initiative on AI and employment discrimination, including technical assistance documents on AI and the ADA (2022) and AI and employment discrimination (2023). Primary source for the EEOC's enforcement position. Available at eeoc.gov.
- Bent, Jason R. (2019). "Is Algorithmic Affirmative Action Legal?" Georgetown Law Journal, 108, 803–853. Examines the legal constraints on using AI to remedy historical discrimination — whether algorithmic design choices intended to produce more diverse outcomes are lawful under Title VII and equal protection doctrine.
Intellectual Property and AI
- Sobel, Benjamin L. N. (2017). "Artificial Intelligence's Fair Use Crisis." Columbia Journal of Law and the Arts, 41, 45–97. An early and comprehensive analysis of how fair use doctrine applies to AI training on copyrighted works. Develops the framework for the transformativeness analysis that is central to current litigation.
- U.S. Copyright Office. (2023). "Copyright and Artificial Intelligence." Copyright Office Policy Study. The U.S. Copyright Office's formal policy study on AI and copyright, addressing AI training, AI-generated outputs, and authorship. Primary source for understanding the Copyright Office's position. Available at copyright.gov.
- Samuelson, Pamela. (2023). "Generative AI Meets Copyright." Science, 381, 158–161. A concise and accessible overview of the AI copyright debates by one of the leading copyright scholars. Analyzes the AI training fair use question and its likely resolution.
EU Legal Framework
- European Commission. (2022). Proposal for a "Directive on Adapting Non-Contractual Civil Liability Rules to Artificial Intelligence" [AI Liability Directive]. The text of the proposed EU AI Liability Directive. Essential primary source for understanding the EU's approach to AI causation and burden-shifting. Available at ec.europa.eu.
- European Parliament and Council of the European Union. (2024). "Directive on Liability for Defective Products" (Recast). Official Journal of the European Union. The revised EU Product Liability Directive, extending product liability to software and digital services. Represents the most significant expansion of strict product liability in EU law, with direct implications for AI.
- Bertolini, Andrea. (2020). "AI and Liability." Study for the European Parliament's JURI Committee. PE 621.926. A comprehensive analysis of AI liability options prepared for the European Parliament, examining negligence, strict liability, and specific AI liability regimes. Informed the development of the EU AI Liability Directive.
Insurance and Risk Management
- NAIC. (2023). "Big Data and Artificial Intelligence (H) Working Group: AI in Insurance." National Association of Insurance Commissioners. The NAIC's framework for AI use in insurance, addressing how AI-based underwriting, claims, and customer service tools should be governed. The most comprehensive regulatory guidance on AI in insurance.
- Lloyd's of London. (2023). "Guidance on Autonomous AI Exclusions." Market Bulletin. Lloyd's of London's guidance to syndicates on how to address autonomous AI risk in insurance policies, including recommended exclusion language. Illustrates how the insurance industry is responding to AI liability uncertainty.
Criminal Justice and Constitutional Law
- Eaglin, Jessica M. (2017). "Constructing Recidivism Risk." Emory Law Journal, 67, 59–122. A comprehensive legal analysis of recidivism risk assessment tools in criminal justice, examining constitutional due process constraints, equal protection concerns, and the evidentiary foundations for their use. Essential background for the Loomis case and its successors.
- Stevenson, Megan T., and Christopher Slobogin. (2018). "Algorithmic Risk Assessments and the Double-Edged Sword of Youth." Washington University Law Review, 96, 681. Examines the specific constitutional and policy challenges of using algorithmic risk assessment for juvenile offenders, developing a framework for when such tools are constitutionally permissible and when they are not.