Chapter 20: Key Takeaways — Liability Frameworks for AI

Core Concepts

  1. AI Liability Law Is Being Built in Real Time. There is no comprehensive AI liability framework in the United States. Existing tort, civil rights, intellectual property, and consumer protection law must be stretched and analogized to cover AI harms, and courts are reaching inconsistent conclusions. The litigation being filed today will shape AI liability doctrine for decades.

  2. Negligence Is the Most Broadly Applicable Theory, but Difficult to Prove. Negligence requires duty, breach, causation, and damages. Each element presents AI-specific challenges: duty is unsettled for new AI actors (foundation model providers); breach requires identifying the applicable standard of care; causation is difficult to establish given AI's opacity and distributed causation; and damages are hard to quantify for many AI harms.

  3. Products Liability Requires Threshold Classification of AI as a "Product." The classification of AI as a product (potentially subject to strict liability) or a service (governed by negligence) is unsettled and depends on how the AI is delivered and used. The "software exception" that U.S. courts have generally applied to software — excluding it from strict products liability — is being challenged in the AI context, and the EU has explicitly rejected it.

  4. The Disparate Impact Doctrine Is the Most Established AI Liability Theory. Anti-discrimination law's prohibition on practices with discriminatory effects — regardless of intent — applies directly to AI systems that produce race- or gender-disparate outcomes in employment, credit, and housing. This doctrine provides the most established legal pathway for AI discrimination claims, and it reaches deployers, not just developers.

  5. AI Copyright Litigation Will Determine the Foundation of AI Development. The pending cases — Andersen v. Stability AI, Getty v. Stability AI, NYT v. OpenAI — will determine whether AI training on copyrighted works constitutes infringement or qualifies as fair use. The outcome will shape the economics of AI development, the rights of creative professionals, and the legal infrastructure of the entire AI training ecosystem.

  6. The EU AI Liability Directive Addresses the Evidentiary Barriers That Make AI Claims Difficult. The proposed Directive's presumption of causality — shifting the burden of proof to defendants when they have violated relevant duties and causation is difficult to establish — would make AI liability claims significantly more accessible in the EU. The United States has no comparable mechanism.

  7. Strict Liability Would Internalize Risk but Faces Innovation Objections. The argument for strict liability is compelling: it removes the evidentiary barriers that prevent victims from obtaining redress and creates stronger safety incentives than negligence. The counterargument — innovation chilling — is often overstated. Other high-tech industries (medical devices, aviation) operate under demanding liability regimes and continue to innovate.

  8. Insurance Is an Important but Underdeveloped Liability Tool. AI liability insurance can transfer risk, compensate victims, and create financial incentives for safety through premium pricing. The market is developing, but insurers are uncertain about AI liability exposure, some major insurers have begun excluding autonomous AI risks, and mandatory insurance requirements have not yet been enacted.

  9. The Loomis Precedent Illustrates the Limits of Constitutional Accountability for AI in Criminal Justice. Courts have generally deferred to AI risk assessment tools in criminal sentencing, accepting procedural adequacy arguments without engaging with the accuracy and bias debates. Constitutional reform of AI in criminal justice will require either more searching judicial review or legislative mandates for transparency and validation.

  10. The Jurisdiction Problem Creates Compliance Complexity and Regulatory Arbitrage Risk. AI systems operate globally; liability regimes do not. The EU's more demanding framework — EU AI Act, AI Liability Directive, revised Product Liability Directive — will apply to any AI system serving EU users, creating pressure for companies to meet EU standards globally or to structure operations to minimize EU exposure.
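The disparate impact doctrine in takeaway 4 has a standard quantitative screen: the EEOC's "four-fifths rule," under which a protected group's selection rate below 80% of the highest group's rate is treated as evidence of adverse impact. A minimal sketch, using hypothetical hiring numbers (not drawn from this chapter), of how that ratio is computed for an AI screening tool:

```python
# Hedged sketch of the EEOC four-fifths rule of thumb for disparate impact.
# All figures below are hypothetical illustration, not data from the chapter.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

# Hypothetical outcomes from an AI resume-screening system.
rate_group_a = selection_rate(selected=60, applicants=100)  # 0.60
rate_group_b = selection_rate(selected=30, applicants=100)  # 0.30

# Adverse impact ratio: protected group's rate over the highest group's rate.
impact_ratio = rate_group_b / rate_group_a                  # 0.50

# A ratio below 0.8 flags possible disparate impact -- a starting point
# for a legal claim, not a finding of liability by itself.
flagged = impact_ratio < 0.8
print(f"impact ratio = {impact_ratio:.2f}, flagged = {flagged}")
```

The ratio is only the threshold screen; an actual claim then turns on whether the practice is justified by business necessity and whether less discriminatory alternatives exist.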

Key Terms to Know

  • Liability: legal exposure to damages or penalties for causing harm
  • Negligence: failure to exercise reasonable care, consisting of duty, breach, causation, and damages
  • Strict liability: liability for harm regardless of fault
  • Products liability: manufacturer/seller liability for defective products
  • Manufacturing defect: deviation of a specific unit from its intended design
  • Design defect: the product's design itself is unreasonably dangerous
  • Warning defect (failure to warn): inadequate disclosure of known product risks
  • Disparate impact: facially neutral practices that produce discriminatory effects on protected groups
  • Fair use: copyright doctrine permitting unlicensed use of copyrighted works for purposes such as criticism, commentary, and research, assessed under a four-factor balancing test
  • Vicarious liability: liability for the acts of another (employee, agent)
  • EU AI Liability Directive: proposed EU framework creating presumption of causality in AI harm cases
  • Enterprise liability: liability placed on the party that profits from the activity causing harm
  • Compensation fund: a pool of resources providing no-fault compensation for victims of a specific type of harm

Connections to Other Chapters

  • Chapter 6: EU AI Act — regulatory framework with implicit liability implications
  • Chapter 9: COMPAS, fairness metrics — technical foundation for civil rights claims
  • Chapter 18: Accountability — who bears responsibility upstream of formal liability
  • Chapter 19: Auditing — the relationship between audit findings and legal liability
  • Chapter 30: COMPAS in criminal justice — detailed treatment of Loomis and its successors
  • Chapter 33: EU AI Act detailed — comprehensive analysis of EU regulatory framework