
Chapter 20: Liability Frameworks for AI


Opening Hook

In January 2023, a group of artists — Sarah Andersen, Kelly McKernan, and Karla Ortiz — filed a class action lawsuit in the Northern District of California against Stability AI, Midjourney, and DeviantArt. The complaint alleged that these companies had scraped billions of copyrighted images from the internet without consent, used those images to train AI image generation systems, and were now operating commercial services that could generate images in the style of any artist whose work appeared in the training data — effectively competing with the very artists whose work had made the AI possible.

The same month, Getty Images filed a separate lawsuit in the United Kingdom against Stability AI, alleging that Stability AI had unlawfully copied and processed millions of images from Getty's database to train Stable Diffusion. In December 2023, the New York Times filed what may become the most consequential AI copyright case in U.S. history: a lawsuit against OpenAI and Microsoft alleging that ChatGPT and GPT-4 were trained on millions of Times articles without permission, and that the systems could be prompted to reproduce Times content almost verbatim — directly substituting for the newspaper's paid subscription service.

These cases raised questions that copyright law had never been designed to answer. Does training an AI on copyrighted works constitute infringement? Is the output of an AI system a "copy" in the legal sense? If a user asks an AI to "write in the style of" a specific author, is that infringement? Who owns AI-generated output? What remedies are available when AI-enabled competition undercuts creative professionals?

But intellectual property was only one frontier. Elsewhere in the legal landscape, plaintiffs were arguing that AI diagnostic errors constituted medical malpractice, that AI-generated defamatory statements created new theories of defamation liability, that AI-driven credit denials violated the Equal Credit Opportunity Act, and that AI-powered content moderation failures constituted product liability. AI liability law was being built in real time, through litigation and legislation, and the outcomes of the cases being filed in 2023 and 2024 will shape how AI is developed, deployed, and governed for decades.

This chapter examines the legal landscape of AI liability: what legal theories apply, where they succeed and fall short, how different jurisdictions are approaching the challenge, and what the future of AI liability law is likely to look like.


Learning Objectives

By the end of this chapter, students will be able to:

  1. Identify the primary legal theories applicable to AI-caused harm, including negligence, products liability, civil rights law, intellectual property law, and consumer protection law.
  2. Apply negligence analysis — duty, breach, causation, damages — to hypothetical AI liability scenarios.
  3. Distinguish between manufacturing defect, design defect, and warning defect claims in the products liability context, and assess their applicability to AI systems.
  4. Explain the disparate impact doctrine and its application to AI systems that produce discriminatory outcomes.
  5. Summarize the current state of AI copyright litigation, including key arguments for and against fair use in AI training.
  6. Describe the EU AI Act's liability implications and the EU AI Liability Directive's key provisions.
  7. Evaluate the arguments for and against strict liability for AI-caused harm.
  8. Assess the role of insurance in AI risk management and liability internalization.

Section 20.1: The AI Liability Landscape — A Mess

Why AI Liability Is Legally Unsettled

If you were to ask a lawyer in 2025 which legal theories apply to AI-caused harm, the honest answer would be: it depends, and we're not entirely sure. AI liability law is not settled law. It is a body of doctrine in formation — being built through litigation, regulatory guidance, and legislation, in real time, by courts and regulators who are grappling with questions that have no precedent.

The fundamental reason is that existing law was built around conceptual categories that AI challenges. Tort law developed around individual human actors making specific decisions. Products liability developed around physical manufactured goods. Intellectual property law developed around human creative expression. Civil rights law developed around intentional discrimination and, later, facially neutral policies — but not around autonomous systems producing emergent discrimination as a statistical byproduct of optimization. Each body of law must now be stretched, analogized, and adapted to cover AI harms — and courts are reaching inconsistent conclusions about how far each one stretches.

This unsettled state has practical consequences. It means that the risk of AI-caused harm is insufficiently internalized by AI developers and deployers: if liability is uncertain, the financial consequences of causing harm are uncertain, and the incentive to prevent harm is correspondingly weakened. It means that victims of AI harm lack reliable pathways to redress: the barriers to bringing AI liability claims are substantial, and many legitimate claims are never filed. And it means that AI development is occurring in a legal vacuum — without the liability rules that, in other domains, create incentives for safety investment.

The Patchwork

The absence of a unified AI liability framework means that different legal theories apply in different domains, producing a patchwork that is difficult for companies to comply with and difficult for plaintiffs to navigate:

In employment, anti-discrimination law (Title VII, the ADEA, the ADA) prohibits AI hiring tools that produce disparate impact, and the EEOC and plaintiff employment lawyers pursue claims under existing doctrine.

In credit and housing, fair lending law (ECOA, FHA) prohibits algorithmic lending tools that produce discriminatory outcomes, and the CFPB, HUD, and private plaintiffs pursue claims under those statutes.

In healthcare, medical malpractice law governs AI-assisted clinical decisions made by licensed professionals, the FDA regulates AI-based medical devices, and product liability law may apply to AI diagnostic tools.

In criminal justice, constitutional due process constraints apply to AI risk assessment tools, and civil rights statutes (42 U.S.C. § 1983) allow claims against government actors.

In content creation, copyright law governs AI training and AI-generated outputs, and the relevant case law is being developed through current litigation.

In consumer products, general negligence and products liability law applies, with the specific application to AI systems being developed through litigation.

This patchwork is not inherently wrong — different contexts may warrant different legal approaches — but it creates significant complexity and leaves many AI harms without clear legal remedies.

The Jurisdiction Problem

AI systems operate globally; liability regimes do not. An AI model trained in the United States on data collected from around the world and deployed through a cloud service to users in 50 countries creates liability exposure under potentially dozens of legal systems. The EU AI Act, the UK's developing AI regulatory framework, China's AI regulations, and U.S. state and federal law all potentially apply to the same AI system, with potentially different and conflicting requirements.

The jurisdiction problem has two dimensions. First, it creates compliance complexity: companies must simultaneously comply with multiple regulatory regimes with different and sometimes incompatible requirements. Second, it creates opportunities for regulatory arbitrage: companies can structure their operations to minimize exposure under the most demanding liability regimes, potentially concentrating AI deployment in jurisdictions with weaker protections.

Vocabulary Builder

  • Liability: Legal exposure to damages, penalties, or other legal consequences for causing harm.
  • Tort: A civil wrong that gives rise to a claim for damages; the body of law governing such claims.
  • Negligence: A tort based on failure to exercise reasonable care, consisting of duty, breach, causation, and damages.
  • Strict liability: Liability for harm regardless of whether the defendant exercised reasonable care.
  • Products liability: The body of law governing manufacturer and seller liability for defective products.
  • Vicarious liability: Liability for the acts of another party (such as an employee or agent).
  • Contributory negligence: The plaintiff's own negligent conduct that contributed to their harm; reduces or eliminates recovery in some jurisdictions.
  • Disparate impact: The doctrine that facially neutral practices that produce discriminatory effects on protected groups can violate anti-discrimination law.

Section 20.2: Negligence Theory Applied to AI

Standard Negligence Elements

Negligence is the most generally applicable legal theory for AI-caused harm. A successful negligence claim requires proving four elements: duty, breach, causation, and damages. Applying each element to AI-caused harm raises novel questions that courts are still working through.

Duty: When AI Actors Owe a Duty of Care

The duty element asks whether the defendant owed a legal duty of care to the plaintiff. In most negligence contexts, a duty of care arises when the defendant's actions could foreseeably harm the plaintiff. For AI systems, this question is more complex:

Products: Manufacturers owe a duty of care to foreseeable users of their products. If an AI system is classified as a product — a question addressed in Section 20.3 — its developer owes a duty to users who suffer harm from product defects. This is relatively settled law.

Services: Service providers owe a duty of reasonable care to the recipients of their services. If an AI system is classified as a service — which many cloud-based AI systems clearly are — the service provider owes a duty of care to users. The exact scope of this duty in the AI context is being developed through litigation.

Foundation model providers: The novel question is whether AI foundation model providers — OpenAI, Anthropic, Google — owe a duty of care to individuals harmed by downstream applications built on their models. There is no settled answer. Arguments for a duty: foundation model providers know their models will be used in high-stakes applications, can foresee that harmful capabilities of their models will cause harm to third parties, and have the technical ability to reduce those harms. Arguments against: the relationship between the foundation model provider and the ultimate downstream victim is too attenuated to support a duty, and imposing such a duty would create limitless liability for providers of general-purpose AI.

Breach: The Standard of Care for AI

The breach element asks whether the defendant failed to exercise the standard of care that a reasonable person would exercise under the circumstances. For AI, the relevant question is: what does reasonable care look like in AI development and deployment?

The standard of care is informed by industry standards, professional codes, and regulatory guidance. The NIST AI Risk Management Framework describes practices that constitute responsible AI development and deployment. The EEOC's Uniform Guidelines on Employee Selection Procedures describe validation practices for employment selection tools. The FDA's guidance on AI/ML-based software as a medical device describes practices for AI medical devices. Adherence to these standards is evidence of reasonable care; departure from them is evidence of breach.

The critical implication: as industry standards for AI safety, fairness, and documentation become more developed and more widely understood, the minimum standard of care rises. Practices that might have been reasonable in 2018 — before fairness testing tools were widely available, before audit requirements existed — may not be reasonable in 2025. This creates ongoing compliance obligations for organizations that continue to deploy AI systems built under older, less rigorous standards.

Causation: Proving That the AI Caused the Harm

The causation element asks whether the defendant's breach caused the plaintiff's harm. For AI, this is often the most challenging element to prove.

But-for causation — the standard form — asks whether the harm would have occurred absent the defendant's breach. In AI cases, establishing but-for causation typically requires showing that if the AI system had not been defective (biased, inaccurate, or poorly designed), the plaintiff would not have suffered the specific harm. This is often difficult to establish: AI decisions are made probabilistically, and the counterfactual (what the decision would have been with a better AI system) is genuinely uncertain.

The substantial factor test is an alternative used when multiple causes contribute to harm, and any one of them would have been sufficient. If an AI credit-scoring model, plus a human loan officer's independent judgment, plus market conditions all contributed to a credit denial, the substantial factor test allows a plaintiff to establish causation without proving that the AI alone was sufficient.

Statistical vs. individual causation is perhaps the most fundamental causation challenge in AI liability. Many AI harms are statistical: the system denies Black applicants at a rate 30% higher than the rate for comparable white applicants. This is a harm at the population level. But the individual plaintiff needs to establish that the AI's bias was a cause of their specific denial — which requires moving from a population-level statistical finding to an individual-level causal conclusion, a step that is logically complex and legally contested.

Damages: What Harms Are Compensable?

The damages element asks what compensation the plaintiff can recover for the harm they suffered. AI-caused harms span a wide range:

Economic damages are the most concrete: a job applicant denied employment by a biased AI hiring tool suffers lost wages and lost employment opportunities. A loan applicant denied credit by a biased AI model suffers the financial costs of being unable to access capital. These damages, while real, may be difficult to quantify — particularly when the plaintiff cannot prove that they would have received the job or loan absent the AI's bias.

Non-economic damages for AI harm include dignitary harm (being subjected to discriminatory treatment by an algorithm), emotional distress (the psychological impact of an unjust denial), and reputational harm (damage to an individual's reputation from AI-generated defamatory content). These damages are recognized in many legal contexts but are harder to quantify and subject to more skepticism from courts.

The measurement problem is acute in AI discrimination cases: how do you calculate the value of the job you didn't get because a biased AI screened out your resume, when you can't prove you would have been hired? This problem is not unique to AI cases — it arises in all employment discrimination cases — but it is compounded in AI cases by the opacity of AI decisions and the difficulty of establishing the counterfactual.


Section 20.3: Products Liability and AI

Is AI a "Product"?

The applicability of products liability law to AI systems depends on a threshold question: is AI a "product"? Products liability is designed for manufactured goods — physical objects that are designed, manufactured, distributed, and sold. Software has historically existed in a legal gray zone: is it a product (when embedded in physical media or incorporated into a physical product) or a service (when delivered via cloud or licensed)?

Courts have reached inconsistent conclusions on this question, generally driven by the specific facts of each case. AI systems are delivered in multiple ways: embedded in physical devices (AI-enabled medical devices, autonomous vehicles); delivered as standalone software (applications installed on a user's device); delivered as cloud services (SaaS AI tools accessed through a browser); and delivered as components of other products or services (AI APIs incorporated into third-party applications). Each delivery modality creates different arguments about product vs. service classification.

The practical significance: if AI is a product, strict products liability may apply — the manufacturer is liable for defects that cause harm, without requiring proof of negligence. If AI is a service, liability is generally based on negligence, requiring proof of failure to exercise reasonable care.

Manufacturing Defect

A manufacturing defect occurs when a specific unit of a product deviates from its intended design. In traditional manufacturing, this means the specific car that was assembled with a faulty brake component, not the design of the braking system in general. In AI, a manufacturing defect analog might be a model training process that produced a corrupted model due to a software error — where the specific deployed model behaves differently from what the developer intended and tested.

Manufacturing defect is the least common theory in AI liability, because AI harms typically arise from design choices (the algorithm was designed in a way that produces bias) rather than deviation from design. But it remains available in cases where there is evidence that the specific deployed system had an error not present in the tested version.

Design Defect

Design defect — the claim that the product's design itself is unreasonably dangerous — is the most important products liability theory for AI. A design defect claim does not require showing that a specific unit deviated from the intended design; it argues that the intended design itself is the problem.

Two tests are applied to design defect claims:

The consumer expectation test asks whether the product failed to perform as safely as an ordinary consumer would expect when using it in a reasonably foreseeable way. Applied to AI: would an ordinary user expect a medical AI system to be more accurate for one demographic group than another? Would an ordinary user expect an AI hiring tool to penalize applicants who graduated from women's colleges?

The risk-utility balancing test asks whether the risks of the design outweigh its utility, considering factors like the probability and magnitude of harm, the burden of an alternative safer design, and the utility of the product's features. Applied to AI: does the risk of racial bias in a recidivism prediction tool outweigh the tool's utility in reducing future crime, given that alternative designs (including human prediction) are available and that a less biased design could be achieved at reasonable cost?

Warning Defect (Failure to Warn)

A failure to warn claim alleges that the product's manufacturer failed to provide adequate warnings about the product's risks, such that a user who was adequately warned would have used the product differently or not at all. Applied to AI: did the AI developer adequately disclose the system's known limitations — its accuracy disparities across demographic groups, its failure modes in edge cases, its training data limitations — to deployers and users?

Failure to warn is increasingly important in AI liability because many AI harms result not from the AI system working incorrectly but from deployers and users using the system in contexts for which it was not validated, or in ways that the developer documented (but inadequately highlighted) as outside the system's appropriate scope. Model cards and system documentation serve the same function as product warning labels — they define what the system is for and what its risks are. Inadequate documentation, or documentation that is technically accurate but buried in terms of service, may support a failure to warn claim.
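The "warning label" function of documentation can be made concrete with a small sketch. The schema, field names, and values below are hypothetical — they do not follow any standard model-card format — but they illustrate how machine-readable scope and limitation statements let a deployer check an intended use case against what the developer actually validated and disclosed:

```python
# Hypothetical, minimal "model card" acting as a product warning label.
# The schema and all values are illustrative, not a real documentation standard.
MODEL_CARD = {
    "intended_use": ["resume screening for software engineering roles"],
    "out_of_scope": ["medical", "credit", "criminal justice"],
    "known_limitations": [
        "accuracy measured only on U.S. English resumes",
        "validation cohort underrepresents applicants over 55",
    ],
    "per_group_accuracy": {"group_a": 0.91, "group_b": 0.84},
}

def check_deployment(card, use_case):
    """Flag deployments that fall outside the documented, validated scope."""
    if use_case in card["out_of_scope"]:
        raise ValueError(f"use case '{use_case}' is documented as out of scope")
    if use_case not in card["intended_use"]:
        # Not expressly excluded, but never validated: surface the warnings.
        print(f"warning: '{use_case}' was not validated; review limitations:")
        for lim in card["known_limitations"]:
            print(f"  - {lim}")

check_deployment(MODEL_CARD, "resume screening for software engineering roles")
```

A deployer who runs this check and proceeds anyway is in a very different legal posture from one who was never told the limitations — which is precisely the distinction a failure-to-warn claim turns on.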

The Software Exception

Courts have historically been reluctant to apply strict products liability to software, reasoning that software is fundamentally different from physical manufactured goods: it is information rather than a physical object, errors are inevitable in complex software in ways that manufacturing defects in physical goods are not, and applying strict liability to software would impose enormous and unpredictable costs on the software industry. This "software exception" has been a significant obstacle to applying strict products liability to AI systems.

Whether the software exception should survive for AI is contested. AI systems embedded in physical products (autonomous vehicles, medical devices) are more clearly "products" than cloud-based AI services, and the software exception may not apply when the AI is a central functional component of a physical product. The EU's approach — which extends product liability to software in its revised Product Liability Directive — represents a deliberate rejection of the software exception, and it may influence U.S. courts and legislators as AI harm cases proliferate.


Section 20.4: Civil Rights and Anti-Discrimination Liability

When Algorithmic Bias Violates Anti-Discrimination Law

The civil rights framework provides the most established legal theory for AI harm in employment, credit, and housing. Anti-discrimination law prohibits practices that produce discriminatory effects on protected groups, even when those practices appear facially neutral — this is the disparate impact doctrine. AI systems that produce racially or gender-disparate outcomes in hiring, lending, or housing decisions can violate federal anti-discrimination law under this doctrine, regardless of whether the AI was designed with discriminatory intent.

The importance of the disparate impact doctrine for AI cannot be overstated. Most AI bias is not the result of intentional discrimination — no one wrote code that says "discriminate against Black applicants." It is the result of training on historically biased data, optimizing for objectives that don't account for fairness, and deploying systems without adequate fairness testing. Intentional discrimination law (disparate treatment) would not reach these cases. Disparate impact doctrine does reach them, provided that a plaintiff can establish statistically significant discriminatory effects.

Who Can Sue

Individual plaintiffs can bring disparate impact claims in employment (Title VII), credit (ECOA), and housing (FHA). The practical challenge: individual plaintiffs often lack the resources to conduct the statistical analysis necessary to prove disparate impact, the data to establish that an AI system (rather than some other factor) caused the disparity, and the legal resources to litigate against well-funded corporations. Class actions — discussed below — address some but not all of these barriers.

Government enforcement agencies have substantial authority in AI discrimination cases. The EEOC can investigate employers, issue guidance, and bring enforcement actions under Title VII and the ADA. The CFPB has authority over financial institutions under the Equal Credit Opportunity Act. HUD has authority over housing discrimination under the Fair Housing Act. These agencies have resources and investigative authority that individual plaintiffs lack, and enforcement actions by these agencies can produce industry-wide changes in AI deployment practices.

Proving Disparate Impact

The legal standard for proving disparate impact requires: (1) identifying a specific employment or lending practice (the AI tool, not just general company policy); (2) demonstrating through statistical evidence that the practice produces a statistically significant disparate impact on a protected group; and (3) showing that the disparity is not justified by business necessity (that the practice is a valid predictor of the relevant outcome, and that no less discriminatory alternative is equally effective).

For AI systems, step (1) requires evidence that the AI tool specifically — not human reviewers who override it — caused the disparity. Step (2) requires statistical analysis of outcomes across demographic groups, which requires data that may be difficult to obtain. Step (3) shifts the burden to the defendant to justify the tool's predictive validity and to demonstrate that less discriminatory alternatives were not available — a burden that defendants can often meet, at least partially, by producing validation studies.
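The statistical showing in step (2) can be sketched with a worked example. The applicant counts below are invented for illustration; the helper computes the selection-rate ratio that the EEOC's "four-fifths rule" of thumb compares against 0.8, plus a standard two-proportion z-test for statistical significance:

```python
import math

def disparate_impact_stats(sel_a, n_a, sel_b, n_b):
    """Compare selection rates for a protected group (a) against a
    reference group (b). Returns the adverse impact ratio and a
    two-proportion z statistic. All counts here are hypothetical."""
    p_a, p_b = sel_a / n_a, sel_b / n_b
    ratio = p_a / p_b                       # four-fifths rule flags ratio < 0.8
    p_pool = (sel_a + sel_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se                    # |z| > 1.96 is significant at the 5% level
    return ratio, z

# Hypothetical screening outcomes: 120 of 400 Black applicants advanced
# versus 200 of 400 white applicants.
ratio, z = disparate_impact_stats(120, 400, 200, 400)
print(f"impact ratio = {ratio:.2f}, z = {z:.2f}")
```

In this invented scenario the ratio (0.60) falls well below 0.8 and the disparity is highly significant — the kind of population-level evidence that satisfies step (2) but still leaves the individual-causation problem described in Section 20.2 unresolved.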

Class Action Dynamics

Individual AI discrimination claims often aggregate into class actions. A single AI hiring tool might screen out thousands of qualified Black or female applicants; a single AI lending tool might deny credit to thousands of similarly creditworthy minority borrowers. The systemic nature of algorithmic discrimination — the same algorithm affects everyone who passes through it — creates the commonality of claims that makes class certification appropriate.

Class actions address the resource imbalance between individual plaintiffs and well-funded corporations: a law firm that aggregates thousands of claims against an AI hiring tool can afford the statistical analysis, the technical expert witnesses, and the litigation costs that no individual plaintiff could bear. They also produce structural remedies (changes to the AI system, monitoring requirements, injunctions) that individual cases cannot.


Section 20.5: Intellectual Property Liability

The most significant AI liability dispute of the current decade is the question of whether training AI systems on copyrighted works constitutes copyright infringement. The legal theory is direct: copyright holders have the exclusive right to reproduce their works (and to authorize reproductions). Training an AI on copyrighted works involves making copies of those works — copying images into a training dataset, copying text into a fine-tuning corpus. If this copying does not qualify for the fair use exception, it is infringement.

The magnitude of the dispute is enormous. Nearly every major AI system has been trained on internet-scale datasets that include copyrighted text, images, code, and other creative works. If AI training infringes copyright, the leading AI companies are exposed to statutory damages — potentially hundreds of billions of dollars — and could face injunctions requiring them to retrain their models on non-infringing data.

The Stable Diffusion and Midjourney Litigation

The class action filed by Andersen, McKernan, and Ortiz in January 2023 against Stability AI, Midjourney, and DeviantArt alleged copyright infringement arising from the use of their works in training Stable Diffusion. The complaint alleged not just infringement through training but also infringement through output: Stable Diffusion could generate images in the style of specific artists (including the plaintiffs), in effect commercially exploiting their creative style.

The case proceeded through a series of motions to dismiss. The district court dismissed several claims but allowed certain copyright claims regarding direct copying to proceed. The litigation is ongoing as of this writing, with discovery and merits proceedings expected to take several more years to resolve.

The key legal questions the case will decide include: (1) whether scraping and processing copyrighted images to train an AI constitutes reproduction under the Copyright Act; (2) whether such training qualifies as fair use (transformative use, no market substitution); and (3) whether AI-generated outputs that closely resemble specific artists' styles constitute infringement of those artists' works.

New York Times v. OpenAI

The New York Times' December 2023 lawsuit against OpenAI and Microsoft is the highest-profile AI copyright case to date. Unlike the artist litigation — which involved images and was therefore partly about style, which is not copyrightable — the Times' lawsuit is about text, where copying is more clearly at issue. The Times alleged that ChatGPT and GPT-4 were trained on millions of Times articles; that when prompted appropriately, the systems produce nearly verbatim reproductions of Times content; and that the systems operate as substitutes for Times subscriptions, directly harming the Times' business.

The fair use defense is central to OpenAI's position. Fair use requires consideration of four factors: the purpose and character of the use (commercial or transformative?); the nature of the copyrighted work; the amount and substantiality of the portion used; and the effect of the use upon the potential market for the copyrighted work.

OpenAI argues that training on copyrighted text is transformative use — the company is not making copies to reproduce the Times' content, but to enable a system that generates original content. The Times argues that the use is commercial (not transformative in any meaningful sense), that it involves the most creative aspects of the Times' work, that it involves massive copying, and that it directly harms the market for Times content by enabling users to substitute ChatGPT for Times subscriptions.

The case will ultimately be decided by courts that must extend fair use doctrine to circumstances it was not designed for — and the outcome will shape the legal foundation of the entire AI training ecosystem.

A separate intellectual property question concerns ownership of AI-generated output. The U.S. Copyright Office has consistently held that copyright protection requires human authorship: works created by AI without human creative input are not copyrightable. In Thaler v. Perlmutter (2023), a federal court affirmed the Copyright Office's rejection of an application to register AI-generated artwork, holding that copyright requires human authorship and that the AI system did not qualify as an author.

This has significant practical implications. AI-generated content — text, images, music, code — that lacks sufficient human creative input is in the public domain, available for anyone to use without permission. This potentially undermines the business models of AI content companies that sell AI-generated creative works and creates difficult questions about the line between AI generation (no copyright protection) and AI-assisted human creation (potentially copyrightable).


Section 20.6: Contract and Consumer Protection Liability

For most consumer-facing AI products, the primary legal framework is not tort or civil rights law — it is contract law, specifically the terms of service (ToS) that users agree to when they access the AI. ToS documents are the legal foundation of the consumer AI relationship. They define what the service provides, what the company's obligations are, and — critically — what liability is excluded.

AI company ToS documents typically include broad limitation of liability clauses, disclaiming responsibility for inaccurate outputs, harmful content, and consequences of reliance on AI-generated information. Whether these disclaimers are enforceable is a growing legal question: courts have occasionally held that limitation of liability clauses are unconscionable when they disclaim liability for harms the user could not have anticipated or avoided, or when they are presented in circumstances that don't allow meaningful review.

FTC Section 5 and Consumer Protection

The Federal Trade Commission's authority under Section 5 of the FTC Act to prohibit "unfair or deceptive acts or practices" is one of the most powerful tools for consumer AI accountability. The FTC has broad authority to pursue companies that:

  • Make false or misleading claims about AI capabilities
  • Use AI in ways that cause substantial harm that consumers cannot reasonably avoid
  • Engage in manipulative practices using AI (dark patterns, deceptive AI personas)
  • Fail to adequately disclose the limitations of AI products

The FTC has brought several enforcement actions with AI dimensions. In 2023, the FTC issued guidance on AI claims, making clear that companies must have substantiation for claims about AI capabilities and must not use AI to engage in deceptive practices. The FTC's 2024 report on AI highlighted several areas of concern, including AI-generated disinformation and manipulative AI-powered advertising.

The FTC's rulemaking authority allows it to establish industry-wide standards through regulations, and the Commission has signaled interest in AI-specific regulations in the commercial context. FTC enforcement is particularly significant for foundation model companies and consumer-facing AI product companies.

Consumer Class Actions

Consumer protection law has generated a growing number of class actions targeting AI products. Claims have been filed alleging that AI-generated content defamed class members, that AI-powered customer service systems misled consumers about their rights, and that AI recommendation systems manipulated consumer behavior in ways that constituted unfair practices. These cases are in early stages, and courts are still working through the applicable legal standards.


Section 20.7: The EU Approach — AI Act and AI Liability Directive

EU AI Act: How Obligations Map to Liability

The EU AI Act (2024) is primarily a regulatory framework — it creates obligations for AI providers and deployers, enforced by regulatory authorities — rather than a liability framework. But its obligations create implicit liability exposure: violation of an AI Act obligation is evidence of breach of the applicable standard of care, supporting negligence claims. In some EU member states, violation of a statutory obligation creates automatic civil liability for resulting harm (the Schutzgesetz, or "protective law," doctrine).

The AI Act's obligations are tiered by risk:

For high-risk AI systems (used in employment, credit, healthcare, education, critical infrastructure, law enforcement, and migration), operators must maintain technical documentation, implement risk management systems, ensure data governance, enable human oversight, and achieve specified accuracy and robustness. Violations of these requirements expose providers and deployers to regulatory fines (up to 3% of global annual turnover) and, through the protective law doctrine in some jurisdictions, to civil liability.

For general-purpose AI models, the AI Act imposes transparency obligations and copyright compliance requirements; for the most capable models (those above a specified compute threshold, presumed to pose systemic risk), it adds adversarial testing and incident reporting obligations. Violations expose model providers to regulatory fines.

The AI Act's human oversight requirements are particularly significant for liability: if an operator is required to maintain meaningful human oversight of a high-risk AI system and fails to do so, resulting in harm, the failure of human oversight creates clear grounds for liability. This translates the "rubber-stamping" problem — where human review is nominal rather than genuine — into a documented legal obligation.

EU AI Liability Directive (Proposed)

The proposed EU AI Liability Directive represents the most significant global attempt to create a specific AI liability framework. Its key provisions address the evidentiary barriers that make AI liability claims difficult to prove:

Presumption of causality. When an AI system's opacity makes it difficult for a plaintiff to prove that the AI caused their harm, the Directive creates a rebuttable presumption: if the plaintiff can show that the defendant failed to comply with a relevant duty of care (such as an AI Act obligation) and that the non-compliance was likely to cause the plaintiff's harm, causality is presumed. The defendant can rebut this presumption by showing that the harm was caused by some other factor. This addresses the causation problem — one of the most significant barriers to AI liability claims — without eliminating the defendant's ability to defend.

Disclosure obligations. The Directive gives potential plaintiffs the right to request disclosure of evidence about high-risk AI systems — documentation of the system's design, training data, validation methodology, and deployment decisions — to facilitate the investigation of whether a claim is viable. This addresses the evidentiary asymmetry in AI cases: plaintiffs typically lack access to the information necessary to evaluate whether the AI caused their harm, while defendants possess that information.

How this differs from the U.S. approach. The United States has no comparable federal framework. U.S. plaintiffs must establish AI causation through general tort law principles, typically requiring expert evidence and extensive discovery — at their own cost, before any presumption shifts to the defendant. The Directive's burden-shifting approach would make AI liability claims significantly more accessible to EU plaintiffs.

EU Product Liability Directive Revision

In parallel with the AI Liability Directive, the EU has revised its Product Liability Directive to explicitly include software — including AI software — as a "product" subject to strict product liability. This represents a deliberate departure from the software exception that U.S. courts have generally applied. Under the revised Directive, AI software providers can be held strictly liable for defects that cause harm, without the plaintiff needing to prove negligence.

The revised Directive also extends liability to AI-enabled products — products that incorporate AI components — making clear that the physical product manufacturer cannot escape liability by pointing to the AI component as a separate, unrelated product.


Section 20.8: Insurance as a Liability Tool

AI Liability Insurance

Insurance is a liability management tool that serves two functions in the AI accountability context. First, it transfers financial risk from the insured (AI developers and deployers) to the insurer, providing compensation for harm victims. Second, it prices risk: insurers who accurately price AI liability exposure will charge higher premiums for higher-risk AI deployments, creating financial incentives for risk reduction.

The AI liability insurance market is developing rapidly. Several existing insurance lines have AI-relevant dimensions:

Errors and omissions (E&O) insurance covers professional liability — claims that a professional's services caused harm. This line is directly applicable to AI companies providing AI services: if an AI-based recommendation causes harm to a client, E&O coverage would potentially apply.

Cyber liability insurance covers damages arising from data breaches, cyberattacks, and related incidents. This line applies to AI security failures — adversarial attacks, data poisoning, unauthorized access to AI systems — that cause harm to clients or customers.

Product liability insurance covers claims arising from product defects. As AI systems are increasingly treated as products, product liability coverage is relevant for AI developers and deployers.

Directors and officers (D&O) insurance covers claims against corporate officers and directors for management decisions. As AI governance failures become a recognized source of corporate liability, D&O coverage may apply to claims against executives who failed to implement adequate AI risk management.

How Insurance Pricing Disciplines AI Risk

Insurance underwriting requires assessment of the risks to be insured. Insurers who underwrite AI liability must evaluate: what AI systems the policyholder operates; what safety measures are in place; what audit and monitoring practices the organization uses; what the regulatory compliance history is; and what the relevant litigation history shows. This evaluation process creates a financial incentive for AI deployers to invest in safety, auditing, and compliance — behaviors that reduce insurance premiums.

This mechanism is most effective when insurers have the technical expertise to accurately price AI risk. The AI liability insurance market is too young for actuarial data on AI harm frequency and severity to be comprehensive. Insurers are relying on proxies — compliance certifications, audit reports, security assessments — that may not accurately capture actual risk. As the market matures and actuarial data accumulates, insurance pricing should become a more effective risk-management incentive.

Lloyd's of London and AI Risk Exclusions

Some major insurers have begun restricting coverage for AI-related risks. Lloyd's of London issued guidance in 2023 advising syndicates on how to exclude autonomous AI risks from coverage — particularly AI systems that cause harm "without human oversight." Several major cyber liability policies now include exclusions for AI-caused harm, reflecting insurers' uncertainty about the magnitude of AI liability exposure.

These exclusions create a coverage gap: precisely the category of AI deployment that creates the most risk — autonomous AI without meaningful human oversight — may be uninsurable, at least under current policy language. This is an argument for mandatory AI liability insurance with specified coverage requirements: voluntary market exclusions can leave victims of AI harm without any compensation source.


Section 20.9: The Future of AI Liability

The Strict Liability Argument

The argument for strict liability in the AI context is conceptually straightforward: because proving negligence is technically and legally difficult in AI cases (due to opacity, distributed causation, and access barriers), negligence-based liability results in systematic under-deterrence and under-compensation. Strict liability removes the need to prove negligence, requiring only proof of harm and causation, and places the burden of avoiding harm where it is most efficiently borne — on the party that created and benefits from the AI system.

The counterargument — that strict liability will chill AI innovation — has some merit but is often overstated. Strict products liability for physical manufactured goods has not prevented manufacturing innovation; it has provided manufacturers with incentives to make safer products. The same effect should operate in AI: if AI developers know they will be strictly liable for AI-caused harm, they have stronger incentives to invest in safety than under negligence-based liability. The efficiency question is whether the safety-incentive benefits of strict liability outweigh the innovation costs — a complex empirical question.

Enterprise Liability

The enterprise liability model locates liability with the party that profits from the AI system — typically the deploying organization — regardless of which party (developer, platform, deployer) was technically at fault. Enterprise liability is applied in other contexts where attribution of fault is difficult: in vicarious liability doctrine (employers are liable for their employees' torts), in product liability's strict standards for manufacturers, and in strict liability for abnormally dangerous activities.

Applied to AI, enterprise liability would make the deploying organization liable for AI-caused harm, with a right of indemnification against the AI developer for harms caused by product defects. This shifts the burden of fault attribution from plaintiff to defendant — a more appropriate allocation given the information asymmetry — and creates incentives for deployers to conduct rigorous vendor due diligence.

Compensation Funds

The compensation fund model addresses a different problem: many AI victims lack the resources to pursue individual claims, and many AI harms are too diffuse or too small to support individual litigation even when meritorious. A compensation fund — analogous to the National Vaccine Injury Compensation Program created by the National Childhood Vaccine Injury Act of 1986 — would provide a streamlined, no-fault compensation mechanism for AI-caused harm, funded by contributions from AI companies.

This approach has several advantages: it provides compensation to victims who would otherwise go uncompensated, it is less adversarial than litigation (reducing social costs), and it can accumulate actuarial data on AI harm that feeds back into improved safety standards. It also has disadvantages: it can be politically difficult to design and fund, it may under-compensate serious harms if fund resources are limited, and it weakens the deterrence effect of individual litigation.

International Coordination

The EU's AI Liability Directive and revised Product Liability Directive represent the most developed international effort to harmonize AI liability rules. Whether other jurisdictions will converge toward the EU's approach — as has happened in some areas of privacy law, where GDPR has influenced U.S. state law — is uncertain but possible. The alternative is a permanent liability patchwork, with AI liability rules varying dramatically across jurisdictions and companies optimizing their structures to minimize exposure in the most demanding regimes.


Discussion Questions

  1. The chapter describes AI liability as "a mess." Is this necessarily bad? Some legal theorists argue that legal uncertainty is itself a form of regulatory flexibility — it allows courts to develop doctrine incrementally in response to specific cases. What are the arguments for and against a comprehensive federal AI liability statute, as opposed to allowing doctrine to develop through litigation?

  2. Apply the four elements of negligence — duty, breach, causation, damages — to the following scenario: An AI-based radiology reading tool misses a tumor in a patient's lung X-ray. The radiologist who reviewed the AI's output (which showed "no significant findings") did not look at the image independently because he was relying on the AI. The patient is diagnosed with Stage 4 lung cancer 18 months later. Who has a duty of care? Who breached it? How would you establish causation and quantify damages?

  3. The EU AI Liability Directive's presumption of causality shifts the burden of proof to defendants in some cases. Does this represent an appropriate response to the evidentiary barriers in AI cases, or does it unfairly disadvantage AI companies? How would you design a burden-shifting mechanism that is both fair to defendants and accessible to plaintiffs?

  4. The New York Times v. OpenAI case will require courts to determine whether AI training on copyrighted text constitutes fair use. Walk through the four fair use factors as applied to this case and reach a conclusion. How confident are you in your answer? What factual questions would most influence the analysis?

  5. Mandatory AI liability insurance would create financial incentives for safety investment. What objections would the AI industry make to this requirement? How would you respond to those objections?

  6. The EU has explicitly included software in its revised Product Liability Directive, rejecting the "software exception" that U.S. courts have generally applied. Should U.S. courts and legislators follow suit? What arguments support and oppose extending strict product liability to AI software?

  7. The chapter describes several emerging liability theories — enterprise liability, compensation funds, strict liability — as alternatives or complements to negligence-based liability. Which of these approaches do you find most persuasive, and for which category of AI harm?


Cross-references: Chapter 3 (ethical frameworks); Chapter 6 (EU AI Act); Chapter 9 (COMPAS, fairness metrics); Chapter 18 (accountability — the non-legal foundations of responsibility); Chapter 19 (auditing — the relationship between audit findings and liability); Chapter 30 (COMPAS in criminal justice); Chapter 33 (EU AI Act detailed analysis).