Chapter 20: Quiz — Liability Frameworks for AI

20 questions. Select the best answer for each multiple-choice question. For short-answer questions, provide a concise response of 2–4 sentences.


1. The primary reason AI liability is legally unsettled is:

A) AI companies have successfully lobbied to exempt themselves from all liability
B) Existing legal frameworks — negligence, products liability, civil rights law — were not designed for autonomous AI systems and must be stretched and analogized
C) AI systems are so new that courts have not yet accepted any cases involving AI-caused harm
D) Constitutional limitations prevent Congress from regulating AI liability


2. In a negligence claim, the "breach" element asks:

A) Whether the defendant owed a legal duty to the plaintiff
B) Whether the defendant's conduct fell below the applicable standard of care
C) Whether the defendant's conduct caused the plaintiff's harm
D) Whether the plaintiff suffered legally cognizable damages


3. The "disparate impact" doctrine in anti-discrimination law:

A) Applies only to intentional discrimination with a provable discriminatory motive
B) Prohibits facially neutral practices that produce discriminatory effects on protected groups, regardless of intent
C) Requires plaintiffs to prove both discriminatory intent and discriminatory effect
D) Is no longer applicable to AI systems, which have been specifically exempted from its coverage


4. Which of the following best describes a "design defect" in a products liability claim?

A) A specific unit of a product was manufactured incorrectly, deviating from the intended design
B) The manufacturer failed to provide adequate warnings about a product's known risks
C) The product's design itself creates an unreasonable risk of harm, making all units defective
D) The product was deliberately designed to cause harm to consumers


5. The "software exception" in products liability law refers to:

A) An explicit statutory exemption that Congress enacted to protect software companies
B) Courts' general reluctance to apply strict products liability to software, which is often treated as a service rather than a product
C) A doctrine that software developers are exempt from negligence liability for coding errors
D) A federal preemption rule that prevents state products liability law from applying to software


6. In State v. Loomis, Eric Loomis challenged the use of COMPAS in his sentencing on three grounds. Which of the following was NOT one of his arguments?

A) The COMPAS algorithm's accuracy was inadequate to support its use in sentencing
B) COMPAS used race as an explicit input variable, violating equal protection
C) The algorithm's opacity meant he could not meaningfully challenge the score used against him
D) COMPAS used gender as an input variable, raising equal protection concerns


7. The four factors in the fair use analysis of copyright law include all of the following EXCEPT:

A) The purpose and character of the use (commercial or transformative?)
B) The reputation of the copyright holder
C) The amount and substantiality of the portion of the work that was copied
D) The effect of the use upon the potential market for the copyrighted work


8. The EU AI Liability Directive's "presumption of causality" provision:

A) Creates an irrebuttable presumption that AI systems cause all harms experienced by individuals who have used them
B) Shifts the burden of disproving causation to defendants when they have violated relevant duties and causation is difficult to establish due to AI opacity
C) Requires plaintiffs to establish causation through scientific expert testimony before any burden shifts
D) Applies only to cases where the EU AI Act has classified the relevant AI system as prohibited


9. Under which legal theory would an employer most likely be held liable if an AI hiring tool it uses produces statistically significant racial disparities in candidate selection?

A) Products liability design defect
B) Negligence (failure to exercise reasonable care in vendor selection)
C) Disparate impact under Title VII of the Civil Rights Act
D) Consumer protection under FTC Section 5


10. The New York Times v. OpenAI case primarily concerns:

A) Whether ChatGPT's outputs can defame public figures like Times journalists
B) Whether training GPT-4 on Times articles constitutes copyright infringement, and whether the fair use defense applies
C) Whether OpenAI violated the Times' trade secrets by reproducing its content creation processes
D) Whether the Times has antitrust claims against OpenAI for unfair competition in the news market


11. The "enterprise liability" model for AI harm would:

A) Hold individual AI engineers personally liable for the systems they build
B) Create a shared liability pool among all companies in a particular AI industry sector
C) Locate liability with the party that profits from the AI deployment, regardless of fault
D) Exempt AI companies from liability if they are classified as technology enterprises rather than product manufacturers


12. In the Getty Images v. Stability AI litigation, the finding that Stable Diffusion could generate images containing blurry versions of the Getty watermark is significant because:

A) It proves that Stability AI deliberately inserted Getty's watermark into its training data to increase model performance
B) It is evidence that the model memorized specific training images — undermining the "transformativeness" defense in fair use analysis
C) It establishes trademark infringement as the primary legal theory in the case, displacing the copyright claims
D) It shows that Stable Diffusion users can generate exact reproductions of Getty images on demand


13. Which of the following best describes the U.S. Copyright Office's position on copyright protection for AI-generated works?

A) AI-generated works are automatically owned by the company that operated the AI system
B) AI-generated works require registration within three months of creation to receive protection
C) Copyright protection requires human authorship, so AI-generated works without sufficient human creative input are not protected
D) AI-generated works are jointly owned by the AI developer and the user who provided the prompt


14. "But-for causation" in a negligence case requires:

A) Proof that the defendant's negligence was the sole cause of the plaintiff's harm
B) Evidence that the harm would not have occurred absent the defendant's breach
C) A showing that the defendant's conduct substantially contributed to the harm, even if other factors were also present
D) Expert testimony establishing the probability that the defendant's conduct caused the harm to a scientific certainty


15. Under current employment discrimination law, when an employer uses a third-party AI hiring tool that produces racially discriminatory outcomes:

A) Only the AI vendor (who built the discriminating system) is liable; the employer is protected as a third-party recipient
B) The employer bears liability for the discriminatory outcomes regardless of whether they built the AI or purchased it from a vendor
C) Neither party is liable because the discrimination was unintentional and produced by an automated system
D) Liability is automatically shared equally between vendor and employer under comparative fault principles


16. The EU's revised Product Liability Directive (in effect from 2026) is significant for AI because:

A) It requires all AI products sold in the EU to undergo government safety certification before sale
B) It explicitly includes software — including AI software — as a "product," rejecting the software exception that U.S. courts have applied
C) It creates a compensation fund that pays AI harm victims without requiring proof of fault
D) It exempts foundation model providers from product liability, holding only deployers liable


17. Lloyd's of London's issuance of guidance encouraging exclusions for "autonomous AI" risks from insurance policies reflects:

A) A determination by the insurance industry that AI never causes legally cognizable harm
B) Insurers' uncertainty about AI liability exposure, leading some to exclude precisely the highest-risk AI deployments from coverage
C) A regulatory requirement from the UK Financial Conduct Authority restricting AI coverage
D) An industry-wide agreement that AI liability should be handled through a dedicated compensation fund


18. A "failure to warn" (warning defect) products liability claim against an AI developer would most likely be based on:

A) The AI developer deliberately concealing known defects from regulators and customers
B) The developer's failure to adequately disclose known limitations, accuracy disparities, or failure modes in documentation provided to deployers
C) The AI system's failure to warn users in real time when its confidence in a recommendation is low
D) The absence of safety labels on the physical devices on which AI software is installed


19. (Short Answer) Explain why proving causation is especially challenging in AI liability cases. What specific features of AI systems create these challenges, and what legal mechanisms have been proposed to address them?


20. (Short Answer) A music streaming company trains an AI composition tool on the complete catalog of a deceased composer whose works remain under copyright (the composer died 30 years ago, so under a life-plus-70-years term, protection extends for another 40 years). The tool generates new compositions "in the style of" the composer. Identify two distinct legal theories under which the composer's estate might bring a claim, and briefly assess the strength of each.


Answer Key: 1-B, 2-B, 3-B, 4-C, 5-B, 6-B, 7-B, 8-B, 9-C, 10-B, 11-C, 12-B, 13-C, 14-B, 15-B, 16-B, 17-B, 18-B. Questions 19–20 are short answer; see discussion guide for model responses.