Chapter 20: Exercises — Liability Frameworks for AI
Difficulty ratings: ⭐ (foundational) through ⭐⭐⭐⭐ (advanced). Exercises marked with † are team/collaborative exercises.
Part A: Comprehension and Application
Exercise 1 ⭐ Define the following terms and explain how each applies specifically to AI-caused harm: liability, tort, negligence, strict liability, products liability, vicarious liability, and disparate impact. For each term, identify one AI scenario in which it would be the primary applicable legal theory.
Exercise 2 ⭐ Apply the four elements of negligence — duty, breach, causation, damages — to the following scenario: A bank deploys an AI-based mortgage underwriting system. The system denies a mortgage application from a qualified borrower. Later analysis reveals that the system was systematically denying applications from borrowers in ZIP codes with majority-Black populations, even when their creditworthiness was equivalent to applicants in predominantly white ZIP codes. For each element, explain: (a) what the element requires; (b) what facts establish or undermine the element; and (c) the primary challenge in proving the element.
Exercise 3 ⭐ Distinguish among the three types of products liability claims (manufacturing defect, design defect, and warning defect) and explain how each would apply to an AI medical diagnostic system that misdiagnoses cancer in patients of color at significantly higher rates than in white patients.
Exercise 4 ⭐ Explain the disparate impact doctrine and its relevance to AI liability. A company uses an AI screening tool to score job applicants on "cultural fit." The tool produces scores that, when combined with its stated selection threshold, result in the selection of 3% of Black applicants and 12% of white applicants with equivalent qualifications. Apply the disparate impact framework to this scenario, including who bears what burden at each stage.
Exercise 5 ⭐⭐ Walk through the four-factor fair use analysis as applied to an AI music composition tool that was trained on the complete recorded output of 50,000 musicians without their consent. The tool generates original compositions that some musicians claim are in their distinctive style. Analyze each factor and reach a conclusion.
Part B: Case Analysis
Exercise 6 ⭐⭐ In State v. Loomis, the Wisconsin Supreme Court held that using COMPAS in sentencing did not violate due process, even though the algorithm was proprietary and could not be examined by the defendant. Do you agree with this holding? Draft a dissent that engages with the court's reasoning on each of the three due process arguments (accuracy, gender, opacity) and explains why you would rule differently.
Exercise 7 ⭐⭐ The New York Times v. OpenAI case presents the question of whether training GPT-4 on Times articles is fair use. Assume you are the judge. Identify the three most important factual questions you would need answered before ruling on the fair use issue, and explain why each is legally significant. Then identify which direction each question's answer would point — toward or against fair use.
Exercise 8 ⭐⭐ Compare the Andersen v. Stability AI (artists' class action) and New York Times v. OpenAI cases. Both allege copyright infringement arising from AI training on copyrighted works. Identify three specific ways in which the cases are legally distinct — where the legal analysis might differ — and explain the significance of each distinction for the outcome.
Exercise 9 ⭐⭐⭐ The Getty Images watermark memorization finding — Stable Diffusion generating images that include blurry versions of Getty's watermark — is potentially devastating to the "transformativeness" defense in AI copyright litigation. Write a memo (500–700 words) from the perspective of Stability AI's legal team explaining: (a) what legal significance this finding has; (b) how you would attempt to mitigate its impact in litigation; and (c) what technical facts about how Stable Diffusion works you would want to establish through expert testimony.
Exercise 10 ⭐⭐⭐ † Moot court exercise: Your class will conduct a moot court argument on the following question: "Is the training of large language models on copyrighted internet text fair use under 17 U.S.C. § 107?" Assign teams to argue for and against fair use. Each team should prepare a 10-minute oral argument supported by written briefs addressing all four fair use factors. The remainder of the class will serve as the appellate panel, asking questions during argument.
Part C: Critical Thinking and Analysis
Exercise 11 ⭐⭐ The chapter discusses the "software exception" — courts' general reluctance to apply strict products liability to software. Evaluate the arguments for and against applying strict products liability to AI systems. Your analysis should: (a) identify the strongest arguments for treating AI as a product subject to strict liability; (b) identify the strongest arguments for the software exception; and (c) reach a conclusion about what the law should be, with reasoning.
Exercise 12 ⭐⭐ The EU AI Liability Directive's presumption of causality shifts the burden of proof to defendants when they have violated relevant duties and causation is difficult to establish. Critics argue this goes too far — that presuming causation when it cannot be established is unfair to AI companies. Proponents argue it is necessary to address the evidentiary asymmetry in AI cases. Write a balanced assessment of the arguments, and reach a conclusion about whether the presumption is appropriately calibrated.
Exercise 13 ⭐⭐⭐ The chapter describes enterprise liability as an alternative to negligence for AI harm: locating liability with the party that profits from the AI system, regardless of fault. Using the Amazon hiring algorithm case as your primary example, construct an enterprise liability argument for holding Amazon liable for the discriminatory hiring outcomes produced by its AI tool, independent of whether Amazon was negligent. Address: what the enterprise liability doctrine requires; how Amazon's profits from the AI deployment establish the relevant connection; and what the incentive effects of enterprise liability would be.
Exercise 14 ⭐⭐⭐ Mandatory AI liability insurance — required coverage for organizations deploying high-risk AI systems — has been proposed as a structural accountability mechanism. Assess this proposal from three perspectives: (a) an AI company's legal counsel, who would argue against the requirement; (b) an insurance industry representative, who would address the practical feasibility of underwriting AI liability; and (c) a consumer advocate, who would evaluate the requirement's effectiveness in protecting AI harm victims.
Exercise 15 ⭐⭐⭐ † Liability framework design: Working in teams, design a liability framework for AI-generated defamation — content generated by AI (a chatbot, an image generator, or a news summarization tool) that falsely attributes harmful statements or behaviors to real people. Your framework should address: (a) who is liable — the AI developer, the deploying platform, or the user who prompted the output? (b) what legal theory applies; (c) what the plaintiff must prove; (d) what defenses are available; and (e) what remedies are appropriate. Present your framework as a draft legislative provision.
Part D: Applied Professional Scenarios
Exercise 16 ⭐⭐ You are General Counsel of a company that provides AI-based customer service chatbots to financial services clients. One of your clients — a major bank — has been sued by a class of customers who allege that your company's chatbot gave them inaccurate information about their loan terms, causing them financial harm. The chatbot's responses were generated by a large language model and included "hallucinated" statements about interest rates. Analyze: (a) whether your company has legal exposure; (b) whether the bank (your client) has exposure; (c) what provisions in your service agreement are most relevant; and (d) what steps you would take immediately.
Exercise 17 ⭐⭐ You are the Chief Legal Officer of a large hospital system that uses an AI diagnostic tool from a third-party vendor for chest X-ray analysis. A radiologist files an internal complaint alleging that the tool has significantly lower accuracy for patients over 70 and for patients of certain ethnic backgrounds, based on her observation of clinical cases over the past year. You have not previously conducted a bias audit of the tool. What are your legal obligations? What steps do you take, in what order? What do you tell the board of directors?
Exercise 18 ⭐⭐⭐ You are a member of the legal team at a major AI foundation model company (you may choose OpenAI, Anthropic, Google, or a fictional equivalent). The company is planning to deploy an updated model with significantly enhanced capabilities, including the ability to provide highly personalized medical information and financial advice. Identify five specific liability risks that the new deployment creates, and for each risk: (a) identify the applicable legal theory; (b) assess the likelihood that the theory would succeed in litigation; (c) identify the contractual or technical measures that could reduce the risk; and (d) recommend whether the deployment should proceed, and under what conditions.
Exercise 19 ⭐⭐⭐ † Simulation: An AI-powered loan underwriting system deployed by a community development financial institution (CDFI) has been audited and found to deny loans to Black-owned small businesses at rates 40% higher than to comparable white-owned businesses. Your team plays the roles of: (a) the CDFI's Board of Directors; (b) the CFPB enforcement attorney who has opened an investigation; (c) the AI vendor's legal team; and (d) a class action plaintiff's attorney representing the denied loan applicants. Each team prepares a position statement, and the class conducts a structured negotiation about potential resolutions — regulatory compliance agreement, private settlement, or litigation.
Exercise 20 ⭐⭐⭐⭐ Comparative liability analysis: Identify a specific AI-caused harm scenario (you may use one from this chapter or develop your own). Analyze how liability would be assigned and what outcomes would be available under: (a) current U.S. law (relevant tort, civil rights, and consumer protection theories); (b) the EU framework (EU AI Act obligations plus EU AI Liability Directive presumptions plus revised Product Liability Directive); and (c) a hypothetical strict liability regime. For each framework, analyze: who has liability exposure; what the plaintiff must prove; what remedies are available; and what incentive effects the framework creates for AI developers and deployers.
Part E: Research and Writing
Exercise 21 ⭐⭐ Research the current status of the New York Times v. OpenAI litigation. What has happened since December 2023? Identify the most recent significant ruling or development. Analyze what this development suggests about the likely ultimate outcome of the fair use question and its implications for AI training practices.
Exercise 22 ⭐⭐⭐ Research the EEOC's enforcement actions involving AI in employment decisions. Identify two specific enforcement actions or formal guidance documents issued since 2020. For each, analyze: (a) what conduct the EEOC challenged or addressed; (b) what legal theory was applied; (c) what the outcome or guidance requires; and (d) what implications it has for employers using AI in hiring, promotion, or termination decisions.
Exercise 23 ⭐⭐⭐ † Team research project: Identify an AI liability case that has been filed in a U.S. federal or state court and has generated a published judicial opinion. Brief the case (facts, procedural history, holding, reasoning) and present a 10-minute analysis that addresses: What legal theory did the plaintiff use? What was the court's holding? Was the holding well-reasoned? What does the case establish as precedent for AI liability? How does the holding relate to the frameworks discussed in this chapter?
Exercise 24 ⭐⭐⭐⭐ Policy memorandum: You have been asked to advise a U.S. Senator's office on whether to introduce the "AI Accountability and Compensation Act," a proposed bill that would create: (a) a presumption of causality (similar to the EU AI Liability Directive) for plaintiffs who can establish that an AI provider or deployer violated applicable regulatory obligations; (b) mandatory AI liability insurance for deployers of high-risk AI systems; and (c) an AI Harm Compensation Fund, funded by levies on AI companies above a size threshold, providing streamlined no-fault compensation for AI harm victims. Your memorandum (1,200–1,500 words) should assess the bill's prospects, identify the most significant political obstacles, evaluate the bill's likely effectiveness in each of its three components, and recommend amendments that would strengthen it.
Exercise 25 ⭐⭐⭐⭐ † Capstone exercise: Working in teams, design a comprehensive federal AI liability framework for the United States. Your framework should: (a) specify the legal theories that apply to different categories of AI harm and different actors (developers, deployers, operators, platforms); (b) address the primary obstacles to AI liability claims (opacity, distributed causation, evidentiary asymmetry, causation proof); (c) establish insurance or compensation mechanisms for AI harm victims; (d) address the intellectual property questions raised by AI training; (e) align with (or explicitly diverge from) the EU AI liability framework; and (f) identify the constitutional constraints that limit Congress's options. Present your framework as a set of draft statutory provisions and an accompanying policy memo. Be prepared to defend your framework's choices against critiques from the AI industry, civil liberties organizations, and consumer advocates.