Chapter 18: Exercises — Who Is Responsible When AI Fails?
Difficulty ratings: ⭐ (foundational) through ⭐⭐⭐⭐ (advanced). Exercises marked with † are team/collaborative exercises.
Part A: Comprehension and Application
Exercise 1 ⭐ Define the following terms in your own words, providing a brief example for each: accountability, responsibility, liability, culpability, and the many hands problem. For each term, explain how it applies specifically to AI systems rather than to traditional products or services.
Exercise 2 ⭐ List the five mechanisms through which AI creates accountability gaps, as described in Section 18.1. For each mechanism, provide one example from an AI system you are familiar with (you may use examples from this chapter or from current events).
Exercise 3 ⭐ Identify the seven AI failure modes described in Section 18.3. For each, state (a) who is primarily responsible, (b) at what stage in the AI lifecycle the failure typically arises, and (c) one structural mechanism that would address it.
Exercise 4 ⭐ Explain the Collingridge dilemma and its application to AI regulation. In your explanation, use a specific current AI technology to illustrate the dilemma — not a technology from this chapter, but one you have identified independently.
Exercise 5 ⭐⭐ Read the following scenario and identify which type of AI failure has occurred, who bears primary responsibility, and what structural changes would have prevented it: A hospital deploys an AI tool that predicts which patients are at high risk of readmission within 30 days of discharge. The tool was trained on 10 years of patient data from the hospital's own records. After deployment, the hospital's quality department notices that Black patients are being systematically assigned lower risk scores than white patients with similar clinical presentations, resulting in fewer follow-up calls and services for Black patients with high actual readmission risk. The vendor of the AI tool was not notified of this pattern for eight months.
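As context for Exercise 5, the pattern the quality department noticed is the kind of thing a simple subgroup disparity check can surface. The sketch below is hypothetical and not part of the exercise: it uses synthetic data and an invented toy scoring function (the group labels, the built-in penalty, and all numbers are illustrative assumptions, not findings from the chapter's scenario).

```python
# Hypothetical sketch of a disparity check a hospital quality department
# might run: compare mean risk scores across groups for patients with
# similar clinical presentations. All data and the toy model are invented.
import random
import statistics

random.seed(0)

def risk_score(severity, group):
    # Stand-in for the vendor's tool; it improperly depresses scores
    # for group "B" at the same clinical severity (assumed for illustration).
    penalty = 0.15 if group == "B" else 0.0
    return max(0.0, min(1.0, severity - penalty + random.gauss(0, 0.02)))

# Simulate clinically similar patients: both groups drawn from the
# same severity distribution, so any score gap is not clinically driven.
patients = [(random.uniform(0.4, 0.9), g)
            for g in ("A", "B") for _ in range(500)]
scores = {"A": [], "B": []}
for severity, group in patients:
    scores[group].append(risk_score(severity, group))

gap = statistics.mean(scores["A"]) - statistics.mean(scores["B"])
print(f"mean score gap (A - B): {gap:.3f}")
```

A persistent gap of this kind, among clinically comparable patients, is the signal that should trigger escalation to the vendor far sooner than the eight months in the scenario.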
Part B: Case Analysis
Exercise 6 ⭐⭐ Apply the many hands framework to the Facebook/Haugen case (Case Study 18-2). For each party in the AI value chain — researchers, developers, managers, executives, the board, regulators — describe their specific contribution to the accountability failure and what they could have done differently. Then answer: who bears the most responsibility for the gap between Facebook's internal research and its public claims about Instagram's effects on teenagers?
Exercise 7 ⭐⭐ The Uber/Herzberg case ended with criminal charges against the safety driver and a civil settlement by Uber, with no charges against Uber executives. Write a memorandum (500–700 words) to a prosecutor making the case for charging a specific Uber executive with criminal negligence. Identify the specific individual, the specific decision they made, and the legal theory under which that decision would support a negligence charge.
Exercise 8 ⭐⭐ Compare the Amazon hiring algorithm failure (discussed in Section 18.2) with the Facebook news feed failure (discussed in Case Study 18-2) using the failure taxonomy from Section 18.3. Which failure type or types best characterize each case? Are the failure types the same or different? What does the comparison reveal about the different forms that AI accountability failure can take?
Exercise 9 ⭐⭐⭐ The chapter argues that the "computer says no" defense — claiming that an AI system made a decision, rather than a human — is inadequate. But consider this counterargument: if an AI system genuinely outperforms human decision-makers on a specific task (say, detecting cancer in radiology images), isn't there an argument that human override of AI recommendations is the greater danger? Write a structured response to this counterargument that incorporates the concepts of automation bias, meaningful human review, and professional obligation.
Exercise 10 ⭐⭐⭐ † In teams of four, conduct a "responsibility audit" of a real AI system deployed by a company or government agency. (Suggestions: the COMPAS recidivism tool, a state's automated benefits determination system, or a private employer's AI hiring tool.) Each team member should take the role of one party in the value chain: developer, deployer, regulator, or affected person. Each party should describe (a) their contribution to the system's design and operation, (b) what they knew or should have known about the system's effects, (c) what obligations they had, and (d) whether those obligations were met. Present your findings as a coordinated team presentation.
Part C: Critical Thinking and Analysis
Exercise 11 ⭐⭐ The chapter distinguishes between individual accountability and structural accountability. Critics of the structural approach argue that it lets individuals off the hook by locating blame in "the system." Critics of the individual approach argue that blaming individuals is scapegoating when problems are genuinely systemic. Write an essay (600–800 words) defending a position: either that individual accountability is essential and cannot be replaced by structural approaches, or that structural approaches are primary and individual accountability is secondary. Engage with the strongest counterargument to your position.
Exercise 12 ⭐⭐ Evaluate the following claim: "Requiring AI developers to conduct mandatory algorithmic impact assessments before deployment would have prevented the Amazon hiring tool from being deployed with its known bias." Is this claim accurate? What conditions would need to hold for impact assessments to be effective in preventing bias-related harms? What are the limitations of impact assessments as an accountability mechanism?
Exercise 13 ⭐⭐⭐ The chapter discusses automation bias — the tendency to over-rely on automated recommendations. Design a training program for HR professionals who will be using an AI hiring tool, specifically aimed at countering automation bias while preserving the efficiency benefits of AI-assisted screening. Your program should include: learning objectives, training content, exercises or simulations, and assessment mechanisms. (750–1,000 words)
Exercise 14 ⭐⭐⭐ Section 18.9 argues for strict liability as an accountability mechanism for AI harm. Using the COMPAS recidivism tool as your primary example, construct arguments both for and against strict liability. Consider: Who would be liable? What would plaintiffs need to prove? What would the incentive effects be on AI development and deployment? Would strict liability produce better outcomes than negligence-based liability? Which approach do you favor, and why?
Exercise 15 ⭐⭐⭐ † In teams, draft a model "AI Accountability Policy" for a mid-sized financial institution (approximately 5,000 employees, $20 billion in assets) that is beginning to deploy AI tools in its credit underwriting and fraud detection operations. Your policy should address: who is responsible for AI decisions at each level of the organization; what due diligence is required before deploying a new AI tool; what monitoring is required after deployment; how complaints from affected customers will be handled; and what the organization's whistleblowing procedures are for employees who identify AI-related harms. Your policy should be practical and implementable, not aspirational.
Part D: Applied Professional Scenarios
Exercise 16 ⭐⭐ You are a software engineer at a company that provides AI-based credit-scoring services to community banks. You have discovered that the model produces significantly lower scores for applicants from ZIP codes that overlap with historically redlined neighborhoods, even after controlling for creditworthiness factors. Your manager tells you that the model has passed all regulatory compliance checks and that raising the issue will delay the product launch. What are your obligations? What steps would you take? What professional codes of ethics are relevant to your situation?
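As context for Exercise 16, "lower scores even after controlling for creditworthiness factors" is typically established by comparing applicants within the same creditworthiness band. The sketch below is a hypothetical illustration of that within-band comparison, not the chapter's method: the synthetic data, the toy model_score function, the 20-point penalty, and the 50-point band width are all assumptions made for demonstration.

```python
# Hypothetical within-band comparison an engineer might run to show a
# score gap that persists after controlling for creditworthiness.
# All data and the toy scoring model are invented for illustration.
import random
from collections import defaultdict

random.seed(1)

def model_score(creditworthiness, redlined_zip):
    # Stand-in for the vendor's model; it leaks a ZIP-code penalty
    # (about 20 points, assumed) unrelated to creditworthiness.
    return creditworthiness - (20 if redlined_zip else 0) + random.gauss(0, 5)

# Synthetic applicants: identical creditworthiness distribution in both groups.
applicants = [(random.uniform(500, 800), z)
              for z in (True, False) for _ in range(400)]

# Bucket by creditworthiness band, then compare group means within each
# band, so any remaining gap cannot be explained by creditworthiness itself.
bands = defaultdict(lambda: {"redlined": [], "other": []})
for cw, z in applicants:
    band = int(cw // 50) * 50  # bands: 500-549, 550-599, ...
    bands[band]["redlined" if z else "other"].append(model_score(cw, z))

gaps = []
for band in sorted(bands):
    red, other = bands[band]["redlined"], bands[band]["other"]
    if red and other:
        gap = sum(other) / len(other) - sum(red) / len(red)
        gaps.append(gap)
        print(f"band {band}-{band + 49}: gap = {gap:5.1f} points")
```

A gap that holds across every creditworthiness band is exactly the evidence that a compliance check focused on aggregate outcomes, like the one the manager cites, can miss.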
Exercise 17 ⭐⭐ You are the Chief Risk Officer of a large retailer that uses an AI-based tool for scheduling employee hours. An employee advocacy organization has sent you a letter claiming that the AI tool is scheduling significantly fewer hours for employees who have filed FMLA (Family and Medical Leave Act) leave, and that this constitutes illegal retaliation. You purchased the tool from a vendor and have not modified it. Draft a memo to your CEO outlining: (a) the legal exposure the company faces; (b) the immediate steps you recommend; (c) the longer-term governance changes the situation demands.
Exercise 18 ⭐⭐⭐ You are the General Counsel of a hospital system that is considering deploying an AI-based sepsis prediction tool. The vendor has provided impressive performance statistics, but the tool has been validated primarily on data from large academic medical centers, and your hospital serves a significantly different patient population (rural, lower-income, higher proportion of patients of color). Draft a due diligence checklist of 15–20 specific questions you would require the vendor to answer before you would recommend deployment to the hospital's board. For each question, briefly explain what accountability concern it addresses.
Exercise 19 ⭐⭐⭐ † Role-play exercise: Your class will conduct a mock hearing before a state legislature considering a bill that would require mandatory algorithmic impact assessments before AI systems are deployed in employment, credit, healthcare, and housing decisions. Assign roles: AI industry lobbyists, consumer advocates, civil rights organizations, small business owners, AI researchers, and legislators. Each party should prepare a 3-minute opening statement and be prepared for cross-examination. After the hearing, hold a debrief discussion: What accountability arguments proved most persuasive? What objections were hardest to answer?
Exercise 20 ⭐⭐⭐⭐ Comparative analysis: The United States has a sector-specific, enforcement-driven approach to AI accountability (EEOC enforcing employment discrimination law against AI, FTC enforcing consumer protection law, CFPB enforcing fair lending). The EU has a risk-based, pre-deployment regulatory approach (EU AI Act) combined with a liability framework (AI Liability Directive). Write a comparative analysis (1,000–1,500 words) of these two approaches. For each approach, analyze: (a) which types of AI harm it addresses most effectively; (b) which types it leaves inadequately addressed; (c) the compliance costs it imposes on AI companies; (d) the barriers it creates for affected individuals to obtain redress. Conclude with a recommendation: Which approach, or which combination of elements from both, would produce the best overall accountability outcomes?
Part E: Research and Writing
Exercise 21 ⭐⭐ Research an AI failure that has occurred since the publication of this textbook (you may use current news sources). Write a 500-word case summary that: (a) describes what happened; (b) identifies the failure type using the taxonomy from Section 18.3; (c) applies the many hands framework to identify who contributed to the failure; (d) describes what accountability (if any) resulted; and (e) proposes one structural change that would have prevented or mitigated the harm.
Exercise 22 ⭐⭐⭐ The chapter notes that aviation safety has improved dramatically through mandatory incident reporting. Research the Aviation Safety Reporting System (ASRS) — how it works, what it covers, how data is analyzed, and what protections reporters receive. Write an analysis (700–900 words) evaluating whether a similar system for AI incidents would be feasible and effective. What obstacles would need to be overcome? What design choices would be most important?
Exercise 23 ⭐⭐⭐ † Team research project: Identify a municipality, state, or country that has enacted specific AI accountability legislation (beyond the EU AI Act, which is covered in Chapter 33). Your team should research: the specific obligations the law creates; the parties to whom it applies; the enforcement mechanism; the penalties for violation; the compliance costs the law imposes; and early evidence of how it is working in practice. Present your findings in a 15-minute presentation with supporting materials.
Exercise 24 ⭐⭐⭐⭐ Policy brief: You have been commissioned by a federal agency to write a policy brief recommending a mandatory AI accountability framework for AI systems used in consequential decisions (credit, employment, housing, healthcare, criminal justice). Your brief (1,200–1,500 words) should: (a) identify the specific accountability gaps your framework addresses; (b) specify the parties to whom obligations apply; (c) describe the specific requirements (impact assessments, registration, auditing, insurance, incident reporting, etc.); (d) propose an enforcement mechanism with appropriate penalties; (e) estimate the compliance costs; and (f) address the principal objections your framework will face from the AI industry and from civil libertarians.
Exercise 25 ⭐⭐⭐⭐ † Capstone exercise: Working in teams, develop a comprehensive AI accountability framework for a large-scale AI deployment of your team's choosing (suggestions: an AI-based criminal justice risk assessment tool used statewide; an AI-based medical diagnosis tool deployed across a health system; a national AI-based benefits eligibility determination system). Your framework should address accountability at every level: developer, platform, deployer, operator, and regulator. It should specify what each party's obligations are, how compliance will be verified, what happens when things go wrong, and how affected individuals can seek redress. Present your framework as a 20-minute presentation with written supporting materials. Be prepared to defend your framework against the critique that it is either too burdensome (chilling innovation) or too permissive (failing to protect the public).