Chapter 18: Quiz — Who Is Responsible When AI Fails?

20 questions. Select the best answer for each multiple-choice question. For short-answer questions, provide a concise response of 2–4 sentences.


1. The "many hands problem" in AI ethics refers to:

A) The difficulty of programming AI systems to perform multiple tasks simultaneously
B) The diffusion of responsibility across so many parties that no individual can be meaningfully held accountable
C) The challenge of coordinating AI development teams across multiple organizations
D) The problem of multiple competing AI systems producing contradictory outputs


2. Which of the following best distinguishes "accountability" from "responsibility" as used in this chapter?

A) Accountability refers to causal relationships; responsibility refers to institutional obligations
B) Responsibility refers to causal or moral relationships; accountability refers to institutional obligations to explain and face consequences
C) Accountability is a legal concept; responsibility is an ethical concept
D) Responsibility applies to individuals; accountability applies to organizations


3. In the Uber/Herzberg autonomous vehicle fatality, the NTSB found that Uber's engineers had:

A) Failed to install any object detection sensors in the vehicle
B) Disabled the automatic emergency braking system to reduce "false positives" during testing
C) Falsified safety testing reports submitted to Arizona regulators
D) Used a safety driver without adequate training for autonomous vehicle operation


4. Which type of AI failure occurs when a system is optimized for the wrong objective, so that what the system was told to maximize does not correspond to what society actually wants?

A) Robustness failure
B) Integration failure
C) Specification failure
D) Drift failure
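
To make question 4 concrete, here is a minimal sketch of a specification failure, in which the optimizer does exactly what its stated objective asks while missing the goal that actually matters. The article titles and the clicks/welfare numbers are invented for illustration.

```python
# Minimal sketch of a specification failure: the system is "working"
# by its own metric (clicks) while failing by the one society cares
# about (reader welfare). All values here are invented for illustration.
articles = [
    {"title": "Balanced report",  "clicks": 0.30, "reader_welfare": 0.9},
    {"title": "Outrage bait",     "clicks": 0.80, "reader_welfare": 0.2},
    {"title": "Useful explainer", "clicks": 0.45, "reader_welfare": 0.8},
]

# Specified objective: maximize clicks.
chosen = max(articles, key=lambda a: a["clicks"])
# What society actually wanted maximized: reader welfare.
wanted = max(articles, key=lambda a: a["reader_welfare"])

print(f"Optimizer picks {chosen['title']!r}; society wanted {wanted['title']!r}")
```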


5. The Collingridge dilemma states that:

A) More powerful AI systems will always displace weaker ones, regardless of which is more accurate
B) Technology is easiest to regulate when it is new, but its harms cannot yet be foreseen; once its harms are well understood, the technology has become too entrenched to change
C) AI developers will always face a dilemma between transparency and accuracy
D) Regulatory agencies face an inherent conflict between promoting innovation and protecting the public


6. In the Amazon hiring algorithm case, which of the following was the primary technical cause of the bias?

A) Amazon engineers intentionally programmed the system to favor male candidates
B) The system was trained on historical resumes submitted predominantly by men, causing it to penalize indicators of female identity
C) Amazon's legal team approved a discriminatory specification for the system
D) The system's accuracy was inadequate across all demographic groups, not just women
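
For readers working through question 6, the following is a minimal sketch, on synthetic data, of how discriminatory historical labels get absorbed by an otherwise neutral learner. The feature names and numbers are assumptions made for illustration; this is not Amazon's actual system or data.

```python
# Synthetic illustration: a model trained on biased historical hiring
# decisions learns to penalize a gender proxy, with no malicious code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

skill = rng.normal(size=n)                     # true job-relevant signal
proxy = (rng.random(n) < 0.15).astype(float)   # e.g. "women's chess club" on a resume

# Historical labels: past recruiters weighed skill but also discounted
# resumes containing the proxy term, so the labels encode discrimination.
hired = (skill - 1.0 * proxy + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The learned coefficient on the proxy is strongly negative, not because
# the proxy predicts ability, but because the historical labels penalized it.
print(dict(zip(["skill", "proxy"], model.coef_[0].round(2))))
```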


7. The "computer says no" defense refers to:

A) An AI system that refuses to answer certain types of questions
B) Using an AI system's decision as a shield against human accountability, claiming the AI made the choice
C) The technical limitations that prevent AI systems from explaining their decisions
D) A legal doctrine protecting AI companies from liability for automated decisions


8. Which of the following is NOT identified in this chapter as a mechanism through which AI creates accountability gaps?

A) Distributed causation
B) Technical opacity
C) Insufficient computing power
D) Novel legal categories


9. Automation bias is best defined as:

A) Discrimination embedded in AI training data that produces systematic errors
B) The tendency to assign too much credit to human engineers for AI system design
C) The cognitive tendency to over-rely on automated recommendations, even in the face of countervailing evidence
D) The bias introduced when AI systems are optimized for metrics that don't capture social welfare


10. Under current U.S. employment discrimination law, when an employer uses a third-party AI hiring tool that produces racially discriminatory outcomes:

A) The AI vendor is primarily liable because it built the system that discriminated
B) The employer bears liability because employers cannot delegate their non-discrimination obligations
C) Neither party is liable because AI-based discrimination is not covered by Title VII
D) Liability is automatically split equally between vendor and employer under comparative fault doctrine


11. A "drift failure" in AI occurs when:

A) An AI system is deliberately manipulated by adversarial attacks on its training data
B) An AI system is accurate at deployment but deteriorates as the world changes and diverges from its training environment
C) An AI system fails when used outside its designed geographic region
D) An AI system produces outputs that are inconsistent across repeated identical queries
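
As a companion to question 11, here is a minimal sketch of the kind of post-deployment monitoring that can catch drift before it causes harm. The feature, sample sizes, and alert threshold are assumptions chosen for the example.

```python
# Minimal drift-monitoring sketch: compare the distribution of a live
# input feature against the training distribution and flag divergence.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Feature distribution the model was trained on (e.g. applicant income).
train_income = rng.normal(loc=50_000, scale=10_000, size=10_000)
# Live inputs after the world has shifted away from training conditions.
live_income = rng.normal(loc=62_000, scale=14_000, size=2_000)

stat, p_value = ks_2samp(train_income, live_income)
if p_value < 0.01:
    print(f"Drift alert: live inputs diverge from training data (KS={stat:.3f})")
    # In practice this should trigger review or retraining rather than
    # letting the stale model keep making decisions.
```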


12. Which of the following structural accountability mechanisms is most directly analogous to the FDA's pre-market approval process for medical devices?

A) Post-deployment incident reporting
B) Mandatory insurance for AI deployers
C) Third-party audit requirements
D) Mandatory pre-deployment impact assessments with regulatory review


13. The distinction between "systemic failure" and "individual failure" in AI accountability matters primarily because:

A) Individual failures can be addressed by terminating or punishing individuals; systemic failures require changes to processes, incentives, and governance structures
B) Individual failures are always more serious than systemic failures
C) Systemic failures are subject to different legal standards than individual failures
D) Individual failures can be insured against; systemic failures cannot


14. Dennis Thompson's "problem of many hands" was originally developed in the context of:

A) AI ethics and algorithmic accountability
B) Nuclear power plant safety analysis
C) Public administration and political accountability
D) Corporate governance and board oversight


15. The EU AI Act's public database of high-risk AI systems is an example of which structural accountability mechanism?

A) Mandatory insurance
B) Incident reporting
C) Registration requirements
D) Algorithmic impact assessment


16. In the Amazon hiring algorithm case, which of the following parties would NOT typically be considered part of the AI value chain?

A) The engineers who built the model
B) The HR professionals who used its outputs
C) The job applicants whose applications were screened
D) The executives who set the company's hiring diversity goals before the AI was built


17. The argument for strict liability in AI accountability holds that:

A) Strict liability is fairer to AI companies because it eliminates unpredictable jury verdicts
B) Removing the need to prove negligence would address the evidentiary barriers plaintiffs face in AI harm cases and create stronger safety incentives
C) Strict liability should apply only to AI systems that use machine learning, not rule-based systems
D) AI companies should not face any liability if they comply with applicable industry standards


18. Which of the following best describes the concept of "regulatory capture"?

A) A regulator's successful enforcement action that captures documentary evidence of corporate wrongdoing
B) The process by which regulatory agencies come to serve the interests of the industries they regulate rather than the public
C) A legal doctrine allowing regulators to seize AI systems that pose imminent danger
D) The strategy by which technology companies capture market share through regulatory arbitrage


19. (Short Answer) A hospital deploys an AI diagnostic tool that performs significantly less accurately for patients over age 75 than for younger patients. The vendor documented this limitation in its model card, but the hospital did not review the model card before deployment. When an elderly patient is misdiagnosed and suffers harm, who bears responsibility? Apply the concept of deployer responsibility from Section 18.5 to this scenario.


20. (Short Answer) Explain why the Collingridge dilemma creates a structural bias in favor of under-regulating AI systems, and propose one mechanism that could help regulators navigate this dilemma effectively.


Answer Key: 1-B, 2-B, 3-B, 4-C, 5-B, 6-B, 7-B, 8-C, 9-C, 10-B, 11-B, 12-D, 13-A, 14-C, 15-C, 16-D, 17-B, 18-B. Questions 19–20 are short answer; see discussion guide for model responses.