Exercises: Autonomous Systems and Moral Machines

These exercises progress from concept checks to challenging applications. Estimated completion time: 3-4 hours.

Difficulty Guide:

  • ⭐ Foundational (5-10 min each)
  • ⭐⭐ Intermediate (10-20 min each)
  • ⭐⭐⭐ Challenging (20-40 min each)
  • ⭐⭐⭐⭐ Advanced/Research (40+ min each)


Part A: Conceptual Understanding ⭐

Test your grasp of core concepts from Chapter 19.

A.1. Using the SAE International framework described in Section 19.1.1, explain the six levels of driving automation (Levels 0-5). In your own words, describe what happens to the human role at each level. At which level does the most significant governance transition occur, and why?
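For reference while answering A.1, the official SAE J3016 level names can be listed as below; the exercise still asks you to explain, in your own words, the human role at each level.

```python
# SAE J3016 level names (the names are standard; the explanations are yours to write).
SAE_LEVELS = {
    0: "No Driving Automation",
    1: "Driver Assistance",
    2: "Partial Driving Automation",
    3: "Conditional Driving Automation",
    4: "High Driving Automation",
    5: "Full Driving Automation",
}

for level, name in SAE_LEVELS.items():
    print(f"Level {level}: {name}")
```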

A.2. Section 19.2.1 argues that the trolley problem is a "misleading guide to the actual ethics of autonomous vehicles." Summarize three reasons the chapter gives for this claim. Do you find them convincing?

A.3. Define the following three oversight models as described in Section 19.5: human-in-the-loop, human-on-the-loop, and human-out-of-the-loop. For each model, give one example of a domain where that level of oversight would be appropriate and explain why.

A.4. Section 19.3 discusses lethal autonomous weapons systems (LAWS). Explain the distinction between a weapon system with autonomous targeting capabilities and a remotely piloted weapon operated by a human. Why does this distinction matter for ethical and legal analysis?

A.5. What is the "vigilance problem" as described in Section 19.1.2? Why is it particularly relevant at SAE Level 3 autonomy? What does it suggest about the governance assumptions built into Level 3 systems?

A.6. Section 19.4 examines the MIT Moral Machine experiment. Describe the experiment's methodology and identify at least two findings about cross-cultural variation in moral preferences. Then explain why the chapter argues that Moral Machine results should not be directly encoded into autonomous vehicle software.

A.7. Explain the concept of "meaningful human control" as discussed in Section 19.5. Why is this concept preferred by some scholars and policymakers over the simpler phrase "human oversight"?


Part B: Applied Analysis ⭐⭐

Analyze scenarios, arguments, and real-world situations using concepts from Chapter 19.

B.1. Consider the following scenario:

A hospital deploys a diagnostic AI system that analyzes chest X-rays and flags potential lung cancers for radiologist review. The system operates at the equivalent of SAE Level 2 — it produces recommendations, but a human radiologist makes the final diagnosis. Over six months, the system has flagged 1,200 cases. In 1,180 of them, the radiologist agreed with the system's assessment outright; in the remaining 20, the radiologist initially hesitated but ultimately deferred to the system. The radiologist has never overridden the system.

Apply the vigilance problem and automation bias concepts from Sections 19.1.2 and 19.5 to this scenario. Is the radiologist meaningfully "in the loop"? What governance risks does this pattern reveal? Propose two measures to ensure genuine human oversight.

B.2. Eli raises a concern in Section 19.2.2 about the testing of autonomous vehicles: "When autonomous vehicles are deployed in my neighborhood, will they be tested as rigorously in Black neighborhoods as in white ones? Will the training data include enough examples of Black pedestrians in different lighting conditions?"

Research has documented that some computer vision systems perform less accurately on darker skin tones. Using concepts from Chapters 14 (bias), 15 (fairness), and 19, analyze the equity dimensions of autonomous vehicle deployment. Who bears the risks? Who captures the benefits? How should testing and deployment be governed to address Eli's concerns?

B.3. Section 19.3 presents ethical arguments for and against lethal autonomous weapons systems (LAWS). Organize the following arguments into "for" and "against" categories, and for each, identify the ethical framework (utilitarian, deontological, or virtue ethics) it primarily draws on:

  • (a) LAWS could reduce civilian casualties by making more precise targeting decisions than stressed human soldiers
  • (b) Killing a human being is a decision of such moral weight that it should never be delegated to a machine
  • (c) LAWS could lower the political cost of war, making armed conflict more likely
  • (d) LAWS eliminate the possibility of soldiers committing war crimes driven by anger, fear, or revenge
  • (e) Machines cannot understand the moral significance of taking a life
  • (f) An arms race in autonomous weapons could destabilize global security

B.4. The EU AI Act (Section 19.6) classifies AI systems into risk categories: unacceptable risk, high risk, limited risk, and minimal risk. Classify the following systems into the appropriate EU AI Act risk category and justify your classification:

  • (a) A self-driving car operating on public roads
  • (b) A chatbot that helps customers select furniture
  • (c) An AI system that scores job applicants for interviews
  • (d) An AI system that determines eligibility for social welfare benefits
  • (e) An AI social scoring system used by a government to rate citizens' behavior
  • (f) A medical diagnostic AI that recommends cancer treatment protocols
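One way to organize your B.4 answers is a simple classification table. The sketch below is a hypothetical helper, not an authoritative ruling: the EU AI Act's four tiers are real, but the classifications are yours to supply.

```python
from enum import Enum

# The four risk tiers defined by the EU AI Act (Section 19.6).
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

# Fill in a RiskTier for each system (None = not yet classified),
# then add your written justification alongside each entry.
classifications = {
    "(a) self-driving car on public roads": None,
    "(b) furniture-selection chatbot": None,
    "(c) job-applicant scoring system": None,
    "(d) welfare-eligibility system": None,
    "(e) government social scoring system": None,
    "(f) cancer-treatment recommendation AI": None,
}

unclassified = [s for s, tier in classifications.items() if tier is None]
print(f"{len(unclassified)} of {len(classifications)} systems still to classify")
```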

B.5. Section 19.4 examines whether machines can be moral agents. The chapter presents three philosophical positions: (1) machines can be moral agents if they exhibit the right behavior, (2) machines cannot be moral agents because they lack consciousness and understanding, and (3) the question of moral agency is less important than the question of moral responsibility. For each position, construct the strongest argument you can. Then identify which position you find most persuasive and explain why.

B.6. Dr. Adeyemi states in Section 19.1.2: "Every increase in autonomy is also an increase in the governance demand." Apply this principle to the following trajectory:

A hospital starts with a Level 0 system (doctors read all scans manually). It transitions to Level 2 (AI assists, doctor decides). It then adopts Level 4 (AI diagnoses and initiates treatment in emergency settings, with a doctor available for consultation). At each transition, identify: (a) what new governance mechanisms are needed, (b) what new risks emerge, and (c) what new accountability questions arise.


Part C: Real-World Application Challenges ⭐⭐-⭐⭐⭐

These exercises ask you to investigate real-world autonomous systems.

C.1. ⭐⭐ Autonomous Vehicle Policy Analysis. Research the autonomous vehicle regulations in your state or country. Identify: (a) what level of autonomy is currently permitted on public roads, (b) what safety testing is required, (c) who bears liability in the event of an accident involving an autonomous vehicle, and (d) what reporting requirements exist. Write a one-page assessment of whether the regulations are adequate given the risks identified in this chapter.

C.2. ⭐⭐ The Moral Machine Exercise. Visit the MIT Moral Machine website (moralmachine.net) and complete at least 20 scenarios. Record your choices. Then analyze your own moral preferences: Did you prioritize the young over the old? Passengers over pedestrians? More lives over fewer? Were your choices consistent, or did they vary? Write a one-page reflection connecting your choices to the philosophical frameworks discussed in Section 19.4.

C.3. ⭐⭐⭐ EU AI Act Classification. Select three AI systems you interact with regularly (e.g., a recommendation algorithm, a navigation system, a grading tool, a health monitoring app). For each, classify the system under the EU AI Act's risk framework (Section 19.6). Explain your classification, identify what obligations the EU AI Act would impose on the system's provider, and evaluate whether those obligations are proportionate to the risks. Present your findings in a structured table with analysis.

C.4. ⭐⭐⭐ Human Oversight Audit. Identify an organization that uses an AI system with a human-in-the-loop or human-on-the-loop oversight model (e.g., a hospital using diagnostic AI, a bank using algorithmic credit decisions, or a content moderation system). Through publicly available information (policy documents, news articles, academic research), evaluate: (a) whether the human oversight is genuine or perfunctory, (b) what evidence exists for automation bias, and (c) what institutional mechanisms support meaningful human control. Write a two-page assessment.


Part D: Synthesis & Critical Thinking ⭐⭐⭐

These questions require you to integrate multiple concepts from Chapter 19 and think beyond the material presented.

D.1. The chapter argues that the trolley problem dominates public discourse about autonomous vehicles but "individualizes a structural problem" (Section 19.2.1). Write a 400-600 word essay that identifies the structural ethical questions about autonomous vehicles — questions about infrastructure, regulation, equity, labor, and community consent — and argues for why these questions deserve more attention than the trolley problem. Use at least two specific examples.

D.2. The concept of "meaningful human control" (Section 19.5) assumes that human oversight is both possible and desirable. Challenge this assumption. Under what conditions might human oversight actually reduce the safety or fairness of an autonomous system? When might removing the human from the loop be the ethically superior choice? Construct your argument using specific examples from the chapter.

D.3. Section 19.6 describes the EU AI Act's risk-based framework. A critic argues: "Risk classification is inherently subjective and politically negotiated. The same system might be 'high risk' in one society and 'minimal risk' in another. A risk-based framework gives the illusion of objectivity while actually encoding political choices." Evaluate this critique. Is it fair? If so, does it invalidate the risk-based approach, or does it simply mean the approach must be implemented with appropriate transparency?

D.4. This chapter is the final chapter of Part 3 (Algorithmic Systems and AI Ethics). Write a synthesis (300-500 words) that connects the themes of bias (Chapter 14), fairness (Chapter 15), transparency (Chapter 16), accountability (Chapter 17), generative AI (Chapter 18), and autonomy (Chapter 19) into a coherent picture. What is the central argument of Part 3? Where are the tensions? What governance challenges remain unresolved?


Part E: Research & Extension ⭐⭐⭐⭐

These are open-ended projects for students seeking deeper engagement. Each requires independent research beyond the textbook.

E.1. The Uber Fatality: A Deep Dive. Research the March 2018 Uber self-driving car fatality in Tempe, Arizona (which is also the subject of Case Study 1 in this chapter). Go beyond the case study by reviewing: (a) the NTSB investigation report, (b) Uber's internal safety culture as documented in subsequent reporting, (c) the legal proceedings and settlement, and (d) the impact on autonomous vehicle regulation in Arizona and nationally. Write a 1,000-word report that connects the incident to the governance frameworks discussed in this chapter and proposes specific regulatory reforms.

E.2. Autonomous Weapons Governance. Research the international governance debate over lethal autonomous weapons systems, focusing on the United Nations Convention on Certain Conventional Weapons (CCW) discussions. Write a report (800-1,200 words) covering: (a) the positions of major military powers (the US, Russia, China, the UK, France), (b) the proposals of the Campaign to Stop Killer Robots, (c) the current status of negotiations, and (d) the obstacles to a binding international treaty. Evaluate whether a treaty is achievable and, if not, what alternative governance mechanisms might be effective.

E.3. Moral Machine Across Cultures. Research the full findings of the MIT Moral Machine experiment (Awad et al., 2018) — the largest study of cross-cultural moral preferences ever conducted. Write an analysis (600-1,000 words) covering: (a) the methodology and scale of the study, (b) the three major cultural "clusters" identified and how they differ, (c) the study's findings about universal vs. culturally variable moral preferences, and (d) the implications for designing autonomous systems deployed across cultures. Critically evaluate whether crowd-sourced moral preferences should inform AI design.


Solutions

Selected solutions are available in appendices/answers-to-selected.md.