Further Reading: Autonomous Systems and Moral Machines

The sources below provide deeper engagement with the themes introduced in Chapter 19. They are organized by topic and include a mix of foundational texts, empirical research, policy analyses, and accessible popular works. Annotations describe what each source covers and why it is relevant to the chapter's core questions.


Autonomous Vehicles: Ethics, Engineering, and Governance

Awad, Edmond, Sohan Dsouza, Richard Kim, et al. "The Moral Machine Experiment." Nature 563 (2018): 59-64. The full report of the MIT Moral Machine experiment — one of the largest studies of moral preferences ever conducted, with data from millions of participants in 233 countries and territories. The paper identifies three major cultural clusters with distinct moral preference patterns and documents both universal tendencies (saving more lives over fewer) and culturally variable preferences (age, status, and the relative treatment of pedestrians vs. passengers). Essential reading for understanding why cross-cultural moral variation makes it impossible to program a single "correct" moral framework into autonomous vehicles.

Stilgoe, Jack. Who's Driving Innovation? New Technologies and the Collaborative State. London: Palgrave Macmillan, 2020. Stilgoe examines the governance of autonomous vehicles in the UK and US, arguing that the dominant "innovate first, regulate later" approach systematically undervalues public safety and democratic participation. His analysis of how regulatory frameworks are shaped by industry lobbying rather than public deliberation is directly relevant to the chapter's discussion of Arizona's permissive testing environment and the Uber fatality.

Nyholm, Sven. Humans and Robots: Ethics, Agency, and Anthropomorphism. London: Rowman and Littlefield, 2020. A philosophical examination of the ethical relationships between humans and robots, covering moral agency, anthropomorphism, and the trolley problem applied to autonomous vehicles. Nyholm argues that the trolley problem, while philosophically interesting, is a poor guide to the ethics of autonomous vehicles — supporting the chapter's critique — and offers a more nuanced framework for evaluating the moral status of machines.

National Transportation Safety Board. "Collision Between Vehicle Controlled by Developmental Automated Driving System and Pedestrian, Tempe, Arizona, March 18, 2018." Accident Report NTSB/HAR-19/03. November 19, 2019. The full NTSB investigation report on the Uber-Herzberg fatality. The report provides exhaustive technical detail on the perception failure, the decision to disable automatic emergency braking, the safety operator's distraction, and the regulatory failures that contributed to the crash. It also includes safety recommendations for the NHTSA, Uber, and the state of Arizona. A primary source for Case Study 1 and essential reading for understanding how autonomous vehicle safety failures emerge from system-level interactions.


Lethal Autonomous Weapons and the Campaign to Stop Killer Robots

Scharre, Paul. Army of None: Autonomous Weapons and the Future of War. New York: W.W. Norton, 2018. The most comprehensive and accessible book on autonomous weapons, written by a former U.S. Department of Defense official who helped draft the original DOD policy on autonomous weapons. Scharre navigates the technical, ethical, legal, and strategic dimensions with exceptional clarity, presenting all sides of the debate while ultimately arguing for the importance of human control. Essential reading for any student seeking deeper engagement with the LAWS debate.

Asaro, Peter. "On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making." International Review of the Red Cross 94, no. 886 (2012): 687-709. Asaro provides the most rigorous philosophical argument for a preemptive ban on autonomous weapons, grounding his analysis in international humanitarian law and the concept of meaningful human control. He argues that IHL's requirements of distinction, proportionality, and precaution inherently require human moral judgment that cannot be replicated by machines. The paper was instrumental in shaping the Campaign to Stop Killer Robots' intellectual framework.

International Committee of the Red Cross. "Autonomous Weapon Systems: Implications of Increasing Autonomy in the Critical Functions of Weapons." ICRC Expert Meeting Report, March 2016. The ICRC's expert analysis of the legal and ethical implications of autonomous weapons under international humanitarian law. The report examines how IHL principles apply to weapons with autonomous targeting functions and concludes that some level of human control over critical functions must be retained. The ICRC's 2021 position paper, calling for new legally binding rules, builds on this analysis.

Heyns, Christof. "Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions: Lethal Autonomous Robotics." United Nations General Assembly, A/HRC/23/47, April 9, 2013. The first UN report to address LAWS directly, calling for a moratorium on the testing, production, assembly, transfer, acquisition, deployment, and use of lethal autonomous robotics until an internationally agreed framework is established. Heyns's report was a catalyst for the CCW process and remains a foundational policy document in the LAWS governance debate.


Moral Agency, Philosophy of Mind, and Machine Ethics

Matthias, Andreas. "The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata." Ethics and Information Technology 6, no. 3 (2004): 175-183. The paper that introduced the concept of the "responsibility gap" — the condition that arises when autonomous learning systems take actions that their developers could not have predicted and their operators did not authorize. Matthias argues that traditional responsibility frameworks are inadequate for learning systems because the system's behavior emerges from training rather than explicit programming. This paper is foundational to the chapter's discussion of accountability in autonomous systems.

Floridi, Luciano, and Jeff W. Sanders. "On the Morality of Artificial Agents." Minds and Machines 14, no. 3 (2004): 349-379. Floridi and Sanders propose a framework for evaluating whether artificial agents can be moral agents, arguing that moral agency does not require consciousness but does require interactivity, autonomy, and adaptability. Their "levels of abstraction" approach provides a nuanced alternative to the binary question of "can machines be moral agents?" and is useful for understanding the philosophical positions presented in Section 19.4.

Wallach, Wendell, and Colin Allen. Moral Machines: Teaching Robots Right from Wrong. Oxford: Oxford University Press, 2009. A pioneering exploration of whether and how machines can be designed to make ethical decisions. Wallach and Allen survey approaches from top-down rule-based systems to bottom-up learning systems and hybrid approaches, concluding that no existing approach can replicate human moral judgment. The book is relevant to the chapter's discussion of whether moral machines are possible and what follows from the answer.


Human Factors, Automation, and Oversight

Parasuraman, Raja, and Dietrich Manzey. "Complacency and Bias in Human Use of Automation: An Attentional Integration." Human Factors 52, no. 3 (2010): 381-410. The most frequently cited review article on automation complacency and automation bias. Parasuraman and Manzey synthesize decades of research showing that humans consistently over-trust automated systems, fail to detect automation errors, and defer to automated recommendations even when they have contradictory evidence. Their attentional framework explains why human oversight degrades as automation reliability increases. Essential for understanding the vigilance problem discussed in the chapter.

Endsley, Mica R. "From Here to Autonomy: Lessons Learned from Human-Automation Research." Human Factors 59, no. 1 (2017): 5-27. Endsley reviews the human factors literature on automation and draws lessons for the design of autonomous systems. She argues that increasing autonomy does not simplify the human's role but makes it harder, because the human must now understand and monitor a complex system rather than simply perform a task. Her analysis directly supports the chapter's argument that governance demands increase, rather than decrease, with autonomy.


The EU AI Act and Risk-Based Governance

European Parliament and Council of the European Union. "Regulation (EU) 2024/1689 Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)." Official Journal of the European Union, 2024. The full text of the EU AI Act — the most comprehensive AI governance legislation in the world. The Act's risk-based classification system, its requirements for high-risk AI (including human oversight, transparency, accuracy, and robustness), and its provisions for autonomous systems are directly relevant to Section 19.6. The Act also addresses general-purpose AI models, AI in law enforcement, and biometric surveillance. While the full text is extensive, the preamble and Articles 1-15 are accessible and provide a clear overview.

Veale, Michael, and Frederik Zuiderveen Borgesius. "Demystifying the Draft EU Artificial Intelligence Act." Computer Law Review International 22, no. 4 (2021): 97-112. A clear, accessible analysis of the EU AI Act's structure and implications, written by two leading European technology law scholars. Veale and Zuiderveen Borgesius explain the risk classification system, evaluate the Act's strengths and weaknesses, and identify ambiguities that will require judicial interpretation. Useful for students who want to understand the Act without reading the full legislative text.

Smuha, Nathalie A. "From a 'Race to AI' to a 'Race to AI Regulation': Regulatory Competition for Artificial Intelligence." Law, Innovation and Technology 13, no. 1 (2021): 57-84. Smuha examines the global regulatory competition around AI governance — with the EU, US, China, and other jurisdictions pursuing different approaches. Her analysis of how regulatory competition shapes AI governance is relevant to the chapter's discussion of Arizona's permissive autonomous vehicle testing environment and the broader question of whether governance should be harmonized internationally or allowed to vary across jurisdictions.


These readings are starting points, not endpoints. As Parts 4-7 examine governance, regulation, corporate responsibility, and future challenges, the frameworks introduced in this chapter — autonomy levels, meaningful human control, and the responsibility gap — will be tested and refined.