Chapter 37: Further Reading — Autonomous Weapons and Military AI
International Humanitarian Law and Autonomous Weapons
1. International Committee of the Red Cross. (2021). ICRC Position on Autonomous Weapon Systems. The ICRC's formal position on LAWS governance, calling for binding international rules including prohibitions on unpredictable autonomous weapons and weapons that target humans based purely on behavioral profiles. The authoritative IHL institution's most detailed statement on autonomous weapons. Essential reading for understanding the IHL framework.
2. Human Rights Watch. (2012). Losing Humanity: The Case Against Killer Robots. The founding document of the Campaign to Stop Killer Robots, making the humanitarian and ethical case for preemptive prohibition of fully autonomous weapons. Remains the most influential NGO statement on autonomous weapons and provides the framework for subsequent advocacy.
3. Human Rights Watch and Harvard Law School International Human Rights Clinic. (2016). Making the Case: The Danger of Killer Robots and the Need for a Preemptive Ban. An updated and expanded legal and ethical analysis of autonomous weapons, addressing developments in LAWS technology and governance discussions since 2012. Offers more detailed legal analysis than the 2012 report.
4. Schmitt, M. N. (2013). Autonomous Weapon Systems and International Humanitarian Law: A Reply to the Critics. Harvard National Security Journal Features. The leading legal argument that autonomous weapons can, in principle, comply with IHL — a position taken seriously in the academic debate even by those who disagree. Provides the strongest version of the pro-LAWS-development legal argument.
5. Roff, H. (2016). Killing in War: Responsibility, Risk, and the Future of Warfare. In N. Bhuta et al. (Eds.), Autonomous Weapons Systems: Law, Ethics, Policy. Cambridge University Press. An analysis of how autonomous weapons change moral responsibility in warfare, from a political philosopher's perspective. The Cambridge UP volume is the leading scholarly anthology on LAWS governance.
Governance Frameworks and International Negotiations
6. United Nations Group of Governmental Experts on LAWS. (2023). Report of the 2023 Session. The formal report of the CCW's governmental experts group on LAWS, documenting state positions, discussion themes, and conclusions. The primary source for understanding the current state of international LAWS negotiations.
7. Campaign to Stop Killer Robots. (Annual). Country Views on Killer Robots. The Campaign's annually updated survey of state positions on LAWS governance, based on official statements in the CCW and other forums. The most accessible reference for tracking which governance positions each state supports.
8. United Nations Panel of Experts on Libya. (2021). Final Report of the Panel of Experts on Libya. S/2021/229. The UN Panel of Experts report that described the Kargu-2 incident. The relevant section (paragraph 63) is brief but has been widely cited in autonomous weapons governance discussions. Reading the primary source is valuable for understanding what the panel actually said versus how it has been reported.
9. Burt, B. (2023). Autonomous weapons and the laws of war: What states do (not) agree on. SIPRI Background Paper. A Stockholm International Peace Research Institute analysis of state positions in CCW negotiations, providing a current independent assessment of where international governance discussions stand. SIPRI is a leading independent institute on arms control and disarmament.
Project Maven and Tech-Military Contracting
10. Shane, S., & Wakabayashi, D. (2018, April 4). "The Business of War": Google Employees Protest Work for the Pentagon. The New York Times. The primary press account of the Google employee revolt against Project Maven, based on reporting that drew on internal documents and employee interviews. Documents the petition, the resignations, and the internal debate.
11. Metz, C. (2021). Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World. Dutton. A narrative history of the AI industry that includes substantial material on the development of AI at Google and the ethical debates that shaped its application. Provides context for understanding how the Maven episode fit into Google's broader AI development story.
12. Smith, B., & Browne, C. A. (2019). Tools and Weapons: The Promise and the Peril of the Digital Age. Penguin Press. Microsoft President Brad Smith's account of technology and society challenges, including chapters on military AI and the company's approach to government contracting. Provides the Microsoft perspective on the tech-military relationship.
Nuclear AI and Strategic Stability
13. Geist, E., & Lohn, A. J. (2018). How Might Artificial Intelligence Affect the Risk of Nuclear War? RAND Corporation. The most thorough analysis of AI's implications for nuclear risk from a major policy research institution. Covers early warning, decision support, and strategic stability dimensions. Required reading for understanding nuclear AI governance concerns.
14. Scharre, P. (2018). Army of None: Autonomous Weapons and the Future of War. W.W. Norton. The most comprehensive book-length treatment of autonomous weapons, covering technical capabilities, IHL, accountability, and governance. Written by a former U.S. Army Ranger and defense policy researcher, it provides both technical grounding and policy analysis. The essential book for this topic.
15. Nuclear Threat Initiative. (2022). AI and Nuclear Weapons: Building Confidence in AI Applications for Nuclear Security. NTI's assessment of AI applications in nuclear security contexts, including early warning and command and control, with recommendations for confidence-building measures and governance. Practical governance-focused analysis from a leading nuclear security organization.
AI Surveillance in Conflict
16. Amnesty International. (2023). Automated Apartheid: How Facial Recognition Fragments, Segregates and Controls Palestinians in the OPT. Amnesty International's investigation of AI facial recognition and surveillance systems deployed in the occupied Palestinian territories. Documents the use of surveillance AI in a conflict context with significant civilian population implications.
17. Mozur, P. (2019, May 22). One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority. The New York Times. Detailed investigative reporting on the AI surveillance system deployed against the Uyghur population in Xinjiang, China. Documents the scale, technical components, and human impact of the surveillance system.
18. +972 Magazine and Local Call. (2024). Lavender: The AI machine directing Israel's bombing spree in Gaza. The investigative report describing the Israeli military's Lavender AI targeting system. Disputed by the Israeli military; represents the primary public account of AI-assisted targeting in the Gaza conflict and the governance questions it raises.
Ethics and Responsibility
19. Brundage, M., et al. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Future of Humanity Institute. A comprehensive analysis of AI misuse risks including military applications, from researchers at the Future of Humanity Institute, OpenAI, and other institutions. Provides the broadest framework for thinking about dual-use AI risks.
20. IEEE. (2021). IEEE Standard Model Process for Addressing Ethical Concerns During System Design. IEEE Std 7000-2021. The IEEE's standard for building ethical considerations into the design of autonomous and intelligent systems, including provisions relevant to military AI applications. The primary professional standards document for AI engineers.
21. Marchetti, V., et al. (2022). Ethical frameworks for lethal autonomous weapons systems: A systematic review. AI & Society. A systematic academic review of ethical frameworks applied to LAWS, covering consequentialist, deontological, and virtue ethics approaches. Useful for understanding the philosophical literature on autonomous weapons ethics without diving into primary sources.