Chapter 37: Exercises — Autonomous Weapons and Military AI
25 Exercises for Business and Policy Professionals
Foundations and Definitions
Exercise 1: The Autonomy Spectrum Applied
Research five specific real-world military weapons systems or programs and place each on the human-in-the-loop / human-on-the-loop / fully autonomous spectrum. For each, justify your classification based on publicly available technical descriptions of how targeting decisions are made. Systems to consider might include: the Phalanx close-in weapons system; the Iron Dome missile defense system; a Predator drone operated by remote pilots; the Kargu-2; and a hypothetical AI-enabled artillery system that auto-adjusts targeting parameters. What definitional challenges do you encounter in the classification process?
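One way to organize your worksheet for this exercise is as a small classification table. The sketch below is illustrative only: the example entries are placeholders for your own research, not settled classifications, and the Kargu-2 entry reflects only the contested UN Panel of Experts account discussed later in this chapter.

```python
from enum import Enum

class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = "human authorizes each engagement"
    HUMAN_ON_THE_LOOP = "human supervises and can intervene"
    FULLY_AUTONOMOUS = "no real-time human involvement in engagement"

# system name -> (tentative classification, one-line justification)
# Replace these placeholder entries with your own researched findings.
worksheet = {
    "Predator drone": (ControlMode.HUMAN_IN_THE_LOOP,
                       "remote pilot makes each targeting decision"),
    "Phalanx CIWS": (ControlMode.HUMAN_ON_THE_LOOP,
                     "engages incoming threats automatically under human supervision"),
    "Kargu-2 (per UN Panel account)": (ControlMode.FULLY_AUTONOMOUS,
                                       "reported to engage without real-time authorization"),
}

for system, (mode, why) in worksheet.items():
    print(f"{system}: {mode.name} -- {why}")
```

Forcing each system into exactly one enum value is itself instructive: the systems that resist clean classification are the ones that expose the definitional challenges the exercise asks about.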
Exercise 2: Meaningful Human Control Requirements
Draft a definition of "meaningful human control" for use in an international treaty on autonomous weapons. Your definition should be: (a) specific enough to exclude nominally-human-controlled systems where genuine human judgment is absent; (b) clear enough to provide guidance to weapons engineers; (c) flexible enough to accommodate legitimate operational requirements including time-critical defensive scenarios; and (d) verifiable in principle. Research the ICRC's articulation of meaningful human control criteria and compare it to your draft.
Exercise 3: The Definition Problem
Research the definitional positions taken by the United States, Russia, China, the United Kingdom, and Germany in CCW discussions on LAWS. For each state, describe: (a) how they define autonomous weapons; (b) whether their definition would include or exclude specific known systems; (c) what national military interests the definition appears designed to protect. Write a 600-word analysis of how definitional disagreement has impeded progress in CCW negotiations.
Exercise 4: IHL Primer and Application
Research the four core principles of international humanitarian law relevant to targeting: distinction, proportionality, precaution, and military necessity. For each principle, explain: (a) what it requires; (b) what information a decision-maker needs to comply with it; and (c) what technical and ethical challenges an autonomous weapons system would face in complying with it. Based on your analysis, what is your assessment of whether current AI systems can reliably comply with IHL in complex operational environments?
Autonomous Weapons Governance
Exercise 5: The Kargu-2 Legal Analysis
Based on the Kargu-2 case study, conduct a legal analysis: Assuming the UN Panel of Experts' account is accurate — that the Kargu-2 engaged targets autonomously in Libya without real-time human authorization — what IHL violations may have occurred? Who bears legal responsibility under current international law? What additional legal mechanisms would be necessary to address accountability gaps? What specific legal standards should govern the assessment?
Exercise 6: Treaty Design Exercise
Design the core provisions of a binding international treaty on lethal autonomous weapons systems. Your treaty should address: (a) definitions that clearly identify prohibited systems; (b) what level of autonomy is permissible; (c) what human control requirements must be met; (d) export control obligations; (e) verification and enforcement mechanisms; (f) state liability for IHL violations by deployed autonomous systems; and (g) a timeline for implementation. Research the Mine Ban Treaty and the Chemical Weapons Convention as structural models.
Exercise 7: The Ottawa Process Scenario
The Ottawa Process achieved a ban on anti-personnel mines without the participation of major military powers. Assess whether a similar process could achieve a ban on fully autonomous weapons systems. Consider: (a) which states would need to be in a founding coalition; (b) what roles civil society organizations would need to play; (c) what humanitarian evidence would need to be mobilized; (d) whether operating outside the CCW framework is realistic; and (e) what the likely effectiveness of a treaty without U.S., Russian, and Chinese participation would be.
Exercise 8: The Campaign to Stop Killer Robots — Strategic Assessment
Review the publicly available materials from the Campaign to Stop Killer Robots. Assess the campaign's strategy: (a) What is its theory of change — how does it believe its advocacy will produce binding governance? (b) What coalitions has it built? (c) What evidence has it marshaled? (d) What obstacles has it faced? (e) What should it do differently? Write a 750-word strategic assessment from the perspective of a nonprofit strategy consultant.
Project Maven and Tech-Military Contracting
Exercise 9: Google's AI Principles — Consistency Analysis
Research Google's AI Principles (published 2018) and three Google government or military contracts or projects that have been publicly discussed since 2018 (Project Nimbus is one; research others). For each contract or project, analyze: (a) whether it appears consistent with the AI Principles; (b) what arguments Google might make for consistency; (c) what arguments critics make for inconsistency; and (d) your own assessment. What does your analysis reveal about the enforceability of voluntary AI principles?
Exercise 10: The Substitution Effect
Google's withdrawal from Project Maven was followed by Microsoft and Palantir engaging more extensively with military AI. This is the "substitution effect" — principled non-participation by one actor is replaced by less constrained participation by another. Research whether the substitution in military AI contracting has resulted in better or worse AI safety, ethics review, or human control requirements than would have resulted from Google's continued participation. Is there evidence that Palantir, Anduril, or Microsoft impose comparable ethical constraints on their military AI work? What does the evidence suggest about the governance value of principled non-participation?
Exercise 11: Tech Company Military AI Policy Design
You are asked to advise a major technology company (with frontier AI capabilities) on designing a policy for military AI contracting. The company wants to participate in some military AI work but wants to do so with clear ethical boundaries. Design a policy that addresses: (a) categories of military AI the company will and will not build; (b) what human control requirements must be present in systems the company contributes to; (c) what oversight mechanisms the company will require for military contracts; (d) what employee voice mechanisms will apply to military AI decisions; (e) how the policy will be enforced and publicly reported. Compare your policy to Google's AI Principles and assess what improvements you have made.
Exercise 12: Individual Engineer Decision
You are a senior machine learning engineer at a technology company. Your team has been assigned to a project that, as you understand it, involves improving the target classification accuracy of a drone surveillance system used by the U.S. military in counterterrorism operations. You know that the system's outputs are used to support targeting decisions but that there is human review of each targeting decision. You have ethical concerns about the project. Walk through your ethical decision-making process: What additional information would you seek? What ethical frameworks are relevant? What options do you have? What would you do, and why?
Nuclear AI and Strategic Stability
Exercise 13: The Petrov Incident Analysis
Research the 1983 Petrov incident in detail: what happened, why the Soviet early warning system generated a false alarm, how Petrov made his decision not to escalate, and what the outcome might have been if he had escalated or if an automated system had been in place. Write a case analysis examining: (a) what human judgment elements were critical to Petrov's correct decision; (b) whether an AI system designed to emulate Petrov's reasoning could have made the same decision; (c) what this incident reveals about the risks of AI in nuclear early warning; and (d) what governance requirements it supports.
Exercise 14: Nuclear AI Red Lines
Research the public statements of nuclear security experts, arms control organizations, and governments on the appropriate role of AI in nuclear command and control. Based on your research, develop a set of specific "red lines" — commitments that nuclear-armed states should make about what AI will not do in nuclear decision systems. For each red line, explain the risk it is designed to prevent and why it is necessary.
Exercise 15: Strategic Stability Analysis
Research what experts mean by "strategic stability" in the nuclear context — the conditions under which neither nuclear-armed state has an incentive to use nuclear weapons first. Analyze how AI capabilities in early warning, decision support, and command and control could affect strategic stability: (a) What AI capabilities could increase stability? (b) What AI capabilities could decrease stability? (c) What AI capabilities create the most uncertainty? Assess the net effect of current AI development trends on nuclear strategic stability.
AI Surveillance in Conflict
Exercise 16: The Project Nimbus Analysis
Research Project Nimbus — the Google and Amazon cloud computing contract with the Israeli government — and the controversy surrounding it. Analyze: (a) What services are provided under Project Nimbus? (b) What concerns have been raised about the contract's use in military operations? (c) How have Google and Amazon responded? (d) What governance mechanisms — contractual, regulatory, or ethical — could address legitimate concerns while allowing legitimate government cloud computing? (e) How does Project Nimbus compare to Google's AI Principles?
Exercise 17: Surveillance Technology Export Governance
Research the export of surveillance technology (facial recognition, cell-phone monitoring, predictive policing AI) to governments that have used it for repressive purposes. Identify two specific cases (the Uyghur surveillance system is one; research another). For each, analyze: (a) What technology was exported? (b) By whom, and with what knowledge of intended use? (c) What export control laws applied? (d) What human rights law was implicated? (e) What governance mechanisms could have prevented the export? Propose a surveillance technology export control framework.
Exercise 18: AI-Assisted Targeting Ethics
Research publicly available reporting on AI-assisted targeting in contemporary conflicts. Based on available evidence, analyze: (a) What role does AI play in generating targeting recommendations? (b) What level of human review occurs? (c) What performance characteristics are reported for AI targeting systems (false positive rates, civilian casualty rates)? (d) What IHL questions does this raise? (e) What governance requirements should apply? Use the ICRC's framework for meaningful human control as a reference.
Global Governance and International Law
Exercise 19: CCW Negotiating Simulation
Conduct a classroom simulation of CCW negotiations on LAWS governance. Assign students to represent delegations from: the United States, China, Russia, Austria (advocate for prohibition), Brazil, the United Kingdom, the ICRC (observer), and the Campaign to Stop Killer Robots (NGO observer). Provide each delegation with a briefing on their stated position and national interests. Negotiate for a session and attempt to reach a consensus text. Debrief on: what compromises were possible, what positions were non-negotiable, and what this reveals about the obstacles to binding governance.
Exercise 20: IHL Compliance Assessment Framework
Design a pre-deployment assessment framework for evaluating whether an autonomous weapons system is capable of complying with IHL requirements. Your framework should assess: (a) the system's ability to discriminate between civilians and combatants under realistic operational conditions; (b) whether the system can perform the proportionality calculation; (c) what precautionary measures are built into the system; (d) what conditions — weather, civilian density, combatant behavior — cause the system's IHL compliance to degrade; and (e) what human control requirements must accompany deployment in various scenarios. How would this framework be applied in practice by a state procuring autonomous weapons?
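The five assessment dimensions in this exercise can be thought of as a structured checklist in which any unassessed or failed criterion blocks deployment. The sketch below is a minimal illustration of that structure, not an official ICRC or state framework; the criterion names and evidence descriptions are placeholders for the framework you design.

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    name: str
    evidence_required: str
    satisfied: bool = False  # conservative default: unproven = not satisfied

@dataclass
class IHLAssessment:
    system: str
    criteria: list = field(default_factory=list)

    def deployable(self) -> bool:
        # Deployment is supportable only if every criterion has been
        # affirmatively satisfied; anything unproven blocks deployment.
        return all(c.satisfied for c in self.criteria)

# Placeholder criteria mirroring dimensions (a)-(e) of the exercise.
assessment = IHLAssessment("hypothetical loitering munition", [
    Criterion("distinction", "field-tested discrimination under realistic clutter"),
    Criterion("proportionality", "demonstrated context-sensitive harm estimation"),
    Criterion("precaution", "abort and recall mechanisms verified"),
    Criterion("degradation", "documented conditions where compliance degrades"),
    Criterion("human control", "operator authority defined per deployment scenario"),
])

print(assessment.deployable())  # prints False: nothing yet demonstrated
```

The design choice worth noting is the conservative default: the burden of proof sits with the procuring state to demonstrate each criterion, rather than with reviewers to demonstrate a failure.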
Exercise 21: Accountability Framework Design
Design a legal framework for assigning responsibility when an autonomous weapon system causes civilian casualties or other IHL violations. Your framework should: (a) identify all potential responsible actors across the chain from designer to deployer; (b) establish criteria for assigning primary, secondary, and shared responsibility; (c) identify the legal mechanisms through which responsibility could be enforced; (d) address both criminal and civil liability; and (e) consider the international law dimension when the responsible actors are in different states. Compare your framework to the ICRC's guidance on accountability for autonomous weapons.
Tech Worker Ethics and Professional Responsibility
Exercise 22: Professional Ethics Codes and Military AI
Research the ethics codes of the IEEE (Institute of Electrical and Electronics Engineers) and the ACM (Association for Computing Machinery). Identify the provisions most relevant to engineers working on military AI. Assess whether the existing codes provide sufficient guidance for engineers facing decisions about: (a) working on autonomous weapons systems; (b) working on AI-assisted targeting; (c) working on military surveillance AI; and (d) working on nuclear AI systems. Propose amendments to professional codes to address military AI specifically.
Exercise 23: The Employee Petition Analysis
Research the text and outcome of the Google Project Maven employee petition (2018). Analyze: (a) What ethical arguments did the petition make? (b) What organizational risks did signatories take? (c) What outcome did the petition achieve? (d) What factors made the petition effective? (e) In what circumstances is collective employee action an appropriate governance mechanism for military AI ethics? Assess whether similar actions have occurred at other companies and with what outcomes.
Exercise 24: Personal Responsibility Scope
A software engineer at a cloud computing provider contributes to building a high-performance object storage system. That system is subsequently sold to a government defense department and used to store drone surveillance footage that is analyzed to support targeting decisions. The engineer had no knowledge of this application when building the storage system. Analyze: (a) Does this engineer bear any moral responsibility for downstream uses? (b) At what point — if any — in the chain from general infrastructure to specific weapons application does individual moral responsibility attach? (c) What due diligence obligations, if any, does an engineer have about the downstream uses of general-purpose tools they build?
Exercise 25: Democratic Governance of Military AI
The chapter argues that governance of military AI in democratic societies requires public deliberation, not just corporate policy and employee activism. Design a democratic governance process for military AI in a democracy of your choice. Your design should address: (a) what legislative oversight and authorization is required for military AI programs; (b) what public transparency about military AI programs is appropriate (recognizing legitimate national security confidentiality); (c) what role independent technical advisory bodies should play; (d) what public comment or deliberation processes apply to major military AI deployments; and (e) what judicial oversight is available. Assess the feasibility of your design in the current political environment.