Chapter 37: Quiz — Autonomous Weapons and Military AI

20 Questions

Instructions: Select the best answer for each multiple-choice question. For short-answer questions, write two to four sentences.


Question 1. "Loitering munitions" are best described as:

A) Drones designed to hover over a target area indefinitely without engaging
B) Unmanned weapons systems that loiter in an area seeking targets before diving and detonating
C) Missile systems that wait for human authorization before engaging a target
D) Surveillance systems that loiter at high altitude to collect intelligence data

Correct Answer: B
Explanation: Loitering munitions — sometimes called "kamikaze drones" — are munitions that loiter in an area, using onboard sensors to search for targets, before diving and detonating on target. The Kargu-2 is a prominent example. The "loitering" refers to the munition's period of search before engagement, not prolonged hover without engagement.


Question 2. The "human-on-the-loop" configuration in autonomous weapons means:

A) A human must authorize each individual use of lethal force before it occurs
B) The system can autonomously engage targets, but a human monitors and can override
C) A human operates the weapon in real time with no autonomous capability
D) Multiple humans must jointly authorize each use of lethal force

Correct Answer: B
Explanation: Human-on-the-loop describes a configuration where the system makes autonomous targeting decisions but a human supervisor monitors the process and retains the ability to interrupt or override. This contrasts with human-in-the-loop (a human authorizes each individual engagement) and fully autonomous operation (no human oversight of individual engagements).


Question 3. The IHL principle of "distinction" requires that parties to a conflict:

A) Distinguish between different types of weapons systems in their arsenal
B) Distinguish between civilian persons and civilian objects on one side and combatants and military objectives on the other, directing attacks only against military objectives
C) Distinguish between proportionate and disproportionate uses of force
D) Maintain distinctions in command authority between offensive and defensive operations

Correct Answer: B
Explanation: Distinction is the foundational IHL targeting principle requiring combatants to discriminate between civilians (who are protected from direct attack) and combatants (who may be targeted), and between civilian objects and military objectives. It is arguably the most fundamental challenge for autonomous targeting systems.


Question 4. Project Maven was initially designed to use AI to:

A) Provide autonomous targeting recommendations for drone strikes
B) Analyze drone surveillance video to identify objects and activities of interest
C) Design more accurate munitions guidance systems
D) Monitor enemy communications for intelligence analysis

Correct Answer: B
Explanation: Project Maven's initial application was computer vision for analyzing drone surveillance footage — automatically identifying objects (vehicles, people, structures) in the enormous volume of drone video that military forces were collecting but lacked the human analyst capacity to review. It was not an autonomous targeting system.


Question 5. The UN Panel of Experts on Libya's 2021 report described the Kargu-2 as potentially significant because:

A) It was the largest drone ever used in combat
B) It allegedly engaged targets without requiring data connectivity between the operator and the munition, potentially constituting the first autonomous lethal engagement
C) It was manufactured in violation of the arms embargo on Libya
D) It demonstrated that drone swarms could overwhelm conventional air defense systems

Correct Answer: B
Explanation: The panel's report stated that the Kargu-2 had reportedly been "programmed to attack targets without requiring data connectivity between the operator and the munition" — language that describes autonomous target selection and engagement without real-time human authorization, which would constitute a novel threshold in autonomous lethal engagement.


Question 6. The "accountability gap" in autonomous weapons refers to:

A) The absence of financial compensation for civilian casualties from autonomous weapon strikes
B) The difficulty of identifying human actors bearing legal responsibility for specific targeting decisions made by autonomous systems
C) The gap between military claims about autonomous weapon accuracy and actual performance
D) The lack of accountability for arms dealers who supply autonomous weapons to conflict parties

Correct Answer: B
Explanation: The accountability gap describes the legal and moral difficulty created when an autonomous system makes a targeting decision and causes harm: the diffuse chain of designers, deployers, programmers, and commanders makes it hard to identify who bears responsibility for the specific decision that resulted in a specific harm, potentially undermining the enforcement of IHL.


Question 7. The IHL proportionality principle in targeting requires:

A) That both sides in a conflict use equivalent force levels
B) That civilian casualties expected from an attack not be excessive in relation to anticipated military advantage
C) That only proportional weapons be used — weapons whose destructive effect matches the military objective
D) That attacks be proportional to the provocation that preceded them

Correct Answer: B
Explanation: IHL proportionality requires a contextual weighing of expected civilian casualties and collateral damage against anticipated military advantage. An attack causing civilian harm disproportionate to the military benefit is prohibited, regardless of whether the target itself is a legitimate military objective. This weighing is one of the most contested challenges for autonomous systems.


Question 8. Google published its AI Principles in 2018 following:

A) The Cambridge Analytica scandal
B) Employee opposition to the company's Project Maven contract
C) A congressional inquiry into AI development practices
D) The EU AI Act's initial proposal requiring technology company AI policies

Correct Answer: B
Explanation: Google published its AI Principles in June 2018, following months of internal employee opposition to Project Maven, the departure of employees who resigned over the contract, and the decision not to renew the Maven contract. The Principles were a direct response to the employee activism and the governance questions the Maven episode raised.


Question 9. Which of the following companies was founded explicitly to build defense technology that some other Silicon Valley companies declined to pursue?

A) Google DeepMind
B) Palantir Technologies
C) Anduril Industries
D) OpenAI

Correct Answer: C
Explanation: Anduril Industries was founded in 2017 by Palmer Luckey (founder of Oculus VR) explicitly to build defense technology using frontier AI and software, positioning itself as an alternative to tech companies that declined certain military AI work. Palantir is also active in defense, but it was founded well before Maven; Anduril was the company most explicitly positioned in response to Maven-era dynamics.


Question 10. The 1983 Petrov incident is relevant to AI in nuclear command and control primarily because:

A) Petrov used an early AI system to correctly identify a false alarm in Soviet nuclear early warning
B) A Soviet early warning system incorrectly detected a U.S. ICBM launch, and human judgment — not automated response — prevented potential escalation
C) The incident demonstrated that nuclear early warning systems are perfectly reliable with appropriate human oversight
D) Petrov's decision was later found to have been incorrect, and the attack was real

Correct Answer: B
Explanation: The Petrov incident is relevant because Soviet early warning systems generated a false alarm of a U.S. ICBM launch, and Stanislav Petrov made the correct decision — based on intuition, contextual reasoning, and distrust of the sensor reading — not to escalate. An automated system responding to the same false sensor data might have triggered escalation. The incident demonstrates the value of human judgment in nuclear early warning scenarios.


Question 11. The Campaign to Stop Killer Robots advocates primarily for:

A) Improved safety testing of autonomous weapons before deployment
B) A binding international prohibition on fully autonomous weapons systems
C) Enhanced transparency requirements for autonomous weapons programs
D) National-level regulations on autonomous weapons development

Correct Answer: B
Explanation: The Campaign to Stop Killer Robots advocates for a preemptive binding international treaty prohibiting lethal autonomous weapons systems — systems that can select and engage targets without meaningful human control. It advocates not merely improved regulation of autonomous weapons but their outright prohibition.


Question 12. "Dual-use technology" in the military AI context refers to:

A) Technology that can be used by both military and civilian users simultaneously
B) Technology with both legitimate civilian applications and potential military weaponization
C) Technology that can be operated in both autonomous and human-controlled modes
D) Technology developed in both offensive and defensive variants

Correct Answer: B
Explanation: Dual-use technology is technology developed for legitimate civilian purposes that also has military applications (or vice versa). Computer vision, natural language processing, and machine learning platforms are dual-use technologies: developed and refined for commercial applications, but applicable to surveillance, targeting, and autonomous weapons.


Question 13. The Convention on Certain Conventional Weapons (CCW) discussions on autonomous weapons have, as of the mid-2020s:

A) Produced a binding treaty prohibiting fully autonomous weapons
B) Produced a non-binding political declaration on meaningful human control
C) Produced extensive discussions but no binding legal obligations on autonomous weapons
D) Failed to attract significant state participation and been suspended

Correct Answer: C
Explanation: The CCW has been the primary multilateral forum for autonomous weapons governance discussions since 2014. As of the mid-2020s, these discussions have not produced binding legal obligations, primarily due to definitional disagreements and the resistance of major military powers to constraints on their autonomous weapons programs.


Question 14. The Kargu-2 is manufactured by which company, in which country?

A) Baykar, Turkey
B) STM (Savunma Teknolojileri Mühendislik ve Ticaret A.Ş.), Turkey
C) Elbit Systems, Israel
D) AeroVironment, United States

Correct Answer: B
Explanation: The Kargu-2 is manufactured by STM, a Turkish defense company. It is a rotary-wing loitering munition with autonomous target detection and engagement capabilities, as described in STM's marketing materials.


Question 15. The "liar's dividend" concept, introduced in Chapter 35, is relevant to autonomous weapons governance because:

A) Arms manufacturers can use AI to fabricate evidence of weapons system performance
B) The existence of deepfake technology allows states to deny authentic evidence of autonomous weapons violations, claiming footage is fabricated
C) Military AI systems can generate false intelligence that triggers autonomous engagement
D) AI-generated disinformation campaigns can justify autonomous weapons use

Correct Answer: B
Explanation: The liar's dividend — the ability to dismiss authentic video evidence as AI-generated — is relevant to autonomous weapons accountability because video evidence of an autonomous weapon committing an IHL violation could be dismissed by the deploying state as a deepfake. This undermines the evidentiary basis for accountability that international law requires.


Question 16. Short Answer: What specific concern do nuclear security experts raise about AI in nuclear early warning systems, beyond general AI reliability concerns?

Model Answer: Experts are specifically concerned about adversarial manipulation: an adversary who understands the AI system's decision logic could generate false attack signatures — spoofed sensor data, electronic deception — designed to trigger the system's escalation assessment. Human judgment is generally more robust to novel adversarial inputs than AI systems, which can be systematically fooled by adversarial techniques that need not reproduce an actual attack signature, only produce inputs that fool the model. This creates an asymmetric vulnerability that does not exist to the same degree with human-reviewed early warning assessments.
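The mechanism behind "inputs that fool the model" can be illustrated with a toy linear classifier. This is a minimal sketch with entirely hypothetical numbers, not a model of any real early-warning system: an adversary who knows the model's weights perturbs each sensor channel slightly in the direction that raises the score (an FGSM-style step), flipping the classification without producing anything resembling a real attack signature.

```python
# Toy linear "threat classifier": score = dot(w, x) + b, flags an attack
# if score > 0. All weights and readings below are invented for illustration.

def score(w, x, b):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

w = [0.8, -1.2, 0.5, 1.1, -0.3, 0.9, -0.7, 0.4]        # model weights (assume the adversary has learned them)
x = [0.05, 0.02, -0.01, 0.03, 0.00, -0.02, 0.04, 0.01]  # benign sensor reading
b = -3.0                                                # bias keeps benign readings well below the alarm threshold

assert score(w, x, b) < 0  # no alarm on the benign reading

# Adversary nudges each channel in the direction that raises the score
# (the sign of the corresponding weight) -- a single gradient-sign step.
eps = 0.6
x_adv = [xi + eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

print("benign score:     ", round(score(w, x, b), 3))
print("adversarial score:", round(score(w, x_adv, b), 3))  # crosses above 0: false alarm triggered
```

The perturbed reading still looks like noise on each individual channel, yet the summed score crosses the alarm threshold; a human reviewer weighing context (as Petrov did) is not fooled by this kind of coordinated, signature-free manipulation in the same way.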


Question 17. Short Answer: What is the "substitution effect" in the context of tech company military AI ethics, and why is it a challenge for principled non-participation?

Model Answer: The substitution effect describes the dynamic in which one company's principled withdrawal from a military AI application is replaced by a less-constrained competitor providing the same or similar capability. When Google withdrew from Project Maven, Microsoft and Palantir provided comparable capabilities to the DoD. If principled non-participation does not prevent the application from occurring but only changes which company provides it, the governance benefit is limited to whatever ethical constraints the non-participating company would have imposed. If competitors impose fewer constraints, principled withdrawal may actually result in less-governed applications.


Question 18. Short Answer: Why does the proportionality calculation in IHL create a distinctive challenge for autonomous weapons systems that the distinction requirement does not?

Model Answer: The distinction requirement is a cognitive challenge — identifying whether a target is a civilian or a combatant — that could in principle be approached as a classification problem (though a very difficult one in complex operational environments). The proportionality calculation is a normative weighing problem: it requires assessing the moral and military value of different objectives, predicting uncertain outcomes, and making contextual ethical judgments about when civilian harm is acceptable. Critics argue this is constitutively a human ethical judgment — not merely a technically difficult classification — and that algorithmic "proportionality" would not be the same moral operation that IHL requires.


Question 19. Short Answer: What verification challenge distinguishes autonomous weapons governance from nuclear weapons arms control, and why does it matter?

Model Answer: Nuclear weapons arms control can rely on physical verification — highly enriched uranium, plutonium, and warheads are detectable through technical means including satellite monitoring, radiation detection, and on-site inspection. Autonomy is a software characteristic: whether a drone engaged a target autonomously or with human authorization cannot be determined from external observation of the drone or its physical remains. This makes treaty verification extremely challenging — a state could certify that its systems require human control while operating them autonomously in the field, with little risk of detection. Governance frameworks must therefore rely on transparency measures, confidence-building, and incident reporting rather than physical verification.


Question 20. Short Answer: The chapter argues that autonomous weapons governance requires public democratic deliberation rather than primarily relying on corporate self-governance. What is the strongest argument for this position, and what is the strongest counterargument?

Model Answer: The strongest argument for public deliberation is that decisions about what lethal force capabilities a democracy's military develops and deploys are fundamentally political and ethical decisions that belong in the public domain — they involve values about war, proportionality, accountability, and democratic control of military force that cannot appropriately be decided in commercial companies' boardrooms or through employee petitions. The strongest counterargument is that democratic processes are slow, poorly equipped to evaluate technical AI matters, and subject to capture by populist pressures, while technology companies with deep expertise and reputational incentives may make more informed and constrained decisions than democratic processes would produce — particularly for classified military programs, where public deliberation faces inherent limits.