Case Study: Autonomous Weapons and the Campaign to Stop Killer Robots

"Fully autonomous weapons would be the third revolution in warfare, after gunpowder and nuclear arms." — Open letter signed by thousands of AI researchers, 2015

Overview

In April 2013, a coalition of non-governmental organizations launched the Campaign to Stop Killer Robots — a global effort to preemptively ban fully autonomous weapons systems before they are developed and deployed. The campaign's central demand: that a human being must always make the final decision to use lethal force, and that this requirement must be enshrined in international law.

More than a decade later, the debate over lethal autonomous weapons systems (LAWS) remains one of the most consequential governance challenges at the intersection of technology, ethics, and international security. Discussions at the United Nations Convention on Certain Conventional Weapons (CCW) have produced years of deliberation but no binding treaty. Major military powers continue to invest billions in autonomous weapons development. And the underlying technology — AI-powered targeting, autonomous navigation, and sensor-based decision-making — advances faster than governance can follow.

This case study examines the campaign, the arguments on both sides, the obstacles to international governance, and what the debate reveals about the fundamental questions of moral agency and human control explored in this chapter.

Skills Applied:

- Evaluating ethical arguments for and against LAWS (Section 19.3)
- Analyzing the concept of meaningful human control (Section 19.5)
- Assessing international governance challenges for emerging military technologies
- Connecting the autonomous weapons debate to broader themes of accountability and moral agency


What Are Lethal Autonomous Weapons Systems?

Definition and Scope

There is no universally agreed definition of LAWS, which is itself a governance challenge. The International Committee of the Red Cross (ICRC) defines an autonomous weapon system as one that "can select and attack targets without human intervention." The Campaign to Stop Killer Robots uses a broader definition: any weapon system that "would select and engage targets without meaningful human control."

The distinction between "without human intervention" and "without meaningful human control" is significant. A weapon might technically involve a human who presses a button to authorize a mission — but if the human has no real understanding of what targets the system will select, no ability to override specific engagements in real time, and no capacity to evaluate whether each engagement complies with international humanitarian law, then the human control is not meaningful.

The Autonomy Spectrum in Weapons

Like autonomous vehicles, autonomous weapons exist on a spectrum (a simple taxonomy is sketched in code after these lists):

Current systems with autonomous functions:

- Missile defense systems (e.g., the U.S. Phalanx CIWS, Israel's Iron Dome) that automatically detect and intercept incoming projectiles. These operate in environments where the speed of engagement (milliseconds) makes human decision-making impossible, and the targets are objects (missiles, rockets), not people.
- Loitering munitions (e.g., Israel's Harop, Turkey's Kargu-2) that can autonomously search an area and strike targets matching pre-defined signatures. These systems occupy a gray area: they are often described as having "human-on-the-loop" oversight, but the human may be far from the engagement and unable to verify target identity in real time.
- Armed drones operated remotely by human pilots (e.g., the U.S. MQ-9 Reaper). These are not autonomous in the targeting sense (a human makes the decision to fire), but they introduce geographic and psychological distance between the decision-maker and the act of killing.

Anticipated future systems:

- Autonomous combat drones that can identify, track, and engage human targets without real-time human authorization.
- Autonomous submarine and surface vessel systems capable of engaging enemy ships or submarines independently.
- Swarm systems: networks of dozens or hundreds of autonomous drones coordinating attacks without individual human control of each unit.
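To make the spectrum concrete, here is a minimal Python sketch that models the examples above against the human-in-the-loop / on-the-loop / out-of-the-loop taxonomy commonly used in this debate. The class names and the level assigned to each system are illustrative readings of the discussion in this section, not official designations.

```python
# Illustrative sketch of the weapons-autonomy spectrum. Level assignments
# are assumptions drawn from the descriptions above, not official doctrine.
from dataclasses import dataclass
from enum import Enum


class HumanRole(Enum):
    IN_THE_LOOP = "human authorizes each engagement"
    ON_THE_LOOP = "human supervises and may override"
    OUT_OF_THE_LOOP = "no real-time human involvement"


@dataclass
class WeaponSystem:
    name: str
    human_role: HumanRole
    targets_people: bool  # objects (missiles, rockets) vs. human targets


EXAMPLES = [
    WeaponSystem("Phalanx CIWS (missile defense)", HumanRole.ON_THE_LOOP, False),
    WeaponSystem("MQ-9 Reaper (remotely piloted)", HumanRole.IN_THE_LOOP, True),
    WeaponSystem("Kargu-2 (loitering munition)", HumanRole.OUT_OF_THE_LOOP, True),
]

for system in EXAMPLES:
    # The governance concern sharpens as we move down the spectrum:
    # autonomy over *human* targets with no human in the loop.
    flagged = (system.human_role is HumanRole.OUT_OF_THE_LOOP
               and system.targets_people)
    print(f"{system.name}: {system.human_role.value}"
          + ("  <-- core of the LAWS debate" if flagged else ""))
```

The point of the sketch is that "autonomy" is not a single property: a system can be highly autonomous against objects (Phalanx) while remaining uncontroversial, and the same oversight label ("on the loop") can mean very different things depending on what is being targeted.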

The Kargu-2 Incident

In March 2020, during the Libyan civil war, a Turkish-made Kargu-2 loitering munition reportedly engaged retreating forces autonomously, meaning the weapon may have identified and attacked a human target without an operator directing the strike. A United Nations Panel of Experts report documented the incident, noting that the Kargu-2 was "programmed to attack targets without requiring data connectivity between the operator and the munition — in effect, a true 'fire, forget and find' capability."

If confirmed, this would represent the first documented instance of an autonomous weapon attacking a human without explicit human authorization for that specific engagement. The incident remains contested — the extent of human involvement is unclear — but it marked an inflection point in the debate, shifting it from the hypothetical to the operational.


The Campaign to Stop Killer Robots

Origins and Organization

The Campaign to Stop Killer Robots was launched in April 2013 by a coalition of NGOs including Human Rights Watch, Article 36, the International Committee for Robot Arms Control, and Pax Christi. The campaign is modeled on successful precedents: the International Campaign to Ban Landmines (which led to the 1997 Ottawa Treaty) and the International Campaign to Abolish Nuclear Weapons (which contributed to the 2017 Treaty on the Prohibition of Nuclear Weapons).

The campaign's strategy combines:

- Public advocacy: raising awareness of the threat through media campaigns, publications, and public events
- Diplomatic engagement: participating in CCW discussions and lobbying governments to support a preemptive ban
- Academic and technical engagement: partnering with AI researchers and ethicists to build the technical and philosophical case against LAWS
- Moral framing: centering the argument on human dignity and the principle that machines should not make life-or-death decisions

The Open Letters

In 2015, a landmark open letter signed by over 3,000 AI and robotics researchers, including Stuart Russell and Yoshua Bengio, and endorsed by prominent figures such as Stephen Hawking, called for a ban on offensive autonomous weapons beyond meaningful human control. The letter warned:

"Autonomous weapons select and engage targets without human intervention. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable... Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations, and selectively killing a particular ethnic group."

A follow-up letter in 2017, signed by 116 founders of robotics and AI companies, described autonomous weapons as a "Pandora's box" and called for urgent international action.


Arguments Against LAWS (For a Ban)

The Human Dignity Argument

The most fundamental argument against LAWS is that the decision to kill a human being carries moral weight that cannot be delegated to a machine. This is a deontological argument: regardless of consequences, certain actions require moral agency — the capacity for moral judgment, empathy, and the understanding of what it means to take a life. Machines do not possess moral agency. Therefore, machines should not make the decision to kill.

The ICRC has framed this as a requirement of international humanitarian law (IHL). IHL's principles of distinction (discriminating between combatants and civilians), proportionality (ensuring that expected civilian harm is not excessive relative to the anticipated military advantage), and precaution (taking all feasible measures to minimize civilian harm) all require judgment — not just pattern matching. A machine can be trained to distinguish a person carrying a rifle from a person carrying an umbrella, but it cannot assess whether the person with the rifle is a combatant, a hunter, or a frightened civilian seeking protection.
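The gap between pattern matching and judgment can be stated in code. The following deliberately simplistic Python sketch (every name in it is hypothetical) shows why a perception label such as "person with rifle" underdetermines legal status under the principle of distinction: no function from visual labels to combatant status exists, because the answer depends on context the sensor does not capture.

```python
# A deliberately simplistic sketch of the distinction problem.
# All function and field names are hypothetical illustrations.
from typing import Optional


def perceive(image_features: dict) -> str:
    """Stand-in for any trained classifier: returns a visual label."""
    if image_features.get("carrying_long_object"):
        return "person_with_rifle"
    return "person"


def combatant_status(visual_label: str) -> Optional[bool]:
    """IHL distinction requires context a visual label cannot supply:
    is this a combatant, a hunter, or a frightened civilian?
    There is no mapping from label to legal status, so we return None."""
    return None  # not computable from perception alone


label = perceive({"carrying_long_object": True})
print(label)                    # person_with_rifle
print(combatant_status(label))  # None: judgment is still required
```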

The Accountability Gap

Chapter 17's accountability frameworks directly apply. If an autonomous weapon kills a civilian in violation of IHL, who is responsible? The programmer who wrote the targeting algorithm? The commander who authorized the deployment? The manufacturer who sold the system? The state that fielded it? The accountability gap is not hypothetical — it is a structural consequence of removing the human decision-maker from the kill chain.

The Arms Race Risk

The 2015 open letter warned that autonomous weapons development among major powers would trigger an arms race, making the technology cheaper, more widespread, and ultimately available to non-state actors, authoritarian regimes, and terrorist organizations. Unlike nuclear weapons, which require rare materials and massive infrastructure, autonomous weapons require only commercially available components (drones, processors, AI software) — making proliferation far harder to control.

The Lowered Threshold for War

If autonomous weapons reduce the human cost of war for the side that deploys them (no soldiers at risk, no body bags, no political fallout), they may lower the political threshold for initiating armed conflict. Historically, the risk of casualties has been a democratic check on the use of force — citizens and legislatures are reluctant to authorize wars that will kill their own people. Autonomous weapons could erode this check, making war "easier" in a political sense while remaining devastating for the other side.


Arguments Against a Ban (In Favor of LAWS Development)

The Humanitarian Argument

Proponents argue, paradoxically, that autonomous weapons could reduce civilian casualties. Human soldiers commit war crimes: they kill out of fear, anger, and revenge. They panic in combat. They make errors of judgment under extreme stress. An autonomous system that can reliably distinguish combatants from civilians and apply proportional force might, in theory, cause fewer civilian deaths than human soldiers do in practice.

This is a utilitarian argument: if the outcome is fewer deaths, the means (delegating targeting to machines) are justified. The force of this argument depends on the empirical assumption that autonomous weapons will actually be more precise than humans — an assumption that is unproven and may remain so until the weapons are used in combat.

The Strategic Stability Argument

Some defense analysts argue that autonomous weapons could enhance deterrence and reduce the risk of large-scale conflict by making defense more effective (e.g., autonomous missile defense systems that can respond faster than human operators). In this view, the technology itself is not inherently destabilizing — what matters is how it is integrated into existing security architectures.

The Feasibility Argument Against a Treaty

Skeptics of a ban argue that a treaty would be unenforceable. Unlike chemical weapons (which require specific precursor chemicals) or nuclear weapons (which require enrichment facilities), autonomous weapons use dual-use technology — commercial AI, off-the-shelf drones, and conventional processors. Verifying that a state has not developed autonomous weapons would require inspecting virtually its entire technology infrastructure.

Furthermore, major military powers — including the United States, Russia, and China — have consistently resisted a binding treaty, making adoption through the CCW's consensus-based process effectively impossible.

The "Already Here" Argument

Some military officials argue that the debate is moot because autonomous functions are already embedded in existing weapons systems (missile defense, loitering munitions). Rather than attempting to ban a category of weapons that is difficult to define, governance should focus on regulating the use of autonomy in weapons — setting rules for human oversight levels, targeting restrictions, and accountability mechanisms.


The International Governance Landscape

The CCW Process

The United Nations Convention on Certain Conventional Weapons (CCW) has been the primary forum for LAWS governance discussions since 2014. A Group of Governmental Experts (GGE) on LAWS has met regularly, producing reports and recommendations. However, the GGE operates by consensus — any single state can block progress — and major military powers have used this mechanism to prevent binding outcomes.

As of this writing, the CCW process has produced:

- Agreement that international humanitarian law applies to LAWS (a basic but non-trivial consensus)
- Agreement that some form of "human responsibility" must be maintained (without defining what that means)
- No agreement on a binding treaty, a moratorium, or a common definition of LAWS

National Positions

| Country/Bloc | Position |
| --- | --- |
| United States | Opposes a ban; supports "appropriate levels of human judgment" but resists binding definitions |
| Russia | Opposes a ban; argues autonomous weapons are not yet mature enough to regulate |
| China | Supports a ban on use (but not development) of LAWS, a position critics call strategically convenient |
| United Kingdom | Opposes a ban; supports "human oversight" without binding standards |
| France/Germany | Support a "political declaration" with principles for human control, but not a legally binding treaty |
| Austria, Belgium, Brazil, Chile, and 30+ countries | Support a legally binding instrument to regulate or prohibit LAWS |
| Campaign to Stop Killer Robots | Demands a preemptive ban on all autonomous weapons that select and engage targets without meaningful human control |

The Growing Momentum for Action

Despite the lack of progress at the CCW, momentum has built outside the formal process:

- The UN Secretary-General has repeatedly called for restrictions on LAWS.
- The ICRC issued an unprecedented recommendation in 2021 calling for new legally binding rules on autonomous weapons.
- Latin American and African states have formed voting blocs supporting a ban.
- The Austrian government announced in 2023 that it would pursue negotiations on a LAWS treaty outside the CCW framework if the CCW process continued to stall, a strategy modeled on the nuclear weapons ban treaty, which was negotiated outside the traditional nuclear powers' preferred forums.


Connections to Chapter 19

Moral Agency

The autonomous weapons debate is the sharpest practical test of the moral agency question from Section 19.4. If machines cannot be moral agents — if they cannot understand the significance of taking a life, cannot exercise compassion, cannot bear moral responsibility — then delegating the decision to kill to a machine creates a permanent accountability void. Someone must be responsible when lethal force is used. If the machine cannot be that someone, and the human has been removed from the decision, the result is a "responsibility gap" (Matthias, 2004) that no existing legal or ethical framework can close.

Meaningful Human Control

The concept of "meaningful human control" (Section 19.5) was developed specifically in the autonomous weapons context. It requires more than nominal human involvement: the human must have adequate information, sufficient time, genuine authority, and the cognitive capacity to exercise judgment over each specific engagement. A commander who authorizes a mission and then watches as an autonomous system selects and engages dozens of targets over hours without further human input does not exercise meaningful human control — even though a human formally authorized the mission.
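As a thought experiment, the four conditions can be written as a conjunction that must hold for each engagement. The Python sketch below is illustrative only: the field names and the all-or-nothing test are assumptions for exposition, not a legal standard. It shows why the commander in the example fails the test despite having formally authorized the mission.

```python
# Illustrative sketch of "meaningful human control" as a per-engagement
# conjunction. Field names and the all-or-nothing test are assumptions.
from dataclasses import dataclass


@dataclass
class EngagementContext:
    operator_has_adequate_information: bool
    operator_has_sufficient_time: bool
    operator_has_genuine_authority: bool
    operator_can_exercise_judgment: bool


def is_meaningful_control(ctx: EngagementContext) -> bool:
    """All four conditions must hold for *each* engagement; a blanket
    mission authorization does not satisfy the test."""
    return all((
        ctx.operator_has_adequate_information,
        ctx.operator_has_sufficient_time,
        ctx.operator_has_genuine_authority,
        ctx.operator_can_exercise_judgment,
    ))


# The commander described above: formally authorized the mission, then
# watched dozens of autonomous engagements with no per-engagement input.
blanket_authorization = EngagementContext(
    operator_has_adequate_information=False,  # no per-target information
    operator_has_sufficient_time=False,       # engagements unfold over hours
    operator_has_genuine_authority=True,      # formally authorized the mission
    operator_can_exercise_judgment=False,     # no real-time override
)
print(is_meaningful_control(blanket_authorization))  # False
```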

The Governance Demand

Dr. Adeyemi's principle — "every increase in autonomy is also an increase in the governance demand" — applies with special force in the military domain. The consequences of autonomous weapons failure are irreversible (people die). The operational environment is complex and adversarial (the enemy actively tries to deceive the system). The pressure to deploy before governance catches up is intense (military competition creates urgency). And the governance mechanisms that exist — the CCW, international humanitarian law, rules of engagement — were designed for human decision-makers, not autonomous systems.


Discussion Questions

  1. The central question. Should the international community preemptively ban lethal autonomous weapons? If so, how should LAWS be defined? If not, what governance framework should replace a ban?

  2. The humanitarian paradox. Proponents argue LAWS could reduce civilian casualties by eliminating human error, fear, and rage. Opponents argue that delegating killing to machines violates human dignity regardless of outcome. Is it possible to resolve this tension, or are the two positions fundamentally incommensurable?

  3. Enforcement. If a ban were adopted, how would it be enforced? Autonomous weapons use dual-use technology that is commercially available. Is verification feasible? Compare this challenge to the enforcement of chemical or nuclear weapons treaties.

  4. The precedent question. The Campaign to Stop Killer Robots is modeled on the landmine and cluster munitions ban campaigns. Those campaigns succeeded in producing treaties that most (but not all) states signed. Is the LAWS context more analogous to landmines (where a ban proved achievable) or nuclear weapons (where major powers have consistently refused comprehensive disarmament)?


Your Turn: Mini-Project

Option A: Position Paper. You are a diplomat at the CCW. Write a 600-word position paper for your country on LAWS governance. Choose a country, research its actual position, and either defend that position or propose a modified position that you believe is more ethically and strategically sound. Ground your argument in the ethical frameworks from Chapter 19.

Option B: The Kargu-2 Analysis. Research the 2020 Libya incident involving the Kargu-2 loitering munition. Write a 600-word analysis addressing: (a) what is known about the incident, (b) whether the engagement was truly autonomous, (c) how existing international humanitarian law applies, and (d) what the incident reveals about the adequacy of current governance. Use at least three sources beyond this textbook.

Option C: Technology Governance Design. Rather than a complete ban, design a regulatory framework for autonomous weapons that permits some autonomous functions while maintaining meaningful human control. Your framework should specify: (a) which autonomous functions are permitted and which are prohibited, (b) what level of human oversight is required for different categories of weapons, (c) how compliance is verified, and (d) what accountability mechanisms apply when violations occur. Present your framework in a two-page document.


References

  • Campaign to Stop Killer Robots. "The Threat of Fully Autonomous Weapons." Position paper, 2020. https://www.stopkillerrobots.org.

  • International Committee of the Red Cross. "ICRC Position on Autonomous Weapons Systems." ICRC, May 2021.

  • United Nations Panel of Experts on Libya. "Final Report of the Panel of Experts on Libya." S/2021/229, March 2021.

  • Awad, Edmond, et al. "The Moral Machine Experiment." Nature 563 (2018): 59-64.

  • Russell, Stuart, et al. "Autonomous Weapons: An Open Letter from AI and Robotics Researchers." Future of Life Institute, July 28, 2015.

  • Asaro, Peter. "On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making." International Review of the Red Cross 94, no. 886 (2012): 687-709.

  • Heyns, Christof. "Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions." United Nations General Assembly, A/HRC/23/47, April 9, 2013.

  • Scharre, Paul. Army of None: Autonomous Weapons and the Future of War. New York: W.W. Norton, 2018.

  • Matthias, Andreas. "The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata." Ethics and Information Technology 6, no. 3 (2004): 175-183.

  • Bode, Ingvild, and Hendrik Huelss. "Autonomous Weapons Systems and Changing Norms in International Relations." Review of International Studies 44, no. 3 (2018): 393-413.