Case Study 37.2: The Kargu-2 Report and the Autonomous Weapons Threshold

A Disputed Incident, an Undisputed Problem


Overview

This case study examines an incident that may — or may not — represent the first fully autonomous lethal engagement by a machine in the history of warfare. The uncertainty itself is instructive.

In a March 2021 report to the United Nations Security Council, the Panel of Experts on Libya — a group established by the Security Council to monitor compliance with the arms embargo on Libya — described incidents from the spring of 2020 involving Turkish-manufactured Kargu-2 drones. The panel's report stated that logistics convoys and retreating forces loyal to the Libyan National Army (LNA) had been "hunted down" by lethal autonomous weapons systems that were programmed to attack targets "without requiring data connectivity between the operator and the munition."

If accurate, this would describe a scenario in which the drone — without real-time human authorization — made targeting decisions based on its onboard sensors and programming, engaged those targets with lethal force, and did so without the operator being able to intervene in the individual targeting sequence. In the language of the autonomy spectrum: fully autonomous, human-out-of-the-loop lethal engagement.

The manufacturer disputed this characterization. The facts remain contested. The report's language is carefully hedged. And yet the scenario described — algorithmically determined lethal engagement without human authorization — is precisely what the international community has been discussing for a decade without producing binding governance. The Kargu-2 episode, whatever precisely occurred, illustrates why that governance gap matters.


Background: The Kargu-2 System

The Kargu-2 is a rotary-wing attack drone manufactured by the Turkish defense company STM (Savunma Teknolojileri Mühendislik ve Ticaret A.Ş.). It is classified as a loitering munition — a category of weapons system, sometimes called a "kamikaze drone," that loiters in an area seeking targets before diving and detonating.

The Kargu-2 is designed to use onboard image processing and machine learning to identify and track targets. Its technical capabilities, as described in STM's marketing materials and technical documentation, include:

  • Autonomous target detection and tracking using onboard visual processing
  • Classification of potential targets by type (person, vehicle)
  • The ability to operate in GPS-denied environments using visual navigation
  • Anti-jamming capabilities that allow operation without continuous uplink
  • The ability to operate in swarms of up to twelve units with coordinated behavior

The system's design specifically addresses scenarios in which communication with the operator is unavailable or compromised. This is a common engineering challenge for military drones in contested electromagnetic environments: adversaries can jam communication links, and a drone that loses its communication link becomes useless if it cannot operate autonomously. The Kargu-2's anti-jamming and autonomous capabilities are partly a response to this operational requirement.
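A minimal sketch of the fallback logic this requirement implies (a hypothetical toy model, not the Kargu-2's actual firmware) shows how a jammed communication link pushes a system designed for remote operation into autonomous operation:

```python
def control_mode(link_up: bool, autonomous_fallback: bool) -> str:
    """Toy model: who is 'deciding' depends on link status and configuration."""
    if link_up:
        return "remote-operated"   # operator issues commands in real time
    if autonomous_fallback:
        return "autonomous"        # onboard software navigates and selects targets
    return "return-or-abort"       # without fallback, a jammed drone is mission-ineffective
```

Under jamming, the fallback branch is the only one that preserves military utility — which is why anti-jamming capability and autonomy tend to be engineered together.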

STM's marketing materials and public statements have described the Kargu-2 as capable of autonomous target selection and engagement. The degree of autonomy in actual deployed systems, and the operational protocols under which they are used, are matters that STM has not disclosed fully.


The UN Panel of Experts Report

The UN Panel of Experts on Libya was established by Security Council Resolution 1973 (2011) to monitor compliance with the arms embargo imposed on Libya during the Libyan civil war. The panel consists of six experts with expertise in arms, finance, and related fields, appointed by the Secretary-General.

The panel's March 2021 report covered incidents from 2019-2020. A section of the report addressing the transfer and use of weapons, including unmanned systems, described the Kargu-2 incidents. The relevant passage stated that logistics convoys and retreating LNA forces had been "hunted down and remotely engaged" by Kargu-2 drones that were "programmed to attack targets without requiring data connectivity between the operator and the munition."

This language — and specifically the phrase "without requiring data connectivity between the operator and the munition" — is the most significant and most contested element of the report. If accurate, it describes an engagement in which the operator could not have been making targeting decisions or authorizing individual strikes in real time, because there was no communication link through which such decisions could be transmitted. The autonomy would have been the drone's own onboard programming and sensor processing.

The Panel's Sources and Methodology

The panel acknowledged limitations in its evidence base. Libya is a conflict zone in which multiple parties with conflicting interests provide information. The panel draws on satellite imagery, expert technical analysis, interviews, and documentary evidence, but it operates in an environment characterized by incomplete information, restricted access, and parties motivated to shape the evidentiary record.

The report does not document the specific incidents it describes with complete evidentiary support. It represents the panel's expert assessment based on available evidence, not a legal finding beyond reasonable doubt.

The Manufacturer's Response

STM disputed the characterization of the Kargu-2 as having operated fully autonomously in Libya. The company's public statements emphasized that the Kargu-2 is designed to require human authorization for engagement, that the system's autonomous capabilities are for navigation and target detection rather than engagement decisions, and that any Kargu-2 deployments in Libya occurred with appropriate human control.

The dispute illustrates a definitional challenge that recurs throughout autonomous weapons governance: what constitutes "human control" and what constitutes "autonomous engagement" are not always clearly separable in complex weapons systems that have varying levels of autonomy across different phases of operation. A drone might navigate autonomously, detect and classify targets autonomously, and then require a human trigger for engagement — or the trigger might be pre-programmed. External observers often cannot determine which operational mode was used in a specific engagement.
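The ambiguity can be made concrete with a deliberately simplified sketch (hypothetical logic, not STM's actual control software): the same airframe and the same onboard classifier can run under either an in-the-loop or a pre-authorized engagement mode, and only a configuration flag distinguishes them.

```python
from enum import Enum

class EngagementMode(Enum):
    HUMAN_IN_LOOP = "operator confirms each engagement"
    PRE_AUTHORIZED = "engagement criteria approved before launch"

def would_engage(mode: EngagementMode,
                 classified_hostile: bool,
                 link_up: bool,
                 operator_confirms: bool = False) -> bool:
    """Hypothetical engagement gate, for illustration only."""
    if not classified_hostile:
        return False                      # onboard classifier did not flag the target
    if mode is EngagementMode.HUMAN_IN_LOOP:
        # Requires a live data link AND an explicit human 'yes'.
        return link_up and operator_confirms
    # Pre-authorized mode: the onboard classification alone triggers engagement.
    return True

# Identical sensor inputs, two different legal characters:
assert would_engage(EngagementMode.HUMAN_IN_LOOP, True, link_up=False) is False
assert would_engage(EngagementMode.PRE_AUTHORIZED, True, link_up=False) is True
```

An observer of the strike itself sees identical behavior in both modes; the difference exists only in software state and operator procedure, which is exactly why external observers often cannot determine which mode was used in a specific engagement.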


What the Case Reveals About the State of LAWS Governance

Whatever precisely occurred in Libya, the episode illuminates the governance gaps in autonomous weapons regulation.

The Verification Problem

International governance of autonomous weapons faces a fundamental verification challenge: it is extremely difficult to determine, from external observation, whether a specific weapon system operated autonomously or with human authorization in a specific engagement. Unlike nuclear weapons, where the material — highly enriched uranium, plutonium — is detectable and trackable, autonomy is a software characteristic that cannot be directly observed from outside the system. A military operating autonomous weapons could deny that specific engagements were autonomous, and external verification would be difficult.

This verification challenge undermines the traditional arms control model, in which treaty compliance is verified through inspection regimes, satellite monitoring, and reporting requirements. Autonomous weapons governance requires alternative approaches: transparency requirements about system design and testing, confidence-building measures, and incident reporting mechanisms that allow the international community to assess how systems have been used.

The Accountability Gap in Practice

The Libya incident illustrates the accountability gap in practical terms. LNA forces were reportedly attacked by the Kargu-2 drones. If those forces included individuals who were protected under international humanitarian law — if the targeting algorithm incorrectly classified civilians as combatants, or if individuals who had laid down arms were targeted — who is accountable for that violation? The Turkish government, as the state that reportedly exported the weapons system? The forces affiliated with the Government of National Accord (GNA) that reportedly deployed the drones? The STM engineers who designed the targeting algorithm?

Current international law does not clearly answer these questions. The diffusion of responsibility across developers, exporters, and deployers of autonomous weapons systems creates accountability gaps that existing legal frameworks are not designed to fill.

The Export Control Dimension

The Libya episode involves the export of a Turkish-manufactured autonomous weapons system to a party in the Libyan conflict. The Kargu-2 drones in Libya were reportedly deployed by forces affiliated with the Government of National Accord (GNA), with Turkey as the exporter — a transfer that would have violated the arms embargo. This export chain raises questions about whether existing export control regimes — designed for conventional weapons — are adequate for autonomous weapons systems.

Arms export controls focus on preventing weapons from reaching prohibited end users and ensuring appropriate end-use. For autonomous weapons, the governance question extends beyond the initial export: once deployed, an autonomous system makes its own engagement decisions, and the exporter's ability to control how the system is used in the field is limited. Governance frameworks for autonomous weapons exports must therefore address the behavior of the system in the field, not only the identity of the initial recipient.

The Precedent Risk

Perhaps the most significant governance concern raised by the Kargu-2 episode is the precedent risk. If fully autonomous lethal engagement has occurred — even in a disputed, incompletely documented case — it creates a factual precedent that autonomous lethal engagement is an established aspect of modern warfare. This precedent makes binding prohibition more difficult: states can point to existing deployment as evidence that autonomous lethal engagement is already a feature of the operational environment, arguing that prohibition would disadvantage states that comply while others disregard it.

The value of preemptive governance — establishing binding prohibitions before a technology is widely deployed — lies partly in avoiding this precedent dynamic. Once a technology is operationally deployed at scale, governing it is substantially more difficult than establishing preemptive governance frameworks.


The Campaign to Stop Killer Robots: Governance Advocacy in Practice

The Campaign to Stop Killer Robots was founded in 2012 — eight years before the Libya incident — in anticipation of exactly the governance challenge that the Kargu-2 episode illustrates. The Campaign's founding was motivated by the recognition that autonomous weapons technology was developing rapidly, that international governance frameworks were not keeping pace, and that the window for preemptive governance was narrowing.

The Campaign's Core Argument

The Campaign's fundamental position is that fully autonomous weapons systems — systems that can select and engage targets without meaningful human control — should be prohibited by binding international law. The Campaign's arguments span several dimensions:

Legal argument: Fully autonomous weapons cannot comply with IHL's requirements of distinction and proportionality, which require forms of judgment that current AI cannot reliably exercise. A treaty prohibition is necessary because existing IHL is insufficient.

Ethical argument: Decisions to take human life are of such moral weight that they must be made by human beings, not machines. Delegating lethal force decisions to algorithms crosses a fundamental moral threshold regardless of whether a technically capable algorithm could theoretically comply with IHL.

Accountability argument: Autonomous weapons create accountability gaps that undermine the deterrent and justice functions of international law. Without clear human responsibility for lethal decisions, IHL violations cannot be effectively investigated or prosecuted.

Dignity argument: Human beings have a right, if they are to be killed in war, to be killed by other human beings who can perceive their humanity and be moved by it. Killing by algorithm denies this fundamental dignity.

The Campaign in International Forums

The Campaign has been active in the discussions under the UN Convention on Certain Conventional Weapons (CCW), where it has advocated for a preemptive ban and worked to build a coalition of states supporting binding governance. It has organized expert workshops, produced research reports, testified before national legislative bodies, and run public campaigns in multiple countries.

The Campaign's preferred outcome — a binding international treaty prohibiting fully autonomous weapons — has not been achieved as of the mid-2020s. CCW discussions have continued without producing binding obligations. But the Campaign has achieved several intermediate outcomes: keeping the issue on the international agenda, building the coalition of states and civil society organizations that would be necessary for any governance initiative, and establishing the conceptual framework — meaningful human control, accountability, dignity — that governs international discussion of the issue.


The ICRC's Position

The International Committee of the Red Cross — the primary international institution responsible for promoting and developing IHL — has taken an increasingly clear position on autonomous weapons governance.

The ICRC's 2021 report on autonomous weapons systems concluded that a new international treaty is needed to establish limits on autonomous weapons. Specifically, the ICRC recommended: a prohibition on autonomous weapons that are unpredictable in their effects; a prohibition on targeting systems that apply force against humans based on purely behavioral profiles; a requirement that autonomous weapons be limited in the types of targets they can engage; and requirements for time and geographic limitations on autonomous weapon deployments.

The ICRC stopped short of recommending a comprehensive ban on all autonomous weapons, focusing instead on specific characteristics — unpredictability and human targeting — that it found clearly inconsistent with IHL. This position distinguishes the ICRC from the Campaign to Stop Killer Robots, which advocates for a broader prohibition, and reflects the ICRC's institutional role as a promoter of IHL rather than a general disarmament advocate.

The ICRC's involvement is significant because its position carries particular authority in international humanitarian law discussions. States that negotiate arms control treaties take ICRC guidance seriously, and the ICRC's articulation of specific characteristics that are inconsistent with IHL provides a legal framework for binding governance discussions.


The Path to Binding Governance

The gap between the need for binding governance and the likelihood of achieving it in the near term is substantial. But examining the pathways that have produced binding arms control in other domains illuminates what would be necessary.

The Ottawa Process Model

The Ottawa Process, which produced the Mine Ban Treaty (1997) prohibiting anti-personnel mines, is frequently cited as a model for autonomous weapons governance. The Ottawa Process succeeded despite the absence of buy-in from major military powers — the United States, Russia, and China have not joined the Mine Ban Treaty. It succeeded because:

  • A coalition of middle powers and civil society organizations drove the process
  • Humanitarian organizations documented the effects of anti-personnel mines on civilian populations in compelling terms
  • The treaty was negotiated outside the traditional disarmament forums (the CCW framework and the Conference on Disarmament) when those forums proved unable to act
  • The treaty was opened for signature on an accelerated timeline, creating political momentum

The autonomous weapons context has parallels: the CCW has proven unable to produce binding obligations; a coalition of middle powers and civil society organizations (the Campaign to Stop Killer Robots) is advocating for binding governance; the ICRC and human rights organizations have documented humanitarian concerns. The significant obstacle is the scale of major-power investment in autonomous weapons: the United States, China, and Russia have far more at stake in autonomous weapons than the major powers had in anti-personnel mines, creating stronger resistance to binding limits.

What Is Achievable in the Near Term

Given these constraints, analysts of autonomous weapons governance have identified several achievable intermediate steps:

  • A political declaration, not legally binding but representing a political commitment, by a broad coalition of states to maintain meaningful human control over lethal force
  • Inclusion of autonomous weapons governance provisions in existing arms control forums and bilateral agreements
  • Development of voluntary codes of conduct for autonomous weapons testing, deployment, and export
  • National-level legislation in states with significant autonomous weapons programs, establishing domestic governance requirements

These measures would not resolve the fundamental governance gap, but they would build the normative foundation and institutional infrastructure on which binding governance could eventually be built.


Discussion Questions

  1. The UN Panel of Experts' account of the Kargu-2 incidents in Libya is hedged and contested. What does the difficulty of determining whether autonomous lethal engagement occurred reveal about the governance challenges of autonomous weapons? What verification mechanisms would be necessary to make autonomous weapons governance treaties enforceable?

  2. STM disputed the characterization of the Kargu-2 as having operated fully autonomously, emphasizing that the system requires human authorization for engagement. Assuming this is true in design, what operational conditions might cause a nominally human-authorized system to function in practice as a fully autonomous one? What implications does this have for governance frameworks that focus on system design rather than operational use?

  3. The Campaign to Stop Killer Robots advocates for a preemptive ban on fully autonomous weapons systems. Major military powers argue that existing IHL is sufficient to govern autonomous weapons. Evaluate both positions. Which is more legally and practically defensible?

  4. The accountability gap in autonomous weapons is a core governance concern: who is responsible when an autonomous weapon commits an IHL violation? Propose a legal framework for attributing responsibility across the chain of designers, manufacturers, exporters, and deployers of an autonomous weapon that causes civilian casualties.

  5. The Ottawa Process achieved binding governance on anti-personnel mines without the participation of major military powers. Can this model be replicated for autonomous weapons? What are the similarities and differences that would affect the likelihood of success?

  6. The Kargu-2 is a Turkish-manufactured weapon that was reportedly used in Libya in ways that may have involved autonomous lethal engagement. What governance obligations did Turkey have as the manufacturer and exporter? What governance obligations did other states — including states that are CCW parties — have? What governance response would have been appropriate if fully autonomous lethal engagement were confirmed?