Chapter 37: Autonomous Weapons and Military AI

When Algorithms Kill


Opening Hook

In March 2021, a United Nations Panel of Experts on Libya reported on a 2020 incident involving the Turkish-manufactured Kargu-2, a loitering munition capable of autonomous target engagement. The panel's report stated that the drone may have autonomously tracked and attacked human targets without requiring data connectivity between the operator and the munition — which would make it potentially the first fully autonomous lethal engagement by a machine in armed conflict.

The report's language was careful: "may have," "reportedly." The facts remain disputed. The drone's manufacturer, the Turkish defense company STM, disputed characterizations of the system as fully autonomous. The UN panel's account was based on information from multiple parties with varying interests in the classification. The underlying incident — a drone strike in a civil war characterized by fog, propaganda, and unreliable reporting — is not the kind of event about which certainty is easy.

But the scenario the report described — an algorithm making a kill decision without human authorization — has been the defining concern of the international humanitarian law community, human rights organizations, and AI ethics researchers for more than a decade. Whether or not the Kargu-2 incident constitutes the first confirmed instance of autonomous lethal engagement, the technology to make such an engagement possible exists. It has been developed, tested, and in some cases deployed by multiple states. The governance frameworks designed to prevent unauthorized autonomous lethal force have not kept pace.

This chapter examines the ethics of autonomous weapons, military AI, and what international law says — and critically, does not say — about machines that kill. It is written with awareness that this is a domain of significant geopolitical contest, rapidly evolving technology, and genuine empirical uncertainty. Epistemic humility is not optional; it is required by the facts.


Learning Objectives

By the end of this chapter, students will be able to:

  1. Describe the range of military AI applications beyond autonomous weapons, including intelligence analysis, logistics, cybersecurity, and surveillance.
  2. Explain the conceptual spectrum from human-in-the-loop to fully autonomous weapons systems and define what "meaningful human control" means in the context of international humanitarian law.
  3. Analyze the requirements of international humanitarian law — distinction, proportionality, and precaution — and evaluate whether autonomous weapons systems can plausibly comply with those requirements.
  4. Explain the accountability gap created by autonomous lethal targeting and why it poses a distinctive challenge for international law.
  5. Assess the dual-use problem in military AI, including the implications of Project Maven and the ethics of technology company military contracting.
  6. Evaluate the specific risks of AI in nuclear command and control, including the relevance of historical incidents like the 1983 Petrov case.
  7. Describe the current state of international governance of autonomous weapons, including the CCW discussions, the 2023 UN General Assembly resolution, and the Campaign to Stop Killer Robots.
  8. Analyze the individual ethical responsibilities of technology professionals working on or adjacent to military AI systems.

Section 1: The Landscape of Military AI

Artificial intelligence in military applications extends far beyond the question of autonomous weapons — though that question is the most ethically urgent. Understanding the full landscape of military AI is essential for evaluating both the risks and the governance challenges.

Intelligence Analysis

Intelligence analysis — making sense of enormous volumes of data collected from satellites, signals intelligence, human sources, and open-source information — has been a major driver of military AI investment. AI tools assist analysts in processing satellite imagery to identify military equipment, facilities, and movement; analyzing signals intercepts; and synthesizing open-source intelligence from social media, news sources, and communications networks.

Project Maven, discussed in Section 5, began as an AI tool for analyzing drone surveillance footage. The project illustrates the intelligence analysis use case: military forces collect far more video than human analysts can review, creating pressure to automate the initial identification of objects and activities of interest.

Logistics and Maintenance

Military logistics — moving personnel, equipment, and supplies — and maintenance planning are areas where AI has been quietly deployed with less controversy. Predictive maintenance tools that forecast when military equipment will require service before failure represent significant economic and operational value. Supply chain optimization tools improve the efficiency of military logistics. These applications raise fewer acute ethical concerns than targeting AI, but are relevant to the overall AI posture of military organizations.

Cybersecurity

AI is deployed in military cybersecurity for both offensive and defensive applications. Defensive AI monitors networks for intrusion indicators, identifies anomalous behavior, and automates response to common attack patterns. Offensive AI can be used for vulnerability identification, network exploitation, and — in more controversial applications — automated cyber attack. The line between defensive and offensive cyber AI is frequently blurred, and the legal framework governing AI-enabled cyber conflict is underdeveloped.

Drone Navigation and Control

Autonomous or semi-autonomous navigation for unmanned aerial, naval, and ground vehicles is one of the most active areas of military AI development. Drone swarms — coordinated groups of unmanned systems that can operate collectively without continuous human control — represent a particularly significant emerging capability. Swarm coordination algorithms enable dozens or hundreds of small drones to operate as a coordinated unit, with decision-making distributed across the swarm rather than controlled from a central operator.

Predictive Analytics

Predictive analytics tools in military contexts analyze patterns of life — aggregated behavioral data from surveillance, cell phone metadata, financial records, and other sources — to identify individuals or locations associated with adversary activity. These tools raise significant concerns about proportionality, about the reliability of predictions in the context of lethal targeting, and about the surveillance apparatus they require.

The Scale of Investment

Military AI investment is substantial and accelerating. The U.S. Department of Defense's AI investment has grown from modest initial programs to a stated strategic priority, with the Joint Artificial Intelligence Center (JAIC) — subsequently renamed the Chief Digital and Artificial Intelligence Office (CDAO) — established to coordinate AI adoption across military services. The Department of Defense's FY2024 budget included substantial AI-related spending across programs.

China has made military AI a central element of its military modernization strategy, explicitly targeting AI-enabled military superiority. Russia has programs for autonomous ground vehicles and AI-assisted weapons systems. Multiple other states — Israel, Turkey, South Korea, the United Kingdom, Australia — have active military AI development programs. The competitive dynamic among major powers creates pressure for speed of development that can work against safety, validation, and governance.


Section 2: Defining Autonomous Weapons

A persistent challenge in the governance of autonomous weapons is the definitional question: what is an autonomous weapons system? The answer matters because governance frameworks — international law, military policy, export controls — require clear definitions of what they apply to. Definitional ambiguity has been one of the primary obstacles to progress in international negotiations.

The Autonomy Spectrum

The most widely used framework describes a spectrum of human-machine control:

Human-in-the-loop: A human operator makes the targeting decision and authorizes each individual use of lethal force. The system cannot engage targets without explicit human authorization for each engagement. Traditional weapon systems operated by human soldiers, and remotely piloted aircraft where a human operator decides each strike, are examples.

Human-on-the-loop: The system can autonomously select and engage targets but a human operator monitors and has the ability to interrupt or override the engagement. The human is supervising a system that makes its own targeting decisions, rather than making the targeting decision themselves. Some current systems, including certain missile defense systems that operate under time constraints too short for human review, approximate this model.

Fully autonomous (human-out-of-the-loop): The system selects and engages targets without any human involvement in specific targeting decisions. The human role is limited to initial deployment and general programming of the system's parameters. If the UN panel's account of autonomous engagement is accurate, the Kargu-2 incident approximated this scenario.
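The three points on the spectrum can be sketched as a toy decision rule. This is purely illustrative Python (every name here is hypothetical, and no real weapons-control interface is this simple), but it makes the structural differences between the modes explicit:

```python
from enum import Enum, auto

class ControlMode(Enum):
    """The human-machine control spectrum for weapons systems."""
    HUMAN_IN_THE_LOOP = auto()   # human authorizes each individual engagement
    HUMAN_ON_THE_LOOP = auto()   # system engages unless a human vetoes in time
    FULLY_AUTONOMOUS = auto()    # no per-engagement human involvement

def may_engage(mode: ControlMode,
               human_authorized: bool,
               human_vetoed: bool,
               seconds_for_review: float,
               min_review_seconds: float = 10.0) -> bool:
    """Return whether an engagement may proceed under the given control mode.

    In-the-loop requires affirmative human authorization; on-the-loop
    proceeds absent a veto, but only if the review window allows genuine
    deliberation (otherwise the human role is nominal, not meaningful);
    fully autonomous applies no per-engagement human check at all.
    """
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        return human_authorized
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        if seconds_for_review < min_review_seconds:
            # A veto window too short to use is no veto window at all.
            raise ValueError("review window too short for meaningful human control")
        return not human_vetoed
    return True
```

Note the deliberate error raised for an on-the-loop configuration whose review window is too short: it encodes the point, developed under "Meaningful Human Control," that supervision without time to act is not control.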

Meaningful Human Control

"Meaningful human control" has emerged as the key concept in policy debates about autonomous weapons. The Campaign to Stop Killer Robots and many states advocate for a requirement that meaningful human control must be maintained over lethal force decisions. But what does "meaningful" mean?

The International Committee of the Red Cross (ICRC) has articulated elements of meaningful human control: the human must understand what the system will do in a given situation; the human must have the ability to intervene or override; the human decision must occur within a time frame that allows for genuine deliberation; and the human must be able to exercise judgment about the specific target in the specific context, not merely general approval of a category of engagement.

These requirements exclude some scenarios that might nominally involve a human "in the loop." If a system operates so fast that a monitoring human cannot meaningfully intervene before engagement, the on-the-loop role is nominal rather than substantive. If a human approves a category of engagements but cannot assess whether a specific target is distinguishable from civilians, their control is not meaningful in the relevant sense.

The Definition Problem in Governance

Definitional disagreement has been a significant obstacle in international negotiations. Different states define "autonomous weapons" differently, often in ways that protect their own systems from regulation. A state with advanced missile defense systems may define autonomous weapons to exclude systems operating in defensive modes. A state that has deployed loitering munitions may define autonomy in ways that characterize those systems as human-controlled. These definitional disputes are not merely semantic; they shape what international legal frameworks can govern.


Section 3: International Humanitarian Law and LAWS

International humanitarian law (IHL) — also called the laws of armed conflict — governs the conduct of war. Its core principles have been developed over a century of treaties, customs, and decisions, and they apply to all weapons systems, including autonomous ones.

Core IHL Principles

The three principles most directly relevant to autonomous weapons are:

Distinction: Combatants must at all times distinguish between civilian persons and civilian objects on one side and combatants and military objectives on the other, directing attacks only against military objectives. This requires the ability to identify whether a target is a combatant or a civilian. For autonomous weapons, the question is whether an algorithm can make this determination with sufficient reliability in the chaotic, visually and contextually complex environment of armed conflict.

Proportionality: An attack is prohibited if the expected civilian casualties and collateral damage are excessive in relation to the anticipated military advantage. This is a judgment that requires weighing military and civilian values against each other — a contextual, ethical judgment that critics argue is inherently beyond algorithmic capability.

Precaution: Those conducting attacks must take all feasible precautions to avoid, or in any event minimize, incidental civilian casualties. This requires awareness of the specific situation, consideration of alternative attack options, and continuous assessment as circumstances evolve.

Can Autonomous Weapons Comply with IHL?

The central legal and ethical debate about autonomous weapons is whether they can comply with these IHL requirements. The debate has both technical and philosophical dimensions.

The technical dimension concerns whether AI systems can reliably distinguish civilians from combatants in real-world conflict environments. Combatants in contemporary warfare do not always wear uniforms; civilians may carry weapons; the behavior that distinguishes civilians from combatants can be ambiguous, contextual, and rapidly changing. Critics of autonomous weapons argue that current AI cannot reliably make this distinction in conditions of operational complexity, and that the consequences of error — unlawful killing of civilians — are grave.

Proponents argue that autonomous systems could eventually make targeting determinations more consistently than humans, and without the emotional pressures (fear, anger, revenge) that can lead human fighters to commit atrocities. They also note that IHL compliance should be judged against the performance of human-operated systems, not against a standard of perfection.

The philosophical dimension concerns the proportionality calculation. Proportionality requires weighing civilian harm against military advantage — a judgment that involves assessing the value of different objectives, predicting uncertain outcomes, and applying contextual ethical reasoning. Critics argue that this assessment cannot be reduced to an algorithm because it requires a form of practical moral judgment that is constitutively human.

The Campaign to Stop Killer Robots

The Campaign to Stop Killer Robots is a coalition of more than 270 civil society organizations in 70 countries that advocates for a pre-emptive ban on fully autonomous weapons systems. Founded in 2012, the campaign has been active in international forums, particularly the Convention on Certain Conventional Weapons (CCW) discussions and the United Nations.

The Campaign's position is that fully autonomous weapons should be prohibited because they cannot exercise the human judgment required by IHL, because they cannot be held accountable for IHL violations, and because transferring life-and-death decisions to machines crosses a fundamental moral threshold regardless of technical capability.

CCW Discussions

The Convention on Certain Conventional Weapons has been the primary multilateral forum for discussions of lethal autonomous weapons systems since 2014. State parties to the CCW have met annually — and in expert group meetings — to discuss LAWS, with the stated goal of developing a common understanding and possible governance framework.

As of 2024, CCW discussions have not produced binding legal obligations on autonomous weapons. Several factors have impeded progress: definitional disagreements, the unwillingness of major military powers to accept constraints on their autonomous weapons development, and the CCW's consensus-based decision-making process, which allows any state party to block action.

The 2023 UN General Assembly Resolution

In November 2023, the United Nations General Assembly adopted a resolution on autonomous weapons, calling on states to engage constructively in negotiations and expressing concern about the potential for autonomous weapons to undermine IHL. The resolution was notable for its broad support — more than 160 states voted in favor — but it is non-binding and does not establish specific legal obligations. It represents, however, increasing international political support for governance action that the CCW process has not yet produced.


Section 4: The Targeting Algorithm Problem

The decision to use lethal force is, in the context of international humanitarian law, one of the most consequential decisions a state can make. Delegating that decision — even partially — to an algorithm raises questions that go beyond technical capability to fundamental questions about moral agency, accountability, and the relationship between the state and the use of violence.

What It Means to Delegate Kill Decisions

When a human soldier or drone operator decides to engage a target, they exercise judgment. They assess the situation, consider the target's identity, evaluate the risk to civilians, weigh their legal obligations and their humanity, and make a decision. They bear moral and legal responsibility for that decision. When an autonomous system makes an engagement decision, the chain of moral and legal responsibility becomes diffuse in ways that current law does not adequately address.

Who is responsible when an autonomous weapon violates IHL — when it kills civilians it should have identified as protected, or when it kills after circumstances have changed in ways the algorithm did not detect? The programmer who wrote the targeting algorithm? The military commander who deployed the system? The political leadership that authorized its use? The state as a legal entity? Current international law assumes that individuals and states can be held responsible for IHL violations. The autonomous weapons accountability gap — the difficulty of assigning responsibility to a human actor for algorithmic decisions — challenges that assumption fundamentally.

The Discrimination Requirement

The IHL requirement of distinction — of discriminating between civilians and combatants — places particular demands on targeting algorithms. In symmetric conventional warfare, discrimination can be relatively straightforward: opposing military forces wear uniforms and operate military equipment in military formations. But contemporary armed conflict is rarely symmetric. Insurgencies, urban warfare, and counterterrorism operations involve combatants who may be indistinguishable from civilians based on visual appearance alone.

The discrimination requirement also applies in real time, under circumstances that are rapidly changing. A person who was a civilian moments ago may have picked up a weapon; a person who appeared armed may have laid down their weapon and surrendered. The contextual, time-varying, behavior-dependent nature of combatant status is extremely difficult to encode in a targeting algorithm.

The Proportionality Calculation

The proportionality calculation — weighing expected civilian harm against military advantage — presents a different kind of algorithmic challenge. It is not primarily a pattern recognition or identification problem; it is a normative weighing problem. It requires assessing the value of different objectives, the uncertainty of predicted outcomes, and the moral weight of different types of harm. These assessments are contested among human decision-makers; it is unclear what it would mean for an algorithm to make them, and whether an algorithmic proportionality calculation would be legally and ethically valid.
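To make the critics' point concrete, consider the most naive algorithmic reduction of proportionality. The sketch below (hypothetical and illustrative only) collapses both sides of the balance to numbers on a shared scale; it "works" only because every contested moral judgment has been pushed into its inputs and threshold, which is precisely the objection:

```python
def naive_proportionality_check(expected_civilian_harm: float,
                                anticipated_military_advantage: float,
                                excessiveness_threshold: float = 1.0) -> bool:
    """Crude 'proportionality test': permit an attack if the ratio of
    expected civilian harm to anticipated military advantage does not
    exceed a fixed threshold.

    This presumes (1) that both quantities are measurable on a common
    cardinal scale and (2) that a fixed ratio captures what IHL means by
    'excessive'. IHL supplies neither; the contested normative weighing
    has simply been relocated into the inputs and the threshold.
    """
    if anticipated_military_advantage <= 0:
        # No military advantage: any expected civilian harm is excessive.
        return False
    return (expected_civilian_harm / anticipated_military_advantage
            <= excessiveness_threshold)
```

The function is trivially computable; what is not computable, critics argue, is any defensible procedure for producing its arguments.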

The Accountability Vacuum

The accountability vacuum in autonomous weapons is a specific legal concern with structural implications. If an autonomous weapon commits what would constitute a war crime if committed by a human soldier — deliberate targeting of civilians, disproportionate civilian casualties, failure to accept surrender — who is prosecuted? Current international criminal law (the Rome Statute, the laws of war) assigns criminal liability to individuals and, under some frameworks, states. It does not assign liability to machines. If autonomous weapons create systematic IHL violations without identifiable human decision-makers responsible for specific targeting choices, the deterrent and accountability functions of international criminal law are undermined.


Section 5: Dual-Use Technology and the Tech-Military Complex

The relationship between commercial technology development and military AI is complex, ethically contested, and evolving. Much of the most powerful AI technology in the world is developed by commercial companies — Google, Microsoft, Amazon, Palantir, Anduril — and that technology has direct military applications.

Civilian AI Repurposed for Military Use

Many AI capabilities developed for civilian purposes have direct military applications. Computer vision technology developed for autonomous vehicles can be applied to targeting. Natural language processing developed for customer service can be applied to intelligence analysis. Machine learning platforms developed for commercial applications can be fine-tuned for military use. This dual-use nature means that military AI capability can be developed at commercial speed and scale, funded by commercial revenue, and transferred to military applications.

Commercial Off-the-Shelf AI in Weapons Systems

The U.S. Department of Defense has explicitly sought to leverage commercial AI rather than fund all military AI development internally. This approach reduces costs and allows access to frontier AI capabilities developed by commercial companies with resources exceeding defense research budgets. But it creates governance challenges: commercial AI is developed for commercial purposes, under commercial evaluation criteria, and is not necessarily validated for the reliability, safety, and adversarial robustness requirements of military applications.

Project Maven and Its Legacy

Project Maven, formally the Algorithmic Warfare Cross-Functional Team, was launched by the U.S. Department of Defense in 2017. Its initial mission was to use AI to analyze the enormous volumes of drone surveillance video that military forces were collecting but lacked the human analyst capacity to review. Google was engaged as a contractor to provide TensorFlow and related AI capabilities for this purpose.

The story of Project Maven is examined in detail in Case Study 37.1. In brief: in 2018, thousands of Google employees signed a petition opposing the company's involvement in military AI, leading Google's leadership to decide not to renew the Maven contract when it expired. The episode had significant lasting effects on the tech-military relationship and on the policies of major technology companies regarding military AI contracts.

Tech Company Policies on Military AI

Following Project Maven, several major technology companies articulated policies on military AI contracts:

Google published AI Principles in 2018 that stated the company would not pursue AI for weapons or other technologies that cause or facilitate injury. Google subsequently did not bid on the JEDI cloud computing contract (which went to Microsoft). However, Google has continued to hold some government and defense-related contracts that critics argue are inconsistent with its stated principles.

Microsoft has taken a different approach, explicitly embracing government and military contracts. Microsoft President Brad Smith has publicly defended providing the company's technology to the U.S. military. Microsoft won the JEDI contract and has pursued additional defense contracts.

Palantir, whose data analytics platform has extensive military and intelligence applications, has explicitly positioned itself as the defense-sector alternative to companies like Google that have declined certain military work. Anduril, founded by Palmer Luckey, was explicitly founded to build defense technology that tech companies were unwilling to build.

Amazon Web Services has provided cloud computing infrastructure for military applications. The commercial cloud providers — Amazon, Microsoft, and Google — are central to the U.S. government's cloud computing infrastructure, which includes military applications.


Section 6: AI in Nuclear Command and Control

Among the most significant and least-discussed risks of military AI is the potential role of AI systems in nuclear command and control. Nuclear deterrence depends on the credibility of the threat to respond to nuclear attack, which in turn depends on the ability of decision-makers to assess whether an attack is occurring and to authorize a response in the compressed time frames that nuclear attack scenarios create.

Decision Timeline Compression

The decision timelines in nuclear scenarios are extraordinarily compressed. Intercontinental ballistic missiles can reach their targets in approximately thirty minutes. Submarine-launched ballistic missiles, launched from positions much closer to their targets, can compress the timeline to ten minutes or less. These timelines have historically created pressure for early warning systems that alert decision-makers rapidly — and have created at least a structural argument for pre-delegation of nuclear authority in some scenarios.

AI systems could further compress these timelines by enabling faster processing of early warning data and faster generation of response options. The concern is that this compression, intended to improve response time, could reduce the time for human deliberation below the threshold necessary for meaningful oversight of nuclear use decisions.

The False Positive Risk

Early warning systems have historically generated false alarms — detections of attack that proved to be sensor errors, software bugs, or misinterpreted data. The Petrov incident of 1983 is the most famous example: Soviet early warning satellites reported inbound U.S. ICBM launches, and duty officer Stanislav Petrov correctly assessed the report as a false alarm rather than escalating — possibly preventing nuclear war. The alert was in fact a false positive: the satellites had misinterpreted sunlight reflecting off high-altitude clouds as missile launches.

AI-enhanced early warning systems could reduce false positives by processing multiple sensor streams and identifying inconsistencies that indicate false alarms. But they could also generate new types of false positives through adversarial manipulation — an adversary who understands the AI system's decision logic could potentially generate false attack signatures that trigger the AI's escalation assessment. The reliability of AI systems under adversarial pressure is specifically uncertain; AI systems are known to be vulnerable to adversarial inputs in ways that human judgment is not.
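The multi-stream cross-check described above is, in essence, a fusion rule: treat a detection as credible only when independent sensor modalities agree. A minimal sketch, with hypothetical names and deliberately simplified logic:

```python
def fused_attack_assessment(detections: dict[str, bool],
                            required_agreement: int = 2) -> bool:
    """Minimal sketch of a cross-modality fusion rule for early warning.

    Treat a detection as credible only if at least `required_agreement`
    independent sensor modalities (e.g. satellite infrared and ground
    radar) agree. A single-modality alert, like the 1983 satellite false
    alarm Petrov assessed, is treated as a probable false positive.
    """
    return sum(detections.values()) >= required_agreement
```

The adversarial worry is visible even in this toy rule: an adversary who knows which modalities are fused, and how many must agree, knows exactly what signature to spoof.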

Concerns about Algorithmic Nuclear Escalation

Strategic stability in the nuclear domain depends on the predictability and rationality of decision-making under crisis conditions. Human decision-makers, while imperfect, apply contextual judgment, consider political context, and have the ability to decide that a partial sensor detection does not justify nuclear response. Algorithmic decision-making, even as an advisory input to human decision-makers, could introduce escalation dynamics that are poorly understood. If an AI system evaluates a crisis situation as high-confidence attack, that assessment could drive political and military decision-making in ways that reduce the space for de-escalation.

Experts in nuclear security, including researchers at the RAND Corporation, the Nuclear Threat Initiative, and academic institutions specializing in nuclear policy, have called for specific red lines on AI in nuclear command and control — specifically, a commitment by nuclear states to maintain meaningful human control over nuclear launch decisions and not to delegate nuclear targeting or launch authority to AI systems.


Section 7: AI-Enabled Surveillance in Conflict

AI has dramatically expanded the surveillance capabilities available to parties in armed conflict, with implications for civilian protection under IHL and for human rights in conflict zones.

Mass Surveillance in Conflict Zones

AI-enhanced surveillance in conflict zones draws on satellite imagery analysis, social media monitoring, cell phone location data, facial recognition from cameras and drone imagery, and pattern-of-life analysis to build comprehensive pictures of population behavior. These capabilities can serve legitimate military intelligence purposes — identifying adversary military infrastructure and forces — but can also enable mass surveillance of civilian populations, with significant human rights implications.

The Palestinian AI Surveillance Case

The Israeli Defense Forces' use of AI-assisted surveillance and targeting systems in Gaza has been documented and contested. In 2024, reporting by +972 Magazine and Local Call documented an Israeli military AI system called Lavender that reportedly assigned numerical scores to Palestinian individuals assessing their likelihood of being Hamas members, with those scores used to inform targeting decisions. The reporting described high civilian casualty tolls associated with lower-confidence targeting scores and limited human review of individual targeting decisions.

The Israeli military disputed aspects of the reporting, and the full technical and operational details of the systems described are not publicly confirmed. The reporting raised significant questions about the IHL compliance of AI-assisted targeting with limited human review, the proportionality of civilian casualties associated with AI-enabled targeting, and the accountability for algorithmic targeting decisions. The episode illustrates the gap between the theoretical requirements of IHL compliance and the practical operation of AI-assisted targeting in active conflict.

Project Nimbus

Project Nimbus is a cloud computing contract worth approximately $1.2 billion between Google and Amazon and the Israeli government and military, signed in 2021. The contract provides cloud computing infrastructure, AI services, and other technology capabilities to Israeli government agencies including the military.

The contract became controversial as the conflict in Gaza intensified following October 7, 2023. Google employees organized protests against the contract, arguing that cloud computing and AI capabilities provided through Project Nimbus were being used to support military operations with significant civilian casualty tolls. Google dismissed several employees who participated in workplace protests against the contract.

The Uyghur Surveillance System

The surveillance infrastructure deployed by the Chinese government against the Uyghur population in Xinjiang represents one of the most comprehensive documented deployments of AI surveillance technology. The system combines facial recognition cameras, cell phone monitoring, DNA collection, and predictive policing algorithms to monitor the movements, associations, and behavior of millions of Uyghurs, enabling what researchers have described as unprecedented mass control of an ethnic minority population.

Multiple Western technology companies have been documented to have sold components of this surveillance infrastructure, including facial recognition technology and networking equipment, to Xinjiang authorities. Some of these sales occurred before the full extent of the Uyghur surveillance system was documented; others occurred after. The case raises questions about technology company due diligence in assessing how their products will be used by government customers.


Section 8: Tech Worker Ethics and Military AI

The governance of military AI is not only a question for governments and international institutions. The technology professionals who develop, test, and deploy military AI systems have their own ethical responsibilities — and have begun to assert them.

The Google Project Maven Revolt

In April 2018, more than 3,000 Google employees signed an internal letter opposing Google's participation in Project Maven. The letter stated: "We believe Google should not be in the business of war. Therefore we ask that the Project Maven contract not be renewed after it expires in 2019, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology."

Several employees resigned over the contract. The internal opposition was significant enough that Google's leadership ultimately decided not to renew the Project Maven contract. Google subsequently published its AI Principles, which committed the company not to pursue AI applications that cause or are likely to cause overall harm, weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people, or surveillance that violates internationally accepted norms.

Individual Engineer Responsibility

The Maven episode raises the question of individual professional responsibility for engineers who work on dual-use technologies with military applications. When is an engineer's contribution to military AI ethically permissible, and when does it create personal moral responsibility for harm?

Several frameworks bear on this question. Professional engineering ethics — codified in the National Society of Professional Engineers code of ethics and similar codes — generally requires engineers to hold paramount the safety, health, and welfare of the public. How this principle applies to military technology depends on contested judgments about the ethics of particular military applications.

The Nuremberg Principles, developed after World War II, established that individuals can be held responsible for crimes against humanity regardless of orders from superiors, rejecting the "just following orders" defense. While these principles were developed for extreme cases, they establish that individual moral responsibility does not disappear simply because one acts as part of an institution.

For technology professionals, the practical question is: what level of inquiry into the use of their work is ethically required? At what point does contributing to general-purpose technology that may be used in military applications differ from directly contributing to specific weaponization? These questions do not have universal answers, but the Maven episode established that collective action by technology workers can have genuine governance effects.

Tech Company Policies

The Maven episode created significant pressure on major technology companies to articulate policies on military AI. These policies vary substantially:

Google's AI Principles prohibit AI for weapons or technologies that cause injury, surveillance violating international norms, and applications that contravene international law and human rights. The principles have been criticized as underspecified and inconsistently applied.

Microsoft's approach has been more permissive, with explicit commitment to serving government and military customers. Microsoft President Brad Smith has argued that the technology sector should not be less helpful to democracies than to other actors.

Palantir and Anduril have positioned themselves explicitly as companies willing to build defense technology that others decline to build, arguing that this is both commercially and patriotically appropriate.


Section 9: Existing Governance and Its Gaps

The governance framework for autonomous weapons and military AI consists of a patchwork of international law principles, national policies, and voluntary commitments that leaves significant gaps.

U.S. DoD Directive 3000.09

DoD Directive 3000.09, originally issued in 2012 and updated in 2023, establishes U.S. policy on autonomous weapon systems. The Directive requires that autonomous and semi-autonomous weapon systems be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force. It establishes that autonomous weapon systems must not be designed to select and engage individual targets based solely on sensor data, and that semi-autonomous weapon systems must only be authorized to select and engage individual targets following a human decision.
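The Directive's distinction between modes of human involvement can be made concrete with a deliberately simplified sketch. Everything below — the mode names, the `may_engage` gate, and the fields on `Target` — is hypothetical and illustrative of the policy logic only, not drawn from any actual weapons system:

```python
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    HUMAN_IN_THE_LOOP = "in"   # a human authorizes each individual engagement
    HUMAN_ON_THE_LOOP = "on"   # the system may engage; a human monitors and can veto

@dataclass
class Target:
    track_id: str
    classified_military_objective: bool  # result of a (fallible) classification step

def may_engage(mode: Mode, target: Target,
               human_authorized: bool = False,
               human_veto: bool = False) -> bool:
    """Return True only if engagement is permitted under the stated mode."""
    # Distinction: a target not classified as a military objective is never engaged.
    if not target.classified_military_objective:
        return False
    if mode is Mode.HUMAN_IN_THE_LOOP:
        # Engagement requires explicit per-target human authorization.
        return human_authorized
    if mode is Mode.HUMAN_ON_THE_LOOP:
        # Engagement proceeds unless a monitoring human intervenes.
        return not human_veto
    return False
```

The sketch makes the governance debate visible in miniature: in on-the-loop mode the default is engagement and human involvement is purely negative (a veto that may arrive too late), which is precisely why critics argue that "appropriate levels of human judgment" is underspecified.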

The 2023 update strengthened some of these requirements while maintaining significant flexibility for military judgment about what constitutes "appropriate" human judgment. Critics note that the Directive provides insufficient guidance on what "meaningful human control" requires in practice, and that its definitions allow for substantial autonomy in engagement while nominally maintaining human control requirements.

The Biden Executive Order and Military AI

The Biden administration's October 2023 Executive Order on AI, which established broad AI governance requirements for federal agencies, also included provisions applicable to military AI. It directed the Department of Defense to continue developing principles for responsible AI use in military contexts and to address the safety and reliability of AI in national security applications. It did not establish specific prohibitions on autonomous weapons.

International Positions

State positions on autonomous weapons governance vary significantly. Several states — Austria, Brazil, Costa Rica, Chile, Mexico, New Zealand, and others — have called for a preemptive ban on fully autonomous weapons systems. The United States, Russia, China, Israel, South Korea, and other militarily significant states have resisted binding prohibition, generally arguing for a national governance approach rather than international prohibition. Russia and China have argued that autonomous weapons governance is premature, while simultaneously developing autonomous military capabilities.

The United Kingdom has issued policy statements asserting that it does not develop or deploy fully autonomous weapons and will maintain human control over lethal force, while reserving judgment about what this requires in specific systems.

The Treaty Gap

The fundamental governance gap in the autonomous weapons domain is the absence of a binding international treaty prohibiting or regulating lethal autonomous weapons systems. The CCW process has produced discussions but not binding obligations. The legal tools available to address autonomous weapons are the existing IHL requirements — distinction, proportionality, precaution — which apply to all weapons but were not designed with autonomous systems in mind.

Human rights organizations and the ICRC have argued that existing IHL is insufficient to govern autonomous weapons and that a new treaty, analogous to the Ottawa Treaty prohibiting anti-personnel mines or the Convention on Cluster Munitions, is necessary. The political obstacles to such a treaty are substantial, as it would require buy-in from major military powers that have thus far resisted binding restrictions.


Section 10: The Path Forward

The governance of military AI and autonomous weapons is one of the most difficult challenges in contemporary international security. The difficulty is not primarily technical — it is political, legal, and ethical.

What Meaningful Governance Requires

Effective governance of military AI requires several elements that current frameworks do not fully provide.

A binding international agreement on autonomous weapons: The CCW process has demonstrated that voluntary discussions among states with divergent interests do not produce binding obligations. A binding treaty, on the model of the arms control treaties that have addressed biological, chemical, and nuclear weapons, is necessary to establish a clear prohibition on fully autonomous lethal weapons and clear requirements for human control. Whether such a treaty is achievable given the geopolitical dynamics of major power competition is uncertain.

Red lines on specific applications: Even short of a comprehensive treaty, binding commitments on specific high-risk applications — AI in nuclear command and control, autonomous weapons in densely civilian areas, autonomous weapons used against protected persons — could provide meaningful governance without requiring agreement on all aspects of military AI.

Transparency and confidence-building measures: States could agree to exchange information about their autonomous weapons programs, development principles, and testing results, enabling better assessment of compliance with stated policies and building confidence that governance commitments are being honored.

Tech company autonomous weapons policies: Commercial AI companies with frontier capabilities should establish and maintain clear policies on military AI contracts, including specific prohibitions on autonomous weapons development and requirements for meaningful human control in any military AI they develop or provide.

Individual professional responsibility: Technology professionals who contribute to military AI should exercise professional judgment about the applications they contribute to, informed by IHL requirements and ethical principles. Professional engineering and computer science organizations should provide guidance on military AI ethics.

The Arms Control Treaty Model

Arms control treaties addressing autonomous weapons would face significant obstacles but are not unprecedented. The Ottawa Treaty banning anti-personnel mines was achieved without the support of major military powers initially, and the United States, Russia, and China have still not joined it. Nevertheless, it has shaped the global norm against anti-personnel mines and constrained their use. A similar approach to autonomous weapons — a coalition of willing states establishing a prohibition norm that gradually attracts broader adherence — is one potential path.

The Campaign to Stop Killer Robots

The Campaign to Stop Killer Robots represents the most organized civil society effort to achieve binding international governance. Its advocacy in international forums, its work to build political support among states, and its documentation of the gaps in current governance have been important inputs to the international discussion. Its preferred outcome — a preemptive ban on fully autonomous weapons — faces significant political obstacles from major military powers. But the Campaign has been effective in raising the profile of the issue and in building the coalition of states and civil society organizations that would be necessary for any governance initiative.


Recurring Themes in This Chapter

Power and Accountability: Autonomous weapons create a distinctive accountability vacuum: the power to take human life is exercised by an algorithm, but the human accountability that IHL requires is diffuse, disputed, and potentially evasive. Governance frameworks must address who is accountable for autonomous weapons decisions in ways that current law does not.

Innovation vs. Harm: Military AI innovation — in intelligence, logistics, and decision support — has genuine value. The harm potential of autonomous lethal targeting, AI-enabled mass surveillance in conflict zones, and AI in nuclear command and control is also genuine and potentially catastrophic. The ethics of military AI require grappling with this tension seriously, not treating "national security" as a blanket justification.

Ethics Washing: Major technology companies publish AI principles and ethics policies that purport to govern military AI, while simultaneously pursuing military contracts that may be inconsistent with those principles. The gap between stated principles and operational practice is significant and should be scrutinized.

Diversity and Inclusion: The victims of autonomous weapons errors — the civilians killed when targeting algorithms err — are predominantly people in the Global South, in conflict zones, and in populations already subjected to disproportionate military violence. The governance of autonomous weapons must center their interests, not only the interests of militarily powerful states.

Global Variation: Major powers — the United States, China, Russia, Israel — have resisted binding autonomous weapons governance. Smaller states, civil society organizations, and the ICRC have advocated for binding prohibition. This variation reflects genuine differences in national interest, geopolitical positioning, and military capability that make the politics of autonomous weapons governance among the most difficult in international security.


Conclusion

Autonomous weapons and military AI represent frontier ethical challenges for the international community. Unlike many technology ethics questions — where the principal harms are economic, reputational, or privacy-related — the harms here include death, violation of the laws of war, and the undermining of the international legal architecture that has, however imperfectly, constrained the conduct of armed conflict.

The governance situation as of the mid-2020s is unsatisfactory. International discussions continue without producing binding obligations. Major military powers develop autonomous capabilities while resisting constraints. Tech companies navigate competing commercial and ethical pressures with inconsistent results. The technology — including AI-enhanced targeting, autonomous drone navigation, and AI-assisted nuclear warning systems — continues to be developed and deployed ahead of the governance frameworks that would make its use safe and accountable.

For business and policy professionals, the military AI domain raises questions that cannot be evaded. Companies that develop AI are, knowingly or not, developing capabilities that have military applications. Whether to engage with military contracts, on what terms, and with what ethical constraints is a decision that commercial technology companies must make — and that their employees, investors, and the public have legitimate interests in scrutinizing. The governance of military AI will be shaped by choices made not only by governments and international institutions, but by the technology companies and professionals who build the capabilities that military forces deploy.


Key Terms

Lethal Autonomous Weapons System (LAWS): A weapons system that can select and engage targets using lethal force without meaningful human control over individual targeting decisions.

Human-in-the-Loop: A weapons system configuration in which a human operator authorizes each individual use of lethal force.

Human-on-the-Loop: A weapons system configuration in which the system can autonomously engage targets but a human operator monitors and can interrupt or override.

Meaningful Human Control: The proposed requirement, central to international humanitarian law debates, that human judgment over lethal targeting decisions be substantive rather than merely nominal.

Distinction: The IHL requirement to distinguish between civilians and combatants, directing attacks only against military objectives.

Proportionality: The IHL requirement that expected civilian casualties not be excessive relative to anticipated military advantage.

Precaution: The IHL requirement to take all feasible measures to avoid or minimize civilian casualties.

Accountability Gap: The difficulty, in autonomous weapons systems, of identifying human actors who bear moral and legal responsibility for specific targeting decisions.

Dual-Use Technology: Technology developed for civilian purposes that also has military applications (or vice versa).

Project Maven: The U.S. Department of Defense program, initially partnered with Google, to use AI for analysis of drone surveillance footage.

Convention on Certain Conventional Weapons (CCW): The multilateral treaty forum in which state parties have discussed autonomous weapons governance since 2014, without producing binding obligations.

Campaign to Stop Killer Robots: A coalition of more than 270 civil society organizations advocating for a binding international prohibition on fully autonomous weapons systems.