13 Years of CCW Talks
100+ States in LAWS Debate
$18B+ US Military AI Budget
0 Binding Treaties

1. Overview & Definitions

Military AI represents one of the most consequential and contested domains of AI governance. The development and deployment of AI in military contexts — from intelligence analysis and logistics to targeting and autonomous weapons — raises fundamental questions about human control over lethal force, compliance with international humanitarian law (IHL), strategic stability, and the future of warfare.

1.1 Key Definitions

Term | Definition | Context
--- | --- | ---
Lethal Autonomous Weapons Systems (LAWS) | Weapons systems that can select and engage targets without meaningful human control. No universally agreed definition exists, which is itself a central issue in negotiations. | Primary focus of CCW discussions; some call these “killer robots”
Autonomous Weapons System (AWS) | Broader term for weapons with autonomous functions in critical phases (target identification, tracking, selection, engagement). May include varying degrees of human involvement. | Used in academic and policy literature
Human-on-the-Loop | Humans can monitor the system and intervene to override or abort, but the system can act without prior human authorization for each individual engagement. | Many current military systems operate this way (e.g., missile defense)
Human-in-the-Loop | Humans must authorize each individual use of force. The system cannot engage without explicit human command. | Traditional weapons operation; many advocate this as a minimum standard for LAWS
Meaningful Human Control | Humans retain sufficient understanding, oversight, and ability to intervene in autonomous weapons decisions to ensure IHL compliance. The concept is central to the governance debate but not precisely defined. | ICRC, many states, and civil society advocate this as the standard
Critical Functions | Functions directly related to the application of force: target detection, identification, selection, tracking, and engagement. | Key distinction from non-critical autonomous functions (navigation, logistics)

1.2 Spectrum of Autonomy

The Autonomy Spectrum: Military AI systems exist on a spectrum from fully human-controlled to fully autonomous. Most current systems are semi-autonomous — they perform some functions autonomously (target detection, tracking) while requiring human authorization for others (engagement). The governance debate centers on where to draw the line: which functions must remain under human control, and what constitutes “meaningful” human control? The answer has profound implications for military effectiveness, civilian protection, and international stability.
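The human-in-the-loop / human-on-the-loop distinction defined above is, at bottom, a difference in control flow: whether human authorization is a precondition for each engagement, or merely an available interrupt. A minimal sketch of that difference (purely hypothetical logic for illustration, not a model of any real weapon system):

```python
from enum import Enum

class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = "in"   # human must authorize each engagement
    HUMAN_ON_THE_LOOP = "on"   # system may act; human can abort or override

def engagement_permitted(mode: ControlMode,
                         target_identified: bool,
                         human_authorized: bool,
                         human_aborted: bool) -> bool:
    """Illustrative decision logic: may the system engage this target?"""
    if not target_identified:
        return False
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        # Engagement requires explicit prior authorization for this target.
        return human_authorized
    # HUMAN_ON_THE_LOOP: engagement proceeds unless a human has intervened.
    return not human_aborted

# With identical inputs (no authorization, no abort), the two modes diverge:
print(engagement_permitted(ControlMode.HUMAN_IN_THE_LOOP, True, False, False))  # False
print(engagement_permitted(ControlMode.HUMAN_ON_THE_LOOP, True, False, False))  # True
```

The governance debate is, in effect, about which critical functions may legitimately sit on the on-the-loop branch at all, and whether an abort window short enough for machine-speed engagements still constitutes “meaningful” control.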

2. International Humanitarian Law & LAWS

2.1 IHL Principles Applied to LAWS

International humanitarian law (the laws of armed conflict) applies to all weapons and means of warfare, including autonomous systems. Key principles:

IHL Principle | Requirement | Challenge for LAWS
--- | --- | ---
Distinction | Parties must distinguish between combatants and civilians, and between military objectives and civilian objects | Can AI reliably distinguish combatants from civilians in complex environments? What about combatants who are hors de combat (surrendering, wounded)?
Proportionality | Attacks must not cause excessive civilian harm relative to the anticipated military advantage | Proportionality requires contextual judgment and the weighing of values — can AI make these inherently human assessments? Who programs the acceptable ratio?
Precaution | Parties must take all feasible precautions to minimize civilian harm | Does autonomous targeting satisfy the obligation to take “constant care”? Can an AI system take precautionary measures in dynamic situations?
Humanity (Martens Clause) | In cases not covered by specific rules, persons remain protected by “the principles of humanity and the dictates of public conscience” | Does delegating life-or-death decisions to machines violate the “dictates of public conscience”? The ICRC argues this clause is relevant to LAWS.

2.2 Legal Responsibility Gap

A central legal challenge is the “accountability gap”: if an autonomous weapon causes unlawful harm, who is legally responsible? The commander who deployed the system, the operator, the programmer, and the manufacturer have each been proposed, but every answer is contested, because criminal responsibility under international law generally requires intent or knowledge that is difficult to attribute when a system behaves unpredictably.

3. The CCW Process

3.1 Convention on Certain Conventional Weapons

The primary international forum for LAWS governance is the Group of Governmental Experts (GGE) on LAWS under the Convention on Certain Conventional Weapons (CCW). The CCW has been the venue for LAWS discussions since 2013:

Year | Development | Outcome
--- | --- | ---
2013 | First informal expert meeting on LAWS at the CCW | Initial discussions on scope and definitions
2014–2016 | Informal meetings of experts | Explored technical, military, legal, and ethical dimensions
2017 | GGE on LAWS formally established | Mandate to discuss LAWS within the CCW framework
2019 | GGE adopts 11 guiding principles | First agreed output; includes IHL applicability, human responsibility, accountability
2021–2023 | Continued GGE sessions; push for a legally binding instrument | Deep divisions between states; no consensus on a binding instrument; several states call for negotiations
2024 | GGE continues; growing frustration with the pace | Coalition of states (Austria, Costa Rica, etc.) push for alternative forums; UN General Assembly resolution on LAWS
2025 | UN General Assembly involvement | UNGA First Committee resolution requesting a Secretary-General report on LAWS; movement toward a UNGA-based process

3.2 The 11 Guiding Principles (2019)

The GGE’s 11 guiding principles represent the only consensus output to date:

  1. IHL continues to apply fully to all weapons systems, including potential LAWS
  2. Human responsibility for decisions on the use of weapons systems must be retained
  3. Accountability for developing, deploying, and using emerging weapons systems must be ensured in accordance with applicable international law
  4. Legal reviews of weapons under Article 36 of Additional Protocol I should be conducted
  5. Risk assessments and mitigation measures should be part of the design, development, testing, and deployment cycle
  6. Physical security, appropriate safeguards, and the risk of unauthorized acquisition should be considered
  7. Risk of an arms race should be taken into consideration
  8. Interaction between humans and machines should ensure IHL compliance
  9. When developing AI for military purposes, states should consider use of common terminologies and standards
  10. States should encourage dialogue on LAWS with relevant stakeholders
  11. The CCW provides the appropriate framework for dealing with LAWS

3.3 Divisions at the CCW

Position | States | Arguments
--- | --- | ---
Pre-emptive Ban | Austria, Costa Rica, Pakistan, Palestine, Colombia, and 30+ other states; Campaign to Stop Killer Robots | IHL requires human judgment; the accountability gap is insurmountable; arms race risks; moral imperative
New Legally Binding Instrument | ~70 states (various formulations); most Latin American, African, and some European states | Need binding rules with prohibitions on certain systems and regulations on others; meaningful human control standard
Political Declaration / Non-binding | US, UK, France, Australia, Israel, South Korea, Japan, India | Existing IHL is sufficient; non-binding norms preferred; technology is evolving too fast for rigid rules; premature to regulate
Oppose Regulation | Russia (blocks consensus at the CCW) | Premature to define LAWS; existing IHL sufficient; no need for a new instrument; opposes any binding restrictions

4. United States

4.1 DoD Directive 3000.09 (2023 Update)

The foundational US policy on autonomous weapons is DoD Directive 3000.09, “Autonomy in Weapon Systems,” originally issued in 2012 and updated in January 2023. The directive requires that autonomous and semi-autonomous weapon systems be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.

4.2 National Security Memorandum on AI (2024)

The White House National Security Memorandum on AI (NSM, October 2024) established broader AI governance for national security.

4.3 US Military AI Programs

Program | Service/Agency | Description | Autonomy Level
--- | --- | --- | ---
Replicator Initiative | DoD (Deputy Secretary) | Accelerate fielding of autonomous systems at scale; focus on “small, smart, cheap” autonomous platforms | Varies — some autonomous navigation; human-in-the-loop for engagement
CDAO (Chief Digital and AI Office) | DoD | Central AI governance office; responsible for AI strategy, data management, and AI adoption across DoD | Governance/oversight role
Project Maven | CDAO (originally NGA) | Computer vision for intelligence analysis — object detection and classification in imagery | Assistive — humans make targeting decisions
Collaborative Combat Aircraft (CCA) | US Air Force | AI-piloted drone wingmen designed to accompany crewed fighters | Autonomous flight; human authorization for weapons employment
DARPA AI Programs | DARPA | ACE (Air Combat Evolution); OFFSET (swarm tactics); LongShot (air-launched autonomous missile) | High autonomy in research; deployment rules unclear

5. China & Russia

5.1 China

China is simultaneously a major military AI developer and a participant in LAWS governance discussions:

Military AI Development

Governance Position

5.2 Russia

6. European Positions

6.1 European Parliament

The European Parliament has taken strong positions on LAWS, repeatedly calling for negotiations on a legally binding instrument and for meaningful human control over critical functions.

6.2 Key EU Member State Positions

State | Position on LAWS | Key Details
--- | --- | ---
France | Supports a political declaration (non-binding) | Major military AI investor; supports human control but opposes a binding ban
Germany | Supports regulation, not an outright ban | Supports legally binding rules with prohibitions and positive obligations; human control over critical functions; active in the CCW
Austria | Supports a pre-emptive ban | Leading advocate for prohibition; co-chairs the group calling for a new legally binding instrument; active in the Campaign to Stop Killer Robots
Netherlands | Supports meaningful human control | Advocates human judgment in each attack decision; co-hosted the first REAIM Summit (2023, The Hague) with South Korea; developing national military AI policy aligned with IHL
Belgium | Supports a legally binding instrument | Supports prohibitions on systems that cannot comply with IHL and regulations requiring human control over others

6.3 NATO

7. Other National Policies

Country | Military AI Policy | LAWS Position | Key Programs
--- | --- | --- | ---
United Kingdom | Defence AI Strategy (2022); AI-enabled capabilities | Supports a political declaration; opposes a binding treaty | Autonomous Warrior; Tempest AI wingman; DASA AI
Israel | Active developer and deployer of military AI | Opposes binding regulations | Iron Dome; Harpy/Harop loitering munitions; AI targeting
South Korea | AI-based defence innovation; Defense Reform 4.0 | Cautious; supports CCW discussions | SGR-A1 sentry robot (DMZ); autonomous naval vessels
Japan | Defense buildup including AI; 2022 National Security Strategy | Supports human involvement in LAWS decisions | AI for ISR, cyber defense, unmanned systems
Türkiye | Major autonomous weapons developer/exporter | Non-committal on binding rules | Bayraktar TB2; Kargu-2 (reported autonomous engagement)
India | AI in Defence task force; Defence AI Council | Cautious on binding commitments | DRDO autonomous systems; AI border surveillance
Australia | Defence Strategic Review (2023) emphasizes AI | Supports responsible AI use | AUKUS AI cooperation; Ghost Bat autonomous wingman

8. Military AI Applications

8.1 Categories of Military AI

Category | Applications | Autonomy Level | Governance Issues
--- | --- | --- | ---
ISR (intelligence, surveillance, reconnaissance) | Image/signal analysis; pattern recognition; object classification | High autonomy in analysis; human decision on action | Bias in target identification; civilian harm from misclassification
Autonomous Navigation | Self-driving military vehicles; autonomous drones; unmanned vessels | Full autonomy for movement; human control for engagement | Civilian interaction; GPS-denied environments
Targeting & Engagement | Target identification; kill-chain acceleration; weapons guidance | Ranges from assistive to potentially fully autonomous | Most contested; IHL compliance; meaningful human control
Cyber Operations | AI cyber defense; vulnerability discovery; autonomous response | Often fully autonomous (machine-speed decisions) | Escalation risks; collateral effects; attribution
Command & Control | Decision support; course-of-action generation; battle management | Assistive — AI recommends, humans decide | Automation bias; decision-making speed pressure
Logistics | Predictive maintenance; supply chain; automated resupply | High autonomy; low lethality concern | Generally less controversial; reliability

8.2 Loitering Munitions

The Loitering Munition Debate: Loitering munitions occupy a gray area in LAWS governance. Systems like the Israeli Harop, Turkish Kargu-2, and Iranian Shahed series can loiter, detect targets, and engage with varying human involvement. In the 2020 Libya conflict, a UN Panel of Experts report suggested a Kargu-2 may have autonomously engaged retreating forces — potentially the first documented autonomous weapons engagement without human authorization.

8.3 Nuclear Command & Control

9. Ethical Frameworks & Principles

Organization | Position | Key Arguments
--- | --- | ---
ICRC | New legally binding rules; prohibit unpredictable LAWS; require human control | IHL requires human judgment; meaningful human control; some AWS are inherently incapable of IHL compliance
Campaign to Stop Killer Robots | Pre-emptive ban on fully autonomous weapons | Machines should not make life-or-death decisions; accountability gap; arms race; digital dehumanization
UN Secretary-General | Legally binding instrument by 2026 | Has called LAWS morally repugnant and politically unacceptable
REAIM Summit (2023) | Call to Action; 50+ state endorsements | Non-binding; human responsibility; IHL compliance; testing
AI Industry | Varies; some companies have pledged not to develop lethal autonomous weapons | Google withdrew from Project Maven; ongoing pressure on tech companies

10. Comparative Analysis

Dimension | USA | China | Russia | EU
--- | --- | --- | --- | ---
Investment | $18B+ (2024) | ~$10B+ (opaque) | $2–5B (est.) | Varies by state
Policy | DoD 3000.09; NSM | Military-Civil Fusion | National AI Strategy | No EU-wide policy
Human Control | “Appropriate levels of human judgment” | Rhetorical support; unclear | No clear standard | NATO principles
Treaty Position | Opposes a binding treaty | Supports some regulation | Opposes; blocks CCW consensus | EP supports a ban; member states divided
Deployment | Active (ISR, defense) | Active development | Active (Ukraine, Syria) | Limited

