Comprehensive guide to the governance of artificial intelligence in military applications — lethal autonomous weapons systems (LAWS), international humanitarian law, the CCW process, national military AI policies, and the global debate on human control over the use of force.
Last updated: February 2026
Military AI represents one of the most consequential and contested domains of AI governance. The development and deployment of AI in military contexts — from intelligence analysis and logistics to targeting and autonomous weapons — raises fundamental questions about human control over lethal force, compliance with international humanitarian law (IHL), strategic stability, and the future of warfare.
1.1 Key Definitions
| Term | Definition | Context |
|------|------------|---------|
| Lethal Autonomous Weapons Systems (LAWS) | Weapons systems that can select and engage targets without meaningful human control. No universally agreed definition exists, which is itself a central issue in negotiations. | Primary focus of CCW discussions; some call these “killer robots” |
| Autonomous Weapons System (AWS) | Broader term for weapons with autonomous functions in critical phases (target identification, tracking, selection, engagement). May include varying degrees of human involvement. | Used in academic and policy literature |
| Human-on-the-Loop | Humans can monitor the system and intervene to override or abort, but the system can act without prior human authorization for each individual engagement. | Many current military systems operate this way (e.g., missile defense) |
| Human-in-the-Loop | Humans must authorize each individual use of force. The system cannot engage without explicit human command. | Traditional weapons operation; many advocate this as the minimum standard for LAWS |
| Meaningful Human Control | Humans retain sufficient understanding, oversight, and ability to intervene in autonomous weapons decisions to ensure IHL compliance. The concept is central to the governance debate but not precisely defined. | ICRC, many states, and civil society advocate this as the standard |
| Critical Functions | Functions directly related to the application of force: target detection, identification, selection, tracking, and engagement. | Key distinction from non-critical autonomous functions (navigation, logistics) |
1.2 Spectrum of Autonomy
The Autonomy Spectrum: Military AI systems exist on a spectrum from fully human-controlled to fully autonomous. Most current systems are semi-autonomous — they perform some functions autonomously (target detection, tracking) while requiring human authorization for others (engagement). The governance debate centers on where to draw the line: which functions must remain under human control, and what constitutes “meaningful” human control? The answer has profound implications for military effectiveness, civilian protection, and international stability.
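The difference between human-in-the-loop and human-on-the-loop is easy to blur in prose, so a minimal sketch may help. The Python below is purely illustrative: the `HumanRole`, `EngagementRequest`, and `may_engage` names are hypothetical and model no real weapon system or policy text. What it captures is the flipped default between the two supervised modes: in-the-loop requires an affirmative human decision for each engagement, while on-the-loop engages unless a human intervenes.

```python
from dataclasses import dataclass
from enum import Enum, auto

class HumanRole(Enum):
    IN_THE_LOOP = auto()      # a human must authorize each individual engagement
    ON_THE_LOOP = auto()      # a human supervises and may abort, but need not authorize each one
    OUT_OF_THE_LOOP = auto()  # no human involvement once the system is activated

@dataclass
class EngagementRequest:
    target_id: str
    human_authorized: bool = False  # affirmative per-engagement human decision
    human_aborted: bool = False     # supervising human intervened to stop this engagement

def may_engage(role: HumanRole, request: EngagementRequest) -> bool:
    """Illustrative gate: under which human-control mode may the system engage?"""
    if role is HumanRole.IN_THE_LOOP:
        # Default is NO: engagement happens only after an explicit human decision.
        return request.human_authorized
    if role is HumanRole.ON_THE_LOOP:
        # Default is YES: engagement proceeds unless a human intervenes in time.
        return not request.human_aborted
    # OUT_OF_THE_LOOP: the system applies its own criteria once activated.
    return True

# The same silent human produces opposite outcomes under the two supervised modes:
silent = EngagementRequest(target_id="T1")
assert may_engage(HumanRole.IN_THE_LOOP, silent) is False
assert may_engage(HumanRole.ON_THE_LOOP, silent) is True
```

That flipped default is one reason advocates of meaningful human control argue that the mere presence of a supervising human is not equivalent to a human decision over each use of force.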
2. International Humanitarian Law & LAWS
2.1 IHL Principles Applied to LAWS
International humanitarian law (the laws of armed conflict) applies to all weapons and means of warfare, including autonomous systems. Key principles:
| IHL Principle | Requirement | Challenge for LAWS |
|---------------|-------------|--------------------|
| Distinction | Parties must distinguish between combatants and civilians, and between military objectives and civilian objects | Can AI reliably distinguish combatants from civilians in complex environments? What about combatants who are hors de combat (surrendering, wounded)? |
| Proportionality | Attacks must not cause excessive civilian harm relative to the anticipated military advantage | Proportionality requires contextual judgment and the weighing of values; can AI make these inherently human assessments? Who programs the acceptable ratio? |
| Precaution | Parties must take all feasible precautions to minimize civilian harm | Does autonomous targeting satisfy the obligation to take “constant care”? Can an AI system take precautionary measures in dynamic situations? |
| Humanity (Martens Clause) | In cases not covered by specific rules, persons are protected by “the principles of humanity and the dictates of public conscience” | Does delegating life-or-death decisions to machines violate the “dictates of public conscience”? The ICRC argues this clause is relevant to LAWS. |
2.2 Legal Responsibility Gap
A central legal challenge is the “accountability gap” — if an autonomous weapon causes unlawful harm, who is legally responsible?
- Commander: Under command responsibility doctrine, commanders are responsible for the actions of forces under their control. But does this extend to autonomous systems they deployed but did not directly control?
- Operator: If a human authorized deployment but the system selected specific targets autonomously, is the operator responsible for each engagement?
- Programmer/Manufacturer: If unlawful targeting results from algorithmic design choices, do developers bear criminal liability?
- The Machine: Current international criminal law requires a human accused; machines cannot be prosecuted. This creates a potential gap in which no human can be held accountable.
3. The CCW Process
3.1 Convention on Certain Conventional Weapons
The primary international forum for LAWS governance is the Group of Governmental Experts (GGE) on LAWS under the Convention on Certain Conventional Weapons (CCW). The CCW has been the venue for LAWS discussions since 2013:
| Year | Development | Outcome |
|------|-------------|---------|
| 2013 | CCW High Contracting Parties agree to convene informal expert meetings on LAWS | Initial discussions on scope and definitions mandated |
| 2014-2016 | Informal meetings of experts | Explored technical, military, legal, and ethical dimensions |
| 2017 | GGE on LAWS formally established | Mandate to discuss LAWS within the CCW framework |
| 2019 | GGE adopts 11 guiding principles | First agreed output; includes IHL applicability, human responsibility, accountability |
| 2021-2023 | Continued GGE sessions; push for a legally binding instrument | Deep divisions between states; no consensus on a binding instrument; several states call for negotiations |
| 2024 | GGE continues; growing frustration with the pace | Coalition of states (Austria, Costa Rica, etc.) pushes for alternative forums; UN General Assembly resolution on LAWS |
| 2025 | UN General Assembly involvement | UNGA First Committee resolution requesting a Secretary-General report on LAWS; movement toward a UNGA-based process |
3.2 The 11 Guiding Principles (2019)
The GGE’s 11 guiding principles represent the only consensus output to date:
1. IHL continues to apply fully to all weapons systems, including potential LAWS
2. Human responsibility for decisions on the use of weapons systems must be retained
3. Accountability for developing, deploying, and using emerging weapons systems must be ensured in accordance with applicable international law
4. Legal reviews of weapons under Article 36 of Additional Protocol I should be conducted
5. Risk assessments and mitigation measures should be part of the design, development, testing, and deployment cycle
6. Physical security, appropriate safeguards, and the risk of unauthorized acquisition should be considered
7. Risk of an arms race should be taken into consideration
8. Interaction between humans and machines should ensure IHL compliance
9. When developing AI for military purposes, states should consider use of common terminologies and standards
10. States should encourage dialogue on LAWS with relevant stakeholders
11. The CCW provides the appropriate framework for dealing with LAWS
3.3 Divisions at the CCW
| Position | States | Arguments |
|----------|--------|-----------|
| Pre-emptive Ban | Austria, Costa Rica, Pakistan, Palestine, Colombia, 30+ states; Campaign to Stop Killer Robots | IHL requires human judgment; the accountability gap is insurmountable; arms-race risks; moral imperative |
| New Legally Binding Instrument | ~70 states (various formulations); most Latin American and African states, and some European states | Need binding rules with prohibitions on certain systems and regulations on others; meaningful human control standard |
| Political Declaration / Non-binding | US, UK, France, Australia, Israel, South Korea, Japan, India | Existing IHL is sufficient; non-binding norms preferred; technology evolving too fast for rigid rules; premature to regulate |
| Oppose Regulation | Russia (blocks consensus at CCW) | Premature to define LAWS; existing IHL sufficient; no need for a new instrument; opposes any binding restrictions |
4. United States
4.1 DoD Directive 3000.09 (2023 Update)
The foundational US policy on autonomous weapons is DoD Directive 3000.09, “Autonomy in Weapon Systems,” originally issued in 2012 and updated in January 2023:
- Scope: Applies to all DoD components developing or fielding autonomous and semi-autonomous weapons
- Key requirement: Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise “appropriate levels of human judgment” over the use of force
- Senior review: Autonomous weapon systems (those that can select and engage targets without further human input after activation) require senior-level review and approval before development and fielding
- 2023 changes: Expanded coverage to include AI-enabled systems; updated testing requirements; added provisions for cybersecurity and unintended engagements; streamlined the approval process
4.2 National Security Memorandum on AI (2024)
The White House National Security Memorandum on AI (NSM, October 2024) established broader AI governance for national security:
- Responsible military AI use: complements the Political Declaration on Responsible Military Use of AI and Autonomy (launched at the REAIM Summit, February 2023, and endorsed by 50+ states)
- AI safety and security requirements: mandates testing and evaluation of AI systems before deployment in national security contexts
- Guardrails: AI shall not be used to “circumvent or undermine” human control requirements; specific prohibitions on autonomous nuclear launch decisions
4.3 US Military AI Programs
| Program | Service/Agency | Description | Autonomy Level |
|---------|----------------|-------------|----------------|
| Replicator Initiative | DoD (Deputy Secretary) | Accelerate fielding of autonomous systems at scale; focus on “small, smart, cheap” autonomous platforms | Varies; some autonomous navigation, human-in-the-loop for engagement |
| CDAO (Chief Digital and AI Office) | DoD | Central AI governance office; responsible for AI strategy, data management, and AI adoption across DoD | Governance/oversight role |
| Project Maven | CDAO (originally NGA) | Computer vision for intelligence analysis; object detection and classification in imagery | Assistive; humans make targeting decisions |
| Collaborative Combat Aircraft (CCA) | US Air Force | AI-piloted drone wingmen designed to accompany crewed fighters | Autonomous flight with human authorization for weapons employment; high autonomy in research, deployment rules unclear |
5. China & Russia
5.1 China
China is simultaneously a major military AI developer and a participant in LAWS governance discussions:
Military AI Development
- Military-Civil Fusion (MCF): National strategy explicitly links civilian AI advances to military applications. Major tech companies (Baidu, Alibaba, Tencent, SenseTime, iFlytek) are designated as national AI champions with military relevance.
- PLA AI integration: People’s Liberation Army is incorporating AI into intelligence, surveillance, reconnaissance (ISR), electronic warfare, logistics, and decision support
- Autonomous systems: Extensive development of autonomous drones, unmanned surface vessels, and ground robots with varying autonomy levels
- AI-enabled command and control: “Intelligentized warfare” (智能化战争) is the PLA’s vision for AI-transformed military operations
Governance Position
- CCW position: China has expressed support for “some form of” regulation of LAWS, but definitions and details remain vague. China submitted a position paper in 2018 calling for a ban on the use of fully autonomous lethal weapons (not development or production)
- Meaningful human control: China nominally supports human control over LAWS but has not endorsed specific formulations
- Strategic ambiguity: China maintains flexibility in its positions, avoiding commitments that would constrain its own military AI programs
5.2 Russia
- CCW position: Russia has been the primary blocker of consensus at the CCW on LAWS regulation. Opposes any legally binding instrument; argues existing IHL is sufficient; questions whether LAWS exist or will exist
- Military AI development: National AI Strategy includes military applications; Uran-9 unmanned ground vehicle; various autonomous drone projects; AI for electronic warfare
- Practical deployment: Evidence of autonomous or semi-autonomous systems in Syria and Ukraine, including loitering munitions
- Rhetoric vs. practice: Publicly cautious about autonomous weapons while actively developing and deploying AI-enabled military systems
6. European Positions
6.1 European Parliament
The European Parliament has taken strong positions on LAWS:
- 2018 Resolution: Called for an international ban on LAWS and urged EU member states to negotiate a legally binding instrument at the CCW
- 2021 Resolution on AI in Defence: Emphasized human oversight over autonomous weapons; called for an EU common position
- EU AI Act: Military applications are explicitly excluded from the AI Act’s scope (Article 2(3)), though dual-use systems may be covered
6.2 Key EU Member State Positions
| State | Position on LAWS | Key Details |
|-------|------------------|-------------|
| France | Supports “political declaration” (non-binding) | Major military AI investor; supports human control but opposes a binding ban |
| Germany | Supports regulation, not outright ban | Supports legally binding rules with prohibitions and positive obligations; human control over critical functions; active in CCW |
| Austria | Supports pre-emptive ban | Leading advocate for prohibition; co-chairs group calling for a new legally binding instrument; active in the Campaign to Stop Killer Robots |
| Netherlands | Supports meaningful human control | Advocates for human judgment in each attack decision; hosted the first REAIM Summit (The Hague, 2023); developing national military AI policy aligned with IHL |
| Belgium | Supports legally binding instrument | Supports prohibitions on systems that cannot comply with IHL; regulations requiring human control over others |
6.3 NATO
- AI Strategy (2021): NATO adopted its first AI strategy, committing to “responsible use” of AI aligned with international law
- Principles of Responsible Use (2021): Six principles: lawfulness, responsibility and accountability, explainability and traceability, reliability, governability, bias mitigation
- DIANA (Defence Innovation Accelerator for the North Atlantic): NATO’s innovation program includes AI as a key technology area
- AI certification: NATO is developing AI certification standards for interoperable allied systems
7. Other National Policies
| Country | Military AI Policy | LAWS Position | Key Programs |
|---------|--------------------|---------------|--------------|
| United Kingdom | Defence AI Strategy (2022); AI-enabled capabilities | Supports political declaration; opposes binding treaty | Autonomous Warrior; Tempest AI wingman; DASA AI |
| Israel | Active developer and deployer of military AI | Opposes binding regulations | Iron Dome; Harpy/Harop loitering munitions; AI targeting |
The Loitering Munition Debate: Loitering munitions occupy a gray area in LAWS governance. Systems like the Israeli Harop, Turkish Kargu-2, and Iranian Shahed series can loiter, detect targets, and engage with varying human involvement. In the 2020 Libya conflict, a UN Panel of Experts report suggested a Kargu-2 may have autonomously engaged retreating forces — potentially the first documented autonomous weapons engagement without human authorization.
8.3 Nuclear Command & Control
- US policy: NSM (2024) explicitly prohibits autonomous nuclear launch decisions
- Dead Hand (Perimetr): Russia’s semi-automated nuclear response system raises AI-in-nuclear-command questions
- AI for early warning: Nuclear-armed states use AI in early-warning systems, raising false-positive escalation risks
- Consensus: Broad agreement that nuclear decisions must remain fully under human control
9. Ethical Frameworks & Principles
| Organization | Position | Key Arguments |
|--------------|----------|---------------|
| ICRC | New legally binding rules; prohibit unpredictable LAWS; require human control | IHL requires human judgment; meaningful human control; some AWS are inherently incapable of IHL compliance |
| Campaign to Stop Killer Robots | Pre-emptive ban on fully autonomous weapons | Machines should not make life-or-death decisions; accountability gap; arms race; digital dehumanization |
| UN Secretary-General | Legally binding instrument by 2026 | Has called LAWS morally repugnant and politically unacceptable |
| REAIM Summit (2023) | Call to Action; 50+ state endorsement | Non-binding; human responsibility; IHL compliance; testing |
| AI Industry | Varies; some have pledged not to develop lethal autonomous weapons | Google withdrew from Project Maven; ongoing pressure on tech companies |
10. Comparative Analysis
| Dimension | USA | China | Russia | EU |
|-----------|-----|-------|--------|----|
| Investment | $18B+ (2024) | ~$10B+ (opaque) | $2-5B (est.) | Varies by state |
| Policy | DoD 3000.09; NSM | Military-Civil Fusion | National AI Strategy | No EU-wide policy |
| Human Control | “Appropriate levels of human judgment” | Rhetorical support; unclear | No clear standard | NATO principles |
| Treaty Position | Opposes binding treaty | Supports some regulation | Opposes; blocks CCW | EP supports ban; states divided |
| Deployment | Active (ISR, defense) | Active development | Active (Ukraine, Syria) | Limited |
11. Trends & Future Outlook
Moving Beyond the CCW
Russia’s blocking has pushed momentum toward alternative forums, including UNGA resolutions, regional treaties, and coalitions of the willing, mirroring the processes that produced the mine ban and cluster munitions treaties.
Ukraine Conflict Acceleration
The conflict in Ukraine has dramatically accelerated military AI development globally. AI-enabled drones, electronic warfare, and ISR systems are deployed at unprecedented speed, creating urgency for governance.
Proliferation Risk
Autonomous weapons technology is increasingly accessible. Commercial drones modified for military use are widespread. Proliferation to non-state actors raises governance challenges that state-centric frameworks struggle to address.
Swarm Technology
Autonomous swarms raise unique governance questions. A swarm that collectively makes targeting decisions may have no single human in the loop for each engagement, even if a human authorized the mission. Existing frameworks do not adequately address swarm autonomy.
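A minimal sketch may make this gap concrete. The Python below is purely illustrative (`Mission`, `SwarmEngagement`, and `audit` are hypothetical names; no real system is modeled): a single mission-level human authorization exists, yet an audit of individual engagements finds no per-engagement human decision anywhere.

```python
from dataclasses import dataclass

@dataclass
class Mission:
    mission_id: str
    authorized_by_human: bool  # the single, mission-level human decision

@dataclass
class SwarmEngagement:
    target_id: str
    voting_agents: int            # swarm members that jointly selected this target
    human_decision: bool = False  # per-engagement human authorization (none by default)

def audit(mission: Mission, engagements: list[SwarmEngagement]) -> list[str]:
    """Flag engagements whose only human authorization is the mission-level one."""
    findings = []
    for e in engagements:
        if mission.authorized_by_human and not e.human_decision:
            findings.append(
                f"{mission.mission_id}/{e.target_id}: selected by {e.voting_agents} "
                "agents, no human decision for this specific engagement"
            )
    return findings

# A formally authorized mission in which every engagement was machine-selected:
mission = Mission("M1", authorized_by_human=True)
strikes = [SwarmEngagement("T1", voting_agents=12), SwarmEngagement("T2", voting_agents=9)]
for finding in audit(mission, strikes):
    print(finding)
```

Frameworks built around a single operator authorizing a single engagement have no obvious place to attach responsibility in this structure, which is why swarm autonomy is often cited as a test case for any future instrument.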