How Artificial Intelligence Impacts Deterrence Stability: A Realistic Assessment

Despite a broad consensus on human-in-the-loop control, the modern reality is that machine learning and Artificial Intelligence (AI) are increasingly integrated into nuclear command systems. The integration of AI into military architectures presents significant risks to strategic and crisis stability. While AI can enhance data processing and intelligence, its speed and potential for error create new pathways to unintended nuclear escalation. In this policy brief, Dr Manpreet Sethi identifies four key pathways through which AI-enabled military systems could destabilise nuclear deterrence:

  • Threats to second-strike survivability through enhanced intelligence and targeting capabilities;
  • Vulnerabilities in nuclear command, control, and communications (NC3), whether by breaking NC3 encryption or by fabricating a nuclear threat that triggers escalation;
  • Compressed decision timelines that reduce opportunities for de-escalation;
  • Inflated perceptions of strategic advantage leading to riskier, asymmetric escalation postures to avoid neutralisation.

Sethi emphasises that while AI’s integration into military operations is irreversible, its risks can be managed through deliberate policy choices. The brief recommends several measures including maintaining air-gaps between launch commands and early warning systems, using AI only as a decision support tool rather than a replacement for human judgement, prohibiting autonomous weapons systems for nuclear delivery, and banning cyberattacks on NC3 systems and crisis communication channels.

The policy brief notes recent diplomatic developments, including the September 2024 Responsible AI in the Military Domain (REAIM) conference in Seoul and the November 2024 Biden-Xi affirmation on maintaining human control over nuclear weapons decisions, as positive steps. It calls for all nuclear-armed states to commit to retaining human responsibility over nuclear decision-making.

About the Author

Manpreet Sethi, PhD, is Senior Research Adviser at APLN and a Distinguished Fellow at the Centre for Aerospace Power and Strategic Studies (CAPSS), New Delhi. She has published over 130 papers on nuclear issues and served as a member of the International Group of Eminent Persons established by former Japanese Prime Minister Fumio Kishida to identify pathways to a nuclear weapons-free world. She is a member of the Science and Security Board of the Bulletin of the Atomic Scientists.

The opinions articulated above represent the views of the author and do not necessarily reflect the position of the Asia-Pacific Leadership Network or any of its members.


Image: Artificial Intelligence and nuclear decision-making. Credit: iStock/guirong hao.
