AI is Quietly Reshaping Nuclear Risk in South Asia

Artificial intelligence (AI) is not yet running nuclear launch systems in South Asia. But the rapid integration of AI into conventional military systems in India and Pakistan is already reshaping escalation dynamics, potentially affecting nuclear decision-making. The real risk isn’t a science-fiction scenario where machines start wars. Instead, it’s how AI will influence the speed of weapon systems, increase plausible deniability, shorten decision-making timelines, and shape perceptions of strategic advantage. India and Pakistan, however, don’t need broad arms control agreements to lower AI-related risks. This commentary describes how India and Pakistan’s integration of AI into their respective militaries has changed the regional environment, increasing the need for credible “guardrails” around AI-enabled systems that would both reduce escalation risks and acknowledge mutual vulnerabilities.

The Automation of the Conventional Battlefield

India has been explicit about mainstreaming AI across its armed forces. In 2019, an executive order created a high-level Defence AI Council (chaired by the defence minister) and a Defence AI Project Agency to drive adoption, along with steps to professionalise data management and build testbeds and training pipelines. Automated air-defense networks, AI-assisted intelligence, surveillance, and reconnaissance (ISR), counter-drone systems, and tests with loitering munitions are now part of a more interconnected Indian conventional force. These systems offer faster response times, better targeting, and greater battlefield awareness. Pakistan, though operating with fewer resources, has also been proactive. The Pakistan Air Force has established dedicated AI research centers and, in February 2026, completed the “Golden Eagle” exercise focused on AI-enabled, net-centric operations, integrating cyber, space, and electromagnetic spectrum operations and featuring manned–unmanned teaming with strike drones and loitering munitions in a “highly contested, congested and degraded environment.” The Pakistan Army is also investing in AI-enabled ISR, cyber capabilities, and drone technologies – primarily to counter India’s growing technological advantage.

At first glance, these developments seem confined to the conventional realm. But in a rivalry where conflicts quickly escalate from conventional clashes to nuclear signaling, the boundaries between conventional and nuclear are blurred. AI accelerates the pace of war – and pace matters in South Asia.

Compressed Timelines and Escalation Risks

The most immediate impact of AI integration is increased speed. Automated air-defense systems and AI-assisted targeting streamline the “observe–orient–decide–act” loop. In a rapid cross-border exchange, commanders might have only seconds to verify a threat. Automation can minimise certain human errors, but it also introduces new risks, such as automation bias.

Automation bias – the tendency to over-trust machine outputs – becomes especially risky in crises marked by deception, electronic warfare, and incomplete information. A radar misclassification, a spoofed signal, or a drone swarm could produce a misleading operational picture. If retaliation occurs without full verification, escalation might outpace political leaders’ ability to intervene. India and Pakistan have repeatedly demonstrated – most recently in the May 2025 crisis – how quickly confrontations can intensify. AI-enabled systems further shorten the fuse.

ISR, Counterforce Anxiety, and “Use-It-or-Lose-It” Pressures

AI-enhanced ISR systems – integrating satellites, drones, pattern recognition, and predictive analytics – offer better tracking of mobile assets. However, they also increase concerns about counterforce capabilities.

If one side believes the other can reliably detect and target key military nodes – such as airbases, command centers, or missile units – it may fear being vulnerable at the outset of a conflict. That fear can create pressure to act first, especially during a crisis when signals are unclear. Even if nuclear forces stay under strict human control, perceptions of vulnerability in conventional forces can influence nuclear decision-making. Leaders who fear that their command-and-control systems are being monitored or targeted may feel compelled to escalate quickly.

AI doesn’t need to be inside nuclear command and control to impact nuclear stability. It only has to change perceptions of survivability. The stability–instability paradox becomes more pronounced as battlefield transparency improves and reaction times decrease.

Drone Warfare and the Fog of Attribution

It’s no secret that drones have changed warfare. Drones were initially employed to test radars and air-defense networks, and even to map an opponent’s defensive architecture. AI has significantly improved the quality, capability, and efficiency of drones. In the case of India and Pakistan, both are increasingly integrating AI into navigation, targeting, and swarm coordination.

Even though it is becoming easier to determine which kinds of drones a country uses (e.g., India has used Israeli-made Heron drones, and Pakistan has used Turkish Bayraktar TB2 and Chinese-made CH-4 drones), drones still create ambiguity. Attribution can be unclear, especially when launch sites are contested or hidden, and proxy actors make the situation even more complex. Cross-border drone incidents between India and Pakistan are now routine, which has lowered the perceived cost of retaliation. However, the same ambiguity that allows deniability in peacetime can cause deadly misunderstandings in a crisis. A misattributed drone strike, combined with automated air-defence responses, could escalate beyond what was intended.

Cyber, AI, and Nuclear Entanglement

AI is also transforming cyber operations – speeding up vulnerability discovery and improving spoofing and disruption methods. Even without intentionally targeting nuclear command-and-control systems, cyber operations by India or Pakistan against dual-use or adjacent infrastructure could lead to dangerous complications. For instance, if communications are compromised or early-warning data is perceived as manipulated, leaders may feel compelled to act quickly rather than risk paralysis. In South Asia, where decision-making windows are already short, cyber-induced ambiguity combined with AI-driven automation increases the risk of accidental escalation.

Diverging Approaches to Guardrails

India and Pakistan publicly articulate different approaches to AI governance. Pakistan has been among the more vocal advocates for legally binding international constraints on lethal autonomous weapons, arguing that meaningful human control must be preserved. Islamabad frames unconstrained autonomy as destabilising, particularly in regions marked by enduring rivalry. India emphasises responsible use within existing international humanitarian law frameworks. New Delhi participates in multilateral discussions on autonomous weapons but resists categorical bans, preferring flexible, non-binding norms that preserve technological options.

Despite these differences, there is a crucial point of convergence: neither country has signaled support for autonomous nuclear decision-making. The emerging global norm that humans – not machines – must retain control over decisions on nuclear use provides a foundation for regional risk-reduction efforts. The challenge, however, lies in translating principles into practice.

Practical Guardrails for a Rapidly Changing Battlefield

India and Pakistan do not need sweeping arms control agreements to reduce AI-related risks. Several bilateral, coordinated steps could meaningfully lower escalation pressures.

First, both states could publicly reaffirm that nuclear launch authority will remain under meaningful human control, with no delegation to autonomous systems. Even declaratory commitments clarify redlines.

Second, the two sides can tailor crisis communication for AI-driven incidents. For example, both could establish a drone incident prevention mechanism – a dedicated channel for rapid clarification, attribution procedures, and de-escalation steps when UAVs cross borders. Traditional hotlines assume human-paced attribution, but future incidents may involve swarms, spoofed tracks, or AI-generated “confidence,” so a hotline dedicated to drone incidents would reduce escalation risks. Similarly, both militaries could adopt internal “pause-and-verify” protocols for automated air-defense systems, requiring secondary confirmation before expanding target sets during ambiguous alerts. A bilateral protocol for rapid clarification – what systems were engaged, what was spoofed, what was misidentified – could prevent an algorithmic error from becoming a political point of no return.

Third, informal understandings not to target nuclear command-and-control (NC2)–adjacent infrastructure with cyber operations could help reduce worst-case fears. Cyber operations that intersect with these AI-enabled systems could amplify uncertainty – especially if they degrade data integrity or create false signals. Informal restraints on targeting NC2-adjacent infrastructure therefore act as a buffer against the most dangerous pathways to escalation, helping ensure that technological competition does not outpace strategic stability.

These measures do not require deep trust, but they do require mutual recognition of shared vulnerability.

A Quiet but Decisive Shift

AI is quietly but decisively transforming South Asia’s military landscape. The region is not on the brink of autonomous nuclear launch systems. Yet, escalation dynamics are shifting beneath the surface. Speed, automation, and opacity are replacing slower, more deliberate forms of signaling. The danger lies not in malevolent machines but in accelerated miscalculation.

India and Pakistan have long managed deterrence amid crises, proxy violence, and conventional clashes. AI does not make nuclear war inevitable. But it does raise the premium on clear guardrails. In a region where escalation ladders are steep and decision windows narrow, the difference between stability and catastrophe may increasingly hinge on how well humans remain in control. The politics of the region make establishing guardrails seem impossible, but the speed at which AI is being used demands that both countries come together to lower escalation risks.


About the Author

Dr Sahar Khan is an independent analyst and an expert on U.S. grand strategy, nonproliferation and arms control, nuclear security, crisis management, and South Asian regional politics. She has served as the Deputy Director and Senior Fellow of the South Asia Program at the Stimson Center (2023-2025), Research Fellow in the Defense and Foreign Policy Studies department at the Cato Institute (2021-2023), and Managing Editor of Inkstick Media (2020-2023). Her writing has been featured in The Diplomat, Newsweek, Axios, The National Interest, Inkstick Media, Duck of Minerva, and Cato Unbound, among others. She has a Ph.D. in political science from the University of California, Irvine, an M.P.P. from the University of Chicago’s Harris School of Public Policy, where she focused on nuclear security safeguards, and a B.A. from Ohio Wesleyan University.

The opinions articulated above represent the views of the author and do not necessarily reflect the position of the Asia-Pacific Leadership Network or any of its members. APLN’s website is a source of authoritative research and analysis and serves as a platform for debate and discussion among our senior network members, experts, and practitioners, as well as the next generation of policymakers, analysts, and advocates. Comments and responses can be emailed to apln@apln.network.

Image: Indian army soldiers carry drones after a display of weapons and drones used in “Operation Sindoor” during the Know Your Army program ahead of Republic Day in Baramulla, Jammu and Kashmir, India, on January 25, 2026. (Photo by Nasir Kachroo/NurPhoto via Getty Images)
