The Manifold Implications of the AI-Nuclear Nexus

Author: Wilfred Wan

In recent years, multilateral discussion on the integration of artificial intelligence (AI) technologies into nuclear weapons systems has centred on the proverbial nuclear button. The terrifying image of a machine deciding to use nuclear weapons of its own volition presents a clear and effective entry point into the topic. France, the United Kingdom, and the United States, for instance, produced a working paper in July 2022 for the Tenth Review Conference of the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) pledging to maintain a policy of “human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment.” US officials have called on China and Russia to issue similar statements committing not to defer nuclear use decisions to machines. Yet views on the value of such a commitment are not settled even among like-minded states, and questions remain about precisely what human control and involvement mean, and which actions would be considered critical. Indeed, parsing the ‘nuclear button’ scenario reveals the many facets and complexities of the AI-nuclear nexus, with yet-to-be-determined impacts on nuclear risk, strategic stability, arms control, and disarmament. How, then, might states manoeuvre at the AI-nuclear nexus?

Given significant advances in the last decade, the military value of AI technologies has become readily apparent. This is true in the nuclear context as well: self-optimizing systems that allow for greater efficiency in pattern recognition, data matching, and other tasks can boost the performance of early warning systems, intelligence and target analysis, network defence, and other auxiliary decision-making or supportive processes. Indeed, the question is “not if, but when, how and by whom” AI will be adopted in nuclear force architecture. Notably, while some of the cited applications may not be ‘critical’, AI integration in these contexts will still have significant effects on the data and information that guide decision-making, including in crisis situations. Further, AI integration into non-nuclear platforms and systems can affect the operating environment in which nuclear decision-making takes place. Considering this wide range of potential applications, there are intrinsic challenges in isolating AI’s impact, especially as such integration is likely to take place gradually. Exacerbating issues with assessment are the secrecy surrounding nuclear weapons programmes, the intangible nature of AI, the different AI technologies and techniques that may be integrated, and the degree to which these may be adopted by states.

The many uncertainties associated with the AI-nuclear nexus require a broad approach to considering its implications, opportunities and risks alike. These can be examined through the lens of nuclear risk and strategic stability, the latter in this context essentially referring to the ways in which AI-enabled systems can intensify a state’s concerns about the vulnerability of its nuclear forces and its susceptibility to a debilitating first strike (‘crisis stability’), or alter its incentives to build up its nuclear forces (‘arms race stability’). Arguably, AI can drive these outcomes through changes to 1) capabilities, 2) behaviours, and 3) processes. These are considered briefly in turn.

1.) The impact of AI integration on capabilities

In theory, AI could challenge the ‘assuredness’ of a nuclear-armed state’s second-strike capabilities, thereby undermining strategic stability. This would be the case, for example, if AI-enabled uncrewed underwater vehicles or maritime intelligence, surveillance, and reconnaissance capabilities significantly enhance an adversary’s ability to detect and attack nuclear-armed ballistic missile submarines (SSBNs), regarded as the most survivable nuclear delivery platform precisely because of their ability to hide in vast oceans. In addition, AI-fuelled cyber operations could enable more frequent and powerful ‘left of launch’ operations, or target early warning systems. Conversely, AI could help a state protect its forces by strengthening its defences against such operations. Indeed, the widespread applicability of AI across well-matched adversaries may altogether negate its impact on risk and strategic stability. Yet greater reliance on AI in general can also create new entry points for external third-party interference (e.g. hacking, spoofing) that could undermine networks and communications linked to nuclear forces.

2.) The impact of AI integration on behaviours

The integration of AI in military systems is likely to affect human behaviours. From one perspective, AI-enabled efficiency can provide decision-makers with a wider-ranging and higher-quality set of data, lessening the need for human interpretation and enhancing predictability even in times of crisis. However, greater efficiency across the board may lead to timeline compression, especially as AI algorithms push for immediate responses to information inputs, potentially accelerating crisis scenarios. In such circumstances, there could be greater pressure on decision-makers to act quickly and decisively, including in an escalatory manner. Further, as decision-support functions become more mechanized, reliance on AI could lessen the initiative and flexibility of human operators, who may be less equipped to push back on recommended courses of action, especially as AI algorithms grow more sophisticated and less explainable. Operators may also be less inclined towards de-escalation as their ownership of decisions wanes.

3.) The process of AI integration

Some have argued that AI technologies, including deep learning, are likely to be integrated into the software and hardware of a variety of nuclear command, control, and communications systems, as a means of enhancing not only their performance but also their safety, security, and reliability. Yet this is just one potential outcome, and the smoothness of the integration process itself is perhaps too often taken for granted. Algorithms, for instance, are subject to human bias and programming errors, and integration into critical systems may simply exacerbate these. There remain unknowns about the performance of systems in unfamiliar environments and compressed timeframes; some have characterized normal accidents as the ‘inevitable’ consequences of complex systems. Issues with the safety and reliability of AI technologies, along with over-delegation to and overdependence on them, would be more manifest with premature deployment, a distinct possibility given action-reaction dynamics in the strategic context and the desire of states to achieve first-mover advantage.

Asymmetries in AI and nuclear capabilities, seemingly endless gradients of AI integration, and myriad unknowns regarding their implications all present challenges for international efforts to mitigate risk linked to the AI-nuclear nexus. Political attention to the topic constitutes a welcome first step. Yet stakeholders will need to consider more systematically the wide range of issues at the nexus. This can be informed by the broader discourse around responsible AI, including in the context of the Convention on Certain Conventional Weapons, the Summit on Responsible Artificial Intelligence in the Military Domain (REAIM), and UN forums on Lethal Autonomous Weapons Systems. Emerging governance of civilian applications of AI, as with the European Union AI Act and the International Code of Conduct for Organizations Developing Advanced AI Systems, can also offer best practices, even if these would have to be adapted for the military (and nuclear) domains. There remains a particular need, however, for states to unpack the escalatory dynamics that may be linked to AI-nuclear integration. Mapping AI capabilities, limitations, and integration possibilities, and exchanging views on ethical considerations and notions of human agency in nuclear decision-making, for instance, can help clarify the implications of the AI-nuclear nexus and avert potentially catastrophic consequences.

Wilfred Wan is Director of the Weapons of Mass Destruction Programme at the Stockholm International Peace Research Institute (SIPRI).
