Artificial intelligence (AI) is a catalyst for many trends that increase the salience of nuclear, biological or chemical weapons of mass destruction (WMD). AI can facilitate and speed up the development or manufacturing of WMD or precursor technologies. With AI assistance, those who currently lack the necessary knowledge to produce fissile materials or toxic substances could acquire WMD capabilities. AI itself is of proliferation concern. As an intangible technology, it spreads easily and its diffusion is difficult to control through supply-side mechanisms such as export controls. At the intersection of nuclear weapons and AI, there are concerns about rising risks of inadvertent or intentional nuclear weapons use, reduced crisis stability and new arms races.
To be sure, AI also has beneficial applications and can reduce WMD-related risks. AI can make transparency and verification instruments more effective and efficient because of its ability to process immense amounts of data and to detect unusual patterns that may be indicative of non-compliant behaviour. AI can also improve situational awareness in crisis situations.
While efforts to explore and exploit the military dimension of AI are moving ahead rapidly, this beneficial dimension of the AI-WMD intersection remains under-researched and under-used.
The immediate challenge is to build guardrails around the integration of AI into the WMD sphere and to slow down the incorporation of AI into research, development, production and planning for nuclear, biological and chemical weapons. Meanwhile, governments should identify risk mitigation measures and intensify their search for the best approaches to capitalizing on the beneficial applications of AI in controlling WMD. Efforts to ensure that the international community is able "to govern this technology rather than let it govern us" have to address challenges at three levels of the AI-WMD intersection.
AI simplifies and accelerates the development and production of weapons of mass destruction
First, AI can facilitate the development of biological, chemical or nuclear weapons by making research, development and production more efficient and faster. This is true even for “old” technologies like fissile material production, which remains expensive and requires large-scale industrial facilities. AI can help to optimize uranium enrichment or plutonium separation, two key processes in any nuclear weapons programme.
The combination of AI with chemistry and biochemistry is particularly worrying. The Director General of the Organisation for the Prohibition of Chemical Weapons (OPCW) has warned of "the potential risks that artificial intelligence-assisted chemistry may pose" to the Chemical Weapons Convention and of "the ease and speed with which novel routes to existing toxic compounds can be identified". This creates serious new challenges for the control of toxic substances and their precursors.
Similar concerns exist with regard to biological weapons. Synthetic biology is in itself a dynamic field, but AI puts the development of novel chemical or biological agents through such new technologies on steroids. Rather than going through lengthy and costly lab experiments, AI can "predict" the biological effects of known and even unknown agents. A much-cited paper by Filippa Lentzos and colleagues describes an experiment in which an AI model, running on a standard hardware configuration, in less than six hours "generated forty thousand molecules that scored within our desired threshold", meaning that these agents were likely more toxic than publicly known chemical warfare agents.
AI lowers proliferation hurdles
Second, AI is itself of proliferation concern. To be sure, current commercial AI providers have trained their models not to answer questions on how to build WMD or related technologies. But such limits will not remain impermeable. And in the future, the problem may not be so much preventing the misuse of existing AI models as the proliferation of AI models or of the technologies that can be used to build them. Only a fraction of all spending on AI is invested in the safety and security of such models.
AI lowers the threshold of WMD use
Third, the integration of AI into the WMD sphere can also lower the threshold for the use of nuclear, biological or chemical weapons. All nuclear weapon states have begun to integrate AI into their nuclear command, control, communication and information (NC3I) infrastructure. The ability of AI models to analyse large amounts of data at unprecedented speed can improve situational awareness and help to warn, for example, of incoming nuclear attacks. But AI may also be used to optimize military strike options. Because of the lack of transparency around AI integration, fears may grow that adversaries intend to conduct a disarming strike with AI assistance, setting off a race to the bottom in nuclear decision-making.
In a crisis situation, overreliance on AI systems that are unreliable or that work with faulty data may create additional problems. Data may be incomplete or may have been manipulated. AI models themselves are not objective. These problems are structural and thus not easily fixed. A UNIDIR study, for example, found that "gender norms and bias can be introduced into machine learning throughout its life cycle". Another inherent risk is that AI systems designed and trained for military uses are biased towards war fighting rather than war avoidance, which would make de-escalation in a nuclear crisis much more difficult.
The consensus among nuclear weapon states that a human must always remain in the loop before a nuclear weapon is launched is important, but it remains a problem that understandings of what constitutes human control may differ significantly.
Slow down!
It would be a fool's errand to try to slow down the development of AI itself. But we need to decelerate the convergence of AI with the research, development, production and military planning related to WMD. It must also be possible to prevent the integration of AI into the conventional military sphere from spilling over into applications that lead to nuclear, biological and chemical weapons use.
Such deceleration and channelling strategies can build on some universal norms and prohibitions. But they will also have to be tailored to the specific regulatory frameworks, norms and patterns that govern nuclear, biological and chemical weapons. The zero draft of the Pact for the Future, to be adopted at the September 2024 Summit of the Future, points in the right direction by suggesting a commitment by the international community "to developing norms, rules and principles on the design, development and use of military applications of artificial intelligence through a multilateral process, while also ensuring engagement with stakeholders from industry, academia, civil society and other sectors."
Fortunately, efforts to improve the governance of AI in the WMD sphere do not need to start from scratch. At the global level, the prohibitions of biological and chemical weapons enshrined in the Biological and Chemical Weapons Conventions are all-encompassing: the general purpose criterion prohibits all chemical and biological agents that are not used peacefully, whether AI comes into play or not. But AI may test these prohibitions in various ways, including by merging biotechnology and chemistry "seamlessly" with other novel technologies. It is therefore essential that the OPCW monitor these developments closely.
International Humanitarian Law (IHL) implicitly establishes limits on the military application of AI by prohibiting the indiscriminate and disproportionate use of force in war. The Group of Governmental Experts (GGE) on Lethal Autonomous Weapons under the Convention on Certain Conventional Weapons (CCW) is doing important work by attempting to spell out what the IHL requirements mean for weapons that act without human control. These discussions will, mutatis mutandis, also be relevant for any nuclear, biological or chemical weapons that rely on AI functionalities that reduce human control.
Shared concerns about the risks at the intersection of AI and WMD have triggered a range of UN-based initiatives to promote norms around responsible use. The legal, ethical and humanitarian questions raised at the April 2024 Vienna Conference on Autonomous Weapons Systems are likely to inform debates and decisions on limits to the integration of AI into WMD development and employment, and in particular into nuclear weapons use. After all, similar pressures to shorten decision times and increase the autonomy of weapon systems apply to nuclear as well as conventional weapons.
From a regulatory point of view, it is advantageous that the market for AI-related products is still highly concentrated around a few big players. It is positive that some of the countries hosting the largest AI companies are also investing in the development of norms around the responsible use of AI. It is obvious that these companies have agency, and in some cases probably more influence on politics than small states.
The Bletchley Declaration adopted at the November 2023 AI Safety Summit in the UK, for example, highlighted the “particular safety risks” that arise “at the ‘frontier’ of AI”. These could include risks that may “arise from potential intentional misuse or unintended issues of control relating to alignment with human intent”. The summits on Responsible Artificial Intelligence in the Military Domain (REAIM) are another “effort at coalition building around military AI” that could help to establish rules of the game.
The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, agreed in Washington in September 2023, confirmed important principles that also apply to the WMD sphere, including the applicability of international law and the need to "implement appropriate safeguards to mitigate risks of failures in military AI capabilities." One step in this direction would be for the nuclear weapon states to conduct so-called failsafe reviews that comprehensively evaluate how control of nuclear weapons can be ensured at all times, even when AI-based systems are incorporated.
All such efforts could and should serve as building blocks for a comprehensive governance approach. Yet the risks of AI increasing the likelihood of nuclear weapons use are the most pressing. Artificial intelligence is not the only emerging and disruptive technology affecting international security: space warfare, cyber operations, hypersonic weapons and quantum technologies all affect nuclear stability. It is therefore particularly important that the nuclear weapon states build among themselves a better understanding of, and confidence in, the limits of AI integration into NC3I.
An understanding between China and the United States on guardrails around military misuses of AI would be the single most important measure to slow down the AI race. The fact that Presidents Xi Jinping and Joe Biden agreed in November 2023 that "China and the United States have broad common interests", including on artificial intelligence, and to intensify consultations on that and other issues was a much needed sign of hope. But China has since been hesitant to actually engage in such talks.
Meanwhile, relevant nations can lead by example when considering the integration of AI into the WMD realm. This concerns first of all the nuclear weapon states, which can demonstrate responsible behaviour by pledging, for example, not to use AI to interfere with the nuclear command, control and communication systems of their adversaries. All states should also practice maximum transparency when conducting experiments on the use of AI for biodefence activities, because such activities can easily be mistaken for offensive work. Finally, the German government's pioneering role in examining the impact of new and emerging technologies on arms control deserves recognition. Its Rethinking Arms Control conferences, including the most recent conference on AI and WMD on June 28 in Berlin with key contributors such as the Director General of the OPCW, are particularly important. Such meetings can systematically and consistently investigate the AI-WMD interplay in a dialogue between experts and practitioners. If participants can agree on what guardrails and speed bumps are needed, an important step toward effective governance of AI in the WMD sphere will have been taken.