Eoin Micheál McNamara, a research fellow at the Finnish Institute of International Affairs, recently authored a policy brief titled Nuclear Arms Control Policies and Safety in Artificial Intelligence: Transferable Lessons or False Equivalence? (2024). The brief explores the parallels between nuclear arms control and the emerging challenges of AI governance, asking whether lessons from one domain can inform effective policy in the other. The Institute commissioned the work to contribute to the global conversation on AI governance and to identify which lessons from the history of nuclear arms control are genuinely transferable.
What Can We Take from Nuclear Arms Control Initiatives to Tackle AI?
According to McNamara, the picture is complex. In a world where technological advancement in artificial intelligence (AI) races ahead, policymakers are increasingly turning to historical analogies to guide their approach to global AI governance. McNamara asks whether the lessons from nuclear arms control are, in fact, applicable to AI. He argues that the comparison between AI and nuclear weapons is insightful but incomplete: not quite right, yet still useful when applied with caution.
Despite the recent collapse of multiple nuclear arms control treaties, people and organizations in the AI safety space have been liberally drawing analogies to nuclear arms control initiatives as frameworks for managing AI risks. Some view these parallels as essential for conceptualizing global strategic competition in AI; others argue that focusing on nuclear analogies distracts from creating new frameworks tailored to AI's unique challenges. In either case, examining the lessons of nuclear arms control geopolitics offers a chance to identify valuable approaches that could apply to the evolving world of AI.
Analogies That Warn and Inspire
The arms race mentality that dominated nuclear policy has been reborn in the AI domain. Leaders of major powers often frame AI advancement as a race that must be won, a framing that fuels competition and undermines the safe development of the technology. Just this week, the U.S.-China Economic and Security Review Commission posited that the U.S. and China are already embroiled in such an AI race. McNamara argues that this mindset risks creating a spiral of unsafe practices as nations compete for supremacy, each afraid of falling behind.
However, history also shows us how the institutionalization of global norms led to the establishment of the so-called “nuclear taboo,” which stigmatized the use of nuclear weapons and promoted risk reduction. Similarly, global AI governance could benefit from focusing on reciprocal safety measures and establishing norms that discourage dangerous AI practices. Mutual agreements on AI development—modeled after arms control treaties—could be the foundation for multilateral cooperation and safety.
The Role of Epistemic Communities
McNamara also points to the epistemic communities emerging in AI safety and governance. Historically, expert communities played an essential role in promoting nuclear arms control, providing the knowledge and ethical perspectives needed to establish safeguards. In AI, national AI safety institutes, NGOs, think tanks, and safety labs are beginning to play a similar role, advocating for responsible innovation. The difference, however, is that AI's development is primarily driven by the private sector, where profitability often trumps safety. The challenge for governments and expert communities is to strike a balance that protects public safety without stifling innovation.
Opportunities for Global Frameworks
As McNamara points out, however, the analogy has limitations. Unlike nuclear weapons, AI technologies can be easily replicated and adapted, making regulation more challenging. Still, by drawing on lessons from nuclear non-proliferation agreements, such as the Treaty on the Non-Proliferation of Nuclear Weapons (NPT), a multilateral framework for AI safety could establish norms that stigmatize particularly dangerous applications of AI, just as the nuclear taboo discourages the spread and use of nuclear weapons. Such a treaty could lay the groundwork for a cooperative approach to AI safety, providing a framework for risk management in an increasingly competitive landscape.
Another potential model is the concept of “Graduated Reciprocation in Tension Reduction” (GRIT), in which one country takes an initial step towards de-escalation in the hope that others will follow. McNamara suggests that the European Union, which lags behind both the U.S. and China in AI capabilities, could promote such initiatives, advocating for risk reduction and safety as part of a broader strategy to align global AI powers on safer paths.
The analogies between AI and nuclear weapons are imperfect, but they offer important lessons for managing the risks of rapid technological advancement. If we heed the lessons of the past, we can avoid repeating mistakes driven by unchecked competition. By emphasizing norms, safety, and multilateral cooperation, the international community can work towards establishing a safer global landscape for AI development.
Today's key question for policymakers is how to create effective safeguards without stifling innovation. By fostering dialogue and building on lessons from nuclear history, leaders can shape AI's future to enhance global stability rather than undermine it, applying the nuclear analogy where it is most valuable and effective.
McNamara’s policy brief is a must-read for anyone thinking about how analogies are used in AI risk and safety debates to spur action by state and non-state actors.
References
McNamara, E. M. (2024). Nuclear arms control policies and safety in artificial intelligence: Transferable lessons or false equivalence? Finnish Institute of International Affairs.