AI is supposedly the new nuclear weapons — but how similar are they, really?
What the history of nuclear arms can — and can’t — tell us about the future of AI.
By Dylan Matthews
If you spend enough time reading about artificial intelligence, you’re bound to encounter one specific analogy: nuclear weapons. Like nukes, the argument goes, AI is a cutting-edge technology that emerged with unnerving rapidity and comes with serious, difficult-to-predict risks that society is ill-equipped to handle.
The heads of the AI labs OpenAI, Anthropic, and Google DeepMind, as well as researchers like Geoffrey Hinton and Yoshua Bengio and prominent figures like Bill Gates, signed an open letter in May making the analogy explicit, stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Oppenheimer director Christopher Nolan, by contrast, doesn’t think AI and nukes are very similar. The Making of the Atomic Bomb author Richard Rhodes thinks there are important parallels. The New York Times ran a quiz asking people if they could distinguish quotes about nuclear weapons from quotes about AI. Some policy experts are calling for a Manhattan Project for AI, just to make the analogy super-concrete. Anecdotally, I know tons of people working on AI policy who’ve been reading Rhodes’s book for inspiration. I recently saw a copy on a coffee table at Anthropic’s offices while visiting on a reporting trip.
It’s easy to understand why people grasp for analogies like this. AI is a new, bewildering technology that many experts believe is extremely dangerous, and we want conceptual tools to help us wrap our heads around it and think about its consequences. But the analogy is crude at best, and there are important differences between the two technologies, differences that will prove vital in thinking about how to regulate AI to ensure it’s deployed safely, without bias against marginalized groups and with protections against misuse by bad actors.
Here’s an incomplete list of ways in which the two technologies seem similar — and different.
Similarity: extremely rapid scientific progress
In December 1938, the chemists Otto Hahn and Fritz Strassmann found that if they bombarded the radioactive element uranium with neutrons, they got what looked like barium, an element much smaller than uranium. It was a baffling observation — radioactive elements had to that point only been known to emit small particles and transmute to slightly smaller elements — but by Christmas Eve, their collaborators, the physicists Lise Meitner and Otto Frisch, had come up with an explanation: the neutrons had split the uranium atoms, creating solid barium and krypton gas. Frisch called the process “fission.”
On July 16, 1945, after billions of dollars of investment and the equivalent of 67 million hours of labor from workers and scientists including Frisch, the US military tested the Trinity device, the first nuclear explosive ever detonated, using the process that Frisch and Meitner had theorized less than seven years earlier.
Few scientific fields have seen a theoretical discovery translated into an immensely important practical technology quite that quickly. But AI might come close. Artificial intelligence as a field was born in the 1950s, but modern “deep learning” techniques, which process data through several layers of artificial “neurons” to form “neural networks,” only took off with the realization around 2009 that specialized chips called graphics processing units (GPUs) could train such networks much more efficiently than standard central processing units (CPUs). Soon thereafter, deep learning models began winning tournaments testing their ability to categorize images. The same techniques proved able to beat world champions at Go and StarCraft, and to produce models like GPT-4 and Stable Diffusion that generate strikingly compelling text and images.
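To give a rough sense of why that hardware shift mattered, here is a minimal sketch, assuming the PyTorch library and (optionally) a CUDA-capable GPU, that times the core operation of a neural network layer, a large matrix multiplication, on a CPU and on a GPU if one is present. It is illustrative only, not a rigorous benchmark.

```python
# A minimal sketch of why GPUs accelerated deep learning: the core operation in a
# neural network layer is a large matrix multiplication, which a GPU executes in
# parallel far faster than a CPU. (Sizes here are arbitrary, chosen for illustration.)
import time
import torch

x = torch.randn(4096, 4096)
w = torch.randn(4096, 4096)

start = time.time()
_ = x @ w  # one layer's worth of work on the CPU
cpu_seconds = time.time() - start

if torch.cuda.is_available():
    x_gpu, w_gpu = x.cuda(), w.cuda()
    torch.cuda.synchronize()   # make sure timing reflects actual GPU work
    start = time.time()
    _ = x_gpu @ w_gpu          # the same work on the GPU
    torch.cuda.synchronize()
    print(f"CPU: {cpu_seconds:.3f}s, GPU: {time.time() - start:.3f}s")
else:
    print(f"CPU: {cpu_seconds:.3f}s (no GPU available to compare)")
```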
Progress in deep learning appears to be roughly exponential, driven by steady growth in the computing resources and data applied to it. The field of model scaling estimates what happens to AI models as the data, computing power, and number of parameters available to them are expanded. A team at the Chinese tech giant Baidu demonstrated this in an empirical paper in 2017, finding that “loss” (the measured error of a model, compared to known true results, on various tasks) falls off as a predictable power law as the model’s size grows, and subsequent research from OpenAI and DeepMind has reached similar findings.
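To make the idea of a scaling law more concrete, here is a toy illustration with made-up numbers (not the Baidu paper’s data), assuming only the NumPy library, of the kind of power-law fit and extrapolation such studies perform.

```python
# A toy illustration of a scaling-law fit: loss ≈ a * N^(-b) for model size N.
# The numbers below are invented for illustration, not real measurements.
import numpy as np

model_sizes = np.array([1e6, 1e7, 1e8, 1e9])  # hypothetical parameter counts
losses      = np.array([4.0, 3.2, 2.6, 2.1])  # hypothetical measured losses

# A power law is a straight line in log-log space, so fit log(loss) against log(size).
slope, intercept = np.polyfit(np.log(model_sizes), np.log(losses), 1)
a, b = np.exp(intercept), -slope
print(f"fitted law: loss ≈ {a:.2f} * N^(-{b:.3f})")

# Extrapolate to a 10x larger model -- the kind of prediction scaling laws enable.
predicted_loss = a * (1e10) ** (-b)
print(f"predicted loss at 10 billion parameters: {predicted_loss:.2f}")
```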
All of which is to say: much as nuclear fission developed astonishingly quickly, advanced deep learning models and their capabilities appear to be improving at a similarly startling pace.
Similarity: potential for mass harm
I presume I do not need to explain how nuclear weapons, let alone the thermonuclear weapons that make up modern arsenals, can cause mass harm on a scale we’ve never before experienced. The same potential for AI requires somewhat more exposition.
Many scholars have demonstrated that existing machine learning systems adopted for purposes like flagging parents for Child Protective Services often recapitulate biases from their training data. As these models grow and are adopted for more and more purposes, and as we grow increasingly dependent on them, these kinds of biases will prove more and more consequential.
There is also substantial misuse potential for sufficiently complex AI systems. In an April paper, researchers at Carnegie Mellon were able to stitch together large language models into a system that, when instructed to make chlorine gas, could figure out the right chemical compound and instruct a “cloud laboratory” (an online service where chemists can conduct real, physical chemistry experiments remotely) to synthesize it. It appeared capable of synthesizing VX or sarin gas (as well as methamphetamine), and declined only because of built-in safety controls, which model developers could easily disable. Similar techniques could be used to develop bioweapons.
Much of the information needed to make chemical or biological weapons is available publicly now, and has been for some time — but it requires specialists to understand and act on that information. The difference between a world where laypeople with access to a large language model can build a dangerous bioweapon, and a world where only specialists can, is somewhat akin to the difference between a country like the US, where large-capacity semiautomatic guns are widely available, and a country like the UK, where access to such weapons is strictly controlled. That easy access has left the US with vastly higher rates of gun crime. LLMs could, without sufficient controls, lead to a world where the lone wolves who currently kill through mass shootings in the US instead use bioweapons with the potential to kill thousands or even millions.
Is that as bad as nuclear weapons? Probably not. For that level of harm you need AI takeover scenarios, which are necessarily much more speculative and harder to reason about, as they require AIs vastly more powerful than anything that exists today. But the harms from things like algorithmic bias and bioweapons are more immediate, more concrete, and still large enough to demand a lot of attention.
Difference: one is a military technology, one is a general-purpose technology
I do not use nuclear weapons in my everyday life, and unless you’re in a very specific job in one of a handful of militaries, you probably don’t either. Nuclear fission has affected our everyday lives through nuclear energy, which provides some 4 percent of the world’s energy, but due to its limited adoption, that technology hasn’t exactly transformed our lives either.
We don’t know with any specificity how AI will affect the world, and anyone who tells you what’s about to happen in much detail and with a great deal of confidence is probably grifting you. But we have reason to think that AI will be a general-purpose technology: something like electricity or telegraphy or the internet that broadly changes the way businesses across sectors and nations operate, as opposed to an innovation that makes a dent in one specific sector (as nuclear fission did in the energy sector and in military and geopolitical strategy).
Producing text quickly, as large language models do, is a pretty widely useful service for everything from marketing to technical writing to internal memo composition to lawyering (assuming you know the tech’s limits) to, unfortunately, disinformation and propaganda. Using AI to improve services like Siri and Alexa so they function more like a personal assistant, and can intelligently plan your schedule and respond to emails, would help in many jobs. McKinsey recently projected that generative AI’s impact on productivity could eventually add as much as $4.4 trillion to the global economy — more than the annual GDP of the UK. Again, take these estimates with a large grain of salt, but the point that the technology will be broadly important to a range of jobs and sectors is sound.
Banning nuclear fission would probably be a bad idea — nuclear power is a very useful technology — but humans have other sources of energy. Banning advanced AI, by contrast, is clearly not viable, given how broadly useful it could be even with the major threats it poses.
Similarity: uranium and chips
When the physicist Niels Bohr first theorized in 1939 that uranium fission was driven by one specific isotope of the element (uranium-235), he thought this meant that a nuclear weapon would be wholly impractical. U235 is much rarer than the dominant uranium-238 isotope, and separating the two was, and remains, an incredibly costly endeavor.
Separating enough U235 for a bomb, Bohr said at the time, “can never be done unless you turn the United States into one huge factory.” A few years later, after visiting Los Alamos and witnessing the scale of the industrial effort required to make working bombs (an effort that at its peak employed 130,000 workers), he quipped to fellow physicist Edward Teller, “You see, I told you it couldn’t be done without turning the whole country into a factory. You have done just that.”
Separating out uranium in Oak Ridge, Tennessee, was indeed a massive undertaking, as was the parallel effort in Hanford, Washington, to produce plutonium (the Hiroshima bomb used the former, the Trinity and Nagasaki bombs the latter). That gave arms control efforts something tangible to grasp onto. You could not make nuclear weapons without producing large quantities of plutonium or enriched uranium, and it’s pretty hard to hide that you’re producing large quantities of those materials.
A useful analogy can be made between efforts to control access to uranium and efforts to control access to the specialized computer chips necessary for modern deep learning. While AI research involves many intangible factors that are difficult to quantify — the workforce skill needed to build models, the capabilities of the models themselves — the actual chips used to train models are trackable. They are built in a handful of fabrication plants (“fabs”). Government agencies can monitor when labs are purchasing tens or hundreds of thousands of these chips, and could even mandate firmware on the chips that logs certain AI training activity.
That’s led some analysts to suggest that an arms control framework for AI could look like that for nuclear weapons — with chips taking the place of uranium and plutonium. This might be more difficult for various reasons, from the huge amount of international cooperation required (including between China and Taiwan) to the libertarian culture of Silicon Valley pushing against imprinting tracking info on every chip. But it’s a useful parallel nonetheless.
Similarity: arms race dynamics
As early as 1944, Niels Bohr was holding meetings with Franklin Roosevelt and Winston Churchill and urging them in the strongest terms to tell Joseph Stalin about the atomic bomb project. If he found out through espionage, Bohr argued, the result would be distrust between the Allied powers after World War II concluded, potentially resulting in an arms race between the US/UK and the Soviet Union and a period of grave geopolitical danger as rival camps accumulated mass nuclear arsenals. Churchill thought this was absurd and signed a pledge with Roosevelt not to tell Stalin.
The postwar arms race between the US and the Soviet Union proceeded much as Bohr predicted, with Churchill’s nation as an afterthought.
The historical context behind AI’s development now is much less fraught; the US is not currently in an alliance of convenience with a regime it despises and expects to enter geopolitical competition with as soon as a massive world war concludes.
But the arms race dynamics that Bohr prophesied are already emerging in relation to AI and US-Chinese relations. Tech figures, particularly ex-Google CEO Eric Schmidt, have been invoking the need for the US to take the lead on AI development lest China pull ahead. National security adviser Jake Sullivan said in a speech last year that the US must maintain “as large of a lead as possible” in AI.
As my colleague Sigal Samuel has written, this belief might rest on misconceptions that being “first” on AI matters more than how one uses the technology, or that China will leave its AI sector unregulated, when it’s already imposing regulations. Arms races, though, can be self-fulfilling: if enough actors on each side think they’re in an arms race, eventually they’re in an arms race.
Difference: AI technology is much easier to copy
The vast majority of nations have declined to develop nukes, including many wealthy nations that easily have the resources to build them. This limited proliferation is partly due to the fact that building nuclear weapons is fundamentally hard and expensive.
The International Campaign to Abolish Nuclear Weapons estimates that ultra-poor North Korea spent $589 million on its nuclear program in 2022 alone, implying it has spent many billions over the decades it has been developing the program. Most countries do not want to invest those kinds of resources to develop a weapon they will likely never use. Most terrorist groups lack the resources to build such a weapon.
AI is difficult and costly to train — but relative to nukes, it is much easier to piggyback off of and copy once some company or government has built a model. Take Vicuna, a recent language model built off of LLaMA, a model released by Meta (Facebook’s parent company) whose weights were leaked to the public and are now widely available. Vicuna was trained using about 70,000 conversations that real users had with ChatGPT, which, when used to “fine-tune” LLaMA, produced a much more accurate and useful model. According to its creators, training Vicuna cost about $300, and they argue its output rivals that of ChatGPT and its underlying models (GPT-3.5 and GPT-4).
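To make concrete how light that kind of copying can be, here is a heavily simplified sketch of the general recipe such projects follow: take an open base model and fine-tune it on chat transcripts. It assumes the Hugging Face transformers and datasets libraries, uses gpt2 as a stand-in base model (LLaMA itself requires access approval), and uses placeholder data; it is not Vicuna’s actual training code.

```python
# A simplified sketch of instruction fine-tuning: adapt an open base model to
# imitate chat transcripts. Model name and data below are placeholders.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "gpt2"  # stand-in for an open base model such as LLaMA
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base_model)

# Placeholder transcripts; Vicuna reportedly used ~70,000 shared ChatGPT conversations.
conversations = [
    "USER: What is nuclear fission?\nASSISTANT: It is the splitting of an atomic nucleus...",
    "USER: Summarize this memo in one line.\nASSISTANT: The memo proposes...",
]
dataset = Dataset.from_dict({"text": conversations}).map(
    lambda example: tokenizer(example["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # with enough real data, the base model learns the assistant style
```

The point is not these particular libraries but the shape of the work: the expensive part (pretraining the base model) has already been done by someone else, and adapting it is comparatively cheap.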
There are lots of nuances here that I’m glossing over. But the capability gap between hobbyist and mega-corporation is simply much smaller in AI than it is in nukes. A team of hobbyists trying to develop a nuclear weapon would have a much easier job than the Manhattan Project did, because they can benefit from everything the latter, and every nuclear project since, has learned. But they still could not build a working nuclear device. People with minimal resources can build and customize advanced AI systems, even if not cutting-edge ones, and will likely continue to be able to do so.
One expert I spoke to when thinking about this piece said bluntly that “analogies are the worst form of reasoning.” He has a point: one of my own takeaways from considering this particular analogy is that it’s tempting in part because it gives you a lot more historical material to work with. We know a lot about how nuclear weapons were developed and deployed. We know very little about how the future development and regulation of AI is likely to proceed. So it’s easier to drone on about nukes than it is to try to think through future AI dynamics, because I have more history to draw upon.
Given that, my main takeaway is that glib “AI=nukes” analogies are probably a waste … but more granular comparisons of particular processes, like the arms race dynamics between the US and Soviets in the 1940s and the US and China today, can possibly be fruitful. And those comparisons point in a similar direction. The best way to handle a new, powerful, dangerous technology is through broad international cooperation. The right approach isn’t to lie back and just let scientists and engineers transform our world without outside input.