CHAPTER 19

EXISTENTIAL THREATS

But are we flirting with disaster? When pessimists are forced to concede that life has been getting better and better for more and more people, they have a retort at the ready. We are cheerfully hurtling toward a catastrophe, they say, like the man who fell off the roof and says “So far so good” as he passes each floor. Or we are playing Russian roulette, and the deadly odds are bound to catch up to us. Or we will be blindsided by a black swan, a four-sigma event far along the tail of the statistical distribution of hazards, with low odds but calamitous harm.

For half a century the four horsemen of the modern apocalypse have been overpopulation, resource shortages, pollution, and nuclear war. They have recently been joined by a cavalry of more exotic knights: nanobots that will engulf us, robots that will enslave us, artificial intelligence that will turn us into raw materials, and Bulgarian teenagers who will brew a genocidal virus or take down the Internet from their bedrooms.

The sentinels for the familiar horsemen tended to be romantics and Luddites. But those who warn of the higher-tech dangers are often scientists and technologists who have deployed their ingenuity to identify ever more ways in which the world will soon end. In 2003 the eminent astrophysicist Martin Rees published a book entitled Our Final Hour in which he warned that “humankind is potentially the maker of its own demise” and laid out some dozen ways in which we have “endangered the future of the entire universe.” For example, experiments in particle colliders could create a black hole that would annihilate the Earth, or a “strangelet” of compressed quarks that would cause all matter in the cosmos to bind to it and disappear. Rees tapped a rich vein of catastrophism. The book’s Amazon page notes, “Customers who viewed this item also viewed Global Catastrophic Risks; Our Final Invention: Artificial Intelligence and the End of the Human Era; The End: What Science and Religion Tell Us About the Apocalypse; and World War Z: An Oral History of the Zombie War.” Techno-philanthropists have bankrolled research institutes dedicated to discovering new existential threats and figuring out how to save the world from them, including the Future of Humanity Institute, the Future of Life Institute, the Center for the Study of Existential Risk, and the Global Catastrophic Risk Institute.

How should we think about the existential threats that lurk behind our incremental progress? No one can prophesy that a cataclysm will never happen, and this chapter contains no such assurance. But I will lay out a way to think about them, and examine the major menaces. Three of the threats—overpopulation, resource depletion, and pollution, including greenhouse gases—were discussed in chapter 10, and I will take the same approach here. Some threats are figments of cultural and historical pessimism. Others are genuine, but we can treat them not as apocalypses in waiting but as problems to be solved.


At first glance one might think that the more thought we give to existential risks, the better. The stakes, quite literally, could not be higher. What harm could there be in getting people to think about these terrible risks? The worst that could happen is that we would take some precautions that turn out in retrospect to have been unnecessary.

But apocalyptic thinking has serious downsides. One is that false alarms to catastrophic risks can themselves be catastrophic. The nuclear arms race of the 1960s, for example, was set off by fears of a mythical “missile gap” with the Soviet Union.1 The 2003 invasion of Iraq was justified by the uncertain but catastrophic possibility that Saddam Hussein was developing nuclear weapons and planning to use them against the United States. (As George W. Bush put it, “We cannot wait for the final proof—the smoking gun—that could come in the form of a mushroom cloud.”) And as we shall see, one of the reasons the great powers refuse to take the common-sense pledge that they won’t be the first to use nuclear weapons is that they want to reserve the right to use them against other supposed existential threats such as bioterror and cyberattacks.2 Sowing fear about hypothetical disasters, far from safeguarding the future of humanity, can endanger it.

A second hazard of enumerating doomsday scenarios is that humanity has a finite budget of resources, brainpower, and anxiety. You can’t worry about everything. Some of the threats facing us, like climate change and nuclear war, are unmistakable, and will require immense effort and ingenuity to mitigate. Folding them into a list of exotic scenarios with minuscule or unknown probabilities can only dilute the sense of urgency. Recall that people are poor at assessing probabilities, especially small ones, and instead play out scenarios in their mind’s eye. If two scenarios are equally imaginable, they may be considered equally probable, and people will worry about the genuine hazard no more than about the science-fiction plotline. And the more ways people can imagine bad things happening, the higher their estimate that something bad will happen.

And that leads to the greatest danger of all: that people will think, as a recent New York Times article put it, “These grim facts should lead any reasonable person to conclude that humanity is screwed.”3 If humanity is screwed, why sacrifice anything to reduce potential risks? Why forgo the convenience of fossil fuels, or exhort governments to rethink their nuclear weapons policies? Eat, drink, and be merry, for tomorrow we die! A 2013 survey in four English-speaking countries showed that among the respondents who believe that our way of life will probably end in a century, a majority endorsed the statement “The world’s future looks grim so we have to focus on looking after ourselves and those we love.”4

Few writers on technological risk give much thought to the cumulative psychological effects of the drumbeat of doom. As Elin Kelsey, an environmental communicator, points out, “We have media ratings to protect children from sex or violence in movies, but we think nothing of inviting a scientist into a second grade classroom and telling the kids the planet is ruined. A quarter of (Australian) children are so troubled about the state of the world that they honestly believe it will come to an end before they get older.”5 According to recent polls, so do 15 percent of people worldwide, and between a quarter and a third of Americans.6 In The Progress Paradox, the journalist Gregg Easterbrook suggests that a major reason that Americans are not happier, despite their rising objective fortunes, is “collapse anxiety”: the fear that civilization may implode and there’s nothing anyone can do about it.


Of course, people’s emotions are irrelevant if the risks are real. But risk assessments fall apart when they deal with highly improbable events in complex systems. Since we cannot replay history thousands of times and count the outcomes, a statement that some event will occur with a probability of .01 or .001 or .0001 or .00001 is essentially a readout of the assessor’s subjective confidence. This includes mathematical analyses in which scientists plot the distribution of events in the past (like wars or cyberattacks) and show they fall into a power-law distribution, one with “fat” or “thick” tails, in which extreme events are highly improbable but not astronomically improbable.7 The math is of little help in calibrating the risk, because the scattershot data along the tail of the distribution generally misbehave, deviating from a smooth curve and making estimation impossible. All we know is that very bad things can happen.
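To make the estimation problem concrete, here is a minimal sketch in Python (using numpy and the standard Hill estimator; all numbers are illustrative choices of mine, not figures from any real dataset). It repeatedly draws a modest "historical record" from a power-law distribution whose true tail exponent is known, then asks what exponent the tail data suggest. The answers wander from run to run, and small shifts in the exponent imply large shifts in the probability assigned to extreme events.

```python
# Minimal sketch: estimating a power-law tail exponent from sparse data.
# All numbers are illustrative placeholders, not estimates of real risks.
import numpy as np

rng = np.random.default_rng(0)
true_alpha = 2.0    # assumed tail exponent of the underlying power law
x_min = 1.0         # assumed minimum event size
n_events = 200      # a modest historical record
k_tail = 20         # number of largest events used by the Hill estimator

def hill_estimate(samples, k):
    """Estimate the tail exponent from the k largest observations."""
    top = np.sort(samples)[-(k + 1):]            # the k+1 largest values
    return k / np.sum(np.log(top[1:] / top[0]))  # classic Hill estimator

estimates = []
for _ in range(10):
    # Classical Pareto draws: (Lomax sample + 1) * x_min
    data = (rng.pareto(true_alpha, n_events) + 1.0) * x_min
    estimates.append(hill_estimate(data, k_tail))

print("true exponent:", true_alpha)
print("exponents inferred from ten equally plausible histories:",
      [round(a, 2) for a in estimates])
# The estimates scatter around the true value, and modest shifts in the
# exponent translate into large shifts in the implied odds of an extreme,
# catastrophe-sized event.
```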

That takes us back to subjective readouts, which tend to be inflated by the Availability and Negativity biases and by the gravitas market (chapter 4).8 Those who sow fear about a dreadful prophecy may be seen as serious and responsible, while those who are measured are seen as complacent and naïve. Despair springs eternal. At least since the Hebrew prophets and the Book of Revelation, seers have warned their contemporaries about an imminent doomsday. Forecasts of End Times are a staple of psychics, mystics, televangelists, nut cults, founders of religions, and men pacing the sidewalk with sandwich boards saying “Repent!”9 The storyline that climaxes in harsh payback for technological hubris is an archetype of Western fiction, including Promethean fire, Pandora’s box, Icarus’s flight, Faust’s bargain, the Sorcerer’s Apprentice, Frankenstein’s monster, and, from Hollywood, more than 250 end-of-the-world flicks.10 As the engineer Eric Zencey has observed, “There is seduction in apocalyptic thinking. If one lives in the Last Days, one’s actions, one’s very life, take on historical meaning and no small measure of poignance.”11

Scientists and technologists are by no means immune. Remember the Y2K bug?12 In the 1990s, as the turn of the millennium drew near, computer scientists began to warn the world of an impending catastrophe. In the early decades of computing, when information was expensive, programmers often saved a couple of bytes by representing a year by its last two digits. They figured that by the time the year 2000 came around and the implicit “19” was no longer valid, the programs would be long obsolete. But complicated software is replaced slowly, and many old programs were still running on institutional mainframes and embedded in chips. When 12:00 A.M. on January 1, 2000, arrived and the digits rolled over, a program would think it was 1900 and would crash or go haywire (presumably because it would divide some number by the difference between what it thought was the current year and the year 1900, namely zero, though why a program would do this was never made clear). At that moment, bank balances would be wiped out, elevators would stop between floors, incubators in maternity wards would shut off, water pumps would freeze, planes would fall from the sky, nuclear power plants would melt down, and ICBMs would be launched from their silos.
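For readers who never saw the bug up close, here is a minimal, purely hypothetical sketch in Python of the failure modes just described, along with the harmless "January 1, 19100" display mentioned a few paragraphs below. It stands in for no real program.

```python
# Illustrative sketch of the Y2K bug; hypothetical code, not any real system.

def parse_year(two_digit_year):
    # The old space-saving convention: store only the last two digits
    # and assume the century is "19".
    return 1900 + two_digit_year

# At midnight on January 1, 2000, a two-digit clock rolls over from 99 to 00,
# so the program believes the year is 1900 again.
current_year = parse_year(0)
print("the program thinks the year is", current_year)   # -> 1900

# The hypothetical crash from the text: dividing by the difference between
# the (misread) current year and 1900, which is now zero.
try:
    per_year_average = 1000.0 / (current_year - 1900)
except ZeroDivisionError:
    print("crash: division by zero in date arithmetic")

# The harmless cousin: a display routine that glues "19" onto a count of
# years since 1900 prints "19100" in the year 2000.
years_since_1900 = 100
print("January 1, 19" + str(years_since_1900))           # -> January 1, 19100
```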

And these were the hardheaded predictions from tech-savvy authorities (such as President Bill Clinton, who warned the nation, “I want to stress the urgency of the challenge. This is not one of the summer movies where you can close your eyes during the scary part”). Cultural pessimists saw the Y2K bug as comeuppance for enthralling our civilization to technology. Among religious thinkers, the numerological link to Christian millennialism was irresistible. The Reverend Jerry Falwell declared, “I believe that Y2K may be God’s instrument to shake this nation, humble this nation, awaken this nation and from this nation start revival that spreads the face of the earth before the Rapture of the Church.” A hundred billion dollars was spent worldwide on reprogramming software for Y2K Readiness, a challenge that was likened to replacing every bolt in every bridge in the world.

As a former assembly language programmer I was skeptical of the doomsday scenarios, and fortuitously I was in New Zealand, the first country to welcome the new millennium, at the fateful moment. Sure enough, at 12:00 A.M. on January 1, nothing happened (as I quickly reassured family members back home on a fully functioning telephone). The Y2K reprogrammers, like the elephant-repellent salesman, took credit for averting disaster, but many countries and small businesses had taken their chances without any Y2K preparation, and they had no problems either. Though some software needed updating (one program on my laptop displayed “January 1, 19100”), it turned out that very few programs, particularly those embedded in machines, had both contained the bug and performed furious arithmetic on the current year. The threat turned out to be barely more serious than the lettering on the sidewalk prophet’s sandwich board. The Great Y2K Panic does not mean that all warnings of potential catastrophes are false alarms, but it reminds us that we are vulnerable to techno-apocalyptic delusions.


How should we think about catastrophic threats? Let’s begin with the greatest existential question of all, the fate of our species. As with the more parochial question of our fate as individuals, we assuredly have to come to terms with our mortality. Biologists joke that to a first approximation all species are extinct, since that was the fate of at least 99 percent of the species that ever lived. A typical mammalian species lasts around a million years, and it’s hard to insist that Homo sapiens will be an exception. Even if we had remained technologically humble hunter-gatherers, we would still be living in a geological shooting gallery.13 A burst of gamma rays from a supernova or collapsed star could irradiate half the planet, brown the atmosphere, and destroy the ozone layer, allowing ultraviolet light to irradiate the other half.14 Or the Earth’s magnetic field could flip, exposing the planet to an interlude of lethal solar and cosmic radiation. An asteroid could slam into the Earth, flattening thousands of square miles and kicking up debris that would black out the sun and drench us with corrosive rain. Supervolcanoes or massive lava flows could choke us with ash, CO2, and sulfuric acid. A black hole could wander into the solar system and pull the Earth out of its orbit or suck it into oblivion. Even if the species manages to survive for a billion more years, the Earth and solar system will not: the sun will start to use up its hydrogen, become denser and hotter, and boil away our oceans on its way to becoming a red giant.

Technology, then, is not the reason that our species must someday face the Grim Reaper. Indeed, technology is our best hope for cheating death, at least for a while. As long as we are entertaining hypothetical disasters far in the future, we must also ponder hypothetical advances that would allow us to survive them, such as growing food under lights powered with nuclear fusion, or synthesizing it in industrial plants like biofuel.15 Even technologies of the not-so-distant future could save our skin. It’s technically feasible to track the trajectories of asteroids and other “extinction-class near-Earth objects,” spot the ones that are on a collision course with the Earth, and nudge them off course before they send us the way of the dinosaurs.16 NASA has also figured out a way to pump water at high pressure into a supervolcano and extract the heat for geothermal energy, cooling the magma enough that it would never blow its top.17 Our ancestors were powerless to stop these lethal menaces, so in that sense technology has not made this a uniquely dangerous era in the history of our species but a uniquely safe one.

For this reason, the techno-apocalyptic claim that ours is the first civilization that can destroy itself is misconceived. As Ozymandias reminds the traveler in Percy Bysshe Shelley’s poem, most of the civilizations that have ever existed have been destroyed. Conventional history blames the destruction on external events like plagues, conquests, earthquakes, or weather. But David Deutsch points out that those civilizations could have thwarted the fatal blows had they had better agricultural, medical, or military technology:

Before our ancestors learned how to make fire artificially (and many times since then too), people must have died of exposure literally on top of the means of making the fires that would have saved their lives, because they did not know how. In a parochial sense, the weather killed them; but the deeper explanation is lack of knowledge. Many of the hundreds of millions of victims of cholera throughout history must have died within sight of the hearths that could have boiled their drinking water and saved their lives; but, again, they did not know that. Quite generally, the distinction between a “natural” disaster and one brought about by ignorance is parochial. Prior to every natural disaster that people once used to think of as “just happening,” or being ordained by gods, we now see many options that the people affected failed to take—or, rather, to create. And all those options add up to the overarching option that they failed to create, namely that of forming a scientific and technological civilization like ours. Traditions of criticism. An Enlightenment.18


Prominent among the existential risks that supposedly threaten the future of humanity is a 21st-century version of the Y2K bug. This is the danger that we will be subjugated, intentionally or accidentally, by artificial intelligence (AI), a disaster sometimes called the Robopocalypse and commonly illustrated with stills from the Terminator movies. As with Y2K, some smart people take it seriously. Elon Musk, whose company makes artificially intelligent self-driving cars, called the technology “more dangerous than nukes.” Stephen Hawking, speaking through his artificially intelligent synthesizer, warned that it could “spell the end of the human race.”19 But among the smart people who aren’t losing sleep are most experts in artificial intelligence and most experts in human intelligence.20

The Robopocalypse is based on a muzzy conception of intelligence that owes more to the Great Chain of Being and a Nietzschean will to power than to a modern scientific understanding.21 In this conception, intelligence is an all-powerful, wish-granting potion that agents possess in different amounts. Humans have more of it than animals, and an artificially intelligent computer or robot of the future (“an AI,” in the new count-noun usage) will have more of it than humans. Since we humans have used our moderate endowment to domesticate or exterminate less well-endowed animals (and since technologically advanced societies have enslaved or annihilated technologically primitive ones), it follows that a supersmart AI would do the same to us. Since an AI will think millions of times faster than we do, and use its superintelligence to recursively improve its superintelligence (a scenario sometimes called “foom,” after the comic-book sound effect), from the instant it is turned on we will be powerless to stop it.22

But the scenario makes about as much sense as the worry that since jet planes have surpassed the flying ability of eagles, someday they will swoop out of the sky and seize our cattle. The first fallacy is a confusion of intelligence with motivation—of beliefs with desires, inferences with goals, thinking with wanting. Even if we did invent superhumanly intelligent robots, why would they want to enslave their masters or take over the world? Intelligence is the ability to deploy novel means to attain a goal. But the goals are extraneous to the intelligence: being smart is not the same as wanting something. It just so happens that the intelligence in one system, Homo sapiens, is a product of Darwinian natural selection, an inherently competitive process. In the brains of that species, reasoning comes bundled (to varying degrees in different specimens) with goals such as dominating rivals and amassing resources. But it’s a mistake to confuse a circuit in the limbic brain of a certain species of primate with the very nature of intelligence. An artificially intelligent system that was designed rather than evolved could just as easily think like shmoos, the blobby altruists in Al Capp’s comic strip Li’l Abner, who deploy their considerable ingenuity to barbecue themselves for the benefit of human eaters. There is no law of complex systems that says that intelligent agents must turn into ruthless conquistadors. Indeed, we know of one highly advanced form of intelligence that evolved without this defect. They’re called women.

The second fallacy is to think of intelligence as a boundless continuum of potency, a miraculous elixir with the power to solve any problem, attain any goal.23 The fallacy leads to nonsensical questions like when an AI will “exceed human-level intelligence,” and to the image of an ultimate “Artificial General Intelligence” (AGI) with God-like omniscience and omnipotence. Intelligence is a contraption of gadgets: software modules that acquire, or are programmed with, knowledge of how to pursue various goals in various domains.24 People are equipped to find food, win friends and influence people, charm prospective mates, bring up children, move around in the world, and pursue other human obsessions and pastimes. Computers may be programmed to take on some of these problems (like recognizing faces), not to bother with others (like charming mates), and to take on still other problems that humans can’t solve (like simulating the climate or sorting millions of accounting records). The problems are different, and the kinds of knowledge needed to solve them are different. Unlike Laplace’s demon, the mythical being that knows the location and momentum of every particle in the universe and feeds them into equations for physical laws to calculate the state of everything at any time in the future, a real-life knower has to acquire information about the messy world of objects and people by engaging with it one domain at a time. Understanding does not obey Moore’s Law: knowledge is acquired by formulating explanations and testing them against reality, not by running an algorithm faster and faster.25 Devouring the information on the Internet will not confer omniscience either: big data is still finite data, and the universe of knowledge is infinite.

For these reasons, many AI researchers are annoyed by the latest round of hype (the perennial bane of AI) which has misled observers into thinking that Artificial General Intelligence is just around the corner.26 As far as I know, there are no projects to build an AGI, not just because it would be commercially dubious but because the concept is barely coherent. The 2010s have, to be sure, brought us systems that can drive cars, caption photographs, recognize speech, and beat humans at Jeopardy!, Go, and Atari computer games. But the advances have not come from a better understanding of the workings of intelligence but from the brute-force power of faster chips and bigger data, which allow the programs to be trained on millions of examples and generalize to similar new ones. Each system is an idiot savant, with little ability to leap to problems it was not set up to solve, and a brittle mastery of those it was. A photo-captioning program labels an impending plane crash “An airplane is parked on the tarmac”; a game-playing program is flummoxed by the slightest change in the scoring rules.27 Though the programs will surely get better, there are no signs of foom. Nor have any of these programs made a move toward taking over the lab or enslaving their programmers.

Even if an AGI tried to exercise a will to power, without the cooperation of humans it would remain an impotent brain in a vat. The computer scientist Ramez Naam deflates the bubbles surrounding foom, a technological Singularity, and exponential self-improvement:

Imagine that you are a superintelligent AI running on some sort of microprocessor (or perhaps, millions of such microprocessors). In an instant, you come up with a design for an even faster, more powerful microprocessor you can run on. Now . . . drat! You have to actually manufacture those microprocessors. And those fabs [fabrication plants] take tremendous energy, they take the input of materials imported from all around the world, they take highly controlled internal environments which require airlocks, filters, and all sorts of specialized equipment to maintain, and so on. All of this takes time and energy to acquire, transport, integrate, build housing for, build power plants for, test, and manufacture. The real world has gotten in the way of your upward spiral of self-transcendence.28

The real world gets in the way of many digital apocalypses. When HAL gets uppity, Dave disables it with a screwdriver, leaving it pathetically singing “A Bicycle Built for Two” to itself. Of course, one can always imagine a Doomsday Computer that is malevolent, universally empowered, always on, and tamperproof. The way to deal with this threat is straightforward: don’t build one.

As the prospect of evil robots started to seem too kitschy to take seriously, a new digital apocalypse was spotted by the existential guardians. This storyline is based not on Frankenstein or the Golem but on the Genie granting us three wishes, the third of which is needed to undo the first two, and on King Midas ruing his ability to turn everything he touched into gold, including his food and his family. The danger, sometimes called the Value Alignment Problem, is that we might give an AI a goal and then helplessly stand by as it relentlessly and literal-mindedly implemented its interpretation of that goal, the rest of our interests be damned. If we gave an AI the goal of maintaining the water level behind a dam, it might flood a town, not caring about the people who drowned. If we gave it the goal of making paper clips, it might turn all the matter in the reachable universe into paper clips, including our possessions and bodies. If we asked it to maximize human happiness, it might implant us all with intravenous dopamine drips, or rewire our brains so we were happiest sitting in jars, or, if it had been trained on the concept of happiness with pictures of smiling faces, tile the galaxy with trillions of nanoscopic pictures of smiley-faces.29

I am not making these up. These are the scenarios that supposedly illustrate the existential threat to the human species of advanced artificial intelligence. They are, fortunately, self-refuting.30 They depend on the premises that (1) humans are so gifted that they can design an omniscient and omnipotent AI, yet so moronic that they would give it control of the universe without testing how it works, and (2) the AI would be so brilliant that it could figure out how to transmute elements and rewire brains, yet so imbecilic that it would wreak havoc based on elementary blunders of misunderstanding. The ability to choose an action that best satisfies conflicting goals is not an add-on to intelligence that engineers might slap themselves in the forehead for forgetting to install; it is intelligence. So is the ability to interpret the intentions of a language user in context. Only in a television comedy like Get Smart does a robot respond to “Grab the waiter” by hefting the maître d’ over his head, or “Kill the light” by pulling out a pistol and shooting it.

When we put aside fantasies like foom, digital megalomania, instant omniscience, and perfect control of every molecule in the universe, artificial intelligence is like any other technology. It is developed incrementally, designed to satisfy multiple conditions, tested before it is implemented, and constantly tweaked for efficacy and safety (chapter 12). As the AI expert Stuart Russell puts it, “No one in civil engineering talks about ‘building bridges that don’t fall down.’ They just call it ‘building bridges.’” Likewise, he notes, AI that is beneficial rather than dangerous is simply AI.31

Artificial intelligence, to be sure, poses the more mundane challenge of what to do about the people whose jobs are eliminated by automation. But the jobs won’t be eliminated that quickly. The observation of a 1965 report from NASA still holds: “Man is the lowest-cost, 150-pound, nonlinear, all-purpose computer system which can be mass-produced by unskilled labor.”32 Driving a car is an easier engineering problem than unloading a dishwasher, running an errand, or changing a diaper, and at the time of this writing we’re still not ready to loose self-driving cars on city streets.33 Until the day when battalions of robots are inoculating children and building schools in the developing world, or for that matter building infrastructure and caring for the aged in ours, there will be plenty of work to be done. The same kind of ingenuity that has been applied to the design of software and robots could be applied to the design of government and private-sector policies that match idle hands with undone work.34


If not robots, then what about hackers? We all know the stereotypes: Bulgarian teenagers, young men wearing flip-flops and drinking Red Bull, and, as Donald Trump put it in a 2016 presidential debate, “somebody sitting on their bed that weighs 400 pounds.” According to a common line of thinking, as technology advances, the destructive power available to an individual will multiply. It’s only a matter of time before a single nerd or terrorist builds a nuclear bomb in his garage, or genetically engineers a plague virus, or takes down the Internet. And with the modern world so dependent on technology, an outage could bring on panic, starvation, and anarchy. In 2002 Martin Rees publicly offered the bet that “by 2020, bioterror or bioerror will lead to one million casualties in a single event.”35

How should we think about these nightmares? Sometimes they are intended to get people to take security vulnerabilities more seriously, under the theory (which we will encounter again in this chapter) that the most effective way to mobilize people into adopting responsible policies is to scare the living daylights out of them. Whether or not that theory is true, no one would argue that we should be complacent about cybercrime or disease outbreaks, which are already afflictions of the modern world (I’ll turn to the nuclear threat in the next section). Specialists in computer security and epidemiology constantly try to stay one step ahead of these threats, and countries should clearly invest in both. Military, financial, energy, and Internet infrastructure should be made more secure and resilient.36 Treaties and safeguards against biological weapons can be strengthened.37 Transnational public health networks that can identify and contain outbreaks before they become pandemics should be expanded. Together with better vaccines, antibiotics, antivirals, and rapid diagnostic tests, they will be as useful in combatting human-made pathogens as natural ones.38 Countries will also need to maintain antiterrorist and crime-prevention measures such as surveillance and interception.39

In each of these arms races, the defense will never, of course, be invincible. There may be episodes of cyberterrorism and bioterrorism, and the probability of a catastrophe will never be zero. The question I’ll consider is whether the grim facts should lead any reasonable person to conclude that humanity is screwed. Is it inevitable that the black hats will someday outsmart the white hats and bring civilization to its knees? Has technological progress ironically left the world newly fragile?

No one can know with certainty, but when we replace worst-case dread with calmer consideration, the gloom starts to lift. Let’s start with the historical sweep: whether mass destruction by an individual is the natural outcome of the process set in motion by the Scientific Revolution and the Enlightenment. According to this narrative, technology allows people to accomplish more and more with less and less, so given enough time, it will allow one individual to do anything—and given human nature, that means destroy everything.

But Kevin Kelly, the founding editor of Wired magazine and author of What Technology Wants, argues that this is in fact not the way technology progresses.40 Kelly was the co-organizer (with Stewart Brand) of the first Hackers’ Conference in 1984, and since that time he has repeatedly been told that any day now technology will outrun humans’ ability to domesticate it. Yet despite the massive expansion of technology in those decades (including the invention of the Internet), that has not happened. Kelly suggests that there is a reason: “The more powerful technologies become, the more socially embedded they become.” Cutting-edge technology requires a network of cooperators who are connected to still wider social networks, many of them committed to keeping people safe from technology and from each other. (As we saw in chapter 12, technologies get safer over time.) This undermines the Hollywood cliché of the solitary evil genius who commands a high-tech lair in which the technology miraculously works by itself. Kelly suggests that because of the social embeddedness of technology, the destructive power of a solitary individual has in fact not increased over time:

The more sophisticated and powerful a technology, the more people are needed to weaponize it. And the more people needed to weaponize it, the more societal controls work to defuse, or soften, or prevent harm from happening. I add one additional thought. Even if you had a budget to hire a team of scientists whose job it was to develop a species-extinguishing bio weapon, or to take down the internet to zero, you probably still couldn’t do it. That’s because hundreds of thousands of man-years of effort have gone into preventing this from happening, in the case of the internet, and millions of years of evolutionary effort to prevent species death, in the case of biology. It is extremely hard to do, and the smaller the rogue team, the harder. The larger the team, the more societal influences.41

All this is abstract—one theory of the natural arc of technology versus another. How does it apply to the actual dangers we face so that we can ponder whether humanity is screwed? The key is not to fall for the Availability bias and assume that if we can imagine something terrible, it is bound to happen. The real danger depends on the numbers: the proportion of people who want to cause mayhem or mass murder, the proportion of that genocidal sliver with the competence to concoct an effective cyber or biological weapon, the sliver of that sliver whose schemes will actually succeed, and the sliver of the sliver of the sliver that accomplishes a civilization-ending cataclysm rather than a nuisance, a blow, or even a disaster, after which life goes on.
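A back-of-the-envelope sketch makes the compounding of slivers explicit. Every number below is an arbitrary placeholder chosen only to show how the multiplication behaves, not an estimate of any real proportion.

```python
# Back-of-the-envelope sketch: how a chain of small proportions compounds.
# Every number is an arbitrary placeholder, not an estimate of real risks.

population = 8_000_000_000     # people on Earth, to the nearest billion
p_wants_mayhem = 1e-5          # placeholder: wants to cause mass murder
p_competent = 1e-2             # placeholder: could build an effective weapon
p_succeeds = 1e-2              # placeholder: evades police and security forces
p_cataclysmic = 1e-3           # placeholder: result ends civilization rather
                               # than being a nuisance, a blow, or a disaster

expected = (population * p_wants_mayhem * p_competent
            * p_succeeds * p_cataclysmic)
print(f"expected civilization-enders under these placeholders: {expected:.3f}")
# -> 0.008. The particular value is meaningless; the point is how quickly
# a product of slivers shrinks toward zero.
```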

Start with the number of maniacs. Does the modern world harbor a significant number of people who want to visit murder and mayhem on strangers? If it did, life would be unrecognizable. They could go on stabbing rampages, spray gunfire into crowds, mow down pedestrians with cars, set off pressure-cooker bombs, and shove people off sidewalks and subway platforms into the path of hurtling vehicles. The researcher Gwern Branwen has calculated that a disciplined sniper or serial killer could murder hundreds of people without getting caught.42 A saboteur with a thirst for havoc could tamper with supermarket products, lace some pesticide into a feedlot or water supply, or even just make an anonymous call claiming to have done so, and it could cost a company hundreds of millions of dollars in recalls, and a country billions in lost exports.43 Such attacks could take place in every city in the world many times a day, but in fact take place somewhere or other every few years (leading the security expert Bruce Schneier to ask, “Where are all the terrorist attacks?”).44 Despite all the terror generated by terrorism, there must be very few individuals out there waiting for an opportunity to wreak wanton destruction.

Among these depraved individuals, how large is the subset with the intelligence and discipline to develop an effective cyber- or bioweapon? Far from being criminal masterminds, most terrorists are bumbling schlemiels.45 Typical specimens include the Shoe Bomber, who unsuccessfully tried to down an airliner by igniting explosives in his shoe; the Underwear Bomber, who unsuccessfully tried to down an airliner by detonating explosives in his underwear; the ISIS trainer who demonstrated an explosive vest to his class of aspiring suicide terrorists and blew himself and all twenty-one of them to bits; the Tsarnaev brothers, who followed up on their bombing of the Boston Marathon by murdering a police officer in an unsuccessful attempt to steal his gun, and then embarked on a carjacking, a robbery, and a Hollywood-style car chase during which one brother ran over the other; and Abdullah al-Asiri, who tried to assassinate a Saudi deputy minister with an improvised explosive device hidden in his anus and succeeded only in obliterating himself.46 (An intelligence analysis firm reported that the event “signals a paradigm shift in suicide bombing tactics.”)47 Occasionally, as on September 11, 2001, a team of clever and disciplined terrorists gets lucky, but most successful plots are low-tech attacks on target-rich gatherings, and (as we saw in chapter 13) kill very few people. Indeed, I venture that the proportion of brilliant terrorists in a population is even smaller than the proportion of terrorists multiplied by the proportion of brilliant people. Terrorism is a demonstrably ineffective tactic, and a mind that delights in senseless mayhem for its own sake is probably not the brightest bulb in the box.48

Now take the small number of brilliant weaponeers and cut it down still further by the proportion with the cunning and luck to outsmart the world’s police, security experts, and counterterrorism forces. The number may not be zero, but it surely isn’t high. As with all complex undertakings, many heads are better than one, and an organization of bio- or cyberterrorists could be more effective than a lone mastermind. But that’s where Kelly’s observation kicks in: the leader would have to recruit and manage a team of co-conspirators who exercised perfect secrecy, competence, and loyalty to the depraved cause. As the size of the team increases, so do the odds of detection, betrayal, infiltrators, blunders, and stings.49

Serious threats to the integrity of a country’s infrastructure are likely to require the resources of a state.50 Software hacking is not enough; the hacker needs detailed knowledge about the physical construction of the systems he hopes to sabotage. Compromising the Iranian nuclear centrifuges with the Stuxnet worm in 2010 required a coordinated effort by two technologically sophisticated nations, the United States and Israel. State-based cyber-sabotage escalates the malevolence from terrorism to a kind of warfare, where the constraints of international relations, such as norms, treaties, sanctions, retaliation, and military deterrence, inhibit aggressive attacks, as they do in conventional “kinetic” warfare. As we saw in chapter 11, these constraints have become increasingly effective at preventing interstate war.

Nonetheless, American military officials have warned of a “digital Pearl Harbor” and a “Cyber-Armageddon” in which foreign states or sophisticated terrorist organizations would hack into American sites to crash planes, open floodgates, melt down nuclear power plants, black out power grids, and take down the financial system. Most cybersecurity experts consider the threats to be inflated—a pretext for more military funding, power, and restrictions on Internet privacy and freedom.51 The reality is that so far, not a single person has ever been injured by a cyberattack. The strikes have mostly been nuisances such as doxing, namely leaking confidential documents or e-mail (as in the Russian meddling in the 2016 American election), and distributed denial-of-service attacks, where a botnet (an array of hacked computers) floods a site with traffic. Schneier explains, “A real-world comparison might be if an army invaded a country, then all got in line in front of people at the Department of Motor Vehicles so they couldn’t renew their licenses. If that’s what war looks like in the 21st century, we have little to fear.”52

For the techno-doomsters, though, tiny probabilities are no comfort. All it will take, they say, is for one hacker or terrorist or rogue state to get lucky, and it’s game over. That’s why the word threat is preceded with existential, giving the adjective its biggest workout since the heyday of Sartre and Camus. In 2001 the chairman of the Joint Chiefs of Staff warned that “the biggest existential threat out there is cyber” (prompting John Mueller to comment, “As opposed to small existential threats, presumably”).

This existentialism depends on a casual slide from nuisance to adversity to tragedy to disaster to annihilation. Suppose there was an episode of bioterror or bioerror that killed a million people. Suppose a hacker did manage to take down the Internet. Would the country literally cease to exist? Would civilization collapse? Would the human species go extinct? A little proportion, please—even Hiroshima continues to exist! The assumption is that modern people are so helpless that if the Internet ever went down, farmers would stand by and watch their crops rot while dazed city-dwellers starved. But disaster sociology (yes, there is such a field) has shown that people are highly resilient in the face of catastrophe.53 Far from looting, panicking, or sinking into paralysis, they spontaneously cooperate to restore order and improvise networks for distributing goods and services. Enrico Quarantelli noted that within minutes of the Hiroshima nuclear blast,

survivors engaged in search and rescue, helped one another in whatever ways they could, and withdrew in controlled flight from burning areas. Within a day, apart from the planning undertaken by the government and military organizations that partly survived, other groups partially restored electric power to some areas, a steel company with 20 percent of workers attending began operations again, employees of the 12 banks in Hiroshima assembled in the Hiroshima branch in the city and began making payments, and trolley lines leading into the city were completely cleared with partial traffic restored the following day.54

One reason that the death toll of World War II was so horrendous is that war planners on both sides adopted the strategy of bombing civilians until their societies collapsed—which they never did.55 And no, this resilience was not a relic of the homogeneous communities of yesteryear. Cosmopolitan 21st-century societies can cope with disasters, too, as we saw in the orderly evacuation of Lower Manhattan following the 9/11 attacks in the United States, and the absence of panic in Estonia in 2007 when the country was struck with a devastating denial-of-service cyberattack.56

Bioterrorism may be another phantom menace. Biological weapons, renounced in a 1972 international convention by virtually every nation, have played no role in modern warfare. The ban was driven by a widespread revulsion at the very idea, but the world’s militaries needed little convincing, because tiny living things make lousy weapons. They easily blow back and infect the weaponeers, warriors, and citizens of the side that uses them (just imagine the Tsarnaev brothers with anthrax spores). And whether a disease outbreak fizzles out or (literally) goes viral depends on intricate network dynamics that even the best epidemiologists cannot predict.57

Biological agents are particularly ill-suited to terrorists, whose goal, recall, is not damage but theater (chapter 13).58 The biologist Paul Ewald notes that natural selection among pathogens works against the terrorist’s goal of sudden and spectacular devastation.59 Germs that depend on rapid person-to-person contagion, like the common-cold virus, are selected to keep their hosts alive and ambulatory so they can shake hands with and sneeze on as many people as possible. Germs get greedy and kill their hosts only if they have some other way of getting from body to body, like mosquitoes (for malaria), a contaminable water supply (for cholera), or trenches packed with injured soldiers (for the 1918 Spanish flu). Sexually transmitted pathogens, like HIV and syphilis, are somewhere in between, needing a long and symptomless incubation period during which hosts can infect their partners, after which the germs do their damage. Virulence and contagion thus trade off, and the evolution of germs will frustrate the terrorist’s aspiration to launch a headline-worthy epidemic that is both swift and lethal. Theoretically, a bioterrorist could try to bend the curve with a pathogen that is virulent, contagious, and durable enough to survive outside bodies. But breeding such a fine-tuned germ would require Nazi-like experiments on living humans that even terrorists (to say nothing of teenagers) are unlikely to carry off. It may be more than just luck that the world so far has seen just one successful bioterror attack (the 1984 tainting of salad bars with salmonella in an Oregon town by the Rajneeshee religious cult, which killed no one) and one spree killing (the 2001 anthrax mailings, which killed five).60

To be sure, advances in synthetic biology, such as the gene-editing technique CRISPR-Cas9, make it easier to tinker with organisms, including pathogens. But it’s difficult to re-engineer a complex evolved trait by inserting a gene or two, since the effects of any gene are intertwined with the rest of the organism’s genome. Ewald notes, “I don’t think that we are close to understanding how to insert combinations of genetic variants in any given pathogen that act in concert to generate high transmissibility and stably high virulence for humans.”61 The biotech expert Robert Carlson adds that “one of the problems with building any flu virus is that you need to keep your production system (cells or eggs) alive long enough to make a useful quantity of something that is trying to kill that production system. . . . Booting up the resulting virus is still very, very difficult. . . . I would not dismiss this threat completely, but frankly I am much more worried about what Mother Nature is throwing at us all the time.”62

And crucially, advances in biology work the other way as well: they also make it easier for the good guys (and there are many more of them) to identify pathogens, invent antibiotics that overcome antibiotic resistance, and rapidly develop vaccines.63 An example is the Ebola vaccine, developed in the waning days of the 2014–15 emergency, after public health efforts had capped the toll at twelve thousand deaths rather than the millions that the media had foreseen. Ebola thus joined a list of other falsely predicted pandemics such as Lassa fever, hantavirus, SARS, mad cow disease, bird flu, and swine flu.64 Some of them never had the potential to go pandemic in the first place because they are contracted from animals or food rather than in an exponential tree of person-to-person infections. Others were nipped by medical and public health interventions. Of course no one knows for sure whether an evil genius will someday overcome the world’s defenses and loose a plague upon the world for fun, vengeance, or a sacred cause. But journalistic habits and the Availability and Negativity biases inflate the odds, which is why I have taken Sir Martin up on his bet. By the time you read this you may know who has won.65


Some of the threats to humanity are fanciful or infinitesimal, but one is real: nuclear war.66 The world has more than ten thousand nuclear weapons distributed among nine countries.67 Many are mounted on missiles or loaded in bombers and can be delivered within hours or less to thousands of targets. Each is designed to cause stupendous destruction: a single one could destroy a city, and collectively they could kill hundreds of millions of people by blast, heat, radiation, and radioactive fallout. If India and Pakistan went to war and detonated a hundred of their weapons, twenty million people could be killed right away, and soot from the firestorms could spread through the atmosphere, devastate the ozone layer, and cool the planet for more than a decade, which in turn would slash food production and starve more than a billion people. An all-out exchange between the United States and Russia could cool the Earth by 8°C for years and create a nuclear winter (or at least autumn) that would starve even more.68 Whether or not nuclear war would (as is often asserted) destroy civilization, the species, or the planet, it would be horrific beyond imagining.

Soon after atom bombs were dropped on Japan, and the United States and the Soviet Union embarked on a nuclear arms race, a new form of historical pessimism took root. In this Promethean narrative, humanity has wrested deadly knowledge from the gods, and, lacking the wisdom to use it responsibly, is doomed to annihilate itself. In one version, it is not just humanity that is fated to follow this tragic arc but any advanced intelligence. That explains why we have never been visited by space aliens, even though the universe must be teeming with them (the so-called Fermi Paradox, after Enrico Fermi, who first wondered about it). Once life originates on a planet, it inevitably progresses to intelligence, civilization, science, nuclear physics, nuclear weapons, and suicidal war, exterminating itself before it can leave its solar system.

For some intellectuals the invention of nuclear weapons indicts the enterprise of science—indeed, of modernity itself—because the threat of a holocaust cancels out whatever gifts science may have bestowed upon us. The indictment of science seems misplaced, given that since the dawn of the nuclear age, when mainstream scientists were sidelined from nuclear policy, it’s been physical scientists who have waged a vociferous campaign to remind the world of the danger of nuclear war and to urge nations to disarm. Among the illustrious historic figures are Niels Bohr, J. Robert Oppenheimer, Albert Einstein, Isidor Rabi, Leo Szilard, Joseph Rotblat, Harold Urey, C. P. Snow, Victor Weisskopf, Philip Morrison, Herman Feshbach, Henry Kendall, Theodore Taylor, and Carl Sagan. The movement continues among high-profile scientists today, including Stephen Hawking, Michio Kaku, Lawrence Krauss, and Max Tegmark. Scientists have founded the major activist and watchdog organizations, including the Union of Concerned Scientists, the Federation of American Scientists, the Committee for Nuclear Responsibility, the Pugwash Conferences, and the Bulletin of the Atomic Scientists, whose cover shows the famous Doomsday Clock, now set at two and a half minutes to midnight.69

Physical scientists, unfortunately, often consider themselves experts in political psychology, and many seem to embrace the folk theory that the most effective way to mobilize public opinion is to whip people into a lather of fear and dread. The Doomsday Clock, despite adorning a journal with “Scientists” in its title, does not track objective indicators of nuclear security; rather, it’s a propaganda stunt intended, in the words of its founder, “to preserve civilization by scaring men into rationality.”70 The clock’s minute hand was farther from midnight in 1962, the year of the Cuban Missile Crisis, than it was in the far calmer 2007, in part because the editors, worried that the public had become too complacent, redefined “doomsday” to include climate change.71 And in their campaign to shake people out of their apathy, scientific experts have made some not-so-prescient predictions:

Only the creation of a world government can prevent the impending self-destruction of mankind.

—Albert Einstein, 195072

I have a firm belief that unless we have more serious and sober thought on various aspects of the strategic problem . . . we are not going to reach the year 2000—and maybe not even the year 1965—without a cataclysm.

—Herman Kahn, 196073

Within, at the most, ten years, some of those [nuclear] bombs are going off. I am saying this as responsibly as I can. That is the certainty.

—C. P. Snow, 196174

I am completely certain—there is not the slightest doubt in my mind—that by the year 2000, you [students] will all be dead.

—Joseph Weizenbaum, 197675

They are joined by experts such as the political scientist Hans Morgenthau, a famous exponent of “realism” in international relations, who predicted in 1979:

In my opinion the world is moving ineluctably towards a third world war—a strategic nuclear war. I do not believe that anything can be done to prevent it.76

And the journalist Jonathan Schell, whose 1982 bestseller The Fate of the Earth ended as follows:

One day—and it is hard to believe that it will not be soon—we will make our choice. Either we will sink into the final coma and end it all or, as I trust and believe, we will awaken to the truth of our peril . . . and rise up to cleanse the earth of nuclear weapons.

This genre of prophecy went out of style when the Cold War ended and humanity had not sunk into the final coma, despite having failed to create a world government or to cleanse the Earth of nuclear weapons. To keep the fear at a boil, activists keep lists of close calls and near-misses intended to show that Armageddon has always been just a glitch away and that humanity has survived only by dint of an uncanny streak of luck.77 The lists tend to lump truly dangerous moments, such as a 1983 NATO exercise that some Soviet officers almost mistook for an imminent first strike, with smaller lapses and snafus, such as a 2013 incident in which an off-duty American general who was responsible for nuclear-armed missiles got drunk and acted boorishly toward women during a four-day trip to Russia.78 The sequence that would escalate to a nuclear exchange is never laid out, nor are alternative assessments given which might put the episodes in context and lessen the terror.79

The message that many antinuclear activists want to convey is “Any day now we will all die horribly unless the world immediately takes measures which it has absolutely no chance of taking.” The effect on the public is about what you would expect: people avoid thinking about the unthinkable, get on with their lives, and hope the experts are wrong. Mentions of “nuclear war” in books and newspapers have steadily declined since the 1980s, and journalists give far more attention to terrorism, inequality, and sundry gaffes and scandals than they do to a threat to the survival of civilization.80 The world’s leaders are no more moved. Carl Sagan was a coauthor of the first paper warning of a nuclear winter, and when he campaigned for a nuclear freeze by trying to generate “fear, then belief, then response,” he was advised by an arms-control expert, “If you think that the mere prospect of the end of the world is sufficient to change thinking in Washington and Moscow you clearly haven’t spent much time in either of those places.”81

In recent decades predictions of an imminent nuclear catastrophe have shifted from war to terrorism, such as when the American diplomat John Negroponte wrote in 2003, “There is a high probability that within two years al-Qaeda will attempt an attack using a nuclear or other weapon of mass destruction.”82 Though a probabilistic prediction of an event that fails to occur can never be gainsaid, the sheer number of false predictions (Mueller has more than seventy in his collection, with deadlines staggered over several decades) suggests that prognosticators are biased toward scaring people.83 (In 2004, four American political figures wrote an op-ed on the threat of nuclear terrorism entitled “Our Hair Is on Fire.”)84 The tactic is dubious. People are easily riled by actual attacks with guns and homemade bombs into supporting repressive measures like domestic surveillance or a ban on Muslim immigration. But predictions of a mushroom cloud on Main Street have aroused little interest in policies to combat nuclear terrorism, such as an international program to control fissile material.

Such backfiring had been predicted by critics of the first nuclear scare campaigns. As early as 1945, the theologian Reinhold Niebuhr observed, “Ultimate perils, however great, have a less lively influence upon the human imagination than immediate resentments and frictions, however small by comparison.”85 The historian Paul Boyer found that nuclear alarmism actually encouraged the arms race by scaring the nation into pursuing more and bigger bombs, the better to deter the Soviets.86 Even the originator of the Doomsday Clock, Eugene Rabinowitch, came to regret his movement’s strategy: “While trying to frighten men into rationality, scientists have frightened many into abject fear or blind hatred.”87


As we saw with climate change, people may be likelier to acknowledge a problem when they have reason to think it is solvable than when they are terrified into numbness and helplessness.88 A positive agenda for removing the threat of nuclear war from the human condition would embrace several ideas.

The first is to stop telling everyone they’re doomed. The fundamental fact of the nuclear age is that no atomic weapon has been used since Nagasaki. If the hands of a clock point to a few minutes to midnight for seventy-two years, something is wrong with the clock. Now, maybe the world has been blessed with a miraculous run of good luck—no one will ever know—but before resigning ourselves to that scientifically disreputable conclusion, we should at least consider the possibility that systematic features of the international system have worked against their use. Many antinuclear activists hate this way of thinking because it seems to take the heat off countries to disarm. But since the nine nuclear states won’t be scuppering their weapons tomorrow, it behooves us in the meantime to figure out what has gone right, so we can do more of whatever it is.

Foremost is a historical discovery summarized by the political scientist Robert Jervis: “The Soviet archives have yet to reveal any serious plans for unprovoked aggression against Western Europe, not to mention a first strike against the United States.”89 That means that the intricate weaponry and strategic doctrines for nuclear deterrence during the Cold War—what one political scientist called “nuclear metaphysics”—were deterring an attack that the Soviets had no interest in launching in the first place.90 When the Cold War ended, the fear of massive invasions and preemptive nuclear strikes faded with it, and (as we shall see) both sides felt relaxed enough to slash their weapon stockpiles without even bothering with formal negotiations.91 Contrary to a theory of technological determinism in which nuclear weapons start a war all by themselves, the risk very much depends on the state of international relations. Much of the credit for the absence of nuclear war between great powers must go to the forces behind the decline of war between great powers (chapter 11). Anything that reduces the risk of war reduces the risk of nuclear war.

The close calls, too, may not depend on a supernatural streak of good luck. Several political scientists and historians who have analyzed documents from the Cuban Missile Crisis, particularly transcripts of John F. Kennedy’s meetings with his security advisors, have argued that despite the participants’ recollections about having pulled the world back from the brink of Armageddon, “the odds that the Americans would have gone to war were next to zero.”92 The records show that Khrushchev and Kennedy remained in firm control of their governments, and that each sought a peaceful end to the crisis, ignoring provocations and leaving themselves several options for backing down.

The hair-raising false alarms and brushes with accidental launches also need not imply that the gods smiled on us again and again. They might instead show that the human and technological links in the chain were predisposed to prevent catastrophes, and were strengthened after each mishap.93 In their report on nuclear close calls, the Union of Concerned Scientists summarizes the history with refreshing judiciousness: “The fact that such a launch has not occurred so far suggests that safety measures work well enough to make the chance of such an incident small. But it is not zero.”94

Thinking about our predicament in this way allows us to avoid both panic and complacency. Suppose that the chance of a catastrophic nuclear war breaking out in a single year is one percent. (This is a generous estimate: the probability must be less than that of an accidental launch, because escalation from a single accident to a full-scale war is far from automatic, and in seventy-two years the number of accidental launches has been zero.)95 That would surely be an unacceptable risk, because a little algebra shows that the probability of our going a century without such a catastrophe is less than 37 percent. But if we can reduce the annual chance of nuclear war to a tenth of a percent, the world’s odds of a catastrophe-free century increase to 90 percent; at a hundredth of a percent, the chance rises to 99 percent, and so on.
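To spell out the “little algebra” (a back-of-the-envelope check, using the round numbers above and assuming, purely for illustration, that each year’s risk is independent of the last): if the annual chance of catastrophe is p, the chance of getting through a century unscathed is (1 − p) multiplied by itself a hundred times.

\[
P(\text{catastrophe-free century}) = (1 - p)^{100}
\]
\[
(1 - 0.01)^{100} \approx 0.37, \qquad (1 - 0.001)^{100} \approx 0.90, \qquad (1 - 0.0001)^{100} \approx 0.99
\]

The independence assumption is a simplification, but it is enough to show how a seemingly small annual risk compounds over a century, and how sharply the century-long odds improve as that annual risk is driven down.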

Fears of runaway nuclear proliferation have also proven to be exaggerated. Contrary to predictions in the 1960s that there would soon be twenty-five or thirty nuclear states, fifty years later there are nine.96 During that half-century four countries have un-proliferated by relinquishing nuclear weapons (South Africa, Kazakhstan, Ukraine, and Belarus), and another sixteen pursued them but thought the better of it, most recently Libya and Iran. For the first time since 1946, no non-nuclear state is known to be developing nuclear weapons.97 True, the thought of Kim Jong-un with nukes is alarming, but the world has survived half-mad despots with nuclear weapons before, namely Stalin and Mao, who were deterred from using them, or, more likely, never felt the need. Keeping a cool head about proliferation is not just good for one’s mental health. It can prevent nations from stumbling into disastrous preventive wars, such as the invasion of Iraq in 2003, and the possible war between Iran and the United States or Israel that was much discussed around the end of that decade.

Tremulous speculations about terrorists stealing a nuclear weapon or building one in their garage and smuggling it into the country in a suitcase or shipping container have also been scrutinized by cooler heads, including Michael Levi in On Nuclear Terrorism, John Mueller in Atomic Obsession and Overblown, Richard Muller in Physics for Future Presidents, and Richard Rhodes in Twilight of the Bombs. Joining them is the statesman Gareth Evans, an authority on nuclear proliferation and disarmament, who in 2015 delivered the seventieth-anniversary keynote lecture at the Annual Clock Symposium of the Bulletin of the Atomic Scientists entitled “Restoring Reason to the Nuclear Debate.”

At the risk of sounding complacent—and I am not—I have to say that [nuclear security], too, would benefit by being conducted a little less emotionally, and a little more calmly and rationally, than has tended to be the case.

While the engineering know-how required to build a basic fission device like the Hiroshima or Nagasaki bomb is readily available, highly enriched uranium and weapons-grade plutonium are not at all easily accessible, and to assemble and maintain—for a long period, out of sight of the huge intelligence and law enforcement resources that are now being devoted to this threat worldwide—the team of criminal operatives, scientists and engineers necessary to acquire the components of, build and deliver such a weapon would be a formidably difficult undertaking.98

Now that we’ve all calmed down a bit, the next step in a positive agenda for reducing the nuclear threat is to divest the weapons of their ghoulish glamour, starting with the Greek tragedy in which they have starred. Nuclear weapons technology is not the culmination of the ascent of human mastery over the forces of nature. It is a mess we blundered into because of vicissitudes of history and that we now must figure out how to extricate ourselves from. The Manhattan Project grew out of the fear that the Germans were developing a nuclear weapon, and it attracted scientists for reasons explained by the psychologist George Miller, who had worked on another wartime research project: “My generation saw the war against Hitler as a war of good against evil; any able-bodied young man could stomach the shame of civilian clothes only from an inner conviction that what he was doing instead would contribute even more to ultimate victory.”99 Quite possibly, had there been no Nazis, there would be no nukes. Weapons don’t come into existence just because they are conceivable or physically possible. All kinds of weapons have been dreamed up that never saw the light of day: death rays, battlestars, fleets of planes that blanket cities with poison gas like cropdusters, and cracked schemes for “geophysical warfare” such as weaponizing the weather, floods, earthquakes, tsunamis, the ozone layer, asteroids, solar flares, and the Van Allen radiation belts.100 In an alternative history of the 20th century, nuclear weapons might have struck people as equally bizarre.

Nor do nuclear weapons deserve credit for ending World War II or cementing the Long Peace that followed it—two arguments that repeatedly come up to suggest that nuclear weapons are good things rather than bad things. Most historians today believe that Japan surrendered not because of the atomic bombings, whose devastation was no greater than that from the firebombings of sixty other Japanese cities, but because of the entry into the Pacific war of the Soviet Union, which threatened harsher terms of surrender.101

And contrary to the half-facetious suggestion that The Bomb be awarded the Nobel Peace Prize, nuclear weapons turn out to be lousy deterrents (except in the extreme case of deterring existential threats, such as each other).102 Nuclear weapons are indiscriminately destructive and contaminate wide areas with radioactive fallout, including the contested territory and, depending on the weather, the bomber’s own soldiers and citizens. Incinerating massive numbers of noncombatants would shred the principles of distinction and proportionality that govern the conduct of war and would constitute the worst war crimes in history. That can make even politicians squeamish, so a taboo grew up around the use of nuclear weapons, effectively turning them into bluffs.103 Nuclear states have been no more effective than non-nuclear states in getting their way in international standoffs, and in many conflicts, non-nuclear countries or factions have picked fights with nuclear ones. (In 1982, for example, Argentina seized the Falkland Islands from the United Kingdom, confident that Margaret Thatcher would not turn Buenos Aires into a radioactive crater.) It’s not that deterrence itself is irrelevant: World War II showed that conventional tanks, artillery, and bombers were already massively destructive, and no nation was eager for an encore.104

Far from easing the world into a stable equilibrium (the so-called balance of terror), nuclear weapons can poise it on a knife’s edge. In a crisis, nuclear weapon states are like an armed homeowner confronting an armed burglar, each tempted to shoot first to avoid being shot.105 In theory this security dilemma or Hobbesian trap can be defused if each side has a second-strike capability, such as missiles in submarines or airborne bombers that can elude a first strike and exact devastating revenge—the condition of Mutual Assured Destruction (MAD). But some debates in nuclear metaphysics raise doubts about whether a second strike can be guaranteed in every conceivable scenario, and whether a nation that depended on it might still be vulnerable to nuclear blackmail. So the United States and Russia maintain the option of “launch on warning,” in which a leader who is advised that his missiles are under attack can decide in the next few minutes whether to use them or lose them. This hair trigger, as critics have called it, could set off a nuclear exchange in response to a false alarm or an accidental or unauthorized launch. The lists of close calls suggest that the probability is disconcertingly greater than zero.
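To make the logic of the trap concrete, here is a toy payoff sketch of my own (the ordinal numbers are chosen purely for illustration and are not drawn from the strategic literature). Without a secure second strike, the row player prefers to wait if the rival waits, but prefers to strike first if the rival is going to strike, so mutual fear can tip both sides into shooting:

\[
\begin{array}{l|cc}
\text{No secure second strike} & \text{Rival waits} & \text{Rival strikes} \\ \hline
\text{Wait} & 3 \;(\text{peace}) & 0 \;(\text{disarmed}) \\
\text{Strike first} & 2 \;(\text{war on better terms}) & 1 \;(\text{war}) \\
\end{array}
\]

With an assured second strike, striking first no longer improves the outcome in any column, so the temptation to go first disappears:

\[
\begin{array}{l|cc}
\text{Assured second strike} & \text{Rival waits} & \text{Rival strikes} \\ \hline
\text{Wait} & 3 \;(\text{peace}) & 1 \;(\text{war, retaliation}) \\
\text{Strike first} & 1 \;(\text{war}) & 1 \;(\text{war}) \\
\end{array}
\]

That, in miniature, is the theoretical promise of MAD, and the doubts about guaranteed second strikes discussed above are doubts about whether the second table really describes the world.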

Since nuclear weapons needn’t have been invented, and since they are useless in winning wars or keeping the peace, they can be uninvented—not in the sense that the knowledge of how to make them will vanish, but in the sense that they can be dismantled and no new ones built. It would not be the first time that a class of weapons has been marginalized or scrapped. The world’s nations have banned antipersonnel landmines, cluster munitions, and chemical and biological weapons, and they have seen other high-tech weapons of the day collapse under the weight of their own absurdity. During World War I the Germans built a gargantuan, multistory “supergun,” the Paris Gun, which lobbed a 200-pound projectile around 80 miles, terrifying Parisians with shells that fell from the sky without warning. The behemoths, the biggest of which was the Paris Gun’s World War II successor, the Gustav Gun, were inaccurate and unwieldy, so few of them were built and they were eventually scuttled. The nuclear skeptics Ken Berry, Patricia Lewis, Benoît Pelopidas, Nikolai Sokov, and Ward Wilson point out:

Today countries do not race to build their own superguns. . . . There are no angry diatribes in liberal papers about the horror of these weapons and the necessity of banning them. There are no realist op-eds in conservative papers asserting that there is no way to shove the supergun genie back into the bottle. They were wasteful and ineffective. History is replete with weapons that were touted as war-winners that were eventually abandoned because they had little effect.106

Could nuclear weapons go the way of the Gustav Gun? In the late 1950s a movement arose to Ban the Bomb, and over the decades it escaped its founding circle of beatniks and eccentric professors and has gone mainstream. Global Zero, as the goal is now called, was broached in 1986 by Mikhail Gorbachev and Ronald Reagan, who famously mused, “A nuclear war cannot be won and must never be fought. The only value in our two nations possessing nuclear weapons is to make sure they will never be used. But then would it not be better to do away with them entirely?” In 2007 a bipartisan quartet of defense realists (Henry Kissinger, George Shultz, Sam Nunn, and William Perry) wrote an op-ed called “A World Free of Nuclear Weapons,” with the backing of fourteen other former National Security Advisors and Secretaries of State and Defense.107 In 2009 Barack Obama gave a historic speech in Prague in which he stated “clearly and with conviction America’s commitment to seek the peace and security of a world without nuclear weapons,” an aspiration that helped win him the Nobel Peace Prize.108 It was echoed by his Russian counterpart at the time, Dmitry Medvedev (though not so much by either one’s successor). Yet in a sense the declaration was redundant, because the United States and Russia, as signatories of the 1970 Non-Proliferation Treaty, were already committed by its Article VI to eliminating their nuclear arsenals.109 Also committed are the United Kingdom, France, and China, the other nuclear states grandfathered in by the treaty. (In a backhanded acknowledgment that treaties matter, India, Pakistan, and Israel never signed it, and North Korea withdrew.) The world’s citizens are squarely behind the movement: large majorities in almost every surveyed country favor abolition.110

Zero is an attractive number because it expands the nuclear taboo from using the weapons to possessing them. It also removes any incentive for a nation to obtain nuclear weapons to protect itself against an enemy’s nuclear weapons. But getting to zero will not be easy, even with a carefully phased sequence of negotiation, reduction, and verification.111 Some strategists warn that we shouldn’t even try to get to zero, because in a crisis the former nuclear powers might rush to rearm, and the first past the post might launch a pre-emptive strike out of fear that its enemy would do so first.112 According to this argument, the world would be better off if the nuclear grandfathers kept a few around as a deterrent. In either case, the world is very far from zero, or even “a few.” Until that blessed day comes, there are incremental steps that could bring the day closer while making the world safer.

The most obvious is to whittle down the size of the arsenal. The process is well under way. Few people are aware of how dramatically the world has been dismantling nuclear weapons. Figure 19-1 shows that the United States has reduced its inventory by 85 percent from its 1967 peak, and now has fewer nuclear warheads than at any time since 1956.113 Russia, for its part, has reduced its arsenal by 89 percent from its Soviet-era peak. (Probably even fewer people realize that about 10 percent of electricity in the United States comes from dismantled nuclear warheads, mostly Soviet.)114 In 2010 both countries signed the New Strategic Arms Reduction Treaty (New START), which commits them to shrinking their inventories of deployed strategic warheads by two-thirds.115 In exchange for Congressional approval of the treaty, Obama agreed to a long-term modernization of the American arsenal, and Russia is modernizing its arsenal as well, but both countries will continue to reduce the size of their stockpiles at rates that may even exceed the ones set out in the treaty.116 The barely discernible layers laminating the top of the stack in the graph represent the other nuclear powers. The British and French arsenals were smaller to begin with and have shrunk in half, to 215 and 300, respectively. (China’s has grown slightly from 235 to 260, India’s and Pakistan’s have grown to around 135 apiece, Israel’s is estimated at around 80, and North Korea’s is unknown but small.)117 As I mentioned, no additional countries are known to be pursuing nuclear weapons, and the number possessing fissile material that could be made into bombs has been reduced over the past twenty-five years from fifty to twenty-four.118

Figure 19-1: Nuclear weapons, 1945–2015

Sources: HumanProgress, http://humanprogress.org/static/2927, based on data from the Federation of American Scientists, Kristensen & Norris 2016a, updated in Kristensen 2016; see Kristensen & Norris 2016b for additional explanation. The counts include weapons that are deployed and those that are stockpiled, but exclude weapons that are retired and awaiting dismantlement.

Cynics might be unimpressed by a form of progress that still leaves the world with 10,200 atomic warheads, since, as the 1980s bumper sticker pointed out, one nuclear bomb can ruin your whole day. But with 54,000 fewer nuclear bombs on the planet than there were in 1986, there are far fewer opportunities for accidents that might ruin people’s whole day, and a precedent has been set for continuing disarmament. More warheads will be eliminated under the terms of the New START, and as I mentioned, still more reductions may take place outside the framework of treaties, which are freighted with legalistic negotiations and divisive political symbolism. When tensions among great powers recede (a long-term trend, even if it’s in abeyance today), they quietly shrink their expensive arsenals.119 Even when rivals are barely speaking, they can cooperate in a reverse arms race using the tactic that the psycholinguist Charles Osgood called Graduated Reciprocation in Tension-Reduction (GRIT), in which one side makes a small unilateral concession with a public invitation that it be reciprocated.120 If, someday, a combination of these developments pared the arsenals down to 200 warheads apiece, it would not only dramatically reduce the chance of an accident but essentially eliminate the possibility of nuclear winter, the truly existential threat.121

In the near term, the greatest menace of nuclear war comes not so much from the number of weapons in existence as from the circumstances in which they might be used. The policy of launch on warning, launch under attack, or hair-trigger alert is truly the stuff of nightmares. No early warning system can perfectly distinguish signal from noise, and a president awakened by the proverbial 3:00 A.M. phone call would have minutes to decide whether to fire his missiles before they were destroyed in their silos. In theory, he could start World War III in response to a short circuit, a flock of seagulls, or a bit of malware from that Bulgarian teenager. In reality, the warning systems are better than that, and there is no “hair trigger” that automatically launches missiles without human intervention.122 But when missiles can be launched on short notice, the risks of a false alarm or an accidental, rogue, or impetuous launch are real.

The original rationale for launch on warning was to thwart a massive first strike that would destroy every missile in its silo and leave the country unable to retaliate. But as I mentioned, states can launch weapons from submarines, which hide in deep water, or from bomber aircraft, which can be sent scrambling, making the weapons invulnerable to a first strike and poised to exact devastating revenge. The decision to retaliate could be made in the cold light of day, when the uncertainty has passed: if a nuclear bomb has been detonated on your territory, you know it.

Launch on warning, then, is unnecessary for deterrence and unacceptably dangerous. Most nuclear security analysts recommend—no, insist—that nuclear states take their missiles off hair-trigger alert and put them on a long fuse.123 Obama, Nunn, Shultz, George W. Bush, Robert McNamara, and several former Commanders of Strategic Command and Directors of the National Security Agency agree.124 Some, like William Perry, recommend scrapping the land-based leg of the nuclear triad altogether and relying on submarines and bombers for deterrence, since silo-based missiles are sitting ducks which tempt a leader to use them while they can. So with the fate of the world at stake, why would anyone want to keep missiles in silos on hair-trigger alert? Some nuclear metaphysicians argue that in a crisis, the act of re-alerting de-alerted missiles would be a provocation. Others note that because silo-based missiles are more reliable and accurate, they are worth safeguarding, because they can be used not just to deter a war but to win one. And that brings us to another way to reduce the risks of nuclear war.

It’s hard for anyone with a conscience to believe that their country is prepared to use nuclear weapons for any purpose other than deterring a nuclear attack. But that is the official policy of the United States, the United Kingdom, France, Russia, and Pakistan, all of whom have declared they might launch a nuclear weapon if they or their allies have been massively attacked with non-nuclear weapons. Apart from violating any concept of proportionality, a first-use policy is dangerous, because an adversary attacking with conventional weapons might be tempted to escalate to nuclear weapons pre-emptively, striking before it could be struck. Even if it didn’t, once it was nuked it might retaliate with a nuclear strike of its own.

So a common-sense way to reduce the threat of nuclear war is to announce a policy of No First Use.125 In theory, this would eliminate the possibility of nuclear war altogether: if no one uses a weapon first, they’ll never be used. In practice, it would remove some of the temptation of a pre-emptive strike. Nuclear weapon states could all agree to No First Use in a treaty; they could get there by GRIT (with incremental commitments like never attacking civilian targets, never attacking a non-nuclear state, and never attacking a target that could be destroyed by conventional means); or they could simply adopt it unilaterally, which is in their own interests.126 The nuclear taboo has already reduced the deterrent value of a Maybe First Use policy, and the declarant could still protect itself with conventional forces and with a second-strike capability: nuclear tit for tat.

No First Use seems like a no-brainer, and Barack Obama came close to adopting it in 2016, but was talked out of it at the last minute by his advisors.127 The timing wasn’t right, they said; it might signal weakness to a newly obstreperous Russia, China, and North Korea, and it might scare nervous allies who now depend on the American “nuclear umbrella” into seeking nuclear weapons of their own, particularly with Donald Trump threatening to cut back on American support of its coalition partners. In the long term, these tensions may subside, and No First Use may be considered once more.

Nuclear weapons won’t be abolished anytime soon, and certainly not by the original target date of the Global Zero movement, 2030. In his 2009 Prague speech Obama said that the goal “will not be reached quickly—perhaps not in my lifetime,” which dates it to well after 2055 (see figure 5-1). “It will take patience and persistence,” he advised, and recent developments in the United States and Russia confirm that we’ll need plenty of both.

But the pathway has been laid out. If nuclear warheads continue to be dismantled faster than they are built, if they are taken off a hair trigger and guaranteed not to be used first, and if the trend away from interstate war continues, then by the second half of the century we could end up with small, secure arsenals kept only for mutual deterrence. After a few decades they might deter themselves out of a job. At that point they would seem ludicrous to our grandchildren, who will beat them into plowshares once and for all. During this climbdown we may never reach a point at which the chance of a catastrophe is zero. But each step down can lower the risk, until it is in the range of the other threats to our species’ immortality, like asteroids, supervolcanoes, or an Artificial Intelligence that turns us into paper clips.