We Don't Have to Destroy Ourselves
We are the only species that has the capacity to make ourselves extinct. But courting that risk is not inevitable; it is a choice, and it's not too late to make the right one.
Thank you for reading The Garden of Forking Paths. This edition is free for everyone, but please consider upgrading to a paid subscription to fully access all 165+ essays and support my research and writing. A paid subscription makes a great last minute gift for that intellectually curious person in your life—cerebral discovery, all for just $4/month.
I: Don’t Draw a Black Ball from the Urn of Invention
“It is an unnerving thought,” Bill Bryson once wrote, “that we may be the living universe's supreme achievement and its worst nightmare simultaneously.” Our sensations of awe and beauty, our intense ability to love, our self-aware consciousness may each be unparalleled—a rare instance of the universe perceiving itself.1
But as humans, we also possess an awesome capability known to no other species on the planet: we can make ourselves extinct. No matter how much octopuses scheme or chimpanzees twirl rudimentary tools, they cannot destroy themselves. We are in our own club, trapped in a cage of existential risk—designed by our cleverness, as seemingly inescapable as the march of progress, and with the unfortunate potential to kill us all.
Oxford philosopher Nick Bostrom, one of the world’s leading experts on existential risk, likens human technological progress to a giant urn of possibility. Imagine that every potential invention—everything a clever member of our species could ever hope to discover—is hidden inside that enormous urn, just waiting to be plucked out.
But there’s a catch. In Bostrom’s representation, the urn of near-infinite possibilities is filled with inventions in the form of balls of different shades. Many are white balls: inventions that are unequivocally good for us, like pacemakers and polio vaccines. There are also many grey balls, which can help but also may produce great harm, such as the smokestacks that powered prosperity yet also polluted the planet, or the development of Zyklon B, which was used as an effective pesticide before the Nazi regime used it to murder millions.2
And then, perhaps, there is at least one black ball. The black ball, if plucked out, unleashes a technology on us that will inevitably lead to our extinction. The black ball, Bostrom writes, is “a technology that invariably or by default destroys the civilization that invents it.”
These days, with astonishing frequency, inventors stick their hands into that urn of possibility, waggle their fingers around hoping for a lucky dip, and seize progress. So far, we have not wiped ourselves out.
However, Bostrom argues, if there is at least one black ball lurking within the urn, then it follows that we will eventually pull it out. It could be soon, it could be in thousands of years, but the logic is ironclad. It’s only a matter of time. This is what he terms the Vulnerable World Hypothesis.
“Our civilization has a considerable ability to pick up balls,” Bostrom warns, “but no ability to put them back into the urn. We can invent but we cannot un-invent. Our strategy is to hope that there is no black ball.”
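That “ironclad” logic is just compounding probability. Here is a minimal sketch in Python, using an invented per-draw probability purely for illustration (Bostrom assigns no such number): so long as each new invention carries some small but constant chance of being the black ball, the odds of avoiding it forever shrink toward zero as the draws pile up.

```python
# Illustrative only: the per-draw probability below is an assumption,
# not an estimate from Bostrom or anyone else.

def survival_probability(p_black: float, draws: int) -> float:
    """Chance of never having drawn the black ball after `draws` inventions."""
    return (1 - p_black) ** draws

# Even a 0.01% chance per invention compounds toward near-certain ruin.
for draws in (100, 1_000, 10_000, 100_000):
    print(f"{draws:>7} draws: P(no black ball yet) = "
          f"{survival_probability(0.0001, draws):.4f}")
```

Under that toy assumption, the odds of having dodged the black ball are still about 99 percent after 100 draws, but effectively zero after 100,000. The only escape from the arithmetic is for the urn to contain no black ball at all, which is exactly the hope Bostrom describes as our current strategy.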
Every species dies out, but it is of great interest to us when that might happen for Homo sapiens. Few of us would be terribly bothered if we knew that humans only had a million years left on Earth. But we might behave rather differently if we knew our species had less than a century before snuffing itself out. Would you decide to have kids? Would you worry about your bank account? Would you still go to work, 9 to 5, filling out spreadsheets at an insurance company?
These questions are not doom-mongering inducements to paralyzing depression. Instead, they highlight lurking peril—but call us to action to build a better, more resilient world. As the late E.O. Wilson once put it, we’re living a “one Earth, one experiment” existence. “We’ve only got one shot at this. Let’s be careful.”
II: Mirror Microbes—New Existential Risk Just Dropped
To understand a new existential risk—a potential black ball that scientists are just starting to pull out of the urn—we first need a brief biology lesson.
The building blocks of life can all be described as either “right-handed” or “left-handed,” a characteristic known as chirality. If you picture the familiar double helix of DNA, imagine the structure as a spiral staircase. If you’re turning right as you descend, it’s right-handed; if you’re turning left, it’s left-handed. Both RNA and DNA are built from right-handed nucleotides, whereas proteins are built from left-handed amino acids.3 That’s just the way it is—perhaps a “frozen accident” of evolutionary history.4
But humans are a curious species and modernity has bestowed upon us unprecedented tools to manipulate how the world is into how the world could be. That combination of imagination and scientific progress has led scientists to not just wonder “what if?” but to try to make an alternative version of evolution a reality, turning right-handed into left.5
Enter the stranger-than-fiction idea of “mirror life.”
The idea can be summarized like this: what if we flipped everything around? Maybe that would be cool, or interesting, or unexpectedly useful! Plus, there’s a mystery: did DNA become right-handed because that spiral staircase shape provides a competitive advantage to all life? Or was it just arbitrary, a fluke of evolutionary history? Experimental curiosity has, after all, driven much human progress.
Such curiosity has already led to some preliminary research to overcome the technical barriers of turning right-handed DNA into a freakish left-handed variety, inverting nature’s fundamental inventions. Scientists reckon that they’re probably around a decade away from being able to produce these mirror images of life, including so-called “mirror microbes,” a sort of reverse bacterial design.
While this biological engineering breakthrough could have beneficial applications, a group of scientists just released a research letter warning that mirror microbes pose an existential threat to humanity.6 Because our immune responses have evolved to combat what already exists in nature’s biochemical repertoire, producing something altogether new could plausibly leave us—and many species around us—utterly defenseless against mirror images of life.
Our built-in mechanisms for detecting harmful pathogens would be no help: these mirror microbes “would be invisible to the immune system until it was too late,” says Timothy Hand, a co-author and immunologist at the University of Pittsburgh.
In short, it’s plausible that mirror life could, in the worst scenario, wipe out Homo sapiens—and much other life, too. Or, if you want to dress it up in more restrained Science speak: “Unchecked replication of mirror bacteria within internal tissues is likely to be deleterious to the host organism and may be lethal.”
The 38 scientists who wrote the warning suggest that any attempts to mitigate the risk aren’t safe enough given the catastrophic consequences of an error. For example, through a process known as synthetic auxotrophy, biologists could design mirror life that is dependent on compounds that do not exist in nature, meaning these mirror microbes (theoretically) couldn’t survive outside the lab. But bacteria evolve—and humans make mistakes. Do we want to roll the dice?
The researchers, including some who had already started work on mirror life before realizing the potential for inflicting devastating harm on the world, call for a ban on research and a stop to funding activities that could produce mirror life. Some scientists have pushed back, calling this alarmist, with one likening this warning to “banning the transistor because you’re worried about cybercrime 30 years later.”
But cybercrime isn’t a black ball. Mirror life might be.
III: Luck and Existential Risk
What happens if, at some point, our luck runs out?
Just after midnight on September 26, 1983, with geopolitical tensions soaring, Stanislav Petrov saw the word “LAUNCH” appear on the screen before him. Petrov was the Soviet duty officer in charge of monitoring the “Oko” early warning system for surprise American nuclear attacks, and that system was now showing five incoming intercontinental ballistic missiles, or ICBMs, heading toward the Soviet Union.
Protocol dictated what Petrov was supposed to do: inform his superior officers, who would then initiate the Soviet counterstrike, an all-out response that was aimed at annihilating the United States—with the unfortunate possible consequence of killing much of the life on the planet.
Petrov had to make a decision: to inform his superiors, or to assume it was a false alarm and not report it. Because he believed that any American surprise attack would be overwhelming rather than a measly five missiles, he told his bosses that it was merely a false alarm. He had no idea whether that was true or not. “The siren howled,” Petrov later recalled, “but I just sat there for a few seconds, staring at the big, back-lit, red screen.” He sat, waited, and hoped he was right.
It turned out that the “five ICBMs” were actually just…sunlight reflecting on clouds in a weird way. Petrov, grappling in the final years of his life with his unexpected role as a man who may have saved billions of lives through restrained inaction, put it simply: the world was “lucky it was me on shift that night.”
This form of luck—the fluke of a single human decision going the right way in a critical moment—isn’t a good way to gamble with existential risk. But there’s a deeper, more philosophical form of luck we should be grateful for, and it’s a kind of luck that may be running out as emerging, dangerous technologies become cheaply accessible to ordinary people.
We are lucky that, by happenstance, we live in a universe where the laws of physics make splitting the atom with nuclear fission difficult. It requires rare, valuable materials, incredible scientific know-how, expensive equipment, and so on. Consequently, there are only nine governments—and zero non-state actors—that currently possess nuclear weapons.
But what if, as Bostrom wonders, the universe had featured slightly different properties? What if, for example, one could split an atom by cleverly putting an electric current through a couple pieces of glass with a battery? In that universe, humans would likely have already blown themselves up. In short, part of the reason that nuclear weapons haven’t been a black ball yet is because they have an arbitrary property: making nuclear weapons is hard. The nature of the technology means that regulation is a lot easier than it would be if destructive fission could easily be achieved by millions of humans at low cost.
Bostrom, writing in 2019, categorizes this hypothetical as a “Type 1” vulnerability—which he terms “easy nukes” for shorthand. It’s a combination of a highly destructive, existential risk technology, but with easy access to the masses. In that scenario, it’s pretty much guaranteed that some idiot will use it. Or, as Bostrom imagines: “consider a ‘very easy nukes’ scenario, in which any halfwit can create an easily portable thermonuclear weapon at the kitchen sink over the course of an afternoon: this would definitely qualify as a civilizational vulnerability.”
There is some concern (of varying degrees of plausibility) that artificial intelligence could, eventually, operate this way—with an incredibly low bar to entry, and the ability for “any halfwit” to wreak havoc with digital tools that could either accidentally or deliberately end up wiping us out.7
Bostrom creates a whole typology of these risks—great bedtime reading, I assure you—but perhaps the most intriguing is what he terms a “Type 0” risk, known by the disarmingly cute alliterative name of Surprising Strangelets.
Formally, he describes it as “some technology that carries a hidden risk such that the default outcome when it is discovered is inadvertent civilizational devastation.” For example, in 1942, researchers trying to develop nuclear weapons had a bit of a panic when they realized there was a possibility that their intended blast could accidentally ignite the atmosphere, potentially rendering the planet uninhabitable. Subsequent calculations showed this probably wouldn’t happen, so they went ahead with the test. The ongoing evidence that the calculations were right is that we are currently alive.
But sometimes, calculations are wrong. As Bostrom points out, in 1954, the United States tested a new nuclear device. The math nerds carefully crunched the numbers and said it would produce a six megaton blast, with a little wiggle room, for a maximum of eight. Instead, it was more than twice as big as forecast—15 megatons. It poisoned people downwind—and could have been much worse. The point is obvious: sometimes, we get it wrong, and when we’re playing with existential risk, minor calculation errors could lead to humanity’s end.
Would you push a big red button if there was a one percent chance that doing so might end humanity? Those kinds of choices are only becoming more frequent given our technological advancement (see mirror life above). At some point, our luck will run out. Eventually, an “easy nukes” scenario will emerge, in which access to humanity destroying technology is democratized, or we will be caught off guard by the unexpected devastation of a “surprising strangelet.”
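To make that gamble concrete, here is a small back-of-the-envelope calculation. The one percent figure is the rhetorical one from the question above, not an actual risk estimate, and the gambles are assumed to be independent; the point is simply that repeated “small” risks compound quickly.

```python
import math

# Illustrative assumption: each button-push carries a 1% chance of catastrophe,
# taken from the rhetorical question above, not from any formal risk estimate.
per_gamble_risk = 0.01

def survival_odds(presses: int) -> float:
    """Chance of getting through `presses` independent button-pushes unscathed."""
    return (1 - per_gamble_risk) ** presses

for presses in (1, 10, 50, 100):
    print(f"{presses:>3} presses: {survival_odds(presses):.1%} chance of survival")

# How many presses before survival becomes no better than a coin flip?
breakeven = math.ceil(math.log(0.5) / math.log(1 - per_gamble_risk))
print(f"Survival odds drop below 50% after about {breakeven} presses.")
```

Under those toy assumptions, survival becomes no better than a coin flip after roughly 69 such button-pushes. That is the arithmetic behind the worry that, sooner or later, our luck runs out.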
IV: Extinction Panics and Calculating Existential Risk
Believe it or not, human worries about extinction are relatively recent in Western thought. Natural science long had to be reconciled with religious maxims, and it was viewed as implausible that the divine would ever erase humanity, or that the species alive today were any different from those that had always existed.
Advances in the Enlightenment—particularly in the field of geology—started to challenge this viewpoint. However, even when the fossil record spit out strange species, there was a belief that they simply existed elsewhere on the planet (Thomas Jefferson believed that “mammoths or megalonyxes” must exist somewhere else on the American continent, for example). Even Charles Lyell, the founding father of modern geology, believed that dinosaurs were merely temporarily absent and would someday again roam the British countryside.
However, eventually it became obvious that cataclysmic change was possible—and that many creatures had ceased to exist. From the early 1800s, extinction fears were laced into scientific reasoning and, in particular, became a guiding trope of early English science fiction writing. Prophecies about the end of humanity spread, whether from the disappearance of the “aqueous fluid of our globe,” or the moon falling onto the Earth, or a great darkness and the end of the world.
Then, just over a century later, in the aftermath of millions dead from World War I and the Spanish Flu, an extinction panic emerged: we would imminently be the authors of our own demise. Winston Churchill, acting as doom-monger, penned an essay with the rather upbeat title of “Shall We All Commit Suicide?” Two decades before the Manhattan Project succeeded, he prophetically warned about the miniaturisation of unfathomable destruction, humanity unleashing devastation with “a bomb no bigger than an orange.”
As Tyler Austin Harper of Bates College explains, the 1920s were a time of rapid technological change, laced with fears of powerful new machines decimating populations, futuristic robots, and inevitable social collapse. It would lead, according to H.G. Wells, “inexorably to fresh wars, to shortages, hunger, miseries and social debacles, at last either to complete extinction or to a degradation beyond our present understanding.” Harper points to a 1920 play, Karel Čapek’s R.U.R., warning that robots will enslave humanity. “They’ve ceased to be machines. They’re already aware of their superiority, and they hate us as they hate everything human.”
The parallels are obvious—and the panic then, as now, was based on real risks. Still, it’s oddly comforting to know that some people have been convinced of the imminent demise of Homo sapiens for hundreds of years—and yet, we’re still here.
However, for too long, little attention was paid to these risks. In 2013, Bostrom produced a graphic comparing academic output tied to some select keywords, illustrating how little scholarly attention existential risk had received relative to far more mundane topics.
Low probability, high impact events are notoriously difficult to forecast. Such “Black Swans” defy models because modelling is typically the act of using past patterns to predict future events. When an event has never happened before, the past is a poor guide.8 It’s inherently uncertain—and the combination of uncertainty with existential risk is not a reassuring realm to inhabit.
This dynamic gave rise to some thinkers—including Nassim Nicholas Taleb and Rupert Read—arguing in favor of the “precautionary principle,” designed to deal with situations of “ruin”—catastrophic devastation—rather than mere “risk.”9
And even though anyone who thinks seriously about catastrophic risk understands the inescapable uncertainty of putting numbers to forecasting unprecedented futures, many have tried to quantify our existential peril.
The United Nations Office for Disaster Risk Reduction conducted a meta-analysis of this number crunching and came up with an average estimate. By their count, there is between a 2 percent and a 14 percent chance of an extinction-level event in the 21st century. And while I think these figures are largely made up because of the impossibility of such modelling, what these estimates are saying is that a lot of very smart people believe it could happen—within the next 75 years.
(A glass half full person might say that some very smart people think that there’s at least an 86 percent chance that humanity will survive into the next century. Take your pick!)
However, for those who favor complacency, it’s important to note: just because something has been alright for a long time, doesn’t mean it will continue to be alright forever. Taleb uses the parable of the turkey on a farm, a plump bird who sees the farmer as a lovely man who comes to bring him food, without fail, every day. The farmer is his friend, his provider, his carer—until one day, right before Thanksgiving, ruin comes unexpectedly. We would be wise not to be that turkey.
V: We Don’t Have to Destroy Ourselves
If Bostrom is right that a black ball is lurking in the urn, he points to two possible ways to stave off our cataclysmic end. The first is a non-starter: create such extensive policing and surveillance that “any halfwit” can’t destroy humanity, because every halfwit is constantly being monitored to ensure they don’t take advantage of an “easy nukes” scenario.
The more realistic solution is difficult, but not impossible: true global governance of existential risks. This requires unprecedented cooperation to avoid unprecedented consequences. So far, our track record is not great.
For example, regardless of covid’s origins, it’s obvious to anyone in pandemic preparedness that lab leaks of dangerous, experimental pathogens pose a risk to humanity. (Many have definitively happened; we’re lucky they weren’t worse.)
And yet, as Filippa Lentzos of King’s College London explained to me previously, there still is no international organization with enforcement power that regulates or inspects all of the most potentially dangerous labs in the world—those known as biosafety level 4, or BSL-4 labs. It’s such an obvious area of potential risk—and we continue to allow a laissez-faire attitude, leaving these decisions mostly to national governments. The same is broadly true of artificial intelligence, where the technology is rapidly outpacing regulation.
But we don’t have to accept reckless courting of existential risk. If we fear a black ball is being withdrawn from the urn, we have the power to stop it. It’s a choice.
Thankfully, there are examples of successful international coordination. Winston Churchill, writing in 1924—having seen the grotesque power of chemical warfare—warned that it would only get worse, a devastating existential risk. Yet, as Harper highlights, soon after, the 1925 Geneva Protocol created an international ban—and those horrors have largely been contained. More recently, to protect the ozone layer, humanity came together and acted.
The point is not to suggest that existential risks are overblown—they’re clearly not. Whether it’s extreme climate change, nuclear apocalypse, devastation from artificial intelligence, mirror life or other biothreats, we are living in an unprecedented era of a new power—the power to destroy ourselves.
But humans can be astonishingly wise when it counts most. If we demand that our leaders take these threats more seriously, and if our politicians have the courage to create new, powerful international agencies with the teeth to manage and mitigate the lurking black balls that could end our species, then I’d be far more willing to bet on humanity’s longevity.
Despite our cornucopia of flaws, we are worth saving: a species defined not just by our maddening defects, but by our unique curiosity, wonder, awe, love, beauty, ingenuity, and kindness. And yet, nobody can save us but ourselves.
When I was a child, my grandfather gave me the two-word secret for a successful life: avoid catastrophe. Those two words should be the mantra for humanity in the 21st century—and (hopefully) beyond.
Thank you for reading. If you’ve found this thought-provoking and you value my work, please consider upgrading to a paid subscription to keep it sustainable—and to unlock all 165+ essays. It’s $4/month and keeps me writing. I truly appreciate your support.
1. This is a line from Alan Watts: “Through our eyes, the universe is perceiving itself.”
2. Sometimes a single scientist pulls out a white ball and a grey one. Fritz Haber, for example, invented the Haber process, which increased the carrying capacity of the Earth by billions. His work was also used to invent Zyklon B, which was later used to kill millions, including many people in his extended family.
3. For more on this, see this warning article from Science.
4. The term “frozen accident” is from an early pioneer of complexity, Murray Gell-Mann.
5. Fun fact: I was left-handed until I switched as a child. The switch made scissors easier but it was much worse for my dashed prospects in the Major League Baseball draft.
6. As Robert F. Service, writing in Science, notes: “Synthetic biologists have previously synthesized mirror-image proteins and genetic molecules. And mirror-image amino acids and peptides—the building blocks of proteins—have been incorporated into several approved drugs. Because natural enzymes struggle to break down mirror-image biomolecules, these components help the drugs survive longer in the body.”
7. Bostrom is the same philosopher who invented the “paperclip problem,” a thought experiment to highlight the inadvertent dangers of AI.
8. This is also true for low frequency events. For example, the Yellowstone Supervolcano has erupted previously, but the number of eruptions is so low that any probabilistic forecast based on past intervals between eruptions is extremely uncertain. It’s possible it’ll never erupt again. It’s also possible it will erupt next year. Nobody knows.
9. The precautionary principle has plenty of detractors who argue it stifles innovation.