The Precipice: Existential Risk and the Future of Humanity


Toby Ord, The Precipice: Existential Risk and the Future of Humanity, Hachette, 2020, 480pp., $18.99 (pbk), ISBN 9780316484923.

Reviewed by Theron Pummer, University of St Andrews

2020.08.02


In this timely book, Toby Ord argues that there is a one in six chance that humanity will suffer an existential catastrophe within the next 100 years, and that minimizing this risk should be a major global priority. We live in an age of heightened existential risk, due to such powerful technologies as nuclear weapons, biotechnology, and artificial intelligence. Ord calls this age "the Precipice." It is an unsustainable time: humanity cannot carry on playing Russian roulette. Unless we soon achieve a much higher level of existential safety, we will destroy ourselves.

The book offers an engaging and empirically grounded synoptic view of humanity's past, present, and future, and of the risks threatening to cause that future to be far worse than it could be. Do not be intimidated by the fact that the book is 480 pages long. The main text is only about 250 pages, and the rest is notes, references, and appendices. You can work through the main narrative quite quickly if you resist the urge to read the notes. Because it is so well written -- and on such an important topic -- I found the book hard to put down once I got going.

It is not your typical philosophy book. It contains a high proportion of information from other disciplines (physics, biology, earth science, computer science, history, anthropology, statistics, international relations, and political science). It devotes comparatively little space to grappling with philosophical literature. It is not principally concerned with the adjudication of traditional philosophical debates. For these reasons, some philosophers might mistakenly assume that the book is of little philosophical interest. But, as I suggest below, the book brings into focus a number of frontiers on which it is vitally important that we make philosophical progress -- and that we do so sooner rather than later.

Chapter 1 traces human history, from African savannahs 200,000 years ago all the way up to the beginning of the Precipice, marked by the detonation of the first atomic bomb in 1945.

Chapter 2 defines an existential catastrophe as the destruction of humanity's long-term potential. This potential could be destroyed by extinction, an unrecoverable collapse, or a permanent dystopia. An existential risk, then, is a risk of an existential catastrophe. Ord presents a range of philosophical arguments for the claim that it is of great importance that we minimize existential risk (I return to these arguments below). At the very least, it is more important than the relatively small fraction of global resources currently devoted to existential safety as such would suggest.

Chapter 3 is on natural risks, including risks of asteroid and comet impacts, supervolcanic eruptions, and stellar explosions. Ord argues that we can appeal to the fact that we have already survived for 2,000 centuries as evidence that the total existential risk posed by these threats from nature is relatively low (less than one in 2,000 per century).
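
To make the shape of this survival argument explicit (my own rough sketch of the reasoning, not Ord's exact calculation): if the per-century probability of an existential catastrophe from natural causes were some constant r, then the probability of humanity surviving 2,000 centuries would be (1 - r)^2000. If r were as high as one in 500, that survival probability would be roughly (0.998)^2000, or about 0.02, making our long track record very surprising; at r equal to one in 2,000 it is about 0.37, which is not surprising at all. Our unbroken record of survival thus supports an upper bound on natural risk of roughly one in 2,000 per century.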

Chapter 4 is on anthropogenic risks, including risks from nuclear war, climate change, and environmental damage. Ord estimates these risks as significantly higher, each posing about a one in 1,000 chance of existential catastrophe within the next 100 years. However, the odds are much higher that climate change will result in non-existential catastrophes, which could in turn make us more vulnerable to other existential risks.

Chapter 5 is on future risks, including engineered pandemics and artificial intelligence. Worryingly, Ord puts the risk of engineered pandemics causing an existential catastrophe within the next 100 years at roughly one in thirty. With any luck the COVID-19 pandemic will serve as a "warning shot," making us better able to deal with future pandemics, whether engineered or not. Ord's discussion of artificial intelligence is more worrying still. The risk here stems from the possibility of developing an AI system that both exceeds every aspect of human intelligence and has goals that do not coincide with our flourishing. Drawing upon views held by many AI researchers, Ord estimates that the existential risk posed by AI over the next 100 years is an alarming one in ten.

Chapter 6 turns to questions of quantifying particular existential risks (some of the probabilities cited above do not appear until this chapter) and of combining these into a single estimate of the total existential risk we face over the next 100 years. Ord's estimate of the latter is one in six.
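
As a purely illustrative aggregation (my own sketch, not Ord's method of combination, which must handle risks that are neither independent nor mutually exclusive): if the individual risks r1, . . . , rn over the century were independent, the chance of avoiding all of them would be (1 - r1) x . . . x (1 - rn), and the total risk would be 1 minus that product. Plugging in just the figures cited above -- one in ten for AI, one in thirty for engineered pandemics, one in 1,000 each for nuclear war, climate change, and environmental damage -- already yields a total of roughly 13 percent, or about one in eight; the remaining distance to Ord's one in six comes from risks not enumerated in this review.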

Chapter 7 puts forth a long-term strategy for humanity. According to Ord, our first priority is getting off the Precipice. Once we have achieved existential security, we will be in a position to enter what he calls "the Long Reflection," a period during which humanity works out what its best kind of future would look like. Even if complete certainty and consensus in ethics are unachievable, humanity could at least work out what constitutes enough certainty or consensus.

Chapter 8 provides lower-bound sketches of humanity's long-term potential, with respect to duration, scale, and experiences. We could build a massive galactic civilization that lasts for trillions of years and that is guided by the ethical discoveries of the Long Reflection.

Ord implies that there is relatively little need to adjudicate traditional ethical debates right now, that this can instead be left for the Long Reflection. For instance, he writes:

[fully achieving humanity's potential] can wait upon a serious reflection about which future is best and on how to achieve that future without any fatal missteps. And while it would not hurt to begin such reflection now, it is not the most urgent task. To maximize our chance of success, we need first to get ourselves to safety -- to achieve existential security. This is the task of our time. The rest can wait. (193)

But I believe The Precipice in fact brings into focus a number of ethical and other philosophical debates the adjudication of which cannot wait. Even if most such progress can wait until the Long Reflection, several philosophical questions bear on which existential risks it is important to reduce, and on just how important it is to reduce them. At least some philosophy is urgent, as we need it to navigate the Precipice.

There are urgent questions in epistemology and decision theory. One question concerns how to estimate risks of existential catastrophes. When dealing with these unprecedented events, there is little hard evidence to go on (168, 195-9). Depending on how skeptical we should be in such contexts, we could end up with a one in 1,000 probability of existential catastrophe in the next century rather than Ord's one in six.

Another question concerns how to take account of moral uncertainty in our decision theory -- for example, to what extent it can be treated like empirical uncertainty. In fact, Ord and two coauthors have a forthcoming book on this topic.[1]

Yet another question concerns the significance of extremely small probabilities of extremely large gains. Even assuming that preserving humanity's long-term potential is billions of times better than saving the lives of 100 presently existing people, we may be reluctant to accept the claim that we should take a one in a billion probability of preserving humanity's long-term potential (by marginally reducing the likelihood of one particular existential catastrophe) over the certainty of saving the 100 lives. Views on which we should maximize expected value imply this claim, but so do many others. In fact, unless we reject one or more seemingly innocuous views about how to evaluate prospects, we must accept the Pascalian conclusion that, for any nonzero probabilities p1 and p2, and any finitely good outcome O1, there is a sufficiently better outcome O2 such that p2 of O2 should be taken over p1 of O1.
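
To see why expected value maximization implies the Pascalian conclusion (a simple sketch, assuming outcomes can be assigned finite values): the prospect of getting O2 with probability p2 has expected value p2 times the value of O2, while the prospect of getting O1 with probability p1 has expected value p1 times the value of O1. For any nonzero p1 and p2 and any finite value of O1, we can pick an O2 good enough that p2 times its value exceeds p1 times the value of O1 -- so the maximizer must prefer the p2 gamble on O2, however tiny p2 is.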

There are also, I argue, urgent questions in ethics. This may not seem to be the case. It may instead seem that more or less all ethical views, for all their differences, converge on the claim that it is of great importance to minimize existential risk. If this is correct, then even if we need to "solve ethics" during the Long Reflection (to fully achieve humanity's potential), it is unnecessary to do so now. Ethics already tells us what we currently need to know: put an end to the Precipice and begin a period of existential security. In chapter 2, Ord presents a range of arguments which, taken together, could be claimed to show that there is such ethical convergence.

First Ord offers a present-oriented argument for the importance of minimizing existential risk, based on the fact that existential catastrophes could cause presently existing people to suffer and die (42-3). Second is a future-oriented argument, based on the fact that existential catastrophes could prevent trillions of future people from living good lives, and prevent future impersonal goods such as beauty and human achievement (43-9). Third is a past-oriented argument, based on the fact that over the centuries past generations have made enormous sacrifices to build up humanity's knowledge, technology, and prosperity (49-52). Ord suggests that we might have a duty to repay a debt to past generations by "paying it forward" to future generations. Fourth is an argument that appeals to the virtues of humanity as a whole -- as some sort of group agent (52-5). Individuals do better at promoting their long-term flourishing when they cultivate the virtue of prudence. Humanity will likewise do better at promoting its long-term flourishing if it cultivates the appropriate "civilizational virtues." Fifth is an argument that appeals to the possibility that, as rational agents, we are unique (53-6). Had he believed us to be alone in the universe, Carl Sagan might have said we are the way for the cosmos to know itself.

Whatever one makes of the plausibility of each individual argument, there is indeed a decent range of different views in ethics that converge on the claim that it is of great importance to minimize existential risk (56). But this range leaves out a number of important contenders. The argument from civilizational virtues has a virtue ethical flavor. And the past-oriented argument has a deontological flavor. But each of these two arguments represents only a fairly narrow sort of consideration within a very broad tradition in ethics. Neither comes close to exhausting the considerations characteristic of its tradition, nor is either particularly central to that tradition. Some standard considerations within these traditions may even count against the claim that it is of great importance to minimize existential risk.

A standard deontological principle is the doctrine of doing and allowing, according to which it is morally worse to cause harm than it is merely to allow it to occur. Suppose an asteroid is headed for Earth. If we deflect the asteroid, things will carry on as usual. If we do nothing, the asteroid impact will destroy all life on the planet and render it permanently inhospitable. If things carry on as usual, many presently existing people will continue living lives worth continuing and many more people with lives worth living will come into existence, but many presently existing people will continue living lives not worth continuing and many more people with lives not worth living will come into existence. If we deflect the asteroid, we are in a way causally responsible for this mixture of people with lives worth living and people with lives not worth living. We are not in the same way causally responsible for what results from merely allowing the impact to occur. Even if the good effects of deflecting the asteroid outweigh the bad effects of doing so, defenders of the doctrine of doing and allowing may argue that it is wrong to cause so many people with lives not worth living to come into existence. And even if by the lights of the doctrine it is permissible to deflect the asteroid, the moral conflict involved may substantially dampen the overall moral importance of doing so.

On other views -- embraced by many consequentialists and deontologists alike -- the addition of people with lives worth living is not in itself a good thing. According to the procreation asymmetry, it is not good to bring into existence people with lives that would be worth living, but it is bad to bring into existence people with lives that would not be worth living (or, while we do not have moral reason to bring into existence people with lives worth living, we have moral reason not to bring into existence people with lives not worth living). Some defenders of the asymmetry argue that, with respect to possible future people, preventing extinction is very bad: it would bring into existence very many people with lives not worth living, which is very bad, and it would bring into existence very many people with lives worth living, which is not good, and the latter cannot offset all the badness of the former. They may further argue that all this badness cannot be justified on the grounds that it is necessary for impersonal goods like beauty, human achievement, or cosmic uniqueness, or that it is necessary to repay debts to past generations. And arguably, if it is so bad to add new generation after new generation of people, the civilizational analogue of prudence would favor a peaceful extinction (for example, via globally coordinated mass sterilization). It would of course be very bad if an extinction event cut short the worth-continuing lives of those existing at the time of the event. Defenders of the asymmetry could argue that the badness of bringing into existence trillions of people with lives not worth living outweighs the badness of allowing billions of people with worth-continuing lives to die. Or they could argue that the complaints of the former would collectively outweigh the complaints of the latter.

Extinction is not the only sort of existential catastrophe. Humanity's long-term potential could also be destroyed by a permanent dystopia (153-8). Defenders of the asymmetry could argue that, while a peaceful extinction would be good, a permanent dystopia would be very bad. It would appear there is wider ethical convergence on the claim that it is of great importance to minimize risk of a permanent dystopia than there is on the claim that it is of great importance to minimize total existential risk. Some particular existential risks that risk extinction also risk permanent dystopia. But among particular existential risks, the risk of permanent dystopia, as opposed to extinction, seems proportionately greatest in the case of AI.

So there are questions in ethics that bear on which particular existential risks it is most important to reduce. To better decide, we need ethical progress now. Ord suggests two rejoinders. Each would support the claim that, even prior to resolving the above sorts of ethical issues, we still have good reason to reduce total existential risk, including risk of extinction.

The first rejoinder is that, even if a view in population ethics like the procreation asymmetry is correct, we still have good reason to avoid extinction so that our descendants can travel around the universe reducing the suffering of alien life forms (chapter 2, footnote 51).

But there would presumably be strong deontological objections to a proposal of bringing about lives not worth living in order to keep humanity around long enough to develop the tools needed for reducing alien suffering. A moderate deontologist would have to agree that we are permitted to cause suffering as long as it is necessary for the prevention of a sufficiently greater amount of suffering. But the proposal in question appeals merely to the chance that there are alien life forms whose suffering we can greatly reduce. Whether a small but nonzero probability of preventing enormous suffering yields a strong enough reason to avoid extinction brings us back to the unresolved Pascalian issue noted above.

The other rejoinder Ord suggests is based on an appeal to moral uncertainty and the value of keeping our options open (56-7, 264, and chapter 2, footnotes 50, 52). Even if we are fairly confident that a view like the procreation asymmetry is correct, we cannot be certain that it is. There is at least a significant probability that it is very good to bring about many additional people who would have lives worth living. So, if we allow ourselves to go extinct, we risk throwing away a fantastically good future. But if in the future we become sufficiently confident that the asymmetry is correct, we can at that point cause ourselves to go extinct peacefully.

The problem with this rejoinder is that extinction is not the only sort of state we might become irreversibly locked into. If we do not allow ourselves to go extinct, we risk locking ourselves into a fantastically bad future -- with no option of an early escape via extinction. First take empirical uncertainty: there is a significant probability that, if we do not go extinct soon, we will become locked into a dystopian state that is extremely bad on virtually all ethical views. There could be centuries of nothing but misery for everyone followed immediately by extinction. Next take moral uncertainty: there is a nonzero probability that the badness of certain kinds of misery and suffering cannot be compensated for by the addition of any number of lives that are well worth living. The future could be extremely bad even if the state we become permanently locked into is one in which each century trillions of people live wonderful lives and one person lives a miserable life. And there is a significant probability that, even if there is some number of wonderful lives that can compensate for one miserable life, it is a rather large number.

It is unclear whether Ord's appeal to moral uncertainty and the value of keeping our options open provides a strong enough reason to avoid extinction. At least, there is more that would need to be said. One possibility is that the full calculation does on balance favor avoiding extinction, but only by a relatively narrow margin. The devil is in the details.

For these sorts of reasons, I believe there is a good deal of philosophical progress that should not wait for the Long Reflection. We are at the Precipice now, forced to make decisions with monumental ramifications. Achieving greater confidence and consensus on a host of philosophical questions will improve our deliberations on these urgent matters, and increase our chances of getting things right. Philosophers are needed for navigating the Precipice. Ord's book is the best starting place for philosophers looking to participate in this vital interdisciplinary endeavor.


[1] William MacAskill, Krister Bykvist, and Toby Ord, Moral Uncertainty (Oxford: Oxford University Press, forthcoming).