Anthropic Bias: Observation Selection Effects in Science and Philosophy


Bostrom, Nick, Anthropic Bias: Observation Selection Effects in Science and Philosophy, Routledge, 2002, 224pp, $70.00 (hbk), ISBN 0415938589

Reviewed by Neil Manson, Virginia Commonwealth University

2003.02.09


Is the fact that life evolved on Earth evidence that life is abundant in the universe? Does the fact that many of the free physical parameters of the universe require “fine-tuning” in order for life to be possible support the hypothesis that there is a vast multitude of physically real universes? Are we entitled to conclude from our being among the first sixty billion humans ever to have lived that probably no more than several trillion humans will ever come into existence – that is, that human extinction lies in the relatively near future? Answering these much-discussed questions involves reasoning from observations that are conditioned by “anthropic bias” or “observational selection effects.” A selection effect is a bias introduced by limitations in one’s data collection process; for example, the Literary Digest’s straw poll in the 1936 U.S. presidential election, which drew its sample largely from telephone directories and automobile registration lists, was biased against Roosevelt supporters. An observational selection effect is a selection effect that arises from the very preconditions of observership. In this lively and topical book (to my knowledge it is the only book-length treatment of the topic), Bostrom claims to give an account of how to reason in light of observational selection effects that is both more rigorous and more general than any other in the literatures on fine-tuning, the anthropic principle, and the Doomsday Argument.

After giving an introductory overview in Chapter 1, Bostrom devotes Chapter 2 to the case of fine-tuning in cosmology – the need for the free parameters to be “just right” in order for there to be life. As he properly notes, the distinctive claim the problem of fine-tuning makes on our attention is not that the basic features of our physics appear to be arbitrary, but that the universe would have been lifeless if those arbitrary features had been slightly different. His primary concerns are to establish that the latter fact is relevant in deciding whether fine-tuning calls out for explanation and that fine-tuning supports the multiverse hypothesis. He skillfully exposes the flaws in the argument (advocated by Ian Hacking, Phil Dowe, and Roger White) that the multiverse hypothesis does not raise the probability that this universe is fine-tuned for life.
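To make the structure of that argument explicit, here is a minimal reconstruction of my own (not a quotation from the book). Let M be the multiverse hypothesis and F the proposition that this universe – rigidly designated – has life-permitting parameters. The Hacking/Dowe/White line of thought is, roughly, that since each universe’s parameters are fixed independently of how many other universes there happen to be,

\[ P(F \mid M) = P(F \mid \neg M), \]

from which Bayes’ theorem yields

\[ P(M \mid F) = \frac{P(F \mid M)\,P(M)}{P(F)} = P(M), \]

so that the fine-tuning of this universe would leave the multiverse hypothesis exactly where it started. Whether, and where, this reasoning goes wrong is the business of Bostrom’s discussion in Chapter 2.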

I am less satisfied when he turns to fine-tuning’s need for explanation and to the difference between surprising and unsurprising improbable events. He rightly acknowledges that it cannot be in light of its improbability that fine-tuning is surprising, because if it were, then the existence of any other possible universe (including one possessing some equally improbable yet boring set of characteristics) would be just as surprising. Upon reaching this conclusion, however, Bostrom says we should abandon “vague talk of what makes events surprising” (p. 32) and proceeds to develop an analysis of how the multiverse hypothesis serves to make fine-tuning call out for explanation. Yet the account of surprising improbable events he sketches does allow that fine-tuning is surprising – if fine-tuning is considered in light of the design hypothesis. I don’t see why Bostrom abandons the well-established “surprisingness” analysis of need for explanation, especially when it seems poised to deliver an intuitively plausible answer to the question of why cosmic fine-tuning for life stands in need of explanation. Granting the design hypothesis this status opens the door to some provocative questions. For example, what business do scientists and philosophers have letting the desire to provide an alternative to the design hypothesis drive their theory-construction? I would have liked to hear Bostrom’s answer, but his disavowal of the “surprisingness” analysis of need for explanation cuts off that line of inquiry.

In Chapter 3, Bostrom surveys the history of “the anthropic principle” – a term coined by Brandon Carter in 1974 to identify a principle to be used for reasoning in light of observational selection effects in the cosmological case. The term has since gone out of control; not only are there dozens of inconsistent formulations of it in the literature, but it now is often used to refer to the data of fine-tuning themselves or to pro-design/pro-multiverse arguments based on those data (rather than to an epistemic principle). Bostrom claims that, in addition to being a source of confusion, the anthropic principles heretofore articulated all fail to solve the problem of “freak observers” – the problem that, since every possible observation is consistent with a cosmological theory according to which the universe is sufficiently vast (or infinite), no observation could rule out or favor any given vast-world cosmology. (For example, in a sufficiently vast universe, it is likely that a black hole someplace in spacetime will produce a brain making any given observation.) Bostrom rightly abandons the term “anthropic principle” and seeks to develop a principle that is clearer, more general, and capable of solving the freak-observer problem. He starts with “the Self-Sampling Assumption.”

(SSA) One should reason as if one were a random sample from the set of all observers in one’s reference class.

The rest of the book tracks SSA as it applies to cases, with Bostrom ultimately concluding that SSA must be replaced with “the Strong Self-Sampling Assumption.”

(SSSA) One should reason as if one’s present observer-moment were a random sample from the set of all observer-moments in its reference class.

In Chapter 4, Bostrom leads the reader through a variety of thought experiments that lend intuitive support to SSA. We see that SSA leads to the right answer in all of the cases so long as we manage to settle on the right reference class. Identifying the appropriate reference class turns out to be the key problem in applying SSA, particularly when different hypotheses entail different numbers of observers.
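To see how SSA generates definite credences, consider a toy case (the setup and numbers are mine, though Bostrom’s thought experiments are in something like this spirit). Suppose you know only that the world contains exactly 100 observers, 90 of them in blue rooms and 10 in red rooms, and that you are one of them. Taking yourself to be a random sample from the set of all 100 observers, SSA directs you to believe

\[ P(\text{I am in a blue room}) = \frac{90}{100} = 0.9. \]

The difficulty just mentioned arises when rival hypotheses disagree about how many observers exist at all, for then everything turns on which beings get admitted into the reference class in the first place.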

In Chapter 5, Bostrom seeks to show that support for SSA also comes from the indispensable, if not explicit, role it plays in scientific reasoning. He claims SSA leads us to the right results with respect to puzzling cases in cosmology, thermodynamics, evolutionary biology, traffic analysis, and quantum physics. In applying SSA to the problem of freak observers in cosmology, Bostrom says the solution is to construe the evidence not as “Such and such observations are made” (E) but as “We are making such and such observations” (E’). Given that we are the ones making the observations, it is improbable that, amongst all those making relevantly similar observations, we would be in the tiny (“freak”) minority that is doing so as a result of having been produced by a black hole.
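The quantitative point is easy to see with an illustrative calculation (the numbers and notation are mine). Suppose a vast-world cosmology T implies that, among all observer-moments making observations relevantly like ours, some enormous number n arise in the ordinary evolutionary way while a far smaller number k are freak observers spat out by black holes. Reasoning from E’ in accordance with SSA, the probability that we are among the freaks is

\[ P(\text{we are freak observers} \mid E', T) = \frac{k}{n + k}, \]

which is negligible so long as k is tiny relative to n. Construed merely as E, by contrast, the evidence is entailed by every sufficiently vast cosmology and so cannot discriminate among them.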

Bostrom is saying that the problem of freak observers derives from the fact that, when we existentially generalize from E’ to E, we needlessly deprive ourselves of important information – namely, indexical information. I agree that the indexical element in observation is crucial to solving the problem of freak observers. However, an element in Bostrom’s argument – he uses the phrase “rationality requirement” – needs clarification. He says what makes it right to reason from E’ in this case is “the rationality requirement that one should take all relevant evidence into account, [which] dictates that in case E’ leads to different conclusions than does E, it is E’ that determines what we ought to believe” (p. 74). His endorsement of the rationality requirement seems at odds with his rejection (in Chapter 2) of the objection that the multiverse hypothesis fails to explain why this universe is fine-tuned for life. If the multiverse hypothesis doesn’t raise the probability that this universe is fine-tuned (where we treat ‘this universe’ as a rigid designator), then even though the multiverse hypothesis raises the probability that some universe or other is fine-tuned, the rationality requirement will dictate that we ought not to take the fine-tuning of this universe as evidence in favor of the multiverse hypothesis. Yet (based on my reading of Chapter 2), Bostrom never denies the claim that the multiverse hypothesis fails to raise the probability that this universe is fine-tuned; he simply thinks that claim isn’t relevant.

The problem, it seems to me, is that Bostrom has tied together into one principle both a true claim (“one should take all relevant evidence into account”) and a false one (“in case E’ leads to different conclusions than does E, it is E’ that determines what we ought to believe”). This latter claim, which we can call “the total-evidence requirement,” is mistaken, as the following example shows. Let us suppose a demographer seeks to determine the death rate of current or former Hollywood movie stars. She reads her newspaper’s “Entertainment” section during a particular week and sees that (D’) stars Susan Sweetheart, Mark Matineeidol, and Humphrey Heartthrob died that week. Using existential generalization, she derives that (D) three Hollywood stars died that week. From D our demographer proceeds to calculate the movie-star death rate. Yet the death rate of Hollywood stars (whatever it is) is probabilistically independent of the deaths of Susan Sweetheart, Mark Matineeidol, and Humphrey Heartthrob, so D’ doesn’t confirm the death rate the demographer derived from D. What are we to believe? According to the total-evidence requirement we ought not to believe the demographer’s calculation, because D’ contains more information than D. Something’s gone wrong here; to fix it, the total-evidence requirement will have to be modified or abandoned.

“The Doomsday Argument” (DA) – the argument that the human race is more likely to go extinct sooner than we previously thought – is the subject of Chapters 6 and 7. In these chapters, Bostrom seeks to articulate DA and explain why the current objections to it fail, thus establishing a need that SSA will fulfill. There’s a lot of inside baseball here, but two key points emerge. The first is the problem of the reference class: from which classes ought one to reason as if one were randomly selected? The second is the falsity of “the Self-Indication Assumption.”

(SIA) Given the fact that you exist, you should (other things being equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist.

While SIA promises to nullify the probability shift DA engenders (SIA favors hypotheses according to which there are a great number of future humans over hypotheses according to which there are not), that’s about all SIA has going for it. As Bostrom deftly argues (both in Chapter 7 and Chapter 2), the fact that we exist certainly disconfirms hypotheses according to which it is unlikely that any observers exist, but that’s not what SIA says. If SIA were a good rule of reasoning, the question of whether the universe is infinite/open rather than finite/closed (which surely is an open scientific question) could be settled by SIA on purely a priori grounds. That can’t be right.
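It may help to set out schematically why SIA exactly offsets the Doomsday shift – a familiar observation in this literature, formalized here in my own notation. Let N be the total number of humans who will ever have lived and r your birth rank. SSA treats you as a random sample from those N humans, so

\[ P(r \mid N) = \frac{1}{N} \quad (r \le N), \]

and Bayes’ theorem gives \( P(N \mid r) \propto P(N)/N \): hypotheses positing fewer humans are boosted relative to the prior, which is the Doomsday shift. SIA then tells us to weight each hypothesis, before taking r into account, by a factor proportional to N; the two factors of N cancel, and the posterior collapses back to the original prior P(N). The cancellation is tidy, but tidiness is not truth, and SIA’s a priori verdicts elsewhere are unacceptable.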

Having argued in Chapters 4-8 in support of SSA, Bostrom brings out some potential inadequacies with SSA in Chapter 9: “Paradoxes of the Self-Sampling Assumption.” In all of the paradoxes, humans are put in odd situations such that their actions can bring it about that they are exceptionally early humans (by ensuring that there are a tremendous number of future humans). For example, in the case called “Lazy Adam,” Adam and Eve form the firm intention that, unless a wounded deer walks by their cave in the Garden of Eden, they will procreate. (They know that if they procreate, they’ll be kicked out of the Garden of Eden and their progeny will number in the billions.) Although the prior probability that a wounded deer walks by their cave is low, the posterior probability of the nonoccurrence of this event can be made arbitrarily small as the number of prospective future humans goes up. So it is rational for Adam to believe that a wounded deer will walk by his cave. Thus it seems that SSA leads to the (counterintuitive) result that Lazy Adam can cause wounded deer to walk by his cave. In this case (and similar ones) Bostrom goes to great lengths to show that the counterintuitive results don’t hold – for example, that Lazy Adam can rationally believe a coincidence will occur even though (almost certainly) no coincidence will occur. Having argued for that, however, Bostrom seeks to develop a version of SSA that does not lead people like Lazy Adam to the wrong conclusion.
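The arithmetic behind the Lazy Adam case is worth making explicit (again, the formalization is my own). Let D be the event that a wounded deer walks by, with prior probability p, and suppose that if D fails to occur there will eventually be N humans in total, whereas if D occurs the human race ends with Adam and Eve. Adam knows he is among the first two humans ever to exist. By SSA, the probability of occupying so early a position is 1 under the deer hypothesis but only 2/N if the billions come into existence, so

\[ P(\neg D \mid \text{Adam's evidence}) = \frac{(1-p)\cdot \tfrac{2}{N}}{(1-p)\cdot \tfrac{2}{N} + p \cdot 1}, \]

which can be driven as close to zero as one likes by making N large enough. That is why, under SSA, Adam’s firm procreative intention appears to give him rational grounds for expecting the deer.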

This leads Bostrom to set out the SSSA in Chapters 10 and 11. SSSA, remember, is couched in terms of “observer-moments” – that is, temporal parts of observers. By shifting the problem of the reference class from that of deciding which observers are relevantly similar to oneself to that of deciding which observer-moments are relevantly similar to one’s present observer-moment, anthropic reasoners get more options regarding which reference class to adopt. This means that a rational anthropic reasoner is free to reject a choice of reference class that leads to a paradoxical result of the sort discussed in Chapter 9. Bostrom tries to formalize these insights in “the Observation Equation” (OE), which he says spells out “the probabilistic connection between theory and observation that enables one to derive observational consequences from theories about the distribution of observer-moments in the world” (p. 172).
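I will not reproduce the Observation Equation here, but its general shape – on my gloss, not in Bostrom’s own notation – is this: the probability a theory confers on one’s evidence is obtained by summing, over the possible worlds the theory countenances, the prior probability of each world weighted by the proportion of the observer-moments in one’s reference class within that world that make the observation in question:

\[ P(e) = \sum_{w} P(w)\cdot\frac{\#\{\text{observer-moments in the reference class in } w \text{ that make observation } e\}}{\#\{\text{observer-moments in the reference class in } w\}}. \]

This is just SSSA written down as an equation: within each possible world, one’s present observer-moment is treated as a random sample from that world’s reference class.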

Anthropic Bias is a synthesis of some of the most interesting and important ideas to emerge from discussion of cosmic fine-tuning, the anthropic principle, and the Doomsday Argument. It deserves a place on the shelves of epistemologists and philosophers of science, as well as specialists interested in the topics just mentioned.