Hugh Everett III died of a heart attack in July 1982 at the age of 51. Almost 26 years later, a *New York Times* obituary for his Ph.D. advisor, John Wheeler, mentioned him and Richard Feynman as Wheeler's most prominent students. Everett's Ph.D. thesis on the relative state formulation of quantum mechanics, later known as the "Many Worlds Interpretation", was published in its edited form in 1957 and in its original, unedited form in 1973, and has since given rise to one of the most radical schools of thought in the foundations of quantum theory. Several years ago, two conferences held at Oxford and at the Perimeter Institute celebrated the 50th anniversary of the first publication of Everett's thesis. The book *Many Worlds?* grew out of contributions to these conferences, but, as its editors emphasize, it is more than mere conference proceedings.

Instead, an attempt was made to assemble an impressive collection of papers which illustrate the promise of the many worlds interpretation and the obstacles it faces. Twenty-three papers divided into six sections follow an introduction by Simon Saunders, one of Oxford's fiercest Everettians. The first four sections cover two thorny issues that have been flagged by contemporary opponents of the many worlds interpretation, namely the problem of ontology and the problem of probability, while the fifth discusses alternatives to Everett such as Bohmian mechanics and information-theoretic approaches to quantum theory. The sixth section seems to be a wild card, hosting several papers unrelated to each other, including one of the most interesting contributions to this volume, on the history of Everett's thesis and his (some may say all too) short academic career. Each section concludes with transcripts of the discussion session that took place after the talks, thus giving additional emphasis to the points of contention.

For lack of space, in what follows I focus, within each of the six sections, on only a few chapters that I found especially illuminating.

Start with ontology. Present day Everettians (who for some reason tend to concentrate around Oxford, UK) seem to converge on the twofold claim that many worlds is nothing but the formalism of quantum theory applied to macroscopic systems, and that *this* was what Everett himself sought to highlight in his struggles to publish his dissertation. Thus David Wallace, in the chapter "Decoherence and Ontology", urges us to accept that the ontological claims of quantum theory, when taken literally, are (1) that we live in a multiverse, (2) that branches are real, and (3) that their reality is no less problematic (and no more susceptible to doubt) than the reality of the atmosphere around some of the planets that inhabit NGC 1300, a spiral galaxy some 65 million light years from Earth. Wallace then argues that it is only with the rise of decoherence theory that philosophers and physicists were able to recognize what was clear to Everett and to his follower DeWitt from the start, namely, that "the quantum formalism is capable of yielding its own interpretation" and that no additional amendments to the theory are required in order to answer how these branches are to be defined.

How, then, are we to reconcile these claims with the fact that quantum mechanics, taken literally, mentions neither branches nor a multiplicity of worlds? According to Wallace, these elements should be regarded as "emergent", on a par with haircuts or tigers. Not only, says Wallace, are the latter missing from the mathematical formalism of our most fundamental physical theory; they are also not directly definable in the language of microphysics. Nevertheless, and this is the message, no one would doubt their existence. Decoherence, as a mechanism that allows quasi-classical structures to emerge from the underlying quantum theory, is what establishes the existence of these structures, where by "existence" we mean no more (and no less) than what we mean when we talk about the existence of other macroscopic entities that presumably emerge from the microphysical world.

Proponents of no-collapse alternatives to Everett, such as Bohmian mechanics, are not impressed. For Bohmians, the wave function *alone* is insufficient to account for the result of any measurement. To do so, says Tim Maudlin in the chapter "Can the World Be Only Wavefunction?", one must add particles, i.e., localized objects in low-dimensional spacetime, to the ontology. Maudlin's conclusion is that Everett's interpretation, and similarly collapse alternatives in which nothing but the wave function exists, are epistemically incoherent: they do not make the connection between theory and the results of experiments comprehensible, and yet these results are presumably what serve to confirm these theories in the first place.

The worry here seems to be that if, according to the Everettians, the wave function is all there is, and if, further, it 'lives' in an abstract, multidimensional space, then it is unclear how such an object can account for our experience, which is, roughly put, the behavior of localized objects in the low-dimensional spacetime we inhabit. Bohmians can easily address this problem, says Maudlin, because they simply *postulate* such localized objects by adding them to the ontology. GRWf theory (collapse with a flash ontology) has a similar solution. But Everettians (and first generation collapse theoreticians with them) face the serious challenge of coming up with a comprehensible link between the state of the wave function (which is all there is) and what warrants our belief in the theory, namely, the behavior of localized objects in a low-dimensional spacetime, which is our experience. Decoherence, argues Maudlin convincingly (p. 132), simply cannot meet *this* challenge.

At this stage the attentive reader will probably have noticed that present day Everettians and their opponents are engaged with two different sets of problems, and simply talk past each other. While Wallace is busy defending the ontology of a multiplicity of worlds by presenting it as no more awkward than any other ontology of emergent entities (call this tactic "emergence"), Maudlin saddles him with the problem of latching that ontology onto our everyday experience (call this problem "incoherence").

And the point is that no matter how seriously one is willing to consider "emergence" as a viable defense, it still falls short of solving "incoherence". Maudlin, in the transcript of the discussion session on ontology, succinctly delivers this message:

The problem with what you [Michel Janssen] put forward, which sounds really great, that the theory's telling us this and we're just guys who hate the theory; the problem is the story you told is simply incoherent … we all agree that there's a wavefunction; I have no problem with wavefunctions, but it just can't be the case -- it's simply logically impossible for it to be the case -- that you're in a position to say that 'we have this really strong empirical evidence that this is the right theory, the only problem with it is that with the theory so far I don't know how to make sense of the macroscopic world' (pp. 175-176).

Another can of worms in discussions of Everett's interpretation is the issue of probability. Here, again, present day Everettians employ the defensive tactic they believe gets them off the hook in matters ontological, namely, showing that their pet interpretation is no more awkward and no more plagued with conceptual problems than *other* well-accepted world views (David Papineau, in the chapter "A Fair Deal for Everettians", calls such criticism of Everettians over probabilities "a classic case of a pot calling the kettle black"). But here, again, it seems that their opponents are able to saddle them with a completely different set of problems for which the above tactic is flatly ineffective.

There are several different questions which must be addressed when investigating the meaning and the origin of probabilities in a physical theory. Physicists, for example, are more interested in answering how the probabilities are to be calculated; philosophers, in contrast, tend to ponder what these calculated probabilities are probabilities *for* (e.g., Maudlin 2001). Present day Everettians, however, are under the impression that by answering the former question they also answer the latter. Thus Wallace, in the chapter "How to Prove the Born Rule", takes great pains to demonstrate how a physical *contingency* such as the probabilities that one finds in repeated quantum experiments follows *logically* from a set of axioms that, according to him, are axioms for the rationality of decision making. The reason for this mathematical exercise (which no doubt would have amused Hume) is that in Everett's interpretation, where there is no collapse and no one actual outcome that breaks the symmetry, "a programme of deriving probabilities from the symmetries remains viable. The language of decision theory makes rigorous sense of what such derivation would look like" (p. 262).

Those who read through the transcript of the discussion which followed that session will find two devastating criticisms of this project. First, as the late Itamar Pitowsky puts it, the assumptions that are treated as assumptions about "rationality" in Wallace's (and earlier in David Deutsch's) axioms are really assumptions about (prior) probabilities; small wonder that their conclusions give probabilistic meaning to the Born rule. What truly justifies this meaning, adds Pitowsky, has nothing to do with rationality but rather with Gleason's theorem and the structure of the Hilbert space. But once we accept that the assumptions are probabilistic to begin with, another problem looms: as pointed out by Meir Hemmo in that discussion, in the final analysis the fundamental justification for any assumption about priors is an empirical one, and so, in Everett's picture there seems to be no sense in which we could be objectively wrong about these assumptions (since in this theory any priors are equally justified by inductive reasoning).

This additional challenge also applies to Hilary Greaves and Wayne Myrvold who, in the chapter "Everett and Evidence", propose yet another way of making sense of the quantum mechanical probabilities in Everett's interpretation. The gist of this dense chapter is that there exists an account of theory-confirmation through statistical evidence that applies both to a branching and to a non-branching universe, and hence does not presuppose the former. But if such an account exists (Greaves and Myrvold show how Bayesian conditionalization, when operationalized in terms of betting preferences, may serve as one), then we can use it to raise our degrees of belief in Everettian quantum mechanics by looking at the results of experiments without presupposing that theory and, more importantly, without talking about *probabilities* at all (recall that in Everett's picture the dynamical evolution is completely deterministic and *all* outcomes are realized). They conclude, "We have argued that this account is no less defensible than the structurally identical account according to which chances, in an indeterministic theory, have similar decision-theoretic and confirmation-theoretic relevance" (p. 301).

Once again the defensive tactic of "we're no worse than everybody else" is employed, and once again it falls short of addressing the actual problems the opponents keep tossing at present day Everettians. David Albert (in the chapter "Probability in the Everett Picture") drives this point home forcefully in his unmistakably genuine style:

The worry here is that the question at which this entire program [the decision theoretic program] is aimed, the question out of which this entire program arises, seems like the wrong question. The questions to which this program is addressed are questions about what we would do *if* we believed that the fission hypothesis [the hypothesis that one can eschew probability talk and replace it with a decision theoretic talk about rational bets] were correct. But the question at issue here is precisely *whether* to believe that the fission hypothesis is correct! And what needs to be looked into, in order to answer that question, *has nothing whatever to do* with how we would act *if* we believed the answer to *that* question were 'yes'. (p. 359, emphasis mine)

Albert's point is that the defensive tactic only gets Everettians so far: agreed, all accounts of probability lack a clear analysis of how chances, frequencies, and degrees of belief all fit together, but the point of a philosophical analysis of chance, says Albert, is not to establish *that* chances are related to frequencies, but to show precisely *how* chances are related to frequencies. Albert is not dogmatic about such an analysis -- if it is shown to be impossible, then the very idea of chance will have been exposed as nonsense -- and he is certainly not as dogmatic as present day Everettians who, with their decision-theoretic program, *deny* the possibility of such an analysis from the start.

The fifth section is dedicated to no-collapse alternatives to Everett. In one of its chapters, "Two Dogmas About Quantum Mechanics", Jeffrey Bub and Pitowsky attempt to deflate the notorious measurement problem by attacking two of its hidden premises: (1) that measurement outcomes cannot serve as primitives in a fundamental theory such as quantum mechanics, and (2) that the quantum state must be interpreted ontologically as a representation of physical reality. Rejecting these two dogmas, they suggest instead "a realist information-theoretic interpretation of quantum mechanics as an alternative to Everett's interpretation" (p. 433). Such a rejection involves a preference for a "principle" approach to physics, in contrast to the "constructive" approach (a distinction that goes back to Einstein). The latter is manifest in those solutions to the measurement problem which attempt to dynamically analyze the measurement process. This information-theoretic approach is an interesting proposal that certainly changes the rules of the game. Some of its shortcomings have been pointed out elsewhere (Hagar & Hemmo 2006).

While the editors should be commended for including a wide range of opinions in the volume, they have omitted a discussion of other alternatives to Everett in which collapse *does* take place. I find this omission disappointing, especially since the only way to make progress in validating quantum mechanics is by testing its limits. True collapse theories, in contrast to false collapse (a.k.a. decoherence), yield *in principle* different predictions from standard QM in specific experimental setups, and as such are worthy of serious consideration.

Perhaps the most fascinating chapter in the final section is Peter Byrne's "Everett and Wheeler: the Untold Story". This recounting has everything a true melodrama should have, and it portrays the midwives of Everett's interpretation as unashamedly humane: Wheeler's "sitting on the fence", the irritating process of editing Everett's dissertation, the cultural and personal clash between Everett and Bohr, and the rediscovery of Everett's letters and notes in which he had expressed so vividly what he thought about all that. My only quibble here is that a similar story -- told recently in detail by a group of Brazilian historians of physics (Osnaghi et al. 2009) -- is only mentioned in passing and should have been given more credit, if not space, as it illuminates yet another facet of the unfortunate encounter of Everett with the Copenhagen orthodoxy.

To conclude, the chapters discussed here, as well as the rest of the book, are written with great clarity by some of the best minds in contemporary foundations of physics. They make *Many Worlds?* a fine read, summarizing nicely the state of the art in one of the most radical no-collapse interpretations of quantum theory. The lessons here for Everettians are that if we're stuck with quantum theory, Everett is the best we can do, and that this is not as bad as it seems. The lessons for their opponents are that this is bad enough, and that we can do better. One thing is certain, though: the volume as a whole will remain a valuable contribution to the heritage of Hugh Everett III even fifty years from now. By then, who knows, quantum mechanics may well have been superseded by a theory more fundamental …

*References*

Hagar A. & Hemmo M. (2006), Explaining the unobserved: Why QM is not only about information, *Foundations of Physics*, 36(9): 1295-1324.

Maudlin, T. (2001), Interpreting probabilities: What's interference got to do with it?, pp. 283-288, in J. Bricmont et al. (eds.) *Chance in Physics*, Springer Lecture Notes in Physics, Volume 571.

Osnaghi, S., Freitas, F. & Freire Jr., O. (2009), The origin of the Everettian heresy, *Studies in History and Philosophy of Modern Physics*, 40(2): 97-123.