The Oxford Handbook of Philosophy of Physics


Robert Batterman (ed.), The Oxford Handbook of Philosophy of Physics, Oxford University Press, 2013, 688pp., $150.00 (hbk), 9780195392043.

Reviewed by James Owen Weatherall, University of California, Irvine

2013.12.09


If this collection has an overarching theme, it is that the details matter. If philosophers hope to understand contemporary physics, we need to engage in depth both with the technicalities of our best physical theories and with the practicalities of how those theories are applied. The authors in this volume brush aside an older tradition in the philosophy of physics -- and the philosophy of science more generally -- in which actual physics entered only to illustrate high-level accounts of theories, explanation, or reduction. Of course, by itself, dismissing this tradition is hardly worth remarking on: such an approach to philosophy of physics has been going out of fashion for decades. Taken as a whole, however, this volume pushes the theme still further, in ways that mark important shifts in recent philosophy of physics.

Perhaps the most important such shift is reflected in the fact that the questions these authors address are, by and large, not ones that come from metaphysics, epistemology, or even general philosophy of science. The focus is on foundational and conceptual problems that arise within the physics itself, either in contemporary practice or historically. And the methods used to address these problems draw heavily on the science, though with a philosopher’s eye towards careful and rigorous argument.

This emphasis is evident, for instance, in Chris Smeenk’s excellent survey of the philosophy of cosmology. Smeenk begins with an overview of the “Standard Model”, known as the Lambda-CDM model (Lambda for the cosmological constant, CDM for Cold Dark Matter), which has been the focus of attention in theoretical cosmology since the 1970s. This model or, perhaps better, family of models combines expanding universe solutions to Einstein’s equation, the fundamental dynamical principle of general relativity, with accounts of the formation of matter in the early universe. Smeenk argues convincingly that the development and broad acceptance of this model has changed the character of research in cosmology, effectively defanging challenges that cosmology is inherently “unscientific” because its object of study -- the universe as a whole -- is unique and not amenable to experimental intervention.
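(For orientation, and in standard textbook notation rather than anything taken from Smeenk’s chapter: the expanding universe models at issue are governed by the Friedmann equation,

\[
\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho - \frac{k}{a^2} + \frac{\Lambda}{3},
\]

where, in units with c = 1, a(t) is the scale factor describing the expansion, \rho the matter-energy density, k the spatial curvature, and \Lambda the cosmological constant -- the “Lambda” of Lambda-CDM.)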

What the Standard Model has shown is that one can productively use features of local physics to draw inferences about the global structure of the universe. Of course, such inferences rely on strong assumptions -- assumptions such as the so-called Copernican Principle, which states that our local region of the universe is “typical”, or in other words, essentially similar to other regions of the universe. And these assumptions are defeasible. Important work by Clark Glymour, David Malament, and John Manchak has made clear that essentially any relativistic spacetime is observationally indistinguishable from many others that differ significantly in their global features. This means that one cannot securely deduce, on the basis of our own observations, that anything like the Copernican Principle is true, even if one takes our best current theory for granted.

But that is just the point of the sort of work Smeenk is doing here. He rightly observes that assumptions of this nature play a central role in contemporary cosmology and he seeks to explore the nature and limits of our justification for accepting them. In this regard, the sorts of issues raised are not so different from what one might expect from a reflective physicist criticizing the field’s methodological norms. This is not to say that the work is not philosophical -- it most certainly is, particularly with regard to the careful analysis of the conceptual structure of current theory. But this is philosophical work that responds, first and foremost, to the foundational issues that are important to progress in cosmology.

One sees a similar methodological stance in John Manchak’s contribution on the causal and topological structure of spacetime in general relativity. Like Smeenk, Manchak looks to our best physical theory -- in this case, general relativity -- for both his questions and the tools with which to answer them.

Manchak focuses on the wealth of puzzling features exhibited by models of relativity theory. These include singularities -- black holes, initial/final (big bang/big crunch) singularities, and others -- and closed timelike curves, which are curves through four-dimensional spacetime that are in principle possible trajectories for massive particles, but which close back on themselves, permitting a kind of time travel. Manchak draws careful attention to what it means to say that these features are exhibited by spacetimes that are physically reasonable. In practice, physicists often disregard models of relativity that have features such as the ones just mentioned. But given our limited access to the global structure of the universe, how are we to decide whether physical configurations apparently permitted by our best theory are possible? Manchak does not offer an answer to this question -- which is for the best, since it is hard to see how one could rule on all possible cases at once. What he does instead is map out as clearly as possible the inferential dependencies between the different senses in which a spacetime might fail to be reasonable.

What is so striking here is just how complicated matters turn out to be, and particularly how fairly mild-seeming assumptions about reasonableness in one sense can imply that spacetimes must be unreasonable in other senses. For instance, singularities turn out to be not only compatible with various natural assumptions, but in fact generic in their presence, in the sense that almost all spacetimes that are physically reasonable by various compelling standards turn out to be physically unreasonable in the sense that they are singular.

This basic moral -- that once one dives into the details of how our theories actually work, matters are far more complicated than philosophers have imagined -- is one that comes out in many of the chapters.

Gordon Belot’s beautiful article on symmetry and equivalence, for instance, explodes an idea that has been central to much philosophical work on how to interpret symmetries in physics -- namely, that symmetries can be used to more or less directly read off when two mathematical models of a physical theory represent the same physical situation. Focusing just on classical mechanics, Belot goes through a series of precise notions of “symmetry” that arise within the physics and mathematics literatures, and shows that in each case, there are compelling reasons to reject the idea that symmetries straightforwardly help us to delineate the space of physical possibilities. The moral, which I take to be exactly right, is that there is no pre-fabricated recipe for reading the physical possibilities off of the models of a physical theory; substantive engagement with the details of the theory and how we use it to represent the world is necessary. This article should be required reading for anyone looking to study the role of symmetry in our best physical theories.

Margaret Morrison’s chapter presents a similar -- and similarly enlightening -- analysis of “unification” in physics. Physicists like to tell the history of physics as a story of ever-more-unified theories, beginning with Newton’s unification of celestial and terrestrial motion and running through Maxwell’s unification of electricity and magnetism, the unification of electromagnetism and the weak force in electroweak theory, and on to late-twentieth century grand unified theories. Morrison carefully explores the sense in which Maxwell’s theory provides a successful unification of electricity and magnetism, and then asks whether the electroweak theory unifies electromagnetism and the weak interaction in the same sense. The answer is “no” -- the sense of unification is different. But this is not to say that the electroweak theory is not a unified theory, or even a more unified theory than our best theory of particle physics taken as a whole (which is also sometimes cited as a step on the road to complete unification). Instead, the upshot is that there is far more subtlety in how we unify theories in physics than the standard story suggests.

Sheldon Smith offers a third variation on this theme of unexpected complication. He considers whether there is an identifiable “principle of causality” in classical physics, in the sense that a basic tenet of the theory is that “the cause precedes the effect”. Smith’s position is that Russell’s famous skepticism regarding causation in classical physics is fully warranted. Not only is there no over-arching principle of causality in the theory; if one digs into the details of instances in which some sort of causality condition is supposed to play a role -- from the preference for retarded over advanced Green’s functions, to the considerations associated with equations of motion accounting for the self-interaction of the electron -- one finds that the assumptions doing the work in fact have little to do with causality, and come from independent considerations.
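(For concreteness, and in standard textbook form rather than Smith’s own notation: the retarded and advanced Green’s functions for the wave equation are

\[
G_{\mathrm{ret/adv}}(\mathbf{x},t;\mathbf{x}',t') = \frac{\delta\!\left(t - t' \mp |\mathbf{x}-\mathbf{x}'|/c\right)}{4\pi\,|\mathbf{x}-\mathbf{x}'|},
\]

differing only in whether the contribution of a source at time t' is felt after (retarded, upper sign) or before (advanced, lower sign) that time. Both solve the same equation equally well; the question is what, if anything, licenses the standard preference for the retarded one.)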

Perhaps the most surprising instance of this devil-in-the-details kind of argument comes in Mark Wilson’s chapter, where he attacks the idea -- again, a basic assumption of virtually every philosopher to write on the subject -- that there is a univocal thing that goes by the name of “classical mechanics”. Wilson argues that there are (at least) three distinct threads that come together in classical mechanics: point particle mechanics, rigid body mechanics, and continuum mechanics. Each of these uses different mathematical methods, and each makes different claims regarding the basic ontology of the world.

One natural response to this observation is to say that only one of these threads is the fundamental one. But as Wilson convincingly argues, the theory just does not work this way in practice. Suppose one begins by considering point particle mechanics. One can represent many physical situations in this way, from the solar system to a gas in a box. But there are certain kinds of interactions, including collisions between particles, where the details of how the particles interact at short distances matter. And, it seems, these details are not forthcoming; instead, physicists retreat to rigid body mechanics, treating bodies -- including particles -- as rigid, extended objects constrained to move in certain ways. This move allows physicists to solve problems that would have been intractable using just point particles.

But is rigid body mechanics the end of the story? Well, no, because in some cases the ways in which objects deform are of crucial importance, and rigid body mechanics has no way to deal with such cases. Instead, physicists move to a third thread, continuum mechanics, where bodies are treated as continuous distributions of matter. This approach allows one to treat deformation in otherwise rigid bodies, but it comes with its own collection of problems (Wilson argues). In particular, one typically represents the properties of continuum matter using tensor fields. These are understood to represent physical quantities by considering properties associated with small but non-vanishing volumes of matter in the limit as the volume shrinks to nothing. And so, when one pushes hard on continuum mechanics, one falls back to quantities associated with points of space -- which is essentially where we began. In other words, there is no bottom to classical mechanics.
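(A schematic illustration of the limiting move at issue -- my own example, not Wilson’s: the mass density at a point is defined by shrinking a finite volume of matter down around that point,

\[
\rho(\mathbf{x}) = \lim_{V \to 0} \frac{m(V)}{V},
\]

so that a field quantity officially attached to a single point is cashed out in terms of properties of small but non-vanishing regions. Analogous limits define the stress tensor and the other continuum fields.)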

Laura Ruetsche’s article has a somewhat different character -- though one consistent with the theme noted above. Rather than show that when one attends to the details, our philosophical platitudes about physical theories dissolve, Ruetsche shows that in order to even find some of the most important conceptual problems in physics, one needs to dive very deeply into the formalism.

The problem she addresses concerns when two models of certain quantum systems with infinitely many degrees of freedom are physically equivalent. On a fairly conservative understanding of how to individuate physical systems in quantum mechanics, one should say that a system is fully characterized by certain algebraic relationships between the operators representing its observable quantities. These are known as “canonical (anti-)commutation relations” (CCRs). Meanwhile, an equally standard (and well-motivated) criterion for when two representations of an algebra of observables as operators on a Hilbert space represent the same physical situation is that they are related by a unitary transformation. In ordinary quantum mechanics, these two senses of equivalence yield the same determinations in all cases. But once one moves to a context with infinitely many degrees of freedom, such as quantum field theory, one can have unitarily inequivalent representations of the CCRs characterizing some physical system. The result is a significant puzzle about how to interpret quantum theories in this domain.
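(In schematic form, and in standard notation rather than Ruetsche’s own presentation: the canonical commutation relations for position and momentum are

\[
[\hat q_j, \hat p_k] = i\hbar\,\delta_{jk},
\]

and two Hilbert space representations \pi_1 and \pi_2 of the same algebra of observables are unitarily equivalent just in case there is a unitary map U with U\pi_1(A)U^{-1} = \pi_2(A) for every observable A. The Stone-von Neumann theorem guarantees that, for finitely many degrees of freedom, all suitably regular representations of the CCRs are unitarily equivalent; it is precisely this guarantee that fails for systems with infinitely many degrees of freedom.)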

The problem of inequivalent representations of CCRs is an example of a new generation of work on quantum foundations, which for decades has been dominated by the measurement problem. As Ruetsche and others have argued convincingly over the last decade, despite its centrality in the philosophical literature, the measurement problem is not the only foundational problem facing quantum mechanics. Still, this is not to say that the measurement problem is not important, nor that it does not arise within the physics itself. Indeed, solving the measurement problem is absolutely crucial to providing a coherent quantum theory. And -- reflecting this importance -- there are two articles in this volume devoted to the topic.

One, by Guido Bacciagaluppi, lays out the basic issues by carefully distinguishing two related problems: the measurement problem (how does one square measurement with the standard quantum dynamics?) and the problem of the classical regime (why do macroscopic objects behave classically?). These problems are often conflated, but as Bacciagaluppi argues convincingly, they should be pulled apart, particularly because solutions to one need not be solutions to the other. For instance, modern versions of Everett’s interpretation -- which I will return to below -- use “environmental decoherence”, a physical process by which systems become correlated with their environment, to account for the emergence of autonomous, semi-classical “worlds”. This arguably solves the problem of the classical regime. But given that it has proved difficult to recover the so-called Born rule -- which relates a quantum state to the probabilities of various measurement outcomes -- it is less clear that the Everett interpretation solves the measurement problem. Conversely, Bacciagaluppi argues that although Bohm’s theory solves the measurement problem -- since all measurements are measurements of a particle’s position, which is always definite -- it remains an open question whether Bohmian field theories can be understood to solve the problem of the classical regime.
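(The Born rule mentioned above, in its simplest textbook form: if a system is in state |\psi\rangle and one measures an observable with eigenstates |a_i\rangle, the probability of obtaining the outcome a_i is

\[
\Pr(a_i) = |\langle a_i|\psi\rangle|^2 .
\]

The Everettian’s difficulty is to explain why anything deserving the name “probability” should track these squared amplitudes when every branch of the superposition is equally real.)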

The second article on the measurement problem is by David Wallace. It addresses a more specific topic -- namely, the recently influential “Oxford (or Deutsch-Wallace-Saunders) Everett” interpretation of quantum mechanics. After discussing the motivations for an Everett-style no-collapse interpretation of quantum theory, Wallace presents the two classic objections to such a view. The first is known as the “preferred basis problem”: the worry, here, is that the character and identity of the “many worlds” famously associated with the Everett interpretation appear to depend on an arbitrary convention regarding how to represent the quantum state. The second, known as the “probability problem”, is that it is not clear how to understand notions such as “probability” or “expectation” in a context where all possibilities (apparently) in fact obtain.

It is Wallace and collaborators’ answers to these objections -- which Wallace discusses in detail here -- that have earned the Oxford Everett interpretation its current reputation. In brief, Wallace argues that the preferred basis problem is solved by noting that the properties that undergo decoherence determine a basis -- a choice of representation of the quantum state -- and it is only in this basis that terms in a superposition have the required stability properties to qualify as worlds. Wallace solves the probability problem, meanwhile, by offering a new framework for decision theory that is intended to account for why certain parameters in a quantum theory should be treated by rational agents in a way analogous to probabilities in classical decision theory.

Two articles in the volume veer more towards metaphysics in the mainstream sense. One, by Simon Saunders, discusses the conceptual status of “indistinguishable particles” in classical and quantum mechanics; the other, by Oliver Pooley, surveys contemporary perspectives on the ontological status of space and time, with a focus on the substantivalist/relationist debate.

At issue in the Saunders chapter is that in quantum physics, in order to recover the empirically correct statistical properties of a collection of particles, one needs to assume that any two physical configurations are identical -- and thus should only count once if one is summing up all possible configurations -- if they differ only by the exchange of the states of two particles with the same qualitative properties. This feature of quantum statistical mechanics is puzzling because it seems to be at odds with our classical intuitions, according to which physical configurations differing by the exchange of two numerically distinct (but qualitatively identical) particles should count as distinct physical possibilities, and thus count twice for statistical purposes. The (controversial) view Saunders ultimately defends here is that it is a mistake to draw a strong distinction between the status of indistinguishable particles in quantum physics and in classical physics. In classical physics, too, he argues, configurations differing only by the exchange of qualitatively identical particles should only count once for statistical purposes.

Philosophers have been especially interested in this example in recent years because it appears to bear on issues in metaphysics concerning individuals, identity conditions, and haecceitism. For instance, Saunders himself has written at length, both here and previously, on the connection between quantum statistics and Leibniz’s Principle of the Identity of Indiscernibles. Even here, though, where metaphysical issues arise explicitly, the methodological stance is striking -- and in line with the theme I describe above. In particular, Saunders frames the discussion in terms of a well-known problem in classical statistical mechanics, known as Gibbs’ paradox, which concerns the following puzzling fact. If one allows two distinct gases to mix, one finds a particular value for a quantity known as the “entropy” of the mixture. This value does not depend on any measure of the “similarity” of the two gases -- unless the two samples consist of the same gas, in which case one finds a different value for the entropy. Saunders argues that, in order to understand this puzzle that arises within the physics itself, we need to bring to bear substantively metaphysical considerations. In other words, even though we end up doing metaphysics in this chapter, it is metaphysics in the service of physics, and not vice versa.
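(The quantitative version of the puzzle, in standard textbook form rather than Saunders’ own notation: mixing two different ideal gases, each of N particles at the same temperature and pressure, increases the entropy by

\[
\Delta S = 2 N k_B \ln 2,
\]

whereas mixing two samples of the same gas under the same conditions gives \Delta S = 0. The entropy of mixing does not diminish gradually as the gases become more alike; it vanishes abruptly when they are identical. And classical statistical mechanics recovers the \Delta S = 0 result only if configurations differing by a permutation of identical particles are counted once -- the familiar division by N!.)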

A similar reversal is apparent in the Pooley chapter, though it is somewhat muted by the fact that Pooley explicitly surveys the contemporary literature on substantivalism and relationism, much of which leaves the physics far behind. Still, Pooley starts with the considerations that led Newton to adopt a version of the substantivalist position in the first place, which are now widely recognized to have arisen from the science itself, and not from background metaphysical commitments. Specifically, as Pooley presents it, Newton arrives at his substantivalism by way of his attempts to make sense of “uniform motion”, a notion, in turn, that Newton needs in order for his first two laws of motion to be well posed. Pooley does not hide his own substantivalist tendencies, but that does not interfere with his ability to provide an even-handed and clear overview of the issues that have been discussed in the recent literature and their historical origins.

The articles I have discussed so far represent a little more than half of the 17 chapters in the volume. The bulk of the remainder -- including the contributions by Olivier Darrigol, Tarun Menon and Craig Callender, Leo Kadanoff, Jonathan Bain, and Robert Batterman -- are best conceived as a reader-within-a-reader. They all treat a cluster of related issues concerning “reduction” and “emergence”, mostly in the context of the so-called renormalization group, which consists in a collection of methods for relating physical theories at different energy scales -- that is, methods for relating, say, our theories of microscopic interactions between particles (these are comparatively short distance scale, or “high energy,” theories) and our macroscopic theories of steel beams or boiling water (long distance, “low energy” theories). The renormalization group arises in both statistical physics and in particle physics, and both applications are discussed.
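(In its most schematic form, and as a generic textbook statement rather than anything tied to a particular chapter: the renormalization group tracks how a theory’s coupling constants g change with the energy scale \mu at which the theory is probed,

\[
\mu \frac{dg}{d\mu} = \beta(g),
\]

where the beta function \beta encodes how the effective strength of an interaction “flows” between the high energy and low energy descriptions. The debates in these chapters concern what such flows do and do not tell us about the relationship between the theories at either end.)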

I put “reduction” and “emergence” in quotes above because the sense in which these terms are being used is very much up for grabs in the debate, and at least some of the disagreement seems to come down to different senses in which one might use them. Batterman and Bain, for instance, direct their anti-reductionism against what they call Nagelian reduction, after Ernest Nagel, who famously defended the view that the reductive relation between theories such as statistical mechanics and thermodynamics is best thought of as a derivation, in the logical sense, of the low energy theory (thermodynamics) from the high energy theory (statistical mechanics). Batterman liberalizes this sense of reduction to include mathematical methods, such as limiting procedures, that are not naturally implementable as first-order derivations; thus, a reduction in the neo-Nagelian sense is a deduction of the low energy theory from the high energy theory. But it is not clear that even this liberalized sense of reduction is broad enough to include the sense of reduction that Menon and Callender aim to defend, which is that the low energy theory reduces to the high energy theory if the high energy theory can explain the nomic claims of the low energy theory. Whether this requires a deduction presumably depends on one’s views on explanation, a point on which Menon and Callender are deliberately vague.

Similarly, with regard to emergence, there are many notions on the table. Menon and Callender present three and then conclude that phase transitions in thermodynamics are emergent in one of these senses -- specifically, thermodynamics exhibits conceptual novelty, which means that it uses concepts that are not used in statistical mechanics -- but not in the other, stronger senses; nonetheless, they argue, this sort of emergence is compatible with their sense of reduction, since statistical mechanics can explain why these new concepts are fruitful.

Bain, meanwhile, considers several different senses of emergence, and on his definitions, it is sufficient to show that phenomena in particle physics -- the domain of application of the renormalization group on which Bain focuses -- cannot be derived from higher energy theories for him to have refuted reductionism. And indeed, it seems fair to say that the kinds of relationships that obtain between higher and lower energy theories in particle physics are not well-described as “derivations”. But again, it is not clear that Bain’s anti-reductionism is incompatible with Menon and Callender’s reductionism -- especially since, as Bain makes admirably clear, the renormalization group does provide some systematic link between theories at different scales, and one might well think of this systematic link as providing a kind of explanation.

These remarks may sound critical, but they are not intended to be. Indeed, these papers, taken together, do an excellent job of exploring a fascinating and absolutely central (though long-neglected by philosophers) collection of issues in physics, and the various notions of reduction and emergence are more or less necessary to chart the complex conceptual terrain that arises here. The fact that there does not appear to be a grand theory of reduction or emergence to be debated is emblematic of the more general methodological stance that I have already suggested characterizes this volume. At the end of the day, the point is not to state and defend an account of reduction or emergence that is intended to capture all cases in physics (or science more generally). The point is to try to say, as clearly as possible, how the puzzling collection of methods known as the renormalization group hangs together.

If there is a critical comment to be made in this connection, it is only this: continuing to use “reduction” and “emergence” at all makes it too easy to class these authors as broadly “for” or “against” a reductionist conception of physics, when in fact all of them appear to (rightly) believe the situation is rather more complicated than that. Of course, this is not to say that there is complete agreement here. For instance, although everyone appears to agree that there are strong pragmatic considerations that lead us to prefer to use low-energy theories in the domains for which they are empirically adequate, Batterman and Bain (but presumably not Menon and Callender) want to argue that these considerations are not merely pragmatic, in the sense that if one attends carefully enough to the details of how modeling across scales works, one realizes that it is in fact impossible to fully reproduce the low energy theory from only the high energy theory, even if one avails oneself of renormalization group methods; in general, one also needs parameters that can only be determined empirically. This seems to be a substantive disagreement about not just which explanatory demands are the salient ones, but also which can be met by the physics, even in principle.

My discussion so far has focused on the individual articles in the volume (and, when appropriate, the interactions between those articles). And, with a few minor missteps that are not worth dwelling on, the chapters are very good. But it is worth stepping back for a moment and considering the volume as a whole.

In the introduction to the volume, Batterman provides some motivation for the topics covered. He explains that when he was in graduate school, there were really just two physical theories that were deemed of interest for philosophical study: general relativity and quantum mechanics. Today, the situation is quite different, in large part because of Batterman’s own influence. Batterman emphasizes that his goal with this volume is to highlight the ways in which philosophy of physics has expanded its bailiwick to include, for instance, classical mechanics, thermodynamics, particle physics, hydrodynamics, and cosmology, and that, in doing so, we have discovered that the interpretational issues that arise in these other fields are just as important and just as difficult as the ones that arise in, say, quantum theory.

But I worry that in exhibiting this expanded canon, certain core topics -- topics that are no less important today than they were twenty or thirty years ago -- have been overlooked. Bell’s theorem, for instance, which to my mind remains the most important result in quantum theory, is not stated or described anywhere in the volume. It is mentioned, but only in passing, in the Bacciagaluppi chapter on measurement in quantum theory. The Kochen-Specker theorem does not even appear in the index. There is no extended discussion of entanglement or locality, in either a relativistic or non-relativistic setting. In fact, no mention is made of the foundational issues that arise when one attempts to combine quantum physics and relativity theory -- a particularly surprising omission both because these issues are of central concern to working physicists, and because quantum gravity, the place where the tensions between relativity and quantum theory are most manifest, is a key example of the expanded canon of subjects on which philosophers have recently begun to write.

Similarly, although the measurement problem itself is well-treated by the Bacciagaluppi chapter, and the Wallace chapter provides an up-to-date overview of one prominent solution, other important approaches are neglected. Indeed, one gets the impression that the measurement problem has been uncontroversially solved by some combination of decoherence and the Everett interpretation. But surely this is not right. It seems to me that in order to give an appropriate overview of the state of the field, one would need an equally detailed treatment of (at least) Bohmian mechanics and GRW theory.

Of course, such remarks can be seen as critical only if one expected the volume to provide an evenhanded survey of the current state of the field. And perhaps this is an unfair expectation. After all, we have seen a proliferation of handbooks in the last several years, including Philosophy of Physics, edited by Jeremy Butterfield and John Earman, and the Ashgate Companion to Contemporary Philosophy of Physics, edited by Dean Rickles. Looking at these three volumes together, the gaps in the Oxford Handbook are perhaps less troubling: for instance, the Butterfield and Earman volume covers in great detail many of the more traditional topics that this volume passes over quickly, but overlooks the areas in which this volume excels -- most notably, the renormalization group. So perhaps the best way to see the present volume is as bridging important gaps in the current field.

Still, even from this perspective, an opportunity may have been missed. One of the great virtues of the other volumes in the Oxford Handbooks series -- at least the few I am familiar with -- is that they are ideal starting points for an introductory graduate seminar. It is less clear to me that this volume would be as effective in that regard -- even if it were supplemented by the other handbooks on the market. For instance, although it is surely true that many of the topics that are not covered in this volume are covered in, say, the Butterfield and Earman handbook, the Butterfield and Earman volume is extremely technical -- far more so than this volume. It would not be appropriate for most early graduate students looking for a way into these debates.

The volume has a second drawback as a starting point for early graduate students, which is that it is uneven. I do not mean that the quality of the articles is uneven. Rather, the issue is that the character of the articles is uneven. Some of them -- Kadanoff, Pooley, Manchak, Smeenk -- are true surveys of the subjects at hand. The Pooley and Smeenk chapters, in particular, would be perfect starting points for someone interested in jumping into these literatures. Other chapters -- Morrison, Bacciagaluppi, Wallace, Ruetsche -- may be better viewed as (perhaps opinionated) introductory articles, setting up topics of recent interest in the literature in a way that presupposes relatively little background. Yet others -- Wilson, Batterman, Belot, Saunders -- are best conceived as original research articles, in the sense that they develop and defend specific views on certain topics, often in opposition to some received view that may or may not be addressed elsewhere in the volume. These articles presuppose not only some background familiarity with the physics, but also considerable familiarity with the philosophical literature.

In sum, there is much richness here, and an advanced graduate student or an established researcher would learn a great deal from reading the volume from cover to cover -- including the chapters that cover one’s own areas of expertise. (At least, this was my experience.) From that perspective -- perhaps the most important one -- the volume is excellent. But not all of the chapters would be appropriate for early graduate students, and someone looking for a complete picture of the field would likely want to supplement this handbook with other resources, particularly on quantum mechanics, quantum field theory, and quantum gravity.