Probabilities in Physics

Claus Beisbart and Stephan Hartmann (eds.), Probabilities in Physics, Oxford University Press, 2011, 437pp., $99.00 (hbk), ISBN 9780199577439.

Reviewed by Douglas Kutach

2012.05.31


This collection of new essays assembled by Beisbart and Hartmann is intended for a technically sophisticated audience of philosophers who want to learn more about how to understand probability in the context of statistical mechanics and quantum mechanics. One third of the essays will be especially handy for philosophers looking for refreshers on the foundations of statistical mechanics or the role of probability in currently prominent interpretations of quantum mechanics. Another third of the essays are suitable for philosophers of science who lack extensive knowledge of quantum mechanics but are seeking alternatives to the tired distinction between subjective and objective interpretations of probability. The remaining fraction of the book comprises highly technical ventures that will be largely inaccessible to the general philosopher but may reward the professional philosopher of physics who examines them carefully. Overall, the collection is of high quality, and I expect most of the intended audience will find several of the papers especially instructive.

Several contributions in this volume can be summarized fairly quickly. Jos Uffink provides a nice review of prominent developments in the history of statistical mechanics. Roman Frigg and Charlotte Werndl catalogue various definitions of entropy, connecting the information-theoretic notions with the statistical mechanical notions. Michael Dickson presents the formalism of effect algebras, whose purpose is to facilitate the use of quantum event spaces by enforcing axioms of probability that are weaker than the standard Kolmogorov axioms. His contribution, like all the papers on quantum probability in this volume, hews fairly close to the standard quantum formalism of projection operators defined on an appropriate Hilbert space.
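
For readers unfamiliar with the formalism, the following is a rough outline of a common axiomatization of an effect algebra; it is my gloss on the standard textbook presentation and may differ in detail from Dickson's own.

```latex
% A common axiomatization of an effect algebra (E, \oplus, 0, 1), where
% \oplus is a *partial* binary operation on E; a standard presentation
% that may differ in detail from Dickson's.
\begin{enumerate}
  \item If $a \oplus b$ is defined, then $b \oplus a$ is defined and $a \oplus b = b \oplus a$.
  \item If $a \oplus b$ and $(a \oplus b) \oplus c$ are defined, then $b \oplus c$ and
        $a \oplus (b \oplus c)$ are defined and $(a \oplus b) \oplus c = a \oplus (b \oplus c)$.
  \item For each $a \in E$ there is a unique $a' \in E$ with $a \oplus a' = 1$.
  \item If $a \oplus 1$ is defined, then $a = 0$.
\end{enumerate}
% The motivating quantum example: the effects on a Hilbert space, i.e. the
% operators $A$ with $0 \leq A \leq I$, where $A \oplus B = A + B$ is defined
% exactly when $A + B \leq I$, and a state $\rho$ assigns probability
% $\mathrm{tr}(\rho A)$ to the effect $A$.
```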

In one of the more technical papers in the collection, Laura Ruetsche and John Earman explore a strategy for extending the traditional observable-algebra formalism of non-relativistic quantum mechanics to make it general enough to accommodate quantum field theory and the thermodynamic limit of quantum statistical mechanics. Their target obstacle is that probabilities in collapse and modal interpretations of non-relativistic quantum mechanics standardly rely on minimal projection operators, which are not always available in more general observable algebras. They present a "Maximal-Beable approach" to provide appropriate formal resources to compensate, which results in a characterization of the space of possible quantum states in terms of two-valued homomorphisms on a subspace of projection operators forming a maximal abelian subalgebra of the full observable algebra. They note that the empirical content of the Maximal-Beable approach is unclear, however, because it does not specify which components of the formalism can properly be assigned probabilities, nor does it indicate how any such probabilities would be linked to the preparation of initial conditions and to measurement outcomes.

The main reason I find the topic of probability compelling is that traditional descriptions of the conceptual landscape -- Beisbart and Hartmann's introduction being representative -- organize interpretations of probability using the categories 'subjective' and 'objective', yet some probabilities in physics are difficult to classify using this distinction. For example, experiments on the diffusion of gases and the decay of radioactive material reveal robust regularities that can be expressed as truths of the form, "Given an initial experimental setup of type S, there is a probability P of the detector registering the arrival of a particle of type X." On the one hand, it is misleading to think of such probabilities as subjective because the experiments reveal that some values of P fit the observed data much better than others, and they do not depend on anyone's degrees of belief in any straightforward way. On the other hand, the reliability of probabilistic rules summarizing the behavior of macroscopic setups and detectors does not indicate whether nature is fundamentally chancy. The universe could be governed entirely by deterministic laws of temporal evolution or by fundamentally chancy laws, and either way we could have robust non-accidental patterns that are well-suited for probabilistic description. Something in our conception of probability in science ought to go beyond the objective structure of fundamental reality in order to insulate claims about the chancy behavior of the macroscopic world from the hard-to-settle question of whether fundamental chanciness exists. My hope in reading this collection was to learn some new ideas for how to resolve this tension by moving beyond talk of subjective and objective. Although most of the papers rely on the standard taxonomy, I found several promising suggestions to build on.

One intriguing thread running through several papers was the exploration of 'typicality' as a conceptual tool for understanding the kinds of probabilities invoked in statistical mechanics and Bohmian versions of quantum mechanics. The central motivation for 'typicality' is to capture enough of the qualitative aspects of probability to get some grip on its objectivity without presupposing all of its quantitative aspects. In particular, we are often in a position to infer that a certain type of event E is objectively overwhelmingly probable even though we are in no position to conclude that E has some specific objective probability. The technical term 'typical' is meant to express "overwhelmingly likely" without presupposing the existence of any particular probability. At present, however, discussions of typicality have not yet settled on a sufficiently precise meaning, nor have they reached agreement about its role in elucidating the status of probabilities in physics.

In one of the volume's more accessible papers, Tim Maudlin helps to clarify the underlying logic of Detlef Dürr's typicality-based justifications for the probability distributions used in statistical mechanics and Bohmian versions of quantum mechanics. Bohmian mechanics in particular has only deterministic laws but generates the correct probabilistic quantum predictions by maintaining that particles always remain in a state of quantum equilibrium, in which the probability that the configuration lies in any chosen region of configuration space is equal to the fraction of the squared wave function, |Ψ|², inhabiting that region. The appeal to this probability measure can make Bohmian mechanics appear ad hoc; it only reproduces the standard quantum predictions by imposing the |Ψ|²-measure as an additional axiom. One helpful observation made by Maudlin is that the typicality-based defense of the |Ψ|²-measure allows Bohmian mechanics to evade this criticism. The |Ψ|²-measure need not be thought of as the one true objective probability measure. Instead, its introduction can be defended on the grounds that it has practical utility because it is equivariant: configurations distributed according to |Ψ|² remain so distributed under the action of the dynamical laws. No specific probability measure has any special ontological status because the same predictions can be generated by distributing particles in any way that agrees with the |Ψ|²-measure as to which volumes of configuration space are large and which are small.
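
To make the equivariance claim concrete, here is the usual textbook statement in outline (my sketch, not a quotation from Maudlin's paper):

```latex
% Equivariance of the |\Psi|^2-measure in Bohmian mechanics, in outline.
% The guidance equation assigns particle k the velocity field
\[
  v_k^{\Psi}(q, t) \;=\; \frac{\hbar}{m_k}\,
  \operatorname{Im}\!\left(\frac{\nabla_k \Psi(q, t)}{\Psi(q, t)}\right),
\]
% and the Schr\"odinger equation implies that $|\Psi|^2$ obeys the same
% continuity equation as any density carried along by that flow:
\[
  \frac{\partial\, |\Psi|^2}{\partial t}
  \;+\; \sum_k \nabla_k \cdot \bigl( |\Psi|^2\, v_k^{\Psi} \bigr) \;=\; 0 .
\]
% Hence if the configuration is distributed according to
% $\rho(q, t_0) = |\Psi(q, t_0)|^2$ at one time, it is distributed
% according to $|\Psi(q, t)|^2$ at every other time.
```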

Although 'typicality' remains a vague signifier when applied to real systems with a finite number of outcomes, I would like to highlight one further observation by Maudlin that I think should be adopted as a signature feature of a typicality-based justification for a probability measure. Namely, there is no need for the concept of typicality to play any role in assigning objective chances to singular instances. Typicality can be invoked as part of an explanation of statistical patterns without pronouncing on whether a single coin flip has a non-trivial probability of landing heads.

D. A. Lavis also touches on the topic of typicality by distinguishing two types. According to the more formal notion, a behavior (or trajectory in phase space) is typical just in case it lies in a pre-identified set of measure one. A behavior is typical according to the more informal notion just in case it is not "ridiculously special", in the words of Sheldon Goldstein. The formal notion is more useful for proofs invoking ergodicity, but the informal notion applies to a wider range of systems, especially more realistic systems with only a finite number of possible outcomes. This informal notion is also closer to what we are seeking when we attempt to insulate probabilistic claims about macroscopic behavior from the possibility of deterministic fundamental laws. At least, it seems our explanations of why we never see violations of thermodynamic regularities should not hinge on whether such violations have exactly zero probability rather than a probability fantastically close to zero.
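
In rough outline, the contrast between the two notions can be put as follows (my gloss, not Lavis's wording):

```latex
% The formal, measure-theoretic notion of typicality, in rough outline.
% Let $(\Gamma, \mu)$ be a phase space with normalized measure $\mu$, and let
% $B \subseteq \Gamma$ be the set of initial conditions whose trajectories
% exhibit a given behaviour. The behaviour is typical in the formal sense iff
\[
  \mu(B) = 1 ,
\]
% whereas the informal notion requires only that the exceptional set be
% negligibly small,
\[
  \mu(\Gamma \setminus B) \approx 0 ,
\]
% rather than exactly zero -- the relevant standard for systems with only
% finitely many outcomes, where nontrivial sets of measure exactly one are
% rarely available.
```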

In a contribution related to the work on typicality, Michael Strevens builds on his (2003) "Bigger Than Chaos" to make sense of how non-trivial probabilities can be compatible with deterministic laws. The essential idea in his paper is illustrated by a wheel of fortune, a roulette wheel with many alternating red and black patches of equal size. For this system, it is plausible that the dynamics are suitably sensitive to the initial conditions, so that small differences in initial conditions often make a difference to the resulting outcome color. Yet it is also plausible that the wheel is suitably insensitive to initial conditions in the sense that just about any appropriately sized range of initial conditions contains, in roughly equal proportion, conditions that result in red and conditions that result in black. If there were such a thing as an objective probability distribution over initial conditions, it would be relatively unproblematic to derive non-trivial deterministic probabilities for the wheel of fortune, but the justification for postulating such objective probability measures is contentious.
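
The point can be illustrated with a toy simulation of my own construction (not code from Strevens' paper): a wheel that decelerates at a constant rate from a randomly sampled initial speed, with the landing patch determined by the total angle traversed. Very different distributions over initial speeds all deliver a red fraction close to one half.

```python
import math
import random

N_PAIRS = 50        # 50 red and 50 black patches of equal angular size
DECELERATION = 1.0  # constant angular deceleration (rad/s^2)


def outcome(initial_speed: float) -> str:
    """Return the colour the pointer lands on for a given initial speed.

    With constant deceleration a, the wheel turns through a total angle of
    omega**2 / (2*a) before stopping; the landing patch is fixed by that
    angle modulo one full revolution.
    """
    total_angle = initial_speed ** 2 / (2 * DECELERATION)
    position = (total_angle % (2 * math.pi)) / (2 * math.pi)  # in [0, 1)
    patch_index = int(position * 2 * N_PAIRS)
    return "red" if patch_index % 2 == 0 else "black"


def red_fraction(sample_speed, trials: int = 100_000) -> float:
    """Fraction of spins landing on red under a given initial-speed distribution."""
    reds = sum(1 for _ in range(trials) if outcome(sample_speed()) == "red")
    return reds / trials


if __name__ == "__main__":
    # Three quite different distributions over initial speeds (rad/s); each
    # spreads its weight over many red/black cells, so each yields a red
    # fraction near 0.5 despite the underlying determinism.
    distributions = {
        "uniform on [10, 20]":  lambda: random.uniform(10.0, 20.0),
        "normal(30, 5)":        lambda: abs(random.gauss(30.0, 5.0)),
        "exponential(mean 25)": lambda: random.expovariate(1 / 25.0),
    }
    for name, sampler in distributions.items():
        print(f"{name:>22}: red fraction = {red_fraction(sampler):.3f}")
```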

The innovation Strevens presents is to abandon reference to probability distributions over initial conditions and instead to defend the legitimacy of deterministic chances in terms of patterns of actual frequencies together with additional factors that bolster the conclusion that the resulting probabilities are suitably stable under counterfactual alterations. Any attempt to extract determinate non-trivial probabilities from a world governed by deterministic laws is necessarily going to include a bit of hand-waving and vagueness when it comes to identifying details such as the acceptable degree of sensitivity. This leaves quite a bit of room for explanatory variety, as investigators can make different viable choices for their own explanations of the utility (and thus existence) of non-fundamental chances. My guess is that the best explanations are going to resemble Strevens' account in multiple respects.

Craig Callender's discussion of statistical mechanical probabilities invokes yet another distinction worth exploring. In his telling, a central project in the history of statistical mechanics is to make sense of how static probabilities like the microcanonical probability distribution can be compatible with dynamic probabilities. The conflict is particularly apparent when deterministic laws are operative in both temporal directions. As is well known from reversibility arguments, application of static probabilities to help generate retrodictions of past outcomes will fail miserably to match actual patterns in the past. Callender's discussion of the philosophical responses to this quandary pits globalists against localists.

Globalists want to explain the demonstrably effective (but temporally asymmetric) applicability of static probabilities to local systems as the result of a common cause: the early universe was in an exceptionally low entropy condition but was otherwise microscopically unremarkable. Localists balk at the lack of explanatory detail provided by globalists and even question the coherence of their explanatory resources. Localists instead insist that statistical mechanics can be treated on a par with other special sciences like ecology without being held accountable for an explanation of why retrodictions using the static probabilities are so awful. Ecology already depends on many poorly understood preconditions for the appropriateness of its probabilistic claims, and the worries about statistical mechanical probabilities raised by reversibility considerations are no different in kind. Although this debate mostly concerns where to shove the explanatory burden, Callender reasonably concludes that globalism can be resisted without requiring a commitment to subjectivist or instrumentalist interpretations of statistical mechanical probabilities.

Brevity prevents me from revealing more of the treasures that can be discovered in this volume.