Time and Chance

Albert, David Z., Time and Chance, Harvard University Press, 2000, xi+172pp, $31.50 (hbk), ISBN 0-674-00317-9.

Reviewed by Nick Huggett, University of Illinois at Chicago

2002.02.09


Time and Chance is a much bigger book than its 172 pages would suggest: David Albert aims at nothing less than explanations of the nature of physical probability and of the origin of time asymmetry, together with an introductory text to these subjects. The book is big in its goals and, it must be said, in its achievements.

Albert’s explanation of the relevant physics comes (mostly) in three of the chapters: Chapter Two (and an Appendix) introduces thermodynamics, particularly irreversibility and the Second Law; Chapter Three tackles statistical mechanics – how microphysics grounds thermodynamics; and Chapter Seven gives an account of quantum mechanics. All three discussions are as non-mathematical as possible (little more than basic algebra and intuitive ideas about a measure are assumed anywhere) and rely on working through simple examples to teach the relevant ideas. Here, as throughout the book, Albert’s style is deceptively simple: he writes very directly, as if giving a personal, patient (extremely irreverent) tutorial in plain language, and yet every discussion demands careful rereading to appreciate its full significance. For example, someone unfamiliar with the material would want to think very carefully about the application of the definition of thermodynamic entropy in the examples given. These chapters form an excellent introduction to the physics, but one should not be misled by their familiar style into thinking that they don’t demand careful, hard contemplation.

The two great topics in the foundations of statistical mechanics are probability and the origin of temporal asymmetries. These topics are intimately connected, and the focus of Time and Chance shifts between them, but to outline Albert’s views succinctly I’ll divide them as far as possible. The argument concerning chance begins in Chapter Three.

First, using our senses, augmented with instruments, we can resolve various macroscopic properties – volume, temperature, etc. But such properties are far more coarse-grained than the full set of microscopic properties characterizing a system – the positions and momenta of all its constituent particles – properties that we cannot resolve: the macrostate (the set of macroproperties) provides a less than complete specification of the microstate. Next, we know that certain macrostates will – invariably it seems – evolve into certain others – e.g., an ice cube in hot coffee will melt – and we would like to understand why. One seeks an explanation by translating the initial macrostate (ice cube in hot coffee) into some less than complete statement about the initial microstate, then feeding this statement somehow into the deterministic Newtonian equations of motion to derive some (less than complete) statement about the later microstate, which can then be translated back into a macrostate (cooler, watery coffee). Finally, chance plays a role in this story because one translates the macrostate into a statement to the effect that all microstates compatible with the macrostate (and perhaps also with other specified facts) are ‘equally likely’, and it is this information that is fed into the Newtonian dynamics.
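
For concreteness (the notation is mine, though the content is standard), the ‘equally likely’ statement is usually cashed out as the uniform Liouville (Lebesgue) measure over the region $\Gamma_M$ of phase space compatible with the macrostate $M$:

\[
P(A \mid M) \;=\; \frac{\mu(A \cap \Gamma_M)}{\mu(\Gamma_M)},
\]

where $\mu$ is Lebesgue measure on the $6N$-dimensional space of positions and momenta and $A$ is any measurable set of microstates. The explanatory work then consists in showing that the subset of $\Gamma_M$ whose Newtonian evolution lands, ten minutes later, in the cooler-watery-coffee region has conditional measure close to 1.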

But what is the nature of this chancy statement? The standard view, according to Albert, has two parts (which he equates, pp. 67-70, though they probably should be distinguished). First, the standard view takes the equal likelihood to be nothing more than a statement of our ignorance – it says no more than ‘for all we know it could be any one of these states’. Second, consider that the equal likelihood could be represented by an (uncountably infinite) ensemble of systems, one in each of the microstates compatible with the macrostate. The standard view is that thermodynamic properties are averages over such ensembles and that explanations of macroscopic evolutions are found by considering how such ensembles evolve: to find what the later macrostate will be, construct the ensemble corresponding to the current macrostate, let each member of the ensemble evolve according to Newtonian dynamics to some later time, and then derive any macroproperties as averages for the new ensemble obtained. For example, the ensemble of microstates corresponding to ice cubes in hot coffee will evolve to an ensemble whose average thermodynamic state is one of cooler, watery coffee. The two parts of the standard view, Albert argues convincingly, generate a number of problems.
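
To make the evolving-ensemble picture concrete, here is a minimal numerical sketch (my own toy model, not an example from the book): an ensemble of ideal-gas microstates, each compatible with the macrostate ‘all particles in the left half of the box’, evolved ballistically with reflecting walls; the ensemble-average occupancy of the left half relaxes from 1 toward 1/2.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PARTICLES = 100   # particles per system
N_SYSTEMS = 500     # members of the ensemble
L = 1.0             # box length
dt = 0.01           # time step

# Each ensemble member starts compatible with the same macrostate:
# all particles in the left half, velocities drawn at random.
x = rng.uniform(0.0, L / 2, size=(N_SYSTEMS, N_PARTICLES))
v = rng.normal(0.0, 1.0, size=(N_SYSTEMS, N_PARTICLES))

def left_fraction(x):
    """Macrovariable: fraction of particles in the left half of the box."""
    return (x < L / 2).mean(axis=1)

for step in range(1001):
    if step % 200 == 0:
        print(f"t = {step * dt:5.2f}   ensemble-average left fraction = "
              f"{left_fraction(x).mean():.3f}")
    x = x + v * dt                   # ballistic motion...
    over, under = x > L, x < 0       # ...with reflecting walls
    x[over], v[over] = 2 * L - x[over], -v[over]
    x[under], v[under] = -x[under], -v[under]
```

Each individual member fluctuates, but the average settles near 1/2 and stays there; this is the sense in which an ensemble-level approach to equilibrium can be monotonic even though, by recurrence, no individual trajectory is.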

First, our ignorance of the particular microstate is supposed, by itself, to justify the assumption of probabilistic indifference: the standard view invokes the classical interpretation of probability. Albert rightly criticizes this view along familiar lines – lots of measures are compatible with the probability axioms, and nothing selects uniformity. Second, indifference cannot represent only our ignorance. Suppose it did; then our explanation of the melting ice cube starts by inferring a statement about our ignorance of the microstate, then considers how that ignorance evolves to a new state of ignorance, which is translated into a melted-ice-cube macrostate – but our evolving ignorance can’t be sufficient to explain the ice cube’s melting! Thus uniform probability represents more than our state of ignorance. It reflects our experience of past thermodynamic systems – we would learn quickly that a probability function which makes it likely that milk will unmix from coffee is bad. Thus indifference represents something about thermodynamic systems themselves. What? The objective probabilities that certain things will happen to a thermodynamic system. Albert (p. 81) favors a Lewis-style supervenience account of objective probability – objective probabilities are determined, in some way, by the frequencies with which events actually occur.
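
The first criticism can be made vivid with a standard observation (my example, in the spirit of Bertrand’s paradox): any non-negative density $\rho$ on $\Gamma_M$ with

\[
\int_{\Gamma_M} \rho \, d\mu = 1
\]

defines a legitimate probability measure, and ignorance alone gives no reason to prefer $\rho \equiv \text{const}$. Moreover, a distribution uniform in momentum is non-uniform in, say, kinetic energy, so ‘indifference’ yields different probabilities depending on which variables one chooses to be indifferent over.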

I have a small gripe at this point: one – especially one unfamiliar with the material – might well conclude from Albert’s arguments that indifference has no connection with our epistemic state. But that would be wrong: we really are ignorant about which microstate the macrosystem is actually in, and we really have no reason to favor any. The connection is, of course, that our experiences of the actual frequencies with which events occur lead us to make our subjective probabilities match the objective probabilities as best we can. Subjective indifference is justified because we have learned that nature itself is indifferent.

Third, problems arise because the standard view represents probability in terms of an ensemble, and attempts to equate thermodynamic laws with regularities governing the evolutions of ensembles. Now (although Albert doesn’t mention it), real, practical statistical physics does proceed by equating thermodynamic quantities with averages over ensembles, which is an extremely powerful technique. This fact may seem surprising – why should an individual behave like an average of many individuals? But there is a reason: pick any thermodynamic property, and almost every system in the appropriate ensemble has the same value for it at any time, hence almost every microsystem in the ensemble, and hence almost every macrosystem that picks out the ensemble, has the average value of the property at all times. So for practical purposes we can proceed as if the ensemble average state is the individual state; what Albert criticizes is the idea that individual thermodynamic states literally are ensemble averages and that the laws of thermodynamics can be derived on that basis. This idea looks pretty promising initially: while recurrence results show that no individual system will monotonically approach equilibrium, Boltzmann’s H-theorem shows that an ensemble average will. But there are notorious technical problems with this approach, and in particular with showing that there is a unique equilibrium ensemble: Albert concludes that the program will not provide an interpretation of thermodynamics.
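
A minimal calculation shows why ‘almost every’ is so strong here (my numbers, not Albert’s). For the fraction $f_N$ of $N$ independent two-state degrees of freedom that are, say, in the left half, Hoeffding’s inequality gives

\[
\Pr\!\left( \left| f_N - \tfrac{1}{2} \right| \ge \epsilon \right) \;\le\; 2 e^{-2 N \epsilon^2},
\]

so with $N \sim 10^{23}$ and a tolerance of $\epsilon = 10^{-6}$ the atypical microstates have total measure at most $2e^{-2 \times 10^{11}}$: virtually every member of the ensemble carries the ensemble-average value of the macrovariable.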

In place of the two parts of the standard view, Albert offers the following alternatives, which, he believes, avoid all these worries. First, we give up its interpretation of probability: indifference over microstates does reflect ignorance, but ignorance grounded in the objective probabilities we have discovered. Second, in place of the evolving ensemble explanation, our account of, say, melting ice cubes runs as follows: the initial state defines a set of compatible microstates, almost all of which lead to later microstates (in, say, 10 minutes) which are compatible with a macrostate of a melted ice cube, so, assuming that all microstates are equally likely, it is almost certain that the ice cube will be melted in 10 minutes. Succinctly, we explain thermodynamic regularities as being almost ubiquitous for members of the ensemble, not as being absolutely certain for the average of the ensemble.
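
The contrast with the ensemble-average form of explanation can be seen in the same toy free-expansion model as above (again my sketch): instead of tracking the average, sample microstates compatible with the initial macrostate and count how many of them individually evolve into the equilibrium macrostate.

```python
import numpy as np

rng = np.random.default_rng(1)

N_PARTICLES, N_SAMPLES = 100, 200
L, dt, T = 1.0, 0.01, 10.0

hits = 0
for _ in range(N_SAMPLES):
    # One microstate, sampled uniformly from the initial macrostate.
    x = rng.uniform(0.0, L / 2, size=N_PARTICLES)
    v = rng.normal(0.0, 1.0, size=N_PARTICLES)
    for _ in range(int(T / dt)):     # ballistic motion, reflecting walls
        x = x + v * dt
        over, under = x > L, x < 0
        x[over], v[over] = 2 * L - x[over], -v[over]
        x[under], v[under] = -x[under], -v[under]
    # Did this individual trajectory reach the equilibrium macrostate?
    if abs((x < L / 2).mean() - 0.5) < 0.1:
        hits += 1

print(f"{hits} of {N_SAMPLES} sampled microstates ended near equilibrium")
```

Almost all of the samples land in the equilibrium macrostate: the regularity is explained as near-ubiquity among the individual members, not as certainty for the average.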

Now, Albert favors a supervenience interpretation of the objective probabilities, which perhaps suggests that he believes that the frequencies with which microstates occur – the supervenience base for the probabilities – are brute facts about the world. But in Chapter Seven he proposes a novel account of the origin of those frequencies, grounded in the ‘GRW’ version of quantum mechanics. In this theory the wave-function of any particle will, with some fixed, low probability per unit time, spontaneously collapse – it will be localized to a small region. Albert notices that microstates with un-thermodynamic behavior – say those that are compatible with the state of being a cup of cold, watery coffee but which lead quickly to a state of being a cup of hot coffee containing ice cubes – are not only few but also far between in the set of microstates. In other words, even if a system starts in such a ‘bad’ state, most perturbations, even the smallest ones, will lead to a ‘good’ state. And so, Albert proposes, GRW collapses would mean that systems do not spend any appreciable time in any bad states, and so we should expect that systems with un-thermodynamic behavior almost never occur: GRW collapses mean that over any interval good states massively predominate. Thus Albert ultimately proposes replacing the postulate of indifference over microstates with dynamical quantum probabilities.
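
The numbers behind this proposal are worth seeing (a back-of-the-envelope sketch of mine; the collapse rate is the one standardly quoted for GRW, about $10^{-16}$ per particle per second). Each particle collapses fantastically rarely, but a macroscopic system has so many particles that collapses strike it almost continuously:

```python
import numpy as np

rng = np.random.default_rng(2)

LAMBDA = 1e-16   # GRW collapse rate per particle per second (standard value)
N = 6e23         # particles in a macroscopic system (roughly a mole)

# Independent Poisson processes add: the system as a whole suffers
# collapses at rate N * LAMBDA.
total_rate = LAMBDA * N
print(f"mean time between collapses somewhere in the system: {1 / total_rate:.1e} s")

# Sample a few waiting times between successive collapses.
print("sampled waiting times (s):", rng.exponential(1 / total_rate, size=5))
```

On Albert’s proposal each collapse is a tiny, effectively random perturbation of the microstate; since the ‘bad’ microstates are both rare and isolated, a system starting in one is knocked onto a ‘good’ state within a hundred-millionth of a second or so.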

The discussion of time (a)symmetry begins in Chapter One, with a discussion of what it means for a theory to be time-reversal symmetric. Albert’s conception is not the standard one – classical electrodynamics turns out to be asymmetric – and readers may feel that the standard view doesn’t get a fair shake. Regardless, our experiences involve profound time asymmetries and, given an underlying time-symmetric Newtonian dynamics, there are difficult puzzles about how this can be. First, thermodynamic systems seem always to evolve in the same temporal direction: ice cubes in hot coffee are always more melted in the future, not the past, and in general the Second Law of thermodynamics is of course time-asymmetric – systems always seem to have greater entropy in the future than in the past. But how can this be if microphysics is Newtonian, from which it follows (from the famous reversibility and recurrence arguments) that almost every microstate compatible with a given macrostate lies on a trajectory whose entropy increases to the past as well as the future?
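
The reversibility argument behind this puzzle can be stated compactly (standard textbook material; the notation is mine):

\[
T(q,p) = (q,-p); \qquad \text{if } (q(t),p(t)) \text{ solves Newton’s equations, so does } (q(-t),-p(-t)).
\]

Since a macrostate is (typically) invariant under reversing all momenta, $T$ maps the microstates in $\Gamma_M$ headed toward higher entropy onto equally many (in measure) microstates in $\Gamma_M$ that came from higher entropy; so if almost every microstate compatible with $M$ has higher entropy to the future, almost every one also has higher entropy to the past.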

Chapter Four is devoted to this question. Briefly, Albert’s solution is to add to statistical mechanics – in addition to the postulates of Newtonian dynamics and indifference – an explicit postulate specifying the particular low entropy macrostate in which the universe was created (so statistical mechanics will remain incomplete until cosmology tells us what state that is). Then conditionalizing not only on the current macrostate of a system but also on the initial macrostate, it is not only most likely that any system will increase in entropy to the future, but also that it will decrease in entropy to the past, as we believe. If this answer seems obvious, then one should consider the previous attempts by Gibbs, Sklar, Schrödinger, Davies, and Gold that Albert rejects. (Note a point that Albert makes, p. 82, but doesn’t emphasize: the proportion of states with un-thermodynamic futures is effectively the same – very, very small – in the collection of all microstates compatible with a macrostate and in the collection of microstates which also have lower entropy in the past. Thus effectively one need not condition on the past state of the universe when calculating the thermodynamic future, only when calculating the past.)
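
Schematically (the formulation is mine): writing $M_t$ for the present macrostate, $M_0$ for the posited low-entropy initial macrostate, and $\phi_t$ for the Newtonian flow, the proposal is to assign probabilities by

\[
P(\,\cdot\,) \;=\; \mu\big(\,\cdot \mid \Gamma_{M_t} \cap \phi_t(\Gamma_{M_0})\big),
\]

that is, uniformly over those microstates compatible with the present macrostate that have also evolved out of the initial one. Albert’s parenthetical point is then that intersecting with $\phi_t(\Gamma_{M_0})$ barely changes forward-looking probabilities, while it completely reverses backward-looking ones.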

Actually, as Albert demonstrates in Chapter Five, it’s not true that every system will, or is even likely to, increase monotonically in entropy to the future: he shows how, given his interpretation of statistical physics, the Second Law can be reliably violated. That it does not hold with certainty for any system is familiar (a system could, just possibly, be in an un-thermodynamic state), but Albert shows that it fails with certainty for some systems, which is shocking.

This brings me to a difficulty for Albert’s interpretation of statistical physics, concerning the postulate of indifference that he offers. He proposes that we are to take as equally likely all microstates compatible with the ‘information we happen to have’ (p. 96). But this is to introduce the kind of subjective element that he rejected earlier: ignorance could not explain why ice cubes melt in hot coffee, but neither can which facts we happen to know! Presumably this is a verbal slip (but again, care is needed to make clear the place of subjective knowledge) and what is needed is conditionalization on some relevant facts about the system. Most obviously one could postulate instead that all microstates compatible with the current macrocondition are equally likely. However, the results of Chapter Five show that this won’t do. Albert describes a device (a ‘pseudo Maxwell demon’) that will reliably interact with a system to put it into a microstate which will commence an un-thermodynamic evolution, say, one hour after the interaction. Assuming that all states compatible with the macrostate of such a system 59 minutes after the interaction are equally probable will lead to the false conclusion that it is highly unlikely to decrease in entropy immediately. Presumably it was to avoid this problem that Albert proposed conditionalizing on our information, but introducing a subjective element seems just as problematic. So what is the correct postulate?
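
To put the difficulty in symbols (my notation): if $M_{59}$ is the macrostate of the system 59 minutes after the interaction, then

\[
P(\text{entropy falls in the next few minutes} \mid M_{59}) \approx 0,
\qquad
P(\text{entropy falls in the next few minutes} \mid M_{59},\ \text{the demon ran}) \approx 1,
\]

so some further condition belongs to the right of the conditionalization bar; the open question is which objective fact about the system, rather than which piece of our information, it should be.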

Another asymmetry, discussed in Chapter Six, concerns our different epistemic relations to the past and future (also discussed are our different causal powers over past and future). This chapter contains the most profound philosophical implications of the book, concerning deep questions about the reliability of memory and the nature of free will. I think that it could and should be read widely by philosophers, whether or not they have interests in philosophy of physics (it can profitably be read independently of the rest of the book).

One of the strongest features of experience is that we know much more, with much more certainty, about the past than about the future. This asymmetry seems, however, at odds with time-symmetric dynamics: the capacity for making inferences using time-symmetric physics from present information should be entirely time-symmetric. In fact, as Albert explains, recurrence and reversibility imply that given only the current macrostate of a system, its most probable past is like the time-reverse of its most probable future: ice cubes in hot coffee formed by spontaneous freezing; my body having slowly ‘un-decomposed’ and come to life; and in general any organized system formed out of random fluctuations. So the mystery is not just why our knowledge is so biased to the past; it is whether our beliefs about the past are justified at all, given that they are utterly incompatible with Newtonian mechanics plus only our knowledge of current macrofacts.
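
The symmetry driving this skeptical worry can be put in one line (my notation, assuming the macrostates are invariant under velocity reversal): with the uniform measure over the current macrostate $M$,

\[
P\big(M(-\tau) = M' \,\big|\, M(0) = M\big) \;=\; P\big(M(+\tau) = M' \,\big|\, M(0) = M\big),
\]

so retrodictions exactly mirror predictions, and conditionalizing on the present macrostate alone makes the coffee’s most probable past one in which the ice spontaneously froze out of it.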

A typical response is to point out that in addition to whatever we can infer from present macrofacts and Newtonian physics we also have ‘records’ of the past but not of the future. Albert criticizes previous proposals for failing to provide an adequate account of what a record is, and then gives his own view. He notes that reading a record means making an inference about an interaction at some time intermediate between the present and an earlier time at which we know the recording device was in a ‘ready state’. For example, a line on a photographic plate only tells us that a particle went through a detector an hour ago if we also know that just prior to that time the plate was in the detector, unexposed. Reading a record provides more information than inference from the present state alone because it assumes something about the past. (Note, with Albert, that this point is time-symmetric: if I know something suitable about a recording device at a future time, I can read its current state as a record of an interaction between now and then.)

Of course, Albert points out, we now face a regress. How do I know the ready state of the detector? Presumably I have a record of that, in my head or lab notes. But to read that record I must make an assumption about the earlier state of my mind or lab notes, and to make that assumption I need another record, and another and another. But there is a natural place to stop this regress: the initial macrostate of the universe – the one that plays a role in the formulation of statistical mechanics – is the ‘mother of all ready conditions’ (p. 118) that enables us to read any records about the past. Given this knowledge, all the things we think of as records of the past are, reliably, just that after all. (And symmetrically, Albert argues, if we knew the final – or simply a later – macrostate of the universe we would be able to read records of the future.)

This idea should be considered widely and carefully by philosophers. If Albert is right then the openness of the future is far more subjective than we previously thought, turning on the contingent fact that we don’t know the future macrostate (by other arguments of this chapter, if we did know it then we would have the same negligible causal powers over the future that we do over the past, and think what that would show about free will). But if he is wrong then we have to deal with a seemingly crippling skepticism about the past. Does he succeed? Much as I admire his proposal, it seems to me that it faces important difficulties that should be explored.

Any solution to this problem has to satisfy two desiderata: it must make our beliefs about the past justifiable in principle and it must give a plausible story about how we actually form beliefs about the past from records. There are problems on both counts. First, it is plausible that most beliefs about the past are consequences of – hence justified by – records and the particular initial state of the universe. But what could justify our knowledge of the particular initial state of the universe? Not a record of it, obviously, for that would require an even earlier ready state. Albert suggests that ‘induction’ will do the trick (p. 94): statistical physics with a postulate specifying the initial macrostate of the universe leads to a variety of well-confirmed predictions. But that will hardly do: our knowledge of those predictions mostly lies in records, but our reading of those records is contingent on the assumption of the initial state, so reliance on them is circular. Remember, without the assumption, Newtonian mechanics plus those ‘records’ make likely a past in which they were created by spontaneous fluctuations, not experimentalists. (Or did Albert mean predictions about some present facts from other present facts, so that we can justify the assumption at any moment if necessary?) The other justification offered (p. 119) is that only the assumption renders deeply held beliefs about the reliability of memory true. Of course without the assumption we have no reason to think that memories are reliable, so the reasoning here is formally circular as well. The argument, I think, is either naturalistic – beings like us cannot but have faith in memory, and so epistemology must assume it is reliable – or transcendental – an assumption of the initial macrostate is a precondition to any scientific knowledge. Either way the argument is not of a piece with ordinary scientific inference; whether it is acceptable I leave as an urgent question to later commentators.

Second, does Albert’s account fit with a plausible story of how we actually read records? Whether or not our beliefs in what a record says are justified, we have to have some prior belief about the initial state of the recording device to even read it – we have to have information about the state and location of the photographic plate an hour ago before we can even imagine what the line on it indicates. So how might we come by such information? Not by inference from only our present experiences plus Newtonian mechanics, since ready state information is what enables us to know more than such inferences allow. And the only other non-ampliative sources of knowledge are records, leading back to our regress, and it seems again that it must be knowledge of the initial macrostate of the universe that allows us to read any of the records we do. But that cannot be right! That is to say that I can only read records because I know the big bang macrostate. And so do you! At this instant! And though the prospect of pursuing cosmology through neuroscience might sound appealing, this conclusion really cannot be sustained. Clearly if Albert’s account is to work, the information we need about initial states must be ampliative: somehow learning to read records means making hypotheses about earlier ready states in a justifiable way. But in what way? And is it possible to amplify present experiences alone or is some initial, innate information about the past needed? And what prevents similar ampliative inferences about the future? (Albert, p. 124, rejects the idea that it is because the final state is one of maximal entropy that it cannot serve as a ready state for reading records.) What these questions require is a more complete account of how it is we come to know enough about the past – and not enough about the future – to read the records we do. I rank these questions as among the most important in epistemology today.

In brief conclusion to a lengthy review, this book should be read (more than once) by anyone with an interest in foundations of physics, and Chapter Six by anyone with an interest in metaphysics or epistemology. One could certainly use the book to introduce bright undergraduates to the topics, but one should be prepared to spend plenty of time carefully teasing out its implications.