Accuracy and the Laws of Credence


Richard Pettigrew, Accuracy and the Laws of Credence, Oxford University Press, 2016, 238pp., $74.00 (hbk), ISBN 9780198732716.

Reviewed by Kenny Easwaran, Texas A&M University

2016.10.18


After receiving a PhD in mathematics and publishing a few papers on the philosophy of mathematics, Richard Pettigrew co-authored a major two-part paper, "An Objective Justification of Bayesianism", with Hannes Leitgeb. This paper came out in 2010 and showed how the simple idea that belief aims at the truth could be used to systematically justify a large number of intuitively plausible principles for confidence, or degree of belief. That paper used some rather flat-footed assumptions about what it means for one's degrees of belief to be close to the truth (it took the space of possible degrees of belief to be measured as if it were physical space, with the Pythagorean theorem applied to probabilities) and about how to calculate which option is best in cases of uncertainty (it uncritically applied rules like expected utility, despite their intimate connection to some of the principles the authors sought to justify). But it showed how to unite a lot of work that had been done by various researchers in different contexts, and made formal epistemologists take notice. For those who already accepted that closeness to the truth is fundamental to epistemology, it gave a justification of many powerful epistemic principles. But perhaps more importantly, for epistemologists who accepted the principles that were the conclusions of the arguments, it gave reason to suspect that closeness to the truth really is the only fundamental value in epistemology. The idea that all epistemic values can be derived from the value of truth is known as "veritism", and it had been of interest in epistemology for some time.
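To make that flat-footed measure concrete (in my notation, not theirs): let $v_w$ be the omniscient credence function at world $w$, assigning 1 to each truth and 0 to each falsehood. Leitgeb and Pettigrew measured the inaccuracy of a credence function $c$ at $w$ by squared Euclidean distance,

\[ \mathfrak{I}(c, w) = \sum_{X} \big( c(X) - v_w(X) \big)^2, \]

with one coordinate per proposition $X$. This is the Pythagorean theorem applied to the space of credence functions, and (up to normalization) it is the familiar Brier score.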

In the years since that paper came out, this "accuracy paradigm" for formal epistemology has become a major locus of research, including several papers by me and others. Pettigrew himself has raised funds for many of the workshops, conferences, and postdoctoral researchers that have driven this research direction forward. Most of his own papers in these years have been a second pass at the ideas from the original paper, extending them to other epistemic principles and replacing the flat-footed assumptions with more plausible ones. This book is a third pass at the same ideas, and it is an advance over all of these earlier papers as well as a systematic presentation of their insights in one place. Again, by providing arguments connecting various epistemic principles to the notion of aiming at the truth, it can be read straightforwardly as an argument for these principles, or conversely as a defense of veritism, showing that no other values are needed.

The book has four parts, corresponding to arguments for four types of principle. Part I covers the basic argument that degrees of belief should satisfy the axioms of probability theory. It also goes into great detail about the common structure of all the major arguments of the book, including very careful discussion of subtle distinctions between stronger and weaker versions of the assumptions involved. Part II gives the arguments for principles about the role of chance in reasoning, extending David Lewis's "Principal Principle". Part III deals with principles for the distribution of credences in the absence of evidence, focusing on the "Principle of Indifference" as well as alternatives. And Part IV deals with principles for how to revise one's credences in light of evidence (or at least, how one should plan to revise them in light of potential evidence). Helpfully, each Part is divided into several chapters with numbered sections, and the table of contents sets out this whole structure very clearly. Someone who is interested in just a few results or topics can use the table of contents to jump right in, while someone who wants to read the whole thing can get a very good grasp of the structure and easily navigate to points they want to remember.

The structure of the argument for each of the epistemic rules in the book is the same, and it is discussed in great detail in Part I. Pettigrew starts with the assumptions that the omniscient credence state (i.e., the one that has credence 1 in all truths and 0 in all falsehoods) is the one that is epistemically best to have, and that the goodness of other credal states depends on their relation to this one. He then characterizes this dependence in various formal ways, and in each Part lays down some principles about which credal states are rational for agents who aren't certain what the truths and falsehoods are. Although the credal state that matches the truth values of every proposition may in fact be best, rationality requires being good not just in the actual world, but also in other possibilities that one thinks might be actual. Each Part then presents some mathematical theorems about which credal states meet the proposed standard of rationality, and interprets them as epistemic principles. The proofs of the theorems are included in an appendix to each Part, so they don't interrupt the flow of the main text.

The main investigation of the measure of goodness of credal states other than the omniscient one occurs in Chapter 4. (Chapter 3 considers previous discussions of how this goodness should be measured, including two from Jim Joyce, and the one from the earlier paper by Leitgeb and Pettigrew.) It begins with the idea that the overall goodness of a credal state must be the sum of the goodness of each individual credence.

When we say that we represent an agent by her credence function, it can sound as if we're representing her as having a single, unified doxastic state. But that's not what's going on. Really, we are just representing her as having an agglomeration of individual doxastic states, namely, the individual credences she assigns to the various propositions about which she has an opinion. (p. 49)

Although I think this assumption is not clearly justified, he has at least given alternatives to this paradigm a clear target: if there is some sense in which a credal state forms an organic unity, then we might end up with different principles of rationality. (I will leave aside the question, familiar from measurement theory, of whether the use of addition to represent agglomeration is substantive, or merely a convention of mathematical representation.)
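In symbols (my gloss, not Pettigrew's own formulation): additivity says that global inaccuracy decomposes into a sum of local inaccuracies,

\[ \mathfrak{I}(c, w) = \sum_{X} \mathfrak{i}\big( c(X), v_w(X) \big), \]

where $\mathfrak{i}$ measures how far a single credence lies from the relevant truth value. The Euclidean measure above is the special case $\mathfrak{i}(x, y) = (x - y)^2$.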

The interesting and innovative supplement to this idea is that the overall goodness of a credal state can be broken up into its closeness to its "well-calibrated counterpart", and the closeness of its well-calibrated counterpart to the omniscient credence function. Pettigrew summarizes the existing literature arguing for and against the concept of calibration, and shows that this proposal meets the intuitive demands of both sides, which is a remarkable achievement. He then shows that a measure of goodness that is additive over individual credences can be broken up this way if and only if it is an "additive Bregman divergence". Much existing literature by formal epistemologists and statisticians has the aim of characterizing epistemic goodness in this way, but Pettigrew's argument is strikingly new.
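For readers who want the formal shape of that result, here is a standard presentation (a sketch, not Pettigrew's exact statement): a one-dimensional Bregman divergence is generated by a strictly convex, differentiable function $\varphi$ via

\[ \mathfrak{d}(x, y) = \varphi(x) - \varphi(y) - \varphi'(y)(x - y), \]

and an additive Bregman divergence sums such a term over each proposition. Choosing $\varphi(x) = x^2$ gives $\mathfrak{d}(x, y) = (x - y)^2$, recovering the quadratic measure above.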

This doesn't uniquely characterize the way goodness is to be measured, and this gives rise to a central objection in the literature, due to Aaron Bronfman. For any given measure of goodness, we can say that all credal states with certain features are irrational; but those states might be irrational for a different reason according to a different measure of goodness. And just as a suboptimal credal state might be the best way to balance goodness across different possibilities, if there are also different measures of goodness, then something that is irrational according to each of them might end up rational overall, as the best tradeoff between them. Pettigrew discusses this objection at length in Chapter 5 and notes that although strategies like supervaluation won't eliminate the objection, there are two strategies that do: we might say that each agent has their own subjective measure of goodness, or we might say that there is a unique objective measure of goodness. Pettigrew has a tentative proposal for identifying the objective measure, using mathematical symmetry, but he recognizes that this proposal is not very convincing. Thus, throughout the book he tracks which arguments are vulnerable to the Bronfman objection. He notes that the arguments from Parts I and II are vulnerable, while the arguments from Part III are not; one argument in Part IV is not, but the other two are.

Discussion of rules of rationality occurs in Chapters 2, 10, 12, 13, and 14, as Pettigrew generalizes the account to deal with the issues of the three later Parts. In Chapter 2, he begins by considering a rule of rationality formulated in terms of dominance. As he notes, we can't just use a naively phrased principle of dominance as a rule of rationality. If one were to say that an option A is irrational whenever there is another option B that is guaranteed to be better than A (i.e., B dominates A), then there are no rational options when one has infinitely many options, each better than the last. I would say that these cases indicate that we should give up on a binary notion of rationality that an option either has or lacks, and replace it with a comparative notion. But for those who prefer a binary notion, Pettigrew's discussion in Chapter 2 is the most significant and subtle that I've seen. The principle he endorses here is that A is irrational if there is a B that dominates it and is itself not dominated by any C. He considers versions where B is required to satisfy various other good-making conditions, but suggests that these principles would only be needed if there were some source of epistemic value other than closeness to the truth.
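Schematically (my paraphrase): writing $B \succ A$ for "B is guaranteed to be better than A", i.e. B is better than A at every possible world, the endorsed principle is

\[ \exists B \big( B \succ A \ \wedge\ \neg \exists C \, (C \succ B) \big) \ \Rightarrow\ A \text{ is irrational}. \]

This handles the infinite case: in an endless sequence of ever-better options, every dominator is itself dominated, so the principle convicts nothing, whereas the naive principle would convict everything.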

The rationality principles discussed in later chapters are probably the most questionable parts of Pettigrew's argument -- even more than the assumptions he makes about how to measure epistemic goodness. In Chapter 10 he derives various interesting and plausible principles regarding credences about chances from a principle he calls "Current Chance Evidential Undominated Dominance". Unlike the earlier dominance principle, on which A is irrational if there is an undominated B that is better than A in every possibility, this principle says that A is irrational if there is an undominated B that has higher expected goodness than A according to every possible chance function. No explanation is given here for why expectation should be relevant, particularly when calculated from a chance function rather than a credence function. As he says, "I suspect that it does not reside at normative bedrock: there is still work to be done justifying it." (p. 132) But even in the absence of this explanation, the discussion shows what sorts of principles one might focus on to give a more complete version of the argument.
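Schematically again (my paraphrase, not the book's exact formulation): writing $\mathrm{Exp}_{ch}$ for expectation calculated from a possible chance function $ch$, the principle says that A is irrational if

\[ \exists B \Big( \neg \exists C \, (C \succ B) \ \wedge\ \forall ch: \ \mathrm{Exp}_{ch}\big[\mathfrak{I}(B)\big] < \mathrm{Exp}_{ch}\big[\mathfrak{I}(A)\big] \Big), \]

that is, if some undominated B has lower expected inaccuracy (higher expected goodness) than A by the lights of every possible chance function.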

Parts III and IV are substantially shorter than Parts I and II because the structure of the arguments has already been covered in great detail. In Part III he shows that if someone cares only about minimizing the possible badness of their credal state, then they must have equal credence in every possibility, satisfying the Principle of Indifference. If they care only about maximizing the possible goodness, then they must assign extreme credences of 1 and 0. If they care about both to some degree, then they should have a high credence in one particular possibility and divide the remaining credence equally among the others. This is a striking result, but I have worries about the rationality criteria used here: it's not clear to me why someone would care only about the best and worst possibilities and not the intermediate ones, any more than why they would care about expectations according to chances. But this is where to look if one wants to justify principles for setting initial credence functions.
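A two-world example makes these results concrete (my illustration, using the quadratic measure from above): with exclusive possibilities $w_1$ and $w_2$ and credence $c$ in $w_1$ (so $1 - c$ in $w_2$), the inaccuracy is $2(1 - c)^2$ if $w_1$ obtains and $2c^2$ if $w_2$ does. The worst-case inaccuracy

\[ \max \big\{ 2(1 - c)^2, \ 2c^2 \big\} \]

is minimized at $c = 1/2$, the indifferent credence, while the best-case inaccuracy $\min\{2(1 - c)^2, 2c^2\}$ is minimized at the extremes $c = 0$ and $c = 1$. Weighing the two cases against each other yields the intermediate credences Pettigrew describes.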

Part IV consists of some very interesting discussions of principles relating one's credences at different times. Pettigrew justifies the standard principle of Bayesian conditionalization (and Bas van Fraassen's related "reflection principle") in three different ways. If one has a current credence function, then the plan that maximizes expected epistemic value after receiving evidence is the plan to conditionalize on it. Conversely, if one has a plan for how to update, then any current credence function that is not a weighted average of the possible future credence functions will be evaluated as worse, by all of those future functions, than some credence function that is such a weighted average. (Note that for C to be a weighted average of the possible future credence functions is for those future functions to be the results of conditionalizing C.) And if one is choosing a current credence function and an update plan simultaneously, and wants to maximize the sum of epistemic goodness across times, then any combination that doesn't satisfy the conditionalization relation will be dominated in total goodness by some combination that does. These arguments involve different rules for rationality, with different degrees of plausibility, but it's striking that they all work and support similar conclusions (though with different directions of fit).
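The weighted-average point can be put compactly (my notation): suppose the update plan assigns credence function $c_i$ to evidence $E_i$, where the $E_i$ partition the possibilities and each $c_i$ is certain of its own evidence ($c_i(E_i) = 1$). Then C is a weighted average of the $c_i$,

\[ C(\cdot) = \sum_i C(E_i) \, c_i(\cdot), \]

exactly when each $c_i(\cdot) = C(\cdot \mid E_i)$, i.e. when the plan is to conditionalize; one direction is just the law of total probability.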

Some philosophers have a vision of what they do as starting from unassailable premises and giving an ironclad argument for a conclusion. However, I think we've all seen cases where these arguments are weaker than they seem to the author, and with the benefit of a bit of distance one can often recognize how the premises were in fact motivated by an attempt to justify a conclusion that was chosen in advance. Pettigrew avoids the charade of pretending to have come up with his premises independently of recognizing that they lead to the conclusions of his arguments. Instead, he is open about having chosen target conclusions in advance (probabilism, the Principal Principle, the Principle of Indifference, and update by conditionalization) and having investigated what collection of potentially plausible principles about accuracy and epistemic decision theory will lead to those conclusions.

For someone who is interested in the relations among these principles, and how they might relate to veritism, this book is essential reading. It does not aim to convince, but instead aims to develop an overall view of a part of epistemology and show how it fits together. It highlights the weak points in order to spur the development of new arguments to shore them up. And it ends with a brief list of topics for future work. For the general topic of how evidential principles can be derived from a pure concern with truth, Pettigrew's book represents the state of the art.