Ockham's Razors: A User's Manual

Elliott Sober, Ockham's Razors: A User's Manual, Cambridge University Press, 2015, 314pp., $29.99 (pbk), ISBN 9781107692534.

Reviewed by Daniel Steel, University of British Columbia

2016.01.27


Elliott Sober's book is a welcome and impressive contribution to the current philosophy of science literature on confirmation and scientific reasoning. Ockham's razor is, roughly, the idea that simpler or more parsimonious explanations, hypotheses, or models should be preferred, other things being equal. While the idea that simplicity is a theoretical virtue is familiar to scientists and philosophers, and some philosophical literature exists on the topic, Sober's book is the only up-to-date philosophical monograph on the subject I know of. Moreover, the book is clearly written, broad in scope, extremely astute, and generally unafraid of relevant technicalities while remaining user-friendly. In short, this is a book well worth reading.

The overall structure is as follows: an historical overview of Ockham's razor, a critical examination of leading current approaches, and then an examination of a number of specific arguments from science and philosophy that invoke simplicity or parsimony as grounds for preferring one hypothesis or model over another (which for convenience I will refer to as "parsimony arguments"). Chapter 1 consists of a review of famous statements of, or about, parsimony from the history of philosophy and the history of science. The figures include Ockham himself, Aristotle, Copernicus, Leibniz, Descartes, Newton, Hume, Kant, and Maxwell. Two main themes emerge from this historical overview. The first is that there are, as Sober's title suggests, several versions of Ockham's razor. Thus, Sober distinguishes the razor of silence from the razor of denial. The first recommends agnosticism about causes that are not needed to explain the phenomena, while the second recommends inferring that superfluous causes do not exist (p. 59). The second theme is the tendency, documented among these historical figures, to justify Ockham's razor by appeal to a teleology attributed to either God or nature. According to such a perspective, more parsimonious hypotheses are more likely to be true because God or nature works in the most perfect, elegant, and hence efficient manner to generate a multitude of phenomena from a few basic causes (pp. 59-60). Hence, the twentieth-century shift in science and philosophy towards secularism and away from a teleological conception of nature led to new rationales for Ockham's razor.

The title of chapter 2, "The probabilistic turn," indicates the technical framework that these secular philosophers and scientists most often applied to parsimony. Here Sober distinguishes two central "parsimony paradigms" for explaining how simplicity can be epistemically relevant. "In the first, more parsimonious theories have higher likelihoods. In the second, parsimony is relevant to estimating a model's predictive accuracy" (p. 141). These two paradigms are explained in more detail below. In addition, chapter 2 continues the historical narrative begun in chapter 1. Whereas chapter 1 is, in part, a story of gradual movement away from theistic or teleological justifications of parsimony, chapter 2 chronicles an analogous abandonment of the idea that more parsimonious hypotheses are more probable than more complex ones. The two "parsimony paradigms," then, are both approaches to justifying Ockham's razor that do not rely on this assumption. The remaining three chapters focus on parsimony arguments in specific contexts. Thus, chapter 3 examines the role of parsimony in phylogenetic inferences made by evolutionary biologists, chapter 4 looks at parsimony in arguments about attributing mind-reading abilities to chimpanzees, and chapter 5 reviews a number of parsimony arguments found in the philosophical literature on topics such as atheism, the mind/body problem, and nominalism.

As the subtitle suggests, one of Sober's aims is to provide a user's manual for those who encounter parsimony arguments in science or philosophy. Sometimes such arguments have merit, sometimes they do not, and Sober's book is intended to help its readers tell the difference. Relatedly, the book also aspires to defend several general philosophical points about its topic. Three general philosophical claims about the epistemic relevance of parsimony emerge from Sober's discussion.

  • Reductionism: Parsimony is not an epistemic end in itself but must be justified by appeal to some independent epistemic standard.
  • Non-Circularity: Parsimony arguments need not assume that the world is simple or even that simpler hypotheses are more likely to be true than more complex alternatives.
  • Contextualism: Parsimony arguments inevitably rely on defeasible, context-dependent empirical assumptions.

These three themes are illustrated by what Sober characterizes as the two leading "parsimony paradigms" alluded to above.

The first paradigm relies on the law of likelihood. According to this approach, evidence E favors a simpler hypothesis S over a more complex alternative C if and only if P(E|S) > P(E|C). Justifying Ockham's razor in a specific context, then, requires producing a good argument that this inequality holds there. The likelihood parsimony paradigm is reductionist because parsimony is treated as epistemically significant only to the extent that it parallels well-founded assessments of relative likelihoods. Moreover, it is non-circular, as the approach does not assume that the simpler alternative, S, is true, and does not even assume that P(S) > P(C). And the approach is contextualist because arguments about likelihoods inevitably appeal to probabilistic premises whose plausibility, or implausibility, depends on the particulars of the case in question.
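For readers who want the bare mechanics, here is a minimal sketch, in Python, of a law-of-likelihood comparison. The coin-tossing setup and its numbers are my own toy illustration, not an example from the book; the point is only that the comparison runs entirely on P(E|S) versus P(E|C), with no prior probabilities assigned to S or C.

```python
from math import comb

def binomial_likelihood(p: float, heads: int, tosses: int) -> float:
    """P(E | the per-toss probability of heads is p)."""
    return comb(tosses, heads) * p**heads * (1 - p)**(tosses - heads)

# Evidence E: 5 heads in 10 tosses. S is the point hypothesis p = 0.5;
# C is the point hypothesis p = 0.6. Both likelihoods are well defined
# without assuming anything about P(S) or P(C).
p_E_given_S = binomial_likelihood(0.5, 5, 10)
p_E_given_C = binomial_likelihood(0.6, 5, 10)

print(f"P(E|S) = {p_E_given_S:.4f}")  # 0.2461
print(f"P(E|C) = {p_E_given_C:.4f}")  # 0.2007
print("E favors S" if p_E_given_S > p_E_given_C else "E favors C")
```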

Sober calls the second parsimony paradigm "frequentism." This discussion taps into Sober's earlier work in collaboration with Malcolm Forster on the relationship between the Akaike Information Criterion (AIC) and parsimony (e.g., Forster and Sober 1994). Arguments falling within the frequentist parsimony paradigm attempt to show that a preference for simpler hypotheses can improve expected predictive accuracy. As in the case of the likelihood paradigm, this approach is also reductionist, non-circular, and contextualist. It is reductionist because it treats parsimony as epistemically valuable only to the extent that it tends to enhance predictive accuracy. It is non-circular because it does not assume that simpler models are true or that they are more probable than complex ones. Indeed, frequentism refrains from assigning probabilities to models altogether. Finally, it is contextualist because justifications of claims that a preference for simpler models enhances expected predictive accuracy depend on substantive assumptions that may be more or less plausible depending on the circumstances (pp. 133-135).
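The flavor of the frequentist paradigm can be conveyed with a standard curve-fitting sketch. The data and models below are my own illustration, not Forster and Sober's worked example, and the Gaussian-error form of AIC used is a common textbook simplification: AIC trades goodness of fit against the number of adjustable parameters, so a simpler model can be preferred even when a more complex one fits the sample more closely.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * x + 1.0 + rng.normal(scale=0.3, size=x.size)  # noisy linear data

def aic(y, y_hat, k):
    """AIC under i.i.d. Gaussian errors: 2k + n*ln(RSS/n).

    k counts the adjustable regression coefficients only; the shared
    error-variance parameter is omitted since it cancels in comparisons.
    """
    n = y.size
    rss = np.sum((y - y_hat) ** 2)
    return 2 * k + n * np.log(rss / n)

for degree in (1, 5):  # a line (2 parameters) vs. a quintic (6 parameters)
    coeffs = np.polyfit(x, y, degree)
    score = aic(y, np.polyval(coeffs, x), k=degree + 1)
    print(f"degree {degree}: AIC = {score:.2f}")
# Lower AIC is better: the quintic fits the sample a bit more closely,
# but its extra parameters typically cost more than the improved fit buys.
```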

The likelihood paradigm receives the most attention and is the basis for most of Sober's analyses of parsimony arguments. Sober's approach is typically as follows: a) specify a metric for assessing the comparative parsimony of hypotheses of a particular type (e.g., number of adjustable parameters, number of causes, number of trait changes in a phylogenetic tree, etc.), b) identify a set of sufficient conditions for P(E|S) > P(E|C), where S is more parsimonious than C according to the metric, and c) ask whether those sufficient conditions are plausible. Examples include "Reichenbach's Theorem" for common cause and separate cause explanations (pp. 106-108), the analysis of parsimony in reconstructing phylogenetic trees (pp. 169-175), and the problem of evil (pp. 246-251). One of the main takeaways of these analyses is the positive claim that there can be good epistemic reason to prefer simpler hypotheses. And Sober endorses some specific parsimony arguments, such as the one he reconstructs for the problem of evil. But the contextualist theme is always emphasized: the quality of a likelihood construal of a parsimony argument inevitably depends on the plausibility of the requisite probabilistic assumptions in the given circumstances.
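To illustrate how steps a) through c) can play out, here is a deliberately stripped-down common-cause versus separate-cause comparison in the spirit of the Reichenbach-style analysis. The model structure and parameter values are my own simplification, not Sober's derivation; the sketch shows how the likelihood inequality in step b) can follow from explicit, and contestable, probabilistic assumptions.

```python
# E: two effects, e1 and e2, are both observed.
# CC posits one cause shared by both effects; SC posits two independent
# causes, one per effect. All numbers below are made up for illustration.

c = 0.3    # probability that any given postulated cause is present
p1 = 0.9   # P(e1 | its cause is present); effect absent otherwise
p2 = 0.9   # P(e2 | its cause is present); effect absent otherwise

p_E_given_CC = c * p1 * p2          # one cause must be present
p_E_given_SC = (c * p1) * (c * p2)  # both independent causes must be present

print(f"P(E|CC) = {p_E_given_CC:.4f}")  # 0.2430
print(f"P(E|SC) = {p_E_given_SC:.4f}")  # 0.0729
# Here P(E|CC) > P(E|SC) whenever c < 1, so the more parsimonious
# hypothesis has the higher likelihood; but only given these substantive
# probabilistic assumptions, which is Sober's contextualist point.
```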

In general I found Sober's analyses of specific parsimony arguments (e.g., about the interpretation of experiments on mind-reading among chimpanzees) to be very judicious and insightful. Moreover, I think that his three general philosophical themes, highlighted above, are valuable contributions that should frame current discussions of the topic. However, I do have one dissatisfaction with Ockham's Razors: its failure to engage seriously with current theoretical accounts of Ockham's razor that do not take probability as the fundamental explanatory notion. Kevin Kelly is the best-known advocate of such a position in the current literature, arguing in a number of publications that truth-finding efficiency rather than probability is the key to justifying Ockham's razor (Kelly 2007; Kelly 2010; Kelly and Mayo-Wilson 2010).

Sober briefly considers Kelly's approach, but quickly rejects it for two reasons: (1) that it assumes that hypotheses are tested one by one when statistical hypothesis testing is inherently contrastive, and (2) that it involves a dichotomous decision to accept or reject a hypothesis (pp. 151-152). The first of these objections seems to be a reference to the fact that the application of Kelly's approach to statistical cases (rather than examples in which hypotheses make deterministic predictions about the data) remains a work in progress. It is, I think, entirely legitimate to point out this limitation of Kelly's approach as it currently stands. But it seems unreasonable to dismiss the approach on these grounds since this limitation may not be a serious problem in all cases. Parsimony arguments concerning the Copernican and Ptolemaic models of the solar system might be good examples.

The second objection, I think, is rather unfair, as both the likelihood and frequentist paradigms similarly involve discrete comparisons. Thus, the likelihood approach aims to show that likelihoods favor a parsimonious hypothesis over a more complex alternative (i.e., that P(E|S) > P(E|C)), while in the frequentist paradigm a model might be preferred on the grounds of having a better AIC score than its alternatives. In general, the challenge of justifying Ockham's razor is to specify epistemic grounds for preferring a simpler hypothesis to a more complex alternative. The result of such an exercise is a claim to the effect that the simpler hypothesis, S, should be preferred to the more complex alternative, C. Whether that preference is expressed by saying that S is accepted, favored, or used as a basis for predictions does not seem to be a fundamental difference. In addition, Sober does not consider criticisms Kelly advances against the likelihood and frequentist approaches to parsimony. One of the most important of these is that they rely on probabilistic assumptions that are frequently difficult to justify and often question-begging (Kelly 2010). Again, I would have liked to see an engagement with this corner of the literature on Ockham's razor.

But these critical observations notwithstanding, Ockham's Razors is a great book that philosophers, scientists, and anyone else interested in reasoning about the empirical world would do well to read carefully.

REFERENCES

Forster, M. and Sober, E. (1994). How to Tell When Simpler, More Unified, or Less Ad Hoc Theories Will Provide More Accurate Predictions. British Journal for the Philosophy of Science, 45, 1-36.

Kelly, K. (2007). A New Solution to the Puzzle of Simplicity. Philosophy of Science, 74, 561-573.

Kelly, K. (2010). Simplicity, Truth, and Probability. In P. Bandyopadhyay and M. Forster (Eds.), Philosophy of Statistics (pp. 983-1024). Amsterdam: Elsevier.

Kelly, K. and Mayo-Wilson, C. (2010). Ockham Efficiency Theorem for Stochastic Empirical Methods. Journal of Philosophical Logic, 39, 679-712.