Sandra Mitchell’s Unsimple Truths attempts, in a format intended to be accessible to a broad audience of scientifically literate readers, to show how complexities revealed by modern sciences in general and biology in particular demand a fundamental refiguring of traditional philosophical perspectives on science. The central elements of this traditional perspective, according to Mitchell, are reductionism, a commitment to explanation via universal laws similar to those found in physics, and a conception of causation according to which the impacts of distinct causes can be neatly separated from one another. Mitchell’s own contrasting perspective, which she labels integrative pluralism, emphasizes the importance of explanations that span multiple levels of analysis, that rely on contingent generalizations of varying degrees of stability, and that recognize the complexities inherent in dynamic systems involving a multitude of interacting and context dependent causes. Unsimple Truths extends and streamlines arguments from her 2003 book Biological Complexity and Integrative Pluralism and includes some entirely new material, including a chapter on the policy implications of complexity and a critique of the notion of modularity used in discussions of causation. I found Unsimple Truths to be very clear, provocative and often strikingly insightful, although I did have some critical reactions that I detail below.
Unsimple Truths is organized into an introduction followed by five full chapters. The introduction features a case study of major depressive disorder that illustrates how explanations of complex systems often involve many interacting causes at various levels of analysis, such as genes and social environment, which contribute to shaping an effect. The introduction goes on to set up a contrast between what Mitchell calls “traditional epistemology” of science and integrative pluralism. Mitchell sees traditional scientific epistemology as inspired by the success of Newtonian physics and codified by such 19th-century philosophers as John Stuart Mill and William Whewell (pp. 10-12). According to the traditional picture, science should aim to unify a wide variety of phenomena under a few simple, universal laws. Integrative pluralism, by contrast, insists that explanations of complex phenomena will typically require integrating models at several levels of analysis rather than a reduction to one supposedly fundamental explanatory framework (p. 13). In addition, integrative pluralism stresses the importance of pragmatic considerations in guiding decisions about which level of organization or abstraction to focus on in a given explanatory project. Finally, integrative pluralism emphasizes that many of the complex processes and structures studied by modern science, such as species, ecosystems, and the global climate, are dynamic and evolving, which requires that scientific means for representing and explaining them must continually change as well. Thus, on Mitchell’s picture, science could never be complete, nor could it even converge ever more closely to approximations of completion. The subsequent chapters elaborate this integrative pluralist view of science in greater detail.
In chapter 2, Mitchell defends a concept of emergent properties inspired by research in biology that uses that same term and responds to a well-known critique of emergent properties by Jaegwon Kim. Like Kim, Mitchell accepts “compositional materialism,” according to which everything is composed solely of basic physical components (p. 23). But contrary to Kim, Mitchell argues that compositional materialism is compatible with emergent properties. Kim reasons that if compositional materialism is true, then any causal relationship between properties characterized at a higher level — e.g., being pricked by a pin causes pain — can be explained by facts about the microstructural properties of the system — e.g., a signaling cascade in a neural network. Mitchell challenges a crucial premise in Kim’s reasoning, namely, the assumption that “every material object has a unique complete microstructural description” (cited in Mitchell, p. 28). She argues, quite rightly in my view, that complete descriptions are impossible because representations are inevitably limited by their medium and the cognitive abilities and interests of their users (p. 31). However, a weaker premise could be substituted in Kim’s argument to replace the one that Mitchell objects to. Instead of asserting that there is one microstructural description that can explain every higher level relationship, what Kim’s argument needs is the much more modest claim that for every higher level relationship there is some microstructural description that explains it. Moreover, this weaker claim is compatible with the examples of emergent phenomena that Mitchell cites from the scientific literature, for instance, the elaborate flocking patterns of starlings (p. 35). In these examples, scientific explanations were ultimately given in terms of interactions among the components of the system, for example, a model of birds in a flock moving in response to their neighbors. 
What the scientists seem to mean by the term “emergent” in these cases is not that the property is inexplicable in terms of its components, but that it could not have been predicted on the basis of knowledge of the components alone (p. 36). But emergence in this sense is surely compatible with Kim’s position, since he makes no claims about our abilities to predict the macro behaviors of complex systems from their micro components. Moreover, a number of the features of emergent properties noted by Mitchell — sensitivity to small differences of initial conditions, non-linearity, or feedback loops — are all things that can make a system difficult to predict but which do not appear to preclude explanation. So, the upshot seems to be two conceptions of emergence, one focused on inexplicability in principle and the other on unpredictability in practice, both of which strike me as worthy of philosophical interest.
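The contrast between unpredictability and inexplicability can be made concrete with a toy simulation in the spirit of the flocking models mentioned above. This is my own illustration, not a model from the book: each bird simply adjusts its heading partway toward the mean heading of the others, and global alignment emerges from this purely local rule, even though the outcome is explained entirely by interactions among the components.

```python
# Toy flock-alignment sketch (illustrative only): each bird nudges its
# heading toward the flock's mean heading; alignment emerges from local rules.
import random

random.seed(0)
headings = [random.uniform(0, 360) for _ in range(20)]  # initial headings in degrees

def step(headings, weight=0.5):
    """Move each bird's heading partway toward the mean heading.
    For simplicity, every bird treats all others as neighbors."""
    mean = sum(headings) / len(headings)
    return [h + weight * (mean - h) for h in headings]

for _ in range(10):
    headings = step(headings)

# The spread of headings shrinks by half each step, so after ten steps
# the flock is nearly aligned, even though no bird "aims" at alignment.
spread = max(headings) - min(headings)
print(spread < 1.0)  # True
```

Nothing here is inexplicable in principle; the point is only that the aligned macro-pattern is not read off from any single component's rule.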
Chapter 3 of Unsimple Truths presents Mitchell’s views on laws of nature. The main theme of her approach is to reject the traditional philosophical dichotomy between generalizations that are true as a matter of natural necessity and those that are mere accidents and to replace it with a continuum based on her notion of stability. According to this proposal, the difference between physical laws, such as Galileo’s law of free fall, and biological laws, such as Mendel’s law of independent assortment, is one of degree rather than of kind. Moreover, Mitchell’s approach is motivated by a pragmatic perspective that focuses on the functions laws perform — prediction, explanation, intervention — rather than on a quixotic attempt to identify necessary and sufficient conditions for being a law of nature. I heartily agree with Mitchell’s rejection of the traditional accident-versus-naturally-necessary distinction and with her pragmatic perspective on laws. However, I think her concept of stability could profit from further elaboration. Most of Mitchell’s discussion in this chapter is devoted to explaining how laws are contingent in various ways. But the distinctive characteristic of laws on her account is stability, not contingency. As a result, the reader is left with a number of questions about Mitchell’s proposal. For example, are there several types of stability or just one? If the former, are different varieties of stability pertinent to different scientific roles of laws? How do stable generalizations arise in evolutionarily contingent systems, and how are they maintained? And is there some connection between the source of stability of a biological generalization and which functions it is best suited to perform?
In chapter 4, Mitchell discusses gene knockout experiments and argues that they challenge a concept known as modularity, which some have argued plays a central role in the concept of causation (Hausman and Woodward 1999). Modularity can be defined in reference to structural equation models, which consist of a list of equations in which variables on the left hand side of each equation are causes of the variable on the right hand side. For example, consider this simple model:
ε = X
¬X = Y
¬X = W
X ⊕ (Y ⊗ W) = Z
The variables in these equations are binary (i.e., their value must be either 0 or 1). The ε in the first equation is an error term, and would be associated with a probability distribution. Normally, a structural equation model would also include error terms in the other equations, but I have omitted these for simplicity. The ¬ stands for Boolean negation, while ⊕ and ⊗ stand for Boolean addition and multiplication, respectively. Thus, the bottom equation says that Z equals 1 just in case either X = 1 or both Y and W equal 1. Modularity means that it is possible to intervene to change any one of the equations without altering any of the others. For example, suppose the intervention consisted of forcing X to be 0, which could be represented by “wiping out” ε = X and replacing it with 0 = X. If modularity holds, then we can use the other three equations in the model to compute the resulting value of Z, namely, 1. Mitchell argues that modularity should not be regarded as a conceptual sine qua non of causation but instead as a feature of certain types of causal systems that, when present, simplifies causal inferences. I agree with this perspective on modularity, and I concur that modularity can be a problematic assumption for complex systems. Nevertheless, I do not think that gene knockout experiments are counterexamples to modularity.
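The mechanics of such an intervention can be sketched in a few lines of Python. This is my own illustration rather than anything from the book; in particular, the equations I supply for Y and W (each the negation of X, so that they act as a backup when X is knocked out) are an illustrative choice consistent with the suppression structure described in the surrounding discussion.

```python
# Minimal sketch of a modular Boolean structural equation model and an
# intervention that replaces exactly one equation (illustrative only).

def solve(equations, order):
    """Evaluate the equations in causal order; return all variable values."""
    values = {}
    for var in order:
        values[var] = equations[var](values)
    return values

# The model, written cause-side-first as in the text: epsilon = X;
# not-X = Y; not-X = W; X or (Y and W) = Z.  The error term epsilon
# is fixed at 1 here for simplicity.
model = {
    "X": lambda v: 1,                            # epsilon = X (epsilon set to 1)
    "Y": lambda v: 1 - v["X"],                   # not-X = Y
    "W": lambda v: 1 - v["X"],                   # not-X = W
    "Z": lambda v: v["X"] | (v["Y"] & v["W"]),   # X or (Y and W) = Z
}
order = ["X", "Y", "W", "Z"]

print(solve(model, order)["Z"])  # normal strain: prints 1

# Modularity: intervene by replacing only the equation for X with the
# constant 0, leaving the other three equations untouched, then recompute.
knockout = dict(model, X=lambda v: 0)            # replace epsilon = X with 0 = X
print(solve(knockout, order)["Z"])               # knockout strain: also prints 1
```

Because the intervention rewrites only X's equation, the backup path through Y and W still fires, reproducing the null result that the knockout discussion below turns on.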
Mitchell notes that it is not unusual for an experiment of this kind to find no difference between the normal and knockout strains. Often, this occurs because the targeted gene initiates a causal chain that results in the outcome while suppressing other causal processes that would also generate it. Such examples do indeed create difficulties for causal inference; in particular, they appear to be counterexamples to the common assumption that causal connections generate probabilistic dependence (Steel 2008, pp. 68-75). However, I do not think they are counterexamples to modularity. On the contrary, modular organization facilitates adaptability in the face of disruption because it allows the system to alter a targeted mechanism or find alternative solutions while preserving other functions. Indeed, there is a biological literature that focuses on adaptability as a key to understanding the evolution of modularity in living organisms.1 Moreover, modular structural equations can represent redundancy and robustness. For example, suppose that the model above represents the influence of a gene X on an outcome Z. Let the variable X equal 1 when the gene is present and 0 when it is knocked out. When present, X initiates a process that results in the outcome (i.e., Z = 1), but at the same time it suppresses another set of interactions involving Y and W that also produce the outcome. Thus, this model illustrates how modularity is compatible with the null results that often occur in gene knockout experiments. Of course, real-life scientific examples can be expected to be more complex than this simple model. For example, there might be multiple possible backup mechanisms, some degree of randomness involved in determining which backup is activated, the variables might be continuous rather than binary, and there might be many more than four variables. Yet all of these complications can be accommodated in modular structural equation models.
Mitchell also emphasizes that robustness can result from the ability of a complex system to reorganize some interactions among its components in order to regain an impaired function (pp. 71-72). But it is not clear why this shows that a modular representation of the system is impossible. A causal model like the one given above could represent a situation in which the backup mechanism is, as it were, ready and waiting in the wings, or a situation in which the backup mechanism is actively constructed in response to the intervention. Mitchell objects that a modular representation of a dynamic robust causal system would necessitate accounting for all physically and chemically possible ways the effect could be achieved when the cause is absent (pp. 80-81). But why should this be so? A causal model of a gene knockout experiment, for instance, can limit itself to those backup mechanisms that are biologically feasible for the organism in question.
In chapter 5, Mitchell explores some implications for policy that follow from her views. The main theme of this chapter is a critique of what she calls the “predict-and-act” conception of scientific contributions to environmental policymaking, epitomized by cost-benefit analysis. Predict-and-act insists that science should provide policy makers with confident predictions of outcomes of distinct policy choices, which can then serve as the basis for informed decisions. Mitchell argues that predict-and-act approaches are inappropriate for complex systems that exhibit deep uncertainty, wherein the uncertainty of predictions is difficult, if not impossible, to quantify. In such circumstances, insisting that uncertainty be reduced before any action is taken is a recipe for endless delay. Mitchell’s main counter proposal to cost-benefit analysis is a defense of robust adaptive planning (RAP). Instead of attempting to identify the policy option that produces the optimal balance of expected benefits over costs, RAP attempts to identify strategies that achieve satisfactory outcomes over a wide range of scenarios that are compatible with what we know (pp. 93-94). Often strategies that are robust in this sense are adaptive: they can be modified in the course of implementation as new information becomes available. The robustness of various strategies can be tested through computer simulations, which can also be used to search for unforeseen circumstances, or surprises, that could defeat even the most robust strategies. Mitchell’s advocacy of RAP is a very welcome contribution to discussions of environmental policy, where there is a tendency to focus on shortcomings of cost-benefit analysis without offering any clear alternative analytical procedure. There are, however, a number of questions about RAP that merit further philosophical inquiry. 
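The satisficing idea behind RAP can be illustrated with a small sketch. The strategies, scenarios, and payoffs below are entirely made up for the purpose of illustration: instead of maximizing expected net benefit, we keep only those strategies whose worst-case outcome across the scenarios compatible with what we know still clears a "satisfactory" threshold.

```python
# Hedged sketch of robust (satisficing) strategy selection, in the spirit
# of RAP.  All strategies, scenarios, and payoff numbers are hypothetical.

# outcomes[strategy][scenario] = payoff under that scenario
outcomes = {
    "aggressive mitigation": {"mild": 6, "moderate": 7, "severe": 5},
    "wait and see":          {"mild": 9, "moderate": 4, "severe": 0},
    "adaptive portfolio":    {"mild": 8, "moderate": 7, "severe": 6},
}
SATISFACTORY = 5  # minimum acceptable payoff in any scenario

def robust_strategies(outcomes, threshold):
    """Return the strategies whose worst-case payoff meets the threshold."""
    return [strategy for strategy, payoffs in outcomes.items()
            if min(payoffs.values()) >= threshold]

print(robust_strategies(outcomes, SATISFACTORY))
# -> ['aggressive mitigation', 'adaptive portfolio']
```

Note that "wait and see" has the best payoff in the mild scenario yet is rejected, because its severe-scenario payoff falls below the threshold; this is the sense in which robustness across scenarios, rather than optimality in any one of them, drives the selection.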
For example, RAP relies on a distinction between what is known, which constrains possible scenarios, and what we are uncertain about, which is allowed to vary across scenarios. But the division between what is known and what is uncertain is often both hazy and politically contentious and could, thus, benefit from some explication.
The final chapter functions both as an overview of the main themes of the previous chapters and as an opportunity to elaborate Mitchell’s integrative pluralism further. The chapter examines explanations that combine causal factors and models from multiple levels of analysis, as well as the role of pragmatic considerations in deciding which explanatory factors to emphasize.
Overall, I found Unsimple Truths to be a stimulating presentation of a wide-ranging and sophisticated perspective on science that is both accessible and deeply engaged with current debates in philosophy of science. Thus, objections to the particulars of some arguments notwithstanding, I would recommend this book to professional philosophers of science as well as to readers in general who are interested in the nature of modern science and its relevance to important contemporary issues, such as climate change and explanations of mental disorders.
Hausman, D., and J. Woodward. 1999. “Independence, Invariance and the Causal Markov Condition.” British Journal for the Philosophy of Science 50: 521-83.
Mitchell, S. 2003. Biological Complexity and Integrative Pluralism. Cambridge: Cambridge UP.
Steel, D. 2008. Across the Boundaries: Extrapolation in Biology and Social Science. Oxford: Oxford UP.