2007.12.14

William C. Wimsatt

Re-Engineering Philosophy for Limited Beings: Piecewise Approximations to Reality

William C. Wimsatt, Re-Engineering Philosophy for Limited Beings: Piecewise Approximations to Reality, Harvard University Press, 2007, 450pp., $49.95 (hbk), ISBN 9780674015456.

Reviewed by Robert C. Richardson, University of Cincinnati


This is not a linear book. It is a complex book. Wimsatt begins with the fact that humans are limited beings confronted with a complex world. This has an array of implications for our understanding of our understanding of the world, and also for our understanding of the world. These are the broadest themes running through the book. They are reflexive. Begin with our understanding of our understanding of the world. We are limited beings and that means we must deploy reasoning strategies suited to our limitations, constructing models that are subject to a variety of idealizations. This in turn has implications for our understanding of the world, which will be piecemeal and approximate. Wimsatt aims to reform philosophical practice in a way that reflects our ways of understanding the world we live in, and also informs us about how we should conduct our science. At the same time, our understanding of ourselves is a scientific enterprise, and that informs our philosophical projects concerning how we understand our understanding of the world.

The two lead themes -- that we are limited beings and that we confront a complex world -- percolate through the book, and lead seamlessly to a panoply of topics. The existence of human cognitive limitations leads Wimsatt to reflections on the role of heuristics in human reasoning, on the importance of false models in reaching toward the truth, and on robustness as an epistemic gauge of reality. The fact that our world exhibits a complex structure leads Wimsatt to discuss the significance of laws as opposed to generalizations, and the complex organization evident in natural systems and their levels of organization. The two broader themes come together when he discusses the importance of reduction and its failures, the construction of models, the importance of localization, experimental controls, and even emergence as an inevitable byproduct of the hierarchical organization of natural systems. The two threads interact in a variety of ways. So, for example, the complexity of natural systems means that reductionism will fail because the world will not cooperate, and emergence will become inevitable; however, from a methodological standpoint, starting out as a reductionist is essential because it reveals the importance of organization that we could not otherwise see. This is why I say this is not a linear book. You can begin with almost any theme, and you will be led into the rest.

The book brings together modestly edited versions of some of Wimsatt's most important and influential essays published over the last thirty years. They concern topics such as robustness, heuristics, complex systems, reduction, and emergence. These themes are nicely tied together with a substantial "Introduction" consisting of three new chapters (which make up Part I), two very useful introductions to Parts II and III that tie together the four chapters in each part, a concluding Part IV (chapter 13), and four brief Appendices. The new material is important and useful. Altogether, this is a very substantial and important piece of work. In the end, even those who are acquainted with Wimsatt's work will benefit from seeing the essays in this context. Those who are not yet acquainted with them will benefit even more.

In the remaining space, I'll sort through some of the main themes that permeate Wimsatt's book. I'll also pay attention to the connections between them, since these are as important as the individual themes. I will structure my discussion around the two broad themes I started with -- human limitations and natural complexity. This will naturally lead to a number of other issues.

Heuristic procedures, as Herbert Simon thought of them, are "rules of thumb" guiding our decisions and choices. Wimsatt identifies a wide variety of properties characteristic of heuristics (pp. 39, 68, 76-77, appendix A). The key contrast is with algorithms. Whereas algorithms are procedures which are guaranteed to yield a correct solution to problems for which they are designed, heuristic procedures are not. (Sometimes, computer scientists think of algorithms as procedures that will, at least, stop, and not as procedures guaranteed to yield the correct solution. Heuristic procedures are not even guaranteed to stop. Wimsatt leans on Simon's understanding of heuristics.) The advantage heuristics offer to us lies not in accuracy, but in economy. Heuristics are less demanding in terms of time, memory, and computational requirements. Still, if they are useful they are accurate enough in the key cases to which they are applied. Moreover, errors introduced by relying on heuristics will tend to be systematic. They will fail in characteristic classes of cases, and the errors are predictable. We are not, Wimsatt says, "LaPlacean demons" for whom computational requirements are incidental. The "Re-Engineering" Wimsatt recommends derives from this simple observation together with the thought that much philosophy proceeds as if we were just such computationally unencumbered decision makers. Wimsatt writes:

A more realistic model of the scientist as problem solver and decision maker includes the existence of such limitations and is capable of providing real guidance and a better fit with the actual practice in all of the sciences. In this model, the scientist must consider the size of the computations, the cost of data collection, and must regard both processes as 'noisy' or error prone. (p. 78)

Wimsatt doesn't directly explore the human limitations which would define this more realistic model of the scientist. He does explore an array of methods deployed within science, and problems that bear the earmarks of heuristic procedures. Models commonly deployed within the biological sciences, for example, embody systematic idealizations that are justified primarily because they simplify problem solving. The ubiquity of such models has a number of implications that are important.

First, the fact that there are systematic failures facilitates the idea that the elaboration of models is a self-correcting process. "False models" are, in Wimsatt's terms, the "Means to Truer Theories" (ch. 6). Linkage mapping in early twentieth-century genetics is a case in point. Linked genes violate the naïve Mendelian ratios, but they do so in a systematic way, exhibiting non-random patterns of inheritance. On a linear model of the chromosome, with the chance of recombination between a pair of genes dependent on their separation, the deviation from the naïve Mendelian ratio becomes a measure of linkage. If three genes A, B, and C are arrayed on a single chromosome in that order, they should all exhibit some linkage, and the chance of recombination between A and C should be the sum of the chance of recombination between A and B and that between B and C. This was the assumption Morgan and Sturtevant began with. However, the simple additive model also fails, even though it is approximately true: the experimental data showed that the recombination frequency between the more distant genes is sometimes less than the other recombination frequencies would lead one to expect. This in turn can be explained as a consequence of multiple crossings over, which imply that the observed recombination frequency systematically underestimates the actual distance, so the simple additive rule breaks down. In each case the pattern of failure became the lever for constructing a more adequate model.
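The additive model's systematic shortfall can be put in numbers. A minimal sketch, with invented recombination frequencies, using the standard correction that assumes no interference between crossovers:

```python
# Invented frequencies for illustration; the no-interference correction
# r_AC = r_AB + r_BC - 2 * r_AB * r_BC is the standard textbook formula.
r_ab = 0.10  # recombination frequency between genes A and B
r_bc = 0.15  # recombination frequency between genes B and C

# Naive additive model: map distance simply sums.
additive_prediction = r_ab + r_bc  # 0.25

# A double crossover (one exchange in each interval) restores the
# parental A-C combination, so it goes uncounted as an A-C recombinant.
expected_observed = r_ab + r_bc - 2 * r_ab * r_bc  # 0.22

print(additive_prediction, expected_observed)
```

The observed frequency falls short of the additive prediction by exactly the double-crossover term, which is why the shortfall grows with distance and why the failure of the additive rule is informative rather than merely noisy.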

Second, Wimsatt turns the fallibility of the procedures into an epistemic advantage. The key concept is robustness, which is allied to Donald Campbell's idea of triangulation. Wimsatt says "Things are robust if they are accessible (detectable, measurable, derivable, definable, producible, or the like) in a variety of independent ways" (p. 196). If we have available a number of independent but uncertain pieces of evidence which all point toward the same conclusion, then that conclusion may in fact be extremely probable even though each piece of evidence is uncertain. By contrast, even a highly reliable serial procedure will have some likelihood of failure, since the likelihood of error over the whole procedure is roughly the sum of the likelihoods of error at each step. It is a simple exercise in probabilities to see that the former may be superior to the latter. Think of adding a long series of numbers: at each step there is a small chance of error, but the likelihood of at least one error can be quite high. With independent but unreliable pieces of evidence, by contrast, the likelihood of error is the likelihood that all the evidence is wrong. If there are many witnesses to a crime, and they all agree on the details, the chances of error are much reduced.
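The simple exercise in probabilities can be carried out directly. A minimal sketch with invented error rates:

```python
# Invented error rates, for illustration only.
p_step_error = 0.01   # small chance of error at each step of a serial procedure
n_steps = 50
# A long serial chain fails if any single step fails.
serial_failure = 1 - (1 - p_step_error) ** n_steps

p_line_error = 0.3    # each independent line of evidence is quite unreliable
n_lines = 5
# The convergent conclusion fails only if every independent line is wrong.
robust_failure = p_line_error ** n_lines

print(round(serial_failure, 3), round(robust_failure, 5))  # 0.395 0.00243
```

Five independent witnesses who are each wrong thirty percent of the time jointly outperform a fifty-step derivation whose steps are each 99% reliable, which is just Wimsatt's point: robustness can beat reliability.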

Of course, this depends critically on the various measures being independent of one another. When the measures are not genuinely independent, we have a case of "pseudo-robustness." These are artifacts. The biases in models of group selection are an interesting case that Wimsatt has discussed in many places. Commonly, these models assumed that all the migrants from groups are pooled, and then new groups are formed by randomly drawing from this migrant pool. The assumption is one that makes the models analytically tractable. However, it virtually guarantees that there will be no group selection. The fact that these models all indicate that there is no group selection does not really give independent reasons for that conclusion (pp. 84 ff.).

There may seem to be some irony here: after beginning with a demand for simple, efficient heuristics consonant with our limitations, Wimsatt in the end defends the resulting picture as one that is actually likely to yield correct conclusions. So we begin with flawed procedures, and end up with something more reliable than the best available algorithms. Ironic? Perhaps, but also realistic.

The other direction from which we can approach Wimsatt's themes begins with the fact that the natural world exhibits a complex structure. This is, of course, another theme rooted in Herbert Simon's work. As Wimsatt says, "… levels of organization are a deep, non-arbitrary, and extremely important feature of the ontological architecture of our natural world, and almost certainly of any world that could produce, and be inhabited or understood by intelligent beings" (p. 203). On the face of it, there is a hierarchical order to this structure (ch. 10). Organisms consist of cells, cells consist of molecules, molecules consist of atoms, etc. To some extent, this order reflects differences in size, which in turn affects the forces that act on objects. I am inclined to think of size and mereological structure more as ways of ordering levels and less as ways of defining levels. Wimsatt offers a more attractive way of individuating and defining levels. He says, "Levels of organization can be thought of as local maxima of regularity and predictability in the phase space of alternative modes of organization of matter" (p. 209). This is, as he says, the closest he gets to defining levels.

Acknowledging a hierarchical organization in the world raises questions concerning the importance of reduction in scientific practice. Wimsatt recognizes the attraction of reduction, and wants to embrace it without sacrificing the more complex organization he sees in the natural world. As Alan Donagan once said, he is making reductionism respectable. Wimsatt distinguishes two very different enterprises that are often conflated in discussions of reduction. On the one hand, there are intra-level cases, in which one theory replaces its predecessor. On the other hand, there are inter-level cases, in which theories at different but adjacent levels are stitched together. Wimsatt presses that these two types of cases engender very different dynamics. In the case of succession, theories may be more or less similar, and replacement occurs when there is less rather than more similarity. Elimination is an option, and given that such similarity mappings are intransitive, it is probably inevitable. The inter-level case is more central to articulating the implications of a hierarchy of organization. Crucially, instead of replacement, what we see is co-evolution of theories and the development of models that span more than one level of organization. The aim is in the end to enhance explanation. Sometimes we can explain what we see by appealing to higher levels of organization; sometimes we need to look lower. Mendelism certainly captures significant patterns in the world. Acknowledging linkage, and chromosome structure, explains some things that Mendelism does not. Shifting to a molecular level explains yet different things. The theories that capture these mechanisms are not competitors. Instead, they supplement and enhance one another. Identities play a key role in this co-evolution of theories. "Morgan's gene is the molecular gene, at a different level of description, and conversely" (p. 265). This is not a matter of similarity, but identity. 
The theories are actually dissimilar, and our understanding of the gene has changed over time; but the thing described is just the gene. Wimsatt presses that assuming identities has significant heuristic power. The principle at work is Leibniz' Law: "Two things are identical if and only if any property of either is a property of the other" (p. 266). One can quarrel with the formulation of the idea, but the thought is unimpeachable. Assuming the Mendelian gene and the molecular gene are one is a powerful and important idea: whatever traits we discover in exploring Mendelian genes must also be traits of molecular genes. So we discover, using Mendelian tools such as breeding experiments, that some alleles are dominant and others are recessive. Since there is no Mendelian explanation of dominance, we need to explain it some other way. Likewise, using Mendelian tools we can discover mutations; since there is no Mendelian explanation of mutations, we need to explain them some other way. Molecular genetics fills the gaps, but it does not supplant Mendelian genetics. Identifications are the tool for expanding our explanatory resources.

Let's assume with Wimsatt that a reductive explanation is one that exhibits a mechanistic explanation of some higher-level property (p. 275). There are various ways such mechanistic explanations might fail. One possibility is that there are extra-systemic factors governing system behavior. This is in fact quite common, even in cases that are paradigms of inter-level mechanistic explanation, such as the lac operon. This is not the possibility Wimsatt most relies on, though he acknowledges its importance. The second possibility is that organization matters more than composition. In such cases Wimsatt sees emergence. Rather than asking how we recognize emergence, Wimsatt inverts the question in a way that is arresting. He asks "what conditions should be met for the system property not to be emergent" (p. 277). If a system property is not emergent, then, he contends, it should not depend fundamentally on the organization of the system's constituents. In the limit, the system must be aggregative: parts must be intersubstitutable; changes in the number of parts must result only in quantitative differences in system behavior; systemic properties must be invariant under reaggregation; and the components must affect system properties additively. These are stringent constraints, only infrequently met in natural systems. Even the paradigm systems of classical statistical mechanics fail this test. That would not trouble Wimsatt. Failures of aggregativity are tools, heuristics, for revealing organization and organizational influences; accordingly, they are powerful tools for constructing better theories, for getting nearer to the truth. That is the way they function. The idealization of the gas laws and statistical mechanics gives us leverage when we want to understand such things as the molecular structure of actual non-ideal gases. 
The idealization of panmixia in the principles of population genetics gives us leverage when we want to understand such things as the actual structure of natural populations.
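Wimsatt's amplifier example, which comes up again below, can be given a toy rendering (the parameters are invented, not his): each component is linear, yet the chain's accumulated noise depends on the order of the components, so a system property fails to be invariant under rearrangement of the parts.

```python
def chain(signal, amps):
    """Run a signal through amplifiers in order; each stage amplifies
    what it receives (including upstream noise), then adds its own noise."""
    out, noise = signal, 0.0
    for gain, own_noise in amps:
        out = gain * out
        noise = gain * noise + own_noise
    return out, noise

a = (10.0, 1.0)  # high gain, noisy (invented parameters)
b = (2.0, 0.5)   # low gain, quiet

sig_ab, noise_ab = chain(1.0, [a, b])
sig_ba, noise_ba = chain(1.0, [b, a])

print(sig_ab, sig_ba)      # 20.0 20.0 -- the linear signal path is order-independent
print(noise_ab, noise_ba)  # 2.5 6.0  -- the noise is not: organization matters
```

Each amplifier is perfectly linear, and the amplified signal is the same either way; yet the noise output distinguishes the two arrangements, so the parts are not intersubstitutable and the aggregativity conditions fail.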

Wimsatt argues that the failure of any one of the four conditions on aggregativity is sufficient to issue in emergent properties; that is, satisfying all four is necessary for reductive explanation absent emergence. This is one of the few places where I am less than sympathetic with his conclusions. The four conditions are surely jointly sufficient for a reductive explanation of a system property; just as well, they guarantee the non-emergence of that property. Wimsatt's own accounting allows that mass is an aggregative property, lacking emergence. I suppose there are a few others, such as counts of population size, body counts, or the national debt, but not many. Indeed, Wimsatt argues that failures of aggregativity are the rule rather than the exception (pp. 277 ff.). Wimsatt is not at all oblivious to the worry that his standard for emergence is promiscuous; he explicitly acknowledges the concern. The constraints imposed by aggregativity are, in any case, powerful heuristic guides to the construction of better theories, and revealing about the importance of organization. In the end, though, I am inclined to think the four conditions are not on a par. Intersubstitutability, for example, is a very strong limitation. Cooks know that it is important to add the flour to the melted butter in order to form a roux; it won't do to reverse the order, and we cannot just substitute anything at hand for the butter. The fourth condition, however, is the crucial one. As Wimsatt's own discussion indicates (pp. 283 ff.), aggregativity can fail because of the distinctive organization of a system even when its components are linear. To use his example, amplifiers in a sequence do not behave in a way that allows for intersubstitution, though they are nonetheless linear, insofar as the individual amplifiers behave linearly. I'm not convinced this is enough for emergence. 
In non-linear systems, by contrast, there is feedback, and system properties depend on component properties in a way that is unpredictable from component properties alone. In some cases, the dependence on context runs so deep that even the properties of the components cannot be predicted apart from the systemic context, or from the behavior of those constituents outside the system. I suspect this better fits the conditions for genuine emergence of systemic properties. Perhaps, though, Wimsatt could respond that it only highlights one category of emergent properties. If Wimsatt is right, our world is resplendent with emergent properties. There is an attraction to desert landscapes, which I appreciate, having grown up in one. There is also an attraction to the complexity we see in jungle landscapes and coral reefs. But then desert ecosystems are much less simple than the Quinean vision suggests. Wimsatt's vision is, in the end, the more engaging one.