Simulation and Similarity: Using Models to Understand the World

Michael Weisberg, Simulation and Similarity: Using Models to Understand the World, Oxford University Press, 2013, 224pp., $65.00 (hbk), ISBN 9780199933662.

Reviewed by Eric Winsberg, The University of South Florida

2013.03.32


It has now been over fifty years since Patrick Suppes and others began to bring the central importance of models in science to the attention of the philosophy of science community. Understanding the construction, structure, function, and epistemology of models has since become one of the central topics in philosophy of science. And as philosophers of science turn more of their attention toward a wider array of scientific disciplines -- especially disciplines, such as climate science, that inform matters of public interest and policy and in which modeling plays a more prominent role than high theory -- having a satisfactory account of modeling in science will become more and more important.

Over the last decade or so, Michael Weisberg has, through a number of widely discussed and influential articles, established himself as one of the most important new voices in the philosophical literature on scientific modeling. This book of ten chapters brings together some of that work, enhances its clarity and scope, and provides a clear and compelling account of Weisberg's vision of the role of "modeling and idealization in modern scientific practice" (p. 4). It is lively and well-written, and it should be accessible to novice audiences as well as informative and provocative to disciplinary insiders. It skillfully makes use of a relatively small set of carefully explained and not-overly-complicated examples to give an account that succeeds in being sophisticated and attentive to the details of scientific practice without getting overly mired in the details of the "case studies" that sometimes plague the literature on scientific modeling.

After a brief introductory chapter, chapter 2 provides a taxonomy of models that divides them into three fundamental classes: concrete models, mathematical models, and computational models. Weisberg has a clear favorite exemplar of each of these: the San Francisco Bay-Delta Model, the Lotka-Volterra model of predator-prey relationships, and Schelling's model of urban segregation, respectively. He offers this taxonomy not just as a convenience, or as a descriptive claim, but as "the correct taxonomy . . . at the epistemic level" (p. 20). What this means is that "a philosophical account of models and modeling needs these three categories to account for modeling as it is practiced in contemporary science."
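For orientation (this gloss is mine, not Weisberg's), the Lotka-Volterra model is the coupled pair of differential equations for a prey population x and a predator population y:

```latex
\frac{dx}{dt} = \alpha x - \beta x y, \qquad
\frac{dy}{dt} = \delta x y - \gamma y
```

Here α is the prey's intrinsic growth rate, γ the predators' death rate, and the xy terms couple the two populations through predation.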

I'm a bit skeptical of this claim. Its success depends on assimilating two other categories of models in the literature to his tripartite taxonomy: model organisms, and what Stephen Downes (1992) has called idealized exemplars. According to Weisberg, both are examples of concrete models. But I think this overlooks certain features that these models have that are, epistemically, importantly different from concrete models. Compare the use of Drosophila melanogaster to an orrery -- a canonical concrete model. Weisberg would have us think that the only interesting difference is that the second one is constructed, while the first one comes off the shelf. But this overlooks the fact that what justifies the inferences we can draw from the former is entirely different from what justifies the inferences we can draw from the latter. We think that what we learn about genetics from D. melanogaster is applicable throughout the plant and animal kingdoms because we have a particular kind of antecedent reason for thinking it is relevantly similar to all those other creatures -- they all have the same evolutionary history, or are all of a kind, or all use the same DNA-based genetic system, or something like that. We think that we can learn the motions of the planets from an orrery because we think it has been correctly built to encode veridical knowledge about the planets and their orbits. An orrery can be wrong in a way that D. melanogaster cannot. Of course, we can still draw incorrect inferences from a model organism. But I would argue that they fail for entirely different kinds of reasons[1] than do the inferences we might draw from someone's geocentric orrery.

Next consider Weisberg's claim that Downes' example of the textbook model of the eukaryotic cell is just a "concrete model, albeit one that probably never has been built." I think this overlooks the ways in which models can be used to make inferences. Notice that Weisberg's three canonical models -- the San Francisco Bay-Delta Model, the Lotka-Volterra model, and Schelling's model -- all produce predictions, given some input, mechanically. Making inferences with Downes' model requires the user of the model to have an implicit understanding of it that allows them to do something akin to mentally simulating its behavior. This too is a basic difference that should not be glossed over. The point seems especially important given what Weisberg says about the "folk ontology of models" in chapter 4. There, he explicitly states that the mental pictures that are the "aids to thinking about the model . . . are not part of the model itself" (p. 68). But without mental pictures that aid us in thinking about the kinds of models Downes has in mind, they do not generate inferences in the relevant way that a model needs to do. Many such idealized models, like Maxwell's mechanical models of the ether, are meant to license dynamical inferences about their target systems. But for them to do so, their users must know how to reason about how they should be expected to evolve. They do not mechanically generate their own dynamics, as the San Francisco Bay-Delta Model does. Maxwell and his contemporaries could have disagreements about how his models would behave. Concrete models are not like this -- they behave however they behave.
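The sense in which such models run "mechanically" can be made vivid with a toy sketch. The following is a hypothetical one-dimensional variant of Schelling-style segregation dynamics of my own devising (not Weisberg's or Schelling's actual setup, which uses a two-dimensional grid): given an initial configuration, the dynamics unfold without any interpretive work on the user's part.

```python
import random

def schelling_1d(n=30, frac_empty=0.2, threshold=0.5, steps=500, seed=1):
    """Toy 1-D Schelling-style segregation dynamics (illustrative only).

    Cells hold 'A', 'B', or None (empty). An agent is unhappy when
    fewer than `threshold` of its occupied neighbors within distance 2
    share its type; each step, one unhappy agent moves to a random
    empty cell. The user supplies nothing but the initial parameters.
    """
    rng = random.Random(seed)
    n_empty = int(n * frac_empty)
    n_agents = n - n_empty
    cells = (['A'] * (n_agents // 2)
             + ['B'] * (n_agents - n_agents // 2)
             + [None] * n_empty)
    rng.shuffle(cells)

    def unhappy(i):
        # Occupied neighbors within distance 2 on the line.
        neigh = [cells[j] for j in range(max(0, i - 2), min(n, i + 3))
                 if j != i and cells[j] is not None]
        if not neigh:
            return False
        return sum(c == cells[i] for c in neigh) / len(neigh) < threshold

    for _ in range(steps):
        movers = [i for i in range(n) if cells[i] is not None and unhappy(i)]
        if not movers:
            break  # everyone is satisfied: a (typically segregated) equilibrium
        empties = [i for i in range(n) if cells[i] is None]
        i, j = rng.choice(movers), rng.choice(empties)
        cells[i], cells[j] = cells[j], cells[i]
    return cells
```

Nothing here depends on a user mentally simulating the model: the output configuration is generated by the rules themselves, which is just the contrast the review draws with Downes' textbook cell or Maxwell's ether models.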

Chapter 3 is about "The Anatomy of Models." Here Weisberg argues that the fundamental components of a model are its structure, the model description that articulates that structure, and a construal, which interprets the model's structure. I do not have the space here to delve into all the details of the construal. It is one of the richest and most careful parts of the book. The construal includes four sub-components (assignment, scope, and two kinds of fidelity criteria) that tell you what systems the model is supposed to be about, in what respects, and to what degree of accuracy. In my view, Weisberg absolutely nails this part of the account. This is probably the most rewarding chapter of the book -- especially when it is subsequently applied to some of the questions raised in chapter 4 and others in chapter 8. There has always been a puzzle for the view, despite its obvious appeal, that scientific models could be mathematical structures. How could something as sparse as a mathematical structure do more than merely represent some phenomenon in an empirically adequate way? How, for example, could a mathematical structure represent causal dependencies in a system? These are the kinds of problems Weisberg's notion of a construal solves by spelling out, in detail, how models can be interpreted structures.

In chapter 4, Weisberg uses the apparatus he has carefully constructed in chapter 3 to go after what he calls the "fictions view" of mathematical modeling associated with Roman Frigg, Peter Godfrey-Smith, Adam Toon, and others. Weisberg's attack on the fictions view comes in two steps. The first is to show that the view has four problems that it cannot overcome. The most forceful of these, in my opinion, are the problems of inter-scientist variation and of the limited representational capacity of fictions. But the second part of the attack is the part I find especially compelling. This is where Weisberg shows that the work he did in chapter 3 (along with the added component of a "folk ontology of modeling") can answer all of the problems that the fiction view was designed to solve -- including the one I mentioned in the last paragraph. On my view, the burden has always been on proponents of the fictions view to show that their claims about models -- counter-intuitive as they are -- are necessary for understanding various aspects of scientific practice. In this respect, Weisberg has eroded the ground they stand on.

The next big chunk of the book, from chapters 5 to 9, turns from the nature of models to the practice of modeling. The discussion is divided, through chapters 5, 6, and 7, between target-directed modeling, idealization, and modeling without a specific target, respectively. These three chapters will primarily be of interest to the disciplinary insiders I mentioned above. Still, this is where Weisberg really does the work of defending his conception of modeling as a distinctive kind of scientific hypothesis-making.

Chapter 8 details Weisberg's account of the relationship between models and their real-world targets: similarity. Chapter 9 covers his now well-known account of model robustness. Chapter 10 is a brief conclusion that summarizes the main claims of the book. I offer a slightly mixed review of the last two substantive chapters. On the one hand, there is much to commend about the positive account of model-world relationships in chapter 8. His arguments against what he calls "model-theoretic accounts" and in favor of similarity accounts are forceful and compelling. And his detailed account of the similarity relation, drawing skillfully on his account of construals from chapter 3, is extremely illuminating and, in my opinion, successful. The idea that "similarity" is the right notion for capturing the model-world relation has been much maligned as overly simplistic and empty. Everything, it is often said, is similar to everything else in some respects. But Weisberg deftly shows that fancier notions like isomorphism, partial isomorphism, and homomorphism are mired in intractable problems. And he shows that once models are understood to be interpreted mathematical structures (using his notion of a construal), a very detailed and workable notion of similarity can be articulated.

On the other hand, chapters 8 and 9, covering topics central to the epistemology of modeling, brought to the foreground for me what I think is the greatest weakness of the book: the virtual absence of any discussion of the relation between models and theory. I am somewhat torn about this. To some degree, this is a matter of emphasis, and one cannot fault a book for focusing on one topic rather than another. And it has been part and parcel of the turn to modeling that I discussed in the opening paragraph to decry the overemphasis of theory in the philosophy of science. This has mostly been a good thing. Finally, I think Weisberg is keen to show the extent to which modeling is its own kind of activity.

But throughout the book, and especially in chapters 8 and 9, there is the promise that the book's primary concern is "with the epistemic level" (p. 20). Yet none of Weisberg's three canonical examples relies much, if at all, on theory.[2] And a model's connection, if any, to theory plays no role at all in where the model falls in his taxonomy in chapter 2. As a result, little of the book discusses the role of theory in credentialing scientific models. This might leave the reader with the impression, especially after reading chapters 2, 8, and 9, that one can understand the epistemology of modeling without paying careful attention to the relation that a model has, if any, to some background theory. And this strikes me as profoundly misleading. Seat-of-the-pants phenomenological models can be illuminating and accurate, but they do not have the same epistemological status as, say, the model of the hydrogen atom that is a principled application of quantum mechanics, which is in turn different from a semi-empirical model that mixes quantum mechanics with some seat-of-the-pants elements.

Chapter 9 is a retelling of the central claims and arguments of Weisberg's now widely cited and widely discussed paper on robustness analysis, adding material from his work with Ken Reisman on the discovery of the Volterra principle. Weisberg's central claim is a rebuttal of Orzack and Sober's claim that "robustness analysis adds very little to scientific enquiry" and to confirmation. Weisberg outlines three different kinds of robustness: parameter robustness, structural robustness, and representational robustness. The discussion is clear and forceful, and it has provoked a great deal of discussion over the last several years. Unfortunately, Weisberg does not take the opportunity to respond to some of this discussion. I am thinking, in particular, of the claims of Odenbaugh and Alexandrova (2011) that Weisberg has exaggerated the power of robustness analyses, and of the debate between Lloyd (2009) and Parker (2009, 2011) over the importance of Weisberg's account of robustness analysis for understanding the confirmation of climate models.

Finally, no book review would be complete without one or two worries about inaccuracies. Here is mine: in chapter 7 we are told that Feynman's famous ratchet and pawl machine "deepens our understanding of why perpetual motion machines are impossible" and "could also tell us which laws of nature would have to change in order for such a machine to be constructible" (p. 126). Weisberg claims that we "learn about the nomological dependence between energy conservation and perpetual motion machines" (p. 128). But here there are two confusions. First, there is no interesting connection between conservation of energy, simpliciter, and the impossibility of perpetual motion machines.[3] Second, while the impossibility of these machines follows from the phenomenological second law of thermodynamics, that law is only approximately true, and the connection between the aforementioned impossibility and the microscopic laws is not perfectly well understood, if it exists at all. That is, as best we know, our microscopic laws do not actually forbid perpetual motion machines in principle -- even though they almost always preserve the second law. This, after all, is one of the main lessons of another model-without-a-target: Maxwell's Demon. What does Feynman's model actually teach us? I would say that it teaches us, via a kind of Kuhnian exemplar, how to find the flaws in purported perpetual motion machines, which we have every reason to think are in practice impossible to build.

I have focused in this review on the points in the book with which I have the strongest agreement and disagreement, and on some of the issues that I wish Weisberg had addressed in more detail. However, my disagreements should be taken to show the richness of the account, rather than to detract from its overall merits. The book is a valuable contribution and a welcome addition to the philosophical literature that should be read by anyone with a serious interest in scientific modeling and in philosophy of science more generally.

REFERENCES

Albert, D. (2000). Time and Chance. Cambridge, MA: Harvard University Press.

Downes, S. M. (1992). "The importance of models in theorizing: a deflationary semantic view." PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, 1, 142-153.

Lloyd, E. (2009). "Varieties of support and confirmation of climate models." Aristotelian Society Supplementary Volume, 83(1), 213-232.

Odenbaugh, J., and Alexandrova, A. (2011). "Buyer beware: robustness analyses in economics and biology." Biology and Philosophy, 26, 757-771.

Parker, W. (2009). "Confirmation and adequacy-for-purpose in climate modelling." Aristotelian Society Supplementary Volume, 83(1), 233-249.

Parker, W. (2011). "When climate models agree: the significance of robust model predictions." Philosophy of Science, 78(4), 579-600.



[1] Of course, genetically engineered model organisms such as "knock-out" mice complicate the picture, but not in a way that will help proponents of a simple taxonomy.

[2] Actually, theory plays a substantial role in the Bay-Delta model. We trust it in part because the people who made choices of how to scale it down, and implement it in other ways, had good theoretical knowledge of fluids. But this aspect of its credentialing is somewhat underdiscussed.

[3] The impossibility of perpetual motion machines is a thermodynamic phenomenon, and to the extent that we understand the connection between thermodynamic irreversibility and microdynamics, it surely has to do with far more specific features of those dynamics than mere energy conservation. It has to do with things in the neighborhood of chaotic mixing, or the fibrillation of state-space volume by the dynamics, or something like that. See Albert (2000, pp. 107-109) for a good discussion of the ratchet and pawl model, Maxwell's demon, and the possibility of perpetual motion machines.