This is an important collection of very good to excellent chapters, most by well-known scholars and researchers; those less well known will not remain so for long. I have investigated, and written and lectured on, mathematical modelling and model-theoretic semantics for some 30 years, and I learned a great deal from this collection. As its title indicates, the chapters explore aspects of three broad issues: models, simulations and representations. Although each is assigned a separate "Part" in the book, there are numerous interconnections among them, and these interconnections are explored by a number of the contributors; Patrick Suppes explores models and simulations in the context of the neurosciences and Ronald Giere explores the role of physical models in representation, to give but two clear examples. In the preface, the editors provide a succinct and accurate summary of each of the contributions. So anyone wishing to determine quickly whether the collection covers issues in which she or he is interested, and moreover covers them in a congenial way, can find out by reading 4.25 pages.
Before turning to a more detailed examination of some of the contributions that I found especially interesting, it is worth noting that a significant number of the chapters require a grasp of mathematics; moreover, different domains of mathematics are employed in different chapters. The required mathematical familiarity is not deep but also not trivial. The most challenging chapter, in this respect, explores aspects of algebraic quantum field theory; that said, it is a rich and penetrating chapter and worth a careful read, which is why it is among the ones I discuss below. For almost all the chapters, a comfort with symbolism is required. This is not surprising. The specific sciences employed either as examples of a philosophical thesis or as a basis for generating philosophically interesting issues could not adequately serve those purposes without the mathematics employed. The authors' main interests are philosophical and, hence, each has attempted to employ the minimum formal apparatus required.
Although, understandably, it has been placed in "Part II: Simulations", it is appropriate to begin with comments on Suppes' "Models and Simulations in Brain Experiments"; appropriate because Paul Humphreys dedicates the volume to him, with the simple comment "pioneer", which indeed he is. Those of us who have been in awe of Suppes' work in logic, psychology, experimental philosophy, philosophy of science -- especially what has become known as the semantic view of scientific theories -- and many other areas in which he has been a pioneer, will not be disappointed by this paper. The only other person in the last six decades of the twentieth century whom I can think of as occupying such stature and warranting the description "pioneer" is John von Neumann.
In his chapter, Suppes is principally interested in exploring the experimental basis for observing brain activity and in identifying the problems of interpreting the results. As has become his hallmark, he grounds the examination in two experiments. These provide a rich basis for understanding the process and the challenges of interpretation. I found the final part of the paper, involving a simulation, groundbreaking. The starting point for the exploration is stimulus-response (S-R) models, well studied since the 1950s. The goal is to reveal the physical basis of how the brain engages in computations. The S-R model developed using weakly-coupled phase oscillators is mathematically rigorous and elegant. Although only one element in a fuller understanding of how the brain physically produces and comprehends language -- and a somewhat programmatic one at this point -- it demonstrates admirably the role of simulations in closing in on the goal.
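Suppes' actual construction is not reproduced in the review, but for readers unfamiliar with weakly-coupled phase oscillators, a minimal sketch of the standard (Kuramoto-form) model may convey the flavour; everything here -- parameter values, coupling strength, function names -- is my own illustration, not the chapter's:

```python
import cmath
import math
import random

def simulate(n=50, coupling=2.0, dt=0.01, steps=2000, seed=0):
    """Euler-integrate n weakly coupled phase oscillators:
    dtheta_i/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i).
    All parameter values are illustrative, not Suppes' own."""
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    omega = [rng.gauss(1.0, 0.1) for _ in range(n)]  # natural frequencies
    for _ in range(steps):
        # Mean-field trick: sum_j sin(theta_j - theta_i) = Im(z * e^{-i*theta_i})
        z = sum(cmath.exp(1j * t) for t in theta)
        theta = [t + (w + coupling * (z * cmath.exp(-1j * t)).imag / n) * dt
                 for t, w in zip(theta, omega)]
    return theta

def order_parameter(theta):
    """|mean of e^{i*theta}|: near 1 means phase-locked, near 0 incoherent."""
    return abs(sum(cmath.exp(1j * t) for t in theta) / len(theta))
```

With coupling well above the synchronization threshold, initially random phases lock together -- the kind of collective behaviour such oscillator models exploit in modelling neural activity.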
The volume opens with a chapter by Tarja Knuuttila and Andrea Loettgers. Alluding to Kuhn, they title their paper, "The Productive Tension: Mechanisms vs. Templates in Modeling the Phenomena". Their focus is on the interdisciplinary exchange of models. The example on which they concentrate is the Lotka-Volterra Model, which is extensively used in population ecology to model phenomena such as predator-prey dynamics and population-size equilibrium dynamics (Malthus gave a general and imprecise framework for the latter). There are many other models that have the same characteristics -- the well-known application of game theory to ecology, economics and psychology, for example -- but this one is ideally suited to their purposes because Lotka and Volterra developed the model independently: in different ways and with different motivations. After one section describing the development of the model by Volterra and another, following it, describing its development by Lotka, they get down to two tasks. The first is to demonstrate how these two approaches to modelling exemplify an interdisciplinary exchange of concepts and model representations. The second, and more important, is to demonstrate how Paul Humphreys' concept of a computational template provides a new analytical tool for resolving tensions, such as that found in the different approaches of Lotka and Volterra to modelling. It was tempting while reading the first few pages to assume that the invocation of Paul Humphreys was motivated by the fact that he is one of the editors. That, however, is not the case. By the end of the chapter, the tension they identified, and the relevance of Paul Humphreys' concept of computational templates to embracing and understanding it, are crystal clear and compelling.
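For readers who have not met it, the Lotka-Volterra model has a standard textbook form: prey x and predators y coupled by dx/dt = αx − βxy and dy/dt = δxy − γy. The following sketch (my parameter values and a simple Euler integration, not anything from the chapter) shows the characteristic oscillatory predator-prey dynamics:

```python
def lotka_volterra(x0, y0, alpha=1.0, beta=0.1, delta=0.075, gamma=1.5,
                   dt=0.001, steps=20000):
    """Euler-integrate the classic Lotka-Volterra equations:
        dx/dt = alpha*x - beta*x*y   (prey)
        dy/dt = delta*x*y - gamma*y  (predators)
    Parameter values are illustrative only."""
    x, y = x0, y0
    trajectory = [(x, y)]
    for _ in range(steps):
        dx = (alpha * x - beta * x * y) * dt
        dy = (delta * x * y - gamma * y) * dt
        x, y = x + dx, y + dy
        trajectory.append((x, y))
    return trajectory
```

Starting away from the equilibrium point (here x* = γ/δ, y* = α/β), both populations cycle: prey growth feeds predator growth, which suppresses prey, which starves predators, and so on -- the dynamics that both Lotka and Volterra, from their different motivations, arrived at.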
The relationship between theories and their models is pretty worn territory, and few new things have emerged in the last few decades. Tracy Lupher's "Theories or Models? The Case of Algebraic Quantum Field Theory" opens up the landscape and, consequently, a whole new set of philosophical issues. He correctly notes that much of the last seventy-odd years of discussion of this relationship in philosophy of physics -- discussions since Alfred Tarski's 1938 paper, in which he provides his model-theoretic definition of semantic consequence -- has focused on Newtonian mechanics. Lupher explores the richness to be found by focusing on algebraic quantum field theory (AQFT), a theory that is conceptually clear and mathematically rigorous. This is untrodden terrain, making it a fresh and rewarding journey. He identifies six models (each a 6-tuple specification on a Hilbert space) that share a common structure: the abstract algebra of AQFT. Each model preserves the mathematical relationships expressed in the algebra. This is the backdrop for a compelling examination of the equivalence and inequivalence of theories and of models, which grounds his view that theories that are claimed to be physically inequivalent are in fact equivalent; it is their models that are different. This establishes that theories and models are distinct in AQFT. Along the way a host of interesting philosophical issues arise: whether models, in a non-trivial way, ever faithfully represent the abstract algebra such that there is an isomorphism (and not a homomorphism, as in unfaithful representations) between abstract observables, for example.
The issue of emergence (and its companion anti-reductionism) has been around for millennia, its fortunes waxing and waning over time. For much of the twentieth century it was out of favour in philosophical and scientific circles. With the advent of computer simulations, its fortunes improved somewhat. Mark Bedau's contribution extends his earlier arguments for weak emergence. As one would expect given the somewhat unflattering history of accounts of emergence, he has strict tests that any viable conception of emergence must meet; he believes weak emergence passes those tests. This chapter is important because the approach he pursues is different from, although complementary to, that of his earlier writings. His earlier writings employed "underivability except by simulation" in the definition of weak emergence. Here he employs the concept of explanatory incompressibility, a concept that rests on "crawling the micro-causal web". An incompressible explanation of a macro-property is one that can only be given by crawling the complete micro-causal web; no short cut is possible without loss of accuracy or completeness.
I find this a much more accessible, and determinable, method of defining weak emergence than one resting on underivability except by simulation; the latter is at best a negative criterion. By contrast, there are ways, to which Bedau alludes, for determining incompressibility. One of his twists of wording that seems right and illuminating is, "in principle irreducible in practice". For weak emergence, there surely is not in principle incompressibility. Such "in principle irreducible" claims have been the shoal on which most emergentist views have run aground. Nonetheless, Bedau succeeds in mounting the case that a stronger claim than that an explanation is in practice incompressible is indeed possible. There are compelling reasons for thinking the practical barrier is one that in principle cannot be breached. The chapter left me agnostic on whether this is just a stronger than normal version of epistemological emergence or a truly weak form of ontological emergence. Which one it is, is the next challenge that weak emergentists, who, I sense, hanker after weak ontological emergence, will have to squarely face.
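The flavour of "crawling the micro-causal web" can be conveyed with a toy example of my own devising (not Bedau's): in an elementary cellular automaton such as Rule 110, a macro-property like the population count after t steps is, so far as anyone knows, obtainable only by simulating every intervening micro-state -- there is no closed-form shortcut:

```python
def step(cells, rule=110):
    """One synchronous update of an elementary cellular automaton.
    Each cell's next state is read off from the rule number's bits,
    indexed by the 3-cell neighbourhood (with wraparound)."""
    n = len(cells)
    out = []
    for i in range(n):
        neighbourhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> neighbourhood) & 1)
    return out

def alive_after(initial, steps, rule=110):
    """Macro-property: live-cell count after `steps` updates. The only
    known route to it is to crawl every micro-step in between."""
    cells = list(initial)
    for _ in range(steps):
        cells = step(cells, rule)
    return sum(cells)
```

The explanation of why the count has the value it does just is the full micro-history -- which is what an incompressible explanation of a macro-property amounts to on Bedau's account.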
I have long admired Giere's lucid expositions and penetrating analyses. Those virtues are again present in his "Representing with Physical Models". It is a short chapter (7 pages) in which he teases out a common feature of three ways of representing using models: theoretical, physical and computational models. The common feature is that the representing involves an intentional agent selectively invoking similarities between a model and an aspect of the world. At first glance, this might not seem a particularly penetrating point but two aspects are worthy of closer inspection. First, on the semantic view of theories (which Giere has promoted, as have I) theories are non-linguistic, model-theoretic entities. The relationship of the model to the world is an isomorphism. The mathematical concept of isomorphism is clear and rigorous; specifying the conditions under which the isomorphism obtains between a model and the world has proven complex. Appeals to abstractions and simplifications have provided some traction but, in turn, have raised questions about the validity of the processes by which abstraction and simplification are achieved. Making explicit that an intentional agent (I would suggest that it is usually a group of agents) determines the required (desired) similarity lays bare the fact that intentional agents using rigorous mathematics are at the heart of the scientific enterprise of representing. It is the purposes at hand that play a crucial role, a point Isabelle Peschard makes in her chapter, albeit in a different way: she examines models of turbulence. Second, representation using physical models has been given short shrift in philosophy of science. Giere begins the process of rectifying this. He uses Watson and Crick's double-helical model of DNA as his example but he could have used Linus Pauling's α-helical model of proteins or August Kekulé's ring model of benzene; all used physical models to represent the chemical structure.
There is no shortage of other examples and all support Giere's point that these are powerful representations, which also involve intentional agents.
There are many other chapters that are worthy of careful attention; indeed, I found them all in some way illuminating and valuable. The ones on which I have focused simply reflect my principal interests. It is common to find in reviews of fictional works such claims as, "I could not put this book down". There are very few books in philosophy of science about which I would make that claim; this is one for which it is an apt description.