Calculated Surprises: A Philosophy of Computer Simulation


Johannes Lenhard, Calculated Surprises: A Philosophy of Computer Simulation, Oxford University Press, 2019, 256pp., $74.00 (hbk), ISBN 9780190873288.

Reviewed by Kevin B. Korb, Monash University

2020.02.22


In the early days of electronic computers there was considerable doubt about their value to society, including a debate about whether they contributed to economic productivity at all (Brynjolfsson, 1993). A common view was that they made computations faster, but that they were not going to contribute anything fundamentally new to society: they were glorified punchcard machines. Such was the thinking behind infamous predictions like the one attributed to the president of IBM in 1943 that there might be a world market for perhaps five computers. Of course, by now such views seem quaintly anachronistic. Quantum computers offer the potential for exponential increases in computing power -- and "nothing more" -- yet they are likely the only way hard encryption will ever be broken. Computers and the internet are all the evidence needed that some qualitative differences are bridged by sufficiently many quantitative steps.

While these general questions have been resolved, the debate still echoes elsewhere, including in the philosophy of simulation. Some insist that the role of simulation in science demands a radically new epistemology, whereas others assert that simulation, while providing new techniques, changes nothing fundamental. This is the debate Johannes Lenhard engages in his book.

Lenhard lands on the side of a new epistemology for simulation, though not very far from the divide. Rather than claiming that some single special feature of simulation demands this new epistemology, as some have, he criticizes those who focus primarily on this or that feature in isolation: for Lenhard, the significant features are special only in combination. Those features are: the ability to experiment with complex, chaotic systems; the ability to visualize simulations and interact with them in real time; the plasticity of computer simulations (the ability to reconfigure them structurally and parametrically); and their opacity, that is, our difficulty in comprehending them. It is the unique combination of all these new features which forces us onto new epistemological terrain.

More exactly, Lenhard's central thesis is that this combination makes simulation a new, transformative kind of mathematical modeling. To see what the combination produces, one needs to consider the full range of features, and therefore also the full range of kinds of computer simulation. Focusing only on a single type of simulation is, per Lenhard, as limiting as focusing on a single feature. For example, much existing work considers only models using difference-equation approximations of dynamical systems, such as climate models. But conclusions reached on that basis are likely to overlook the rich diversity of modeling represented by such methods as Cellular Automata (CA), discrete event simulation, Agent-Based Modeling (ABM), neural networks, Bayesian networks, etc.

Striking the right level of generality in treating simulation is important. Clearly, one can be either too specific or too general. In this moderate stance, Lenhard is surely right.

Plausibly, the class of simulations is bound together by family resemblance rather than by some clean set of necessary and sufficient conditions. It is a pity, then, that Lenhard rejects upfront any consideration of stochasticity as an important feature of simulation. He says, reasonably, that some sacrifices have to be made ("even Odysseus had to sacrifice six of his crew"). And it is true that some simulations are strictly deterministic, not even relying on pseudo-randomness, as many CA are. But it is also true that stochastic methods are key to most of the important simulations in science. Furthermore, they have opened up genuinely new varieties of investigation, including all the varieties of Monte Carlo estimation, and are essential for meaningful Artificial Life and ABMs. This is a major and unhappy omission in Lenhard's study.
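
To make the omission concrete, here is a minimal sketch of Monte Carlo estimation, the simplest of the stochastic methods just mentioned. The example, its sample sizes and its seed are my own illustrative choices, not anything from Lenhard's book: pi is estimated from the fraction of uniformly random points that land inside a quarter circle.

```python
# Minimal Monte Carlo sketch: estimate pi from random points in the unit square.
# Illustrative only; the sample sizes and seed are arbitrary choices.
import random

def estimate_pi(n_samples, seed=0):
    """Fraction of uniform points in the unit square falling inside the
    quarter circle, scaled by 4, converges to pi as n_samples grows."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n_samples

for n in (1_000, 100_000, 10_000_000):
    print(f"{n:>10} samples: pi ~ {estimate_pi(n):.4f}")
```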

One of the aspects of simulation Lenhard definitely gets right is the iterative and exploratory nature of much of it, emphasizing the process of simulation modeling. The ease of performing simulation experiments, compared to the expense and difficulty of experiments in real life, does not just allow millions of runs per setup (routinely driving the confidence intervals of estimated values to negligible sizes, assuming we are talking about stochastic simulations); it also allows early simulation runs to inform the redesign or reconfiguration of later ones, in an exploratory interaction of experimenter and experiment. Instead of simply relying on the outcomes of a few experimental setups to provide clear evidence for or against some theory driving the experiment, simulation allows for an iterative development of the model, with early experiments correcting the trajectory of the overall program. This underwrites much of the "autonomy" of simulation from theory. If the theory behind a simulation is incomplete, or simply mistaken in part, simulation experiments may nevertheless direct the research program, with feedback from real-world observations, expert opinion, or subsequent efforts to repair the theory. As Lenhard writes, in simulation "scientific ways of proceeding draw close to engineering" (p. 214).
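
The point about confidence intervals is simple arithmetic: the standard error of an estimated mean shrinks as one over the square root of the number of runs. A toy illustration, mine rather than Lenhard's, with an arbitrary stand-in for a simulation run:

```python
# Toy illustration: the half-width of a ~95% confidence interval shrinks as
# 1/sqrt(n), so millions of replications make it negligible. All numbers here
# (the 0.3 "success" rate, the run counts) are arbitrary assumptions.
import math
import random

def simulate_once(rng):
    # Stand-in for one stochastic simulation run returning a noisy outcome.
    return 1.0 if rng.random() < 0.3 else 0.0

def ci_half_width(n_runs, seed=0):
    rng = random.Random(seed)
    outcomes = [simulate_once(rng) for _ in range(n_runs)]
    mean = sum(outcomes) / n_runs
    var = sum((x - mean) ** 2 for x in outcomes) / (n_runs - 1)
    return 1.96 * math.sqrt(var / n_runs)  # normal-approximation CI half-width

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} runs: estimate within about +/-{ci_half_width(n):.5f}")
```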

Indeed, Lenhard points out that simulation science requires an iterative development of models. In many cases, the theory implemented in a simulation is far from sufficient even to provide a qualitative prediction of the simulation's behavior. In one example, Landman's simulation of the development of a gold nanowire contradicted the underlying theory; only after the simulation produced a virtual gold nanowire was a physical experiment run which confirmed the phenomenon (Landman, 2001). The underlying physical theory inspired the simulation, but the simulation itself forced further theoretical development. This aspect of simulation science explodes the traditional strict distinction in the philosophy of science between the contexts of discovery and justification. That distinction may retain analytic value, for example when identifying Bayesian priors and posteriors in an inductive inference, but in simulation practice the contexts of discovery and justification themselves are one and the same. To be sure, Lakatos's conception of scientific research programs throwing up anomalies and overcoming them (Lakatos, 1982) already weakens the distinction, but in simulation science the necessity of combining discovery and justification is ever present.

In connection with iterative development, scientific simulation has converged even more closely with engineering, widely adopting the iterative "Spiral Model" of software development, a precursor of today's agile methods. It is precisely an iterative development process set in opposition to one-shot, severe tests of theoretical (program) correctness, that is, in opposition to monolithic software QA testing. The spiral loops through entertaining a new (small) requirement, designing and coding to fulfill it, testing the hoped-for fulfillment, and then looping back for a new requirement. This equivalence of process makes good sense given that simulations are software programs. To better understand simulation methods as scientific processes, a deeper exploration of this equivalence than Lenhard provides would be useful.
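
A schematic rendering of that loop, in code rather than prose, may make the process explicit. This is my sketch, not anything from the book; the requirement list and the implement and test callables are placeholders standing in for real development work.

```python
# Schematic sketch of the spiral loop described above (not from Lenhard):
# take one small requirement, implement it, test it, and feed the result back.
def spiral_development(requirements, implement, test, max_cycles=100):
    """Iterate requirement -> design/code -> test until the backlog is empty;
    a failed test sends the requirement back for another pass."""
    system = {}
    for _ in range(max_cycles):
        if not requirements:
            break
        req = requirements.pop(0)           # entertain a new (small) requirement
        candidate = implement(system, req)  # design and code toward it
        if test(candidate, req):            # test the hoped-for fulfillment
            system = candidate              # keep the increment
        else:
            requirements.append(req)        # loop back and try again
    return system

# Toy usage: "requirements" are key/value pairs the system must come to store.
final = spiral_development(
    [("speed", 3), ("colour", "red")],
    implement=lambda sys, r: {**sys, r[0]: r[1]},
    test=lambda cand, r: cand.get(r[0]) == r[1],
)
print(final)  # {'speed': 3, 'colour': 'red'}
```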

The epistemic opacity of simulation models is another of the notable features Lenhard highlights. Human insight into how a simulation works is very commonly limited, a fact which elevates the importance of visualizing the intermediate and final results of a simulation and of interacting with them. Lenhard points out that this raises issues for our understanding of "scientific understanding". Understanding is traditionally construed as a kind of epistemic state achieved within the confines of a brain. Talk of an "extended mind" brings home the point that books, pens, computers and the cloud significantly enhance the range of our understanding, allowing us to "download" information we haven't bothered to memorize, for example. But there still needs to be a central agent who is the focal point of understanding, at least in common parlance. Lenhard promotes a more radical reconception: it is something like the system-as-a-whole that does the understanding. The human-cum-simulation can perform experiments, make predictions, advance science, even while the human, taken solo, has no internal comprehension of what the hell the simulation is actually doing. Since successful predictions, engineering feats, etc. are standard criteria of human understanding, the thought goes, we should happily attribute understanding to the humans in a simulation system satisfying those criteria. This seems to be much of the basis for Lenhard's claim that simulation epistemology is a radical departure from existing scientific epistemologies, since it radically extends our understanding of scientific understanding. I'm afraid I fail to see the radical shift, however. Anything described as understanding attributed to humans within a successful simulation system can as easily be described as a successful simulation system lacking full human understanding of the theory behind it. Lenhard fails to elucidate any clear benefit from a shift in language here. On the other hand, there is at least one clear benefit to conservatism, namely that we maintain contact with existing usage. We are all interested in advancing both our understanding of nature and our ability to engineer within and with it; it is not obviously helpful to conflate the two.

Epistemic opacity also has epistemological consequences that Lenhard does not fully explore. While he emphasizes, even in his title, that simulation experiments often surprise, he does not point out that where the surprises are independently confirmed, as with the Landman case above, this provides significant confirmatory support for the correctness of the simulation, on clear Bayesian grounds. For those interested in this kind of issue, Volker Grimm et al. (2005) provide a clear explanation, from the point of view of Agent-Based Models (ABMs) in ecology.
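
The Bayesian point can be made with back-of-the-envelope numbers. All of the figures below are hypothetical, chosen only for illustration: a confirmed prediction that would have been surprising were the simulation badly wrong yields a large Bayes factor in the simulation's favor.

```python
# Back-of-the-envelope Bayesian illustration; every number here is hypothetical.
# H: the simulation captures the relevant physics; E: the surprising prediction
# is independently confirmed. E supports H strongly because P(E | not-H) is small.
p_h = 0.5               # prior credence in H
p_e_given_h = 0.8       # the simulation does predict E
p_e_given_not_h = 0.05  # E would be surprising if the simulation were badly wrong

bayes_factor = p_e_given_h / p_e_given_not_h
posterior = (p_e_given_h * p_h) / (p_e_given_h * p_h + p_e_given_not_h * (1 - p_h))
print(f"Bayes factor = {bayes_factor:.0f}, posterior P(H|E) = {posterior:.2f}")  # 16, 0.94
```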

Another unexplored topic is supervenience. This is more general than the theory of computer simulation, to be sure, but it is connected to the opacity of simulations and to complexity theory, and it is raised especially acutely in the context of Artificial Life and Agent-Based Modeling, which provide not just an excuse but a pointed tool for considering supervenience. The very short form is: ABMs give rise to unexpected, difficult-to-explain high-level phenomena from possibly very simple low-level elements and their rules of operation (perhaps most famously in the "boids" simulating bird flocks; Reynolds, 1987). This goes by a variety of names, such as emergence, supervenience, implementation and multiple realization. It is not inevitable that a philosophy of simulation should encompass a theory of supervenience, but it is probably desirable.
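
For readers unfamiliar with boids, the following minimal sketch (mine; the rule weights, world size and run length are arbitrary assumptions) shows how flock-like global behavior can emerge from three purely local rules -- cohesion, alignment and separation -- none of which mentions a flock.

```python
# Minimal Reynolds-style "boids" sketch; all parameter values are arbitrary.
# Each boid follows three local rules (cohesion, alignment, separation), yet
# flocking emerges at the global level without being coded anywhere.
import numpy as np

rng = np.random.default_rng(0)
N = 50
pos = rng.uniform(0, 100, size=(N, 2))   # positions in a 100x100 wrap-around world
vel = rng.uniform(-1, 1, size=(N, 2))    # velocities

def step(pos, vel, radius=10.0, crowd=3.0, max_speed=2.0):
    new_vel = vel.copy()
    for i in range(N):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        near = (dist < radius) & (dist > 0)                         # local neighbours only
        if near.any():
            cohesion = 0.01 * offsets[near].mean(axis=0)            # move toward neighbours
            alignment = 0.05 * (vel[near].mean(axis=0) - vel[i])    # match their heading
            separation = -0.05 * offsets[dist < crowd].sum(axis=0)  # back away if crowded
            new_vel[i] += cohesion + alignment + separation
        speed = np.linalg.norm(new_vel[i])
        if speed > max_speed:
            new_vel[i] *= max_speed / speed
    return (pos + new_vel) % 100, new_vel

for _ in range(200):
    pos, vel = step(pos, vel)

dists = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
print("mean nearest-neighbour distance:", round(np.sort(dists, axis=1)[:, 1].mean(), 2))
```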

It seems to me that in some respects an even more radical discussion of computer simulation than Lenhard's is in order. Simulations are ubiquitous across the sciences: I am unaware of any scientific discipline which does not use them to advance knowledge. They are in wide use in astronomy, biology, chemistry, physics, climate science, mathematics, data science, social science, economics -- and in many cases they are a primary and essential experimental method. Lenhard, oddly, at least appears to disagree, since he states that their common use has reached only "amazingly" many sciences, rather than simply all of them. I would be interested to know which sciences remain immune to their advantages.

Lenhard's book introduces many of the issues that have been central to debates within the philosophy of simulation and adopts sensible positions on most. He points out, for example, that model validation grounds simulations in the real world, offering a methodological antidote to the flights of fancy of more extreme epistemologies. It is a book that patient beginners in the philosophy of simulation can profit from and that specialists should certainly look at. My main complaint, aside from its fairly turgid style (its German origin is clear enough), concerns the many important and interesting sides of simulation science that are simply ignored. The lack of any examination of the scope and limits of simulation is one example.

The ubiquity of simulation now extends well beyond the domains of science themselves. It has recently found interesting and potentially important applications in history (e.g., University of York, 2020). Brian Skyrms has famously applied simulations to the study of philosophically interesting game theory (e.g., Skyrms, 2004). Social epistemology has for some time employed simulation to answer questions about how collective beliefs and decisions may be arrived at (Douven, 2009; Salerno et al., 2017). I have applied simulation to the evolution of ethics and utility (Mascaro et al., 2011; Korb et al., 2016) and to studies in the philosophy of evolution (Woodberry et al., 2009; Korb & Dorin, 2011). I am presently attempting to build a computational tool for illustrating and testing various philosophical theories of causation. There is every reason to bring simulation into the heart of philosophical questions and especially into the philosophy of science. It is even plausible to me that instruction in simulation programming may become as necessary to graduate philosophical training as it already is in many of the sciences.

Paul Thagard formulated the key idea first: if you have a methodological idea of any merit, you should be able to turn it into a working algorithm (Thagard, 1993). Since a great deal of philosophy is about method, a great deal of philosophy not only can be, but needs to be, algorithmized. Simulation provides not just a test of the methodological ideas, and not just a demonstration of their potential, but also a test of the clarity of and relations between the underlying concepts, a test of the philosophizing itself. Who cannot simulate, cannot understand.

REFERENCES

Brynjolfsson, E. (1993). The productivity paradox of information technology. Communications of the ACM, 36(12), 66-77.

Douven, I. (2009). Introduction: Computer simulations in social epistemology. Episteme, 6(2), 107-109.

Grimm, V., Revilla, E., Berger, U., Jeltsch, F., Mooij, W. M., Railsback, S. F., Thulke, H., Weiner, J., Wiegand, T. & DeAngelis, D. L. (2005). Pattern-oriented modeling of agent-based complex systems: lessons from ecology. Science, 310(5750), 987-991.

Korb, K. B., Brumley, L., & Kopp, C. (2016, July). An empirical study of the co-evolution of utility and predictive ability. In 2016 IEEE Congress on Evolutionary Computation (CEC) (pp. 703-710). IEEE.

Korb, K. B., & Dorin, A. (2011). Evolution unbound: Releasing the arrow of complexity. Biology and Philosophy, 26(3), 317-338.

Lakatos, I. (1982). Philosophical Papers. Volume I: The Methodology of Scientific Research Programmes (edited by Worrall, J., & Currie, G.). Cambridge University Press.

Landman, U. (2001). Lubricating nanoscale machines: Unusual behavior of highly confined fluids challenges conventional expectations. Georgia Tech Research News.

Mascaro, S., Korb, K., Nicholson, A., & Woodberry, O. (2011). Evolving ethics: The new science of good and evil. Imprint Academic, UK.

Reynolds, C. W. (1987). Flocks, herds and schools: A distributed behavioral model. In Proceedings of the 14th annual conference on Computer graphics and interactive techniques (pp. 25-34). ACM.

Salerno, J. M., Bottoms, B. L., & Peter-Hagene, L. C. (2017). Individual versus group decision making: Jurors' reliance on central and peripheral information to evaluate expert testimony. PLoS ONE, 12(9).

Skyrms, B. (2004). The stag hunt and the evolution of social structure. Cambridge University Press.

Thagard, P. (1993). Computational philosophy of science. MIT Press.

University of York. (2020, January 9). Mathematicians put famous Battle of Britain 'what if' scenarios to the test. ScienceDaily. Retrieved February 7, 2020.

Woodberry, O. G., Korb, K. B., & Nicholson, A. E. (2009). Testing punctuated equilibrium theory using evolutionary activity statistics. In Australian Conference on Artificial Life (pp. 86-95). Springer, Berlin, Heidelberg.