Nancy Cartwright's Philosophy of Science


Stephan Hartmann, Carl Hoefer, and Luc Bovens (eds.), Nancy Cartwright's Philosophy of Science, Routledge, 2008, 406pp., $120.00 (hbk), ISBN 9780415386005.

Reviewed by Mathias Frisch, University of Maryland, College Park


Nancy Cartwright's Philosophy of Science is an excellent collection of thought-provoking essays on Cartwright's many and varied contributions to philosophy of science. Cartwright is without doubt one of the most influential philosophers of science writing today and a volume like the present one has long been overdue. Her contributions have been seminal in many different areas: her work on models and idealizations, for example, played an important role in the development of an entire sub-discipline in philosophy of science focusing on the role of models in application driven science, and her widely-cited work on causal laws and effective strategies has helped frame the contemporary debate on the role of causation in science. The volume devotes considerable attention to these issues and many more -- with one unfortunate omission. In recent years Cartwright has urged us to investigate in more depth the connection between science, values, and politics and I was disappointed to see that there is no examination of this topic in the volume. Yet overall the book, which includes brief responses by Cartwright to each paper, presents an extremely valuable resource for both students and researchers and should be required reading for those interested in Cartwright's philosophy of science.

Anyone looking for a brief overview of Cartwright's views should consult Carl Hoefer's excellent introduction to the volume. The first set of papers focuses on how theories and models represent the world, while the second set focuses on causes and capacities. Ronald Giere's paper, straddling both issues, provides an especially clear and sympathetic summary of Cartwright's account of how models represent reality, while the exchange between Cartwright and Giere can serve as a useful introduction to Cartwright's claim that we need to invoke capacities over and beyond a notion of causal structure.

A central component of Cartwright's view is that high-level theories do not directly represent physical systems and that instead the relation between theoretical laws, like Newton's laws or the Maxwell-Lorentz equations, and the world is mediated by two kinds of models: interpretive and representative models. Interpretive models are 'laid out within the theory itself' and allow us to understand the theory's laws by providing concrete representations of the abstract formalism. The 'toy models' discussed in physics textbooks are examples of interpretive models. Representative or phenomenological models, by contrast, represent actual phenomena, but in general go beyond any particular theory in the way they are constructed. Since representative models generally do not satisfy the theory's laws, Cartwright has argued that the high-level laws either 'lie', if construed representationally, or ought to be construed non-representationally as 'tools for model-building'.

The term 'model' not only plays different roles in Cartwright's account but has been used in many different contexts and with many different meanings in philosophy of science, which has resulted in a fair amount of confusion in the literature. Daniela Bailer-Jones's essay is helpful here, offering a careful examination of the place of models in Cartwright's account that clarifies the different roles of interpretative and representative models.

Ulrich Gähde focuses in more detail on Cartwright's account of how theories are applied to concrete systems. According to Cartwright, theory application is a three-step process, only the last of which is the derivation of a representative model, governed by phenomenological laws, from a theory's fundamental laws. The first two steps are providing, first, an unprepared description of the system in question and, second, a prepared description in the language of the theory. Gähde illustrates this process using the discovery of Halley's comet as a case study and argues, perhaps surprisingly, that Cartwright's account bears a certain affinity to the formal account of theories developed by the Munich structuralist school in the tradition of Wolfgang Stegmüller.

While I agree with Gähde that Cartwright's and the structuralists' accounts are similar in that they describe the relation between a theory and a physical system in terms of multiple layers, I am not sure whether the two accounts are really as closely parallel as he suggests. Gähde likens Cartwright's unprepared description to the 'data structures' of the structuralists' account. But the structuralists' data structures are already formalized and the move from unprepared to prepared description merely consists in restricting the data structure to a proper substructure. By contrast, Cartwright's unprepared description is informal and need not yet involve parameters recognized by the theory. Thus, Cartwright's unprepared description takes place prior to and outside of the structuralists' formal framework.

Gähde asks how, in the example of Halley's comet, we should draw Cartwright's distinction between fundamental and phenomenological laws. But I think that he is misled by the simplicity of his example: the solar system is a simple enough system that the fundamental laws, with minor ad hoc modifications, can be applied directly to it. Or, put differently, this is an example where an interpretive model of Newtonian physics -- that of a simple gravitational system -- also functions as a representational model.

Margaret Morrison's paper examines the respective roles of interpretive and representational models through a detailed investigation of the BCS theory of superconductivity. Morrison takes Cartwright to claim that interpretive models are prior to representational models. In particular, she takes Cartwright's claim, that the idea of Cooper pairing in the BCS model of superconductivity is ad hoc, to suggest that representational models "are of secondary importance" (74). Against this, Morrison argues that we need a "richer account of the role of representation than that which arises from the interpretive models" (81). Yet I think that this way of putting things partly obscures what I take to be the main point raised by Morrison's fascinating discussion. As Gabriele Contessa points out on Cartwright's behalf in the response, representational and interpretive models play importantly different roles in Cartwright's account, and neither is in any interesting sense prior to the other. Moreover, when Cartwright says that representational models are built with the help of ad hoc assumptions, this does not relegate representational models to a secondary role. Rather, the assumptions in question, which can be empirically well-supported, are ad hoc only with respect to the underlying fundamental theory. The presence of ad hoc assumptions does not assign representational models a secondary role; rather, it is problematic for the underlying theory, since Cartwright takes the need for such modifications in representational models to undermine any inference from the success of a representational model to the truth of the fundamental theory used in constructing the model.

Thus, the real question raised by Morrison's account is whether it is indeed true that the BCS model essentially involves elements that are, from the perspective of fundamental quantum mechanics, ad hoc -- and this question is important not because of the light it might shed on the importance of representational models, but rather because of its implications for the status of the fundamental theory with the help of which the models were constructed. While Morrison suggests that many of the important concepts were conceived prior to the development of the microphysical theory, she insists that the central idea of Cooper pairing was eventually given a firm quantum mechanical foundation.

Paul Teller asks whether Cartwright's view that science centrally makes use of models that are limited in scope and are never completely accurate is compatible with Arthur Fine's well-known view that we should accept the results of science on a par with "homely" truths. Prima facie, Cartwright's and Fine's accounts of science seem to be in direct conflict -- for how can it be the case both that the laws of physics lie and that they are true on a par with other homely truths? But Teller argues in a thought-provoking and rich essay that the two views are in fact compatible and that a synthesis of both Cartwright's and Fine's insights suggests changes in how we ought to think about truth. Teller's central claim is that the apparent conflict between the two views dissolves once we recognize that homely truths also do not provide us with exactly true representations. It is a feature of all human representations, Teller claims, that they either involve idealizations -- and hence make precise claims but do not exactly represent the world -- or are inexact or vague in the claims they make. That is, all representations are inexact, either because the description or model itself is vague or inexact, or because the representation only inaccurately represents. Thus, scientific representations can be true in the very same way as more homely truths: we accept both as true if they are sufficiently accurate and reliable in the context at issue.

In her reply to Teller, Cartwright worries that in stressing the similarities between homely and scientific representations he proposes to treat scientific models themselves as vague. But, Cartwright insists, formal precision, especially in the highly mathematized sciences, is a crucial element in the success of modern science. Yet while Teller does indeed maintain that a term like "Newtonian mechanics" is vague (since our conception as to what is to be included in Newtonian mechanics evolved over time and at any one moment variable interpretations of the theory exist) his account explicitly allows individual mathematical representations to provide precise formal descriptions. Particular mathematical models are inexact not in that they make vague claims but in that they partially misrepresent the way things are. There is, however, another distinction that plays a crucial role in Cartwright's account -- that between the abstract principles of high-level theories, which on their own are neither true nor false, and the 'fitting out' of these principles in concrete models -- and it would be interesting to see how that distinction can be captured in Teller's framework.

Cartwright famously has argued that causal explanations, unlike theoretical explanations, "have truth built into them" (quoted on p. 169). The papers of Mauricio Suárez and Stathis Psillos provide a particularly useful examination of this thesis, taking opposite sides in the debate. Suárez offers a detailed and subtle defense of a partial realism similar to Cartwright's, while Psillos argues, in an essay that also examines Cartwright's notion of capacity, that her distinction between explanations that carry with them an ontological commitment and those explanations that do not is indefensible.

Several of Psillos's arguments are meant to show that an inference to the most likely or to the best cause is not different in kind from other ampliative inferences. In any such case we are licensed to believe the conclusion if we believe the probability of the conclusion to be high or if it possesses other explanatory qualities. But as I understand Cartwright, the difference between causal and theoretical explanation does not so much consist in two putatively different ampliative inference procedures that allow us to conclude that (1) c causally explains E or that (2) a theory T explains E, respectively. Rather, the crucial difference consists in what we are committed to infer once we accept as conclusion (1) or (2), respectively: accepting c as the best or as the most likely causal explanation of E commits us to the existence of c, while accepting T as the best or as the most probable theoretical explanation of E does not commit us to believing in the truth of T. And the reason for this is that on Cartwright's account a high-level theory is not the kind of thing that is meant to provide us with a truthful representation of the phenomena.

It also seems to me that there is a much simpler reply available to an argument by Christopher Hitchcock, discussed by Suárez, than the one offered by Suárez himself on Cartwright's behalf. Hitchcock worries that Cartwright's account is circular, since if causal explanation is a success term, then I already need to believe in the existence of c in order to accept that 'c causally explains E'. But this worry gets things exactly backwards. It is precisely our warrant for believing that c is the best causal explanation of E that also constitutes our warrant for believing in the existence of c.

A number of essays focus on Cartwright's views on causation and capacities. James Woodward's essay provides an excellent introduction to his own interventionist account of causation, and offers a careful discussion of several disagreements between him and Cartwright on the issue of causality. The discussion ranges from the question of whether there should be a 'monolithic' account of causation, as Woodward has proposed, or an account of causality as a cluster concept, which Cartwright seems to favor, to a defense of a modularity assumption, which Cartwright has criticized and which asserts, roughly, that each causal equation in a system of equations should represent a distinct causal mechanism that can be intervened on or disrupted without disrupting any of the other mechanisms.

Interestingly, Julian Reiss argues in his contribution that a framework appealing to capacities of the kind Cartwright favors also requires an assumption of modularity (or autonomy) if it is to make contact with planning and control. Reiss's main aim is to argue that there may be social capacities that an experimentally-based approach to economics, rather than the current theory-based approach, could help to uncover. Cartwright has motivated her own views on capacities by invoking John Stuart Mill's appeal to the notion of tendency. Christoph Schmidt-Petri argues persuasively, however, that it is a mistake to read Mill as having endorsed non-Humean capacities.

Iain Martel maintains that, contrary to what is often claimed, a Causal Markov condition for continuous causal models can be satisfied in the quantum mechanical EPR case. He proposes a common-cause model of the EPR setup as an alternative to the model proposed by Cartwright herself. According to Martel's model, which, as he points out, is very close to the naïve textbook account of quantum mechanics, a local measurement at one wing of the apparatus acts on the non-local joint state of the quantum system and thereby is the common cause of the correlated measurement outcomes at the two wings of the experiment. By contrast, according to Cartwright's model, which violates the Causal Markov condition, the quantum state at the source acts at a spatiotemporal distance together with the state of the measurement apparatus in one wing of the experiment to produce an outcome in that wing. (In her response Cartwright defends the plausibility of such action-at-a-distance by appealing to "Humean" considerations. But of course the original Humean, David Hume himself, found in the Treatise that "nothing can operate in a time or place, which is ever so little remov'd from those of its existence.") Thus, comparing the two models makes vivid that there is no account of EPR-type correlations that does not require us to give up at least some of our cherished pre-quantum mechanical intuitions concerning causation.

The final set of papers focuses on Cartwright's anti-fundamentalism and her arguments for the disunity of science. Carl Hoefer's essay presents a cogent defense of fundamentalism, appealing to examples of spectacular successes of scientific theories and arguing that resisting a 'crosswise' induction from such cases amounts to resisting any form of induction. But an anti-fundamentalist could ask for reasons why she should take her theories to cover cases beyond the ones where the theory can be shown to apply. Hoefer does not want his fundamentalism to be committed to any kind of reductionism (according to which the success of higher-level theories would provide a reason for the applicability of our fundamental physics) and also does not want to be committed to a full cross-wise induction involving present-day fundamental physics (which he concedes is at best an approximation to the True Final Physics). His main argument is that fundamentalism recommends itself due to its simplicity: since we believe that the world is everywhere made out of the same subatomic building blocks, the simplest hypothesis is that these building blocks are everywhere and everywhen governed by the same fundamental laws.

Michael Esfeld argues that there is a tension between Cartwright's endorsement of holism and her patchwork view of science. One of his arguments is: (1) A commitment to holism implies a commitment to an underlying systematic story of the world. (2) Quantum mechanics as a universal fundamental theory provides the "only physical basis" for accepting holism. (3) But quantum fundamentalism is incompatible with a patchwork view of theories. Yet as far as I can tell, Esfeld does not offer much of an argument in support of the second premise. Cartwright herself has argued that current quantum theory cannot serve as a fundamental theory. And, indeed, if Cartwright's view of a dappled holistic world were correct, then even if we were committed to there being a 'right' underlying holistic story of the world we could not and should not expect ever to possess a correct theory that tells this story. Given our aims and purposes and our cognitive limitations, such a theory would be utterly useless to us.

Brigitte Falkenburg offers a detailed critical discussion of Cartwright's views on quantum mechanics and its relation to classical physics. I worry, however, that her reconstruction of Cartwright's claim that systems can be assigned both quantum and classical states remains too 'theory-driven' to fully capture Cartwright's view.

The last essay in this book of rich, thought-provoking and stimulating essays is perhaps my favorite one. Alfred Nordmann offers a hermeneutic reading of Cartwright's account of "models as the stage on which negotiations take place" between theory and phenomena (p. 372), where the model becomes the impersonalized "reader" of both the theory and the phenomena. Nordmann's reading shows how Cartwright's account would meet the challenges posed by Hoefer's and Esfeld's fundamentalism and by Psillos's realism: "Literalness", or straightforward truth or falsity, emerges only in the local "alignment of phenomena, models, and theory" (p. 371); hence, the success of science gives us no grounds for believing in a universal, fundamental physics. And while models directly exhibit causal structures, there is no direct, determinative relation between models and the higher-level theories with the help of which they are constructed.