String Theory and the Scientific Method


Richard Dawid, String Theory and the Scientific Method, Cambridge University Press, 2013, 202pp., $95.00 (hbk), ISBN 9781107029712.

Reviewed by Nick Huggett, University of Illinois at Chicago


This book should be of interest to a far wider audience than those of us who work in the foundations of string theory. Richard Dawid's innovative book argues that something is happening in the methodology of real contemporary science (and not only physics) that supersedes almost everything we thought we knew about confirmation. It should be read by anyone working in general issues in philosophy of science, and confirmation in particular.

You are probably aware of the intense interest in theoretical physics in finding a quantum theory of gravity, a marriage between quantum mechanics (QM) and general relativity (GTR), the two great theoretical advances of the twentieth century. Of the approaches to this challenge, string theory is the most prominent. However, as Dawid explains (chapter one), this enterprise is (apparently) different from previous revolutions: it is not driven by empirical anomalies, nor is it well controlled by experiment, because characteristic processes typically occur in situations (involving enormous energies, for instance) far from any experimentally accessible regime. (What motivates the enterprise in the absence of anomalies are the remarkable successes of QM and GTR, coupled with the fact that there are situations in which both apply.)

And yet, even without narrow empirical constraints, string theorists are confident that they are on the right track. Why? The aesthetics of the mathematics is one answer given (e.g., Greene 1999). Self-deceptive 'groupthink' is another (e.g., Smolin 2006). The purpose of Dawid's book is to describe and defend a methodology between these optimistic and cynical extremes -- one that he believes is also employed beyond the rarified contexts of quantum gravity. He proposes (what I will dub) a 'post-empirical' methodology, in which other kinds of evidence than prediction (especially novel prediction) count. Not surprisingly, there are several points at which one might decide to leave the bus, and I will indicate some of them as I outline the route that Dawid takes; but the position should be taken seriously, in part because Dawid offers some serious answers to objections, and in part because this book attempts to shed light on extant scientific methodology in a very original way.

Here's the general issue (sketched in chapter two): suppose we have some body of empirical facts that we want to theorize. As a matter of logic, there must be many collections of axioms -- physical postulates -- that entail them, so how could one such collection be confirmed by the evidence over any other? Well, some such theories might be ruled out rather easily: some simply involve adding superfluous postulates, some might postulate irregular laws of nature, and so on. In other words, the problem of underdetermination by data can be addressed by limiting the range of possible theories by imposing constraints on what constitutes an acceptable theory. But how could these constraints ever be adequate to narrow the field down to a unique theory? The standard responses to the question are familiar: pessimistically, accept underdetermination as a limit on knowledge; more optimistically, argue that underdetermination rarely arises in practice; or retreat to extra-empirical criteria such as 'maximizing' simplicity or explanatory virtue. The problem with the latter two more epistemically optimistic views is that of unconsidered alternatives: perhaps there is no known alternative because of a lack of ingenuity; and how can we tell that a theory is 'best' without explicit comparison? Dawid's book proposes a new option in this very familiar landscape of responses to underdetermination. Chapter three describes the view.

First, Dawid restricts the general scope of scientific claims: a theory is not accepted as strictly true, but simply as compatible with (and generally predictive of) 'the next level of phenomena'. Imagine empirical phenomena arranged in generations according to the technology, resources, etc. needed to investigate them: suppose that we have technological access to generation n, but not to generation n+1, so we cannot test a theory T that is compatible with generation n and makes predictions about n+1. How could we come to believe that there was no alternative theory T', also compatible with generation n, that makes different predictions about n+1? Dawid's answer is that we would have to be justified in constraining the range of possible theories to the point at which it is probable that T -- or rather, any theory agreeing with T up to level n+1 -- is the only theory satisfying the constraints. In this way, T can be confirmed even in the absence of relevant empirical data; hence 'post-empirical' confirmation.

It's important to recognize that this framing changes the game considerably from the standard realism debate: no longer is the question one of 'absolute' truth, but rather one of truth about the next layer to be revealed experimentally. Of course, Dawid is not the first to suggest a reframing, but it strikes me that his way of doing it opens up substantive issues in a potentially profitable way (which he discusses in chapter seven). First, Dawid's view may seem to capitulate to the constructive empiricist over the traditional realist: all that is to be argued is that T will continue to be empirically adequate. But things look different if, as Dawid argues in chapter six, T claims to be a 'final' theory (or rather a final theory 'schema', whose filling out amounts to a redefinition of the scientific project). Second, can a kind of entity realism follow a similar strategy? Hacking claimed that 'sprayable' entities typically survive scientific progress, rather as theory T is expected to survive according to Dawid. Does the criterion of 'sprayability' limit possible ontologies sufficiently to explain entity preservation over time?

Chapters six and seven also develop a structural realist interpretation of string theory, which will be of special interest to philosophers of physics. Dawid's argument turns especially on the analysis of dualities -- the 'physical equivalence' of theories with spacetimes of different geometries or topologies, for instance. Dawid argues that the equivalence is real, and shows that the apparent differences between dual theories -- concerning the number of dimensions, say -- are unreal. What is left is 'structure'.

Returning to methodological issues, confirmation of T clearly hangs critically on how restrictive the constraints on possible theories are, and how they are justified. Here (especially chapter three) Dawid presents string theory as an example of the methodology at work. First, string theory is an extension of the quantum field theory program of high energy physics (HEP), and so bound by the methodological constraints of that program: for instance, candidate theories lie within the framework of path integral QM developed by Richard Feynman, and utilize the ideas of gauge symmetry arising from Noether's theorems. Second, Dawid argues that in the history of HEP, theories (a) for which no alternative was known, after searching, and that (b) give explanatory unity to a range of physical systems, have turned out to correctly predict the next generation of phenomena (for instance, the electroweak theory, confirmed at CERN in the 1970s and '80s). A 'meta'-induction on the success of the HEP methodology leads to the conclusion that a theory that satisfies the HEP constraints will make reliable predictions about the next generation of phenomena. And, says Dawid, string theorists correctly identify string theory as such, and so are justified in their confidence.

These are bold claims, and of course evaluating their soundness lies in the details Dawid provides, so the following is merely an enumeration of issues facing the argument -- most of which he tackles in the book. First, there are the factual claims about the methods that string theorists employ: is what goes by the name 'string theory' today (holography, M-theory, and non-Feynman methods, for instance) really within the HEP tradition in a way that makes the induction valid? And again, over the past ten years, the claims of string theorists have become less vociferous, and yet nothing has occurred to question the reliability of the HEP methodology, so what explains the change according to Dawid's picture? More generally, how strong is the evidence that string theorists really engage in such reasoning?

The second kind of question concerns the general reliability of the post-empirical method. (Again, Dawid addresses many of these questions, but readers will want to assess how satisfied they are with the answers.) Dawid is generally cautious here, seeking primarily to explain in general terms how the methods can succeed; for instance, Dawid explains how his post-empirical methodology relates to inference to the best explanation. But there is something closer to a demonstration of reliability in the Bayesian analysis of the 'no alternatives argument' that Dawid has proposed (with Hartmann and Sprenger, forthcoming). Under suitable conditions -- including a highly constrained space of possible theories -- they show how the failure to find an alternative explanation of the evidence is itself evidence that there is no alternative, raising the probability that a known explanation is uniquely correct (in the sense described above). Clearly such considerations support the picture that Dawid proposes -- if their assumptions hold, as perhaps they do in HEP.
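The logic of the no alternatives argument can be illustrated with a toy Bayesian calculation. The numbers and model below are hypothetical illustrations of the general shape of the inference, not the actual model of Dawid, Hartmann and Sprenger: suppose we are uncertain how many unknown alternatives to our theory exist, and each unsuccessful search has some fixed chance of missing each alternative that does exist. Repeated failures to find an alternative then shift probability toward the hypothesis that none exists.

```python
# Toy sketch of a 'no alternatives' style Bayesian inference.
# All numbers are hypothetical; this is not the model in
# Dawid, Hartmann & Sprenger (forthcoming).

# Uniform prior over the number k of unknown alternative theories.
prior = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}

def p_no_find(k, miss_rate=0.5):
    """Probability that one search finds no alternative, given k exist.
    Each existing alternative is assumed to be missed independently."""
    return miss_rate ** k

def update(dist, miss_rate=0.5):
    """Posterior over k after one unsuccessful search (Bayes' rule)."""
    unnorm = {k: p * p_no_find(k, miss_rate) for k, p in dist.items()}
    z = sum(unnorm.values())
    return {k: v / z for k, v in unnorm.items()}

posterior = prior
for _ in range(5):  # five unsuccessful searches for an alternative
    posterior = update(posterior)

# Probability that the known theory is the only one satisfying
# the constraints rises well above its prior of 0.25.
print(round(posterior[0], 3))
```

The point of the toy model is only that, given a sufficiently constrained space of possibilities and a non-negligible chance of finding an alternative if one existed, the absence of discovery is itself evidence of absence.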

Beyond string theory, Dawid argues at length (in chapters four and five) that this mode of confirmation is not peculiar to string theory. First, he explains that string theory completes a trend in twentieth century physics towards ever more inaccessible phenomena -- hence towards 'post-empirical' science. Second, he argues that similar methods are in fact to be found, to a greater or lesser degree, throughout the history of physics and in other sciences, for instance paleontology. He accepts that testing by novel prediction is the gold standard of scientific testing, but argues that in many cases the methods he has described must be used -- so to reject them would be to reject a large part of what we count as scientific knowledge.

I think that Dawid has identified an important, real feature of scientific methodology, neglected by philosophy of science -- one that should be taken seriously. I also think that this book provides a plausible framework for thinking about it. To what extent Dawid is right about the way such 'post-empirical' reasoning functions in science is a harder question; even harder is the question of whether it can be justified along the lines he proposes. Obviously he believes he is right, and he argues plausibly to that effect, but the questions are big ones, deserving investigation and debate among a wide readership. I thus commend the book to you, and encourage you to engage the important issues that it raises. If Dawid is correct, then he has opened up a whole new way of thinking about scientific confirmation.


Richard Dawid, Stephan Hartmann and Jan Sprenger, "The No Alternatives Argument", British Journal for Philosophy of Science, forthcoming.

Brian Greene, The Elegant Universe, Norton, 1999.

Lee Smolin, The Trouble with Physics, Houghton Mifflin, 2006.