Cosmological Fine-Tuning Arguments: What (If Anything) Should We Infer From the Fine-Tuning of Our Universe for Life?


Jason Waller, Cosmological Fine-Tuning Arguments: What (If Anything) Should We Infer From the Fine-Tuning of Our Universe for Life?, Routledge, 2020, 323pp., $120.00 (hbk), ISBN 9781138742079.

Reviewed by Robert C. Koons, University of Texas at Austin

2020.02.03


The argument from fine-tuning is the theistic argument most likely to earn the respect (if grudging) of atheists, although it is not the one most favored by theistic philosophers. The fine-tuning problem is also treated with great seriousness among contemporary cosmologists, including those committed to naturalism. Naturalistic cosmologists rely on the multiverse hypothesis to explain (or explain away) the fine-tuning of our universe for organic chemistry and life.

Nonetheless, many philosophers are skeptical about whether there is really any phenomenon here to be explained, either by theism or by the multiverse. Jason Waller’s new book contains detailed consideration of the various forms this skepticism might take, and in each case, he provides convincing arguments for the conclusion that the fine-tuning skeptics are wrong. In Chapter 5, the last chapter of the book, Waller develops his own, very novel version of a theistic fine-tuning argument, one that sidesteps the issue of scientific evidence of fine-tuning for life. Sadly, this new argument is unconvincing and subject to powerful objections.

In his first chapter, Waller deals ably with several important preliminary issues. He defends a reasonable version of modal fallibilism, according to which our ability to conceive of a scenario is positive but defeasible evidence for real or metaphysical possibility. Waller argues for what he calls the Identification Thesis, against the Distinction Thesis of Keith Parsons (2013) and Alexander Pruss (correspondence, 2018). Waller claims that we can identify something’s happening for no reason at all with its happening by chance, while Parsons and Pruss claim that chancy explanations always refer to some non-trivial but indeterministic cause of the phenomenon. Waller’s position in effect collapses the distinction between objective probability and the epistemic probability of unexplained, brute facts. This collapse seems problematic: an objective probability should be grounded in some mechanism with a measurable propensity to produce a certain result. When evaluating the hypothesis that an event happened as an inexplicable brute fact, we must instead fall back on something like epistemic probability: what credence would a reasonable person assign to the unexplained event?

In Chapter 2, Waller both develops his own definition of fine-tuning and summarizes the current state of scientific evidence for the fine-tuning of the laws, constants, and structure of our universe for organic life. Waller defines ‘fine-tuning’ in a way that does not entail the existence of a fine-tuner. Here is Waller’s definition:

x is fine-tuned for y given z iff (i) x is contingent and could exist (or be true) in a large number of ways, given z and (ii) there are only a smaller number of ways in which x could exist or be true that are compatible with y’s existing (or being true), given z.

A couple of things to notice right away. First, Waller’s definition does not explicitly mention probability. One could take the reference to the number of “ways” to be an implicit appeal to probability. We could interpret the definition as requiring that the conditional probability of x and y, conditional on z, be much smaller than the conditional probability of x alone, conditional on z. However, we are not required to interpret it that way.
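
On that probabilistic reading (my gloss, not Waller’s own formulation), clause (ii) would amount to something like

\[
P(x \wedge y \mid z) \ll P(x \mid z),
\]

or, equivalently, whenever \(P(x \mid z) > 0\),

\[
P(y \mid x \wedge z) \ll 1,
\]

that is, only a small proportion of the ways in which x could be true, given z, are compatible with y.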

Second, Waller puts the definition in terms of metaphysical possibility rather than conceivability. Instead of asserting that x is contingently true, we could require only that it be weakly conceivable that x be false — i.e., that we cannot know a priori that it is impossible. And, instead of requiring that there are a large number of ways in which x could be true, we could require only that there be a large number of ways of conceiving the truth of x. Such a revision would be desirable.

Finally, Waller’s definition places no direct restriction on y (the goal or end of the fine-tuning). This is defensible, but in every case in which fine-tuning plausibly supports some hypothesis, the fact y will have some special feature, such as being intrinsically valuable or interesting, or involving a low degree of informational or algorithmic complexity.

To illustrate my points, consider the scenario described in the last chapter of Carl Sagan’s science fiction novel Contact. In it, mathematicians discover a striking fact about the expansion of the number pi in base eleven: at a certain point, a string of 0’s and 1’s appears, forming a perfect circle of 1’s against a background of 0’s. We could imaginatively extend Sagan’s scenario by supposing that, immediately after this striking image, the base-eleven expansion of pi goes on to provide rigorous proofs of several important mathematical conjectures, 3-D maps of the universe at regular intervals, and a host of other interesting information. In such a scenario, we would have strong evidence for the conclusion that the number pi has been fine-tuned for the sake of representing the discovered information, despite the fact that we were antecedently quite confident that the value of pi was not contingent. Fine-tuning can be evidence for contingency: we don’t have to have independent evidence of contingency prior to verifying the fine-tuning.

The Sagan-inspired scenario also illustrates that the target or goal of fine-tuning must be something interesting — either especially valuable, or algorithmically simple. A simple program could produce the pattern of 0’s and 1’s that Sagan describes, and it is that very simplicity of the pattern buried in the complexity of the base-eleven expansion that demands explanation.
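
To make the point about algorithmic simplicity concrete, here is a minimal Python sketch (mine, not Sagan’s or Waller’s) of the sort of short program that suffices to produce a circle of 1’s against a background of 0’s; the grid size, radius, and line thickness are arbitrary illustrative parameters.

# A few lines of code generate the kind of pattern Sagan imagines buried
# in the digits of pi: a circle of 1's drawn on a background of 0's.
SIZE, RADIUS, THICKNESS = 41, 15, 0.6  # arbitrary illustrative parameters

for row in range(SIZE):
    line = ""
    for col in range(SIZE):
        # Distance of this cell from the center of the grid.
        dist = ((row - SIZE // 2) ** 2 + (col - SIZE // 2) ** 2) ** 0.5
        # Mark cells lying close to the circle of the chosen radius.
        line += "1" if abs(dist - RADIUS) < THICKNESS else "0"
    print(line)

The brevity of such a program is what it means to say that the pattern has low algorithmic complexity, even though the surrounding digits of the expansion do not.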

Waller’s definition has the consequence that any contingent fact that can be true in a large number of ways is fine-tuned for something, a point first made by Cory Juhl (2006). Interesting cases of fine-tuning must involve some significant goal, like embodied consciousness or organic life (in a universe governed by simple and regular laws). Otherwise, fine-tuning either would be toothless or would support wildly ad hoc hypotheses. Robin Collins’s (2009) demon example illustrates this: every time we flip a coin 100 times, we see an extremely unlikely event for which the coin flips have been “fine-tuned”. Nonetheless, we should not infer that the coin flips were engineered by a demon who was trying to get the flips to represent his favorite number in binary notation.

The human brain is intuitively a significant outcome. Cosmology indicates that the universe itself had to be precisely fine-tuned in order to make functioning human brains possible, and Waller aptly summarizes the range of relevant data from quantum mechanics, particle physics, and cosmology. Waller also introduces a fourth case of fine-tuning: the metaphysical fine-tuning that makes genuinely emergent properties in chemistry and biology possible. This dovetails with some of my own recent work on irreducible thermodynamic and chemical forms in quantum mechanics (2018 and 2019).

If we try to make fine-tuning precise by means of probabilities, we face a number of technical difficulties, including the normalization problem. If the range of possible values of a parameter is infinite, the anthropic range is finite, and we try to base our probabilities directly on the parameter, we will have to conclude that the probability of an anthropic value is precisely zero (or infinitely close to it), no matter how large the finite anthropic range is. It follows that coarse tuning should be as impressive as fine-tuning (or infinitely close to equally impressive), and yet no one would be discussing the phenomenon if it had turned out that the range of anthropic values was finite but quite large. What’s going on?
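
In outline (a standard way of putting the problem, not Waller’s exact formulation): treat the parameter as uniformly distributed over the interval \([-N, N]\) and let L be the length of the anthropic range; then

\[
P(\text{anthropic value}) = \lim_{N \to \infty} \frac{L}{2N} = 0,
\]

and the limit is zero whether L is minuscule (fine tuning) or merely finite but very large (coarse tuning), so both cases receive the same vanishing probability.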

Waller discusses some technical solutions to the puzzle, including ideas drawn from the work of Pruss, Peter Vallentyne, and E. T. Jaynes, but none of these solutions really dissolves the puzzle. We can define probability-like relations in such a way that fine-tuning is strictly better evidence than coarse-tuning, but it remains the case that the difference between the two is only infinitesimal, and that still seems puzzling.

A better family of solutions is the one endorsed by Luke Barnes (2012) and Collins (2009, forthcoming): find principled grounds for limiting the parameter space a priori to some finite range, and build that limitation into our background knowledge. Now we can get a real, finite difference in significance between fine and coarse tuning. The initial limitation of the parameter space can be performed in several different ways. In some cases, existing theory can make no sense of the parameter’s taking certain extreme values (as Barnes argues). In other cases, we can focus on what Collins calls the epistemically illuminated range. It doesn’t really matter, so long as the limitation is not ad hoc.
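
A rough illustration of why the restriction helps (my arithmetic, not a formula from the book): once the parameter is confined a priori to a finite range of length R, the probability of landing in an anthropic interval of length L is simply

\[
P(\text{anthropic value}) = \frac{L}{R},
\]

which is strikingly small when \(L \ll R\) (fine tuning) and unremarkable when L is a sizable fraction of R (coarse tuning), restoring a real, finite difference between the two cases.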

Another plausible solution is simply to avoid the use of probabilities entirely. It’s very plausible that we have a natural aptitude for detecting purposeful design, a kind of Reidian faculty. It seems that what triggers a legitimate design inference in cases of fine-tuning is the absolute smallness of the number of ways that a parameter can be that is consistent with some interesting outcome.

Waller provides devastating criticisms of the recent skeptical arguments of some scientists, including Victor Stenger, Clément Vidal, Fred Adams, and Gilbert Fulmer. In his critique of Stenger, he relies on a well-argued 2012 article by Luke Barnes.

In Chapter 3, Waller considers philosophical objections of three kinds: those against theistic inferences from fine-tuning, those against the multiverse hypothesis, and general objections to making any inference at all. The third category includes Elliott Sober’s arguments that fine-tuning arguments are guilty of ignoring observer selection. Sober appeals to Arthur Eddington’s example of wrongly inferring that the fish in a lake are all large, based on a sample collected with a net that can catch only large fish.

Sober’s argument relies on a form of the fine-tuning argument that uses a comparison of likelihood ratios: comparing the likelihood of anthropic values on a theistic or multiverse hypothesis with its likelihood on one-universe naturalism. He argues that the likelihood in both cases is 1, since our background knowledge must include our existence as observers, just as the background knowledge in the Eddington case must include the fact that we can catch only large fish.
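
Schematically (my reconstruction, not Sober’s own notation): let A say that the constants take anthropic values, let K be background knowledge that includes our existence as observers, and let T and N be theism (or the multiverse) and one-universe naturalism. Sober’s claim is then that

\[
P(A \mid T \wedge K) = P(A \mid N \wedge K) = 1,
\]

so the likelihood ratio is 1 and the observation of anthropic values favors neither hypothesis over the other.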

Sober applies his analysis to John Leslie’s famous firing-squad analogy. If I survive the ordeal of being shot at by a firing squad, I ought to be able to use my survival as evidence for hypotheses about the incompetence or unwillingness of the squad, and yet Sober’s principle would require me to build my survival into my background knowledge, making all likelihoods equal to 1. Sober admits, however, that a bystander could use my survival as evidence favoring some hypotheses about the squad over others.

Waller points out that nothing prevents us from taking the bystander viewpoint in evaluating hypotheses, and, indeed, this seems to be the reasonable thing to do, both in the firing-squad case and in the case of cosmological fine-tuning.

Some skeptics have argued that the existence of fine-tuning actually lowers the probability of God’s existence, and Waller is quite effective in destroying these arguments. It is true that God could have created and sustained life in a world naturally inhospitable to it, by miracle if necessary. The existence of fine-tuning does count against the existence of a God who values life but who is indifferent to the consistency of the laws of nature with life’s existence. However, it greatly increases the probability of the existence of a God who favors the natural evolution and perpetuation of life in a world with simple and regular laws.

Waller discusses Hans Halvorson’s recent claim (2014) that fine-tuning introduces an anomaly for theism: why, if God favors life, should He create a world whose laws of nature make the existence of life so unlikely? Why make it necessary to fine-tune the parameters, when God could have imposed laws on nature that wouldn’t have needed to be fine-tuned for life to exist? Here, as Waller points out, Halvorson is guilty of the very confusion that he accuses his opponents of, namely, that of confusing the absolute probability of anthropic values with the conditional probability of anthropic values, conditional on the non-existence of God. Defenders of theistic fine-tuning arguments have only to claim that the latter is low, not the former. So, there is no sense in which God had to overcome an objectively low probability of life.
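
In symbols (my gloss on the distinction at issue): where A is the proposition that the constants fall in the life-permitting range, the fine-tuning argument requires only that

\[
P(A \mid \text{there is no God}) \ll 1,
\]

not that the unconditional probability \(P(A)\) is low; on theism, \(P(A)\) may be high or even equal to 1.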

Moreover, it is quite plausible that God intentionally created a universe in which exquisite fine-tuning was required in order for life to exist. It has long been part of Western theology to suppose that God created the universe in order to manifest his “glory”: that is, to provide objective evidence of his wisdom and power. Just as mountain-climbers manifest their glory by choosing difficult mountain faces to climb, so God might intend to manifest his glory by fashioning laws of nature that require precise fine-tuning of parameters for life to emerge billions of years later.

With respect to the multiverse hypothesis, Waller tackles the objection that the fact that there are a large number of universes does not change the probability that this universe is fine-tuned for life, although it does increase the probability that some actual universe or other is fine-tuned. Here it seems (as Waller argues) that observer selection is relevant. We find ourselves in a fine-tuned universe because it is impossible that we should be anywhere else.

In Chapter 4, Waller maps out five possible explanations: theism, the multiverse, brute chance, an unknown contingent cause, and an unknown necessary principle. He argues very convincingly that brute chance and contingent causes cannot work. Waller cites an interesting argument by Eric Rasmusen and Eric Hedin (2015), inspired by Hume’s argument against miracles: the chances against coincidental fine-tuning are so great that we would have good reason to doubt the underlying science rather than embrace the coincidence. Hypothesizing contingent causes would seem to do no more than push the explanation back a step, since we would always face the problem of what fine-tuned the hypothetical cause to produce the fine-tuning for life.

Could the fine-tuned free parameters of our current laws be derivable from some more fundamental, parameter-free laws? Waller argues that it is hard to believe that this could be so, but he overlooks the analogy of Euclidean geometry, from whose parameter-free axioms we can derive precise values, like that of pi. Still, if the laws did entail anthropic values, this would provide even stronger evidence for a transcendent designer, one who fashioned the fundamental laws of nature so as to entail the right sort of values. What if the fundamental laws of nature were metaphysically necessary? As I argued above (from the Sagan scenario), evidence for fine-tuning is evidence for contingency.

There are two kinds of multiverse hypotheses: those that posit a physical mechanism that generates the universes, and those that take the multiplicity to be a brute fact. The most serious problem for the first kind is a problem it shares with any design-free explanation: the difficulty of explaining the fact that the universe-generating mechanism was itself fine-tuned for generating enough universes of sufficient variety.

Waller focuses on the second kind. In collaboration with mathematician Robert Milnikel, he argues that the probability of a life-permitting universe in a random multiverse is high, if we suppose that each type of universe has an equal and finite probability of existing and that there are an infinite number of life-permitting types. This seems plausible, although I would suppose that there must be some epistemic bias toward smaller and simpler worlds.
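
The arithmetic behind the Waller-Milnikel claim, in a simplified form that I supply for illustration (not their own model): if each universe type is instantiated independently with some fixed probability \(p > 0\), and there are infinitely many life-permitting types, then the probability that no life-permitting universe exists is

\[
\lim_{n \to \infty} (1 - p)^{n} = 0,
\]

so the probability that at least one life-permitting universe exists is 1.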

What can we say about the option of necessarily existing explanations? For example, what if we supposed that the multiverse existed necessarily? Waller suggests that it would be “self-defeating” for the multiverse proponent to suppose this, since “the whole purpose of the multiverse is to make our universe more probable on the assumption that it is contingent” (p. 236). This depends on Waller’s conflation of epistemic and objective probability. Even if the multiverse is necessary, we can still ask how great the epistemic probability is of finding at least one life-permitting universe, given the multiverse’s size and variety.

In Chapter 5, Waller offers a novel theistic argument, based on the fact that the universe is fine-tuned for the existence of the actual world (which Waller names ‘Alpha’). It is of course necessarily true that the universe be fine-tuned for whatever world happens to be actual. If any feature of the universe had been different, a different world would have been actual. So, Waller’s argument is an a priori argument for God’s existence, one that does not depend on any of the empirical facts discussed in chapter 2.

Waller’s argument is non-Bayesian. It has the form of an argument by elimination. He argues that there can be only two possible explanations for Alpha’s being actual: (i) the free choice of a necessarily existing creator acting on unknown purposes that entail the choice of Alpha, and (ii) brute fact. The latter involves an infinitesimal probability and so can be eliminated. That leaves us with God as the only possible explanation. Any explanation of Alpha’s actuality that appealed to contingent fact would be viciously circular, since the definition of Alpha includes all such contingent truths.

Waller’s appeal to God’s unknown purposes is objectionably ad hoc. The prior probability that God should (either contingently or necessarily) have just the right purposes to make Alpha the only viable choice is just as low as the prior probability of Alpha’s being actual as a matter of brute fact.

Waller concedes this, which is why his argument does not appeal to Bayes’s theorem. However, if we can eliminate the brute-fact hypothesis on the grounds that it involves an infinitely low probability, why can’t we eliminate the specific theistic hypothesis (with its inclusion of Alpha-specifying purposes) on the same grounds? The difference lies in the fact that in the case of theism the low probability attaches to the prior probability of the hypothesis, not to the likelihood of Alpha’s actuality conditional on the hypothesis, while in the case of the brute fact hypothesis, the low probability attaches to the conditional likelihood and not to the prior probability of the hypothesis itself.
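
The contrast can be displayed in Bayesian terms (my framing; Waller’s own argument is, as noted, non-Bayesian). By Bayes’s theorem,

\[
P(H \mid \text{Alpha is actual}) \propto P(\text{Alpha is actual} \mid H)\, P(H),
\]

and the theistic hypothesis locates its infinitesimal factor in the prior \(P(H)\) (the probability that God has just the Alpha-specifying purposes), whereas the brute-fact hypothesis locates it in the likelihood \(P(\text{Alpha is actual} \mid H)\).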

But why should that difference matter? I think it all depends on Waller’s initially embracing the Identification Thesis in Chapter 1. On the alternative view, the low likelihood of Alpha, given the brute-fact hypothesis, just is the low prior probability of the hypothesis that Alpha exists without any external explanation. On Waller’s view, the low likelihood of Alpha, given the brute-fact hypothesis, involves an instance of objective chance with infinitesimal measure. Waller is proposing that we not eliminate hypotheses on the grounds of infinitesimal prior probability, but instead eliminate those that posit actual events with an infinitesimal objective chance of occurring. I don’t find this proposal very attractive. Moreover, I don’t think objective chance applies in the case of brute, unexplained facts, and so the argument fails for two reasons.

REFERENCES

Barnes, Luke A. (2018). “Fine-Tuning in the Context of Bayesian Theory Testing,” European Journal for Philosophy of Science 8:1-15.

Barnes, Luke A. (2012). “The Fine-Tuning of the Universe for Intelligent Life,” Publications of the Astronomical Society of Australia 29(4):529-64.

Collins, Robin (2009). “The Teleological Argument: An Exploration of the Fine-Tuning of the Universe.” In The Blackwell Companion to Natural Theology, edited by William Lane Craig and J. P. Moreland. Malden, MA: Wiley-Blackwell.

Collins, Robin (forthcoming). “How to Rigorously Define Fine-Tuning.”

Halvorson, Hans (2014). “A Probability Problem in the Fine-Tuning Argument.”

Juhl, Cory (2006). “Fine-Tuning is not Surprising.” Analysis 66(4):269-75.

Koons, Robert C. (2018). “Hylomorphic Escalation: A Hylomorphic Interpretation of Quantum Thermodynamics and Chemistry,” American Catholic Philosophical Quarterly 92:159-78.

Koons, Robert C. (2019). “Thermal Substances: A Neo-Aristotelian Ontology for the Quantum World,” Synthese.

Leslie, John (1989). Universes. Abingdon, U.K.: Routledge.

Parsons, Keith (2013). “Perspectives on Natural Theology from Analytic Philosophy,” In The Oxford Handbook of Natural Theology, ed. John Hedley Brooke, Russell Manning, and Fraser Watts, Oxford: Oxford University Press, 247-62.

Pruss, Alexander (2005). “Fine- and Coarse-Tuning, Normalizability, and Probabilistic Reasoning,” Philosophia Christi 7(2):169-78.

Pruss, Alexander (2011). Possibility, Actuality, and Worlds, New York: Continuum.

Rasmusen, Eric and Hedin, Eric (2015). “Fine-Tuning, Hume’s Miracle Test, and Intelligent Design.”

Sagan, Carl (1985). Contact: A Novel. New York: Simon and Schuster.

Sober, Elliott (2009). “Absence of Evidence and Evidence of Absence: Evidential Transitivity in Connection with Fossils, Fishing, Fine-Tuning, and Firing Squads,” Philosophical Studies 143(1):63-90.

Vallentyne, Peter (2000). “Standard Decision Theory Corrected,” Synthese 122:261-90.