The Autonomy of Mathematical Knowledge: Hilbert's Program Revisited


Curtis Franks, The Autonomy of Mathematical Knowledge: Hilbert's Program Revisited, Cambridge University Press, 2009, 213pp., $32.99 (pbk), ISBN 9780521183895.

Reviewed by Allen Hazen, University of Alberta

2013.09.17


Some of the rhetoric of this book is annoying. Franks is one of those philosophers who, after claiming to have taken the "naturalist turn," turn to talking as if there were a clear distinction between philosophy (retrogressive, usually foundationalist) and science (progressive, in no need of foundation): Hilbert's philosophical contribution, the claim seems to be, was to have removed questions of consistency from philosophy and transplanted them into mathematics, and this is evidence of his deep and original philosophical insight. Hilbert is praised for his indifference to questions of "foundations," and his hope -- prominent at least in his polemical writings -- that (finitistic) consistency proofs would justify classical mathematics against the philosophical criticisms of Brouwer and Weyl (both men, of course, having been both mathematicians and philosophers!) is correspondingly downplayed. But the rhetoric of the introductory and concluding chapters should not distract the reader from the challenging and novel discussion of the aims and fabric of proof theory in the central chapters.

Franks claims that Hilbert believed mathematics to be autonomous and in no need of philosophical defense (and so infers that his program couldn't have had such a foundationalist aim). In support he adduces the fact that Hilbert wasn't worried about the possibility of contradictions: he had never been panicked or thrown into existential angst by the paradoxes. His (misplaced, as Gödel showed) confidence that a (finitistic) consistency proof would be discovered for (an appropriate formalization of) set theory grew out of an underlying confidence that set theory was consistent. This underlying confidence was, I think, shared with many mathematicians in the early 20th century. Whitehead, for example, in a 1905 essay on geometry, says:

None of the reasoning of this memoir depends on any special logical doctrine which may appear to be assumed in the form in which it is set out. Furthermore certain contradictions recently discovered have thrown grave doubt upon the current doctrine of classes as entities. Any recasting of our logical ideas upon the subject of classes must of course simply issue in a change of our ideas as to the true logical analysis of propositions in which classes appear. The propositions themselves, except in a few extreme instances which lead to contradictions, must be left intact. [1]

I think this was a natural and defensible attitude. It was clear, to anyone familiar with the mathematical work of the preceding decades, that much good mathematics had been done with the notion of sets. There were also some pathological set-theoretic arguments, arguments that had a definite air of hocus-pocus about them and led to antinomy. There must, it would have been natural to think, be some principled way of sorting the good from the pathological, but its exact statement might be quite subtle: no sensible mathematician would stop using set theory until the line was drawn, but would press onward, relying on experienced judgment to avoid pathology. Such an attitude has, it seems, been rewarded by subsequent history. Hilbert's proof theory didn't draw the line, but the so-called "Iterative Conception of Set" -- which can be seen as immanent in Zermelo's 1908 axiomatization, was described more explicitly by Zermelo in 1930, and was on display in Gödel's work on the (relative) consistency of the continuum hypothesis and axiom of choice at the end of the 1930s -- is now almost universally seen as providing a principled separation of good set theory from the paradoxical. In Gödel's words, "this concept of set . . . has never led to any antinomy whatsoever; that is, the perfectly 'naïve' and uncritical working with this concept of set has so far proved completely self-consistent." [2] Decades of experience, and the semantic clarity of the conception, have largely silenced the critics: Hilbert had hoped that a consistency proof would do the job more quickly and cleanly.

Hilbert's initial response to the paradoxes displays his attitude. His colleague Zermelo had (a year or so before Russell) discovered Russell's paradox, but this didn't immediately inspire Hilbert to look for consistency proofs. Rather, he first formulated a more complicated argument to a contradiction ("Hilbert's Paradox"), which differed from Russell's in that the steps leading to the construction of a paradoxical set seemed similar to constructions used in "good," normal mathematics. Hilbert's Program only came after he convinced himself that the principle separating good from pathological set theory would have to be subtle.

The failure to find consistency proofs, however, has not deprived the work of Hilbert and his followers in his "Program" of interest. The central aim of Franks's book is to give philosophers a sense of why this work is, despite the failure of its polemical aims, so fascinating. Central is the demonstration that metamathematical notions -- proof, provability, consistency -- can be translated into purely mathematical terms, and expressed in the language of an archetypally simple and central part of mathematics, elementary number theory. The key (and comparatively late) step here was Gödel's "arithmetization," discussed in Chapter 3. Proof theorists of the 1920s had worked with unanalyzed syntactic concepts: symbol, string of symbols, sequence of strings . . . . These are clearly understood, and allow for a precise mathematical development (and indeed can be used in the formulation of a purely formal, axiomatic, theory of syntax, as in the final chapter of Quine's Mathematical Logic), but are not traditional mathematical concepts, with the result that a theory formulated in terms of them is hard to compare with other mathematical theories. It is easy to assign numbers to symbols and strings in a systematic way, but it is far from obvious that important syntactic relations (such as that of one string of symbols being the concatenation of two others) correspond, under the assignment, to anything definable in familiar mathematical terms. Gödel apparently first obtained his incompleteness result for a system of "analysis" (= theory of natural numbers and sets of natural numbers), but believed that extending it to "arithmetic" (= a theory in which quantified variables range only over numbers) would require "new concepts": that symbols for familiar notions like addition would have to be supplemented by ones for notions like "the strings of symbols assigned the numbers x and y when concatenated form the string assigned the number z" if self-referential sentences were to be formulated. Only later did he realize that the syntactic notions (under a reasonable assignment of "Gödel numbers" to symbols and strings) were "arithmetic," definable in a first-order language with variables ranging over the natural numbers and primitive symbols denoting 0 and the successor, addition and multiplication functions.
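A toy illustration (mine, not Gödel's own coding) may make the point vivid: if strings over a fixed finite alphabet are read as numerals in some base, then concatenating two strings corresponds to a simple operation on the numbers that code them, definable from addition, multiplication and exponentiation.

```python
# A toy illustration (not Goedel's own coding): encode strings over a fixed
# finite alphabet as base-BASE numerals, so that a syntactic operation such
# as concatenation becomes ordinary arithmetic on the codes.

ALPHABET = ['(', ')', '~', 'v', 'A', 'x', "'", '=', '0', 's']  # an arbitrary sample alphabet
BASE = len(ALPHABET) + 1      # the digit 0 is never used, so codes are unambiguous

def code(string):
    """Read a string as a numeral in base BASE."""
    n = 0
    for ch in string:
        n = n * BASE + (ALPHABET.index(ch) + 1)
    return n

def concat_codes(m, n, length_n):
    """Code of the concatenation of two coded strings.
    Arithmetically: m * BASE**length_n + n, which is definable
    from addition, multiplication and exponentiation."""
    return m * BASE ** length_n + n

s, t = "(x=0)", "~(x=0)"
assert concat_codes(code(s), code(t), len(t)) == code(s + t)
```

Gödel's actual construction is more delicate, since he needed the syntactic relations to be definable from addition and multiplication alone, but the sketch gives the flavor of how a relation between strings can turn out to be an arithmetic relation between their codes.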

It is also important to determine exactly the strength of the mathematical assumptions used in metamathematical reasoning. Clearly, if a metamathematical consistency proof is to yield some non-circular justification or defense of a classical mathematical theory, the assumptions of the proof must not be stronger than those of that theory itself! (Franks doesn't devote much attention to the question of why someone uninterested in questions of "justification" would demand the same restrictions on the means used in metamathematics, but perhaps this is unimportant: questions of the strength of assumptions needed in various proofs are part of the phenomenology of mathematics, interesting even if questions of justification are left to one side.) Hilbert and his co-worker Bernays spoke of metamathematical proofs as proceeding by "contentual" and "intuitive" reasoning.

Franks is perhaps unjustly dismissive of their use of this epistemological vocabulary in characterizing the sort of proofs they sought. It is reasonably clear that both number-theoretic concepts like addition and multiplication and syntactic ones like concatenation pass muster, and that the use of mathematical induction to establish results whose formulation requires quantification over an infinite domain doesn't. Thus Primitive Recursive Arithmetic (essentially the system defined by Skolem in 1923 [3]) and analogous theories of concatenation would qualify as intuitive and contentual, whereas full (first-order) Peano Arithmetic and an analogous system of Quinean "protosyntax" probably wouldn't. But this judgment depends on our sense of when two systems, apparently concerned with different objects and formulated in terms of different primitive concepts, are "analogous." The epistemological language used by Hilbert and Bernays seems to be the only one that can be used to characterize what systems with very different primitives have in common. Once, however, we have Gödel's reduction of syntax to arithmetic, we no longer need the generality of the epistemological characterization: we can assume without loss of generality that the mathematics in metamathematics is all arithmetic, and specify the assumptions of a metamathematical proof by reference to particular axiomatic theories of numbers. (Tait [4] has argued that Hilbert's notion of "finitistic" proof should be identified exactly with Primitive Recursive Arithmetic. Franks expresses some skepticism on this, but does not discuss the issue at length.)
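For readers who have not met Primitive Recursive Arithmetic, a sketch of the kind of definition it sanctions (standard textbook material, not drawn from Franks) may help: each function is introduced by a base clause and a recursion clause that appeals only to the previous value, so any particular value is reached by a finite, surveyable calculation, with no quantification over the whole infinite domain of numbers.

```python
# Standard textbook examples of primitive recursive definitions (not taken
# from Franks's book): each function has a base clause and a recursion clause
# referring only to the previous value, so every particular value is computed
# by a finite calculation, with no quantification over all numbers.

def add(x, y):
    # add(x, 0) = x ;  add(x, y+1) = add(x, y) + 1
    return x if y == 0 else add(x, y - 1) + 1

def mul(x, y):
    # mul(x, 0) = 0 ;  mul(x, y+1) = add(mul(x, y), x)
    return 0 if y == 0 else add(mul(x, y - 1), x)

assert add(3, 4) == 7 and mul(3, 4) == 12
```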

Even if we worked with unreduced protosyntax (and so avoided relativity to the details of particular arithmetizations), notions such as that of a formal proof can be defined in more than one way. Some results in metamathematics -- most famously Gödel's Second Incompleteness Theorem -- are sensitive to these details: their proof depends on the particular definitions used. Such results are said to require "intensionally" correct definitions, in that they are provable if a definition capturing (enough of) the meaning of, e.g., "formal proof" is used, but might not be if a different definition -- even one true of the same strings of symbols (or, under arithmetization, of the same numbers) -- is substituted. Franks's most interesting (and most technically challenging) discussion, in chapters 4 and 5, is of this issue. Gödel originally proved his incompleteness theorems for a very strong system of (higher-order) Peano Arithmetic, and there is no serious question about the intensional correctness of the definitions he uses. (The artificiality of arithmetization can make this hard to see: Quine's use of protosyntax makes for beautifully clear exposition here!) Certain intuitive consequences of this intensional correctness -- statements of the nature of "if one column of formulas is a derivation of B from A, and another is a derivation of C from B, the result of putting one above the other is a derivation of C from A" -- required for the proof of the Second Incompleteness Theorem are readily derived in strong arithmetic theories.
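The closure properties at issue here are usually regimented as the Hilbert-Bernays-Löb derivability conditions; stating them in the standard way (a textbook formulation, not a quotation from Franks) may help fix ideas. For a theory T with provability predicate Prov_T:

```latex
% The Hilbert-Bernays-Loeb derivability conditions for a theory T with
% provability predicate Prov_T (a standard formulation, not quoted from Franks):
\begin{align*}
\text{(D1)} \quad & \text{if } T \vdash \varphi \text{, then } T \vdash \mathrm{Prov}_T(\ulcorner \varphi \urcorner) \\
\text{(D2)} \quad & T \vdash \mathrm{Prov}_T(\ulcorner \varphi \rightarrow \psi \urcorner) \rightarrow
  \bigl( \mathrm{Prov}_T(\ulcorner \varphi \urcorner) \rightarrow \mathrm{Prov}_T(\ulcorner \psi \urcorner) \bigr) \\
\text{(D3)} \quad & T \vdash \mathrm{Prov}_T(\ulcorner \varphi \urcorner) \rightarrow
  \mathrm{Prov}_T(\ulcorner \mathrm{Prov}_T(\ulcorner \varphi \urcorner) \urcorner)
\end{align*}
```

A definition of provability that holds of exactly the right numbers but cannot be verified, within the theory, to satisfy (D2) and (D3) will block the usual proof of the Second Incompleteness Theorem -- which is just the sensitivity to intensional correctness described above.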

When we try to extend our investigation to weaker axiomatic theories, things are less clear-cut. Should we take, for example, a proof in first-order logic to be a natural deduction (or even axiomatic) derivation, or perhaps a Beth-Smullyan tableau? When dealing with strong theories, our decision on such questions doesn't matter. In Peano Arithmetic, for example, we can prove that the same consequences are derivable from a first-order formula whatever we take a first-order proof to be. In weaker theories (theories in which, for example, the exponentiation function cannot be proven total: converting natural deduction derivations into semantic tableaux can produce a super-exponential increase in the number of symbols in a formal "proof") this is no longer so. But then we can ask, in connection with such theories, what counts as an intensionally correct definition of formal provability. After all, if a definition of provability based on arithmetizing natural deduction and one based on arithmetizing tableaux cannot be proven equivalent, it is at least not obvious that both can be thought of as correct formalizations of the notion of formal provability! I am not convinced of all Franks's conclusions here, but his discussion of these issues (which draws upon and summarizes work by several recent proof theorists, among them Robert Solovay, Pavel Pudlák and Samuel Buss) is fascinating and thought-provoking. It's enough to make one put up with the "naturalist" rhetoric.
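To give a feel for the kind of growth at issue, a toy illustration (a standard example of iterated exponentiation, not a calculation from Franks's book): a theory too weak to prove even that the exponentiation function is total has no hope of verifying a proof translation whose output may be an iterated exponential longer than its input.

```python
# A toy illustration of "super-exponential" growth (a standard example, not a
# calculation from Franks's book): a theory that cannot prove exponentiation
# total certainly cannot verify a translation whose output grows like this.

def exp_tower(n, base=2):
    """A tower of n copies of `base`: exp_tower(3) == 2**(2**2) == 16."""
    result = 1
    for _ in range(n):
        result = base ** result
    return result

print([exp_tower(n) for n in range(5)])   # [1, 2, 4, 16, 65536]; the next value is 2**65536
```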

The writing is clear, and the exposition of technical matters full enough to allow non-specialists to follow it. The technical matters are, however, technical. I think a student should have at least the logical background of, say, Boolos, Burgess, and Jeffrey's Computability and Logic before tackling this book. Some acquaintance with weak theories of arithmetic and such concepts as "syntactic cuts" would also be useful: parts of John Burgess's Fixing Frege are perhaps the most elementary introduction to these matters in print.

REFERENCES

[1] A. N. Whitehead, "On mathematical concepts of the material world," Philosophical Transactions of the Royal Society, 1906. Passage quoted is on pp. 17-18 of the reprinting in F.S.C. Northrop and Mason W. Gross, eds., Alfred North Whitehead: An Anthology.

[2] K. Gödel, "What is Cantor's continuum problem?" American Mathematical Monthly 1947. Quoted sentence is on page 180 of the reprinting in volume II of Gödel's Collected Works.

[3] T. Skolem, "The foundations of elementary arithmetic established by means of the recursive mode of thought, without the use of apparent variables ranging over infinite domains," English translation in J. van Heijenoort, ed., From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931.

[4] W. Tait, "Finitism," Journal of Philosophy, 1981.