Philosophy of Mathematics: Set Theory, Measuring Theories, and Nominalism


Gerhard Preyer and Georg Peter (eds.), Philosophy of Mathematics: Set Theory, Measuring Theories, and Nominalism, Ontos, 2008, 181pp., €79.00 (hbk), ISBN 9783868380095.

Reviewed by Douglas M. Jesseph, University of South Florida

2009.04.15


The ten contributions in this volume range widely over topics in the philosophy of mathematics. The four papers in Part I (entitled "Set Theory, Inconsistency, and Measuring Theories") take up proposed resolutions of the paradoxes of naïve set theory, paraconsistent logics as applied to the early infinitesimal calculus, the notion of "purity of method" in the proof of mathematical results, and a reconstruction of Peano's axiom that no two distinct numbers have the same successor. The papers in the second part ("The Challenge of Nominalism") concern the nominalistic thesis that there are no abstract objects. The two contributions in Part III ("Historical Background") examine the contributions of Mill, Frege, and Descartes to the philosophy of mathematics.

The editors propose that "[t]ogether, these articles give us a hint into the relationship between mathematics and the world" (Preface). Certainly, the papers on nominalism deal with this sort of issue, and the ontological status of mathematical objects is never far removed from Mill's or Descartes' philosophies of mathematics, but the collection lacks much of an over-arching theme. The absence of a salient organizing principle does not, however, mean that the works collected here are without value.

Of the four papers in Part I, the papers by Douglas Patterson and Andrew Arana stand out as significant contributions to the current literature. Patterson's "Representationalism and Set-Theoretic Paradox" takes up the issue of how best to handle the paradoxes of naïve set theory. Three questions guide Patterson's investigation: first, what should we make of the "naïve abstraction principle" that every open sentence determines a set; second, what is the relation between the set-theoretic paradoxes and such semantic paradoxes as the Liar or the Grelling paradox; third, how ought we to construe the languages in which set theory is stated? One traditional approach to the set-theoretic paradoxes is to restrict the abstraction schema by introducing a distinction between classes (for which abstraction holds, though some of them are "too big" to be members of anything) and sets (for which abstraction fails, although all sets can be members of some further set in an iterative hierarchy). If the distinction is accepted, the paradoxes of set theory are avoided: they are taken to be sui generis inconsistencies that arise, not from any semantic pathology, but rather from a confusion of sets with classes. As a bonus, the set/class distinction yields a standard semantics for set theory that is based on classical logic.
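
For concreteness, the naïve abstraction schema and the paradox it generates can be stated as follows (a standard formulation, not one drawn from Patterson's paper): for any open sentence \varphi(x),

\[ \exists y\,\forall x\,\bigl(x \in y \leftrightarrow \varphi(x)\bigr), \]

and taking \varphi(x) to be x \notin x yields the Russell set R = \{x : x \notin x\}, for which R \in R \leftrightarrow R \notin R, a contradiction in classical logic.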

Patterson finds this distinction unmotivated, and argues for a "settist" view that does not attempt to distinguish sets from classes. Moreover, he is not tempted by the prospect of using a paraconsistent logic for set theory and its semantics. His proposal is to retain classical logic, deny the set/class distinction, and abandon the goal of giving a semantics for set theory. The result is that the three questions guiding the inquiry have simple answers: naïve abstraction is simply false (because it is inconsistent), the set-theoretic and semantic paradoxes have a common source in a false view about meaning, and there is no need to give a semantics for set theory, since all the relevant work can be done without assuming that set-theoretic claims are meaningful. We may well believe that some set-theoretic claims are true and others false, and we may think we express such beliefs in a language that has well-defined truth conditions. But, Patterson insists, we can drop the assumption that talk about sets has any meaning and still communicate our beliefs. Following this proposal through demands rejection of the thesis of "representationalism" -- that belief is a relationship between a thinker and a (linguistic) representation that expresses the belief. This, in turn, requires a rather complex story about the nature of belief that goes beyond the scope of the current paper. The prospects for success on this front are at best uncertain, but it is at least an interesting study in the price to be paid in overcoming the set-theoretic paradoxes.

Arana's contribution, "Logical and Semantic Purity," studies the notion of "purity of method" in mathematics. Hilbert famously proposed that mathematical proofs should be "pure" in the sense that they rely only on principles that are required by the content of the result proved. In articulating this rather vague notion of purity, Arana draws a useful distinction between two ways in which a proof might be pure, corresponding to two ways of understanding what is required by the content of a theorem. On the one hand, there is logical purity, which can be taken as the requirement that a proof must use only those axioms or definitions strictly necessary to derive the result. On this way of thinking about purity of method, the goal is to isolate a minimal set of axioms required for proof of a theorem, and characterize a proof as pure when it employs only this minimal set. The proposal is not without its difficulties, but the intuitive idea is clear enough.[1] In contrast to the broadly syntactic or proof-theoretic notion of logical purity, semantic purity involves "whatever must be understood or accepted in order to understand [a] theorem" (p. 42). There are some subtleties involved in articulating what is involved in understanding or accepting a theorem, but the root notion is tolerably clear and Arana shows that Hilbert's "purity of method" can be read as demanding either logical or semantic purity.

The heart of the paper is an ingenious set of examples showing that some results require more concepts (or propositions) to be proved than to be understood, while other results demand more concepts (or propositions) to be understood than to be proved. In particular, Arana shows that the problem of finding a general solution of a cubic equation with rational coefficients is easily enough understood by someone familiar with elementary algebraic operations, but the general solution of the problem requires ineliminable recourse to complex numbers. Thus, "since complex numbers needn't be understood in order to understand this problem, we have an example where what is needed to solve a problem exceeds what is needed to understand it" (pp. 46-7). On the other hand, Euclid's proof of the infinitude of primes is a very simple application of principles that can be formalized in a fragment of first-order Peano arithmetic. But, plausibly, to understand the content of the theorem stating the infinitude of primes one must understand and accept a second-order induction axiom. This, however, goes well beyond what is needed to prove the theorem. Arana also considers the case of Gödel sentences, which express arithmetical content in the language of Peano arithmetic but cannot be proved in the consistent, recursively axiomatized theories for which they are constructed. These, too, are candidates for theorems whose content can be grasped by someone who understands the relevant axioms of arithmetic, even though such sentences cannot be proved from those axioms.
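
A standard illustration of the first phenomenon (my example, not one drawn from Arana's paper) is the cubic x^3 = 15x + 4, whose root x = 4 is delivered by Cardano's formula only by way of complex radicals:

\[ x = \sqrt[3]{2 + 11i} + \sqrt[3]{2 - 11i} = (2 + i) + (2 - i) = 4; \]

and in the casus irreducibilis, where a cubic irreducible over the rationals has three real roots, the passage through the complex numbers is provably unavoidable for any solution in radicals.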

The other two papers in Part I of the collection are of less interest. Mark Colyvan's "Who's Afraid of Inconsistent Mathematics?" is a very elementary presentation of two historical cases taken to illustrate the use of inconsistent mathematical theories -- the naïve set theory used by Frege and the infinitesimal calculus. The discussion of the calculus case seems particularly ill informed. Colyvan imagines that "[w]hen the calculus was first developed in the late 17th century by Newton and Leibniz, it was fairly straightforwardly inconsistent" (p. 31). There is no doubt that the foundations of both the Leibnizian calculus differentialis and the Newtonian method of fluxions were matters of controversy, but it is far from obvious that the charge of inconsistency is warranted. Leibniz took infinitesimals as "fictions" that could abbreviate otherwise complex exhaustion proofs, maintaining that infinitesimals were in principle eliminable and could be shown never to lead to inconsistency. Likewise, Newton held that his method of fluxions was based on concepts of uniform motion and acceleration that offered a coherent foundation guaranteed to avoid paradox. Colyvan's proposal is to explore the use of a paraconsistent three-valued system that would block the "explosion" by which an inconsistency in a classical system renders any sentence a theorem. His suggestion is that inconsistent mathematical theories might still be of use, provided that the underlying logic is altered. The discussion of paraconsistent logic is quite elementary, and Colyvan makes no attempt to work out how this logic might be applied. Perhaps the greatest lacuna here is the fact that working mathematicians generally accept classical logic and take provably inconsistent theories as candidates for revision. Thus, there is no obvious philosophical interest in the notion that mathematical practice might be reconstituted along the lines suggested here.
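
To see what a paraconsistent treatment would actually have to contain, consider the derivation that critics from Berkeley onward pressed against the early calculus, in which the increment dx is treated as nonzero (so that one may divide by it) and then as zero (so that it may be dropped):

\[ \frac{(x + dx)^2 - x^2}{dx} = \frac{2x\,dx + dx^2}{dx} = 2x + dx = 2x. \]

In a paraconsistent setting one may tolerate both dx \neq 0 and dx = 0 without every sentence thereby becoming a theorem; but, as just noted, Leibniz and Newton each had resources for reading such derivations so that no contradiction is ever asserted.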

Wilhelm Essler's "On Using Measuring Numbers according to Measuring Theories" completes Part I of the collection. Essler considers the axiom guaranteeing the infinity of the natural numbers, which asserts that any two distinct numbers have distinct successors. Russell and Whitehead had recourse to an axiom of infinity to derive this part of Peano arithmetic, but Essler argues that it follows instead from a general theory of measurement. Starting from Otto Hölder's 1901 treatment of magnitudes, he shows that, with magnitudes taken as primitive, the relevant Peano principle is derivable. It is, of course, no wonder that a relatively strong axiomatization of the theory of magnitudes can yield a strong result, so the interest of Essler's derivation is frankly minor.
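
(For reference, the principle in question is the injectivity of the successor function,

\[ \forall m\,\forall n\,\bigl(S(m) = S(n) \rightarrow m = n\bigr), \]

which, together with the axiom that 0 is not the successor of any number, guarantees that 0, S(0), S(S(0)), \ldots are pairwise distinct and hence that there are infinitely many natural numbers.)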

A nominalistic approach to the philosophy of mathematics is one that denies the existence of abstract objects (typically on the grounds that the causal isolation of abstracta would make them unknowable and their connection to the world of concreta quite inexplicable). Two obvious problems confront nominalism: how to give an account of mathematical truth or proof in a way that eschews a commitment to abstract objects, and how to navigate the "indispensability thesis" which claims that because our best physical theories are ineluctably committed to the existence of abstracta, nominalism fails. Jody Azzouni's contribution "The Compulsion to Believe: Logical Inference and Normativity" has little direct connection with nominalism (although he has many publications that defend a nominalistic treatment of mathematics). Instead, he is concerned with the connection between our intuitions about valid inference and the normative constraints that logical principles impose on formal derivations. He concludes that formal derivations are not the source of the intuitions that motivate acceptance of a result. Azzouni argues that, rather than rely on formal derivations, we use "inference packages" which he characterizes as "topic specific, bundled sets of principles naturally applied to certain areas" (p. 73). This may, indeed, account for the intuitive compulsion mathematicians often feel to accept a result. It is, however, more difficult to account for the intuitive compulsion we feel to accept the elementary inference forms of classical logic. On this point, Azzouni concedes that there is no way to account for the intuitions underwriting the strong normative role of classical logic without appealing to these very principles themselves.

The other three contributions to Part II deal much more directly with nominalism, although they are of relatively little interest. John P. Burgess and Gideon Rosen's 1997 book A Subject with No Object: Strategies for Nominalistic Interpretation of Mathematics covered essentially all of the various nominalistic projects philosophers have pursued, but the papers here take no note of the obstacles Burgess and Rosen identified to a successful nominalization of mathematics. Otávio Bueno's "Nominalism and Mathematical Intuition" proposes an "agnostic nominalism" that, rather than denying the existence of abstract mathematical objects, simply remains uncommitted on the question. He makes the familiar point that a mathematical Platonist's appeal to intuition (such as Gödel's famous account of the axioms of set theory "forcing themselves" on the mind of one who considers them) is inadequate to develop a convincing mathematical epistemology. His agnostic nominalism avoids such difficulties, but it is unclear how this agnosticism can be reconciled with any useful account of mathematical truth or proof.

Yvonne Raley's paper "Jobless Objects: Mathematical Posits in Crisis" considers four formulations of the "makes no difference" argument proposed by Alan Baker. The argument reasons that the existence of abstract objects is irrelevant to the concrete physical world (including human neurophysiology and cognitive structures). Thus, there can be no good reason to believe in such things. Raley considers various ways in which the argument can be formulated (in terms of the causal inertness of abstract objects, their indispensability to physics, their role in securing mathematical truth, and their role in accounting for mathematical knowledge). She concludes that postulating mathematical objects can do nothing to explain our ability to know mathematical truths, from which it follows that no argument from their alleged indispensability can be conclusive against the nominalist. The odd consequence is that on Raley's view there turn out to be no mathematical truths (aside from those made trivially true by vacuous universal quantification), so accounting for our mathematical knowledge seems an utterly pointless exercise. Susan Vineberg's brief essay "Is Indispensability Still a Problem for Fictionalism?" covers very familiar territory in the literature on nominalism and concludes that the indispensability arguments can be overcome by embracing mathematical fictionalism. This essay offers nothing new and would have benefited from taking account of at least some of Rosen's writings on fictionalism.

Part III of the collection contains two papers, one by Madeline Muntersbjorn on "Mill, Frege, and the Unity of Mathematics," another by Raffaella De Rosa and Otávio Bueno entitled "Descartes and Mathematical Essences." Muntersbjorn's concern is with what she terms the "unity of mathematics," although it is entirely unclear what kind of unity is at issue. Frege did not believe that all the branches of mathematics could be reduced to an underlying unified theory -- arithmetic, for Frege, is a definitional extension of logic, but geometry remains synthetic a priori in Kant's sense. Yet Frege did believe, contra Hilbert, that the sentences used in mathematics express nonlinguistic propositional contents that are the same regardless of the language used. It seems that by "unity" Muntersbjorn understands something like the objectivity of mathematical truth, or the determinate content of mathematical concepts. She admits that Frege's famous polemics against Mill's "psychologism" are essentially correct because Mill's account of the origins of mathematical knowledge depends too heavily on specific accidents of an individual's psychological history and cannot secure the objectivity we typically attribute to mathematical truths and concepts. Muntersbjorn goes on to argue that Mill's socio-political views were based on a conception of human psychology as essentially uniform across individuals, while Frege's notoriously reactionary politics denied the equality of humans. On her reading, Frege "rejected psychologism because the view presupposed the existence of something [he] did not believe in, namely a unity of mind shared by all human beings independent of origin" (pp. 154-55). This bizarre interpretation takes Frege's political views as structuring his account of logic. Frege's point, however, was that differences in individual psychological history (not psychological capacity) make it impossible to identify the content of a concept like "finite integer" with any specific past experience through which the concept was learned.

De Rosa and Bueno's account of Descartes' philosophy of mathematics is more interesting and important. They note that in some contexts (such as the Meditations) Descartes seems to opt for a variety of mathematical Platonism, while in other contexts (such as the Principles of Philosophy) he endorses something very much like a conceptualism. This apparent contradiction is resolved by reading the Cartesian doctrine of innate ideas as a kind of hybrid between Platonism and conceptualism. Certain capacities or ways of thinking are "engraved" in the mind by God and guide our thoughts in a way that makes the content of mathematical thought objective (in the sense that it is the same for all thinkers) without committing Descartes to the existence of an independent realm of self-subsistent mathematical objects. On the whole, the reading seems plausible and has the benefit of avoiding the familiar difficulties with both Platonism and conceptualism.



[1] Arana notes that there might be several different logically pure proofs of a given theorem from an axiom set. There is also a danger of trivialization: if we take A to be any finite axiomatization of a theory, then the single axiom formed by conjoining all the members of A will itself count as a logically minimal axiom set.