What is the relationship, if any, between the formal tools of logic and natural language? Jaroslav Peregrin's book aims to offer both an introduction to, and a philosophical assessment of, some of the main tools in the contemporary logician's toolkit, such as propositional logic, predicate logic, the lambda and epsilon calculi, and modal logic. Peregrin's main theses are that logic is mostly concerned with the relation of following from that is, in his view, to be found in natural language; that formal languages are essentially models of natural language; that, because adequate models must ultimately answer to reality, logic is essentially an a posteriori discipline; that more complex logical tools, such as the language of first-order logic and its underlying semantics, don't directly interpret natural language, but are rather part of a toolkit that is used to model parts of it, often in creative ways; and that, for this reason, logical laws are typically the result of a process of abstraction and idealisation.
The book is well written and accessible; it introduces, and offers a philosophical commentary on, the main formal tools developed by logicians over the past 140 years or so. In doing so, it covers philosophically important topics such as the interpretation of variables, higher-order logics, universal algebra, and the correspondence between valuations and deductive calculi in propositional logic.
The book is rather compact, which is certainly a virtue. The inevitable downside, however, is that most topics receive only cursory coverage. For instance, key topics such as non-classical logics, higher-order logic, and Carnap's categoricity problem -- that standard axiomatisations of classical logic are compatible with non-standard interpretations of the connectives and quantifiers (bar conjunction) -- are only very briefly discussed. And, most importantly, one of the book's main assumptions -- that there is a relation of following-from in natural language -- isn't really defended. Yet the assumption has been influentially challenged by a number of authors, such as Glanzberg. In general, the book doesn't provide even a telegraphic guide to the contemporary literature on the philosophy of logical systems. For instance, although Peregrin does dedicate a couple of sections to Carnap's categoricity problem, he doesn't attribute the problem to Carnap or cite the relevant literature. In addition, he offers a rather idiosyncratic characterisation of bilateralism (the view that logic should comprise rules for asserting and denying complex propositions), thereby missing out on the most elegant known response to the categoricity problem as well as on one of the main responses to standard intuitionist arguments against classical logic.
Chapter 1 sets out Peregrin's general views on logical and natural language. Logic, we are told, is in the business of providing formal tools -- artificial languages, methods of proof, a formal semantics -- to assess the validity of arguments in natural language. As Peregrin puts it, logic "is basically a study of overt reasoning" (p. 3). But, he points out, while we've learned a great deal about natural language arguments using those tools, they present their own problems:
the artificial languages that became the new dwelling-places of logic are artificial in the sense that they were built exclusively by means of our definitions. Therefore, everything we can find out about them cannot be other than a consequence of our . . . definitions. (p. 3)
To be sure, one might plausibly hold that artificial languages are models of natural language (Shapiro, 1998) and that, if the definitions are good, we can study aspects of natural language by studying formal, mathematically tractable languages that represent these aspects. And indeed, this is precisely Peregrin's view. The aim of formal languages is not to replace natural languages (which have developed over millennia, and work just fine); it is simply to allow us to study, with mathematical precision, the vernacular we already understand.
In particular, logical laws are born out of a process of reflective equilibrium: "the back-and-forth movement between considering the arguments which are . . . taken for correct and the tentative explicit articulation of the corresponding rules" (p. 5). Logical laws are not empirical generalisations, essentially because they're normative: they already govern natural language, if only implicitly.
So does the language of logic adequately represent (aspects of) natural language? Peregrin argues that the language of predicate logic is not "directly tied to natural language", since the quantifiers are governed by (natural deduction) rules involving variables "which have no direct counterparts in natural language" (p. 9). However, this is an extremely controversial thesis: it might be objected that we use these rules all the time in mathematical proofs (indeed, Gentzen's natural deduction rules were precisely aimed at modelling mathematical reasoning). It might even be argued that inductive uses of 'every' broadly conform to ∀-I: when we've established that the members of an arbitrary representative sample of objects have property Φ, we can thereby conclude that all objects have property Φ.
Chapter 2 recounts a familiar story: that there is a basic relation of following from to be found in natural language, that arguments have multiple premises and a single conclusion (the standard view), and that arguments instantiating the following-from relation are correct. But how to determine which arguments are correct? We can't take a poll, but we can observe that some arguments are more basic than others. For instance, instances of →-E are more basic than Peirce's Law. Peregrin suggests that we focus on logical validity, i.e., on arguments that are correct in virtue of their form, and identify "some basic forms" along with some ways to "derive further valid forms from already given ones" (p. 21).
Chapter 3 focuses on the languages of logic, with their distinction between parameters, whose "use is as old as logic", and constants. Parameters are "the principal means of abstraction" (p. 22); they make it possible to individuate logical forms and to isolate logical words -- the logical constants. Following Carnap, Peregrin takes the relation between logical expressions in natural languages and the logical constants to be one of "explication" (p. 24). He also rejects, plausibly enough, the Russellian view of formal languages as representing "the logical forms that are represented -- not very transparently -- by the means of natural languages" (p. 26).
Chapters 4-8 introduce and discuss propositional logic. In Chapter 4, implication is seen as a means of 'internalisation' (via →-I), i.e., as a means of expressing facts about following from by means of conditionals. Plausible as this view certainly is, it ignores the tradition originating with Kripke's seminal 1975 paper, which, in order to deal with the semantic paradoxes, recommends adopting a paracomplete logic, i.e., a logic that restricts, among other things, principles such as →-I and ¬-I.
Bilateralism is briefly discussed on p. 42 as a view of logic that resorts to the relation of counterfollowing, which obtains between a set of sentences Γ and a sentence A when, if every B ∈ Γ is the case, A is not the case. However, standard treatments of bilateralism (as found, e.g., in Smiley, Rumfitt, and Restall) take it to be rather the view that logical rules should involve principles for asserting and for denying propositions.
Peregrin accounts for negation via a primitive notion of inconsistency. We have an introduction rule, that if X, A is inconsistent, then X implies ¬A, i.e., X ⊢ ¬A; and an elimination rule, that if X ⊢ ¬A, then X, A is inconsistent, where X is inconsistent iff X ⊢ A for every A. These rules imply what Peregrin respectively calls ¬-I and ¬-E: that if X, A ⊢ B and X, A ⊢ ¬B, then X ⊢ ¬A; and that A, ¬A ⊢ B. However, this isn't a helpful way to present the basic principles for negation. Peregrin's rules absorb ⊥-E, the principle that absurdity, ⊥, entails any sentence A, into ¬-E. But this doesn't allow one to formalise, for instance, minimal logic, which licenses the more standard formulation of negation elimination, that A, ¬A ⊢ ⊥, but not the elimination rule for ⊥, that ⊥ ⊢ A. From an inferentialist point of view, Peregrin's formulation of negation introduction also makes ¬-I circular, if it is viewed as a definition of negation (Peregrin admits as much on p. 76).
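Schematically, and writing Inc(X) for 'X ⊢ C for every sentence C' (my shorthand, not Peregrin's), the package can be displayed as follows:

```latex
% Peregrin's primitive rules for negation (reviewer's reconstruction).
% Inc(X) abbreviates: X \vdash C for every sentence C.
\begin{align*}
&\mathrm{Inc}(X, A) \;\Longrightarrow\; X \vdash \neg A
  && \text{(introduction)} \\
&X \vdash \neg A \;\Longrightarrow\; \mathrm{Inc}(X, A)
  && \text{(elimination)}
\end{align*}
% The rules Peregrin calls \neg-I and \neg-E then follow:
\begin{align*}
&X, A \vdash B \ \text{ and } \ X, A \vdash \neg B
  \;\Longrightarrow\; X \vdash \neg A
  && (\neg\text{-I}) \\
&A, \neg A \vdash B
  && (\neg\text{-E})
\end{align*}
```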
A better story is told by Tennant: we assume a primitive relation of contrariety among atoms, so that, e.g., 'x is all red' and 'x is all green' imply ⊥; we view ⊥ as a "logical punctuation sign", i.e., we essentially interpret it as the empty set rather than as a propositional constant; we formulate ¬-I and ¬-E, respectively, as the principles that if X, A ⊢ ⊥, then X ⊢ ¬A and that A, ¬A ⊢ ⊥; and, finally, in order to get intuitionistic logic, we assume that ⊥ satisfies ⊥ ⊢ A, which, once ⊥ is read as marking an empty succedent, is just an instance of weakening on the right (Steinberger, 2011). This doesn't make the definition of negation circular, and allows it to express a 'relevant' notion of negation, which doesn't necessarily license the inference from A and ¬A to B. (Of course, if one wishes to reject ⊥ ⊢ A, one would need to reject the transitivity of the deducibility relation, as Tennant does.)
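Tennant's package can be displayed as follows (my reconstruction, with ⊥ read as a punctuation sign marking an empty conclusion rather than as a propositional constant):

```latex
% Tennant-style rules for negation (reviewer's reconstruction).
\begin{align*}
&X, A \vdash \bot \;\Longrightarrow\; X \vdash \neg A
  && (\neg\text{-I}) \\
&A, \neg A \vdash \bot
  && (\neg\text{-E}) \\
&\bot \vdash A
  && (\bot\text{-E; optional, added to obtain intuitionistic negation})
\end{align*}
```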
In Chapter 5, in which a standard Gentzenian calculus for propositional logic is introduced, Peregrin observes that on the assumption -- indeed, Dummett's Fundamental Assumption -- that the I-rules for $ "represent the only way in which we can reach a" sentence with $ dominant (p. 54), one can derive $'s (harmonious) E-rule from its I-rule. For instance, if the only way we can introduce A → B is via a derivation of B from A, then we can be sure that any situation in which one can assert A → B is a situation in which a derivation of B from A is available, which gives us →-E. Surprisingly, though, Peregrin doesn't even mention in passing the formidable difficulties faced by Dummett's assumption, discussed at length by Dummett himself.
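The derivation Peregrin gestures at can be reconstructed as follows (my sketch):

```latex
% From the Fundamental Assumption to ->-E (reviewer's sketch).
% Suppose X \vdash A \to B. By the Fundamental Assumption, the only way
% to reach A \to B is via ->-I, so a derivation of B from A (and X) is
% available. Given X \vdash A, grafting that derivation onto a
% derivation of A yields B; hence:
\[
\frac{X \vdash A \to B \qquad X \vdash A}{X \vdash B}
\quad (\to\text{-E})
\]
```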
Chapter 6 briefly introduces Hilbertian calculi and their algebraic structure; chapter 7 focuses on properties of calculi more generally. Peregrin introduces the notion of a material interpretation (interpretation_M), as any assignment of sentences of a natural language to sentences of a formal language. We can then say that an argument scheme is valid_M iff all its instances_M are correct arguments, that a calculus is sound_M iff all its provable arguments are valid_M, and that a calculus is complete_M iff all the valid_M arguments are provable in the calculus. Of course, soundness_M and completeness_M are not mathematical properties of the artificial language; so we can't prove whether they hold.
Chapter 8 briefly touches on non-classical logics, such as intuitionistic and paraconsistent logic. Again, it ignores bilateralist formalisations of logic, and hence sees classical logic as the result of adding classical reductio, that if X, ¬A ⊢ ⊥, then X ⊢ A, to intuitionistic logic. The chapter also contains a discussion of Quine's famous argument that to change the logic of a logical expression is to change its meaning.
Chapters 9-10 cover predicate logic. Peregrin's view here is that "variables are nothing that a logician discovers as (covertly) present in natural language, they are the logician's expedients of giving an account for the language and its workings". As a result, "the rules . . . involving variables do not have any direct counterpart in natural language", and so logicians' use of variables has led to logical regimentation becoming "more a matter of a creative art than a mechanical replacement of natural language expressions by artificial constants and parameters" (p. 99). The view isn't really argued for, however. Consider, for instance, the sentence "For every child, if she is Austrian, she is happy". Here, arguably, the pronoun "she" plays the role of a variable. Why is this simple-minded (and much-used in the classroom) thought not a good one? We're not told. What is more, the standard natural deduction I-rule for ∀, which allows one to infer ∀ξφ(ξ) from φ(τ) provided τ doesn't appear free in ∀ξφ(ξ) or in any of the assumptions on which φ(τ) depends, i.e., provided τ is effectively arbitrary, is constantly used in mathematics. So, one might think, this rule does have a direct counterpart in natural language.
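For reference, the rule at issue can be stated thus:

```latex
% The standard natural deduction introduction rule for the universal
% quantifier, with its side condition.
\[
\frac{\varphi(\tau)}{\forall \xi\, \varphi(\xi)} \quad (\forall\text{-I})
\]
% Side condition: \tau must not occur in \forall\xi\,\varphi(\xi) or in
% any undischarged assumption on which \varphi(\tau) depends -- i.e.,
% \tau must be effectively arbitrary.
```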
Peregrin concludes that, with the introduction of predicate logic, "logical constants are no longer understood as straightforward proxies for logical expressions of natural language but as parts of a toolbox that we use to articulate forms" (p. 105). This suggests an instrumentalist view of formal languages, according to which their purpose is not so much (among other things) to uncover logical forms, but to help us find more or less accurate models of natural language. Yet it may be objected that if we're good at articulating forms in the right kind of way, i.e., if our analysis of quantified expressions adequately accounts for the validity of natural language arguments involving those expressions, then it's legitimate to assume that we may have effectively uncovered the logical form of quantified sentences.
Chapters 11-13 center on formal semantics. Peregrin cites two standard arguments in favour of a semantic account of consequence: that, because of Gödel's First Incompleteness Theorem, there is more to consequence than derivability-in-S (since S will have Gödel sentences that are independent of S and yet intuitively true); and that, unlike derivability-in-S (and indeed the consequence relation of first-order logic), consequence isn't compact (since, as Tarski once pointed out, it intuitively validates the ω-rule). However, Peregrin doesn't mention standard proof-theoretic accounts of consequence (see, e.g., Prawitz, 1985), which do not reduce consequence to provability in any given system. Moreover, it may also be argued that the intuitive notion of consequence to which Tarski is referring is paradoxical (Murzi, 2014), since it intuitively validates principles that give rise to versions of Curry's Paradox.
Consequence in propositional logic is defined the standard way, as truth-preservation in all "acceptable truth-valuations" (p. 123), where the space of acceptable truth-valuations is delimited "by means of the well-known truth tables" (p. 124), i.e., by the standard interpretation of the logical connectives. But how is that interpretation fixed? Carnap famously showed that, if logic is formalised in the standard way (assertion-based, single-conclusion), our use of logical expressions only fixes the interpretation of conjunction -- this is Carnap's categoricity problem. Peregrin's discussion here is helpful -- it covers some important results by Hardegree about the relationship between proof-systems and the space of admissible valuations.
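The phenomenon is easy to exhibit concretely. The following sketch (my illustration, not Peregrin's) shows that the 'trivial' valuation making every sentence true respects any single-conclusion rule of the form 'if the premises hold, so does the conclusion' -- and hence cannot be ruled out by a standard proof system -- even though it flouts the truth table for negation:

```python
# A minimal sketch of Carnap's categoricity problem (reviewer's
# illustration). Formulas are just strings here; only their truth
# values matter.

def trivial_valuation(formula):
    """Assign True to every formula, regardless of its structure."""
    return True

# Some classical rules, stated as: premises (a list) / conclusion.
rules = [
    (["A", "A -> B"], "B"),          # modus ponens (->-E)
    (["A", "~A"], "B"),              # ex falso (~-E)
    (["A & B"], "A"),                # &-E
    (["A", "B"], "A & B"),           # &-I
]

def respects(valuation, premises, conclusion):
    """A valuation respects a rule iff it makes the conclusion true
    whenever it makes all the premises true."""
    return (not all(valuation(p) for p in premises)) or valuation(conclusion)

# Every single-conclusion rule is vacuously truth-preserving when
# everything is true...
assert all(respects(trivial_valuation, ps, c) for ps, c in rules)

# ...yet the valuation is non-Boolean: it makes both a sentence and its
# negation true, contradicting the truth table for ~.
assert trivial_valuation("A") and trivial_valuation("~A")
print("The trivial valuation respects all the rules, yet is non-Boolean.")
```

The same point holds however many rules one adds to the list, which is precisely why assertion-based, single-conclusion calculi cannot pin down the intended interpretation of negation.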
We then move to the interpretation of first-order logic. Peregrin presents the model-theoretic conception of consequence, on which truth and consequence are ultimately accounted for in terms of reference or denotation. This presupposes the availability of a universe of objects to interpret the terms and predicates of the language. But "what should be in our universe?" (p. 137). Although the natural answer is of course everything, Peregrin suggests that the only answer that can plausibly be given is everything that's contextually relevant. The chapter includes a brief discussion of second-order logic and its expressive power. But, again surprisingly, there is no mention of the availability, given a second-order meta-theory, of absolutely general interpretations of first-order theories, which allow everything to be part of our universe of discourse (Williamson, 2003).
Chapter 14 is on modal and intensional logic; chapter 15 addresses the semantic incompleteness and categoricity of second-order logic (pp. 166-169). The discussion here isn't exactly news. We are told that "we cannot uniquely characterise natural numbers within first-order logic" and that although "we can do it within second-order logic", the "price that we have to pay" for this is that second-order logic is incomplete, i.e., that for any candidate axiomatisation of second-order logic S, there are second-order logical truths that are not provable in S.
Chapter 16 offers a philosophical wrap-up. It begins by reiterating Shapiro's logic-as-modelling view: "There are human practices of (overt) reasoning and argumentation", and "we can see logical theories as capturing regularities of these practices" (p. 174). Models, however, must ultimately answer to reality. For instance, one can determine that all instances of the schema (φ ∧ ψ) → φ are true only if the formal ∧ and → adequately model aspects of the meanings of, respectively, 'and' and 'if' in the vernacular. Thus, since models must ultimately answer to reality, logic isn't an a priori enterprise (p. 182).
One of the main assumptions of the book is that natural language has a relation of logical consequence -- of following from, as Peregrin puts it. However, such an assumption has been forcefully criticised by Glanzberg (2015), essentially on the grounds that truth in contemporary formal semantics is absolute and not relative to a model, and that the model-theoretic notion of consequence only gets off the ground if we can quantify over interpretations of the language, a move that presupposes a notion of truth-in-a-model. In general, the book could have benefited from a more direct interaction with the contemporary literature.
All in all, Peregrin's book is a valuable entry point for beginning to think about the philosophy of logical systems. It is significantly more compact than, for instance, Tim Button and Sean Walsh's recent Philosophy and Model Theory. But it is inevitably also less rich, both technically and philosophically.
Many thanks to Brett Topey and the NDPR editors for very helpful comments. Work on this review was partly funded by the FWF (Austrian Science Fund, project number P29716-G24), whose support I gratefully acknowledge.
Button, T. and Walsh, S. 2018. Philosophy and Model Theory, Oxford University Press, Oxford.
Carnap, R. 1943. Formalization of Logic, Harvard University Press, Cambridge (Mass.).
Dummett, M. 1991. The Logical Basis of Metaphysics, Harvard University Press, Cambridge (Mass.).
Glanzberg, M. 2015. 'Logical consequence and natural language', in C. R. Caret and O. T. Hjortland (eds.), Foundations of Logical Consequence, Oxford University Press, Oxford, pp. 71-120.
Kripke, S. 1975. 'Outline of a theory of truth', Journal of Philosophy 72, 690-716.
Murzi, J. 2014. 'The inexpressibility of validity', Analysis 74(1), 65-81.
Prawitz, D. 1985. 'Remarks on some approaches to the concept of logical consequence', Synthese 62, 153-71.
Restall, G. 2005. 'Multiple conclusions', in P. Hájek, L. Valdés-Villanueva and D. Westerståhl (eds.), Logic, Methodology and the Philosophy of Science: Proceedings of the Twelfth International Congress, King's College Publications, London, pp. 189-205.
Rumfitt, I. 2000. '"Yes" and "No"', Mind 109, 781-824.
Shapiro, S. 1998. 'Logical consequence: Models and modality', in M. Schirn (ed.), Philosophy of Mathematics Today: Proceedings of an International Conference in Munich, Oxford University Press.
Smiley, T. 1996. 'Rejection', Analysis 56(1), 1-9.
Tennant, N. 1999. 'Negation, absurdity and contrariety', in D. M. Gabbay and H. Wansing (eds.), What is Negation?, Kluwer Academic Publishers, Dordrecht, pp. 199-222.
Williamson, T. 2003. 'Everything', Philosophical Perspectives 17(1), 415-465.