This volume is an announcement that an important school of philosophical logic is flourishing in Canada. The essays in *On Preserving* offer an original conception of logic called *preservationism* and studies of an accompanying inference relation called *forcing* (not to be confused with Cohen's eponymous methods in set theory). Peter Schotch and Raymond Jennings pioneered these ideas circa 1980, and the work is now carried on by their former student Bryson Brown, logical polymath Alasdair Urquhart, and a third generation: Dorian Nicholson, Kim Sing Leung, and Gillman Payette. And so the volume marks an arrival -- the Canadian school of paraconsistency. I came to this book hoping for a self-contained and accessible introduction to preservationism, which has otherwise been available only in scattered papers. The book will give an attentive reader a solid understanding of the preservationist program, from its foundations to its cutting edge. It is a landmark contribution to the projects of modal and paraconsistent logic, as well as to philosophical logic more generally.

Like other non-classical logics, preservationism begins from the idea that something is wrong with the classical inference relation. If we ever find ourselves with beliefs and commitments that are, as a whole, inconsistent, the classical rule of *ex falso quodlibet* reduces everything to triviality. This seems neither descriptively nor prescriptively adequate. The preservationist response, though, is not to replace the rules of classical logic, but to graft on a new piece of machinery, called the *forcing relation*, defined in terms of classical logic. Forcing, when possible, avoids *ex falso quodlibet* by being sensitive to the consistency of subsets of the premises, not of the premise set as a whole. To paraphrase (from p. 98):

Question: How do you reason from an inconsistent set of premises?

Answer: You don't, since every formula follows in that case. You reason from consistent subsets of premises.

Preservationism is presented as a contingency plan for when we are stuck with bad data (p. 30). This is an unusual and interesting tack amongst non-classical logics, others of which are commonly thought of as rivals, replacements, or alternatives to standard logic. It is unusual as a paraconsistent system, too, since at the global level *ex falso quodlibet* still holds.

In Schotch and Jennings' semi-autobiographical introduction, we learn that preservationism was born out of concerns with modal logic. Wanting to break the validity of certain theorems in deontic logic, the authors hit on the idea (independently discovered by Jónsson and Tarski in 1951) of generalizing the access relation between worlds; instead of modalities defined in terms of Rxy, the authors study Rxy_{0} … y_{n}. Schotch and Jennings presented this work in 1978, not then aware of or concerned with paraconsistency. An audience member pointed out that the *n*-frames looked to her like a model of non-trivial reasoning from inconsistent data. With *n* worlds on hand, there are in effect *n* separate compartments for storing data. For example, if we must store three formulae in two worlds, then by 'pigeonhole' reasoning, some two of the formulae must appear together at a world, but there need not be a world with all three formulae together. Inconsistent information in particular can be kept separate, and *ex falso quodlibet* subverted, as long as there isn't too much inconsistency relative to the number of worlds on hand. Thus, as if by accident, Schotch and Jennings found themselves with the seeds of a paraconsistent logic.

What is the forcing relation, and what does it preserve? The classical inference relation preserves truth between a set of premises and a conclusion. In their important chapter, Schotch and Payette argue that the classical relation is dubious, since a "true set of premises" is either a meaningless expression (how can a *set* be *true*?) or else presupposes that a set of data {A, … , B} can be incontrovertibly gathered up into a single sentence A & … & B, and the latter is, in effect, just what preservationism controverts. The forcing relation instead relates sets of sentences, and preserves how finely the sets must be divided up if all the resulting cells are to be internally consistent. In jargon, forcing preserves level. Again, the notion of forcing depends on an already-defined underlying logic X; the X-level of a set of sentences is *n* iff *n* is the least number of cells into which the set can be partitioned so that every cell is X-consistent. (If no such *n* exists, as is classically the case for a singleton like {A&~A}, then the level is said to be infinite.) Where Γ is a set of formulae, *A* a formula, and X a logic,

Γ forces A,

iff

for every partition of Γ into *n* consistent cells, where *n* is the least number of cells in any consistent partition of Γ, at least one cell X-proves A.

Forcing preserves level in the sense that the level of Γ is identical to the level of the set of formulae that Γ forces. Less tersely, forcing preserves the degree of coherence, or incoherence, of a set of premises.
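To make the definitions concrete, here is a minimal brute-force sketch (my own illustration, not from the book). It is restricted to a toy setting where formulae are literals like "A" or "~A", a cell counts as consistent iff it contains no literal together with its negation, and "the cell proves A" is simplified to mere membership; a real implementation would consult a prover for the underlying logic X:

```python
def consistent(cell):
    # Toy test: a cell of literals is consistent iff no atom occurs
    # both plain and negated.
    return not any(("~" + p) in cell for p in cell if not p.startswith("~"))

def partitions(items):
    # Yield every partition of a list into non-empty cells (as sets).
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for smaller in partitions(rest):
        for i, cell in enumerate(smaller):
            yield smaller[:i] + [cell | {first}] + smaller[i + 1:]
        yield smaller + [{first}]

def level(gamma):
    # Least n such that gamma splits into n consistent cells
    # (None if no consistent partition exists, i.e. infinite level).
    sizes = [len(p) for p in partitions(list(gamma))
             if all(consistent(c) for c in p)]
    return min(sizes) if sizes else None

def forces(gamma, a):
    # gamma forces a iff every partition into level-many consistent
    # cells has some cell containing a (membership standing in for
    # X-provability in this toy fragment).
    n = level(gamma)
    if n is None:
        return False
    return all(any(a in c for c in part)
               for part in partitions(list(gamma))
               if len(part) == n and all(consistent(c) for c in part))

gamma = {"A", "~A", "B"}
print(level(gamma))        # 2: e.g. the cells {A, B} and {~A}
print(forces(gamma, "B"))  # True
print(forces(gamma, "C"))  # False: arbitrary C is not forced, ex falso blocked
```

The last line is the point: although gamma is inconsistent, an arbitrary formula C does not follow under forcing, because each minimal consistent partition keeps A and ~A in separate compartments.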

The strength of the book throughout is in its attractive technical developments. The layout is effective, with many boxed definitions and clearly marked theorems.

Three of the early essays (chapters 3, 4, and 5) focus on the modal origins of preservationism. The main purpose of the early chapters is to provide proofs, including a lovely arrangement by Urquhart, that the *n*-ary frame conditions match up with the Schotch/Jennings condition for necessity, ◻:

◻A holds at world x

iff

for all y_{0}, … , y_{n} such that Rxy_{0} … y_{n}, A holds at one or more of y_{0}, … , y_{n}.

This is the way that ◻A and ◻~A can obtain without ◻(A & ~A). Nicholson's chapter includes some illuminating diagrammatic representations of the otherwise mysterious *n*-frames, and Leung and Jennings' chapter is a detailed treatment of the relationship between frames and modal axioms.

How all that relates to forcing and preservation, it must be said, is not immediately clear. Someone not already conversant with preservationism will need some time to settle in, and on this count the arrangement of chapters is not ideal. Faced with 50 pages of algebraic completeness proofs of various stripes, a reader needs, first, a non-negligible amount of mathematical sophistication (I doubt a single semester of formal logic would suffice); and second, while completeness is important, it is informative only after someone fully understands and wants to study preservationism.

I would recommend that readers new to preservationism skip ahead to the central chapters by Gillman Payette, starting with "Preserving What?" (co-authored with Schotch). This is where the main definitions are clearly laid out and some core theorems stated and proved. These helpful chapters detail the mechanics of forcing, and make precise just what can and cannot be preserved. They show, for example, that a logic can be completely characterized by the concept of level. Results like this are an indication that the preservationist approach has tapped a deep vein.

In his standout chapter "Preserving Logical Structure", Payette investigates which properties of the underlying logic X are retained by the X-level forcing relation. Many properties are. Monotonicity, though, is not, at least not in full generality: just because Γ forces *A*, it does not follow that the union Γ∪Δ forces *A*. This is interesting both technically and philosophically, since failure of monotonicity has elsewhere been thought to model belief revision. Payette shows that preservationists have only the qualified principle that if adding information to the premises does not make them more inconsistent, then forcing preserves level (p. 129). Other structural properties of the underlying logic X are fully preserved; in fact, forcing is so well behaved that even logics without a cut rule inherit a sort of cut rule, which is striking. With this work Payette has done a service to his school and given lucid attention to an interesting new technology.
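The failure of monotonicity can be seen in a toy computation (again my own sketch, not the book's machinery). Here cells may contain literals and implications between literals, encoded as tuples like ("A", "B") for A → B; provability is approximated by closure under modus ponens, and consistency by the absence of complementary literals in that closure:

```python
def closure(cell):
    # Close a cell under modus ponens. Literals are strings; an
    # implication "p -> q" between literals is the tuple (p, q).
    facts = {f for f in cell if isinstance(f, str)}
    rules = [f for f in cell if isinstance(f, tuple)]
    changed = True
    while changed:
        changed = False
        for p, q in rules:
            if p in facts and q not in facts:
                facts.add(q)
                changed = True
    return facts

def consistent(cell):
    facts = closure(cell)
    return not any(("~" + p) in facts for p in facts if not p.startswith("~"))

def partitions(items):
    # Yield every partition of a list into non-empty cells (as sets).
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for smaller in partitions(rest):
        for i, cell in enumerate(smaller):
            yield smaller[:i] + [cell | {first}] + smaller[i + 1:]
        yield smaller + [{first}]

def forces(gamma, a):
    # gamma forces a iff every minimal partition into consistent cells
    # has some cell whose closure contains a.
    sizes = [len(p) for p in partitions(list(gamma))
             if all(consistent(c) for c in p)]
    if not sizes:
        return False
    n = min(sizes)
    return all(any(a in closure(c) for c in part)
               for part in partitions(list(gamma))
               if len(part) == n and all(consistent(c) for c in part))

gamma = {"A", ("A", "B")}            # {A, A -> B}
print(forces(gamma, "B"))            # True: the single consistent cell proves B
print(forces(gamma | {"~A"}, "B"))   # False: monotonicity fails
```

Adding ~A pushes the level from 1 to 2, and one minimal consistent partition, {A} | {A → B, ~A}, has no cell proving B, so the enlarged set no longer forces B even though the original did.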

Brown and Nicholson's useful chapter produces a syntactic representation of forcing. They also develop the general failure of monotonicity and give it a philosophical basis. The intuition against *ex falso quodlibet* is that an inference from (maybe inconsistent) premises to *absolutely everything* just isn't an inference at all. There is no sense in which an arbitrary formula is derived from inconsistent premises. What, then, about the dual, the inference from no premises to a conclusion? Usually we call such a conclusion a theorem or logical truth. The authors point out that if *ex falso* is bad reasoning, then its dual is bad, too. It was at this point that I saw clearly that preservationism is about reasoning in the medium-sized realm of human experience -- filtering out what happens at the fringes of both contradiction *and* tautology. Using a multiple-conclusion logic, Brown and Nicholson succeed in characterizing an X-forcing relation that captures X-level forcing. In the related final chapter, Brown compares forcing to relevant and dialetheic logics. He shows in effect how preservationist ideas about ambiguity lead naturally to Dunn's truth tables for first-degree entailment (FDE). This makes sense in the context of blocking "trivial" inferences, since as a consequence of its semantics FDE has no theorems at all.

Motivational philosophical topics such as ethical reasoning and practical inference are introduced by Schotch and Jennings in chapter 2 and treated at length by Schotch in chapter 9. Schotch develops some generalizations and variations on forcing to model more closely what happens in real life cases of belief revision. He makes a good case that science works by holding various commitments that, nevertheless, can be overturned, that this involves some degree of inconsistency, and that this process can be modeled by forcing. There are also repeated suggestions, intriguing if rather hastily stated, that deontic logic ought to take paraconsistency seriously.

This is a slim volume, and, especially as the book is the first official presentation of the school, I would have liked to see more articulation and discussion of this neat philosophy of logic. In fiction, a collection of interconnected short stories can often signal a novel that never quite came together, and there is some of that feel here. The introductory essay provides some frame for the whole, but the chapters are not well integrated. Some material is repeated from chapter to chapter, while other material is never really explained at all. I think the book could have been a single co-authored monograph -- but it is not. Let me register a few loose ends. Apart from a nice diagram by Nicholson on page 56, I did not see any suggestions for how to understand, or even read, *n*-ary access relations on worlds, nor justification for the somewhat esoteric Schotch/Jennings ◻ condition. Similarly, the baseline motivations for generalizing the access relation, and for studying a paraconsistent logic at all, are sketchy. A recurring theme is that we, human reasoners, are prone to error, and that consequently our obligations are prone to conflict, since obligations are human conventions -- but this remains a gesture. It would have helped me to see more carefully worked, concrete examples of inconsistency, deontic or otherwise, and more responsible referencing and footnoting, to specify the literature the (sometimes rather curt) authors seem to have in mind. Instead, a robust strain of rhetoric runs through the book. The words "chafe", "balk", "awestruck", "jaundiced", "gladdened", "inviting", "dirty", "unpalatable", "browbeaten" and "scornfully" all appear between pages 85 and 88. A more reasoned invitation to preservationism would better serve the project.

On a basic level, for instance, there is throughout an important but largely implicit distinction drawn between a set {A, ~A} and a formula A&~A. The authors take it as basically obvious that the former can be a perfectly sensible (if unfortunate) bit of data, and basically obvious that the latter is always absurd. The latter claim will find many adherents, to be sure, but most of those adherents I think would "balk" at the former, too. Since preservationists stake out some controversial territory, they owe some explanation as to why other nearby territory is off limits -- beyond conjecture about what "even the angels" want (p. 10).

There are a few typos. There is a box where there shouldn't be at the bottom of p. 63; there is a capital sigma where there should be a capital gamma on p. 163.

Preservationism cuts a fascinating middle path. On the one hand, we can still say that any change of logic is, in a sense, a change of subject: {A&~A} still forces an arbitrary B, and allowing A&~A to be true would be to change the meaning of the connectives. The preservationists do not want to replace classical logic; if nothing else, they take it to be hopelessly entrenched, and in this way the program is very conservative (p. 19). On the other hand, we have in the forcing relation a flexible and perspicuous new attachment for our old logical machinery, one that brings it up to date and prevents misfires that cannot be explained away. "Forcing allows us to keep within sight of the familiar", Payette writes, "even while straying into unexplored logical territory" (p. 143). Preservationism is related to the first proposed paraconsistent logic, Jaśkowski's, and to the one later presented by Rescher and Brandom. It is also related to relevance logic, in that the semantics feature a generalized access relation on worlds and the logic turns out to be paraconsistent as a byproduct. Because it is a sort of meta-logic, deriving a new inference relation over a pre-existing pool of logics, forcing may provide tools for the philosophical discussion of logical pluralism, a topic that is today gaining momentum.

"On Preserving" will suit as part of an advanced undergraduate (or, more likely, graduate) course, and will generously repay independent study. The Canadian school is open and taking enrollments.