Cuts and Clouds: Vagueness, Its Nature, and Its Logic


Richard Dietz and Sebastiano Moruzzi (eds.), Cuts and Clouds: Vagueness, Its Nature, and Its Logic, Oxford University Press, 2010, 586 pp, $120.00 (hbk) ISBN 9780199570386.

Reviewed by Pablo Cobreros, University of Navarra

2010.12.14


Cuts and Clouds is a collection of 31 original essays on vagueness authored by influential philosophers currently working in the area. The book is meant to contribute to the contemporary debate on several questions concerning the nature and the logic of vague expressions. It is divided into two parts (Nature and Logic), each further divided into four sections: "What is Vagueness?", "Vagueness in Reality", "Tolerance and Paradox" and "Vagueness in Context" in the first part; "Supervaluationism", "Paraconsistent Logics", "Many-Valued Logics" and "Higher-Order Vagueness" in the second. The book is more similar to JC Beall's (2003) or to Paul Egré and Nathan Klinedinst's (2011) anthologies than to Rosanna Keefe and Peter Smith's (1997) or Delia Graff Fara and Timothy Williamson's (2002). Most of the articles in Cuts and Clouds presuppose a substantial amount of background knowledge; I would recommend the book to philosophers with an active research interest in the field of vagueness. The editors provide a helpful survey introducing the problems surrounding vagueness and unifying the themes discussed in subsequent chapters.

Since the topic of vagueness has been in the limelight for at least the last thirty years, there are many different vagueness-related questions, and it has become difficult to sort the participants in the debate into simple camps. This fact, along with the length of the book and the complexity of most of the essays, precludes a detailed treatment of each contribution in this review. I will discuss in some detail just those papers whose subject is closer to my personal interests (this should not be taken as an assessment of the quality of the papers that are not discussed).

What is Vagueness? and Vagueness in Reality

A standard dispute about the nature of vagueness is whether vagueness is a feature of the way reality is represented or a feature of reality itself. In their essays, Agustín Rayo, Scott Soames and Matti Eklund hold that vagueness is on the semantic side while Stephen Schiffer and Nathan Salmon hold that reality itself is vague (Stewart Shapiro criticises the grounds on which this distinction is made). Rayo develops a metasemantic account of vagueness. According to this view, the root of vagueness is not located in the kind of semantic status associated with the expressions of the language but in the linguistic practices that render those expressions meaningful. In particular, Rayo considers David Lewis' account of convention and argues that a convention might prevail to varying degrees; conventions regarding the use of an expression that have a low degree of prevalence are what lead to borderline cases. Soames addresses an objection to the possibility of partial definition raised by Glanzberg (2003). Soames also discusses several issues connected to his account of vagueness: the unknowability of undefined sentences, the connection to ignorance in borderline cases, the rejection of excluded middle and the law of non-contradiction and, finally, a case for both partial definition and context-sensitivity of vague expressions. Establishing an analogy with Quine's position on the inscrutability of reference, Eklund makes a distinction between first- and second-level indeterminacy. First-level indeterminacy concerns the attribution to vague sentences of a third differentiated semantic status between truth and falsity. Second-level indeterminacy is the idea that there is no determinately best way of assigning a semantic value to a given vague sentence. Eklund makes a case for vagueness as second-level indeterminacy and discusses several objections to this view. Brian Weatherson discusses the general question of (as Eklund (2005) puts it) what vagueness consists in, making a case for vagueness as indeterminacy in contrast to other characterizations (Eklund's vagueness as tolerance, Patrick Greenough's vagueness as epistemic tolerance and Nicholas J.J. Smith's vagueness as closeness).

Dorothy Edgington provides a critical survey of Roy Sorensen's recent work on epistemicism (Sorensen's radical epistemicism relies on his controversial idea of ungrounded truth). According to Edgington, the prospect of an adequate theory of vagueness involves resisting the temptation of a reductive analysis of definiteness. Schiffer and Salmon argue for the existence of vagueness in the world. According to Schiffer, a theory of vague propositions should be compatible with the Q-constraint: if one is in a quandary as to whether x is F, there is nothing incorrect about one's being in that state. Schiffer argues that truth-status theories of vagueness (whether bivalent or non-bivalent) are inconsistent with this constraint (at this point, however, it looks as if Schiffer leaves out glut theories). Schiffer sketches a theory of vague properties according to which these are individuated by the use made of the words expressing them and argues that the theory respects the Q-constraint.

Tolerance and Paradox

One of the tough problems of vagueness is to provide a good solution to the sorites paradox. Ideally, the solution should explain not just what is wrong with the argument (generally, whether unsound or invalid) but where and why our intuitions are fooled. For example, if one holds that the argument is unsound because the tolerance premise is not true, one should explain why it is so plausible in the first place. Sven Rosenkranz argues for an agnostic solution to the paradox, a solution according to which we are not in a position to know whether or not there are cut-offs in suitable sorites series for vague expressions. Leon Horsten discusses the nature of phenomenal concepts and its relation to the thesis that perceptual indiscriminability is not transitive. In particular, Horsten is concerned with arguments from Raffman (2000) and Fara (2001) to the effect that, contrary to the common opinion, perceptual indiscriminability is transitive. Hartry Field discusses Paul Horwich's acceptance of classical logic in the case of vagueness (particularly, Horwich's acceptance of the least number principle). Field makes use of strong Kleene logic to provide a solution to the sorites; issues concerning higher-order vagueness and a unified solution with semantic paradoxes are discussed.

Mario Gómez-Torrente develops what he calls the dual picture of vagueness. Dual because according to it, occasions of use of a sorites-susceptible predicate might be divided into regular occasions (in which case the predicate has an extension) and irregular occasions (in which case the predicate lacks an extension). The view makes use of a Kripkean view on the meaning fixation mechanism at work for sorites-susceptible predicates. According to it, the reference of this kind of expression is fixed with the aid of linguistic preconceptions, sentences that are firmly accepted by competent speakers. Gómez-Torrente explains how linguistic preconceptions help to fix the reference of sorites-prone predicates for the regular occasions, but how they fail to do so for occasions where preconceptions are in conflict (these include, of course, cases related to the sorites paradox). His dual picture of vagueness is intended to possess the benefits of different theories: it does not need to make use of a non-classical semantics but rejects the idea of a sharp cut-off in suitable sorites series, while at the same time it does justice to the idea that the reference fixation mechanism is successful on a number of occasions. In discussing the different options for addressing the paradox, Gómez-Torrente seems to overlook theories admitting truth-value gluts as tolerant solutions to the paradox. Though these theories make use of a non-classical semantics, I think they nevertheless deserve attention in the present context since they are able to make true all the premises in the paradox and thus, in some sense, to do justice to all the intuitions underlying it (I will come back to this point in the discussion of Beall's paper).

Vague expressions (vague predicates to simplify) seem to be fully tolerant in the sense that if x and y are similar enough in P-relevant respects, then if x is P so is y. On the other hand, vague predicates allow us to make distinctions and so they are useful when talking about the world. The problem is that these two properties are in tension since full tolerance seems to entail no cut-offs while utility seems to entail cut-offs. In 'Vague Intensions: a Modest Marriage Proposal', Beall endorses a nihilist view on vagueness, taking full tolerance as the characteristic feature of vague expressions. According to Beall, the vague intension of a predicate F is a relation r relating F to a multiplicity of extensions. In particular, r determines a core extension, but also relates F to more inclusive extensions in accordance with the (or an) F-relevant similarity relation. That is, if a is in an extension Eᵢ such that r(F, Eᵢ) and b is F-similar to a, then there is an extension Eᵢ₊₁ extending Eᵢ which includes b and is such that r(F, Eᵢ₊₁). Under reasonable assumptions, this entails that the intension of a vague predicate is all-inclusive. Consequently, Beall endorses the nihilist conclusion that all vague predications fail to be true. Now, if vague predicates have intensions of this sort, how can we save their utility? Beall concedes that vague predicates are not useful, at least in the sense explained above, but he claims that we can still save an appearance of utility. Vague predicates give rise to homonymous relatives that do have cut-offs (predicates whose extension is one particular extension among the several related to the vague predicate). The marriage proposed in the title concerns nihilists and non-nihilists. According to Beall, theories of vagueness that reject full tolerance (such as supervaluationism and epistemicism) are different theories of how sharp homonymous relatives of vague predicates are generated. The nihilist is right about vague predicates. The non-nihilist is right (or at least some theory in the market might be right) about their precise relatives. Beall ends the paper with a discussion of objections and replies to this peculiar view on vagueness. Though I find appealing the idea that vague expressions are tolerant, it is not obvious that full tolerance is inconsistent with utility (this debate is quite sensitive to how the terms are understood, so to fix ideas it is best to adopt a principled way of talking). Accepting tolerance amounts to accepting that, for any vague predicate P, the following principle is valid:

∀x ∀y ((x ∼P y & Px) → Py)

('x ∼P y' means that x and y are similar enough in P-relevant respects)

Utility amounts to the idea that there are objects that are P and objects that are not P (and not both). Now, tolerance is valid in some dialetheist theories like Graham Priest's logic of paradox, LP, and (with qualifications) in subvaluationism (see Dietz (2011), sect. 5, for a nice survey on these solutions to the paradox). However, LP and SbV share an important shortcoming: the validity of tolerance comes at the price of losing the validity of modus ponens and, as Zardini (2008, p. 339) points out, the failure of modus ponens deprives the tolerance principle of its intended force. Can we endorse tolerance plus utility while making use of a decent conditional? Zardini (2008) and Cobreros, Egré, Ripley and van Rooij (2010) provide a positive answer.
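To make the LP point concrete, here is a minimal sketch of my own (a toy model, not anything in the volume): with the three LP values, the designated values 1 and 0.5, and the material conditional, every tolerance premise of a toy sorites receives a designated value while modus ponens fails.

```python
# Toy LP check: tolerance premises designated, modus ponens invalid.
DESIGNATED = {1.0, 0.5}

def neg(a):
    return 1.0 - a

def disj(a, b):
    return max(a, b)

def cond(a, b):
    # material conditional: A -> B defined as not-A or B
    return disj(neg(a), b)

# Toy sorites model: degrees of 'P' along an eleven-member series,
# clearly P at one end, clearly not P at the other, a glutty middle.
P = [1.0] * 4 + [0.5] * 3 + [0.0] * 4

# Every tolerance premise P(a_i) -> P(a_i+1) receives a designated value ...
premises = [cond(P[i], P[i + 1]) for i in range(len(P) - 1)]
print(all(v in DESIGNATED for v in premises))                        # True

# ... but modus ponens fails: A and A -> B can both be designated while B is not.
a, b = 0.5, 0.0
print(a in DESIGNATED, cond(a, b) in DESIGNATED, b in DESIGNATED)    # True True False
```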

In a line similar to Beall's, Peter Pagin tries to do justice to tolerance while keeping the usefulness of vague expressions. Pagin's proposal is based on what he calls 'central gaps'. A central gap is a restriction of the domain of a sorites sequence, wider than the tolerance level (the threshold of similarity between objects that makes the tolerance principle compelling), that leaves only positive and negative cases of the vague predicate. Pagin notes that the informal formulation of tolerance can be understood in two different ways and provides a model-theoretic characterization of central gaps, showing that the weaker formalization of tolerance is true in any model with a central gap. Pagin's defence of the success of our use of vague expressions is based on the idea that speakers can ordinarily dismiss the objects in the central gap for the predicate, and on those occasions bivalence and tolerance can be consistently maintained (in ordinary contexts there is a contextual restriction of the quantifier domain). In other contexts, such as those in which we consider the tolerance of vague predicates, we might be unable to dismiss objects in the gap, and so vague predicates are incoherent.
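By way of illustration, here is a toy version of the idea (my own schematic, with invented thresholds, not Pagin's formal model): once a stretch of the series wider than the tolerance level is dismissed from the domain, a perfectly bivalent predicate satisfies tolerance on what remains.

```python
# Schematic illustration of a central gap for a bivalent 'tall'.
heights_cm = list(range(150, 191))      # a sorites series for 'tall'
tolerance = 1                           # adjacent heights count as similar

def tall(h):
    return h >= 170                     # bivalent, with a sharp cut-off

# Central gap: dismiss a stretch around the cut-off wider than the tolerance level.
gap = set(range(165, 176))
restricted = [h for h in heights_cm if h not in gap]

# Tolerance on the restricted domain: similar items never differ over 'tall'.
violations = [(x, y) for x in restricted for y in restricted
              if abs(x - y) <= tolerance and tall(x) != tall(y)]
print(violations)   # [] -- bivalence and tolerance coexist once the gap is dismissed
```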

Vagueness in Context

Contextualist theories of vagueness have become popular in the last decade. The reason is that they promise to provide an ideal solution to the sorites paradox (explaining in particular why it looks like vague predicates are tolerant) without involving a drastic revision of classical logic. According to contextualists (at least to an important group of them) there is a cut off point in any suitable sorites series. However, the extension of the predicate varies with the context in a way that makes it look as if there is no such cut off. There is a cut off point somewhere in the series, but it shifts with context so it is never in the part of the series we are looking at.

One simple objection to contextualist theories is that, although vague expressions are typically context-sensitive, vagueness remains when we keep the context fixed. In their paper, Jonas Åkerman and Patrick Greenough try to address this simple objection against contextualism. The authors argue that contextualism (in particular, their preferred generic form of contextualism: boundary-shifting contextualism [p. 277]) can explain how vagueness remains even when the context is held fixed. Åkerman and Greenough consider two options available for the contextualist. The epistemicist contextualist strategy adopts an epistemicist explanation (for example, that of Williamson's based on margins for error) for context-fixed expressions. The radical contextualist strategy makes use of higher-order vagueness as vagueness in the meta-language to explain the vagueness of context-fixed terms. The interesting point of this response to the simple objection is that epistemicists lack a principled reason to dismiss hybrid explanations such as the epistemicist contextualist (since, as they observe, well known epistemicist accounts of vagueness are not purely epistemicist) and non-epistemicists such as supervaluationists lack a principled reason to dismiss the radical contextualist explanation since they make essential use of higher-order vagueness as vagueness in the meta-language (Keefe 2000, ch. 8).

Though the paper provides, I think, a satisfactory answer to each of the objectors, there is something puzzling about taking it as a general response to the simple objection. There are two different and seemingly incompatible replies, one for the epistemicist and another for the non-epistemicist. Assuming that the authors succeed in their purpose, the first response is persuasive for the epistemicist but not for the non-epistemicist, while the second is persuasive for the non-epistemicist but not for the epistemicist. So although for everyone there is a persuasive reply, there is no (single) persuasive reply for everyone.

Andrea Iacona discusses an argument of Williamson's (1995) according to which classical logic plus disquotational truth entails an epistemic view of vagueness. The response to Williamson is based on a distinction between two notions of saying: a truth-conditional notion and an intentional notion. What is said in the intentional sense by an utterance of a sentence 's' can be described as the set of admissible valuations that are compatible with the understanding of the sentence that can be rightfully ascribed to the speaker. What is said in the truth-conditional sense is an interpretation, that is, a way of understanding a sentence in a sufficiently specific way for the purpose of ascribing truth or falsity to that sentence. Iacona argues that truth and falsity apply to sentences relative to interpretations (p. 293) and, thus, disquotational truth is preserved since for any interpretation i: 'p' is true if and only if p (the preservation of classical logic is guaranteed by the classicality of valuations). When a sentence, as uttered on a given occasion, is a borderline case (and so there is more than one valuation compatible with the actual understanding) there is nothing said by the sentence in the truth-conditional sense (p. 294). In this way, classical logic plus disquotational truth does not entail epistemicism: the disquotational schema is unrestrictedly accepted, but utterances in borderline cases do not say anything (in the truth-conditional sense), and thus there is nothing to be ignorant of either. Iacona alleviates the tension between this position and the intuition that utterances in borderline cases really do say something by means of his distinction between the two notions of saying: utterances in borderline cases say something in the intentional sense.

The paper closes with an explanation of the sorites paradox and a comparison between standard supervaluationism and McGee and McLaughlin's (1995) proposal, which is a close cousin of Iacona's. It is not easy to come up with an adequate supervaluationist notion of saying, and perhaps Iacona's development can be applied elsewhere (I am thinking of Andjelkovic and Williamson (2000) and discussions of it in García-Carpintero (2007) and López de Sa (2009)). I have, however, a query concerning the proposal. The orthodox supervaluationist avoids an epistemicist view by acknowledging that there might be truth-value gaps; Iacona points out that the gappy treatment leads to failures in classical logic. On Iacona's account, it looks as if this sort of problem is simply moved to the level of saying. For example, where 'p' is uttered in a borderline case, the sentence 'There is something p says (in the truth-conditional sense)' is true since every valuation makes it true (in Iacona's jargon, every valuation overlaps on that sentence), and so the set of all admissible valuations compatible with the understanding of 'p' is in fact an interpretation (in Iacona's sense) of that sentence. So it seems there is something 'p' says after all. Of course, the account might require only that 'p' does not say any particular one of the admissible valuations, but this leads to the failure of classical logic at the level of saying (a true existential generalization without a verifying instance).

Max Kölbel explores how vagueness as a semantic phenomenon can be accommodated in a standard semantic framework along the lines of Kamp and Lewis' framework for modal indexical languages. Kölbel argues that among the three strategies -- ambiguity, indexicality and sensitivity to the circumstance of evaluation -- only the last provides an adequate explanation of vagueness as extension-indeterminacy. According to that alternative, a sentence containing a vague predicate in a given context of use might express a proposition whose truth-value varies according to different sharpenings.

Faultless disagreement is the idea that disagreement on a particular subject need not involve some of the parties being mistaken. One might wonder whether judgments involving borderline cases are cases of faultless disagreement. In his paper, Dan López de Sa argues that this is not the case. The reason is that in borderline cases, we do not typically (and should not) respond by taking a view. Thus, in the case of vagueness, there are no contrasting judgments that could serve as building blocks for the appearance of faultless disagreement. López de Sa argues, first, that this claim is accounted for by paradigmatic semantic and epistemic views of vagueness. In the case of a semantic view, such as supervaluationism, the weakest sensible norm of assertion states that one should not assert an untrue proposition (thereby predicting that one should not take a view in borderline cases). In the case of epistemicism, it is natural to adopt a stronger norm of assertion: one should not assert unknown propositions, which, again, entails that one should not take a view in borderline cases. López de Sa completes his discussion by considering how the claim that one should not take a view in borderline cases is compatible with apparently incompatible responses to borderline cases. I think that to make his claim stronger, López de Sa could have considered views other than standard supervaluationism and epistemicism. In particular, Ripley (2010) argues that some experimental data seem to receive a better explanation in the frameworks of non-indexical contextualism and dialetheism.

Supervaluationism

Schiffer (1998 and 2000) poses the following objection to supervaluationism. Consider Harry, a borderline case of baldness, and his friend Renata, who says that Harry is bald. The sentence 'Renata said that Harry is bald' is true, but according to supervaluationism, the sentence should be evaluated with respect to all the admissible ways of making 'bald' precise. Now, it looks false that Renata said any of the precise propositions resulting from making 'bald' precise (Renata didn't say, for example, that Harry is bald₃₅₄, where 'bald₃₅₄' means having at most 354 hairs). Schiffer's objection extends to indirect discourse involving de re interpretations of vague expressions like 'Everest is where Al said Ben was' and demonstratives like 'There is where Al said Ben was'.

In their papers, Manuel García-Carpintero and Rosanna Keefe supply different responses to this objection on behalf of the supervaluationist. García-Carpintero's response makes use of Kaplan's account of the truth-conditions of de re attributions in which modes of presentation play a role (the adoption of this account is based on Fregean considerations for reference shift in indirect discourse). This strategy commits the view to the idea that there are vague entities, but García-Carpintero argues that this commitment is compatible with vagueness as semantic indecision since these are mind- and language-dependent entities, not belonging to the objective world.

Keefe's response to Schiffer's objection follows a more orthodox supervaluationist line. The key issue concerning speech reports such as 'Renata said that Harry is bald' is that, according to the supervaluationist, there is a penumbral connection between what is said by Renata and the report: for every precisification pₙ, 'Renata said that Harry is baldₙ' is true in pₙ. Thus, the report is true in every precisification (supertrue); this, of course, does not imply that there is a way of making precise 'bald', say 'baldₙ', such that according to every precisification Renata said that Harry is baldₙ. More generally, although for every precisification, there is an n such that Renata said that Harry is baldₙ, there is no n such that for every precisification, Renata said that Harry is baldₙ. The supervaluationist can coherently maintain that the report is true without a commitment to the idea that Renata said something precise. Keefe shows that a similar explanation works for de re attributions such as 'Everest is where Al said Ben was'. In the final part of her paper, Keefe considers whether reports involving demonstratives pose a particular difficulty for supervaluationism. Keefe argues that problems surrounding demonstratives are not particular problems of vagueness and that they equally affect theories other than supervaluationism.
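The quantifier-scope structure of Keefe's reply can be displayed with a trivial sketch of my own (invented thresholds, purely for illustration): each precisification verifies the report via the threshold it itself fixes, but no single threshold does so across all precisifications.

```python
# Quantifier-scope sketch: 'for every precisification some n' vs 'some n for every precisification'.
precisifications = [350, 354, 360]      # hypothetical admissible thresholds for 'bald'
candidates = range(340, 371)            # candidate precise contents 'Harry is bald_n'

def said_on(p, n):
    # On precisification p, 'Renata said that Harry is bald_n' is true iff n is
    # the threshold that p itself assigns to 'bald' (the penumbral connection).
    return n == p

for_every_prec_some_n = all(any(said_on(p, n) for n in candidates) for p in precisifications)
some_n_for_every_prec = any(all(said_on(p, n) for p in precisifications) for n in candidates)

print(for_every_prec_some_n)    # True  -- the report is true on every precisification
print(some_n_for_every_prec)    # False -- no one precise thing was said on all of them
```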

In her paper, Fara discusses two problems for the supervaluationist theory: the failure of truth-functionality and the explanation of why the sorites inductive premise seems true (what Fara (2000) calls the psychological question). The first issue has been discussed quite a lot, but almost all the arguments rest on intuitions about truth-functionality and penumbral connections. Interestingly, Fara adopts a different strategy here. Leaving aside intuitions about penumbral connections (intuitions with which a bivalentist such as Fara agrees), the only justification of how a disjunction might be true without either disjunct being true relies on the idea that, in a certain sense, each disjunct could be true. But Fara presents a case that, seemingly, cannot be justified in these terms. The case is one in which a disjunction is true even when each disjunct is not only untrue, but unsatisfiable by supervaluationist standards. Consider an operator for definiteness 'D' and define α to be a borderline case of F (BFα): ¬DFα & ¬D¬Fα. In standard supervaluationist logic (that is, a normal modal semantics for 'D' in which accessibility is reflexive and which replaces the standard definition of logical consequence with global consequence) the sentence 'p & ¬Dp' is unsatisfiable, as is the sentence '¬D¬p & ¬p'. Thus, 'p & Bp' and 'Bp & ¬p' are likewise unsatisfiable. However, Bp might well be true in a model and 'p or ¬p' is true in every model. Thus 'Bp & (p or ¬p)' will be true in any model in which 'Bp' is true; but this sentence is classically equivalent to '(p & Bp) or (Bp & ¬p)' (note that each disjunct is unsatisfiable by supervaluationist standards).
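Fara's case can be checked concretely. The following is a small sketch of my own (a two-point supervaluationist model with universal accessibility, not Fara's own machinery): the disjunction comes out supertrue while neither disjunct is supertrue in any model built from these precisifications.

```python
# Supervaluationist check: supertrue disjunction, unsatisfiable disjuncts.
from itertools import product

TRUE, FALSE = {'p': True}, {'p': False}

def p(v):
    return v['p']

def definitely(phi, precs):
    # 'D phi': phi holds on every admissible precisification of the model.
    return all(phi(v) for v in precs)

def borderline(phi, precs):
    return not definitely(phi, precs) and not definitely(lambda v: not phi(v), precs)

def left(v, precs):                      # p & Bp
    return p(v) and borderline(p, precs)

def right(v, precs):                     # Bp & ~p
    return borderline(p, precs) and not p(v)

model = [TRUE, FALSE]                    # the two precisifications disagree on 'p'
print(all(left(v, model) or right(v, model) for v in model))   # True: disjunction supertrue

# Neither disjunct is supertrue in any model over these precisifications:
models = [list(m) for size in (1, 2) for m in product([TRUE, FALSE], repeat=size)]
print(any(all(left(v, m) for v in m) for m in models))         # False
print(any(all(right(v, m) for v in m) for m in models))        # False
```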

Regarding the psychological question, Fara considers Keefe's (2000) account according to which there is a scope confusion (the scope confusion depends on particularities of supervaluationist semantics). I agree with Fara that supervaluationism is unlikely to address the psychological question, but I think this worry extends to many theories of vagueness. I believe that a successful explanation ultimately requires a plain acceptance of tolerance (as in Zardini (2008) and Cobreros, Egré, Ripley and van Rooij (2010)). With respect to the first point of Fara's, the commitment to true disjunctions with unsatisfiable disjuncts certainly looks bad. But it is clear that there are two notions of satisfaction playing a role in Fara's argument: local and global truth. The first is concerned with the truth-value a sentence might get if we make its vague expressions precise. The second is about the truth-value of a sentence after the superevaluation (in this respect it is illustrative that examples involving pink, etc., concern atomic sentences and their negations while Fara's disjunction involves complex sentences). '(p & Bp)' is globally unsatisfiable but is locally satisfiable, and it is the local notion of truth which is at work in the justification of true disjunctions with untrue disjuncts. By analogy, standard supervaluationists maintain that 'A 34-year-old woman is in her early thirties' (the example is from Montminy (2008)) is a necessarily borderline sentence and, thus, it is impossible for it to be true. But the truth of the corresponding instance of excluded middle does not threaten the justification based on the possibility of making expressions precise (even though the disjuncts, in some sense, cannot be true).

Paraconsistent Logics

Supervaluationist logic and subvaluationist logic are duals. Based on this fact, Dominic Hyde argues that any virtue or vice of one theory transfers to the other in dual form. More particularly, Hyde criticises the failure of truth-functionality in both theories. The supervaluationist's non-truth-functionality is present in the solution to the sorites paradox (the existential generalization expressing that there is a cut-off point in a sorites series is true although there is no particular item in the series of which it is true) and in the validity of classically valid sentences (the truth of 'A or ¬A' does not entail the truth of either disjunct). Hyde argues that the examples from Keefe and Edgington fail to provide independent evidence for the non-truth-functional behaviour of the existential quantifier and of disjunction.

Still, the supervaluationist might try to argue that his theory is better than subvaluationism on the grounds that supervaluationist logic coincides with classical logic for single-conclusion arguments. However, (this is a nice part of Hyde's paper) the supervaluationist cannot dismiss the relevance of multiple-conclusion arguments. The reason is (roughly) that in the supervaluationist theory, due to its paracomplete character, denial cannot be equated with assertion of the negation. In order to fully spell out the link between assertion and denial, on the one hand, and logical consequence on the other, the supervaluationist needs to make use of multiple conclusions (p. 396). In this respect, the departure of supervaluationism from classical multiple-conclusion arguments is as significant as the departure of subvaluationism from multiple-premise arguments.

Hyde's conclusion is that supervaluationism and subvaluationism offer different but equally inadequate views on vagueness. The source of this inadequacy is to be placed in the failure of truth-functionality that enables the retention of classical validities. I think that people interested in the discussion about truth-functionality in vagueness should take this paper into account. In particular, Hyde's argument for the role of multiple-conclusions in the supervaluationist theory is, in my opinion, a significant contribution. Once we look at multiple conclusions, supervaluationist logic loses much of its initial appeal.

Priest considers philosophical problems of identity in relation to change. The paper develops a formal specification of identity in which the properties of consistency, substitutivity and transitivity can fail. Identity is defined in a second-order language with Leibniz's Law in a standard way; the trick is that the underlying logic is not classical logic but a second-order formulation of the Logic of Paradox LP. The standard definition of identity within LP yields a relation that is symmetric and reflexive but not necessarily transitive, as suggested by examples involving identity through change (substitutivity and consistency also fail). Priest takes it that classical identity is fine as long as there are no reasons to believe that some of its properties should fail: 'consistency is a default assumption … classical properties of identity may be invoked unless and until the default assumption is revoked' (p. 411). This idea is formalized in a non-monotonic consequence relation: minimally inconsistent (second-order) LP. Priest ends the paper with an application of this account of identity to the case of vagueness. According to Priest, the existence of some sort of cut-off in sorites series is demonstrated by the forced-march sorites paradox (p. 413). The reason why we find this cut-off so counter-intuitive is that sentences on each side of the cut-off have in fact the same value (the account of identity is here applied to metalinguistic identity; its non-transitivity prevents, as Priest puts it, the value from bleeding from one end of the series to the other).

Although I am sympathetic to the general approach, I have one concern. Priest gives a general answer to the objection that the conditional involved in the definition of identity must be a conditional supporting modus ponens, and I think the general answer is fine. But I think there is a problem connected to the use Priest makes of that definition of identity in order to explain why the existence of cut-offs (of any sort) looks implausible to us. Friends of tolerance think that an appropriate response to the sorites paradox, including the explanation of why the tolerance principle looks true, requires endorsing the tolerance principle. The tolerance principle is valid in LP; however, friends of tolerance think that LP does not provide an adequate response since, as pointed out before, the failure of modus ponens deprives the tolerance principle of its intended force. The same situation holds for Priest's explanation of why the existence of cut-offs looks so odd to us (compare this question with the question of why the tolerance principle looks to be true). Since the definition of identity is based on a conditional for which modus ponens fails, the dissatisfaction of a friend of tolerance carries over from the validity of tolerance in LP to the fact that sentences on each side of the cut-off have the same value.

Degree Theories and Many-Valued Logics

A natural way to generalize classical semantics to non-classical values consists in extending the set of values and redefining the logical constants as functions on the new set of values. Degree theories of vagueness expand the classical set of values to infinitely many values (commonly, the set of real numbers in the closed interval [0, 1]). A sentence might take a truth-value in the continuum between 1 (perfect truth) and 0 (perfect falsity). Logical constants are generalized for the new set of values, making use of numerical functions (this sort of strategy is loosely referred to as fuzzy semantics). For example, Łukasiewicz infinite-valued logic defines the logical connectives this way (where |A| denotes A's degree of truth): |¬A| = 1 − |A|, |A ∧ B| = min{|A|, |B|}, |A ∨ B| = max{|A|, |B|} and |A → B| = 1 if |A| ≤ |B|, otherwise |A → B| = 1 − |A| + |B|.
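For concreteness, here is a minimal sketch of my own (purely illustrative) implementing the clauses just stated and applying the conditional clause to a sorites: a conditional whose antecedent is only slightly truer than its consequent is itself very nearly true.

```python
# Lukasiewicz clauses and a degree-theoretic sorites.
def neg(a):
    return 1.0 - a

def conj(a, b):
    return min(a, b)

def disj(a, b):
    return max(a, b)

def cond(a, b):
    # |A -> B| = 1 if |A| <= |B|, otherwise 1 - |A| + |B|
    return 1.0 if a <= b else 1.0 - a + b

# Degrees of 'bald' drop by 0.01 from each member of the series to the next,
# so each tolerance conditional gets degree 0.99: almost, but not quite, true.
bald = [1.0 - 0.01 * i for i in range(101)]
premises = [cond(bald[i], bald[i + 1]) for i in range(100)]
print(round(min(premises), 2), round(max(premises), 2))    # 0.99 0.99
```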

Graeme Forbes is concerned with three cases that pose a difficulty to the classical interpretation of identity: changes in objects (as with Theseus' ship), cases of gradual destruction and Chisholm's Paradox (a modal variant of Theseus' ship). Forbes argues that the three cases should be addressed with the same apparatus (the Uniformity Constraint). The author holds that in the aforementioned cases there is no fact of the matter concerning some identity statements (Forbes argues that whether this is a claim about the objects involved or about the concept of identity is not a substantial question). Forbes develops a semantics for identity that mimics a fuzzy logic (and so inherits the merits of fuzzy logic to address the psychological question) but avoids a commitment to degrees of identity. To answer standard problems concerning the suitability of fuzzy logics to model vagueness, Forbes argues that some features of the models employed should not be taken as representational features.

One standard argument against degrees is that these theories make no real progress over classical semantics. The idea is that degrees of truth are ultimately committed to unknowable semantic boundaries and, in that case, there is no point to the more complex semantics of degrees. Thus, if one is inclined towards an epistemicist explanation of vagueness, it appears that degrees of truth do not have any appeal. In his contribution, John MacFarlane argues against this standard opinion. Based on an observation of Schiffer's (concerning the different phenomenology of partial belief in cases of uncertainty and partial belief due to vagueness), MacFarlane argues for a hybrid theory of vagueness combining degrees of truth with hidden semantic boundaries. He rejects classical semantics since it forces a wrong explanation of the partial belief characteristic of borderline cases (it forces the interpretation of partial belief in borderline cases as a case of uncertainty). MacFarlane's fuzzy epistemicism provides an elegant solution to this problem. In addition, MacFarlane addresses traditional objections against degrees of truth from the perspective of fuzzy epistemicism: higher-order vagueness, the problem of multidimensional predicates and worries concerning degree-functionality.

Part of the difficulty of a truth-value gap account depends on the idea that there is a sui generis speech act of denial that is not reducible to the assertion of a negation. Mark Richard addresses the Frege-Geach objection about how to combine the speech act of denial with compounding constructions like conditionals. The basic idea is that not only negation but also the other logical constants contribute to the force conventionally associated with a sentence. Richard develops a semantics for the force associated with connectives based on a distinction between assertion and denial as first-order commitments and disjunction and conjunction as second-order commitments. In addition, the author tries to address some objections, first against truth-value gaps in general and second against the trisection thesis (a sorites series is divided into three segments: the true cases, the false ones and the gappy ones).

Fuzzy logics do not have much support among philosophers as a solution to vagueness. This contrasts with the fact that fuzzy logics have been used in practice for several vagueness-related issues. In his paper, Peter Simons develops an account that tries to capture the advantages of fuzzy logics on the one hand and the advantages of supervaluationism (particularly regarding penumbral connections) on the other, combining aspects of both theories. Simons takes from supervaluationism the idea of a multiplicity of admissible valuations and from fuzzy semantics the ascription of numerical values.

As pointed out by Schiffer (and by MacFarlane in this volume), the classical treatment of subjective probabilities yields wrong results in the case of vagueness. Nicholas Smith considers this problem and develops a generalization of the classical treatment for Łukasiewicz's infinite-valued logic. Smith provides a unified account of subjective probabilities that yields the classical results for a perfectly precise language but is sensitive to Schiffer's observation for vague languages. The unified treatment constitutes an argument for degrees of truth.

Higher-Order Vagueness

I take it that higher-order vagueness is one of the most difficult problems of vagueness. This is probably due to the fact that a precise and clear characterization of the phenomenon has proved to be elusive (characterizations usually make use of particular features of this or that theory). One way to characterize higher-order vagueness is as a consequence of the notion of a borderline case together with the intuition that there are no sharp boundaries (of any sort) in a suitably long sorites series. Imagine a long series of patches of colour ordered from clearly red to clearly orange, each patch in the series indiscriminable in colour from its adjacent patches. There seems to be no sharp transition from clearly red to clearly non-red objects (that is, there is no patch in the series that is clearly red followed by a clearly non-red patch). This requires, of course, cases that are 'neither clearly red nor clearly not red', that is, borderline cases of 'red'. But there seems to be no sharp transition either from clearly clearly red to clearly not clearly red objects (this requires borderline cases of 'clearly red'), nor from clearly clearly clearly red to clearly not clearly clearly red, etc. Though empirical factors might rule out higher-order vagueness at some finite level, it is usually assumed that theories of vagueness should treat non-terminating higher-order vagueness as, at least, a logical possibility.

In her paper, Diana Raffman discusses two forms of higher-order vagueness: the first concerning the existence of borderline cases of borderline cases (iterated indefinitely), the second concerning the vagueness of certain complex metalinguistic predicates with prescriptive force, like 'mandates application of 'old''. Raffman argues that both forms of higher-order vagueness are less theoretically important than philosophers commonly assume.

In a line similar to Raffman's, Crispin Wright tries to show that higher-order vagueness is an illusion grounded on a misconception of the nature of ordinary ('first-order') vagueness. Wright discusses two intuitions underlying the motivation for higher-order vagueness: the ineradicability intuition and the seamlessness intuition. The first, attributed to Dummett and Russell, concerns the idea that introducing a third category between two contrary vague predicates F and G does not make either F or G precise. The second concerns the idea that there seems to be no sharp transition (of any kind) in suitable sorites series and that, in order to do justice to this, we must assume an endless hierarchy of borderline cases. In this light, higher-order vagueness is understood as what Wright calls 'the buffering view': 'an infinite hierarchy of kinds, each potentially serving to provide an exclusion zone and thereby preventing a sharp transition, in a suitable series, between instances of distinctions exemplified at the immediately preceding stage of the hierarchy' (p. 527).

Wright argues that there are problems concerning this view, that it is not well motivated by either intuition and that it is at odds with the nature of ordinary (first-order) vagueness. A problem for the buffering view comes when we consider the truth of each gap principle needed to ensure the absence of sharp transitions in a sorites series. As Wright argued (1987, 1992), the truth of the gap principles conflicts with the rule of D-introduction (see 'DEF' on p. 535). (Fara (2003) actually shows that gap principles are unsatisfiable for finite sorites series given the supervaluationist notion of global validity.) Further, Wright shows that, even in the absence of D-introduction, the buffering view has to face a revenge problem formulated in terms of an operator of absoluteness. Wright also argues that the seamlessness intuition cannot be fully captured by means of the buffering view. His arguments are connected to forced-march versions of the sorites paradox. Finally, Wright argues that the plausibility of the ineradicability intuition rests on a misunderstanding of the nature of (first-order) vagueness. According to Wright, the ineradicability intuition makes use of the notion of a borderline case as if it were a third category between clear cases of contrary concepts. Wright claims that it is this incorrect understanding of the nature of (first-order) vagueness that engenders the commitment to the buffering view.

I have just two quick remarks on this difficult paper. First, in order to accommodate gap principles we need to reject the rule of D-introduction. However, this does not entail rejection of Dummett's principle (the idea that 'α is F' is incompatible with 'α is not definitely F') as Wright claims (p. 540). Cobreros (2008) presents a logic, regional validity, weaker than global validity but still in the spirit of a truth-value gap theory. The set of sentences {p, ¬Dp} is regionally unsatisfiable (acknowledging Dummett's principle) but the rule of D-introduction is not regionally valid. Cobreros (2011) discusses how gap principles can be accommodated given the notion of regional validity. The second remark concerns the revenge problem. I agree that this is a problem for most theories of vagueness (not, however, if Cian Dorr's contribution to this volume is correct). But still, the strengthened version of the paradox can be accommodated in the subvaluationist theory (see Cobreros 2010).

Suppose we have a semantics for an operator for definiteness 'D' that establishes when a sentence of the form 'Dφ' is true. Then we define a sentence φ to be ultratrue iff all sentences in the set {Dⁿφ | n ∈ ω} are true. Are there ultratrue sentences? Dorr's paper develops an argument to show that the answer to this question is negative. The argument works in a reasonable way for Williamson's epistemic theory; Dorr's efforts are directed to show that the argument can be adapted for non-epistemic readings of definiteness.

Dorr points out a direct consequence of the idea that there are no ultratrue sentences: the rule of D-introduction cannot be taken as valid. The reason is that, if the relevant notion of logical consequence is transitive, then a false sentence can be derived from any sentence (p. 573). In fact, for a notion of deductive consequence with the rules of Cut and Reflexivity, the addition of D-introduction would, in a clear sense, make any theory inconsistent. For it can be shown, making use of Cut, Reflexivity and D-introduction, that:

Claim: if {Dⁿγ | γ ∈ Γ, n ∈ ω} ⊢ φ then Γ ⊢ φ

Let '⊨' be the intended notion of logical consequence and assume that '⊢' is complete with respect to it. If there are no ultratrue sentences, then for any non-empty set Γ the following must hold: {Dⁿγ | γ ∈ Γ, n ∈ ω} ⊨ ⊥; by completeness, {Dⁿγ | γ ∈ Γ, n ∈ ω} ⊢ ⊥ and, by the previous claim, Γ ⊢ ⊥ (note also that, given D-introduction, if a sentence follows from the empty set, then it is ultratrue). Clearly, if there are no ultratrue sentences, the rule of D-introduction must be rejected.
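For what it is worth, the derivation behind the claim can be sketched as follows (my reconstruction from the three rules mentioned, not Dorr's own presentation; Cut is taken in the form: if Γ ⊢ δ for every δ in Δ and Δ ⊢ φ, then Γ ⊢ φ).

```latex
\begin{align*}
&\Gamma \vdash \gamma            && \text{for each } \gamma \in \Gamma \quad \text{(Reflexivity)}\\
&\Gamma \vdash D^{n}\gamma       && \text{for each } \gamma \in \Gamma,\ n \in \omega \quad \text{(iterating D-introduction)}\\
&\{D^{n}\gamma \mid \gamma \in \Gamma,\ n \in \omega\} \vdash \varphi && \text{(antecedent of the Claim)}\\
&\Gamma \vdash \varphi           && \text{(Cut on the two preceding lines)}
\end{align*}
```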

A first consequence of this concerns the notion of logical consequence that the supervaluationist should be committed to. Canonical supervaluationism (Fine (1975), Keefe (2000)) endorses global validity, according to which D-introduction is a valid rule of inference. If there are no ultratrue sentences, however, this is not a tenable position. The supervaluationist might (should) still endorse the weaker notion of regional validity (Cobreros 2008, 2011) in which D-introduction is not valid, but which is, nevertheless, truth-value gap friendly. A second consequence relates, more generally, to worries concerning higher-order vagueness. Gap principles are locally satisfiable for finite sorites series, but their absolute definitization is not (this is the kind of revenge problem pointed out by Wright). If there are no ultratrue sentences, however, theories committed to local validity are not threatened by this. In general, if there are no ultratrue sentences, problems concerning higher-order vagueness are much less compelling than is usually acknowledged. Dorr's argument, if successful, has an important impact on the problem of higher-order vagueness.

Cuts and Clouds contains many novel, interesting theses on most of the questions discussed in the recent literature on vagueness. Philosophers with an interest in the contemporary debate (and its future development) should not miss this volume.

REFERENCES

Andjelkovic M. and Williamson, T. (2000) Truth, Falsity and Borderline Cases. Philosophical Topics 28: 211-244.

Beall, JC. (2003) Liars and Heaps: New Essays on Paradox. Oxford University Press.

Cobreros, P. (2008) Supervaluationism and Logical Consequence: A Third Way. Studia Logica 90(3): 291-312.

Cobreros, P. (2010) Paraconsistent Vagueness: A Positive Argument. Synthese.

Cobreros, P. (2011) Supervaluationism and Fara's Argument Concerning Higher-order Vagueness. In Egré, P. and Klinedinst, N. (2011).

Cobreros, P., Egré, P., Ripley, D. and van Rooij, R. (2010) Tolerant, Classical, Strict. Journal of Philosophical Logic.

Dietz, R. (2011) Indeterminacy and Vagueness. In Horsten, L. and Pettigrew, R. (2011).

Egré, P. and Klinedinst, N. (2011) Vagueness and Language Use. Palgrave Macmillan (forthcoming).

Eklund, M. (2005) What Vagueness Consists In. Philosophical Studies 125(1): 27-60.

Fara, G. D. (2000) Shifting Sands: An Interest Relative Theory of Vagueness. Philosophical Topics 28: 45-81. Originally published under the name 'Delia Graff'.

Fara, G. D. (2001) Phenomenal Continua and The Sorites. Mind 110: 905-935. Originally published under the name 'Delia Graff'.

Fara, G. D. (2003) Gap Principles, Penumbral Consequence and Infinitely Higher-order Vagueness. In Beall (2003): 195-222. Originally published under the name 'Delia Graff'.

Fara, G. D. and Williamson, T. (2002) Vagueness. Ashgate Publishing.

Fine, K. (1975) Vagueness, Truth and Logic. Synthese 30: 265-300.

García-Carpintero, M. (2007) Bivalence and What is Said. Dialectica 61: 167-190.

Glanzberg, M. (2003) Against Truth Value Gaps. In Beall, JC. (ed.) (2003): 151-194.

Horsten, L. and Pettigrew, R. (2011) The Continuum Companion to Philosophical Logic. Continuum Publishing Corporation (forthcoming).

Keefe, R. (2000) Theories of Vagueness. Cambridge University Press.

Keefe, R. and Smith, P. (1997) Vagueness: A Reader. MIT Press.

López de Sa, D. (2009) Can one Get Bivalence from (Tarskian) Truth and Falsity? Canadian Journal of Philosophy 39: 273-282.

McGee, V. and McLaughlin, B. (1995) Distinctions without a Difference. Southern Journal of Philosophy 33 (supp.): 203-251.

Montminy, M. (2008) Supervaluationism, Validity and Necessarily Borderline Sentences. Analysis 68(1): 61-67.

Nouwen, R., van Rooij, R., Schmitz, H.-C. and Sauerland, U. (eds.) (2010) Vagueness in Communication. Berlin: Springer (forthcoming).

Raffman, D. (2000) Is Perceptual Indiscriminability Nontransitive? Philosophical Topics 28: 153-175.

Ripley, D. (2010) Contradictions at the Borders. In Nouwen et al. (2010).

Schiffer, S. (1998) Two Issues of Vagueness. The Monist 81: 193-204.

Schiffer, S. (2000) Vagueness and Partial Belief. Philosophical Issues 10. E. Villanueva (ed.) Blackwell: 220-257.

Sorensen, R. (1985) An Argument for the Vagueness of 'Vague'. Analysis 45(3): 134-137.

Williamson, T. (1995) Definiteness and Knowability. Southern Journal of Philosophy 33 (supp.): 171-191.

Wright, C. (1987) Further Reflections on the Sorites Paradox. Philosophical Topics 15: 227-290.

Wright, C. (1992) Is Higher-Order Vagueness Coherent? Analysis 52(3): 129-139.

Zardini, E. (2008) A Model of Tolerance. Studia Logica 90(3): 337-368.