In recent years Nozick's notion of knowledge as tracking truth has witnessed a revival. Nozick's idea is that an agent knows a proposition p when p is true, the agent believes that p, and the belief tracks the truth: if p were not true the agent would not believe p, and if p were true the agent would believe p.^{[1]}

Sherrilyn Roush has written an interesting book trying to alleviate some well known alleged defects of Nozick's notion of knowledge. One of these alleged defects is that Nozick's agent does not respect various forms of logical omniscience, in particular closure under known implication. It is important to remark here that this is not due to computational limitations of any kind but is rather a consequence of a normative feature of Nozick's definition. We will provide specific examples below, but a typical case where logical omniscience fails (in Nozick's account) is one where the agent knows that p yet fails to know some entailed proposition q, even when the agent knows that p entails q (i.e. the rule of closure under known implication fails). Other forms of logical omniscience (some of which also fail in Nozick's account) would impose knowledge of all tautologies; another would impose closure under adjunction; and yet another would impose closure under entailment (i.e. if A is known and A entails B then B should be known).

Roush has modified Nozick's original account in various ways. As a result she claims that the modified account recovers closure under known implication. We will discuss this claim below, but first notice that Nozick's definition is rather rich. It delivers an account of knowledge by presupposing in the background both the notion of belief and, more centrally, the notion of the counterfactual conditional. Until recently the resulting notion of knowledge had not been studied in detail (from a logical point of view) by formal epistemologists. Exceptions are the interesting insights provided by Timothy Williamson's essay about the limits of knowledge (Williamson, 2000) and some results presented in an essay that Rohit Parikh and I recently wrote (Arló-Costa and Parikh, 2006).

Roush's strategy for reformulating Nozick's notion is to appeal to a radical form of Bayesian epistemology where the only primitives are betting behavior and classical Kolmogorovian conditional probability. Belief is then reconstructed in terms of betting behavior, and conditionals are reduced to conditional belief, in turn represented by classical conditional probability. The move is symptomatic of a view that tends to dominate contemporary philosophy of science, where radical forms of probabilism of this kind are taken for granted and praised as improvements over allegedly more opaque notions, like the notion of a conditional itself. The probabilism in question is radical in its reluctance to assume non-probabilistic primitives (aside, in this case, from betting behavior). So, belief is introduced probabilistically as follows:

S has a belief that p if and only if in a situation controlled for disturbing factors in which S distributed money over p and -p, with equal payoff amount per bet for p and -p, S would bet at least 19 times more on p than on -p (p. 48)

As long as the agent is coherent, this is equivalent to working with high probability, with the threshold set at 95%. It is useful to notice here that this notion of belief is not closed under adjunction. Take two propositions p and q whose intersection carries 90% of the probability of the space, while the p-and-not-q worlds carry only 5% and the q-and-not-p worlds carry the remaining 5%. Then it is clear that both p and q are independently believed but their conjunction fails to be believed. This observation will be useful below.
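Since belief here reduces to a probability threshold, the failure of adjunction can be checked with elementary arithmetic. The following sketch is my own illustration (the function name and representation are not from Roush's book, though the probabilities match the example just given):

```python
# Illustrative sketch (my own; not code from the book): belief as coherent
# betting at odds of at least 19:1, i.e. probability at least 0.95, and a
# check that such belief is not closed under adjunction.

def believes(prob: float, threshold: float = 0.95) -> bool:
    """Betting at least 19 times more on p than on -p corresponds, for a
    coherent agent, to assigning p probability at least 19/20 = 0.95."""
    return prob >= threshold

# The example from the text: the p&q worlds carry 90% of the measure,
# the p-and-not-q worlds 5%, and the q-and-not-p worlds 5%.
P_p_and_q = 0.90
P_p = P_p_and_q + 0.05
P_q = P_p_and_q + 0.05

assert believes(P_p)            # p is believed
assert believes(P_q)            # q is believed
assert not believes(P_p_and_q)  # the conjunction p & q is not believed
```

Any pair of believed propositions whose conjunction drops below the threshold yields a counterexample of this kind.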

It is also very important to see that Roush is not focusing here on *rational* betting behavior. Although this is not stated explicitly in the book, Roush is apparently willing to allow for belief elicited under incoherent betting behavior. In situations of this kind the agent would be vulnerable to a Dutch Book argument, but apparently this is not a concern the author has (personal communication with the author). This observation will be equally important below.

The notion of knowledge as tracking truth is reformulated as follows: Agent S *knows by tracking* that p if and only if (I) p is true, (II) S believes p, (III) P(b(p) | p) > s, and (IV) P(-b(p) | -p) > s, where the threshold s is also set at .95. Now it is clear that even if the underlying notion of belief were closed under consequence, this notion of knowledge as tracking truth would still not be closed under entailment. In particular it is not true that if S knows that p and p entails q, then S knows that q. To remedy this alleged defect Roush proposes to complement the notion of knowledge as tracking with a recursive definition of knowledge that embeds the tracking notion:
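Clauses (III) and (IV) are easy to make concrete on a finite probability space. The sketch below is my own illustration (none of the names or numbers come from the book): worlds are pairs recording whether p is true and whether S believes p, and tracking demands that both conditional probabilities exceed s = .95.

```python
# Illustrative sketch of the probabilistic tracking clauses (III) and (IV);
# all names and numbers are my own, chosen only to make the conditions
# computable on a toy finite space.

S_THRESHOLD = 0.95

def cond_prob(space, event, given):
    """P(event | given) on a finite space given as (weight, world) pairs."""
    den = sum(w for w, world in space if given(world))
    if den == 0:
        return None  # undefined when conditioning on a measure-zero event
    num = sum(w for w, world in space if event(world) and given(world))
    return num / den

def tracks(space, p, b_p, s=S_THRESHOLD):
    """Clause (III): P(b(p) | p) > s; clause (IV): P(-b(p) | -p) > s."""
    iii = cond_prob(space, b_p, p)
    iv = cond_prob(space, lambda w: not b_p(w), lambda w: not p(w))
    return iii is not None and iv is not None and iii > s and iv > s

# Worlds are pairs (p is true, S believes p) with probability weights.
space = [(0.58, (True, True)), (0.02, (True, False)),
         (0.39, (False, False)), (0.01, (False, True))]
p = lambda w: w[0]
b_p = lambda w: w[1]

assert tracks(space, p, b_p)  # P(b(p)|p) ~ 0.967 and P(-b(p)|-p) = 0.975
```

Note that the definition is silent when p or -p has measure zero, a point taken up later in the review.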

S knows that p if and only if:

S knows p by tracking

or

p is true, S believes p and there are q_{1}, …, q_{n}, none of which is equivalent to p, such that q_{1}, …, q_{n} together imply p, S knows that q_{1}, …, q_{n} imply p, and S knows q_{1}, …, q_{n}

where S *knows that* q_{1}, …, q_{n} *imply* p if and only if

(a) it is true that q_{1}, …, q_{n} imply p,

(b) S believes that q_{1}, …, q_{n} imply p,

(c) P((-b(q_{1}) ∨ … ∨ -b(q_{n})) | -b(p)) > s^{[2]}

(d) P(b(p) | b(q_{1}), …, b(q_{n})) > s

(e) if (b) is fulfilled because of inferences S made from q_{1}, …, q_{n} to p, then every step of inference in this chain is one where S knows that the premises imply the conclusion (p. 47).

Our first observation is that when probability is subjective and self-applied the definition leads to incoherence, as Roush recognizes. So, we will set aside this possible reading of the definition from now on, by understanding that the probability function is that of an attributor and that the belief is belief the attributor ascribes to an external person.^{[3]}

The second observation is that, in view of previous remarks about belief, the recursive definition of knowledge does not accomplish what it is supposed to accomplish, namely guarantee closure under known implication. Consider an example that is well known in the empirical literature:

(*E*) Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

The task is to rank various statements "by their probability," including these two.

(*B*) Linda is a bank teller.

(*B & F*) Linda is a bank teller and is active in the feminist movement.

Amos Tversky and Daniel Kahneman verified experimentally (Tversky and Kahneman, 1983) that agents robustly ranked *B&F* as more probable than *B*, and Ashley Sides, Daniel Osherson, Nicolao Bonini, and Riccardo Viale extended these results to betting behavior (Sides et al., 2002). So, it is perfectly possible that agents who are aware of the fact that *B&F* entails *B* (and who know the corresponding tautology B&F → B) still bet on *B* at a lower rate than they bet on *B&F*. It is then clear that in this case the instance of the rule that infers K(B) from K(B&F) and K((B&F) → B) fails. For the agent might actually know B&F by recursion, and he might also know by recursion the tautology B&F → B,^{[4]} but fail to reveal via his betting behavior that he believes B (and therefore he cannot be represented as knowing B via recursion).^{[5]}

On the other hand, even if Roush were willing to restrict herself to coherent betting behavior, she would get back the form of logical omniscience that guarantees knowledge of all logical truths, as well as closure under plain implication (rather than closure under known implication). And apparently she would not want these forms of logical omniscience validated by her account. So, if incoherent betting behavior is allowed, the proposal in the book does not manage to guarantee closure under known implication (in situations that are experimentally well documented); and if only coherent betting behavior is allowed,^{[6]} the proposal validates more than the author wants. All this suggests that the definition of belief used in the book is rather problematic. We will consider some alternatives below: solutions within the type of probabilism she advocates, and solutions that adopt full belief as an independent primitive notion in the construction.

Finally, closure under adjunction fails as well. Why? Consider the following instance of Adjunction. For simplicity let's just focus on four worlds w_{1}, w_{2}, w_{3}, w_{4}, where p holds at w_{1}, w_{2}, w_{3} and q holds at w_{2}, w_{3}, w_{4} (so the two intermediate worlds are the p&q-worlds). Let's assume that the agent's beliefs in the proposition {w_{2}, w_{3}, w_{4}} and in the proposition {w_{1}, w_{2}, w_{3}} are veridical, namely the agent believes each exactly in the worlds where it holds. Let's in addition assume that the probability of w_{2} is 0.89, the probability of w_{3} is 0.01, and the rest of the measure is distributed evenly between the first and the last worlds. The probabilistic view of belief assumed by Roush is not closed under conjunction, so nothing precludes that the proposition expressed by b(p&q) is exactly {w_{3}}. But then P(b(p&q) | b(p) & b(q)) = 0.01/0.90, well below the threshold, violating (d) above. So, even when both p and q are known by tracking, their conjunction fails to be known.
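The arithmetic of this counterexample can be verified mechanically. The following is my own verification sketch (the set representation of the worlds is just for illustration):

```python
# My own verification sketch of the four-world counterexample above.
# p = {w1, w2, w3}, q = {w2, w3, w4}; beliefs in p and q are veridical.

P = {'w1': 0.05, 'w2': 0.89, 'w3': 0.01, 'w4': 0.05}

def prob(event):
    return sum(P[w] for w in event)

def cond(event, given):
    """P(event | given) by the ratio formula."""
    return prob(event & given) / prob(given)

b_p = {'w1', 'w2', 'w3'}   # worlds where the agent believes p
b_q = {'w2', 'w3', 'w4'}   # worlds where the agent believes q
b_pq = {'w3'}              # belief in p&q: allowed, since belief is not
                           # closed under conjunction

# Clause (d) would require P(b(p&q) | b(p) & b(q)) > 0.95, but:
d = cond(b_pq, b_p & b_q)  # = 0.01 / 0.90, roughly 0.011
assert d < 0.95            # clause (d) is violated
```

Note that p and q each carry probability 0.95, so both are independently believed, while the measure of the conjunction is far too small for clause (d).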

Is there any simple way of solving some of the aforementioned problems? Three solutions can be envisaged. One of them is to abandon radical probabilism and to assume a new doxastic primitive: namely a notion of full belief closed under logical consequence. This seems consonant with the spirit, if not the letter, of Roush's characterization of belief. She proposes that her notion of belief is a form of *full belief*, although this seems to clash with her probabilistic definition. There is, to be sure, a doxastic notion corresponding to her definition; Kyburg and Teng call it *risky knowledge* (Kyburg and Teng, 2002). But this notion is not the normative notion of full belief, which seems to require closure *both* under conjunction and under entailment.

Another solution, within probabilism, might be to require that the conditional probability of b(p) given b(q) and the conditional probability of b(q) given b(p) are high -- and to require betting behavior to be coherent. In (Arló-Costa and Parikh, 2006) we adopted a similar solution, by requiring that the conjoining (and known) propositions have to be counterfactually interdependent for their conjunction to be known.

There is finally a third possible solution, which requires a prior discussion about the underlying notion of probability -- aside from requiring coherent betting behavior. Let's start with an additional question: is there any good reason for staying within the boundaries of radical probabilism? Is there a reason for utilizing conditional probability in the first place rather than conditionals in the definition of knowledge as tracking truth?

Roush thinks that the use of probability dispels possible obscurities brought in by the alternative use of conditionals in her definitions. This might be true, but her use of probability is problematic in various additional ways. First, her notion of probability leads to a partial characterization of knowledge, given that her definitions cease to work when the conditioning event has zero measure (something one should expect when evaluating counterfactuals). The problem is pressing given that the only alternative solution is to assign measure one to conditionals with impossible antecedents; Vann McGee has shown persuasively that this solution is not desirable (McGee, 1994). This is, nevertheless, a fixable problem: one can switch to a more sophisticated view of probability that allows conditioning on events of zero measure. Doing so has an additional payoff as well, namely that a notion of probability of this kind can be used to define a notion of full belief of the sort that seems to interest Roush.

The main idea is that a conditional measure (as axiomatized by L.E. Dubins^{[7]}) induces a system of nested cores in the probabilistic space and that, when the space is countable and Countable Additivity (CA) is imposed, one can show that there is always an outermost core as well as an innermost core (see (Arló-Costa, 1999), (Arló-Costa and Parikh, 2005)). The latter is best seen as encoding the strongest proposition held with 'almost certainty', carrying measure one, while the former is best seen as the strongest fully believed proposition. Therefore full belief is identified neither with measure one nor with high probability. Moreover the two notions are automatically closed under logical consequence, avoiding the aforementioned problems. Of course, all this depends on using a finitely additive characterization of primitive conditional probability (given axiomatically by Dubins) that allows conditioning on events of measure zero and *does not* coincide with the standard Kolmogorovian view.

There is nevertheless a second problem that arises even if one uses a De Finettian notion of primitive conditional probability. The probabilistic conditionals used by Roush have a very different logical structure from the 'ontic' conditionals used by Nozick in his original definition (which, in turn, derive from the work of David Lewis in (Lewis, 1973)). In (Arló-Costa, 2001, 2001a) I argued that probabilistic conditionals require a *cumulative*^{[8]} view of belief change that imposes the validity of the so-called Import-Export laws for conditionals.^{[9]} I am not aware of any system of counterfactual conditionals that obeys these laws, or, for that matter, of any cumulative system of belief revision.^{[10]} In the aforementioned article I mentioned that the cumulative form of supposition arising from probability might nevertheless play a role in analyzing indicative conditionals. But these are not the type of conditionals that one wants for tracking truth. In conclusion, there are very good reasons not to use conditional probability in place of conditionals in the standard definition of tracking truth; doing so is tantamount to a change of theme. Nevertheless, from now on, I'll take this change of theme for granted for the purposes of this review.

It is easy to verify that the notion of knowledge by tracking is not closed under known entailment. The recursive definition of knowledge intends to solve this by guaranteeing that this type of closure obtains. But as we explained above it seems that the definitions used in the book fail to guarantee this form of closure (and others as well). It is less clear whether one *wants* that type of closure for this notion of knowledge.

We can consider a variant of an example proposed by Nozick to tackle this issue (discussed on pp. 70-1). A grandmother is able to recognize visually whether her grandson is in good health, though this ability need not generalize to other persons. She happens to see her grandson and declares that he is in good health. But her grandson has a twin (something the grandmother does not know), and she is bad both at recognizing symptoms in the case of the twin and at telling her grandson and his twin apart. According to closure under implication (which holds in the recursive view, modulo the problem of incoherent betting rates), since she knows that the person she saw is in good health we can also attribute to her knowledge that either the person she saw is in good health (h) or the person she saw is not her grandson's twin (-t). This is not what an unchanged formulation of the tracking view would yield. For consider the closest worlds to the actual world where (-h & t) is true. Notice that in these worlds she will not believe that t and she will believe that h (because she reliably misreads symptoms in the case of her grandson's Spartan twin), so she will believe that (h or -t). So the second tracking clause fails, and we have the apparently intuitive situation where the grandmother knows that the person she saw was in good health but does not know that if the person she saw is her grandson's twin then he is in good health.

The reason for the divergence between Nozick's view and Roush's recursive reformulation is that, setting aside the important aforementioned problem of incoherent betting rates,^{[11]} in the recursive view you know (h or -t) by recursion as long as you satisfy clauses (a) to (e) for knowing (in the technical sense induced by Roush's definition of knowledge by recursive tracking above) that h entails (h or -t). But this does not seem to speak in favor of saying that the grandmother knows (in a pre-systematic manner) that if the person she saw is her grandson's twin then he is in good health.^{[12]}

Lack of closure^{[13]} is one of the distinctive features of Nozick's notion of knowledge, and it also applies (in a different manner) to Roush's probabilistic reformulation of knowledge as tracking. One can attribute to an agent knowledge (via tracking) that there is a desk in front of him, but even though this entails that he is not an envatted brain, one cannot track this latter piece of information. So, Nozick is happy to concede to the skeptic that we cannot attribute anti-skeptical knowledge, as long as he can recover most of the knowledge that we pre-systematically believe we have. But as Roush points out in section two of chapter two (pp. 54-55), her recursive view of knowledge brings back (by closure) the anti-skeptical knowledge as well. One does know (recursively) that one is not a brain in a vat, if one knows by tracking that there is a desk in front of one's body. This is her reaction to this issue:

It is familiar that if we were brains in vats we would have no indications of that. Therefore, nothing we could cite as evidence would be different (from the inside) from what a brain in a vat could cite. […] It is not obvious that this affects our knowledge. An inability to justify our beliefs is part of the frustration of the skeptical hypothesis, but there seems to be a general severance of our effort from knowledge in this case. What we discover on skeptical reflection is not precisely that we do not have knowledge, but rather that nothing we could do, would make any difference to whether we know that the skeptical hypothesis is false, and that if it isn't false, then most of our knowledge is implicated. […] What we learn on skeptical reflection is how difficult it is to find conclusive reasons for believing that we know, even if our ordinary beliefs *are* knowledge. (pp. 54-55, my emphasis in the first case and the author's in the second case)

So, the recursive tracking view does not refute the skeptic either, at least not from the point of view of the agent producing knowledge claims (or 'from the inside', as Roush says parenthetically). At most the recursive view shows what attributed knowledge, anti-skeptical or skeptical (that the agent is not, or is, a brain in a vat), depends on (in our example above, whether there is or is not a desk in front of the agent's body). The main goal, therefore, is not to produce an internalist refutation of skepticism, but to characterize what is useful both in everyday and in scientific (attributed) knowledge. This is an interesting and creative way of playing the epistemological game (closer to the dominant research program in contemporary formal epistemology). Nevertheless, the extent to which recursive closure should be imposed for normative reasons while making knowledge attributions is less clear. The case of disjunctions seems of considerably more practical interest than the skeptical case. And if the case of disjunctions does not give us normative reasons to introduce recursive closure, it is unclear why agnosticism (of the sort professed by Nozick) is not a better response in the case of skeptical hypotheses as well.

Chapter III fleshes out more detailed conditions for the applicability of the counterfactual conditions of knowledge as tracking and argues further in favor of utilizing probability rather than conditionals in those conditions. I'll focus here on only one of these examples, originally proposed by Goldman against Nozick (discussed on pp. 98-9), because various salient issues converge in it.

Sam sees Judy across the street and concludes correctly that it is Judy. But Judy has a twin, Trudy, and therefore Goldman concludes: '[a]s long as there is a serious possibility that the person across the street might have been Trudy rather than Judy (even if Sam does not realize this) we would deny that Sam knows'.

The notion of serious possibility proposed by Goldman seems to be Isaac Levi's notion of serious possibility,^{[14]} which in turn is derived from a primitive notion of full belief. Following ideas first voiced by De Finetti, Levi starts with a primitive (*and logically closed*) notion of full belief and then declares seriously possible all events compatible with the strongest held full belief. Moreover, both in De Finetti's view and in Levi's, all probabilities are conditional on the current full beliefs. Roush unfortunately puts the cart before the horse here and proposes to use probability to assess serious possibility, while in the usual accounts having a notion of serious possibility is itself a condition of possibility for having a notion of probability to begin with.^{[15]} Roush's assumption of a radical form of probabilism might motivate (although it does not justify) this move. But we saw that the lack of a primitive notion of full belief is one of the reasons why recursive closure does not deliver the goods it is supposed to deliver (namely guaranteeing closure under known implication).

Back to the example: Roush asks *whose* serious possibility we are talking about. Obviously the serious possibility is not Sam's but that of the observer who attributes knowledge to Sam (or not).^{[16]} This brings to the fore the fact that Nozick's notion of knowledge is the kind of knowledge that we attribute to others, not the kind that the agent (Sam in this case) attributes to himself.^{[17]} In Nozick's theory we are dealing all the time with knowledge attributions, not with knowledge claims. Nozick tends to be clearer about this in his formulations and examples than Roush, but it is worthwhile to point this out here, because throughout Roush's book there is a constant tension between the tacit Bayesian point of view she adopts (which is usually formulated internally, in terms of the epistemic and doxastic claims of the agent) and her desire to deal with knowledge attributions (which requires a completely different epistemological stance). The analysis of Goldman's example shows why this tension might matter for the analysis of concrete examples (one does not want to attribute knowledge to Sam based on a notion of serious possibility that is not his). Nozick, by appealing to an 'ontic' and objective view of conditionals, remains more comfortably installed inside the game of knowledge attributions.

One of the main topics of chapter three is how to individuate the probability functions used in the probabilistically reformulated tracking conditions. But one of the most basic questions (which is never answered conclusively by Roush) is whether we are dealing with personal probability and if so, whose personal probability we are talking about. It seems that in the case of this example the probability in question could either be Sam's self-attributed personal probability,^{[18]} the personal probability of the observer, or the probability that an observer attributes to Sam. The latter option seems to be the one that is required here, although this option would require attributing probability to events that the observer thinks are seriously possible *according to Sam*. But this precludes straightforwardly utilizing the notion of the serious possibility of the observer itself, which is the one apparently used by Goldman.

Roush's analysis proceeds here by considering a series of hypothetical scenarios obtained by modifying accordingly an ur-probability function that is paradigmatically radical in its reluctance to attribute probability 1 to anything except for logical truths.

But in this case it is not clear whether one should assume that Sam's probability (even when considered from the point of view of the observer) obeys (should obey) some of the probabilistic constraints flowing from this analysis, or whether the observer can attribute these constraints to Sam. Notice as well that these hypothetical exercises fulfill here the same role that similarity comparisons fulfill in a carefully formulated model-theoretical account of Nozick's knowledge using conditionals.^{[19]}

The analysis of the so-called generality problem (i.e. what is the appropriate level of generality one should adopt in the description of events for them to be considered external knowledge carriers or not) is interesting. Nevertheless, it seems that when these arguments are successful one can perfectly well recreate them via the use of conditionals evaluated in different models. Notice also that the size comparisons utilized in conditions like condition (*) (p. 85) can also be carried out by comparing the sizes of the corresponding conditional propositions. This does not detract from the interesting analyses of various examples, which we do not have the space to reproduce here. It seems to me that much of the value of the book lies in the piecemeal analysis of examples of this kind. And although Roush feels more comfortable conducting these analyses armed with the tools of probability theory, many of them can also be carried out without drastically modifying the standard theory of conditionals. Moreover, as we argued above, there are solid reasons either for adopting a less radical probabilistic stance or for abandoning probabilism altogether. Most of the reasons for abandoning it are based on the analysis of a series of experiments where the evidence is not cumulative through time.

Chapter 4 compares Roush's view with other forms of externalism. She provides convincing arguments in favor of her view. Chapter 5 utilizes the tools of confirmation theory to articulate the ideas that evidence indicates the truth of hypotheses and that evidence discriminates between the truth and the falsity of hypotheses. The book concludes with a further appeal to confirmation theory to illuminate the realism/anti-realism debate.

Nozick himself considered the possibility of using likelihood ratios in order to articulate the tracking ideas. Apparently he thought that this view was inferior to his official presentation in terms of conditionals (Roush seems to agree). This might have been because important arguments against confirmation theory had already been presented, so the theory was at the time considered problematic. Time has passed and, in spite of the ingenuity of new defenders (like Roush), these problems remain. For a contemporary guide to the many problems that probabilism creates (rather than solves) in the analysis of scientific knowledge, see the recent essay by my colleagues K. Kelly and C. Glymour (Kelly and Glymour, 2004).

The game of defeating the skeptic is far from being the only game in town for contemporary epistemologists. And by the same token the game of defining knowledge is far from being the obligatory strategy one needs to adopt in order to play an interesting epistemological game.^{[20]} The rich epistemological work done in recent years in the vicinity of philosophy (computer science, mathematical economics and psychology, as well as decision and game theory) points in a different direction, where a wealth of different notions of knowledge are assumed axiomatically and then studied both mathematically and empirically. The central ideas in Roush's book focus on applying some of Nozick's ideas both to standard epistemological conundrums and to live issues in philosophy of science. The resulting tour de force is interesting and thought-provoking, in spite of the fact that the initial chapters are affected by some of the aforementioned problems. Reading might be especially rewarding if one takes lightly some of the more radical aspects of probabilism that influence various chapters, and focuses instead on the many analyses of concrete epistemological examples throughout the book.

**References:**

H. Arló-Costa and R. Parikh (2006) Belief, Knowledge and Tracking the Truth, FEW 2006, http://ist-socrates.berkeley.edu/~fitelson/few/schedule.html.

H. Arló-Costa and R. Parikh (2005) Conditional Probability and Defeasible Inference, *Journal of Philosophical Logic*, 34: 97-119.

H. Arló-Costa (2001) Bayesian epistemology and epistemic conditionals: On the Status of the Export-Import Laws, *Journal of Philosophy*, Vol. XCVIII, 11, 555-598.

H. Arló-Costa and R. Thomason (2001a) Iterative probability kinematics, *Journal of Philosophical Logic*, 46: 479-524.

H. Arló-Costa (1999) Qualitative and Probabilistic Models of Full Belief, *Proceedings of Logic Colloquim'98*, *Lecture Notes on Logic* 13, S. Buss, P. Hajek, P. Pudlak (eds.), ASL, A. K. Peters, 1999.

Bar-Hillel, M. and Neter, E. (1993) How alike it is versus how likely it is: A disjunction fallacy in probability judgments. *Journal of Personality and Social Psychology*, 65:1119-31.

K. Kelly and C. Glymour (2004) Why Probability Does Not Capture the Logic of Scientific Justification, in Christopher Hitchcock, ed., *Contemporary Debates in the Philosophy of Science*, London: Blackwell, 2004.

H. E. Kyburg, Jr. and C. M. Teng (2002) The Logic of Risky Knowledge, *Proceedings of WoLLIC*, Brazil.

I. Levi (1983) *The Enterprise of Knowledge*, MIT Press, Cambridge, MA.

D. Lewis (1973) *Counterfactuals*, Harvard University Press, reissued by Basil Blackwell, London, 2001.

V. McGee (1994) Learning the impossible, in E. Eells and B. Skyrms (eds.) *Probability and Conditionals: Belief Revision and Rational Decision*, Cambridge University Press, New York, 179-99.

R. Nozick (1981) *Philosophical Explanations*, Oxford University Press.

A. Sides, D. Osherson, N. Bonini and R. Viale (2002) On the reality of the Conjunction Fallacy, *Memory & Cognition*, 30(2):191-198.

Tversky, A. and Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. *Psychological Review*, 90:293-315.

T. Williamson (2000) *Knowledge and its Limits*, Oxford: Oxford University Press.

^{[1]} I am grateful for comments provided by Rohit Parikh and Isaac Levi. While writing this review I exchanged copious correspondence with the author. She provided both very valuable insights as well as background information about her book. So, I would like to thank her especially for these exchanges, which helped to improve the relevance and correctness of the review.

^{[2]} Here 'b' is a belief operator and the probability function ranges over modalized sentences rather than propositions in a sigma-field. Finally '-' is just classical negation.

^{[3]} In a personal communication Roush explained that she might have been too concessive in print. Nevertheless there are more routes to incoherence or triviality in this case than the ones openly considered by her in print. In particular L.J. Savage has an argument in *The Foundations of Statistics* (2nd rev. ed., Dover, 1972, p. 58) indicating that second-order probabilities of the sort she seems to envisage in the case of self-application lead to triviality (the agent should be absolutely certain that b(p) or absolutely certain of the negation of this statement). So, in the particular case where belief goes by coherent high probability, Savage's argument offers an alternative route to triviality. It should be pointed out here that Roush indicated in personal communications that she endorses only very weak coherence requirements and that her agents might manifest betting behavior that would allow a clever bookie to pump money from them. So, Savage's argument will apply only in cases where the rules are self-applied and the agent is coherent.

^{[4]} This leads to a second and separate problem. For apparently a necessary condition for knowing a tautology recursively is that the agent (incoherently) assigns high probability (distinct from 1) to it -- otherwise condition (c) would be undefined. But we might assume here that the agent knows at least *some* immediate tautologies of the kind we are considering, even if this requires assuming this second form of incoherence.

^{[5]} Roush (personal communication) mentions that although this is not indicated explicitly in the book, she might be interested in a rule that has K(B&F), K((B&F) → B) and b(B) as premises and K(B) as conclusion. But it is not difficult to see that the previous argument (appealing to incoherent betting rates) can be applied to construct counterexamples to this modified rule as well (by exploiting the potentially incoherent *conditional* betting rates of clause (b) and the interplay between this clause and clause (e)). In addition, this modified rule lacks the elegance and motivation of the traditional rule of closure under known entailment.

^{[6]} And if knowledge of tautologies is posited independently of the main recursive characterization of knowledge.

^{[7]} L. E. Dubins, "Finitely additive conditional probabilities, conglomerability, and disintegrations," *Ann. Prob.* 3 (1975):89-99.

^{[8]} A revision function is cumulative if the revision with a conjunction (K*(A & B)) is equivalent to intersecting K*A and K*B. The picture that thus arises (in the presence of other basic axioms) is one where the agent always increases his knowledge monotonically while acquiring knowledge compatible with his view. For example, a detective who is told erroneously by an otherwise reliable source that A is the case will never be able to erase that piece of information (via a revision with the negation of A) on pain of inconsistency.

^{[9]} These laws establish that [(p&q) > r] and [p > (q > r)] are logically equivalent.

^{[10]} See (Arló-Costa and Thomason, 2001a) for a comparison between cumulative and non-cumulative notions of belief change.

^{[11]} Which is no less pressing here than in previously considered cases. In fact, there are empirical studies isolating a 'disjunction fallacy' in betting behavior as well. See (Bar-Hillel and Neter, 1993).

^{[12]} The problem of disjunctions is mentioned in passing on page 63.

^{[13]} Not only does closure under known implication fail; other forms of closure fail as well. See (Arló-Costa and Parikh, 2006) for a logical study of Nozick's notion of knowledge.

^{[14]} See (Levi, 1983).

^{[15]} Whatever the details of the analysis of the notions of possibility and probability, the corresponding notions should be carefully differentiated. It is important to remark that possibility might also come in degrees, but the corresponding notion is still different from probability. A theory of this type was first presented by the economist G. L. S. Shackle in the sixties, and more recently a general theory of possibility judgments has been proposed by D. Dubois and H. Prade. See, for example: "Possibility theory, probability theory and multiple-valued logics: A clarification," *Annals of Mathematics and Artificial Intelligence*, 32:35-66, 2001.

^{[16]} It is worth remembering here that, in order to escape paradox, the notion of probability, if subjective, should be the probability (and therefore the underlying notion of serious possibility) of the observer who attributes knowledge -- rather than Sam's self-attributed probability. Otherwise, as Roush recognizes, if the recursive definition is self-applied it would lead to paradox or triviality. See the comments made above immediately after the presentation of Roush's recursive definition.

^{[17]} Recall also the problems related to self-applicability of the rules mentioned both by the author and in a previous footnote in this review.

^{[18]} Which leads to paradox or triviality.

^{[19]} So, whatever blemish might be attributable to the similarity considerations while using conditionals, they reappear here in a different disguise.

^{[20]} Contemporary British epistemology seems to have abandoned this stance. The recent work of, for example, T. Williamson and E. Craig seems to confirm this impression.