Quentin Smith (ed.)

Epistemology: New Essays

Quentin Smith (ed.), Epistemology: New Essays, Oxford UP, 2008, 440pp., $39.95 (pbk), ISBN 9780199264940.

Reviewed by Trent Dougherty, Baylor University


This is a stellar collection of epistemologists writing at the center of their specialties. It is not just reviewer boilerplate to say that this volume is ideal for an epistemology course for advanced majors or a graduate seminar in contemporary epistemology. Almost everyone who works in epistemology will want to read at least some of these essays. The subtitle is a bit of a misnomer, since many of the pieces have been circulating for a while or are quite old. For example, I first read Peter Klein's "Useful Falsehoods" in 2004 and substantial portions of Marian David and Ted Warfield's paper in 2003. Still, these are mostly brand-new or under-circulated pieces for which any epistemologist should be grateful.

George Bealer's "Intuition and Modal Error" defends the classical method of intuition-driven philosophical investigation. The defense takes one through fascinating territory, including the relationship between epistemic possibility and metaphysical possibility. The piece is illuminating and offers, in my judgment, a successful defense of intuition as a guide to possibility. I'm glad he did not shy away from pointing out that extreme skeptics about modal intuition are left with nothing to motivate their own arguments against the use of intuitions as evidence. It is surprisingly practical in its diagnosis of and prescription for avoiding modal error, though some of the details are highly programmatic at best.

Ernest Sosa's coyly named "Skepticism and Perceptual Knowledge" argues that dream-based skeptical puzzles are importantly different from, and a "deeper" problem than, either BIV or evil demon skeptical puzzles. I find this claim puzzling and dubious. His basis is that since dreams are so common, the skeptical threat they pose is less outlandish. But for a skepticism of any interesting scope this is not clearly true, for the hypothesis that my whole life has been a dream is not clearly more outlandish than that I've been in an experiment by a mad scientist all my life. Still, his discussion of whether in dreams "seeing is believing" is very interesting, and he appears to bring certain considerations pertaining to modal contexts to bear upon how we think about cognitive activity in dreams (though he doesn't put it this way). The conclusion to his essay is a nice précis for his latest volumes on virtue epistemology.

Klein, in "Useful False Beliefs", displays his typical combination of insight, creativity, and analytic precision. Furthermore, his detailed knowledge of the literature is evidently at his fingertips. As one would expect from an essay of that title by this author, Klein argues against the anti-Gettier condition that knowledge must not essentially depend on a falsehood. He gives several cases that, he suggests, plausibly show how one could take a route to knowledge via a false belief. He carefully considers various objections, and even more carefully gives detailed replies. He also offers a general theory of when a falsehood has been useful in knowledge. Though the essay is a case study in analytic method and generally enlightening as a piece of epistemology regardless of the reader's specialty, it does suffer from one defect. The notion of essential dependence is left insufficiently defined. This lacuna affects the cases in the following way. Of the four cases he gives as examples of knowledge dependent on falsehoods, one strikes me as not a case of knowledge, one strikes me as very complex and ambiguous (do people even believe mathematical models to be true?), and the remaining two don't -- on the account I prefer -- essentially depend on a falsehood, though this is not the place to make that case. Without some clear cases, motivation for the view is lacking.

Also a model of philosophical acumen is David and Warfield's "Knowledge-Closure and Skepticism". It demonstrates, lucidly, that it is harder than one might think to formulate a plausible closure principle and, furthermore, that it is not at all clear that a plausibly true closure principle will be useful in an argument for skepticism. For my money, the best thing about this piece has always been potential lessons for how we think about the nature of defeat, for one prime use of closure is to learn that apparent knowledge of premises has been defeated.

Timothy Williamson takes the opportunity to attempt to shore up his arguments for cognitive homelessness in his "Why Epistemology Cannot be Operationalized" (2000). He notes that his premise that an agent knows at t that a luminous condition C obtains only if C obtains at t+1 has not gone unchallenged. Unfortunately, he doesn't address any of the specific attacks but tries to address them by generalizing the anti-luminosity argument, replacing knowledge with probability. I was worried from the start when he began the model with the proviso that "When such a condition obtains, there should not even be a small non-zero probability that it does not obtain", for modest probabilisms will require no such condition. As if in anticipation of this point, two pages later he cites in a footnote the three full pages from his 2000 article in which he replies to Richard Jeffrey's lifetime of work. The fact is that Jeffrey-style non-reliance on certainty is not wedded to his subjectivism, nor is his suggestion that learning from experience is a kind of know-how. So modest empiricism needn't be committed to what Williamson calls "operationalism". And here is the real problem: as it is rather hastily defined at the beginning of the article, the core content of "operationalism" seems to be that the operationalist requires a method which will guarantee, for any p one considers, that one's judgment regarding the truth of p will be objectively likely to be true. Tellingly, he cites no living epistemologist as being committed to the kind of cognitive access he claims such a project would require (and it is less than clear that this project has the kinds of commitments he claims). Unfortunately, this piece strikes me as an extremely talented fighter shadow boxing.

Robert Audi also takes the opportunity to defend his prior views on ethical intuitionism and moderate foundationalism in his "Rational Disagreement as a Challenge to Practical Ethics and Moral Theory: An Essay in Moral Epistemology". Fortunately, the essay is less unwieldy than its title, and, frankly, raises fewer concerns for me. Audi defends his prior views from (seemingly confused) criticism by Roger Crisp. The charge seems to be either that disagreement about moral principles is evidence that there are no objective moral principles or that it makes figuring out what one ought to do too hard (if the latter, that's where "Practical Ethics" would come in, but, as I say, I'm just not clear on the charge or the title). Audi compares ethical intuitionism with moderate epistemological foundationalism to show successfully that ethical intuitionism can provide a way out of moral skepticism without leading to dogmatism.

I'm afraid to report that I found Panayot Butchvarov's "Epistemology Dehumanized" very difficult to follow, highly idiosyncratic, and overly polemical. With regard to the titular concept, he's for it, rather than against it, and advocates some kind of epistemic logic as properly dehumanizing.

A considerable chunk of the essays are centered around the concept of epistemic justification, especially concerning immediate or experiential justification.

Hilary Kornblith's contribution is as straightforward as his title. In "Knowledge Needs No Justification" he covers familiar territory and so can be brief. On the one hand, internalist accounts of justification are rejected as candidates for a necessary condition on knowledge because they are too subjective or assume naive rationalism. On the other, the more objective an account of a normative status becomes, the less it seems like an account of justification, since that must be sensitive to human cognitive limitations. The upshot of the dilemma is that we should -- as Richard Foley has long argued -- divorce our account of justification from our account of knowledge. Sosa comes under special criticism for trying to straddle the line with his animal knowledge/human knowledge distinction. Unfortunately, Sosa walks right into the critique by making the empirical claim that reflection on belief will increase the probability of truth. However, I must say that I think Sosa thinks there's more to the desirability of reflection than that, so Kornblith's case against Sosa seems incomplete. What I find more objectionable is the treatment of internalism. He cites three examples: Roderick Chisholm, Laurence BonJour, and Foley. He rejects BonJour's view as requiring an unrealistic degree of access to one's reasons, and Foley's for being indexed to a single individual's personal standards. However, he never pins either of these properties on Chisholm and, indeed, confuses the meta-epistemological passage he quotes from Chisholm with the first-order view that such a methodology results in. Furthermore, latter-day neo-Chisholmians supply versions of internalism in which the "access" one needs is little more than consciously hosting qualia. Indeed, just a few chapters away, Earl Conee and Richard Feldman provide an internalist theory of evidence which Kornblith's critique doesn't touch.

Alvin Goldman is a bit more friendly, in principle, in his "Immediate Justification and Process Reliabilism". The friendly bit is in offering to flesh out the internalist notion of an appropriate response to experience in terms of reliable mental processes. The unfriendly part comes in his rapid-fire dismissal of the internalist accounts of immediate justification of Feldman, James Pryor, and Michael Huemer. I think all the criticisms of Huemer have plausible answers. Goldman's method of attack is to provide two individuals with the same sense data which are not the same with respect to the justification of some target proposition. But a charitable reading of Huemer (or an easily accessible nearby slight modification) takes seeming states to be conceptualized states, awash with the subject's other stored information. It is one's total mental profile, says the mentalist evidentialist, which determines what one's experience supports. Two people can have the same receptor irritation and end up with different experiences for the purpose of justification.

Anthony Brueckner's "Experiential Justification" also offers a critique of a form of moderate empiricist foundationalism, focusing on Pryor. After a nice laying out of basic epistemological taxonomy, and an even-handed setting forth of Pryor's view (novices will find this helpful, specialists can skip right to the last section before the conclusion), he raises a closure-based concern. It is well-known that standard advocates of immediate perceptual justification of a Moorean or dogmatist bent will often affirm both reasonable closure principles and the thesis that experience does not by itself discriminate between common sense hypotheses and skeptical hypotheses. So my being appeared to zebra-ly supports the proposition that there is a zebra even though it doesn't discriminate (not logically anyway) between this hypothesis and the hypothesis that there is a cleverly disguised mule. And yet I am justified in denying that it is a cleverly disguised mule. His concern is that it is "mysterious" how one becomes justified in the denial of the skeptical hypothesis. He says "If E is my entire source of justification for p, and E is not sufficient to justify a belief of not-q, then the claim that p itself (once justified) yields justification for not-q seems to involve manufacturing justification out of thin air" (118). As with Goldman's objection, I think a moderate holism is the answer. Evidence supports within a conceptual scheme which serves as backdrop. It is not mediated because experiences don't become candidates for justifiers until they are conceptualized. The same informational backdrop which allows the appearance to count in favor of the zebra thesis renders the mule hypothesis otiose.

In their "Evidence" Conee and Feldman take an opportunity to fill in some details of evidentialism. That one's epistemic justification supervenes -- unlike, say, one's moral traits like conscientiousness -- on one's evidence seems like a truism. So the real challenges should be in characterizing what evidence is, what it is to have it, and how it supports propositions. It is chiefly these three questions they address before disowning John McDowell's and Williamson's factive notions of evidence. If evidence is most generally characterized as that which justifies or that which we have to go on in forming beliefs, then it's not surprising that our "ultimate evidence" consists in -- or is at least arrived at via -- experience. So the toughest nuts to crack are giving an account of what it is to have evidence and just how evidence plays its supporting role. They admit that saying what it is to have evidence is a real dilemma, but note that this just means there will be various ways of fleshing out the theory according to one's own intuitions about what is in fact justified. This leaves as the hardest problem of all the nature of evidential support. For example, they point out, having an experience of an event A which is highly correlated with -- or even entails -- B does not justify the proposition that B occurs without being in possession of some information about the indicative relationship between A and B. They give a tantalizingly brief sketch of the outlines of a coherence theory, but we must await further details. The main weakness is the casual dismissal of non-doxastic seeming states. They admit that appeal to such states would have many theoretical advantages, but assert that seemings are not necessary for justification, saying "When one has a fully articulated good reason to believe something, then the conclusion is justified whether one has the additional non-doxastic seeming state or not" (96). 
I don't know what counts as a "fully articulated good reason" for p, but it's hard to imagine having one without its seeming that p is true. Suppose, for example, that you believe p with a high degree of justification and you believe that p entails q with a high degree of justification (approximately .95 for each, let's say). Now suppose that you hold both these beliefs before your mind and yet it doesn't seem true that q. Should you believe q? It seems wrong to say so. One could say, "Well, it clearly fits the evidence", but Conee and Feldman have already admitted that no notion of fit that consists merely of logical relations will suit a theory of epistemic justification, because the closure of our justified beliefs under logical relations contains infinitely many items we clearly have no business believing. Even if we are talking about propositional justification -- if the notion even makes sense -- to say that every proposition in the closure set is justified separates propositional justification too sharply from doxastic justification. If their basic notion of evidence entails that it consists of experiences that present themselves as revealing the external world, then it's hard to imagine having this without its seeming that things are as the experiences present them to be.

2009 saw the sad loss of John Pollock. So it was with a sense of both thankfulness and sadness that I began to read his essay in this volume. In "Irrationality and Cognition" Pollock proposes to limn the concept of rationality by considering its opposite, irrationality. He considers both practical and epistemic irrationality. He distinguishes sharply between the two, noting that they have different formal properties, but makes the latter a function of the former. In this regard his view bears interesting similarities to Foley's. He poses the fascinating question: Why are we able to be irrational? Like Descartes, he fingers freedom. Key to his understanding of both practical and epistemic irrationality are the notions of cognitive heuristics and biases. He asserts the bold and admittedly under-argued thesis that all practical irrationality results from failure to consciously override the results of biases and heuristics when we rationally believe they are mistaken. Furthermore, all epistemic irrationality comes down to practical irrationality in deliberation. All irrationality, he seems to suggest, is in some way a deliberative failure. The cause of this problem, he suggests, is that the cognitive modules for biases and heuristics arose at a more distant stage of the evolution of our mental apparatus than did conscious thought, especially the kind that usually finds faults in the heuristics, and the two modules haven't "meshed" well as a result. It would, of course, take a considerable amount of empirical research to support such a claim, and that brings us to Pollock's very conception of philosophy, at least epistemology. It is, he supposes, a branch of cognitive psychology in that all it does -- can do, should do -- is describe our cognitive architecture. Since the data are arrived at via introspection, however, it is a form of psychology one can engage in without laboratory experiments.
I must now quote a passage which I genuinely do not understand but which would be important for understanding just what Pollock is suggesting about the nature of normativity. Very near the end, when discussing epistemology as introspective descriptive psychology, he says,

But rationality is also normative. The normativity of rationality is a reflection of a built-in feature of reflective cognition -- when we detect violations of rationality, we have a tendency to desire to correct them. This is just another part of the descriptive theory of rationality (274).

He makes remarks that sound very Kornblithian in that they peg normativity to actual practice. But it's just not clear whether he's naturalizing normativity or normativizing, if you will, our practice. His arguments could perhaps be read either way. As one whose central philosophical pursuit is the nature of the normativity of rationality and who has found Pollock's writings among the most helpful, I long to understand whether this is a reduction or not. Is the emphasis on the "just"? Or is there a genuine teleology being admitted that we are urged to accept as natural? I just don't know, and now, sadly, may never know (his other latter-day writings do, of course, provide some guidance). This is among the questions John Pollock left us with, and we honor his memory if we continue to pursue them with a fraction of the diligence and intelligence with which he pursued them.