This is a valuable collection of thirteen new essays, written by some influential senior figures together with some superb young epistemologists. The editors briefly summarize each paper in their useful introduction. The entire introduction can be viewed on Google Books, so I have tried to avoid producing redundantly similar summaries here. Instead, I'll bring up some specific ideas and points in the papers that struck me as interesting or important and that could not be fully conveyed in the editors' short summaries. As usual for a review, I'll be forced to be extremely selective in a partly arbitrary way. Whenever I make a much-too-quick critical remark in the compressed reviews of the individual essays below, please read it as an attempt to start off a longer discussion that must continue elsewhere.
Alvin Goldman argues that, even though our intuitions may give us some defeasible, a priori justification (for various philosophical conclusions), empirical considerations can threaten to defeat that first-order justification, and those threatening empirical defeaters can themselves be defeated only by further empirical investigation. In particular, there is the threatening empirical fact, recently pressed by experimental philosophers, that for many of your intuitions, only a fragile majority of the philosophical population shares that intuition.
Goldman makes a novel proposal for how you might combat that empirical threat, how you might defeat that defeater. He brings up the Condorcet Jury Theorem, which tells us that the view held by even a slim majority may be highly reliable, in particular more reliable than the group's individual members, and increasingly reliable (approaching perfect reliability) as the group's size grows, so long as two assumptions hold: (i) each member has a better than random chance of holding the right view, and (ii) the members adopt their views independently of each other. It's an empirical matter whether these conditions apply to philosophers' intuitive judgments, but Goldman optimistically suggests philosophers thus might empirically confirm that these conditions do apply and thereby shore up the quality of their total evidence in support of their philosophical views.
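The theorem's arithmetic is easy to verify directly. Here is a minimal sketch (the function name and the sample individual reliability of 0.55 are my own illustration, not Goldman's): under independence, the probability that a majority of n voters is correct is just a tail of the binomial distribution, and it climbs toward 1 as n grows, provided each voter does better than chance.

```python
from math import comb

def majority_reliability(p: float, n: int) -> float:
    """Probability that a majority of n independent voters, each
    individually correct with probability p, reaches the correct
    verdict. (n is assumed odd, so ties cannot occur.)"""
    k_min = n // 2 + 1  # smallest winning majority
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

# With individual reliability only slightly above chance (0.55),
# the group's majority verdict grows more reliable with group size:
for n in (1, 11, 101, 1001):
    print(n, round(majority_reliability(0.55, n), 4))
```

This is why even a "fragile majority" can matter evidentially on Goldman's proposal: the slimness of the margin among individuals is compatible with very high group reliability, so long as the independence and better-than-chance conditions genuinely hold.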
I find this proposal interesting, but it has the following peculiar implication. It's reasonable to worry that the intuitions of philosophers are not independent: while jurors get put on a jury by lawyers and judges, philosophers are admitted to the profession by other philosophers, and agreement with standard intuitions plays a part in the admission process. If that's so, then Goldman's Condorcet-based argument would have us favor the findings of surveys of outsiders to philosophy. But that seems peculiar: when I decide who's trustworthy about some matter, I am tempted to place greater trust in people who have agreed with me on other matters, so why should I favor surveys of outsiders rather than well-tested philosophers? The issue here is related to the central, difficult question in the peer disagreement debate: does the fact that someone is disagreeing with you give you a reason to lower your confidence in their reliability (as compared with your own reliability)? These questions continue to pull me in two directions.
Jonathan Ichikawa and Jonathan Weinberg both tackle the question of whether experimental philosophy threatens the a priori, or armchair, status of philosophy. The threat is that, because experimental philosophy is both empirical and sometimes done with the aim of revising philosophical conclusions, those conclusions turn out to be empirical. Ichikawa and Weinberg both aim to defuse the threat. In particular, they both independently make the following very compelling point (which I believe is also, at least implicitly, briefly made in Albert Casullo's paper). The point, as I would put it, is that everything is empirically underminable, even our conclusions in math and logic, since you can always receive empirical evidence that, say, you've been given a reason-impairing drug. Thus, if the role of experimental philosophy is anything like that, and it arguably is, we shouldn't think this shows that philosophy is any less appropriately categorized as a priori than math or logic.
Joshua Thurow also aims to respond to some of the issues raised by recent experimental philosophy, but his strategy is broader. He outlines a positive explanatory theory of the a priori, one that builds upon Christopher Peacocke's well-known meta-semantic theory and is motivated by objections that Thurow develops against some details of Peacocke's view. Thurow then argues that his theory survives some of Timothy Williamson's recent criticisms of conceptual role semantics. The theory is meant to block any attempt by experimental philosophy to eliminate any epistemic role for intuitions, but the theory allows experimental critiques to play a restricted role in correcting our intuitions, which are highly fallible on Thurow's theory.
David Henderson and Terry Horgan offer an account of armchair reflection as it's normally practiced by philosophers. In rough outline, the account is familiar: it's the pursuit of wide reflective equilibrium in our judgments, particular and general, applying philosophical concepts to hypothetical scenarios. As the authors say in their first sentence, this is a kind of reflection "we suppose should strike philosophers as recognizable, even familiar" (p. 111). They devote a section to exploring a suggestive analogy with the explicit articulation, from the armchair, of our implicit knowledge of syntax. (Rawls also mentioned this analogy, in a brief paragraph, when he discussed reflective equilibrium in the opening chapter of A Theory of Justice.)
Many readers will find Henderson and Horgan's account, in outline, to be fairly uncontroversial. What's most novel and most controversial in the approach? For one thing, they qualify the a priori status of the philosophical conclusions arrived at by the method: our conclusions are only "low-grade a priori," as they say. What is the alleged empirical dimension of philosophical knowledge? For one thing, our conclusions must be tested against all kinds of knowledge that an armchair philosopher might acquire before starting her reflections, including commonly known hard science and psychology. For another, there is the important fact that new empirical psychological findings about our brains and our reliability enter into the overall reflective process. While the primary initial inputs into the method are the contents of intuitive judgments, the fact that these are the contents of psychological judgments really occurring in our minds is part of the data that must be considered before the reflective process can be finalized. These matters are among the many things in the paper that Henderson and Horgan examine at much greater length in their recent book, The Epistemological Spectrum. The paper appears to be a bit of a teaser for the book.
I have to register one complaint about the paper: the authors never define, even roughly, the notion of a "conceptually grounded necessary truth", even though the title of the paper is "On the Armchair Justification of Conceptually Grounded Necessary Truths". Again, their book has somewhat more on this.
Christopher Hill argues, against Quine and others, for the presently controversial view that some beliefs are immune to empirical revision. He argues on the basis of considerations about the functions our concepts serve (an argumentative strategy I liked very much). Some concepts, Hill says, serve a non-empirical function. An example is a concept introduced by an abbreviative definition. The function of such a concept is to "achieve economy of belief and reasoning" (p. 143). Hill discusses some examples, including the abbreviative definition, "a fortnight is 14 days".
Quine complained that conventional introductions are merely events, and our original intentions will be forgotten later in the life of a concept. Hill's reply is that
there is no natural limit to the lifespan of an intention to use a concept as an abbreviation. Contrary to what Quine seems to have thought, there is no tendency in the nature of things for intentions of this sort to be swept away by our pursuit of more fundamental cognitive goals. (p. 147)
There is a line of resistance against this view that Hill does not take up in the paper. He doesn't consider the transmission of concepts to other people, who may become fully competent users without knowing of or remembering the original intentions of the introducer. Someone could thereby have false views about, say, fortnights that are corrected empirically. Perhaps Hill would prefer to endorse a more restricted version of his thesis, a version that only concerns users who fully retain all their intentions from the time they themselves introduced a concept.
Anna-Sara Malmgren continues earlier work of hers that criticized Tyler Burge's provocative thesis that we can have a priori knowledge on the basis of testimony. Her essay begins with an attempt to reconstruct, in admirably clear terms, Burge's account of epistemic warrant that leads him to his thesis. Malmgren reviews the basic problems she sees with Burge's views. Her criticisms are extremely hard to disagree with, but a recent paper by Ram Neta attempts to identify a gap in one of her arguments. Neta suggests there is a way in which introspection can yield an a priori testimonial entitlement, where the testimony is from one's past self. To respond to Neta, Malmgren relies on the following nice strategy: if the sort of example Neta proposes is a genuine instance of testimony, we should expect it to be defeasible in the same ways that every uncontroversial instance of testimony is agreed to be defeasible. These defeaters can involve things like insincerity or communication failure. And Malmgren argues that Neta's case, which essentially involves introspection of one's own judgments, cannot admit such defeat, and thus does not appear to resemble the familiar notion of testimony. Malmgren also rightly emphasizes that Neta's case of testimony only offers an uninteresting form of a priori testimonial knowledge: knowledge from self-testimony. (Malmgren's essay also examines some other strategies for salvaging Burge's thesis, and explains why they are unappealing.)
Ernest Sosa asks what justifies the foundational, or non-inferential, beliefs that anti-skeptical common sense strongly implies we must have. Sosa, developing the larger project he is well known for, argues that the answer must appeal to the notion of competence. For example, in the case of perceptual beliefs, their justification must involve something more than their having contents that match the subject's perceptual experiences. A belief about how many speckles are on a hen might accurately match the number of speckles represented in her experience by accident, without the subject exercising any competence to reliably discern so many speckles.
But, argues Sosa, not all foundational beliefs are perceptual. A (hypothetical) reliable blind-sighter might form beliefs competently, without any associated perceptual experiences. Less hypothetically, Sosa argues that we have intuitions that are not themselves experiences, but rather are "seemings", or "attractions to assent", based on nothing more than our understanding of the propositions involved. And the justification for these intuitions, again, can be found in the exercise of an intellectual competence, a competence that enables the subject to reliably discern the true from the false.
Sosa examines the epistemologies of Moore and Wittgenstein to motivate and help support this view of intuition. These short historical excursions are illuminating, and will remind some readers that there are misleading caricatures of these philosophers' views floating around philosophy department lounges today (e.g., Moore's real view is very far from the dogmatist or direct realist views of James Pryor or Michael Huemer).
Joel Pust's excellent essay was one of the most interesting, at least to me (though I think its arguments do not succeed, as I'll explain). Pust compares two epistemological perspectives, labeling them "Cartesianism" and "Reidianism". The Cartesian says all justified beliefs must be shown to be sufficiently probable given only a very narrow core of foundational beliefs, namely a priori intuitions and introspection. The Reidian accuses the Cartesian of arbitrariness in privileging just intuition and introspection.
Pust defends the Cartesian against this Reidian accusation of arbitrariness. He offers two arguments to defend the Cartesian (pp. 215-218). One argument claims that the deliverances of intuition, or at least its strongest deliverances, are special because they are infallible; that is, they cannot be in error. Pust realizes he must defend this claim against examples like Frege's intuition for Basic Law V, which Russell's paradox showed to be inconsistent. Pust says that, in such a case, we use stronger intuitions, intuitions for things like non-contradiction, in our endorsement of Russell's refutation, and thus it was not an intuition of the strongest sort that turned out to be false.
I think that Pust's argument here doesn't get us very far. Paradoxes like the Liar and Curry's show that often a very small set of our very strongest intuitions are inconsistent. There is little, if anything, that we can trust to be infallible intuition.
The other argument Pust gives in defense of the privileged status Cartesianism gives to intuition and introspection is this. While perception, induction, and other targets of traditional skeptical attack are all domains that allow for possibilities of general error, the Cartesian's foundations cannot be generally false, even if both intuition and introspection are fallible.
Pust says we have an intuition that perception could be entirely off, but we lack all intuition that a priori intuition could be too. The suggestion, then, is that intuition can be privileged as a foundation for skepticism-immune belief-formation.
I think this argument doesn't succeed in showing the Cartesian's foundations to be immune to skeptical threat. For, even if there (intuitively) is no skeptical possibility in which all our beliefs of a certain sort are false, so long as for each such belief there is a skeptical possibility in which it is false, we still face the challenge of showing how these beliefs amount to knowledge (which, given a closure principle, amounts to showing how we know the denial of each of the skeptical possibilities). It's not the typical way of presenting things, but the basic structure of the skeptical paradox can be replicated using many possibilities of one-off error, rather than a single possibility of general error. Take any fallible intuition (or intuitive belief), and then consider the challenge to show that you are not in a skeptical scenario where that very intuition (or belief) is mistaken. When constructed in this way, the skeptical problem may be less "gripping", in some sense, but it doesn't appear to be easier to solve.
Many epistemologists will find Ralph Wedgwood's very interesting paper worth reading and thinking about. His arguments join the currently lively debate over how to respond to the skeptical paradox about perceptual knowledge. He takes a Bayesian argument (given by Roger White and others) to show that, somehow or other, we must have a priori justification for the conditional, if it appears as if this is a hand, then this is a hand. Wedgwood develops two arguments that aim to explain how it is that we have such justification. His arguments rely on the premise that there is a rational belief-forming process of taking experiences at face value, a process that allows you to infer this is a hand from it appears as if this is a hand, and to do so without appealing to any justification for the corresponding conditional.
Wedgwood's first argument involves the claim that we can deploy this process in suppositional reasoning, and thereby acquire a priori knowledge of the conditional. This offers a kind of reconciling hybrid view to those who like Pryor's dogmatism and those who accept White's Bayesian argument. Thoroughgoing dogmatists, like Pryor himself, though, will doubt whether the process transmits justification in a suppositional context: it might be essential to have the experience with all those aspects of its phenomenology that cannot simply be suppositionally entertained.
Wedgwood's second argument relies on the assumption that all the principles of rational belief formation are a priori. He imagines a "Platonic soul" who relies on its a priori knowledge of rational belief formation to predict, even before it beams into a body it will soon inhabit, that it will acquire justification for various conditionals linking appearance and reality. Given certain defensible "meta-justification" principles linking (i) current justification to believe one will soon have some justification for a belief B, and (ii) current justification for belief B, Wedgwood again concludes we have a priori justification for appearance-reality linking conditionals. Those who want to resist this argument might focus their attention on the issue of whether the principles of rationality really are all a priori. (If they are not, some proposal will be required concerning how armchair epistemologists go about doing any good work. Other papers in this volume, such as Henderson and Horgan's, and Williamson's, would help with this.)
The papers by Albert Casullo and Carrie Jenkins both consider and respond to a set of criticisms of the coherence or significance of the a priori. Casullo addresses arguments that have been offered by Philip Kitcher, John Hawthorne, Timothy Williamson, and (interestingly!) Carrie Jenkins. Jenkins qualifies as a target for a dedicated defender of the a priori like Casullo because Jenkins has a somewhat tempered view of the a priori. She says in her own article that her view is one where "all knowledge, whether a priori or a posteriori, is ultimately empirical" (p. 283). Jenkins criticizes more thoroughly skeptical views of the a priori, in particular a Quinean view she articulates and dismantles. She also responds to arguments from Penelope Maddy and David Papineau. The theme of her targets' views is that a plausible form of naturalism undercuts the coherence of the a priori. Jenkins admirably tries to reply to all these naturalism-inspired attacks in ways that don't conflict with the assumption of naturalism itself; instead she finds flaws in other steps these arguments take.
The volume concludes with the only contribution that is hostile to the notion of the a priori. Timothy Williamson argues the a priori doesn't cut at the epistemically significant joints, in the same way that classifying some plants as bushes doesn't cut at the botanically significant joints.
His strategy is to consider a paradigmatic case of a priori knowledge and a paradigmatic case of a posteriori knowledge, and then argue that the cases "are almost exactly similar" (p. 296) with respect to all their interesting epistemological properties. The cases involve a character, Norman, who uses a skillful exercise of his imagination in order to arrive at paradigmatically a priori knowledge that all crimson things are red, and paradigmatically a posteriori knowledge that all recent volumes of Who's Who are red. Norman imagines an instance of either a crimson thing or a volume of Who's Who, observes that the imagined instance is of a red thing, and then he forms a general belief by generalizing the case. If the exercise is skillfully done, as Williamson puts it, Norman must not be inclined to imagine an atypical instance of the relevant general class; for example, Norman should consider a prototypical example of crimson rather than a peripheral shade. Williamson argues there is skill in both exercises because Norman's past perceptual experiences appropriately tutor his imagination, and thus experience plays the same role in both a priori and a posteriori knowledge, a role he calls "more than purely enabling but less than strictly evidential" (p. 298).
Williamson then argues imagination plays a similar role in knowledge of math and logic, thus extending his imagination-based model to the most traditional domains of a priori knowledge. In particular he argues that Norman can know the logical truth that all individuals are self-identical just by imagining a thing, observing it is self-identical, and then generalizing to all things.
Let me raise a challenge to this model that proposes to base our knowledge of generalizations on imagined typical instances. Williamson's view is inspired by the formal rule of universal generalization in proof theory, though his model is not a model of logic but of reasoned change in view (as Gilbert Harman called it). Can we rationally infer a generalization from an imagined typical instance, granted that our reasoning about the instance was entirely rational and granted that prior to generalizing we retain no (undischarged) assumptions about the instance? Williamson suggests we can, but I think we cannot. The problem I see is that absurd arguments such as the following become licensed:
(1) Suppose 99% of kids like chocolate and suppose Joey is a kid. (assumption)
(2) Joey likes chocolate. (a reasonable statistical inference, assuming 1)
(3) If 99% of kids like chocolate and Joey is a kid, then Joey likes chocolate. (conditional proof, no assumptions)
(4) Thus, for any x, if 99% of kids like chocolate and x is a kid, then x likes chocolate. (universal generalization?!? no assumptions)
Of course, if we change these claims to say that Joey (and x) is a typical kid, then the reasoning is no longer absurd. But Williamson explicitly denies that these generalizing inferences require us first to "check" that our instance is typical; it need only be typical (p. 304).
I think the example shows that ordinary reasoning -- reasoned change in view -- from an instance to a generalization is subject to restrictions beyond those imposed on the rule of universal generalization in proof theories. And it is also not enough to add the restriction that the imagined instance be "typical". It seems to me the reasoner needs justification to believe more about the instance. My suggestion, then, is that we can generalize reasoning about an instance only if we have independent (propositional) justification to believe that the reasoning is truth-preserving over all the cases we generalize to. (I defended this in my paper, "Knowledge of Validity". (Brian Weatherson defends a competing diagnosis of these examples in his paper, "Induction and Supposition".))
Williamson anticipates my suggestion and objects as follows: "To require us to check that the imagined instance is typical of all members of a domain D before we universally generalize over D is to impose an infinite regress, for 'The imagined instance is typical of all members of D' is itself a universal generalization over D" (p. 304). To avoid such an infinite regress, I hold the view that we have non-inferential justification for believing a number of general principles in logic, such as the reflexivity of identity and the validity of modus ponens and various other rules. This allows my proposed rule for generalizing on an instance to still usefully generate justification and knowledge of plenty of other generalizations. For example, we can learn something about all numbers because we have non-inferential justification to believe that our reasoning about an arbitrary number is valid.
Because my limited space has prevented me from discussing many of the virtues and many of the intellectually provocative aspects of the essays in this excellent collection, I'll simply refer readers to the book itself.