The Social Contexts of Intellectual Virtue: Knowledge as a Team Achievement

Adam Green, The Social Contexts of Intellectual Virtue: Knowledge as a Team Achievement, Routledge, 2017, 245pp., $140.00 (hbk), ISBN 9781138236356.

Reviewed by Joshua C. Thurow, The University of Texas at San Antonio

2017.07.05


Virtue epistemology, of the sort developed by Ernest Sosa and John Greco, has risen to become one of the most plausible accounts of knowledge, competing alongside evidentialism, reliabilism, and Williamson-style knowledge-first epistemology. But like these other views, virtue epistemology has been developed primarily from an individualistic perspective on knowledge: a single person, using his or her epistemic abilities, coming to a belief. Adam Green thinks that taking a more social perspective will produce a richer and more plausible version of virtue epistemology -- one that can avoid various objections to credit views of knowledge and that can deepen our understanding of both epistemic injustice and the problem of disagreement.

Green develops a version of Greco's credit view: knowledge is a successful belief through ability for which one deserves credit. Greco's view has been strongly contested by Jennifer Lackey, who offers as a counterexample the case of Morris. Morris is a visitor to Chicago and asks the first adult passerby he sees how to get to the Sears Tower. The passerby is a geographically competent resident of Chicago and gives excellent directions. Morris seems to know that the path told to him leads to the Sears Tower, but he doesn't deserve much if any credit for his knowledge; the passerby deserves most of the credit for Morris coming to a true belief. To deal with this objection Green suggests that we should think of cases of testimony, including Lackey's case, as cases of teamwork aimed at bringing about a true belief in one (or more) members of the team. Sosa's analogy of the archer is replaced with a soccer or basketball player. Morris is like a basketball player who receives an incredible pass from a teammate, freeing him up to make an easy layup. The teammate receives the lion's share of credit for the basket because of his incredible assist, but the shooter receives some credit too. He played an important role competently. The passerby receives the lion's share of credit for giving Morris a true belief, but Morris deserves some credit too; he competently played his role in receiving the testimony, monitoring for signs of good or bad testimony and attending to the recommended attitude strength (17). Green, then, proposes the following principle:

CREDIT FOR US: If x knows that p, then the abilities that contribute to the formation and sustenance of x's belief that p deserves [sic] primary credit (or something close to it) for x knowing p whether those abilities are contributed solely by x or also by other agents (14, italics in text).

Green calls his view the "extended credit" view of knowledge.

The notion of a person playing a role in a team effort, according to Green, helps resolve a different puzzle recently put forward by Lackey. Lackey has presented three cases involving testimonial chains -- A testifies that p to B, who then, on the basis of A's testimony, asserts that p to C -- in which each chain is very reliable and it appears that C could know p; and yet if C were to learn that B's assertion that p is based solely on accepting A's testimony, C would feel cheated and her belief would to some extent be defeated. In one example C goes to B to have an oncology test because C's research revealed that B is one of the most respected oncologists around. B's very competent oncologist colleague, A, reads the test and tells B that it indicates pancreatic cancer. B doesn't have time to discuss the details with A prior to meeting with C. At their meeting, B tells C that the test indicates pancreatic cancer. All the testimonial links are very reliable and each member of the chain has good evidence to trust the previous member, so it appears that C can come to know that she has pancreatic cancer through this chain. But if C were to learn that B is just telling her what A said, C's belief would to some extent be defeated.

Why is this? Lackey says this is an unresolved puzzle. Green's solution is that in such cases, there is a social norm requiring testifiers that p to make their position in a testimonial chain clear. Despite B being a reliable testifier, B failed to follow this norm, and so failed to play her role properly. This norm arises, essentially, in cases where we think it is important to get testimony that p from someone who has a wealth of direct evidence on p (not just testimonial evidence). These include cases when "intermediate links in testimonial chains could easily fail to be discriminating, when the desirable qualities of a chain originator are rare or contested," and in high-stakes cases (39). In short, then, because B simply asserts p without noting that she is acting merely as an intermediary, B flouts the social norm for her role -- a role that C rightly expects B to follow. When C learns that B has flouted this norm, C acquires a defeater of B's competence, which thus defeats to some extent C's belief that p (although this defeat is misleading since in fact B's testimony is very well-grounded).

Role in a team effort is just one kind of epistemically significant social factor. In two very interesting and detailed chapters, Green confronts an empirical challenge to relying on testimony and a situationist challenge to epistemic virtue. He persuasively argues that both challenges can be rebutted by identifying epistemically significant social factors. Space prevents me from discussing both arguments, so I'll focus on the first. Joseph Shieber argues that empirical evidence indicates that humans "are not reliable in monitoring their interlocutors for trustworthiness, deceit, and competence" (Shieber 2012, quoted by Green on p. 45). Green replies that we are nevertheless often rational in relying on testimony because of the social context of the testifier: (i) the testifier may testify in a situation where it would be foolish of him to be deceptive because of the likelihood of being found out, (ii) the testifier may have a certain status the possession of which makes it very likely he is reliable (e.g. special certification), (iii) the testifier may play a role in a hierarchy of information flow that the recipient has reason to think produces reliable results. So we don't always need to have good information about the testifier in order to reasonably trust him; information about the testifier's social context or role is often sufficient for reasonable trust.

Green discusses how the extended credit view can deepen our understanding of two popular issues in contemporary epistemology: epistemic injustice and disagreement. He suggests that the extended credit view can both help explain why epistemic injustice is bad and highlight four kinds of epistemic injustice. He adopts a broadly Aristotelian view of human flourishing -- flourishing is a matter of functioning well as a human and humans are by nature social creatures. Humans by nature aim to know and their knowledge is socially embedded -- that is, their knowledge depends in various ways upon other people and social structures. Epistemic injustice is bad because it hinders or attacks human flourishing (178-180), specifically a human's capacity for knowing, or the exercise of that capacity. Epistemic injustice is particularly insidious because it can force people to respond by becoming more dependent on themselves -- less dependent on others -- which magnifies the injustice by threatening their ability to flourish as socially dependent knowers. He then outlines four ways such an ability is threatened. First is niche impoverishment -- a powerful subset of the community sets up a social structure (intentionally or not) that ends up favoring their own ability to gain knowledge over the ability of some other subset (a possible example that comes from my recent experience of buying a home: the way mortgages are described and how closing cost sheets are structured). Second is when societal pressures infect one's self-understanding by presenting prejudiced categories or roles. Third is when prejudice leads many to treat a person or group as less credible, resulting in them being given lesser roles in team epistemic projects. Fourth is the diachronic effect of these other sorts of injustices, which tend to persist and sometimes grow.

Green argues that the extended credit view can support a more steadfast position on disagreement. He contends that the response of David Christensen (a prominent conciliationist) to a certain kind of example provides a tool that, combined with the various ways in which our knowledge is socially scaffolded, justifies a more steadfast position in some cases of disagreement. The kind of example is one in which an epistemic peer asserts a proposition that one finds absurd or very clearly false -- such as in Lackey's case of her neighbor, whom she has very good reason to think knows the restaurants in Chicago very well, who says that My Thai is not on Michigan Ave; Lackey herself is very familiar with this restaurant, goes to it frequently, and is very confident it is on Michigan. In these cases Christensen says one needn't change one's credence much at all because of one's "personal information" -- that is, one is better aware of one's own attentiveness, sobriety, sincerity and the like than one is of another person's. Green infers that when a disagreement is extreme -- i.e. "someone has to be markedly unreliable given the nature of the impasse" (204) -- then personal information can allow one to privilege one's own position. Green argues that social structure functions analogously to personal information in cases where one's belief is grounded substantially in testimony. The Condorcet jury theorem shows how the reliability of a group can be amplified quite considerably the more members it has, even if the members are only modestly reliable individually; unreliability can likewise be amplified. And the way the group works to arrive at a belief can affect how reliable the group is. It is typically pretty difficult to get information about the reliability of group members and the structure of one's group -- but in some cases one may be far better aware of the reliability of one's own group members and of the structure of one's own group than of those features of another group.
If in such a situation your own group has an extreme disagreement with another group, your better awareness of the features of your own group functions like your awareness of your personal information in the other cases, and so should allow you to remain steadfast.
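The amplification claim behind Green's appeal to the Condorcet jury theorem can be made concrete with a quick calculation (my illustration, not from the book): for an odd-sized group of independent voters, each correct with probability p, the probability that the majority verdict is correct is a binomial tail sum, which climbs toward 1 as the group grows when p > 0.5 and sinks toward 0 when p < 0.5.

```python
from math import comb

def majority_correct(n, p):
    """Probability that a strict majority of n independent voters,
    each correct with probability p, reaches the correct verdict
    (n assumed odd, so there are no ties)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Modest individual reliability is amplified by group size...
print(majority_correct(1, 0.6))    # 0.6 for a lone voter
print(majority_correct(11, 0.6))   # roughly 0.75
print(majority_correct(101, 0.6))  # close to 0.98
# ...and individual unreliability is amplified symmetrically:
print(majority_correct(101, 0.4))  # close to 0.02
```

The symmetry in the last line is the point of Green's caveat: the theorem cuts both ways, so knowing how one's group is structured, and whether its members clear the reliability threshold, carries real epistemic weight.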

This is but a taste of the issues that Green compellingly addresses. He considers other objections and issues such as socially distributed cognition, epistemic authority, and the notion of ability. I commend his discussion of these matters as well. But I want to conclude by first raising a concern about his discussion of credit and then an objection to credit views generally -- including his own extended credit view.

Green distinguishes between three kinds of credit -- credit for good-making, for possession, and for participation. Credit for good-making is "credit for bringing about the good-making feature(s) of some desirable outcome" (104); credit for participation is credit for participating in some way in the bringing about of such a good-making feature. Credit for possession "accrues to one when one is the legitimate beneficiary/owner/possessor of the good" (105). One could have credit for possessing a good while having little credit for making that good -- Green gives the example of a coach whose team wins the game on the final play although the quarterback rejected the coach's calls the entire game. The coach gets the credit for coaching the win even though he gets almost no credit for bringing about the win. This is an important distinction for Green because he wants to say that in cases like the Morris case, Morris and only Morris gets credit for possessing his true belief, that Morris doesn't have much credit for making the true belief, but that he has some credit for participation.

However, I don't think credit for possession is a species of the kind of credit at issue, namely credit due to achievement. A coach can get 'credited' with a win even if he hasn't achieved anything -- think of a coach who is so bumbling that his team always ignores everything he tells them, listening instead to an assistant coach. The wins still go on the coach's record. So credit for possession isn't, in itself, an achievement. I think it is better to think in terms of what one has credit for. A coach may get credit for coaching the team to the win, and the assistant coaches can get credit for training the players to perform well in the win. An assistant coach may even get some credit for the coach's coaching the team to win if he helped out with the coaching in some important way (note that getting credit for coaching the win is different from getting credit for the coach's coaching the win). So the only time "credit for possessing a true belief" is a genuine achievement is when the believer deserves good-making or participatory credit for arriving at a true belief. And as Green rightly notes, a knower may only get a little credit for this when he adequately plays his role as a testimony receiver.

But now we can raise a general challenge to credit views. We talk about how well someone knows a proposition; A and B can both know p while A knows p better than B. It seems that in such cases, A has more of whatever makes for knowledge than does B. But credit for arriving at a true belief can't be what that is because of cases like Morris. Morris deserves little credit for his true belief, but he may know the location of the Sears Tower better than someone who has come to the same belief using iffy map-reading skills to correctly read a map. Both can know where the Sears Tower is located; Morris deserves little credit for his knowledge, the other person deserves a lot of credit for his knowledge, but Morris knows the location better. So amount of credit can't explain knowing better. What does? Intuitively, having a more reliable source (the testimony of the local is more reliable than the other person's map-reading skills), or having better evidence. Interestingly, it does not follow that credit isn't required for knowledge. It could turn out that having credit is required for knowledge even though having credit isn't definitive of knowledge.

Even if these admittedly sketchily developed objections are sound, they wouldn't detract from Green's achievement in this book. It is an excellent, interesting, and fruitful defense of a credit view of knowledge as well as a valuable contribution to our understanding of how social factors affect knowledge.

REFERENCE

Shieber, Joseph (2012) “Against Credibility.” Australasian Journal of Philosophy 90.1: 1-18.