Many things can be biased: coins, dice, methods of predicting the weather, descriptions, policies (e.g., regarding admissions), laws, people (including judges, referees, parents, grandparents, teachers, etc.), their perceptions, beliefs, views, judgments, verdicts, actions, and so on. Is there something that unites all the items on this list? In his recent book, Bias: A Philosophical Study, Thomas Kelly argues that there is. He first distinguishes a pejorative from a non-pejorative (innocuous) sense of “bias”: to say that a person or her beliefs are biased in the pejorative sense expresses a negative evaluation of that person or her beliefs; to say that a coin, pair of dice, or method of predicting the weather is biased does not. In both cases, however, Kelly offers an account of bias as a systematic, objective, non-random departure from genuine norms (pejorative sense) or standards of correctness (non-pejorative sense). Biased coins systematically depart from the standard set by fair coins, which are equally likely to land heads or tails. Referees depart from a norm of impartiality if they are disposed to favor one team over another, and their calls are biased if they manifest that favoritism. Judges likewise depart from a norm of impartiality if their verdicts manifest favoring (or disfavoring) one race, gender, or ethnicity over another. Departures from genuine norms warrant negative evaluations that “often have normative significance for what we should do or think” (167).
As Kelly acknowledges, there seem to be counterexamples to this account of bias. Aren’t there systematic departures from norms and standards that are not biases? Two of Kelly’s own examples involve systematic mispronunciation of certain words and systematic mistakes in long division. And aren’t there systematic misperceptions, as when, in the Müller-Lyer figure, the line with arrowheads pointing inward looks longer than the one with arrowheads pointing outward even though the two are of equal length? That involves a mistake, not a bias.
Contrary to what Kelly says, misleading evidence does not imply bias. He rightly suggests that the norm of rational or justified belief requires fitting your beliefs to the evidence (124, 154; the case of Frank, 78). However, when the evidence is misleading, you will often have false beliefs, as in the Müller-Lyer example. People many years ago fit their beliefs to the evidence and believed that the sun goes around the earth; they had a false belief. If you were in the Matrix, fitting your beliefs to the evidence would result in your holding false beliefs about the world. None of this involves bias. Kelly seems to think it does because it violates the “norm of truth or accuracy” (77). But basing your beliefs on the evidence plainly does not involve bias even if the evidence is misleading.
Where have things gone wrong here? Perhaps systematic violations of the so-called norm of truth or accuracy do not determine bias because it is not what Kelly calls a genuine norm. Internalists about epistemic justification will deny that there is a genuine norm of truth or accuracy, holding instead that truth is the aim or goal of epistemic justification but that justification involves the responsible pursuit of this aim, which requires fitting our beliefs to the evidence. So fitting our beliefs to the evidence is the relevant norm. If this is right, then Kelly’s norm-theoretic account of bias is not threatened by the fact that misleading evidence does not constitute bias. His general account of bias in the pejorative sense could be right; where he goes wrong is in taking the so-called norm of truth or accuracy to be a genuine norm (186).
Another potential counterexample to Kelly’s theory involves systematic violations of the rules of logic. Someone who systematically affirms the consequent or denies the antecedent departs from the rules of logic. Those are mistakes, and the resulting arguments are invalid, but people who systematically commit these mistakes are not biased in virtue of committing them. The same would be true of someone who systematically commits a scope fallacy, say, reading sentences such as “What you know must be true” as “If you know something, it’s necessarily true,” rather than as “Necessarily, if you know something, it’s true.” They make a logical mistake, one that can lead them to accept external-world skepticism, but they aren’t biased.
Kelly feels the pull that the examples of mispronunciation and long division exert against his view; they seem to be counterexamples to it (150). However, instead of giving up or modifying his view in light of such examples, he says he is offering an explication rather than a reductive analysis of bias in terms of necessary and sufficient conditions. Borrowing the notion of explication from Carnap, Kelly says, “the aim of an explication is to capture the theoretically interesting and important notion in the vicinity” of some concept even if the explication “departs at the margins from ordinary usage” (151; my italics). So he proposes to give an explication of bias, not an analysis.
Still, logical mistakes involving systematic departure from the rules of logic count as biased reasoning according to Kelly’s explication of “bias.” And because counting those departures as biased is a serious departure from the ordinary usage of “bias,” not merely a “departure at the margins,” Kelly’s explication of bias is incorrect. Kelly might respond that the rules of logic are not genuine norms but only standards of correctness, comparable to those that determine whether a coin is biased. But that does not seem plausible, since we negatively evaluate invalid arguments, and negative evaluation is, for Kelly, the crucial difference between a pejorative and a non-pejorative use of “bias.”
As a warm-up to more complicated cases of bias involving agents, Kelly introduces the notion of symmetry (152). He thinks that many departures from norms involve deviations from symmetry, the requirement that we “treat like cases alike, for some contextually relevant respect of likeness or resemblance” (154; 160). For instance, a biased admissions policy might treat equally qualified college applicants differently because of their race, gender, or ethnicity, thereby violating symmetry. Here symmetry requires that equally qualified applicants have equal chances of getting in regardless of their race, gender, or ethnicity, so a departure from equal opportunity amounts to bias. A basketball referee who is disposed to call fouls on one team but not the other because he favors the latter is biased and departs from symmetry by failing to treat like cases alike.
Of course, the real requirement for being unbiased is that we treat relevantly similar cases in relevantly similar ways. An admissions policy need not be biased just because it treats the more qualified applicants differently from the less qualified ones. An NBA basketball team will not be biased just because it hires better players over less skilled players. A judge is not biased just because she convicts defendants where there is overwhelming evidence that they are guilty but fails to convict defendants where that is not the case.
Treating relevantly similar cases in similar ways seems to be a necessary condition of being unbiased, but is it sufficient? In his book Social Justice, Joel Feinberg quotes Forrest Gregg, a professional football player who played for Vince Lombardi, coach of the Green Bay Packers. Gregg said that Lombardi treated everyone the same . . . just like dogs! This example is supposed to show that treating like cases alike, or relevantly similar cases similarly, is not sufficient for doing something that is morally permissible. But if that is what Lombardi did, he was not biased and neither were his actions. (He may have been what Kelly calls an “unbiased jerk” (5–6), though it’s puzzling why Kelly would say such a jerk is unbiased, given his systematic-norm-departure account of bias.)
Lombardi was like a basketball referee who always calls charging on the player with the ball when he runs into a defensive player, even when the defensive player moves sideways to get in front of the offensive player. That’s “blocking,” a foul on the defensive player. Suppose the referee calls charging on both teams in these situations equally often; he does not favor one team over the other in calling fouls in the specified situation. This referee is incompetent but, intuitively, not biased, and neither are his calls. However, on Kelly’s norm-theoretic account of bias, the referee is biased because he systematically departs from the rules of basketball.
Kelly imagines a different scenario involving an incompetent referee (116–17; referred to later at 203). He seems to have in mind a referee who sometimes calls a charge on the offensive player when the defensive player moves in front of him and sometimes calls a block on the defensive player in the very same situation, and who does this for both teams. Because it is random which way this incompetent referee calls a foul, his calls do not systematically depart from the rules of basketball, so neither the referee nor his calls are biased on Kelly’s account. While this example of an incompetent referee is not a problem for Kelly’s account of bias, mine is: it implies that the systematically incompetent referee is biased, but intuitively he is not. He just makes lots of mistakes in a patterned way, systematically calling charges that are actually blocks.
Having defended his norm-theoretic account of bias, Kelly applies it to questions involving knowledge and peer disagreement. He discusses three cases involving the belief that pit bulls are dangerous, where this is in fact true (206–212). (In thinking through these cases, you might want to replace pit bulls with snakes.) A biased thinker believes this without any empirical evidence; an unbiased thinker believes it only if she has compelling empirical evidence; an intermediate thinker, like the biased thinker, believes it without any empirical evidence, but his belief is innate. We can imagine that evolution favored people disposed to believe that pit bulls (snakes) are dangerous over those who would believe it only given compelling empirical evidence (a class that includes people who would disbelieve it without such evidence and those who would suspend judgment without it). Kelly argues that, unlike with the biased thinker, it is no accident that the intermediate thinker believes that pit bulls are dangerous. Kelly’s view seems to be that a true belief reliably formed (in a suitable environment), where the reliability is not accidental, constitutes full-fledged knowledge (perhaps with some other non-defeater conditions added to the account that are not relevant here) (211).
But this doesn’t seem right. Let Truenorth be a person who has a reliable internal compass but no evidence that he does. Imagine that having such a compass was an evolutionary advantage: our ancestors who had it could give more precise directions to a water hole, avoid swamps and sinkholes, find food, evade predators, and so on. Imagine, too, that people stopped relying on this internal compass once more precise human-made compasses were invented, but did not lose the innate ability to tell north from south and east from west. Suppose Truenorth is at a party where people are bragging that they can tell compass directions with their eyes closed, though none of them has any empirical evidence that they can or cannot. They decide to put it to the test. Truenorth draws the longest straw and goes first. They blindfold him and spin him around this way and that. With the blindfold still on, they ask him to point north. Of course, he does. They repeat the test nine more times without telling him the results each time, and he gets it right every time. If for some reason Truenorth believes that he really is pointing north, intuitively he does not know that he is. A kind of luck is involved in his getting it right that is incompatible with knowledge. When you believe something for no reason, from an internal perspective it’s like guessing the correct answer. But on Kelly’s view he does know, because he is in fact non-accidentally reliable about compass direction, thanks to the evolutionary origins of his reliable capacity. Knowledge may be non-accidentally justified true belief, but it is not non-accidentally, reliably produced true belief, at least not in Kelly’s sense of “non-accidentally.”
Kelly also takes up the relevance of bias to disagreement among people on controversial issues, say, on political questions about whether there is a right in the Constitution that is the basis of a legal right to abortion. He argues that if you could give good reasons to think that your opponent is biased on the issue, or closely related issues such as the moral rightness or wrongness of abortion, that could provide a powerful undercutting reason to discount his arguments and position. But what if you can’t? Doesn’t he have as much reason to charge you with bias as you have to charge him with bias? And then shouldn’t you both either suspend judgment or at least considerably reduce your level of confidence in your position?
Kelly argues that it might make a difference whether one person is in fact unbiased and the other biased. He mentions externalist views of knowledge that don’t require justification and so would not require you to have independent reasons to think the other person is biased in order for it to be permissible to stick to your guns (225). He later says that “a person who is in fact unbiased will generally and all else being equal be in a stronger position to rationally resist skeptical pressure than a person who is biased” (231; my italics). But at most Kelly has shown that the unbiased person is more likely to have true beliefs, that is, to be more reliable than the biased person. From the standpoint of rationality, however, Truenorth is no more rational in believing that he is pointing north while blindfolded than someone who is unreliable but gets it right by guessing! So even if the unbiased person is more reliable than the biased person, it does not follow that he is rationally in a stronger position.
But Kelly sees a different problem for externalists, because the situations that concern him involve disagreements about complicated and controversial issues against a background of psychological knowledge about bias, in particular knowledge of the “bias blind spot,” the tendency for all of us to be blind to our own biases while attributing them to others. He ends on a “perhaps depressing note,” captured in his speculation that “the rich get richer” and “the poor get poorer” (228–29; see also the bottom of 11). By this he means that a person with accurate and unbiased first-order views about some topic will likely have accurate and unbiased views about other people and sources of information, which will in turn lead to further accurate and unbiased views about the topic, and so on. People with biased views about a topic, Kelly speculates, will tend to be unreliable in assessing other people and sources of information, which will lead them down a path of increasingly inaccurate and biased views about the topic and about whom to heed on it. Perhaps the unbiased views are more reliable, but reliability seems neither necessary (demon worlds; the Matrix) nor sufficient (Truenorth) for justification (or rationality), nor sufficient (given true belief) for knowledge.
Kelly says that a large portion of his book is devoted to quite general metaphysical and epistemological questions about bias (11). I have tried to focus on his answers to those questions by concentrating on what he says about the nature of bias and its implications for knowledge and disagreement. But there are many important points Kelly makes that I have had to ignore, including: the relationship between bias in members of a society and a biased society; whether some sorts of bias are more fundamental than others; how bias can be compatible with knowledge and even required by it; the difference between biases based on content and those based on process; the relationship between disagreement, bias, and rationality; norms that, if followed, are meant to reduce bias; and more. I do not believe there is any publication in philosophy or psychology dealing with bias, or relevant to it, that Kelly is unfamiliar with. His erudition is superhuman. The examples he names in all caps perfectly set up the topics he goes on to discuss. These are some of the many virtues Kelly and his book possess that I have not discussed.
I have written a critical review of Kelly’s book, but I am confident that my review is completely unbiased (imagine a winking emoji here). If Cook Wilson were alive, he would not need to fear that Kelly made a mistake in publishing his book on the grounds that publications in general lead to “fruitless and unproductive exchanges” (13). Kelly’s book can prompt many fruitful and productive exchanges about bias.