Bad Beliefs: Why They Happen to Good People


Neil Levy, Bad Beliefs: Why They Happen to Good People, Oxford University Press, 2022, 188pp., $70.00 (hbk), ISBN 9780192895325.

Reviewed by Alex Worsnip, University of North Carolina at Chapel Hill

2022.11.02


Neil Levy’s book is a treatment of the psychology and epistemology of beliefs about matters that, as Levy pithily puts it, “are controversial but shouldn’t be” (x). Levy’s paradigm examples are beliefs about climate change, evolution, and the safety and efficacy of vaccines: in particular, beliefs about these subjects that run contrary to the clearly established expert consensus about them. Levy wants to understand why such beliefs are held, in defiance of expert consensus. This may seem like a purely descriptive, empirical, psychological question. Yet in considering the social and psychological processes that produce such beliefs, Levy also wants to consider the normative question of whether such beliefs are formed rationally. The central claim of Levy’s book is a radical and surprising one: bad beliefs are (at least typically) “the product of genuinely and wholly rational processes”—rational in the (demanding) sense that they “respond appropriately to evidence, as evidence” (xii; Levy’s emphases). He takes this to motivate a shift away from strategies to address bad beliefs that try to fix the processes of thinking that produce them, and toward strategies that focus on “improving the epistemic environment” that believers are confronted with (xiv–xv).

Levy’s book has some noteworthy virtues. It is on an extremely important topic that deserves more attention from epistemologists. It is also deeply engaged—as, given its aims, it needs to be—with the literature in empirical psychology and cognitive science. Unlike some philosophers working on these topics, Levy doesn’t cherry-pick individual studies or absorb psychological theories uncritically: rather, he dives deep into the literature, considering a range of studies, and discussing with acuity what conclusions should be drawn from them. His discussions of this literature are a model of philosophically sophisticated cognitive science, and show how knowledge of the empirical literature and sensitivity to complex philosophical issues are both necessary for determining what the empirical evidence does and doesn’t show. But with all this said, I found myself unpersuaded by Levy’s case for his central claim that bad beliefs are (typically) rational (or, produced by rational processes that are appropriately responsive to evidence; I’ll assume these are equivalent in what follows).

To assess this claim, it’s important to know what is meant by a ‘bad belief’. Levy is clear that bad beliefs are not just false beliefs; indeed, falsity is neither necessary nor sufficient for a belief’s being bad (x). Surprisingly, though, some of what Levy says more positively about what ‘bad’ beliefs are threatens to make his central claim almost definitionally false. For example, on the back flap of the book, he characterizes bad beliefs as ones that “blatantly conflict with easily available evidence.” But it’s hard to see how such beliefs could be rational in a sense that concerns appropriately responding to evidence.

Now, it initially appears that Levy has a resolution of this apparent tension, one that appeals to a distinction between (a) the “totality” of the evidence; (b) the evidence that is (easily) available to the agent; and (c) the evidence that the agent actually possesses. The idea seems to be that bad beliefs conflict with the evidence in sense (a) but not in sense (b) or (c), or perhaps in sense (a) and (b) but not in sense (c).[1] However, if this is what Levy means, then he is committed to the idea that there is some evidence out there that impugns bad beliefs, but which the relevant believers don’t possess. And in chapter 1, Levy goes on to deny precisely this: bad beliefs are not, he argues, (typically) a result of an “information deficit” (25–27). For example, for the most part, “climate change skeptics don’t seem worse informed than those who accept the science” (26). If this is so, it can’t be that climate change skeptics are saved from irrationality by a gulf between the totality of the evidence and that which they possess. This left me very unclear about how to make sense of Levy’s claim that the relevant beliefs are both ‘bad’ in his sense and rational.

While—as the above makes clear—I found Levy’s framing(s) of his thesis somewhat inconsistent and confusing, this critique has its limits. Whatever he means by ‘bad beliefs’, we can still ask whether the paradigm examples of the beliefs that Levy has in mind—beliefs that run against the expert consensus about climate change, evolution, and vaccines—are, as he claims, (typically) rational. This is the substantive question to which Levy’s book is really addressed, and it’s what I’ll focus on in the remainder of this review.

As I understand it, Levy’s central argument for the rationality of such beliefs is as follows. First, given the complexity and volume of the first-order evidence bearing on complex scientific matters, no individual—not even a scientist, let alone a lay person—can feasibly or responsibly evaluate all this first-order evidence for themselves (95–105). Instead, we must defer to others very extensively, “outsourcing” many of our beliefs to the community (64). This practice is not only evolutionarily adaptive (45, 61, 73) but also “directly” rational (131, 142). This is because, crucially, the fact that people in my community believe p is higher-order evidence for p (81, 150).[2] But then, the thought is, those who hold beliefs counter to the scientific consensus on matters like climate change, evolution, and vaccines are (often) just deferring to their communities in this perfectly appropriate way (82–84). It just happens that their epistemic environment is polluted such that this rational practice of deference leads them to false beliefs. Those of us who hold beliefs that are in line with the scientific consensus on these matters aren’t using a radically different method of belief formation: we too must defer somewhat automatically to what others around us believe. We just happen to be in less polluted epistemic environments.

Levy is surely correct that most—perhaps all—of us lack the capacity to fully assess all the relevant first-order evidence about complex scientific matters for ourselves, and that much of what we know relies on (quite properly) deferring to the testimony of others in our community. But to get the result that (for example) climate change deniers are rational in believing as they do, he needs to explain why climate change deniers are rational in giving so much more weight to the testimony of other climate change deniers than they do to the testimony of those who affirm the scientific consensus. After all, many climate change deniers are perfectly aware of the existence of large numbers of people—people who may well qualify as being part of their “community” in at least some good sense—who don’t deny the reality of anthropogenic climate change. Thus, Levy’s claim needs to be not just that we’re rational in deferring to “the community,” but that we’re rational in deferring to our immediate epistemic community—those closest to us, whether geographically or culturally, whether by accident or choice—and in particular to our political co-partisans. Moreover, the claim must be that we’re rational in this even when we’re perfectly well-aware that many people outside this immediate community disagree.

What could justify this claim? Clearly, it’s true that people generally trust the members of their immediate epistemic communities, and their co-partisans, more than they trust outsiders. At times (82–83, 103–4), Levy seems to flirt with or presuppose a radical subjectivism on which the fact that I trust someone suffices to make it rational to defer to them (and the fact that I distrust someone suffices to make it rational to dismiss them), and my trusting as I do is not itself open to rational assessment. To use an example of his (82), liberals trust Dr. Fauci, while conservatives trust Donald Trump; and given whom they each trust, they’re each rational to reach the beliefs that they do. On this view, the fact that conservatives trust Donald Trump suffices to make their beliefs rational; the trust itself is neither rational nor irrational, but simply a social fact.

Perhaps Levy finds this view tempting because—as he argues very persuasively in one of the best parts of the book (110–122)—it’s very hard for lay people to tell who is a reliable source of information and who isn’t. Without possessing expertise themselves, they are very limited in their ability to accumulate track-record data on different testifiers’ reliability; nor can they directly track amorphous factors like “argument quality” or “intellectual honesty” in a way that sets aside their pre-existing beliefs and convictions about the matter under dispute. But though this observation is spot-on, I do not think we should draw from it the conclusion that individuals can trust (and dismiss) whomever they wish and be beyond rational criticism in so doing. An alternative view—one that is to my mind more compelling, albeit somewhat unsettling—is that the proper attitude in light of one’s inability to arbitrate between conflicting sources of testimony is one of doubt or suspended judgment. But this would at most show that individuals in certain informational environments are rational in failing to believe the scientific consensus on climate change, evolution, and vaccines—not that they’re rational in positively believing that the consensus is false.

But perhaps Levy does not ultimately want to say that whom we trust is beyond rational criticism. At other times (81–82), he seems to say instead that there are just good positive reasons for people to trust those in their immediate communities, and their co-partisans, more than outsiders. For example, he says that the fact that people like me believe p is higher-order evidence for p (72, 81). But he fails to explain what work “people like me” does here: why is the fact that people like me (or: who are part of my immediate epistemic community, or who are my co-partisans) believe p weightier evidence than the fact that people like you believe not-p? Why is it rational to treat being like me as a mark of being a (more) reliable source of information? After all, as the epistemological literature on “irrelevant influences on belief” makes vivid, had I been born into a different family, or in a different geographical location, or gone to a different kind of school, those “like me”—my immediate community members, and likely my co-partisans too—would have been very different. But it doesn’t seem that which circumstances I am born into makes any difference to who is in fact reliable.

All Levy says here is that “those who don’t share my values may seek to exploit me, and those on my side are likely to be more trustworthy (toward me)” (82). This is not persuasive. Exploitation can come from those who are (perceived to be) on one’s political “side” just as much as those on the other side: the latter-day GOP’s exploitation of the white working class’s fears and concerns to push an agenda that arguably redounds to the latter’s own economic disadvantage is a case in point. Moreover, someone who doesn’t seek to exploit me and who is quite honestly reporting what they believe to be the truth can still be a highly unreliable source of information: sincerity is not reliability.

As I mentioned earlier, Levy’s book is impressively engaged with a wide swathe of empirical work in cognitive science. Unfortunately, it is less well-engaged with a large body of work in epistemology that is of direct relevance to the central arguments of the book. To begin, despite the fact that Levy’s argument centrally appeals to the notion of higher-order evidence, there is almost no engagement with the voluminous literature on the ways in which higher-order evidence can afford us reason to doubt our beliefs. Some of this literature powerfully challenges some of Levy’s core contentions. For example, I’ve already brought out how the literature on “irrelevant influences on belief”—the ways in which factors such as where and when we happen to be born affect what we believe and what sources we trust—can be used to challenge Levy’s view that the fact that my immediate community believes p makes it rational to believe p (cf. esp. Avnur & Scott-Kakures 2015). This literature, and the problem it raises, goes entirely undiscussed.

Similarly, while Levy does allude to psychological work on motivated reasoning, there is very little discussion of the growing literature on its epistemological consequences—specifically, on the reasons for doubting our beliefs that are afforded by suspecting that they have been produced by motivated reasoning. This too might be taken to directly challenge Levy’s view, since many beliefs that deny scientific consensus on matters like climate change look plausibly like results of motivated reasoning (cf., e.g., McKenna 2019; Carter & McKenna 2020; Greco 2021). While Levy does very briefly mention something like this thought, he dismisses it unpersuasively in a single sentence (81). Levy also neglects work in feminist epistemology that suggests, directly contra his suggestion that it is reasonable for me to give disproportionate weight to the testimony of those like me, that epistemic justice often requires us to take especially seriously the testimony of those who are not like us, particularly if we occupy positions of privilege (Daukas 2006; Pohlhaus 2012).

As well as not engaging with some epistemological work that challenges his view, Levy also neglects to mention some important work that prefigures his view. For example, his (again: persuasive) arguments about the difficulties of assessing expert credentials without having expertise oneself strongly mirror arguments previously presented by Elijah Millgram (2015: chapter 1, appendix A) and C. Thi Nguyen (2020), while his arguments about the reasonableness of relying on co-partisanship in determining whom to trust closely resemble those in a well-known paper by Regina Rini (2017). Millgram is cited in a different context, but not the one just highlighted; Nguyen and Rini, despite being two of the most prominent figures in the growing field of “applied epistemology” of which this book is a part, go entirely uncited. Finally, chapter 6 argues that engineering others’ epistemic environments is not a problematic restriction of autonomy—a claim that Kristoffer Ahlstrom-Vij has previously defended in a full-length book (Ahlstrom-Vij 2013); again, this goes uncited.

In place of engagement with this literature, Levy sometimes traffics in stereotypes about mainstream epistemology that are at best outdated and at worst simply inaccurate. For example, he claims that his contention that “unaided individual cognition is highly unreliable” runs “contrary to the consensus in epistemology” (xvii). But I can’t think of a single contemporary epistemologist who holds that unaided individual cognition is reliable. On the contrary, the claim that we are necessarily highly reliant on others in our doxastic lives is now a truism in epistemology, and there are a number of recent works that argue (in concurrence with Levy) that we cannot responsibly evaluate the first-order evidence about scientific matters for ourselves, and as a consequence are required to defer to epistemic authorities (Huemer 2005; Zagzebski 2012: chapter 5; Ahlstrom-Vij 2016; Grundmann 2021; again, all these works go uncited).

In closing, I want to comment on a broader trend that Levy’s book exemplifies. As mentioned earlier, Levy stresses that, given his view, our aim should not be to alter the epistemic practices of individuals, but rather to reform the “epistemic environment” in which these individuals find themselves. This fits with a trendy refrain in some recent social epistemology—paralleling and perhaps shaped by a similar refrain in parts of social and political philosophy, social science, and (some) left-leaning socio-political discourse more broadly—that urges us to shift attention from the pathologies of individuals toward the pathologies of structures. But there is a crucial ambiguity here. It’s one thing to say that decrying the practices of individuals is unlikely to be an effective way to bring about social change. This claim has much plausibility to it, especially in the present context: it seems very unlikely that shouting “you’re irrational!” at climate change deniers is going to change their minds. But the fact that it is an ineffective political strategy to say this does not in the slightest show that it is false. That is quite a different claim. We can acknowledge that the best solutions to bad belief are likely to be aimed at altering epistemic environments, rather than at altering individual practices, without having to pretend that these individual practices are, in fact, fully rational.

Contra Levy, then, we simply don’t need the claim that bad believers are rational to motivate a focus on reform of the epistemic environments and structures that bad believers find themselves in. Those of us who remain unconvinced by his arguments for that claim can wholeheartedly join in the project of seeking to improve epistemic environments, as he urges. But this itself, of course, is no easy task.

ACKNOWLEDGMENT

For extremely helpful comments on a draft of this review, I’m very grateful to Genae Matthews.

REFERENCES

Ahlstrom-Vij, K. 2013. Epistemic Paternalism. Palgrave Macmillan.

Ahlstrom-Vij, K. 2016. Is There a Problem with Cognitive Outsourcing? Philosophical Issues 26: 7–24.

Avnur, Y. & Scott-Kakures, D. 2015. How Irrelevant Influences Bias Belief. Philosophical Perspectives 29: 7–39.

Carter, J.A. & McKenna, R. 2020. On the Skeptical Import of Motivated Reasoning. Canadian Journal of Philosophy 50: 702–718.

Chen, Y. & Worsnip, A. forthcoming. Disagreement and Higher-Order Evidence. In Baghramian, Carter & Rowland (eds.), The Routledge Handbook of the Philosophy of Disagreement. Routledge.

Daukas, N. 2006. Epistemic Trust and Social Location. Episteme 3: 109–124.

Greco, D. 2021. Climate Change and Cultural Cognition. In Budolfson, McPherson & Plunkett (eds.), Philosophy and Climate Change. Oxford University Press.

Grundmann, T. 2021. Facing Epistemic Authorities: Where Democratic Ideals and Critical Thinking Mislead Cognition. In Bernecker, Flowerree & Grundmann (eds.), The Epistemology of Fake News. Oxford University Press.

Huemer, M. 2005. Is Critical Thinking Epistemically Responsible? Metaphilosophy 36: 522–531.

McKenna, R. 2019. Irrelevant Cultural Influences on Belief. Journal of Applied Philosophy 36: 755–768.

Millgram, E. 2015. The Great Endarkenment. Oxford University Press.

Nguyen, C.T. 2020. Cognitive Islands and Runaway Echo Chambers: Problems for Epistemic Dependence on Experts. Synthese 197: 2803–2821.

Pohlhaus, G. 2012. Relational Knowing and Epistemic Injustice: Toward a Theory of “Willful Hermeneutical Ignorance.” Hypatia 27: 715–735.

Rini, R. 2017. Fake News and Partisan Epistemology. Kennedy Institute of Ethics Journal 27: 43–64.

Zagzebski, L. 2012. Epistemic Authority. Oxford University Press.


[1] On p.x, Levy seems to suggest that bad beliefs conflict with the evidence in sense (a) but not in sense (b) [or, by extension, (c)]. But this is not enough to resolve the tension with his aforementioned claim that bad beliefs “blatantly conflict with easily available evidence,” which seems to directly say that bad beliefs conflict with the evidence in sense (b). A more consistent proposal would thus be that bad beliefs conflict with the evidence in senses (a) and (b) but not in sense (c).

[2] In passing, let me note that while Levy is not the only person in the literature who characterizes testimony as functioning primarily or solely as a kind of higher-order evidence, I am not convinced that this is right. Elsewhere, with Yan Chen (Chen & Worsnip forthcoming), I’ve argued for a characterization of higher-order evidence (with respect to some proposition p) as evidence that bears on the rational status of one’s (would-be) doxastic attitudes toward p derivatively on its bearing on a higher-order proposition about p (e.g., that one’s evidence supports p, or that one’s [would-be] belief in p is rational). But it’s not clear that testimony fits this characterization. When a testifier is reliable (or rationally taken to be so), the fact that they say that p is direct evidence for p, just as a reliable instrument (say, a thermometer or other measuring device) saying that p is direct evidence for p. This evidence’s rational bearing on would-be doxastic attitudes toward p does not seem to be derivative on its bearing on any higher-order proposition about p. Thus, testimony that p seems to be—at least primarily—first-order evidence for p.