Moral Judgments as Educated Intuitions


Hanno Sauer, Moral Judgments as Educated Intuitions, MIT Press, 2017, 312 pp., $50.00 (hbk), ISBN 9780262035606.

Reviewed by Joshua Alexander, Siena College

2017.10.10


Hanno Sauer provides a thoughtful defense of a novel kind of rationalism about moral psychology. More traditional rationalist models suggest that moral judgments arise from deliberative reflection and the careful weighing of reasons. These models have fallen out of fashion in recent years, and it has become increasingly popular to think that moral judgments are instead shaped by our "moral intuitions" -- fast and often emotionally charged responses to morally salient situations that are produced by unconscious cognitive processes and mechanisms. This change in how we commonly think about moral psychology has been driven by a growing body of empirical research that suggests, among other things, that people form moral judgments so quickly that it would be impossible for those judgments to be suitably influenced by deliberative reflection, that those judgments are influenced significantly by emotional engagement, and that when people are asked explicitly to explain and defend their moral judgments, they often appeal to considerations that could not possibly explain or defend those judgments (see, for example, Greene et al. 2001 and Haidt 2001). In other words, it seems like we have really good empirical reasons for thinking that moral judgments are automatic and intuitive responses to morally salient situations, closely linked with our emotional responses to those situations, rather than the product of controlled and reflective deliberation. Sauer's goal is to defend rationalism by demonstrating that, properly understood, it is perfectly consistent with this growing body of empirical research.

I would like to focus on just one part of this defense, where Sauer claims that not only is rationalism perfectly consistent with evidence that moral judgments are automatic and intuitive responses to morally salient situations, but also that we have good empirical reasons to believe that automatic and intuitive responses to morally salient situations are themselves shaped by deliberative reflection and the careful weighing of reasons. The basic move in this part of his defense is to try to demonstrate that deliberative reflection and the careful weighing of reasons play a significant role in the formation, maintenance, and correction of our moral intuitions, so that the fact that moral intuitions play a significant role in moral judgment is perfectly consistent with the view that moral judgments arise from deliberative reflection and the careful weighing of reasons. In a slogan, or a book title: "Moral judgments are educated moral intuitions."

The second part of his defense involves an argument to the effect that emotional responses to morally salient situations count as genuine moral judgments only when they pick out the morally relevant features of those situations, and that when they do, they demonstrate sensitivity to moral reasons, which is just what rationalism requires. I leave it to people more heavily invested in recent debates about moral sentimentalism to judge this second part of the defense.

I think that the idea that moral judgments are educated moral intuitions is worth serious consideration, in no small part because I think that Sauer succeeds in showing us that thinking of moral judgments as educated moral intuitions would reconcile rationalism with empirical evidence that moral judgments are automatic and intuitive responses to morally salient situations. Having said that, I want to suggest here that Sauer has not yet made the case that moral intuitions can be educated. He provides two different kinds of reasons for thinking that they can: empirical studies that focus on our moral intuitions, which he thinks provide direct evidence that moral intuitions can be educated, and empirical studies that focus on other kinds of intuitions, which he thinks provide indirect evidence that moral intuitions can be educated. My worry is that the empirical studies that he recruits do not provide adequate reason for thinking that moral intuitions can be educated, at least in the sense that he would need them to be in order to secure the kind of reconciliation he wants.

Let's start with his direct evidence. Sauer's "experimentum crucis" for the claim that our moral intuitions are amenable to reason comes from recent work by Joshua Greene and his colleagues, which aims to show that people's moral intuitions about Jonathan Haidt's famous incest case are responsive to legitimate challenges to those intuitions (Paxton, Ungar, and Greene 2012). There is reason to worry about pointing to this work in defense of the idea that we can educate our moral intuitions, namely, that it is not entirely clear whether Greene and colleagues have shown that people change their moral intuitions or simply that they change their minds. After all, one way that people might respond to challenges to their moral intuitions is by setting those intuitions aside, whether those intuitions change or not. This is, arguably, what many people do when they learn how to solve the "Monty Hall problem" or when they learn to overcome the Müller-Lyer illusion, and nothing about this work rules out the possibility that this is what is happening when people's moral intuitions about Haidt's incest case are challenged.

This worry -- that Greene and his colleagues have not done enough to show that people's moral intuitions about Haidt's incest case actually change, rather than that their considered views about the case change despite their moral intuitions -- is compounded by the fact that Sauer points to recent work by Matthew Feinberg and his colleagues as further evidence that people's moral intuitions about Haidt's incest case are responsive to legitimate challenges to those intuitions (Feinberg et al. 2012). That work is quite explicitly aimed at showing how people overcome their moral intuitions about Haidt's incest case, something that could hardly qualify as educating those intuitions in the sense that Sauer clearly has in mind. The problem, then, is this. What Sauer needs in order to reconcile rationalism with the empirical evidence is evidence that deliberative reflection and the careful weighing of reasons shape our automatic and intuitive responses to morally salient situations, not just that our automatic and intuitive responses to morally salient situations sometimes give way to deliberative reflection and the careful weighing of reasons. The studies that he recruits simply do not provide adequate reason to think that this is what is going on.

What about his indirect evidence? Since little empirical work has been done to explore whether people's moral intuitions can be educated, Sauer builds much of his case on the basis of supposed similarities between moral intuitions and other kinds of intuitions, philosophical or otherwise, and what empirical work has been done exploring whether other kinds of intuitions can be educated. There are two reasons to worry about this strategy, however. The first is rather simple and straightforward. It is simply not clear that moral intuitions actually do recruit the same cognitive mechanisms that are involved in the production and maintenance of other kinds of intuitions, including other kinds of philosophical intuitions. In fact, from the perspective of psychology or cognitive neuroscience, there are substantial reasons for thinking that intuitions form a rather heterogeneous class of mental states or episodes, which suggests that not much weight can be placed here on evidence that some kinds of intuitions can be educated in the relevant sense (Nado 2011).

The second reason to be concerned is that it is just not clear that the studies that he recruits for this purpose demonstrate what he needs them to demonstrate. To take just one central example, Sauer says that the "most spectacular evidence for how subjects can educate their intuitive judgments comes from research on social prejudice and stereotype activation," citing the influential work of Laurie Rudman and her colleagues on "unlearning" implicit biases (Rudman et al. 2001). This work aims to show that implicit attitudes are malleable, and that there are interventional strategies that can be used to help people change their implicit attitudes. The problem for Sauer is that the idea that we can unlearn implicit biases remains rather controversial. In fact, the current consensus seems to be that interventional strategies produce only short-term effects; various interventions immediately reduce implicit preferences, but this effect goes away after a couple of days, something that suggests that implicit preferences are actually rather steadfast (Lai et al. 2016). What's more, even when we focus our attention on the short-term effectiveness of interventional strategies, what we see is that only certain kinds of interventional strategies produce even short-term changes (Forscher et al. under review). This underscores another important point about the education of our moral intuitions: not only is it an open question whether moral intuitions can be educated, it is also an open question what that education should look like and what its benefits would be.

While I do not think that Sauer has made the case that moral intuitions can be educated, at least in the sense that he seems to need them to be in order to reconcile rationalism with empirical evidence that moral judgments are automatic and intuitive responses to morally salient situations, I am really glad that he is trying to make this case, and think that his attempt is a profound and important contribution to the ways that we think about moral psychology. This seems like just the kind of move that rationalists need to make, and I think that anyone interested in moral psychology should take this move, and this book, very seriously. There is more work to be done, but it seems to me that this is exactly the right kind of work to be doing.

REFERENCES

Feinberg, Matthew, Robb Willer, Olga Antonenko, and Oliver P. John. (2012). Liberating reason from the passions: Overriding intuitionist moral judgments through emotion reappraisal. Psychological Science 23: 788-795.

Forscher, Patrick S., Calvin K. Lai, Jordan R. Axt, Charles R. Ebersole, Michelle Harman, Patricia G. Devine, and Brian A. Nosek. (Under review). A meta-analysis of change in implicit bias.

Greene, Joshua D., R. Brian Sommerville, Leigh E. Nystrom, John M. Darley, and Jonathan D. Cohen. (2001). An fMRI investigation of emotional engagement in moral judgment. Science 293: 2105-2108.

Haidt, Jonathan. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review 108: 814-834.

Lai, Calvin K., et al. (2016). Reducing implicit racial preferences: II. Intervention effectiveness across time. Journal of Experimental Psychology: General, 145: 1001-1016.

Nado, Jennifer. (2011). Why intuition? Philosophy and Phenomenological Research 86: 15-41.

Paxton, Joseph M., Leo Ungar, and Joshua D. Greene. (2012). Reflection and reasoning in moral judgment. Cognitive Science 36: 163-177.

Pinillos, N. Ángel, Nick Smith, G. Shyam Nair, Peter Marchetto, and Cecilea Mun. (2011). Philosophy's new challenge: Experiments and intentional action. Mind & Language 26: 115-139.

Rudman, Laurie A., Richard D. Ashmore, and Melvin L. Gary. (2001). "Unlearning" automatic biases: Malleability of implicit prejudice and stereotypes. Journal of Personality and Social Psychology 81: 856-868.