The Moral Psychology Handbook


John M. Doris (ed.), The Moral Psychology Handbook, Oxford University Press, 2010, 493pp., $80.00 (hbk), ISBN 9780199582143.

Reviewed by Dan Haybron, Saint Louis University

2011.05.13


Around a decade ago, a leading moral philosopher told me flatly, "psychology is irrelevant for ethics." That was not, at the time, an unusual sentiment. It is hard to imagine anyone saying such a thing today (not, at any rate, publicly). Should any doubts remain, The Moral Psychology Handbook should swiftly put them to rest. Its thirteen chapters, penned by a stellar collection of twenty philosophers, psychologists and neuroscientists, offer an outstanding view of the current state of this now-hopping field. Unlike many handbooks, however, this one significantly adds to, and does not simply review, the literature. It is essential reading for anyone interested in moral psychology, and it should probably be read by anyone doing ethics, period.

As they are multiple-author ventures, handbooks risk offering a disjointed view of their field, and are frequently of uneven quality. Both problems are considerably reduced here by the fact that this book is basically a collaboration, indeed being credited to the Moral Psychology Research Group, a working group that has met regularly since 2003. Each chapter has at least two authors and had to survive the scrutiny of the group; there are no weak links. While the collection still has some of the overlap and disunity of any such book, it also has the benefit of multiple perspectives that a single-author volume lacks. To my mind, the tradeoff is well worth it: the result is a more balanced, thorough discussion than one would expect from a monograph.

Turning to the substance of the book, a couple of things stand out. First, there is refreshingly little discussion of "experimental philosophy," though several of the authors are leading figures in that movement. Instead of drawing out further the (to this reviewer) tedious methodological debates about the role of experimental work within philosophy, the authors simply do it, without making a fuss about it. Or, in some cases, they don't do it, and leave the empirical work to others. In all cases the chapters are richly informed by empirical research. The empirical focus, note, is not placed in opposition to philosophical reflection, but employed to inform it. This is still philosophy.

Second, you might expect that a collection like this, assembled in the relatively early, heady days of a promising and fast-growing research endeavor, would tend toward the breathlessly credulous, if not the hyperbolic. But the authors assembled here are not given to the breathless, and their chapters offer a sober, carefully argued assessment of the science and what it means for ethics. Often they throw cold water on the pretensions of more excitable types. (While the book's audience is mainly philosophers, investigators in the sciences would do well to examine its meticulous discussions of their research and what it really shows.) If anything, the reader occasionally wishes for a bit less caution and a little more excitement.

For most of what follows I will briefly describe each chapter, grouping them thematically and pausing occasionally to editorialize. The lion's share of the book focuses on moral judgment and motivation: what drives ordinary judgments about morality, and what drives moral behavior? The opening chapter, by Edouard Machery and Ron Mallon, casts a skeptical eye on evolutionary accounts of morality, arguing that while there may well be an interesting evolutionary explanation of normative cognition generally, evidence is lacking for an evolutionary explanation of specifically moral normativity. While I suspect some such explanation is in the offing, the authors show how daunting the obstacles are, in the process bringing a welcome clarity to the debate.

Evolutionary arguments also come to the fore in a chapter on the egoism/altruism debate by Stephen Stich, John Doris and Erica Roedder. Here too a nest of confusions is patiently disentangled, one upshot being that evolutionary arguments have done little to advance the debate. Much progress, however, has come from a series of psychological experiments by Batson and others that have methodically eliminated a variety of plausible egoistic explanations for putatively altruistic behavior. This research has indeed increased the pressure on psychological egoists, but the argument is far from settled: the science remains inconclusive. Here the authors are admirably charitable and even-handed.

I am less temperate, however, and wonder if the egoist isn't in deeper water than they let on. True, the dispute is empirical and cannot be settled from the armchair; for all introspection and casual observation can tell us, egoism might be true or false. (So, perhaps, might universal malice.) But why should anyone think egoism true in the first place? From the armchair, the case doesn't look too good, since the appearances are of motivational pluralism; that's why the egoist immediately has to explain away the blizzard of apparent counterexamples. At any rate, how could anyone pretend to establish an extremely bold and arcane empirical conjecture about the roots of all motivation from the armchair? By contrast, the non-egoist can simply say, I have no idea what makes the gears turn, so in the meantime I'm sticking with the appearances: some motives rest on self-interest, others don't. In that light, the popularity of egoism seems a little weird; its status as a default position on motivation in many circles, even weirder. (Indeed, the very term 'altruism' connotes a strange departure from an egoistic norm, begging for explanation.)

And if Stich et al. are correct, then the empirical case for egoism is, at present, null: neither evolutionary theory nor mind science offers any serious evidence to support it, and every empirical test thus far has been consistent with non-egoism, if not favoring it. At this point egoists have to bet on one of the as-yet-untested variants of their theory. (Though importantly, I don't believe any of them support anything like crude commonsense egoism, a surprisingly popular doctrine which claims, for instance, that we'll never get anywhere by appealing to anything other than people's self-interest. That view, at this juncture, appears pretty plainly to be off the table and needs to go away. No philosopher needs to be persuaded of that, but an alarming number of folks outside the discipline seem not to have gotten the message.) But absent some test breaking its way, the right conclusion to draw, it seems to me, is that egoism is likely false, or at any rate there is no rational basis for believing it.

Note that this doesn't immediately help the believer in altruism, which is the main focus of this chapter. But once you take egoism out of the picture and allow for other ultimate motives, why doubt that some motivation is genuinely altruistic? Accordingly, the case for altruism might be bolstered by evidence of other forms of non-egoistic motivation. Here's one: sheer malice. Sometimes people seem to harm others simply out of the desire to harm them, or at any rate self-interest is not obviously implicated. ("For hate's sake I spit my last breath at thee … ") Egoists sometimes pride themselves on being tough-minded, but in this case they may not be tough-minded enough: people may be meaner than egoists think. Less disturbingly, there is good evidence, discussed in other chapters, that people will sacrifice their interests to punish violators of moral norms, as in ultimatum games. One can of course try to cook up egoistic explanations for such observations, but perhaps progress can be made, à la Batson et al., in ruling out at least the more obvious ones. In any case, this seems to me another potential source of evidence for the debate over egoism, bearing indirectly on the case for altruism as well.

Another recurrent theme, indeed perhaps the central issue of contemporary moral psychology, is the ancient "reason versus emotion" debate. Here there has been an "antirationalist turn" in recent years, characterized by various challenges to the role of conscious, rational decision in human life. In fact there are at least two distinctions here, one between controlled and automatic processes, the other between cognitive and affective processes; though even this schema is a bit simplified. While the nonrational or automatic is often assimilated to the affective -- both tending to be assigned to the "sentiment" camp -- the distinctions actually cut across each other. It would be helpful to have a label other than "sentimentalism" that fits both automaticity-based and affect-based doubts about the centrality of reason, and "antirationalism" or "nonrationalism" is less than inspiring. It is a promising sign that the contributions to this volume, while exhibiting various facets of this turn, have resisted the temptation to overplay the doubts and consign reason to the margins (your mileage may vary; I suspect Kantians will be less sanguine). As far as I can tell, none of the contributors suggest that reason fails to play an important role in human life. The question is whether it plays quite the roles that various philosophical accounts have supposed.

In the chapter on moral motivation, Timothy Schroeder, Adina Roskies and Shaun Nichols compare several philosophical models against the neuroscientific evidence, concluding that philosophical views enjoy varying levels of congruence with the science of motivation, where the reward (roughly "desire") system takes center stage. Cognitivist models in particular face difficulty, as there is no plausible mechanism by which moral beliefs could, independently of desire, motivate behavior. Turning to the moral emotions, a chapter by Jesse Prinz and Nichols identifies several different models for the impact of emotions on moral motivation and judgment, including Nichols' "sentimental rules" account and Prinz's "emotional constitution" model. But the majority of their discussion centers on a detailed examination of the role of some specific emotions, particularly anger and guilt, which may be essential for the sustenance of moral behavior.

Focusing more narrowly on the mechanisms subserving moral judgment, Fiery Cushman, Liane Young and Joshua Greene lay out the case for a dual process model of moral judgment, on which judgments issue from both "intuitive/affective" and "conscious/cognitive" processes, each favoring different sorts of moral norms. In particular, non-utilitarian judgments, like those involved in trolley cases, are associated -- contra Kant -- with emotional processes. A later essay on rules by Mallon and Nichols, by contrast, takes non-utilitarian judgment to employ internally represented principles or rules as well as emotions, yielding a "dual vector" model of (non-utilitarian) moral judgment. Roedder and Gilbert Harman, in their chapter, examine another cognitive approach to moral judgment that builds on the idea, adapted from linguistics, of a "moral grammar." A particularly interesting variant of the linguistic analogy posits some sort of moral universals. The authors do not endorse a moral grammar view, but argue that the approach merits further investigation.

Moving to moral reasoning, Harman, Kelby Mason and Walter Sinnott-Armstrong subject a popular deductive model of moral reasoning to empirically informed scrutiny and find it wanting. They discuss an alternative model, reflective equilibrium, and suggest that it is not likely to generate reliable results, for instance because of its sensitivity to starting points. Those disappointed by such intimations are unlikely to find much solace in the chapter by Sinnott-Armstrong, Young and Cushman, which involves a close scrutiny of moral intuitions and their provenance, suggesting that such intuitions are heuristics -- useful shortcuts for dealing with workaday moral problems but often unreliable, and of little evidential value for establishing moral propositions. (Intuitionists take note.) This conclusion, unsurprisingly, is taken to bolster the case for consequentialist ethics, or at least to shield it from a familiar source of objections, namely its apparent incongruence with commonsense intuition.

The assimilation of moral intuitions to heuristics is provocative and highly interesting, and I expect it will be the subject of substantial debate. But I'm not yet convinced. I take heuristics crudely to follow the schema "instead of doing x directly, do y": instead of calculating probabilities directly, just assign them by ease of recall (the availability heuristic). For the many people who seem to hold no explicit moral theory, it is not clear how their intuitions really fit this formula in any straightforward way. For instance, suppose people have firm intuitions about fairness. If these are psychological heuristics akin to the availability heuristic, then their evidential status is indeed dubious: if it turns out that people have these intuitions because they conceive of morality in utilitarian terms but have made a habit of caring about fairness as a good rule of thumb for promoting utility, then the authors' worries are fully vindicated. But if they have the intuitions because, say, a concern for fairness was adaptive and hence selected for, then fairness intuitions may only be "heuristics" in some very broad sense, and not obviously cause for concern. Some of those with Humean leanings, for instance, will be untroubled by such a finding, and may indeed find considerable joy in it: if you think morality is a product of contingent human sensibilities, and the correct moral principles the ones that best fit those sensibilities -- while perhaps also holding up under reflection -- then learning that fairness intuitions are heuristics in an evolutionary sense may actually be evidence that fairness norms are true, and intuitions about fairness good evidence for moral truths. In short, intuitions might perhaps be heuristics from some point of view, such as an evolutionary standpoint, without playing the role of heuristics in our psychological economies.

Be that as it may, a lot hangs on whether such intuitions will indeed hold up under reflection, particularly reflection informed by the sort of empirical research that Sinnott-Armstrong et al., as well as other contributors to this book, bring to our attention. Some commonsense moral intuitions seem driven by factors, like physical proximity or contact, that are liable to strike any reflective person as morally irrelevant. Whether intuitions are heuristics in any troubling sense or not, the doubts raised here about the reliability of intuition are, for my money, among the most pressing challenges raised by the moral psychology literature. Deontologists need a convincing explanation of what distinguishes reliable from unreliable intuitions, or better yet a theoretical rationale for deontological constraints that is comparably compelling to the consequentialist's root notion that morality is fundamentally about making the world a better place. The sparse Kantian framework is one candidate, but not a likely choice for those drawn to this literature. I suspect that a more satisfying response will come from an empirically informed consideration of the roles that various moral values play in human life. Humans need to look out for each other's welfare, for instance, but we may also have other fundamental concerns -- relating, say, to respect or autonomy -- that make our non-consequentialist sensibilities seem well worth endorsing.

A different challenge arises from situationist research in psychology, a worry by now well-known among philosophers thanks to pioneering work by Doris, Harman and others: our extreme sensitivity to situational influences, as manifested in the Milgram studies and many others, raises doubts about the role of character in driving moral behavior. This is the subject of a chapter by Maria Merritt, Doris and Harman. Since the original arguments commonly drew objections that they focused too narrowly on behavior, ignoring the centrality of practical reason for virtue, the authors extend the argument to show how situational influences raise doubts about the extent to which agents exhibit rational control; to a great extent, these effects occur independently of, and frequently contrary to, individuals' reflective commitments. Now the situationist critique of standard models of character broadens, becoming a dual process or antirationalist critique. While individuals have some means at their disposal to compensate for the insubordination of their unruly subrational selves, and rational self-control has a place, the research suggests that it may often be more fruitful to focus on promoting the social contexts that help sustain virtuous conduct.

This strikes me as an extremely important and fruitful point for future research: a narrow focus on what individuals can do for themselves to lead better lives has been pervasive in moral and political thought of recent centuries, with social institutions deemed relevant, for normal adults, only as means to giving individuals the freedom to live as they wish. This tendency is especially prominent in popular thinking about happiness and well-being, including much of the new field of positive psychology. People get indignant when governments propose to so much as measure well-being or happiness, thinking it an intrusion on a thoroughly private matter. But the considerations canvassed by Merritt et al. -- along, I would add, with the literature on cognitive biases and other mistakes -- suggest that this sort of individualism is badly mistaken: living well, both morally and prudentially, may depend strongly on living in social (and physical) contexts that shape us, and our choices, in desirable ways. Improving our lives is inherently a collective, as well as an individual, venture. This sort of "contextualist" outlook on human agency may prove to be one of the more important upshots of the new moral psychology. (Full disclosure: I am not a neutral party in this debate; e.g., Haybron 2008, ch. 12.)

A prime example of limited rational control, though not necessarily involving situational influence, is implicit racism: it appears that most individuals whose explicit beliefs disavow racism harbor implicit, unconscious biases against other racial groups, African-Americans being the most studied example. There is excellent reason to believe, moreover, that many of us sometimes treat members of other races badly because of such biases, however enlightened our convictions. This feature of racial cognition, among other things, raises troubling questions for proposals to eliminate or defang racial categorization, as Daniel Kelly, Machery and Mallon argue in their chapter. Surprisingly, the literature on racial categorization and what to do about it has had little to say about the psychology of racial cognition. This is a significant omission, since what we can reasonably hope to accomplish in addressing racism, and the best means for doing it, will presumably depend on the constraints imposed by human psychology.

Joshua Knobe and Doris contribute a chapter on responsibility, focusing on how people conceive of responsibility. (Not, as one might expect, what to make of human agency, free will and responsibility given findings on automaticity and the like. Such questions don't get much direct attention in this volume, though various chapters shed light on them indirectly.) Philosophers have tended to assume that the right account of responsibility is invariantist, which is to say a unified account that applies the same conditions to all cases of praise and blame. Drawing on some of the best-known findings in experimental philosophy, Knobe and Doris put considerable pressure on invariantism: given the complex, multiply asymmetrical nature of ordinary intuitions about praise and blame, it is not obvious that any single set of conditions can apply to all assessments of responsibility. They suggest that some sort of variantist account may be in the offing. The dialectic on responsibility may just have gotten a lot more complicated.

Discussions of moral psychology have had an unfortunate tendency to ignore the subject of well-being, despite the vast empirical literature that has recently arisen in this area. That, thankfully, is not the case here, and the remaining chapter, by Valerie Tiberius and Alexandra Plakias, accordingly focuses on well-being. After surveying different schools of thought in empirical research on well-being, the authors argue that none is backed by an adequate conception of well-being and go on to defend an important new account of their own, the "Values-Based Life Satisfaction" theory (VBLS). At the core of this view is a cognitive/affective mental state, being satisfied with one's life as a whole. But life satisfaction alone isn't sufficient for well-being, since people might be satisfied with their lives when deluded, when their judgments aren't very well grounded in their values, or when those values are ill-suited to their affective natures (e.g., they are internally divided, pursuing goals that make them miserable).

The view resembles Sumner's "authentic happiness" theory, and can roughly be seen as a further development of that approach, as well as Tiberius' own earlier work (Tiberius 2008). VBLS makes well-being subjective while allowing ample room for individuals to make mistakes. The authors note that this theory offers a normative rationale for using subjective well-being measures, notably life satisfaction, in well-being research. But they also observe that it points to areas of slippage between empirical measures and well-being, for instance when self-reports are unduly driven by trivial contextual features like the weather. Interestingly, this means that well-being, as conceived on their life satisfaction theory, may sometimes best be measured using instruments that assess variables other than just life satisfaction. For instance, people may strongly value pleasure, yet fail to register this value adequately in their responses to life satisfaction surveys. Direct measures of hedonic balance may still have a place, then, even if a kind of life satisfaction is what ultimately matters.

But is it? Tiberius and Plakias note that life satisfaction judgments can be somewhat arbitrary, but argue that the idealized attitudes privileged by VBLS would not have this problem. I am less optimistic: life satisfaction attitudes seem to me inherently and profoundly arbitrary, in ways that no idealization can remedy (see Haybron 2008). In a nutshell, such attitudes involve a global summation of all the values in one's life which, given the many incommensurables in most lives, will be something of a coin toss. And then they require us to set a "good enough" point: how good a life is good enough? I suspect few of us do, and fewer still should, have any firm notion of how to answer such a question: we could, by our own lights, reasonably be either satisfied or dissatisfied with a vast range of life conditions. Life would have to be pretty awful, for instance, for one not to be able to find good reason to be satisfied with it (it beats being dead!). Such arbitrariness may be a major reason for the notorious instability of life satisfaction reports: when it's a coin toss whether to be satisfied or dissatisfied, then why not let some trivial contextual cue, such as the presence of a disabled person, decide the matter?

But suppose life satisfaction has these problems, so that even idealized life satisfaction cannot form any major part of well-being: the core insight driving the VBLS still seems viable and perhaps the basis for the right theory of well-being. For even if people cannot form nonarbitrary all-in judgments about their lives, they may still have a pretty good sense of what they care about, as well as how things are going relative to each of those values. The root ideal, that well-being involves subjective success, could well be the right way to approach well-being, and we may not need global life satisfaction attitudes to do it. It also, significantly, emphasizes our status as rational, reflective beings, in contrast to the more sentimentalist tenor of recent work that stresses hedonic or emotional well-being (Feldman 2004; Haybron 2008). Even those unconvinced by VBLS as a theory of well-being, moreover, may still find it a plausible vehicle for thinking about well-being in certain contexts, such as policy: arguably, policymakers should defer to individuals' own conceptions of well-being when making decisions on their behalf, in which case something like VBLS might be employed in policy contexts even if the correct theory of well-being were some other view. With caveats duly noted, VBLS seems to me an important avenue for further research on well-being, and should particularly interest social scientists and policymakers.

The inclusion of a well-being chapter is a pleasant surprise; the omission of a different chapter, on political psychology, is no surprise at all. While political scientists and others have done interesting work in this area, philosophical moral psychologists seem largely to have ignored it, or at least such work has received far less attention from philosophers. (There are exceptions, e.g., Freiman and Nichols 2011; Miller 1999 engages extensively with empirical research. And several of this book's chapters touch on political issues.) It can only be a matter of time before this situation changes, as political philosophers should be especially keen to attend to the mind sciences -- at least, if they wish to recommend social and political institutions, procedures and policies befitting the species that will have to live with them. Kelly et al.'s chapter would actually fit well in a volume on political psychology, as it shows how policies and social efforts to deal with racism need to take account of the realities of racial cognition. Other inquiries might consider the conditions under which citizens' reported opinions most reliably reflect their values, as in studies of deliberative polling; how people's tendencies to make systematic mistakes bear on policy; and many other questions. (Again, work is already being done in these areas -- see, e.g., Trout 2009. But philosophical engagement has been much more sporadic here than in other areas of moral psychology.)

Looking to core issues in political philosophy, such as social justice, it may not be irrelevant whether the political moralities embedded among the world's populations bear any resemblance to, say, the theories of justice propounded by philosophers. Imagine that most philosophers were to converge on something like a Rawlsian view of social justice, on which poverty is viewed as injustice -- a failure of many to get a fair share of the benefits of social cooperation. Now imagine that most of the citizens of a given polity agreed with the philosophers that the state has pressing moral obligations to help the disadvantaged, indeed would be reprehensible not to, but did so on utterly different moral grounds: they firmly reject a Rawlsian conception of justice, deeming it positively offensive to their notions of property rights and personal responsibility and dignity. Most philosophers might disagree, but the folk morality is not obviously incoherent or unsustainable, and indeed a few philosophers defend it. In short, the folk political morality is a minority view among philosophers, but nonetheless falls within the broad class of respectable, intelligent political moralities.

Would it be acceptable for policymakers to ignore such a schism? From a moral perspective such a manner of proceeding would not obviously be easy to defend: in governing on behalf of your constituents, you choose to disregard their core moral convictions and impose moral principles of your own; for while their principles are not completely unreasonable, yours -- you think -- are better. And so you adopt, say, redistributive policies that most citizens, even the beneficiaries, consider immoral; from their perspective, you treat taxpayers' earnings as manna from heaven. (Not because you tax some to help others, but because your rationale for doing so is thought to discount the property rights of those taxed.) To impose alien moral principles on the public in a case like this is at least not obviously okay; the question at least seems worth examining. And from a sheer practical standpoint, how stable and effective will your government be, if run on principles deemed immoral by most voters? How, in fact, will your politicians ever win an election? Political philosophers should probably take into account the feasibility of their proposals when making recommendations. In short, political philosophy needs psychology, right at its core: good politics is politics for human beings, as empirical investigation reveals them to be. Note that the fictional scenario just sketched may not be entirely fanciful; I would wager a few beers that it is not many degrees off from the actual situation that confronts (much of) political philosophy and the current Democratic party in the United States, the latter being the butt of many jokes about its remarkable talent for alienating the very people it aims to help.

Let me close by returning to the dismissal of psychological research in ethics with which we began. Offhand, at least one major school of thought in moral philosophy might seem immune to empirical challenge and so free to ignore such books as this: Kantian ethics. If they can really make good on the promise of a purely formal account of moral norms, Kantians might seem well insulated from empirical challenge. Set aside a couple hundred years of apparent failure to pull this off, leaving some of us wondering whether the view doesn't ultimately collapse into some form of Platonism or Humeanism (depending on where the substantive norms of rationality that really keep the machinery running are being smuggled from). Even if some plausible notion of practical rationality has this formal structure, Kantians may still want to attend to empirical moral psychology for a variety of reasons. One question, for instance, is why Kantian intuitions seem largely to emanate from emotional mechanisms in the brain, as discussed in the chapter by Cushman et al. This is an interesting coincidence -- not incompatible with Kantian rationalism, but perhaps a little awkward. (Perhaps evolution might be expected to favor Kantian principles?) Setting that matter to the side, we might still ask whether humans really are rational beings in the requisite sense; if ought implies can, and humans cannot actually do morality in the Kantian manner, then perhaps the theory just doesn't apply to us. Or, alternatively, its standards are too high. Another possibility is that Kantian morality turns out to be at odds with human nature, so that its ideal of living ends up being unattractive and alienating. This might be a tragic result for humanity, or it might be a signal to look elsewhere for our ethics. Either way, it sounds pretty interesting, as is this fascinating book.

References

Feldman, F. (2004). Pleasure and the Good Life. New York, Oxford University Press.

Freiman, C. and S. Nichols (2011). "Is Desert in the Details?" Philosophy and Phenomenological Research 82(1): 121-133.

Haybron, D. M. (2008). The Pursuit of Unhappiness: The Elusive Psychology of Well-Being. New York, Oxford University Press.

Miller, D. (1999). Principles of Social Justice. Cambridge, Mass., Harvard University Press.

Tiberius, V. (2008). The Reflective Life. New York, Oxford University Press.

Trout, J. (2009). The Empathy Gap: Building Bridges to the Good Life and the Good Society. New York, Viking.