What's Wrong with Morality?: A Social-Psychological Perspective


C. Daniel Batson, What's Wrong with Morality?: A Social-Psychological Perspective, Oxford University Press, 2016, 265pp., $39.95 (pbk), ISBN 9780199355570.

Reviewed by Vanessa Carbonell, University of Cincinnati

2016.04.10


What’s wrong with morality? Ask a moral philosopher this question and I suspect you’ll get one or more of these answers: (1) nothing; (2) it’s alienating, suffocating, cold, sexist, bourgeois, or a tool of oppression and social control; (3) it rests on questionable foundations, or isn’t properly objective, or doesn’t have the authority it claims; (4) it demands too much of us, or not enough. Ask a social psychologist what’s wrong with morality, and you get an interestingly different answer: morality doesn’t get what it demands. It tells us the right thing to do, but we do the wrong thing anyway. This book is an attempt to show that morality is a failure in this respect, to ascertain how and why it fails, and to suggest some possible remedies. Scholars and students of moral psychology will find much of interest in this rich, sweeping, and engaging discussion of our “moral maladies”.

C. Daniel Batson is a social psychologist known to many philosophers for his important work on empathy and altruism. You have him to thank for the fact that you can proclaim in your intro class that “empirical studies show” that psychological egoism is false. Earlier, he was behind the famous Princeton Theological Seminary study, a key experiment in the genre of bystanders behaving badly, the one where future ministers ignore an ailing man slumped in a doorway because they are in a hurry to give a lecture on the parable of the Good Samaritan (Darley and Batson 1973). It should be clear from the set-up of that study that Batson has an eye for the absurd and ironic. That eye serves him well in this book, where he takes a detached, unsentimental look at human behavior and comes to the “rather cynical” (7) conclusion that our moral standards are “vulnerable to rationalization, self-deception, and moral hypocrisy” (228).

It may seem strange that an expert on altruism — that most wholesome of human phenomena — has written a book with such a pessimistic outlook. If even Batson thinks that integrity is rare, hypocrisy is common, and the whole thing is basically a charade, surely we’re doomed? But here are two consolations. First, Batson defines “moral motivation” in such a way that altruistic motivation doesn’t count as moral. (The jacket blurb says that although he has written two books on altruism, this is his first book “on morality”.) So rest assured that the book’s “cynical” conclusions are compatible with genuine warm-fuzzy helping behaviors. Second, Batson’s exposé is more or less alarming depending on how we interpret the many studies he cites. Are they as troubling as advertised? Do they translate to the real moral environment? I am skeptical. I’m not saying we’re not doomed. I am just not sure we understand the precise nature of the doom yet.

What does moral failure look like? The book begins with a breezy but awfully depressing laundry list of “symptoms”: the Holocaust; the My Lai massacre; some KKK malfeasance; corporate negligence and corruption; pedophile priests; the nefarious nonsense that we all read in local papers every day; and garden-variety lying, cheating, and stealing that doesn’t make the news. The examples are not analyzed in much detail, presumably because the phenomenon is broad and undeniable. The focus is on sub-psychopathic wrongdoing ranging from small slights to horrific atrocities. Group dynamics and social and political forces are relevant to many of these symptoms, but we tend to think individual agents still bear moral responsibility for their role in them. Politically charged and morally contested examples (drone warfare? police brutality? late-term abortion?) are not discussed. (For a different take on what “moral failure” is, see Tessman 2015.)

In Part One, after giving a general overview of the book (Ch. 1), Batson surveys the “standard scientific diagnoses” of moral failure, namely various species of personal deficiency and situational pressure (Chapters 2 and 3). These chapters include insightful exposition and analysis of the work of Freud, Hugh Hartshorne and M. A. May, Albert Bandura, Martin Hoffman, Leon Festinger, Piaget, Lawrence Kohlberg, Jonathan Haidt, Antonio Damasio, Jesse Prinz, Stanley Milgram, Philip Zimbardo, Ervin Staub, and of course Batson himself, among others. Even readers who cannot stomach another recounting of Milgram’s or Zimbardo’s infamous experiments will find other fascinating nuggets here. But Batson ultimately concludes that personal deficiency and situational pressure are not — alone or together — sufficient to explain moral failure.

The guiding metaphor for the book is one of disease. If wrongdoing and hypocrisy are symptoms, Batson seeks a diagnosis and eventually a cure. The metaphor gets tricky if you think too hard about it. Perhaps, like cancer, we will discover that moral failure is not one disease but several, with no unifying explanation, causal or otherwise. What’s more, it’s not always clear who is sick — the individual, the group, the entire moral community, or “morality” itself as the title suggests? Moral failure seems to be at once an individual malady (sometimes congenital, usually acquired), an environmental hazard, an infectious epidemic, and a way of describing an internal or external flaw in a system (like “leakage” in an engine or “fraud” in Medicare).

Whether or not the disease metaphor holds up, it’s a useful way of framing the sort of scientific inquiry Batson wants to make: theorize about possible explanations for what ails us morally, shape them into specific hypotheses about human behavior and motivation, and then test those hypotheses in carefully controlled experiments. His theoretical framework is summarized as “value → emotion → motivation → behavior” (29). For example, I care about my own welfare (that’s a “value” or “valued state”), and as such I’m liable to experience emotions like anticipatory pride and shame, which help me monitor whether I’m succeeding in furthering my welfare, and help to arouse goal-directed egoistic motives (motivation) which, if actualized, result in certain behaviors, such as hoarding my resources.

In Batson’s taxonomy, prosocial behaviors result from four types of motivation: egoism (promotion of one’s own welfare); altruism (promotion of others’ welfare); collectivism (promotion of the welfare of a specific group); and “principlism” (promotion of a moral principle or ideal) (29). Later in the book he uses “moral integrity” instead of “principlism” and says he means them as synonyms. (“Principlism” here does not mean what it means in bioethics, though it is a close cousin.)

Batson is very careful about defining terms. For a reader coming from philosophy, it is disorienting to see dictionary definitions; multiple sentences in the book begin with “Webster’s . . . ”. Unexpectedly, many of these definitions are just fine — perhaps philosophers are too uptight. To his credit, Batson acknowledges that Webster’s definition of “moral” is only a start: “1. of or concerned with principles of right and wrong conduct. 2. being in accordance with such principles”. He follows up with “nine points of clarification and elaboration” (19). But he does not remark on the ambiguity between Webster’s two definitions: “moral” sometimes means about morality (moral vs. nonmoral) and sometimes means aligned with morality (moral vs. immoral), so things can fail to be moral in two very different ways. This ambiguity matters most in Ch. 6, where Batson argues that so-called “moral” emotions are not genuinely moral.

Like Webster, Batson gives principles a central role in morality. Only the fourth member of the motivation taxonomy, principlism (a.k.a. moral integrity), counts as genuinely moral motivation. Altruistic and collectivistic motivations are not moral. If you help a frail elderly man with his groceries because you care about him and his welfare (or about elderly men and their welfare more generally), your motivation is only “instrumentally” moral. If, rather, you help him because you aim to promote the “greatest happiness principle”, or Kant’s formula of humanity, or perhaps some lesser-known principle like “do exactly seven good things on Tuesdays”, then you have displayed moral integrity.

“Principle” is here meant loosely, to include ideals, norms, standards, and virtues. Batson says he does not want to take sides on normative questions, or to endorse or evaluate particular principles. But he certainly takes a side in the debate over the role of principles (broadly construed) in morality. If you’re not acting on an impartial principle, you’re not acting with integrity, and you’re vulnerable to “seduction by special interest” (an interesting choice of words) (35). He says his view is consistent with those of Kant, Mill, Hume, and Rawls, and potentially in tension with folks like Lawrence Blum, Nel Noddings, Carol Gilligan, Michael Stocker, and other care ethicists. This is presented as though it were not much of a problem.

I’m not sure if virtue ethicists will be relieved or offended to hear that virtues get counted as principles in this framework. It’s an ecumenical but complicating choice. Is compassion a virtue and therefore a kind of “principle”? If so, then it seems that whether a compassionate action counts as genuinely moral will depend on extremely fine-grained differences in how motivations are individuated or described. If you help the elderly man because you (compassionately) feel for his plight and care for him, that’s altruism (sorry, not moral!), but if you reluctantly help out of allegiance to the “principle of compassion”, that’s principlism?

All of this raises thorny philosophical questions, including the proper relationship between de dicto and de re moral motivation, the distinction between motivating and justifying reasons, and what to do when motivations overdetermine behavior. And if it’s not clear how to reconcile Batson’s framework with virtue ethics and care ethics, it might seem even less clear how moral particularism would fare. If there are no true general moral principles, is moral failure universal? Is moral success possible?

Perhaps to have these concerns is to overthink things, and to miss Batson’s point. His project is descriptive through and through. It doesn’t matter to him which normative theory, if any, is true, or how principles feature in the true theory. Look around and you’ll see that humans do have moral principles, standards, ideals, etc., and do claim that these should guide our actions. But then we keep screwing it up. We value fairness, but we cheat. Batson just wants to know why that is. Philosophers of all theoretical orientations should want to know, too.

Part Two of the book is where the positive argument picks up steam. Chapter 4 introduces the idea of moral hypocrisy: “motivation to appear moral while, if possible, avoiding the cost of actually being moral” (94). In one study, 90% of subjects who decided for themselves how to divide up tasks gave themselves the better task, but 90% of subjects who “flipped a coin” in private also favored themselves (101). Lucky coin! Many other related studies are discussed. Chapter 5 asks why moral hypocrisy is so common and finds some answers in research on moral development.

In Chapter 6, Batson argues that part of the explanation is that so-called “moral emotions” aren’t really moral. If what passes for moral anger is really just personal anger, we won’t be motivated to prevent the violation of interpersonal moral principles when our egoistic interests are at stake, and we’ll settle for the mere appearance of moral integrity if we can get away with it. Again, “moral” here is defined with a principlist and impartialist bias built-in. Anger aroused on behalf of a cared-for other is unlikely to qualify as moral anger (166). Even anger at harm done is not as “moral” as anger at a principle violated, and Batson finds little evidence of the latter. Collectively, Chapters 2-6 are a must-read for moral philosophers. They reveal a vast landscape of human mischief. And for aspiring moral hypocrites, they serve as a self-help guide to ever more sophisticated forms of rationalization and self-deception.

The central studies involve scenarios that are sterile and artificial by design. Participants — mostly undergraduates — divvy up resources, such as raffle tickets or undesirable tasks. The experiments come across as carefully constructed and responsibly interpreted, and Batson often shows epistemic humility about what can be concluded from the data. He frequently points to what is not known, pleading for additional experiments. I am not qualified to assess the methodology or statistical validity of this research. Data are presented only in summary. Readers who have been rendered skeptical by the recent hubbub about replication, publication bias, and statistical manipulation in social science will have to consult the original literature.

Even granting that the science is pristine, I nevertheless have qualms about drawing broad, skeptical conclusions about moral motivation, moral behavior, and “morality” more generally from scenarios that, to my nonscientist eye, represent at best a pale sliver of what people are faced with as moral agents in the real world. I understand why the experiments are sterile: you need to isolate the phenomenon and control for confounding variables. But do we risk controlling away the very thing we should be interested in? We live in an era in which human interaction is often portrayed as a series of consumer transactions and complex social phenomena are reduced to whatever can be quantitatively measured. Many of the experiments discussed in this book are either described as games or are likely to be approached by subjects as though they were games. We are socialized to believe that the objective in playing a game is to win, that the governing norm is self-interest constrained only by the written rules, and that games are not serious.

It is not surprising that in game-like scenarios, college kids are selfish (isn’t that how you’re supposed to behave in a game?) and inconsistent or irrational (after all, games aren’t serious). I hesitate to draw conclusions about moral integrity from how folks divvy up raffle tickets amongst strangers in a lab. That’s not to deny that people act selfishly or irrationally in real-life scenarios as well; it’s just to raise doubt about whether the artificial scenarios shed explanatory light on the real-life ones. It would be great to see more research on moral behavior “in the wild”, as well as attempts to correlate laboratory and real-world behavior across the same participants. Is Johnny as much of a jerk in real life as he was in the psychology lab? If you give him a month instead of a minute to decide how to divvy up his raffle tickets, does it make a difference? And is he still a jerk two decades later? To be clear, the answer to my qualms is more, not less empirical research on morality.

In Part Three, Batson completes the overall story by arguing that we use morality to engage in “moral combat” with each other. Moral standards and principles are not really about regulating one’s own conduct, but about keeping others in line. It starts in childhood with tattling. The evidence for this hypothesis in adults is speculative and indirect. For example, one study examined records of confrontations in which passersby excoriated — or even attacked — drivers who parked in disabled spots without authorization. It’s closer to the sort of ecological study I was hoping for, but it’s hard to know what to make of it. The real smoking gun would be if the moral tattlers in the parking study were later found illegally parking in disabled spots themselves.

In the final chapter, Batson says a bit about how we might treat our moral malady. The prescriptions include practicing moral behavior, emulating moral heroes, and broadening your moral outlook via travel and fiction. These pills should be easy to swallow. But, in keeping with the disease metaphor, I want to suggest that we not jump hastily to the conclusion that our morality (as a system) is sick or dysfunctional, or that our failure rate is in the pathological range. (The first rule of medicine should be, “If it ain’t broke . . . ”.) Granted, the world is filled with horrific atrocities. People are vulnerable, imperfect creatures prone to indoctrination, cruelty, posturing, and dishonesty. Socially and politically, things are a mess. But is something wrong with morality? In the studies Batson cites, only 10-32% of participants display moral integrity as he defines it (they distribute resources fairly even when given wiggle room to be selfish hypocrites instead) (116). This is supposed to be both surprising (lower than expected) and troubling (lower than it should be). Certainly we would not want 90% leakage in an engine or 90% fraud in a government program. But it’s a philosophical question what the acceptable failure rate of morality is, how to measure it, and whether it even makes sense to speak this way. For philosophers interested in beginning to chip away at these tough questions, this book is an excellent place to start.

REFERENCES

Darley, John M., and Batson, C. Daniel. (1973). “‘From Jerusalem to Jericho’: A study of situational and dispositional variables in helping behavior.” Journal of Personality and Social Psychology, 27(1), 100-108.

Tessman, Lisa. (2015). Moral Failure: On the Impossible Demands of Morality, Oxford University Press.