Consequentialism: New Directions, New Problems


Christian Seidel (ed.), Consequentialism: New Directions, New Problems, Oxford University Press, 2019, 268pp., $90.00 (hbk), ISBN 9780190270117.

Reviewed by David Cummiskey, Bates College


In this fine collection, Christian Seidel has brought together innovative new work on consequentialism, with a special focus on the theoretical strategy of "consequentializing" agent-centered (deontological) moral theories. It is an excellent resource for anyone seeking to better understand and evaluate the conceptual foundations of consequentialism. Seidel's introduction is a real strength of the book, providing a clear overview of the evolution of consequentialism, which he divides into three waves.

The first wave is the "conceptual emancipation" of consequentialism from utilitarianism, which is familiar but worth briefly summarizing. Classical utilitarianism defines the right action as that which maximizes the good, with happiness constituting the goodness of outcomes. Utilitarians, of course, debate the nature of happiness itself (whether it is reducible to pleasure or preference satisfaction, or requires a more objective conception of flourishing), but classical utilitarians were unified in arguing that maximizing happiness is the proper aim of actions, rules, and just institutions. The first wave of "conceptual emancipation" responds to the core objections to utilitarianism. As Seidel emphasizes, the "first wave" concept of consequentialism was introduced in order to abstract from the more specific claims of utilitarians in two main directions. First, consequentialism is simply a theory of the right abstracted from any particular theory of the good. Many people argue that, in addition to happiness, other things are also good; suggestions have included friendship and loyalty, beauty, achievement, and wisdom. For first wave consequentialists, whatever one believes to be good is that which ought to be maximized. The second direction of the first wave was to expand the focal point of evaluation (to use Shelly Kagan's phrase) beyond actions to include rules and practices, rights and duties, motives or virtues, or anything whatsoever that is a fitting focal point for promoting good outcomes (Kagan, Normative Ethics, Westview Press, 1998). An important part of this second dimension involved emphasizing the distinction between right-making characteristics and decision-making procedures. By emphasizing that utilitarianism is not itself an optimal decision-procedure, and thus that its use as a decision-procedure would not in fact maximize the good, various indirect and two-level versions of consequentialism emerged.
Seidel describes this as "consequentialism's conceptual emancipation from utilitarianism" (5).

The second wave of development involved an elaboration and exploration of the variety and limits of consequentialist theories. As consequentialists responded to old objections, the critics developed new objections, resulting in a dialectical interplay between critics and ever more creative alternatives. Leaving aside his detailed account, Seidel argues that "the upshot of the second wave was a growing awareness of a dialectical impasse between consequentialism and competing (common sense) theories." The strength of consequentialism was its "appealing conception of practical rationality" and the "compelling idea" that it can never be right to prefer a worse outcome to a better. But consequentialism rejected the "agent centered character of commonsense morality." Seidel concludes that "the problem was seen as one of reconciling agent-centeredness and practical rationality" (9-10, italics original).

This brings us to the third wave, consequentializing. Although one might think that consequentialists are raising a genuine normative problem about the justification or soundness of the directives of "commonsense morality," consequentializers argue that the conflict with commonsense morality is instead a theoretical problem about the nature of practical rationality and the abstract idea of better and worse outcomes. Their solution is to abandon the idea that the moral goodness or badness of outcomes (or states of affairs) is an impartial, impersonal, or agent-neutral evaluation of the world. Instead, they argue that if (we assume that) the value of different outcomes is itself agent-centered, or agent-relative, we can reconcile practical rationality and common sense deontological moral judgments. The move here is purely abstract and technical. Douglas Portmore, for example, explains it as follows:

take whatever considerations that the non-consequentialist theory holds to be relevant to determining the deontic statuses of actions and insist that those considerations are relevant to determining the proper ranking of outcomes (Seidel quoting Portmore, 10)

In considering the merits of consequentializing, let's recall the substantive issue at stake. If we are debating whether we should kill an innocent to save five other innocents from being killed, there seems to be a problem of justification for the constraint on killing. As Kagan has argued, it doesn't matter if we focus on a feature of the victims, the agents, or the relationship between agents and victims; other things equal, we have five factors to one (The Limits of Morality, Oxford University Press, 1989). Agent-centered commonsense morality thus seems to prohibit an action without any adequate justification for why I should countenance five killings by others rather than kill one person myself. Consequentializers seem to agree, for otherwise there is no call to consequentialize.

Of course, there is a rich (second wave) consequentialist literature explaining away and/or justifying the role of common sense constraints, rights, options, and prerogatives. Third wave consequentialists take a completely different approach. They accept common sense agent-centeredness and stipulate that it is better from each agent's perspective that they don't kill because the evaluation of outcomes is agent-centered, not agent-neutral. It is thus better from the agent's perspective that each agent doesn't kill, and there is no puzzle, paradox, or normative problem. I take it that this means that as an agent, I am supposed to think that it is a better outcome that five are killed instead of one, as long as I don't kill anyone, and this is so because the value of outcomes is centered on me. In addition, since I am also not supposed to kill now to prevent myself from killing later, we also need to build time-centeredness into the evaluation of outcomes or states of affairs.

Seidel's anthology is composed of new essays and thus does not include the classic essays introducing the consequentializing approach. If the reader is new to the concept of consequentializing, I recommend a quick read of Jamie Dreier's influential article "The Structure of Normative Theories" (The Monist, 1993), where Dreier argues for an agent-centered and time-centered account of outcomes. In a fascinating and clever essay in Seidel's anthology, Dreier adds that we can also solve various puzzles involving possible futures, and core problems of population ethics, by incorporating and consequentializing world-centered values. In a longer discussion, it would be worth exploring the metaethical differences between an agent-relative and time-centered normative theory and Dreier's proposed world-centered values. In particular, even if one rejects agent-centered consequentialism for the reasons sketched below, one might ask: do these objections also challenge Dreier's proposed world-centered values?

Chapters 3-5 primarily pick up on and further advance second wave consequentialism. In chapter 3, Martin Peterson also takes up population ethics and argues for what he calls "multidimensional consequentialism," which evaluates outcomes in terms of multiple variables. In a fashion that struck me as similar to William Frankena's work Ethics (1963/1973), Peterson focuses on the two distinct values of equality and well-being, and he uses this multidimensional structure to respond to both Derek Parfit's puzzle of the "repugnant conclusion" and Portmore's "commonsense consequentialism." Peterson next compares his multidimensional consequentialism with Gustaf Arrhenius' work and concludes by leaving it to the reader to decide if his solution resolves the seemingly insoluble problems faced by population ethics. In chapter 4, Portmore focuses on consequentialism and cooperation, and aims to set out a position that is distinct from Donald Regan's classic work on this topic (Utilitarianism and Cooperation, Oxford 1980). In chapter 5, Richard Yetter Chappell responds to Peter Railton's "sophisticated consequentialism" ("Alienation, Consequentialism, and the Demands of Morality," Philosophy and Public Affairs, 1984) by developing an account of "well-calibrated dispositions."

Chapters 10-12 also are not focused on consequentializing in particular. In chapter 10, Tim Henning focuses on consequentialist conceptions of reasons, or more specifically, on the assumption that our preference-based reasons are outcome-based. His argument is centered on "predictable preference accommodation." For example, consider the decision not to have a child, which can easily be combined with full awareness that, had one instead decided to have a child, one would have been pleased with and identified with the outcome of that choice. How can a consequentialist outcome-oriented approach account for the belief that my current preference nonetheless determines what I should do? After dismissing various attempts to accommodate the authority of one's choice within an outcome-based framework, Henning argues that the authority of one's own current preference is best justified from a Kantian perspective. His Kantian solution incorporates Darwall's account of practical reason to ground the distinct authority of the agent's own decision independently from any evaluation of outcomes. A future possible preference does not constrain decision-making now "because whether her future self will prefer it this way depends on what she decides now" (214). The preferences of her future self simply cannot have authority over her current self.

In chapter 11, Elinor Mason returns to Bernard Williams' classic integrity objection to utilitarianism. Williams argues that utilitarianism is committed to a robust conception of (negative) responsibility that undermines the relationship between an agent and her constitutive projects. Building on previous work by Frances Howard-Snyder, Alastair Norcross, and Frank Jackson, Mason argues that any normative theory, consequentialist or not, is subject to a responsibility constraint that limits responsibility for outcomes. "A normative theory must give an account of right action such that an agent could reasonably be deemed responsible for acting rightly or wrongly" (221). She argues that this is a (meta-ethical) constraint on any normative theory, and it thus also constrains consequentialist normative theories. Roughly, the rightness or wrongness of actions tracks responsibility; responsibility tracks autonomy; and the threats or coercive offers of other agents undermine the subject's autonomy, and thus moral responsibility. Her account of responsibility builds on Harry Frankfurt's influential account of wholehearted identification, and she argues that responsibility conceptualized in this way undermines Williams' argument. Notice that Mason's reply shifts the debate from one of normative ethics to a debate over the metaphysics of responsibility.

Another major line of objection to consequentialism focuses on its forward-looking, primarily deterrence-based conceptions of punishment. On the Kantian version of this type of objection, punishing one person to deter another person treats the punished person as a mere means, and thus fails to respect that person as an end-in-itself. In chapter 12, Steven Sverdlik takes up this line of objection and argues first that there is no plausible version of the Kantian "means principle" that does not prioritize the requirement to treat persons as ends; and second, that interpreting the "ends principle" so as to exclude consequentialism proves too much in that it leads to the conclusion that "harming rational agents is always wrong." As Sverdlik argues, although this categorical prohibition on harming does undermine deterrence theories, it also "tells us that imposing severe punishments in order to give wrongdoers what they deserve is also wrong" (248). Sverdlik thus shifts his focus to Kant's conception of a "kingdom of ends" and contemporary contractarian interpretations. In particular, he focuses on Sharon Dolovich's application of John Rawls' maximin principle to the justification of legal punishment. Dolovich argues that from an impartial perspective that considers both the perspective of the victim of crime and that of the criminal, and not knowing whether we would be a victim of crime or a criminal, we would endorse a system of punishment that deters crime so as to minimize victimizations (250-251). Leaving the details aside, such a system of punishment aims to respect both the criminal and victims, but it also sets significant limits on the severity of punishments.
Sverdlik responds (i) that Dolovich's limits on deterrence only follow because she excludes knowledge of probabilities (here she follows Rawls), (ii) that this exclusion results in strikingly implausible results, and thus (iii) that the consequentialist account of punishment is to be preferred (254-255).

Readers especially interested in "third wave" consequentializing should focus on chapter 2 by Dreier and chapters 6-9. How does reformulating agent-relative deontology as a form of consequentialism in any way reconcile the two? Monika Betzler and Jörg Schroth argue, in chapter 6, that consequentializing simply relocates the original substantive normative dispute. They argue that the substantive dispute about agent-neutral and agent-relative normative principles is mirrored in the new dispute over whether outcomes should be judged using an agent-neutral or agent-centered approach (123, 129). They also argue that consequentializing itself does nothing to help resolve this question, and conclude, "it seems that the consequentializing project, albeit a theoretical possibility, is good for nothing" (133). Their chapter provides an excellent resource for anyone looking for an especially clear and accessible critique of the consequentializing project.

The next two chapters are also critical of consequentializing. In chapter 7, Jan Gertken provides a more focused account of consequentializing agent-centered constraints. In particular, he proposes that one might argue that "we never bring about other persons' actions" and thus violations of constraints by others are not "outcomes" of my choices (even when they are counterfactually dependent on my choice). The result is that "it is not an agent-relative theory of value, but rather an agent-relative theory of outcomes that provides the most promising option for accommodating deontic restrictions into a consequentialist outlook." He concludes, however, that he is "skeptical that at the end of the day, such an account will be as appealing as the . . . nonconsequentialist alternatives" (154). His argument is especially interesting because it highlights the dependence of these arguments on robust assumptions about responsibility, voluntariness, reasonable alternatives, and decisive reasons, and also on the assumption that outcomes involving killings are worse than outcomes involving mere deaths.

Similarly, in chapter 8, Dale Dorsey focuses on consequentializing options in particular, and argues that commonsense consequentialism relies on the moral rationalist view that "we have decisive practical reasons to conform to moral requirements" (165). Dorsey goes on to argue that "moral rationalism is probably false" (165-173). These three chapters build on the existing second and third wave literature, but they also explore a broad range of conceptual possibilities, the often unstated metaethical assumptions, and thus the point of consequentializing.

As Seidel suggests in his introduction, consequentializing is supposed to reconcile the agent-centered (and time-centered) aspects of commonsense morality with a teleological conception of practical rationality. In chapter 9, "New Consequentialism and the New Doing-Allowing Distinction," Paul Hurley shows that the consequentializing project is driven by a "teleological conception of reasons . . . upon which all reasons, both agent-relative and agent-neutral, are reasons to promote outcomes" (178-179). Hurley argues that non-consequentialist theorists must and should reject the teleological conception of reasons. Hurley's own non-consequentialism starts with a foundational focus on intention and action that builds on new interpretations of Elizabeth Anscombe's work. His main point in his chapter, however, is to show that the disputes about (second and third wave) consequentialism "draw upon claims and commitments that lead well beyond normative ethics and even metaethics into deep questions in the philosophy of mind and theory of action" (192).

In reading Hurley's conclusion, I thought that his explicit focus on the metaphysics underlying the disputes in normative ethics might be an apt alternative characterization of the new third wave in the consequentialism debate. All of the authors brought together by Seidel consistently appeal to broader considerations, including the nature of responsibility, conceptions of autonomy, the importance of character formation, epistemic limitations, and the metaphysics of action. Looking back, none of the essays in this collection engages in the common second wave methodology that relies on stylized counter-examples combined with ever more intricate formulations of principles. Instead, they all develop and explore the underlying metaphysical assumptions. I highly recommend Seidel's book to anyone interested in contemporary work on consequentialism.