This is the third of three volumes on moral psychology edited by Walter Sinnott-Armstrong and published by MIT Press in 2008. The first volume — with seven main papers by Owen Flanagan, Hagop Sarkissian, and David Wong; Leda Cosmides and John Tooby; Debra Lieberman; Geoffrey Miller; Peter Tse; Chandra Sripada; and Jesse Prinz — is focused on the evolution of morality, and has already been reviewed in Notre Dame Philosophical Reviews (http://ndpr.nd.edu/review.cfm?id=15605). The second volume
- with eight main papers by Gerd Gigerenzer; Walter Sinnott-Armstrong; Marc Hauser, Liane Young, and Fiery Cushman; Jonathan Haidt and Fredrik Bjorklund; Shaun Nichols; John Doris and Alexandra Plakias; Don Loeb; and Julia Driver - is focused on the cognitive science of morality. For my review of Volume 2 see http://cfweb-prod.nd.edu/philo_reviews/review.cfm?id=16785.
Volume 3 contains eight main papers, each of which is followed by two (or in one case, three) responses, and then a reply by the original author(s). Given the daunting task of having to review two of these sizable volumes, I have chosen to very briefly summarize each of the main papers and commentaries in order, while stopping to make a few comments about Joshua Greene’s paper in particular.
In “The Cognitive Neuroscience of Moral Emotions”, Jorge Moll, Ricardo de Oliveira-Souza, Roland Zahn, and Jordan Grafman develop a framework for better understanding the moral emotions, which typically include but are not limited to guilt, pity, embarrassment, shame, pride, awe, contempt, indignation, moral disgust, and gratitude. What makes these emotions count as moral ones is that they are related to the “laws of proper behavior and customs in daily life” (2). Further, they function to “help guide moral judgments by attaching value to whichever behavioral options are contemplated during the tackling of a moral dilemma” (5). Moll et al. adopt a representational approach to understanding the neural bases of moral emotions rather than a process approach. More precisely, on their view the elicitation of moral emotions is carried out by dynamic prefrontal cortex-temporolimbic network representations, which arise from the activation of one or more of the following six components: attachment, aggressiveness, social rank, outcome assessment, agency, and norm violation (13). They end their paper by briefly but systematically sketching how each moral emotion may depend on particular combinations of these six components.
In the first commentary, William Casebeer makes three main critical points: (a) Moll et al. need to acknowledge better the role of background moral theories, (b) they need to explain how basic reward-processing relates to the six components above, and (c) they need to combine a representational approach together with a process approach to the moral emotions (19). Further, Catherine Hynes nicely distinguishes four sets of concerns: (i) concerns about Moll et al.'s criterion for what makes an emotion moral, (ii) concerns about the role of the basic emotions and the lack of discussion of inhibition, (iii) concerns about certain of the six components, and (iv) concerns about inadequate attention devoted to the role of propositional content.
It is probably fair to say that Joshua Greene’s paper, “The Secret Joke of Kant’s Soul”, is likely to be the most controversial and provocative of all the papers in the collection. Roughly he argues that deontological moral judgments tend to be caused by emotional responses and are post hoc rationalized, whereas consequentialist moral judgments tend to be caused by more cognitive processes and involve genuine moral reasoning (36). Consequentialist and deontological judgments are defined functionally in terms of characteristic moral conclusions, such as “Better to save more lives” in the first case and “It’s wrong despite the benefits” in the second (39). On Greene’s view, each of these ways of thinking stems from distinct psychological patterns in the same person’s brain, patterns that have a long evolutionary history (37). One pattern is cognitive, and is connected to the dorsolateral surfaces of the prefrontal cortex and parietal lobes. The other is emotional and is connected to other parts of the brain like the amygdala and the medial surfaces of the frontal and parietal lobes (40-41).
Schematically, some might think that consequentialist and deontological judgments are both primarily cognitive, others that they are both primarily emotional (as Haidt and Bjorklund seem to suggest in volume 2), and still others that consequentialism is more emotional and deontology more cognitive. Greene opts for the fourth option, and in particular advances the hypothesis that "what deontological moral philosophy really is, what it is essentially, is an attempt to produce rational justifications for emotionally driven moral judgments, and not an attempt to reach moral conclusions on the basis of moral reasoning" (39, emphasis his). He summarizes a range of different empirical studies to support his hypothesis, which are only very hastily mentioned here:
(a) Trolley cases. To explain the difference in typical reactions to the standard trolley case and the footbridge variant, Greene surmises that the thought of having to push someone to his death is more emotionally salient than, for instance, merely having to flip a switch in the standard case. Further, increased neural activity in footbridge cases was found in emotional response regions of the brain, whereas in trolley cases it was found in the more cognitive regions (43).
(b) Singer cases. Similar results to the above are found in cases involving a nearby drowning child versus cases involving impersonal donations (47).
(c) Victim cases. Similar results to the above are found where the contrast is between identifiable victims such as Baby Jessica who is trapped in a well versus indeterminate, merely statistical victims (48).
(d) Punishment cases. Deontological, retributivist judgments about punishment tend to be emotionally driven and indeed are “proportional to the extent that transgressions make [people] angry” (51). When discussing punishment in the abstract, however, people often give consequentialist arguments.
(e) Harmless action cases. Cases involving harmless actions, such as certain instances of breaking a promise, are often condemned as a result of emotional responses, whereas in more reflective moments people are less willing to forbid them (55).
Deontological moral judgments are thus taken to be causal offshoots of our moral emotions, whereas consequentialist judgments are more inherently cognitive (63-64).
The final and most interesting (and controversial) part of Greene’s paper concerns the normative implications he draws from these results. Unfortunately the discussion is condensed and several arguments get run together here. Thankfully Mark Timmons, in his very careful and thorough commentary, nicely distinguishes and formulates four of them, and does so in such a way that Greene in his response seems to accept happily as charitable. Here I offer only the quickest of summaries of each of them (95-102):
The Misunderstanding Argument: Deontologists have misunderstood what turns out to be the real essence of deontology, which is a pattern of intuitive, emotional responses to cases, and not the application of rules or other rational methods.
The Coincidence Argument: Given that deontological moral judgments seem to be emotional gut reactions for which there is a good evolutionary explanation, the “rationalist” deontologist must explain how there could be such a coincidence between these emotional reactions and the rationalist’s posited objective moral truth. As Greene says, “it is unlikely that inclinations that evolved as evolutionary by-products correspond to some independent, rationally discoverable moral truth” (72).
The No Normative Explanation Argument: Attempts to characterize the difference between deontological and consequentialist responses in, say, the trolley cases have so far failed.
The Sentimentalist Argument: Deontology is committed to the view that moral judgments are (or are expressions of) cognitive states like beliefs, but in light of the empirical data, a non-cognitivist, sentimentalist story is the way to go.
Thus according to Greene these arguments shift the balance of normative plausibility away from deontology and towards consequentialism (76).
In the first commentary, John Mikhail takes Greene to task for neglecting computational theory and so for not providing an explanation of how the mind computes representations of the deontic status of actions (81). Furthermore, he claims that Greene’s hypothesis about trolley cases involving the personal-impersonal distinction is incomplete and does not generate the correct results in some variants of them (81, see also the critical discussion of Greene on trolley cases in Hauser, Young, and Cushman’s paper in volume two, 134). As an alternative, Mikhail briefly sketches his own proposal, which appeals to a computational or moral grammar hypothesis about the brain.
As noted, Mark Timmons devotes his commentary to helpfully laying out each of the four arguments above. He then suggests what are in my view very forceful responses that might be made to each of them by deontologists. Let us look in a little more detail at the exchange between Timmons and Greene concerning the coincidence argument. Timmons notes (rightly by my lights) that there is no entailment relation between adopting a deontological approach to normative theory and objectivism about moral truth. For instance, Scanlon’s constructivism serves as a nice example of a non-realist foundation to a deontological ethic (98). Further, on constructivist accounts, it would not be much of a coincidence if the moral truth reflects to some degree our emotional intuitions, since those very intuitions would be suitably refined by the requisite process of rational reflection.
Greene thinks that deontologists who adopt this approach would fall prey to a dilemma. Either the emotionally based deontic intuitions that go into the reflective process would also come out of it, in which case we have the result of “garbage in, garbage out”, or they would not come out of the reflection process, in which case the resulting moral claims are not necessarily deontological (116).
Even though I am no constructivist, it seems to me that much more needs to be said here. Take the first horn. If the methodology and principles used in carrying out the reflective process are themselves rationally supported (and perhaps epistemically acquired via reflective uses of our "cognitive" brain centers), then, other things being equal and when properly utilized, they should bestow justificatory authority on any deontic intuition that they legitimize. To take simple examples, the Kantian CI-procedure, the Rawlsian veil of ignorance, or the Scanlonian reasonable rejection criterion might allow us to systematically refine our initial deontic intuitions so that the end product is no longer "garbage out" but rather specific moral commitments that now have substantive justificatory backing. This leads to the second horn of the dilemma. While Greene is certainly correct in thinking the outputs of the constructivist procedure might not be the exact same initial deontic intuitions, and so might not necessarily be deontological in form themselves, he gives us no reason to think that they would not be either. Indeed, at least on certain interpretations of their projects, Kant, Rawls, and Scanlon have all applied their own constructivist procedures and generated from them recognizably deontological principles and moral evaluations of cases.
These remarks even open the door for a deontological moral realist to respond to Greene as well. For Greene offers us no reason, at least in this paper, to distrust the workings of the more cognitive centers of the brain. This means a moral realist might provide a story about how reason can be used to properly discern one or more general objective deontic moral principles, principles that in turn can be used as action guides and also as checks on our specific deontic intuitions. Those intuitions that are cleared by the rationally discovered moral principles would then be given substantive justificatory backing. Nothing about the empirical results Greene provides has any bearing on the metaphysical existence of such objective moral principles nor, as far as I can see, on the psychological possibility that our cognitive faculties can gain epistemic access to them. Furthermore, note that a similar strategy exists for deontological moral realists who are skeptical about one or more foundational moral principles, such as certain moral pluralists and particularists.1
Kent Kiehl, in "Without Morals: The Cognitive Neuroscience of Criminal Psychopaths", provides a very detailed and thorough review of important empirical findings in this area. Kiehl supports the Hare Psychopathy Checklist-Revised for diagnosing psychopathy, and reviews focal brain damage studies that implicate the orbital frontal cortex, the anterior insula, the anterior cingulate of the frontal lobe and the amygdala, and adjacent regions of the anterior temporal lobe (129). He also helpfully reviews abnormalities in psychopaths in three areas: language, attention and orienting processes, and affect and emotion. For instance, criminal psychopaths have been found to make more errors compared to non-psychopaths when classifying abstract words (131). Kiehl concludes by advancing the hypothesis that the "relevant functional neuroanatomy implicated in psychopathy is the paralimbic system" (146).
In the first commentary, Ricardo de Oliveira-Souza, Fátima Azevedo Ignácio, and Jorge Moll focus instead on psychopathic behavior in community settings involving subjects who have not been incarcerated. They present some previously unpublished data which they take to show that such antisocials differ in certain ways from criminal psychopaths. For instance, there was only a weak correlation between these psychopaths and violent crime, and instead recurrent minor infractions were much more common (156). Jana Borg instead focuses on Kiehl’s claim that “Psychopaths are not impaired in their ability to reason about what is right or wrong. They are only impaired in their ability to do or follow through with what they reason to be right or wrong” (quoted in Borg, 159). However, evidence from fMRI studies in her Dartmouth lab suggests that emotional impairments in psychopaths may also lead to impairments in at least certain types of moral reasoning.
“Internalism and the Evidence from Psychopaths and ‘Acquired Sociopaths’”, by Jeanette Kennett and Cordelia Fine, is a response paper to a 2003 paper by Adina Roskies. In that paper, Roskies proposes a new strategy for rejecting motivational internalism, which she formulates as the claim that:
RMI: Motivation is intrinsic to, or a necessary component of, moral belief or judgment (quoted in Kennett and Fine, 178-9).
Roskies argues that patients with ventromedial (VM) frontal lobe damage are such that they have unimpaired moral beliefs and judgments, but are not motivated at all by them. Thus she claims to have discovered a class of counterexamples to motivational internalism.
Kennett and Fine make a number of points in reply. First they worry about Roskies’ formulation of internalism, specifically about who actually holds such a strong formulation of the view in the literature (other than perhaps McDowell 1979). Instead they recommend in its place a formulation such as the following:
KMI: Other things being equal, if an agent makes the in situ judgment that she ought to φ in circumstances C — that is, if she judges that she ought to φ in circumstances C, believing herself to be in those circumstances — then she is motivated to φ (Smith 210, formulated on behalf of Kennett and Fine).
From there they note that Roskies’ project has two parts: show that VM patients make relevant moral judgments, and show that VM patients have no moral motivation. Against the first, Kennett and Fine claim that Roskies mainly relies on a single-case study of a patient EVR, and that the patient was engaged in only third-person moral reasoning. Furthermore, “moral reasoning has rarely been studied in VM patients” (183). Against the second, Kennett and Fine question whether EVR’s behavior is best explained by absent moral motivation as opposed to a general impairment in decision making, and raise several other concerns about Roskies’ various attempts to show impaired moral motivation in VM patients (184). They conclude by claiming that “the clinical literature provides no support for externalist claims. It has not delivered up a single clear-cut example of the amoralist” (189).
Appropriately enough, Adina Roskies wrote the first commentary and makes a number of points in her defense. One is that the proposed alternative formulation of motivational internalism is “too permissive” and also “exceedingly weak” (192, 194). She also cites additional studies on VM patients which purport to show that they make genuine moral judgments (196). In addition, it is a “simplistic and unrealistic picture of externalism” to expect that there must be a specific or isolated deficit in moral motivation for there to be a genuine counterexample to motivational internalism. Rather a deficit in moral motivation can be just one aspect of the decision making deficit found in VM patients such as EVR (197). In the second set of comments, Michael Smith offers his own familiar formulation of motivational internalism:
MMI: It is conceptually necessary that if an agent judges that she morally ought to φ in circumstances C, then either she is motivated to φ in C or she is practically irrational (211).
He then raises two problems for Kennett and Fine’s proposal KMI as well as a defense of MMI against Roskies’ dismissal of it in her original 2003 paper.
Victoria McGeer’s paper, “Varieties of Moral Agency: Lessons from Autism (and Psychopathy)”, tries to advance our understanding of moral agency by considering a puzzle that arises when we compare psychopaths with autistic humans. Both groups are said to lack empathy, and yet psychopaths seem to have no moral concern whereas those with autism have strong moral convictions (228). More precisely,
work on psychopaths seems to support the view that the capacity for moral thought and action is strongly dependent on our affective natures and in particular the capacity to respond empathetically to others’ affective states, to experience a vicarious emotional response to how they affectively experience the world, and especially to feel some distress at their distress and suffering (231).
Similarly, autistic subjects seem to lack a basic empathetic connection as well. Nevertheless, we find that they have what appears to be a robust moral consciousness (233).
McGeer discusses at length Jeanette Kennett's proposed solution in a 2002 paper, namely that autism gives us reason to adopt a broadly Kantian approach to moral psychology focused on rationality and the concern to do the right thing (234). Similarly in the case of psychopaths, Kennett proposes that we understand their moral indifference in terms of their lack of a capacity to consider and act in accordance with available reasons (235). This can in turn teach us something important about what is central to moral agency — "reverence for reason is the core moral motive, the motive of duty" (quoted in McGeer, 237).
McGeer is not persuaded. She returns to autistic individuals, and claims that they are motivated to follow moral rules as such, without much of an understanding of the ends those rules are intended to serve (240). More precisely, the
need to impose order as a way of managing their environment predisposes high-functioning individuals with autism toward … discovering easy-to-follow principles behind whatever system of rules they find in place (242).
Thus behind their use of reason to follow rules is a strong desire for order, thereby supporting a Humean rather than a Kantian picture of their moral psychology. Even though autistic individuals might not have empathetic attachments, they still have other affective states that secure for them a kind of moral agency. Stepping back from autism, the picture is that we respect reason due to our recognition that it is instrumental to pursuing our affectively valenced ends (246). Finally, McGeer ends her paper with more speculative claims about the existence of three spheres of affective concern: (i) concern or compassion for others, (ii) concern with social position and social structure, and (iii) concern with "cosmic" structure and position (229). Different forms of moral agency can result from differences in the development and relationship between these spheres in a given person. In psychopaths, by contrast, none of these three spheres appears to be at work (254).
Kennett is appropriately enlisted to write the first commentary, and she suggests that what McGeer calls the concern for cosmic structure and position might just be a manifestation of the Kantian rational disposition to “understand what we do, and to do what we can understand ourselves doing” (261). In the second commentary, Heidi Maibom also focuses on McGeer’s appeal to the concern for cosmic structure, and raises a series of worries. One is that it is not clear how this concern connects to moral actions, nor how it connects to “a concern [in autistic individuals] that the little things be done in the right way” (267). More forcefully, Maibom argues that the passion for order found in those with autism can be explained by the second concern for social position and structure (268). Finally, in a third set of comments Frédérique de Vignemont and Uta Frith helpfully develop the central puzzle at issue in McGeer’s paper as follows (274):
(a) Humean view: Empathy is the only source of morality.
(b) People who have no empathy should have no morality.
(c) People with autism show a lack of empathy.
(d) People with autism show a sense of morality.
Talk of "morality" here is, I take it, best read as shorthand for "moral consciousness" or "moral sensitivity". McGeer rejects (a) and thereby (b). De Vignemont and Frith, on the other hand, suggest that both (c) and (d) might be vulnerable. Some individuals with autism spectrum disorder may be able to empathize, and it is not clear that they can make the now standard distinction between violations of moral and conventional norms (275-7).
Jerome Kagan, in “Morality and its Development”, sketches five broad stages in human moral development (299-303):
First Stage: During the first year, infants are conditioned to learn that certain behavior is followed by punishment.
Second Stage: By the second birthday, facial expressions suggest “a state of uncertainty in situations that present temptations to violate a standard of action, even though the child may never have been punished for that act” (299). Before the third birthday, there is an awareness of normative standards of behavior.
Third Stage: By the end of the third year, children can understand the concepts of good and bad and apply them to things, events, and people, including themselves.
Fourth Stage: Between three and six years of age, children feel guilt when they violate normative standards.
Fifth Stage: Between five and ten years of age, children form an understanding of the abstract concepts of fairness and the ideal.
The remainder of his paper is devoted to briefly mentioning other important concepts in moral development, such as the role of nominal and relational social categories, a feeling of virtue, and temperamental biases.
The first commentary by Nathan Fox and Melanie Killen seems to be generally approving of Kagan's project, and they mainly note differences with other views in the literature, such as Frans de Waal's position on whether morality is uniquely human. At one point Fox and Killen attempt to clarify what Kagan means when he says that which actions count as moral or conventional has changed over time (316). In the second commentary, Paul Whalen briefly suggests further work in neuroscience that needs to be done in order to better understand inhibited temperamental biases in children.
In "Adolescent Moral Reasoning: The Integration of Emotion and Cognition", Abigail Baird in a sense picks up where Kagan left off by briefly treating several important concepts used to understand adolescent moral development. She first provides a developmental sketch similar to the one above, going through conditioning, conformity to internalized standards, guilt, and abstract thoughts (325). Turning to adolescence, the period between puberty and adulthood, she notes that in addition to abstract thought, the other primary gain in cognitive development is the ability of adolescents to reason about their own thoughts, thereby grounding metacognition and introspection (326). At the neuroscientific level, adolescent development is correlated with a decrease in gray matter and an increase in white matter throughout the cortex (327).
In the remainder of the paper, Baird touches on a number of topics. She shows some sympathy for Damasio’s (1994) somatic marker hypothesis, one consequence of which is that somatic markers “help to reduce the complexity of decision making by providing a ‘gut’ feeling that does not require effortful cognition” (330). In the moral context, such markers can improve the rate of moral transgression avoidance amongst adolescents by helping them anticipate how they might feel if they were to perform the morally wrong action (331). She also briefly mentions what she calls the self-conscious emotions such as pride, shame, guilt, and embarrassment, which arise comparatively late in development because of their cognitive sophistication. The paper concludes by focusing on the phenomena of imaginary audiences and personal fables in adolescents, as well as the central role that peers come to play in their social lives.
Daniel Lapsley raises several objections in his commentary. One objection questions the assumption in Baird's four-stage theory of development that "mature moral reasoning is isomorphic with neurological maturation" (346). In addition, Lapsley claims that associations between cognition and emotion are made much earlier in development than Baird's account predicts, and that her claim that cognition precedes emotion is contestable and sits uneasily with the somatic marker hypothesis (348). Katrina Sifferd instead applies Baird's framework to the topic of adolescent criminal responsibility, and takes it to favor the retention of a "separate juvenile justice system that focuses upon rehabilitative sanctions" (354).
Finally, in a very careful and timely paper entitled “What Neuroscience Can (and Cannot) Contribute to Metaethics”, Richard Joyce helpfully tempers some of the enthusiasm for using certain interdisciplinary empirical results to adjudicate between competing meta-ethical views. More precisely, he focuses on whether neuroscience supports moral emotivism and threatens moral rationalism. Moral emotivism is, very roughly, the view that when making public moral judgments we are expressing an emotion rather than a cognitive belief (374). As a number of papers in these three volumes suggest, there is a wealth of empirical evidence that emotions are intimately involved in moral deliberation. So the argument looks to be relatively straightforward: “if we find that when we hook up people’s brains to a neuroimaging device, get them to think about moral matters, and observe the presence of emotional activity, emotivism is supported” (375). But Joyce recommends caution. For at best what these studies would show is that public moral judgments are caused or accompanied by emotions, but not that they actually express those emotions (375). Instead the empirical work that needs to be done in this area in order helpfully to evaluate the plausibility of emotivism does not have to do with neuroscience but rather with empirical studies of our linguistic conventions and practices connected to moral discourse.
More attention is paid by Joyce to moral rationalism. He first very helpfully distinguishes between three versions of the view: psychological, conceptual, and justificatory moral rationalism. The first is the view that moral deliberation and decision arise from a rational faculty (377). A strong version of this view claims that the rational faculty is both necessary and sufficient for moral judgment, and might be in danger from empirical studies on psychopaths. Even so, a weaker psychological rationalist position, according to which the work of the rational faculty is only necessary but not sufficient, still seems defensible. Merely showing that emotional activity is strongly correlated with moral deliberation and judgment will not refute that version of the view (379-80). Conceptual rationalists, on the other hand, claim that it is conceptually necessary that moral requirements are also requirements of practical rationality. Here Joyce focuses on Shaun Nichols’ attempt in two recent places (2002, 2004) to undermine this view using empirical surveys of folk intuitions. Nichols’ strategy involves connecting conceptual rationalism to motivational internalism and then showing that many of the folk report that they accept the existence of characters who are supposed to be unmoved by their genuine moral judgments and thereby serve as counterexamples to the conceptual necessity of motivational internalism. In the process of responding to Nichols, Joyce also nicely discusses in some detail the same work by Roskies (2003) that was the focus of the earlier paper in this volume by Kennett and Fine. Finally, the third form of moral rationalism distinguished by Joyce is justificatory rationalism, which claims that moral wrongdoing is also practically irrational, but does not affirm a conceptual connection between the two (388). 
Unlike psychological rationalism, the view is indifferent as to what was causally responsible for motivating the action (i.e., an emotion or a rational faculty), but rather just cares about the normative status of the action itself. More precisely, the core idea is that rational principles discoverable by reason favor a degree of interpersonal impartiality and benevolence (390). Such an attempt to find a rationalist foundation for ethics will not be threatened by results from neuroscience, unless those results end up having the dramatic result that we fail to instantiate the bare minimum conditions required for rational agency in the first place (390).
In the first commentary, Shaun Nichols begins by trying to bolster his earlier argumentative strategy against conceptual rationalism. More interesting in my view is the end of his commentary, in which he develops a strategy for using neuroscientific results to cast doubt on justificatory rationalism. Nichols suggests that while Joyce is right that such results cannot refute this brand of rationalism directly, they can cast doubt on the actual arguments used to arrive at rationalist conclusions. For instance, an argument by Peter Singer seems to rely on the following principle:
Justification Principle: Rationality reveals that from the perspective of the universe my interests are not privileged, so I should not privilege them to the exclusion of the interests of others (402).
This claim has a great deal of intuitive support, but if our morally relevant intuitions are a causal product of nonrational emotional faculties then it might seem that such intuitions have no justificatory value. Hence any conclusions derived from principles like the Justification Principle will be ungrounded (402-3). This line of reasoning and Joyce’s response are worth examining, and the connection to Greene’s arguments should be clear. In general I found the exchange between Joyce and Nichols to be one of the most rewarding in the collection. In the second commentary, Leonard Katz engages not only with Joyce’s paper in this collection but also with his earlier work, which advanced a meta-ethical error theory (Joyce 2001). Katz focuses on pleasure and pain, and argues against Joyce that hedonic considerations can ground the objectivity of practical reasons and the realist standing of central parts of morality connected to those considerations.
Let me end by stepping back from the individual papers. As with the second volume, once again there are several papers of very high quality that should have a significant impact on work in moral psychology. Many of the commentaries are first-rate as well. However, to my mind at least, there was a noticeable drop-off from the previous volume to this one in several respects. For one, there was a great deal of variability in the structure of the volume — for instance, some papers were as short as 15 pages while one was 45 pages in length. Similarly with the commentaries — some were extremely thorough and detailed and ran upwards of ten pages, whereas others were as brief as two and a half pages. More substantively, the quality of the papers also varied significantly, although I will not comment on individual papers in this respect here. So too did the ambitions of the papers vary noticeably — several read primarily as literature reviews or as response papers, whereas others aimed to build on the existing literature by providing new approaches to future work in an area or by trying to advance the discussion of a longstanding philosophical debate. Finally, in at least one case it was not clear what connection the paper had to the volume's theme of neuroscience, although in general the papers did hang together nicely when read in order.
These points should not, however, detract at all from the amazing job that Walter Sinnott-Armstrong has done in assembling such a tremendous cast of leading scholars from a number of different disciplines and producing as a result a three-volume collection of accessible and interesting papers and commentaries that should be read by anyone interested in contemporary philosophy of psychology, moral psychology, meta-ethics, and action theory. I have no doubt that many of these papers will significantly advance discussions in these areas for years to come.
Damasio, Antonio. (1994). Descartes’ Error: Emotion, Reason, and the Human Brain. New York: Grosset/Putnam.
Joyce, Richard. (2001). The Myth of Morality. Cambridge: Cambridge University Press.
Kennett, Jeanette. (2002). “Autism, Empathy and Moral Agency.” Philosophical Quarterly 52: 340-357.
McDowell, John. (1979). “Virtue and Reason.” The Monist 62: 331-350.
Nichols, Shaun. (2002). “Is It Irrational to be Immoral? How Psychopaths Threaten Moral Rationalism.” The Monist 85: 285-304.
_____________. (2004). Sentimental Rules: On the Natural Foundations of Moral Judgment. Oxford: Oxford University Press.
Roskies, Adina. (2003). “Are Ethical Judgments Intrinsically Motivational? Lessons from ‘Acquired Sociopathy.’” Philosophical Psychology 16: 51-66.
1 Also very helpful in this context is Richard Joyce’s paper later in this volume in which he nicely distinguishes between three different kinds of moral rationalism and notes that two of them — conceptual and justificatory rationalism — are not likely going to be threatened by the kind of neuroscientific results that moral psychologists such as Greene offer.