Moral Psychology, Volume 2: The Cognitive Science of Morality: Intuition and Diversity


Walter Sinnott-Armstrong (ed.), Moral Psychology, Volume 2: The Cognitive Science of Morality: Intuition and Diversity, MIT Press, 2008, 585pp., $30.00 (pbk), ISBN 9780262693578.

Reviewed by Christian Miller, Wake Forest University



This is the second of three volumes on moral psychology edited by Walter Sinnott-Armstrong and published by MIT Press in 2008. The first volume — with seven main papers by Owen Flanagan, Hagop Sarkissian, and David Wong; Leda Cosmides and John Tooby; Debra Lieberman; Geoffrey Miller; Peter Tse; Chandra Sripada; and Jesse Prinz — is focused on the evolution of morality and has already been reviewed in Notre Dame Philosophical Reviews. The third volume — with eight main papers by Jorge Moll, Ricardo de Oliveira-Souza, Roland Zahn, and Jordan Grafman; Joshua Greene; Kent Kiehl; Jeanette Kennett and Cordelia Fine; Victoria McGeer; Jerome Kagan; Abigail Baird; and Richard Joyce — is focused on the neuroscience of morality; my review of that volume appears separately in Notre Dame Philosophical Reviews.

The second volume contains eight main papers, each of which is followed by two responses, and then a reply by the original author(s). Given the daunting task of reviewing two of these sizable volumes, I have chosen to summarize very briefly each of the main papers and commentaries in order, stopping to make a few comments about two papers in particular.

In “Moral Intuition = Fast and Frugal Heuristics?” Gerd Gigerenzer summarizes and extends his influential work on heuristics. Simple examples of such heuristics include “Don’t break ranks” and “If there is a default, do nothing about it” (2-3). Gigerenzer’s theory emphasizes fast and frugal heuristics — fast in the sense that a heuristic reaches a decision rapidly, and frugal in the sense that it searches for little information (4). These heuristics are highly context sensitive and can conflict with traditional normative standards. They are meant to explain behavior rather than to serve as true normative claims (5).

Gigerenzer’s central thesis in this paper is that “[m]orally significant actions can be influenced by simple heuristics” (3). More precisely, he advances three hypotheses:

(a) Moral intuitions, especially as described by social intuitionist theories (Haidt 2001, Haidt and Bjorklund this volume), can be helpfully understood as fast and frugal heuristics (9).

(b) Heuristics which explain moral behavior are not different in kind from those which explain non-moral behavior (9).

(c) Heuristics underlying moral behavior are generally unconscious (10).

Gigerenzer further argues that morally relevant heuristics rely on (largely unconscious) reasons, and those reasons differ from the post hoc rationalizations that people often give for their moral intuitions. Towards the end of his paper, he turns to normative considerations and adopts an optimistic position on whether we should rely on moral heuristics. Perhaps most controversially, he argues that the fast and frugal heuristics approach limits acceptable normative theories. In particular, he claims that maximizing theories, such as standard consequentialist views, face a host of difficulties including problems with computational limitations, with imprecision in their central criteria, and with establishing trust between people (22-25).

In the first set of comments, Cass Sunstein is much less optimistic about the normative value of heuristics. Sunstein highlights several cases in which heuristics are unreliable and lead to moral error, especially cases that stem from heuristics such as “do what the majority does” or “do not distort the truth” (28). Both Sunstein and Julia Driver and Don Loeb in their subsequent comments devote most of their critical focus to Gigerenzer’s application of his view to consequentialism.

In a model essay for combining philosophical theorizing about morality with empirical results from other disciplines, Walter Sinnott-Armstrong takes aim at moral intuitionism in his paper “Framing Moral Intuitions”. He defines moral intuitions as moral beliefs that are strong (high degree of confidence) and immediate (non-inferential) (47). Sinnott-Armstrong concedes that most humans have moral intuitions understood in this manner, but the interesting question is whether such beliefs are non-inferentially justified, i.e., justified independently of whether the believer is able to infer them from any other beliefs (48). While moral intuitionists on Sinnott-Armstrong’s taxonomy claim that some moral intuitions are justified non-inferentially, he attempts to show instead that no moral intuitions are so justified. His argument relies on the following general principle:

If the process that produced a belief is not reliable in the circumstances, and if the believer ought to know this, then the believer is not justified in forming or holding the belief without inferential confirmation (51).

From this we get the following master argument:

(1) If our moral intuitions are formed in circumstances where they are unreliable, and if we ought to know this, then our moral intuitions are not justified without inferential confirmation.

(2) If moral intuitions are subject to framing effects, then they are not reliable in those circumstances.

(3) Moral intuitions are subject to framing effects in many circumstances.

(4) We ought to know (3).

(5) Therefore, our moral intuitions in those circumstances are not justified without inferential confirmation (52).

The framing effects in which Sinnott-Armstrong is mainly interested are word and context effects. A word effect occurs, for example, when the words used to describe what a belief is about affect whether a person holds the belief. A context effect occurs when a person’s belief depends upon the order in which two or more scenarios are presented to him or her (52-3). Wording and context do not affect the truth of the beliefs in these cases, but if they have a significant impact on whether we form those beliefs in the first place, then according to Sinnott-Armstrong it is plausible to hold that moral beliefs influenced by framing effects will often be incorrect and cannot reliably track the truth. As support for (3), he thoroughly summarizes a range of recent studies by Horowitz (1998, on rescue cases), Petrinovich and O’Neill (1996, on trolley problems), and Haidt and Baron (1996, on lying).

In his comments on Sinnott-Armstrong, William Tolhurst begins by questioning how the conclusion in (5) is supposed to follow from the premises, but he devotes most of his attention to the claim that

Nothing Sinnott-Armstrong has provided by way of argument gives us sufficient reason to believe that the percentage of moral intuitions formed in ordinary circumstances that result from framing effects is a significant fraction of all such moral intuitions (81).

Russ Shafer-Landau also takes aim at Sinnott-Armstrong’s master argument, and claims that the conclusion in (5), by telling us that moral intuitions in many circumstances are unjustified, tells us nothing that we did not already know. Instead Shafer-Landau proposes a revised argument on Sinnott-Armstrong’s behalf:

(1) If moral beliefs are subject to framing effects in many circumstances, then, for any one of my moral beliefs, it is justified only if I am able to inferentially confirm it.

(2) Moral beliefs are subject to framing effects in many circumstances.

(3) Therefore, for any one of my moral beliefs, it is justified only if I am able to inferentially confirm it (85-6).

He then proceeds to call into question the first premise. It is worth noting that in his reply to the commentators, Sinnott-Armstrong does give a more carefully developed, twelve-step version of his argument.

Marc Hauser, Liane Young, and Fiery Cushman use their paper “Reviving Rawls’s Linguistic Analogy: Operative Principles and the Causal Structure of Moral Actions” to argue for a version of moral nativism. Drawing on what they take to be a suggestion from John Rawls, they try to forge a helpful analogy between Chomskyan approaches to language and the psychology of moral judgments.1 In the case of language, there is good reason to think that we have inherited a language faculty that provides the principles whereby a subject’s knowledge of a language is constructed. Children are born with linguistic principles such as rules for verb construction, although in the child the principles are operative but not expressed (111). In the moral case, Hauser et al. contrast their Rawlsian model (117):

Action analysis → Judgment → Emotion → Reasoning

with a Kantian model (114):

Perceive event → Reasoning → Judgment → Emotion

and a Humean model (115):

Perceive event → Emotion → Judgment → Reasoning

On their approach, there is an innate, universal moral faculty that forms moral verdicts about perceptions of situations on the basis of operative but unexpressed moral principles. Hauser et al. proceed to distinguish a stronger and weaker version of this view. According to the weaker model (as I understand it) the operation of the moral faculty is necessary for moral judgments, but other components such as the emotions might also be necessary and are causal products of the appraisals made by the faculty. On the stronger model, on the other hand, the moral faculty generates a moral judgment just using the operative moral principles (117, 121). Crucial for their view in either version, however, is the work of the unconscious appraisal mechanism, which analyzes such things as the causes and consequences of actions. Furthermore, this “moral faculty is equipped with a universal set of principles, with each culture setting up particular exceptions by means of tweaking the relevant parameters” (122). It follows that damage to such a system would seriously impair our moral capabilities. To empirically support their view, Hauser et al. describe at length their survey results using various trolley scenarios.

The two sets of commentaries are perhaps the most critical in the volume. Ron Mallon argues that there is no evidence for a specialized moral faculty, understood as being informationally encapsulated, having principles that are obscure to conscious reasoning, and existing in a certain discrete brain location. Rather, cognitive science gives us good reason to think that “multiple or diffuse internal mechanisms operate in such a way that we can accurately describe them as performing the function” of the moral faculty (150). Jesse Prinz raises five potential disanalogies between morality and language, and also questions the four allegedly analogous respects between the two offered by Hauser et al. He then proceeds to sketch what he would later call his constructive sentimentalist account and outlines how a non-nativist might account for the trolley data (Prinz 2007).

In “Social Intuitionists Answer Six Questions about Moral Psychology”, Jonathan Haidt and Fredrik Bjorklund provide a very helpful overview of the social intuitionist account of moral judgment. Simplifying greatly, on their view moral judgments arise from quick and automatic moral intuitions. These intuitions are defined as

the sudden appearance in consciousness, or at the fringe of consciousness, of an evaluative feeling about the character or actions of a person, without any conscious awareness of having gone through steps of search, weighing evidence, or inferring a conclusion (188).

Haidt and Bjorklund postulate five sets of basic intuitions: harm/care, fairness/reciprocity, authority/respect, purity/sanctity, and in-group/out-group (203). Conscious moral judgments formed on the basis of such intuitions in turn give rise to conscious moral reasoning, which rationalizes in a post hoc manner the conclusions that have already been reached. This moral reasoning can causally influence the agent’s subsequent moral intuitions and judgments, but Haidt and Bjorklund claim that such influence is “hypothesized to occur somewhat rarely outside of highly specialized subcultures” (193). In addition, there is a strong social dimension to their theory — one person’s intuitions might be shaped by another’s judgment (the “social persuasion link”) or reasoning (the “reasoned persuasion link”) or both (187). Indeed, Haidt and Bjorklund claim that the “reasons that people give to each other are best seen as attempts to trigger the right intuitions in others” (191).

In support of their view, Haidt and Bjorklund cite three main forms of empirical evidence: moral judgment interviews and the phenomenon of moral dumbfounding, disgust and hypnosis manipulations, and neuroscientific evidence. They conclude by addressing the questions of how morality develops and why people vary in their moral views. At the very end of their paper, they draw some quick (and as the commentators note, very controversial) philosophical implications from social intuitionism about, among other things, the metaphysics of moral facts and the nature of moral inquiry.

In the first set of comments, Dan Jacobson raises a host of important criticisms, but one of his main worries is that

the social part of the [social intuitionist model] fails to vindicate moral judgment as a form of good thinking, specifically about questions of what to do and how to live … we need an argument that intuition can reveal moral truth and ground moral knowledge even on a less objectivist model (224).

In addition, he criticizes the seeming equation on the social intuitionist model of reason giving with persuasion (226). In a very interesting commentary, Darcia Narvaez argues that social intuitionism overemphasizes the role of intuitions in moral thought without paying adequate attention to the role of deliberative reasoning and the way in which other factors such as principles and goals (the agent’s own and others) play a role in decision making and deliberation prior to judgment. To use her memorable expression, “Instead of intuition’s dominating the process, intuition danced with conscious reasoning, taking turns doing the leading” (235).

Let us pursue this last point a bit further. In their response, Haidt and Bjorklund graciously acknowledge the force of Narvaez’s objection, and revise their view to apply just to moral judgments as opposed to moral decisions (242). This distinction is never specified very precisely, but the idea appears to be that moral judgments are moral evaluations of others (their behavior, character, etc.) whereas moral decisions concern one’s own behavior (242). Haidt and Bjorklund claim that social intuitionism is primarily intended as an account of the former, and that there is good reason to think there is not one moral faculty responsible for both judgments and decisions (243).

Nevertheless a critic might object that such a distinction does not get the phenomenology correct. On the one hand, there seem to be automatic “moral decisions” about how to act that involve little or no prior conscious reflection. Indeed Haidt and Bjorklund provide a nice example of such a case: the decision that leads someone to immediately jump into a river to save a life (244). On the other hand, there are plenty of cases in which “moral judgments” about the morality of third-person behavior involve “private, internal, conscious weighing of options and consequences” (242). For instance, someone might read about a case of physician-assisted suicide and, after reflectively weighing the various considerations raised in the story, come to a conclusion about the morality of the doctor’s action. More straightforwardly, Narvaez gives a list of twelve moments of conscious deliberation in her own life about how to act, a list that Haidt and Bjorklund seem to accept as familiar (235, 243). For each item on the list, however, we can easily arrive at a third-person analog in which just as much conscious deliberation might be involved. For instance, one item is the following (235):

“This meeting is a waste of time. What can I do to make it worthwhile for everyone?”

Another person might consciously deliberate just as much about the following:

“This meeting is a waste of time. What can the department chair do to make it worthwhile for everyone?”

Or after the fact:

“That meeting was a waste of time. Did the department chair act in the best way possible?”

Similarly, another item is this (235):

“How do I tell my boss that the workload is unfair?”

But we can easily imagine the following question requiring even more conscious deliberation in some cases:

“How should she go about telling her boss that her workload is unfair?”

Or after the fact:

“Did she really go about telling her boss that her workload is unfair in the best way possible?”

In general it is hard to see how the distinction between moral judgments and moral decisions really helps Haidt and Bjorklund. Ultimately, I suspect that they would be better off simply abandoning this distinction and following Narvaez in claiming that intuitions are often just one of several factors (some conscious and some not) that typically go into the formation of many moral conclusions, and that the precise contribution that each factor makes can vary significantly from one case to the next.

Shaun Nichols’ paper “Sentimentalism Naturalized” can serve as a précis of his 2004 book Sentimental Rules. He begins by examining and quickly dismissing early sentimentalist accounts of moral judgments such as emotivist views before moving to contemporary neosentimentalist positions. There the representative example is Gibbard’s 1990 norm-expressivism, according to which “what a person does is morally wrong if and only if it is rational for him to feel guilty for doing it, and for others to resent him for doing it” (quoted in Nichols, 258, emphasis in original). Nichols argues, however, that this approach is too demanding since it requires an agent to “(i) attribute guilt, (ii) evaluate the normative appropriateness of emotions, and (iii) combine these two capacities to judge whether guilt is a normatively appropriate response to a situation” (261). Further, there is good empirical evidence for thinking that young children can make genuine moral judgments (more precisely, what Nichols calls core moral judgments concerning violations of harm norms), whereas they do not possess an understanding of guilt until significantly later in life.

As an alternative approach, Nichols begins to outline his sentimental rules account, whereby core moral judgments have two central components. One is an affective system that is triggered by perceptions of suffering in others; a motivation for positing this system comes from studies of psychopaths who have difficulty drawing the moral versus conventional distinction and also have deficits in affective response (263). The other central component is an agent’s normative theory, or body of information about what actions are wrong. Since our affective system also reacts to natural disasters and accidents, something more is needed to form a genuine moral judgment (264). Finally towards the end of his paper (and his book), Nichols turns to the evolution of norms and advances the “affective resonance” hypothesis:

Norms that prohibit actions to which we are predisposed to be emotionally averse will enjoy enhanced cultural fitness over other norms (269).

As evidence for this hypothesis, Nichols examines in some detail the cultural evolution of etiquette norms (270-1).

At this point we can raise a few questions for Nichols about his treatment of these two alleged components of core moral judgments.2 First of all, even if we accept that there are these two components, what we need is a detailed account of how they work together and what precise relation they bear to core moral judgments. Here Nichols is short on details, remarking at one place in his book that they “somehow conspire to produce the distinctive responses tapped by the moral/conventional task” (2004, 29). Similarly, he seems to reject the claim that both components operating together occurrently are necessary for core moral judgments (2004, 28-9). Instead it may be enough if the affective system is present at some crucial earlier developmental stage (2004, 29). Even so, however, we are never given an argument for why such an affective system is necessary at any point for the production of core moral judgments, where the necessity here is presumably nomological rather than conceptual since Nichols is not giving a conceptual account of moral judgments. At best, his arguments for the role of an affective system would seem to show that such a system is extremely common or frequently present in subjects making core moral judgments. Similarly, even if both components are indeed necessary in some way, Nichols says nothing to convince us that together they are jointly sufficient. Without any clear reasons to accept either the necessity or the sufficiency of his view, it is not clear to what extent Nichols provides an actual empirical account of our capacity to form core moral judgments. Finally, even if Nichols provides such an account, it is initially unclear how it would extend beyond just those judgments concerning violations of harm-based norms. These points are not meant to be serious objections, but rather areas where I hope the view receives further development (for similar concerns, see Sinclair 2005).

In his response to Nichols’ paper, James Blair questions the necessity of both the affective response component and especially the normative theory component of Nichols’ account. Blair suggests that in place of the normative theory component we should instead pay attention to simulation accounts of the theory of mind. In a second set of comments, Justin D’Arms adopts two main lines of criticism. The first is to suggest that neosentimentalists like Gibbard might plausibly deny that children are making genuine as opposed to ersatz moral judgments. The second is that neosentimentalists can account for familiar moral disagreements in ways that Nichols’ theory cannot.

In “How to Argue about Disagreement: Evaluative Diversity and Moral Realism”, John Doris and Alexandra Plakias examine the bearing that moral disagreement should have on the plausibility of moral realism. They helpfully distinguish between two realist approaches to disagreement: (i) convergent moral realists claim that moral realism is threatened by the actual extent of moral disagreement we find, but are confident that such disagreements will be seriously diminished in suitably ideal conditions (where these conditions are specified in different ways by different realists), whereas (ii) divergent moral realists deny that even persistent moral disagreement in suitably ideal conditions would count against the truth of moral realism. Doris and Plakias are no fans of moral realism in general, and first criticize the divergentism of Bloomfield (2001) and Shafer-Landau (2003). They then turn to convergentism, and note that here the usual strategy is to provide one or more defusing explanations for why current moral disagreement is only superficial and will not likely persist as epistemic conditions improve. Such explanations include different beliefs about relevant nonmoral facts, partiality, irrationality, and divergent background theories (320). Finally, they turn to empirical studies in two areas — violence and honor in the American North and South, and reactions to cases of authorities killing an innocent scapegoat to prevent rioting — which in their view serve as at least prima facie counterexamples to convergentism by appearing to be cases in which moral disagreement might persist even in ideal conditions.

In his comments, Brian Leiter indicates that he is no fan of convergent moral realism either, but raises several concerns about the approach taken by Doris and Plakias, such as their treatment of the violence studies. Interestingly, he claims that ample evidence of moral disagreement, even under suitable epistemic conditions, can be found simply by studying the history of moral philosophy, and mentions Nietzsche’s moral views as an especially difficult case for those trying to come up with defusing explanations (336). Paul Bloomfield, on the other hand, is a well-known moral realist who suggests a number of different ways of defending the view. Perhaps most forceful from a divergentist perspective is the claim that we have good reason to expect there to be persistent disagreements in science as well, even under improved epistemic conditions (342).

Don Loeb’s paper, “Moral Incoherentism: How to Pull a Metaphysical Rabbit out of a Semantic Hat”, outlines and motivates a new and interesting meta-ethical position. He begins with our linguistic dispositions and asks us to suppose that no coherent account fits them well enough to give the meanings of moral terms. Then it would follow that “there are no particular things we are referring to when we employ the terms of that vocabulary” (357). Indeed, Loeb thinks there is evidence that inconsistent elements — for and against objectivity, for and against motivational internalism, for and against deontology, and so forth — are part of our understanding of moral terms. This view about the lack of a coherent moral semantics, a view Loeb calls moral incoherentism, would not only be interesting in its own right, but would also have a significant metaphysical consequence in that it would threaten to refute moral realism (381-2). Thus if his approach is correct, then (as the title of his paper suggests) Loeb would have pulled a metaphysical rabbit out of a semantic hat. Note that incoherentism is neither a straightforward cognitivist nor non-cognitivist position. Rather for Loeb,

It may be that ordinary people use the moral words both to make factual assertions and to do something incompatible with the making of such assertions, because ordinary people are at bottom widely and irremediably, if perhaps only implicitly, conflicted about questions of moral objectivity (363, emphasis his).

In order to develop his view, Loeb examines at length the semantic theory defended by Frank Jackson (1998), and concludes with some very helpful cautionary remarks about carrying out empirical investigations into our moral semantic practices.

In his commentary, Michael Gill accepts what he calls Loeb’s variability thesis: that ordinary moral thought contains cognitivist and noncognitivist elements, neither of which has primacy. However, he denies that this thesis compels us to think that moral thought and language are therefore deeply confused. As with words like “happy”, it might be

perfectly sensible to use a moral term in a way that involves one commitment, and also to use a moral term in a way that eschews that commitment, just so long as the first use occurs in a situation that is semantically insulated from the situation in which the second use occurs (390).

For instance, to use Gill’s own example, Beavis and Butthead might use moral terms noncognitively, whereas evangelicals use them cognitively (393). Geoffrey Sayre-McCord instead thinks that Loeb’s mistake is to focus on the semantic practices of ordinary folk. Rather, cognitivists and non-cognitivists alike should best be understood as giving accounts of what only those who are “genuinely engaging in moral thought and talk” are doing (410).

The final paper in the volume, Julia Driver’s “Attributions of Causation and Moral Responsibility”, raises a host of moral and metaphysical issues that have not made an appearance up to this point. Driver’s goal is to defend — barring some odd Cambridge-change cases — the following thesis:

If an agent A is morally responsible for event e, then A performed an action or an omission that caused e (423).

This thesis “seems extremely intuitively plausible”, but has been met with alleged counterexamples. Driver draws on work from social psychology, metaphysics, and moral psychology in order to try to block these cases. The main empirical studies she employs are those of Alicke (1992) and Knobe (2005), which purport to show that normative considerations play an important role in causal attributions. For example, suppose two employees logged onto a computer at the same time, despite the fact that one of them was not permitted to do so by the company for fear that the computer would crash. Despite its being true that if either had not logged on, the computer would not have crashed, we tend to say that the employee who acted out of order was the one who caused the computer to crash (428). Along the way, Driver suggests an alternative interpretation of the data, namely that in the cases studied by Alicke and Knobe causal attributions are sensitive to the abnormality of certain events rather than to normative considerations. From there, Driver focuses on two main counterexample strategies: Leslie’s cases (1991) involving “quasi-causation” such as perfectly symmetrical behavior of identical doubles in two universes and Sartorio’s cases (2004) involving disjunctive causation (e.g., two people have the power to stop a chemical leak by simultaneously pushing a button in their respective rooms but choose not to) (432, 434).

In the first set of comments, Joshua Knobe and Ben Fraser examine two of Driver’s empirical hypotheses, the first one being the claim mentioned above that abnormality might be able to help explain the causal attributions in the Alicke and Knobe cases. They devised a survey case designed to test this hypothesis and found little empirical support for it. John Deigh, on the other hand, turns to cases involving criminal responsibility in order to provide a new challenge to the original principle. In particular, he appeals to the principle of complicity and a case involving several youths and the death of a pizza deliveryman. While some of the youths only stood watch and so did not actually beat the deliveryman, they are still held criminally responsible for his death.

Let me end by stepping back from the individual papers. In general, all of the main papers and most of the commentaries are of high quality and interest. Sinnott-Armstrong is to be congratulated for bringing so many leading figures from various disciplines together and for preserving both a high level of dialogue and a commitment to making the material accessible to a wider audience. Needless to say, this and the two other volumes in this series are must reading for anyone working in moral psychology and meta-ethics.

At the same time, I came away from this collection somewhat discouraged. The overall picture we get is of moral intuitions and judgments that are usually only rationalized post hoc (Haidt and Bjorklund), are subject to unreliable framing effects in many circumstances (Sinnott-Armstrong), involve a significant role for affective responses that have had important impacts on the cultural evolution of norms over time (Nichols), are subject to fundamental moral disagreements even in significantly improved epistemic conditions (Doris and Plakias), and involve conflicting elements that render them in some sense incoherent (Loeb). More optimistic accounts of the role of moral reasoning and reflective deliberation were few and far between (e.g., Narvaez). While this focus might just reflect the way things really are, it likely will not foster a very uplifting outlook either in philosophers working in this area or in people in general who are interested in better understanding how our moral psychologies operate.


Alicke, M. (1992). “Culpable Causation”. Journal of Personality and Social Psychology 63: 368-378.

Bloomfield, Paul. (2001). Moral Reality. New York: Oxford University Press.

Haidt, Jonathan. (2001). “The Emotional Dog and its Rational Tail: A Social Intuitionist Approach to Moral Judgment”. Psychological Review 108: 814-834.

Haidt, J. and J. Baron. (1996). “Social Roles and the Moral Judgment of Acts and Omissions”. European Journal of Social Psychology 26: 201-218.

Horowitz, T. (1998). “Philosophical Intuitions and Psychological Theory”. Ethics 108: 367-385.

Jackson, Frank (1998). From Metaphysics to Ethics: A Defense of Conceptual Analysis. Oxford: Clarendon Press.

Knobe, Joshua. (2005). “Attribution and Normativity: A Problem in the Philosophy of Social Psychology”. Unpublished manuscript, University of North Carolina Chapel Hill. Cited in Moral Psychology, Volume 2: The Cognitive Science of Morality: Intuition and Diversity. Ed. Walter Sinnott-Armstrong. Cambridge: MIT Press, 2008. 481.

Leslie, J. (1991). “Ensuring Two Bird Deaths with One Throw”. Mind 100: 73-86.

Miller, Christian. (forthcoming). “Moral Relativism and Moral Psychology”, in The Blackwell Companion to Relativism. Ed. Steven Hales. Oxford: Blackwell Publishing.

Nichols, Shaun. (2004). Sentimental Rules: On the Natural Foundations of Moral Judgment. Oxford: Oxford University Press.

Petrinovich, L. and P. O’Neill. (1996). “Influence of Wording and Framing Effects on Moral Intuitions”. Ethology and Sociobiology 17: 145-171.

Prinz, Jesse. (2007). The Emotional Construction of Morals. Oxford: Oxford University Press.

Sartorio, C. (2004). “How to be Responsible for Something Without Causing It”. Philosophical Perspectives 18: 315-336.

Shafer-Landau, Russ. (2003). Moral Realism: A Defence. Oxford: Clarendon Press.

Sinclair, Neil. (2005). “Review of Sentimental Rules: On the Natural Foundations of Moral Judgment”. Notre Dame Philosophical Reviews.

1 More carefully, Hauser et al. develop a strong and a weak version of the analogy. The strong version claims that language and morality are similar in a deep sense, with corresponding faculties and similar roles for encapsulation, operative versus expressed principles, competence versus performance, universal grammars, and so forth. The weak version of the analogy, however, is only meant to serve as an “important guide” by “opening doors to theoretically distinctive questions that, to date, have few answers” (125, 139).

2 This paragraph draws on the longer treatment of Nichols’ view in Miller forthcoming.