Simulating Minds: The Philosophy, Psychology, and Neuroscience of Mindreading


Alvin I. Goldman, Simulating Minds: The Philosophy, Psychology, and Neuroscience of Mindreading, 2006, Oxford University Press, 376pp., $35.00 (cloth), ISBN 0195138924.

Reviewed by Peter Carruthers, University of Maryland


In Simulating Minds, Alvin Goldman provides a systematic development and defense of a simulationist account of our mind-reading capacities, drawing on a rich and varied body of research in psychology and neuroscience. (The basic idea of simulationism is that we often come to attribute a mental state to someone by first undergoing a similar mental process in ourselves, the outcome of which is introspected and then attributed to the other person.) This is interdisciplinary philosophizing at its best: it is clear, it is careful, it is insightful, it examines arguments critically and draws relevant distinctions, and it synthesizes a wide range of empirical data. It should be read by anyone with an interest in mind-reading.

The book does have one major flaw, however. This is that Goldman sets up the dialectic in such a way that simulation theory gets to win out over the opposition provided that simulation plays some role in mind-reading; and conversely theory-theory and modular approaches both lose if mind-reading turns out not to be entirely theory-driven, or entirely modular. This asymmetry in treatment is unwarranted. One reason is that many writers who call themselves "modularists" or "theory-theorists" now accept that mind-reading involves both simulation and modules and/or theory (Botterill and Carruthers, 1999; Nichols and Stich, 2003). In light of this, our focus should rather be on the relative centrality of theory versus simulation in human mind-reading, and on the question whether either is more fundamental than the other. Another point is that because of the way in which he frames the debate, Goldman is led to concentrate most of his efforts on showing that simulation is important (which isn't something that modularists or theory-theorists should deny). In consequence, not nearly enough attention gets devoted to those aspects of his account that are distinctive, and that would be rejected by his opponents. This is especially true in connection with the alleged primacy of first-person knowledge. I shall return to this point below.

The first five chapters are preparatory for the main body of the book, which follows over the course of the next five. (The final chapter then explores the role of simulation in human social life, including mimicry, fantasy, fiction, and morality.) Chapter 1 gives an overview of the main issues and the main theoretical positions. Chapter 2 explains the notion of "simulation" that is at issue (it is said to occur whenever people get answers to questions about the mental states of another by engaging in mental processes that are relevantly similar to the processes taking place in the other), and it elaborates the form of simulation theory that Goldman proposes to defend (which also allows an important role for theoretical knowledge, it should be stressed). The next three chapters then criticize the opposition. Chapter 3 critiques the rationality theory of Davidson (1984) and Dennett (1987). Chapter 4 criticizes the "child as scientist" forms of theory-theory defended by some developmental psychologists (Wellman, 1990; Gopnik and Meltzoff, 1997). And Chapter 5 takes aim at modularity theories (Baron-Cohen, 1995; Scholl and Leslie, 1999). Chapters 3 and 4 are for the most part successful. Chapter 5 is less so, since much of it depends upon the strong sense of "module" introduced by Fodor (1983), which arguably isn't appropriate in connection with conceptual systems like mind-reading (Carruthers, 2006).

In Chapter 6 Goldman makes out a powerful case that certain forms of mind-reading involve the operations of a suite of automatically operating mirroring systems, and thereby a form of simulation. (The discussion focuses on mind-reading for emotions, for bodily sensations like pain, and for simple intentional actions like grasping a cup.) The case is especially strong in connection with our recognition of the emotions of other people from their facial expressions. For we know that the relevant emotion centers of the observer's brain are activated when seeing the facial expression of another. And we know that people who are impaired in experiencing an emotion themselves (fear, say) are also impaired in recognizing that emotion in other people.

These data are consistent with a (suitably weakened) form of theory-theory, however, which allows a role for simulation, but insists that the core concepts and inferential principles involved in mind-reading are both information-rich and not acquired through simulation (rather, they are either innate or acquired via theorizing). It makes perfectly good sense that the above mirror systems might have evolved in advance of a capacity for mind-reading, either facilitating emotional learning or facilitating imitative learning of actions and action sequences. (Indeed, Goldman acknowledges as much in Chapter 8. If I feel fear when I see you displaying fear I shall be on the alert, both to respond appropriately -- e.g. by running away -- and to discover an appropriate object of fear in the environment; similarly if I myself feel disgust when I observe you displaying disgust. And if I see you performing a sequence of actions, and the action schemata necessary for me to replicate it are thereby primed, then imitation of those actions is rendered more likely.) These mirror systems would then be waiting, ready to be co-opted into the service of a later-evolving mind-reading system. Since the identification of emotion in others, in particular, can be a subtle business, it is quite intelligible that one should have a conceptual representation of emotion that can be activated either by visual or other experience, or by a mirror-system-induced experience of the same emotion, or both. It does not follow, however, that the recognition of one's own feelings of emotion is primary, or that there isn't a good deal of learned or innate information about emotions and their causal role that gives our identification of emotions in ourselves and others much of its significance.

Chapter 6 was concerned with what Goldman calls "low-level mind-reading" -- that is, forms of automatic non-conceptual mirroring that play a significant role in mind-reading. Chapter 7 then turns to the "high-level" variety, focusing on the role of pretense or "enactment imagination" in mind-reading (and distinguishing it from mere propositional supposition). Much of the chapter is devoted to showing that visual and other forms of imagination are the right kinds of thing to play a simulative role. All this is nicely handled, but it doesn't really serve to distinguish Goldman's views from weakened forms of theory-theory. Indeed, in the course of the chapter Goldman himself concedes a crucial role for theory at two different junctures. When we wish to predict what someone in a given situation will think or do we have to begin our simulation of them with some pretend inputs. But selection of the right inputs will have to be guided by theory. Likewise when trying to explain why someone has acted as he has, Goldman thinks that what we do is adopt a "generate and test" procedure -- we try out some imagined inputs to the simulation process, and see if they result in an intention to perform an action of that sort. But since there are indefinitely many distinct inputs that we could in principle select and test, the choice of the most relevant and/or likely hypothesis will, again, have to be guided by theory. This is all music to the ears of the kind of theory-theorist who also allows an important role for simulation.

Chapter 8 is something of a hodge-podge. It discusses the emergence of mind-reading in ontogeny and evolution, and its absence in autism. But it also takes up empathy and dual-process theories of empathy, as well as the relationship between simulation theory and control theories of action of the sort proposed and elaborated by Wolpert and colleagues (Wolpert and Ghahramani, 2000; Wolpert and Flanagan, 2001). The goal is to review a range of empirical results not covered in earlier chapters, and to show that they support -- or are at least consistent with -- simulationism. Here again it is unfortunate that Goldman construes simulation theory so weakly and his opponents' views so strongly, since much of this evidence is equally consistent with weakened forms of theory-theory. For example, Goldman discusses data showing that infants who have had experience with blindfolds will no longer follow the "gaze" of a person wearing a blindfold who turns his head in one direction or another, whereas infants who lack such experience will do so. Goldman interprets this result in terms of simulation. But of course one can equally claim that experience with blindfolds enables infants to acquire a new item of theoretical knowledge: people with their eyes covered can't see anything.

Chapters 9 and 10 get to what ought to be the heart of the matter. For what makes Goldman's account different from other kinds of theory-simulation hybrid is the distinctive position occupied by first-person knowledge of mental states within his approach. He needs to claim, in particular, that first-person awareness of mental states is both prior to and serves as the foundation for reading the mental states of another. And so he does. Chapter 9 argues that self-ascription of mental states occurs via a process of introspective self-monitoring and classification that does not depend at all on theoretical knowledge. And then Chapter 10 argues that the core of our mental state concepts is constituted by an introspective code in the language of thought, which classifies our own internal states on the basis of their introspectible properties, again independently of theoretical knowledge. (These concepts might nevertheless subsequently be elaborated to contain such knowledge, Goldman thinks.)

It is important to see that introspection of one's own propositional attitudes can't play the sort of foundational role in mind-reading that Goldman supposes, unless a substantive body of theoretical knowledge about the causes and interactions of those attitudes can initially be gained from one's own case alone. This is because (as we noted above, and as Goldman himself acknowledges) theoretical knowledge plays an indispensable part in each of the two basic forms of mind-reading (predicting what someone will think or do, and explaining what someone has thought or done). Yet if the theoretical knowledge in question were either innate or acquired by theorizing about the minds of other people then there would no longer be any distinction between Goldman and his theory-theory opponents, provided that the latter allow a place for simulation within their accounts.

How plausible is it that knowledge of the causal roles of the various mental state types should be learned from one's own case, then? Such a view faces multiple difficulties. One is that there is direct evidence against the existence of a faculty for propositional attitude introspection of the sort that Goldman's account requires. The evidence is that humans can be induced to confabulate explanations of their own behavior (attributing to themselves intentional states that they demonstrably don't have) whenever the actual causes of their behavior are opaque to folk psychology; but they do so with just the same apparent immediacy and introspective obviousness as normal (Gazzaniga, 1998; Wegner, 2002; Wilson, 2002). Goldman is aware of some of this evidence, and claims that it is consistent with a "dual method" model according to which people sometimes introspect their mental states and sometimes attribute them via a process of self-interpretation. But when the full range of the evidence is considered, this dual-method account becomes unsustainable, I believe (Carruthers, forthcoming).

Another problem for the idea that the causal roles of the attitudes are learned from introspection parallels one of the main difficulties for child-as-scientist accounts. This is that all normal children acquire capacities to predict and explain the actions of other people at about the same time, irrespective of wide variations in general intelligence. So they must all have acquired the relevant background knowledge needed for simulation to operate successfully by about the same time, also. Yet learning the dependence relationships between a set of observable and introspectible events would surely be a general-learning task if ever there was one. If children were initially learning the causal roles of the attitudes from their own case, we would surely predict wide variations in time-to-acquisition, varying with the general intelligence of the children in question. But this isn't what we observe.

Yet another difficulty concerns the absence of mind-reading in autism. Admittedly, Goldman can (and does) tell a plausible story about how difficulties in empathizing and perspective taking might lead autistic people to be bad at reading the minds of others. But so far as I can see, he must also predict that they should have no difficulty in reading their own minds, unless failures of empathizing always co-occur with deficits of self-monitoring. On the assumption that the introspective faculty is intact in autism, Goldman should predict that autistic people will have no difficulty in articulating and deploying propositional attitude concepts in their first-person use, or in explaining their own actions in propositional attitude terms. Indeed, since many autistic people are especially good at the sort of focused learning and theorizing that gives rise to knowledge of the causal operations of complex systems, one would predict that this ability combined with introspective access to their own mental states should lead to them being especially good first-person mind-readers. But they aren't.

Let me describe just a couple of strands of evidence here. Phillips et al. (1998) tested autistic children against learning-impaired controls (matched for verbal mental age) on an intention-reporting task. The children had to shoot a "ray gun" at some canisters in the hope of obtaining the prizes contained within some of them. Before each shot, the children were asked to select and announce which canister they were aiming at (e.g. "The red one"), and the experimenter then placed a token of the same color next to the gun to help them remember. The actual outcome (i.e. which canister fell down), however, was surreptitiously manipulated by the experimenters (in a way that even adults playing the game couldn't detect). After learning whether they had obtained a prize, the children were asked, "Did you mean to hit that [e.g.] green one, or did you mean to hit the other [e.g.] red one?" The autistic children were much poorer than the controls at correctly identifying what they had intended to do in conditions where there was a discrepancy between intention and goal satisfaction. For example, if they didn't "hit" the one they aimed at, but still got a prize, they were much more likely to say that the canister that fell was the one that they had meant to hit. (Russell and Hill, 2001, were unable to replicate these results; perhaps because their population of autistic children, although of lower average age, had higher average verbal IQs, suggesting that their autism was less severe.)

Likewise Kazak et al. (1997) presented autistic children with trials on which either they, or a third party, were allowed to look inside a box, or were not allowed to look inside a box. They were then asked whether they or the third party knew what was in the box, or were just guessing. The autistic children got many more of these questions wrong than did control groups. And importantly for our purposes, there was no advantage for answers to questions about the child's own knowledge over answers to questions about the knowledge of the third party.

Not only does Goldman's simulationism predict that children must initially be learning from their own case the theoretical knowledge needed to engage in mind-reading, but it also predicts, more generally, that children should acquire the capacity to attribute mental states to themselves before they acquire the capacity to attribute such states to others. But there is quite a bit of evidence that this isn't so. Many relevant studies are reviewed by Gopnik and Meltzoff (1994), who conclude that by the time children become capable of attributing a given type of mental state to themselves, they are also capable of attributing states of that type to other people; and conversely, when children cannot yet attribute mental states of a given type to others, they don't attribute states of that type to themselves, either.

More recent evidence of this sort is provided by Lang and Perner (2002). They not only presented children with the usual third-person false-belief tasks, but also with a first-person intention attribution task. (In the latter, the children had a knee-jerk reflex elicited and were then asked, "Look, your leg moved -- did you mean to do that?") The capacity to attribute intentions to oneself correctly was strongly correlated with the capacity to solve false-belief tasks (and also with the capacity to suppress pre-potent responses, such as selecting the opposite color to the one named by the experimenter), not emerging earlier in the way that Goldman's theory would predict.

Perhaps Goldman could reply to this objection, and to some of the other objections to his simulation theory mentioned above. The unfortunate thing is that he doesn't give us those replies because he is so focused on convincing us that simulation plays some role in mind-reading. What we really needed to be convinced of is that simulation plays the sort of role that he says it does, in which first-person introspective knowledge is primary. It is a pity that he has largely missed the opportunity to make out and defend his case.

Although this review has contained some criticisms, let me emphasize in closing, as I did at the outset, that this is an excellent book. Even if one finds Goldman's overall position unconvincing, as I do, there is a wealth of information contained here; and it is extremely valuable to have the main simulationist position laid out so clearly.


Baron-Cohen, S. (1995). Mindblindness. MIT Press.

Botterill, G. and Carruthers, P. (1999). The Philosophy of Psychology. Cambridge University Press.

Carruthers, P. (2006). The Architecture of the Mind: massive modularity and the flexibility of thought. Oxford University Press.

Carruthers, P. (forthcoming). Divided introspection.

Davidson, D. (1984). Inquiries into Truth and Interpretation. Oxford University Press.

Dennett, D. (1987). The Intentional Stance. MIT Press.

Fodor, J. (1983). The Modularity of Mind. MIT Press.

Gazzaniga, M. (1998). The Mind's Past. California University Press.

Gopnik, A. and Meltzoff, A. (1994). Minds, bodies, and persons: young children's understanding of the self and others as reflected in imitation and theory of mind research. In S. Parker, R. Mitchell, and M. Boccia (eds.), Self-Awareness in Animals and Humans, Cambridge University Press.

Gopnik, A. and Meltzoff, A. (1997). Words, Thoughts, and Theories. MIT Press.

Kazak, S., Collis, G., and Lewis, V. (1997). Can young people with autism refer to knowledge states? Evidence from their understanding of "know" and "guess". Journal of Child Psychology and Psychiatry, 38, 1001-1009.

Lang, B. and Perner, J. (2002). Understanding of intention and false belief and the development of self-control. British Journal of Developmental Psychology, 20, 67-76.

Nichols, S. and Stich, S. (2003). Mindreading: an integrated account of pretence, self-awareness, and understanding other minds. Oxford University Press.

Phillips, W., Baron-Cohen, S., and Rutter, M. (1998). Understanding intention in normal development and in autism. British Journal of Developmental Psychology, 16, 337-348.

Russell, J. and Hill, E. (2001). Action-monitoring and intention reporting in children with autism. Journal of Child Psychology and Psychiatry, 42, 317-328.

Scholl, B. and Leslie, A. (1999). Modularity, development, and "theory of mind". Mind and Language, 14, 131-55.

Wegner, D. (2002). The Illusion of Conscious Will. MIT Press.

Wellman, H. (1990). The Child's Theory of Mind. MIT Press.

Wilson, T. (2002). Strangers to Ourselves. Harvard University Press.

Wolpert, D. and Flanagan, R. (2001). Motor prediction. Current Biology, 11, 729-732.

Wolpert, D. and Ghahramani, Z. (2000). Computational principles of movement neuroscience. Nature Neuroscience, 3, 1212-1217.