Representation Reconsidered


William M. Ramsey, Representation Reconsidered, Cambridge University Press, 2007, 268pp., $91.00 (hbk), ISBN 9780521859875.

Reviewed by Rick Grush, University of California, San Diego

2008.02.01


Representation Reconsidered is a good book that is well worth reading despite some serious shortcomings. It would also make an excellent addition to a graduate or advanced undergraduate course on the philosophy of cognitive science. The topic of the book is the notion of representation as used in the interdisciplinary sciences of the mind, including psychology, cognitive neuroscience, and connectionist and classical AI modeling. Ramsey discerns about half a dozen distinct concepts of representation (for example, representations as indicators, representations as models), and argues that while some qualify as legitimate senses of representation, others do not. This done, Ramsey draws some (tentative) radical eliminativist conclusions about the nature of the human mind, on grounds that current scientific practice seems to indicate that none of the legitimately representational notions of representation in fact characterize human cognition or its neural basis. In this review, I will first provide a succinct summary of the main points of each chapter, and will then turn to a discussion of the merits of the project as a whole.

Chapter One, titled 'Demands on a representational theory', is a refreshingly sensible discussion of what is involved in constructing a good theory of representation. Among other things, Ramsey points out that if the philosophy of psychology has taught us anything, it is that, first, it is almost always possible to treat a system as representational, and second, that it is never necessary (one could in principle explain everything in non-representational vocabulary). The target we should be aiming for is a set of conditions under which positing representations in the analysis of some system's behavior gives us non-trivial explanatory purchase that we would not otherwise get. Ramsey calls this the 'job description' challenge, and it looms large in later chapters, where some, but not all, accounts of representation are argued not to meet that challenge despite their intuitive plausibility.

Chapters Two, titled 'Representation in classical computational theories: the Standard Interpretation and its problems', and Three, titled 'Two notions of representation in the classical computational framework', address computational theories of mind and the various notions of representation that Ramsey discerns within them. The basic lesson of Chapter Two is that there is a standard approach to understanding the way in which computational theories of mind are representational, but that this approach does not, in fact, provide for a legitimate notion of representation. This Standard Interpretation maintains that the symbols, or combinations thereof, employed in such theories correspond to mental representations with propositional content, and that the functional role such a symbol has in the system corresponds to an 'attitude' that, in folk psychological parlance, captures the manner in which that proposition is grasped (belief, desire, etc.). The result is a putative implementation of the core features of folk psychology. I am skipping a lot of detail and nuance, but Ramsey's main argument against the Standard Interpretation is that the functional roles assigned to states by the syntactic engine are insufficient to explain why the content they putatively have is explanatorily relevant. And when one unpacks the arguments in favor of treating these states as representations, those arguments are shown to be circular, requiring the premise that having the states serve as an implementation of folk psychology, or as an explanation of rationality, is desirable.

Chapter Three, by contrast, argues that one can find in some classical computational models two legitimate notions of representation that meet the job description challenge: IO-representations, and S-representations. The former are symbols that are passed between subcomponents of a computational system. They pass the job description test, because understanding them as representing their assigned values is necessary in order to see the subsystem as playing its role in the overall computational system. That is, in order to see the system as one that is breaking the computational task up into subtasks, each of which is handled by a component process, one must see the inputs and outputs of these components as symbols that represent values. I shan't discuss IO-representations further, because it is the notion of S-representation that plays the more important role in the remainder of the book.

S-representations (for 'simulation') are involved when a system employs a model of the target domain for off-line reasoning. There are many classical computational examples (SOAR, ACT) and psychological counterparts (Johnson-Laird's mental models account of reasoning). Ramsey claims that many classical computational models implement this sort of simulation, and that when they do so, they are genuinely representational -- they pass the job description challenge. The core idea is that one gets an explanation of how the system solves problems if one treats these internal states as representations. I will note, without pausing to describe them, that Ramsey has some interesting rebuttals to common criticisms of the model- or simulation-based notion of representation, including criticisms based on the possibility of treating anything as a model, and of treating any model as modeling many things.

Chapters Four, titled 'The receptor notion and its problems', and Five, titled 'Tacit representation and its problems', address kinds of putative representation discernible within connectionist modeling and neuroscience. The receptor notion is the familiar idea that if something is a detector for an object or feature, in the sense that it carries information about the presence of that object or feature, then it represents that object or feature. This idea is prevalent in neuroscience, where neurons that selectively respond to some stimulus are almost universally described as 'representations' of that stimulus, and also in connectionist modeling, where sets of units that are able to detect the presence of some feature coded in the input units are often described as 'representing' the discerned feature. Ramsey's analysis here hinges on pointing out that treating such entities as representations fails the job description challenge because it provides no explanatory benefits over and above what one gets, just as easily, by describing them as causal mediators. Worse, to treat them as representations seems to license calling anything that is a causal mediator a representation, so that, for example, immune system responses count as representing diseases. Ramsey treats at length one response to this sort of claim, one he finds to be best worked out by Dretske. On this approach, being an indicator, while necessary, is not sufficient for representational status. It must also be the case that the indicator is used by some system in the production of some behavior. The idea is that this might allow such states to pass the job description challenge. Ramsey's rebuttal to this teleological approach is quite detailed and nuanced, and resists quick summary. Particularly interesting, though, is an example brought up at the end of the chapter of how treating a receptor as a representation (regardless of whether or not there is a consumer) can be not merely superfluous, but actually misleading and counterproductive.

Chapter Five focuses on the notion of tacit representation. While there are a variety of cases here, the main thrust of Ramsey's analysis begins with the argument that tacit representation is analogous to receptor representation, except that where the latter sees 'representation' in entities that are reliably caused by certain things, the former sees 'representation' in entities that reliably cause certain things, as embodiments of abilities or know-how. Once this analogy is in place, arguments similar to those leveled at the receptor notion come into play. To treat such entities as representations would render the notion of representation nearly ubiquitous, since dispositions are to be found everywhere; and we lose nothing by way of explanatory leverage by dropping talk of representation in such cases and speaking instead of dispositions.

Chapter Six, titled 'Where is the representational paradigm headed?', does several things. There is a brief discussion of another player in the cognitive architecture game, dynamical systems theory (DST). After describing the DST approach, Ramsey argues that it does not support any legitimate notion of representation. The strategy here is to canvass the major proposals for treating one or another aspect of a DST model as representational, and then to argue that each such candidate is in fact a version of one of the sorts of ersatz representation diagnosed in previous chapters.

The second major topic of Chapter Six is the lessons to be drawn from the previous chapters. Ramsey paints a picture of the development of the sciences of the mind in the 20th century as marked by a few major paradigms: a largely non-representational behaviorist paradigm, followed by an explicitly representational cognitive paradigm, and finally a (currently developing) connectionist/computational neuroscience paradigm. While this last paradigm still freely uses the representation-talk of its predecessor, Ramsey argues that this is a matter of conceptual inertia, not a sign that the notion of representation in fact plays a helpful or legitimate role in these theories (as has been argued in Chapters Four and Five). He provides by way of illustration a lesson from the history of science, the persistence of the notion of the celestial spheres in heliocentric models of the solar system after the geocentric model had been abandoned. And while Ramsey is officially cautious, holding that it is an empirical issue whether the connectionist/computational neuroscience paradigm or the classical computational paradigm will win out, and hence an empirical issue whether our best theory of the neural implementation of cognition will reveal cognition to be representational or not, it is clear that Ramsey is putting his money on connectionism and anti-representationalism. Thus the eliminativist thrust of the book.

I turn now from summary to evaluation. Anyone interested in the topic of representation as employed by the sciences of the mind should definitely read this book. Not because it is perfect or, in its largest aim, even successful. But rather because there is a lot of excellent philosophical discussion and argument addressing various issues that are central to the topic, and there are, as well, useful summaries of the core relevant literature. The problem is that the overall terrain is not organized in quite the way Ramsey thinks it is, and so while for the most part his discussion of individual topics is excellent -- and should be read by anyone who wants to responsibly engage with the literature -- not all the pieces are there that should be, and the ones that are there don't exactly fit together as advertised. I will expand on each of these points in turn.

The first point is that, for the most part, the treatment of individual topics is very solid. For example, the discussion in Chapter One about the relationship between a scientific and a pre-scientific concept of representation is excellent. To take another example, Ramsey's critical discussion of the Standard Interpretation of classical computationalism not only usefully rehearses previous treatments of that topic, but synthesizes and builds upon them. My personal favorite is his discussion of S-representation. These individual discussions are of a consistently high quality, and are the reason why this book should be read by anyone interested in the topic.

The second point is that while the treatment of individual topics is excellent, it is arguable that the overall landscape of the issue is not structured in exactly the way Ramsey thinks it is. In Chapter One it is correctly pointed out that in order to make headway on the issue of representation, one must focus attention on how the notion is employed in specific cases, as opposed to assuming that different models and domains are dealing with a univocal concept. But Ramsey's way of distinguishing the specific cases to which attention is to be separately applied runs together two orthogonal classification schemes. One is the cognitive modeling paradigm employed, and the other is the kind of (putative) representation posited. So, for example, Chapters Two and Three are on the topic of classical computational architectures (one of the modeling paradigms), and the construction and employment of an internal model (what I am labeling a kind of representation) is taken to be something that is proprietary to that paradigm; and the discussion of tacit representation is in the chapters on connectionist modeling. And this strikes me as a mistake. While it is arguably true that the kind of representation involved in the Standard Interpretation is proprietary to computational models, the other candidates for representational status seem not to be tied to a particular paradigm. For example, the issue of tacit representation comes up in classical computational architectures (as Ramsey's own discussion brings out, see Section 5.2.2). And to take another example, there are non-classical paradigms that employ internal models (as Ramsey also notes, more on which below).

At one level, this elision of the issues of modeling paradigm and representational genus is more or less harmless. Ramsey's discussion of each genus is excellent, and the fact that each such discussion is found in a chapter that is organizationally linked to one or another modeling paradigm doesn't significantly affect that excellence.

But at another level things are not so innocuous. The large-scale project of the book, as the title suggests, argues for an eliminativist conclusion to the effect that there is reason to suppose that the current state of the art in the cognitive and neural sciences reveals that the human psychological system is non-representational. And the two-sentence version of the argument is that the current best science says that the mind/brain is best understood as a connectionist system, and connectionist systems don't employ any states legitimately thought of as representational. And the reason for this last premise is that all of the representational genera that were discussed in the chapters organizationally linked to the connectionist paradigm were revealed not to be legitimately representational, and the only genera that were found to be genuinely representational were discussed in the chapters organizationally linked to the classical computational paradigm. Now even if it were true that the current state of the art in the sciences of the brain and mind strongly implicated connectionism, the eliminativist conclusion would follow only if the candidate representational genera lined up with modeling paradigms in the way Ramsey suggests. And as noted briefly above, there is prima facie reason to think that in fact they are orthogonal issues.

This leads to the third issue: the missing pieces. The two major modeling paradigms Ramsey discusses are classical computational and connectionist. There is a brief discussion of a third player, DST, in the final chapter. But there is a fourth paradigm that Ramsey doesn't directly address at all, namely modeling based on modern control theory (see Grush 1995, 1997, 2004; Eliasmith and Anderson 2003; for excellent discussion pitched specifically at the level of debate between modeling paradigms, see Eliasmith 2003). Many readers may be familiar with this paradigm only via its most salient feature, the notion of the forward model or emulator. Proponents of this modeling paradigm have a good claim to biological plausibility and explanatory utility, and are proposing models that are quite unlike classical computational models, yet employ a notion of S-representation that, on Ramsey's own analysis, is legitimately representational. I don't want to dwell on this issue, since it would amount to blowing the horn of my own favorite approach to understanding some key neurocognitive phenomena, but I will simply point out that, ironically, I am happy to enlist Ramsey as a very helpful if unwilling ally in my own markedly non-classically computational, and pro-representational, approach to understanding many brain functions.

What is important in this context is not so much the fact that there is a fourth paradigm that Ramsey effectively ignores (I say 'effectively' because he does at least mention a few examples, e.g. Grush 1997, 2004 and Ryder 2004, even if only to say that he isn't going to engage with them). Just how central or promising that paradigm is at this time is, I suppose, an unclear matter. Big or small, it presents Ramsey's argument with a significant unmet challenge. What is more important, I think, is what strikes me as an unhelpfully inaccurate vision of the goals of and prospects for theoretical cognitive neuroscience -- though to be fair this inaccurate vision is widespread and not unique to Ramsey. The unhelpful vision is one that is aided and abetted by Ramsey's invocation of lessons from the history of science. In a great many historical cases, scientific progress has been marked by paradigm shifts, and has been a matter of articulating the correct theory for a domain, and revealing it as correct and its competitors as incorrect. Chemistry turned out to be right, and alchemy wrong, for instance. And while this is how things are commonly pitched in theoretical cognitive science -- as a battle between behaviorism and cognitivism, or between computationalism and connectionism, say -- I think this way of looking at matters is unhelpful and misleading. The fact is that there are many hundreds, perhaps many thousands, of interesting psychological and cognitive phenomena, and it is an open possibility (to my mind a near certainty) that each of the four paradigms I discerned will turn out to be the most explanatorily useful approach for some substantial subset of these phenomena. And if this is the case, then it could well turn out that, for example, the brain executes some tasks that it does in a representational way, and others not. And it may even be the case that for a single given task, the brain may have more than one means of addressing it. If there is a piece of conceptual inertia that we would do well to get beyond, my own bet is that it is not the notion of representation, but rather the notion that what we are looking for is the correct information processing architecture of the brain; or to put it in terms with a deeper historical reach, that what Newton did for physics can be done for psychology. But let me reiterate that even if these critical remarks aimed at the large-scale aim of the book are correct, this shouldn't take this book off of anyone's to-read list.

References

Eliasmith, C. and C. H. Anderson (2003). Neural Engineering: Computation, representation and dynamics in neurobiological systems. Cambridge, MA: MIT Press.

Eliasmith, C. (2003). Moving beyond metaphors: Understanding the mind for what it is. Journal of Philosophy 100(10):493-520.

Grush, R. (2003). In Defense of Some 'Cartesian' Assumptions Concerning the Brain and Its Operation. Biology and Philosophy 18:53-93.

Grush, R. (2004). The emulation theory of representation: Motor control, imagery, and perception. Behavioral and Brain Sciences 27:377-442.

Hurley, S. (forthcoming). The shared circuits model: How control, mirroring, and simulation can enable imitation, deliberation, and mindreading. To appear in Behavioral and Brain Sciences.

Oztop, E., Wolpert, D., & Kawato, M. (2005). Mental state inference using visual control parameters. Cognitive Brain Research 22(2), 129-151.

Ryder, D. (2004). SINBAD neurosemantics: a theory of mental representation. Mind and Language 19(2):211-240.

Wolpert, D. M., Doya, K., & Kawato, M. (2003). A unifying computational framework for motor control and social interaction. Philosophical Transactions of the Royal Society of London, Series B, 358, 593-602.