Reconstructing the Cognitive World: The Next Step


Michael Wheeler, Reconstructing the Cognitive World: The Next Step, MIT Press, 2005, 340pp., $20.00 (pbk), ISBN 0262731827.

Reviewed by Alistair Welchman, University of Texas, San Antonio

2008.02.02


This book is a sustained philosophical analysis of the most recent developments in cognitive science and artificial intelligence (AI). Wheeler develops an interesting account of the role of representations in cognitive science that mediates between the traditional position (that cognitive science is constituted by a commitment to modeling the mind in terms of computational rules manipulating linguaform symbolic representations) and the various forms of rejectionism that claim cognitive science is possible without representations. All this is done in the frame of a historical narrative in which traditional symbolic cognitive science and AI (GOFAI -- good old-fashioned artificial intelligence) are shown to be residually Cartesian in orientation, and therefore open to an attack parallel to the attack Heidegger makes on Descartes in the first division of Being and Time. Despite some flaws in the historical framing, the book is required reading for all philosophers of cognitive science, and should be of interest to many practicing cognitive scientists too.

Unlike Hubert Dreyfus (to whom his reading otherwise owes a lot), Wheeler does not use Heidegger to criticize AI tout court. Rather, he argues that new developments in AI -- especially those emphasizing the fact that cognitive agents are typically embodied in a particular way and embedded in a particular environment (the 'embodied, embedded' movement) -- actually constitute the realization of a Heideggerian critique of GOFAI, a properly Heideggerian AI. And unlike the 'enactive' school of Maturana and Varela, Wheeler's use of European philosophy does not involve abandoning the notion of representation. Instead, following Andy Clark, he develops a conception of 'action-oriented' representations, different from the symbolic representations of GOFAI, but representations nonetheless (9). These, he argues, will feature in the explanation of a large fraction of cognitive abilities, human, animal and artificial.

It is not hard to see why old-fashioned AI has a kind of Cartesian basis: Descartes argues that cognitive processes are radically distinct from physical processes; indeed the two take place in different substances. GOFAI took what Descartes identified as cognitive and argued that just this conception of intelligence could (as a result of technical advances in proof-theory and the development of digital computers) now be naturalized and simulated artificially. Thus: when I act intelligently, I form a representation of how the world is, use it to plan a course of action, and then issue instructions to my body to execute the plan. The principles that explain how the world (including my body) operates are drawn from physics; but the principles that explain how my thinking works are quite different (drawn from the operation of formal logical systems). In other words, AI substitutes an explanatory dualism for Descartes' original substance dualism.

Wheeler's account of this quite familiar point is much more nuanced and textually sensitive than any other in the literature. Wheeler defends the level of detail in his account of Descartes by raising issues of historical accuracy, for instance the possibility that 'received interpretations of Descartes … may reveal themselves to be caricatures' (15). But it is not immediately clear why this matters. If the aim of constructing a notional doctrine of Cartesian psychology is to show how it has been a (pernicious) influence on orthodox cognitive science, then historical accuracy about Descartes is irrelevant: orthodox cognitive scientists may in fact have been influenced by a caricature, and it may be that caricature that is responsible for their allegedly degenerating research program.

Wheeler's treatment of Heidegger is much more carefully integrated into the conceptual argument, and his interpretation of Heidegger deserves to be taken seriously. The central hermeneutic claim maps Wheeler's three-way division of cognitive systems in general onto a trichotomy of Heideggerian analyses: the merely present-at-hand (Vorhandensein) corresponds to a traditional GOFAI symbolic and computational system; the ready-to-hand being (Zuhandensein) of equipment corresponds to a purely reactive system directly linking perception with action; and Clark's and Wheeler's own kind of system, basically dynamic, but with action-oriented representations, corresponds to the un-ready-to-hand.

This is a bold interpretive gesture, but, when unpacked, a clear and stimulating one. Before evaluating it, though, some explanation of terms is probably necessary. When I am planning what to write next in this review, I am, according to Wheeler, probably acting in a way consonant with GOFAI models: I must use full-blown symbolic representations because I am thinking explicitly about things that are not perceptually present at all (Wheeler's book is closed!), and I am thinking about them in (I hope) a rule-governed way. Accordingly, I am acting more or less as a Cartesian subject would, manipulating objective representations of things as (in Heidegger's terms) merely present-at-hand, i.e., not linked up in any intrinsic way to my needs and interests. On the other hand, when I drive to work, I am in what Dreyfus calls 'smooth coping' mode, in which my practiced skill with the task enables me to perform it without thinking at all, and I can instead rely entirely on the automatic and unconscious connections across my sensorimotor system. Here my Cartesian self has disappeared, and I treat the things I interact with as instruments relative to my interests, as what Heidegger calls equipment, the ready-to-hand.

Something Wheeler wants to emphasize is that driving to work is still a cognitive skill of some complexity: it involves navigation, object avoidance, goal seeking, stopping at red lights, etc. And yet, if Wheeler et al. are right, it can be explained without the need for representations or explicit computations manipulating representations. Is there evidence for this? Well, some similar cases of navigation in biology and robotics are suggestive. Ants, for instance, exhibit quite complex navigational abilities without constructing full maps: instead they drop pheromone markers, and effectively offload representation onto the world itself. The situated robotics movement has been inspired by this biological research to produce robots with similar properties. For example, Rodney Brooks' Genghis exhibits remarkably efficient predatory behavior, but uses only a small array of infrared sensors, located at the front of its head, to do so. Genghis operates by following these simple rules: (1) if any of the sensors are activated, then move forward for a few seconds; (2) if more left than right sensors are operative, take smaller steps with the left legs (and similarly for the right). The effect of these simple rules is that any human being passing in front of Genghis will wake it up by stimulating its infrared sensors with his or her body heat; if the person then tries to move, Genghis will follow, turning left when the prey turns left and right when it turns right.
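Read as a control policy, the two rules just quoted can be sketched in a few lines of code. The sketch below is purely illustrative: the sensor grouping, the step-length values, and the command format are my own assumptions, not Brooks' actual implementation; the point is only that the policy maps sensor readings directly to motor commands, with no stored world model in between.

```python
# Illustrative sketch of Genghis-style reactive rules (hypothetical
# details; not Brooks' actual code). Sensors are booleans: True = the
# infrared sensor currently detects a heat source.

def step_command(left_sensors, right_sensors):
    """Map raw infrared readings to a simple gait command.

    Rule 1: if any sensor fires, move forward.
    Rule 2: if more left than right sensors fire, take smaller steps
    with the left legs, turning the robot toward the stimulus
    (and symmetrically for the right).
    """
    left_hits = sum(left_sensors)
    right_hits = sum(right_sensors)

    if left_hits + right_hits == 0:
        # Nothing detected: stay dormant.
        return {"move": False, "left_step": 0.0, "right_step": 0.0}
    if left_hits > right_hits:
        # Stimulus to the left: shorten left-leg steps to turn left.
        return {"move": True, "left_step": 0.5, "right_step": 1.0}
    if right_hits > left_hits:
        # Stimulus to the right: shorten right-leg steps to turn right.
        return {"move": True, "left_step": 1.0, "right_step": 0.5}
    # Stimulus straight ahead: walk forward evenly.
    return {"move": True, "left_step": 1.0, "right_step": 1.0}
```

Nothing in the policy resembles a map or a plan; the "turning toward the prey" behavior falls out of the asymmetric step lengths interacting with the body's geometry and the environment.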

Genghis works simply because it is embodied and embedded. The structure of Genghis' body is essential to its operation because its sensors are placed in the direction of locomotion. If they weren't, it would go the wrong way when it detected something. Similarly, Genghis exploits certain features of its environment in a very special way: its sensors detect only heat in the 10µm range (corresponding to the heat emitted by human beings). If its environment were not structured in just the right way, Genghis would not function.

This is a simple example, but it illustrates the essential way in which Wheeler's historical argument operates. What Heidegger shows, among other things, is that in absorbed coping with equipment we are not Cartesian subjects striving to represent a world of interest-neutral (merely present-at-hand) objects. Accordingly, cognitive science should not try to model such behavior as if it were Cartesian.

Here Wheeler brings up an important methodological point about the interface between a phenomenological approach, such as Heidegger's, and cognitive science. A phenomenological analysis must clearly operate at the macro level, the level of the whole agent. The obvious relevance of phenomenology to cognitive science is therefore this: effectively executed, it provides a clear picture of the cognitive phenomenon to be explained (123). Wheeler nevertheless wants to go beyond this by using Heidegger's phenomenological analysis, not just to license a certain conception of the explanandum of cognitive science at the whole agent level, but also to warrant a micro level claim about the kind of non-whole-agent causal mechanisms that can count as an explanation.

In particular, Wheeler wants to claim that Heidegger's analysis of the ready-to-hand constrains explanations of mechanism by excluding implicit rules and symbolic representations. Following McDowell, Wheeler argues that there must be an 'intelligible interplay' between micro and macro levels (133). Surely an effective causal explanation of, for example, smooth coping in terms of implicit rules operating on representations would be exactly such an intelligible interplay, so the question is really an empirical one that can't be settled a priori. This would be of less concern were it not for the fact that Wheeler does not mention modern linguistics, which appears to provide an exact counter-example to his claim. Linguistics shows (arguably with greater success than any other aspect of cognitive science) that it is possible to explain our possession of a skill that has a phenomenology of smooth coping (linguistic competence) on the basis of the operation of unconscious rules and representations.

Wheeler's textual treatment of Heidegger is, I think, also somewhat problematic here: there is little doubt that Heidegger would have been as appalled by Wheeler's brand of cognitive science as he was by cybernetics (of which the 'embodied, embedded' movement is a descendant). Wheeler's claim that Heidegger endorses Wheeler's own view of the relation of philosophy to science (in which philosophy critically articulates the regulative principles of a science so as to guide it away from a degenerating research program) hangs on little more than one sentence in §10 of Being and Time where Heidegger says 'the scientific structure of [the human sciences] … needs to be attacked in new ways'. This hardly licenses his claim that those who think Heidegger is not a naturalist about the mind are 'just wrong' (285). Heidegger consistently maintains that the question of the Being of human beings is hermeneutic or interpretive in nature. Nowhere does he countenance a naturalistic and causal answer to this question along the lines of modern cognitive science. Wheeler of course does not need to claim any of this, since his view can be saved simply by saying it is inspired by Heidegger rather than trying to show that Heidegger must have held it. Indeed Wheeler is right that his is the most convincing direction one would have to go in order to naturalize Heidegger and effect contact between Heidegger's work and cognitive science. But one would have thought that the project of naturalizing Heidegger ought to presuppose that Heidegger is not already a naturalistic thinker.

Wheeler's third, and, for him, most important, class of cognitive system does not map quite so clearly onto Heidegger as the former two (i.e., the mappings of GOFAI onto the present-at-hand and smooth sensorimotor arcs onto the ready-to-hand). For Heidegger there are only two categories, in his special sense, of modes of Being of non-human beings. Wheeler relies on Dreyfus' interpretation of Heidegger to extract a third from §16 of Being and Time where Heidegger gives a number of analyses of situations in which equipment malfunctions. For Heidegger these analyses show how we can gain phenomenological access to the being of equipment when our everyday dealings with it are characterized precisely by its inconspicuousness: the ready-to-hand nature of equipment becomes conspicuous just when the equipment does not work properly, is un-ready-to-hand. If the road I usually take is closed, I have to pay conscious attention for a while to figure out what route to take. I may have to consult a map (etc.) and in general deploy resources beyond a subtle but merely reactive sensorimotor hookup.

What Wheeler wants to infer from this kind of analysis is the existence of a distinctive third kind of cognitive system with representational components that fall short of the full symbolic representations of GOFAI. The task is particularly important because Wheeler presents the other two categories as the ideal ends of a spectrum in which his informal representations would occupy most of the space. Despite the crucial nature of this idea, however, Wheeler only gives one worked-through example, a robot whose artificial neurology is based on that of a housefly, from Franceshini et al. (1992). This robot performs a standard task set (navigating to a target light source while avoiding interposed obstacles) using only relative motion information computed on the previous time slice. The authors describe it, using representational terminology, as having a 'snap map', but are at pains to emphasize the differences between this kind of representation and a traditional (philosophical) 'symbolic' representation: the snap map is transient, centered on the light sensitive array (i.e., not objective, but egocentric) and intimately, even 'reactively,' tied to a corresponding 'motor map' (i.e., it is action-oriented). This is not much to go on. It is certainly plausible to think that these intermediate forms of representation will be action-oriented (as Wheeler claims) given the demands of evolution. But Wheeler could certainly have been more explicit.
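What a transient, egocentric, action-oriented representation might look like can be gestured at in code. The sketch below is a loose illustration in the spirit of the 'snap map': the sector discretization, the update rule, and the motor-selection heuristic are all my own simplifying assumptions, not the published architecture of Franceschini et al.'s robot.

```python
# Hypothetical sketch of an action-oriented "snap map" (illustrative
# assumptions; not the Franceschini et al. architecture). The map is
# egocentric (bearings relative to the robot's own heading), and it is
# rebuilt from scratch every time slice, so nothing accumulates into a
# persistent, objective world model.

def update_snap_map(obstacle_bearings, n_sectors=12):
    """Build a one-tick egocentric map: one repulsion weight per
    angular sector, incremented for each detected obstacle."""
    sector_width = 360 / n_sectors
    snap = [0.0] * n_sectors
    for bearing in obstacle_bearings:
        snap[int((bearing % 360) / sector_width)] += 1.0
    return snap

def motor_command(snap, target_bearing, n_sectors=12):
    """'Motor map' step: steer toward the obstacle-free heading
    closest to the target light source."""
    sector_width = 360 / n_sectors
    candidates = [i * sector_width for i in range(n_sectors)]
    free = [h for i, h in enumerate(candidates) if snap[i] == 0.0]
    if not free:
        free = candidates  # nowhere free: head for the target anyway
    # Smallest circular angular distance to the target bearing wins.
    return min(free, key=lambda h: abs(((h - target_bearing + 180) % 360) - 180))
```

The point of the sketch is that the 'representation' here is nothing like a GOFAI symbol structure: it is discarded each tick, indexed to the agent's own body axis, and meaningful only in its direct coupling to a motor decision.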

This is especially so because much of the rest of the book is given over to defending his 'minimally representationalist' (219) position from more radical anti-representationalist views. This part of the book is argumentatively the densest, but follows Wheeler's earlier published work (Wheeler & Clark 1999; Wheeler 2001) very closely and often word for word. There is also something slightly artificial about the presentation here because the actual antagonists (e.g., Maturana & Varela's 'enactive' paradigm, Dreyfus' interpretation of Merleau-Ponty) are not engaged, and the argument takes place mostly on conceptual terms dictated by Wheeler. It is nevertheless very interesting. Wheeler gives -- and then rejects -- two reasons for thinking that even minimal representations will not figure in cognitive explanation. Both stem from the ultimately evolutionary character of natural intelligence. As Wheeler mentions, but does not to my mind emphasize enough, evolution produces 'cheapskate' solutions to problems (198). This drive to minimize resource outlay is what lies at the bottom of the dependency of many intelligent biological systems on the corporeal and environmental situation, i.e., their constitutive embodiedness and embeddedness. Wheeler describes this situation (the first objection) as one of 'non-trivial causal spread' (200ff) because the causal factors required for explaining the behavior of such an intelligent system are spread over brain, body and world. In this situation, the objection runs, neural (inner) representations cannot be explanatory, since causal responsibility in such systems is not located only inside the brain.

Wheeler's view (the argument is too delicate for easy summary) is that it is ultimately possible for a (non-trivially) causally spread system to possess minimal inner representations, but only in the presence of what he calls 'homuncularity' (218). This feature is present in systems whose internal organization decomposes into a hierarchically structured set of subsystems connected in part by the communication of information. There are obvious problems of circularity in defining a representational system in terms of concepts like information and communication, problems that Wheeler acknowledges but doesn't solve. But the second radical objection to minimal representation is that, in fact, naturally occurring evolutionary systems exhibiting intelligence will not in general possess homuncularity, and for exactly the same reason as that for which they are likely to be causally spread: the deeply cheapskate nature of evolutionary processes. Wheeler is sanguine about the issue, while acknowledging that it is ultimately empirical. However he himself cites some of the highly surprising recent evidence that suggests that evolutionary processes will avoid homuncularity even in circumstances highly conducive to it: Adrian Thompson's experiments show that, even when presented with a medium already hierarchically organized into communicating units (a silicon chip), applying an evolutionary process resulted in a design that bypassed these units, and instead exploited uncontrolled (still unknown) electrophysical properties of the material substrate. If this is what can happen in artificial evolutionary circumstances, it becomes less plausible to think that homuncularity is a general property of naturally evolved systems. And this in turn suggests that the radical anti-representationalists may still be right; or at least more right. 
In any case, Wheeler is to be commended for starting to move what are often inappropriately transcendental arguments (e.g., that intelligent action is simply inconceivable without representations) back into the empirical sphere where they belong. It remains to be seen just what can be achieved by non-representational systems; but a dose of Wheeler's philosophy might just help stimulate some novel hypotheses.