Andrew Brook, Kathleen Akins (eds.)

Cognition and the Brain: The Philosophy and Neuroscience Movement

Andrew Brook and Kathleen Akins (eds.), Cognition and the Brain: The Philosophy and Neuroscience Movement, Cambridge University Press, 2005, 440pp., $90.00 (hbk), ISBN 0521836425.

Reviewed by Fred Adams, University of Delaware

This book bills itself as an "up-to-date and comprehensive overview of the philosophy and neuroscience movement, which applies the methods of neuroscience to traditional philosophical problems and uses philosophical methods to illuminate issues in neuroscience." (front inside cover)  It is not exactly that, though it does apply neuroscience to problems in philosophy and vice versa. It is a collection prepared from a basic set of papers (with perhaps a few more invited papers to fill gaps) that came from a successful conference on philosophy and neuroscience at Carleton University organized by Akins and Brook in 2002.  The papers are of high quality (so it must have been a very good conference, indeed).  However, while there are papers on many topics, it is hardly a "comprehensive overview," at least not if one is expecting a review of the state of the art of philosophy and neuroscience today.  Instead, it is standard fare of philosophers putting forth and defending their favored views on current topics at the intersection of philosophy and neuroscience, and it gives what Brook and Mandik in their introduction call a "snapshot" of work currently going on (21).

Philosophers of mind have long been interested in neuroscience. In the early days, we were interested in finding data upon which to base the identity theory of mind and brain.  And in this volume Paul Churchland gives impressive support for the view that human color qualia are opponent-cell coding triplets in a neuronal instantiation of a Hurvich network (330).  In typical Paul Churchland fashion, we are treated to a comprehensive summary of opponent-processing accounts of color vision, predictions of what the theory should explain, and then do-it-yourself experiments with color after-image phenomena predicted and explained by the theory.  Whether colors are real, or whether human color perception corresponds to the actual physical basis of color in reality, is not the topic of Churchland's essay, but it is the topic of Zoltan Jakab's.  Jakab argues persuasively that human color similarity space cannot accurately represent the physical similarity of the color stimulus (357).  In Jakab's paper too, we are treated to much of the science of color and color perception, as well as to the philosophical arguments about the reality of color and the accuracy of human color perception.

In more recent studies of neuroscience by philosophers of mind, one cannot help being impressed by fMRI pictures of the brain lighting up when performing various cognitive tasks.  It would be easy to be misled by these eye-catching color images into thinking that one may locate exactly where some cognitive process takes place.  Valerie Hardcastle and C. Matthew Stewart attempt to save us from this illusion by reminding us, among other things, that "brain plasticity and concomitant multifunctionality belie any serious hope of localizing functions to specific channels or areas … " (28). They look at several different cognitive models of the brain and find that on none of them does it really make sense to ask of a specific area of the brain "What does this area really do?" (36).  As they rightly point out, it depends on what the other related areas of the brain are doing.

Even if some localization studies may prove to be misleading or illusory, no recent book on philosophy and neuroscience would be complete without discussions of consciousness and the brain.  This volume includes two essays that address this connection. In his paper, Jesse Prinz presents and defends what he calls his "neurofunctional theory of consciousness."  Prinz advertises his theory as an answer to the What, Where, How, When, Who, and Why problems of consciousness (381-2), viz. what are we conscious of, where in the brain does consciousness arise, how do certain states become conscious, when or under what conditions do they do so, who may attain the state of being conscious (how far down the biological scale does it extend), and why -- what function does consciousness serve?  After looking at data from recent vision science on levels of visual processing in areas V1-V5, Prinz settles on Jackendoff's answer to the "what" question: we are conscious of combinations of local features of stimuli (385).  Consciousness implies, but is not implied by, "intermediate level processing" (385). Prinz thinks consciousness implies attention (386) -- a "selection process that allows information to be sent to outputs for further processing" (387).  Consciousness arises when intermediate-level perceptual representations are made available to working memory via attention (388) -- answering the "how" problem.  This is his AIR theory (attended intermediate-level representations). Of course, this answers the how question only if attention does not already presuppose conscious activity, and Prinz does not show that it does not.  As for the "why" problem, "Consciousness serves the crucial function of broadcasting viewpoint-specific information into working memory" (389), where decisions are made, actions are chosen, and biographies and memories are produced.  And for the "who" problem, he sees no obvious answer to how far down the biological scale consciousness goes (391).
His theory thus offers a sufficient but not necessary account of consciousness, and it still lacks a neuro-computational account of attention.

In Andrew Brook's paper, we get an explanation of why we should not be overwhelmed by "mysterians" or "anti-cognitivists" about consciousness and the brain. Brook thinks that the mysterians' arguments are "unproductive" (they don't shed light on consciousness) and have "deep incoherence" built into them.  After giving a list of properties typical of consciousness, Brook finds the anti-cognitivists' (anti-representationalists') attempts to explain these fundamental properties "fairly dismal" (408).  As for the zombie challenge -- "For any form of representation and any representing system, couldn't one imagine the system doing all the wonderful cognitive things that it does without consciousness?" -- Brook's reply is that "we need to examine whether there is anything to it [this challenge]" (409).  His answer: "not much." He notes that zombie arguments try to establish a split between consciousness and cognition (410). Going in the other direction -- consciousness with no brain activity -- Brook calls such arguments "Rimms," and he finds these harder to address.  He also recounts the explanatory gap arguments of Levine, Nagel and Jackson, but maintains that these arguments don't work (414).  Brook goes through all of the arguments and dismisses them very quickly, with one-sentence or in some cases partial-sentence answers.  Mostly I agree with his answers, but a first-time reader would learn little from such short replies.  To "gaps" and zombies, Brook maintains that "once representations of the right kind are in place, there ain't nothin' left over to be left out." "Even the zombie itself will be convinced that it is conscious.  So what could possibly be missing?" (416). He thinks we can't even form an idea of what could be missing, "so the philosopher's zombie is not fully imaginable … hence not possible in any relevant sense of possibility" (416). Yet his solution to the zombie and Rimms problems is asymmetrical.
He thinks zombies are impossible, but Rimms are logically possible (though not physically actual) (420).  Much of his chapter is really about whether there is a nomic connection between consciousness and brain/behavior.  He wants to say that the right behavioral (and brain) events imply consciousness, but not vice versa.  Of course, on at least some accounts (even naturalized accounts) of consciousness this is not true: physically identical systems may not be psychologically identical even if they make all the same bodily movements.  Brook seems to want to rule out such possibilities, but it is not clear to me that he has.  Nonetheless, he makes a good case for not being overwhelmed by the standard fare of examples and arguments designed to introduce a wedge between consciousness and the brain.

A somewhat new but related interest in neuroscience is phenomenology and what to do about phenomenal reports.  Are they accurate representations of what is happening in the brain?  In her paper, Victoria McGeer argues for the significance of the phenomenal reports of autistics.  First person reports may yield data unavailable from the third person perspective, such as the extent and impact of autistic children's sensory abnormalities.  For instance, why don't autistics like to make eye contact? Possibly because it is uncomfortable for them; they may be perceptually hypersensitive, since they report that their "eyes are on fire," or that eye contact is "frightening" or "unbearably difficult."  Challenges to using such reports include the worries that they are unreliable and that there is no good basis for explaining them neurologically.  For example, one worry is that autistics' self-reports are inaccurate because of improper functioning of the cognitive mechanism for tracking one's own mental life (105).  Another worry is that autistics lack a developed TOM module (Baron-Cohen); McGeer counters that, if this were true, self-reports could tend to confirm it by exhibiting a global introspective disability.  Interestingly, she compares the unusual first person testimony of autistics to Hume's problem with reports of miracles (107).  She also points out that, as in studies of synaesthesia, there are ways to test whether the subjective reports are valid (109), by checking self-reports of perceptual qualities against objective abilities involving those same qualities of experience and perception.  Second, McGeer offers an alternative model of self-reports: a Reflexive-Expression Model.  The expression model: (1) predicts no tracking mechanism for internal first-order states (and hence no damage to such a module in autistics); (2) rules out a certain kind of error in self-reports:  no mistakes in sensory reports (how they are vs. 
how they seem); and finally, (3) suggests that, in some sense, first person reports cannot be doubted (when qualified in the right way).  So McGeer thinks we should accept the self-reports of autistics as more or less reliable -- given that, on her model, they are expressions rather than reports -- and that any deficit in autistic reports is due to developmental abnormalities causing a lack of social-normative expertise with intentional language.  While this is an interesting line to pursue, McGeer does little here to demonstrate that this expressivist reading is the best way (based on science or neuroscience or clinical practice) to interpret the first-person reports of autistics. The remark quoted earlier -- an autistic saying "my eyes are on fire" upon looking into the eyes of another -- may well be best treated as an expression rather than a report, but on its face it has the definite character of a report on an inner process.  More work would need to be done to explain away this strong appearance.

In a different way, Evan Thompson, Antoine Lutz, and Diego Cosmelli also defend the phenomenal reports of subjects.  They see these too as a way of learning about the brain, about "physiological processes that would otherwise remain opaque" (46).  They say they are looking for "bridges" between first person reports of subjective experience and third person reports of neurobiology, but propose in this book only a research program for where to look for the bridge (89). At times they puzzle over notions such as finding neural correlates of consciousness, and they reject any notion of a match between conscious content and a level of neuronal activation as a kind of "category mistake" (82).  Still, they see "neurophenomenology" as a blend of traditional phenomenology and the "enactive approach in cognitive science" (41). As they put it, "Like phenomenology, the enactive approach … emphasizes that the organism defines its own point of view on the world" (42). These themes are continued in the papers of Grush, Jacob, and Mandik. As one might expect, these chapters too cover many different aspects of conscious experience and labels for it.

How does the brain produce in us the phenomenal experience of time? Sean D. Kelly raises the matter in his contribution.  How do we experience objects persisting in time, moving through time or space, or the passage of time? Kelly says his goal is to "show you that there is a problem here" (209), to puzzle you about something you had not seen as puzzling.  In that, I think he succeeds.  Kelly examines several attempted explanations -- the "specious present" theory, the "retention" theory, and so on -- and rejects them all (211).  While he succeeds in making the phenomenon puzzling, by finding flaws in all of the attempted solutions he examines, we will have to wait for a future episode to see Kelly's own solution to the puzzle.

In his paper, Rick Grush looks at our phenomenal perceptions of "behavioral time," time as manifest in our immediate perceptual and behavioral goings-on (161), such as how long it might take one to catch a falling object (163). He looks at "behavioral space," spatial representation that is fully subjective, where the magnitudes that define the spatial relations are not objective, as in "The coffee cup is right there" (162). This space is fixed on one's bodily axes, defined relative to one's own behavioral capacities and relations ("right here" not "over there") and relative to one's own behavior, e.g., "my grasp."  And, of course, he looks at the "behavioral now," a small temporal interval spanning a few hundred milliseconds into the past and future (165).  Grush distinguishes "carrying spatial information" from carrying it "as spatial import phenomenologically" (171). The personal-level manifestation of this sub-personal machinery is the experience of the stimulus as being located somewhere in space (173). In this representational schema, "a perceptual episode in PPC [posterior parietal cortex] that extracts information from the episode in such a way as to be able to cue and guide behavior just is imbuing that personal-level experience with behavioral spatial import" (173).

Grush walks us through a description of Kalman filters (174 ff) and recounts his well-known view that the brain is both a controller and an emulator -- an internal model whose mock feedback can stand in for delayed real-time feedback (177).  A Kalman filter filters out noise to recover the real feedback value (179) by maintaining a running estimate of that value.  Grush then explains the specious present, or what he calls the behavioral now, in terms of lags on the afferent and efferent sides and the emulator processes that smooth these out (197).
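For readers unfamiliar with the machinery Grush invokes, the core idea of a Kalman filter can be sketched in a few lines. This is a generic one-dimensional illustration, not Grush's neural model; the function name and parameter values are my own.

```python
# A minimal 1-D Kalman filter: maintain a running estimate of a signal
# from noisy feedback. Illustrative only -- not Grush's neural model.
import random

def kalman_1d(measurements, process_var=1e-4, meas_var=0.25):
    """Predict, then correct the estimate with each noisy measurement."""
    estimate, error = 0.0, 1.0           # initial estimate and its variance
    estimates = []
    for z in measurements:
        error += process_var             # predict: uncertainty grows over time
        gain = error / (error + meas_var)    # how much to trust the measurement
        estimate += gain * (z - estimate)    # correct toward the measurement
        error *= (1 - gain)              # uncertainty shrinks after correction
        estimates.append(estimate)
    return estimates

random.seed(0)
true_value = 1.0
noisy = [true_value + random.gauss(0, 0.5) for _ in range(200)]
filtered = kalman_1d(noisy)
# The filtered trace settles near the true value despite the noise.
```

The mock-feedback idea in Grush's emulator framework corresponds, roughly, to running the prediction step ahead of the delayed measurements.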

Grush's chapter nicely complements the chapters of both Jacob and Mandik.  Pierre Jacob argues for the "two visual systems" model of perception for objects that can be reached and grasped:  a visual awareness and a visuomotor representation.  Following Goodale and Humphreys, Jacob maintains that visual awareness is linked directly not to action or motor outputs but to cognitive systems involving memory, semantics, spatial reasoning, planning, and communication (245).  If true, it is not directly involved in action-guidance; the non-conceptual content of visual perception enters the belief box.  He says the information-processing tasks and perspectives are different for the two visual systems (246): one solves for action and the other for recognition.  Jacob compares these representations to visuomotor representations, which are hybrid (like Millikan's pushmi/pullyu representations and Gibson's affordances) (247).  He gives a brief history of some research on the two visual systems (247), and of which parts of the brain (and which kinds of damage) cause which kinds of impairments (of recognition or of action). He ends with some excellent examples of problems for the enactive theory of vision, drawn from patients with deficits (260) and from "neglect" cases in which the relevant brain areas remain active (276).

Pete Mandik also recounts the enactivists' view of perception, as against the representational theory. Enactivists, such as O'Regan and Noë, hold that perception is the product of sensori-motor knowledge (289). Mandik explains why this is a threat to representationalists (290): perception is underdetermined by sensory inputs and has to be supplemented by sensori-motor outputs.  Mandik argues that even perception based essentially, in part, on efference copy information is consistent with the representational theory of perception (292-3). Imperative representational content can figure in determining the sensory input content of a perceptual representation. Mandik shows that his account is implementable in a robot, consistent with evolutionary cognitive models (296-7), and instantiated in human vision (299). 

No collection on cognition and the brain, since Turing, would be complete without thinking of the brain and cognition as a type of computer and computational process, respectively.  However, Chris Eliasmith thinks the metaphor of mind as computer is overused.  He would prefer to model the mind/brain as a dynamic information processor, a dynamic system.  He favors "modern control theory," which includes "internal system descriptions" and system state variables, claiming that this is a better fit for explaining the mind than the computer metaphor (137-8). Representation involves the brain's encoding and decoding of stimuli (139): neural responses encode physical properties.  The basic element of the neural alphabet is the spike, and there are multiple coding schemes for it.  For example, he thinks that from a brain's encoding of information about spatial location you can read off where an object is in the world. He treats these as scalar magnitudes, moving from neural activity to representations of object position. Computation, he thinks, will operate on some transformed version of the encoded signal (143).  What a population of neurons represents is determined by the decoding that results in the quantity of which all other decodings are functions.  Decoders are embedded in the synaptic weights between neighboring neurons (145).  What's new, according to Eliasmith, is that the systems under study are dynamic systems, and those features cannot be ignored.  Identifying the system state variable is crucial (146); he suggests it just is the neural representation.  He supplies a list of the typical objections to each approach -- symbolism, connectionism, dynamism -- and claims his dynamic systems view does better in each category (150).  
He thinks there are significant consequences of his approach for the cognitive sciences: for neuroscience (careful consideration of decodings), for psychology (quantitative dynamical descriptions that compute over time as a variable), and for philosophy (he thinks bringing in time requires us to re-think the standard functionalist arguments of multiple realizability) (159).
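The encode/decode picture Eliasmith describes -- a population of tuned neurons encoding a scalar quantity, with linear decoders recovering it -- can be illustrated with a toy simulation. This is a simplified sketch in the spirit of that approach, not Eliasmith's actual model; the tuning-curve details and variable names are my own.

```python
# Toy population encoding/decoding of a scalar (e.g., object position).
# Simplified illustration in the spirit of Eliasmith's approach.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_samples = 50, 200
xs = np.linspace(-1, 1, n_samples)          # stimulus values to encode

# Random tuning: each neuron has a preferred direction, gain, and bias.
enc = rng.choice([-1.0, 1.0], n_neurons)
gain = rng.uniform(0.5, 2.0, n_neurons)
bias = rng.uniform(-1.0, 1.0, n_neurons)

# Encoding: rectified-linear "firing rates" for each stimulus value.
rates = np.maximum(0, gain * np.outer(xs, enc) + bias)   # (samples, neurons)

# Decoding: least-squares linear decoders over the population activity.
decoders, *_ = np.linalg.lstsq(rates, xs, rcond=None)
x_hat = rates @ decoders

# The decoded estimate closely tracks the encoded scalar.
err = np.sqrt(np.mean((x_hat - xs) ** 2))
```

The decoding weights found by least squares play the role Eliasmith assigns to decoders; on his view they would be folded into the synaptic weights between populations.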