Neurocognitive Mechanisms: Explaining Biological Cognition

Gualtiero Piccinini, Neurocognitive Mechanisms: Explaining Biological Cognition, Oxford University Press, 2020, 400pp., $105.00 (hbk), ISBN 9780198866282.

Reviewed by Michael Rescorla, University of California, Los Angeles

2021.11.05


Gualtiero Piccinini’s book discusses the scientific study of cognition, construed broadly to include low-level phenomena such as perception and motor control. The scope is epic, ranging from metaphysics to explanation to neuroscience to history of science. The writing is brisk and clear. At every turn, the book evinces Piccinini’s command of relevant philosophical literature and his knowledge of contemporary science. It is an imposing contribution that all philosophers of mind should read.

Piccinini espouses a version of the computational theory of mind (CTM), which holds that many mental processes are computations. His position differs significantly from the versions of CTM most familiar to philosophers, such as Jerry Fodor’s (1975) influential treatment. Most notably, Piccinini thinks that previous authors have neglected neural aspects of mental computation. Piccinini instead foregrounds neural activity. His neurocentric development of CTM contains some impressive elements: trenchant critique of Warren S. McCulloch and Walter Pitts (1943), who gave the first published presentation of CTM (107–127); informative exposition of diverse neuroscientific findings (182–204, 248–315); intriguing reflections on mental ontology and multiple realizability (6–65); devastating rebuttal of several prominent arguments for and against CTM (225–257); and more. In my opinion, though, the book’s elements do not assemble into a persuasive case for Piccinini’s neurocentric viewpoint.

Psychological Explanation

Cognitive science routinely offers psychological explanations, which cite mental states as explananda or explanantia. For example, linguists postulate tacit knowledge of a generative grammar to explain grammaticality judgments (Chomsky, 1965), and perceptual psychologists explain illusions by hypothesizing that the perceptual system executes unconscious Bayesian inferences (Knill and Richards, 1996). Traditionally, psychological explanation has tended to proceed independently of neuroscience. Researchers posit mental states and processes without addressing how the states and processes are implemented in the brain. Often, though not always, the posited states and processes are computational.
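
To give a sense of what such models look like, here is a minimal sketch of my own (not drawn from the book or from any particular published model, with all numbers invented): a Bayesian perceptual model treats the estimate of a distal property as a precision-weighted combination of a noisy sensory measurement and a prior over that property.

# A minimal sketch of a Gaussian prior-likelihood combination; all numbers are
# invented for illustration, and "speed" is just an example distal property.
def gaussian_posterior_mean(measurement, sigma_like, prior_mean, sigma_prior):
    """Posterior mean when a Gaussian likelihood is combined with a Gaussian prior:
    a precision-weighted average of the sensory measurement and the prior mean."""
    w_like = 1.0 / sigma_like ** 2    # precision of the sensory measurement
    w_prior = 1.0 / sigma_prior ** 2  # precision of the prior expectation
    return (w_like * measurement + w_prior * prior_mean) / (w_like + w_prior)

# A noisy speed measurement (large sigma_like, e.g., a low-contrast stimulus)
# combined with a prior favoring slow speeds is pulled toward zero, so the
# stimulus is estimated as slower than it is, which is the style of explanation
# Bayesian models give for certain motion illusions.
print(gaussian_posterior_mean(measurement=8.0, sigma_like=4.0,
                              prior_mean=0.0, sigma_prior=2.0))  # -> 1.6

Notice that nothing in the sketch mentions neurons; it is couched entirely at the psychological level, which is precisely the kind of theorizing at issue below.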

Piccinini thinks that it is a bad idea to pursue psychological explanation without addressing neural implementation. In a representative passage, he writes (315):

I am calling for abandoning the classical approach of just searching for computational explanations of human behavior without worrying much, if at all, about neural computation. . . . anyone seriously interested in explaining cognition should strive to show how the computations she posits may be carried out by neural processes, to the extent that this can be made plausible based on current neuroscience.

Piccinini urges a “mechanistic integration of psychology and neuroscience” (185), weaving the psychological and the neural into unified explanations. He embraces cognitive neuroscience, which he says aims to achieve and often does achieve the desired mechanistic integration. He writes approvingly that “cognitive science as traditionally conceived is on its way out and is being replaced by cognitive neuroscience” (182).

Piccinini’s case for cognitive neuroscience centers upon his more general mechanistic view of explanation: “To explain a phenomenon, we need to identify level(s) of mechanistic organization that are relevant to producing the phenomenon” (177). A mechanistic treatment of a complex system “requires identifying the [system’s] components, their relevant functions, and their organization” (160). Mechanistic explanation describes how the components interact to produce the explanandum. Applying his mechanistic viewpoint to the mind, Piccinini argues that good explanation of a cognitive phenomenon should identify neural components and describe how the neural components produce the phenomenon: “Providing a scientific explanation of cognition requires understanding how neurocognitive mechanisms work” (1). He claims that cognitive neuroscience supplies the desired mechanistic explanations.

Piccinini is undoubtedly correct that psychology and neuroscience are more intermingled than in the early years of cognitive science. Psychologists now routinely support their theories by arguing that the theories tally with neurophysiological data. They also commonly consider how mental processes might be implemented by neural activity. Still, I think Piccinini overstates the extent to which cognitive neuroscience is “replacing” traditional cognitive science. Many researchers continue to theorize at the purely psychological level, paying little if any attention to neural implementation. This methodology is ubiquitous in linguistics (where linguists continue to theorize about generative grammar), perceptual psychology (where researchers routinely construct Bayesian models), and numerous other areas of cognitive science.

Purely psychological theories offered by cognitive scientists often look explanatorily successful. For example, Bayesian models supply powerful explanations of how the perceptual system estimates distal properties such as size, color, location, speed, and so on (Rescorla, 2015). Piccinini says nothing to critique such theories, save that they flout his preferred mechanistic template. The critique has little force, because successful explanation in other sciences commonly flouts the mechanistic template (Rescorla, 2018; Woodward, 2017). To illustrate: we can cite the ideal gas law and an increase in temperature to explain why a gas exerts increased pressure upon a container, without citing mechanistic facts about gas molecules. Statistical mechanics improves upon this non-mechanistic explanation by adding mechanistic details, but the non-mechanistic explanation on its own already looks at least somewhat explanatory. I find that Piccinini has given no reason to suspect anything amiss with the non-mechanistic theorizing found in linguistics, perceptual psychology, or other areas of cognitive science. In exhorting us to abandon purely psychological explanation, he advances an unmotivated and stultifying methodological prohibition.
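
For concreteness, the non-mechanistic explanation runs entirely through the law itself, stated here only for the reader's convenience:

    PV = nRT, so with n (amount of gas) and V (volume) held fixed, P = nRT/V rises in proportion to the temperature T.

No facts about molecular collisions enter into that derivation; statistical mechanics supplies them afterwards.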

In some passages, Piccinini (155, 165, 171, 176) weakens his demand for mechanistic explanation. He allows that scientists can provide a mechanism sketch, which takes us only some way towards a fully satisfying explanation. As we fill in the sketch with mechanistic details, we improve our explanation. In that spirit, Piccinini admits that Bayesian modeling may provide a mechanism sketch and therefore be somewhat explanatory, but he insists that “we would gain further explanatory depth if we also identified the relevant components, their functions, and their organization, and showed that their functions, correctly organized, constitute the explanandum phenomenon” (159). Piccinini’s analysis here seems inconsonant with his vehement opposition to traditional cognitive science. Once we grant that purely psychological theorizing can be somewhat explanatory, why should we dissuade scientists from engaging in it?

Neurocognitive Explanation

A pressing challenge facing Piccinini’s position is that some mechanistic details look unexplanatory. When economists cite growth in the money supply to explain inflation, they would not improve their explanation by mentioning mechanistic details about gears in the currency printing presses. Adding mechanistic details does not always yield explanatory progress. Thus, there is no guarantee that adding neurological details to a purely psychological explanation will yield an improved explanation.

In response, Piccinini concedes that only some mechanistic details should figure in good explanations: “we need to identify the relevant causes—the ones that make the most difference to the behavior of the whole—and abstract away from the irrelevant ones” (160). He does not expand upon the phrase “make the most difference.” His ensuing discussion of the issue (166–181) features similarly vague language. Consequently, his discussion leaves unspecified which neural implementation details improve a psychological explanation.

One fleeting passage (177) hints that Piccinini hopes to fill the lacuna by invoking interventionist theories of causal explanation (Woodward, 2003). The suggestion seems to be that fully satisfying explanations will cite those mechanistic details that help us answer “what-if-things-had-been-different” questions (i.e., questions about how the explanandum would have changed had the explanantia changed in certain ways). I suspect that developing the suggestion into a compelling account would leave little work for mechanisms, since interventionists can bypass mechanisms and appeal directly to “what-if-things-had-been-different” questions. In any event, Piccinini’s discussion does not systematically differentiate between explanatory and non-explanatory mechanistic details.

Compounding the difficulty is Piccinini’s puzzling decision not to illustrate his position with concrete examples of neurocognitive explanation. Although he discusses various neuroscientific discoveries, he does not say how the discoveries help explain cognitive phenomena. He asserts that “Many books could be written analyzing specific cognitive neuroscience explanations in detail” (201), but he declines to analyze any putative explanations himself. I find it striking that, in a 400-page book ostensibly concerned with “explaining biological cognition,” Piccinini does not adduce a single real explanation of biological cognition. The omission left me uncertain how, on Piccinini’s view, neural details make an explanatory contribution.

The closest Piccinini comes to discussing an actual neurocognitive explanation is the following “sketch of an account of vision” (196):

Individual cells in V1 selectively respond to particular line orientations from the visual scene. Several of these cells in conjunction form an orientation column, which provide the basis for edge detection in the visual scene. These orientation columns combine to constitute V1, which computes the boundaries of visual objects. V1 then operates in conjunction with downstream parietal and temporal areas to constitute the different “streams” of visual processing and visual object representation.

Piccinini’s sketch leaves unaddressed the crucial question of how the visual system “computes the boundaries of visual objects.” Some edges belong to boundaries. Other edges are merely textural (e.g., folds in a sheet of paper). Determining which edges belong to boundaries is a difficult problem that the visual system somehow solves with great accuracy. Learning that the problem is solved in V1 (the primary visual cortex) hardly constitutes much explanatory progress: knowing where the computations occur is no help regarding which computations occur. Piccinini’s sketch does not begin to clarify these computations. Nor does his sketch clarify the computations through which the perceptual system estimates any other distal property (e.g., size, color, location, speed). For that reason, the sketch is not an enticing advertisement for neurocognitive explanation.
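
To make vivid what oriented-filter “edge detection” does and does not deliver, here is a toy sketch of my own (with invented parameters; it is neither Piccinini’s account nor a serious model of V1). It computes an oriented-contrast map of the sort V1 simple cells are often idealized as producing, and it is silent on the hard question of which responses mark object boundaries rather than texture.

# Toy "edge energy" computation from oriented filters; parameters invented,
# not a serious model of V1 and not Piccinini's account.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, size=15, sigma=3.0, wavelength=6.0):
    """Oriented Gabor filter, loosely analogous to a V1 simple-cell receptive field."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_r = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * x_r / wavelength)
    return envelope * carrier

def edge_energy(image, n_orientations=8):
    """Max rectified response across orientations: flags all oriented contrast,
    textural edges included; it says nothing about which edges are object boundaries."""
    responses = [np.abs(convolve2d(image, gabor_kernel(t), mode="same", boundary="symm"))
                 for t in np.linspace(0, np.pi, n_orientations, endpoint=False)]
    return np.max(responses, axis=0)

# Toy input: a bright square on a dark background.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
print(edge_energy(img).shape)  # (64, 64) map of oriented-contrast strength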

Most researchers would agree with Piccinini that integration of psychology and neuroscience has the potential, in principle, to improve upon purely psychological explanation. I think Piccinini exaggerates how much neuroscience currently contributes to psychological explanation. A few examples:

  • Much of the research cited by Piccinini (197–203) investigates where various computations occur in the brain. As just indicated, identifying where a computation occurs is not much help regarding the nature of the computation.
  • Other research cited by Piccinini (268–292) focuses on firing patterns over neural populations. How do such firing patterns relate to cognition? For example, suppose a neuron preferentially fires in response to a bar with orientation x. How does that firing profile relate to the thinker perceiving that the bar has orientation x? We do not know. Similarly for other neural firing patterns. As a result, it is unclear how knowledge about neural firing patterns can enrich psychological explanation (a toy illustration of the gap appears after this list).
  • Neuroscience pursues neural network modeling of mental activity. This research, which Piccinini mentions but does not discuss in detail, might potentially supply integrated neurocognitive explanations. However, the research is often quite conjectural. A good example is research on neural implementation of Bayesian inference (Pouget et al., 2013). The various neural network models under consideration are not, as yet, well-confirmed. Certainly, they are less well-confirmed than countless Bayesian models couched at the psychological level. While neural network models may eventually enhance the explanations offered within perceptual psychology, I am not confident that they currently do so.
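
To illustrate the gap flagged in the last two points, here is a toy population-code sketch (my own construction with arbitrary parameters; it is not Pouget et al.’s model or any other published proposal): orientation-tuned neurons emit Poisson spike counts, and a decoder reads off a maximum-likelihood orientation. Even granting such a story, nothing in it yet says how the decoded quantity relates to the perceiver representing the bar as having that orientation.

# Toy population code: orientation-tuned neurons with Poisson spiking,
# decoded by maximum likelihood. All parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
preferred = np.linspace(0, 180, 16, endpoint=False)  # preferred orientations (deg)

def tuning(stimulus_deg, gain=20.0, width=20.0):
    """Mean firing rates: Gaussian tuning on the circular orientation variable."""
    d = np.abs(stimulus_deg - preferred)
    d = np.minimum(d, 180 - d)                        # orientation is periodic (180 deg)
    return gain * np.exp(-d ** 2 / (2 * width ** 2))

def decode_ml(spike_counts, grid=np.arange(0, 180, 0.5)):
    """Maximum-likelihood estimate under independent Poisson spike counts."""
    log_like = [np.sum(spike_counts * np.log(tuning(s)) - tuning(s)) for s in grid]
    return grid[int(np.argmax(log_like))]

true_orientation = 65.0
counts = rng.poisson(tuning(true_orientation))        # one noisy population response
print(decode_ml(counts))                              # close to 65, trial by trial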

Despite how much we have learned over the past few decades about the neural underpinnings of cognition, I see no evidence in Piccinini’s book or elsewhere that our increased neuroscientific knowledge has generated anything like the explanatory progress he alleges.

Neural Computation

Many proponents of CTM, such as Fodor (1975) or C. R. Gallistel and Adam Philip King (2009), emphasize digital computation. They regard the mind as a Turing machine (or a Turing-style computing system) that manipulates discrete digits. Digital computational models are usually couched at an abstract level that prescinds from neuroscience.

Piccinini argues that we should instead study neural computation: that is, computation by neural populations. He elucidates neural computation using the mechanistic theory of computation articulated in his previous book (Piccinini, 2015). He contends that digital CTM is incompatible with contemporary neuroscience (300–311). He defends this verdict by critiquing several detailed proposals about how digital computations might be neurally implemented, arguing that the proposals conflict with known facts about the brain. Piccinini also maintains that neural computation is not purely analog (298–300), partly on the grounds that neural signals are composed from discrete elements (spikes). He concludes that neural computation is sui generis (311–312): neither wholly digital nor wholly analog.
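
A rough illustration of why spiking activity resists both labels: in a textbook leaky integrate-and-fire toy (sketched below with invented parameters; it is not Piccinini’s own formalism), the membrane variable evolves continuously, analog-style, while the output is a train of discrete, all-or-none spikes.

# Minimal leaky integrate-and-fire neuron: continuous subthreshold dynamics,
# discrete spike output. A textbook toy, used here only to illustrate the
# mixed analog/digital character of neural signaling.
dt, T = 0.1, 200.0                                           # time step and duration (ms)
tau, v_rest, v_thresh, v_reset = 20.0, -65.0, -50.0, -65.0   # membrane parameters (ms, mV)
input_current = 18.0                                         # constant drive (arbitrary units)

v = v_rest
spike_times = []
for step in range(int(T / dt)):
    # Continuous (analog-like) membrane dynamics between spikes:
    dv = (-(v - v_rest) + input_current) / tau
    v += dv * dt
    # Discrete (digital-like) all-or-none event when threshold is crossed:
    if v >= v_thresh:
        spike_times.append(step * dt)
        v = v_reset

print(len(spike_times), "spikes in", T, "ms")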

Piccinini’s argument is unlikely to persuade proponents of digital CTM, who freely admit that they do not know how digital computations are neurally implemented. Proponents hold that we nevertheless have strong behavioral and theoretical grounds to accept digital CTM. Maybe they are wrong, but one does not settle the matter either way by critiquing specific neural implementation proposals. Piccinini eventually admits as much (315–316): he concedes that digital CTM may be true of some psychological domains, and he retrenches to the complaint that its proponents have not discovered how digital computations are implemented in the brain. This complaint applies equally well to generative grammar, to Bayesian perceptual inference, and to every other psychological posit in cognitive science. So the complaint has no specific dialectical force against digital CTM.

As Piccinini emphasizes, computational neuroscientists regard digital CTM quite skeptically. The computational models offered by neuroscientists are typically more along the lines favored by Piccinini: neural network models that describe interactions among idealized neurons. Piccinini does not attempt to expound how such models work, to assess which cognitive phenomena they explain and which cognitive phenomena they have trouble explaining, or to compare the explanations they supply with those supplied by Turing-style models. Accordingly, his discussion offers little help to readers in weighing the relative merits of these competing computational formalisms.

Conclusion

Piccinini denigrates vast amounts of cognitive science as misguided speculation or at best feeble groping towards a more fruitful cognitive neuroscience. He does not validate his neurocentric viewpoint with a convincing analysis of how, why, and when neuroscience contributes to psychological explanation. For my own part, I do not expect cognitive neuroscience to replace psychology anytime soon, nor do I see any reason why we should want it to.

ACKNOWLEDGMENTS

Thanks to Jacob Beck and Mark Greenberg for helpful comments on an earlier draft of this review.

REFERENCES

Chomsky, N. 1965. Aspects of the Theory of Syntax. Cambridge: MIT Press.

Fodor, J. 1975. The Language of Thought. New York: Thomas Y. Crowell.

Gallistel, C. R., and King, A. 2009. Memory and the Computational Brain. Malden: Wiley-Blackwell.

Knill, D., and Richards, W., eds. 1996. Perception as Bayesian Inference. Cambridge: Cambridge University Press.

McCulloch, W., and Pitts, W. 1943. “A Logical Calculus of the Ideas Immanent in Nervous Activity.” Bulletin of Mathematical Biophysics 5: 115–133.

Piccinini, G. 2015. Physical Computation: A Mechanistic Account. Oxford: Oxford University Press.

Pouget, A., Beck, J., Ma, W. J., and Latham, P. 2013. “Probabilistic Brains: Knowns and Unknowns.” Nature Neuroscience 16: 1170–1178.

Rescorla, M. 2015. “Bayesian Perceptual Psychology.” In The Oxford Handbook of the Philosophy of Perception, ed. M. Matthen. Oxford: Oxford University Press.

—. 2018. “An Interventionist Approach to Psychological Explanation.” Synthese 195: 1909–1940.

Woodward, J. 2003. Making Things Happen. Oxford: Oxford University Press.

—. 2017. “Explanation in Neurobiology: An Interventionist Perspective.” In Integrating Psychology and Neuroscience: Prospects and Problems, ed. D. Kaplan. Oxford: Oxford University Press.