Explanation and Integration in Mind and Brain Science


David M. Kaplan (ed.) Explanation and Integration in Mind and Brain Science, Oxford University Press, 2018, 258 pp., $60.00, ISBN 9780199685509.

Reviewed by Valerie Gray Hardcastle, University of Cincinnati

2018.04.26


Back in the mid-1970s, when cognitive science was becoming a recognized discipline, there was a lot of discussion regarding how all the fields that made up this new area of research (e.g., anthropology, computer science, linguistics, psychology, neuroscience) fit together. Most of the discussion centered on how and whether the "mind" sciences, like psychology, fit with the "brain" sciences, like neuroscience. The conversation has been running ever since, though it quieted to something akin to a whisper in the late 20th century and through the first decade of the 21st. However, for a variety of reasons, the discussion has picked up again of late, and this book brings to the fore many of the participating voices -- some of whom have been there since the beginning.

This is a volume with a particular agenda: it wants to provide space and support for those who prefer to chart a middle path, for those who believe that psychology and neuroscience are intimately related but not reducible to one another (at least not in the old-fashioned Hempelian sense of the term). As a result, the book has nine chapters that essentially reiterate the same message, though always from slightly differing perspectives, and only a final chapter that takes a different approach. This message is essentially one that was making the rounds back in the early days of cognitive science: science is a pragmatic activity, so what counts as a scientific explanation is relative to the context of inquiry. Consequently, because explanations in psychology and those in neuroscience occur in different contexts and answer different questions for different sorts of people, they will not be reducible. At the same time, because the interests of psychology and at least human neuroscience overlap -- both are interested, in the broadest possible sense, in explaining human behavior -- these two disciplines are not unrelated to one another. Indeed, results in one domain can often inform, constrain, or elaborate on results in another. But, of course, being deeply connected is not the same thing as being different versions of the same thing.

Martin Roth and Robert Cummins focus on the notion of "law" in psychology, noting that laws are often what psychologists take to be the thing to be explained. For example, Fitts' law describes the trade-off between speed and accuracy in most skilled human behavior. Roth and Cummins argue that such so-called laws are nothing like traditional covering laws; instead, they comprise the explananda of psychology. Psychologists want to know: why is there this trade-off between speed and accuracy? Of course, there is nothing special about psychology's asking these sorts of questions. Any science that relies on functional explanations follows a similar pattern. Indeed, one could ask why it is that the Nernst equation holds as a description of the influx and efflux of potassium ions across neural membranes. But when one asks questions such as these, then "we find the work spreading out among the disciplines like a slime mold" (p. 41). And, importantly, "different problems require different representational resources and different assumptions" (p. 42). Roth and Cummins conclude that the issue of reducibility is just a poor way to approach understanding the nature of science. Scientists want to answer particular questions, and they do this using whatever relevant tools they have at hand.
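For readers unfamiliar with these two examples, their standard textbook formulations run roughly as follows:

$$MT = a + b\,\log_2\!\left(\frac{2D}{W}\right) \qquad\text{and}\qquad E = \frac{RT}{zF}\,\ln\frac{[\mathrm{K}^+]_{\mathrm{out}}}{[\mathrm{K}^+]_{\mathrm{in}}},$$

where $MT$ is the movement time to a target at distance $D$ with width $W$, $a$ and $b$ are empirically fitted constants, and the Nernst equation gives the equilibrium potential $E$ for an ion (here potassium) of charge $z$ in terms of the gas constant $R$, the absolute temperature $T$, and the Faraday constant $F$.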

In many respects, Daniel Weiskopf echoes Roth and Cummins. Weiskopf argues that the functionally described "boxologies" found in many psychological models are explanatorily adequate, for they satisfy the standard norms of a good explanation, such as providing the ability to predict, manipulate, and control the explanandum. But, while these models are causal, they are not reductionist. The functional models get their explanatory oomph from how information is represented and manipulated, not from how causal properties are mechanistically realized, as more reductionist approaches do. Again, different sorts of queries lead to different sorts of answers.

James Woodward stays with this theme of scientific explanations being designed to answer specific questions asked in specific contexts, and he highlights that more detail is not always better for any particular explanation. Indeed, explanations deliberately abstract from underlying details in order to emphasize the few key attributes that make the most difference to the outcome under consideration, given current background circumstances. These factors can sometimes be captured at a higher level of abstraction and sometimes not. Which level of abstraction is required depends upon the phenomenon to be explained, as well as who wants the explanation and why. In each case, one wants to supply only what is required for the explanation and should eschew relatively extraneous details.

Similarly, Michael Strevens argues that convergent biological evolution can give us several similar (but not identical) solutions to the same problem, and that a better explanation of how animals solve a particular evolutionary problem would concentrate on a more abstract description of the several solutions. It would emphasize what is common across the different answers, instead of detailing how each solution differs from the others. In other words, the best explanations are those that focus on the most relevant factors, as opposed to itemizing the most details. Of course, we can always explain why the fly eye is structured the way that it is, but this would be a different explanation from one of why eyes function as they do. Both are legitimate things to explain, but they are different explanations that require different facts to support them.

Dominic Murphy takes a different tack and examines how folk psychology might fit into the discussion of reductionism. He ends up defending a non-radical view of eliminativism, namely that folk psychology plays important heuristic roles in our day-to-day life, so we cannot rid ourselves of it entirely. However, it does not carve nature at her joints, as we have learned from research in the science of psychology and the brain, so it will not form the basis of scientific theories of personal thought and action. But again, folk psychological explanations work for what they were designed to do: to answer culturally specific sorts of queries.

Like Cummins, Weiskopf, Woodward, and Strevens, Frances Egan defends the fundamental explanatory autonomy of computational/functional models and explanations. Articulating how a system computes a mathematical function and how that computation contributes to the psychological capacity in question just is a functional explanation, for it specifies what the system does and how it does it. She claims that mechanistic details describing how the computation is implemented in neural processes do not add to the higher-level theoretical account.

Taking a different approach and really extending Egan's position, David Kaplan makes the world safe for limited-scope mechanistic explanations. While higher-level functional/computational explanations are valuable, understanding how those computations are instantiated in particular instances can be useful as well. Again, this will depend upon what one wants to know and why -- proper explanations are determined by pragmatic and contextual factors.

Continuing the theme of using both mechanistic and functional approaches, depending on circumstance, Oron Shagrir and William Bechtel return to David Marr's tri-level formulation of computational explanation in cognitive neuroscience, noting that the highest level is supposed to articulate both what the capacity in question does and why. Often, they argue, the "why" component of the computational description is neglected, even though it accounts for which function the system needs to compute, given its environmental constraints. Scientists can then use this precisely described function, along with the defined constraints, to create lower-level accounts of the underlying mechanisms.

Kenneth Aizawa focuses squarely on the notion of multiple realizability and how it may or may not be related to explanations of various cognitive capacities. Aizawa provides detailed and quite excellent examples from vision research which illustrate that different explanations may lump or split the same phenomenon, depending upon the explanation's goals. That is, different "implementations" may still be given the same higher-level functional description, if the differences in neural processing details do not (significantly) influence how scientists describe the higher-level psychological capacity. We have some flexibility in how we taxonomize higher-level realizer properties in virtue of our theoretical or explanatory aims. Sometimes differences among the realized properties will influence that taxonomy, and sometimes they will not.

Finally, Corey Maley and Gualtiero Piccinini provide the opposition for this volume. In contrast to all the other contributors, they hold that functional explanations are really abbreviated mechanistic models. Once all the details are fleshed out, the higher-level causal-functional interactions will be grounded in lower-level neural-mechanical interactions. However, it is important to note that they are focused primarily on scientific models, not scientific explanations. A complete model of some capacity might indeed require a description of underlying interactions. At the same time, an explanation of the capacity may not necessitate such descriptions. More details do not always make for a better explanation, which is a pragmatic and situation-dependent affair, even if they make for a better model, which (in theory) should stand apart from such contextual constraints.

All in all, each of the substantive chapters, taken individually, is excellent -- carefully argued, nuanced, clearly written, and deep. In addition, Kaplan's introductory chapter, which I have not summarized here, is a terrific quick history of the major issues in the reduction debate in cognitive science. However, everything taken together comes across as repetitive, with each of the chapters (with the exception of the final one) essentially arguing for the same conclusion: explanations are responses to particular queries, made by particular actors, at particular times, and for particular reasons. As such, they may or may not rely on underlying mechanisms. And explanations are not the same thing as models; explanations can use models, but not vice versa. This is true of physics, and it is true of the cognitive sciences. I thought this was something we figured out roughly thirty years ago.

In his introduction, Kaplan promises that "similarities and differences between [the central proposals] . . . will be highlighted" (p. 2). Certainly, the similarities are clear; it would have been nice if the authors had found some way to highlight any subtle differences among their views. From the perspective of this reviewer, the views are all of a piece. And that makes this volume appear weaker as a whole than any of the individual chapters.