Cognitive Pluralism


Steven Horst, Cognitive Pluralism, MIT Press, 2016, 360pp. $54.00 (hbk) ISBN 9780262034234.

Reviewed by Bryce Huebner, Georgetown University

2016.09.03


How is human thought structured? In remembering past events, thinking about counterfactual possibilities, or planning for the future, many people experience sentence-like descriptions of events (others seem to experience imagistic thoughts; cf. Ross 2016). It may seem obvious that these are our thoughts. But human brains do quite a bit more than manufacture and manipulate sentence-like representations. They track multiple objects in parallel, they estimate numerical values, and they reflexively parse the phonemic, phonological, morphological, and syntactic structure of familiar languages. These operations occur without conscious control. And some philosophers claim they are carried out by modular systems, which supposedly pass their outputs to 'central cognition', where they become the raw materials for thought. But should we say that thoughts occur only centrally, when they are formed into linguistically structured memories, plans, beliefs, desires, or judgments?

In Cognitive Pluralism, Steven Horst provides reasons for rejecting the assumption that the mind divides neatly into modular and central processes. He also recommends abandoning the assumption that thought relies primarily on word- and sentence-sized contents. He argues that many forms of thought rely on domain-sized models, which -- like modular processes -- are apt only for tracking particular aspects of the world (Chapters 2-5). He then examines the nature of mental models, and suggests that moving away from the word-sentence-inference model of thought will open up new possibilities in the philosophy of mind and cognitive science (Chapters 6-11). And he concludes by suggesting that a focus on mental models should change the way we think about epistemology, ethics, and semantics (Chapters 12-16). Horst covers a lot of ground, and there's no way to do justice to the whole book in a short review. But I find the book thought-provoking, and I hope to make it clear why philosophers interested in human thought will benefit from thinking about cognitive pluralism.

The first two chapters provide an overview of the philosophy of mind from 38,000 feet. They are fly-over chapters, which often neglect the details of particular positions in favor of the broad contours of a view that becomes perceptible at a high level of abstraction. Perhaps this is okay, as Horst is writing for philosophers working in epistemology and semantics who have little working knowledge of the relevant parts of cognitive science (p.178), but who think that the primary units of thought are "analogous to the three sizes of units found in language and logic: units that are word-sized, sentence-sized, and argument-sized" (p.7). While Horst agrees that some kinds of thinking are done with language-sized and logically structured representations, he thinks that philosophers have paid insufficient attention to nonlinguistic forms of thinking and reasoning. And the primary goal of the book is to show that many forms of human thought are organized into "domain-sized units of understanding, with proprietary ways of representing domains and tight connections between concepts and inference patterns within the domain but much looser connections across domain boundaries" (p.62). I think these claims are plausible, but there may be ways of capturing these insights without rejecting the Standard View, as I suggest below.

According to Horst, many of our cognitive capacities rely on special-purpose and domain-specific models of the world, instead of "(1) a single, consistent and integrated model of everything or (2) a long inventory of more specific and independent individual beliefs" (p.83). And his case for this hypothesis relies on four sources of data:

1. Typically developing humans possess Core Knowledge Systems, which allow them to track objects, estimate numerical magnitudes, engage in geometric reasoning, and perceive animacy and agency (Spelke and Kinzler 2007). These systems appear early in development, and they provide domain-specific expectations about how animate and inanimate objects will behave.

2. Typically developing humans develop folk-theories of biology, physics, and psychology, which operate over distinct domains and structure the inferences they draw within these domains. For example, folk-psychology is organized around assumptions about which entities have minds, and what kinds of minds they have (Gray and Wegner 2016); and folk-biological assumptions make it difficult for people to accept the stochastic and mechanical character of natural selection (Kelemen 2012).

3. The minds of nonhuman animals are "composed largely of a bundle of closed instincts and good tricks" (p.179). Considerations of evolutionary continuity suggest that our cognitive architecture is also likely to be built around bundles of domain-specific capacities, which are similar to those that we find in nonhuman, nonlinguistic animals.

4. Research in AI has successfully used frames or other data-structures that "represent the space of possible objects, events, and actions stereotypically associated with a particular type of context or situation" (p.68). These data-structures are abstract, and they rely on idealizing assumptions about particular domains. Encounters with the world recruit particular frames, and once a frame is recruited, information that is consistent with it is sought out, while information that is irrelevant to it is ignored.

Many philosophers of cognitive science will be unmoved by Horst's claims about Core Knowledge Systems. These are usually treated as modular capacities, in line with the Standard View he rejects (Carey 2009; Mandelbaum 2013). By extending the discussion to folk-theories, and including an argument for evolutionary continuity, Horst suggests that there are more quasi-modular cognitive systems than most people allow. This makes it more difficult to sustain a distinction between modular and central cognition, but such capacities may be consistent with a word-and-sentence model of minds that allows for massive modularity (Carruthers 2006). Nonetheless, the appeal to frames and models is novel, as is its use in an argument for cognitive pluralism.

It comes as no surprise, then, that much of the book focuses on characterizing mental models, and specifying their role in human thought. For Horst, mental models are "idealized domain-sized units of understanding, each of which has its own internal ontology for representing the objects, properties, relations, events, and processes of its content domain and its own rules for inferences about the properties and transformations of these" (p.99). These units of understanding guide capacities for pattern-tracking and pattern completion; they reflect learned and innate assumptions about which features of different domains matter; and they guide cognitive and behavioral strategies in ways that are suitable for coping with particular aspects of these domains (p.187). But are these forms of understanding model-based?

Horst initially pursues this question by way of auto-anthropology. He asks readers to imagine a familiar room, and count the electrical outlets; and he assumes that most readers will explore a mental model as they do so. He then notes that chess players often experience pieces as able to move in particular ways, and board positions as affording particular forward-looking strategies; and he argues that this requires perceiving the board through a cognitive template that structures the ability to understand possible games and possible states of play (p.131). Unfortunately, since I lack rich visuospatial imagery, I'm unsure what inference to draw from Horst's account of his model-like experience. Perhaps it would have helped to appeal to the work of Gilles Fauconnier and Mark Turner (2008). Like Horst, they reject the claim that thought depends on inferential relations between sentence-like representations. And they argue that models shape the way we think, and that blending these models can open up creative possibilities. But these too are claims about how thinking seems. Such claims are insufficient to distinguish thinking that's model-based and nonlinguistic from thinking that's linguistic but sometimes rendered in visuospatial images by a dedicated modeling system (cf. p.21). I don't know how to tease these possibilities apart; both options are worth exploring. But I don't think we need to settle this issue to see that human minds are likely to be cobbled-together networks of affordance-based systems, each of which "provides a unique type of epistemic and practical grip upon the world, and is useful for a particular set of purposes" (p.189). And this is where things get interesting!

Domain-specific processing pays big dividends for animals that inhabit stable and unified environments. But modern humans are not such animals. Many of us inhabit multiple, partially overlapping micro-worlds, and we internalize multiple models to track different aspects of these micro-worlds. As Sally Haslanger (2012) has argued, local schemata play a critical role in structuring thoughts about race and gender, and we seem to rely on these locally salient, domain-specific models to distinguish information we pay attention to from information we ignore. While these models are apt for tasks like tracking statistical regularities, they are deeply problematic when deployed in socially significant situations. As Horst notes, "if you turn an idealized model into a perfectly universal claim, it may still seem like knowledge if you do not recognize that the idealization and the intuitive force are still present, and as a perfectly universal claim it is erroneous" (p.269). Understanding the operation of this error in racialized and gendered thinking may help us to uncover strategies for shifting the models we rely on, and to develop alternative models for these domains. This isn't an issue that Horst explores. But I think it's consistent with his claim that we typically think through mental models, and that affordances just show up to us. As a result, it's often difficult to figure out which idealizing assumptions we rely upon, and where we have abstracted away from important facts about the world.

This situation is complicated by the fact that we can track different aspects of a domain with multiple, inconsistent models. Moreover, experiencing some affordances may require idealizing assumptions that prevent us from experiencing others; different situations might "activate different models, generating one set of judgments rather than another depending on which model is activated" (p.287). So there may not be a "complete" model of the world. This doesn't mean that the world is disunified, but it might mean that there's pervasive indeterminacy in our understanding of the world, and it might mean that there are contexts where more than one idealized model is apt for our purposes (especially in the case of philosophical concepts such as knowledge, justification, and goodness). So while we like to think that we know what the world is like, "minds like ours may be unable to produce a set of beliefs that is globally consistent or a single comprehensive model of everything without losing some of the epistemic grip upon the world that we gain through many more localized idealized models and the beliefs they license" (p.222).

Horst tries to temper these implications of cognitive pluralism by arguing that there are mental models we can't abandon or reshape, as they rely on idealizing assumptions that are biologically or socially forced moves. "Modes of ethical evaluation and perhaps an assumption of freedom . . . might fall into this class, as might assumptions about a world of classical objects in space and time, and models of causation and teleology" (p.242). But even if we begin from shared assumptions, embodied trajectories through socially structured space will shape what we think and what we see as possible. Locally shaped forms of dehumanization are a robust feature of human ethical life, but they are realized differently in different contexts (Smith 2011). Individual learning histories -- and variations in interpretive strategies -- impact evolved tendencies to form religious beliefs (Norenzayan and Gervais 2013). And some people learn to treat metaphysical freedom as a myth, grounded in peculiar habitual biases. This is what we should expect. And, as Horst repeatedly notes, cognitive pluralism comes into its own when we find that we can move beyond learned perspectives, or that different models can be triangulated against one another to enhance our understanding of the world.

Horst doesn't discuss atypical neurologies that yield psychological differences. But he does briefly mention autism as a domain-specific deficit, and like many philosophers, he notes that some people partially compensate for this deficit (p.183). What he doesn't notice is the rich model of neurodivergence that lies behind his claim (i.e., the model that guides his interpretation of neurological differences and their psychological manifestation). This model makes it hard to recognize that typically functioning adults are often deficient at predicting autistic mental states (Edey et al. in press), though activists have long suggested that neurotypical people have difficulties overcoming their biased assumptions about neurodivergence. This is what Horst should predict: the assumption that there's only one way to understand other minds is just as ungrounded as the assumption that there's only one way to understand the world. In both cases, mistaking a generalization for a universal claim yields epistemic illusions (p.272). And this recognition opens up the possibility of a more pluralistic form of cognitive pluralism.

The key insight, then, is that people who are embodied in different ways tend to form different understandings of the world they inhabit. Likewise, people who inhabit subtly different micro-worlds will acquire different forms of understanding, and perceive different affordances (it's nice to see these insights from feminist philosophy, critical race theory, and disability theory making their way into mainstream philosophy of mind). But Horst's approach also suggests a way of moving through such differences, to uncover facts that are obscured by our tacit practices of abstraction and idealization. He doesn't pursue this issue, but by layering linguistically structured thought on top of model-based processing, Horst (Chapter 11) opens up the possibility of interpersonal triangulation. Just as linguistically structured thought allows us to coordinate our own actions over time, it allows us to coordinate with one another. This is a familiar philosophical claim, but it provides a way of understanding how talking with one another, listening to one another's perspectives, and learning about the distortions built into our everyday models of the world can open up new forms of understanding and new forms of social triangulation. Doing this is never easy, but I think it's the only strategy we have for building knowledge in a world governed by interpersonal and intrapersonal indeterminacy.

ACKNOWLEDGEMENTS

Thanks to Ruth Kramer and Carl B. Sachs for helpful comments on this review.

REFERENCES

Carey, S. (2009). The Origin of Concepts. Oxford University Press.

Carruthers, P. (2006). The Architecture of the Mind. Oxford University Press.

Edey et al. (in press). Interaction takes two. Journal of Abnormal Psychology.

Fauconnier, G., and M. Turner (2008). The Way We Think. Basic Books.

Gray, K., and D. Wegner (2016). The Mind Club. Penguin.

Haslanger, S. (2012). Resisting Reality. Oxford University Press.

Kelemen, D. (2012). Teleological minds. In K. Rosengren and E. Evans (eds.), Evolution Challenges. Oxford University Press, 66-92.

Mandelbaum, E. (2013). Numerical architecture. Topics in Cognitive Science, 5(1): 367-386.

Norenzayan, A., and W. Gervais (2013). The origins of religious disbelief. Trends in Cognitive Sciences, 17(1): 20-25.

Ross, B. (2016). How it feels to be blind in your mind. Unpublished. Retrieved 13 August 2016.

Smith, D. (2011). Less Than Human. Macmillan.

Spelke, E., and K. Kinzler (2007). Core knowledge. Developmental Science, 10(1): 89-96.