Macrocognition: A Theory of Distributed Minds and Collective Intentionality

Bryce Huebner, Macrocognition: A Theory of Distributed Minds and Collective Intentionality, Oxford University Press, 2014, 278pp., $65.00 (hbk), ISBN 9780199926275.

Reviewed by Deborah Perron Tollefsen, University of Memphis


The idea that groups have minds was popular in the late-19th and early-20th centuries. The group mind was posited as a force that influenced and dominated individual agency and provided an explanation for various types of human behavior. But such explanations were deemed mysterious, and, with the rise of behaviorism and operationalism, the idea fell out of favor. Interest in group mentality has experienced a rebirth over the past few decades, however. Within philosophy, Margaret Gilbert's work (e.g., 1989, 1994, 2013) has done a great deal to bring attention to the ways in which individuals might form a single unit of intentional agency, and Christian List and Philip Pettit's recent book Group Agency (2011) argues that there are genuine group mental states that cannot be reduced to the mental states of individuals within the group. Outside of philosophy, the study of distributed cognition is a growing area of research in cognitive science, and the hypothesis of group mind is gaining traction in economics, social psychology, organizational theory, and politics. Recent theories of group mentality, however, are thought to be just as mysterious as their 19th- and early-20th-century ancestors. Macrocognition goes a long way toward demystifying the idea. It provides the most sustained and detailed defense of group minds available in the literature today.

Huebner's starting point is intentional systems theory. Our practice of making sense of others involves the attribution of various propositional attitudes such as belief, intention, and desire. We extend this practice to various sorts of subjects -- children, non-human animals, computers, and so on. We also clearly extend it to groups (or collectivities). According to intentional systems theory, if a subject's behavior is usefully and voluminously predicted from the intentional stance, then we have every reason to believe we are dealing with an intentional agent. Since the actions of certain groups (e.g., corporations) are usefully and voluminously predicted from the intentional stance, we have every reason for thinking that certain groups are intentional agents. I defended this position in my (2002). Huebner goes beyond this and takes on some sticky issues that any account like mine would need to address. What we need is an account of the conditions under which taking the intentional stance toward a group (or anything else, for that matter) is licensed. That is, if the intentional stance produces useful and voluminous explanation and prediction of group behavior, it is because there is a certain cognitive structure in place, and we need a story about that structure in order to move from the explanatory practice described in intentional systems theory to a cognitive science of group minds. To address this need, Huebner couples the intentional stance with a theory of cognitive architecture. In so doing he provides not only a compelling account of macrocognition (group cognition), but also a compelling picture of how intentional systems theory can be wedded to a cognitive science of the mind.

The book is divided into two parts. Part I offers three principles (which are actually more like prohibitions than principles) for those who wish to argue for group mentality and develops an empirically grounded understanding of cognition and mind that, Huebner argues, can be extended to certain groups. The first principle rules out positing collective mentality where the collective behavior is the result of "an organizational structure set up to achieve the goals or realize the intentions of a few powerful and/or intelligent people" (p. 21). In such cases the various people involved in implementing an action or solving a problem are mere tools used to achieve the goals of a powerful individual. Information in such a system "trickles down" through various computational levels and is ultimately a result of the actions of those in charge. The representations produced by a system are representations for the powerful and/or intelligent few but not representations for the group as such.

Principle two rules out collective mentality where "collective behavior bubbles up from simple rules governing the behavior of individuals" (p. 23). Consider the nest-building activity of termites. It may seem that such complex group behavior requires intentionality, perhaps in the colony itself, but we can explain the nest-building behavior by appeal to mechanisms that govern individual behavior and rules for aggregating that individual behavior. There is no need to posit system-level representations that guide the colony.

Principle three says that we should not posit collective mentality where

the capacities of the components belong to the same intentional kind as the capacity that is being ascribed to the collectivity and where the collective computations are no more sophisticated than the computations that are carried out by the individuals who compose the collectivity. (p. 72)

Consider the operations of a stock market. The market is composed of individual buyers and sellers who are making independent decisions regarding the stocks and bonds that are of interest to them. In The Wisdom of Crowds, James Surowiecki (2004) argues that the stock market itself is able to make judgments. When the Challenger space shuttle blew up, the floor of the American stock market went wild. Traders quickly sold shares in four of the corporations associated with building and launching the shuttle. Although the stock of three of those companies stabilized throughout the day, the fourth, that of Morton Thiokol, continued to drop dramatically. According to Surowiecki (2004), the stock market judged that Thiokol was responsible for the disaster. Huebner disagrees. This is a case of merely aggregating the attitudes of individual agents. There is no need to attribute judgments to the stock market, for the judgments of individuals explain it all.

The third principle rules out approaches to group mentality that dominate much of the discussion of group agency in the field of collective intentionality. Those who want to acknowledge groups as genuine subjects of mental states and processes have done so by appeal to the intentional states of individuals within the group. Gilbert's joint commitment account (2013), for instance, analyzes group belief in terms of joint commitments that are constituted by individual members' willingness to be jointly committed to believing that p as a body. List and Pettit (2011) argue that group mentality supervenes on the attitudes and judgments of individuals within the group. Huebner argues that attempts to ground group mental states in individual mental states, even those involving shared content or joint commitments, will fail because such theories invite charges of both causal overdetermination and explanatory superfluity. What causal role do group mental states play if the mental states of individuals are really doing all the work?

Having laid out his principles, Huebner offers a general theory of cognition and cognitive architecture. Cognitive systems should be understood as parallel processing networks that are composed of numerous discrete subsystems and mechanisms that work relatively independently of one another in order to produce domain-specific representations that result in system-level behavior. Cognitive systems have a "kludgy" architecture. Their components are modules put together haphazardly by evolution. The modules process a narrow range of information and the outputs of such modules are integrated in ways that help an organism to cope with change in the environment. For Huebner even goal-directed behavior is implemented by "massively distributed, highly integrated, specialized, and unconscious computational systems" (p. 79). Huebner offers us a sophisticated version of homuncular functionalism -- the view that individual minds are "corporate entities" made up of simpler agents carrying out limited computations who contribute to a larger system.

Psychological explanation, according to Huebner, is a species of reverse engineering and essentially involves identifying basic, intentionally and functionally specified, cognitive tasks (Huebner's favorite examples are solving a math problem and making a cup of espresso) and explaining these in terms of components that involve subtasks and various computational mechanisms. These tasks in turn will be explained in terms of further functionally specified subtasks, themselves explained by simpler computational mechanisms, etc., until the explanatory project bottoms out "in homunculi so stupid that they can be implemented by on/off switches (or their equivalents)" (p. 65).

The intentional stance helps us to functionally specify certain cognitive tasks and so is ineliminable, but cognitive science opens the "black boxes" posited by the intentional stance. It describes the mechanisms responsible for producing the complex behavior that is usefully and voluminously explained by the intentional stance. Huebner emphasizes that there is no need to assume that the states posited by the intentional stance will somehow be isomorphic with the states posited by a cognitive psychology. This is an assumption made by the language-of-thought hypothesis, for instance. Following Andy Clark (1989), Huebner refers to this assumption as semantic transparency and appeals to research on numerical cognition and thermo-perception in order to cast doubt on it.

The basic insight is that there is no need to assume that the computational structures responsible for the production of complex behavioral dispositions operate over explicitly encoded representations of the sort that we posit from the perspective of the intentional stance so long as the intentional representations can be implemented virtually. (p. 58)

Given this account of cognition and cognitive architecture, Huebner argues that genuine group mentality arises only when groups implement the cognitive architecture found at the level of individual minds and engage in flexible, goal-directed behavior that is robustly predictable from the intentional stance. He turns homuncular functionalism on its head and argues that the "organizational structure" of the individual mind can be writ large within groups. A similar view was proposed by John Biro in 1981, but Macrocognition provides us with a much more detailed story. Huebner contends that the following three criteria must be met by any putative collective mind:

1. There is a computational architecture in the collectivity that consists of a variety of computational subroutines, each of which is dedicated to solving a particular computational task.

2. Each of these subroutines, which are implemented by individuals -- or perhaps groups of individuals -- and the technological apparatuses that they employ in solving particular sorts of tasks, are organized so that their representations are integrated into larger computational structures by way of local interfaces between these subsystems.

3. Each of these interfaces is implemented by a "trading-language" that facilitates the construction of complex representations from local information processing that is (largely) encapsulated and carried on without recourse to the computations responsible for producing representations in other component systems. (p. 84)

Huebner is clear that these criteria will seldom be met. Group mentality is a plausible idea but seldom found, in a robust way, in the real world. But minds, according to Huebner, come in degrees. Certain cognitive systems will be minimal minds because they do not have robust representational states such as beliefs, desires, and intentions. Rather, they form what Huebner calls "pushmi-pullyu" representations. These representations are not decoupled from the immediate features of the environment and lack linguistic structure. Huebner points to the honeybee colony as an example of a minimal mind. Philosophers who attempt to defend group mentality by insisting on the reality of group beliefs and intentions fail to appreciate that the groups we find in our world may be more like hives than human minds.

Part II provides responses to some of the most pressing objections to the idea of group mentality and fills in the details of the proposal outlined in Part I. These are not cursory responses but sustained and careful considerations of arguments. Indeed, at every step Huebner is careful to present the strongest objections to group mentality, to acknowledge the difficulty of overcoming them, and to offer novel and compelling responses. Huebner considers objections motivated by intuitions about self and personhood (minded creatures have a first-person point of view -- a self -- but groups lack such a thing and so can't be minded), objections based on the idea that mindedness is essentially tied to consciousness (groups aren't conscious, so how can they have a mind?), two versions of the explanatory superfluity objection, and objections that appeal to the problem of collective epistemic responsibility, among others. Readers familiar with Robert Rupert's objections to group mind (e.g., 2011 and 2014) will be particularly interested in reading chapters six and seven.

Along the way we are also given examples of candidates for nearly maximal minds. Crime scene investigation teams, for instance, appear to implement an architecture that is widely distributed and involves the compartmentalization of individuals with specific tasks and specializations that produce representations which are then used to produce more complex representations in the form of narratives. Research in high-energy physics is done by large research groups -- Huebner tentatively suggests that these groups might approach maximal minds. The distribution of cognitive labor in these groups results in representations that are transferred through various media to form collective representations which function to adjust the system's behavior (p. 254). Further, these groups seem capable in some cases of misrepresenting the world and have various ways of preventing misrepresentation. They seem to respond to the norms of experimentation established within their community and so appear to be reasons-responsive. But Huebner's discussion of these cases is tempered with a good bit of pessimism. Macrocognition is a modest defense of the idea of group minds. It makes clear that there are serious obstacles to establishing that groups are minded in anything like the way that human beings are.

I've always been attracted to the idea of group minds, so it is difficult for me to find fault with Huebner. In the space remaining I raise a few concerns that I hope will contribute to the critical conversations that this book will no doubt generate. My concerns reveal that I am more optimistic than Huebner about the prospect of defending maximal group minds. First, I want to return to Huebner's discussion of the limits of intentional stance theory. Though I applaud the attempt to deal with the difficult issues of implementation, I think Huebner is too pessimistic about what intentional stance theory can do. He argues that the intentional stance is too coarse-grained to discriminate between real intentionality and as if intentionality, and hence too coarse-grained to distinguish between genuine group mentality and cases where the behavior can be explained by appeal to individual attitudes and some aggregative function.

There are ways, however, to understand the notion of "interpretability" that make the intentional stance a much more fine-grained tool. Davidson, for instance, fleshes out the notion of interpretability in terms of linguistic intelligibility (1975). Of course, this way of doing so might be too fine-grained, as it puts animals outside the class of intentional agents. But it suggests that there are ways of drawing boundaries by appreciating the ways in which the ascriptions we make are embedded in rich interpretive practices -- practices that hold others epistemically and morally responsible, for instance. Our practice of making sense of others involves more than one-off ascriptions made to figure out what a system will do next. It involves rich narratives that position the subject within the realm of intelligibility. We don't just predict and explain the behavior of others; we attempt to understand them. Attributing attitudes to lecterns and dots moving across a page is no more an example of using the intentional stance than swinging a golf club is an example of playing golf. Huebner argues that, on its own, the intentional stance would be unable to discriminate between genuine cases of group mentality and cases where the behavior is simply a result of aggregating individual behavior. But this is too quick. I suspect that if one really looked at attempts to explain the behavior of these groups (say, in the context of an investigation into legal culpability), we would find that the intentional stance breaks down at some point precisely in cases where there is no unified perspective of the group. That is, once we actually look at our practice of making sense of these groups -- a practice that is richer and more complicated than a few attributions of belief and desire -- we find that the intentional stance does actually limit the class of intentional agents. Now, whether groups are clearly within the boundaries of those limits is a different story.
My point here is just that Huebner, and many others (including, I think, Dennett himself), are simply too quick to admit defeat. This isn't to say that we don't need the sort of mechanistic story Huebner provides, but it suggests that intentional systems theory isn't without ontological import.


Huebner's pessimism is present in his discussion of collective epistemic responsibility as well. Although he suggests that certain scientific research groups might meet some of the conditions for maximal mentality, he is skeptical about their ability to meet the requirements for epistemic responsibility. Following Rebecca Kukla (2012), Huebner suggests that because, in large scientific collaborations, no one person seems to have knowledge of the whole project, no one person has the expertise to understand the outcome of the research, and the results of the research are often written by a ghostwriter, knowledge claims are suspect. "If there is no way for the justification of a claim to be located, checked or reproduced, and if no one is really accountable for having made it, it is hard to see why this should count as knowledge at all" (p. 214). Perhaps the group knows? Huebner, like Kukla, is skeptical. Large-scale collaborations in science do not involve a unified community, and both Huebner and Kukla are skeptical that anything like belief could be attributed to such groups. But what seems to be motivating this skepticism is an epistemic internalism that requires for knowledge the ability to give a justification or to have access to reasons. No one person seems to be able to provide a justification or has access to reasons, and so no one, including the group, can be held responsible. But why not take an externalist approach and think of group knowledge as the result of a reliable process? There are various ways to preserve the notion of epistemic responsibility within an externalist theory of knowledge. It may be that large-scale scientific collaboration is, in its current form, not using reliable processes, but this is an empirical question -- one that a science of distributed cognition might one day answer. Indeed, as Huebner notes, his approach to macrocognition paves the way for finding the sorts of processes that would make groups more reliable.

One last point: Huebner takes the idea of group mind as far as it can possibly go within the framework of PDP (parallel distributed processing) theory and computationalism. Much of the most difficult work is done in trying to develop a notion of representation that makes sense of how groups can have internal representations. One might think what is generating all the difficulty here is not the hypothesis of group mind but the theory of mind with which Huebner is operating. The load might be much lighter and the prospects much brighter for those who resist orthodoxy. The thesis would certainly have more traction, for instance, within an enactivist and non-representational theory of mind.

Macrocognition makes a tremendous contribution to social ontology and cognitive science. It introduces new ideas and novel arguments, and it sets the bar high for any scientific theory of group mind. Philosophers who concern themselves with policing the bounds of cognition will find in Huebner a formidable challenge.


Biro, J. 1981. Persons as corporate entities and corporations as persons. Nature and System 3 (September): 173-180.

Clark, A. 1989. Microcognition: Philosophy, Cognitive Science, and Parallel Distributed Processing, MIT Press.

Davidson, D. 1975. Thought and Talk. In S. Guttenplan (ed.), Mind and Language, Oxford University Press.

Gilbert, M. 1989. On Social Facts. Princeton University Press.

Gilbert, M. 1994. Remarks on collective belief. In F. Schmitt (ed.), Socializing Epistemology. Rowman and Littlefield, pp. 235-256.

Gilbert, M. 2013. Joint Commitment: How We Make the Social World. Oxford University Press.

Kukla, R. 2012. "Author TBD": Radical Collaboration in Contemporary Biomedical Research. Philosophy of Science 79 (5): 845-858.

List, C. and Pettit, P. 2011. Group Agency. Oxford University Press.

Rupert, R. 2011. Empirical arguments for group minds: a critical appraisal. Philosophy Compass 6 (9): 630-639.

Rupert, R. 2014. Against group cognitive states. In S. R. Chant, F. Hindriks, and G. Preyer (eds.), From Individual to Collective Intentionality. Oxford University Press.

Surowiecki, J. 2004. The Wisdom of Crowds. Doubleday.

Tollefsen, D. 2002. Organizations as True Believers. Journal of Social Philosophy 33 (3): 395-410.