Human and Animal Minds: The Consciousness Questions Laid to Rest

Peter Carruthers, Human and Animal Minds: The Consciousness Questions Laid to Rest, Oxford University Press, 2019, 220pp., $40.00 (hbk), ISBN 9780198843702.

Reviewed by Jonathan Simon, Université de Montréal

2020.12.02


In this well-argued, engaging book, Peter Carruthers makes a comprehensive case for a global workspace theory of phenomenal consciousness, and considers the upshot for animals: are they phenomenally conscious, and does it matter morally? His answer: there is no fact of the matter about whether animals are phenomenally conscious, but this doesn't change anything morally, because consciousness is not what matters morally.

I proceed by theme rather than chapter: first global workspace theory (chapters 1, 3, 4, 5 and 6), then facts of the matter about animal phenomenal consciousness (chapters 1, 2, 3 and 7), and finally whether phenomenal consciousness matters morally (chapter 8).

According to global workspace theory (GWT), for a content to be conscious is for it to be sustained in working memory, and so to be available to the various cognitive systems that draw on working memory. Carruthers' case for GWT centers on four conditions. Any adequate theory of phenomenal consciousness must meet all four, but the only theories to do so are GWT and the dual content theory -- the theory that Carruthers used to endorse, according to which an aptness to be the object of higher-order thought gives a state a higher-order content alongside its first-order content. GWT is theoretically simpler than the dual content theory, and that simplicity settles matters in GWT's favor. Carruthers used to think that GWT failed one of the conditions, but he has since changed his mind, hence his change in position.

The four conditions:

1) Fineness-of-grain: we can consciously discriminate more than we can non-indexically conceptualize (pp. 16-20, p. 89).

2) Ubiquity of unconscious perception: unconscious perception is widespread even in the ventral system (pp. 61-63).

3) Hard-problem aptness: "phenomenal consciousness can be operationalized in terms of its aptness to give rise to 'hard problem' thought experiments" (p. 19). Consciousness has to be such that there are ways of thinking about it that are apt to give rise to thoughts like "my brain states could have been the same but this might have been different" (ibid.).

4) All-or-nothing: our primary, first-personal concept of phenomenal consciousness is "all-or-nothing" (p. 20): it doesn't admit of degrees. "Any given mental state (in humans, at any rate) is either categorically conscious or definitely unconscious" (p. 141). This is a claim about degrees of truth, not about degrees of intensity or focus or detail. Conscious experiences can be more or less intense, focused or detailed, but if you are having an experience of any degree of intensity, focus or detail, it is true (to degree one) that you are having an experience.

In chapter 4, Carruthers looks at the main rivals to GWT: integrated information theory (Tononi 2007), brainstem integration theory (Merker 2007), fragile short-term memory theory (Block 2007), actual higher-order thought theory (Rosenthal 2005) and dispositional higher-order thought theory, a.k.a. dual content theory (Carruthers 2000). He argues that integrated information, brainstem integration and fragile short-term memory theories all violate the all-or-nothing, ubiquity-of-unconscious-perception and hard-problem-aptness conditions (he does not explicitly say that fragile short-term memory violates hard-problem aptness, but his later remarks entail that he thinks so). Actual higher-order thought theory can accommodate those three conditions, but it cannot accommodate fineness-of-grain. Dual content theory accommodates all four conditions, but GWT is theoretically simpler (pp. 73-95).

Chapter 5 argues that GWT meets the fineness-of-grain (p. 98), ubiquity-of-unconscious-perception (pp. 98-116), and all-or-nothing conditions. On this last condition, the critical point is that a step-function underlies entry of a content into the global workspace (p. 98).

Chapter 6 argues that GWT underwrites hard-problem aptness. The picture is that phenomenal concepts are complex demonstratives that we use to target contents sustained by attentional focus in working memory. These concepts are higher-order (and so sustain intelligible hard-problem questions) because their descriptive component includes a concept of perceptual state (one that does not differentiate conscious from unconscious) furnished by the mind-reading system, itself a consumer system of the global broadcast.

That is the argument for GWT. I turn my attention now to the argument that there are no facts of the matter about animal consciousness. This begins in chapter 2, with a survey of recent research in animal cognition. The gist: while a central mental workspace is strongly conserved across species (p. 34), and functions like reasoning and reflection (pp. 36-9) and executive function and inhibition (pp. 40-2) are realized in the animal kingdom, there are critical differences that may be relevant to the question of phenomenal consciousness. For example, Carruthers argues that meta-cognition depends on mind-reading, and there is little evidence of the latter outside of the primate line (pp. 43-6).

Chapter 3 argues that we need to settle on a theory of consciousness to settle whether consciousness is ubiquitous in the animal kingdom. The chapter criticizes Tye's (2016) case that the ubiquity of consciousness in the animal kingdom is settled prior to theory by Newton's principle, the claim that (ceteris paribus) similar effects have similar causes. Carruthers objects that the principle is useful only once we have settled on which of our own behaviors are caused by conscious rather than unconscious perception, and this is precisely what is up for grabs in debates between theories of phenomenal consciousness.

Chapter 7 contains the central argument that GWT entails no facts of the matter about animal consciousness. The problem is that "global workspace" is not a precise notion. Which functions are we talking about, exactly? Which consumer systems, exactly, have to be involved? Carruthers first considers two proposals for determining the facts of the matter and argues that they fail. He then advances two direct arguments that there are no facts of the matter.

The first (and I think most plausible) proposal that Carruthers considers for settling the facts of the matter is that we look for the most salient natural kind in the vicinity of the global workspace. To this end Carruthers identifies two options, the functional role of providing "a (virtual) 'center' for the mind, enabling multiple subsystems to have their activities coordinated around a single set of representations" (p. 144), and a biologically individuated sub-kind: "attention-like networks . . . that play the 'centering' role" (p. 146).

Neither option leads to an absolutely precise notion, but both refine the borderline and push it well out into the animal kingdom, yielding a healthy menagerie of determinately conscious animals. But Carruthers argues that neither of these natural kinds is what "do[es] the explanatory work" in reducing phenomenal consciousness (i.e., underwriting hard-problem-aptness), and he argues that Kripke (1980) taught us that terms referring to phenomenally conscious states aren't used as natural kind terms (pp. 146-8).

Next, Carruthers considers stipulating a categorical boundary: saying that something is conscious just in case its global broadcasting architecture is more similar to the architecture that underlies human phenomenally conscious experience than it is to architecture that underlies unconscious forms of perception in humans. But this approach would leave us with more indeterminacy, rather than less (pp. 148-52).

Carruthers then advances two direct arguments that there are no facts of the matter about animal consciousness. The "negative semantic argument" (7.4) is a familiar use-underdetermines-reference argument: it moves from the premise that "there is nothing in the intentions with which first-person phenomenal concepts . . . are employed that settles how similar the functional role of an animal's perceptual states must be to human global broadcasting" (p. 153) to the conclusion that nothing settles the matter.

The "positive semantic argument" (7.5) sketches a counterfactual semantics for ascriptions of first-personal phenomenal concepts to other beings, then notes that this semantics does not yield determinate verdicts when those other beings are of other species. When is a statement like "Creature C has perceptual states that are this-E" true (where "this-E" is Carruthers' notation for the maximally general first-personal indexical phenomenal concept of experience)? Carruthers suggests that in asking this I ask something like: "if I were to be aware of C's perceptual states, would I judge them to be this-E?" He refines this as: "if the dispositions-to-judge that underlie my use of the concept this-E were to be instantiated in creature C, then the creature would judge that some of its states are this-E" (p. 157).

But this semantics will render most ascriptions of phenomenal consciousness to animals indeterminate. The problem is this. Take goldfish G. Find the nearest world with a counterpart, G*, of G, that satisfies the antecedent of the counterfactual: i.e., that has the relevant dispositions-to-judge. The nearest-world way to have the relevant dispositions-to-judge involves having a global workspace and the human meta-cognitive capacities that go on top of it. So G* will be in a position to judge (correctly) that some of its states are indeed this-E. By this semantics it follows that G is actually conscious, irrespective of how things actually are with G. That was too easy! Worse, take some given state, S, of G. The counterpart of S in G* -- call it S* -- may well be within G*'s global workspace. But there might be some other equally nearby world with some other counterpart of G, G#, also equipped with the relevant dispositions-to-judge, and the counterpart of S in G# -- call it S# -- might not be in G#'s global workspace. So there is no fact of the matter as to whether S is actually a phenomenally conscious state of G (pp. 155-61). It is unclear whether Carruthers thinks we should accept this result or conclude that there is no way to make sense of ascriptions of animal consciousness, but either way the facts give out.

Finally, in chapter 8 Carruthers considers the moral upshot of the foregoing. He also acknowledges something that the reader may have suspected all along: it is not only animals about whose consciousness there is no fact of the matter, but also humans without a full global-broadcasting architecture, like infants and the severely cognitively impaired (pp. 179-183).

The book's position on the significance of phenomenal consciousness is more moderate than its subtitle suggests. The aim of chapter 8 is not to lay to rest the thought that consciousness is relevant to morality, but rather to show that there are "at least reasonable accounts of what makes human pains and desire-frustrations bad that enable us to pull those things apart from phenomenal consciousness, in a way that might equally apply to the case of an animal" (p. 174). These reasonable accounts include preference utilitarianism with preferences construed functionally rather than phenomenally (pp. 172-3) and an account based on the idea that one can appropriately feel sympathy for animals based on an understanding of their (functionally construed) wants and needs (pp. 174-6). But this is consistent with there being other reasonable accounts, like hedonic utilitarianism, on which consciousness is what matters (p. 173). Carruthers does not (here) argue against views like that; he just argues that they will entail that sometimes there is no fact of the matter whether a given state (e.g., an indeterminately conscious pain) is bad or not (p. 171).

I confess to having been somewhat surprised here. The way the book builds up to it, you expect chapter 8 to present an argument from the indeterminacy of phenomenal consciousness in animals to the conclusion that it isn't what matters morally. The indeterminacy claim is bound up with the idea that, once you know what all of the candidate cognitive kinds are, you know everything there is to know: it would be misguided for you, as a cognitive scientist, to continue to ask, "yes, but which one is consciousness?"

Why doesn't Carruthers conclude that the same goes for moral theorists? And does he think there are genuine problems for theories compelled to accept that what matters is sometimes indeterminate, or is this merely a curiosity? I hope to hear more.

Now everything is on the table. How does it all fit together?

Let's start with the tension between the all-or-nothing condition and the later conclusion that often there are no facts of the matter. Even in his most precise statement of it (e.g., p. 141), Carruthers suggests that the all-or-nothing condition applies to all humans. But later (pp. 179-83) he acknowledges that for infants and the cognitively impaired there are sometimes no facts of the matter. Are infants and the cognitively impaired therefore such that their consciousness is all-or-nothing, even though there is no fact of the matter about it? Trouble either way: if the consciousness of infants and the cognitively impaired is not all-or-nothing, then the all-or-nothing condition is violated. On the other hand, if the consciousness of infants and the cognitively impaired is all-or-nothing, even when there are no facts of the matter, then why can't Carruthers' opponents just say the same thing about cases where he accuses them of violating the all-or-nothing condition because there seem to be no facts of the matter?

To motivate this latter reading, we might propose that "all-or-nothing" means not vague and "no fact of the matter" means indeterminate. I, for one, was tempted to read things this way, since I have argued (Simon 2017; see also Antony 2008) that the concept 'phenomenally conscious creature' cannot be vague, though it can be indeterminate in other ways (vagueness is semantic; other indeterminacies are meta-semantic). If I am correct, rivals to GWT can accommodate the intuitions behind the all-or-nothing condition and agree with GWT about which cases are vague (namely: none), restricting disagreement (and the relevance of the step-function) to the question of which cases are indeterminate without being vague.

That being said, it does seem to be a theoretical virtue of GWT that it makes entry into consciousness a step-function. The question is how, exactly. Here is a suggestion. Why not construe the all-or-nothing condition as: "in a determinately conscious being, each contentful state is either determinately conscious or determinately not conscious"?

So construed, there is no tension with the claim that some beings (e.g., animals and infants) are not determinately conscious. And it is something that theories that support a step-function for entry into consciousness can accommodate (given a coarse-grained enough time bin, anyway, since even a neural step-function is gradual on a fine enough timescale) while other theories, perhaps, cannot.

The question is whether Carruthers can defend this claim. In my experience, it is harder to establish an absence of all forms of indeterminacy than it is to establish the absence of a specific form like vagueness, and I worry that the first-personal considerations that Carruthers musters for the all-or-nothing condition aren't sufficient for the more ambitious claim. But I don't take the matter to be settled.

Here's another worry, this time about the hard-problem-aptness condition. Carruthers' motivation for this condition is methodological: we presuppose it in fixing our subject matter. The thing we want a theory of is the thing we can imagine zombies lacking, whatever it is. I concur. But Carruthers gives this condition a lot of work to do. First, he suggests that theories on which consciousness overflows access cannot meet it. Later, he suggests that it justifies us in ruling out certain natural kinds as the kinds that our global workspace theories of consciousness are ultimately theories about.

These things don't follow. Carruthers seems to presuppose an austere use theory of meaning according to which "the extensions of one's phenomenal concepts get fixed by one's classificatory dispositions when employing them" (p. 125). But there are other meta-semantic accounts, equally consistent with Kripke's (1980) observation that we do not explicitly intend to refer to natural kinds when we deploy phenomenal concepts. According to these other accounts, factors beyond our usage dispositions shape what our concepts pick out. After all, as Kripke (1982) points out, our usage dispositions fall short of determining the referents we generally take our words to have. One such account is reference magnetism, the idea that reference can gravitate toward more natural candidate referents (Sider 2011). If this is correct, then even if phenomenal concepts are precisely what Carruthers says they are, they may nevertheless designate the most salient natural kind in the vicinity. As such, there can be facts of the matter even if use underdetermines meaning, so we don't need a counterfactual/use-dispositional semantics to determine whether phenomenal concepts apply to animals. Admittedly, we might be uncertain about which natural kind turns out to be the one toward which reference gravitates, but that suggests ignorance rather than indeterminacy. For all we know, it is a specific fragile short-term memory recurrent loop structure, or the class of attention-like networks that play the 'centering' role. Thus I don't see that the hard-problem-aptness condition supports an argument against the fragile short-term memory theory, or an argument for the conclusion that animal consciousness is radically indeterminate.

Be that as it may, my conclusion is that this is an excellent book, written with Carruthers' characteristic insight, lucidity, and open-mindedness. Everyone should read it. But the consciousness questions are still alive and kicking.

ACKNOWLEDGMENTS

Thanks to Peter Carruthers and David Chalmers for comments.

REFERENCES

Antony, Michael V. (2008). Are our concepts CONSCIOUS STATE and CONSCIOUS CREATURE vague? Erkenntnis 68 (2):239-263.

Block, Ned (2007). Consciousness, Accessibility, and the Mesh between Psychology and Neuroscience. Behavioral and Brain Sciences 30 (5):481-548.

Carruthers, Peter (2000). Phenomenal Consciousness: A Naturalistic Theory. Cambridge University Press.

Kripke, Saul (1980). Naming and Necessity. Harvard University Press.

Kripke, Saul (1982). Wittgenstein on Rules and Private Language. Harvard University Press.

Merker, Bjorn (2007). Consciousness without a cerebral cortex: A challenge for neuroscience and medicine. Behavioral and Brain Sciences 30 (1):63-81.

Rosenthal, David (2005). Consciousness and Mind. Oxford University Press.

Sider, Theodore (2011). Writing the Book of the World. Oxford University Press.

Simon, Jonathan (2017). Vagueness and zombies: why 'phenomenally conscious' has no borderline cases. Philosophical Studies 174 (8):2105-2123.

Tononi, Giulio (2007). The information integration theory of consciousness. In Max Velmans & Susan Schneider (eds.), The Blackwell Companion to Consciousness. Blackwell. pp. 287-299.

Tye, Michael (2016). Tense Bees and Shell-Shocked Crabs: Are Animals Conscious? Oxford University Press.