Is consciousness vague? It is certainly sometimes difficult to decide which creatures are conscious. Take earthworms. They move around purposefully all right. Yet they have rudimentary nervous systems and a narrow repertoire of behaviours. Who is to say whether they are conscious or not? Or consider the fast-advancing world of linguistically intelligent computer bots. I don’t expect it to be long before serious commentators start disagreeing on whether some of these artificial systems qualify as conscious beings, without any obvious way of resolving the question.
Still, is it consciousness itself that is indeterminate, or just our ability to decide about it? It isn’t easy to make sense of the idea that consciousness itself is vague. Either the light is on, we feel, or it isn’t. The glow of consciousness might be dim in some cases, but a dim light is still a definite light, not a borderline light-or-non-light. And on reflection it seems hard to imagine a genuinely borderline case of consciousness. This suggests that any indeterminacy about marginal cases of consciousness must lie in our limited epistemic access to the minds at issue, not in those minds themselves.
Yet at the same time there seems to be good reason to suppose that consciousness itself, and not just our knowledge of it, has to be indeterminate. If physicalism is true, as many contemporary philosophers suppose, then consciousness must be determined by some kind of physical state. Suggestions for the physical grounds of consciousness include global workspace activity, neural oscillation frequencies, re-entrant neural loops, integrated information processing, action-guiding mental representation, and so on. If any theory along these lines is correct, however, there will inevitably be borderline cases of consciousness itself. Global activities and the like come in degrees, and so any sharp cut-off point specifying exactly when we have enough for consciousness would be arbitrary. The obvious implication is that some mental states are indeterminate between being conscious and not.
This paradox of vague consciousness has recently been receiving increasing attention from philosophers of mind, so it is timely that Michael Tye has made it the focus of his latest book. His proposed resolution, though, is likely to surprise many of those familiar with his previous work. Tye argues that the only way to unravel the conundrum is to embrace panpsychism. As he sees it, borderline cases of consciousness do indeed make no sense. But this isn’t because a sharp line is found somewhere as we move from non-conscious physical systems to conscious ones. Rather it’s because no such line exists at all. Even the most basic constituents of physical reality are already endowed with consciousness.
While Tye is happy to describe his new view as a variety of panpsychism, it differs in significant ways from existing species of the genus. In particular, Tye takes pains to distance himself from the Russellian monism that underlies most contemporary versions. Not that he objects to the Russellian’s underlying metaphysics of “quiddities”, categorical properties that contingently fill the basic dispositional roles articulated by physical theories. Quiddities play a role in Tye’s own version of panpsychism. But Tye has no time for Russellian explanations of the kinds of conscious states enjoyed by humans. Russellians take these states to be identical to, or grounded in, certain arrangements of the fundamental physical quiddities. But the presence of any such physical arrangements, Tye objects, will once more be a matter of degree, leaving the Russellian monists no better able to account for the sharpness of consciousness than the physicalists.
Tye’s own solution to the paradox involves a distinction between two kinds of consciousness. At bottom he posits consciousness*, a basic contentless what-it’s-likeness that characterises the categorical basis of physical simples like fermions and bosons. The consciousness possessed by humans and other complex beings is more articulated, however. It results when consciousness* is transferred from our basic physical constituents to any representational states that are “poised” to guide action and deliberation. Not all complex physical systems possess such macroscopic consciousness. Rocks and trees, for instance, lack any poised representational states. So the consciousness* of their basic constituents is not transferred to any of their macroscopic states. By way of analogy, Tye cites the way that the individual elegance of all a play’s acts might, but need not, result in the elegance of the whole. Such a transfer of elegance depends on the way the parts are arranged into the whole. Similarly, not all arrangements of physical parts that are individually conscious* need yield a whole which is conscious*. This happens only when the wholes are poised representations, and not otherwise.
This now gives us Tye’s resolution of the paradox. Consciousness* is sharp. In particular, it is a determinate matter whether or not consciousness* gets transferred from physical simples to the macroscopic states of humans and other beings. There are thus no borderline cases of consciousness*. Yet nothing in all this violates the spirit of physicalism. True, we are now taking consciousness* to contribute to the nature of physical simples. Still, it is a uniform feature of all physical simples, and does nothing to undermine the operation of physical law.
To be frank, I found it hard to see why Tye took his position to evade the objections he himself makes to orthodox physicalism and Russellian monism. However, before turning to that, I would like to comment on the role that representationalism plays in Tye’s new theory. Tye has long defended a representationalist account of consciousness, equating the conscious character of experiences with their representational contents. This account is not eclipsed by the new theory but incorporated into it. Tye uses his representationalism to explain how different macroscopic conscious states will each have their own phenomenology, despite the fact that any consciousness* attached to them only contributes a uniform unstructured what-it’s-likeness. The distinctive consciousness involved in seeing something red, for example, derives not just from this visual state being conscious*, but also from the redness that it represents.
Tye accordingly devotes more than a third of this slim volume to the current status of his representationalism. He revisits topics such as emotions, blurry vision, cognitive phenomenology, and hallucinatory truth conditions, and elaborates his latest thoughts about how they might best be accommodated within his approach. Strangely, though, Tye does little to address the more fundamental difficulties facing his theory.
Tye endorses a version of a naturalist tracking theory of representation. A mental state realised by some internal neural vehicle will represent whatever property P that vehicle appropriately co-varies with (66, 90). Such represented Ps will paradigmatically be distal environmental properties, such as the colours and shapes of ordinary objects. The consequence, for Tye, is that the distinctive conscious character of a representational state will be fixed, not by the neural make-up of its internal vehicle, but by whichever distal state that vehicle happens to co-vary with. If evolution had wired up that same neural vehicle to represent a different distal P, it would have felt consciously different, and if it had wired up a different neural vehicle to represent the same P, that would have felt consciously the same.
This radical externalism about consciousness is not incoherent, but it is certainly strange. Even so, Tye does not defend it explicitly. Instead, he focuses on his main motivation for the externalism, namely, the supposed “transparency” of sensory experience, the idea that distal properties like colours and shapes are the (only) properties that we are aware of in sensory experience.
This supposed transparency, however, has its own difficulties. Consider a case where we suffer an illusion that some object is red. For Tye, this illusory experience will have the same conscious character as a veridical experience of a red object. But nothing is actually red in the illusory case. How then can the subject of the illusion be aware of redness?
Tye does address this difficulty (65–6), but his answer only seems to deepen the problem. He says: “Agreed: you cannot attend to what is not there. But on my view there is an un-instantiated quality there in the bad cases.” In order to help us understand how an un-instantiated property can be “there” in illusory experiences, Tye offers an analogy with measuring instruments. A faulty speedometer, he observes, can falsely represent that a car is going at 60 mph, even though that speed is not locally instantiated. I didn’t see how this helps. Of course, speedometers, along with experiences, can represent properties that are not locally instantiated. The issue, however, is whether illusory experiences do so by literally having the uninstantiated properties within them (“there is an un-instantiated quality there”). Given that nobody, I take it, would want to say that the uninstantiated property of going at 60 mph is literally inside the faulty speedometer, the analogy seems to backfire. In the end I came away feeling that Tye’s representationalism goes beyond strangeness to incoherence, and that his account of the distinctive phenomenologies of different macroscopic conscious states collapses with it (cf. Papineau 2014, 2016, 2021).
Let me return to the idea that macroscopic consciousness of any kind depends on consciousness* being transferred from microscopic simples to the macroscopic whole. As I said, I didn’t see why this story avoids the charge that Tye levels at physicalism and Russellian monism. Tye objects to these options on the grounds that they cannot identify any non-arbitrary arrangement of their microscopic primitives that might constitute or ground a sharp distinction between conscious and non-conscious macroscopic systems. Yet Tye himself seems to face a quite analogous difficulty. True, Tye does not take macroscopic consciousness* to be constituted by or grounded in arrangements of underlying parts. But he does take consciousness* to be “transferred” from the parts to the whole specifically when the parts are arranged into representational states that are “poised” to make a cognitive difference. And, as Tye himself admits (89), it can be a vague matter whether parts are so arranged. So, again, it would seem quite arbitrary which borderline arrangements of this sort get blessed with the glow of consciousness* and which do not. I couldn’t see why such arbitrariness should be fatal to physicalists and Russellian monists but not to Tye himself.
Given how much trouble it causes him, it is surprising that Tye is so quick to commit himself to the thesis that consciousness is sharp. Without this assumption, all his problems would disappear. We could equate consciousness with poised representation, or some such, and simply accept that it is vague, with the undoubted borderline cases of poised representation constituting borderline cases of consciousness. Marginal cases of consciousness would become no more problematic than marginal cases of behaviour, say, or intelligence, and all the convolutions of panpsychism and consciousness* would fall away.
Tye spends very little time exploring this horn of his dilemma. He claims that we cannot conceive of any borderline cases of consciousness, and that the best explanation for this is that consciousness is sharp. I found this less than compelling. It is true that it is not straightforward to conceive of borderline conscious cases. But there are arguably ways of accounting for this even on the hypothesis that consciousness is vague. Given all the difficulties that the alternative hypothesis occasions, it would seem worth considering them.
Thomas Nagel observed long ago that there are different ways of imagining conscious states. We can imagine them sympathetically, from the inside, as it were, or we can imagine them from the outside, perceptually or symbolically. Tye does not stop to distinguish these internal and external perspectives, but let us take them in turn.
Nagel says that when we sympathetically imagine a conscious state, we “put ourselves in a conscious state resembling the thing itself”. Consider someone trying in this way to imagine a borderline conscious state. If the target of their imaginings were borderline conscious, then presumably their imagining, in “resembling the thing itself”, would also only be borderline conscious. Perhaps, then, this offers one response to Tye’s claim that “we cannot give any clear examples of borderline cases of consciousness” (15). Even if such borderline cases existed, our attempts to bring them sympathetically to consciousness would themselves at best be borderline successful.
What about conceiving a borderline conscious state from the outside? It is not immediately obvious why this should be a difficulty. Suppose I think about or perceptually imagine a borderline case of, say, a poised representational state. Why should that not amount to conceiving a borderline case of consciousness? Tye will counter that our concept of consciousness is sharp, so consciousness can’t just be a matter of poised representation. But where did that come from? Consider the concept of life. In the nineteenth century, many would have held that there cannot be borderline cases of life, on the grounds that life involves a vital force, and you either have it or you don’t. However, we now reject the metaphysics behind this attitude, and are happy to recognize viruses and other entities as borderline alive. Perhaps our conviction that consciousness is sharp might similarly rest on a dubious dualist metaphysics.
Tye does explicitly consider the analogy with life, but rejects it. He argues that a similar evolution in our concept of consciousness would mean that “consciousness, that is, experience or feeling never existed”, which “seems prima facie absurd (unlike the idea that there never was any vital force)” (17). However, this seems to assume what it needs to show. The suggestion at hand is not that consciousness might turn out not to exist, but rather that our current thinking mistakenly associates it with some extra non-physical stuff, and that this is the source of our misguided conviction that it must be sharp. Tye does nothing to rule out this possibility.
Clearly there is much more to say about the idea that consciousness itself is vague. It is a pity that Tye does not explore this option at any length. By committing himself so quickly to the sharpness of consciousness, he has arguably put his money on the wrong horse and so landed himself in an unnecessary philosophical tangle.
Hall, G. 2022. “Is Consciousness Vague?” Australasian Journal of Philosophy. DOI: 10.1080/00048402.2022.2036207.
Nagel, T. 1974. “What Is It Like to Be a Bat?” Philosophical Review 83: 435–50.
Papineau, D. 2014. “Sensory Experience and Representational Properties.” Proceedings of the Aristotelian Society 114: 1–33.
Papineau, D. 2016. “Against Representationalism (about Conscious Sensory Experience).” International Journal of Philosophical Studies 24: 324–47.
Papineau, D. 2021. The Metaphysics of Sensory Experience. Oxford: Oxford University Press.
Tye understands Russellian monists to be committed to such a distinction (21).
Nagel (1974), footnote 11, 445–6.
See Geoffrey Hall (2022) for a detailed exploration of ways in which vagueness in consciousness itself might percolate upwards to block definite knowledge of borderline cases.