Consciousness as Complex Event: Towards a New Physicalism


Craig DeLancey, Consciousness as Complex Event: Towards a New Physicalism, Taylor & Francis, 2022, 186pp., $170.00 (hbk), ISBN 9781032334509.

Reviewed by Kelvin J. McQueen, Chapman University


In Consciousness as Complex Event: Towards a New Physicalism, Craig DeLancey argues that what makes conscious (or “phenomenal”) experiences mysterious and seemingly impossible to explain is that they are extremely complex brain events. This claim is then used to debunk the most influential anti-physicalist arguments, such as the knowledge argument, and a new “complexity-based” way of thinking about physicalism is said to emerge.

Brain complexity has been appealed to before, to try to explain why consciousness seems so intractable. DeLancey’s approach is distinctive in two ways. First, he does not use some vague and informal notion of complexity. Instead, he uses the formal notion of Kolmogorov complexity, which he refers to as descriptive complexity. Second, this notion is intended to do most of the heavy lifting. In particular, DeLancey is clear that he does not wish to supplement his complexity-based defense of physicalism with other existing strategies (the phenomenal concepts strategy, the ability hypothesis, knowledge by acquaintance, etc.; 23). The central claim is that “what makes a phenomenal experience mysterious is its [descriptive] complexity” (21).

From the perspective of algorithmic information theory (which studies Kolmogorov complexity), this central claim is puzzling. For it seems to suggest that if something has sufficiently high Kolmogorov complexity, then we will find it mysterious and difficult to explain. But this is not so. Consider a simple example, which involves a physical property with arbitrarily large Kolmogorov complexity, but which poses no special explanatory difficulty. We send a single photon of light towards a half-silvered mirror, so that there is a 50% chance that it will be reflected and a 50% chance that it will be transmitted (pass through). We place measurement devices along these two paths, so that we know which of the two paths the photon took. Now we repeat this process N times. The Kolmogorov complexity of the outcomes of these experiments is (approximately) N, with high probability. But there is no great mystery here. Nor does quantum theory suddenly fail to explain these outcomes just because N has gotten large. Large Kolmogorov complexity therefore does not predict whether we find something mysterious or inexplicable. If it did, we would find slot machines mysterious and inexplicable!
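The contrast can be illustrated computationally. Kolmogorov complexity is uncomputable, but the output size of a general-purpose compressor gives a rough upper bound on it, and that is enough to see the point: the pooled photon outcomes are essentially incompressible (complexity close to N), yet nothing about them is mysterious. Here is a minimal sketch in Python; the use of zlib as a complexity proxy is my illustration, not DeLancey’s:

```python
import random
import zlib

random.seed(0)  # fixed seed so the sketch is reproducible

N = 100_000  # number of photon experiments (one bit per outcome)

# Simulated outcomes: N independent fair coin flips, packed 8 per byte
# (0 = transmitted, 1 = reflected).
outcomes = bytes(random.getrandbits(8) for _ in range(N // 8))

# A perfectly regular string of the same length, for contrast.
regular = bytes(N // 8)

# Compressed size is a computable upper bound on descriptive complexity.
c_outcomes = len(zlib.compress(outcomes, 9))
c_regular = len(zlib.compress(regular, 9))

# The random outcomes barely compress (complexity near the raw N bits),
# while the regular string collapses to almost nothing.
print(c_outcomes, c_regular)
```

Despite its near-maximal descriptive complexity, the first string is fully explained by quantum theory plus one short statement of the repeated initial conditions, which is exactly the difficulty for the central claim.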

In chapter one, DeLancey outlines some principles governing scientific explanation. These are used to argue that it is the large Kolmogorov complexity of consciousness that makes it mysterious and difficult to explain. Perhaps the most important is the incompressibility cost principle:

The incompressibility cost principle is the observation that, if we want to use a theory T to explain a very complex phenomenon P, where the descriptive complexity of P is much greater than the complexity of T (which we write C(P) >> C(T)), then we shall need additional information of complexity at least of the quantity C(P) - C(T). (28)

Note that this principle is about explaining a phenomenon. Note further that it is supposed to hold generally, and not just in the case of explaining experience. So, if P is the outcomes of our photon measurements and T is quantum theory, then since C(P) >> C(T) for large N, the principle predicts that we will need to add a lot to quantum theory to explain the outcomes. But each photon experiment has identical initial conditions, and so no matter how large N is, those initial conditions only need to be described once. It seems quantum theory and these simple initial conditions explain the outcomes of such experiments well enough, for any N, without need of addition. The incompressibility cost principle therefore does not seem plausible. Why, then, does DeLancey find it plausible?

The answer to this question is found deep in an appendix to chapter one, where a “proof” of the cost principle is offered. But here “explanation” is dropped from the definition of the cost principle:

The incompressibility cost principle is that a description d that is very much more complex than a theory T cannot be identified as a consequence of T without the addition of information at least as great as the difference between the complexity of T and d. (33)

Even if this weakened principle were true, we could only infer the stronger principle (about explanation) by assuming that “any explanation will require that we at least be able to produce or identify the relevant description” (34). But this is an awfully strong constraint to place on an explanation (one that is reminiscent of traditional deductive-nomological accounts of explanation). Indeed, the constraint is impossible for a non-deterministic theory like quantum theory to satisfy, since the laws of quantum theory together with complete initial conditions do not in general allow one to “identify” or “produce” the outcomes of measurements. One would have liked to see some defense of this assumption in light of the contemporary literature on explanation. Instead, its supposed obviousness is used as a reason not to engage with that literature at all:

While the notion of explanation is contentious, an explanation cannot be both adequate and also be unable to identify the phenomenon it aims to explain; so here again the incompressibility cost principle renders another fine-grained debate irrelevant: we do not need to settle what the proper account of explanation is. (35)

Either way, DeLancey does not successfully prove the weakened incompressibility cost principle. DeLancey claims that it is “a straightforward corollary of a result that we can call ‘the incompressibility result’” (31). But the proof that is offered (sec. 1.9.3) does not use this result. On page 34 the proof takes the following reductio form (which for simplicity I paraphrase):

Assume (1) d can be identified by some resources that include T and (2) d’s complexity is greater than the complexity of these resources. But then we have a contradiction because “by the definition of descriptive complexity” if (1) is true then (2) is false.

But descriptive (Kolmogorov) complexity does not yield this result. Even assuming a deterministic theory, the bound depends crucially on the time t (Li and Vitányi, 2008, sec. 3.4). Let C(F) and C(I) be the complexities of the final and initial conditions respectively, and let C(T) be the complexity of the theory’s laws (T); then we have:

C(F) ≤ C(I) + C(T) + C(t) + k

Here C(t) is upper-bounded by approximately log(t) and is close to log(t) for most t. (k is an additive constant that is not important here.) So, for deterministic theories, the weakened cost principle is only applicable for small times. For larger times, the final state’s complexity may go well beyond the complexity of the initial state and laws. What’s worse, F is the state of the entire universe at time t. So, it is hard to see how this applies to open systems like brains. For even if this final state has small Kolmogorov complexity (because C(I), C(T), and C(t) are small), the state of a small subsystem (say, a brain) might still have very large Kolmogorov complexity and so be more complex than the full universe. That is, even though the universe has small C, subsystems might have large C (Tegmark, 1996). Kolmogorov complexity therefore does not seem like the right complexity measure to use here.
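A deterministic toy example in the same spirit as Tegmark’s point is the center column of the Rule 30 cellular automaton: the global description (“Rule 30, one live cell, t steps”) costs roughly C(T) + C(I) + log(t) bits, yet the column subsystem it determines looks incompressible. The following Python sketch, again using compressed size as a crude stand-in for Kolmogorov complexity, is my illustration rather than anything in the book:

```python
import zlib

def rule30_center_column(steps):
    """Evolve Rule 30 from a single live cell; record the center cell at each step."""
    width = 2 * steps + 3          # wide enough that boundary zeros never matter
    row = [0] * width
    row[width // 2] = 1            # initial condition: one live cell
    column = []
    for _ in range(steps):
        column.append(row[width // 2])
        # Rule 30 update: new cell = left XOR (center OR right)
        row = [row[i - 1] ^ (row[i] | row[i + 1]) if 0 < i < width - 1 else 0
               for i in range(width)]
    return column

steps = 1_000
column = bytes(rule30_center_column(steps))

# The whole configuration is described by a few bytes (rule number, initial
# cell, t), but the center-column subsystem appears incompressible:
compressed = len(zlib.compress(column, 9))
print(compressed)
```

The compressor needs on the order of one bit per step for the column, even though the full configuration containing it has a description of only a few bytes plus log(t), so a small subsystem can carry far more apparent complexity than the deterministic whole that generates it.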

But let’s continue and see how all this is used to respond to the knowledge argument against physicalism. The argument can be simply stated as follows: (1) Before leaving the black-and-white room, Mary has all relevant physical information (about the human optical system). (2) After leaving the room, Mary learns new information (regarding what it is like to see red). (3) Therefore, not all information is physical information. (4) Therefore, physicalism is false. (See p101 for DeLancey’s preferred formulation.)

To understand DeLancey’s response to the knowledge argument, we need some definitions:

T := the physical theory of color vision that Mary learns about in the black-and-white room.

Ht2 := a description of all the relevant physical features of an agent who is experiencing red (e.g., Mary, when she leaves the room).

Ht1 := a description of all the relevant physical features of that agent’s brain, prior to having the experience (e.g., Mary right before leaving the room).

L := the upper limit on the information that Mary can manipulate (which we can take to be the storage capacity of Mary’s theoretical understanding).

DeLancey’s argument is then as follows:

Given these definitions [. . .] it could be that (1) T and Ht1 are a fully sufficient physical account of color vision because these entail the relevant subsequent state of the agent in question (T and Ht1 entail Ht2) and (2) T and Ht1 explain Ht2 in such a way that if we did have a theoretical grasp of both T and Ht1 we would find no reason to believe that something was “left out” of the theory (that is, we would not accept the conclusion of the knowledge argument), but (3) L << C(T) and L << C(Ht1). (105)

The moral is: from studying T and Ht1 in her black-and-white room, Mary may well not be able to know what it is like to see red, but only because she was not able to manipulate such complex information in the right way, given her cognitive limitations L. The claim that T and Ht1 must be very complex follows from the incompressibility cost principle and the premise that what it is like to see red (and so, Ht2) is very complex. I have raised concerns about the cost principle that I think readers of this book need to be aware of. Let us now consider the premise that consciousness is very complex.

Chapter two argues that while conscious experiences are very complex, our current theories of consciousness are much less complex, and so cannot capture enough of these experiences to explain them. DeLancey illustrates this with the experience of fear and argues that a proper description of it would involve all sorts of complicated processes, from neural activity, to heart rate increase, to the slowing of digestion (40). There is no attempt here to estimate the Kolmogorov complexity of these processes, nor of any theories of consciousness. So, these claims are hard to evaluate.

DeLancey briefly considers the integrated information theory of consciousness (IIT)—which is one of the leading contemporary scientific theories of consciousness, and which I think is a very interesting case study in this context. (For example, IIT inspired the PCI (or “zap and zip”) measure of consciousness, which considers the compressibility of EEG data.) There are some misunderstandings about IIT here. For example, DeLancey states that IIT “has as a consequence that phenomenal experiences are very complex” (xvii). But this isn’t true in general under any notion of complexity. IIT models particular experiences (and distinguishes particular kinds of experience) in terms of their “Φ-structures” (or “Q-shapes”), and the Φ-structures of certain simple systems, like feedback dyads, are very easy to describe; their derivation consumes only a couple of pages (see Chalmers and McQueen, 2022; McQueen and Tsuchiya, 2023; and McQueen, Durham, and Müller, in preparation, for examples of such derivations). We are already in Mary’s situation with respect to the dyad: we know its complete Φ-structure and how it was derived, yet we still don’t know what it’s like to be such a thing. On the other hand, human experiences are what is relevant here, and their Φ-structures are not possible to derive exactly, as such derivations are computationally intractable. Could this point help complexity-based defenses of physicalism?

DeLancey appears to think so, as he goes so far as to say that “IIT entails the complexity of consciousness claim, and all the arguments of this book are consequences of IIT” (50). On the contrary, I think IIT (if true) strengthens the knowledge argument against complexity-based responses. Using the IIT 3.0 formalism, we know exactly what the complete Φ-structure of a simple system like a feedback dyad looks like: it is just a set of weighted points in a high-dimensional vector space, where the number of dimensions is related to the size of the system’s state space. So, if IIT gives the correct physical description of red experience, then we know enough about what Mary knows in her black-and-white room, without knowing the specifics, or so it seems to me. The Φ-structures for what it is like to see red just have many more dimensions and weighted points. The intuition behind premise (2) of the knowledge argument is then that this kind of information just isn’t the kind of information that could explain the phenomenal character of a red experience: how could we get the reddishness of red from a bunch of weighted points in a vector space? I think Frank Jackson was thinking the same thing when he first posed the knowledge argument; just replace weighted points in a vector space with billions of interacting neurons.

Although the anti-physicalist arguments seem to escape the book’s responses unscathed, there are still many interesting things in this book that I do not have space to discuss. For example, chapter one offers a simple and readable introduction to the notion of Kolmogorov complexity. Chapter two clarifies a number of issues in philosophy of mind, from the apparent simplicity and ineffability of conscious experiences, to the access/phenomenal distinction, to the “overflow argument”. Chapter three is largely independent from the rest and tries to clarify how we should understand physicalism (as a theory and not a stance), especially in response to Hempel’s dilemma. Chapter four contains the attempts to respond to the anti-physicalist arguments. Whatever one thinks of these attempted responses, the chapter examines the anti-physicalist arguments under many different interpretations, which is illuminating, and which will help strengthen the reader’s understanding of these arguments.

The final chapter of this book (chapter five) tries to test a prediction of the book’s main claims: that relatively less complex phenomenal experiences are not mysterious. The main example is the experience of adding numbers. This was an illuminating discussion, which helped to cement a methodological lesson of the book: that we should begin with the simplest cases of conscious experience, not the most complex, if we wish to make progress. However, there was no attempt to show that the Kolmogorov complexity of such an experience is any less than that of seeing red. I also could not quite see why the experience of adding numbers (the cognitive phenomenology of adding) should be considered less mysterious than seeing red.

The harder and more important test cases for complexity-based physicalisms are not less complex phenomenal experiences, but very complex yet non-mysterious physical properties. Cases of large physical complexity without mystery are what complexity-based defenses of physicalism have difficulty accounting for. For DeLancey’s notion of complexity, I used the photon example to raise this difficulty. For notions of complexity more generally, there are all kinds of “complex” phenomena that don’t seem mysterious or difficult to explain in the way consciousness is. This includes the mental phenomena sometimes called “the easy problems of consciousness”. Complexity-based accounts therefore still need to explain what the difference is between consciousness and complex but well-understood physical phenomena.


Thanks to Markus P. Müller for helpful discussion of Kolmogorov complexity. This research was supported by grant number FQXi-RFP-CPW-2015 from the Foundational Questions Institute (FQXi) and Fetzer Franklin Fund, a donor advised fund of Silicon Valley Community Foundation.


M. Li and P. Vitányi. (2008) An Introduction to Kolmogorov Complexity and Its Applications. 3rd ed. New York: Springer.

M. Tegmark. (1996) “Does the universe in fact contain almost no information?” Foundations of Physics Letters 9: 25–41.

D.J. Chalmers and K.J. McQueen. (2022) “Consciousness and the collapse of the wave function”. In: Consciousness and Quantum Mechanics. Ed. by S. Gao. Oxford University Press.

K.J. McQueen and N. Tsuchiya. (2023) “When do parts form wholes? Integrated information as the restriction on mereological composition”. Neuroscience of Consciousness 2023(1), niad013.

K.J. McQueen, I.T. Durham, and M.P. Müller. (In preparation) “Building a quantum superposition of conscious states with integrated information theory”.