In Rocco J. Gennaro's edited volume, philosophers use psychopathologies and other brain disorders to explore and challenge various theories of consciousness. Though psychiatrists and psychologists are unlikely to be terribly impressed with the level of sophistication with which philosophers' pet disorders are discussed, I found the philosophical analysis to be top-notch and deftly presented. In particular, I would recommend this volume for anyone wanting to understand how various philosophical approaches to understanding consciousness function in the contemporary landscape.
We can divide the philosophical approaches to consciousness discussed in this volume into two broad categories: "subjectivity" theories, exemplified in the chapters by Alexandre Billon and Uriah Kriegel; Gennaro; Myrto Mylopoulos; Timothy Lane; Paula Droege; Iuliia Pliushch and Thomas Metzinger; and William Hirstein, and "integration" theories, as discussed by Andrew Brook; Robert Van Gulick; Gerard O'Brien and Jon Opie; Jakob Hohwy; Philip Gerrans; and Erik Myin, Kevin O'Regan, and Inez Myin-Germeys. Subjectivity theories include things like the well-known HOR (higher-order representation) theories. Here, the basic idea is that a mental state becomes conscious when it is represented by another mental state. That is, a mental state becomes conscious when another mental state is directed at it. (I should note that Gennaro, an HOR theorist, holds that a conscious state is a complex state in which one part represents another part self-reflexively. Given that the text is not concerned with how to individuate mental states, this is a distinction that need not detain us here.) The main assumption of this approach is that consciousness is intrinsically tied to subjectivity -- recognizing, in some sense, that a mental state belongs to me.
In contrast, integration theories hold that consciousness is inextricably tied to a mental state being amalgamated with the rest of the cognitive economy. Bernard Baars (1988, 1997) is probably the best known -- and certainly one of the first -- integration theorists, with his Global Workspace model of consciousness. According to his view, mental states become conscious when they are broadcast globally to the cognitive system for systemic use. These models, instead of focusing on how particular mental states are represented (or re-represented) in the cognitive economy, highlight how the cognitive economy uses certain mental states. Most integration theories (though not all) treat the processes that lead to phenomenal consciousness as occurring on a sub-personal level, while most subjectivist theories (and maybe all of them) treat the processes that result in phenomenal consciousness as personal ones.
Not surprisingly, the types of psychopathologies that each approach has problems explaining differ. Subjectivity theories, which rely on a mental state being represented as belonging to or being of the self, have challenges accounting for pathologies of alienation. Several mental disorders have as their hallmark patients who deny that their thoughts are their own or that the sensations of their limbs belong to them. The delusions of thought insertion (a belief that some conscious thoughts are not one's own), anarchic hand syndrome (a syndrome in which patients perform what appear to be voluntary actions without conscious intention or control), and somatoparaphrenia (a type of asomatognosia in which patients deny ownership of one or more of their limbs) are three examples of such disorders that philosophers repeatedly use as illustrative points. In each instance, the patients either report being conscious of the inserted thought or hand movements or body part, or they can be made to report being conscious of them through experimental manipulation, and yet they clearly lack any self-reference regarding these conscious experiences. Thus, each of these cases presents itself as a counter-example to subjectivity theories of consciousness.
Integration theories are challenged by disunity syndromes -- pathologies in which the conscious experience appears splintered off from the rest of the mind in some fashion. Common examples that philosophers use include split-brain patients, Dissociative Identity Disorder, and schizophrenia. In each of these cases, the patients cannot access all of their apparent conscious thoughts. It seems that whatever is happening in the patient, thoughts which are conscious have not been broadcast to the system as a whole. These cases, too, beg for explanation.
Suffice it to say, none of the authors admits defeat regarding their pet theory, and all devise work-arounds that would allow them still to hang onto their favorite approach. I shall not rehearse each of the proposed solutions in any detail. Instead, I shall make some general comments regarding the sorts of answers given and use those as a lens by which to assess some of the individual contributions.
Even though "neurophilosophy" has been around at least since Patricia Churchland's book by the same name was published in the mid-1980s, I am continually surprised and saddened that philosophers have not developed much greater sophistication in reporting and using neuropsychological brain data. Sometimes what the authors of this volume write is just false. For example, Brook comments, "In their everyday life, [split-brain] . . . patients show little effect of the operation. In particular, their consciousness of their world and themselves appears to remain as unified as it was prior to the operation" (p. 212). While it is true that the patients' verbal reports suggest a unified consciousness, their at-home behavior can often belie this. Patients report getting dressed with one hand while undressing with the other, for example (see discussion in Gazzaniga et al. 2014). But we would expect that their verbal descriptions of their experiences would seem unified if their language areas are lateralized to the left hemisphere. In those cases, the patients would simply be unable to say what the right hemisphere is experiencing, if anything.
Of course, over time, most patients' two hemispheres learn to coordinate their behaviors with each other, and obvious behavioral effects of the operation do eventually diminish. Split-brain patients learn tricks to help the two hemispheres communicate with one another. For example, one patient learned to very subtly nod or shake his head when he wanted to tell his left hemisphere that it was providing incorrect information (Gazzaniga et al. 2014). It is also perhaps useful to keep in mind that even completely severing a corpus callosum does not mean that the two hemispheres are completely isolated from one another, for subcortical connections remain between brain areas, and patients can learn to exploit this as well over time. Nonetheless, one can bring out the cognitive deficits of split-brain patients much more cleanly in artificial experimental settings.
The point in my rehearsing all of this is not to chide the author for missing some relevant data. Rather, it is to highlight that the author appears to miss the purpose of empirical investigation. The goal of experiments in the mind and brain sciences is not to force weird and aberrant brain processing, but rather to reveal what is already happening in the brain -- to reveal how the brain is functioning in ways that observing normal daily life might not. To take a personal example, my mother has mild vascular dementia, but she creates lots of lists and reminders to keep her on track. If you were around her, you likely would not notice the severity of her deficits, but in a formal testing situation, her cognitive challenges not only can be revealed, but their contours can be discerned.
Brook is not the only one who makes this error. Most of the HOR theorists do as well. One should not be taking some symptom or other, like a thought insertion delusion, and then asking the simple question of how the target theory should be tweaked so that it can account for this oddity. Rather, one should be asking what these sorts of delusions, combined with other similar types of deficits, combined with what we know about how the brain processes information, combined with data from normal subjects, tell us about how the brain, how we, create or assign subjectivity to mental events.
In other words, for a philosopher, the goal should be bigger than creating a theoretical epicycle. Billon and Kriegel hypothesize that patient reports of alienation stem from a "phenomenology of alienation" (p. 42) added to a conscious state; in contrast, Gennaro suggests that alienation comes from "a deficit of introspective ability" (p. 62). (Pliushch and Metzinger end up making a similar suggestion as well, though they are integration theorists.) Neither tack breaks new theoretical ground, nor does either help us understand any psychopathology in a deeper way. Both are simply adding an armchair widget to an armchair theory.
If we really want to see how subjectivity theories of consciousness stack up against the empirical data, then we need to include all sorts of frontal lesions and deficits -- traumatic brain injuries, chronic traumatic encephalopathy, frontotemporal dementia (also known as Pick's disease), Post-Traumatic Stress Disorder, Fetal Alcohol Syndrome, focal lesions, Substance Use Disorders, as well as the philosophical favorites of autism, schizophrenia, Dissociative Identity Disorder, and the various neglect syndromes. We should look at how our phenomenal experiences track with activation in the so-called default network, which is reportedly active when we are contemplating ourselves. We should crosswalk self-related disorders with other associated cognitive and behavioral problems, like perseveration, emotional lability, impulsivity, lack of insight, and diminished cognitive control. We could even throw in the interesting case of Krista and Tatiana Hogan, craniopagus twins whose thalami are connected to one another and who are introspectively aware of each other's perceptions and other conscious states (see discussion in Langland-Hassan 2015). Put in stark terms, cartoon data beget cartoon theories. Philosophers of mind should be able to do better in this 21st century, given all we know now about brains, minds, and behavior.
The one exception in the discussion regarding subjectivity theories of consciousness is Lane's contribution. His chapter serves as a rebuttal to the chapters by Billon and Kriegel, Gennaro, and Mylopoulos. Lane points out that "significant aspects of [subjectivity] . . . theories are empirically tractable" (p. 109), and he too argues for "reorienting" this discussion so that it is "[situated] squarely in an experimental context" (p. 109). He too urges the philosophers to stop "cherry [picking] examples from the scientific literature" (p. 123). More significantly, he goes on to outline several empirical predictions that subjectivity theories make and how one might set about testing these predictions.
I do believe Lane goes a bit astray toward the end of his chapter, when he advocates identifying "subpersonal" processes with neuronal activity, but "personal" processes as "psychological" phenomena (cf., pp. 126-127), as though the neuroscience and psychology of today investigate radically different objects in the world. Myin, O'Regan, and Myin-Germeys make a similar error when chiding the former head of the National Institute of Mental Health (NIMH) for adumbrating a "disease model" of mental illness (p. 356). They claim that disease models, which require "causes" to "exist independently of symptoms," are not apt for embodied theories of perception, which entail that mental illnesses can only arise through body-environment interactions (p. 357). But no contemporary doctor would believe that a "virus" alone causes the symptoms of disease -- emergent symptoms depend strongly on a host of environmentally based factors -- any more than those who buy into the Research Domain Criteria (RDoC) proposed by the NIMH believe that a simple brain or genetic "virus" causes a psychiatric disorder simpliciter. In both cases the authors appear to be struggling mightily against straw men.
Still, the integration theories of consciousness described in the volume are much better grounded empirically, largely because most of them were developed in tandem with empirical investigations. At the same time, some of the integration theorists still underdescribe (or perhaps underappreciate) what they take to be relevant data. I have already discussed Brook and Myin, O'Regan, and Myin-Germeys. In addition, O'Brien and Opie identify consciousness with "stable patterns of activation in neurally realized PDP [parallel distributed processing] networks" (p. 271). But they do so without making any claims regarding how one might identify such stable patterns of activation. Even if we agree with them that many such patterns might exist at each moment in time, without further specification, we do not know, for example, whether a "stable pattern" means a common oscillatory firing pattern or simply co-occurring action potentials, or perhaps something else entirely. Do neurons need to be synaptically connected to be part of the same pattern, or can they be distal to one another? Do we need any specific duration for the pattern to count as a conscious phenomenon? The answers to these questions matter, because one can identify scads of "stable patterns" in an active brain. Most of the brain is responding most of the time. And yet, we clearly are not conscious of all the so-called patterns the brain creates as it churns along.
However, though I am critical of some of the contributors' use of brain data, I do want to reiterate my endorsement of the book for philosophers. Take the empirical discussions with a grain of salt, but the volume is most assuredly on the "must read" list for anyone interested in philosophical models of consciousness. The empirically informed philosopher of mind might not learn anything new about the brain, psychopathologies, or cognitive malfunctioning, but the philosophical analysis is highly developed and builds upon well-known theoretical scaffolding in interesting ways.
Baars, B. 1988. A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press.
Baars, B. 1997. In the Theater of Consciousness. New York: Oxford University Press.
Churchland, P.S. 1986. Neurophilosophy: Toward a Unified Science of the Mind-Brain. Cambridge, MA: The MIT Press.
Gazzaniga, M.S., Ivry, R.B., and Mangun, G.R. 2014. Cognitive Neuroscience: The Biology of the Mind, 4th Edition. New York: W.W. Norton and Company.
Langland-Hassan, P. 2015. Introspective misidentification. Philosophical Studies 172: 1737-1758.