Consciousness and Language


Searle, John, Consciousness and Language, Cambridge University Press, 2002, 278pp, $23.00 (pbk), ISBN 0521597447

Reviewed by Joelle Proust, Institut Jean-Nicod (CNRS, Paris)

2003.05.07


The fourteen essays collected in this book—most of them previously published—cover a variety of topics that John Searle has been concerned with over the past twenty years, from language, conversation, and speech-act theory to consciousness, cognition, and the indeterminacy of translation. As a whole, the book offers many stimulating views, and some of the most controversial should spark new interdisciplinary reflections. Chapter 10, “How performatives work”, presents a fascinating discussion of how declarations are encoded. The analysis of self-referentiality in promises offered in this chapter is a great piece of philosophical theorizing. The present review will concentrate on the main topic of the book: consciousness and its role in cognition.

Consciousness and intentionality, the author contends, are essentially biological phenomena, which might at first blush seem to imply that no artificial device can ever think or become conscious of the world. But the view is, rather, that a computational simulation of the logical relations between the subset of brain states that serve as vehicles of mental representations would not qualify as a candidate for conscious agency. For such a simulation ignores the fact that the relevant dimension for conscious awareness cannot be information processing (the argument, as will be shown below, relies on Searle’s particular view of what information consists in); what is relevant to consciousness is rather a specific set of “biological” processes in the brain that produce it.

Although, as Searle acknowledges, philosophy has nothing to say about the particular biological process in question, it is reasonable to assume that conceptual clarification will play a major role in any solution of the problem of consciousness. The author insists, in particular, on the ontological status of consciousness: it is caused by the brain, but is also a “feature of the brain”: “consciousness is a state that the brain is in” (48). When the causal regularities that govern such a realization are properly understood, a duplication (not a simulation) of conscious states in artifacts might be considered.

Searle anticipates the objection that conscious states have a first-person, i.e. a qualitative, experiential ontology, in contrast with the third-person biological and physical phenomena studied in science. How then could a causal, third-person approach possibly clarify what consciousness is? His rejoinder consists in developing a new kind of ontological stance, called “biological naturalism”, that keeps both dualism and materialism at bay. Dualism is notoriously unable to account for the causal connection between the mental and the physical. Materialism, on the other hand, assumes that all existing phenomena are physical; it is therefore unable to acknowledge, still less to account for, the existence of subjective qualitative states. A proper ontology should, according to Searle, recognize that subjective facts of consciousness can be endowed with epistemic objectivity, which allows them to constitute bona fide objects of scientific inquiry (23). One might object that the difficulty of traditional dualism resurfaces in the two varieties of objective facts: granting that the real world encompasses ontologically subjective (mental) facts and third-person (inter alia cerebral) facts, the difficulty persists of understanding how both can be accounted for within one and the same explanatory framework.

On this issue, a crucial element in Searle’s strategy is to contrast causation and reduction: conscious states cannot be reduced to lower-level properties of the brain – otherwise the felt quality is lost; they can only be “causally explained” (34). Now one might want to object that causal explanation, standardly understood, is offered in a detached, third-person, non-qualitative way. Why should a statement such as “X produces C”, where X refers to biological facts and C to a felt quality, be taken to provide a causally adequate explanation (adequate, that is, to the subjective explanandum)? This objection, associated in the literature with the “hard problem”, is for Searle just an expression of our present limited understanding of which range of facts is relevant (24-25). This admission, however, backfires on the whole epistemological theory and on the underlying ontology; what is needed is an indication of how the causal statement above could be validated in principle, and this problem cannot be solved simply through the discovery of the relevant neural correlations.

Chapter 7 presents, in summarized form, the arguments offered in the two final chapters of The Rediscovery of the Mind [1] against the informational paradigm as developed in cognitive science. In a nutshell, the line of reasoning is this. Any kind of causal explanation has to cite “real features of the real world”—this requirement is called the “causal reality constraint” (107). Any appeal to informational, subdoxastic states, however, fails to meet this constraint, according to Searle, because in this kind of case information is observer-relative rather than intrinsic. Observer-relativity in turn allows interpretive free-wheeling: “any system of any complexity at all admits of an information processing analysis” (110); the sketch below illustrates the point. This latter feature, for Searle, shows that information might not be playing any causal role in observer-relative cases. Only (potentially or actually) conscious states qualify as intrinsic states, able to have a causal role qua mental.
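To make the observer-relativity point concrete, here is a minimal sketch (my illustration, not an example from the book; the states and interpretation maps are invented): one and the same physical state sequence supports incompatible computational readings, depending entirely on the mapping the observer supplies.

```python
# Illustrative toy only: the same physical trajectory admits of two
# incompatible "information processing" descriptions, depending on the
# interpretation map an observer chooses to apply.

physical_states = ["s0", "s1", "s2", "s1"]  # a system's states over time

# Observer A reads the trajectory as a sequence of binary values.
read_as_bits = {"s0": "00", "s1": "01", "s2": "10"}

# Observer B reads the very same trajectory as logic-gate labels.
read_as_logic = {"s0": "AND", "s1": "OR", "s2": "XOR"}

print([read_as_bits[s] for s in physical_states])   # ['00', '01', '10', '01']
print([read_as_logic[s] for s in physical_states])  # ['AND', 'OR', 'XOR', 'OR']

# Nothing intrinsic to the states favors one reading over the other: the
# "information" lives in the observer's mapping, which is why Searle denies
# that such descriptions meet the causal reality constraint.
```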

Such a claim clashes with a view predominant in current work in artificial intelligence, experimental psychology, cognitive linguistics, and cognitive neuroscience. In these fields, it is widely assumed that information processing is one of the most prominent functions of the brain, whether or not the associated mental content is in principle available to the agent. It is worthwhile trying to answer Searle’s objections, because they point to genuine conceptual difficulties.

What is information? And why should it be taken to be observer-relative in all cases in which the subject is attributed a mental state but is not in a position to report on it consciously? In Searle’s usage, information is an epistemic notion: information is present in a state of affairs (tree rings, a neuronal state) only if a thinker is able to read it off. On this view of information, it is a matter of definition that an informational state is conscious, in fact or in principle (as, for example, in a conscious perceptual state, or in a belief, an intention, or a desire); thus information can be said to play a causal role because it is part of the intentional content available to the agent. In contrast, when the notion is applied from a third-person point of view (by the lights of an observer), it does not meet the causal reality constraint.

If the summary above is correct, Searle’s objection to cognitive science turns on his defining information in epistemic terms. As a result, any attempt to understand intentional states in terms of informational content—which is the project of cognitive science—is taken to be doomed to circularity: one cannot reduce intentional states to informational states if one assumes that beliefs and desires are needed to extract information in the first place.

Another concept of information is used, however, in cognitive science. As Searle acknowledges (119), “in a perfectly reasonable, but a different meaning of ‘information’”, “these tree rings contain information about the age of the tree” can be rephrased as: “there is an exact covariance between the number of the tree’s rings and its age in years”. This is a non-epistemic concept of information. Having this non-epistemic concept is crucial for a cognitive scientist, or a philosopher like Dretske, who wants to explain how intentional states can be generated from informational states. In this alternative sense of the term, it is perfectly possible to attribute an informational state to an organism in a third-person way and to take information to have a causal role in the observed behavior. The causal properties of informational states are made clear in the three conditions that, according to Dretske, have to be fulfilled for a physical state to have representational content: i) there must be a brain state that covaries with an external event or property; ii) this internal state must systematically trigger a given response to that event (for example, flight or a motivational change); and iii) this state must have the function of carrying that information: in other words, the response must be connected in a causally structured way to the informational state in question (a toy sketch of these conditions follows below).

Dretske himself adds to these conditions a requirement on mental content that might at first blush look similar to Searle’s. To qualify as mental content, Dretske says, information must be cognitively available to the subject and not simply present “objectively” in the relevant states of its receptors. What he means is that an organism qualifies as intentional only if the information extracted is available to it in some central way for controlling its responses. For Dretske, an animal able to learn new concepts and categories in order to cope with a changing environment would thus qualify as having intentional states. In Dretske’s view, the ability to learn and to exert global control of behavior through representations plays the role that consciousness plays in Searle’s view as the essential feature of mentality.
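A toy sketch may make the three conditions concrete (again my illustration, not Dretske’s own formalism; the organism, sensor, and responses are invented):

```python
# Illustrative toy of Dretske's three conditions for representational
# content: (i) an internal state covaries with an external condition,
# (ii) the state systematically triggers a response, and (iii) the state
# has the function of carrying that information within the system.

def sensor_state(predator_nearby: bool) -> bool:
    """(i) Covariance: the internal state tracks the external condition
    (a perfectly reliable channel, by stipulation)."""
    return predator_nearby

def respond(state: bool) -> str:
    """(ii) Systematic triggering: the state fixes the behavioral response."""
    return "flee" if state else "forage"

# (iii) Function: the response is wired to the sensor state *because* of
# what that state indicates; the causal route runs through the information.
for predator_nearby in (False, True):
    print(predator_nearby, "->", respond(sensor_state(predator_nearby)))
```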

An area in which this divergence is put to the test is the issue of animal minds. Animals don’t express their beliefs in a language, but, Searle maintains, they do have intentional states. Why? Chapter 4 offers two forms of defense: i) because they are conscious beings; ii) “Because they correct their beliefs all the time on the basis of their perceptions” (68). These two lines of argument, however, do not pick out the same set of animal minds. Whether or not animals are “conscious” may depend on various properties still under debate, for example whether they have a nervous system allowing them to have reafferent perceptions, or whether they are able to form metarepresentations about their sensory states. Whether or not they can use representations to control their behavior, on the other hand, depends on their ability to extract information, form categories, maintain them in memory over time, and apply them to new objects. There is no a priori guarantee, to say the least, that the capacity to feel and experience, on the one hand, and the capacity to think, i.e. to represent the world in a structured way, on the other, define coextensive classes of beings.

Chapter 4, in connection with chapter 7, raises another difficulty. As we saw earlier, Searle takes information to be, in itself, observer-relative, and thus not able to constitute a “real feature of the real world”. But consciousness is attributed to other animals, Searle acknowledges, because of the overwhelming analogy between animals and humans in their needs and actions; third-person attribution in this case does not automatically prevent us from accessing a real feature of the world: “Even if we assume that there is no fact of the matter as to which is the correct translation of the dog’s mental representations into our vocabulary, that by itself does not show that the dog does not have any mental representations, any beliefs and desires, that we are trying to translate” (66). It is surprising that Searle finds it unproblematic to attribute consciousness to animals from a human viewpoint – for if information is observer-relative, projection from human to animal consciousness is not only observer-relative, but also infected with anthropocentrism; as Nagel and McGinn have emphasized, there is no way to know “what it is like” to be a bat, or even a dog, on the basis of a simple analogy with our own conscious experience. (Imagining is not the problem.)

The difficulty with Searle’s position is threefold. First, conscious states have been found, in cognitive psychology, not to exhaust the range of intentional states; in other words, more perceptions and beliefs control behavior in an effective and global way than the subject can consciously recognize. A large part of learning occurs outside consciousness, and such extensive “implicit learning” cannot, even in principle, be made conscious. Second, the agent is in no better position than an external interpreter to tell which perceptions, and which beliefs and desires, have been effective in her selection of a particular course of action. Thus consciousness can be a poor guide to the causes of emotional states or of agency. Third, and more generally, conscious awareness, granting that it causes behavior at all, cannot do so independently of the structure of the information being used, contrary to what Searle maintains: “When Ludwig [the dog] wants to eat or wants to drink, for example, he need not use any symbols or sentences at all to have his canine desires. He just feels hungry or thirsty” (118). But how could consciousness be so serviceable? For his objection against a computational view of the mind to be effective, Searle has to offer a theory of thought in addition to his theory of consciousness.

It may seem a matter of course that Ludwig does not catch a ball because he processes information, but rather because he wants to catch the ball. But the story for “wanting” (and for “because”) is a long one, involving the phylogeny of motivation, action, and social interaction. Important steps in this selective history of the will would involve informational processes such as object-tracking, categorizing, weighing properties, selecting contexts, etc. There may, therefore, not be two different kinds of causation involved in the sentence above. The fact that we human, language-using beings find it easier to take the personal perspective does not imply that a special kind of expertise is reserved for that level (see 123). The “simply conscious” view can seem to exhaust explanation only if one chooses to apply a common-sense interpretation to a complex underlying process.

Endnotes

1. John R. Searle, The Rediscovery of the Mind, Cambridge, MA: MIT Press, 1992.