LOT 2: The Language of Thought Revisited


Jerry A. Fodor, LOT 2: The Language of Thought Revisited, Oxford University Press, 2008, 225pp., $37.95 (hbk), ISBN 9780199548774.

Reviewed by Mark Wilson, University of Pittsburgh

2009.02.28


Fodor's The Language of Thought of 1975 is one of the most important philosophical books of the past quarter century and provided a first-class riposte, from a cognitive psychology point of view, to various dogmatisms that were blocking the pipes within mainstream philosophy of mind. However we may now weigh its conclusions, Fodor's plucky volume jolted our discipline in a very helpful manner. This new work (entitled "LOT 2" in the mode of "The Godfather III") partially attempts the same chore, using allied argumentation, against a somewhat updated range of targets, including conceptual role semantics and pragmatic inferentialism. At the same time, LOT 2 registers a series of heightened doubts with respect to the capacities of the basic computational program itself (which Fodor abbreviates as "CTM"), rendering this book as much an overview of unresolved difficulties as a straightforward advocacy of a focused CTM position.

It is hard to carry off such a mixed enterprise well, for the skeptical passages often undercut the positive arguments offered in favor of CTM. For example, Fodor strongly condemns alternative approaches that, unlike CTM, do not entail that "concepts" should compose recursively:

The systematicity, productivity, etc. of thought requires the content of BROWN DOG to be a construction out of the contents of BROWN and DOG. Likewise for the possession conditions of BROWN DOG: They've got to explain why anyone who has BROWN and DOG thereby has all the concepts he needs to grasp BROWN DOG. (p. 41)

In similarly sweeping and generalized terms, he inveighs against "pragmatist" accounts:

Thought about the world is prior to thought about how to change the world. Accordingly, knowing that is prior to knowing how. Descartes was right and Ryle was wrong. Why, after all these years, does one still have to repeat these things? (p. 14)

Yet soon thereafter Fodor raises two broad difficulties (that he calls "nativism" and "non-locality") for CTM. A worthy computationalist can respond plausibly to both challenges, as we'll see below, but probably at the price of relaxing LOT 2's absolutist injunctions with respect to plans and compositionality. So which Fodor should we follow: the CTM optimist or the CTM pessimist? Stern methodological precepts become unpersuasive if you immediately confess that you are quite confused about where your science is headed.

Clarity's purposes are not assisted by the further fact that much of Fodor's argument is prosecuted through tart exchanges with an Imaginary Friend (one "Snark" replaces his usual "Granny" as interlocutor, presumably because the former embodies a skeptical critic, rather than serving in Granny's customary "voice of common sense" capacity). These dialogues are often conducted in the manner of an old Abbott and Costello routine, leaving the reader relatively unenlightened as to which runners occupy which bases. A direct comparison of old LOT with new reveals the degree to which these characteristic mannerisms have increased over the years.

None of this is to say that I am not in sympathy with many of Fodor's positive themes. As in LOT 1, this new volume is strongest in stressing the resources that a computational approach to thinking can wield in addressing Frege's problem and resisting "public language" claims that human "concepts" must be individuated in terms of socially accessible rational standards. But some measure of the old iconoclasm seems to have dissipated and one now finds LOT 2 acquiescing in familiar a priori conclusions that stalwart computationalists should surely reject:

[I]f concept acquisition is a learning process, then … it requires the mental representation of the conditions under which the concept applies (as in: "It's the green things that GREEN applies to"). (p. 133)

This basic complaint -- that concept formation through abstraction is incoherent because one must have the relevant concept on hand before the "abstractive" process commences -- can be found in many nineteenth century apriorists such as Frege or Sigwart:

[To try to] form a concept by abstraction … is to look for the spectacles we are wearing by aid of the spectacles themselves.[1]

I'll sketch a computationalist rejoinder to this charge below.

Such anti-"conceptual learning" doubts quickly lead to the astonishing "nativism" with respect to conceptual repertory that the original LOT preached. In this update, Fodor continues these themes, but they are now wedded to an obscure mysticism about "concept attainment" (as opposed to "learning"):

It's, as it were, a subintentional and subcomputational process; it's a kind of thing that our brain tissue just does. Psychology gets you from [an] initial state to [that of stereotype formation]; then neurology takes over and gets you the rest of the way to concept attainment (that is, to locking [onto the proper semantic trait]). (p. 152)

He seems to contend (I find these passages hard to follow) that the details of these non-learning processes are not "psychology's business" because they only occur at a "sub-intentional level." When Fodor writes like this, he sounds eerily reminiscent of John McDowell:

This leaves it open that investigations of an "engineering" sort might be fine for other purposes.[2]

But should computationalists abandon hope so readily? The very strength of LOT 1 lay in its resistance to argumentation of this ilk. But present-day Fodor frequently throws in the towel after a brisk dismissal of some unduly simplistic model of concept learning (e.g., crude direct abstractionism from exemplars), as if such a survey can rule out more complex learning mechanisms or remain commensurate with what we already know about the complexities of the machine routines required to sort out a group of samples into colored piles. I am more familiar with scientific computing than human learning per se, but even this limited exposure makes me wonder why Fodor, as a would-be CTMer, blithely ignores the astonishing plasticity of purpose to which effective algorithms are typically heir, even in science. For example, a reasoning routine R (a Runge-Kutta scheme, say) may answer excellently to a specific physical task such as predicting projectile flight. As such, the R algorithm is naturally framed within the physical vocabulary v1, v2, … pertinent to cannon balls et al. However, considered on its intrinsic inferential merits, R simply calculates a certain mathematical function over its input values. Commonly, this same functional relationship reappears as a component within otherwise quite different physical chores (e.g., computing the sag of a rope). Rather than code a fresh Runge-Kutta sub-routine from scratch, computer scientists commonly take an extant computational package off the shelf and feed pertinent data into it couched in the v1, v2, … vocabulary upon which the prepackaged scheme operates (once the borrowed routine runs its course, the conclusions are recompiled into their original data format). In such contexts, the intervening terms v1, v2, … no longer retain their original physical significances, although they maintain what might be called their "operational" or "mathematical meanings."
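To make the point concrete, here is a minimal sketch of my own (drawn from neither LOT 2 nor any particular numerical library) in which a single generic Runge-Kutta stepper, framed in its own neutral vocabulary of state vectors and derivative functions, is handed data from two quite different physical chores; every name and parameter value in it is an illustrative assumption.

```python
# A minimal sketch (mine, not Fodor's or any particular library's) of the
# "borrowed routine" point: one generic fourth-order Runge-Kutta stepper is
# reused across two unrelated physical chores.  All names and parameter
# values are illustrative assumptions.

import numpy as np

def rk4_step(f, t, y, h):
    """One RK4 step for dy/dt = f(t, y); the stepper itself knows no physics."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Chore 1: projectile flight with simple air drag.  State = (x, z, vx, vz).
def projectile(t, s, g=9.81, k=0.02):
    x, z, vx, vz = s
    speed = np.hypot(vx, vz)
    return np.array([vx, vz, -k * speed * vx, -g - k * speed * vz])

# Chore 2: the sag of a rope, integrated from one end as if it were an
# initial value problem (a real rope is a boundary value problem; this is
# only a sketch).  State = (height, slope); "time" is really horizontal
# position here.
def rope(x, s, weight_over_tension=0.5):
    y, slope = s
    return np.array([slope, weight_over_tension * np.sqrt(1 + slope ** 2)])

# The same prepackaged stepper serves both chores; only the data couched in
# its vocabulary differs, and the physical reading of its internal
# variables differs with it.
state = np.array([0.0, 0.0, 30.0, 30.0])     # launch position and velocity
for n in range(100):
    state = rk4_step(projectile, n * 0.01, state, 0.01)

shape = np.array([0.0, -0.4])                # rope end height and slope
for n in range(100):
    shape = rk4_step(rope, n * 0.01, shape, 0.01)
```

The stepper itself never settles whether its inputs concern cannon balls or hanging ropes; that physical reading attaches to its internal variables only through the larger package into which it has been slotted.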

Everything we know about the evolution of human intelligence indicates that, in an allied manner, our brains display a remarkable ability to cobble together ancestral reasoning schemes for, say, route planning to serve novel ends such as abstract mathematical reasoning (most mathematicians heavily employ their "geometrical intuition" to guide their thinking even in topics that bear no evident resemblance to navigational plotting). To be sure, the hasty and imperfect conclusions reached through such "borrowed reasoning" techniques often need to be scrutinized by outside checks (allied monitoring requirements are commonplace within scientific computing applications as well). Nonetheless, our most productive flights of inferential fancy remain primarily driven by various complex routines that our distant forebears had developed for the sake of efficient hunting and foraging. Indeed, the swift expansion within evolutionary time of human reasoning capacity seems explicable only through this plastic reallocation of fixed computational resources.

In this cobbling together of pre-established routines for novel applications through rerouting, we witness a basic framework for "concept learning" that, pace Fodor, does not represent a simple matter of confirming hypotheses articulated in terms of physical meanings already locked into his postulated "language of thought." Indeed, the genius of this kind of "learning" lies precisely in the fact that it allows our ancestral brain vocabulary v1, v2, … to shed their original semantic associations for the sake of new adaptive applications. For reasons I do not fully understand, Fodor persistently assumes that our basic "language of thought" vocabulary will maintain fixed "semantic readings" in all of their tokenings, despite the wide variety of popular computer techniques where this constancy does not obtain. True, within any specific application each brain language computational registration s can usually be semantically interpreted as carrying pertinent physical information with respect to the application at hand, but this assignment will not remain constant across all tokened appearances of s and can be sensibly attributed to s only after considering the purpose of the larger computational package to which s presently belongs. This innocent concession to the role of "embedded context" suggests that certain flavors of what Fodor dismisses as "pragmatic considerations" will prove useful to the computationalist.

In much of his recent thinking, Fodor has waxed "purist" about "conceptual grasp" in a fairly standard apriorist manner that does not benefit the CTM program at all. When a student learns a "new concept" within physics or mathematics, what does she really do? Although some official "definition" for a term like "differentiable manifold" may be found in her textbooks, many excellent pupils prefer to have some adept draw them an assortment of suggestive sketches on a napkin so that they can gain an ample sense of what Oliver Heaviside called "the practical go of the affair." In doing so, the pupil typically cobbles together an assortment of "borrowed" geometrical routines that allow her to reach appropriate heuristic conclusions about MANIFOLDS swiftly. Such students typically attend to the official "definitions" in their texts only as an "after the fact" check upon the validity of their geometrically facilitated conclusions. Indeed, some very good mathematicians pass through life without absorbing "proper definitions" for their key concepts at all. In terms of practical conceptual success, it is plainly the accumulated raft of borrowed, imperfect yet physically effective inferential and applicational techniques that keeps a concept like DIFFERENTIABLE MANIFOLD afloat within its home discipline and allows it to "lock onto" (Fodor's term) a real world correlate as "semantic value." For such definition-eschewing thinkers, Fodor cannot plausibly claim that, in any straightforward acquisitional sense, "knowing that is prior to knowing how." True, such "borrowed search routine" skills inevitably leave applicational holes that can only be closed through proper definitions and, until such steps are taken, many sentences frameable within the underlying language will lack proper truth-values. In consequence, our definition-shunning pupils will not be able to deal with such sentences adequately (these unsettled swatches of grammar often prove relatively unimportant within the discipline itself). But none of this indicates that the central core of "conceptual mastery" within real life practice doesn't rely primarily upon a rich medley of "planning" skills of exactly the sort that Fodor dismisses as "conceptually irrelevant." I can appreciate why a philosopher interested in the "metaphysics" of "concepts" or "properties" should find the resulting truth-value gaps pertinent, but they seem improperly emphasized within LOT 2's orbit of psychological concerns.

To be sure, Fodor's core complaints strike me as just when applied to simple "conceptual role" stories such as Christopher Peacocke's (who, I believe, represents one of Fodor's intended targets, although he is not clearly identified as such). But I am bothered by the scattershot blasting with which such opponents are generally dispatched.

Turning to his pessimistic conclusions, Fodor expresses his "non-localist" anxieties about the ultimate scope of CTM as follows:

Computing processes are (by definition) syntactic, hence local. CTM says that mental processes are ipso facto computations. But it's very plausible, as a matter of fact, that at least some of what goes on in cognition depends upon the mind's sensitivity to nonlocal relations among mental representations. So it's very plausible that at least some mental processes aren't computations. So it's very plausible that CTM isn't true of the general case. (p. 112)

But any learning process that relies upon sub-routine "borrowing" indicates that "local" in Fodor's first and third sentences does not mean the same thing at all.

Allied considerations suggest that Fodor's firm insistence upon the "compositionality of concepts" has been prematurely decided as well. Consider a favorite example of mine: "rainbow."[3] From a parsing point of view, children learn to respond at an early age to virtually any sentential prompt with an appropriate picture: "Draw me a brown rainbow that is approaching a little girl endways." There is little doubt that the child compiles the desired artistic task through recursive assembly upon the sentential components presented. But that parsing skill alone, admirable and complex as it is, does not fully prepare the child to apply such sentences to the real world with any assurance that these grammatical units will gain appropriate truth-values. Her recursive parsing for the sake of artistry relies upon a faulty picture of how the term "rainbow" obtains its real life physical significance and she must learn more about the "practical go" of adult "rainbow" talk before she will be able to apply RAINBOW to atmospheric phenomena competently. Typically, we learn such improvements simply by absorbing a revised set of skills comparable in their pragmatic content to the routines that our definition-eschewing mathematicians acquire. After this adult mastery is achieved, our further educated child will recognize that her recursive parsing of the phrase "brown rainbow" had rested upon a mistaken estimation of what the trait BROWN physically signifies and how it might fit with RAINBOW, for BROWN requires a figure/background contrast that is alien to most real life RAINBOW circumstances. In this sense, the physical semantic significance of "brown rainbow" will not straightforwardly obey the brute compositionality that Fodor posits, despite the fact that limited forms of recursive capacity form an initial component within the complex group of skills an agent must display before she can be judged fully competent in RAINBOW (we probably wouldn't credit someone with a complete grasp of RAINBOW if she couldn't draw the expected false picture with a brown crayon). Careful attention to scientific concepts often reveals allied behavior.
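For the purely parsing half of this story, a toy sketch (again my own, and in no way Fodor's machinery) may help: each word contributes a function that folds into a growing drawing plan, and the recursive assembly succeeds whether or not the resulting description could ever come out physically true. The DrawingPlan structure and the little LEXICON are illustrative assumptions.

```python
# A toy sketch (mine, not Fodor's) of the child's recursive parsing skill:
# the "meaning" of each word is a function on drawing plans, and the plan
# for the whole phrase is assembled compositionally, regardless of physics.

from dataclasses import dataclass, field

@dataclass
class DrawingPlan:
    shape: str = "blank"
    color: str = "unspecified"
    notes: list = field(default_factory=list)

# Each lexical item maps a partial plan to a more specific plan.
LEXICON = {
    "rainbow": lambda p: DrawingPlan("arc of stacked bands", p.color, p.notes),
    "brown":   lambda p: DrawingPlan(p.shape, "brown", p.notes),
    "endways": lambda p: DrawingPlan(p.shape, p.color, p.notes + ["seen end-on"]),
}

def compose(words):
    """Assemble the drawing task recursively from the sentential components."""
    if not words:
        return DrawingPlan()
    return LEXICON[words[0]](compose(words[1:]))

plan = compose(["brown", "rainbow", "endways"])
print(plan)   # a perfectly drawable plan for a physically impossible rainbow
```

Nothing in this assembly consults the figure/background facts that make real "brown rainbow" talk problematic; that is precisely the gap between the child's parsing competence and her eventual physical mastery.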

In short, the escape routes that allow CTM to evade Fodor's anti-localist anxieties should persuade computationalists to be more sympathetic to the "conceptual" relevance of "pragmatic" factors than Fodor recommends.

To me, the most surprising aspect of LOT 2 is how apriorist its argumentation has become; scarcely a single note of what I'd call "the computational complexity of everyday routine" enters its pages. The specific methods whereby our thought reaches semantic accommodation with the world are subtle and varied and careful attention to factors that Fodor dismisses as pragmatic ephemera will likely make up a vital part of the story. In attacking "concept pragmatism" in its lofty manner, LOT 2 runs the risk of encouraging doctrinal coagulations comparable to those against which LOT 1 valiantly argued.



[1] Christoph Sigwart, Logic, Vol. I, translated by Helen Dendy (London: Swan Sonnenschein and Co., 1895), pp. 248-49.

[2] John McDowell, Meaning, Knowledge and Reality (Cambridge: Harvard University Press, 1998), p. 412.

[3] For expansions of this example and the other considerations raised in this review, see my Wandering Significance (Oxford: Oxford University Press, 2006). The optical and psychological arrangements that sustain successful "rainbow" talk in real life are complicated and it is unclear whether unexpected natural circumstances might qualify the phrase "rainbow approaching endways" as descriptively true.