The Unity of Linguistic Meaning


John Collins, The Unity of Linguistic Meaning, Oxford University Press, 2011, 201pp., $60.00 (hbk), ISBN 9780199694846.

Reviewed by Katarina Perovic, University of Iowa

2012.06.12


The so-called "problem of the unity of the proposition" has recently received much renewed interest. The problem is an old and arduous one, dating back at least to Plato, and it is often put as follows: What is the difference between a mere list of words such as "wise, Alice, is" and a meaningful sentence such as "Alice is wise"? The latter possesses a unity of some sort, but what exactly is the nature of such unity, and how does it come about? In The Unity of Linguistic Meaning, John Collins sheds some interesting new light on the problem and proposes an original solution that draws on lessons from the early analytic philosophers, contemporary philosophy of language, and linguistics.

Collins's problem of unity must be differentiated from the problems of ontological unity addressed by Russell, Frege, and others, though Collins finds important lessons in their work. He starts off by presenting the problem of unity that faced Russellian propositions in 1903. At the time, Russell thought of propositions as complexes composed of two types of "terms" or entities -- particulars ("things") and universals ("concepts"). For example, the Russellian proposition Alice's being wise (which may be written as *Alice is wise*) is constituted for Russell solely out of the particular girl Alice and the universal wisdom. Given such an understanding of propositions, Russell was faced with Bradley's objection: what is the difference between a, b, and R, taken together, and the complex aRb? Since relations, for Russell, are universals that can occur both in a relating role and in a non-relating role (as just another term), the problem for him is especially difficult -- adding further relations R*, R**, etc. to unite a, b, and R only leads to an infinite regress of relations and never yields unity. Faced with this problem, Russell insisted that a proposition is "essentially a unity", and that "when analysis has destroyed the unity, no enumeration of constituents will restore the proposition" -- a response that seemed to deepen the mystery rather than solve it.

Despite using Russell as the starting point, Collins's problem of unity is not a metaphysical problem -- the problem, as he sees it, is not something that arises because of a given ontological commitment to complexes (Russellian propositions, facts, Fregean thoughts, etc.). Collins's problem of unity is concerned with the unity of linguistic meanings, which he defines as "invariant interpretations of linguistic types" (6).

But what precisely is Collins's problem of unity? To clarify this, he distinguishes between the problem of interpretation and the problem of combination. The problem of interpretation takes unified linguistic structures as given and inquires about how the unity of such structures arises from their meaningful constituents and their mode of composition. The combinatorial problem, on the other hand, starts with the simples and asks: "given lexical items and their semantic properties, what principle or mechanism combines the items into structures that are interpretable as a function of their constituent parts?" (28). Collins takes the latter problem to be the more fundamental one, and it is this problem that he wishes to address. Now, it is not clear to me that these are indeed two distinct problems (rather than two sides of the same problem), but I share Collins's worry that focusing solely on the interpretative issue may create an illusion that the problem is more easily solvable than it actually is. (Davidson seems to have succumbed to this illusion in his Truth and Predication (2005), where he claimed that the solution to the unity problem had already been achieved by Tarski's theory of truth.)

Collins further argues that a satisfactory solution to the problem of unity should be able to meet the following desiderata: 1) it must account for our competence to interpret indefinitely many linguistic structures (generativity desideratum); 2) it should provide us with a combinatorial principle that is thoroughly explanatory, i.e., specifiable independently from the unities it helps form and the elements that it applies to in order to form those unities (explanation desideratum); and 3) it must be able to explain why a given structure is interpretable without reference to any elements that lie outside of the structure -- be it other lexical items that do not occur in the given structure or the speaker's beliefs (exclusivity desideratum) (cf. 29-31).

Equipped with these desiderata, Collins then considers a number of attempts to solve the unity problem, giving special attention to Frege's and Russell's proposed solutions. It must be kept in mind that Collins's intent in the book is not historical exegesis, but an attempt to draw lessons from the past that can be implemented in his own solution to the problem. Thus, the main lesson from Frege is that in order for there to be a linguistic unity, the lexical elements must be different in kind. But how should such a difference in kind be fleshed out? Collins has no interest in relating the lexical elements to the underlying Fregean ontological distinction between saturated entities (objects) and unsaturated entities (concepts). Nor does Collins wish to characterize different lexical elements in relation to the semantic contribution that they bring to the wholes they are parts of -- since this would go directly against his desiderata 2) and 3). The difference between lexical elements is, for Collins, due to a difference in their inherent properties (more about this below).

Russell's multiple relation theory of judgment (MRTJ) provides Collins with another important ingredient for his solution. By 1910, Russell had famously replaced his analysis of judgment as a dyadic relation between a subject and a proposition -- J(S, aRb) -- with the new analysis, according to which judgment was to be analyzed as the complex J(S, a, R, b, R(x, y)), involving a multiple relation J between the subject S, the entities a, R, and b, which would compose a fact (if S judges truly), and the logical form R(x, y) of such a would-be fact. In this way, the old Russellian propositions were dispensed with; in their place now stood judgment complexes of this kind, their constituents, and the logical form indicating the way in which those constituents ought to be arranged. Putting aside the debate about the reasons which led Russell to abandon MRTJ and the exact nature of the problems that theory faced, what is relevant here is that Collins sees Russell's different versions of MRTJ as getting to the very heart of the unity problem. According to Collins, Russell came to recognize "that some independent principle of synthetic agency . . . is required over and above the constituents that enter into the content of what is judged" (97). At the same time, the problem with this insight, for Collins, is "that this independent ingredient cannot be posited as another element or a mere abstraction from unities already given, for it must structure the constituents that are its arguments" (98).

Thus, according to Collins, an important ingredient of the solution of the unity problem has to be some unity-conferring principle, along the lines of Russell's judging relation. But note that it cannot be just Russell's judging relation, since by itself, this relation cannot solve Collins's combinatorial problem -- that is, while it may be able to unify the constituents, it does not have the resources to structure them in such a way as to bring about the meaningfulness of what is being judged. In Collins's view, it was this particular lack in the judging relation that pushed Russell to add the logical form to the mix. The problem with this addition, however, was that the logical form was not capable of structuring the other terms of the judging relation -- it was merely another term amongst them.

So how does Collins propose to solve the unity problem? There are two main parts to his solution. The first is his introduction of the syntactic combinatorial principle called Merge, which features extensively in contemporary linguistics and, in particular, in Chomsky's recent work. Merge is supposed to do what both the judging relation and the logical form were meant to do for Russell -- namely, to provide a "synthetic agency" rendering both unity and structure.

Following Chomsky, Collins construes Merge as an external set-theoretic operator that targets any two elements α and β and creates a new object γ = {α, β}. Merge is recursive -- it merges atomic objects as well as previously merged ones, and it can do so ad infinitum. Each merged object displays the history of its mergers as an "individuative condition"; that is, as Collins explains, "each object is structured as a sequence of binary pairings, where the nth embedded pairing corresponds to the nth operation of Merge" (110).
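
To give a concrete feel for the operation, here is a minimal sketch of my own (not Collins's or Chomsky's formalism) in Python, treating merged objects as nested two-membered sets so that each output displays its history of mergers as nested binary pairings:

# External Merge as a binary, set-forming operation that applies to lexical
# atoms and to its own previous outputs alike. The names below ("merge",
# "SynObj") are illustrative choices, not terminology from the book.

from typing import Union

SynObj = Union[str, frozenset]   # an object is a lexical atom or a two-membered set

def merge(alpha: SynObj, beta: SynObj) -> frozenset:
    """External Merge: target any two objects and create the new object {alpha, beta}."""
    return frozenset({alpha, beta})

# Recursion: each output records its history of mergers as nested binary pairings.
inner = merge("is", "wise")       # {is, wise}
outer = merge("Alice", inner)     # {Alice, {is, wise}}
print(outer)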

Collins argues further that Merge must be a binary operator. But as a binary operator, Merge creates symmetry between conjoined objects. The way to establish the desired hierarchical asymmetry is, according to Collins, to allow Merge to operate internally as well as externally, so that "any object of a pair may internally Merge to create a superset that contains the initial result of Merge" (113); this can be represented as follows: [α, β] → [α [α, β]]. Internal Merge can thus be thought of, Collins explains, "as a device of symmetry breaking, where one object of a directionally symmetrical pair is positioned so as to be asymmetrically related to a copy of itself and the copy's pair mate" (113). In this way, internal Merge selects an object that serves as the head of the structure -- presumably, he means α in the example of [α [α, β]] above. Creating headed structures brings us one step closer to interpretable unities.
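
Continuing the toy sketch above (again my gloss rather than anything in the book), internal Merge can be pictured as re-merging one member of a pair with the pair itself, which breaks the symmetry and singles that member out as the head:

# Internal Merge modeled as re-merging a member of a pair with the pair itself:
# {a, b} becomes {a, {a, b}}, with 'a' asymmetrically related to a copy of itself
# and the copy's pair mate. The head-selection reading is my interpretation of
# the review's discussion of headedness.

from typing import Union

SynObj = Union[str, frozenset]

def merge(alpha: SynObj, beta: SynObj) -> frozenset:
    return frozenset({alpha, beta})

def internal_merge(pair: frozenset, head: SynObj) -> frozenset:
    """Re-merge a member of `pair` with `pair` itself."""
    assert head in pair, "internal Merge selects one of the pair's own members"
    return merge(head, pair)

symmetric = merge("is", "wise")            # {is, wise}: a directionally symmetric pair
headed = internal_merge(symmetric, "is")   # {is, {is, wise}}: 'is' now heads the structure
print(headed)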

It is important to understand that Collins's Merge is "indifferent to interpretation"; it creates headed structures, and only some of these structures are interpretable. What, then, accounts for the interpretability of the select few merged structures? This is the second part of Collins's solution -- the part in which he invokes the Fregean insight: some lexical items that are merged fit together, others don't. This difference in "fit" is due to the inherent properties that lexical items possess. Thus, Collins suggests that we think of lexical items "not as unstructured simples" but "more like actual atoms that have properties that make them suited to form stable compounds" (118). That is, we should think of lexical items as "marked in ways that reflect the features of other items" (118). Collins gives the verb love as an example -- the syntactic property of this verb is that it takes two arguments, where the first argument needs to be an agent (AGENT) and the second the thing affected (PATIENT). It is properties like these that Collins considers "inherent" to syntactic elements, and it is thanks to them that some syntactic elements, when merged, give rise to an interpretable structure.
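
One crude way to picture this talk of inherent properties and fit, offered here as my own sketch and not as anything in the book, is to mark each lexical item with the arguments it takes and the roles it can bear, and to let interpretability turn on whether those marks match (the name "Bob" and the helper "fits" are hypothetical illustrations):

# Lexical items as bearers of inherent properties. Following the review's
# example, "love" is marked as taking an AGENT and a PATIENT argument;
# whether merged items "fit" is a matter of those marks matching.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class LexicalItem:
    form: str
    takes: tuple = ()                                     # roles this item assigns, in order
    fills: frozenset = field(default_factory=frozenset)   # roles this item can bear

alice = LexicalItem("Alice", fills=frozenset({"AGENT", "PATIENT"}))
bob = LexicalItem("Bob", fills=frozenset({"AGENT", "PATIENT"}))
love = LexicalItem("love", takes=("AGENT", "PATIENT"))

def fits(head: LexicalItem, *args: LexicalItem) -> bool:
    """Check that the number and marking of the arguments match the head's requirements."""
    return (len(args) == len(head.takes)
            and all(role in arg.fills for role, arg in zip(head.takes, args)))

print(fits(love, alice, bob))   # True: both arguments can bear the roles 'love' assigns
print(fits(love, alice))        # False: 'love' is marked as taking two arguments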

Finally, Collins argues that the bipartite account just outlined satisfies all three of his desiderata -- it meets the generativity desideratum by delivering an indefinite number of interpretable structures; with Merge, it provides us with a principle that is indeed specified independently from the unities it helps form and the elements that come to make up those unities (thus meeting the explanation desideratum); and finally, it explains why a given structure is interpretable by referring only to Merge and the lexical elements that enter the structure (thus meeting the exclusivity desideratum).

Collins's account raises some important questions. First, he does not discuss in any detail the ontological status of Merge. We are told that it is a set-theoretic operator that is realized by any mind that is capable of grasping structures: "being non-specific, we can credit Merge to any mind capable of a certain kind of calculation, perhaps to insects capable of dead-reckoning, navigating birds, or even pattern recognizers in general" (117). Thus, if I understand Collins correctly, Merge need not be species-specific or language-specific; any mind that recognizes patterns realizes Merge. But this is troubling. Wasn't Merge supposed to be an operator that created structure out of syntactic elements? If Collins extends its application to a wide variety of objects, much more needs to be said about it. We are no longer in the metaphysically neutral territory that Collins seemed to want to confine himself to.

Closely related to the metaphysical status of Merge is a worry about the ontological status of merged objects. Syntactic Merge generates interpretable and uninterpretable linguistic structures. But if non-syntactic Merge is also available, what kind of non-linguistic structured objects would it generate?

Perhaps I misunderstand Collins on this, and all he actually has in mind is syntactic Merge and its syntactic products. After all, it is syntactic Merge that Collins needs for his solution to the unity of linguistic meaning. But here a different worry presents itself -- a worry I was surprised not to find addressed in the book. Namely, Collins's account of unity relies on the notion of fit of syntactic elements to arrive at interpretable linguistic unities (since Merge on its own does not weed out the uninterpretable structures). But I am not at all sure that, without invoking semantic considerations, syntactic considerations alone have the resources to explain why certain elements fit together rather than others. It strikes me, for instance, that the reason why "Alice" fits with "is wise" rather than with "is raining" has much more to do with the meanings of the nouns and verbs in question than with syntactic (lexical) fit. How does Collins propose to treat cases such as these, or even more dramatic cases such as Chomsky's famous example "colorless green ideas sleep furiously"? Are such cases simply ruled out by Collins's notion of syntactic fit, and if so, how? If not, in what way are such sentences interpretable? At one point, when discussing the notion of syntactic fit, Collins explains: "Some items take arguments, and it is the number, nature, and placement of such arguments that makes for an interpretive whole, however silly or implausible we might find its content" (118). Is this an indication that Collins would indeed consider the above examples as interpretable?

Collins's notion of interpretability may be too weak to capture what philosophers are after when they are talking about the unity of linguistic meaning. This notion perhaps needs to be weak if the solution to the unity problem is going to be fundamentally syntactic. But my understanding is also that Collins's notion of the aim and scope of syntactic theory is stronger than what philosophers assume it to be. Collins explicitly states at one point that we ought not to conceive of syntax "as merely a descriptive means of representing bunches of words that exist anyhow, independently of their being structured one way or another" (129). On the contrary,

syntactic structure, in the guise of Merge or some other principle, is not a descriptive device that pertains to something out there already. It is hypothesized as a real structure, which we discover, not attribute, that constrains how we produce and consume the relevant vehicles, be they hand gestures or sound waves, or just thought processes (129).

While I find this Kantian approach very appealing, I remain doubtful, without further arguments on Collins's part, that syntactic structure can do as much.

Collins's book is certainly a noteworthy contribution to the debate surrounding the unity of the proposition. It is also an ambitious attempt to tie relevant developments in linguistics to an old philosophical problem. But the book sometimes suffers from trying to cover too much ground -- linguistic and philosophical -- too quickly, at the expense of clarity and accessibility. In particular, Collins's discussion of linguistic notions such as Merge and headedness struck me as far too brief to be sufficiently explanatory, and yet these notions are crucial to his account of linguistic unity. Despite these shortcomings, I am sure that Collins's book will prove to be a valuable resource for those investigating the insights that linguistics can bring to the problem.