Language in Context: Selected Essays

Jason Stanley, Language in Context: Selected Essays, Oxford University Press, 2007, 264pp., $39.95 (pbk), ISBN 9780199225934.

Reviewed by Gary Ostertag, Nassau Community College


Although the individual essays collected in Language in Context have already been widely read and discussed, it is useful to have them collected in a single volume. Reading the essays together, framed by an informative Introduction and Postscript, one can appreciate the richness, complexity and breadth of Jason Stanley's theoretical framework. These essays represent the state of the art in semantics and the philosophy of language and are mandatory reading for anyone working in these and related areas.

Minimalism, Contextualism and Indexicalism

In uttering 'It's raining' (R) at precisely this moment I communicate the proposition that it's now raining in Manhattan. Indeed, it seems that something stronger is true -- that in uttering these words I say that it's now raining in Manhattan. But while the present tense indicates the time of rain, no expression explicitly occurring in my utterance provides its location. The Minimalist takes there to be a strict correlation between the surface grammar of a sentence S and the proposition one can use S to say, relative to a context. Since there is no expression in (R) whose value at the context is Manhattan, the proposition expressed by my utterance of (R) cannot have Manhattan as a constituent. The Contextualist denies this requirement of strict correlation, holding that the minimal proposition encoded in (R)'s syntax is amenable to an optional process of free enrichment. On her view, what I said does involve a location, even though this location cannot be traced to any element in surface grammar. To adopt her terminology, it is an unarticulated constituent of my utterance.

The Minimalist holds that the semantic role of context is limited to assigning referents to so-called "automatic indexicals" ("I", "here", "now" and the like) and other explicitly context-sensitive expressions;[1] the Contextualist counters that this denies the obvious. True, the logical form of the sentence-type (R) contains no item whose value is Manhattan; nonetheless (she claims) I used (R) to say -- and not merely convey -- that it's now raining in Manhattan. Yet, while the Contextualist remains faithful to speakers' intuitions, there is a question whether she can give a principled account of how we arrive at the relevant proposition. If the mechanisms underlying pragmatic enrichment are truly "free" -- unconstrained by logical form -- then there is a real worry that a hearer's capacity to interpret freely enriched utterances will elude systematic treatment.

Is there an approach that can avoid the Scylla of Minimalism, which must consign very real semantic phenomena to pragmatics, without succumbing to the Charybdis of Contextualism, which respects our semantic intuitions but may fail ultimately to locate their source within a systematic theory? Stanley argues that Indexicalism is just such an account. While Contextualism constitutes a bold departure from Minimalism, maintaining that the semantic role of context is not limited to assigning values to (or "saturating") those context-sensitive items present at surface grammar, Indexicalism is equally bold in holding that all contextual effects "can be traced to logical form" (30), with the latter understood as a level of syntactic representation.

The fundamental contrast between Stanley and the Contextualist is that, for Stanley, every content-relevant aspect of an utterance of S is traceable to its logical form whereas, for the Contextualist, context can supplement the content of S in a manner unconstrained by logical form. While Stanley holds that certain elements of what is said are not "articulated" at surface grammar, he nonetheless maintains that these elements are articulated at the level of logical form and thus should not be confused with the unarticulated constituents posited by Contextualists. The major contribution of this work is the description of a framework in which these hidden elements are articulated and their truth-conditional impact made clear.

Indexicalism and the Binding Argument

Indexicalism holds that all contextual effects are traceable to logical form and can thus be handled, like automatic indexicals and demonstratives, by a process of saturation. How can we decide whether, with respect to (R), location is present at logical form or merely an optional parameter, provided by a process of free enrichment? Well, binding is a semantic phenomenon par excellence, so if it can be shown that the interpretation of rains depends on a higher quantifier -- one in position to bind a location variable, on the assumption that there is one -- then the obvious conclusion to draw is that rains does possess a location variable.

And indeed, it appears it can be shown. Take Stanley's example (1), under the relevant reading (1a):

1. Every time John lights a cigarette, it rains.

1a. Every time t, if John lights a cigarette at t, it rains (at t) at John's location at t.

In Stanley's formalism this becomes:

1b. Every time t, if John lights a cigarette at t, rains <f(t), g(t)>.

A note on the formalism: f is the identity function and g is a function from times to locations -- specifically, a mapping from t to John's location at t. It would seem that the verb rains is adequately represented as rains <t, l>, where t is a point in time and l a location. But since the value assigned to l will always be a function of t, it is best represented as g(t).
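Spelled out, the analysis just sketched assigns (1b) the following truth conditions (my reconstruction; 'loc' is a hypothetical label for the function taking a person and a time to that person's location, not Stanley's notation):

```latex
% Truth conditions of (1b), reconstructed from the definitions above:
% f is the identity function on times; g(t) is John's location at t.
\forall t\,\bigl(\text{John lights a cigarette at } t \rightarrow
  \text{rains}\langle f(t),\, g(t)\rangle\bigr),
\qquad f(t) = t, \quad g(t) = \mathrm{loc}(\text{John}, t).
```

So read, (1b) is true just in case every time at which John lights a cigarette is a time at which it rains where John then is -- which is just reading (1a).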

If we are to take the binding test at face value, rains tests positive for the covert structure posited by Stanley's analysis. In assigning rains the two parameters of time and place the analysis captures the intuitive truth conditions for (1). Were we to restrict ourselves to a temporal parameter, we couldn't capture the fact that (1) is false at a circumstance in which John smokes in Manhattan and it immediately begins to rain not there but in Boston.

It should be noted that the step from the observation that the interpretation of rains in (1) depends on the higher quantifier to the conclusion that rains has a covert location variable at logical form rests on what Stanley calls "The Binding Assumption" (49):

If α and β are within the same clause, and α semantically binds β, then either α is, or introduces, a variable-binding operator which is co-indexed with, and stands in a certain specified structural relation to, a variable which is either identical to, or is a constituent of, β.

That is, the above argument assumes that the semantic binding exemplified in (1) is revealing about (1)'s logical form. If the Contextualist concedes what seems obvious -- that semantic binding occurs in (1) -- and if she accepts the Binding Assumption, she must also accept that syntactic binding occurs in (1). While there are variable-free systems that deny this assumption, Stanley shows that these must remain off-limits to the Contextualist (50).

The analysis can be extended to other lexical categories. For example, in Chapters 1-3, Stanley develops the Nominal Restriction Theory (NRT), according to which the phenomenon of quantifier domain restriction is captured by the positing of covert domain variables -- although in this case, the variables are part of the lexical structure of nominals, not verbs. In a typical utterance of 'every vase is broken' the speaker does not intend to be interpreted as making a claim about every vase in existence, but only a contextually restricted range of vases. The nature of this restriction has been the subject of much debate. In Chapter 2, jointly written with Zoltan Gendler Szabó, it is argued both that the domain relative to which the above sentence is interpreted is implicitly restricted to a contextually definite domain of vases and that this restriction is grammatically based. The positive argument for this proposal parallels the argument we considered in the context of rains. If it can be shown that, relative to a given sentence, the domain of a quantifier expression Q2 occurring within the scope of another quantifier expression Q1 depends on Q1, then the obvious conclusion to draw is that Q2 possesses a domain variable bound by Q1. Moreover, given the Binding Assumption, it follows that this domain variable exists at logical form.

With this as background, consider (2). The relevant reading, given by (2a), is analyzed in Stanley's notation as (2b):

2. In most of John's classes, he fails exactly three Frenchmen.

2a. [most x: John teaches x] ([exactly three y: y Frenchman & y in x] (John fails y)).

2b. [most i: John teaches i] ([exactly three y: y <Frenchman f(i)>] (John fails y)).

In (2b), f is a function from John's classes to the set of individuals in the class. As with (1b), the quantifier binds a position in a (covert) function expression f(i); this function expression is itself a variable whose value is provided by saturation. A quick sketch of the semantics of the restriction on the embedded quantifier goes as follows: Something satisfies 'y <Frenchman f(i)>' relative to an assignment of a class (in the academic sense) to i just in case it is within the intersection of {x| x Frenchman} and the set f(i). Note that this delivers the desired truth conditions for (2): for most classes i that John teaches, he fails exactly three Frenchmen in i.

There remains the question of the syntactic positioning of the nominal restrictor. The hypothesis on the table is that there is a function expression that is not phonetically realized but which nonetheless exists at logical form. Stanley and Szabó argue that this expression "co-occurs" with the nominal (103-7). But then it is unclear how, from a syntactic point of view, the quantifier can bind the covert variable.[2] As John Collins (2007) points out, the posited variable is not in the position required for it to be bound by the higher quantifier -- it is not within its c-command domain. This provides something of an embarrassment for the proposal, which gets a fair amount of rhetorical support from its presumed compatibility with "correct syntactic theory". Stanley is explicit in making the following assumption:

In semantic interpretation, one may never postulate hidden structure that is inconsistent with correct syntactic theory (35).

In fact, mere compatibility with correct syntactic theory is not a substantive constraint on semantic theorizing. One can respond to Collins' worry by claiming that it is open to Stanley to introduce an extended c-command relation -- one that explicitly captures the intended binding relation between quantifier and the relevant (buried) variable. To say that the higher quantifier syntactically binds the relevant variable is then to say that the latter is within the extended c-command domain of the former. But to be properly constrained by correct syntactic theory is not simply to avoid postulation of "hidden structure that is inconsistent with correct syntactic theory" -- this surely doesn't raise the bar very high -- it is to avoid, inter alia, postulating syntactic relations that are unrecognized by correct syntactic theory.

To be fair, Stanley now rejects NRT, which claims that the relevant covert variables co-occur with the nominal, in favor of the view that these variables occupy their own terminal nodes (Postscript, 249). But the revised view is considered and rejected in Chapter 2 (first published in 2000): "This involves the postulation of an entire unarticulated relative clause. Such a postulation ultimately requires syntactic justification" (106; see also the analysis (41) on 104). Surprisingly, no such justification is even hinted at in the Postscript, which focuses on purely semantic concerns. Moreover, given that the revised view posits an entire unarticulated relative clause, the distinction between the semantic and syntactic-ellipsis approaches to domain restriction breaks down. Yet, the latter approach is rejected in Chapters 2 (86-9) and 5.

There are also worries about the data motivating Indexicalism. The binding test is notorious for finding covert structure in verbs where there seems to be none -- this is the "over-generation problem". While it would seem obvious that rains contains a hidden location parameter whereas dances does not, they both test positive under Stanley's test. Consider the following pair:

3. Nina is dancing.

4. Every time I am at a disco, Nina is dancing.

In uttering (R), I say that it's raining in Manhattan; but it seems clear that there is no context in which I can use (3) to say (and not merely implicate) that Nina is dancing in Manhattan. Yet, both (1) and (4) exhibit a dependency: (1) says that for every time t at which John lights a cigarette, it rains at John's location at t; similarly, (4) says that for every time t at which I am at a disco, Nina dances at t at the relevant disco.[3]

One possibility is that genuine covert structure is revealed not by testing positive under the binding test alone, but by also testing positive under "the negation test" (226; see also Martí 2006). This test is applied to contexts that test positive for covert structure according to the binding test. For example, the binding test would appear to posit a variable in A's remark, even though eats apparently has no hidden variable:

A. Whenever John's father cooks mushrooms, he eats.

B. *No he doesn't; he orders take-out and eats that.

The negation test is applied with B's remark. Were there a hidden food variable in A's assertion, then B's remark would contradict it. Since it doesn't succeed in doing so, eats fails the negation test. Thus, eats has no covert food variable.

As Stanley observes (226), the above fails to pattern with the following (slightly modified):

C. Whenever John smokes, it rains.

D. No it doesn't -- it rains somewhere else!

D's remark does contradict C's. Since it does, rains passes the negation test. But then the Indexicalist has a solution to the over-generation problem.

Still, a problem from under-generation remains. Cappelen and Hawthorne (2007) show that, if we restrict ourselves to functions of a single variable, we are unable to capture dependencies that, modulo the Binding Assumption, exist at logical form. Consider, for example, (5), with reading (5a), and Stanley's analysis (5b):

5. Every time someone smokes, it rains.

5a. Every time t and person x, if x smokes at t, it rains (at t) at x's location at t.

5b. For every time t, person x, if x smokes (at t), then rains <f(t), g(t)>.

Recall that in (1b) g is assigned a mapping from t to John's location at t. In effect, g is assigned a John-dependent function -- a restriction of binary g*, which maps <p, t> to the location of p at t. Thus we need something truth-conditionally equivalent to (5c) but which, in conformity with (5b), assigns a unary function to the function variable g:

5c. For every time t, person x, if x smokes (at t), then rains <f(t), g*(x, t)>.

But since there is no unary function from times to locations that can mimic g*, no instance of (5b) will do.
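The reason no unary function will serve is worth spelling out (my gloss, not Cappelen and Hawthorne's). Suppose, for reductio, that a single unary g agreed with g* for every smoker:

```latex
% Assume a unary g with g(t) = g^*(x, t) for all persons x and times t.
% Let persons a and b both smoke at time t_0 in different places. Then:
g(t_0) = g^*(a, t_0) \quad\text{and}\quad g(t_0) = g^*(b, t_0),
\;\text{hence}\; g^*(a, t_0) = g^*(b, t_0).
```

But g*(a, t_0) and g*(b, t_0) are, by hypothesis, distinct locations. So no assignment of a unary function to g renders (5b) equivalent to (5c).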

One solution is to shift to an event-based analysis, one according to which "verbs are associated with event or situation variables" (257). On this view, (1)/(5) might become (1d)/(5d):

1d. ∀e (e is a cigarette-smoking & e is by John → ∃e* (e* overlaps e & e* is a raining f(e))).

5d. ∀p ∀e (e is a cigarette-smoking & e is by p → ∃e* (e* overlaps e & e* is a raining f(e))).

(Where f is a function from events to co-located events.) (1d) says that for every cigarette-smoking by John there is an overlapping event which is a raining; (5d) is a universal generalization of this sentence.

While Stanley now prefers this account (257), it threatens to obliterate the distinction he is at pains to preserve between verbs like rains and verbs like dances and eats. To see this, consider the event-theoretic analysis of A's remark:

∀e (e is a mushroom-cooking & e is by John's father → ∃e* (e* overlaps e & e* is an eating & e* is by John)).

That is: for any mushroom-cooking conducted by John's father, there is an overlapping eating (of something unspecified) by John. If this is the correct analysis, it becomes hard to see why context can't supplement this content by adding that the object of John's eating is the food cooked by John's father:

∀e (e is a mushroom-cooking & e is by John's father → ∃e* (e* overlaps e & e* is an eating & e* is by John & e* is of f(e))).

(Where f is a function from events to their products -- here, from John's father's cooking to cooked mushrooms.) While this reading is unavailable -- at least, so the negation test tells us -- notice that Stanley can no longer explain why this is so. His initial proposal, however, has a neat explanation: eats contains no covert variable for thing-eaten. That the event-based analysis has nothing to say here is particularly awkward for Stanley, who criticizes the Contextualist for failing to explain why certain enrichments are not possible (238). For example, while one can use 'Every Frenchman smokes Gauloises' to express the proposition that every Frenchman who's read Sartre smokes Gauloises, one cannot use it to express the proposition that every Frenchman or Dutchman smokes Gauloises. Nothing in the Contextualist's account of free enrichment explains why only the former sort of enrichment is possible. But, if we are to adopt Stanley's preferred analysis, he no longer has an explanation of this fact either.

The problem from under-generation extends to NRT. As Breheny (2003) argues, there are cases in which the restriction co-occurring with a nominal cannot be a function of a single variable. Consider Breheny's example:

6. Every student was feeling particularly lucky and thought no examiner would notice every mistake.

One available reading is captured as follows:

6a. Every [student]x was feeling particularly lucky and thought no [examiner]y would notice every [mistake occurring on a paper turned in by x and examined by y]z.

To get the appropriate restriction on mistake we require a function of two variables; but on Stanley's analysis the nominal restriction is a function of a single variable. So an intuitively correct analysis of (6) seems unavailable to him.

Stanley responds by noting that examiner is a relational expression and can thus be bound by every student, yielding the following reading (223):

6b. Every [student]x thought no [examiner x]y would notice every [mistake f(x)]z.

(Here, f is a function mapping students to their papers.) This seems off, though. The thought ascribed in (6) to each student x is that no examiner y will notice every mistake on the paper x turns in and y examines; but the thought (6b) ascribes to each student x is that no examiner of x's paper will notice every mistake on any paper x has ever turned in. It appears, then, that if we limit ourselves to a function of a single variable, we don't get precisely the restriction we want, with the result that an available reading is not captured.

One final area of concern involves a variant of (1), namely: 'Wherever John smokes, it rains'. As with (1), the interpretation of rains depends on the higher quantifier. In this case, however, the relevant dependency doesn't exist at logical form:

For all locations l, times t, if John smokes in l at t, rains <f(t), g(t)>.

Assuming the analysis of rains is correct, Stanley must either show that, contrary to appearances, the above sentence does not exemplify semantic binding -- even though (1) does -- or concede, contra the Binding Assumption, that semantic binding is not always reflected syntactically.

Extending the Analysis

So far, the cases where context appears to play a semantic role have been plausible candidates for Indexicalist treatment. But there are other cases where context appears to play a semantic role but where it is initially unclear how an Indexicalist analysis should proceed. Consider the following example:

7. Eating some of the cake is better than eating all of it.

Assuming that it does not express a literal contradiction, it appears that an utterance of (7) can express a truth only given contextual supplementation. One option -- perhaps pre-theoretically the most obvious -- is to take the content implicated by the first gerund (eating some but not all of the cake) and have this serve as an argument for the better-than relation. This would deliver the proposition that eating some but not all of the cake is better than eating all of it. However, this option involves pragmatic intrusion, wherein a post-semantic process delivers the input to the composition rules. A second option holds that the minimal proposition that constitutes (7)'s semantic content is supplemented by a process of free enrichment. This again involves pragmatic intrusion, as a post-semantic process contributes to what is said. A final option holds that there is some constituent of (7)'s logical form whose contextually assigned value determines the utterance's intuitive reading. In the terminology of Chapter 4, the third option recognizes only weak pragmatic effects -- cases in which "context … determines interpretation of a lexical item in accord with the standing meaning of that lexical item" (140) -- whereas both of the first two options recognize strong pragmatic effects -- effects that are not weak in the above sense.

This chapter, co-written with Jeffrey King, argues that pragmatic intrusion can be avoided, claiming that the arguments in its favor "rest upon an inadequate grasp of the syntax and semantics of the particular constructions that appear to give rise to it" (163). To appreciate King and Stanley's proposal, let's consider first their semantics for "better than" (slightly modified):

(BETTER) p is better than q just in case the closest p-world is preferable to the closest q-world.

(Where the respect in which p is preferable to q is determined by context.) On a straightforward application of this equivalence, what someone uttering (7) says is (7a):

7a. The closest world at which I eat some of the cake is preferable to the closest world at which I eat all of the cake.

But as King and Stanley point out, this will in some cases deliver an absurdity. Maybe the closest world at which I eat any of the cake at all is a world at which I eat the entire cake. Then (7a), and thus (7), would be true just in case a given world is preferable to itself!

What this interpretation overlooks, they suggest, is that (7) is naturally read in such a way that emphasis is placed on some. Moreover, this emphasis generates what is known as a scalar implicature. Very roughly, a scalar implicature occurs when the choice of a given word in a given linguistic context indicates that replacement by a stronger word within that context would produce a falsehood. Examples of relevant scales are: <believes, knows>, <some, all>, <or, and>. Thus, when Mary assertively utters 'I believe that Krugman admires Friedman' (with emphasis on 'believe') she implicates that she does not know that Krugman admires Friedman. Similarly, to employ some in the matrix 'eating x of the cake' as it occurs in (7) generates the implicature that inserting all would produce a falsehood -- that is, it implicates that (7b) is false:

7b. The closest world at which I eat all of the cake is preferable to the closest world at which I eat all of the cake.

King and Stanley claim that this implicature has truth-conditional effects, but deny that these are strong pragmatic effects: "This … implicature does affect the truth-conditions of [(7)]. But the way it affects the truth-conditions is not by 'enriching' the semantic content" (171).

How, then, does the implicature affect (7)'s truth-conditions? Their answer involves locating an additional dimension of context-sensitivity in better than: the closest p-world relation is restricted by context to a specific domain of worlds -- in this case, the set of non-gluttonous worlds (where a world is non-gluttonous just in case I don't eat all of the cake there).[4] Relative to such a restriction (7a) is true.

What this sketch fails to tell us is how the restriction is effected. There are two possible answers, both unsatisfactory. The first runs as follows: When the utterance is first interpreted -- relative to the initial context -- two things occur: (i) it is taken to express something potentially absurd; (ii) it implicates that (7b) is false. Given (i), it is reinterpreted, this time relative to a revised context, one incorporating the implicated content. Relative to the revised context, the utterance expresses something true.

Two problems immediately present themselves. First, while a speaker, in stating that some F's are G, typically conveys that it is false that all F's are G, this scalar implicature fails to get generated when it is cancelled by the context. Moreover, it must equally fail when the relevant proposition is already mutually known -- when it is already contained in the context. But the proposition implicated by the utterance of (7) -- that (7b) is false -- is trivially true and thus presumably mutually known. How, then, are we to make sense of the idea that this proposition, being trivially true, can be implicated by an utterance of (7)? And how would "adding" this trivial truth to the context have an effect on the interpretation of (7)?

Second, if we draw the line between semantics and pragmatics precisely at the point where the compositional mechanisms end and the all-purpose inferences begin, then the proffered analysis, involving what appears to be a strong pragmatic effect, should not be available. After all, on the suggested analysis, output from the (weak) pragmatic module is used as input to the semantic module.

These combined worries lead me to believe I've misinterpreted King and Stanley. After all, they are explicit in maintaining that their account avoids pragmatic intrusion. Perhaps, then, King and Stanley are proposing that we take the emphasis on some in (7) as effecting a restriction directly, without an intervening process of implicature generation. (Although this makes it unclear what role scalar implicature plays in their account.) The effect of this word choice (together with stress) would be to restrict the first occurrence of the closeness relation in the relevant instance of the truth-clause ('closest world in which I eat some of the cake') to non-gluttonous worlds -- worlds at which I don't eat all of the cake. Rather than appealing to scalar implicatures, this proposal would simply appeal to the relevant scales.

But this also presents difficulties. If we take the choice of some in the clause 'eating some of the cake' to have semantic and not merely pragmatic import, then it is hard to see how the same word choice could have merely pragmatic import in 'John ate some of the cake'. Of course, one might bite the bullet and maintain that, appearances to the contrary, the latter choice does have semantic significance, but this would be a rather desperate move -- one which I doubt King and Stanley would want to make.

It bears emphasizing that there's a hefty assumption accompanying each of the above options: that the syntax of the better than construction supports the conjecture that some item in the logical form of 'p is better than q' determines the value of (or the restriction on) the closeness relation.[5] While King and Stanley acknowledge this fact (171), they remain optimistic that their conjecture will be borne out by future developments.

The Dialectical Advantage of Contextualism over Indexicalism

In Literal Meaning, François Recanati writes that the Contextualist has a dialectical advantage over the Indexicalist. Whereas the Contextualist makes an existential claim -- that there is at least one case of free enrichment -- the Indexicalist makes a (negative) universal claim -- that there are no such cases. Stanley disputes this, arguing that the Contextualist herself makes a universal claim: for each of her examples -- assuming that they are genuinely semantic and not amenable to pragmatic treatment -- she must show that there is "no way" that it can be handled by a more refined semantic analysis (239). Perhaps I'm misunderstanding the dialectical situation, but it seems to me that if the Contextualist establishes that no semantic treatment is available for even one of her examples, she has succeeded. Of course, there remains the embedded (negative) universal. But I would venture that Stanley's assumption (quoted above) cuts both ways: all the Contextualist needs to show is that, for at least one case of apparent free enrichment, there is no available semantic treatment consistent with correct syntactic theory. Perhaps the Contextualist still needs to offer some guarantee that there is no way she has overlooked a correct semantic account meeting this condition. But the point is hardly worth making: once made, it shows itself to be dialectically gratuitous, as it also cuts both ways. After all, a parallel (and equally unreasonable) demand can be made by the Contextualist: for each Indexicalist treatment of a case of apparent free enrichment, the Indexicalist needs to assure us that there is no way he has overlooked a potential inconsistency with correct syntactic or semantic theory.

In sum: once both parties are assumed to hold to contextually definite standards of proof -- in this case quite high -- the dialectical advantage goes to the Contextualist and precisely for the reason Recanati gives: she is making an existential claim, in contrast to the Indexicalist, who makes a universal one.


The foregoing indicates certain challenges facing the Indexicalist. But there is no question that Language in Context is an outstanding achievement. Not since Stephen Neale's Descriptions has a book brought the apparatus of formal semantics and linguistic theory to bear on issues in the philosophy of language in such a constructive and illuminating way.[6]


Breheny, R. 2003. "A Lexical Account of Implicit (Bound) Context Dependence". Proceedings of SALT 13. Ithaca: CLC Publications.

Cappelen, H. and J. Hawthorne. 2007. "Locations and Binding". Analysis 67: 95-105.

Collins, J. 2007. "Syntax, More or Less". Mind 116: 806-50.

Martí, L. 2006. "Unarticulated Constituents Revisited". Linguistics and Philosophy 29: 135-66.

Perry, J. 1997. "Indexicals and Demonstratives." In B. Hale and C. Wright (eds.), A Companion to Philosophy of Language. Oxford: Blackwell.

Recanati, F. 2004. Literal Meaning. Cambridge: Cambridge University Press.


[1] "Automatic" in that their reference is determined independently of the speaker's intentions, unlike other context-sensitive items; see Perry (1997).

[2] The syntactic representation of a sentence is a tree, with items such as Sentence, DP, NP and VP occupying non-terminal (i.e., branching) nodes. The terminal nodes -- those without further branches -- are labeled; these labels are typically lexical items. In (*) [S [DP [Det every] [NP child]] [VP smiled]] the non-terminal nodes are the categories S, DP, NP, Det and VP; the terminal nodes are the labels: 'every', 'child' and 'smiled'. On Stanley's view, the word 'child' is really 'child f(i)'; thus (*) should be:

[S [DP [Det every] [NP child f(i)]] [VP smiled]].

Since 'f(i)' is part of the lexical structure of the nominal and does not occupy its own terminal node, it is not bindable by the higher quantifier.

[3] Both the example and the analysis are from Cappelen and Hawthorne 2007.

[4] More precisely, the effect is to restrict our attention to these worlds only when interpreting the first closeness relation; we interpret the second relation relative to the entire domain of possible worlds. This corrects an error in King and Stanley's discussion (171).

[5] In addition, the second option assumes that stress is a genuinely semantic phenomenon and, consequently (given Stanley's assumptions), must be represented syntactically. While some have argued that stress (or focus) has syntactic effects, it is worth noting that this is not yet orthodoxy and thus not obviously consistent with "correct syntactic theory".

[6] Thanks to Ray Buchanan, Frank Pupa and Zsófia Zvolenszky for very helpful comments on an earlier draft.