Common Sense, Reasoning, and Rationality


Elio, Renee (ed.), Common Sense, Reasoning, and Rationality, Oxford University Press, 2002, 278pp, $29.95 (pbk), ISBN 0195147677.

Reviewed by Jonathan E. Adler, Brooklyn College and the Graduate School, CUNY

2002.11.07


This is an important collection, marking the emergence of an exciting branch of cognitive science—the interdisciplinary, empirically informed study of reasoning. The collection could serve as a text for advanced undergraduate or graduate courses on reasoning or within a module for courses in cognitive science or philosophy of psychology.

The collection is based on a 1998 conference. Three of the essays are previously published. The articles presuppose some background in the relevant fields and a few involve technicalities, but not many. (Henry Kyburg’s contribution is an exception.) The essays are mainly introductions to works in progress. The editor, Renée Elio, provides a substantial and very useful introduction, including summaries of each chapter.

Of the 10 essays, four focus on AI modeling of reasoning and five on psychological studies of reasoning, particularly logic and probability. Gilbert Harman’s ‘The Logic of Ordinary Language’ is pretty much on its own. It extends the original explorations that he began in Change in View: Principles of Reasoning (MIT Press, 1986). Although its discussion focuses on the question of the role of logic in an account of inference, its major overlap with both groups is to be found in his brief treatment of belief-revision.

The articles in the first group by John Pollock, Kyburg, and Paul Thagard (et al.) should be humbling to those of us who work in epistemology. These philosophers, along with others, most prominently Clark Glymour, have set themselves the ambitious goal of converting vague, abstract epistemological doctrines into precise, explicit, and structured sets of rules, ultimately to be tested by the implementation of programs.

The article by Thagard, Eliasmith, Rusnock, and Shelly, “Knowledge and Coherence”, exhibits well what a sustained epistemological project looks like within AI. The paper reports on an extension of Thagard’s program ECHO, which he has applied to modeling case studies of reasoning, especially in science. In this article, Thagard et al. give concrete form to a related set of coherence notions (explanatory, analogical, conceptual, deductive, and perceptual). Each coherence notion comes with a set of specific constraints, and the acceptance or rejection of new elements is a function of the maximization of constraint satisfaction. The primary application here is to Susan Haack’s ‘foundherentism’ and her vivid crossword-puzzle illustration.
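To give a rough sense of how coherence as constraint maximization can be computed, here is a minimal sketch in Python. The elements, weights, and the small ‘data priority’ bonus for evidence are invented for illustration, and the exhaustive search is feasible only for toy cases; ECHO itself computes coherence with a connectionist network.

from itertools import product

# Toy, invented example (not the actual input of Thagard et al.'s ECHO).
elements = ["E1", "H1", "H2"]        # one piece of evidence, two hypotheses
cohere = {("E1", "H1"): 2.0}         # positive constraint: satisfied if both
                                     # elements are accepted or both rejected
incohere = {("H1", "H2"): 1.5}       # negative constraint: satisfied unless
                                     # both elements are accepted
data_priority = {"E1": 0.5}          # evidence gets some acceptability of its own

def satisfied_weight(accepted):
    """Total weight of constraints satisfied by an accept/reject partition."""
    w = 0.0
    for (a, b), weight in cohere.items():
        if (a in accepted) == (b in accepted):
            w += weight
    for (a, b), weight in incohere.items():
        if not (a in accepted and b in accepted):
            w += weight
    for e, weight in data_priority.items():
        if e in accepted:
            w += weight
    return w

# Accept the partition that maximizes constraint satisfaction
# (exhaustive search over all accept/reject assignments).
best = max(
    ({e for e, keep in zip(elements, mask) if keep}
     for mask in product([True, False], repeat=len(elements))),
    key=satisfied_weight,
)
print(best)   # accepts E1 and H1; H2 is rejected because it incoheres with H1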

Pollock’s essay is a particularly good entry point to the topic of AI and reasoning generally. Drawing on his ‘OSCAR’ project, Pollock gives us a grip on an AI approach to means-end reasoning. Toward the end he handles a knotty problem for AI: how to represent the overriding of a default rule within a plan to light a match. The program incorporates a ‘default’ rule:

Temporal projection: If t0 < t1, believing P-at-t0 is a defeasible reason for the agent to believe P-at-t1, the strength of the reason being a monotonically decreasing function of t1-t0. (p. 70)

So, believing that the match will light now is thereby a good, if defeasible, reason to believe it will light one minute later (and a less good reason to believe that it will light tomorrow). But how does the program determine for which substitutions for P-at-ti temporal projection applies? It does not seem appropriate for the stock market (e.g. believing that the NASDAQ is up-at-8am on 2.4.2002) or for finding one’s way out of an unmarked forest.

Although Pollock does not address the question of the scope of temporal projection, he does raise other difficulties for it. One problem, a descendant of Goodman’s ‘grue’ paradox, requires a restriction on disjunctions, which Pollock calls ‘projectibility’. Given a projectibility-restricted version of temporal projection, Pollock raises a difficulty alluded to above. If a match is not burning at t0, then we will infer by temporal projection that it will not be burning at t2. But, of course, if it is struck at t1 and it is still dry, then we expect it to burn at t2. The highly credible solution is to apply temporal projection in temporal order. Since what happens at t1, when the dry match is struck, occurs prior to t2, this inference will go through, and the projection of no change (between t0 and t2) is preempted.
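As an illustration of how such a default might be implemented, here is a minimal sketch in Python; it is not Pollock’s OSCAR, and the exponential decay, half-life, and match scenario are assumptions of mine (Pollock requires only that the reason’s strength decrease monotonically in t1-t0). Projection is applied in temporal order, so the struck match preempts the earlier ‘not burning’ projection.

import math

def projection_strength(t0, t1, half_life=10.0):
    """Strength of the defeasible reason for P-at-t1 given P-at-t0;
    exponential decay is just one monotonically decreasing choice."""
    return math.exp(-(t1 - t0) / half_life)

def project(known_states, t_query):
    """Apply temporal projection in temporal order: project forward from the
    latest state known before t_query, so an intervening change (the dry
    match struck and lit at t1) preempts the earlier projection from t0."""
    prior = [(t, state) for t, state in sorted(known_states) if t <= t_query]
    if not prior:
        return None, 0.0
    t_last, state = prior[-1]
    return state, projection_strength(t_last, t_query)

known_states = [(0, "not burning"), (5, "burning")]  # struck and lit at t1 = 5
print(project(known_states, 6))  # ('burning', ~0.9), not 'not burning' from t0

The scope worry raised above then reappears as the question of which substitutions for P any such decay schedule is appropriate for at all.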

At the heart of the shift in AI from well-defined problems in highly restricted domains, such as checkers, logic proofs, and ‘cannibals and priests’—the kind of tasks for which Newell and Simon’s General Problem Solver was intended—to modeling common-sense reasoning is the appreciation that the former type of problem radically understates the challenge of trying to formalize simple human reasoning. Common-sense reasoning depends upon a vast background of knowledge about causality, motion (e.g. in Elio’s example of cracking an egg, that the crack in the shell has to face down for the egg to go into the bowl), how our actions impact others, how simple things work, and so on. We develop this background early, and the learning never stops:

Commonsense knowledge seems to be that which people come to know in the course of growing up and living in the world. (Elio, p. 9)

A problem with writing a program to accomplish common-sense reasoning is now apparent. One of the first rules of programming is that computers are maximally stupid, requiring maximally explicit rules. But common-sense knowledge cannot—arguably—be explicitly represented.

Modeling common-sense reasoning requires default assumptions, as illustrated above by Pollock’s use of temporal projection. The reasoning is non-monotonic: new information may upset previous (well-founded) conclusions or generate inconsistencies, neither of which can hold for sound and deductively valid arguments. If you see a bird, you ‘jump’ to the conclusion (infer) that it flies, even though it is not strictly true that all birds fly. Elio observes (p. 22) that if we identify this common-sense reasoning with the cognitive architecture, we render reasoning “cognitively impenetrable” (inaccessible to the whole body of beliefs). However, this is an awkward grouping if common-sense reasoning is also holistic.

Kyburg’s essay takes up these issues and continues his remarkably sustained (for over 40 years!) investigation of inconsistencies that arise particularly through conjunction or agglomeration. In this case he focuses on measurement (a set of measurements can be acceptable even though there are bound to be errors among them). As he has done in related works, he tries to show how (non-monotonic) logics and a probabilistic acceptance theory can be inconsistency-tolerant without being inconsistency-indifferent. Kyburg has created a formidable body of work which, among other accomplishments, has made us aware of the subtle differentiations possible in the domain of the inconsistent, and of the costs that come with the inapplicability of any absolute ban on inconsistency.

Default assumptions are also necessary to handle problems of computational feasibility. Heuristics, rather than algorithms alone, are needed to cut down the vast space of possibilities, such as the moves in a game like chess. Simon’s notions of ‘bounded rationality’ and ‘satisficing’ are taken up, under the label ‘bounded optimality’, in the second essay, by Stuart Russell. The question is whether agents maximize utility in their actions, given the task demands and the environmental constraints. Such concerns over feasibility are a bridge to the psychological studies of reasoning. The studies of logical, probabilistic, and statistical reasoning have given rise to a roughly twenty-year-old debate about rationality. Recently that debate has taken a new turn due to the perspective of evolutionary psychology, central to three of the four essays.

In his article, Gerd Gigerenzer (with Czerlinski and Martignon) reports his studies of ‘fast and frugal heuristics’, work that, though it developed alongside his work on evolutionary psychology, is independently fascinating. Antecedently, we think of speed and accuracy in problem-solving as values to be traded off. What Gigerenzer finds is that a number of heuristics we deploy give us speed with very little cost in accuracy.

As illustration, consider the heuristic ‘take-the-best’. Imagine a problem like predicting which of two cities has the higher homelessness rate, given various ‘cues’: rent control, vacancy rate, temperature, unemployment, poverty, public housing. Take-the-best selects the cue with the highest validity; if that cue discriminates between the two target cities (if the cue points to a higher homelessness rate for one of the cities but not the other), then take-the-best offers that city as the one with the higher homelessness rate, and stops. Otherwise it goes on to the cue next in the validity ranking. The arresting finding is that take-the-best does about as well as a number of other rules that are responsive to more factors and that evaluate them on more detailed criteria.
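The procedure is simple enough to state in a few lines. Here is a minimal sketch in Python; the cue names, validities, and city profiles are invented for illustration and are not Gigerenzer’s data.

# Cues ordered by validity, highest first (hypothetical values).
cues = [
    ("poverty", 0.85),
    ("rent control", 0.80),
    ("vacancy rate", 0.70),
    ("unemployment", 0.65),
]

# Cue values for two hypothetical cities: True means the cue is present.
city_a = {"poverty": True, "rent control": True,
          "vacancy rate": False, "unemployment": True}
city_b = {"poverty": True, "rent control": False,
          "vacancy rate": False, "unemployment": False}

def take_the_best(a, b):
    """Check cues in order of validity; the first cue that discriminates
    (present for one city but not the other) decides, and search stops."""
    for name, _validity in cues:
        if a[name] != b[name]:
            return "A" if a[name] else "B"
    return "guess"   # no cue discriminates: fall back to guessing

print(take_the_best(city_a, city_b))   # 'A', decided by the rent-control cue

Note that the heuristic never weighs or combines cues; once a single discriminating cue is found, all the rest are ignored.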

Take-the-best and similar fast and frugal heuristics succeed because of their ‘ecological rationality’. Many real-world environments offer inquirers scarce information from which to make predictions. Rules that exploit more cues can gain accuracy at the cost of a longer search only if these additional cues supply enough new information to compensate; they do not when the information is redundant. A further advantage of fast and frugal heuristics is that often only a few cues are highly diagnostic, and take-the-best looks only to cues with high validity.

Denise Cummins’ ‘The Evolutionary Roots of Intelligence and Rationality’ is a compact introduction to the evolutionary-psychology approach. The basic thesis is that the mind evolved under natural selection pressures, and it evolved in the form of numerous, innate, domain-specific modules (including domains of reasoning), which are adaptations.

In her article, Cummins is particularly interested in modules for appreciating and detecting cheaters in cooperative social arrangements. Aside from treating the evolution of guile, deceit, and the development of children’s ‘theory of mind’, she presents the well-known evolutionary psychologists’ reinterpretation of the most famous test of conditional reasoning (the Wason selection task). Cummins, along with Cosmides, Tooby, and Gigerenzer, the leaders in the field, claims that subjects greatly improve on this task when it is specified in deontic terms of what is permitted and forbidden, and that success with this content is not merely an artifact of content-familiarity.

Recently, however, Fodor threw a zinger their way. (See “Appendix: Why we are so good at catching cheaters” in his The Mind Doesn’t Work That Way, MIT Press, 2001.) Very briefly, Fodor argues that the deontic “it’s required that if p then q” cannot be read as a conditional. But if the rule is not a conditional, then the conclusion that subjects do much better on a deontic version of the task does not go through.

If Fodor’s argument holds up, it suggests a rethinking of the import of a number of the evolutionary psychologists’ studies on probability. For in each of these studies, both the format of the task and the question posed differ from those in the original studies. Should the higher percentage of subjects who judge according to the probability rules be reported as showing that the traditional interpretation has been too harsh in criticizing subjects, since it was based on contextually inappropriate tasks? Or, instead, should these new variants be taken only to show limits on the scope of subjects’ defective reasoning?

Richard Samuels, Stephen Stich, and Michael Bishop (SSB) in their “Ending the Rationality Wars” begin by noting that the heuristics and biases tradition, as a pessimistic one, and the evolutionary approach, as optimistic, have been understood as competing. The central thesis of their paper is that this understanding is misguided, at least for the core claims of each view.

However, in their largely successful effort at reconciliation, they continue to express their claims in terms of rationality. But this is, I think, a main source of the inflated claims that have generated more fog in the discussion than is necessary. Without a full weighing of the expected costs and benefits to us of following the rules, a weighing that might explain our failures in the various tasks, what is the basis for moving from those failures to irrationality? The latter claim is more tendentious and less determinate. (Think here of the debate over cross-cultural attributions of irrationality.) A further problem is that the reasoning studies are little focused on revising beliefs with new information (continuous environments) and even less focused on discursive reasoning (as in extended argument). So we cannot take these studies as giving us a full enough picture for crucial judgments of diachronic rationality.

It is unfortunate that SSB do not consider the views of Oaksford and Chater, who argue in their essay that common-sense reasoning is inductive or inference to the best explanation, rarely deductive. Although their claim that we confront a world of uncertainty, and that therefore our reasoning must be non-monotonic, has a ring of truth, much more is needed for rejecting any significant role for deductive logic. To use an example from Lance Rips, does not deductive logic play a role in drawing consequences like the following?

Bill is in Alaska and Mary likes soccer. So, Bill is in Alaska.

For that matter, do not inferences to the best explanation embed deductive rules (e.g. modus ponens)?

Mike Oaksford and Nick Chater’s most impressive results concern the (much explained) Wason selection task. They take subjects to be guided by a rule to increase expected information, and so to construe their task not as one of falsification but as one of data selection. (Their article summarizes an extensive and forceful body of experiments and reflections.) Moreover, whereas SSB accept the normative conclusions of the heuristics and biases literature, Oaksford and Chater, like L. Jonathan Cohen and others, dissent from it.

Like SSB’s, Oaksford and Chater’s philosophical remarks lean heavily on the notion of rationality, and are none the better for it. The underlying premise of Oaksford and Chater’s argument is that the correct theory of rationality must imply that we are by-and-large rational. That implication requires, they believe, that we (qua subjects) succeed in the experimental studies, including those on logic (e.g., the Wason selection task), probability (e.g., the role of conjunction, integration of base rates), and statistics (e.g., the law of large numbers, regression). It seems to me that they cannot establish this latter (alleged) requirement. It is not credible that fruitful, intelligent lives cannot be achieved if, say, one neglects base rates when they conflict with stereotypical or representative information. Further, the empirical studies do not show any wholesale rejection of rudimentary rules or principles of logic, statistics, and probability, at least at the level of competence.

In ‘Reasoning Imperialism’, Rips opposes the tendencies of cognitive psychologists, like Oaksford and Chater, but also Johnson-Laird, to treat their theories of induction [deduction] as usurping any role for deductive [inductive] reasoning. (Rips is best known for his experimental demonstrations of the mental representation of deductive logic rules.) Rips defends the view that our reasoning about validity (that the conclusion must be true if the premises are) and about inductive goodness (that the conclusion could be false, even if the premises are true, but the premises render the conclusion more probable or plausible) are separate reasoning processes and not to be assimilated to either a one-mode strategy or even a two-mode one (where the same mechanism contains a switch between deductive validity and inductive strength).

The richly suggestive punch line to Rips’ article is that neuroimaging studies of subjects asked to judge either deductive correctness or plausibility showed that “induction and deduction recruit different brain areas” (p. 230). The discussion here is extremely brief and is followed immediately by a range of interpretive problems for the neuroimaging studies so far, though with optimism about future research in this direction.

I conjecture that Oaksford and Chater would respond that the real question is how subjects respond to the arguments presented to them when these aren’t clearly labeled so as to force an inductive or deductive interpretation. But even if they were correct that the common construal would largely be inductive, and that so construed subjects’ answers are correct, it would not follow that criticism of subjects’ understanding of deductive rules is excluded (and similarly for probabilistic rules). For the construal itself may result precisely from a weak grasp of the formal principles under study.