Epistemology and the Psychology of Human Judgment

Michael A. Bishop and J.D. Trout, Epistemology and the Psychology of Human Judgment, Oxford University Press, 2005, 205pp, $24.95 (pbk), ISBN 0195162307.

Reviewed by Alan Goldman, College of William and Mary

2005.06.04


This book is enjoyable to read and useful first of all for its summaries of much recent psychological data on reasoning. From all this data the authors extract clear descriptions of various reasoning fallacies, suboptimal reasoning strategies, and ways to avoid fallacies and improve strategies. For these authors the main tasks of epistemology are to translate psychological studies into normative epistemic principles and to explain why some reasoning strategies are better than others. The epistemological framework involves epistemic cost-benefit analysis. Its main principle is that we should use the most cost-efficient reasoning strategies, those with the highest epistemic bang for the buck -- which are most reliable on the most significant problems. Reasoning strategies make judgments or predictions based on evidence or cues, and we want to find and use those that generate the most true beliefs about significant problems with the least expenditure of epistemic resources.

The main practical recommendation is to use Statistical Prediction Rules (SPRs) in various domains instead of human judgment that attempts to take all relevant evidence into account. Experts are to ignore what appear to be relevant factors in favor of simple rules expressing correlations between a limited number of weighted or unweighted cues and target properties. Such rules have been found to be predictively reliable in school admissions, credit risk assessments, psychiatric diagnoses, parole violation predictions, and other areas. They are therefore to be preferred on the basis of cost-benefit analysis, i.e., in terms of both ease of use, or efficiency, and reliability. Experts' consideration of evidence not mentioned in these rules lowers rather than raises reliability.
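
To make the proposal concrete, here is a minimal sketch of what such a rule might look like in code. It is my own illustration, not the authors': a unit-weight ("improper") linear model of the kind discussed in the SPR literature, with hypothetical cues, cases, and cutoff standing in for a validated rule.

```python
# Minimal sketch of an unweighted ("improper") linear SPR: standardize each cue,
# sum the standardized values, and predict the target property from the total.
# The cues, cases, and cutoff below are invented for illustration only.
from statistics import mean, stdev

def fit_cue_stats(cases, cue_names):
    """Record each cue's mean and standard deviation from past cases."""
    return {c: (mean(x[c] for x in cases), stdev(x[c] for x in cases)) for c in cue_names}

def spr_score(case, cue_names, cue_stats):
    """Sum of standardized cue values; higher scores predict the target property."""
    return sum((case[c] - cue_stats[c][0]) / cue_stats[c][1] for c in cue_names)

# Hypothetical admissions data: the rule consults only these two cues and
# deliberately ignores any other apparently relevant evidence.
past_cases = [
    {"gpa": 3.9, "test": 165}, {"gpa": 3.2, "test": 150},
    {"gpa": 3.6, "test": 158}, {"gpa": 2.8, "test": 145},
]
cues = ["gpa", "test"]
stats = fit_cue_stats(past_cases, cues)

applicant = {"gpa": 3.7, "test": 160}
print("predict success" if spr_score(applicant, cues, stats) > 0 else "predict failure")
```

The point of the sketch is only that the rule is mechanical: whatever an expert might add by weighing further evidence is, on the studies the authors cite, more likely to subtract from reliability than to add to it.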

There is a nice chapter on explanations for the relative success of such rules over human judgment that focuses both on why they work and why we don't. Experts who use them must be willing to make errors even when they know they are making errors and can do better. Such discipline and toleration are necessary in the use of any genuine rules. Local errors must be tolerated for the sake of minimizing errors globally, since we will get it wrong more often than not when trusting our own judgment. As for the rules themselves, there is a tradeoff between simplicity, or usability, and accuracy. The central question is always when one should defect from a rule in the belief that one can do better by doing so. Bishop and Trout suggest that one may defect when one has a theory that explains both the success of the rule and why it would not be successful in the context at hand, or when the rule is untested in various subpopulations of its target group, including the subpopulation at hand. But their general advice is that one should always strongly resist defection from an SPR shown in tests to be generally more reliable than human judgment. The problem in some of the discussion is one common in discussions of obedience to rules: the authors typically consider only two dispositions -- that of defecting whenever one thinks one can do better and that of never defecting. But there are more complex dispositions that set the epistemic bar higher for when one should defect.

A more serious problem lies in the authors' uncritical confidence in the studies they cite and in the scope of those studies, and in their consequent failure to heed their own admonitions and admissions consistently. One example is their frequent reference to the "interview effect," the claim that interviews lower the reliability of all kinds of admissions and hiring decisions. They call this "one of the most robust findings in psychology" and several times chide academic departments and others who continue to use interviews as part of their hiring process. In support of this claim they cite four studies. The first, from 1947, actually reports mixed results for interviewers' predictions of success in military training schools. Two others relate only to predictions of success in medical school, and the fourth I could not track down because the citation is wrong. Intuitively, one might suspect a lack of correlation between successful interviews and success in schools as measured by grades, but one might think interviews more relevant to predicting successful salespersons, for example. Similarly, in academic contexts interviews might be useless for predicting successful research but relevant to predicting success in teaching as measured by student evaluations. Thus, departments that take teaching seriously might not be so obtuse in conducting interviews. Despite warning that predictive strategies must be tested in the widest range of contexts and that reasoners must always "consider-the-opposite" -- that is, consider all the reasons why their beliefs or positions might be wrong in order to overcome the natural tendency to overconfidence -- the authors draw no such distinctions between contexts in speaking of the "interview effect."

More generally, psychological studies cited in the book are accepted as gospel by the authors without any disposition to "consider-the-opposite." In the absence of criticism, one might still expect more description of the contexts and methods of these studies and of their limitations. In the 1960s, some pretty bad epistemology and philosophy of perception followed from psychological studies of the plasticity of perception, of its being influenced by affective and other factors. The dominant method involved flashing perceptual configurations to subjects at extremely brief exposure times, a method that turned out to have little in common with ordinary perception. In light of such histories, and of the authors' own warnings to those who rely on strategies they consider outdated, a more critical attitude, or at least more description of the studies they rely on, would have been welcome here.

The other major problem I found was the persistent railing against more standard analytic epistemology, perhaps an attempt to make the cost-benefit framework and major principle sound more controversial. Again their statements on the subject are not entirely consistent. Compare "We are happy to grant that a healthy intellectual discipline can and should offer room for people to pursue highly theoretical issues that don't have any obvious practical implications" to "Epistemology is but a hollow intellectual exercise if it does not ultimately provide a framework that yields useful reasoning advice". The second sentence is much more typical in the book, although the first is to be preferred. Their criticism is like blaming aestheticians for not providing formulas to artists or blaming ethicists, especially meta-ethicists, for not providing a fixed set of rules for right behavior. Attempting to understand the nature, structure, and scope of knowledge is central to self-understanding, to understanding our place in the world, a not insignificant undertaking even in the absence of practical advice.

This complaint against standard epistemology, that it does not supply rules for good reasoning, is held to follow from another. According to the authors, standard epistemology cannot have useful normative implications because it is based on and tested only against the considered epistemic judgments of philosophers, in sharp contrast to scientists who test their theories against the world. But again, distinctions could have been drawn between contexts in which such conceptual analysis is fitting and those in which it is not. When a philosopher attempts to clarify our concept of knowledge, and especially our concept of justification, which is a matter of truth-indicativeness from the internal point of view, testing proposed analyses against considered judgments is fitting. But when we discuss the scope of knowledge, the question of when our concept is instantiated, we must rely on science and empirical findings. Although the former task is presupposed in the latter, there is surely no consensus among the philosophers attacked in this book, except perhaps among Moorean common sense epistemologists, that determining the structure and scope of knowledge and answering skeptical challenges is a matter of pure conceptual analysis. Few would claim that naive judgments about the scope of knowledge must be preserved. In the face of skeptical challenge, we test intuitions only against full descriptions of situations from the outside, possible factual situations compatible with empirical findings. In their abuse of standard epistemology, Bishop and Trout once more draw no such distinctions among contexts and types of analyses.

Standard epistemology can have normative implications, despite their doubts, because its central concept is that of a state at which belief naturally aims, a state that entails not only truth, but nonaccidental or robust truth. Most pertinent here is that the concept of knowledge, which occupies analytic epistemologists as a preliminary to questions of structure and scope, is simply presupposed by Bishop and Trout in their advocacy of reasoning strategies with consistently better outcomes. Better outcomes obviously involve not only true beliefs, but also knowledge.

While this book's main thesis, that we should use reasoning strategies that are more efficient and reliable for significant problems, may seem obvious to the point of triviality, the devil is in the details of its application, and also of its interpretation. This is especially so in regard to the interpretation of significance. The authors interpret significance in terms of contribution to human well-being: problems are negatively significant if their solution would detract from human well-being. The authors sometimes use "happiness" interchangeably with "well-being." If good reasoning must address itself to positively significant problems, as the authors claim, then Hume's brilliant essay on the immortality of the soul must be bad reasoning. Despite the few who might find eternal life tedious, most of us are instead depressed by the thought that death is final. The acceptance of this thought might add to well-being in other ways, for example by encouraging us to finish writing book reviews, but surely the excellence of Hume's reasoning does not depend on any such balancing coming out on the positive side overall. It rather depends only on his arguments being as conclusive as they can be without the possibility of observational confirmation. Significance seems not to enter into the question of good reasoning at all, and, in any case, this problem certainly seems significant even though its solution contributes mainly negatively to our well-being (and certainly to our happiness).

Despite all these criticisms, I recommend this book to those interested in connections between psychology and epistemology. As noted at the beginning, it is informative and written in a lively style. I certainly agree with the authors' contention that courses in critical thinking should pay more attention to the types of studies and reasoning patterns that they summarize and analyze.