2017.01.06

Michael Brownstein and Jennifer Saul (eds.)

Implicit Bias and Philosophy, Volume 1: Metaphysics and Epistemology

Michael Brownstein and Jennifer Saul (eds.), Implicit Bias and Philosophy, Volume 1: Metaphysics and Epistemology, Oxford University Press, 2016, 316pp., $65.00 (hbk), ISBN 9780198713241.

Reviewed by Chloë FitzGerald, University of Geneva


Implicit associations are characterised variously as unconscious, uncontrollable, non-introspectable, or arational mental processes, some of which may influence our judgements, resulting in undesirable bias. These implicit biases are associations between a group or category attribute, such as being black, and either a thin negative evaluation, such as 'bad' -- 'implicit prejudice' in the psychology literature -- or a thicker evaluation/category attribute, such as 'violent' -- 'implicit stereotype' in the psychology literature.[1] They manifest themselves particularly in our non-verbal behaviour towards others, such as in frequency of eye contact and physical proximity, but they also influence our conscious thoughts and decisions in multifarious ways. Implicit biases can explain a potential dissociation between what a person explicitly believes and wants to do (e.g. treat parents of different genders equally) and her actions (e.g. judging a man to be a less competent parent and deciding not to leave her son with him).

The two volumes entitled Implicit Bias and Philosophy are the fruit of a series of interdisciplinary workshops held at the University of Sheffield between 2011 and 2012. Philosophers have sometimes used the term 'implicit bias' in a liberal manner to discuss a variety of phenomena, including stereotype threat. The latter is also discussed in this volume, which covers metaphysical and epistemological themes related to implicit bias and is divided into two parts: 'The Nature of Implicit Attitudes, Implicit Bias, and Stereotype Threat', comprising five articles; and 'Skepticism, Social Knowledge, and Rationality', comprising six. The second volume, entitled Moral Responsibility, Structural Injustice, and Ethics, deals with the ethical and political implications of implicit bias.

As suggested by the title, these volumes are very much aimed at philosophers interested in implicit bias, or those from other disciplines who are curious about the philosophical issues surrounding implicit bias. In addition to looking at themes of theoretical interest to philosophers, this volume includes two papers in Part 2 that address issues of urgent practical import to the discipline: the representation and stereotyping of women in science and philosophy. The workshops held in Sheffield included psychologists as well as philosophers, researchers from other disciplines and even non-academics working on bias. However, all of the papers in the volume include a philosopher as author. The only non-philosophers are Joseph Sweetman, the psychologist co-author of a paper with Jules Holroyd, and the psychologists Laura di Bella and Eleanor Miles, co-authors of an empirical investigation into gender stereotypes in philosophy with the volume editor, Jennifer Saul. Given the need for productive philosophical engagement with psychology, particularly on topics that provoke radical revisions of how we view ourselves and our social responsibilities, the overwhelmingly philosophical nature of this output is justified. Besides, two mixed-discipline multiple-author papers out of eleven is good going for a philosophy collection. On the other hand, it is a shame that there are still so many barriers within the discipline for philosophers to undertake truly interdisciplinary work that actually influences other disciplines.

Turning to the specific content of the papers, the articles in Part 1 are stimulating, and each poses an interesting challenge to the standard dual-process explanation of implicit bias typically found in psychology. Keith Frankish qualifies as a dual-process theorist because he claims that there are two kinds of process in the mind: conscious explicit processes (System 2) and non-conscious implicit processes (System 1). However, Frankish's version of 'dual-level theory', explained in 'Playing Double: Implicit Bias, Dual Levels, and Self-Control', differs in various ways from the standard dual-process theories found in psychology. He holds that we have implicit beliefs and conduct implicit propositional reasoning, and thus that implicit processes (and biases) are more susceptible to control than standard dual-process accounts suggest.

Edouard Machery's and Ron Mallon's respective accounts of implicit bias and stereotype threat also imply that these phenomena are more integral to our selves than psychologists tend to allow. In 'De-Freuding Implicit Attitudes', Machery defends, via an inference to the best explanation, 'the trait picture of attitudes', whereby attitudes are not mental states but dispositions to behave, perceive, and think in particular ways. Under this conception, the distinction between implicit and explicit attitudes collapses because it is a distinction only properly applied to mental states. So-called 'implicit measures', which Machery labels more clearly as 'indirect measures', following Jan De Houwer's suggestion, are thus measuring the psychological bases of attitudes, typically those parts that are non-introspectable. He concludes that research on implicit bias should accordingly be reconceptualised as showing that people are not very good at knowing their own attitudes. Mallon's chapter, 'Stereotype Threat and Persons', harnesses work in the social sciences and humanities to argue that there is evidence for a personalist interpretation of stereotype threat. He claims that both the mental states that trigger stereotype threat and the processes underlying it can be plausibly attributed to persons, rather than explained by subpersonal mechanisms.

Bryce Huebner thinks that Machery is likely to be right in his trait picture of attitudes. His claim in 'Implicit Bias, Reinforcement Learning, and Scaffolded Moral Cognition' is that a computational framework of implicit biases is best placed to explain how these traits operate in an individual. In contrast to dual-process theories, Huebner argues that there are three systems that make up the mind: model-based, model-free, and Pavlovian. Model-based systems are similar to System 2 in dual-process theories because they use costly counterfactual reasoning and decision trees to arrive at an outcome. As Huebner points out, they can be more accurate and flexible than the other two systems he describes, but are extremely resource-intensive. The other two systems in his model are much less costly and are thus used more often. They would both be classed as System 1 in dual-process theories, and their outputs can conflict with each other and with model-based system outputs. Model-free systems learn from experience, recommending the repetition of actions that have in the past produced positive outcomes, and the avoidance of the reverse. They work best in stable environments where outcomes are completely the result of actions taken. Finally, Pavlovian systems are the simplest and most inflexible, involving associations between innate responses and biologically salient rewards. According to Huebner's three-system account, overcoming implicit biases is not something individuals can do alone, because the effects of local interventions will be limited: such interventions struggle against a learning environment that reinforces those biases. Only with the help of others, and through reshaping our environment to reflect our explicit beliefs and values, can we have a considerable and long-lasting effect on our implicit biases.

Holroyd and Sweetman's 'The Heterogeneity of Implicit Bias' is compatible with all the arguments made by the other papers in Part 1 and urges caution in the use of generalizations about implicit biases; it is likely that the term covers a wide variety of phenomena with different psychological characteristics and different behavioural outcomes. They recommend that psychologists revisit the distinction commonly made between semantic and affective associations (implicit stereotype/implicit prejudice) and that they carry out fieldwork to test interventions rather than basing them on laboratory studies. For philosophers, aside from avoiding generalizations, they recommend that any normative claims be sensitive to the functional differences resulting from the heterogeneity of implicit bias and that the type of association under consideration be specified further than simply 'implicit bias'.

My only criticism of the papers in Part 1 would be that all the metaphysical models of implicit bias proposed point in a similar direction: they all lend support to the thesis that we have greater control over, and responsibility for, our implicit biases than psychologists' models typically suggest. It would have been interesting to have a contrasting view represented here, such as that of Neil Levy, whose account presents implicit biases as rogue phenomena in the mind that do not truly constitute part of an agent's self.[2] Levy marshals his account to argue that agents are not responsible for their implicit biases. Although I do not personally find such accounts plausible, one would have added an interesting contrast to Part 1.

The papers in Part 2 are more of a motley than those in Part 1. Louise M. Antony's 'Bias: Friend or Foe? Reflections on Saulish Skepticism' was a highlight of the volume for me, tackling some of the bigger epistemic questions that loom when one considers bias. She offers the best answer I have seen so far to a question at the heart of the epistemology of bias: what exactly makes something a bias? Antony argues that bias is not inherently negative (neither rationally nor morally), but is simply:

Any structure, database, or inferential disposition that serves in a non-evidential way to reduce hypothesis space to a tractable size. Biases, in this sense, may be propositions explicitly represented in the mind, or they may be propositional content realized only implicitly, in the structure of a cognitive mechanism. They may reside in subpersonal computational structures, or they may be elements of person-level beliefs or associations, fully accessible to consciousness. They may work at the level of individual cognition, or at the level of socially structured inquiry. (161-2)

According to Antony, what makes a bias bad is that it inclines us away from the truth, not that it steers us in one direction rather than another. She claims that bias has been maligned in philosophy and popular thought because of the persistence of an empiricist conception of mind and knowledge that enshrines 'objectivity' as the absence of bias. This ideal of objectivity is reflected in the prominent, yet misleading, picture of science and scientists as value-free and neutral. To support this claim, she draws on Helen Longino's critique of the individualism of empiricism, W. V. O. Quine's naturalistic epistemology, C. G. Hempel's and Thomas Kuhn's attacks on the logical positivist conception of science, Noam Chomsky's critique of behaviourism, and research on bias in psychology. As for the work of rooting out bad biases, she, like Huebner in Part 1, argues that we need to look to our environments more than ourselves. She suggests that Sarah-Jane Leslie's work on 'striking-property generics' may help us to understand how language reform could help eliminate or reduce some pernicious biases.

Alex Madva's 'Virtue, Social Knowledge, and Implicit Bias' argues that there is no real conflict between ethical and epistemic aims in handling cases of implicit bias, as has been argued by Tamar Gendler. Gendler argues that in order to avoid being implicitly biased, we have to somehow eliminate knowledge, for instance, of certain statistical regularities. Madva's answer appeals to research on accessibility in psychology to show that it is not necessary to actively forget information encoded in stereotypes to avoid activating them. He also claims, in a similar vein to Antony, that it is perfectly valid for a moral aim to set an epistemic agenda.

In 'Stereotype Threat, Epistemic Injustice, and Rationality', Stacey Goguen emphasizes the full effects of stereotype threat and argues that it has much broader epistemic implications than are usually noted. She claims that stereotype threat can lead to serious self-doubt constituting epistemic injustice and shaking an individual's sense of self.

Catherine E. Hundleby ('The Status Quo Fallacy: Implicit Bias and Fallacies of Argumentation') forges interesting links between theories and disciplines in suggesting that the fallacies approach to argument evaluation could be a novel framework for combating biases such as the status quo bias. It is promising because of the non-threatening way it identifies the errors people may make in reasoning and evaluation, making resistance to bias-reduction training less likely.

In 'Revisiting Current Causes of Women's Underrepresentation in Science', Carole J. Lee makes a convincing case that the correlational studies showing no negative gender effects for women in science may be distorted by a quality confound and a quality-related sample bias.

The last paper in the volume reports the exciting results of the first empirical study of gender stereotypes in philosophy. In 'Philosophers Explicitly Associate Philosophy with Maleness: An Examination of Implicit and Explicit Gender Stereotypes in Philosophy', di Bella, Miles, and Saul reveal a small but significant tendency for men (except those who read a lot of feminist philosophy!) to implicitly associate philosophy with maleness. Surprisingly, women implicitly associate philosophy with femaleness. Both women and men explicitly associate philosophy with maleness. These results have important implications for how we should think about the difficulties facing women in philosophy.

In sum, this is a must-read volume for anyone interested in the latest philosophical work on implicit bias. It is probably less likely to be read by, but would be extremely helpful for, anyone conducting empirical work on the topic who values the careful conceptual distinctions made by philosophers and is looking for recommendations for further research.


[1] By 'thin' and 'thick', I refer to the distinction between thick and thin concepts in the philosophy of value. See Väyrynen, P., 'Thick Concepts and Variability', Philosophers' Imprint 11 (2011): 1-17.

[2] Levy, N., 'Consciousness, Implicit Attitudes and Moral Responsibility', Noûs 48 (2014): 21-40.