Minds, Brains, and Law: The Conceptual Foundations of Law and Neuroscience


Michael S. Pardo and Dennis Patterson, Minds, Brains, and Law: The Conceptual Foundations of Law and Neuroscience, Oxford University Press, 2013, xxviii + 240 pp., $85.00 (hbk), ISBN 9780199812134.

Reviewed by Alexander Guerrero, University of Pennsylvania

2014.05.07


This book is a clear, generally persuasive exploration of the question of how scientific evidence and methods can and should illuminate both philosophical debates and legal proceedings. Michael Pardo and Dennis Patterson are particularly concerned with the way in which neuroscientific findings and methods have been and are being used, both in philosophy and law. Because these uses -- even just within the legal context -- have been broad, their book is broad as well. They examine the use of neuroscience to bolster or undermine philosophical claims about morality, knowledge, free will and responsibility, intention, lying, deception, and punishment, in addition to discussing more specific legal issues, such as the relationship between neurological evidence and the Fifth Amendment privilege against self-incrimination.

The book does not have an official slogan regarding the place of neuroscience in these debates, but if it did, it would be this: "no empirical contribution in the face of conceptual confusion." The main point of the book is to counsel against certain kinds of incautious philosophical and legal deployments of the results of fMRI brain scans and EEG scans, among other kinds of neuroscientific evidence. Pardo and Patterson are committed naturalists; they are not neuroscience skeptics. Indeed, they are careful throughout to identify potential uses of neuroscientific findings, along with the potential or actual abuses. What they are skeptical about are some of the more sweeping claims made by Patricia Churchland, Joshua Greene, Oliver Goodenough, Deborah Denno, and others concerning the reduction of the mind to the brain, the implications of neuroscientific findings for moral theory, the existence of free will, the viability of our conception of ourselves as intentional actors, and the implications of all of this for our legal systems. Pardo and Patterson generally do not take sides regarding the ultimate philosophical truth on these matters. Their argumentative focus is on what neuroscience can and does show us, and (for much of the book), on what it cannot and does not show us, despite what has been claimed.[1]

The book begins with two chapters on general issues concerning the relationship between empirical evidence and conceptual claims, the distinction between criterial and inductive evidence, and a number of different conceptual mistakes that they identify concerning the relationship between the mind and the brain, and efforts at reducing the former to the latter. Chapter Three turns to criticize some of the most significant claims that have been made for neuroscientific evidence, including Goodenough's arguments about the nature of law, Greene's work on morality, Mikhail's work on the cognitive basis of legal and moral decision-making, and the work of a number of people on what neuroscience shows us about economic behavior. Chapter Four concentrates on brain-based lie detection. Chapters Five, Six, and Seven turn to consider a number of more narrowly legal concerns, informed by the discussions in the previous four chapters: criminal law doctrine (actus reus, mens rea, and insanity defenses), criminal procedure (Fourth Amendment, Fifth Amendment, and general due process concerns), and theories of criminal punishment, respectively.

In what follows, I will identify what I take to be some of the stronger points they make in the book, and then consider some places where I think they go wrong.

When people think of neuroscience and the law, one of the first things they are likely to think of is the use of neuroscience to determine whether someone is lying or whether a person possesses (knows, believes) some piece of information. There has been a great deal of popular discussion of the use of fMRI brain scans to determine whether someone -- particularly a potential witness -- is lying, or the use of EEG scans to determine whether a subject exhibits brain waves correlated with prior recognition or knowledge of some fact, image, or other kind of information.

There are ethical issues and related privacy concerns raised by the use of brain scans, and there are complex criminal procedure issues that Pardo and Patterson discuss quite expertly in Chapter Six. (The discussion of the Fourth and Fifth Amendment constitutional issues regarding the compelled production and use of neuroscientific evidence against criminal defendants is worth the price of admission in its own right.) There are also a host of empirical issues that affect the validity and reliability of the results, particularly when transported into actual legal proceedings, including the temporal distance from the events in the tests (generally very close in time, unlike in the legal context), the low stakes in experimental settings, and the possibility of using countermeasures. Pardo and Patterson offer illuminating discussion of all of these complications.

But their most philosophically interesting contribution to this discussion comes with a number of the points they make regarding the conceptual issues that arise with respect to so-called "brain-based lie detection" in Chapter Four. This discussion brings out two recurring themes of the book.

The first theme is straightforward: "success of empirical inquiry depends upon conceptual clarity. . . . An experiment grounded in confused or dubious conceptual claims can prove nothing" (6). If you want to run a test to see if someone is lying, you need to know what it is to lie; you need to have an accurate understanding of the concept of lying and how it relates to the concept of deception, for example.

Drawing on Don Fallis's work on lying,[2] Pardo and Patterson note that a necessary condition for lying is that a speaker states something that the speaker believes to be false, but that this may not be sufficient. They point out: "when a speaker is telling a joke or reciting a line in a play, a false assertion is not a lie" (109). Fallis's definition requires also that the speaker believe that her statement was made "in a context where the following norm of conversation is in effect: Do not make statements that you believe to be false" (109).[3] Pardo and Patterson then point out that the fMRI studies do not fit this definition: the conversational norm is arguably jettisoned in those studies. The subjects are instructed to assert false statements at various points throughout the studies, or are instructed to commit or plan mock "crimes" and then assert false statements about those, so that "the acts being measured, even when they involve deception, appear to be closer to actions of someone playing a game, joking, or role-playing" (110). This is not just a point about the stakes being lower, or even about these being "instructed lies." These would not be lies at all. As they put it, "If this is so, then the relationship between the neural activity of these subjects and acts of lying is not clear" (110). We would be looking for lies in all the wrong places. And Pardo and Patterson point out other possible conceptual mistakes that might be made, such as conflating lying and deceiving (one can lie without deceiving -- one may know that one's audience will correctly perceive one to be lying; and one can deceive without lying -- an exercise familiar to teenagers and miscreants everywhere).

It is possible to imagine responses to the above concerns, or experiments that would better preserve the norm of conversation that one must flout in order to lie. Pardo and Patterson's point -- very well taken -- is that attention must be paid to the concepts that one is attempting to study, lest one mistakenly study something else entirely.

A second recurring theme is that (a) for the concepts under discussion -- lying, deception, knowledge, intention -- there are relatively fixed criteria of application for those concepts, and those criteria "serve a normative role: they partly constitute the meaning of the relevant terms and they regulate applications" (xix), and (b) for these folk-psychological concepts, these criteria will be behavioral (concerning what we are doing or are disposed to do or are capable of doing), not neurological (concerning what brain states we are in, or what neural activity is taking place). Accordingly, behavioral evidence will "override" the neuroscientific evidence. They support claim (b) by way of several similar thought experiments. First, they ask us to "suppose the reverse were true." They continue:

If particular brain states did provide the criteria for lies or deception, then by hypothesis having certain neural activity would be a sufficient condition for engaging in an act of deception -- even if one did not intend to deceive and one asserted a true proposition. Would we really say this person was lying? Of course not. (101)

They make a similar point with respect to having an intention, relying on the inappropriateness (in a Wittgensteinian mood, they say "nonsense") of statements of intentions to do what one believes to be impossible, like "I intend to rob the bank tomorrow, and robbing the bank is impossible." They continue:

Suppose, however, that having an intention just was having a particular brain state or pattern of neural activity. In other words, his intention to rob the bank just was the fact that his brain was in a particular brain state (BS). Now, the tension would evaporate; it is perfectly coherent to interpret the defendant to be speaking literally if he were to say, "my brain is in BS, and robbing the bank is impossible." Because the combination of brain state and the impossibility of the task makes sense, but the intention and the impossibility do not, the BS and the intention are not identical. (138)

And they make a similar point with respect to knowledge, relying on the inappropriateness of saying "I know X, and X is false," while there is no similar apparent tension in someone saying "my brain is in state (BS), and X is false" (139-40).

In these cases, Pardo and Patterson draw attention to behavioral and dispositional connections associated with lies, deception, intention, and knowledge, and show how those anchor our concepts. Responding to points made by Thomas Nadelhoffer,[4] Pardo and Patterson acknowledge the possibility of conceptual change, particularly in light of scientific advances. What they want to stress, however, is that in many of these studies, what is purportedly studied and discussed are the familiar, folk-psychological concepts and phenomena, not some neuroscientifically improved or refined concepts or phenomena. They also make the important and highly relevant point that whatever we think about our folk-psychological concepts, those are the concepts invoked by our laws and regulations.

I generally find Pardo and Patterson persuasive on these points. I find their discussion of free will and neuroscience considerably less persuasive. One significant difference between free will and the aforementioned concepts is that whether an entity, X, has free will does not appear to be determined by either behavioral or neurological criteria, but by more fundamental, metaphysical criteria. A second difference is that what those criteria are is a matter of extensive debate: must X be the ultimate originator of X's action, an uncaused causer, or able to choose, in an undetermined way, whether to take some action, or is it enough that X's actions are reflectively endorsed by X, or that X's actions are in line with X's beliefs and values? A significant worry in this direction is that the folk concept of free will includes metaphysical presuppositions that appear to be undermined by the neuroscientific evidence. As some evidence for this, consider a thought experiment similar to the ones Pardo and Patterson use: "I have free will to φ or not at time T, and it is completely physically determined that I will φ at T." This seems inappropriate, suggesting that the speaker is confused about the ordinary concept of free will. As Joshua Greene and Jonathan Cohen put the point, "We feel as if we are uncaused causers, and therefore granted a degree of independence from the deterministic flow of the universe, because we are unaware of the deterministic processes that operate in our own heads."[5]

This is not a small matter, obviously. If our practices of attributing moral and legal responsibility presuppose that we have free will in a way that is unsupported by a scientific understanding of ourselves, then we will either have to reform those practices (perhaps eliminating, for example, desert-based punishment), or reform our self-conception and our understanding of our attributions of moral and legal responsibility so that they are in line with the kind of free will or control that we actually possess. Pardo and Patterson correctly point out that "neuroscience adds nothing new to existing arguments for or against compatibilism, incompatibilism, or hard determinism" (197). That's right -- we might be able to defend our practices with respect to moral and legal responsibility even if determinism is true (perhaps all that is required is "rational control"[6]), and we don't need neuroscience to raise the worry about determinism. But this is one place where Pardo and Patterson seem to go too far, suggesting not just that we cannot infer from the neuroscientific evidence to the inappropriateness of attributing moral responsibility or retaining notions of moral and legal desert (at least not without some significant argumentative steps in between), but also suggesting something stronger:

Consider an event as simple as a person stopping her car at a red traffic light. . . . Do the light waves from the lamp 'cause' the pressure on the brake pedal? Surely not in the way the bowling ball causes the pins to fall over. It is true that the traffic light 'causes' us to stop the car. But the 'cause' for stopping the car cannot be explained solely by a physical process. By itself, the red light does not 'cause' us to stop . . . rather, we stop because of the status of the light in an important social convention . . . . We are neither bowling balls nor pins. We have a choice. It is this choice that is the ground of responsibility. (40-41)

This is too fast. There might be a perfectly good explanation in addition to the explanation solely in terms of physical processes, but it seems there is (or will be, once the science is far enough along) an explanation that is entirely in terms of physical processes. There are then hard questions about the relationship between these explanations -- do they compete, is one more predictive or otherwise better than the other? And there are questions about the upshot of there being an explanation that is entirely in terms of physical processes: does this undermine the claim that we have a choice? Does this undermine our attributions of moral responsibility? I agree with Pardo and Patterson that the neuroscientific evidence doesn't settle these questions in favor of 'yes' answers, but nor does their example settle these questions in favor of 'no'. The full story is going to be longer and more complicated.

Pardo and Patterson occasionally, and generally to ill effect, wade into Wittgensteinian discussions of rule-following and the interpretation/understanding distinction. These are the least successful parts of the book, to my mind, and the place where confusion, rather than clarity, was most often introduced. Fortunately, these discussions are largely self-contained (pp. 12-16 and 63-70), and could be passed over without loss. The main purpose of those discussions was to criticize the work of John Mikhail. I found those criticisms unconvincing.

Mikhail, drawing on an analogy with the work of Noam Chomsky on linguistics, has developed a detailed account of the cognitive basis of moral and legal decision-making, arguing that much moral knowledge is tacit, that we possess a 'moral grammar' which has at least some innate core attributes ("where 'innate' is used in a dispositional sense to refer to cognitive systems whose essential properties are largely pre-determined by the inherent structure of the mind"[7]), and that moral intuitions are the result of our tacit knowledge of specific rules, concepts, and principles.

Pardo and Patterson raise three main conceptual objections to Mikhail's picture. First, they suggest that Mikhail must be committed to "unconscious rule following," which they maintain is a conceptual confusion. Second, they suggest that Mikhail is committed to an "interpretive" model of individual moral problem solving, and that this is problematic because "interpretation" is a "parasitic" activity that arises only where "understanding is already in place" (66). Third, they maintain that Mikhail's view is committed to the confused idea that "moral knowledge is in the brain" (69). Here, Pardo and Patterson seem to find themselves with a Wittgensteinian gun, looking for someone to shoot it at. Mikhail seems a poorly chosen target.

Early in the book, they object to the idea of unconscious rule following. They say: "Of course, a person can 'follow' a rule without being 'conscious' of it (in the sense of having it in mind or reflecting on it) while acting, but one must still be cognizant of the rule (i.e., be informed of it and its requirements) in order to follow it" (13). They stress the importance of the distinction between "following a rule and acting in accordance with a rule" (14). But it is unclear why Mikhail needs to maintain anything other than that we possess an innate moral grammar that leads or causes us to act, or be disposed or inclined to act, in accordance with certain moral rules. Why does he need to be committed to the more robust, and more obviously confused idea of unconsciously following rules in the sense to which they object? Indeed, they even quote Mikhail disavowing such a commitment:

The particular computations . . . can be conceived on the model of rules or rule-following, but care must be taken to dissociate any such conception from claims about the conscious possession or application of rules . . . . Rather, the dominant trend in the cognitive sciences is to assume that these mental rules are known and operate unconsciously . . . . In short, unconscious computation, not conscious application of rules, is the more significant rule-based proposal to evaluate in this context.[8]

They offer no argument for why Mikhail is, or must be, committed to the "following" of rules, rather than to unconscious computation, in order for his view to succeed. And it seems clear that he is not committed to what is, somewhat obviously, a kind of confusion.

Similarly, there seems to be no reason to attribute to Mikhail an "interpretation" picture (on the Wittgensteinian understanding of "interpretation") rather than an "understanding" picture (on the Wittgensteinian understanding of "understanding"). Indeed, they note that "understanding, according to Wittgenstein, is unreflective; when we follow rules, we ordinarily do so without second-guessing ourselves and without reflection on what the rule requires" (66). This seems to fit much more naturally with the picture that Mikhail is offering, so why not see Mikhail as offering a picture on which we have an innate moral grammar that helps us to "understand" (in the Wittgensteinian sense), rather than "interpret" the deontic status of novel fact patterns?

Finally, Pardo and Patterson helpfully identify instances of the "mereological fallacy" -- the mistake of "attributing an ability or function to a part that is only properly attributable to the whole of which it is a part" (21). This mistake arises in this context, they argue, when people attribute to the brain what is properly attributed only to an agent or a person. They argue, convincingly, that, for example, "knowing is not being in a particular state," but rather that "knowing is an ability" (21), and that knowledge is "a kind of cognitive achievement or success -- it consists in a kind of power, ability, or potentiality possessed by a knowing agent" (18).[9] Let's grant all of that. What is hard to figure out is why they think that Mikhail is making this mistake, or that his view is committed to this mistake. Mikhail himself says that "the mind/brain contains a moral grammar,"[10] seeming to intend to leave this kind of issue open. And Mikhail's view is eminently compatible with the idea that knowing is an ability, that knowledge has no physical location (although, as Pardo and Patterson acknowledge -- and Mikhail and anyone should -- the ability may require the existence of physically located neural activity; it just isn't identical to that activity), and that it is something possessed by an agent, not by a brain. There is no reason to see Mikhail as committed to the view that it is our brains that possess moral knowledge, or that the knowledge is located in the brain.

These reservations aside, the book's breadth, clarity, and generally on-point criticisms make it useful to a wide audience, worth reading for anyone working on what has come to be called "neurolaw," "neuroethics," or "experimental philosophy," but also of interest to moral and legal philosophers interested in developments in neuroscience (as arguably all moral and legal philosophers should be), and both academic and non-academic lawyers whose work engages neuroscientific evidence. There are also general lessons to be learned about how empirical evidence bears on philosophical claims, and Pardo and Patterson are generally good teachers. I don't agree with all of the criticisms they make, but many of them are useful and suggestive of how some types of mistakes and over-claiming might be avoided.


[1] In this way, the book pairs nicely with Selim Berker's influential article, "The Normative Insignificance of Neuroscience," Philosophy and Public Affairs, Vol. 37, No. 4 (2009), pp. 293-329.

[2] Don Fallis, "What is Lying?," Journal of Philosophy, Vol. 106 (2009).

[3] Citing Fallis at p. 34.

[4] Thomas Nadelhoffer, "Neural Lie Detection, Criterial Change, and Ordinary Language," Neuroethics, Vol. 4 (2011).

[5] Joshua Greene & Jonathan Cohen, "For Law, Neuroscience Changes Nothing and Everything," in Law & the Brain (Semir Zeki & Oliver Goodenough eds., Oxford University Press, 2006), pp. 218-19.

[6] John Martin Fischer and Mark Ravizza, Responsibility and Control: A Theory of Moral Responsibility (Cambridge University Press, 1998).

[7] John Mikhail, "Universal Moral Grammar: Theory, Evidence, and the Future," Trends in Cognitive Sciences, Vol. 11 (2007), p. 144.

[8] John Mikhail, "Review of Patricia S. Churchland, Braintrust: What Neuroscience Tells Us about Morality," Ethics, Vol. 123 (2013), quoted by Pardo and Patterson at p. 65, n. 82.

[9] In general, they embrace an Aristotelian conception of mind, on which the mind is not an entity or substance at all, but that instead "to have a mind is to possess an array of rational and emotional powers, capacities, and abilities exhibited in thought, feeling, and action. . . . the mind is not a separate part of the person that causally interacts with the person's body" (44).

[10] John Mikhail, Elements of Moral Cognition: Rawls' Linguistic Analogy and the Cognitive Science of Moral and Legal Judgments (Cambridge University Press, 2011), p. 17 (emphasis added).