The Structural Evolution of Morality


J. McKenzie Alexander, The Structural Evolution of Morality, Cambridge University Press, 2007, 300pp., $95.00 (hbk), ISBN 9780521870320.

Reviewed by Herbert Gintis, University of Massachusetts

2008.07.26


J. McKenzie Alexander is a philosopher at the London School of Economics. Like several other moral philosophers, he has read extensively in the evolutionary game theory and agent-based simulation literature, and each of his substantive chapters is an extensive exercise in modeling and simulating some aspect of moral life -- fairness, trust, cooperation, and retaliation.

"Evolutionary game theory," he says (p. 291), "coupled with the theory of bounded rationality and recent work bridging the gap between psychology and economics, provides what appears to be a radical restructuring of the foundations of moral theory." It is refreshing indeed to find a moral philosopher capable of expressing such elementary, yet widely ignored truths as "our moral beliefs are simultaneously relative to our evolutionary history and our cultural background, but at the same time objectively true" (p. 291). Why objectively true? Because our moral beliefs are just as much a material force in the world as our capacity to metabolize nutrients, and truth in this case means exists.

Alexander's basic premise is that human beings are "bounded rational": we are incapable of maximizing, so expected utility theory is all wrong, and traditional game theory is all wrong because it depends on the rational actor model and boundedly rational actors are not rational. Given their bounded rationality, social norms fill in to tell people what to do, and the social norms that promote individual well-being emerge, through the Darwinian process of cultural evolution, to dominate social life. Thus, we are all best off if we take the moral high road, rather than trying to scheme our way to higher payoffs.

Alexander is certainly not the first to claim that behaving morally is in our best interest, not because we might get caught and punished, but because moral rules embody a deeper understanding of human nature than our poor, beleaguered, boundedly rational minds can attain on their own. Aristotle's virtue ethics fits this mould rather neatly -- the philosopher determines what is virtuous, and the citizens follow the philosopher's prescriptions, much as we ingest the medicaments prescribed by our physician. Indeed, I think this is a highly insightful position, and there is a not inconsiderable body of experimental evidence to the effect that, for most people, doing the right thing makes them happier than committing the immediately pleasurable but selfish or immoral act.

I recommend this book for its four chapters of applied evolutionary game theory, although I think the arguments Alexander offers to justify his position are all wrong. I will go through them one by one.

Most importantly, Alexander believes our moral constitution is purely cultural as opposed to biological. He does not say why he believes this, although he makes the elementary error of thinking that it must be one or the other, when in fact, we are the product of gene-culture coevolution, to the point where it is impossible to tease out autonomous cultural or genetic elements, except in the most elementary cases (e.g., what biologically nutritious foods we find ethical to eat is culturally determined, and the physiology of shame is culturally universal). There is absolutely nothing in this volume that supports his prejudice against genetic predispositions for prosocial behavior. Moreover, he completely ignores the large body of evidence to the contrary, including the psychophysiology of sociopathy and autism.

Part of Alexander's problem is that he tries to support complex and sophisticated arguments with toy evolutionary game theoretic models that have only a tenuous relationship to social reality. For instance, his criticism of Gibbard is that Gibbard's argument depends on coordination games, whereas in fact these are only one of three types of two-player games (p. 278). However, the fact that there are other possible scenarios for strategic interaction does not prove that they are all equally important and applicable. Indeed they are not, and Gibbard's treatment is quite insightful.

Similarly, in dealing with retaliation (Chapter 6), he takes the most obvious game, the Ultimatum Game, and shows in six different ways how rejections of positive offers are possible if one makes enough special assumptions (e.g., that agents are imitators on a one-dimensional grid). What a glorious waste of time! This chapter, like the others on specific human moral behaviors, is a classroom exercise in manipulating toy games, with no systematic relevance to reality. Alexander would have done better to look at the experimental literature on cooperation and punishment, and at the biological/economic models of the evolution of fairness and reciprocity norms. Curiously, these go completely unmentioned. Nor does Alexander bother to state the fundamental theorem of evolutionary dynamics with replicators: all stable equilibria are Nash equilibria of the stage game. This would have allowed him to drop almost all the replicator dynamic arguments in the book.
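
To make concrete what "imitators on a one-dimensional grid" involves, here is a minimal sketch of that kind of model: agents on a ring play a two-strategy mini-Ultimatum Game with their neighbors and then copy the highest-scoring strategy in their local neighborhood. The strategy set, payoffs, ring size, and update rule below are illustrative assumptions of mine, not Alexander's specification.

```python
import random

# Mini-Ultimatum Game over a pie of 10. Two illustrative strategies:
# "fair" offers 5 and rejects anything below 5; "greedy" offers 1 and
# accepts any positive offer.
STRATS = {"fair": {"offer": 5, "threshold": 5},
          "greedy": {"offer": 1, "threshold": 1}}
PIE = 10

def play(proposer, responder):
    """Return (proposer payoff, responder payoff) for one round."""
    offer = STRATS[proposer]["offer"]
    if offer >= STRATS[responder]["threshold"]:
        return PIE - offer, offer
    return 0, 0  # rejection: both get nothing

def step(ring):
    """One generation: play both roles against each neighbor on the ring,
    then imitate the best scorer among self and the two neighbors."""
    n = len(ring)
    payoff = [0.0] * n
    for i in range(n):
        j = (i + 1) % n
        a, b = play(ring[i], ring[j])   # i proposes to its right-hand neighbor
        c, d = play(ring[j], ring[i])   # and responds to that neighbor's proposal
        payoff[i] += a + d
        payoff[j] += b + c
    return [ring[max([(i - 1) % n, i, (i + 1) % n], key=lambda k: payoff[k])]
            for i in range(n)]

random.seed(0)
ring = [random.choice(list(STRATS)) for _ in range(60)]
for _ in range(50):
    ring = step(ring)
print("fraction playing 'fair' after 50 generations:",
      ring.count("fair") / len(ring))
```

In this toy setup, clusters of three or more like players tend to freeze in place, so rejection of positive offers persists, but only by virtue of the grid structure and the imitation rule, which is precisely the kind of special assumption at issue.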

The theorem can be expressed more completely as follows. Suppose we have a population of agents who are assigned in each period t = 1, 2, 3, … to groups of size n, where they play an n-player game G. Suppose a player's strategy is written in the player's genome, rather than being learned. Suppose an individual's payoff in this game represents his biological fitness, so at the end of each period, individuals reproduce in proportion to their success in playing this game G, and their offspring genetically inherit their strategies for playing the game. We call G the "stage game" and the reproduction process a "replicator dynamic." In a replicator dynamic, a particular strategy increases or decreases in frequency in the population according as its payoff in playing G is above or below the population average. The replicator dynamic is thus simply a version of Darwinian evolutionary dynamics. The fundamental theorem of evolutionary game theory says that every stable equilibrium of this evolutionary dynamical system is a Nash equilibrium of the game G.
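
In standard notation (mine, not the book's), writing x_i for the population frequency of strategy i and f_i(x) for its expected payoff in the stage game G, the replicator dynamic is

    \dot{x}_i = x_i \left( f_i(x) - \bar{f}(x) \right), \qquad \bar{f}(x) = \sum_j x_j f_j(x),

and the theorem states that any dynamically stable rest point of this system is a Nash equilibrium of G.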

For instance, suppose G is the two-player Prisoner's Dilemma. Then the only stable evolutionary equilibrium is mutual defection. Suppose instead that G is the Hawk-Dove game, which has a unique Nash equilibrium in mixed strategies. Then the only stable evolutionary equilibrium has a fraction of Hawks equal to the probability of playing Hawk in that mixed-strategy equilibrium, and likewise for Doves. Doing simulations in such situations is superfluous -- we know before we start where we will end up.
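
To see just how superfluous, a few lines of replicator dynamics for Hawk-Dove merely confirm what we can compute by hand; the resource value V = 2 and fight cost C = 4 below are illustrative assumptions of mine, chosen so that the mixed Nash equilibrium plays Hawk with probability V/C = 1/2.

```python
import numpy as np

# Hawk-Dove payoffs with illustrative values V = 2 (resource) and C = 4
# (cost of a fight); the mixed Nash equilibrium plays Hawk with probability
# V/C = 0.5.
V, C = 2.0, 4.0
A = np.array([[(V - C) / 2, V],       # Hawk against (Hawk, Dove)
              [0.0,         V / 2]])  # Dove against (Hawk, Dove)

x = np.array([0.9, 0.1])              # start far from equilibrium: 90% Hawks
dt = 0.01
for _ in range(100_000):
    f = A @ x                         # expected payoff of each pure strategy
    x += dt * x * (f - x @ f)         # Euler step of the replicator dynamic
    x /= x.sum()                      # guard against floating-point drift
print(x)                              # ~[0.5, 0.5]
```

The simulation converges to the Hawk frequency that the Nash calculation already gives us, which is exactly the point.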

Alexander's support for the position that "bounded rationality" is antithetical to the rational actor model and the expected utility axioms is perfectly standard, and follows Gigerenzer and Selten, who, despite their eminence and indeed brilliance, are simply wrong on this point. There is no contradiction between bounded rationality and the Savage axioms or any of the other standard justifications of the rational actor model. Indeed, preference transitivity can be deduced from elementary evolutionary principles, and all surviving species have utility functions, just as described in the first-year graduate textbook. Of course, humans have limited information-processing capacities, but no physically existing entity can avoid such limitations, by the very principles of quantum mechanics and information theory. The idea that game theory assumes that people can solve complex games is just wrong. Game theory has many problems (including the assumption that rational agents actually play Nash equilibria), but these problems arise despite its use of the rational actor model, not because of it.

Finally, while much of morality is clearly individually fitness-enhancing, I do not think all moral rules arose because they were in the interest of individual fitness (see for instance Herbert Gintis, "The Hitchhiker's Guide to Altruism: Genes, Culture, and the Internalization of Norms," Journal of Theoretical Biology 220, 4 (2003): 407-418). I think all plausible models of the evolution of human morality involve group selection in the context of hunter-gatherer societies, in which hunting and territorial defense led to the emergence of truly altruistic genetic predispositions in our species. The reason behaving morally is in our interest today is that we have these prosocial genetic predispositions, and that we care about our self-image in social life, where people judge us by our degree of prosociality.