Biology's First Law: The Tendency for Diversity and Complexity to Increase in Evolutionary Systems


Daniel W. McShea and Robert N. Brandon, Biology's First Law: The Tendency for Diversity and Complexity to Increase in Evolutionary Systems, University of Chicago Press, 2010, 170pp., $20.00 (pbk), ISBN 9780226562261.

Reviewed by Mohan Matthen, University of Toronto


In this engaging and often insightful book, Daniel McShea, a paleobiologist, and Robert Brandon, a philosopher of biology, both at Duke University, argue for a "zero-force evolutionary law" (ZFEL), which can be stated thus:

In any evolutionary system in which there is variation and heredity, there is, in the absence of constraint, a tendency for diversity and complexity to increase.

The book is an exposition and defence of this principle. As I shall argue, the italicized phrase makes a great deal of difference to how the principle is to be understood. Without the phrase, the principle is original and heuristically important, though clearly false; with it, it is of unproven empirical validity.

Before we get going, two important clarifications. First, McShea and Brandon use the term 'complexity' to mean "number of part types or degree of differentiation among parts" (7). They note that in this sense, complexity is quite different from organizational or functional complexity, which are topics of discussion in philosophy of biology and of mind. Thus, their principle is not directly applicable to how complex organs such as the eye emerge (though, as we shall see, they say something about this). It is relevant rather to questions like: How does nature generate the variation on which natural selection operates? Second, "diversity and complexity" are one and the same thing. For diversity among the parts of a system just is the complexity of the whole. Thus, we can concentrate just on number of parts and variation among parts. The zero-force law says that (in the absence of constraint) complexity in this sense tends to increase.

In the first chapter, we are given a vivid motivating example -- a brand new picket fence. Over time, the fence accumulates stains, chips, spots of mould, and other imperfections. The fence was uniformly painted to start with: each picket was uniform in color and more or less indiscernible from the others. But as time goes by, each picket takes on random imperfections that differentiate it from any other. "No directed forces need to be invoked here… . Nor is any directed human intervention required. Rather, diversity and complexity arise by the simple accumulation of accidents" (2-3).
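The picket-fence picture is easy to simulate. Here is a minimal sketch (my own toy model, not from the book) in which each picket independently accumulates a random amount of wear each year; variance among the pickets rises without any directed force:

```python
import random
import statistics

random.seed(0)

# Hypothetical model: fifty pickets start identical (wear score 0.0),
# and each year every picket independently picks up a random amount of
# wear -- a stain here, a chip there.
pickets = [0.0] * 50

def weather_one_year(pickets):
    """Each picket takes on its own random imperfections."""
    return [p + random.random() for p in pickets]

variance_by_year = []
for year in range(100):
    pickets = weather_one_year(pickets)
    variance_by_year.append(statistics.pvariance(pickets))

# With no repainting (no constraint), the pickets drift apart:
# variance among them tends upward as accidents accumulate.
print(round(variance_by_year[9], 2), round(variance_by_year[99], 2))
```

Repainting the fence would correspond to a homogenizing constraint: resetting every picket to 0.0 collapses the variance again.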

Understood in this way, the zero-force law is clearly salutary. In Newtonian mechanics, lack of change is the no-force condition. Following a Newtonian way of thinking, McShea and Brandon say, biologists and others assume that whenever there is change, there must be a force that caused it. But in evolution there is "a spontaneous tendency for individuals, populations, species, and higher taxa to differentiate" (33). This is illustrated by the accumulation of pseudogenes (64) or "junk DNA" -- stretches of DNA that have lost protein-coding capacity because of random copying errors over time. We do not have to posit "directed forces" to explain junk DNA: it is just a manifestation of nature's chaos. This is what McShea and Brandon want to highlight as the fundamental condition of biological systems over generations of reproduction: random change, deterioration, disorder. Chaos can be held in check by various constraints, including natural selection. But it is always present.

All of this is true, and a useful corrective to certain ways of thinking about evolution. But McShea and Brandon consistently overplay the importance of the zero-force law -- the title of the book as well as the text invite a comparison with Newton's First Law of Motion, which (as we all know) was a central axiom of classical mechanics and one of the most profound paradigm shifts in the history of science. Understood as above, the zero-force law is much more like a pattern that emerges upon reflection on the population genetics paradigm (though undoubtedly an important one). Moreover, the zero-force law is not fundamental in the way that Newton's Law of Inertia is. The discussion that follows ought to make my reasons for this assessment evident.

For a mathematically tractable example of the zero-force law, consider a collection of particles -- say some rocks in outer space. Newtonian mechanics says that its collective momentum will not change unless it is subject to a net force. On the other hand, it is clear that in certain respects, the diversity of its members will increase with time (unless constrained not to do so). Consider the variance (however measured) of the rocks' distances from their joint center of mass. Provided that at least one of the particles differs in velocity from the others, the rocks will get more dispersed in the long term, and thus, positional variance will increase. Or consider the variance of the rocks with regard to the symmetry of each around its own center of mass. If the rocks collide with each other and are subject to other deforming forces, this quantity too will increase over time -- some of the rocks will become much more unsymmetrical than they were; others will change only a little; some may even become more symmetrical than they had been before. These increases of variance are the subject matter of the zero-force law. According to McShea and Brandon, they "arise by the simple accumulation of accidents."

This is a nice insight, but how universally does it apply? Suppose that an artist paints a Seurat-like pointillist image on an outdoor wall. Because of his efforts, the regions of the wall acquire a certain variance with respect to colour. But because of exposure to sunlight, wind, rain, and abrasion, the image gradually fades. The colour-variance of the wall diminishes. Or suppose that you inadvertently burn something in a pan on your stove. At first, there is a lot of smoke around the pan. The variance of smoke-concentration in your kitchen is quite high. But as time goes on, the smoke disperses: the entire kitchen is smoky, but not as much so as the region around the pan had been -- variance of smokiness gets smaller. Finally, consider the second law of thermodynamics. This law is concerned with the sorts of variance that enable spontaneous energy transfers -- for example, temperature differences that enable heat transfer. The Second Law says that these types of variance will (almost) always decrease within a closed system, and that consequently the potential for spontaneous energy transfer diminishes over time.
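The smoke example runs the picket-fence logic in reverse, and a comparable toy simulation (again my own illustration, not the authors') makes the point: under simple mixing, the variance of smoke concentration across the kitchen falls rather than rises:

```python
import statistics

# Hypothetical kitchen modelled as a ring of 20 cells; all the smoke
# starts in the cell above the pan.
conc = [0.0] * 20
conc[0] = 1.0

def diffuse(conc):
    """One mixing step: each cell is replaced by the average of itself
    and its two neighbours (periodic boundary, so smoke is conserved)."""
    n = len(conc)
    return [(conc[(i - 1) % n] + conc[i] + conc[(i + 1) % n]) / 3.0
            for i in range(n)]

variances = [statistics.pvariance(conc)]
for _ in range(200):
    conc = diffuse(conc)
    variances.append(statistics.pvariance(conc))

# The total amount of smoke never changes, but its variance across
# the kitchen shrinks toward zero as it disperses.
print(round(variances[0], 4), round(variances[-1], 6))
```

Here the "simple accumulation of accidents" (each parcel of smoke wandering at random) homogenizes the system rather than differentiating it.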

There is a systemic problem here. Consider the Seurat-like image again. You could say that the molecules of blue paint in this painting are confined to certain regions of the image, as are molecules of green paint, etc. As time goes on, these molecules get dispersed. They are all over the place. Looked at in this way, the colour-uniform collections of molecules lose uniformity; their positional variance increases. On this description, the zero-force law seems to be valid. But now look at it another way. Consider the region of space that surrounds the image -- say within a radius of ten kilometres. Divide this region into small sub-regions. To start with, only a very few of these regions are occupied by blue paint molecules. But as time goes on, more and more are. In the limit -- in ten thousand years or so -- the probability of a region ten kilometres away containing a blue paint molecule is pretty much the same as that of a region that was originally painted blue. So how should we look at the matter? Diversity of molecules with respect to region occupied increases, while diversity of regions with respect to occupancy by molecules decreases. Of course, the molecules are material parts of the system and regions of space are not. But this does not give us an adequate ground for accommodating all re-descriptions of this kind. (See below.) I didn't find anything in the book that tried to capture this difference of perspectives -- I am far from confident that there is a general way of capturing it.

Of course, the examples just given do not satisfy the antecedents of the zero-force law. The claim is that where there is variation and heredity -- in the biological realm, for instance -- complexity will increase in the absence of constraint. As we shall see, a lot turns on how "constraint" is understood. I'll return to this. But first, let's note that there is a kind of variance that always diminishes, whether or not heredity is present. This is variance that carries information about earlier states of the system. (See, for instance, Sober and Steel forthcoming, section 9.) The Seurat and smoke examples are instances of this general truth: in a thousand years, you can't detect the image that was painted on the wall; in a few hours, you cannot tell that there had been smoke in the kitchen.

In biological systems too, variation that carries information about past states is erased. Consider my descendants. Let there be a genetic measure F of organisms such that higher values of F correlate strongly with closeness of descent from me. In other words, if F(x) is greater than F(y), then probably x is more closely descended from me than y. At present, a very small number of organisms has a high value of F, and the rest of the human population has an extremely low value. As time passes, there will for a time be more organisms with a high value relative to the rest, though these values will not be as high as those of my very close descendants. As time goes by, the population will become less varied with respect to F: my descendants will differ less and less from non-descendants. In fact, all humans share a common male and a common female ancestor: there is very little variation with respect to closeness of descent from these particular individuals and hence with respect to marks of such descent. However, earlier in the history of the human species, when other lineages had not yet terminated, there was more variance in this respect.

Another example prima facie damaging to the zero-force law has to do with drift. Suppose that at a particular locus, there are two alleles, A and a, neither of which has any selective advantage. Consider a population that is evenly divided between A-bearing and a-bearing organisms. Suppose further that the probability of A mutating to a and that of a mutating to A are more or less equal. As far as I can see, this is an unconstrained evolutionary system. But we know with certainty that the population will slowly drift away from the half-half state, and that eventually one or the other of the two alleles will become fixed. As Sober and Steel (forthcoming) say, this is a steady decrease in variation. The initial 50-50 state is maximal with respect to variation and the final 100-0 or 0-100 state is minimal. McShea and Brandon do address this example and argue that the finiteness of the population imposes an "absorbing boundary" (29-30), which, I gather, is supposed to be a form of constraint. I do not fully understand the metaphor, but it seems that, according to them, finite population size is itself a constraint. On this view, drift itself turns out to be a consequence of constraint. I do not know how this sort of idealization fits with the zero-force heuristic. On the one hand, more types will be generated in an infinite population; on the other hand, it is not clear how complexity is to be measured. This is a problem that ought to have received greater attention.
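The drift-to-fixation dynamics can be sketched with a toy Wright-Fisher simulation (my illustration; the population size and cutoff are arbitrary): a neutral allele starting at frequency 0.5 wanders until it hits one of the absorbing boundaries, at which point within-population variation at the locus is gone:

```python
import random

random.seed(1)

def wright_fisher(pop_size=50, p0=0.5, max_gens=10_000):
    """Neutral drift: each generation, the new allele frequency is
    drawn by binomial sampling from the old one. Returns the
    trajectory of the A-allele frequency until fixation or loss."""
    p = p0
    traj = [p]
    for _ in range(max_gens):
        if p in (0.0, 1.0):          # absorbing boundary reached
            break
        count = sum(random.random() < p for _ in range(pop_size))
        p = count / pop_size
        traj.append(p)
    return traj

traj = wright_fisher()
# Variation at the locus, measured as p * (1 - p), starts maximal
# at p = 0.5 and is zero once one allele is fixed.
print(len(traj), traj[-1])
```

No biasing "force" appears anywhere in this model; the steady loss of variation is produced by finite-population sampling alone, which is why treating finite population size as a "constraint" seems strained.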

When we calculate the rate of information erasure, we are interested in a number of causal processes -- how genes are copied, the mutation rate, the barriers to inter-species reproduction, gene fitness, etc. Each of these processes affects the increase of variance. Mutation obviously so; fitness in that fitter genes tend to increase in frequency, thus decreasing variance; the barriers to inter-species reproduction because they tend to preserve pre-existing variety, and so on. Each of these biasing factors is what the authors call a "constraint". However, some of them are "constitutional constraints" inasmuch as they define the biological realm and differentiate it from other domains -- heredity is an example -- and the others are "imposed" or extra-constitutional. The zero-force law is supposed to apply in the absence of imposed constraints. (See the discussion at the end of chapter 2.)

So far, I have omitted discussion of constraints. I turn now to the law as stated above including the italicized phrase. At the start of chapter 3, the authors say: "In the absence of constraints … mutation will cause individuals in the next generation to differ from each other" (25). This is crucial. Mutation is, after all, the most important source of added variation in an evolutionary system. But unless mutation is unbiased, it is (by definition) a "constraint". I take it that mutation would be unbiased at a given locus if all of the possible mutations were equally probable. Further, it is unbiased if the mutation rate is independent of other biasing factors, such as natural selection or environmental exigency.

Now, mutation is not in fact unbiased in these ways. There is more mutation during environmentally challenging circumstances; this is possibly an evolved trait. There is DNA repair, which makes mutation away from the ancestral type less likely than mutation toward it. There are "mutation hotspots" where mutation rates are much higher than at other locations. These phenomena indicate that mutation itself is a constrained process. The zero-force law states that unconstrained mutation -- mutation that is neither checked nor augmented, but is rather at some hypothetical base rate -- results in increased variation. (The discussion at 130-131 is relevant to this point.)

Why restrict the zero-force law to unconstrained, or base-rate, mutation? I take it that the dialectic leading to this restriction is something like the following. What I initially took from the picket-fence example was that all sorts of random events leave a mark on a system and thus increase its differentiation. (We saw that there is good reason for doubting the generality of this claim, but put this aside for a moment.) However, the authors realize that some sequences of events are, in fact, homogenizing. When the picket fence is regularly painted and repaired, it regains its original low variance. In evolution, natural selection plays this role: it eliminates unfit mutations. I take it that the no-constraints clause is supposed to exclude these sorts of biased happenings. Unfortunately, it leads to some empirically quite implausible claims.

One of these is the claim that the zero-force law explains such large increases of variation as the Cambrian explosion and the Ordovician radiation. For it is one thing to say that sudden large increases of variance are unsurprising, given the ubiquity of variance-increasing sequences of events, including mutations. It is quite another thing to say that such events are caused by base-rate mutation. How do we know this about the Cambrian explosion? And if we do not, then how does it confirm the zero-force law? In chapter 3, the authors list many instances of sudden large increases of variety. And they say that "The common theme is adaptation in individual lineages to exploit new opportunities, occurring to some degree differently in each species and producing an expansion into unoccupied ecospace. This is the ZFEL." (37; my emphasis.) Actually, this isn't ZFEL; it's natural selection. But the authors' point, presumably, is that the dispersed adaptation mentioned in this passage does presuppose a large number of genes lying in wait, as it were, to "exploit new opportunities." Certainly, this suggests that variety is needed to explain such events -- mutation creates variety, and lots of mutation causes lots of variety. But how does the no-constraints clause contribute to our understanding of events like the Cambrian explosion?

The authors' attitude seems to be that constraints can always be separated and discounted -- that underlying the constraints is an unbiased tendency towards increased variety. I am sceptical about how potent this assumption is from an explanatory point of view, because even if it is granted, it is entirely obscure how strong the zero-force tendency is. For all we know, it might be very weak except when amplified by biased "forces". This is a problem reminiscent of one faced by Darwin himself. The kind of thinking that lay behind the Principle of Natural Selection was revolutionary; on the other hand, it took a lot of empirical and theoretical work to establish that there was enough variation to allow natural selection sufficient scope. The same goes here: it would take a lot of work to establish that there is a lot of "complexification" in the absence of constraint.

What is the upshot of this ambitious book? Putting aside some of the questions raised above about the correctness and reach of the central principle, it seems to me quite clear that it accomplishes something important. It introduces us to a new heuristic concerning the emergence of phylogenetic and structural diversity. By the time we come to chapter 7, a tour of "Implications", the reader finds herself or himself practised in using this heuristic.

Even so, the authors over-reach. The treatment of functional complexity is an example. "Consider two ways of building a stone arch," they urge (124). The first way is to build it by finding the right-shaped stones and raising them one by one, with supplementary supports that can be removed only once the keystone is in place. The second way is to have an enormous pile of irregularly shaped stones -- a pile so large and diverse that buried away in it, by complete chance, is a perfect arch. Knock away all the other stones, and the freestanding arch is revealed. Perhaps complex biological structures are like this, they say. The standard story concerning the evolution of the eye, with all of its elaborately interdependent parts, is like the first way -- with pre-adaptation playing the role of the supplementary supports. "But there is an alternative method," the authors say. "It comes from ZFEL, which invites us to see organisms as always awash in novel part types." These parts might be thrown up spontaneously (as in the enormous pile) and "the role of natural selection could be mainly negative, revealing [functional] complexity by subtraction." I suggested earlier that this is incorrect: there is no reason to think that unconstrained mutation produces enough variation for this. Further, "complexity by subtraction" doesn't come across as a plausible alternative to pre-adaptation.

As a final comment, let me recount that having left my review copy on a plane, I bought and used the Kindle edition. Aside from its lack of page-numbers, I found it extremely useful. It was (mostly) well-produced -- though it did not have links out to the endnotes -- and it enabled me to leave easily found annotations, notes, and book-marks (without ugly stickies) and to find passages easily when it came time to check hard-copy out of the library in order to provide page references for this review.*


Sober, Elliott and Mike Steel (forthcoming) "Entropy Increase and Information Loss in Markov Models of Evolution," Biology and Philosophy.

* André Ariew, Alex Rosenberg, Elliott Sober (who also gave me detailed comments on a previous draft), and Denis Walsh helpfully discussed the Seurat and drift-to-fixation examples in this review. I don't know how far they would agree with my treatment.