evolgen

Chance, Stochasticity, Probability and Evolution

John Wilkins has replied to Larry Moran on the role of “chance” in evolution (incidentally, Moran replies to Wilkins on the same topic, but a different post by Wilkins). Here’s what Larry wrote:

Nobody denies the power of natural selection and nobody claims that natural selection is random or accidental. However, the idea that everything is due to natural selection is the peculiar belief of a relatively small number of people, of whom Richard Dawkins is the most outspoken.

A great deal of evolution is the result of chance or accident, as is a great deal of the rest of the universe. It’s perfectly okay to say, as a first approximation, that lots of evolution is random or accidental. This is a far closer approximation to the truth than saying it’s all the result of design by natural selection.

This is a favorite topic of Moran’s (see his webpage for more). Wilkins takes a closer look at Moran’s point, focusing especially on some philosophical issues, but the argument is lacking due to a sloppy treatment of the statistics and a failure to adequately define “chance” from a statistical perspective.

First, a quibble on probability distributions. Wilkins writes:

There are a number of meanings to “chance” in this case. One of them is that without a perturbing cause, ensembles of events will tend to form a Poisson distribution – the “bell curve” of beginner’s statistics.

I like where he was going with a definition of “chance” — random draws from a probability distribution — but the “‘bell curve’ of beginner’s statistics” is the normal distribution, not the Poisson. (A Poisson distribution does converge on a normal distribution as its mean grows large, a consequence of the central limit theorem.) A Poisson random variable specifically describes how many events occur in a fixed window of time. For example, one could model the number of cars driving past your house every hour using a Poisson distribution. Its counterpart is the exponential distribution, which deals with waiting time — the expected length of time between events (how long we expect to wait between cars driving past the house). But I digress.
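The link between the two distributions is easy to see in a short simulation (the rate of 10 cars per hour is, of course, made up): draw exponential waiting times between cars, and the counts per hour come out Poisson-distributed.

```python
import random

random.seed(42)
RATE = 10.0  # made-up average of 10 cars per hour

def cars_in_one_hour():
    """Count arrivals in a one-hour window, given exponential waiting times."""
    t, count = 0.0, 0
    while True:
        t += random.expovariate(RATE)  # waiting time to the next car
        if t > 1.0:
            return count
        count += 1

counts = [cars_in_one_hour() for _ in range(50_000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# for a Poisson-distributed count, the mean and the variance both
# approach the rate parameter (here, RATE)
print(round(mean, 2), round(var, 2))
```

The equality of mean and variance is the telltale signature of a Poisson count, which is one way to check the model against real data.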

Throughout his entire argument, Wilkins never gives a true definition of “chance” (neither does Moran, for that matter, but his blog entry was much shorter). And, while Wilkins does hint that he means random draws from a probability distribution, he also suggests something else:

Yes, molecular changes can be “random” in the sense that they are external to the theory of evolution. They are not external to the theory of chemistry. In the domain of subatomic physics, the randomness of radioactive decay or gamma radiation has to do more with statistical properties than actual chance.

“Statistical properties” mean, to me, random draws from a probability distribution. So, what is chance if chance is not a stochastic process? Oftentimes, people conflate randomness with a uniform distribution — equal probabilities of all possible outcomes. But when we model a random process, we assume some distribution that approximates the randomness of the natural event we’d like to simulate. In evolutionary biology, this is often done with the binomial distribution — either allele A1 or allele A2 gets passed on to a child, either a locus obtains a mutation or it does not, either two alleles coalesce at generation t-1 or they do not.

The examples mentioned above are the chance aspects of evolution. Evolution, in a nutshell, results from differential inheritance of alleles. The expected frequency of a neutral allele in the next generation is simply its frequency in the previous generation. But, in finite populations, there exists some variance around that mean. Processes that can be modeled as random draws from probability distributions — mutation and drift, for example — can lead to differential inheritance of alleles. Moran is arguing that these types of processes trump natural selection in terms of importance throughout evolutionary history. Many of the arguments Wilkins makes are irrelevant to that point, instead focusing on the theistic implications of “chance” (which I won’t touch with a ten foot cross).
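That variance around the expectation is easy to see in a bare-bones Wright-Fisher sketch (population size, starting frequency, and replicate count below are arbitrary choices for illustration): each generation, every allele copy in the new generation is an independent binomial draw at the old generation’s frequency.

```python
import random

random.seed(1)
N_COPIES = 200     # 2N allele copies in a diploid population of N = 100
P0 = 0.5           # starting frequency of the neutral allele A1
GENERATIONS = 50
REPLICATES = 500

def drift(p):
    """One replicate: binomial sampling of allele copies each generation."""
    for _ in range(GENERATIONS):
        # each copy in the next generation is A1 with probability p
        k = sum(random.random() < p for _ in range(N_COPIES))
        p = k / N_COPIES
    return p

finals = [drift(P0) for _ in range(REPLICATES)]
mean_p = sum(finals) / REPLICATES
spread = sum((p - mean_p) ** 2 for p in finals) / REPLICATES
# across replicates the mean frequency stays near P0, but individual
# populations wander, and some may fix or lose the allele entirely
print(round(mean_p, 3), round(spread, 3))
```

No selection appears anywhere in this loop, yet allele frequencies change every generation: that is the sense in which drift is “chance.”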

But the big question remains: is Moran correct? Is evolution due mostly to the stochastic processes described above, or does natural selection (non-random draws from a probability distribution) drive most changes? And that’s what a lot of people are studying. Of course, it depends on which aspects of evolution one is most interested in. If one is studying evolution at the DNA sequence level, then, yes, evolution is mostly due to the mutations that accumulate over a given amount of time (which can be modeled as a Poisson distribution). If, however, one is interested in the evolution of the amino acid sequences encoded by protein coding genes, then we begin to see many more examples of both selective constraint (stasis due to natural selection) and adaptive evolution (natural selection driving change). Finally, the morphology of organisms probably contains the greatest evidence for natural selection. A cetacean’s streamlined morphology, a bird’s wing, and every type of eye out there were all shaped by natural selection.

Moran’s peeve — and one that I agree with — is that people tend to interpret everything in an adaptationist light. This is especially problematic when it comes to molecular evolution. Much of what people explain using natural selection may merely be a spandrel.

Comments

  1. #1 razib
    December 26, 2006

    Of course, it depends on which aspects of evolution one is most interested in.

    yes. this is where dawkins got a little sly and shifty in the blind watchmaker. he defined the insight of neutral theory away. nevertheless, when most people (as in the public) think about evolution they aren’t conceptualizing genome level changes, they’re thinking of the evolution of tetrapods and what not. if we rewound the clock back to the cambrian would life as we know it look to us as monsters due to the chance and contingency of morphology? gould would say yes, and simon conway morris no.

  2. #2 Jonathan Vos Post
    December 26, 2006

    The morphology of convergent evolution was a good example to give.

    Good place to start on the Central Limit Theorem is the web page, and its carefully selected references:

    Weisstein, Eric W. “Central Limit Theorem.” From MathWorld–A Wolfram Web Resource.

    As to its applicability in Population Genetics, one can start at:

    Wikipedia: Population Genetics

    Population genetics by Knud Christensen

    I think that the fun starts with Deviations from Hardy-Weinberg equilibrium.

    We are in a deeply non-equilibrium biosphere. My [ahead of its time for 1975-1977] doctoral dissertation was on non-steady-state mathematical biology of evolving metabolic systems.

    The plagiarist department chairman, in deciding to prevent the ad hoc Thesis Committee from becoming an Official Thesis Committee, thereby preventing them from formally voting to either approve or reject my dissertation (subsequently published chapter by chapter in refereed journals and proceedings), made the amazing dismissal of my work to the Acting Dean (the school had, in a chaotic era, 5 acting deans and 4 deans):

    “He’s only considered the non-steady state case.”

    Ummm. The steady-state case is when all the derivatives are set to zero. Then one uses algebra on the Michaelis-Menten equations (classical enzymology) to find the ratios of concentrations of substrate, product, intermediates.

    Steady-state = dead. Non-steady-state = alive.

    I lean strongly towards Natural Selection, having researched it and taught it. Also think that Darwin was one of the great Victorian prose stylists, always worth re-reading. Yes, he had little math, and Mendel’s results were fudged. But the neodarwinian paradigm is now on a sound mathematical and experimental footing.

    Yes, statistical effects give other types of evolution. Founder effect. Genetic Drift. Neutral Gene hypothesis.

    What mattered in my research was discarding obsolete static models of Fitness of enzymes. Fitness is deeply inherently Complexity-riddled nonlinear, non-steady-state, non-equilibrium.

    We are fractal organisms evolving to survive in a fractal ecosystem on a fractal planet in a fractal cosmos.

    We are “on the edge of chaos” organisms evolving to survive in an “on the edge of chaos” ecosystem on an “on the edge of chaos” planet in an “on the edge of chaos” cosmos.

    Chaos only looks random if you view it in too few dimensions.

    Organisms and ecosystems are chaotic attractors in the evolutionary and developmental phase spaces of trajectories of all possible organisms and ecosystems.

    Apparent randomness (but actually in chaotic systems) is too important to be left to chance.

  3. #3 cff
    December 26, 2006

    Wilkins wrote: “After all, if we are going to deny that anything is random then we have to stop talking about the outcomes of coin flips and the spin of the roulette wheel. But that would be silly. We all know what we mean when we talk about chance events or accidents. We mean that such events are not predictable by any means at our disposal. We are contrasting such events with those, such as natural selection, that have an obvious cause and a (mostly) predictable outcome.”

    This isn’t standard at all, right? The story goes that if we knew all the physical mechanisms going on when we flip a coin, then we could predict exactly what will happen. However, since we are not privy to this information, we have to use probability theory to generalize what will happen.
    This is just the standard epistemic/metaphysical(or “real”) distinction.

    So, is Wilkins just confusing the epistemic points from the metaphysical ones?

  4. #4 RPM
    December 26, 2006

    cff, you’re trying to get me to dive headfirst into a topic with which I’m not totally comfortable. It seems to me that we could model something like flipping a coin using a deterministic model, but it would be extremely complex (lots of parameters). So we simulate these using a stochastic model. I see nothing metaphysical about this.

  5. #5 cff
    December 26, 2006

    Yes, of course, modeling the coin is an epistemic affair.
    But, I got the impression that Wilkins was using statistical models to argue that evolution is, at some level, really chancy (i.e., the part in his essay which you wouldn’t touch with a ten foot cross, the metaphysical claim).
    I was just suggesting that perhaps your problem with the essay manifested itself when Wilkins slipped from talking about probabilities as useful descriptions for evolution to the consequences of doing so. Anyway, it’s just a side point to the more interesting discussion above.

  6. #6 John Wilkins
    December 26, 2006

    cff, where did I say that?

    RPM, thanks for the detailed discussion. I think that whether statistical properties at the quantum level (I don’t do quantum – that’s for the philosophers of physics, and I’m not sure that they understand it any better than I do, philosophically) are a sampling error (that is, epistemological) or a real fact (in the sense of “real” that each speaker chooses to adopt) is an interesting question. No matter which, the statistical properties of ensembles are what count at the population level, and I don’t think much hinges on it being either an epistemic or ontological fact. If epistemic, then it’s a property of evolutionary explanations. If ontological, then it’s a property that evolutionary explanations must deal with. Since we lack direct access to the noumenal world, if that is a “real” thing, we can say that evolution is statistical in the sense that any individual outcome is a selection from a probability distribution (and yes, I should have said “Gaussian” or “normal”) either way.

  7. #7 bigTom
    December 27, 2006

    I think the problem some have is how the selectivity of an algorithm can drive the statistics of a sample. For example, we can use an algorithm like simulated annealing to get an approximate minimum of a system of many variables. With, say, a million samples you get within some distance of the global minimum. If you simply took a million random values of the independent variables and picked the smallest, you wouldn’t do nearly as well. A lot of people try to argue that the DNA sequence of a higher organism is impossibly unlikely. They are assuming you just take a zillion random samples and select the one animal that results, and of course the probabilities are ridiculously small. It’s only because nature uses a sort-of stepwise algorithm that such specialized solutions can emerge in reasonable (compared to the lifetime of the universe) time.
    Then there is a lot of chance involving the external conditions (environment) that drive the algorithm.
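The stepwise-search point can be sketched in Python. Everything below is invented for illustration (a 20-dimensional quadratic bowl as the objective, arbitrary temperature and step-size settings); it compares blind random sampling against a bare-bones simulated annealing loop given the same evaluation budget.

```python
import math
import random

random.seed(0)
DIM, LO, HI, EVALS = 20, -10.0, 10.0, 10_000

def f(v):
    # toy objective in 20 dimensions: global minimum of 0 at the origin
    return sum(x * x for x in v)

def random_point():
    return [random.uniform(LO, HI) for _ in range(DIM)]

# Strategy 1: pure random sampling -- keep the best of EVALS blind draws.
sample_best = min(f(random_point()) for _ in range(EVALS))

# Strategy 2: bare-bones simulated annealing -- small stepwise moves,
# accepting uphill moves with a probability that shrinks as the
# "temperature" cools.
x = random_point()
fx = f(x)
anneal_best = fx
temp = 10.0
for _ in range(EVALS):
    cand = [max(LO, min(HI, xi + random.gauss(0, 0.5))) for xi in x]
    fc = f(cand)
    if fc < fx or random.random() < math.exp(-(fc - fx) / temp):
        x, fx = cand, fc
        anneal_best = min(anneal_best, fx)
    temp *= 0.9995  # geometric cooling schedule

print(round(sample_best, 2), round(anneal_best, 2))
```

With the same number of evaluations, the stepwise search gets far closer to the minimum than blind sampling, because each accepted move builds on the last; that cumulative, building-on-previous-steps property is the rough analogy to selection acting on standing variation.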

  8. #8 cff
    December 27, 2006

    eh, Sorry about that, Mr. Wilkins. In my caffeine-induced blabbering this afternoon I attributed Moran’s favorite topic to you. So again, sorry; this lapse won’t happen again (i.e., I’ll just shut up)
