ResearchBlogging.org

The paper I’m about to discuss is a minefield of potential misconceptions that arise from the way we often use language to describe natural phenomena. This is a situation where it would be easier to start with a disclaimer … a big giant obvious quotation mark … and then use the usual misleading, often anthropomorphic language. But I don’t think I should do that. We’ll address this research the hard way, but the result will be worth the extra work.

Here is the basic hypothesis. The null model is that genetic variation arises randomly, that this variation is the raw material on which selection works, and that the association between particular variation … or amounts of variation … and the selective milieu in which it is likely to be culled is itself fortuitous.


It is probably already easy to argue that this is not true at a very basic level. Variation that is always highly deleterious (the number of major body parts in many metazoans, the presence or absence of key reproductive structures in sexually reproducing organisms, etc.) may occur in nature less frequently than variation that hardly ever matters (fingerprint shape, for instance) because of the relative consequences and, presumably, evolved patterns of genetic expression that either cull out really bad variants early in development (two-headed kittens are aborted early) or disconnect heritability from function (fingerprint variation is not really genetic).

But is there any circumstance in which variation is ramped up in certain systems because variation in those systems is itself adaptive at the population level, even if not at the individual level? In other words, birds living in changing environments may benefit from having offspring where one individual will be well adapted to average conditions, another well adapted to drought, and another well adapted to wet conditions. Birds that have all ‘average’ offspring will not do well during extreme years, but birds with highly variable offspring will generally do less well but occasionally do extraordinarily well, perhaps being the only individuals to successfully reproduce in extreme years.
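The bird scenario above is a classic bet-hedging argument, and it can be sketched in a few lines of code. This is a minimal toy, not anything from the paper under discussion; all the numbers and names are made up for illustration. The point is that multiplying success across years means one catastrophic year with zero reproduction wipes out the all-‘average’ lineage, while the variable brood always has someone who makes it through:

```python
# Toy bet-hedging sketch (all numbers hypothetical): compare a bird lineage
# whose offspring are all "average" against one with variable offspring,
# across mostly average years plus one drought year and one wet year.

YEARS = ["average"] * 8 + ["drought", "wet"]

# Per-offspring reproductive success of each offspring type in each kind
# of year (made-up values; "average" offspring fail totally in extremes).
FITNESS = {
    "average": {"average": 0.9, "drought": 0.0, "wet": 0.0},
    "drought": {"average": 0.1, "drought": 0.9, "wet": 0.0},
    "wet":     {"average": 0.1, "drought": 0.0, "wet": 0.9},
}

def lineage_fitness(offspring_types, years):
    """Multiplicative fitness across years: a single year with zero
    brood success zeroes out the whole lineage, which is the point."""
    total = 1.0
    for year in years:
        brood = sum(FITNESS[t][year] for t in offspring_types) / len(offspring_types)
        total *= brood
    return total

uniform = ["average", "average", "average"]   # all-'average' brood
variable = ["average", "drought", "wet"]      # bet-hedging brood

print(lineage_fitness(uniform, YEARS))   # zero: the extreme years kill it
print(lineage_fitness(variable, YEARS))  # small but positive
```

In any single average year the uniform brood does better, which matches the text: the variable strategy generally does less well, but it is the only one left standing across a run of years that includes extremes.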

Makes sense, but how is this different from some genetic variation being just ‘good’ no matter what? How is this different from sexual reproduction, and in monogamous species, a bit of promiscuity to enhance variation in the litter, etc., being selected for?

It isn’t, unless there is a molecular (or developmental) mechanism that partitions variation such that variation in some conditions is facilitated … enhanced in probability.

A few years ago, two evolutionary biologists, Marc Kirschner and John Gerhart, proposed (in the book The Plausibility of Life: Resolving Darwin’s Dilemma) that “facilitated variation” enhances the degree to which life’s diverse and disparate forms adapt. Facilitated variation is said to overcome the difficulty of accounting for the intricacy, diversity, and complexity of the adaptations we see today.

This perspective is a bit unfortunate, because I think most evolutionary biologists don’t have a huge problem with seeing existing, mostly random variation interacting mostly fortuitously with selective processes to produce what we see. However, it has to be admitted that the credibility of the current model … random genetic variation meets selection = what we see in the natural world … does not rule out the possibility of something like facilitated variation (hereafter ‘FV’).

The paper we are looking at today carries out simulation studies to test some ideas related to FV. From the authors’ summary of the paper:

One of the striking features of evolution is the appearance of novel structures in organisms. The origin of the ability to generate novelty is one of the main mysteries in evolutionary theory. The molecular mechanisms that enhance the evolution of novelty were recently integrated by Kirschner and Gerhart in their theory of facilitated variation. This theory suggests that organisms have a design that makes it more likely that random genetic changes will result in organisms with novel shapes that can survive. Here we demonstrate how facilitated variation can arise in computer simulations of evolution. We propose a quantitative approach for studying facilitated variation in computational model systems. We find that the evolution of facilitated variation is enhanced in environments that change from time to time in a systematic way: the varying environments are made of the same set of subgoals, but in different combinations. Under such varying conditions, the simulated organisms store information about past environments in their genome, and develop a special modular design that can readily generate novel modules.

I think that part of this argument is reasonable and likely to be true: Organisms that are adapted (typically) to conditions that change systematically should be found to adapt to those changes more rapidly than expected if adaptation consisted purely of random variation fortuitously encountering selective forces. However, it seems to me that there are two different possible reasons for this. Necessarily true is the simple idea that lineages (types of organisms … thinking about animals, you can substitute “genera” or any other appropriate taxonomic category) unable to make the change risk population die-offs, so that as the environment fluctuates systematically over time, those categories of organisms eventually go extinct. This is quite possibly the kind of thing that would explain why the Early and Middle Miocene had many, many species of apes and rhinos, while the Late Miocene and subsequent periods have many, many species of monkeys and antelopes. The repeated drying and wetting, cooling and warming cycles that began in the Late Miocene presented some challenge … periodically … that was more difficult for the apes and rhinos and less so for the monkeys and antelopes. (This is conjecture, of course, but it makes the point.)

The second possibility is this: As the dominant (in the population) adaptation shifts from a to b (following the environment), a is not entirely wiped out, and is present … and thus amenable to selection … when the environment shifts back. This may be a matter of both the timing of the environmental change and the degree to which b both matters and ‘displaces’ a as a trait. Traits a and b may be alleles of different genes. Thus, the spread of b happens, perhaps, not at the expense of a, so a is retained (though not under positive selection) in the genome.
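This retention idea is easy to see in a toy Wright-Fisher-style simulation. The sketch below is mine, not the paper’s model, and the parameters (population size, selection strength, generations per environmental phase) are arbitrary: alleles A and B sit at different loci, so selection favoring B during the ‘b’ environment leaves A merely neutral rather than displaced, and A is still there when the environment flips back:

```python
import random

random.seed(42)

# Sketch (hypothetical parameters): two independent loci, each with a
# "helpful" allele (coded 1) for one environment. Because the loci are
# separate, selection for B in environment "b" does not erase allele A;
# A just drifts neutrally and is still available when "a" returns.
N = 500            # population size
GENS_PER_ENV = 30  # generations per environmental phase

def step(pop, env):
    """One generation: allele 1 at the environment-relevant locus gets a
    fitness edge; the other locus is selectively neutral this generation."""
    locus = 0 if env == "a" else 1
    weights = [1.2 if ind[locus] == 1 else 1.0 for ind in pop]
    return [list(random.choices(pop, weights=weights)[0]) for _ in range(N)]

# Start with both helpful alleles at intermediate frequency.
pop = [[random.randint(0, 1), random.randint(0, 1)] for _ in range(N)]

for env in ["a", "b", "a"]:  # the environment flips back and forth
    for _ in range(GENS_PER_ENV):
        pop = step(pop, env)
    freq_a = sum(ind[0] for ind in pop) / N
    freq_b = sum(ind[1] for ind in pop) / N
    print(f"after env {env}: freq(A)={freq_a:.2f} freq(B)={freq_b:.2f}")
```

With a and b as alleles of the same locus instead, the spread of b would come directly at a’s expense, which is exactly the contrast the paragraph above draws.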

This second mechanism is included in the paper’s model, but not as a major component. Rather, the model uses levels of linkage between genetics and development, and specifically, what is called an RNA secondary structure model. I don’t want to describe this in great detail, but the idea, put simply, is that the ultimate trait (that is selected for or against) is a function of a series of events and not just the expression of a single gene, and is thus more adjustable over time.

An analogy: A car company switches from one model of car to another, but it does so by adjusting the dials and doohickeys on only a few of the numerous, sequentially aligned (on the assembly line) machines used to make the car. If it is suddenly decided to switch back to the old model, only a few adjustments need to be made. If, alternatively, the company made each car with a single machine that simply stamped out one type of car, it may have tossed out that machine when changing models, and a switch back to the old model would be impossible.
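The assembly-line analogy can be sketched as code. Everything here is illustrative (the stage names and the lowercase/uppercase “trait” are my inventions, not the paper’s RNA model): the phenotype is the output of a chain of stages, so changing models means re-tuning one dial on one stage, and switching back is cheap and exact:

```python
# Illustrative pipeline: trait = composition of stages, not a single stamp.

def fold(seq, temp):
    """Stage 1 (hypothetical): an environment-sensitive folding step;
    above the threshold the characters come out lowercase."""
    return [c.lower() if temp > 0.5 else c for c in seq]

def assemble(parts):
    """Stage 2: combine the intermediate parts into the final trait."""
    return "".join(parts)

def phenotype(genome, dials):
    """The trait is the composed output of the stages; `dials` are the
    adjustable settings on the assembly-line machines."""
    return assemble(fold(genome, dials["temp"]))

genome = "GATTACA"
old_model = phenotype(genome, {"temp": 0.2})   # one setting of the dials
new_model = phenotype(genome, {"temp": 0.8})   # tweak one dial, new trait
back_again = phenotype(genome, {"temp": 0.2})  # old settings still work

print(old_model, new_model, back_again)
```

The single-machine alternative would be a lookup that hard-codes one output; once replaced, the old mapping is gone, whereas here the old trait is recovered just by resetting the dial.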

The paper is fairly technical and mathematical, and is based entirely on computer simulation of made-up data and rules. I love papers like this, but most people don’t. So, instead of boring you with any more details, I’ll just give you the conclusions:

In summary, the present study aimed at studying facilitated variation in simple model systems. Populations evolved under systematically varying conditions were found to exhibit not only a memory of past goals but were also able to generalize to new conditions that are in the same language as previous conditions. Adaptation to useful novel goals was enhanced by organisms that have learned the shared subgoals that existed in past environments and are therefore likely to be encountered in future environments. Several elements of facilitated variation theory, such as genetic triggers, modularity, and reduced pleiotropy of mutations seem to evolve spontaneously under these conditions. It would be interesting to study the evolution of additional FV mechanisms such as exploratory behavior and body-plan compartmentalization using more elaborate models with hierarchical designs and developmental programs.

I think this is a good method for testing this sort of model. The model is limited in how accurately it can account for several different features of the real system it simulates. This is obviously always a problem with models, and this is not always a disadvantage … it is sometimes an advantage. My feeling, though, is that there are a number of critical features of life at this detailed level … at the level of gene expression, protein formation and folding, and interaction … that are not sufficiently known to allow us to directly assess the relationship between the code stored in DNA and the function carried out by the molecules that make things work.

____________________
Merav Parter, Nadav Kashtan, Uri Alon (2008). Facilitated Variation: How Evolution Learns from Past Environments To Generalize to New Environments. PLoS Computational Biology, 4(11). DOI: 10.1371/journal.pcbi.1000206

Comments

  1. #1 fullerenedream
    November 15, 2008

    That is really cool.

  2. #2 Romeo Vitelli
    November 15, 2008

    All this complex modeling makes my head hurt. Can’t we just say that a big Sky Fairy did it? It’s worked before.

  3. #3 Pelio
    November 17, 2008

    Did they have to repeatedly use the word “design” in that Abstract.
    Sigh… it just seems imprudent for a biologist these days.