Previously I discussed the probability of extinction across one generation for a new mutant allele. To review, there is a ~1/3 chance that a new mutant will go extinct within one generation of its origination (i.e., a de novo mutation is not replicated and transmitted to the next generation of organisms). If there is positive selection on the mutant allele there is a reduction in the probability of extinction, but only a mild one. Consider that if s is 0.10, a 10% increase in fitness vis-a-vis the population median fitness, that advantage is likely to be swamped in many cases by the stochasticity inherent in the reproductive output of any given individual carrying a novel variant. In other words, even favored alleles are not guaranteed persistence.
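The one-generation numbers above are easy to check if we model a mutant’s offspring count as a Poisson random variable with mean 1 (or 1 + s under selection), the standard branching-process setup. A minimal sketch of my own, under that assumption:

```python
import math

# Offspring copies of a single new mutant modeled as Poisson.
# A neutral mutant leaves on average 1 copy; extinction in one
# generation means it leaves 0 copies: P(X = 0) = e^(-mean).
p_extinct_neutral = math.exp(-1.0)         # ~0.368, i.e., roughly 1/3

# With a 10% fitness advantage the mean rises to 1.1, but the
# one-generation extinction probability barely budges.
s = 0.10
p_extinct_selected = math.exp(-(1.0 + s))  # ~0.333

print(f"neutral:  {p_extinct_neutral:.3f}")
print(f"s = 0.10: {p_extinct_selected:.3f}")
```

The gap between 0.368 and 0.333 is the “mild reduction” in extinction probability referred to above.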
But what about an allele which makes it past this cordon of extinction and eventually fixes, that is, goes from 1 mutant to ~100% of the alleles at a locus (a gene substitution)? We know that in a large population (where drift is ignored) the probability of fixation is 2s, which agrees with our intuition that even strong selection coefficients (e.g., 0.10) don’t guarantee escape from extinction. So, if a new mutant confers a 10% greater fitness upon an individual carrying the allele there is only a 20% chance that the allele will sweep and fix in the population. In the case of neutrality the probability of fixation is 1/(2N); in other words, the probability of fixation is inversely proportional to population size. This agrees with our intuition insofar as a new mutant in a population of 10 has fewer stochastic “steps” to make before reaching 100% than another mutant which arises in a population of 1000. Of course, conventional neutral theory does tell us that the rate of substitution is independent of population size: even though the probability of ultimate fixation of a given allele decreases with increasing effective population size, the number of mutants within the genetic background increases proportionally, ergo, the rate of neutral substitution is proportional only to the mutation rate. This idea was a basis for the “molecular clock” in regard to genome evolution.
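Both fixation probabilities, 2s for a favored allele and 1/(2N) for a neutral one, can be recovered by brute-force simulation of a Wright-Fisher population. The sketch below is my own illustration (the values of N, s, and the trial count are arbitrary, not from the text):

```python
import random

def fixation_probability(N, s, trials=5000, seed=42):
    """Estimate the probability that a single new mutant fixes in a
    Wright-Fisher population of 2N gene copies by simulation."""
    rng = random.Random(seed)
    two_n = 2 * N
    fixed = 0
    for _ in range(trials):
        count = 1  # one new mutant copy
        while 0 < count < two_n:
            # Selection tilts the sampling frequency toward the mutant.
            p = count * (1 + s) / (count * (1 + s) + (two_n - count))
            count = sum(rng.random() < p for _ in range(two_n))
        if count == two_n:
            fixed += 1
    return fixed / trials

# Neutral case: theory predicts 1/(2N) = 0.01 for N = 50.
print(fixation_probability(N=50, s=0.0))
# Selected case: theory predicts roughly 2s = 0.10 for s = 0.05.
print(fixation_probability(N=50, s=0.05))
```

Note that the 2s approximation assumes s is small and 2Ns is large, so with these toy parameters the simulated value sits slightly below 0.10.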
Please read Everything You Learned in Introductory Genetics was Wrong at some point. It is a good “reality” check for anyone reading these posts!
But in this section Gillespie is exploring a somewhat different model. He contends that “the mean time until the first substitution” for a mutant emerging from the “mutational caldron” is:
t = 1/(2Nvs), where N and s are as above, and v is the mutation rate, with Nvs << 1 (that is, the mutation rate is very low, which is implicit in modeling originations as a Poisson process)
If one assumes that the parameters governing the origination of mutations destined for substitution are held equal, the rate of substitution is then simply the reciprocal:
ρ = 2Nvs
In other words, the rate of substitution in this model is proportional to population size, selection coefficient and mutational rate! Jumping out of the “caldron” exhibits a sensitivity to population size.
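To make that sensitivity to N concrete, here is a small sketch of my own (the parameter values are illustrative, not from the text) treating successful substitutions as a Poisson process with rate ρ = 2Nvs, so that the mean wait for the first substitution is 1/(2Nvs) generations:

```python
import random

def mean_wait(N, v, s):
    # Mean time (in generations) until the first substitution
    # under the model in the text: t = 1/(2Nvs).
    return 1.0 / (2 * N * v * s)

v, s = 1e-6, 0.01  # illustrative mutation rate and selection coefficient
for N in (10_000, 100_000):
    print(f"N = {N:>7,}: mean wait = {mean_wait(N, v, s):,.0f} generations")

# Monte Carlo sanity check: exponential inter-arrival times with
# rate rho average out to 1/rho.
rho = 2 * 10_000 * v * s  # 2e-4 per generation
rng = random.Random(0)
waits = [rng.expovariate(rho) for _ in range(50_000)]
print(f"simulated mean wait: {sum(waits) / len(waits):,.0f}")
```

A tenfold increase in N cuts the expected wait tenfold, which is exactly the population-size sensitivity the text highlights.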
Of course, there is a serious problem with this. Starting with R.A. Fisher’s conception of adaptation in the 1920s, and proceeding up to contemporary models such as H. Allen Orr’s, theories of evolution driven by selection upon mutations have emphasized that selection coefficients for subsequent mutations should decrease as the fitness optimum is approached. This is the classic “overshoot” problem which Fisher illustrated geometrically: as you near a phenotypic ideal, excessive genetic deviation via mutation is far more likely to result in a decrease in fitness, as you jump over the optimum and go careening down the adaptive hill. As Gillespie notes, these models imply a burst of substitutions driven by positive selection and then equilibration at the adaptive peak. This was the reasoning behind the “Classical School” of evolutionary genetics and their argument for why polymorphism should be minimally extant within most populations: evolution would occur in dramatic sweeps to fixation followed by fallow periods of genetic stagnation (similar in spirit to Punctuated Equilibrium). Many of these models also exhibit relative insensitivity to mutation rate, population size, or selection coefficients, as the evolutionary dynamics work quickly over short periods of time with the raw material on hand and proceed to the same optimum (some models depend on the logarithm of the population size). It must be cautioned that these models are based on assumptions (e.g., Poisson-distributed events) which do not hold in all, or even most, cases. Clearly large regions of the genome are neutral or nearly neutral, while other portions seem to be under positive selection, and still other regions are subject to balancing selection of various kinds. The important point of the models is to offer insight into the dynamics of particular situations, giving us one piece of the greater puzzle.
And yet there is another issue which Gillespie covers, and which is really the heart of the matter: environmental variation. Why do the substitutions toward an optimum occur? One presumes that there might be an exogenous factor, or a coevolutionary interspecies “arms race” at work, or perhaps intraspecies dynamics. Whatever the reality, fitness is a difficult parameter to reify in a universal sense, as opposed to a local value. Clearly a fitness landscape can be reworked dramatically if there is environmental impetus, and a new regime of selection coefficients may drive an immediate burst of adaptive evolution from the genetic background in response to exogenous inputs. So in this scenario evolution is proportional simply to the rate of environmental change, with the genetic substitutions drawn from the ambient genetic background; the particular mutation rates, population sizes, and selection coefficients are sufficient across a wide range of values and thus of little concern. In paleoanthropology one might consider the Turnover Pulse Hypothesis as an example.
I think that these sorts of models are probably relevant, or not, contingent upon the species you are speaking of. Clearly humans have less reproductive variance than salmon, for example. Since I’m pretty homocentric I look at these models as guides that can aid us in elucidating our own evolutionary past, and present, and ultimately, future. So keep that in mind: I’m not talking about Newton’s Three Laws or the Laws of Thermodynamics, fixed and precise truths, but models which are an aid to understanding the general processes bubbling under the surface of the rich texture of life’s variation.
Note: Adapted from chapter 5 of Evolutionary Genetics: Concepts & Case Studies.