Much to my professional shame, PZ recently pointed out David Plaisted, a Computer Science professor at
the University of North Carolina, who has
href="http://www.cs.unc.edu/%7Eplaisted/ce/challenge8.html">an anti-evolution screed on his university
website. Worse, it's typical creationist drivel, which anyone with half a brain should know is utter
rubbish. But worst of all, this computer scientist's article is chock full of bad math. And it's
deliberately bad math: this guy works in automated theorem proving and rewrite systems - there's no way
that he doesn't know what utter drivel his article is.
His argument comes in two parts. The first, which he sees as his real contribution, is a pseudo-scientific formulation of that good old standard, the distinction between micro- and macro-evolution (which he calls small and large evolution), along with the claim that there is a barrier - so that small changes cannot possibly add up to become large changes. From his introduction:
I would like to present an alternative to the theory of evolution that includes the type of evolution that has been observed directly or in the fossil record, but does not require a common origin for all of life. The kind of evolution that has been observed will be called small evolution, but evolution at some larger scale to be specified will be called large evolution. The purpose of this discussion is to provide some kind of a reasonable boundary on what has been observed and what can reasonably be considered possible without being a full-scale evolutionist. Evolutionists often say that if you believe in the origin of new species, you believe in evolution, showing that they do not make a distinction such as that between large and small evolution.
How does he back up this argument?
It is my impression that organisms have a loosely constrained part, consisting of
characteristics like skin color that are easily modified without many effects on the remainder of the
organism. There is also a tightly constrained part consisting of many elements that are tightly
interconnected, and one cannot change anything without significant effects on many other things. For
example, many proteins that have to interact with each other would be tightly constrained, because a major
change to any one of them would prevent its interaction with the others. And, it is difficult for such
groups of genes to evolve because of the many interactions. It may be possible, however, to modify some of
the amino acids in such a protein without much effect on its function. This would be a minor change to the
highly constrained part of the organism, and I am willing to consider such mutations as part of "small
evolution." However, major mutations to the tightly constrained "kernel" of an organism would be "large
evolution" if there were a significant number of them.Thus we have two parts of the genome, the loosely constrained part and the tightly constrained part,
or kernel. We also have two kinds of mutations, those that have little effect (minor mutations) and those
that have a significant effect (major mutations). Small evolution asserts that there can only be a small
number of major mutations in the tightly constrained part of an organism.
Yep, "It is my impression", followed by a bunch of nonsense trying to justify the idea that there must
be some kind of barrier. No actual evidence, no actual argument. No way of defining the difference between
the "tightly constrained part" and the "loosely constrained part". In fact, as the article goes on, what
his definition really comes down to is: if we've ever observed a mutation in some trait, it must be
part of the loosely constrained part. So for example, from the section he titles "A Precise Definition":
The highly constrained part of the organism would, as explained above, consist of proteins that
interact with many others, such as in the metabolism of ATP. Probably the best way to define this part of
the organism is to say that it consists of genes coding proteins without which the organism cannot
survive. We still need to specify what are major and minor mutations. Some point mutations substitute
amino acids in a protein, but do not change its shape or appreciably change its function. Some point
mutations do not even change the amino acid. These would be minor mutations. Some point mutations cause
drastic changes in the shape of the protein. This would undoubtedly have a major effect on the function of
the protein, and would be a major mutation. There are thousands of proteins in a cell, but most pairs of
proteins do not interact in any way. This is because their shapes are so carefully segregated. It is
difficult to see how this segregation could have arisen in an organic soup with many nucleic acid pieces
evolving in many different ways.

We consider the probability that a mutation to the highly constrained part of the organism will be
beneficial or fatal. Kimura (cited in ReMine, The Biotic Message, page 246) estimates that mutations which
alter amino acids are ten times more likely to be harmful than neutral or beneficial. It would be a simple
matter to run laboratory tests to see how often a point mutation causes a major change in the shape of a
typical protein. I suspect that over half of the mutations that cause an amino acid substitution would be
major mutations, and that these would almost always be fatal. If the shape of a protein changes
drastically, the chance that the protein will still have a useful function in the cell is extremely small.
Thus the ratio of harmful or fatal mutations to beneficial ones (ignoring neutral mutations) would be very
high. A major mutation to the highly constrained part (kernel) of the organism would almost always be
fatal. Also, changes to the shape of a protein probably occur in large jumps or small increments, because
of how proteins fold. If the folding of a protein is changed due to a point mutation, its shape will
significantly change. Otherwise, the shape will not change appreciably. Thus there are gaps even in the
structure of proteins.
This is fairly carefully disguised, to try to hide the fact that he's got a weasel's argument. But
what it comes down to is an elaborate argument for why any observed mutation to
any part of an organism automatically disqualifies that part from being part of the core constrained
part of the genome. Remarkably handy, that - any mutation that anyone ever points out, he can just
wave his hands - poof! - declare it part of the unconstrained genome, and whoopie! no problem.
You can also see quite clearly where he's going with this whole argument. It's just another
wretched big numbers argument. He's going to pull a bundle of numbers out of his ass, multiply
them together, declare them to be the probability of something, and say "Look, that's just too unlikely to be possible, because it's just too improbable."
Of course, before he gets to that, he needs to do some quote mining. What's a crappy creationist
screed without any quote-mining? He pulls a little bit out of the talk.origins archive (naturally
without linking... Don't want the rubes to be following the link and seeing what it really says, now
do we?)
Here's what he quotes, as it appears in the context of his own article:
Here is a quotation from Introduction to Evolutionary Biology at the talk.origins archive:
"Most mutations that have any phenotypic effect are deleterious. Mutations that result in amino acid substitutions can change the shape of a protein, potentially changing or eliminating its function. This can lead to inadequacies in biochemical pathways or interfere with the process of development"
For evolution to have occurred, it would seem that the structures of proteins must have changed in large jumps due to point mutations, since many different species have substantially different genes. Thus if all life has a common ancestor, large evolution must have occurred. However, the evidence for this is lacking in the fossil record. Even the common structures found in different organisms can argue for a common designer rather than common descent. Furthermore, there are difficulties of plausibility with large evolution. One can imagine the proteins evolving in small increments, but for them to cross the large gaps seems impossible. About the only way I can conceive for this to happen is if the gene for the protein is first copied, and then one of the copies mutates to a new shape, while the original gene continues to preserve its needed function in the cell. However, it would probably be a long time before the new copy would have any function in the cell, so this would entail a useless protein existing in the organism for a significant time. These might correspond to the pseudogenes, whose function we do not know.
Before getting to the proper quote, let me point out that he's playing another trick here. We know perfectly well that one of the common mechanisms by which new functions arise is duplication of old genes, followed by mutation of one copy. This is common, and observed. He works it into his argument here, trying to wave it away, so that if anyone brings it up, he can claim to have already refuted it.
Now, let's get back to the quote itself, and look at the original article from talk.origins:
Most mutations are thought to be neutral with regards to fitness. (Kimura defines neutral as |s| < 1/2Ne, where s is the selective coefficient and Ne is the effective population size.) Only a small portion of the genome of eukaryotes contains coding segments. And, although some non-coding DNA is involved in gene regulation or other cellular functions, it is probable that most base changes would have no fitness consequence.
Most mutations that have any phenotypic effect are deleterious. Mutations that result in amino acid substitutions can change the shape of a protein, potentially changing or eliminating its function. This can lead to inadequacies in biochemical pathways or interfere with the process of development. Organisms are sufficiently integrated that most random changes will not produce a fitness benefit. Only a very small percentage of mutations are beneficial. The ratio of neutral to deleterious to beneficial mutations is unknown and probably varies with respect to details of the locus in question and environment.
Mutation limits the rate of evolution. The rate of evolution can be expressed in terms of nucleotide substitutions in a lineage per generation. Substitution is the replacement of an allele by another in a population. This is a two step process: First a mutation occurs in an individual, creating a new allele. This allele subsequently increases in frequency to fixation in the population. The rate of evolution is k = 2Nvu (in diploids) where k is nucleotide substitutions, N is the effective population size, v is the rate of mutation and u is the proportion of mutants that eventually fix in the population.
Mutation need not be limiting over short time spans. The rate of evolution expressed above is given as a steady state equation; it assumes the system is at equilibrium. Given the time frames for a single mutant to fix, it is unclear if populations are ever at equilibrium. A change in environment can cause previously neutral alleles to have selective values; in the short term evolution can run on "stored" variation and thus is independent of mutation rate. Other mechanisms can also contribute selectable variation. Recombination creates new combinations of alleles (or new alleles) by joining sequences with separate microevolutionary histories within a population. Gene flow can also supply the gene pool with variants. Of course, the ultimate source of these variants is mutation.
The original context leads into a discussion of the actual probability math
of the propagation or elimination of a mutation in the genome of a species. It's clear that most mutations are neutral; that most (but not all) mutations that have immediate phenotypic
effect are deleterious; and that mutations that are initially neutral can become beneficial over
time. In context, it gives rather a different impression, no?
More importantly, it shows something about what good mathematical studies of evolution predict. And it means that the author of this little screed has seen the valid mathematical work. So he's
got no excuse for slapping together nonsense numbers to create his probability argument. We now know that if he doesn't cite the legitimate mathematics of mutation rates and probabilities, it's not because he's ignorant - it's because he's deliberately not using the valid math.
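For the record, that legitimate math isn't hard to get at. Plug Kimura's neutral fixation probability u = 1/(2N) into the quoted rate equation k = 2Nvu, and the population size cancels: the neutral substitution rate equals the mutation rate, k = v. You can even check the fixation probability numerically with a textbook Wright-Fisher simulation. The sketch below is mine, not anything from either article - just the standard model, for a diploid population of N individuals:

    import random

    def neutral_fixation_probability(N=100, trials=5000):
        # One new mutant allele among the 2N alleles of a diploid
        # Wright-Fisher population. Each generation, all 2N alleles
        # are drawn independently at the current mutant frequency;
        # repeat until the mutant is lost (0) or fixed (2N).
        fixed = 0
        for _ in range(trials):
            count = 1
            while 0 < count < 2 * N:
                p = count / (2 * N)
                count = sum(random.random() < p for _ in range(2 * N))
            if count == 2 * N:
                fixed += 1
        return fixed / trials

    print(neutral_fixation_probability())  # hovers around 1/(2*100) = 0.005

The point isn't the simulation; it's that the probabilities he'd need for an honest argument are sitting right there in the literature he's quoting from.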
If the shape of a protein changes, the protein will likely have no function in the organism. The protein will continue to have no function until enough mutations have accumulated that it again has a function in the cell. All of these intermediate mutations will be neutral ones. In order to obtain the new functional protein, all of the combinations of neutral mutations have to be generated, since evolution has no way to distinguish between them on the basis of fitness. Thus evolution has to do a blind search in this case, which is very inefficient.
All combinations of neutral mutations have to be generated? Where'd that come from, you ask? The answer is "nowhere". But still - ignoring that - watch what he's going to do next. He's claiming that
evolution must perform an exhaustive search over all possible neutral mutations of the gene that
produces a protein.
Typical polypeptide chains have from 50 to 3000 amino acids, so their genes have from 150 to 9000 base pairs. Often several polypeptide chains fit together in a protein, and their shapes have to match very carefully for this to occur. Once a gene has mutated, it will probably take a number of further mutations until it again has a function in the cell. For purposes of illustration, let's start with a gene having 100 base pairs and suppose that at least 5 point mutations are needed until it again has a function in the cell. Now, the more mutations that occur, the more random the gene will become, so we would expect that the density of useful genes decreases with increasing numbers of mutations. Therefore, the most efficient way to discover a new useful gene is to generate all possible combinations of mutations in order of the number of mutations. How many combinations of 5 point mutations are there? This would be 3^5 * (100*99*98*97*96)/(1*2*3*4*5) since there are 3 point mutations at each locus. This is about 18 billion. We need to have at least 18 billion individuals, then, with different alleles, to be able to generate all of these. (We might find a useful gene before all of the combinations were generated, though. This could reduce the number 5 to somewhere near 4.) Anyway, this requires at least 18 billion mutations in a region of 100 base pairs, or about 180 million mutations per base pair. The genome probably has at least 10 million base pairs, so we would need about 2 * 10^15 mutations altogether. This might be feasible in a million years in a population of a billion with about a mutation per year per individual.
Did you notice the switch? In the previous paragraph, he said "all combinations of neutral mutations". Now he's switched it to "all combinations of any mutation". And he's asserted that only point mutations
count, that point mutations occur one at a time, and that there cannot possibly be any beneficial effect
until all 5 points are in their ideal mutated form. He's mutated his argument to boost the size of
the numbers.
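His arithmetic, for what it's worth, checks out - it's only the model that's garbage. Two lines of Python (my own check, obviously not anything from his article) verify the count:

    from math import comb

    # Distinct ways to apply exactly 5 point mutations to a 100-base-pair
    # gene: choose the 5 loci, times 3 alternative bases at each one.
    print(3**5 * comb(100, 5))  # 18294867360 - his "about 18 billion"

The number is fine. The assumption that evolution must enumerate every one of those combinations before anything useful can happen is the part with no justification whatsoever.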
But only 5 point mutations to get a new shape of a functional protein seems very small when there are 150 to 9000 base pairs. If there are 10 mutations, the same calculations lead to about 10^18 combinations, which is about 10^16 mutations per base pair, for a total of 10^23 mutations in the population if the genome has about 10^7 base pairs. A billion individuals for a billion years would give 10^18 mutations with one mutation per individual per year, so we would need a trillion individuals for 100 billion years, or a higher mutation rate.
And again. Earlier, he argued that even small changes in the constrained part of the genome have
drastic effects on protein shapes. But now, 5 point mutations, which were absolutely lethal
before, are too insignificant. He's making an argument to boost his numbers even more. 5 base pairs of changes don't create big enough numbers - so he's going to shift the goalposts in order to make the probability numbers even worse.
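And again, the raw numbers are right even though the model is nonsense - the same check as before, with 10 substituted for 5:

    from math import comb

    # 10 point mutations in a 100-base-pair gene:
    print(3**10 * comb(100, 10))  # about 1.02 * 10^18 - his "about 10^18"

Divide by 100 base pairs, multiply by his 10^7-base-pair genome, and you get his 10^23 - with every step inheriting the same unjustified exhaustive-search assumption.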
Of course, even 10 point mutations is quite small, and many genes have many more than 100 base pairs. One would expect many more than 10 point mutations to an allele before it again has a (new) function in the cell. So the numbers soon become astronomical and completely infeasible. In addition, we need to consider that some genes work together with others, so we might need to generate 3 or 4 genes at the same time, making the task even much more difficult. (This corresponds to Behe's "irreducibility." Note that this does not prevent evolution, but makes it astronomically more difficult, if irreducibility can be demonstrated.) For several polypeptide chains that fit together, it would be very hard to imagine how the whole complex could change shape gradually except by very large steps, which we have shown to be impossible. Another problem is that neutral mutations tend to die out of the population, so it may not be possible to generate all these alleles even in a vast amount of time unless the population is even more astronomically large to generate the combinations of neutral mutations rapidly in a single line of individuals.
And once again. See, he wants to make it look like evolution is so outrageously improbable that it's just not imaginable that it could work. So he's got to keep piling on excuses to increase the numbers. He does it one more time, pulling in the rates of fixation/elimination of mutations from the talk.origins article, to make it appear even less likely that his required collection of mutations could occur and fix in the population.
The thing is - by this argument, this very argument that he presents, the so-called "small evolution" changes are impossible. What does it take to modify the color of a moth's wings? It's a change in proteins. How much about the protein needs to change to change the color? You can pull out the same nonsense argument that he uses to show that it probably needs at least 5 base pairs of change, and that the genes coding the color have at least 100 base pairs, and that none of the point changes can have any effect until they're all there, and so on. Nothing in this argument has anything to do with
what he's claiming to show. He's arguing for a difference between micro- and macro-evolution; but
he doesn't distinguish them, and his silly little big numbers argument applies as well (or as poorly) to
micro as it does to macro.
What makes me particularly angry about this is that the guy is a computer scientist. One of the
standard curriculum requirements for CS is discrete probability theory and combinatorics. We're well
trained in this area. He must know how stupid this argument is. He knows how dishonest he's being. And he's using his credibility as a member of my profession to make this deliberately dishonest, slimy, pathetic argument.
Don't take it personally, or as an attack on your tribe, Mark.
NO profession is completely free of liars and fools.
And none is altogether without saints and geniuses.
I'm impressed that you could stomach reading all this stuff carefully enough to refute it. It's good that people like you are dedicated enough to do that.
It appears that this is the same David Plaisted who attempted to argue against Edward Max's "Plagiarized Errors and Molecular Genetics". (Available at TalkOrigins: http://www.talkorigins.org/faqs/molgen/ ) In that article, Edward Max links to David Plaisted's website and gives a rebuttal to his claims.
Things could be worse. He could be Arthur Butz...
Is it me or is he trying to divide evolution into "protected mode" and "user space" operations, with nature just having regular user permissions?
So does that make God the big admin in the sky?
David Plaisted is more precisely a multidenialist, he's a YECer.
This means he must deny the bulk of biology, geology, astronomy and physics. So of course he throws up a lot of quote mining and strawmen, which I'm not going to analyze in detail. It is old stuff apparently thrown together from a reading of Denton and Behe. For example, Denton's cytochrome-c argument is debunked here (mutation rates differs between species) and he uses Behe's hemoglobin argument similarly.
Plaisted conflates observations of evolution with theoretical definitions. Speciation is observed, for example in the fossil record when following transitional characters through lineages, but it doesn't describe what biologists usually mean by macroevolution.
Micro- and macroevolution have several definitions, but it seems the following are common: microevolution covers effects observed within populations (used and defined by population geneticists) and macroevolution covers large-scale effects (used and defined by anthropologists). AFAIK large-scale effects here are things like punctuated equilibrium and other effects on the background of transitional characters and speciation.
Thanks for the link, tinyfrog!
[Cont.]
That aside, Plaisted's argument for separating basic mechanisms is ridiculous. There is a reason that Darwin used the term "variation" to describe the distribution of traits selection works on. The talk.origins quote describes most. But AFAIU phenotypic plasticity may be used by evolution, for example by genetic assimilation and canalization. [See http://scienceblogs.com/pharyngula/2006/08/symmetry_breaking_and_geneti… .]
Btw, take a look at the figure, which illustrates an example of genetic assimilation. It shows independently observed routes taken by crab species to directional asymmetry (symmetry breaking) of claws (right or left claw larger).
[Either a mutation induces directed asymmetry which selection works on, the conventional route. Or a mutation induces undirected asymmetry which genetic assimilation works on, exchanging a selected phenotype for a directed mutation which can fixate.]
How does that figure in Plaisted's probability calculations? Answer: it doesn't.
Finally, Plaisted's big number argument. My specific note is that it is the usual creationist strawman of choosing a specific target, but he conceals it by an exhaustive search argument.
Oh, and he uses Behe's "irreducibility" trick: by considering only point mutations, he defines away the common redundancy in genes that arises from duplications during recombination. A copy of a duplicated gene can mutate a lot before taking on a new function.
I don't exist!
Why? Well, the probability that I live in the city I live in, given that I'm a human being, is about one in 4,000. The probability that I went to the college I went to is about one in 60,000. And the probability that I went to the high school I went to (of course, my high school was in the suburbs of the city I now live in) is about one in a million. (I'm making some very crude estimates of the number of alums of my high school and college, but that's not the point.)
Therefore, the probability of any given person being in all three of those categories is one in 240 trillion. There are only six billion people in the world. Therefore I have a one in 40,000 chance of existing.
I'm sure I can similarly disprove the author's existence, if you like.
The macro/micro evolution distinction does bring up an interesting graph-theoretic question.
Suppose we simplify things and pretend that the adult form of a creature is completely specified by the pattern of its DNA. Now take two creatures that are far removed from each other, say a human and a jellyfish. Does there exist a sequence of DNA patterns P_1, P_2, ..., P_n such that
(1) P_1 is the DNA pattern for a human.
(2) P_n is the DNA pattern for a jellyfish.
(3) Each transition P_i -> P_{i+1} involves
a change of a single base pair.
(4) For each i, P_i is the DNA of some viable
species.
(Viability is of course relative to the environment, but let's say broadly that a species is viable if somewhere, at some time in the prehistoric past, there was an environment in which the species could live long enough to reproduce).
Is it expected that any two species have such a sequence connecting them? It's not strictly speaking necessary, because a mutation might involve changes to multiple base pairs, and because sexual reproduction introduces a new combination of DNA that could differ from both parents in many spots. But I am wondering if the set of viable species are "path connected" in this sense.
Isabel -
I don't exist either!
In order for "me" to exist only one of the 10,000 eggs of my mother and only 1 of the 100,000,000 sperm of my father would do to make a "me". Then you have to add to that the improbability of my mother and father getting together in the first place...and then add to that the fact that they themselves are the product of 1 in 10,000 eggs and one of 100,000,000 sperm whose owners had to also get together, and so on and so on for further generations.
Clearly that number has got to be too huge for me to exist. Must be that my ancestors thousands of years ago planned for this moment all along.
LR,
Wow! I haven't heard that name in years. I was at Northwestern when Arthur Butz was giving his Holocaust denial arguments.
Now you know how we EE's feel about all the creationist crackpots in our profession. After all, we are also required to know physics and statistics and thermodynamics, etc. I find it astounding how many EE's seem to fall into Creationism and ID nonsense. How does that happen?
I would think he would want to keep his worlds more separated. His total lack of scientific ability demonstrated here surely makes me wonder if any of his professional work is worth the ink (toner) used to write it...
On a bit of an off-topic note, the static link to this post sums things up nicely.
"http:// ... /creationist_drivel_from_a_sob ..."
:)
Dave S.:
You've touched on one of my favorite arguments when dealing with creationists who seem to get upset at the idea of "randomness" being able to create anything in the universe.
If nothing can be created through random processes, then why did god make men produce millions of sperm? Why not a single sperm to impregnate a single egg?
We are all the product of randomness, but it doesn't necessarily argue against the existence of god. It DOES argue against some people's version of what they think god is though.
Does there exist a sequence of DNA patterns P_1, P_2, ..., P_n such that
I think a lot of the 'evolutionarily significant' mutations affect the frequency of gene translation in the species. Further, there seems to be a wide variety of genes an organism can tolerate. For example, I remember one study where the cytochrome-c gene in yeast was replaced with the cytochrome-c versions from humans, pigeons, horse, Drosophila fly, or rat. The yeast had no problem using these other versions of cytochrome-c and surviving.
Daryl McCullough @ #9: Does there exist a sequence of DNA patterns P_1, P_2, ..., P_n such that
Sure. Trace backwards to the last common ancestor, then forwards up the other branch. Every genome you produce along the way matches that of a critter that reproduced. (The proper environment for that critter may be hard to come by these days, but I don't think the LCA of humans and jellyfish predates the Oxygen Boom.)
Plaisted used to post in talk.origins back in the 1990s, but seems to have given up Usenet around the turn of the millennium, at least according to Google.
(sheds a quiet tear for its profession)
I feel like a biology professor seen talking to Michael Behe.
Plaisted's also the author of the execrable The Radiometric Dating Game, which was dissected by Dr. Kevin Henke in Comments on David Plaisted's "The Radiometric Dating Game" - Part 1, Comments on David Plaisted's "The Radiometric Dating Game" - Part 2, and Comments on David Plaisted's "The Radiometric Dating Game" - Part 3.
Thumbs up to the debunking here.
I was wondering though-- MarkCC, did you see this "Evolutionary Informatics Lab" site at the center of the latest Dembski/Baylor faceoff? It's ostensibly run by Robert J. Marks, a computer science professor at Baylor, and contains practically no content except three papers listed as cowritten by Marks and Dembski. Although the site mostly seems to be of importance as a way of Dembski wedging his way back into association with Baylor, the papers might be worth having someone look at due to the threat that the involvement of Marks might create the appearance that the findings of Computer Science in some way support IDC. (I am unsure how much of each paper was contributed by Dembski and how much by Marks, and I have not had a chance to read them myself except superficially).
The first two papers seem to just be Dembski's old misunderstandings of the NFL theorem restated, with some new and likely irrelevant references tacked on. The third paper seems to be a bit more unique, and might actually contain some conceivably meaningful ideas buried within it somewhere-- the paper concerns an attempt to analyze the performance of an evolutionary algorithm by considering it as modeled by a neural network. (It appears that Marks has done legitimate work on neural networks in the past.) Unfortunately, the paper seems to mostly be concerned with measuring a quantity of "information" and analyzing where that "information" came from, where "information" does not appear to be specifically defined in the paper or unambiguously correspond to any specific definition of "information" I am familiar with.
(The word "information" here might be intended to be taken in the same sense that the term is used in a referenced paper by someone named Thomas Schneider-- the Schneider paper was actually published, and the Marks/Dembski neural network paper seems to be essentially a response to it-- or, it might be just more Dembskian word salad. I haven't read closely enough yet to be sure.)
To #8: Faulty Logic. Non-sequitur. Bad math, also. Arrogance, perhaps? Lots of it on this page. Just stick to the subject and keep an open mind.
In real evolution, sure. The expectation from common descent becomes nested hierarchies (of genomes or phenomes), and following biological species populations branching out gives a genomic sequence. (I.e. the connection goes through a latest common ancestor population.)
The caveat is "biological species", since lateral transfers make lineages merge as well.
But for your imaginary species that passes through point mutations, I think not. Such effects as gene duplications (which violate Plaisted's assumptions) can have inviable imaginary intermediates.
Louie:
Ironic comment, since Isabel holds up a mirror to Plaisted's bad math.
I would like to see some incisive analysis as well, even though publication of Marks' & Dembski's papers doesn't seem likely.
This article gives more background on Marks' "lab" (essentially, a grant to get a PC) and the prospect of publication of the papers (not good).
In my comment there I refer back to a GMBM thread where secondclass made some pertinent observations.
The authors don't support their claim that biological evolution differs from other natural processes. (In that they must have preloaded information.) Not so interesting for evolutionary biology then but perhaps for computer science, they claim that a reduction of the search space constitutes "active information".
Specifically in the ev paper, they claim that the "ev" perceptron is a constraint. This is of course beside the point, since the genetic machinery it models evolved previously. Ironically, they preload their analysis and can't bootstrap out of it.
More seriously, Schneider discussed (on his ev page) the narrow window he found usable for simulating independent variation, while these authors have a graph that seems to go outside the recommended window and then complain about non-Gaussian behavior.
My analysis is that if they fix the obvious mistakes, this has no implications on biology (how could it have?) but it is also probably not an interesting analysis for computer science. I can't see what the method would be used for. It doesn't help you meaningfully analyze or improve search algorithms. (Admittedly, I'm no CS.)
Finally, I note that Robert Marks has not worked with evolutionary algorithms (EA) previously, or even general search algorithms outside possibly training neural networks which is a large part of his earlier work. The closest I can find to EA is a recent paper on collective swarm agents.
The issue of which proteins are one codon or amino acid different from an existing protein is referred to as "the adjacent possible" in the deep theories of Stuart Kauffman, whom I expect to one day win a Nobel prize for his work on theoretical evolution, so robust as to apply to Economics and Management as well as to Biology. He has books for the general (science-savvy) public, as well as very technical publications. Recommended.
Trip the Space Parasite writes: Sure. Trace backwards to the last common ancestor, then forwards up the other branch.
No, that doesn't demonstrate it. A parent differs from its child in more than one base pair.
Daryl McCullough:
I severely doubt it. As noted before, a big piece of evolution is duplication, which involves the change of potentially many thousands of base pairs at once. Trying to add each base pair one at a time is very likely to result in non-viable organisms.
Point mutation simply isn't as powerful as the anti-evolutionists make it out to be. ((Note, I'm not implying that you're one! ^_^ Just commenting in general.)) While it may be possible for some species to be path-connected in that sense, most of the intermediaries will have never before existed on this earth. In this sense, the graph is pretty meaningless.
MarkCC, can I second the request that other commenters on this thread have raised, please give Marks' work the sniff test. Specifically, his keynote speech where he seems to imply that genetic search methods impart "negative information". And this from a man with a published paper on using GAs to tune NN parameters.
Is Marks going to repudiate all of his old papers that used GAs because he now thinks GAs are worse than random search? or is he going to get on UD and explain to DaveScot and GilDodgen that yeah, evolution is an idea that can be dissociated from biology, abstracted, shown to be testable in the lab in reasonable time scales, and the results applied back to biology?
I'm not a biologist, but I noticed one pretty simple error in his theory. He's assuming that the only way to have a significant change is for something that's already important to be altered drastically.
But if it's going to be altered drastically, why couldn't it start out as something unimportant and become important through the alteration?
David vun Kannon:
Do you have any specific example? I gave his publication list a cursory glance but saw mostly neural networks (which may use GAs for optimization) and a paper on swarm agents.
Nice catch on "negative information", btw.