Probability and Evolution

Returning now to my radio debate with Sean Pitman, another issue that arose involved the use of probability theory in understanding evolution. Sean argued (indeed, it was really his only argument) that natural selection was incapable in principle of crafting complex adaptations. He chided me for not including in my book any probability calculations to show that natural selection can do what I say it can do. I replied that probability theory was simply the wrong tool for that particular job. Sean was aghast, suggesting, bizarrely, that this somehow rendered evolution unscientific.

In his own blog post about our debate, Sean wrote this:

However, as I read through the book, I was disappointed to discover that Dr. Rosenhouse had not included a single mathematical/probability argument in favor of the creative potential of the evolutionary mechanism of random mutations and natural selection. In fact, as is almost always the case for modern neo-Darwinists, he claimed, in his book and during our debate, that the modern Theory of Evolution is not dependent upon mathematical arguments or statistical analysis at all. In this, he seemed to argue that his own field of expertise is effectively irrelevant to the discussion – that, “It is the wrong tool to use.” Beyond this, he also explained that he wasn’t a biologist or a geneticist and that any discussion of biology would require bringing in someone with more expertise and knowledge of the field of biology than he had.

At this point I began to wonder why we were having a debate at all if his own field of expertise was, according to him, effectively irrelevant to the conversation and that he was not prepared to present arguments from biology or genetics regarding the main topic at hand – i.e., the potential and/or limits of the evolutionary mechanism of random mutations combined with natural selection.

This is all very muddled, and it pretty badly distorts what I said. Probability is the wrong tool for the job of determining whether natural selection can craft a complex adaptation like, say, a bacterial flagellum. That is a very long way from saying that mathematics in general, or probability theory in particular, is irrelevant to discussions of evolution. Moreover, it is very unclear what it means to say, “the modern Theory of Evolution is not dependent upon mathematical arguments or statistical analysis at all.” I certainly made no such claim. The modern theory of evolution receives contributions from many different disciplines, and mathematics is one of them. The field of population genetics is largely about applying probability theory to evolution, for example, and phylogenetic reconstruction relies heavily on statistics and combinatorics.
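
To make the population-genetics point concrete, here is a minimal sketch (my own illustration, nothing from the debate) of one of that field's standard probability calculations: Kimura's diffusion approximation for the chance that a new mutation eventually spreads to fixation. The parameter values are purely illustrative.

```python
import math

def fixation_probability(s, N, p=None):
    """Kimura's diffusion approximation: probability that an allele at
    initial frequency p eventually fixes in a diploid population of
    effective size N, given selection coefficient s."""
    if p is None:
        p = 1.0 / (2 * N)            # a single new mutant copy
    if abs(s) < 1e-12:               # neutral limit: u = p
        return p
    return (1 - math.exp(-4 * N * s * p)) / (1 - math.exp(-4 * N * s))

# A mildly beneficial new mutation (s = 0.01) in a population of 10,000
# fixes only about 2% of the time (roughly 2s); it is usually lost to
# drift despite being advantageous.
print(fixation_probability(0.01, 10_000))   # ~0.0198
print(fixation_probability(0.0, 10_000))    # neutral: 1/(2N) = 5e-05
```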

The question of whether natural selection can craft complex adaptations, which Sean is so keen to discuss, is actually both trivial and unimportant. Of course it can craft complexity; what on earth is the reason for thinking it cannot? Proofs of concept are easy to come by. The important question is whether it did craft complex adaptations in natural history. There is rather a lot of evidence to suggest that it did, as I discuss briefly at the end of this post. Not the least of that evidence is the routine success of adaptationist reasoning in biology.

This is typical of how these discussions play out. Creationists sit on the sidelines making bold, confident claims about what is possible and what is not. These claims are supported by nothing more than handwaving and misapplied jargon. Meanwhile, scientists go into the field and the lab, apply evolutionary reasoning to their work, and in this way solve problems and get results. As I noted during the debate, if massive amounts of physical evidence say something happened, but some abstract mathematical model says it cannot happen, then it is the model and not the evidence that should be discarded.

There are three obvious reasons why probability theory has no role to play in validating the creative abilities of natural selection. The first is that there are so many unquantifiable variables in natural history that a meaningful calculation would simply be impossible. Probability calculations take place in the context of a properly defined probability space. This means that you must have a grasp on all the things that might have happened in lieu of the event you are studying, and you must have some basis for assigning a probability distribution to that collection of events.

(For example, if you want to know the probability of rolling a one with a six-sided die, you need to know not only that there are six possible outcomes, but also that the die is not loaded in a way that makes certain outcomes more likely than others).

Good luck trying to define the appropriate probability space for studying the long-term development of natural history.

The second reason is that it is not clear what you should be finding the probability of. Should we determine the probability of evolving the modern vertebrate eye, or do we care instead about the probability of evolving some kind of organ for using light to glean information about the environment? Any specific adaptation might have a very small probability, but the probability of evolving some representative of a class to which that adaptation belongs could be rather large. So even if you could define an appropriate space, you would still have the problem of determining the relevant event within the space.

(As an aside, this is known as the reference class problem, and it arises in many attempts to apply probability theory to practical situations. This is especially the case when taking a frequentist approach to probability.)
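
To see the reference class problem in miniature, here is a toy computation of my own (a 20-bit string standing in for a genome; the numbers are illustrative, not biological): any one specific sequence is rare, but the class of sequences sharing some property can be quite common.

```python
from math import comb

n = 20                        # length of a toy binary "genome"
total = 2 ** n

# Probability of one particular sequence (analogue: one specific adaptation)
p_specific = 1 / total
print(f"one exact sequence:        {p_specific:.2e}")    # ~9.5e-07

# Probability of the class "at least 12 ones" (analogue: *some* member
# of a broad class of light-sensing organs, not one particular design)
p_class = sum(comb(n, k) for k in range(12, n + 1)) / total
print(f"any sequence in the class: {p_class:.3f}")       # ~0.252
```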

This leads naturally into the third problem. Let us suppose you could perform a relevant probability calculation and the result was a very small number. So what? What would that prove? Unlikely events occur all the time, after all. Any particular outcome of billions of years of evolution likely occurs with very small probability, but that is simply irrelevant to determining the credibility of evolution. The particular sequence of heads and tails you get when flipping a coin five hundred times is extremely unlikely, but something had to happen. The endpoints of eons of evolution are very much like that.
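
A one-line calculation (mine, for illustration) shows why a small number by itself proves nothing: every particular 500-flip sequence is equally and astronomically improbable, including the one that actually occurs.

```python
import random

# Flip a fair coin 500 times; some sequence *must* result.
flips = ''.join(random.choice('HT') for _ in range(500))
print(flips[:40] + '...')

# The probability of this exact sequence was 2**-500, about 3e-151,
# yet here it is. Improbability after the fact demonstrates nothing.
print(f"P(this exact sequence) = {2.0 ** -500:.3e}")
```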

Now, this is the point where ID folks might point to William Dembski, and start going on about “complex specified information.” They might argue that while certain events are of the “something had to happen” sort, others are not. If five hundred heads came up, you would reject the hypothesis that a fair coin had been flipped in a fair manner. Indeed, but that is simply a bad analogy. Dembski's attempts to define his notion of “specificity” in a useful, non-vague way that can be applied to biology (or much of anything for that matter) have been entirely unsuccessful. The relentless use of the term “complex specified information” by ID proponents, as though this term actually meant anything, is an example of what I meant in saying that evolution's critics rely frequently on misapplied jargon.

So that's the problem. We have no way of defining a relevant probability space. Even if we did, we would still have no way of selecting the relevant event. And even if we could get past those two problems, the number produced by our calculation would tell us nothing. And that is why probability theory is not helpful in this context.

I don't know why Sean makes such a fetish of probability. During the debate he said that my refusal to supply a probability calculation somehow rendered evolution unscientific, which is rather bizarre. Probability theory is wonderful stuff (my first book was mostly about probability theory) but it is hardly the last word on what is science and what is not.

I certainly agree that natural selection has never been observed to produce something as complex as the vertebrate eye. Intelligent agents have never been observed to bring universes into being or to create life from scratch, but Sean has no trouble believing that occurred. The fact remains that there is voluminous circumstantial evidence supporting the claim that natural selection can in principle and has in natural history produced complex adaptations. When you contrast this with the perfect vacuum of evidence supporting the existence of intelligent designers who can do what ID folks say they can do, it becomes clear why scientists are all but unanimous in preferring evolution over intelligent design.

"I don’t know why Sean P makes such a fetish of probability."
Ah, room for speculation. Basically it's the same as Sewell's fetish of thermodynamics: they long very much to point out a logical contradiction regarding Evolution Theory, no matter how. If it means they have to use math as a kind of magic wand, then they do it. Never mind that any mathematical model is only as strong as the assumptions it rests on. Then follows the logical fallacy that ID is correct if Evolution Theory is wrong.
Sorry for repeating myself, but creationists need to be hammered on their heads before they listen. To be taken seriously Sean P should first answer Adam Lee's two questions:

http://www.patheos.com/blogs/daylightatheism/essays/the-two-questions/

In addition to your demand for evidence supporting the existence of an Intelligent Designer, I demand that the ID movement develop tests to find out which version of creationism is correct:

http://en.wikipedia.org/wiki/Creationism#Types_of_creationism

Then come back.

During the debate he said that my refusal to supply a probability calculation somehow rendered evolution unscientific

Well, that's a nice little trick he has there. If he finds one person who professes acceptance of evolution but cannot provide a mathematical argument, suddenly the whole thing is off.

I guess the existence of people who don't speak English means that English is not a real language, then.

By Valhar2000 (not verified) on 24 Jan 2014 #permalink

The question of whether natural selection can craft complex adaptations, which Sean is so keen to discuss, is actually both trivial and unimportant. Of course it can craft complexity; what on earth is the reason for thinking it cannot?

And for those interested in an in-depth analysis, here’s a paper from Indiana University discussing that very question in specific terms. It’s a free PDF.

I found it by searching for "probability of complex adaptations", without quotes, and it was published in 2010. Conclusion: Sean Pitman lacks curiosity.

PS: Jason, could you delete my earlier post, currently at #3?

Nicely done, Jason. Excellent.

At root this comes down to fundamentalists seeking a uniquely privileged position in public policy making, as compared to all other possible stakeholders, including all other religious denominations.

The overwhelming majority of religious denominations and their adherents worldwide agree that the science is correct about evolution and the history of the universe. That includes those who believe that a deity may be ultimately responsible for the creation of the universe "before" the Big Bang. Our goal shouldn't be to convert all of them to rationalism; only to ensure that they agree that public policy and public education must remain neutral about issues of religion.

The way I'd go about it is to put the fundamentalists on the spot about their implicit claims on public policy, as follows:

First: "Do you believe that all other Christian denominations, numbering X-billion believers worldwide, are wrong in their acceptance of evolution and the 13.7 billion year old universe?" (Understood this is arguement-by-majority, but bear with me;-)

Next: go through a list of specific tenets of American Christian fundamentalism, that are considered wacko and extremist by all other Christian denominations, and ask them about each of those, one by one. This I would back up with direct quotes from primary sources (keyword search the names Rousas Rushdoony and Gary North, the "intellectual fathers" of the modern religious right; see also the site talk2action.org which is expert analysis of the religious right).

Lastly, "Why do you think that you deserve to have a privileged position in public policy, compared to all other denominations of Christianity, and all other traditions and beliefs in our society?"

The goal is to demonstrate to people in the vast mainstream, that the fundamentalists in question are so extreme, and so outside the mainstream, that their claims upon public policy are overtly disrespectful of everyone else's beliefs (whatever they happen to be).

This strategy bypasses the "lead a horse to water" problem of trying to argue evolution vs. creationism/ID. Instead of setting a dividing line between "science" and "religion," it sets the line between "religious extremists" and "everyone else." To my mind that's a path to successful outcomes with large constituencies that we otherwise might not reach.

You continue to engage him on his terms.
First he credits you with being a scientist. Then he acts incredulous that you don't seem to understand your own area of expertise (he doesn't need evidence; he just says it). Then he begins to lecture you on proper science. Then come the personal insults ("Rosenhouse's very neat imagination"): he is saying you don't understand science, so he has to lecture you on science. He makes a fetish of your expressed weakness. The point about probability not applying was not defended. Jason seems to agree that he (Jason) needs to be lectured on science.
He won and continues to win on the point that ID is a science.
MNb’s point is good. Perhaps the debate should be directed to choosing which view of religion is better.
The point he made about ID not predicting as a criterion for science indicates vulnerability. Notice how he quickly switches from "ID doesn't predict" to "natural selection cannot look into the future." But the model is evolution, not natural selection, which is a means, not a model. The model says there are missing links, and genetics says we can modify for our benefit. Does ID allow genetics to modify genes? Similarly he switches from the model to specific areas of inquiry that are tests of models.
Certainly, science has a lot to learn. Evolution indicates there are many new patterns to detect. One of the things to learn is what are the fundamental principles that form these patterns. Saying a designer did it is not science.
I don’t understand why we continue to debate in the science realm. Why don’t we debate whether ID is a religion?
G has some points. I disagree Jason did an excellent job. We should go lightly on specific science models (universe is 13.7 billion years old – some science disagrees and there are observations indicating this age is way too small).

Lenoxus--

Consider it deleted. And thanks for the link to the paper. It looks interesting.

John--

Don't be so melodramatic. I think it's interesting to discuss the relationship of probability to evolution, so I wrote this post. Any broader implications about whether ID is science or whatever are purely in your imagination.

Jason,

I really like your description of the relationship between probability and evolution; I’ll also take the chance to reiterate that most listeners to your “debate” (or whatever it was called) probably don’t understand these considerations, and came away either believing you were unable to explain it or confused about the question.

Structured, organized debates with creationists should not be avoided, but these kinds of unstructured discussions probably should be.

Anyway, thanks for your post.

sean s.

By sean samis (not verified) on 24 Jan 2014 #permalink

The question of whether natural selection can craft complex adaptations, which Sean is so keen to discuss, is actually both trivial and unimportant. Of course it can craft complexity; what on earth is the reason for thinking it cannot? Proofs of concept are easy to come by. The important question is whether it did craft complex adaptations in natural history. There is rather a lot of evidence to suggest that it did, as I discuss briefly at the end of this post. Not the least of that evidence is the routine success of adaptationist reasoning in biology.

The reason for thinking that RM/NS cannot “craft complexity”, at least not beyond very low levels of functional complexity, is because sequence space simply is not set up like it would need to be before any “crafting” could take place. In other words, your argument that the beneficial steppingstones in Lake Superior are all closely spaced and lined up in a neat little row simply doesn’t reflect reality. They are not lined up in this manner and there is no rational reason to think that this might have been the case. Your entire theory is, therefore, dependent upon an assumption that doesn’t reflect known reality.

As I noted during the debate, if massive amounts of physical evidence say something happened, but some abstract mathematical model says it cannot happen, then it is the model and not the evidence that should be discarded.

The problem here is that you’re the one presenting a mathematical model that doesn’t represent known empirical reality. My position, on the other hand, is backed up by real observations as to the nature of sequence space. It is therefore your model that is based on an erroneous mathematical model, not mine.

There are three obvious reasons why probability theory has no role to play in validating the creative abilities of natural selection. The first is that there are so many unquantifiable variables in natural history that a meaningful calculation would simply be impossible. Probability calculations take place in the context of a properly defined probability space. This means that you must have a grasp on all the things that might have happened in lieu of the event you are studying, and you must have some basis for assigning a probability distribution to that collection of events. (For example, if you want to know the probability of rolling a one with a six-sided die, you need to know not only that there are six possible outcomes, but also that the die is not loaded in a way that makes certain outcomes more likely than others). Good luck trying to define the appropriate probability space for studying the long-term development of natural history.

The size of sequence space is definitely known for various levels of functional complexity. Also, there is very very good evidence as to the ratio of beneficial vs. non-beneficial sequences in that space. Finally, there is also very good evidence as to the distribution clusters of beneficial sequences within beneficial islands within sequence space - and their minimum likely distances relative to the other islands within that space.

What you are basically saying here is that there is no way to estimate how long it will take for the evolutionary mechanism of RM/NS to produce anything at a given level of functional complexity. Everything you believe is based on “circumstantial evidence” which is largely irrelevant to the actual evolutionary mechanism. In other words, your evidence is largely interpreted based on what you think an intelligent designer would or would not do – not on what your mechanism could or could not do. In short, there really is no science or predictability, from your perspective, with regard to the creative limits or potential of your mechanism. You simply don’t know how to calculate or estimate such potential or limitations. Where then is your “science” when it comes to your assumed mechanism in particular?

The second reason is that it is not clear what you should be finding the probability of. Should we determine the probability of evolving the modern vertebrate eye, or do we care instead about the probability of evolving some kind of organ for using light to glean information about the environment? Any specific adaptation might have a very small probability, but the probability of evolving some representative of a class to which that adaptation belongs could be rather large. So even if you could define an appropriate space, you would still have the problem of determining the relevant event within the space.

Any and all discoveries of new beneficial islands or steppingstones count as success. The problem is that all of the various potential solutions to the problem at hand are very far away in sequence space. There is no solution that is significantly closer or more evolvable this side of a practical eternity of time at higher levels of functional complexity – all potential solutions are far, far too far away. That is why your argument that “The probability of evolving some representative of a class to which that adaptation belongs could be rather large” is simply not true. The probability simply is not improved to any significant degree by including all possible solutions to a problem within sequence space. That’s the key point here – all possible solutions, all possible beneficial steppingstones of any kind whatsoever, are too far away from any given starting point within sequence space at higher levels of functional complexity.

This leads naturally into the third problem. Let us suppose you could perform a relevant probability calculation and the result was a very small number. So what? What would that prove? Unlikely events occur all the time, after all. Any particular outcome of billions of years of evolution likely occurs with very small probability, but that is simply irrelevant to determining the credibility of evolution. The particular sequence of heads and tails you get when flipping a coin five hundred times is extremely unlikely, but something had to happen. The endpoints of eons of evolution are very much like that.

I’m genuinely surprised to see a mathematician with an interest in biological evolution produce this common, but mistaken, argument. It’s like saying that one shouldn’t be surprised if Arnold Schwarzenegger happens to win the California Lottery 10 times in a row. After all, unlikely events happen all the time!

You see, this argument, as presented by Dr. Rosenhouse, undermines science in general. It undermines the very concepts of predictive value and of estimating the likelihood that a particular hypothesis was actually responsible for, or the true explanation of, a particular event.

As another example, let’s say that there are 10 randomly distributed steppingstones, each measuring one meter square, within Lake Superior. Is it possible that a blind swimmer might swim directly from one to the other in a straight line without missing it? Yes, it is possible, but is it likely? Is it possible that this blind swimmer might swim directly to all 10 steppingstones in a row without a single error? Yes, this is also possible, but is it likely?

You see, science isn’t based on what is possible (almost anything is possible). Science is based on what is most likely... which is why this particular point that Dr. Rosenhouse presents highlights a fundamental misunderstanding of the very basis of science itself.

Now, this is the point where ID folks might point to William Dembski, and start going on about “complex specified information.” They might argue that while certain events are of the “something had to happen” sort, others are not. If five hundred heads came up, you would reject the hypothesis that a fair coin had been flipped in a fair manner. Indeed, but that is simply a bad analogy. Dembski’s attempts to define his notion of “specificity” in a useful, non-vague way that can be applied to biology (or much of anything for that matter) have been entirely unsuccessful. The relentless use of the term “complex specified information” by ID proponents, as though this term actually meant anything, is an example of what I meant in saying that evolution’s critics rely frequently on misapplied jargon.

The concept of functional or meaningful complexity is defined by many others besides Dembski – to include a number of mainstream scientists. And, the concept is not too hard to understand. Basically, it is based on the minimum size requirement to achieve a particular type of function, combined with the limitations or minimum flexibility allowed for the characters in the sequence as far as their arrangement is concerned. This is where the concept of “specificity” comes into play. It simply isn’t enough to have all the right characters for a sequence. These characters must also be properly arranged, relative to each other in 3D space, before the function in question can be realized to any useful or selectable degree of functionality.

I fail to see how the meaning of this concept is unclear. It is very clear. It is so clear in fact that small children can understand it. And, what is interesting and relevant here, is what a linear increase in the minimum size and/or specificity requirement of a meaningful/functional sequence does to the overall ratio of potentially meaningful/beneficial sequences in sequence space with the same minimum structural threshold requirements – i.e., the ratio of these sequences is reduced, exponentially, relative to the number of non-meaningful/non-functional sequences of any and all kinds.

I don’t know why Sean makes such a fetish of probability. During the debate he said that my refusal to supply a probability calculation somehow rendered evolution unscientific, which is rather bizarre. Probability theory is wonderful stuff (my first book was mostly about probability theory) but it is hardly the last word on what is science and what is not.

Again, as mentioned in our debate, science is and must be based on probability in all of its claims. Your notion that something only needs to be shown to be possible to be a scientific conclusion is simply not a scientific argument. One must also demonstrate the likelihood, not just the possibility, of a particular event to occur within a given span of time. If you can’t do this, then you just don’t have a scientific theory with regard to the creative potential of your Darwinian mechanism at various levels of functional complexity. You simply don’t know and cannot say how it will work or how long it will take for your mechanism to do anything at any level of functional complexity. All you have are bold claims, unsupported by either demonstration or relevant statistical calculations or extrapolations, regarding the creative potential of your mechanism.

I certainly agree that natural selection has never been observed to produce something as complex as the vertebrate eye. Intelligent agents have never been observed to bring universes into being or to create life from scratch, but Sean has no trouble believing that occurred. The fact remains that there is voluminous circumstantial evidence supporting the claim that natural selection can in principle and has in natural history produced complex adaptations. When you contrast this with the perfect vacuum of evidence supporting the existence of intelligent designers who can do what ID folks say they can do, it becomes clear why scientists are all but unanimous in preferring evolution over intelligent design.

What circumstantial evidence are you talking about that actually suggests that your mechanism of RM/NS did what you claim it did? Remember, arguing for common descent isn’t the same thing as arguing that your mechanism was responsible for the required changes over time. These arguments are often confused by evolutionists, but they simply aren’t the same thing.

As for your argument that, since ID has never been observed to do certain things, extrapolations are necessary: I agree – hence the title of my book, “Turtles All the Way Down.”

As the title of my book suggests, it’s either “turtles all the way up” or “turtles all the way down”. You claim that a mindless mechanism is the most likely explanation for all that exists while I claim that an intelligence source is the most likely explanation. In order to determine which claim is most likely true, one is required to extrapolate from very limited information. However, I believe that a reasonable extrapolation is possible based on what is currently known about which way the turtles are going.

In other words, is RM/NS known to be more or less creative than what known intelligent agents (i.e., humans) are able to produce? The answer is quite clear. The mechanism of RM/NS is far far less creative, in a given amount of time (observable time) than is ID. Intelligence can create very complex machines in very short order. This simply isn’t true for RM/NS.

The obvious question is, why not? Why is ID so much quicker than RM/NS beyond very low levels of functional complexity? Why does the mechanism of RM/NS show a truly exponential decline in creative ability with each linear step up the ladder of functional complexity? Well, the answer is quite clear for anyone who has carefully considered the nature of sequence space and noticed the exponential decline in the ratio of potentially beneficial vs. non-beneficial sequences, the isolated nature of clusters or islands of sequences with the same type of function – and how this isolation becomes exponentially more and more dramatic with each step up the ladder of functional complexity.

This observation can be extrapolated to get a very good idea as to the limitations of mindless mechanisms like RM/NS.
The same is true of ID. There are various levels of intelligence and knowledge. Ancient peoples would have considered some of our technology “miraculous” from their perspective. And there is therefore no reason to doubt that a few thousand years from now discoveries will be made that will seem truly miraculous from our current perspective.

Therefore, it seems like there is no theoretical limit for the creative potential of intelligent design, while there is a very clear limitation, that is actually measurable, for RM/NS.

By Sean Pitman (not verified) on 24 Jan 2014 #permalink

Sean Pitman wrote (#11) that “The reason for thinking that RM/NS cannot “craft complexity”, at least not beyond very low levels of functional complexity, is because sequence space simply is not set up like it would need to be before any “crafting” could take place.”

Nature does not “craft complexity”; it merely produces it. No pre-sequencing is needed; mutations are random changes, sometimes adding, sometimes deleting. There is no need to “line up beneficial stepping stones”.

Pitman also wrote that his “position ... is backed up by real observations as to the nature of sequence space.” Perhaps, but those observations are of the absence of things that don’t matter anyway.

Pitman also wrote:

The size of sequence space is definitely known for various levels of functional complexity. Also, there is very very good evidence as to the ratio of beneficial vs. non-beneficial sequences in that space. Finally, there is also very good evidence as to the distribution clusters of beneficial sequences within beneficial islands within sequence space – and their minimum likely distances relative to the other islands within that space.

Not for the first time, I’d love a cite to the evidence Pitman claims.

It also seems that Pitman believes that, for evolution to work, mutations would have to “swim from stepping stone to stepping stone” DIRECTLY. In fact, mutations would randomly find all the “beneficial islands” eventually.

One must also demonstrate the likelihood, not just the possibility, of a particular event to occur within a given span of time.

Please demonstrate the likelihood (not just possibility) that Sean Pitman would come into existence. Or that our Solar System would have precisely the number, makeup and distribution of planets, sub-planets, moons, etc.

While you’re at it, please demonstrate the likelihood (not just possibility) that a super-intelligence came into existence by natural processes.

sean s.

By sean samis (not verified) on 24 Jan 2014 #permalink

You have to know the pathway in order to calculate the probability. Consider:

The probability of ending up with 50 coins as "heads"
All coin tosses are 'fair' (50% heads, 50% tails, 0% edge-on)
One toss event per second

Algorithm 1: Take 50 coins. Toss the whole lot until you come up with all 50 as heads at the same time.

Algorithm 2: Take a single coin. Toss it until it ends up heads, then move on to the next coin.

Formal calculations left to the reader, but hopefully you can see that algorithm 2 has a much higher probability of success, and a much shorter mean time to success.
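
Here is a rough version of those calculations (my own sketch, under the commenter's stated assumptions of fair coins and one toss event per second).

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# Algorithm 1: toss all 50 coins each second; succeed only when every
# coin lands heads simultaneously. Success probability per event is
# (1/2)**50, so the mean waiting time is its reciprocal.
t1 = 2.0 ** 50                                        # seconds
print(f"Algorithm 1: ~{t1:.2e} s (~{t1 / SECONDS_PER_YEAR:.1e} years)")

# Algorithm 2: toss one coin until it lands heads, keep it, move on.
# Each coin takes 2 tosses on average (geometric distribution, p = 1/2).
t2 = 50 * 2                                           # seconds
print(f"Algorithm 2: ~{t2} s")
```

Keeping each partial success, which is what cumulative selection does, turns an expected wait of tens of millions of years into under two minutes.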

---
Creationists are fond of setting up unlikely examples, such as a 100-residue protein being assembled randomly. But their scenarios are unnecessarily restrictive and unrealistic. In biological organisms, proteins are not assembled randomly; they are assembled from one end to the other by a ribosome.

By Reginald Selkirk (not verified) on 24 Jan 2014 #permalink

How does "functional" complexity differ from complexity? Sticking a teleological adjective onto a word doesn't necessarily change anything. You are just assuming that it purposively produced when in actuality you have no evidence to support that.

By Michael Fugate (not verified) on 24 Jan 2014 #permalink

Jason: I think you missed an opportunity there. Probability _is_ relevant, and there are many examples of probability calculations refuting specific hypotheses. For example, whenever someone runs a homology search using BLAST (which happens more than 150,000 times _per day_!) a set of probability calculations is performed (one for each position in each of more than 82 million sequences in GenBank), comparing the null hypothesis that the query sequence is unrelated to the reference sequence to the alternative hypothesis that they are descended from a common ancestor. So in concrete examples (1) we know how to do these calculations; (2) they are reliable; and (3) they are extremely useful, giving both positive and negative results in such a way as to provide useful information to biologists. This is the most heavily used example, but of course there are many others, including ones that look at phenotype rather than genotype and so are closer to the fuzzier example you were discussing.

Just because you were discussing a fuzzy question that couldn't easily be formalized shouldn't prevent you from bringing up a more concrete example that demonstrates the principle.
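
For readers who want to see what one of those calculations looks like, here is a minimal sketch of the Karlin-Altschul E-value formula at the heart of BLAST's statistics. The lambda and K defaults below are the commonly quoted ungapped BLOSUM62 values, and the query length, database size, and score are made up purely for illustration.

```python
import math

def blast_evalue(score, m, n, lam=0.318, K=0.13):
    """Karlin-Altschul statistic: expected number of chance alignments
    scoring at least `score` between a query of length m and a database
    of total length n. lam and K depend on the scoring system; these
    defaults are illustrative ungapped BLOSUM62 values."""
    return K * m * n * math.exp(-lam * score)

# A raw score of 100 for a 300-residue query against a 5e10-residue
# database is expected to arise by chance ~0.03 times, so such a hit
# is good evidence of common ancestry.
print(f"E = {blast_evalue(100, 300, 5e10):.2e}")
```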

Sean Pitman--

The reason for thinking that RM/NS cannot “craft complexity”, at least not beyond very low levels of functional complexity, is because sequence space simply is not set up like it would need to be before any “crafting” could take place. In other words, your argument that the beneficial steppingstones in Lake Superior are all closely spaced and lined up in a neat little row simply doesn’t reflect reality. They are not lined up in this manner and there is no rational reason to think that this might have been the case. Your entire theory is, therefore, dependent upon an assumption that doesn’t reflect known reality.

I am not a biochemist, but it seems pretty obvious that you cannot possibly make a good argument for your claims here. We can do no more than study minuscule portions of protein space, and that only in modern organisms. The precise nature of protein space is itself something that evolves with time, which complicates things considerably. The fitness of a gene that codes for a given protein is often dependent on the environment in which it finds itself. The reachability of a given gene likewise depends on what happened previously in natural history. Furthermore, the overall size of the space is simply irrelevant, as I explained during the radio show. Natural selection guarantees that most of the space will never be explored in natural history, while guiding organisms to the functional genes. The result is a vast space where we have no good way of assigning a probability distribution, precisely as I said in my post.

That's an in principle argument for being highly skeptical of big bold claims about the nature of sequence space. When you then factor in the myriad practical successes in the field of molecular evolution, and the fact that not many biochemists seem to share your view, it looks like you are once again just waving your hands.

The problem here is that you’re the one presenting a mathematical model that doesn’t represent known empirical reality. My position, on the other hand, is backed up by real observations as to the nature of sequence space. It is therefore your model that is based on an erroneous mathematical model, not mine.

What could you possibly be talking about? My argument is that the numerous practical successes of evolutionary thinking in biology point strongly to its correctness. You're the one waving your hands and making grand, but indefensible, claims about the nature of sequence space.

The size of sequence space is definitely known for various levels of functional complexity. Also, there is very very good evidence as to the ratio of beneficial vs. non-beneficial sequences in that space. Finally, there is also very good evidence as to the distribution clusters of beneficial sequences within beneficial islands within sequence space – and their minimum likely distances relative to the other islands within that space.

The size of the space is irrelevant, as I have already explained. Your other claims are nonsense. At most we can make some judgments about small, local areas of sequence space as we see them in modern organisms and modern environments. That's plainly insufficient for drawing grand conclusions about the viability of evolution.

What you are basically saying here is that there is no way to estimate how long it will take for the evolutionary mechanism of RM/NS to produce anything at a given level of functional complexity. Everything you believe is based on “circumstantial evidence” which is largely irrelevant to the actual evolutionary mechanism. In other words, your evidence is largely interpreted based on what you think an intelligent designer would or would not do – not on what your mechanism could or could not do. In short, there really is no science or predictability, from your perspective, with regard to the creative limits or potential of your mechanism. You simply don’t know how to calculate or estimate such potential or limitations. Where then is your “science” when it comes to your assumed mechanism in particular?

You absolutely insist on discussing this at a highly abstract level. But for actual biologists this is not an abstract question. They do not apply natural selection as some vague principle in their work. Instead they do the hard work of studying actual complex systems, and in every case their findings are the same. They find that once the systems are well understood, and once similar systems in other organisms are studied and understood, plausible gradualistic scenarios inevitably appear. Those scenarios are hardly the end of the story, however, as I explained to you during our radio debate. Once you think you have a good scenario for how something evolved, that scenario can be used to generate testable hypotheses. Subsequent testing of these hypotheses then leads to new knowledge. This type of reasoning has been applied so frequently and so successfully that, if it is fundamentally flawed, we must conclude scientists are getting mighty lucky.

The reason scientists routinely find plausible gradualistic scenarios is that these complex systems all carry clear evidence of their evolutionary past. We are not talking about “design flaws” in some abstract sense and we are not trying to psychoanalyze some hypothesized creative supermind. Instead we are talking about structures that are hard to understand from the standpoint of what a human engineer would do, but are easy to understand once the history of the structure is taken into consideration. Not one or two examples, but every complex structure studied to date. Apparently it amused the designer to create in a way that perfectly mimics what we would expect if these systems were actually produced gradually by natural selection.

There's so much more, of course. In some cases, like the mammalian inner ear, we have strong evidence from paleontology and embryology to show how a complex structure evolved gradually. Likewise for molecular evolution where, in cases like anti-freeze proteins in fish, we have strong evidence for how the proteins evolved from simpler precursors. I could point also to the success of game theoretic models in ethology. In every case scientists are approaching their work with theoretical models based on an assumption of natural selection, and they get results. This consistent success, again, is mighty coincidental if the theory is just fundamentally flawed.

Yes, the evidence is circumstantial, since this process plainly takes too long to be observed in toto. That's not biologists' fault. Nor is it their fault that the metaphor of sequence space, useful in many contexts, is not so useful for drawing grand conclusions about the viability of evolution.

The relentless daily successes of reasoning based on natural selection point strongly to the conclusion that the theory is substantially correct, which in turn points strongly to the conclusion that sequence space is not structured the way you say it is. The physical evidence points strongly in one direction. That your abstract but groundless claims point in another is neither here nor there.

I’m genuinely surprised to see a mathematician with an interest in biological evolution produce this common, but mistaken, argument. It’s like saying that one shouldn’t be surprised if Arnold Schwarzenegger happens to win the California Lottery 10 times in a row. After all, unlikely events happen all the time!

Oh please. Since I plainly discuss this point in my next paragraph, you have a lot of nerve cutting me off where you did. The whole question at issue is whether the endpoints of evolution are like getting 500 heads on 500 tosses of a coin, or whether they are more like firing an arrow into a wall and then painting a target wherever it lands. You claim it is the former; more sensible people claim it is the latter. And that is why the end result of any probability calculation you carried out would be irrelevant.

The concept of functional or meaningful complexity is defined by many others besides Dembski – to include a number of mainstream scientists. And, the concept is not too hard to understand. Basically, it is based on the minimum size requirement to achieve a particular type of function, combined with the limitations or minimum flexibility allowed for the characters in the sequence as far as their arrangement is concerned. This is where the concept of “specificity” comes into play. It simply isn’t enough to have all the right characters for a sequence. These characters must also be properly arranged, relative to each other in 3D space, before the function in question can be realized to any useful or selectable degree of functionality.

But the issue wasn't merely coming up with a definition of functional complexity. It was doing so in a manner that is in any way relevant for determining what natural selection can do with eons in which to work. Show me in concrete terms how the definitions you've produced here permit a calculation that shows natural selection to be ineffective, and then I will be impressed. This is precisely what William Dembski attempted to do, but his work was so shot through with false assumptions and vague premises that it did not amount to much.

I fail to see how the meaning of this concept is unclear. It is very clear. It is so clear in fact that small children can understand it.

It does not reflect poorly on me that you are arguing at a childish level.

Again, as mentioned in our debate, science is and must be based on probability in all of its claims. Your notion that something only needs to be shown to be possible to be a scientific conclusion is simply not a scientific argument. One must also demonstrate the likelihood, not just the possibility, of a particular event to occur within a given span of time. If you can’t do this, then you just don’t have a scientific theory with regard to the creative potential of your Darwinian mechanism at various levels of functional complexity.

Once again, what on earth could you possibly be talking about? Where did I ever suggest that something only has to be possible for it to be a scientific conclusion? How could my post have been more clear that there is a distinction between the question of whether natural selection can produce complex adaptations and the question of whether or not it did?

Since creationists routinely claim to have devised in principle arguments for the insufficiency of natural selection, I think it is perfectly appropriate to argue at a theoretical level to show that all such arguments are mistaken. That natural selection is the correct explanation in practice, on the other hand, is shown by the sorts of evidence I described previously.

Your further remarks in this paragraph strike me as very confused. Science makes rather a lot of claims that do not depend on probability in any way, so I don't know where you came up with this idea that probability is the most important thing there is. And since unlikely things occur all the time, I don't see why I have to show that an event was likely to occur before I can conclude that it happened. Moreover, showing that something is likely or unlikely rarely involves performing an actual probability calculation. Usually you just follow the evidence where it leads, and if it points strongly to the conclusion that something happened then that's good enough. Abstract probability calculations are irrelevant in most cases.

I think I have already addressed your other remarks, so I will stop there.

It takes a very long time to write these comments, so this will be my last reply to you in this thread. If you choose to reply then I will certainly read it, but I will let you have the last word. These blog discussions can go on endlessly, but they have to be cut off somewhere.

"You claim that a mindless mechanism is the most likely explanation for all that exists while I claim that an intelligence source is the most likely explanation."
Note how Sean P gives William Ockham the middle finger here. He is the one postulating an extra entity. So of course he is the one who should either provide direct empirical data (which is impossible, because that intelligent source is defined as an immaterial being) or show how this is the deductive result of a coherent and consistent theory (which he can't provide, as there are no tests possible to decide which version of ID/creationism is the correct one).

@17 to 19
Now we're getting it. It needs to be done in a public forum such as radio. I hope Nye the Science Guy is getting this.

By John (not verified) on 24 Jan 2014 #permalink

In reply to MNb (not verified)

konrad --

If you would care to read my post again I think you will find that I do mention areas where probability and statistics are relevant to evolutionary biology. I did not belabor the point since it was not really relevant to my argument. My claim was that probability theory is not relevant to the question of determining whether natural selection can craft complex adaptations in principle, not that probability theory has no applications at all to evolution.

Yes, I did notice that you are aware of that, and I am not disagreeing with any of the points you made. Instead, my point is about debating strategy: in such a discussion you do have some leeway to bring up related examples - instead of saying "that's a fuzzy question, crisp methods do not apply", you can say "that's a fuzzy question, but here's a related crisp question that we can answer - if you could come up with a crisp version of your question we could answer it too". The point is that the question was asked by your opponent, so the onus of making it crisp lies on him. Until he does so, just point out that his question is ill-posed (hence unscientific) and substitute a better one of your own choice.

Thanks for the scrubbing, Jason.

Sean Pitman, have you addressed the problem that asking something like "What is the probability of complex adaptation X evolving?" ignores the reference-class problem? Namely, had things gone differently, we might be talking about some other complex adaptation.

When you draw two pair in poker, the relevant question is usually not "How likely was I to draw 3H-3C-7D-7C-8H?" but "How likely was I to draw two pair?", or even more relevant, "How likely was I to draw something better than high card?" (The answer in five-card draw is about 50%, with seven-card draw naturally being better.) In other words, you have to ask yourself, "In how many possible universes would I be marvelling at this unlikelihood?" If the answer were "all of them", then of course the unlikelihood doesn't seem so impressive. I'll be generous and allow for "Not many of them", but you still have to grant that it's more than the number of universes in which, say, an eye evolves, or any other particular complex event.
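
Those poker numbers are easy to check exactly; here is a short computation of my own using the standard hand counts for five-card draws.

```python
from math import comb

total = comb(52, 5)                                  # 2,598,960 hands

# One specific hand, e.g. 3H-3C-7D-7C-8H:
print(f"exact hand:            {1 / total:.2e}")     # ~3.8e-07

# Two pair: pick 2 ranks for the pairs, 2 suits within each pair,
# then a kicker of one of the 11 remaining ranks in any of 4 suits.
two_pair = comb(13, 2) * comb(4, 2) ** 2 * 11 * 4    # 123,552
print(f"two pair:              {two_pair / total:.4f}")        # 0.0475

# "Better than high card" = 1 - P(no pair, no straight, no flush).
high_card = (comb(13, 5) - 10) * (4 ** 5 - 4)        # 1,302,540
print(f"better than high card: {1 - high_card / total:.4f}")   # 0.4988
```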

In my unscientific opinion, the biggest bottlenecks for evolution are the initial development of life and the survival of major extinction events. Once both of these hurdles have been crossed, "complex" adaptations of some kind are inevitable. Even particular adaptations, which have happened multiple times on our planet, can be anticipated as generally "bound" to occur. I would guess, for example, that flight either has arisen or will arise on some other planet in our universe (although the variable viscosity of planetary atmospheres may render the distinction between "flying" and "swimming" a matter of debate).

Regardless, ID and its falsehood are both sufficiently ill-defined that we can't make a Bayesian equation that takes into account the prior probability that a life-designer existed, and further, the prior probability of complex adaptations given an intelligent designer. Similar broad questions about evolution are also tricky, but specific ones, like "What is the probability of population X following one of these evolutionary pathways?", can be mathematically answered.

All that makes for a tremendous difference from Schwarzenegger winning the lottery 10 times. Schwarzenegger definitely exists and is well-defined, lottery fraud is a known and definable activity, and given its history it probably wouldn't be too hard to calculate the probability that a governor would be involved in such a fraud.

Appeals to a designer, whose existence and definition are apparently based on the negation of alternatives, lack these advantages in comparison to other probability-based arguments. However unlikely something seems, we can't assert a vague "non-material" designer as the "likeliest" explanation, unless we are prepared to do so in all instances, including the lottery example!

At best, in extreme cases, we can resort to "cause unknown at present". That's more honest than relabeling "unknown" with appealing detail-free connotations such as "it was designed, like how humans design things". One may as well say "It's obviously fictional, like a novel." Nothing has been explained.

I should add that of course Jason already touched on the reference-class point, I'm merely extending it in a different direction. We could ask "How likely is vision-of-some-kind?", which may involve simpler and hence "likelier" means of accomplishing the same task. Or (among many, many 'or's) we could ask "How likely is the current number of adaptations-that-seem-complex-to-us?"

(For example, maybe IDists can identify 100 such adaptations. In a design-free, evolution-only universe, how many could be expected on average? 0, because they're downright impossible? Or maybe just close to 0, because they're just extremely unlikely? My own estimate is, of course, about 100, because some people are bound to be impressed by something or other, but those same people would find little point in counting more than 100 examples.)

@8 Ditto - ID of course is a religious concept.

I've found that even among those who accept evolution, many are still implicit IDers because they insist that the universe has some overarching meaning or purpose. This can only be so if there is some intelligence in a so-called Platonic realm that ensures that meaning, purpose and morality exists. Atheists are often not consistent enough to follow atheism to its logical conclusion. If you are going to implicitly allow ID in the realm of 'meaning' or morality, then it is a trivial step to admit it in the physical realm. Of course, it is a useless concept scientifically, but it provides comfort and succor to believers - which is its purpose.

We should continually press the conclusion that the universe is completely accidental and pointless without meaning or purpose and that there is no Platonic world that ensures the reality of morality, meaning or purpose.

Sean: "Again, as mentioned in our debate, science is and must be based on probability in all of its claims."

Absurd. You only have to look at some scientific papers to see that most don't contain the sorts of probability calculations that you're demanding. Most real-world systems are far too complex to calculate meaningful probabilities of this sort. Vulcanologists can draw conclusions about the processes of volcano formation without calculating the probability that these processes would produce the volcanos we see. Astronomers can draw conclusions about the processes by which our solar system was formed without calculating the probability of getting the configuration of planets we actually observe. And of course IDists cannot give such a probability calculation for their own hypothesis, so you are applying a double standard.

Of course, if there are valid probabilistic objections to a particular theory/hypothesis, then it may have to be rejected. But that's very different from demanding (unreasonably) that a theory/hypothesis cannot be accepted unless it can be idealized to such a degree of simplicity that a meaningful probability calculation is possible. Accepting such a demand would eliminate much of science (and almost all of history).

As for your particular objection to evolution theory, it all comes down to the assertion that functional sequences in protein sequence space are divided into islands too isolated for evolution to traverse. But that assertion is made, I believe, by just one appropriate expert, Douglas Axe. Non-experts like me (or you) should not accept the word of one expert over that of the vast majority of experts, who don't accept Axe's conclusions, especially given the vast amount of other evidence in support of evolutionary theory.

By Richard Wein (not verified) on 25 Jan 2014 #permalink

sean samis:
It also seems that Pitman believes that, for evolution to work, mutations would have to “swim from stepping stone to stepping stone” DIRECTLY. In fact, mutations would randomly find all the “beneficial islands” eventually.

That's just my point. Random mutations simply don't "swim" from one steppingstone to another in a straight line. They move through sequence space, well, randomly. This means, of course, that for every linear increase in the distance between the starting point and the next closest beneficial target within sequence space, the random walk time increases exponentially.

Of course, you argue that a beneficial target will be found, eventually. And, that's true. It is just that, at higher levels of functional complexity, this "eventually" is trillions upon trillions of years...

By Sean Pitman (not verified) on 25 Jan 2014 #permalink

My basic question: does Sean know that what he says is pure crap but still repeats it to satisfy the believers, or is he really as uninformed about science as he seems?

Sean Pitman,

I realize what your point is, do you realize that your point is wrong?

Random mutations simply don’t “swim” from one steppingstone to another in a straight line. They move through sequence space, well, randomly. This means, of course, that for every linear increase in the distance between the starting point and the next closest beneficial target within sequence space, the random walk time increases exponentially.

You do realize that “sequence space” is just a metaphor, correct? That the “distance” between any two “points” is not measured in time or distance, but in random steps? And that many “points” in this “sequence space” represent thermodynamically impossible “points” and that others represent thermodynamically favored “points”? So the random walk is constrained and channeled to favor certain thermodynamic “paths”? Given that, and the billions of “swimmers” who launch from any “beneficial target”, the time expected for at least one “swimmer” to reach the next “beneficial target” is drastically reduced.

you argue that a beneficial target will be found, eventually. And, that’s true.

Given this realization on your part, you need only realize one more thing: given the millions upon millions of “beneficial targets”, the billions of “swimmers”, and the fact that every time a “beneficial target” is reached billions more “swimmers” start out from it, the time for at least one swimmer to reach any particular “beneficial target” is much briefer than you represent, briefer by several orders of magnitude.
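A toy sketch of this parallel-searcher point, in Python. It assumes nothing about biology: just independent +/-1 random walkers on the integers, a fixed target, and arbitrary illustrative numbers. The mean time until the first of N walkers reaches the target shrinks as N grows.

import random

def first_hit_time(n_walkers, distance, max_steps=10_000):
    # Steps until ANY of n_walkers independent +/-1 random walks,
    # all starting at 0, first reaches +distance (capped at max_steps).
    positions = [0] * n_walkers
    for step in range(1, max_steps + 1):
        for i in range(n_walkers):
            positions[i] += random.choice((-1, 1))
            if positions[i] >= distance:
                return step
    return max_steps  # target not reached within the step budget

random.seed(1)
for n in (1, 10, 100):
    trials = [first_hit_time(n, distance=10) for _ in range(20)]
    print(n, "walkers -> mean first-hit steps:", sum(trials) / len(trials))

Whether that reduction suffices in practice depends entirely on how rare the targets really are, which is the actual number in dispute.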

sean s.

By sean samis (not verified) on 25 Jan 2014 #permalink

dean asked, “does Sean know that what he says is pure crap but still repeats it to satisfy the believers, or is he really as uninformed about science as he seems?”

To achieve any useful results, one simply MUST take the other person’s statements at face value and presume that they honestly express their beliefs. It could be that Sean Pitman is deliberately trying to deceive, but there’s no value in working from that assumption, so I don’t. And if he is that mistaken about science then he is only one of many. That’s not a reason to be angry or rude to him.

sean s.

By sean samis (not verified) on 25 Jan 2014 #permalink

Reginald,

“Creationists are fond of setting up unlikely examples, such as a 100 residue protein being assembled randomly. But their scenarios are unnecessarily restrictive and unrealistic. In biological organisms, proteins are not assembled randomly, they are assembled from one end to the other by a ribosome.”

To actually address the problem of random assembly and functional results, you should probably begin with the random assembly of a functioning ribosome.

===

Michael Fugate,

“How does “functional” complexity differ from complexity? Sticking a teleological adjective onto a word doesn’t necessarily change anything. You are just assuming that it purposively produced…”

Functional complexity could mean a normal protein doing what it is supposed to do. On the other hand, an abnormal, dysfunctional protein could still qualify as a complex assembly.

===

On probability, it is best to view natural selection as uniform and focus on random mutations. Since they are discrete events, they are easier to appraise with regard to development.

For instance, if modern whales descended from Pakicetus, there was a time when early forms did not have bio-sonar and a later point in time when toothed-whale forms had acquired an integrated system.

There are many parameters bearing on the probability of this happening. There are time limits, and limits on the number of generations. While gestation periods, mortality rates and population sizes have to be considered, reasonable estimated values could arguably be used.

But there is also a minimum number of fortuitous mutations involved, and everything known about DNA replication errors has to be taken into account. The list of factors amounts to a collection of formidable restrictions. When you get away from bacteria, coin-flipping and all the enchantments about natural selection, and actually try to analyze the actual evolutionary change mechanism, it gets very nasty. I think this is why such analyses simply aren’t conducted. They are painful.

Agreed, and if my question was overly blunt and came across as angry or rude, chalk that up to poor organization on my part and accept my apologies.

Perhaps a better question: what does it take to determine whether one is simply uninformed versus intentionally dishonest? Is that ever possible?

Phil, that is a teleological answer - how do you know what a protein is supposed to do?

By Michael Fugate (not verified) on 25 Jan 2014 #permalink

Sean:

You see, science isn’t based on what is possible (almost anything is possible). Science is based on what is most likely…

This is, of course, totally untrue. A notorious limitation of statistical significance testing is that it doesn't tell us what results are most likely to be observed if the null hypothesis is false, let alone what hypothesis is most likely to be correct given a particular set of observations. This is precisely because, as Jason said, properly defining a probability space over the set of "everything science might observe and every hypothesis that might explain it" is ludicrously impossible.

Moreover, even if it were true, it would be even more clear that intelligent design cannot be scientific. As Jason points out here, we have no scientific grounds whatever for claiming that an intelligent being with the power, wisdom and motivation to create the universe (or Earth's biosphere, or the platypus, or whatever it is you believe could not have been created by unintelligent mechanisms) is likely to have existed.

Science can estimate (albeit very roughly) the probability of something being the result of human activity, because it is already relatively well-known where and when humans can be found, and what their habits and capabilities are. No such calculations can be made for a hypothetical intelligence with unknown origin, habits, capabilities, and limitations. We simply have no idea what is "most likely" in that case.

They move through sequence space, well, randomly. This means, of course, that for every linear increase in the distance between the starting point and the next closest beneficial target within sequence space, the random walk time increases exponentially.

Nope, that's not generally true of random walks. It's true of simple symmetric random walks, but of course the evolution of a DNA sequence is neither simple nor symmetric, since a) individual mutations may alter more than a single nucleotide and b) all mutations are not equally likely to persist under natural selection. Therefore, your claim simply does not hold.

Remember what Jason said above about the difficulty of defining probability spaces, and events within those spaces? Simply calling a variable "random" is not enough to tell you how it is going to behave over time, statistically speaking. Random variables have all sorts of behavior patterns, depending on their other properties.
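To illustrate with a minimal sketch (a toy 1D walk with a reflecting barrier at the origin, not a model of sequence space; all numbers are arbitrary illustrative choices): an unbiased walk's mean hitting time grows roughly quadratically with the distance, while even a modest forward bias, standing in here for selection, makes it grow roughly linearly.

import random

def hitting_time(distance, p_forward, max_steps=200_000):
    # Steps for one walker on the integers (reflecting barrier at 0)
    # to first reach +distance, stepping +1 with probability p_forward.
    pos, step = 0, 0
    while pos < distance and step < max_steps:
        pos += 1 if random.random() < p_forward else -1
        pos = max(pos, 0)
        step += 1
    return step

random.seed(0)
for d in (10, 20, 40):
    unbiased = sum(hitting_time(d, 0.5) for _ in range(50)) / 50
    biased = sum(hitting_time(d, 0.6) for _ in range(50)) / 50
    print(d, round(unbiased), round(biased))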

There is a field, computational phylogenetics, which studies the statistical behavior of evolving genotypes. Unsurprisingly, the consensus within that field is that evolutionary theory is perfectly adequate to explain the development of complex structures.

By Anton Mates (not verified) on 25 Jan 2014 #permalink

dean asked, “what does it take to determine whether one is simply uninformed versus intentionally dishonest? Is that ever possible?”

I guess I don’t see the value of that determination. If it’s just you and the other person talking, and you start to go round and round, better to just acknowledge that fact and go on with your life.

If you and the other are conversing with an audience (like in this blog) then there’s no value to calling the other out as a liar. Keep directing your audience to the other’s false statements and let the audience conclude the other is a liar. Once YOU call them a liar, your chances of persuading anyone drop pretty close to zero.

It’s an old rhetorical rule: you cannot offend and persuade at the same time. If there’s an audience, some will be on your “side” and some will be on the other “side” and some will be undecided. Calling the other a liar will offend everyone on the other side and most in the middle unless the lies are glaringly obvious—in which case there’s no need to say anything about it anyway; everyone already knows.

So protect your credibility; politely pound away at the facts; liars will out themselves, which is the sweetest punishment to inflict.

sean s.

By sean samis (not verified) on 25 Jan 2014 #permalink

Anton: "This is, of course, totally untrue. A notorious limitation of statistical significance testing is that it doesn’t tell us what results are most likely to be observed if the null hypothesis is false, let alone what hypothesis is most likely to be correct given a particular set of observations."

To be fair to Sean, that seems to be a response to a very specific and implausible interpretation of his comment. He didn't say anything about significance testing. I would just say that his statement ("Science is based on what is most likely") is too vague to be of any use or relevance. I think he was misinterpreting Jason as having said that we can pick any explanation we like, as long as it isn't totally impossible. Of course that's not what Jason was saying.

If we choose to think in terms of posterior probability (the probability something is true given all the evidence) then of course scientists try to choose the most probable explanation in that sense, the one that is most likely to be true. But that's hardly an interesting observation. More likely Sean is thinking of probability as a propensity to occur. The fit racehorse has a higher propensity to win than the old nag, so we say it has a higher probability of winning. But come the end of the race we don't automatically conclude that the fit racehorse won. If the evidence (e.g. visual observation) points to the fact that the old nag won, we accept that it won, despite that result having been unlikely a priori.

Sean is making some rather naive assertions about probability instead of concentrating on the evidence.

By Richard Wein (not verified) on 25 Jan 2014 #permalink

Michael,

"that is a teleological answer - how do you know what a protein is supposed to do?"

All kinds of (purposeful) research is conducted concerning dysfunctional proteins. They recognise the results of proteins not doing what they are supposed to do.

Jason Rosenhouse wrote:

I am not a biochemist, but it seems pretty obvious that you cannot possibly make a good argument for your claims here [regarding the limits of RM/NS with regard to levels of functional complexity]. We can do no more than study minuscule portions of protein space, and that only in modern organisms. The precise nature of protein space is itself something that evolves with time, which complicates things considerably. The fitness of a gene that codes for a given protein is often dependent on the environment in which it finds itself. The reachability of a given gene likewise depends on what happened previously in natural history. Furthermore, the overall size of the space is simply irrelevant, as I explained during the radio show. Natural selection guarantees that most of the space will never be explored in natural history, while guiding organisms to the functional genes. The result is a vast space where we have no good way of assigning a probability distribution, precisely as I said in my post.

You claim that natural selection guides organisms to functional genes. The problem with this notion is, of course, that natural selection cannot guide, in a positive manner, the random mutations of a mutating sequence at all, not even a little bit, until the mutations happen to hit upon a novel sequence that is actually functionally beneficial over what already exists within the gene pool. Until this happens, natural selection is completely helpless in the process of searching for the edges of novel beneficial islands in sequence space. It simply has no part to play aside from preserving what already exists. It is therefore more of a preserving force than a creative force of nature. RM/NS simply stalls out, in an exponential manner, with each step up the ladder of functional complexity. That is why there are no examples of evolution in action beyond a relatively low level of functional complexity. We aren’t talking about the evolution of highly complex machines here – like the human eye or the mammalian ear. We are talking about the lack of evolution of any qualitatively novel system that requires more than just 1000 specifically arranged residues. That’s not a very high level of functional complexity. That’s a very low level of complexity beyond which evolution simply stalls out – despite huge population sizes in bacteria, high mutation rates, very rapid generation times and very high selection pressures. Despite all of these things favoring evolutionary progress at higher levels, the mechanism of RM/NS completely stalls out on a rather low-level rung of the ladder of functional complexity. Why do you think that might be? – if your vision of closely spaced steppingstones were actually correct at higher levels of functional complexity?

Also, the overall nature of protein sequence space simply does not evolve with time in a manner that would actually favor evolutionary discoveries at higher levels of complexity. Most of the problem with protein sequence space is that the vast majority of sequence options within the space are simply not structurally stable and could not form useful proteins of any kind. Beyond this, say the environment changes so that new protein sequences become beneficial within sequence space (which does of course happen). Why does this not provide an evolutionary advantage? Because such changes to the potential targets in sequence space do nothing to the overall ratio of beneficial vs. non-beneficial sequences, since new islands appear while others disappear with such environmental changes. They also do nothing to set up the steppingstones in a nice line of very closely spaced steppingstones, as you originally claimed.

In short, this key argument, upon which your entire theory depends, is simply mistaken and does not solve the problem of the exponential decline in potentially beneficial islands within sequence space with each step up the ladder of minimum structural threshold requirements.

That’s an in principle argument for being highly skeptical of big bold claims about the nature of sequence space. When you then factor in the myriad practical successes in the field of molecular evolution, and the fact that not many biochemists seem to share your view, it looks like you are once again just waving your hands.

Where are these “practical successes” in the field of molecular evolution? – beyond very very low levels of functional complexity? Where is there a single example of evolution in action that produced any qualitatively novel system of function that requires a minimum of more than 1000 specifically arranged residues? As far as I’m aware, there are no such examples in the literature – not a single one.

As I pointed out during our debate, all the practical successes of the mechanism of RM/NS are based on low-levels of functional complexity – to include antibiotic resistance, novel single-protein enzymes, antifreeze proteins, and all of the other examples that you listed off in your book.

The size of the space is irrelevant, as I have already explained. Your other claims are nonsense. At most we can make some judgments about small, local areas of sequence space as we see them in modern organisms and modern environments. That’s plainly insufficient for drawing grand conclusions about the viability of evolution.

The size of sequence space is not irrelevant, because it demonstrates the exponential nature of the increase in the overall size of sequence space with each increase in the minimum structural threshold requirements of systems at higher and higher levels of functional complexity. This observation would only be irrelevant if it could be shown that potentially beneficial sequences increase at the same rate. The problem, of course, is that the increase in potentially beneficial sequences is dwarfed by the increase in non-beneficial sequences – which in turn creates the exponentially declining ratio problem.

As far as your claim that we can only make judgments about small local areas of sequence space, that’s true. As you point out, it is completely impossible to explore all of sequence space at higher levels of complexity – since the size of sequence space beyond the level of 1000 specifically arranged residues is beyond imagination – being larger than universes upon universes. However, science, by definition, is about extrapolating the information that is currently in hand to make predictions about things which cannot be definitively known. And, given the information that is in fact currently in hand, we can gain a very good idea as to the nature of all of sequence space. The same can be said of the universal Law of Gravity, for example. It is thought that this Law of nature is true everywhere in the universe even though we haven’t actually tested it in all parts of the universe. It’s a scientific prediction based on the relatively little evidence that we have in hand. The very same thing is true of protein sequence space – or any other form of sequence space that is based on information that is coded into character sequences (i.e., English, French, Russian, computer code, Morse Code, etc.). All of these language/information systems have the same basic features of sequence space, where meaningful/beneficial sequences are randomly distributed throughout sequence space at various levels of functional complexity.

And, for all of these language/information systems we can actually know, with very high confidence and high predictive value, that the ratio of potentially beneficial vs. non-beneficial sequence does in fact decrease, in an exponential manner, with each increase in the minimum size and/or specificity requirement of a sequence.

In this vein, consider an argument from a paper published in 2000 by Thirumalai and Klimov:

The minimum energy compact structures (MECSs), which have protein-like properties, require that the ground states have H residues surrounded by a large number of hydrophobic residues as is topologically allowed. . . There are implications of the spectacular finding that the number of MECSs, which have protein-like characteristics, is very small and does not grow significantly with the size of the polypeptide chain.

The number of possible sequences for a protein with N amino acids is 20^N which, for N = 100, is approximately 10^130. The number of folds in natural proteins, which are low free energy compact structures, is clearly far less than the number of possible sequences. . .

By imposing simple generic features of proteins (low energy and compaction) on all possible sequences we show that the structure space is sparse compared to the sequence space. Even though the sequence space grows exponentially with N (the number of amino acid residues [by 20^N]) we conjecture that the number of low energy compact structures only scales as lnN [The natural logarithm or the power to which e (2.718 . . . ) would have to be raised to reach N] . . . The number of sequences for which a given fold emerges as a native structure is further reduced by the dual requirements of stability and kinetic accessibility. . . We also suggest that the functional requirement may further reduce the number of sequences that are biologically competent.

So if sequence space size grows as 20^N while the number of even theoretically useful protein systems scales only as the natural log of N, this differential rapidly produces an unimaginably huge discrepancy between potential target and non-target systems (given that the structures themselves require a certain degree of specificity). For example, the sequence space size of 1000aa space is 20^1000 = ~1e1301. According to these authors, what is the number of potentially useful protein structures contained within this space? It is 20^ln(1000) = ~1e9.
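The arithmetic here can be checked in log space (a quick sketch; the ln N scaling itself is the authors' conjecture, taken at face value):

import math

# Express 20^1000 and 20^ln(1000) as powers of 10 to avoid overflow.
log10_space = 1000 * math.log10(20)                 # ~1301.0 -> 20^1000 ~ 1e1301
log10_structures = math.log(1000) * math.log10(20)  # ~8.99   -> 20^ln(1000) ~ 1e9
print(log10_space, log10_structures)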

All we really have left then is your argument that these exponentially rarer and rarer beneficial sequences are somehow all lined up in a nice neat little row. Is this a testable claim or not? If not, your claim simply isn’t scientific. If it is testable, what are the results of the tests? What is the best evidence that pertains to this hypothesis of yours? Is it likely to be truly representative of any aspect of sequence space? – or not?

In answer to this question, consider the work of Babajide et al. (1997), where the following observation is made regarding stable protein and RNA sequences:

“The sequences folding into a common structure are distributed randomly in sequence space. No clustering is visible.”

While this paper was admittedly based on very very short low-level sequences, it provides a good idea as to what higher levels of sequence space look like. The extrapolation is a very reasonable one that can be tested in a potentially falsifiable manner – a position which has been continually verified and has yet to be falsified (increasing its predictive value). Upon what, then, do you base your hypothesis that the line-up of closely spaced steppingstones that you envision remotely represents reality anywhere in sequence space at any level of functional complexity? – past, present, or future? It seems to me like you’re hiding behind the unknown, hopeful that someday someone will find some evidence to support your vision of what reality needs to be in order for your hypothesis to be true. I’m sorry, but that’s just wishful thinking, not science. The evidence that is currently in hand strongly counters your imagined scenario.

You absolutely insist on discussing this at a highly abstract level. But for actual biologists this is not an abstract question. They do not apply natural selection as some vague principle in their work. Instead they do the hard work of studying actual complex systems, and in every case their findings are the same. They find that once the systems are well understood, and once similar systems in other organisms are studied and understood, plausible gradualistic scenarios inevitably appear. Those scenarios are hardly the end of the story, however, as I explained to you during our radio debate. Once you think you have a good scenario for how something evolved, that scenario can be used to generate testable hypotheses. Subsequent testing of these hypotheses then leads to new knowledge. This type of reasoning has been applied so frequently and so successfully that, if it is fundamentally flawed, we must conclude scientists are getting mighty lucky.

Do you have even one example of what you call a “plausible gradualistic scenario”? – beyond very low levels of functional complexity? Take, for example, a scenario proposed for flagellar evolution by Nicholas J. Matzke in his 2003 paper, "Evolution in (Brownian) space: a model for the origin of the bacterial flagellum." In this paper Matzke lists off what appear to him to be fairly closely-spaced steppingstones, each of which would be beneficially selectable in most environments, along a pathway from simple to much more complex – i.e., the fully functional flagellar motility system. It looks great on paper! The steps certainly seem “plausible”. The only problem, of course, is that none of Matzke’s proposed steppingstones are actually close enough to each other, in sequence space, for random mutations to get across what initially seems like a fairly small gap (i.e., a series of non-selectable required mutational changes to get from one steppingstone to the next) this side of a practical eternity of time. And, in fact, there are no laboratory demonstrations of the crossing of any of Matzke’s proposed gaps – not a single one. You’d think that if Matzke’s proposed steppingstones were really as close together in sequence space as he suggests, a real-life demonstration of the crossing of at least one of his proposed gaps wouldn’t be much of a problem. The problem, of course, is that there is no such demonstration, because his proposed steppingstones are simply too far apart in sequence space to be reached from a lower-level steppingstone this side of a practical eternity of time. Yet, this is about as good as it gets in the literature as far as any attempt to produce a truly plausible story of evolvability.

http://www.detectingdesign.com/flagellum.html

So, if you have anything better, I’d love to see it...

The reason scientists routinely find plausible gradualistic scenarios is that these complex systems all carry clear evidence of their evolutionary past. We are not talking about “design flaws” in some abstract sense and we are not trying to psychoanalyze some hypothesized creative supermind. Instead we are talking about structures that are hard to understand from the standpoint of what a human engineer would do, but are easy to understand once the history of the structure is taken into consideration. Not one or two examples, but every complex structure studied to date. Apparently it amused the designer to create in a way that perfectly mimics what we would expect if these systems were actually produced gradually by natural selection.

Again, one does not judge design or non-design based on supposed design flaws or a nested hierarchical pattern or any other such pattern or sequence that supposedly can only be produced by mindless mechanisms. All of these features can be and have been produced by human designers for various systems and for various reasons. I’m sorry, but appeals to design flaws and other such patterns simply don’t explain how your proposed mechanism could reasonably have done the job – especially in the light of very clear factors that strongly suggest that it simply cannot move around in sequence space like you imagine.

In short, despite your claims to the contrary, your entire theory is based on what you think an intelligent designer would or would not do (which is very subjective since intelligent designers can do and often do all kinds of things for all kinds of reasons). Your position simply is not based upon evidence for what your mechanism can actually do. I’m sorry, but arguing what an intelligent designer wouldn’t do, in your estimation, is just not a scientific argument when it comes to determining the creative potential and/or limitations of your proposed mechanism.

There’s so much more, of course. In some cases, like the mammalian inner ear, we have strong evidence from paleontology and embryology to show how a complex structure evolved gradually. Likewise for molecular evolution where, in cases like anti-freeze proteins in fish, we have strong evidence for how the proteins evolved from simpler precursors. I could point also to the success of game theoretic models in ethology. In every case scientists are approaching their work with theoretical models based on an assumption of natural selection, and they get results. This consistent success, again, is mighty coincidental if the theory is just fundamentally flawed.

Again, antifreeze proteins are not very complex. They are very simple, requiring a minimum of no more than a few dozen specifically arranged residues – the same as the similar examples in your book.

As far as your story of the evolution of the mammalian inner ear, it is indeed a lovely story, but it says nothing as far as how your proposed mechanism could have done the job. It just shows a series of what appear to you to be gradual enough modifications, and you simply wave your hand and claim, without any other support, that your mechanism could easily produce such changes. Really? Where is your description of the mutations that would be required, on a genetic level, to get from one selectable steppingstone to the next in your proposed pathway?

Again, what seems to you to be easily explained from an anatomic level is not so easily explained once you start to actually look at the number of non-selectable genetic changes that would be required. It’s much like computer programming where apparently simple changes to the function and/or appearance of a computer program require fairly significant changes to the underlying computer code for the program.

Yes, the evidence is circumstantial, since this process plainly takes too long to be observed in toto. That’s not biologists’ fault. Nor is it their fault that the metaphor of sequence space, useful in many contexts, is not so useful for drawing grand conclusions about the viability of evolution.

You yourself draw grand conclusions about the viability of evolution given a little bit of circumstantial evidence that is almost entirely based on what you think an intelligent designer would or would not do. Based on these assumptions, you make grand conclusions about the potential for evolutionary progress via a mechanism that you really don’t understand beyond what you know must be true if your theory is to remain viable. Therefore, you dream up a picture of sequence space which is completely contrary to what is currently known about sequence space. You’re willing to argue that sequence space must somehow have these very very closely spaced steppingstones all lined up in nice little rows and that these neat little rows are not at all affected by the exponential decline in the ratio of beneficial vs. non-beneficial. You make these claims, not based on some superior understanding of the nature of sequence space, but based on your claimed ignorance of the actual nature of sequence space.

I’m sorry, but that isn’t a scientific position when it comes to a useful understanding of the evolutionary mechanism. Science isn’t about what is possible, but what is probable. If you don’t understand sequence space, you don’t understand your mechanism. And, if you don’t understand your mechanism, you really don’t have a scientific basis for the creative potential you ascribe to it.

I’m genuinely surprised to see a mathematician with an interest in biological evolution produce this common, but mistaken, argument. It’s like saying that one shouldn’t be surprised if Arnold Schwarzenegger happens to win the California Lottery 10 times in a row. After all, unlikely events happen all the time! – Sean Pitman

Oh please. Since I plainly discuss this point in my next paragraph, you have a lot of nerve cutting me off where you did. The whole question at issue is whether the endpoints of evolution are like getting 500 heads on 500 tosses of a coin, or whether they are more like firing an arrow into a wall and then painting a target wherever it lands. You claim it is the former; more sensible people claim it is the latter. And that is why the end result of any probability calculation you carried out would be irrelevant.

I’m sorry, I must have misunderstood your argument. Even after reading your entire argument several times, it seemed to me like you were trying to argue that rare events happen all the time, so it doesn’t matter if the odds are not favorable to your position.

In any case, your scenario is still very much misguided. You do realize that the sequences in sequence space are pre-defined as being beneficial or non-beneficial? Beneficial “targets” cannot be “painted later” after the randomly shot arrow hits the wall in just any location. If the arrow lands on a non-beneficial sequence, no one can claim that the sequence is in fact beneficial. The sequence is what it is. Therefore, it is perfectly reasonable to argue that the odds of actually hitting a novel beneficial target are extremely low and get exponentially worse and worse at higher and higher levels of functional complexity – worse than the odds of getting 500 heads in a row at relatively low levels of functional complexity (still at the level of small subcellular machines). Yet, you reject the implications of this statistical problem and argue that I’m painting targets after the arrow hits the wall? How can you possibly suggest such a thing when nature defines the targets ahead of time, not me?

But the issue wasn’t merely coming up with a definition of functional complexity. It was doing so in a manner that is in any way relevant for determining what natural selection can do with eons in which to work. Show me in concrete terms how the definitions you’ve produced here permit a calculation that shows natural selection to be ineffective, and then I will be impressed. This is precisely what William Dembski attempted to do, but his work was so shot through with false assumptions and vague premises that it did not amount to much.

It’s not just calculations; it is observations and statistical extrapolations based on those empirical observations of the real world. This isn’t just a mathematical world we’re talking about here. These are real-world observations and mathematical extrapolations based on those real-world observations – i.e., a real scientific theory.

In any case, as already noted, the definition of levels of functional complexity is easy. It’s been published as well. For example, Hazen et al. (2007) define functional complexity as follows:

1. n, the number of letters in the sequence.
2. Ex, the degree of function x of that sequence. In the case of the fire example cited above, Ex might represent the probability that a local fire department will understand and respond to the message (a value that might, in principle, be measured through statistical studies of the responses of many fire departments). Therefore, Ex is a measure (in this case from 0 to 1) of the effectiveness of the message in invoking a response.
3. M(Ex), the total number of different letter sequences that will achieve the desired function, in this case, the threshold degree of response, Ex.

The functional information, I(Ex), for a system that achieves a degree of function Ex, for sequences of exactly n letters, is therefore:

I(Ex) = -log2[ M(Ex) / C^n ]   (where C = the number of possible characters per position)

What is also interesting is that Hazen et. al. go on to note that, "In every system, the fraction of configurations, F(Ex), capable of achieving a specified degree of function will generally decrease with increasing Ex." And, according to their own formulas, this decrease is an exponential decrease with each linear increase in n - or the number of "letters" or characters (or in this case amino acid residues), at minimum, required by the system to achieve the beneficial function in question.
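For concreteness, that formula is easy to compute in log space, so that C^n never overflows. A minimal sketch (the example numbers are purely hypothetical, not measured values for any real protein):

import math

def functional_information(M_Ex, n, C=20):
    # I(Ex) = -log2(M(Ex) / C^n), rearranged as n*log2(C) - log2(M(Ex)).
    return n * math.log2(C) - math.log2(M_Ex)

# Hypothetical example: a 100-residue protein (C = 20 amino acids),
# assuming 1e20 sequences meet the functional threshold.
print(functional_information(M_Ex=1e20, n=100))  # ~365.8 bits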

So, yet again, the basic concept of levels of functional complexity is well defined in the literature. It isn’t that science can’t define the concept or that the basic concept is difficult to understand, contrary to what you seemed to suggest in your book and during our debate. The only real question is whether the potentially beneficial target islands are closely spaced and lined up in a nice little line across sequence space, as you imagine – as must be the case if the claims of evolutionists for the creative power of RM/NS are actually “plausible”.

Given all that is currently known, through empirical observations, about sequence space and how beneficial islands are actually arranged in sequence space, your imagined scenario simply isn’t tenable. And, there is no evidence for why it might ever have been tenable – outside of intelligent design. There simply is no empirical evidence that sequence/structure space remotely resembles your description of it.

Beyond this, the hypothesis that all of sequence space at various levels of functional complexity has beneficial islands scattered around in a randomly uniform manner is a testable hypothesis with predictive value. This hypothesis can therefore be compared to your hypothesis to see which one produces the best results. The scenario I describe can be used to predict an exponential decline in evolution with each linear increase in the level of functional complexity under consideration. Your hypothesis, in comparison, predicts no such decline in evolutionary potential whatsoever. In fact, according to your hypothesis of sequence space, evolution should proceed at higher levels of functional complexity at pretty much the same rate as occurs at lower levels of functional complexity. Of course, this simply isn’t what happens. Lower-level functions that require no more than a few hundred specifically arranged characters (or amino acid residues for protein-based sequence space) evolve commonly and rapidly in fairly small populations with fairly slow reproductive rates. However, even given very large populations with very rapid reproductive rates, short generation times, and high mutation rates, nothing evolves beyond the level of systems that require a minimum of at least 1000 specifically arranged amino acid residues. It just doesn’t happen – as predicted by my view of sequence space, not yours.

Therefore, your view of sequence space has effectively no predictive power. You cannot predict how often your mechanism will succeed in finding something qualitatively new at a given level of functional complexity within a given span of time. My model, on the other hand, can predict how often success can be expected at a given level of functional complexity within a given span of time. That is why my view of sequence space carries far more scientific predictive value compared to your view.

Your further remarks in this paragraph strike me as very confused. Science makes rather a lot of claims that do not depend on probability in any way, so I don’t know where you came up with this idea that probability is the most important thing there is. And since unlikely things occur all the time, I don’t see why I have to show that an event was likely to occur before I can conclude that it happened. Moreover, showing that something is likely or unlikely rarely involves performing an actual probability calculation. Usually you just follow the evidence where it leads, and if it points strongly to the conclusion that something happened then that’s good enough. Abstract probability calculations are irrelevant in most cases.

Science is dependent upon predictive value among competing hypotheses. One must therefore be able to demonstrate that the favored hypothesis actually has greater predictive value than the competing or opposing hypothesis, and that the predictions of the hypothesis have not been effectively falsified by various potentially falsifying tests. In other words, it must be shown that a given hypothesis has greater probability of predicting the future, of predicting future observations, than the opposing hypothesis when put to the test. If you cannot do this, if you cannot quantify the degree to which your hypothesis has greater predictive value than the opposing hypothesis, if your hypothesis cannot be effectively falsified by another hypothesis, even in theory, then you simply don’t have a scientific position. What you have is a just-so story.

Anyway, I do appreciate that you took the time to respond to my comments. Believe me, I know the time it takes, as I have very little time for such things myself. All the best to you and I hope to hear from you again in the future...

Sean

By Sean Pitman (not verified) on 25 Jan 2014 #permalink

sean samis:

You do realize that “sequence space” is just a metaphor, correct? That the “distance” between any two “points” is not measured in time or distance, but in random steps? And that many “points” in this “sequence space” represent thermodynamically impossible “points” and that others represent thermodynamically favored “points”? So the random walk is constrained and channeled to favor certain thermodynamic “paths”? Given that, and the billions of “swimmers” who launch from any “beneficial target”, the time expected for at least one “swimmer” to reach the next “beneficial target” is drastically reduced.

First off, just because a protein sequence is thermodynamically unstable and cannot fold to produce any kind of viable, much less beneficial, protein does not mean that such a sequence cannot be produced by random mutations. You see, proteins are the products of the transcription and translation of DNA. The relevant mutations are in the original DNA sequences. And, these mutations can produce codes for protein sequences that are in fact thermodynamically unstable, a situation that represents the vast majority of sequence space at higher levels of functional complexity. This reality, of course, increases the average random walk time before success can be achieved.

Your proposed solution to the problem (i.e., increasing the number of “swimmers”) does in fact work at lower levels of functional complexity. However, in order to keep up at higher and higher levels of functional complexity, the number of swimmers would have to be increased exponentially with each and every step up the ladder. Very quickly, the number of swimmers required to keep evolution going at the same rate can no longer be supported by the environment, and evolutionary progress drops off – exponentially.

That is why your argument really isn’t a useful solution to the problem if you sit down and actually do some math of your own. You evidently don’t realize just how rare the potentially beneficial islands are in sequence space beyond the level of systems that require a minimum of no more than a few hundred specifically arranged residues. When you’re talking about systems that require more than 1000 specifically arranged residues, the Hamming distance between one steppingstone and the next closest steppingstone would be like a star in a universe where the next closest star or galaxy cluster is not visible. In fact, it is billions of light years away.

Given this realization on your part, you need only realize one more thing: given the millions upon millions of “beneficial targets”, the billions of “swimmers”, and the fact that every time a “beneficial target” is reached billions more “swimmers” start out from it, the time for at least one swimmer to reach any particular “beneficial target” is much briefer than you represent, briefer by several orders of magnitude.

Again, you evidently don’t understand the math – i.e., the degree of rarity of potentially beneficial targets at higher levels of functional complexity. Sit down and actually do some real calculations based on the known parameters of sequence space. For each of the “millions and millions of beneficial targets” within a higher level of sequence space, there are universes upon universes of non-beneficial options. In other words, each one of the potentially beneficial targets is surrounded, on all sides, by universes of non-beneficial sequences that look like an endless ocean from the perspective of any and all potential starting points. Do the math.

By Sean Pitman (not verified) on 25 Jan 2014 #permalink
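For readers who do want to do the math that both commenters are gesturing at, the basic waiting-time calculation is simple; everything hinges on the target rarity you plug in, which is precisely the number in dispute. A minimal sketch with purely illustrative rarities:

import math

def expected_generations(target_fraction, trials_per_gen):
    # Expected generations until at least one of trials_per_gen independent
    # tries per generation hits a target of frequency target_fraction.
    # log1p/expm1 keep tiny probabilities from underflowing to zero.
    p_gen = -math.expm1(trials_per_gen * math.log1p(-target_fraction))
    return 1.0 / p_gen

for rarity in (1e-10, 1e-20, 1e-40):  # illustrative values only
    print(rarity, expected_generations(rarity, trials_per_gen=1e9))

With a billion tries per generation, a one-in-1e10 target is found within a handful of generations, while a one-in-1e40 target takes about 1e31 generations: population size helps linearly, rarity hurts exponentially, so the empirical question is what the rarity actually is.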

Where does the number 1000 come from? E.g., why isn't it 500, or 750?

I guess I don’t see the value of that determination.

Every little bit of ammunition helps. That distinction (uninformed vs liar) can inform you of whether the other person is arguing in good faith or simply working from a set list of items he/she has practiced. There may be, after all, a chance of convincing someone who is misinformed that they are wrong: the willfully dishonest make that impossible.

Sean P,

“Where is your description of the mutations that would be required, on a genetic level, to get from one selectable steppingstone to the next in your proposed pathway?”

This is an excellent and very relevant question. Good luck trying to get an answer, either from Jason or from anywhere in the literature. Elevating natural selection to fairy status is acceptable, if not encouraged. But an honest appraisal of the likelihood of beneficial mutations opens too many cans of worms… activity non grata. Taboo.

On a general note... While many creationists stick to the same old arguments no matter how thoroughly they've been refuted, better creationists like Sean keep moving on to new arguments, based on relatively new and little-known science, where there are still major gaps in our knowledge. As that area of science becomes better understood, and creationism-refuters become more familiar with it, the better creationists start to find the argument less attractive, and they move on to yet another relatively undeveloped area of science. The poor track record of past creationist arguments doesn't seem to bother them. This time, they think, we really have got the killer argument. To the rest of us, this is strongly reminiscent of The Boy Who Cried "Wolf".

By Richard Wein (not verified) on 26 Jan 2014 #permalink

"where there are still major gaps"
Perhaps, but it is still just an "Intelligent Designer of the Gaps" argument. It can't be stressed enough: Evolution Theory may have problems - note again that theories of superconductivity at relatively high temperatures have much bigger problems, but that no believer suggests an ID agent sustaining those magnets - but as long as IDers don't offer a testable alternative (i.e., answer Adam Lee's two questions) they simply have no choice but to violate the rules of the scientific method.
That's why Phil above is just a laugh:

"Elevating natural selection to fairy status is acceptable"
The ID agent being an immaterial agent by definition, any ID is and will remain just that - a fairy tale, based on an omni-everything daddy high up in the sky.

MNb,

"it still just an “Intelligent Designer of the Gaps” argument. It can’t be stressed enough: Evolution Theory may have problems"

Yes, it does. And they are about probabilities. It is easy to illustrate how selection has become a helpful fairy gap-filler when immense probability problems are encountered. Just read the last line of this abstract:
http://www.ncbi.nlm.nih.gov/pubmed/20129036/

While many creationists stick to the same old arguments no matter how thoroughly they’ve been refuted, better creationists like Sean keep moving on to new arguments, based on relatively new and little-known science, where there are still major gaps in our knowledge.

I'm glad that at least someone is willing to admit that there are no known answers to my questions of Jason - questions that are in fact fundamental to the creative potential and/or limits of the Darwinian mechanism of RM/NS. You guys simply don't know how it works at various levels of functional complexity, but are confident that someday you will figure it out.

Well, that's a nice hopeful position, but it isn't a scientific position. Science isn't based on what may be known in the future. Science is based on what is currently known and hypotheses that can be tested, in a potentially falsifiable manner, right now. Until you can do this, you can dream of future vindication all you want, but just don't call it science...

By Sean Pitman (not verified) on 26 Jan 2014 #permalink

Sean P@45
Wrong again. You don't know what science is.

By John (not verified) on 26 Jan 2014 #permalink

In reply to by Sean Pitman (not verified)

Sean: "I’m glad that at least someone is willing to admit that there are no known answers to my questions of Jason..."

If you're going to misrepresent what someone says, at least have the sense to do it in a different forum, where he's not present to call you on it!

By Richard Wein (not verified) on 26 Jan 2014 #permalink

Sean Pitman;

Regarding

Your proposed solution to the problem (i.e., increasing the number of “swimmers”) does in fact work at lower levels of functional complexity. However, in order to keep up at higher and higher levels of functional complexity, the number of swimmers would have to be increased exponentially with each and every step up the ladder. Very quickly, the number of swimmers required to keep evolution going at the same rate can no longer be supported by the environment, and evolutionary progress drops off – exponentially.

Umm, no. In this metaphor (“steppingstones” across Lake Superior), no swimmer needs to swim from Duluth to Sault Ste. Marie. Every swimmer needs only to swim from one “beneficial island” to another. Their progeny then need only swim on to the next “beneficial island”. And so on.

All that “higher and higher levels of functional complexity” means is that the swimmers are further and further from Duluth. But none need swim the whole way on their own; every generation of swimmers need only make it from one “island” to the next. Some “islands” will cease sending out swimmers (for various reasons). The number of swimmers will increase because of the increasing size of the “leading edge”, but not to the point of environmental bottleneck.


the Hamming distance between one steppingstone and the next closest steppingstone would be like a star in a universe where the next closest star or galaxy cluster is not visible. In fact, it is billions of light years away.

This considers only steppingstones representing specific, highly complex, “finished” structures. You assume intermediate structures are not beneficial and can be disregarded. This is a HUGE, and unjustified assumption. Intermediate structures can provide a species sufficient benefit, vastly increasing the number of “stepping stones”.

Your assumption also forgets the complex interconnectedness of biology. Mutations that are “neutral” for some functionality are beneficial for others; so the “steppingstones” for many many functions overlap and fill in the “gaps”.

If you only consider the “stepping stones” for vision, there are far more “steppingstones” than you suppose, they are much closer together, and in between each are stepping stones for other functions which keep the leading edge moving forward. Instead of swimming all the way from Duluth to Sault Ste. Marie, the swimmers go from Duluth to Superior to Cornucopia to ... Or maybe the other way to Two Harbors, Beaver Bay and so on. You get the point.

you evidently don’t understand the math – i.e., the degree of rarity of potentially beneficial targets at higher levels of functional complexity. Sit down and actually do some real calculations based on the known parameters of sequence space. For each of the “millions and millions of beneficial targets” within a higher level of sequence space, there are universes upon universes of non-beneficial options. In other words, each one of the potentially beneficial targets is surrounded, on all sides, by universes of non-beneficial sequences that look like an endless ocean from the perspective of any and all potential starting points. Do the math.

You overestimate the “degree of rarity”. But since you apparently have done the math, please tell us all the exact number of “beneficial targets” in the sequence space of just one “function”; or please cite a source.

Can you tell us with mathematical certainty the total number of beneficial targets in the whole sequence space for all life on Earth? For all life that has ever existed on Earth?

Instead of making unsubstantiated claims; if you really want to persuade us, then cite a source. Since (apparently) you’ve done the math, please share the details. Sounds like an easy task for you.

BTW, please also share the math regarding the development of the designer itself, how it came into existence, and so forth. A cite to source would suffice.

sean s.

By sean samis (not verified) on 26 Jan 2014 #permalink

Sean S @48
The stepping stone metaphor may not be too far off. Many scientists have supported panspermia. I suggest hydrogen was formed from more basic particles. Elements came from the fission of hydrogen in stars. The model suggests the development from chemical to DNA may have occurred on other planets around supernova suns or in outer space – one step, the rock of another planet.
Earth received the bacteria in stones (asteroids) that then grew – another rock. Earth is a death trap. So our purpose is to take the DNA to the interstellar space to what – another rock.
Really neat how a creator arranged the stepping-stones of planets to carry the DNA and life to ever greater complexity.

By John (not verified) on 26 Jan 2014 #permalink

In reply to by sean samis (not verified)

I am just curious how ID people think "functional" complexity arises. Are the designer and his trusty elves sitting in a workshop at the North Pole with a supercomputer matching enzyme function with the environment on an instantaneous basis? Are they altering ova, sperm? If so, how exactly are they doing it - anything akin to Uri Geller bending spoons?

Arguing from probabilities always looks impressive, but doesn't amount to much. For instance, let's say I am in a middle seat on an airline. The probability that I, of all humans on earth, am sitting in that seat is 1 in 7 billion (this is bogus, but is typical of the ID attempt to wow with math), and the probability that Jim Smith is sitting next to me - the two of us being there on the same plane - is 1 in 7 billion squared. Wow, this can't be true - it must be that a god put us here.

By Michael Fugate (not verified) on 26 Jan 2014 #permalink

correction - fission to fusion

Sean Pitman is a fascinating, somewhat depressing and even slightly frightening illustration of how a person can be highly educated and erudite in some ways, and yet so lacking in self-critical capability and basic logical powers that he can utter absurdities with complete conviction, and use his erudition to give them a superficial appearance of plausibility. A cautionary example for those of us (like myself) who can tend to be complacent about what we assume to be the transformative powers of education.

I, like other commenters above, await any specific, detailed explanation from Sean P. about what exactly stops RM/NS from generating novel traits beyond a certain point, since he admits it can do so up to a point, and how he knows this. So far all I see are unsupported assertions that there's some mysterious stopping point, accompanied by the vaguest appeals to unspecified probability calculations. Handwaving, in short.

By Michael Wells (not verified) on 26 Jan 2014 #permalink

Some additional thoughts on this topic:

Sean Pitman wrote in #38

When you’re talking about systems that require more than 1000 specifically arranged residues, the Hamming distance between one steppingstone and the next closest steppingstone would be like a star in a universe where the next closest star or galaxy cluster is not visible. In fact, it is billions of light years away.

Mr. Pitman, I’m going to call you on this one.

Hamming distances are not SWAGs; they are not distances between systems or functions. A Hamming distance is the number of positions at which two equal-length strings of symbols differ, which is the number of substitutions needed to convert one specified sequence into another. The symbols can be letters of an alphabet, numerals, nucleobases, or other symbols.

The Hamming distance can only be known when both the starting and finishing sequences are known; otherwise one has only estimates of the average conversion from one random sequence to another.
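For reference, the definition in code (a minimal sketch; the example strings are arbitrary):

def hamming(a, b):
    # Number of positions at which two equal-length strings differ.
    if len(a) != len(b):
        raise ValueError("Hamming distance requires equal-length strings")
    return sum(x != y for x, y in zip(a, b))

print(hamming("GATTACA", "GACTATA"))  # 2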

So, for Sean Pitman to accurately say that the distance between steppingstones is X, he would have to know

1. the exact, minimal genetic “target sequence” necessary for some functionality,
2. all the tolerable variations to that minimal “target sequence”,
3. all the starting sequences;
4. all the intermediate “steppingstones”,
5. all the permissible intermediate configurations, and
6. whatever requirements I am unaware of.

I sincerely doubt that Sean Pitman knows even one of these 6 requirements. Does anyone know the minimum genetic sequence necessary for vertebrate vision? Arthropod vision (I’m thinking of insects here)? Cephalopod vision? Does anyone know all the tolerable variations of that minimal sequence?

“Steppingstones” in this context have also been called “beneficial islands” or “beneficial targets”. But in this context, what does “beneficial” mean? It means more than some genetic sequence that instantiates some level of functionality; it also includes any intermediate genetic sequence which is any step toward the “target sequence”, even if it does not directly confer a benefit by itself to the functionality the “target sequence” refers to. So, for instance, a genetic mutation that favors disposal of some toxic protein might include a step toward color vision. Such a mutation is a “steppingstone” toward multiple “target sequences”; and this kind of “steppingstone” is probably more the rule than the exception.

Mr. Pitman challenges us on the Hamming distance between microbes and vertebrate vision; one is left to wonder about the Hamming distance between some microbe and intelligence sufficient to create an entire new universe. If the former is as cosmologically huge as Pitman claims, then the latter must be insuperable and there can be no Intelligent Designer.

sean s.

By sean samis (not verified) on 26 Jan 2014 #permalink

Jason, I find it very strange that you are able to fully understand Chess problems yet you seem unable to relate the probabilities involved with Chess to the probability of evolutionary outcomes when engaging in a debate with people such as Sean Pitman.

The number of possible positions in a chess game is about 10^46, and the game-tree complexity is at least 10^123, which is a staggeringly large number considering that there are only(!) 10^80 atoms in the observable universe.

Despite its staggering complexity, a game of Chess always results in only three possible outcomes: win; lose; draw.

The "game of life" in our universe, despite its much more unfathomable complexity, has resulted in only one, not three, outcomes: life as it exists today.

The probability of the universe existing as it currently stands is 1.0. The ten to the power of almost infinity of other possible outcomes did not occur. The remit of science is to explain the observable universe. Another remit of science is to avoid wasting resources on trying to explain to the layperson why none of the other myriad of possibilities actually occurred instead.

At the start of a game of Chess it is impossible to predict the multiple moves that will occur to produce the final result. However, video-record every game of chess that's ever been played and then play it in reverse: one will always find the pieces in the same starting position on the board! Psychics make predictions; science provides self-correcting explanations of materialized reality.

Kindest regards,
Pete

By Pete Attkins (not verified) on 26 Jan 2014 #permalink

Pete Attkins: There's no reason to attribute the absence of an argument to some "inability" to make that argument; that was a little rude. It is a decent argument, though.

Hi Lenoxus, thank you for pointing that out. I had no intention of being rude so I apologise for coming across that way.

I have great difficulty with writing reasonably intelligible English, especially in such a manner to convey my tone. I was attempting to be highly supportive of Jason's endeavours to defeat the anti-science brigade.

By Pete Attkins (not verified) on 26 Jan 2014 #permalink

Wow. Pitman is definitely a blast from the past—and so is his verbiage about 'levels of functional complexity', the ID shibboleth he's been stumping for for at least the last decade or two. Tell us all, Pitman: Which has the higher 'level of functional complexity', (a) a naked mole rat, or (b) a mudskipper? And if you deign to answer that question, do show your work. You wouldn't want anyone to think that you're just pulling random guesses out of your lower GI tract, after all.
Of course, Pitman will not answer this question. He won't provide anything within bazooka range of an actual specified value for the 'level of specified complexity' of anything; rather, he'll just insist that his inchoate, indeterminate 'level of functional complexity' verbiage constitutes an insuperable obstacle that prevents natural selection from doing Significant Stuff™. And he'll do his damnedest to distract his audience's attention away from his continuing failure to upgrade his 'levels of functional complexity' verbiage from Bullshit ID Talking Point to Scientifically Useful Concept.

The ID religion accepts the idea of a universe creator. But unlike other religions, ID doesn’t propose a full set of morals or other useful concepts. Thus, ID cannot partner with science for the benefit of mankind. The only morals they encourage are intolerance and ignorance of science.

That the ID proponents want to dictate their mistaken concepts and conclusions is just snobbery. Such snobbery is a portent of what they would do if not opposed.

How many religious leaders support the ID religious credo, and who are they?

The ID repetition ad nauseam of faulty and misrepresented science is truly laughable, as we’ve seen. I suppose the hope is to repel scientists, because scientists could refute their misrepresentations. This would allow them to influence the general public on radio and TV with their technobabble. Then they would be allowed to change laws and curriculums. Maybe their science is intended to be repellent to the very scientists who might otherwise influence the public.

By John (not verified) on 26 Jan 2014 #permalink

In reply to Cubist (not verified)

Or even an enzyme - pick any one you want, survey all of the DNA or amino acid sequences or tertiary structures - whatever - in all of the organisms, and report back with rankings for functional complexity. I am sure the DI lab in Seattle will let you have some space for free.

By Michael Fugate (not verified) on 26 Jan 2014 #permalink

"what exactly stops RM/NS from generating novel traits beyond a certain point"

For starters, replication enzymes which serve to prevent the copy errors which are the supposed source of novelty.

A better inquiry would be about how such complex proteins originated and acquired their various functions.

For those who have wanted statistical study of life, the statistics of life are being unlocked and noted in another blog:
Ask Ethan #21: Why does life exist?

I don’t understand why a Boltzmann distribution is assumed. It does fit nicely into the idea of an exponential increase in complexity. Scratch one more ID misrepresentation.

By John (not verified) on 27 Jan 2014 #permalink

In reply to Phil (not verified)

sez phil@61:

“what exactly stops RM/NS from generating novel traits beyond a certain point”

For starters, replication enzymes which serve to prevent the copy errors which are the supposed source of novelty.

A better inquiry would be about how such complex proteins originated and acquired their various functions.

It's true that there are various biochemical mechanisms which serve the purpose of keeping DNA unchanged, and/or repairing changed DNA.
It's also true that mutations do happen.
Therefore, mumbling "replication enzymes" is not actually an answer to the question “what exactly stops RM/NS from generating novel traits beyond a certain point?” Perhaps the most obvious problem is, that answer doesn't explain how those enzymes manage to distinguish between DNA sequences for "novel traits" that are "beyond a certain point", and DNA sequences for "novel traits" which are not "beyond a certain point". And if those enzymes cannot, in fact, distinguish which DNA-sequences-for-"novel traits" belong on which side of the "certain point" barrier…

“A better inquiry would be about how such complex proteins originated and acquired their various functions.”

um that would be RM/NS...

All that is needed is a tiny bit of catalytic action for it to be selected - it doesn't need to be perfect. In fact, it never will be.

Ever hear of relative fitness, Phil?

A better question would be for you to operationally define functional complexity so one could determine if and how it increases. Please tell us how one could measure it.

By Michael Fugate (not verified) on 27 Jan 2014 #permalink

Ye gods, the fact that basically all biochemists and evolutionary biologists would disagree with his assessment of the shape of sequence space really doesn't seem to faze Sean Pitman at all...

Cubist,

“It’s also true that mutations do happen.”

Of course they do. An enormous amount of medical research is devoted to understanding the results of the ones with noticeable effects.

The point is that the alterations necessary to build complex bio-features can only come from replication errors. So enzymes that check and repair errors are characteristically an obstacle. It is an interesting paradox.

That aside, if you are trying to understand how something like bio-sonar in mammals could develop, there are still a lot of other problematic factors. For instance, any error that might be helpful has to occur in germ cells. So it is very unlikely that a rare mutant would actually be involved in reproduction where there are enormous numbers of candidates.

===

Michael Fugate,

“All that is needed is a tiny bit of catalytic action for it to be selected”

Selected for what?

“Ever hear of relative fitness…?”

Competing phenotypical variants in a population? Is that how replication enzymes developed?

I think functional complexity can be easily observed in replication enzymes.

Phil - operational definition please. Hypothesis, prediction, independent and dependent variables..... Set up the experiment for us Phil if it is so easy.

By Michael Fugate (not verified) on 28 Jan 2014 #permalink

Pitman and his defenders seem to overlook a very basic point about probability arguments. Eventually, you have to put your calculations on the table, and you have to do them right, ie, NOT assume every outcome is equally likely. Otherwise you could with equal justification argue that there is no way that any sports dynasty actually achieved its many championships, the odds of it happening at random being so small. Some chemical arrangements are more likely than others, and this must be taken into account. This I've never seen done, and for good reason: they don't have enough information to do so. The rest is just blather.

By Science Avenger (not verified) on 28 Jan 2014 #permalink

@65 "Ye gods, the fact that basically all biochemists and evolutionary biologists would disagree with his assessment of the shape of sequence space really doesn’t seem to faze Sean Pitman at all…"

Nor does it register with evolution deniers that basically all experts in the areas they reference as problems for evolution disagree with them, ie, physicists do not agree that there are physics problems with evolution, mathematicians do not agree that there are mathematical problems with evolution, etc. These two groups are especially noteworthy because they are smarter than the rest of us (just ask them), and if there really was a problem with evolution from their perspective, no power on earth could shut them up about it. Just ask the cold fusion proponents.

By Science Avenger (not verified) on 28 Jan 2014 #permalink

Michael Fugate,

"operational definition please. Hypothesis, prediction, independent and dependent variables….. Set up the experiment for us"

I’m not proposing any grand experiment. We’re talking about probabilities and evolution, and with that in mind, I’m just noting things that have to be factored in when appraising the mutations side of the equation. Besides, based on your apparent convictions about the capabilities of RM/NS, I would be surprised if you always insist on strict operational definitions and methodology.

Another consideration in the mutations game is fixation. Assuming that a rare beneficial copy error can actually be introduced into a population, it would either be lost altogether, or fixed. Some number of generations will be required in either case. Granting all optimism, and assuming fixation, this increases the time in between incremental steps. In the bio-sonar example, this is a complication because there is a limited amount of available time for development.

Phil; in #70 you replied to Michael that you’re “not proposing any grand experiment” but you do make claims that need experimental validation. Do you have numbers on the rate of errors that slip past “enzymes that check and repair errors” (from #66)?

Certainly the activity of repair enzymes would have to be considered, but so would their failure rates, which are probably sensitive to the kinds of errors they are supposed to fix. They are also probably sensitive to the overall health of the individual they reside in. As would the rate of replication errors in the first place... and on and on. So many questions, so few answers.

Without numbers, though, we don’t know whether these enzymes are a significant obstacle to evolutionary change.

In #70 you also wrote about fixation. I am not aware of any requirement for a mutation to become “fixed” before another mutation arises. Why would there be?

sean s.

By sean samis (not verified) on 29 Jan 2014 #permalink

Phil, no what you are doing is just making stuff up. If you cannot operationally define "functional complexity", then it is a useless combination of words and nothing more.

By Michael Fugate (not verified) on 29 Jan 2014 #permalink

Phil:

So it is very unlikely that a rare mutant would actually be involved in reproduction where there are enormous numbers of candidates....

and

Another consideration in the mutations game is fixation. Assuming that a rare beneficial copy error can actually be introduced into a population, it would either be lost altogether, or fixed. Some number of generations will be required in either case.

AFAIK, these and other factors are included in the work that geneticists do, and the conclusion of that work is that RM+NS is sufficient. IOW Phil, people have already asked your questions, they've studied these issues, they've done experiments such as Lenski's to measure the "final, considering all the factors" rates of change of allele frequency in populations, and the observations coming out of such experiments support the TOE.

Your skepticism of this end result reminds me a bit of Behe's testimony in the Dover trial. As a witness, he told the court all about the unlikelihood of a 2-step or 3-step mutation. And he came up with some very high improbabilities (i.e., extremely low rates of expected occurrence). That's where you are; you have a sense that these are highly improbable sequences of events. But Behe (and you) neglected to think about the other side of the equation, which is the population size and generational time available. When Behe was forced to factor those in, it turned out that (IIRC) the bacteria in one cubic meter of soil would be expected to produce his highly improbable sequence in (IIRC) a year. Each year. IOW his intuitional sense of what could happen in the real world, even if his calculation of improbability was correct, was dramatically wrong. I suspect the same is true for you here; even if your sense of improbability is in the right ballpark, your intuitional sense of how often such an improbable event could occur is dramatically wrong. And we have empirical evidence that folk like Behe are wrong, because scientists like Lenski watch populations of bacteria evolving new genetic sequences which result in novel developmental capabilities, and report that yes, in fact, this does happen and is happening no matter how improbable you intuit it to be.

sean samis,

“Do you have numbers on the rate of errors that slip past ‘enzymes that check and repair errors’?”

Lots of them do. But with the bio-sonar example in mind, I think the only ones you or I would be interested in would be the beneficial ones occurring in germ cells.

“I am not aware of any requirement for a mutation to become “fixed” before another mutation arises. Why would there be?”

They would get lost. 100% fixation is slow and tedious, and not necessarily likely to happen. But it would make every member of a population a possible source for adaptation.

===

Michael,

“what you are doing is just making stuff up. If you cannot operationally define “functional complexity”, then it is a useless combination of words and nothing more.”

So far, I’ve mentioned replication enzymes, the fact that errors that can be passed on have to occur in the germline, and fixation. None of these are made up, and all of them stack the odds against rare beneficial mutations making their way into (whale) genomes.

===

Eric,

“these and other factors are included in the work that geneticists do, and the conclusion of that work is that RM+NS is sufficient”

These factors are often ignored, and rarely, if ever, considered together in assessing development. I will be pleased to read any published work you can link to that proves me wrong.

“people have already asked your questions, they’ve studied these issues, they’ve done experiments such a Lenski’s to measure the “final, considering all the factors” rates of change of allele frequency in population, and the observations coming out of such experiments supports the TOE.”

Fair enough. Would you like me to incorporate Lenski’s numbers in the development of bio-sonar? After 31,500 generations, his culture acquired the ability to utilize citrate in aerobic circumstances. I’ll let you estimate the time to sexual maturity, gestation times, population sizes and infant mortality rates in the most recent supposed whale ancestor that didn’t echolocate, and we’ll scale up and go from there.

===

I haven’t gotten to the more difficult and complex developmental problems that need to be considered.

Phil, what was the definition of functional complexity again? I missed it in your answer.

By Michael Fugate (not verified) on 29 Jan 2014 #permalink

sez phil @74:

sean samis,

“Do you have numbers on the rate of errors that slip past ‘enzymes that check and repair errors’?”

Lots of them do. But with the bio-sonar example in mind, I think the only ones you or I would be interested in would be the beneficial ones occurring in germ cells.

In that case, what's the point of bringing up DNA-'proofreading' mechanisms (polymerase proofreading, mismatch repair, and the like) in the first place? If those 'proofreading' mechanisms were absolutely perfect, then sure, there would be no mutations, and evolution couldn't work.
But those 'proofreading' mechanisms are not perfect. Mutations do manage to 'slip by' them. And given the fact that mutations cannot be distinguished as 'beneficial' or 'deleterious' until after their effect on the organism's survival/fecundity in its environment is demonstrated, there is no reason to think that those 'proofreading' mechanisms are more effective on 'beneficial' mutations than on any others. So… what was your point (if any), again?

I am not aware of any requirement for a mutation to become “fixed” before another mutation arises. Why would there be?

They would get lost. 100% fixation is slow and tedious, and not necessarily likely to happen. But it would make every member of a population a possible source for adaptation.

Hold it. "They would get lost"? 'Would', meaning a 100% certainty? If that's what you're arguing for, you really need to show your work.

Michael,

what you are doing is just making stuff up. If you cannot operationally define “functional complexity”, then it is a useless combination of words and nothing more.

So far, I’ve mentioned replication enzymes, the fact that errors that can be passed on have to occur in the germline, and fixation.

The items on that list do not constitute a definition of 'functional complexity'. Rather, the items on that list are specific thingies which you are offering as distinct instances of whatever-it-is you believe 'functional complexity' to be. In the absence of a good, solid definition of 'functional complexity', how can anyone else confirm whether or not you were correct to identify those specific thingies as distinct instances of 'functional complexity'? Answer: They can't. So unless you're satisfied with 'defining' this 'functional complexity' notion as whatever Phil says it is, you really do have to pony up a good, solid definition of 'functional complexity'.

None of these are made up…

Since nobody claimed they were made up, so what? Do you make a habit of refuting assertions which were never made?

…and all of them stack the odds against rare beneficial mutations making their way into (whale) genomes.

Again: 'DNA proofreading' mechanisms act against all mutations, and there is good reason to believe that said mechanisms cannot distinguish between mutations which eventually prove to be 'beneficial', and mutations which eventually do not prove to be 'beneficial'. So, again: What's your point (if any)?

Would you like me to incorporate Lenski’s numbers in the development of bio-sonar?

Yes, let's do that! It's not much more than a back-of-the-envelope comparison, but I think it's very good for showing just how badly you underestimate the likelihood.

After 31,500 generations, his culture acquired the ability to utilize citrate in aerobic circumstances. I’ll let you estimate the time to sexual maturity, gestation times, population sizes and infant mortality rates in the last most recent supposed whale ancestor that didn’t echolocate, and we’ll scale up and go from there.

Lenski doesn't measure population sizes in terms of number of organisms, so we'll have to let that part of the analysis go by for now. A quick googling tells me that sonar evolution has been going on for at least 40 million years. Taking your 31,000 generations number as a working starting point, that means that as long as cetaceans were having a generation of kids every 1,290 years, they could do it. In order for you to be right and for echolocation evolution in cetaceans to be impossible in the time frame we know it developed, you have to assert that it would require more like 3 million generations to develop it. Or you could assert that cetaceans only give birth once every 1,500 years. :)
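
The arithmetic is easy to reproduce (a sketch; the 31,000-generation figure is your Lenski number, and 13 years is just a plausible guess at cetacean generation time):

    years_available = 40_000_000     # googled lower bound for sonar evolution
    generations_needed = 31_000      # Phil's Lenski benchmark

    # Longest generation time that still fits 31,000 generations:
    print(years_available / generations_needed)   # ~1290 years per generation

    # With a plausible ~13-year cetacean generation time:
    print(years_available / 13)                   # ~3.1 million generations available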

I'm pressed for time, but will respond to recent comments as soon as possible. Sorry for the interruption.

Phil, you wrote in #74 “I think the only ones [errors that slip past repair enzymes] you or I would be interested in would be the beneficial ones occurring in germ cells.”

As Cubist wrote in #76, any admission that these error-correcting enzymes are imperfect makes this whole topic moot. Clearly they are not perfect. (How could they be? They are evolutionary products also!)

And as Cubist also wrote, we cannot determine what mutations are beneficial until long after their occurrence, so we cannot narrow our focus as you suggest.

Similarly, Cubist caught your stumble about “fixation”. There is no requirement for a mutation to be “fixed” before another arises. The process is far more dynamic than you allow for.
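
If it helps, here is a toy Wright-Fisher sketch (one neutral locus, constant population, no selection - all simplifying assumptions) showing both points: a single new neutral mutation usually is lost, fixing only about 1 time in 2N, and nothing stops new mutations from arising while earlier ones drift:

    import random

    # Toy Wright-Fisher drift: track one new neutral allele (1 copy out
    # of 2N) until it is lost or fixed. Theory says P(fixation) = 1/(2N).
    def fixation_rate(n_individuals=100, trials=2000):
        copies = 2 * n_individuals   # 2N gene copies in a diploid population
        fixed = 0
        for _ in range(trials):
            count = 1                # one brand-new mutant copy
            while 0 < count < copies:
                freq = count / copies
                # binomial resampling of the next generation
                count = sum(random.random() < freq for _ in range(copies))
            fixed += (count == copies)
        return fixed / trials

    print(fixation_rate())  # ~0.005, i.e. about 1/(2*100); most mutants are lost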

I am interested in your eventual responses; but I do understand being busy; that is why I didn’t respond earlier.

sean s.

By sean samis (not verified) on 31 Jan 2014 #permalink

Phil,

When you get around to it, here's another thought for you:

Exactly how improbable is it that biosonar developed via RM/NS in whales? The quantitative answer to this question matters. Let's just say for the sake of argument that the probability of such a development is 1 in a trillion. Using eric's analysis above, and assuming that cetacean gestation periods have been relatively constant over the 40 million years of their evolutionary history, we can estimate about 3 million generations of whales during that time. If we further assume (for simplicity) a constant whale population from generation to generation of 100,000 individuals, that represents 300 billion (using the American usage of billion, ie 3E11) individual whales.

Each of these 300 billion whales represents one opportunity for the inheritance of the requisite mutation for the biosonar trait. If the inheritance of that trait has a probability of 1E-12, then the probability that the trait is NOT inherited is (1-1E-12). The probability that the trait is inherited by NONE of the whales would be (1-1E-12)^3E11. This number works out to about 74%. Thus, the probability of SOME whale in 40 million years inheriting the trait of biosonar from a parent in which that mutation occurred would be about 26%, which is not exactly wildly improbable.
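
For anyone who wants to check that arithmetic, here is a minimal Python sketch (the 1E-12 probability and the 3E11 whale count are, again, just my illustrative guesses):

    import math

    p_event = 1e-12     # assumed per-whale probability of the trait arising
    n_trials = 3e11     # ~3 million generations x 100,000 individuals

    # log1p keeps the tiny probability from rounding away
    p_none = math.exp(n_trials * math.log1p(-p_event))
    print(f"{p_none:.3f}")      # ~0.741: the trait never shows up
    print(f"{1 - p_none:.3f}")  # ~0.259: at least one whale gets it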

Obviously there's a lot of guesswork in the above calculation, but the point is that our intuitions about probability are actually not very accurate, especially in the face of large numbers of trials.

Further, this whole discussion is predicated on the fallacy that evolution is directional. That is, while we sit here and wonder about how something as improbable and complex as biosonar occurred, this trait's occurrence was just a contingent one. Given different mutations and different environmental conditions, it's likely that some other trait would be the one we would be discussing right now. Thus, we should not really be calculating how improbable it is that biosonar arose, but rather what the probability of ANY complex trait arising would be. This probability would obviously be higher than the probability of some specific trait, such as biosonar, occurring. I cannot even begin to try to estimate such a probability, since I have no idea how many such traits are theoretically possible.

sez sean t:

Each of these 300 billion whales represents one opportunity for the inheritance of the requisite mutation for the biosonar trait. If the inheritance of that trait has a probability of 1E-12, then the probability that the trait is NOT inherited is (1-1E-12). The probability that the trait is inherited by NONE of the whales would be (1-1E-12)^3E11. This number works out to about 74%. Thus, the probability of SOME whale in 40 million years inheriting the trait of biosonar from a parent in which that mutation occurred would be about 26%, which is not exactly wildly improbable.

Beggin' yer pardon, guv'nor, but… are you sure "inherit" is the appropriate word for the argument you just made? I can, arguendo, accept a guesstimate of 1E-12 as the probability of a particular mutation occurring—but once said mutation has occurred, the probability of said mutation being inherited by descendants of the critter what carries said mutation is another matter entirely. Assuming that said mutation doesn't render its carrier effectively sterile, which would of course render the whole question of 'descendants' moot, it seems to me that for any one offspring of the mutation-carrier, the probability of said mutation being passed along to that specific offspring should be about 50%. Certainly not 1E-12!

Sorry for being unclear. The relevant event for which we should be trying to assign a probability is actually a compound event, namely the conjunction of the following events:

1. The appropriate mutation occurs
2. This mutation is passed along to offspring.
3. The offspring pass it along to their offspring, etc.

I make no representation as to whether or not the 1E-12 I guessed at is an accurate guess, but certainly the probability of the compound event I outline above should be much less than 50%. I was not intending to talk about a conditional probability here; that would open up a whole new can of counterintuitive worms (see the Monty Hall Problem, for instance).

Phil, here is why your argument (and Sean P's) from functional complexity is not science. I am going to introduce a new concept - let's call it stochastic diversity. I am going to tell you that the high stochastic diversity of the insulin gene shows that intelligent design is so improbable that it proves no intelligence was involved. I am never going to define the term nor tell you how to calculate it. If you try to fathom what it is and make suggestions, I will reply that you are not even close to understanding the concept - it must be that designerism is clouding your judgement. And on and on, repeating the same tired arguments, but never ever making the concept testable.....

By Michael Fugate (not verified) on 31 Jan 2014 #permalink

Stochastic diversity is great fun, MF. I'm totally going to steal it.

Michael Fugate,

“what was the definition of functional complexity again, I missed it in you answer?”

Okay, let’s try this, the noun first. Anything with a detectable level of complicatedness could qualify as complex. This would include brick walls, math problems, furniture, light bulbs, crystals, parades or perhaps even galaxies. But pertaining to biology, it would be an assembly, like proteins, genes, organs, blood cells, or even ant colonies and orca pods.

Now, for the adjective, I can’t really think of anything biological that isn’t ultimately about discriminating roles and/or deliberation that serves to maintain or perpetuate life.

So, as it pertains to the logical study of living things, I would define functional complexity as ‘purposeful organization’.

====

Cubist,

“what’s the point of bringing up DNA-’proofreading’ mechanisms (i.e., restriction enzymes & etc) in the first place?”

Jason’s post is titled ‘Probability and Evolution’, so anything that affects the odds is pertinent. Replication enzymes reduce mutations. They function in the interest of fidelity, and in doing so, they either prohibit or inhibit accidental changes. At the very baseline of self-replicating life, nature is hostile to things evolving.

There is easy evidence available concerning RE effectiveness. There are lots of plants and animals that exhibit only very modest changes after tens or even hundreds of millions of supposed years. Gould and Eldredge, noting that stasis is the norm, devised a theory called punctuated equilibrium. This was not meant to describe or explain the evidence of the fossil record. It was an attempt to manage the fact that what evidence is there is not particularly supportive of evolutionary theory.
-
“Hold it. “They would get lost”? ‘Would’, meaning a 100% certainty? If that’s what you’re arguing for, you really need to show your work.”

No, not necessarily 100% certainty, though fixation is expressed in those terms:

“When an allele reaches a frequency of 1 (100%) it is said to be "fixed" in the population and when an allele reaches a frequency of 0 (0%) it is lost. Once an allele becomes fixed, genetic drift comes to a halt, and the allele frequency cannot change unless a new allele is introduced in the population via mutation or gene flow. Thus even while genetic drift is a random, directionless process, it acts to eliminate genetic variation over time.”
http://en.wikipedia.org/wiki/Genetic_drift#cite_note-14

The point is that fixation, which evolution depends on, takes time. And it is by no means a guaranteed outcome. Again, it is a factor in the probability of evolutionary development.
-
“Since nobody claimed they were made up”

See post #72

===

eric,

“A quick googling tells me that sonar evolution has been going on for at least 40 million years.”

“Some 30 million years ago, Ganges river dolphins diverged from other toothed whales, making them one of the oldest species of aquatic mammals that use echolocation, or biosonar…” http://www.sciencedaily.com/releases/2013/04/130404152625.htm

This means that echolocation would have been essentially complete before the divergence. That would stand to reason if this is correct:

“Based on past phylogenies, it has been found that the evolution of odontocetes is monophyletic, suggesting that echolocation evolved only once 36 to 34 million years ago.”
http://en.wikipedia.org/wiki/Animal_echolocation#Toothed_whales

Information on dating and development is conflicting and confusing. There are also anomalous reports that really complicate the situation, like this:

“…the fossilized archaeocete jawbone found in February dates back 49 million years. In evolutionary terms, that's not far off from the fossils of even older proto-whales from 53 million years ago…This jawbone, in contrast, belongs to the Basilosauridae group of fully aquatic whales…
http://www.nbcnews.com/id/44867222/ns/technology_and_science-science/#…

Basilosaurids and Dorudontids were supposed to be around 38 million years ago, but did not echolocate. Your timeframe needs to be compressed if you want to apply E coli adaptation to bio-sonar development.

More when I have time.

Phil,

You're still misunderstanding the timeframes. Let's say for the sake of argument that echolocation evolved in whales 38 million years ago. How does that affect anything? Did the first living organism appear 38 million years ago? At least a couple of billion years passed between the first appearance of life on earth and the appearance of echolocation in whales, whether that appearance occurred 38 million, 40 million, 50 million or even 100 million years ago. Development of echolocation in ANY species ancestral to modern whales could have led to the observed echolocation system in modern whales. The appropriate time frame is the time from the appearance of the first cell to the appearance of echolocation. That time is measured in billions of years.

Even if the time were shorter, however, so what? Improbable events happen all the time. The probability of the sequence of numbers drawn in the last 50 Powerball lottery drawings was an incredibly small value, for instance. Are we to conclude that this sequence of numbers did not occur? The analogy here is good; the problem is generally a misunderstanding of what is contingent vs. what is necessary. The probability of a modern whale having the specific sequence of DNA bases in its genome is vanishingly small, akin to the probability of the specific sequence of Powerball numbers. However, we cannot look on this as a surprise; the whale must inevitably have SOME sequence of DNA bases in its genome; all would be equally unlikely.

You simply focus on the likelihood of the whale having the particular sequence it does, point to that sequence's small likelihood, and conclude that since it's so small, there must be design. I could use the same unsound argument to conclude that the Powerball lottery is rigged. Just as there must have been SOME sequence of Powerball numbers that was drawn, there must have been SOME sequence of DNA bases in the whale's genome. No matter what traits the whale developed, we would be marvelling at how unlikely it was that these traits developed as they did.

Phil,

BTW, there is one BIG difference between the Powerball numbers and the genome in my analogy. The Powerball lacks a selection mechanism, so all possible results are equally likely. That is not true in the biological case; not all DNA sequences are equally likely; most likely the vast majority of possible sequences would lead to a non-viable organism. Therefore, given a RM/NS model for biological evolution, we would expect the results to appear distinctly non-random.

Phil:

“what was the definition of functional complexity again, I missed it in you answer?”

Okay, let’s try this, the noun first. Anything with a detectable level of complicatedness could qualify as complex.

Okay, how do I detect complicatedness? How do I measure it?

as it pertains to the logical study of living things, I would define functional complexity as ‘purposeful organization’.

How do I detect purposefulness as a factor separate from complicatedness or organization?

You are doing exactly what Michael Fugate illustrated; you're creating terms but not defining them in a way that allows others to independently calculate their value for various cases or examples. What's the complicatedness of a brick wall and more importantly, how do I calculate it?

Now it seems to me like you are thinking of something like Shannon entropy. However, there's a problem using that for ID: natural processes can in fact increase Shannon entropy. A simple duplication (for example GGA -> GGAGGA) will do that.

it has been found that the evolution of odontocetes is monophyletic, suggesting that echolocation evolved only once 36 to 34 million years ago.”
...Your timeframe needs to be compressed if you want to apply E coli adaptation to bio-sonar development.

Compress it by a factor of 10 (4MY evolution) and there's still enough time for 30k generations with realistic reproduction times. Compress it by a factor of 100 and there's STILL enough time. That is how badly your 'not enough time' intuition is off.

Phil,
What eric said.

By Michael Fugate (not verified) on 03 Feb 2014 #permalink

Sean T,

“The appropriate time frame is the time between the appearance of the first cell to the appearance of echolocation.”

Sean, in the evolutionary scheme of things, there was a time when every biological specialty did not exist, and had to start from genetic scratch.

===

eric,

“you’re creating terms but not defining them in a way that allows others to independently calculate their value”

What terms? Functional complexity is irrelevant to assessing the probability of evolution occurring by way of random errors. Mutations, replication enzymes, germ cell statistics and the necessity of fixation are all established science.

“How do I detect purposefullness as a factor separate from complicatedness or organization?”

Well, you could arrange a long chain of amino acids into some aesthetically appealing and complex sequence. But it would probably not be a functioning protein.

“Compress it by a factor of 10 (4MY evolution) and there’s still enough time for 30k generations with realistic reproduction times. Compress it by a factor of 100 and there’s STILL enough time.”

Lenski’s bacteria achieved a significant (if rather modest) mutation in 30,000 generations, rounded down. To keep things simple, let’s say the generation time for all the whale ancestors was one year. That means either 40 million, 4 million or 4 hundred thousand generations. You can choose.

If successive generations of whales gained their own useful mutation at the same rate as the E. coli, they would accumulate 1,333, 133 or 13 such mutations, respectively. So what you have to do is estimate how many mutations it would take to build each of the necessary components involved in echolocation. I’ll leave it to you to list the individual elements, and you can decide for yourself how many errors it would take for each one to develop.
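
In code, my back-of-the-envelope looks like this (a sketch; the one-year generation time and the serial one-innovation-per-30,000-generations rate are my own assumptions, not measured values):

    # Scaling: one Lenski-scale innovation per 30,000 generations,
    # at one generation per year (both assumptions, not measurements).
    generations_per_innovation = 30_000
    for years in (40_000_000, 4_000_000, 400_000):
        innovations = years // generations_per_innovation
        print(years, innovations)   # 1333, 133, and 13 respectively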

This brings me to the next complicating factor in the mutations game.

Every accidental forward step has to complement the last one. The incremental, random errors would have to occur in a developmental sequence. This gets really nasty because the mutations would have to keep happening in the same genes, which is not likely in a genome of billions of nucleotides and thousands of genes.

Phil:

Functional complexity is irrelevant to assessing the probability of evolution occurring by way of random errors. Mutations, replication enzymes, germ cell statistics and the necessity of fixation are all established science.

Okay then, calculate the probability of evolution occurring by way of random errors. I'll give you a real example (HT to Blaine from the "Branch Discusses Falsifiability" thread). Figure 4A from the link shows the ancestral FLeN residue 200* to be: DRFLDVALQY. One of the new critters with the multiple flagella has, instead: DRLLDVALQY. Now we "evolutionists" would say that this is a result of random mutation and not design. What was the prior probability of this mutation occurring by random mutation? Please show your work: I am not merely interested in your probability but in how you calculate it.

[Eric] “How do I detect purposefulness as a factor separate from complicatedness or organization?”

[Phil] Well, you could arrange a long chain of amino acids into some aesthetically appealing and complex sequence. But it would probably not be a functioning protein.

No, that is not detecting purposefulness, that's having a purpose and carrying it out. I want to know your method for looking at some genetic code or developmental feature and detecting purposefulness. I want to be able to independently reproduce your line of reasoning, from your assumptions through to your conclusions. So pick a biological example we would likely disagree on (not a shoe or a car or whatever, but whale sonar or similar) and show me how you detect purposefulness in it.

Phil,

Point taken, but there is no reason that the original organism that first displayed echolocation had to be something we would recognize as a modern whale. It could well have evolved in a "proto-whale" organism. That does greatly extend the timeframe.

In any case, time is really not particularly relevant. The development of echolocation could very well have been just a wildly improbable event that happened to actually occur. That does happen, you know. Again, development of complex traits is more akin to observation of a sequence of lottery numbers. Each individual result is improbable. The sequence of these results is therefore wildly improbable, yet we are not surprised by the fact that SOME sequence actually occurred.

Further, the development of biological systems is much more non-random than the results of any lottery. Just for concreteness, let's assume that a lottery draws 6 random numbers between 1 and 40. There would be 3,838,380 combinations of numbers, each equally likely. Any sequence of 50 such drawings would yield a result with a probability of occurrence of about 6.2E-330, a wildly improbable event. Obviously we are unsurprised by the occurrence of this event. However, let's say we observed the event that the first drawing was duplicated by a later drawing. We would rightfully be very surprised at that, since the probability that this would occur would be 0.000013.

Now, if we introduce a non-random selection element into this, we can see that it's much less surprising that we would see the "surprising" result. In the case of the lottery example, let's introduce the selection rule that any drawing with any number greater than 10 is an invalid drawing. Let's then consider the sequence of 50 valid drawings under this selection rule. The probability that the first valid drawing is duplicated would then be 0.208. The selection rule increased the probability of the occurrence of this event by a factor of about 16,300!
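
Here is a quick Python check of those numbers (a sketch of the same hypothetical lottery):

    from math import comb

    combos_all = comb(40, 6)     # 3,838,380 equally likely draws
    combos_valid = comb(10, 6)   # 210 draws that survive the selection rule

    # P(at least one of the 49 later drawings duplicates the first):
    p_random = 1 - (1 - 1 / combos_all) ** 49
    p_selected = 1 - (1 - 1 / combos_valid) ** 49

    print(f"{p_random:.6f}")               # ~0.000013
    print(f"{p_selected:.3f}")             # ~0.208
    print(f"{p_selected / p_random:.0f}")  # ~16,300-fold boost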

By analogy, you must keep in mind that biological evolution is not completely random. It operates much like my above example; the underlying basis is random, but the process as a whole is not entirely random. The result, much like in my lottery example, is that the probability of a given event is magnified relative to the probability that it will occur in purely random fashion.

Moreover, while we might marvel at the low probability of occurrence of something like the mutations needed for echolocation, we should keep in mind that whale evolution was not directed toward the goal of developing echolocation. If the wrong mutations had occurred, or the organisms carrying those original mutations had failed to reproduce, then whales would simply not have echolocation. However, it's very likely that whatever would have resulted would have SOME system that you would claim was improbable to have developed through evolution. The relevant probability, then, is not the probability of the development of echolocation in whales, but rather the probability of whales developing ANY complex system. That probability is certainly higher, but I have no idea how anyone could even estimate it, since we have no clue what evolutionary paths were possible for whales.

there is no reason that the original organism that first displayed echolocation had to be something we would recognize as a modern whale. It could well have evolved in a “proto-whale” organism. That does greatly extend the timeframe.

As well as reducing the expected number of evolutionary steps. After all, I think we would probably all concede that whatever land critter was evolving to live under water started off being able to make sounds, hear sounds through auditory organs, and probably even "feel" some sounds just through bone and muscle structure (the same way we can). So the question is not really how sonar developed from no sense at all, but how it developed into a directional and distance-sensitive sense from a pre-existing set of nondirectional send-and-receive capabilities.

eric,

“…the link shows the ancestral FLeN residue 200* to be: DRFLDVALQY. One of the new critters with the multiple flagella has, instead: DRLLDVALQY. Now we “evolutionists” would say that this is a result of random mutation and not design.”

A damaged FLeN regulator gene results in an abnormal number of flagella which immobilizes the bacteria? No, I wouldn’t call that design either. But it is a very good example of the creative power of mutations.

-

“pick a biological example we would likely disagree on (not a shoe or a car or whatever, but whale sonar or similar) and show me how you detect purposefullness in it.”

Helicase. It specifically serves to unwind and split DNA/RNA molecules during replication. That is the purpose.

===

Sean T,

“Moreover, while we might marvel at the low probability of occurrence of something like the mutations needed for echolocation, we should keep in mind that whale evolution was not directed toward the goal of developing echolocation.”

But of course it was.

“ "When the early toothed whales began to cross the open ocean, they found this incredibly rich source of food surfacing around them every night, bumping into them," said Lindberg, former director and now a curator in UC Berkeley's Museum of Paleontology. "This set the stage for the evolution of the more sophisticated biosonar system that their descendents use today to hunt squids at depth." “
http://www.berkeley.edu/news/media/releases/2007/09/05_WhaleSonar.shtml

That’s how evolution works. Bumping into your food is very heavy, frustrating selection pressure.

===

eric,

“So the question is not really how sonar developed from no sense at all, but how it developed into a directional and distance-sensitive sense from a pre-existing set of nondirectional send-and-receive capabilities.”

Yeah, the ears had to close over and essentially become an actual sight organ. This brings me to another developmental consideration.

All the distinct subsystems had to be sculpted into genetic shape concurrently. Lots of gene families being incrementally produced or damaged toward specific functions at the same time. Skull alterations, transduction apparatus, matching vocal and hearing mechanisms, impedance and frequency control, the imaging center in the brain, fantastic neural accommodations, etc. With the errors being random, and with lots of unhelpful errors occurring in between the good ones, it must have been a very serious and confused mess till the whole system was accidentally integrated.

Of course if you look at the skull alterations of cetaceans, you will see that they make absolutely no sense under the design hypothesis. Why use the skull of a terrestrial mammal to make an aquatic one? Same exact bones - and in order to get the nares on top of the head - all the bones are shifted caudally so that the nasals are oriented vertically above and between the eyes rather than horizontally far anterior to the eyes and the frontals squashed into almost oblivion (but still there). It is as if you wanted to move a vent pipe that exited at the floor to exit at the ceiling and instead of cutting a hole in the ceiling and moving the pipe to run to the new hole - you shoved the pipe up through the existing wall all the way to the ceiling and plastered over the resulting 10 foot gap.

By Michael Fugate (not verified) on 05 Feb 2014 #permalink

Michael Fugate,

I don't really follow your analysis about the skull, and why you feel it is a poor design. I've never heard that particular criticism before.

But what I also don't understand is how mutations would have altered the skull like that. I have the same problem with reptile jawbones migrating towards the middle ear and being transformed by replication errors into the tiny ossicles in mammals.

This touches on another profound problem with random mutations, and one that isn’t often discussed in the context of development. For alterations like those above to actually happen would require more than just a gene being changed to express a new or modified protein. There has to be precise regulation of the expression. This means that the regulation sequences elsewhere in the genome would have to develop at the same time to result in improved morphological fitness.

One of the most intriguing discoveries of the human genome project is how phenomenally complex gene regulation is. The number of genes was much lower than expected, but they can code for multiple proteins. So the complexity is not as much in the genes as it is in the regulation and expression.

Things like this really strain the notion that accidents can result in organization and function. For all the reasons I’ve listed, my personal conclusion is that current theoretical evolution does not have a plausible core production mechanism. RM+NS is simply not believable.

Phil in response to my request to calculate a probability of a mutation:

A damaged FLeN regulator gene results in an abnormal number of flagella which immobilizes the bacteria? No, I wouldn’t call that design either. But it is a very good example of the creative power of mutations.

I asked you to calculate a probability. I asked you to show me how you arrived at the conclusion that RM+NS leading to useful features is improbable. You either cannot or choose not to do that.

On calculating purposefulness:

[eric] “pick a biological example we would likely disagree on (not a shoe or a car or whatever, but whale sonar or similar) and show me how you detect purposefulness in it.”
[phil]
Helicase. It specifically serves to unwind and split DNA/RNA molecules during replication. That is the purpose.

Same deal; I ask you to tell me how to determine purposefulness and you simply quote me a purpose. A conclusion is not a methodology. Your claims are irreproducible. You either have no method or are unwilling to share it. Either way, there's no science here.

Phil,

Again, you've missed the point. There is no direction or necessity to evolution. The evolution of echolocation in whales was the result of a random occurrence, namely the mutation or set of mutations making it possible. There was no necessity for whales to have developed echolocation. It is possible that some other system for locating their prey could have evolved (maybe heat sensing, for instance), or it's also possible that the course of whale evolution might have developed no such system at all. Of course, if it were the latter, we most likely would not be discussing whale evolution, as that line would have gone extinct, as is the case for the vast majority of evolutionary lines.

Phil,

It is tempting to look at modern biological systems and conclude that these are optimal for a given environment, but that is fallacious. What's the optimality, for instance, in a system by which food that is eaten can cause death of an organism by blocking the airway? What's the optimality of having a blind spot right in the most sensitive area of an organism's visual field? What's the optimality of having an organ that performs no discernable function, but can become infected and kill the organism? What's the optimality of lacking the genetic structure which codes for a metabolic pathway allowing the synthesis of vitamin C, especially given that other organisms possess this precise pathway?

Those are just a few suboptimalities in biological systems present in ONE organism, namely humans, in case it wasn't clear. There are certainly other examples in other organisms. Why would a directed, designed process lead to such suboptimalities? It's easy to see why an evolutionary process would do so. Evolution does NOT produce optimal systems, only incremental improvements on pre-existing ones. In technical jargon, evolution finds the local maxima of the optimality function, not the global maximum. We would expect otherwise from an intelligently designed system, especially one with a designer much more intelligent than we are, which it would have to be since we have been unable to design even a simple prokaryotic cell.
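
The local-versus-global point is easy to see with a toy hill-climber (a sketch; the two-peak fitness landscape below is made up purely for illustration):

    from math import exp

    # Hypothetical two-peak fitness landscape: a local peak (height 1)
    # at x=0 and a higher global peak (height 2) at x=5, with a valley
    # between them.
    def fitness(x):
        return exp(-x ** 2) + 2 * exp(-(x - 5) ** 2)

    # Accept only single small steps that increase fitness -- the analog
    # of selection acting on incremental variation.
    def hill_climb(x, step=0.1, iters=1000):
        for _ in range(iters):
            if fitness(x + step) > fitness(x):
                x += step
            elif fitness(x - step) > fitness(x):
                x -= step
            else:
                break   # stuck: no one-step improvement exists
        return x

    print(round(hill_climb(1.0), 1))  # ~0.0: trapped on the local peak
    print(round(hill_climb(3.5), 1))  # ~5.0: reaches the global peak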

Phil, it is not bad design, it is just not "intelligent" design. It works just fine, but it comes about in a way very different from how any known intelligence would design something. Come on, surely you can see this. ID is all about analogy to human design, and organisms are in no way designed like humans would do it. For instance, no human would add a pelvis and random pieces of femur to a sirenian or a cetacean just to mimic a tetrapod. Why would any intelligent designer stick them in? I would suggest a good comparative vertebrate course with a lab - then you could see what is going on. Some old-school natural history museums still have tons of skeletons to compare - the one in Dublin is fabulous.

By Michael Fugate (not verified) on 06 Feb 2014 #permalink

eric,

“I asked you to calculate a probability. I asked you to show me how you arrived at the conclusion that RM+NS leading to useful features is improbable. You either cannot or choose not to do that.”

I’ve listed several reasonable factors concerning mutations. You can dispute the efficacy of replication enzymes, and argue that beneficial mutations are quite common, that they often occur in germ cells, and that those cells usually win the race. You can claim that fixation is all but guaranteed, and that the next replication error will further enhance the previous error, and that system components can all accidentally develop in parallel and be useful every step of the way. You could even say that regulation and control are inconsequential considerations, and easy to come by.

But I wouldn’t accept such claims about accidents for the same reason I wouldn’t believe you if you said you can run a mile in 38 seconds. It doesn’t require complicated calculations to determine improbability. The mutations game, in my estimation, is a bike with square wheels. There is a good reason that the literature usually deals with mutations in general terms, avoiding the fact that they are failures. It is better to emphasize selection, just say that things evolve, or build great fantasies without getting into trouble by mentioning mutations at all, like this:
http://www.talkorigins.org/faqs/bombardier.html

===

Sean T,

“There is no direction or necessity to evolution”

Sean, pardon me for noticing, but this is standard, canned, evolutionary jargon. The articles and papers are loaded with claims about selection pressure, which has become a less-than-candid way of saying that the environment will cause necessary mutations to happen.

“What’s the optimality of having an organ that performs no discernable function, but can become infected and kill the organism?”

The appendix? This is old news… http://www.scientificamerican.com/article/what-is-the-function-of-t/

===

Michael Fugate,

“no human would add a pelvis and random pieces of femur to a sirenian or a cetacean just to mimic a tetrapod”

No, they wouldn’t, but I don’t trust that such things are vestigial considering spinal detachment and reattachment to other muscles. It is hard to make a case for mutations and selection being able to do that, but not being able to completely eliminate supposedly useless vestiges or convert them instead of developing a tail from scratch. Too many evolutionary icons, like junk DNA, have crashed and burned with time.

No Phil, you misunderstand the concept of selection pressure. Selection pressure refers to the fact that out of the entire set of POSSIBLE mutations, only a certain subset would be likely to become fixed in the population, namely those mutations that confer traits that lead to better survivability. It certainly does not mean that any particular mutation or subset of possible mutations MUST occur.

The confusion arises precisely because most organisms have gone extinct. Since that is the case, we have difficulty studying the evolutionary paths of most organisms. It's much easier to study the organisms that have survived. If all you study are the successes, it's easy to start to think that mutations leading to success have occurred by necessity. Nothing could be further from the truth; I don't recall the exact number, but I believe I've seen estimates that over 90% of all species that ever existed have gone extinct.

Arguing that there is some driving force causing the occurrence of mutations necessary for organisms to survive is akin to arguing that most exoplanets are Jupiter-sized or bigger and close to their parent stars. That's not true either; it's just that such planets are by far the easiest ones to detect, so they dominated the early discoveries.

Phil - your fundamental misunderstanding of evolution is causing you to make all kinds of bogus assertions. Please read about the difference between relative fitness and absolute fitness for a start.

By Michael Fugate (not verified) on 07 Feb 2014 #permalink

Sean T,

“you misunderstand the concept of selection pressure”

I understand it. I’m just pointing out that the concept is abused. It leads people to believe, perhaps unconsciously, that in a system relying solely on chance, situation and circumstance can cause new features to develop. It is a subtle mental shift that results from the constant emphasis on selection, to the neglect of mutations and probability. The article on echolocation I linked to above shows this very well.

This well-known paper includes the following statement:

“Collectively, these results strongly suggest that a genomic region (estimated at ∼12 kbp) containing the LdSAS-B gene and its immediate neighbor sequences (including LdCR1-3) was duplicated and translocated to a site between Synuclein and LIM domain binding 3b genes; from this, the primordial AFPIII gene evolved, and the large AFPIII locus arose from in situ gene family expansion under selection pressure from polar sea-level glaciation.”
http://www.pnas.org/content/107/50/21593.full

In other words, random DNA replication error(s) accidentally duplicated and moved a random region to a suitable random genomic neighborhood where a gene family evolved and arose because of more random mutations that occurred because of cold water selection pressure.

In terms of probability, this is absurd. For the life of me, I can’t understand why anyone would buy into such fairy tales.

"In other words, random DNA replication error(s) accidentally duplicated and moved a random region to a suitable random genomic neighborhood where a gene family evolved and arose because of more random mutations that occurred because of cold water selection pressure."

Happens all the time - gene duplications a dime a dozen.

By Michael Fugate (not verified) on 08 Feb 2014 #permalink

Michael,

"Happens all the time – gene duplications a dime a dozen"

It’s an anesthetic for evolutionary theory. Having never adequately explained the accidental origin of genes in the first place, theorists find it easier to imagine one being erroneously copied and modified than to imagine a new one being randomly assembled from nucleotide scratch.

Phil:

I’ve listed several reasonable factors concerning mutations.

Yes, but you've never told me how you reach your conclusion that this is low probability.

Look, I am not asking you to do anything new. I'm asking you how you got to a conclusion you've already stated you have. If you can't describe the methodology you used to arrive at a conclusion you've already reached, then I gotta think you don't actually have one. You're using gut instinct, and your support of ID amounts to nothing more than a vague "it makes more sense to me."

Phil, your comments have less and less substance all the time. You've been asked for clear, coherent definitions of your terms, you've been asked for calculations of the probabilities you keep vaguely referencing, etc. And all you're offering is unsupported assertions that contradict the state of scientific knowledge, and arguments from personal incredulity. The patience and politeness of the other commenters here is pretty remarkable.

I seriously doubt anything can penetrate your shell of denial at this point, but to reiterate what others have said, you really need to educate yourself more on the sheer scale of what's going on here - that seems to be one of your key misunderstandings. When you're talking about millions of years, and a population comprising thousands or millions of individuals in any one generation, and then add in how utterly common genetic mutations are, you get a different understanding of the odds of useful mutations cropping up and becoming fixed in the population.

By Michael Wells (not verified) on 08 Feb 2014 #permalink

eric,

“but you’ve never told me how you reach your conclusion that this is low probability.”

The things I’ve mentioned obviously do not support the idea of complex systems being produced by random errors. Perhaps you can tell me why I should reach any other conclusion, or why you would. I’m happy to listen to any objection you might have.

As I see it, two portraits have been painted of mutations. One is rosy and optimistic, commissioned exclusively by and for evolutionary theory. The other is dreadful and ugly, but is easily confirmed as accurate because in real life, mutations are about abnormality, disease and death.

===

Michael Wells,

“When you’re talking about millions of years, and a population comprising thousands or millions of individuals in any one generation and then add in how utterly common genetic mutations are, you get a different understanding of the odds of useful mutations cropping up and becoming fixed in the population”

Larger populations just mean lower probabilities.

“So eventually, a given allele will eventually become fixed in a population, or go extinct, the latter being the more likely fate. Indeed mathematical models show that a neutral allele arising by mutation has a very low probability of becoming fixed in a population; the larger the population, the lower the probability of fixation.”
http://plato.stanford.edu/entries/population-genetics/

See, this is what I'm talking about... "in real life, mutations are about abnormality, disease and death." Do you have anything to support this blithe contradiction of the current state of genetics knowledge? Most mutations are neutral, a smaller number are harmful, and a still smaller number beneficial. The harmful ones generally weed themselves out of the population (for obvious reasons); the beneficial ones are given a chance to become fixed.

"Larger populations just mean lower probabilities." No. It's right there in your cite, Phil. "NEUTRAL alleles" are the subject of discussion in the paragraph you quote. Beneficial mutations are another story. Looking at the link, I see: "Our discussion has focused exclusively on deleterious mutations, i.e. ones which reduce the fitness of their host organism. This may seem odd, given that beneficial mutations play so crucial a role in the evolutionary process... If a gene is beneficial, natural selection is likely to be the major determinant of its equilibrium frequency..."

The phenomenon of evolution denialists quoting sources that actually contradict their claims is nothing new, of course. It's a particularly stark example of the Dunning-Kruger effect. What never ceases to amaze me is the hypocrisy in this behavior - you're perfectly willing to cite the bits of scientific sources that you think support your position, and then casually dismiss the same sources as titanically deluded or fraudulent in those bits where they don't.

By Michael Wells (not verified) on 09 Feb 2014 #permalink

As I see it, two portraits have been painted of mutations. One is rosy and optimistic, commissioned exclusively by and for evolutionary theory. The other is dreadful and ugly, but is easily confirmed as accurate because in real life, mutations are about abnormality, disease and death.

Why are you excluding the possibility of beneficial mutations? Ever heard of ApoA-1 Milano, for example?

Larger populations just mean lower probabilities.

Huh?

the larger the population, the lower the probability of fixation.

Aah! You seem to be confusing at least two things. Large populations, or many populations, increase the probability of a particular mutation taking place. Increasing the size of a random-breeding population reduces the probability of a neutral mutation becoming fixed.
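A minimal sketch of those two opposing effects, using the textbook Wright-Fisher figures (fixation probability 1/(2N) for a new neutral mutation, roughly 2N*mu new mutants of a given kind per generation; the rate mu below is just a placeholder):

# How population size N pulls the two probabilities in opposite directions.
mu = 1e-8  # placeholder per-generation mutation rate for one target site

for N in (1_000, 100_000, 10_000_000):
    p_fix = 1 / (2 * N)       # chance a single new neutral mutant fixes
    new_mutants = 2 * N * mu  # expected new mutants per generation
    # Their product, the neutral substitution rate, is just mu for every N.
    print(N, p_fix, new_mutants, p_fix * new_mutants)

The product of the two is the same for every N, which is why "larger populations just mean lower probabilities" gets the neutral picture only half right.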

By Richard Simons (not verified) on 09 Feb 2014 #permalink

Perhaps it is social conservatism's a priori conclusion that all change is bad that is getting in Phil's way. When people long for the good old days, I always ask: when was that?

The problem with fixation: in small populations, deleterious mutations are more likely to be fixed; in large populations, beneficial ones are. It is drift v. selection in a nutshell.

By Michael Fugate (not verified) on 09 Feb 2014 #permalink

Sorry for the delayed response.

Michael Wells,

“Do you have anything to support this blithe contradiction of the current state of genetics knowledge? Most mutations are neutral, a smaller number are harmful, and a still smaller number beneficial.”

The overwhelming majority of mutations that have an effect are deleterious. It doesn’t take much of a search to determine that enormous amounts of research involve mutations causing problems. The point is about the extreme rarity of beneficial mutations compared to the ones people actually notice.

“ “NEUTRAL alleles” are the subject of discussion in the paragraph you quote. Beneficial mutations are another story.”

Another story, but not significantly different. There are numerous articles/papers on the probabilities of fixation for beneficial mutations, but there isn’t really much data about them. Not only are they very rare, they are difficult to identify. There are also lots of variables involved in fixation. If you can find something you find encouraging, I’ll be happy to read it.

That said, it is pretty obvious that beneficial mutations are not a strong influence. The list of beneficial ones is always about such things as HIV resistance, sickle cell/malaria and the one Richard mentioned. These are not the kind of events that are going to result in eyes or bioluminescence, or put 40,000 muscles and everything they require in an elephant’s trunk.

I noted some of my objections to mutations being able to produce complex systems and coincidental systems above in posts 61, 66, 70, 90, 94 and 96. They are reasonable, and simple enough for any interested high school student to understand. But a science teacher trying to float standard evolutionary theory will not have an easy time dealing with them. Feel free to address them if you’d like.

===

Richard Simons,

“Why are you excluding the possibility of beneficial mutations?”

I don’t. I question the idea that they can do what they are credited with doing. I acknowledge adaptation, even radical adaptation because there is no hiding from the data (though I would challenge the interpretation of the data).

Look at what there is: an impoverished list of tepid examples. Look at what is necessary: billions of replication errors producing millions of functional alterations, and thousands of role-specific proteins.

===

Michael Fugate,

“Perhaps it is social conservatism’s a priori conclusion all change is bad that is getting in Phil’s way.”

No, it’s just probabilities.

Phil -

>>"The overwhelming majority of mutations that have an effect are deleterious. It doesn’t take much of a search to determine that enormous amounts of research involve mutations causing problems. The point is about the extreme rarity of beneficial mutations compared to the ones people actually notice."

Nothing you say there contradicts anything I said. But you apparently think it does, in some very significant way. Why is this?

>>"Another story, but not significantly different... it is pretty obvious that beneficial mutations are not a strong influence."

Says you. The actual scientists who study this stuff for their entire careers say otherwise. I repeat, do you have anything to support this blithe contradiction of the current state of genetics knowledge? Besides your say-so?

>>"These are not the kind of events that are going to result in eyes or bioluminescence, or put 40,000 muscles and everything they require in an elephant’s trunk."

And when I take a step, I only cover a couple of feet. That's not the kind of method that's going to get me from Michigan to Florida.

>>"If you can find something you find encouraging, I’ll be happy to read it."

[chortle] No, it doesn't work like that. You're the one proposing you know that commonplaces of the science of population genetics are completely wrong. The burden of proof is on you. Show us your research, or the source from which you get these groundbreaking discoveries.

By Michael Wells (not verified) on 10 Feb 2014 #permalink

Can't be probabilities, Phil. We have pointed out over and over why it can't be - so it must be something else.

By Michael Fugate (not verified) on 10 Feb 2014 #permalink

[eric]“but you’ve never told me how you reach your conclusion that this is low probability.”

[phil]The things I’ve mentioned obviously do not support the idea of complex systems being produced by random errors.

No, this is not "obvious." That is precisely the problem - you are assuming what you are trying to prove. You are supposed to be showing how you've determined that complex systems aren't produced by random mutation, and instead you're just asserting that it can't be.

Perhaps you can tell me why I should reach any other conclusion, or why you would. I’m happy to listen to any objection you might have.

You should not reach any conclusion at all - neither that it's probable nor improbable - if you don't have concrete data about the likelihood of the sequence of events and a line of reasoning connecting that data together. What's the probability of a die with an unknown number of sides, each with an unknown number on it, rolling a "6"? The proper answer is: I don't know. It's indeterminate.

As far as I can tell, you don't have either component: neither concrete data about the probability of any sequence of events, nor a reproducible, clear method of linking various probabilities together. It looks very much to me like your entire argument is one of ignorance: saying "obviously do not support" is just other words for "it seems to me that they don't support." You don't understand how it could happen, so you are intuiting a low probability to it, and that is the complete extent of your reasoning. Now if I'm wrong about that, I apologize. Please correct me by showing me how you derive the conclusion "improbable" from data and reasoning (in a way which does not simply assume improbability in the first place).

Phil:

It doesn’t take much of a search to determine that enormous amounts of research involve mutations causing problems.

Well of course, because it is impossible to determine the number or extent of neutral mutations based on observing development. Moreover, outside of the field of genetics, people are not likely to care about neutral mutations as much as deleterious ones, so they are going to study the deleterious ones more.

This does not mean deleterious mutations are more common in the genome; it means that we'd expect more studies to track them. You are mistaking "set of mutations humans care about studying" for "set of mutations that occur."


eric and phil,

I agree with eric: the relevant probabilities are not known. However, I will concede that the events necessary to produce the features seen in modern organisms would have a probability that we would ordinarily call low.

However, the relevant question is how low, and how many opportunities have there been for these low probability events to have occurred. A one in a trillion event, for instance, is certainly something we would call improbable. However, if you have 10^15 opportunities for this event to occur, it is very likely that this event will occur. Therefore, to really use an improbability argument, we must quantify two values, namely the probability that an event occurs and the number of opportunities that there are for that event to have occurred. AFAIK, for biological systems, neither of these has been well quantified.
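A two-line check of that arithmetic, with those same illustrative numbers:

# Chance a 1-in-a-trillion event happens at least once in 10^15 opportunities.
p, n = 1e-12, 1e15
print(1 - (1 - p) ** n)  # ~1 - exp(-1000), i.e. effectively 1.0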

Now, you might argue that even if it's likely that a given improbable event actually occurred, it may be unlikely that a series of such events occurred. However, keep in mind, the events in the sequence leading to some complex system, say the echolocation system in whales, are not independent. Therefore, the probability of all occurring is not simply the product of the probabilities of the individual events, but some other more complex function of the individual probabilities, and one which I certainly could not even begin to try to estimate. It is entirely possible that the occurrence of event A in the sequence increases the probability of event B occurring, and so on down the line. At the very least, event B must occur in some organism in which event A has already occurred. The whole point of natural selection, though, is that the subpopulation of organisms in which event A has occurred is likely to be a large fraction, if not all, of the population of the organism. That alone would increase the probability of event B occurring in an organism in which event A had already occurred.
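In symbols, the joint probability is P(A)*P(B|A), not P(A)*P(B). A sketch with entirely made-up numbers, just to show how much the conditioning can matter:

# Dependent events: P(A and B) = P(A) * P(B | A), not P(A) * P(B).
# All numbers are hypothetical, for illustration only.
p_A = 1e-6          # event A (say, fixation of mutation A) on its own
p_B = 1e-6          # event B on its own, in a population without A
p_B_given_A = 1e-2  # event B once A is already fixed

print(p_A * p_B)          # naive "independent" estimate: 1e-12
print(p_A * p_B_given_A)  # chained estimate: 1e-8, ten thousand times larger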

Phil, I like to ask this question of people who claim that the probability of evolution via mutation and natural selection is too low. Precisely what probability is too low? Give me a number. What probability for a given event would lead you to absolutely rule out the possibility that it occurred?

Therefore, to really use an improbability argument, we must quantify two values, namely the probability that an event occurs and the number of opportunities that there are for that event to have occurred.

Three values in the case of biology, because of the interaction between genetics and development. We need, as you say, the probability of mutational event sequence N and the number of trials. That gives us the overall likelihood of seeing a specific mutational sequence occur in the genome for a given number of trials. But for the evolution of developmental mechanisms, we also need to know how many different mutations would lead to the same or a basically similar developmental effect. How many different genetic sequences can build some version of an eye, for example. What is the size of the set of N's that could do generally what N does?

Now if you're a "lumper" type of person, you could fold that into your consideration of "the probability that an event occurs" and talk about the probability of some developmental capability occurring (i.e., light sensitivity). But I think it's useful to keep the two factors separate because it helps to understand Jason's point: in many cases we might understand the physical mechanisms behind a point mutation enough to assign it some probability of happening, but we rarely or never know the probability of all possible sequences that could lead to a similar developmental capability.

Phil, are you claiming you are not a conservative and a Christian, or are you claiming that neither of those has any influence on your opposition to evolution?

Speaking of probabilities, you are much more likely to reject evolution if you are religious than if you are not. You are also much more likely to reject evolution if you are a Republican than if you are not. Are you trying to claim there is no correlation?

By Michael Fugate (not verified) on 11 Feb 2014 #permalink

eric,

“You are supposed to be showing how you’ve determined that complex systems aren’t produced by random mutation, and instead you’re just asserting that it can’t be.”

Sort of, but ideas about error-based production were accepted without ever having been scrutinized in the first place. The establishment community was irreversibly invested in the theory long before the molecular level stuff ever came into view.
-
“You should not reach any conclusion at all – neither that it’s probable nor improbable – if you don’t have concrete data about the likelihood of the sequence of events and a line of reasoning connecting that data together.”

I agree. It would be more reasonable to not draw a conclusion.
-
“…showing me how you derive the conclusion “improbable” from data and reasoning”

First, the data is never complete, and there is always some measure of subjectivity involved in reasoning. But probability can be reasonably appraised in general terms.
-
“This does not mean deleterious mutations are more common in the genome, it means that we’d expect more studies to track them.”

Check. But there are other reasons for the neglect. Nobody is going to undertake the task of trying to figure out how many mutations have to occur to get from no eye to functional eye. There are just too many interdependent parts, and too many supporting systems. In my experience, articles about the evolution of eyes and vision are going to steer well clear of mutations. Who would want to try and explain how errors resulted in ten retinal layers, much less all the rest?

===

“…keep in mind, the events in the sequence leading to some complex system, say the echolocation system in whales, are not independent.”

I don’t follow you on this.
-
“It is entirely possible that the occurrence of event A in the sequence increases the probability of event B occurring, and so on down the line.”

I don’t get this either, but you could probably coin a new term for it whether it makes sense or not. Like “momentum mutational effect” or “relay mutations”. Some ambitious grad student could probably get that past the reviewers.
-
“Precisely what probability is too low? Give me a number. What probability for a given event would lead you to absolutely rule out the possibility that it occurred?”

That’s a fair enough question, but ultimately it is going to come down to nothing more than individual judgment. Once I saw a guy flip a cigarette butt away and when it stopped moving, it was standing on end. Improbable things do happen, but not consistently. The RM game depends on extremely unlikely sequential events happening billions of times with monotonous regularity. It is an idea that deserves to be doubted, if not rejected.

===

Michael Fugate,

“…are you claiming you are not a conservative and a Christian, or are you claiming that neither of those has any influence on your opposition to evolution?”

I’m claiming that it is unreasonable to believe that accidental DNA replication errors resulted in you sitting at your keyboard. But, with so little to work with, I must admire your considerable faith.

Sean, I neglected to address you by name. I beg your pardon.

Phil -

>>"ultimately it is going to come down to nothing more than individual judgment."

Are you kidding us here? No, it isn't. That's exactly what the methodology of science is for, to transcend the vagaries of individual judgment by setting up a rigorous system of checks and balances for new ideas. Why are you even bothering to debate anyone unless you believe there's a way to at least approach an objective view using evidence and reason? This is a cop-out.

>>"probability can be reasonably appraised in general terms"

I suppose, as a starting point. But you're not willing or maybe able to go any further than this. Science proceeds with precise data, not general terms, and you continue to completely avoid offering any specific numbers or novel facts to support your general intuitions.

>>"It would be more reasonable to not draw a conclusion."

Which is precisely why scientists don't draw conclusions about or from such probabilities and instead base their conclusions on concrete evidence, both circumstantial and direct. You (and the mysteriously missing Sean Pitman) are letting the tail wag the dog with your focus on probabilities in evolutionary theory. For another example...

>>"The RM game depends on extremely unlikely sequential events happening billions of times with monotonous regularity."

On the contrary, there's research indicating that novel traits can in some cases emerge with startling rapidity (relatively speaking). Less than a minute of googling got me this example:

http://www.sciencedaily.com/releases/2013/12/131212141938.htm

Even that aside, it's already been explained ad nauseam in this thread how you're misunderstanding the issue of probabilities when dealing with deep time and large populations. But you keep just sticking your fingers in your ears and saying, "Nope."

You keep saying things can't happen when there's clear and overwhelming evidence that they have. You're like someone who shows up late to a murder scene, lectures the cops on the extremely low probability that the victim could meet his end in this way, and hence concludes that the body you're all looking at can't possibly be there.

And that isn't everything that's wrong or distorted in your one post. This...

>>"ideas about error-based production were accepted without ever having been scrutinized in the first place. The establishment community was irreversibly invested in the theory long before the molecular level stuff ever came into view"

... requires a longer response than I can compose this far past my sleepytime. Maybe someone more qualified and rested can pick up my slack.

By Michael Wells (not verified) on 11 Feb 2014 #permalink

I love non-answers to direct questions. Evasion is the chief weapon of the apologist.

By Michael Fugate (not verified) on 11 Feb 2014 #permalink

"Of course it can craft complexity, what on earth is the reason for thinking it cannot?"

Accepting as valid the term "craft" here is a mistake and plays into the assumptions of the ID proponents. Nothing is crafted in nature, nothing is done purposely and, therefore, nothing is, properly speaking, crafted. Nature's events are, in the common-sense terms we use every day, to be understood as accidents -- a mass of more or less probabilistic events.

You'd be better-armed in the debate if you read Jean-Jacques Kupiec's book, The Origin of Individuals. But, then, you might not agree with his views. Many molecular biologists--the so-called standard view's practitioners-- are either simply ignorant of Kupiec's work or they do not understand or agree with it.

In fact, expert scientists not understanding key concepts about "their own" fields of practice is quite common. The reaction against Kupiec's work from many of his profession is an example. In the field of statistics--quite relevant to the discussion here--there is another example ably set out in the book, (Ziliak and McCloskey, 2008) The Cult of Statistical Significance.

See, for reference,

http://www.deirdremccloskey.com/articles/stats/coelho.php

By proximity1 (not verified) on 12 Feb 2014 #permalink

Phil:

ideas about error-based production were accepted without ever having been scrutinized in the first place. The establishment community was irreversibly invested in the theory long before the molecular level stuff ever came into view.
What??? We scrutinized Darwin's ideas about heritable mechanisms and rejected them. AFAIK we didn't even come up with the notion that mutation consisted of copy errors until after the double helix structure and base units were known - i.e., after the "molecular stuff" was known.

It would be more reasonable to not draw a conclusion.

[and later]

First, the data is never complete, and there is always some measure of subjectivity involved in reasoning. But probability can be reasonably appraised in general terms.

You seem to be contradicting yourself. Do you think the data you have allows you to draw a reasonable conclusion about the probability, or not? If your answer is "it does," please show us how you reached your conclusion. If your answer is "it doesn't," stop asserting that these things are so improbable that evolution can't produce them, because you have no data or reasoning to back that assertion up.

There are just too many interdependent parts, and too many supporting systems.

How many is too many? What's the number of interdependent parts evolution can't assemble and how did you reach that number? When you make this assertion, are you assuming individual parts have no adaptive value on their own, some small adaptive value, or a large adaptive value?

You see, you make these claims but there doesn't seem to be any quantitative or even qualitative substantive assessment behind them. You're basically sticking to the level of argument-from-ignorance - "I don't see how it could have happened, therefore I conclude it's improbable."

The RM game depends on extremely unlikely sequential events happening billions of times with monotonous regularity.

When you've got trillions of reproduction events, yeah, billion-to-one chances will happen with monotonous regularity.
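In expected-value terms:

# Expected number of billion-to-one hits in a trillion reproduction events.
p, n = 1e-9, 1e12
print(n * p)  # 1000.0, i.e. a thousand of them on average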

Phil,

No problem about not addressing me by name - your quotes made it clear that you were addressing my post.

As far as independent events goes, if event A and event B are independent, then probability theory states that the probability of both occurring is the product of the probabilities of each event occurring individually. For instance, the probability of rolling a six-sided fair die and getting a 6 is 1/6. Therefore the probability of rolling this die twice and getting two sixes is 1/6 x 1/6 = 1/36. The reason that this is the case is that the two events do not affect each other.

Now, in biological systems, I don't think it's too much of a stretch to assert that sequential events like the fixations of beneficial mutations in a population are not independent of each other. For a simple example, consider eye development. A reasonable first step might be a mutation which renders a cluster of cells light sensitive. A reasonable second step might be a mutation which causes a growth of a channel that allows the organism to detect the direction from which the light is coming. The probability of the second of these becoming fixed in a population is certainly affected by the first having already become fixed. There is no great benefit to a channel growing off a certain location on an organism if that channel does not lead to light sensitive cells. The fixation of light sensitivity makes the fixation of the channel much more probable, since that would confer a direct benefit.

I am not a biologist, so I will leave this to the experts, but I could even see a possibility that one mutation might make another more probable. The DNA code is a triplet code, meaning that three DNA bases code for one amino acid residue in a protein. Often, the third base in the triplet is redundant, but not always. Consider an example (and to the biologists out there: I am making this up; I don't know the genetic code offhand): suppose a mutation that yields a codon of GTT would result in some benefit for the organism. Suppose that the current codon in this position is GCG. A mutation to GCT would likely have no effect on the organism (remember, the third place is often redundant; GCG and GCT would quite possibly code for the same AA residue). Surely you can see that a mutation from GCT to GTT is much more likely to occur than the mutation from GCG to GTT; only one base needs to be copied inaccurately. Therefore if event A is "mutation of GCG to GCT" and event B is "the codon ending up as GTT", event A and event B are not independent. Event B is more probable if event A occurs.
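Putting made-up rates on that (assume a hypothetical per-base, per-replication error rate mu, with each base changing to one specific other base a third of the time it mutates):

# Reaching GTT in one replication: one base change vs. two at once.
mu = 1e-9  # hypothetical per-base error rate

one_change = mu / 3          # GCT -> GTT: one specific substitution
two_changes = (mu / 3) ** 2  # GCG -> GTT directly: two specific substitutions

print(one_change)   # ~3.3e-10
print(two_changes)  # ~1.1e-19
# Once the neutral GCG -> GCT change is in place, the beneficial GTT codon
# is about 3e9 times more reachable per replication than it was before.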

Michael Wells,

“Science proceeds with precise data, not general terms, and you continue to completely avoid offering any specific numbers or novel facts to support your general intuitions.”

My intuitions have little to do with the things I see as problems. You can fill in the blanks with your own numbers, and list any mitigating factors or circumstances you can think of concerning replication errors accumulating into hyper-complex systems. I welcome your scrutiny and your analysis, but I have to wonder: were you asking for all kinds of facts and specific numbers when you accepted that idea?

Thanks for the cave fish article. Blind cave species actually pose yet another statistical mutations problem. According to the theory, the elimination of sight and pigment happens by way of random errors being selected for because they reduce energy expenditures. These researchers apparently recognize that this is “an extremely time-consuming process”, but it enables the fish to “reallocate their finite physiological resources to biological functions more helpful in the cave setting”. So they are proposing “an evolutionary concept known as "standing genetic variation," which argues that pools of genetic mutations -- some potentially helpful -- exist in a given population but are normally kept silent”. In other words, selection seems to be saving up replication errors for a rainy day. But that would seem to conflict with the finite resources deal, don’t you think?

“you’re misunderstanding the issue of probabilities when dealing with deep time and large populations. But you keep just sticking your fingers in your ears and saying, “Nope.” “

Not really. I’ve just done a lot of reading about fixation. And large populations just mean more physical and generational distance in between the rare mutants. What specific numbers and novel facts support your appeal to deep time and large populations?

===

proximity1,

“Nothing is crafted in nature, nothing is done purposely and, therefore, nothing is, properly speaking, crafted”

Do the non-crafted results do things purposely?

I’ll peruse the link you provided when I have time. I did notice this interesting statement:

“Scientific assertions should be confronted quantitatively with the world as it is or else the assertion is a philosophical or mathematical one, meritorious no doubt in its own terms but not scientific.”

There is something to that, especially the ‘world as it is’ clause. I think the recognized minimum number of necessary genes for a self-replicating organism is down to around 250 now. I’m glad to find out that someone recognizes that the idea of a single original ancestor is not scientific.

===

eric,

“How many is too many?”

The odds are already stacking up with just two. But things like eyes are in the dozens, and not just to do with the actual camera. There are three different kinds of tears and glands, skull accommodations, wiring, brain imagery, all kinds of stuff right down to the lashes. How many is too many is going to depend on individual perceptions about what is possible and what is not.

===

Sean T,

“A reasonable first step might be a mutation which renders a cluster of cells light sensitive. A reasonable second step might be a mutation which causes a growth of a channel…”

But those aren’t reasonable in my view. Even at this point, there has to be regulation and controls involved because the error is confined to a cluster. The channel has to be a limited number of cells and have some kind of genetic definition. I realize that there isn’t any significant insight into why cells, all having the same DNA, differentiate themselves into distinct specialties. But you can’t trivialize the complexity involved in order to promote an accident-based system. Or at least I can’t.

Besides, what is the actual heritable, selectable advantage of a patch of light sensitive cells?

“I could even see a possibility that one mutation might make another more probable.”

Mutations are random.

Phil,
First of all, the selectable advantage of a patch of light sensitive cells is that an organism can detect light, and possibly move away from a bright area to a dark one. Some microorganisms, for instance, survive and reproduce better in the dark than they do in the light.

Second, did you even read what I wrote about one mutation making another more probable? How can you claim that a mutation from GCG to GTT is not made more probable by a prior mutation from GCG to GCT? Yes, they are random events, but random does not necessarily imply independent. Another analogy that will hopefully help me get the point across: suppose event A is rolling a 6 on the first of two rolls of a die, and event B is those two rolls summing to 7 or more. Obviously, both of these are random events, but are you really going to argue that the occurrence of random event A does not improve the probability of random event B? In this case, the occurrence of random event A ENSURES that random event B will occur.
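You can check this by brute-force enumeration of all 36 outcomes:

# A = "first roll is 6", B = "the two rolls sum to 7 or more".
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))
given_A = [o for o in outcomes if o[0] == 6]

p_B = sum(1 for o in outcomes if sum(o) >= 7) / len(outcomes)  # 21/36
p_B_given_A = sum(1 for o in given_A if sum(o) >= 7) / len(given_A)  # 6/6

print(p_B, p_B_given_A)  # 0.583... vs 1.0: A guarantees B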

Finally, even granting for the sake of argument that all random events are independent (which is, of course, manifestly false), you still have not addressed the main points.
1. You claim that the development of complex systems is an event of low probability. How low? How many opportunities for the appropriate development have there been? Without answers to these, it's impossible to conclude anything regarding the actual probability of evolution.

2. You need to define an alternative hypothesis and compute the probability that this alternative hypothesis is true. It's not enough to say "it's designed". You must put forward something about your designer. For instance, you must specify whether or not your designer is itself a complex system. If so, how did that designer come to be? If it's improbable that a complex biological system arose, how improbable is it that a complex designer arose? It's not just the improbability of evolution that matters; it's the probability of evolution RELATIVE to any particular alternative theory that is relevant. It's not enough to say evolution is a one in a billion chance, if the alternative you're presenting only has a one in a quadrillion chance of being true. If that's the case, we'd still be better off with the theory of evolution.

3. Exactly how improbable must something be before we conclude that it could not have happened? One in a million? One in a billion? Something else? Be careful here, though. For any number that seems intuitively good, I can most likely point out events with lower probability of occurrence that actually have occurred.

Phil (on cave fish):

According to the theory, the elimination of sight and pigment happens by way of random errors being selected for because they reduce energy expenditures.

AIUI, this is incorrect. The random errors aren't selected; they build up because there is no longer any selective force killing off the fish that get them.
This makes the process much quicker and more likely, because the errors only have to meet the minimal requirement of "doesn't adversely affect fitness" to propagate through the population by genetic drift, rather than meeting the much more difficult requirement of "saves the fish a significant amount of invested energy/food."

In other words, selection seems to be saving up replication errors for a rainy day. But that would seem to conflict with the finite resources deal, don’t you think?

No, that's a complete mischaracterization of what's going on. In fact it's so off base, I'm not even sure how you arrived at it so don't know where you went wrong. So I'll start at the beginning. The situation is quite simple:
1. Every fish begins a little different (with "standing genetic variation").
2. Their kids inherit this, plus some new mutations.
3. Outside the cave, the fish whose variations + mutations lead to worse eyesight are at a disadvantage, but inside the cave, they aren't.
4. There are a lot of different ways eyesight can be broken; different mutations that will reduce it. To use an analogy, there are many more lottery draws in the set of "eyesight gets worse" than there are in the set of "eyesight stays the same or gets better."
5. The combination of 3 and 4 guarantees that as more generations of fish are born (i.e., there are more lottery draws), the population in general will become more blind, because there is no longer any selective pressure that essentially rewards the fish with better eyesight with more kids (or getting eaten less often, I suppose).
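Point 5 is easy to see in a toy simulation; every parameter below is an arbitrary placeholder, chosen only to make the ratchet visible quickly:

# Loss-of-function alleles accumulating once selection for eyesight is gone.
import random

N = 500          # allele copies in a toy population
mu_break = 1e-3  # per-copy, per-generation chance a working allele breaks
pop = [0] * N    # 0 = working eyesight allele, 1 = broken

for generation in range(5000):
    # Drift: next generation is sampled (with replacement) from this one.
    pop = [random.choice(pop) for _ in range(N)]
    # Mutation: breaking is easy; exact reversion is rare enough to ignore.
    pop = [1 if (a == 1 or random.random() < mu_break) else 0 for a in pop]

print(sum(pop) / N)  # fraction broken; typically close to 1.0 by now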

Phil:

The odds are already stacking up with just two.

Have you read Behe's testimony at the Dover trial? He said what you were saying. Then the plaintiffs' attorney pointed out that his "stacked up odds" could be expected to occur in every generation of bacteria in a cubic meter of soil. "There is a lot more than just one cubic meter of soil on Earth, isn't there, Professor Behe?" was their last question.

So I'll repeat a point I made above and which you've ignored. You are looking only at one side of the issue; likelihood of some sequence of events. You are not looking at the other side, which is number of times evolution can try things. You're quoting odds of winning the lottery without any understanding of the number of tickets bought, and concluding that when someone shows you a winning ticket, it must be design at work.

@ 131: " I’m glad to find out that someone recognizes that the idea of a single original ancestor is not scientific."

And how did you derive that idea from my post or the page I linked? I certainly don't argue that "the idea of a single original ancestor is not scientific." I am in full accord with Darwin's published views on evolution--- as far as I am aware of them ---and all their implications.

By proximity1 (not verified) on 13 Feb 2014 #permalink

P.S.

I forgot to respond to "Do the non-crafted results do things purposely?"

No "purpose" in nature. None that I can discover. Whatever the "non-crafted results" refers to, I suppose that these, too, are not purpose-driven or derived. So I think the short answer should be obvious. No, they don't.

By proximity1 (not verified) on 13 Feb 2014 #permalink

ID is not a scientific issue, but an ideological one. People like Phil believe in a god and believe that god is an agent. Given that, one starts looking for agency and finds it wherever one looks. Once found, then it is necessary to rationalize it. Phil believes he has the answer in probability, which is really just a dressed-up form of the "fish to Gish" argument used by creationists for 50 years. It is the argument from personal incredulity and nothing more. If ID were based on science, its supporters wouldn't keep trotting out the same tired long-refuted arguments over and over.

If you want me to accept ID, then produce the designer and let me see it design something.

By Michael Fugate (not verified) on 13 Feb 2014 #permalink

Phil -

You're welcome for the cavefish article, though Sean T. handled your objections to it better and more thoroughly than I ever could have. So I'll fisk some other key tidbits of your reply:

>>"My intuitions have little to do with the things I see as problems."

I don't even know what this means.

>>"You can fill in the blanks with your own numbers on, and list any mitigating factors or circumstances you can think of..."

Uh, I was asking you to do this, since it's your claim. You obviously can't. Because you're just guessing at these probabilities you keep touting.

>>"were you asking for all kinds of facts and specific numbers when you accepted that idea?"

I take this to mean, "Did you hold the theory of evolution to the same standards?" Of course, more or less. Although I don't worry about the probabilities, because, as we keep pointing out, they're not that important in the face of the overwhelming physical evidence. And evolutionary theory continues to produce a vast pile of such evidence every year. While ID critiques of the TOE consist of nothing except vague "in-principle" objections such as yours, with no concrete evidence forthcoming, or even useful details about the principles involved.

>>"large populations just mean more physical and generational distance in between the rare mutants."

[insert Scooby-Doo "HUH?!" here] Where on earth do you get this from? Citation, please. It's already been shown above (comments 110-112) that you badly misunderstand the relation of population size to mutation fixation, to the extent that you'll quote a line on the subject and think it means the opposite of what it does. But you don't seem to learn any caution about wading in over your head.

As far as my non-expert brain recalls, every creature - at least every multicellular creature - has mutations/genetic errors in its DNA. For example, each human has on average about 60 in his/her body, according to the latest info I've seen. Multiplying individuals means multiplying mutations for natural selection to act upon, and as a certain percentage of those are going to be beneficial in the environmental context, that means beneficial mutations multiply as well. I don't even see how you could get any idea otherwise. Is this what they call "The New Math"? Which leads us directly to...

>>"What specific numbers and novel facts support your appeal to deep time and large populations?"

Uh, the facts and numbers of deep time and large populations. Three billion years, roughly, since life has existed on earth. You don't even have to actually do the rough, back-of-the-envelope calculations to see this means countless trillions of individuals, and many more trillions of mutations within their genomes, over the history of life. If even a fraction of a percent of those mutations are beneficial*, that's billions of beneficial mutations - which is exactly what you claim has to happen for RM+NS to work, blithely assuming that's plainly impossible. As a little thought about the numbers has just shown, you seem to be wrong.

*[see http://www.talkorigins.org/indexcc/CB/CB101.html for this interesting example: "An experiment with E. coli found that about 1 in 150 newly arising mutations and 1 in 10 functional mutations are beneficial."]
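To be explicit about that back-of-the-envelope arithmetic (the population and generation counts below are loose placeholders; only the 1-in-150 figure comes from the study cited above):

# Rough scale of the mutation supply over deep time, for one species.
individuals_per_generation = 1e9  # placeholder population size
generations = 1e6                 # placeholder: a million generations
mutations_per_individual = 60     # roughly the human figure mentioned above

total = individuals_per_generation * generations * mutations_per_individual
print(f"{total:.1e} mutations, ~{total / 150:.1e} beneficial")
# -> 6.0e+16 mutations, ~4.0e+14 beneficial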

Of course, the burden is on you to come up with much more specific and detailed probability arguments and numbers than I've played around with above, since you're making the argument that the probabilities are quite obvious, and overwhelmingly important to the question, whereas we're saying they're not particularly important and we have little ability to calculate them anyway.

Again, your whole argument is based on massive assumptions about the numbers that are demolished by the most casual glance at what scant numbers we have available.

By Michael Wells (not verified) on 13 Feb 2014 #permalink

Sean T,

“the selectable advantage of a patch of light sensitive cells is that an organism can detect light”

I don’t really see the advantage there, or why that would lead to special organs devoted exclusively to perceiving a very narrow segment of the electromagnetic spectrum. I think that UV is actually what most microorganisms don’t like.

The light-sensitive spot story is mentioned often, but usually it stalls out. So, after acquiring the spot and a channel, what’s next?
-
“How can you claim that a mutation from GCG to GTT is not made more probable by a prior mutation from GCG to GCT?”

Well, because that would require another mutation in the same codon. When and if that happens, a back mutation would be just as likely as the substitution you prefer. Please excuse this cut and paste, but this excerpt does a pretty good job of explaining why:

http://www.cod.edu/people/faculty/fancher/genetics/Mutation.htm
------------------------------------------------------------------------------------------------------------
II. Mutations can occur in two different directions.

A. A forward mutation is a mutation which changes a wild type allele into a new allele (for example, a mutation in one of the genes coding for color producing enzymes may change a wild type [normal color] allele into an albino allele).

B. A true reversion (reverse mutation) is a mutation which changes a mutant allele back into a wild type allele (for example, in the previous example, a reversion would be another mutation at exactly the same location of the first mutation, which simply reverses the change made the first time, changing the albino allele back into a wild type [normal] allele). As you might expect, true reversions are much less common than forward mutations because the "target area" is much smaller. A typical gene is hundreds of bases long; a forward mutation can be achieved by altering any one of many of those bases. But a reversion must hit exactly the previously altered base, and must alter it in such a way as to change it back to what was originally in that position.

C. A suppressor mutation seems like a reversion, but is actually a second change in the same gene, at a different site in the gene, which compensates for the forward mutation in the behavior of the gene product. The gene actually now has two differences when compared to the original wild type, but the protein made following its instructions works just like the original, wild type protein.
------------------------------------------------------------------------------------------------------------
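The excerpt's "target area" point can be put in rough numbers (gene length and error rate below are hypothetical):

# Forward mutation vs. exact reversion, per replication.
mu_per_base = 1e-9  # hypothetical chance a given base is miscopied
gene_length = 1000  # hypothetical gene size in bases

p_forward = gene_length * mu_per_base  # any of ~1000 bases can disrupt
p_reversion = mu_per_base / 3          # one exact base, one exact change

print(p_forward / p_reversion)  # forward is ~3000x more likely here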
I haven’t made a big deal out of reversion or compensatory mutations, but they are in the literature. I haven’t been able to discern how likely they are, or how often they occur, but I am pretty confident that if Lenski’s E. coli were liberated, they would revert to wild type. As I’ve mentioned, nature is hostile towards evolutionary mechanisms. B and C are two more illustrations.

“I can most likely point out events with lower probability of occurrance that actually have occurred.”

Of course. But it isn’t a single event. The mutations game relies on countless millions of sequences, with each sequence being a series of unlikely single events.

===

eric,

“…that’s a complete mischaracterization of what’s going on. In fact it’s so off base, I’m not even sure how you arrived at it so don’t know where you went wrong.”

I think it is off-base too, but it wasn’t my idea. Here’s the quote again:

“Eye loss in these fish is considered to be a demonstration of an evolutionary concept known as "standing genetic variation," which argues that pools of genetic mutations -- some potentially helpful -- exist in a given population but are normally kept silent.”

You can substitute ‘standing’ with ‘stand-by’ to get what they are saying. In so many words, this concept (and who doesn’t love a new evolutionary concept) proposes that HSP90 protein depletion allows the replication errors waiting in the pool to escape like shock troops and help the fish evolve blindness.
-
“Have you read Behe’s testimony at the Dover trial?”

No….not interesting.

“You are looking only at one side of the issue; likelihood of some sequence of events. You are not looking at the other side, which is number of times evolution can try things.”

I believe the technical term is tinker, but in my view, appealing to possible tries is still offset by a short and weak list of supposed winners. And like it or not, stasis/extinction are still the rule. Behe could have pointed that out about what can actually be found in any given meter of soil.

===

proximity1,

“how did you derive that idea from my post or the page I linked? I certainly don’t argue that “the idea of a single original ancestor is not scientific.”

I derived that from the quote in your link, noting the mention of the “world as it is”. The world as it is, is only host to organisms with a gene set large enough to live and self-replicate. Anything less is imaginary.
-
“Whatever the “non-crafted results” refers to, I suppose that these, too, are not purpose-driven or derived. So I think the short answer should be obvious. No, they don’t.”

So what’s your take on leukocytes?

===

Michael Fugate,

“…keep trotting out the same tired long-refuted arguments over and over”

You haven’t really been engaged in refuting, and I’ve only mentioned a few major points that I think are significant. Perhaps you could pick two or three and re-refute so I can understand your confidence.