# Rosenhouse on ID and Information

Jason Rosenhouse attended an ID conference in Knoxville last weekend and has been posting about some of the presentations there (Part one, part two, part three). Part three is very interesting as it is primarily about the ID argument about information in the genome. You will often hear this stated in one of two ways, either as a demand for an explanation for how evolution produces new information in the genome or, alternatively, how it produces increased information in the genome. Either way, it is a nonsense question and Jason does an excellent job of explaining why. I’ll post a long excerpt below the fold:

This is the point where Meyer started discoursing about the nature of information. According to Meyer, there are two different notions of information. First, there is the mathematical version, as elucidated by Claude Shannon. In this version “information” is construed as inversely proportional to probability. High information content is correlated with a low probability of occurrence. In this notion, Meyer argued, any lengthy string of symbols can be viewed as containing a large amount of information, since it is only one of a very large collection of possible strings of symbols.

But in biology we have a different sort of information. We don’t just have complexity. We have specified complexity! Meyer repeated several times that this is a richer sort of information than what Shannon considered. He drew a distinction between a meaningless string of symbols, and the phrase “Time and tide wait for no man.” The latter phrase is clearly different since it conveys a meaning.

Standard ID fare. What is interesting here is that Meyer drew an explicit distinction between the Shannon notion of information on the one hand, and this notion of specified complexity on the other. ID folks draw this distinction routinely, apparently oblivious to the logical hole they dig themselves by doing so. The nice thing about Shannon’s theory is that it gives us a precise method for measuring the information content of a message (technically, the “entropy” of a message is a somewhat more accurate way of putting it). As long as you can embed a particular message within a meaningful probability space you can talk about the amount of information it contains. This makes it possible to answer questions like, “Does physical process X lead to an increase or decrease in information?”
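Rosenhouse’s point that Shannon’s theory gives a precise measure can be made concrete with a toy calculation (my illustration, not from the post): if a sequence is treated as one draw from a uniform space of equally likely strings, its information content is just the negative log of its probability, which works out to a fixed number of bits per symbol.

```python
import math

def shannon_information_bits(message: str, alphabet_size: int) -> float:
    """Surprisal of a message drawn uniformly from alphabet_size symbols:
    I = -log2(p), where p = (1/alphabet_size) ** len(message).
    This simplifies to len(message) * log2(alphabet_size)."""
    return len(message) * math.log2(alphabet_size)

# A 30-base DNA string is one of 4**30 equally likely strings under
# this simple uniform model, so it carries 30 * 2 = 60 bits.
print(shannon_information_bits("ACGT" * 5 + "AACCGGTTAA", 4))  # → 60.0
```

This is the sense in which, as Meyer concedes, any lengthy string of symbols carries a large amount of Shannon information; the measure depends only on the probability model, not on whether the string “means” anything.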

But the ID folks distance themselves from Shannon’s notion. This leaves them with the problem of quantifying the information they find in DNA. They assert that DNA possesses specified complexity and then challenge scientists to explain how this quantity of information can increase over time via natural processes. The question is meaningless unless they can tell us precisely how to measure the quantity of specified complexity. Otherwise, how would we know an increase when we saw one?

This sort of facile simple-mindedness is ubiquitous in ID and creationist writing. They routinely point to this or that mutation and say that information has been lost because the organism in question can no longer perform some function that its unmutated brethren can perform. Indeed, but the mutation might very well also lead to some new functionality that the nonmutants lack. Sure, we lost one ability, but perhaps we gained another. Does that add up to a net gain or net loss in information?

I have raised this objection to ID folks before. The answer I usually get (when I get any answer at all) is that when it comes time to measure information, they are still using Shannon’s conception. But if that is the case, then their little challenge turns out to be no difficulty at all. There are a variety of familiar genetic mechanisms that can account for increases in genetic information. Gene duplication followed by subsequent divergence is an especially important one. Any decent genetics textbook will tell you about many others. So it is either trivial to explain the growth of genetic information in time (if you mean Shannon’s version of information) or the question is meaningless (if addle-brained creationist argle-bargle about “specified complexity” is what you mean).
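The gene-duplication point can be sketched in a few lines (illustrative code of my own, not from the post; the uniform two-bits-per-base measure is a simplifying assumption): duplicating a gene doubles its Shannon measure outright, and point mutations then let the copy diverge while the original keeps its function.

```python
import random

def information_bits(seq: str) -> float:
    # Uniform four-letter model: each base contributes log2(4) = 2 bits.
    return 2.0 * len(seq)

def duplicate_and_diverge(gene: str, mutation_rate: float,
                          rng: random.Random) -> str:
    """Gene duplication followed by point mutations in the copy only."""
    bases = "ACGT"
    copy = "".join(
        rng.choice(bases.replace(b, "")) if rng.random() < mutation_rate else b
        for b in gene
    )
    return gene + copy  # original retained, diverged copy appended

rng = random.Random(0)
gene = "ATGGCCATTGTAATGGGCCGC"
genome_after = duplicate_and_diverge(gene, 0.1, rng)
# Duplication doubles the Shannon measure: 42.0 bits -> 84.0 bits.
print(information_bits(gene), information_bits(genome_after))
```

Under any Shannon-style accounting the duplicated-and-diverged genome simply contains more information than before, which is exactly why the “new information” challenge only has teeth if some other, unmeasurable notion of information is meant.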

We saw how this game of the perpetually moving goalposts was played in the recent exchanges between Michael Egnor and practically everyone else. First he demands a specific measurement of the amount of information in the genome (something that can only be done using the Shannon definition of information and deriving a formula from it), but then claims that Shannon measurements are meaningless because they don’t measure the specified nature of the information. Heads I win, tails you lose.

Meyer then went on, as almost all ID advocates seem positively obsessed with doing, to distort and exaggerate the findings of the two perfectly mundane protein sequence studies done by Douglas Axe. I and many others have shredded that bit of silliness time and time again, yet it is the claim that will not die (perhaps akin to Justice Scalia’s late night movie ghoul). These ID conferences are little more, it seems, than a rote recitation of the same old bad arguments that have been debunked over and over.

#1 Michael Suttkus, II
March 29, 2007

Late night movie ghoul?

#2 THobbes
March 29, 2007

Is that a reference to when the Justices would have a “porn night” as part of the indecency cases?

#3 Jim RL
March 29, 2007

In the end you can scientifically refute their “evidence” a million times. They frankly don’t care about scientific validity. All they care about is being able to convince their audiences that there is some scientific validity. Most of their audience will never hear these arguments and will go home believing that ID is scientifically legitimate.

#4 Ed Brayton
March 29, 2007

The reference is to an infamous concurring opinion that Scalia wrote in the Lamb’s Chapel case where he criticized the court for its inconsistent use of the Lemon test. I quoted it here.

#5 Tyler DiPietro
March 29, 2007

Judging from the whining of the IDiots, “biological information” doesn’t really seem to be much more than “magic, ineffable disproof of evolution”. Generating new information is trivially easy when going by any standard metric.

The reason there is no concrete definition of information from the IDiots is that making any testable prediction is bound to land in evolution’s favor. When Behe started stammering about IC, it was obvious that evolutionary mechanisms could easily produce structures that met his definition. So the next best thing is to have some vague, ill-defined, quasi-mystical and above all technical-sounding catch phrase to wow the lay public and convince them that ID is valid.
