I was recently sent a link to yet another of Dembski's wretched writings about specified complexity, titled Specification: The Pattern That Signifies Intelligence.

While reading this, I came across a statement that actually changes my opinion of Dembski. Before reading this, I thought that Dembski was just a liar. I thought that he was a reasonably competent mathematician who was willing to misuse his knowledge in order to prop up his religious beliefs with pseudo-intellectual rigor. I no longer think that. I've now become convinced that he's just an idiot who's able to throw around mathematical jargon without understanding it.

In this paper, as usual, he spends rather a lot of time avoiding defining specification. Purportedly, he's doing a survey of the mathematical techniques that can be used to define specification. Of course, while rambling on and on, he manages to never actually say just what the hell specification *is* - he just offers discussion after discussion of what it *could be*.

Most of which are wrong.

"But wait", I can hear objectors saying. "It's his theory! How can his own definitions of his own theory be wrong? Sure, his theory can be wrong, but how can his own definition of his theory be wrong?" Allow me to head off that objection before I continue.

Dembski's theory of specified complexity as a discriminator for identifying intelligent design relies on the idea that there are two *distinct* quantifiable properties: specification and complexity. He argues that if you can find systems that possess sufficient quantities of both specification *and* complexity, then those systems cannot have arisen except by intelligent intervention.

But what if Dembski defines specification and complexity *as the same thing*? Then his definitions are wrong: he requires them to be distinct concepts, but he defines them as being *the same thing*.

Throughout this paper, he pretty much ignores complexity to focus on specification. He's pretty careful never to say "specification **is** this", but rather "specification **can be** this". If you actually read what he *does* say about specification, and you go back and compare it to some of his other writings about complexity, you'll find a positively amazing resemblance.

But onwards. Here's the part that really blew my mind.

One of the methods that he purports to use to discuss specification is based on Kolmogorov-Chaitin algorithmic information theory. And in his explanation, he demonstrates a profound lack of comprehension of *anything* about KC theory.

First - he purports to discuss K-C theory within the framework of probability theory. K-C theory has *nothing to do* with probability theory. K-C theory is about what it means to quantify information; the central question of K-C theory is: how much information is in a given string? It defines the answer to that question in terms of computation and the size of programs that can generate that string.

Now, the quotes that blew my mind:

Consider a concrete case. If we flip a fair coin and note the occurrences of heads and tails in order, denoting heads by 1 and tails by 0, then a sequence of 100 coin flips looks as follows:

(R) 11000011010110001101111111010001100011011001110111 00011001000010111101110110011111010010100101011110.

This is in fact a sequence I obtained by flipping a coin 100 times. The problem algorithmic information theory seeks to resolve is this: Given probability theory and its usual way of calculating probabilities for coin tosses, how is it possible to distinguish these sequences in terms of their degree of randomness? Probability theory alone is not enough. For instance, instead of flipping (R) I might just as well have flipped the following sequence:

(N) 11111111111111111111111111111111111111111111111111 11111111111111111111111111111111111111111111111111.

Sequences (R) and (N) have been labeled suggestively, R for "random," N for "nonrandom." Chaitin, Kolmogorov, and Solomonoff wanted to say that (R) was "more random" than (N). But given the usual way of computing probabilities, all one could say was that each of these sequences had the same small probability of occurring, namely, 1 in 2^100, or approximately 1 in 10^30. Indeed, every sequence of 100 coin tosses has exactly this same small probability of occurring.

To get around this difficulty Chaitin, Kolmogorov, and Solomonoff supplemented conventional probability theory with some ideas from recursion theory, a subfield of mathematical logic that provides the theoretical underpinnings for computer science and generally is considered quite far removed from probability theory.

It would be difficult to find a more misrepresentative description of K-C theory than this. This has nothing to do with the original motivation of K-C theory; it has nothing to do with the practice of K-C theory; and it has pretty much nothing to do with the actual value of K-C theory. This is, to put it mildly, a pile of nonsense spewed from the keyboard of an idiot who thinks that he knows something that he doesn't.

But it gets worse.

Since one can always describe a sequence in terms of itself, (R) has the description

copy '11000011010110001101111111010001100011011001110111 00011001000010111101110110011111010010100101011110'.

Because (R) was constructed by flipping a coin, it is very likely that this is the shortest description of (R). It is a combinatorial fact that the vast majority of sequences of 0s and 1s have as their shortest description just the sequence itself. In other words, most sequences are random in the sense of being algorithmically incompressible. It follows that the collection of nonrandom sequences has small probability among the totality of sequences so that observing a nonrandom sequence is reason to look for explanations other than chance.

This is *so* very wrong that it demonstrates a total lack of comprehension of what K-C theory is about, how it measures information, or what it says about *anything*. No one who actually understands K-C theory would *ever* make a statement like Dembski's quote above. *No one*.

But to make matters worse - this statement explicitly invalidates the entire concept of specified complexity. What this statement means - what it *explicitly says* if you understand the math - is that *specification* is the opposite of *complexity*. Anything which possesses the property of specification *by definition* does not possess the property of complexity.

In information-theory terms, complexity is non-compressibility. But according to Dembski, in IT terms, specification is compressibility. Something that possesses "specified complexity" is therefore something which is simultaneously compressible and non-compressible.
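The contradiction is easy to see with any real compressor. Here's a rough sketch of my own (not anything from Dembski's paper), using zlib's compressed size as a crude, computable upper bound on Kolmogorov complexity, which is itself uncomputable:

```python
import os
import zlib

# A highly patterned string, like Dembski's (N), versus bytes from a
# good entropy source, like his (R) scaled up.
patterned = b"1" * 100_000
random_bytes = os.urandom(100_000)

# Compressed size is an upper bound on the string's information content.
patterned_size = len(zlib.compress(patterned, 9))
random_size = len(zlib.compress(random_bytes, 9))

print(patterned_size)  # a few hundred bytes at most: highly compressible
print(random_size)     # roughly 100,000 bytes: essentially incompressible

# "Specified complexity" would require a single string to land in both camps.
assert patterned_size < random_size
```

The patterned string compresses to a tiny fraction of its length; the random one doesn't compress at all. No one string can do both, which is exactly the contradiction described above.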

The only thing that saves Dembski is that he hedges everything that he says. He's not saying that this is what specification means. He's saying that this *could be* what specification means. But he also offers a half-dozen other alternative definitions - with similar problems. Anytime you point out what's wrong with any of them, he can always say "No, that's not specification. It's one of the others." Even if you go through the whole list of possible definitions, and show why every single one is no good - he can still say "But I didn't say any of those were the definition".

But the fact that he would even say this - that he would present this as even a possibility for the definition of specification - shows that Dembski quite simply *does not get it*. He believes that he gets it - he believes that he gets it well enough to use it in his arguments. But there is absolutely *no way* that he understands it. He is an ignorant jackass pretending to know things so that he can trick people into accepting his religious beliefs.


I don't know anything about K-C theory, so could you elaborate on some of the major errors in the quotations from Dembski?

I'm not one to make excuses for Dembski, but can you elaborate on what you find wrong about the second quoted passage? It may be imprecise, and I also agree that he's been repeatedly undermined by the fact that random sequences have higher complexity than highly compressible ones by every reasonable definition. But taken by itself, the informal description of Kolmogorov complexity doesn't seem that bad.

A bit string taken from a uniform distribution has a low probability of being very compressible, just as he says. That's because there are 2^n bit strings of length n, but only 2^k descriptions of bit strings compressible to k bits. If k is even a little less than n, the probability that you've picked a string that's compressible to k bits drops exponentially. E.g., the probability that the 100 bit string is compressible to 90 bits is less than 2^-10. If you're willing to equate coin flips with a true random source, then it would be reasonable for you to bet that the minimal algorithmic description of the sequence generated that way was not much smaller than 100 bits.

(Note that bringing in Kolmogorov complexity is a kind of overkill when mundane binomial p-values would suffice to reject the null hypothesis (random) for a string of 100 1s in a row; it's typical of Dembski to try to make his claim look more impressive than it is).

I would also agree with Dembski's implication that if you find a sequence to be highly compressible, then you can reject the random hypothesis using a level of confidence that would typically be expressed as a p-value. I.e., given a particular n-bit string, and a way of compressing it to k bits (which might not be the best compression; finding the best one is an incomputable problem), you would calculate the p-value: given a uniform random bit string of length n, what is the probability of choosing one compressible to k bits or less.
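That procedure can be sketched in a few lines. This is my own illustration, not anything from Dembski: any concrete compression to k bits yields a conservative p-value, since the true shortest description can only be shorter than what the compressor found.

```python
import zlib

def compressibility_p_value(n_bits: int, k_bits: int) -> float:
    """P-value for the null hypothesis 'drawn uniformly at random',
    with test statistic 'has a description of at most k_bits bits'.

    At most 2**(k + 1) - 2 descriptions of length <= k bits exist,
    out of 2**n equally likely n-bit strings.
    """
    return min(1.0, (2 ** (k_bits + 1) - 2) / 2 ** n_bits)

# An (N)-style sequence: 100 ones, stored as ASCII, so 800 raw bits.
n_string = b"1" * 100
k_bits = 8 * len(zlib.compress(n_string, 9))  # observed compressed size
p = compressibility_p_value(8 * len(n_string), k_bits)
print(p < 1e-50)  # True: the uniform-random null is untenable
```

For an incompressible string, k_bits comes out at or above the raw length and the p-value saturates at 1: no evidence against randomness, exactly as it should be.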

You then interpret this value as "the probability that, given that the null hypothesis is true, T will assume a value as or more unfavorable to the null hypothesis as the observed value" http://en.wikipedia.org/wiki/P-value

If the string is highly compressible, and therefore the p-value is very low, then the null hypothesis does not look like a compelling explanation. At that point it would be worth looking for explanations other than randomness. (Note that this in no way disproves evolution, which is *not* a uniform random process. For instance, the fact that my DNA looks a lot like a chimp's DNA is better explained by common descent than by the assumption that both came from random coin flips.)

BTW, a good source on the connection between Kolmogorov complexity and randomness testing is "The Miraculous Universal Distribution" by Kirchherr, Li, and Vitanyi. Vitanyi has a PostScript copy of the paper online: http://www.cwi.nl/~paulv/papers/mathint97.ps

The main problem with Dembski is that he consistently misapplies these arguments. If I am betting on coin flips and notice any kind of algorithmic pattern then there are legitimate reasons to conclude that the person flipping coins is not playing fair. However, this method does not allow one to discriminate between the product of evolution and the product of "intelligent design" since neither of these processes in any way resemble random coin flips.

I'll move some of my blogger posts on information theory over to SB tonight.

Basically, K-C information theory is a sub-branch of recursive function theory, which is one of the fundamental theoretical areas underlying computer science.

The idea of K-C theory is to study what information means in terms of computation. The goal of K-C theory is to find a meaningful way of asking the question "How much information does something contain?"

It is *not* based on probability theory. It did *not* develop as an attempt to solve some problem with describing the difference between probable and improbable outcomes of a process. And there is absolutely *no* justification for saying that a string with high compressibility (aka low information content) is an indication of intelligent intervention.

Seriously: what Dembski is arguing is that discovering anything with *low information content* should be taken as an indication that it was created by an intelligent agent.

I'm pretty sure that he did not *want* to say that anything that doesn't contain much information is indicative of an intelligent agent. I'm pretty sure that he doesn't realize that this is exactly the opposite of what he argues in his definition of "complexity". And I'm pretty sure that he doesn't realize that by saying this, he's saying that "X has specified complexity" means "X contains very little information" and "X contains a lot of information" at the same time.

I honestly couldn't tell you what Dembski is trying to argue. Obviously a string that goes 111111111111 is not a sign of an intelligent agent, nor is the diversity of life on earth, for which evolution is a far more parsimonious explanation fitting all the data.

But finding a large string with low information content can be taken as a reasonable indication that it did *not* come from a uniform random source. Randomness testing is a legitimate and active field. It does use the notion of compressibility. I grant that Kolmogorov complexity is not "based on probability theory", but it does have legitimate applications to areas that may also involve the application of probability theory.

PaulC:

Finding a large string with low information content can be taken as an indication that it did not come from a uniform random source - that's true. But going from there to indicating an intelligent agent - that is a huge, unjustified leap.

And the fact that he's arguing that it's K-C theory that allows one to make that leap from "non-uniform random source" to "intelligent agent" is ridiculous - particularly when his writing doesn't indicate that he has any idea that there is a leap to be made.

BTW, I added the reference to Kirchherr, Li, and Vitanyi above although it was posted as "Anonymous" for some reason. Once again, I recommend it highly as a legitimate and rigorous source for this kind of argument. To quote from an example in the paper:

To cut to the chase, what is meant by the intuitive notion of "random" is indeed tied up in Kolmogorov complexity. I disagree strongly with Mark Chu-Carroll that there is no relevance here. However, I agree that Dembski is misapplying it. Merely refuting the null hypothesis that an observation comes from a uniform random source is not the same as proving it comes from an intelligent agent.

Yes, I agree completely. But I don't think that Dembski's second quoted passage taken in isolation is as completely wrong as you state.

I don't mean to say that K-C theory has no relevance to the kind of question we're discussing. But it remains important to draw the distinction between what K-C theory says, and the kind of bizarre conclusion that Dembski is trying to draw by pretending that you can take statements from probability theory and statements from K-C theory and mix them together without distinction.

K-C theory is not probability theory; the fact that it has insights that are useful in probability, and that probability has insights that are useful in K-C theory doesn't mean that you can just mush the two together in a careless half-assed way, and say that the probability-theory conclusions are actually K-C conclusions, and vice-versa.

Dembski is misapplying and misrepresenting both probability theory and K-C information theory - I think because he doesn't realize that there's any difference between the two. That's what I mean about his fundamental cluelessness. I don't think he understands what he's doing, what he's saying, where the ideas that he's talking about come from, or what they actually mean. The fuzziness of his distinction between KC information theory and probability theory is just part of that cluelessness.

I don't know a thing about information theory but the concept he was trying to get across at this point (whether it be valid information theory or not) struck me as one of the few sound bits of the paper.

The big problems with the paper that struck me include:

* A complete misrepresentation of classical hypothesis testing - which absolutely requires a clear definition of the alternative hypothesis (you can't decide whether to do one-tailed or two-tailed testing or create confidence intervals without this)

* Sudden, unjustified, leaps from defining simplicity in the "information theory" terms used above to defining it in terms of minimum number of concepts and then to defining it in minimum number words

* A subtle move where on one page specification is defined in terms of "the observed outcome and all simpler outcomes" (whatever that means at that stage), while on the next page it is suddenly "the observed outcome and all simpler *less probable* outcomes" - a change that is not even commented on.

* No justification for any of the many definitions of specification in the paper.

There are many others but that will do for a start.

PaulC:

There seem to be some network problems around here today; I've had some trouble with various timeouts. Since you posted the comment which came up anonymous, I've updated it to show you as the author.

Mark Frank:

I suspect that it's just a matter of the old "errors in the stuff you know best are easiest to recognize" thing. For me, the errors in IT stick out like a sore thumb, because that's much closer to my expertise; whereas I'm sure that for a probability theory person, the probability theory errors are the most glaring; etc. It's all so damned wretchedly bad that the places where you can really see just how bad it is stand out, and the stuff you don't know as well seems like it *must* be better in comparison.

For example, I didn't notice the misrepresentation of hypothesis testing that grabbed your attention. There was so much else wrong that was grabbing my attention that something like that, where I'd probably need to pull out a textbook to refresh myself on it, just slipped under my radar.

But the most glaring things are really definition errors: the number of different definitions of specification he uses; the way that none of them are really justified; the way that he constantly shifts definitions in small ways without explanation; etc.

The thing that I was really trying to get across in talking about this was that it's a *really* sloppy paper. Things like the shift you point out - from "simpler" to "simpler less probable" - I really don't think that he actually understands that a wording change like that *needs* justification. I'm convinced that he really doesn't understand this stuff, and (to put it somewhat crudely) he's just talking out his ass.

I think Dembski is capable of doing mathematics in the sense of manipulating formalism. Where he fails is when he tries to put it into a larger context or apply it to the problem he claims to.

I'm not sure how much is sloppy reasoning and how much is a matter of dishonesty. My working assumption is that he intentionally tries to baffle his reader with BS. He brings in a lot of esoteric machinery (e.g. NFL theorems) to make points that if stated simply are clearly not relevant to his case. It's the diametric opposite of good popularization: think of Feynman's QED in which he attempted to explain some very difficult physics with elementary examples and an intuitive treatment of complex numbers. Dembski doesn't want the reader to understand what he has to say. He wants the reader to come away thinking Dembski is very smart and must be right even if the exposition made absolutely no sense.

I just read this up to page 30. It's just Fred Hoyle's jumbo-jet argument with K-C window dressing.

Anonymous - whoever you are. I think you make the point very well.

It is interesting how Dembski sees so much of the world in digital terms. He starts with bit strings and poker hands - and then he goes on to treat bacterial flagella and even the universe in much the same way. So you end up with sophisticated-looking formulae that might, questionably, mean something applied to a well-defined domain such as poker hands, but are meaningless elsewhere. It's ironic that someone who is so opposed to materialism should view reality as a large computer.

In another thread, Mark C. Chu-Carroll says:

I'll be more than happy. First, a comment on your "holier than thou" attitude. Here is you talking about Dembski:

When you decide to climb into the mud wrestling ring, you should understand that it isn't only about trying to throw your opponent into the mud. He gets to do that too. And with tag-team mud wrestling . . . So spare me the "I'm shocked" response. I've seen it all before.

So, let us get down to the details:

But whether I like it or not, you've merely opined that "Dembski screwed up that badly". But you provide no evidence. For example, you say: "and then in the paper where he discusses specification, he defines specification as the IT sense of "low information content"."

Yet when I search the article for the phrase "low information content", I don't find it. So, what page is that on? And how do you reconcile these two claims:

Does he tell us what it is, or doesn't he? You seem to be confused on the point of your very complaint.

And where does he do that? Give me the exact citations. That would seem to be game, set and match for you. Yet for unexplained reasons, you don't do this.

Yes, which is not surprising, since the title is "Specification: The Pattern That Signifies Intelligence". But he doesn't really ignore complexity, as it surfaces where relevant. For example:

Then we get to this:

Of course, that's nonsense. Complexity Dembski defines as:

So, returning to the coin toss example, specifications are patterns that we can simply describe. All 1's, all 0's, alternating 1's and 0's, etc. But the complexity has to do with how many times we toss the fair coin, and therefore how many possible outcomes we have for the bit string, irrespective of whether the outcome will be specified or not. The intersection of complexity and specification is what he calls specified complexity.

Sounds like Dembski is the one being a straight shooter, and you've been caught lying.

But feel free to provide those elusive citations that you claim exist.

I'm all ears.

Roger:

My *point* in discussing that was that for all of the articles he's written that include IT-based discussions, he *does not* realize that the argument he's putting forward contradicts the point he'd like to make. That is precisely why I've revised my opinion of him from "competent lying mathematician" to "incompetent mathematician".

It *supports my position* if you understand the math! That's defining complexity in terms of probability; but if you actually understand information theory, and what it means for a string to have high information content, the passage you quote is defining complexity as high information content.

A simple description *is a compressed form* of the information content of the string. If it can be described simply - *really described* in a simple form - it has low information content. If it's complex - it has high information content. If you say it has "specified complexity", you are saying that you have a string with high information content which has very little information content. That's what this stuff *means*. That's why I say that I no longer believe Dembski is a competent mathematician - because a competent mathematician who uses information theory in his arguments *would* know that this is a profound and obvious error. A dishonest mathematician wouldn't make a mistake this obvious.

I'm still waiting to see where you say I'm *lying*. Everything you quote is legitimate criticism of Dembski. You can claim my arguments are *wrong*, in which case I would expect you to present an argument for why they're wrong. But lying is a much stronger claim: nothing that you mention up there is a lie. What in all of that stuff is a deliberate misrepresentation of fact?

I could contest much of what you said, but this will do to show your blindness to what Dembski's point is:

You look pretty foolish here. All this talk about the "string", but nothing about the underlying event. That is where the complexity is determined. Consider this:

We have two scenarios: one is the hundred flips of a fair coin a la Dembski. The other is the reporting of the result of a programmed bit-string printing machine, which prints 99 1's, followed by either a 1 or a 0. Now, we report that the coin flipping produces 100 1's. Ditto for the bit printing machine. Which result contains more information? One eliminates 10^30 possibilities, the other only one.

By viewing the string in isolation, you miss its significance. You are obsessed over one aspect of the math, to the exclusion of all other math considerations involved.

Roger:

You're the one who's making yourself look foolish.

First - you claim to be showing why I'm lying. And yet, you're lecturing me about how I'm purportedly not understanding Dembski. What happened to that very specific accusation that I'm a liar?

Second, you clearly don't understand information theory, but information theory is what we're discussing here - in particular, the definitions of information content used by information theory. That is what *Dembski* uses in his definition of complexity, and it's what *Dembski* uses in his pseudo-definition of specification in the text I cited.

The "one aspect of the math" that you're claiming I'm obsessed over is *the one aspect of math* that Dembski is using in his definition. And it's the specific field of mathematics that addresses the *precise issue* that you claim is the significance of Dembski's pseudo-definition.

Dembski's definition of specification *means* low information content. Dembski's definition of complexity *means* high information content. Dembski's definition of specified complexity therefore means "low information content with high information content".

Roger:

Quoting definitions that you don't understand doesn't change the fact that neither you nor Dembski understand them.

Go read some Chaitin: http://cs.umaine.edu/~chaitin/. Greg is the co-inventor of K-C information theory (the C is Chaitin), and a really amazing writer. He's written a series of books that actually make IT comprehensible to a lay audience without losing any of the rigour. His website includes some of his lectures on the subject, which are really excellent. And you can check Dembski's papers to see that K-C information theory is what Dembski is using.

Don't know if my last posting of last night inadvertently fell into the bit bucket, or was directed there because you wearied of the discussion.

If this doesn't make it through, I'll assume the latter.

So, are you saying that the information content of a sequence of nucleotides differs depending on whether it was generated using mutations or magic? Does the version that uses magic eliminate fewer possibilities or more possibilities?

Roger Rabbit:

All this talk about the "string", but nothing about the underlying event. That is where the complexity is determined.

If that's true, then Specified Complexity is useless in making the inference of design, because complexity would not be a property of the output (the string).

Roger:

The only time I have ever deleted comments from my blog is when they're porno-spam. (For some reason, there've been several waves of porno-spam in the comments over the last week, but I've been very careful about deletions.) I assure you I didn't delete anything that you posted. There was one comment which I approved last night which was just a one-paragraph quotation from one of Dembski's books, with no context... Perhaps you made an editing error?

I thought the most amusing part of Dembski's screed was the following:

Translated into English, this boils down to "there is no reason to consider the possibility that the universe might be larger than the part of it that we can see from here, because if I don't know the size of the universe, then it is impossible for me to make the kind of argument that I am trying to make."

To him, this corresponds to a "wholesale breakdown in statistical reasoning." But in fact, hardly any statistical reasoning other than Dembski's is dependent upon knowing the size of the universe. He rather drastically misrepresents Fisher's approach in suggesting that Fisher was trying to define a threshold that would absolutely eliminate chance--which is basically suggesting that Fisher was too stupid to realize that events with a probability of 1 in 20 or 1 in 100 do in fact occur rather frequently. Statistical significance has never been understood by anybody as absolutely eliminating chance--rather, it defines a frequency of error that is acceptable in many contexts.

Of course, it doesn't matter, because in the end, Dembski's argument boils down to "if the likelihood of evolution of functional structures of this complexity by evolutionary mechanisms is very, very small, then we need to consider the possibility that the structure did not form by evolutionary mechanisms." I think that most biologists would agree with this, and most likely would not require a probability as low as Dembski's 10^-120 to convince them to seek other explanations. Unfortunately, neither Dembski nor anybody else has any idea of how to reliably calculate the likelihood of evolution of structures of particular specified complexity (whatever that might mean).

tgibbs:

I really hate the argument that you cite :-) It's one which is constantly brought up by creationists and other idiots. It's basically an argument that comes down to "My imagination defines the limits of the universe". If a probability gets so small that I can't understand it, then it must be impossible. If a process takes steps that I can't understand, it can't happen. The universe can't be bigger than what I can see.

The Dembski line is really just a variation on that: if the universe is bigger than what we can see from earth, then all statistical reasoning suddenly becomes meaningless; not because there's anything wrong with statistics, but because there's something wrong with the concept of things being more than what a human being can observe. If we can't see it, it can't exist; even postulating the existence of anything beyond what we can see is irrational and meaningless. It's just the most arrogant kind of argument.

There are more things in Heaven and Earth than are dreamt of in your philosophy, Mr. Dembski.

Dembski quote:

This kind of statement is the red flag of a *non-proof*. It's reminiscent of attempts to find a proof of Euclid's parallel postulate in terms of the other axioms. I'm not sure if this is the case I remember hearing, but a quick search turned up Saccheri, who attempted to assume the parallel postulate was untrue and derive a contradiction:

http://www.southernct.edu/~grant/nicolai/history.html

There is also the case of the Poisson spot--a bright spot that appears in the shadow of a sphere. http://www.schillerinstitute.org/fid_97-01/993poisson_jbt.html Poisson derived its existence by following the logic of a paper by Fresnel on the wave theory of light. The intent was to show the paper to be obviously wrong, but in fact, the spot turned out to be real.

If your argument ends with something like "and then 1 would have to be 0" then you can be satisfied that you have found a successful reductio ad absurdum argument. If it ends "and that would just be unthinkable" then you have a non-proof attesting to nothing but your own unwillingness to think beyond a certain point.

That's a key point. Is Specified Complexity a function of the end product only, or is it also a function of the causal story? Can you answer that question, Roger R? No matter which way you answer it, I'll provide quotes by Dembski that contradict your answer.

Dembski's been working on specified complexity for 15 years now. You would think that by now he would have a consistent answer to this very fundamental question.

I think PaulC said exactly what needs to be said, that grants us all hearty permission to ignore Dembski's work. I think it bears repeating:

"If I am betting on coin flips and notice any kind of algorithmic pattern then there are legitimate reasons to conclude that the person flipping coins is not playing fair. However, this method does not allow one to discriminate between the product of evolution and the product of "intelligent design" since neither of these processes in any way resemble random coin flips."

Dembski's feverishly comparing apples to oranges, to prove that one is an artichoke.

garote:

I actually disagree with you.

My whole viewpoint on mathematics is that much of its value comes from the ability to look at something and abstract it down to a simple form based on its fundamental principles. That process of abstraction - of stripping away the details to identify a simpler problem that has the relevant features of the real-world problem - is a really useful thing to do, and you can often discover a lot of interesting things by performing that abstraction.

In a way, the "coin-flip" thing is an abstraction of a complex process to something simple that contains some relevant feature.

The problem WRT Dembski isn't that he's reducing a problem to an abstraction and analyzing the abstraction. The problem is that you need to make sure that your abstraction captures *all of the relevant features* that you're discussing. You need to *validate the abstraction* as a model of the real world.

Dembski's reduction of the process of analyzing certain features of the universe to a metaphor involving recognition of patterns in coin flips is fine with me. The problem is that he creates a model that omits important elements of the real world, and then insists that the conclusions drawn from his abstract model are applicable to the real world, even though his model is a poor representation of the real world.

In the case of the coin flip metaphor: if you've reduced the "problem" of observing patterns to one of flipping coins, and you notice some pattern in the results of a sequence of coin-flips, you could perhaps conclude that someone is cheating.

But if, in your abstract model of coin-flipping, you've eliminated the possibility of non-deliberately biased coins; and you've assumed that the table that the coin lands on is perfectly smooth, eliminating any bias-factor that could be produced by irregularities in the surface; then your process of abstraction has deliberately removed relevant features of the real world. So if you observed a seemingly non-random result from a sequence of real coin flips, and you concluded that *the only possible source* of that apparent non-randomness is deliberate intervention, then you'd be drawing an invalid conclusion. There are a number of factors - irregularities in the coin, irregularities in the surface that it lands on, accidental bias in the way the coin is thrown - that could explain the observed results; but you excluded them from consideration.

To be a bit more concrete: take a Dembski-ish model of the world. Now, point an antenna at a random point in the sky. If you see what looks like a highly regular repeated pattern of on-off pulses of radio waves, can you conclude that there must be an intelligent agent creating that regular pattern? No. There are things like pulsars - which produce very regular pulse patterns. Sure, "I shouldn't see anything with a regular pattern" is an OK abstraction, but when you see something that seemingly violates the conclusion you drew from the abstraction, you need to ask "Is it my model that's wrong?". You can't just wave your hands and say "Ooh, look, a pattern; my model says that can't happen unless there's something intelligent producing it, so I just proved there's extraterrestrial intelligent life."

Mark C. Chu-Carroll says:

Fair enough, I'll just assume the mysterious bit-bucket scenario. But I don't understand the "with no context" comment. Dembski's definition of "information" lacks context in a thread where what he means by certain terms, including "information", is the subject of some dispute.

Troublesome Frog says:

You must have me confused with another poster, since I made no mention of "magic", nor do I understand what you think that word means.

jackd says:

"Output" of what? Isn't that the issue I raised? According to MCC, and how he is trying to force definitions onto Dembski's text, the definition is that of an "input" not "output" string. IOW, the source of the string is irrelevant to the "information" it contains. He would like you to believe that is the only definition relevant to information theory. But that isn't true:

http://en.wikipedia.org/wiki/Self-information

http://www.answers.com/topic/information-theory

[excuse the poor formatting and missing formula - see link]

So, by this view of the amount of information, we need to know something about the event and its probability. For example, with a simple bit string example, a string of 100 "1"'s doesn't necessarily imply information or the lack thereof by Dembski's definition. Lawlike behavior can produce something like that. It is only in the context of the "tossing of a fair coin" that we can begin to construct the probabilities, and hence the information involved. That is different than the "information" as MCC is viewing it.
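That dependence on a probability model can be made concrete with a small sketch (my own illustration, not from anyone's paper; both the fair-coin model and the "lawlike" model with its 0.99 probability are assumptions chosen just for demonstration). The same string of 100 "1"s carries wildly different self-information depending on which model you assign to the event:

```python
import math

def self_information(p: float) -> float:
    """Self-information of an event with probability p, in bits: I = -log2(p)."""
    return -math.log2(p)

s = "1" * 100

# Under a fair-coin model, this exact 100-bit string has probability (1/2)**100.
p_fair = 0.5 ** len(s)
print(self_information(p_fair))      # 100.0 bits: maximally surprising

# Under a hypothetical "lawlike" process that emits all-ones 99% of the time,
# the very same string is barely surprising at all.
p_lawlike = 0.99
print(self_information(p_lawlike))   # roughly 0.0145 bits
```

The string didn't change between the two computations; only the assumed model of the event did, which is exactly the point about needing context.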

Roger:

Until you address the fact that you have specifically accused me of *lying*, I'm not interested in continuing the discussion. Either show how I'm lying, or retract the claim; either way, I'll happily continue the discussion.

But being called a liar doesn't sit well with me. So either put up - and show that I've lied here - or admit that you were wrong in accusing me of lying. Otherwise, as far as I'm concerned, the discussion is over.

Mark C. Chu-Carroll says:

I thought we had dealt with that. Indeed, I only replied to your claims about Dembski's being dishonest. Yet, you provide no evidence that he is, and brush it off by labeling it "[a]brasive, insulting, argumentative, even arrogant and obnixous". Is calling Dembski a liar merely being "[a]brasive, insulting, argumentative, even arrogant and obnixous", or is it making a specific claim? If the former, your current case fails by definition. If the latter, then you have failed to make your own case on the merits.

You are free to obsess about this, or ban me from your blog, as you wish. The point I am making, which seems to have sailed over your head, is that what's sauce for the goose is sauce for the gander. And an even more important message for you (which seems will elude you for now, but may be exposed to others who haven't so much personally invested) is that there is a reason that ad homs are a logical fallacy.

You approach Dembski with the a priori conviction that he is dishonest and an idiot, and lo and behold, that is what you "see" in his words. But PaulC, a guy who disagrees with Dembski's conclusions, nevertheless didn't stumble into that trap. He saw where Dembski made defensible statements. A guy like me, who, as you claim, doesn't "understand information theory", has no problem understanding Dembski, and easily finding support for his claims from non-ID people involved in information theory.

That is what should embarrass you. Not that some pseud on the internet called you dishonest.

And for those of you familiar with the film "Flock of Dodos", this is part of what Randy Olson is trying to figure out.

Roger:

As I said - no discussion until you either back up your claim that I'm a liar, or withdraw it. Playing semantic games to avoid the fact that you specifically called me a liar, and said you were going to demonstrate that isn't going to get around it. Back it up, withdraw it, or shut up.

MCC says:

I'll let this be my last posting here, whether it gets thru or not. I can't imagine what you think I would want to discuss with you now. You are the classic case of the fella who can dish it out, but can't take it. Most discussions that interest me are with folks with whom I can establish some kind of mutual ground rules, frequently implicit, about what goes and what doesn't. But you wish to always be the "special case", dishing it out but claiming immunity from return fire. I held up the mirror to you, reflecting your own language back to you, and you freak out and start whining about retractions.

I've given more evidence for my "charge" against you than you have against Dembski, but you refuse to consider that. You are obsessed with your own hurt ego. I see no sense of any concern about the charges you have made against Dembski, despite the fact that both PaulC and I have shown you errors in your critique of the Dembski article.

You can only do something about that which you control. If I saw any effort on your part to crank down the rhetoric you used prior to my entry into the debate, I would be more than willing to defuse the issue and move on. Indeed, it was my hope that by holding up the mirror to your rhetoric you could see the problem with it. Alas, I misjudged your ability to evaluate your own behavior. To yourself, you will always be the "religious jew[,] [b]ut [] an honest one - which means that I do not play stupid games", despite all evidence to the contrary.

I have no problem with discussions that are "[a]brasive, insulting, argumentative, even arrogant and obnixous", or those that are more genteel. But I have a low tolerance for baby-sitting folks who think they are immune to the standards they themselves promulgate.

Roger:

You don't seem to get it, at all.

You're babbling about math you don't understand, and then going out of your way to throw personal insults.

Statements like "you're focusing on the string instead of the event" is an absolute demonstration that *you don't understand information theory*. There are two ways of understanding information: the on-line perspective (which is the perspective of observing the event, and characterizing information in terms of "surprise"), and the off-line perspective (which is looking at the string that results, and considering its compressibility).
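The distinction between the two perspectives is easy to demonstrate with a toy sketch (my own, and it leans on one big assumption: zlib compressed size is only a crude stand-in for K-C complexity, since true Kolmogorov complexity is uncomputable). Under a fair-coin model, every 1000-flip outcome has identical on-line "surprise"; only the off-line compressibility view separates a patterned string from a random-looking one:

```python
import random
import zlib

def compressed_size(s: bytes) -> int:
    """Off-line view: compressed length as a rough proxy for K-C complexity."""
    return len(zlib.compress(s, 9))

rng = random.Random(0)
all_ones = b"1" * 1000
coin_flips = bytes(rng.choice(b"01") for _ in range(1000))

# On-line view: under a fair-coin model, *both* outcomes have probability
# 2**-1000, hence exactly the same self-information: 1000 bits of "surprise".
# Off-line view: the two strings look completely different.
print(compressed_size(all_ones))    # small: the repetition compresses away
print(compressed_size(coin_flips))  # much larger: nearly incompressible
```

Both perspectives are legitimate information theory; the point is that "focusing on the string" is not some confusion, it's the standard off-line view.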

You can play rhetorical games all you want, but in the end, it comes down to this: you're making nonsensical mathematical arguments, and you've made very specific accusations of dishonesty against me which you can't be bothered to justify.

Mark:

I would like to thank you for a well-written piece on the many mistakes that Dembski makes. When I first saw Dembski use these mathematical concepts, I knew that he had either abused or misunderstood them. But I did not know which until I read this entry.

Whereas I don't think using words like "idiot" is helpful, in the debate between Mark Chu-Carroll and Roger Rabbit I tend to agree more with MCC than with RR. What puzzled me in this debate was that both sides ignored earlier publications wherein Dembski's treatment of specification was discussed in thorough detail (see, for example, the essay by Wesley Elsberry and Jeff Shallit at http://www.talkreason.org/articles/eandsdembski.pdf ; BTW Shallit was teaching Kolmogorov complexity to the class where Dembski was a student; it seems Dembski did not absorb Shallit's lectures well enough).

Regarding Dembski's dishonesty (which Roger seems to doubt) see, for example, http://www.talkreason.org/articles/creative.cfm .

Mark P:

As I understand the point of Mark's blog, it is that he looks for and criticizes bad math where he sees it, without necessarily bothering to study prior art. But I'm sure your references are welcome. I found them interesting.

Speaking of which, in Elsberry's and Shallit's essay I can't find, at a quick glance, the rather short and elegant analysis that Mark does in his post. I also note that they seem to conflate SC and CSI, which as I understand it have the same basis, while CSI makes a more specific and stronger claim and is used differently by Dembski. Of course, taking care of CSI takes care of SC, which is what they do. But I can't see that they care to make that point explicit.

I also think Mark has come to the conclusion that Dembski's math shows that he is inept at it, lying or not. It indeed seems that on other areas he is more convincing as a liar.

If information does not prove anything about its source, as seen in the coin-flip vs. bit-machine example, then Dembski's mighty effort of 15 years to prove source based on information...

...is all a total waste.

No matter to what absurd and apparently unreachable heights Dembski pushes the improbability of life on Earth, those arguing for chance can simply expand the probabilities infinitely. Maybe there were a million universes which existed lifeless before collapsing back into the cosmic egg, and this is the millionth and first. Other than baseless assertion and pointless handwaving, what could possibly end the argument in Dembski's favor?

The most irritating part of this, to me, is that Dembski keeps asserting, without evidence, that he can construct a confidence interval to trap the presence of a First Cause. Such a claim is meaningless in both science and theology, and resembles some kind of near-waking dream or maybe a shroom fantasy. Can anyone explain why we're still having this conversation?

Dave

Hi Mark;

As you indicate, it's a problem that Dembski uses the word "complexity" with more than one meaning, and if you don't keep track of which meaning is used where, things get confusing.

At p. 15 Dembski writes:

At p. 16 he writes:

Now, "pattern-simplicity" and "descriptive complexity" are one and the same, and at least analogous to K-C complexity. However, "event-complexity" from p. 15 is the improbability of the event occurring by chance.

Say you toss a fair coin 100 times, and you toss it again 100 times. What is the probability of getting the same sequence in both rounds? It's 1/2^100, a low probability, and therefore corresponding to a high event-complexity.

This complexity, the event-complexity, is independent of the simplicity/complexity of the description of the event. If the sequence happens to be "all 1s", there's a simple description; but in most cases there is not. However, my point is that Dembski uses the word "complexity" in two different and unrelated ways, and the way it's supposed to be used in the term "specified complexity" is in the probability sense. That's just the way it is, believe me :-)

That is, an event with high specified complexity is an event that is simple to describe (low descriptive complexity) but difficult to (re)produce without cheating - where "difficult" means "unlikely to occur in one try".
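The independence of the two notions shows up even in a quick simulation (mine, not from the comment above; I've shrunk the example to 10 flips so the probability is easy to estimate, and the two target strings are arbitrary choices). The chance of reproducing a given sequence is 2^-10 whether the target has a simple description or not:

```python
import random

rng = random.Random(42)

def flips(n: int) -> str:
    """One round of n fair-coin tosses, as a string of '0'/'1'."""
    return "".join(rng.choice("01") for _ in range(n))

def hit_rate(target: str, trials: int) -> float:
    """Estimate the probability of reproducing `target` in one round."""
    return sum(flips(len(target)) == target for _ in range(trials)) / trials

print(2 ** -10)                        # exact event probability: ~0.000977
print(hit_rate("1111111111", 10**5))   # near 0.000977 (simple description)
print(hit_rate("1101001011", 10**5))   # also near 0.000977 (no simple description)
```

Event-complexity comes out the same for both targets; only the descriptive complexity differs, which is exactly the sense in which the two uses of "complexity" are unrelated.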

Why does someone who is a 'scientist' have to resort to name-calling and personal attacks when debating a scientific theory? You sound like someone who is very angry, and defending their faith! you don't need a ph.d to figure out this is not about 'science', its personal, and ugly.

I've noticed that darwiniacs try to silence their critics, as in the Kitzmiller case. If that doesn't work, then they quickly resort to personal attacks. If you were so secure in your 'theory' then you would welcome any challenges. But you act more like an islamo-fascist when they think islam has been insulted.

you do your side no credit.

For all your complaints, I notice that you don't bother to address *any* of my actual criticisms of Dembski and his sloppy math. The simple fact of the matter is: Dembski *does not* understand information theory, and makes incredibly foolish and inept mistakes. And you *can't* refute that, which is why you're focusing on the *tone* of my critique rather than the content.

why should I bother to get into a catfight with someone who may have intellect, but no maturity? Why should I bring myself to your level? You wouldn't believe anything I said anyway; you are firm in your faith.

and make no mistake, this is not 'science', it's faith. You perceive it to be under attack, and you react with hatred and anger. I would never raise my voice to my colleagues, or call them names, when discussing something related to my profession. I've noticed it's easy to call people names on a message board, but much harder in person.

Yes, no doubt, you're so superior to a lowly swine like me that I'm not worth talking to. So you'll lecture me on manners; and you'll come back to my blog repeatedly to reply; and tell me about how what I've written is "not science".

But addressing the actual *content* of my post? No. That's clearly over the line, because I'm not worth it.

The fact remains: Dembski does not get information theory; and he's defined his terms so that his fundamental concept of "specified complexity" is meaningless.

The fact remains when you have to resort to name-calling you've already lost the argument! looks like I got under your skin! you darwiniacs, and yes Ann is right about people like you, are very sensitive when it comes to your faith aren't you? You have to try to silence your opponents, and when that doesn't work, personally attack them. You're desperately trying to make a name for yourself. I tried to find where Dembski has responded to you, but I couldn't in a quick search, perhaps he doesn't think you're worth responding to? thats what really bothers you isn't it? He has responded to many others, but not to you!

I can't argue the math, I'm not a mathematician. But what we're really talking about is evolution. You look around at all this complexity and see it all happening by chance. But Hoyle, who was probably a better mathematician than you, said it was like the chance of a tornado going through a junkyard and creating a 747. You look at the universe, and again Hoyle said: "A common-sense interpretation of the facts suggests that a superintellect has monkeyed with the physics."

Then the darwiniacs resort to just-so stories...given this, then that, and in desperation, resort to calling it 'evolution' when a bacteria loses sensitivity to an antibiotic. When it's still a bacteria, and, according to Spetner, whom I'm sure you disagree with, the change causes the bacteria to lose information.

You want to make a name for yourself? its simple, prove evolution. Which should be easy as pie! since life arose by chance, using what...INTELLIGENCE, you should be able to produce life! do that, and you win the nobel prize, and are set for life, and you prove dembski, behe, and all those other hicks wrong!!! just think of the glory! just mix a few chemicals, zap it with electricity (ala frankenstein) and voila LIFE!! should be easy since, according to darwin, life arose by chance, dumb luck!

but you can't. Dawkins was right, biologists study very complex things that look like they were designed....he should have stopped there, because it goes against common sense to think that all this immense complexity, that we barely understand (whats dark matter again?) just came here by chance.

You can babble all you like. But you've just *admitted* that you don't understand the post that you're criticizing.

You started off by saying "this is not science". But when you're pressed to say *why* it's not science; *why* my criticism of Dembski is invalid, what do you do? You admit that "I can't argue the math, I'm not a mathematician."

The whole point of this post was that Dembski's math is *wrong*. Badly, god-awfully wrong. I'm not trying to make a one-page proof of the theory of evolution. I'm criticizing sloppy, bad, *wrong* math by someone who presents himself as a mathematician.

Part of the beauty of math - and science - is that I don't *have* to be as smart as Hoyle. I don't even have to be as smart as Dembski. As long as I *show the math*, and my math is valid, the argument is over.

Dembski argues that specification = low K-C complexity, and complexity = high K-C complexity; therefore specified complexity is "high and low K-C complexity at the same time".

Unless I'm *wrong* that, in the paper quoted above, Dembski describes specification as low K-C complexity, his SC definition is a pile of gibberish. If I'm wrong about Dembski's definition of specification, then show me how.

Remember - you're the one who started this little discussion by asserting that the argument here "isn't science". If you can't actually make any argument for *why* it wasn't science, then just what does that make you? A four letter word comes to mind...

Why should I believe YOUR math over Dembski's? when you have to resort to calling him names!! and no its not 'science' when you resort to name-calling, but I've noticed you darwiniacs call science, and evolution whatever you want!

I notice you couldn't answer any of my points, but hey thats OK, have you created that life form yet? whats taking you so long?? having a litle problem?? HMMMM?? well lets make it easy for you, go ahead and take an existing life form and 'evolve' it into sd98fhvw (something NEW) given that they've tried for thousands of generations with fruit flies and bacteria, and have had no luck...I'm sure you with your 'intellect' will have no problems!!

as far as calling me a 'four letter word' isn't that nice, real 'tough' on a message board, but you wouldn't do it to my face you punk ass little !!!! HAHAHAHAAHHA

thanks for proving my points LOSER!! HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAH

Why should you believe my math over Dembski's? That's an easy one.

As I said in my last comment: the beauty of math is that *anyone* can do it. It's not that it's *my* math. It's that it's *math*. It's not just me blindly asserting my belief over Dembski's. It's me presenting *the mathematical argument that demonstrates that Dembski is wrong*. Anyone who knows math can read a mathematical argument, and see if the math is valid. The reason why you should believe my math over Dembski's is quite simple: because the argument is valid mathematics. I've shown what's wrong with Dembski's math.

Incidentally, the four-letter word I was thinking of was "liar". You popped up and claimed that my article was not science, but you later admitted that you didn't understand the actual *content* of the article. So you were lying when you claimed to be able to say whether or not there was any scientific content.

Oh gods... I just read this again (linked via Rockstar), and now my head hurts...

Probably long-gone troll:

WHAT points? You never made one.

Reminds me of some trolls who seemed to think that keeping my gmail account open on a weekend proves that Sylvia Browne is psychic.

Check out this paper

Sorry for not seeing this sooner.

T. Larsson wrote:

We did make the point that K-C complexity has nothing to do with probability in our discussion of Dembski's misuse of Davies. There we also noted the relation between CSI and compressibility.

Dembski uses SC, CSI, and "complexity-specification" as synonymous phrases. See the preface of "No Free Lunch" for an example of Dembski explicitly doing so for "specified complexity" and "complex specified information".