As lots of you have heard, William Dembski and Robert Marks just had
a paper published in an IEEE journal. In the last couple of days, I’ve received about 30
copies of the paper in my email with requests to analyze it.
My biggest criticism of the paper is how utterly dull it is. It’s obvious
how they got it published – they removed anything that’s really interesting from it. It’s
a rehash of the stuff they’ve written before, stripped of any content that directly hints
at the anti-evolution part of their claims – which leaves a not-particularly-interesting
paper on search algorithms.
I’m not going to rehash my primary criticisms of Dembski’s approach here – I’ve done it lots of times before, most recently in this post, which critiques a very closely related paper by D&M. In fact, this paper
is really just an edited version of the one I critiqued in that post: it’s that paper with all of the intelligent-design speak removed.
What this paper does is take the basic idea of the NFL theorems, and builds on it in a
way that allows them to quantify success. If you recall, NFL says that averaged over
all possible search landscapes, the performance of all search algorithms is
equivalent – and therefore, no search algorithm is generally better than pure random
search. But searches obviously work – we use search algorithms every day, and they
clearly work a whole lot better than random guessing.
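The NFL claim is easy to see at toy scale. Here's a sketch (my own illustration, not anything from the paper) that enumerates *every* possible landscape on a four-point domain with binary values, and shows that two completely different fixed probe orders perform identically when you average over all of them:

```python
import itertools

def avg_best_after_k(order, k, n=4):
    # Average of the best value found after k probes, taken over
    # ALL 2**n possible landscapes (functions from n points to {0, 1}).
    total = 0
    for values in itertools.product([0, 1], repeat=n):
        total += max(values[i] for i in order[:k])
    return total / 2 ** n

# Two different deterministic, non-repeating "search strategies":
# they probe the domain in different orders.
a = avg_best_after_k([0, 1, 2, 3], k=2)
b = avg_best_after_k([3, 1, 0, 2], k=2)
# NFL: averaged over all landscapes, their performance is identical.
```

Any pair of non-repeating probe orders gives the same average here, because over the uniform distribution of landscapes, the values seen at any two distinct probe points are identically distributed.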
Search algorithms work because they’re not intended to succeed in all
possible landscapes – they’re designed to work in specific kinds of landscapes.
A successful search algorithm is designed to operate in a
landscape with a particular kind of structure. The performance of a particular search is
tied to how well suited the search algorithm is to the landscape it’s searching. A hill-climbing search algorithm will work well in a continuous landscape with well-defined hills. A partitioning search will work well in a landscape that can be evenly partitioned in a well-defined way.
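To make that concrete, here's a minimal sketch (my example, not one from the paper): a greedy hill climber on a one-dimensional landscape with a single smooth hill. The structural assumption it exploits – that a better neighbor always lies toward the peak – is exactly what pure random search lacks:

```python
def hill_climb(f, x, steps):
    # Greedy neighbor-stepping: move to an adjacent point whenever it
    # improves f. This only works because the landscape is smooth.
    for _ in range(steps):
        for nbr in (x - 1, x + 1):
            if 0 <= nbr < 100 and f(nbr) > f(x):
                x = nbr
                break
    return x

f = lambda x: -(x - 50) ** 2   # one smooth, well-defined hill peaking at 50
best = hill_climb(f, 0, steps=60)

# Pure random search can't exploit that structure: each guess is
# independent, so it needs on the order of 100 draws to expect to hit 50.
```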
So how can you quantify the part of a search algorithm that allows it to
perform well in a particular kind of landscape? In terms of information theory, you can
look at a search algorithm and how it’s shaped for its search space, and describe how
much information is contained in the search algorithm about the structure of the
space it’s going to search.
What D&M do in this paper is work out a formalism for doing that – for quantifying the amount of information encoded in a search algorithm, and then show how it applies
to a series of different kinds of search algorithms.
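If I'm reading their formalism right, the quantification boils down to a difference of log-probabilities: the difficulty of the blind search minus the residual difficulty once the search's built-in knowledge is applied. The numbers below are made up purely for illustration:

```python
import math

# Hypothetical numbers: a blind search over a space of 2**20 points
# (p = 2**-20 chance of success per query), versus an assisted search
# that succeeds with probability q = 1/8 per query.
p = 2 ** -20          # baseline (blind-search) success probability
q = 1 / 8             # assisted-search success probability

endogenous = -math.log2(p)           # difficulty of the raw problem: 20 bits
exogenous = -math.log2(q)            # residual difficulty with the assist: 3 bits
active = endogenous - exogenous      # information credited to the search: 17 bits
```

On this accounting, the assisted search "contains" 17 bits of information about the structure of the space – which is exactly the kind of claim my objection below applies to.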
None of this is surprising. Anyone who’s studied Kolmogorov-Chaitin
information theory will respond to this with “Yeah, so?”. After all, K-C theory describes
the information content of stuff in terms of the length of programs – the search programs
clearly contain information – they’re programs. And they clearly only search certain spaces, so some portion of the information in the program encodes how to
exploit the structure of the search space. This much is obvious. Being able to quantify
how much information is encoded in the search? If you could do it well,
it would be moderately interesting.
It’s a nice idea. So what’s wrong with the paper?
Personally, I come from a background where I look at these things in K-C terms. And
one of the major results in K-C theory is that you can’t really quantify
information very well. How much information is in a particular search? Damned hard to say
in a meaningful way. But D&M don’t bother to address that. They just use a very naive
quantification – the same logarithmic one that Dembski always uses. I dislike it
intensely, because it essentially asserts that any two strings with equal
numbers of bits have the same amount of information – which is just wrong.
(“00000000” and “01001101” have different amounts of information, according to information theory.) The reason that Dembski can quantify things specifically is really because
he’s handwaved that important distinction away – and so he can assert that a particular
search encodes a certain number of bits of information. What he actually means is
that a given search algorithm can be said to contain something equivalent to a string whose length is N bits – but he says nothing about the true
information content of that string.
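That distinction is easy to demonstrate with a crude stand-in for K-C complexity: compressed length. (This is just my illustration – zlib is nowhere near a true Kolmogorov measure, but it makes the point.)

```python
import random
import zlib

# Two strings of the same length. Bit-counting says they carry the same
# amount of information; a compressor (a rough proxy for Kolmogorov
# complexity) disagrees, because one of them has exploitable structure.
repetitive = b"0" * 1000
rng = random.Random(42)                                 # seeded for reproducibility
varied = bytes(rng.randrange(256) for _ in range(1000))

rep_len = len(zlib.compress(repetitive))   # tiny: the structure compresses away
var_len = len(zlib.compress(varied))       # roughly 1000: nothing to exploit
```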
Second, he rehashes some old, sloppy examples. As has been discussed numerous times,
Dembski likes to harp on the “Methinks it is like a weasel” example used by Dawkins. But
he misrepresents the example. In Dawkins’ presentation of that example, he used a program
that searched for a string based on its closeness to the target string. It did
not lock positions in the string. But Dembski has always presented it as if it
locks. (In a locking version of this, when a character position is identified as correct,
it’s removed from the set of possible mutations – it’s locked, so that it will never
change from correct to incorrect. In Dawkins’ real example, a correct character position
would sometimes change to incorrect – if, in a given mutation round, one of the candidate
strings added two new correct character positions, but changed one formerly correct one to
something incorrect, that candidate would survive.) In this paper, one of the examples
that Dembski uses is the Weasel with locking. (And he does it without any reference to
Dawkins.) He uses it as an example of partitioned search. He’s absolutely correct that the
locking version of that is partitioned search. But frankly, it’s a lousy example
of partitioned search. The classic partitioned search algorithm is binary search. For more
complicated examples, people generally use a graph search. So why did he use the locking
weasel? I’m attributing motive here without proof, but based on his track record, he wants
to continue harping on the weasel, pretending that Dawkins’ example used locking, and he
wants to be able to say that his use of it was approved of by a set of peer reviewers.
If he claims that in the future, it will be a lie.
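For the record, the difference is easy to see in code. Here's a sketch of both variants (my reconstruction of the scheme, not anyone's actual program): Dawkins' version mutates every position of every child and keeps the fittest; the locking version freezes matched positions, which is what makes it a partitioned search:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def weasel(locking, rate=0.05, pop=100, seed=1, max_gens=10_000):
    rng = random.Random(seed)
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    for gen in range(1, max_gens + 1):
        children = []
        for _ in range(pop):
            # Locking: a position that already matches the target is frozen
            # and can never mutate back to incorrect. Dawkins' real scheme
            # had no such freeze; correct positions could (and did) regress.
            child = "".join(
                c if (locking and c == t) or rng.random() > rate
                else rng.choice(ALPHABET)
                for c, t in zip(parent, TARGET))
            children.append(child)
        parent = max(children,
                     key=lambda s: sum(a == b for a, b in zip(s, TARGET)))
        if parent == TARGET:
            return gen   # generations needed to reach the target
    return None          # didn't converge within max_gens

locked = weasel(locking=True)
unlocked = weasel(locking=False)
```

In the locking version, the number of matched positions in the surviving child can never decrease between generations; in the non-locking version it can, which is exactly the behavior that the "locking" presentation erases.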
As for intelligent design? There’s really nothing in this paper about it. I’m sure Dembski will run around bragging about how he got an ID paper published in a peer-reviewed journal. But arguing that this is an ID paper is really dishonest, because all of the ID-related content was stripped out. In other places, Dembski has used these quantification-based
arguments to claim that evolution can’t possibly work. But this paper contains none of
that. It’s just a fairly drab paper about how to quantify the amount of information
in a search algorithm.
A few final snarky notes:
- This paper is published in “IEEE Transactions on Systems, Man, and
Cybernetics, Part A: Systems and Humans”. This is not exactly a top-tier journal.
It’s a journal of speculative papers. Out of curiosity, I looked through
a handful of them. I’m not super impressed by any of the papers I looked at.
- As usual, Dembski is remarkably pompous. At the end of each paper in the
journal is a short biography of the author. Dembski’s is more than 3 times
longer than the bios of others of any of the other papers I looked at. He just
can’t resist going on about how brilliant he is.
- On a related topic, at the beginning of the paper, in the author identification,
Dembski is identified as a “Senior Member” of the IEEE. What’s a senior member
of the IEEE? Someone who wrote the IEEE a check to make them a senior member. It’s
a membership level based on dues – student member, standard member, senior member.
Marks is an IEEE fellow – that’s a title that you need to earn based on your accomplishments.