Despite having written about it before, I still get a lot of questions about William Dembski’s “No Free Lunch”

(NFL) theorems. One message recently contained the question in a particularly interesting form, so I thought I’d take

the opportunity to answer it with a post.

Here’s the question I received:

- Is the NFL theorem itself bad math?
- If the theorem itself is sound, what’s wrong with how it’s being applied? Is it a breadth issue

or a depth issue?

The theorems themselves are actually good math. They’re valid given their premises; they’re even deep

and interesting theorems in their way. The problem isn’t the theorem: it’s the way that the theorem is abused when it’s applied to evolution. But what I found interesting is the idea of characterizing the problem as one of breadth or depth.

Before I get into the depth of the answer, let me quickly explain what the NFL theorems say.

Suppose you’ve got a multivariable function, f(x_{1},…,x_{n}), called a *landscape function*. You want to use a search function to find the maximum value of f. NFL asks: if you know nothing about f, and all you can do is ask for the value of f applied to a specific set of parameters, can you write a search function which can find the maximum of f?

NFL says no. If you know nothing about the landscape, no search function that you can write is guaranteed to do any

better at finding the optimum than a random walk through the landscape. More precisely, it says that given any

particular search function, the performance of that search function *averaged over all possible landscapes* is

no better than a random search.
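To make the averaging claim concrete, here’s a minimal sketch (the landscape size, evaluation budget, and trial count are arbitrary choices of mine, not from any NFL paper). It compares two non-revisiting search strategies – a fixed sequential sweep and a random sample – over many randomly generated landscapes. Averaged over landscapes, their best-found values come out essentially identical, which is exactly what the theorems predict.

```python
import random

def random_landscape(n, seed):
    """A landscape: an arbitrary value attached to each of n points."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

def sweep_search(f, budget):
    """A 'clever-looking' deterministic strategy: scan the first points in order."""
    return max(f[:budget])

def random_search(f, budget, rng):
    """Sample `budget` distinct points at random and keep the best."""
    return max(f[i] for i in rng.sample(range(len(f)), budget))

rng = random.Random(1)
trials, size, budget = 2000, 64, 10
avg_sweep = sum(sweep_search(random_landscape(size, s), budget)
                for s in range(trials)) / trials
avg_rand = sum(random_search(random_landscape(size, s), budget, rng)
               for s in range(trials)) / trials
# Averaged over many arbitrary landscapes, the two strategies perform the same.
print(f"sweep: {avg_sweep:.3f}  random: {avg_rand:.3f}")
```

Any other non-revisiting strategy you substitute for the sweep gives the same average; only structure in the landscape can break the tie.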

Interestingly, you can view NFL as something pretty close to a variant of the halting problem. You can model

the halting problem as a landscape where each position in the landscape corresponds to a sequence of instructions, and the landscape value l at that point is a measure of the probability of the program halting after that sequence of instructions. (Of course, you can’t really construct that landscape; but you can construct one where you use a heuristic to approximate the halting probability, and only get it exactly right at places where the program

halts in an observed execution.) Then solving the halting problem is equivalent to searching that landscape for a point where l is either 0 or 1. In that case, NFL is saying, basically, that there’s no search function for that space

that does any better than randomly executing the program for a certain duration and seeing if it halts.

Hopefully, that gives you some idea of why I say there’s some actual depth to NFL. It is saying something about a

general problem: there is no universal search. If you don’t know anything about the properties of the landscape in

advance, then you can’t pick a good search function for that landscape. You need to understand something about what the

landscape looks like in order to design or select an effective search algorithm for it.

Back to the question: is the problem with NFL as an anti-evolution argument breadth or depth?

The answer is *both*.

The breadth problem is simple: the NFL formulation is too broad. NFL says that averaged over *all* fitness

landscapes, any given search function is no better than a random walk. Based on this, they say that if you model

evolution as a search function over the fitness landscape of survival, it can’t possibly do better than total randomness. That’s nonsense: evolution doesn’t need to work over *all possible* fitness landscapes. Biology and survival, modeled as a fitness landscape, isn’t all possible landscapes. It isn’t even a single randomly selected

landscape. It’s a highly constrained landscape. Evolution doesn’t need to be able to search *any* landscape – it just needs to be able to search *one particular kind* of landscape. If you constrain the properties of the landscape, then you absolutely *can* pick effective search functions.
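Here’s a toy illustration of that point (the landscape and step size are hypothetical choices of mine, purely for demonstration): on a smooth, single-peaked landscape, even the dumbest greedy hill-climber homes in on the peak – a guarantee it could never have on an arbitrary landscape.

```python
import math

def fitness(x):
    """A constrained landscape: smooth, with a single peak at x = 0.7."""
    return math.exp(-20 * (x - 0.7) ** 2)

def hill_climb(f, x, step=0.01, iters=200):
    """Greedy local search: move to a neighboring point if it's fitter."""
    for _ in range(iters):
        for candidate in (x + step, x - step):
            if f(candidate) > f(x):
                x = candidate
                break
    return x

best = hill_climb(fitness, 0.0)
# The climber exploits the landscape's smoothness to reach the peak.
print(f"x = {best:.2f}, fitness = {fitness(best):.3f}")
```

The climber succeeds precisely because the landscape is constrained: smoothness means that local improvement is reliable evidence about where the peak lies.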

For example: Newton’s method of root-finding is a landscape search. It uses the slope at a point to guide it towards

the root of a function. It’s optimizing for the minimum distance from the true root: the landscape at any point

is the distance from a root. As anyone who’s taken college calculus knows, Newton’s method works extremely well for a large number of continuous functions. Dembski’s argument that NFL shows that evolution can’t possibly produce fit organisms could be applied, virtually without modification, to show that Newton’s method can’t possibly find roots. And it would be equally true: applied to arbitrary functions, Newton’s method will fail most of the time. Sometimes it

will work, if the function happens to have the right properties – but *most* functions don’t. Newton’s method requires functions to be continuous and differentiable – most functions aren’t even differentiable. And Newton’s method fails even on most differentiable functions.
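A sketch of both behaviors (my own illustration, not anything from Dembski): Newton’s method nails the root of a well-behaved function like x² − 2, but on the cube-root function – which is continuous but not differentiable at its root – the very same iteration diverges, because each step maps x to −2x.

```python
import math

def newton(f, df, x, steps=50):
    """Newton's method: repeatedly follow the tangent line toward a root."""
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

# A well-behaved function: converges rapidly to sqrt(2).
good = newton(lambda x: x * x - 2, lambda x: 2 * x, x=1.0)

# The (signed) cube root: each Newton step sends x to -2x, so iterates blow up.
cbrt = lambda x: math.copysign(abs(x) ** (1 / 3), x)
dcbrt = lambda x: abs(x) ** (-2 / 3) / 3
bad = newton(cbrt, dcbrt, x=1.0)

print(f"root of x^2 - 2: {good:.6f}")
print(f"cube-root iterate after 50 steps: {bad:.3g}")  # diverged
```

Same algorithm, two landscapes: the method’s success depends entirely on properties of the function it’s applied to, not on the method itself.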

Because the NFL argument ranges so broadly over landscapes, its conclusions don’t make any sense applied to the real

phenomenon of life. You can’t use facts about the set of all possible landscapes to draw conclusions about

individual landscapes.

Now, let’s move on to the depth problem. If you look at how NFL models evolution in depth, you find that it’s

a terrible model. I’ve discussed the problems with modeling evolution as a fitness landscape before, so I won’t go into depth here. But the entire idea of using a fitness landscape is a train wreck. Landscape search is based on *static* landscapes; life isn’t a static landscape. That, right there, is enough to utterly wreck the entire argument. Evolution isn’t landscape search. Fitness landscapes are a useful tool for modeling certain *short-term* aspects of evolution; but as a general model, they do a terrible job. Look at the problem in depth, and you find that the model of biology and evolution used in the NFL arguments is such a poor model of evolution that there are *no* valid conclusions that can be drawn from it about the process as a whole.
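One way to see the static-landscape problem in miniature (using a hypothetical time-varying fitness function of my own, purely for illustration): let the landscape’s peak drift over time, and the fitness of a fixed “optimal” point decays even though the point itself never changes. A static-landscape search has no way to even express this.

```python
import math

def fitness(x, t):
    """A time-varying landscape: the peak sits at 0.1 * t and drifts rightward."""
    return math.exp(-((x - 0.1 * t) ** 2))

# A point that is perfectly fit at t = 0...
trajectory = [fitness(0.0, t) for t in range(5)]
# ...steadily loses fitness as the landscape moves out from under it.
print([round(v, 3) for v in trajectory])
```

In biology the drift is driven by the population itself – predators, prey, and competitors are all part of each other’s landscapes – so the target moves as a direct consequence of the search.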