Dembski Responds

Over at Uncommon Descent, Dembski has responded to my critique of
his paper with Marks. In classic Dembski style, he ignores the
substance of my critique, and resorts to quote-mining.

In my previous post, I included a summary of my past critiques of
why search is a lousy model for evolution. It was a brief summary of
past comments, which did nothing but set the stage for my
critique. But, typically, Dembski pretended that that was the entire
substance of my post, and ignored the rest of it. Very typical of
Dembski - just misrepresent your opponents, create a strawman, and
then pretend that you've addressed everything.

Dembski does his best to misrepresent even that small portion. As
I've said lots of times in the past: search is a crappy model for
evolution. Its value is that it provides a handle on which to hang
your intuition. But its great weakness is that your intuition is
frequently wrong. Our sense of a landscape is something
static and unchanging, smooth, with hills and valleys. We intuitively
expect a landscape to have hills with maxima, and valleys with minima.
That comes both from our intuition about "normal" shapes, and our
experience with 3 dimensional space. But in evolutionary search,
we're looking at a dynamic landscape with thousands of dimensions.
None of our expectations work there.

Dembski points out that you can always add time as a
dimension. That's true - in fact, I've pointed that out before. But
the point still holds that the static landscape model doesn't work
very well. Why? The simple way of adding time as a dimension
corresponds to our experience of time: a linear progression forwards -
a single additional dimension, where the motion through that dimension
is linear. But if we want to model evolution, we can't use
that model of time - because the landscape changes in response to
our search
. If the search chooses step A, then the landscape
responds in one way; if the search chooses step B, the landscape
responds in a different way.
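To make "the landscape responds to the search" concrete before getting to the biology, here's a toy sketch in Python. Everything in it is invented for illustration (it's not Dembski's formalism, or anyone else's): the fitness of a position depends on the history of queries, so two searches that made different choices face different landscapes.

```python
def dynamic_fitness(position, history):
    """Toy fitness: a static peak at 10, eroded by past queries.

    The landscape is a function of the query history, not just the
    position - so choosing step A versus step B literally changes
    what the landscape looks like afterwards.
    """
    base = -abs(position - 10)                    # static part: peak at 10
    erosion = sum(1 for p in history if p >= 10)  # response to the search
    return base - erosion

low_road = [2, 4, 6]       # a search that stayed away from the peak
high_road = [12, 11, 10]   # a search that probed the peak repeatedly

# Same position, different landscapes:
print(dynamic_fitness(10, low_road))    # 0: the peak is intact
print(dynamic_fitness(10, high_road))   # -3: the peak has been eroded
```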

To make that a bit more concrete, think about a simple example
for which we've got lots of observations: antibiotic resistance.
You've got a bacteria that's not resistant to any antibiotics. How's
it going to evolve over time? We have no particularly good idea. Now,
you add penicillin to its environment. What's going to happen? You're
going to select for penicillin resistance. Now, imagine two scenarios:
one where you keep giving penicillin, and one where you stop the
penicillin. In the first scenario, the landscape requires that the
bacteria maintain penicillin resistance; the bacteria can't reproduce
if it isn't resistant. So you've got a landscape that maximizes
the survival of the resistant bacteria. In the second scenario,
one possible outcome is that resistance disappears: the
resistant bacteria waste energy on their resistance strategies,
and get outcompeted by the non-resistant. The two landscapes are
totally different.
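A few lines of code make the caricature explicit. The growth and kill rates below are made-up numbers, not real bacterial kinetics; the point is only that the same starting population faces two totally different landscapes:

```python
def generation(pop, penicillin):
    """One generation of a toy model. pop = (resistant, susceptible).

    Made-up rates: resistance carries a metabolic cost, so susceptible
    cells grow faster in a drug-free environment; with the drug present,
    susceptible cells can't divide and survive at all.
    """
    resistant, susceptible = pop
    if penicillin:
        susceptible = 0                        # killed during division
        resistant = int(resistant * 1.5)
    else:
        resistant = int(resistant * 1.4)       # pays the cost of resistance
        susceptible = int(susceptible * 1.5)   # grows faster without it
    return (resistant, susceptible)

with_drug = without_drug = (10, 1000)          # mostly susceptible to start
for _ in range(10):
    with_drug = generation(with_drug, penicillin=True)
    without_drug = generation(without_drug, penicillin=False)

print(with_drug)      # only resistant cells are left
print(without_drug)   # the susceptible strain is winning the competition
```

In the drug-free run, the resistant strain doesn't vanish outright in this crude model, but it loses ground every generation, which is the competitive outcome described above.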

In that case, you've got two different landscapes because of two
different external interventions. But you can get an equivalent
situation without the external change of removing the
penicillin. Bacteria have multiple ways of combatting antibiotics.
Penicillin works by interfering with the process by which the bacteria
produce their cell walls. When they divide in the presence of
penicillin, they basically explode - they split, and they can't
manufacture the walls to close the two new cells. Some bacteria
respond to this by producing a chemical that binds to the penicillin
molecule, so that it's neutralized, and can't interfere with the
production of the new cell wall. Another response is to build the
walls in a different way, so that penicillin no longer
interferes. Both of those strategies work; and they each have
advantages and disadvantages. When exposed to penicillin, bacteria
can go either direction. If they manufacture their cell walls
differently, that produces one fitness landscape; if they excrete
a penicillin neutralizer, that produces a different fitness
landscape.

The simple way of adding dimensions to a landscape model
doesn't capture this. And it's really quite difficult to actually
capture this in a landscape model - it basically means that
you can never know what the landscape is like. You get
a massively multidimensional landscape, with absolutely no ability to
predict what will happen as time passes. It takes the model
whose advantage is simplicity and intuitiveness, and turns it into
something astonishingly complex and non-intuitive.

In short, it's a crappy model.

But - none of that matters for this discussion. I don't
like reasoning about evolution using search, because I find it to be
obfuscatory rather than clarifying; but you can model
evolution as a search over an incredibly complex and
hard-to-comprehend landscape. That's why the critique of the landscape
model in my original post was three sentences out of a rather long
post.

When you look at Dembski's article, it's very messy because he
chose (deliberately) to use a messy model. But the messiness isn't the
problem: the messiness is a distraction, which is intended to obscure
the actual problem. And the actual problem is that it's a circular
argument. It basically starts by assuming that only an intelligent
agent can create information; then it jumps through all sorts of hoops
in order to finally conclude that only an intelligent agent can create
information. In that sense, the whole thing is completely content
free: it's pure obfuscatory math - math used to obscure an argument,
rather than clarify or formalize it. In fact, the fundamental argument
of the entire thing is non-mathematical: "intelligence" isn't a part
of the math at all.

So what's all that impressive looking math about?

Not a heck of a lot. There's a reason that I keep going on about
how it's all just a mass of obfuscation.

First of all, it pretends to present a conservation law. Second,
it pretends to prove that an intelligent source of information is
required. But in fact, the conservation law isn't really a
conservation law, and the idea that an intelligent source of
information is required is not only not proved by the argument based
on the conservation law, but is in fact inconsistent with
the conservation law.

We'll start with the conservation law. As I said in the previous
post, Dembski engages in a whole lot of obfuscatory mathematics to
create a supposed proof that in order for a search to perform better
than a random walk, that search must, in essence, cheat: it must
encode information about the landscape that it traverses. In Dembski's
terminology, it must contain "smuggled" active information. The
supposed "law of conservation of information" basically says that the
amount of information encoded into the search can be measured by
the rate at which that search outperforms a random walk.
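For reference, the definition itself is one line. In the Marks/Dembski papers, if a blind (uniform random) search hits the target with probability p, and the search being analyzed hits it with probability q, the "active information" is I+ = log2(q/p). A sketch:

```python
import math

def active_information(p_blind, q_search):
    """Active information in bits, per the Marks/Dembski definition:
    I+ = log2(q/p). Positive means the search beats blind sampling."""
    return math.log2(q_search / p_blind)

# A search that is 8 times likelier than blind sampling to find the
# target gets credited with log2(8) = 3 bits of "active information":
print(active_information(1 / 1024, 8 / 1024))   # 3.0
```

Note what the definition does not give you: any way to compute q by inspecting the algorithm. You can only estimate it by running the search against the landscape, which is exactly the retrospective-only problem discussed below.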

So why is that not a "conservation of information" law?

Because there's no conserved quantity. In a real conservation law,
you have a measured quantity that you start with, and throughout any
series of actions or events, you can prove that that quantity never
changes. For example, you can look at a physical system in a
particular frame of reference and measure the total momentum in the
system. Then throughout any series of interactions, you can show
that the momentum never changes.
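For contrast, here's what a genuine conserved quantity looks like in code: a quick sanity check of the momentum example, using the standard textbook collision formulas (this has nothing to do with Dembski's math):

```python
def elastic_collision(m1, v1, m2, v2):
    """Post-collision velocities for a 1-D elastic collision
    (standard textbook formulas)."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

m1, v1, m2, v2 = 2.0, 3.0, 1.0, -1.0
u1, u2 = elastic_collision(m1, v1, m2, v2)

momentum_before = m1 * v1 + m2 * v2
momentum_after = m1 * u1 + m2 * u2
print(momentum_before, momentum_after)   # the same number (up to float rounding)
```

That's the test Dembski's "law" can't pass: there's no analogous quantity you can measure before and after.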

In Dembski's system, can you measure the total information in the
system? No. Can you show that the amount of information in the system
is the same before and after a search? Not in any meaningful way,
no. Can you look at a search function, and ask how much information
it encodes from a particular landscape? Not in any meaningful way, no.

To be a little bit concrete: there is no analytic way to
look at a search function and quantify how much "active
information" is embedded in it. It can only be determined
retrospectively: run the search in the landscape, determine how well
it performed, and then quantify its performance. Looked at from that
perspective, it's a (sloppy) re-statement of Kolmogorov-Chaitin information
theory: the information contained in a string (or, to use Dembski's
scenario, a landscape position) is the shortest program that can
generate that string (or a path to that position).
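That connection can be made half-concrete. Kolmogorov-Chaitin complexity is uncomputable, but you can bound it from above, and the standard cheap trick for that is a general-purpose compressor (a sketch only; zlib is a crude stand-in for "shortest program", which it emphatically is not):

```python
import random
import zlib

def k_upper_bound(data: bytes) -> int:
    """A crude upper bound on Kolmogorov-Chaitin complexity: the length
    of a zlib-compressed encoding. The true K is uncomputable; all you
    can ever do in practice is bound it from above like this."""
    return len(zlib.compress(data, 9))

ordered = b"ab" * 500      # a tiny program generates this 1000-byte string
random.seed(0)             # deterministic pseudo-random "noise"
noisy = bytes(random.getrandbits(8) for _ in range(1000))

print(k_upper_bound(ordered))   # small: the regularity is easy to describe
print(k_upper_bound(noisy))     # near 1000: no short description was found
```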

So - Dembski unknowingly rephrased a bit of K-C theory. That's not
so bad, right? To manage to redo a bit of work by two of the best
mathematicians of the 20th century? Well, if that's what he meant to
do, it wouldn't be bad. But it's very bad for Dembski's argument:
K-C theory doesn't support Dembski's argument. In fact, in K-C theory,
you can't quantify information in a precise way. Beyond
some absolutely trivial examples, you can't measure the quantity of
information in a string.

Dembski is arguing that information must be conserved, using a
framework in which you can't measure it. And further,
it's a framework in which the intuitive notion of information
- which is what Dembski is really relying on - has absolutely no
connection with the information that's supposedly hidden in
the search.

Once again, I'll get a bit concrete. A while back, I wrote about an
experiment to design a better antenna. The engineers involved used
an evolution-based approach. The result was completely
unexpected. The particular shape that turned out to be optimal was
not something that any of the engineers involved would have come up
with by themselves. By Dembski's argument in this paper, the
description of that antenna was encoded into the search system as
"active information". But the people involved in the experiment
didn't have the information about what the optimal antenna
would look like. And by looking at their simulation, no one could
extract or identify the information that Dembski insists was smuggled
in. Running the experiment produced the information that this
particular antenna appears to be optimal. In the Kolmogorov-Chaitin
sense, that means that information about the optimality was contained
in the combination of the search and the search landscape - which is
trivially true, because running the program in the landscape produces
that antenna as a result. But that's not what Dembski is
claiming to say. Dembski is saying that the program implicitly
contains the solution.
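The antenna situation in miniature (a toy sketch, not the real antenna experiment): the fitness function below is deliberately opaque; nothing in its text states which bit pattern scores well, and the author of the program doesn't know either. The high-scoring "design" exists only after the search is run.

```python
import random

random.seed(42)   # deterministic, for the sake of the example

def gain(bits):
    """A stand-in for 'simulated antenna gain': score a 32-bit design by
    the popcount of a hash of it. Nowhere does this code state which
    design is best - you only find out by searching."""
    x = int("".join(map(str, bits)), 2)
    return bin((x * 2654435761) & 0xFFFFFFFF).count("1")

def evolve(n_bits=32, generations=200):
    """Mutate-and-select hill climbing: flip one bit, keep the child if
    it scores at least as well as the parent."""
    genome = [random.randint(0, 1) for _ in range(n_bits)]
    for _ in range(generations):
        child = genome[:]
        child[random.randrange(n_bits)] ^= 1
        if gain(child) >= gain(genome):
            genome = child
    return genome

best = evolve()
print(gain(best))   # a high-gain design that nobody wrote down in advance
```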

Why does it implicitly contain the solution? Because Dembski says
so. Seriously - that's what his argument reduces to. He
defines the active information in a system in terms of how
that system performs in a search. Then he shows that the amount of
information that results from doing the search is equal to the amount
of active information in the search algorithm. It's a trick of
definitions, obscured by a lot of pointlessly complex math. In
essence, it reduces to making a blind assertion: information is
conserved; therefore any system that can in any sense produce
information must contain that information. But since by the
algorithmic definition of information, any system that produces
information contains the information it produces, saying that
information is conserved is a simple tautology - exactly the kind of
statement that Dembski mocks in the beginning of the paper!

Moving on to the second point, Dembski pretends that this whole
argument somehow shows that intelligence must be involved in any
process that creates information. As I said in both the previous
paragraph and the previous post, it doesn't work - because he doesn't
actually make that argument. He assumes it as a premise
before he starts, and then he concludes it at the end as if it's a new
result. Look at all of his mathematical arguments - there's nothing in
there that defines "intelligence". There's nothing that concludes
anything about intelligence. There's the whole so-called conservation
law - but it doesn't allow any exceptions. There is nowhere
in that whole line of mathematical argument that says "Intelligence
can create information"; in fact, according to that argument,
intelligent agents cannot create information - nothing can.

That's the nail in the coffin of Dembski's argument. His
whole conservation law effectively argues that you can't create
information. He claims that it says you can't create
information without intelligence - but that exception, that
intelligence can create information - is totally omitted from
the math.

You can apply Dembski's own argument to an intelligent
agent searching for a solution to something. The intelligent
agent, by Dembski's own argument, already contains all of
the information
that it uses in the search. If
you actually follow through on what Dembski's conservation of
information stuff says, what it ultimately means is that
intelligent agents can't produce information. If an
intelligent agent produces information, they must contain that
information as well. So either intelligent agents can't create
information, or information isn't conserved. Either way,
Dembski winds up defeating his own argument.

Dembski's entire argument is self-defeating. Either
information is conserved in the sense that he insists - and then
intelligent agents can't produce information; or intelligent
agents can create information - and then the conservation
of information "law" is refuted.

As I said in the original post, and have now argued multiple times
here, this is all an elaborate, highly obscured circle. Information is
conserved, which means that information can't be created; therefore
something must have created it. Why? Because at the beginning of the
whole argument, he asserted that an intelligent agent can create
information; therefore if you can find any information, since it's
conserved, something intelligent must have created it. He defined
information as a conserved quantity created by an intelligent agent,
and then used all of that impressive-looking math to ultimately
"prove" that information is a conserved quantity created by an
intelligent agent. It's a perfect example of obfuscatory math: all of
that math is just a smoke screen to cover up the fact that he's
embedded his conclusion in the assumptions at the beginning of the
argument - and worse, it's an argument that contains its own
refutation, because by the supposed conservation law, an intelligent
agent can't create information either - so no information can ever
be created.

You can see the foundation of the basic circle of the argument if
you look at the dreadful text of section one of the paper. He starts
with Shannon's definition of information, and then (as I pointed out
before), engages in a bunch of silliness to try to pretend that
various philosophers who talked about philosophical ideas of
information were actually talking about Shannon information, and then
uses them to build up a purely philosophical argument that only
intelligence can create information. Then he uses that as a basis of
his further arguments.

The whole paper is an exercise in circularity. There's nothing
there - which is why this isn't a paper in a mathematical journal;
instead, it's just a chapter in one of Bill Dembski's vanity press publications.

I thought IDists such as Dembski hated theistic evolutionists. How exactly is Dembski's position here different from a T.E.'s ?

By B L Harville (not verified) on 11 May 2009 #permalink

Beautiful Post!

If Dembski had any class at all, he would send you his famous sweater, close down his blog Uncommon Descent, and crawl back into the religious fantasy-world and obscure anti-science that spawned him.

Thanks for taking the time to so effectively answer, correct and fisk the Dr. Dr.

To (bend over backwards trying to) be fair to Dembski, he's following in the well-worn tracks of many, many theologians. In essence, their argument is that the laws of nature (or in Dembski's case, mathematics) show that the current Universe can't exist -- therefore, Nature depends upon something not bound by those laws.


Ergo, Christianity.

The Underpants Gnomes could explain the argument better than I do.

By D. C. Sessions (not verified) on 11 May 2009 #permalink

Mark, I think that time can, in fact, be added as a dimension even if the fitness function changes in response to the execution of the search.

Given a search algorithm (deterministic or not), a fitness function that changes according to queries (deterministically or not), and a limit of n queries, we can model the search space as n-dimensional, each n-vector of which represents a series of n queries. So executing a search amounts to choosing one of the vectors.

The vectors have different probabilities of being chosen, depending on the algorithm and fitness function. But as long as the probabilities associated with the algorithm and the dynamic fitness function are well-defined, the probability associated with each vector is well defined.

The fitness-theoretic CoI theorem would not apply to this search, since there is no fitness function for the space of n-vectors (evaluating an n-vector returns only "found the target" or "did not find the target"). But the other two CoI theorems would apply, since there is a search space, well-defined probabilities over that space, and a target.

That's not to say that there aren't plenty of other problems with the active info approach.

Also, I was wrong in [4] when I said that the fitness-theoretic CoI theorem wouldn't apply. It would still apply even though the fitness function isn't over the n-dimensional space.

Perhaps the best metaphor for non-static landscapes is something like an n-dimensional displacement map. It's a computer graphics technique where you use textures with values along a certain axis to displace geometry. I suppose you could extend the technique to a series of successive transforms along arbitrary axes, though obviously the effects wouldn't be commutative.

"The whole paper is an exercise in circularity."

So was Gödel's Incompleteness Theorems, which led to a deeper study of metalanguages and their sublanguages, I think a dimension of communication heuristics is where this leads. (memes, signs)

reflexivity: To give a simple definition, reflexivity is a rather common process by which:
A situation evolves and adapts to the perception of its observers or players.
This new situation alters in its turn its perception by those players.
It could be said to be a "double feedback" process (see feedback), a double reflex. It is also called circularity, self-reference effect.

By isotelesis (not verified) on 11 May 2009 #permalink

Mark wrote

There is nowhere in that whole line of mathematical argument that says "Intelligence can create information"; in fact, according to that argument, intelligent agents cannot create information - nothing can.


Re #9:

Have you actually read or studied Godel's work? It's "circular" in that it describes how you can embed a logical system within an arithmetic system defined using that logical system. But there's nothing circular about the reasoning.

If you look at Godel's proof, it's a very elegant piece of work, and it's not remotely circular. It takes as its starting point a logical system in which you can define Peano arithmetic, and then shows how you can encode a version of predicate logic in numbers using nothing beyond Peano arithmetic. Then he shows how by doing that, you now have a system in which statements about arithmetic are also statements about
logic; and while they're perfectly reasonable interpreted as arithmetic, they're the kind of meta-circular statement that's problematic in logic.

Specifically, Godel showed that you can, in the embedded logic, produce the statement "This statement cannot be proven". The statement is true: you cannot, within the formal system itself, prove the truth of the statement. You can, by using a meta-level system, prove that the statement is true. (But in the meta-level system, you can do the same trick, and produce a statement in that system which is true but can't be proven.)

From that, Godel concludes that every formal system is either incomplete or inconsistent. If the meta-statement is provable, then the formal system will necessarily include the kind of meta-statements that break down the consistency of the system. If the meta-statement isn't provable, then the system is incomplete, because there's something true
in the system which can't be proven in the system.

That proof describes something circular - the embedding of a logic within a logic using arithmetic. But there's no circularity of reasoning there. Godel doesn't start with "All formal systems are either incomplete or inconsistent" as one of his premises.

By contrast, Dembski does start with "Only intelligence can produce information" as a premise; and then he pretends to go on to prove that. That is circular reasoning.

Circular reasoning like that is utter garbage. For any statement A, if you take A as a premise, you can derive A as a conclusion. The complexity of your proof is up to you - but it can be reduced to "A therefore A". When you strip away the verbiage and the obfuscation, that's all Dembski's paper really says: "Only intelligence can produce information, therefore only intelligence can produce information".

"So was Gödel's Incompleteness Theorems..."

You're confusing circularity with recursion. He proved that axiomatic systems (ones powerful enough to encode the natural numbers, such as Peano arithmetic) always allow for statements to be made that are true but not provable within that system. That's recursion, using the axiomatic systems to reason about axiomatic systems. Circularity would be starting with the conclusion as a premise.

Here is some pertinent reading, courtesy of Wikipedia.

Reading Dembski's "response" actually gave me the blues for a few hours. How can people live with themselves, engaging in such intellectual dishonesty and obfuscation? I feel the despair coming on again.

=The ID Blues=

Where's that information
Coming from you say
Oh where's that information
Coming from you say
There's no way it could be coming
From mixing DNA

I see the hand of a designer
In everything I see
Oh I see the hand of a designer
In everything I see
Cuz everything's so complex
And confusing to me

So I'm wandering round the landscape
Searching desperately
I'm wandering round the landscape
Searching desperately
Just going 'round in circles
That's plain to see

(Feel free to add your own verse.)

Thinking about this and the last post about Dembski's mathematical escapades, I really wonder why the ID folks are so obsessed with information theory. I mean, sure, it's an interesting branch of modern mathematics, but why all the applications to evolutionary biology and the like? It must be because there really aren't any mathematical theories of evolutionary biology.

Oh wait, nevermind.

"By contrast, Dembski does start with "Only intelligence can produce information" as a premise; and then he pretends to go on to prove that. That is circular reasoning."

Indeed he isn't proving or elucidating anything about the way biological systems evolve and self-regulate. I think his approach to the whole subject is missing the good stuff: the ability to engineer organisms capable of influencing their mutation rate, and to select for optimal adaptations through internal mechanisms, therefore making the process not only a reflection of inherited traits, but on some level acquired.

I prefer Gregory Bateson's definition of information as "difference that makes a difference". Since Information can't be created or destroyed, "makes" refers to being converted (analog/digital) in parallel. Since its effects can be measured, Intelligence may as-sign/encode value to information by using cyber-semiotics.

One quality indicating (not proving) the absence of intelligence is randomness, which definitely makes a difference. The presence of intelligence, an emergent phenomenon involving the ability to learn, interact, and understand one's environment, also makes a difference.

According to Stephen Wolfram, there are two ways apparent randomness can occur:

homoplectic systems: external noise, so that if the evolution of the system is unstable, external perturbations amplify exponentially with time.

autoplectic systems: internal mechanisms, so that the randomness is generated purely by the dynamics itself and does not depend on any external sources or require that randomness be present in the initial conditions.

Let's say all the organisms on this planet are incapable of intelligently directing their own evolution (consciously or unconsciously) through internal, self-generated methods.

If we could do that, then perhaps we could also better understand how to interrupt that ability to adapt and evolve quickly through internally generated mechanisms. I'm not sure whether epigenetics or the holographic paradigm would have some insight on that. I think 'self-reference' is a fundamental characteristic of living, evolving systems.

I'm not going to defend the credibility of this guy's work, since it's associated with a lot of crackpot science, but it could be interesting:

"Peter Gariaev, Ph.D., is renowned for his discovery of the "DNA Phantom Effect" and as one of the founders of Wave-based Genetics. The basic concept of the revolutionary approach to morphogenesis proposed and developed by Dr. Gariaev's team of geneticists and linguists combines physical models of holographic associative memory and mathematical formalism having to do with intrinsic wave patterns in DNA. The underlying principles of holographic storage and solitonic wave transfer of morphogenetic information reveal previously unknown "ener-genetic" aspects of biological systems functioning. This new insight into the nature of morphogenesis makes it possible to treat the genome as a holographic bio-computer that generates endogenous solitonic acoustic and electromagnetic (sound and light) waves to carry 4D epigenetic (alternative coding) information used by biosystems for spatial and temporal self-organizing. In other words, this new model of genetic creation establishes the primacy of energetic, as opposed to biochemical, activity in directing cellular metabolism and replication--a notion that, when finally accepted by the mainstream, will radically transform genetic science."

By isotelesis (not verified) on 11 May 2009 #permalink

The whole paper is an exercise in circularity. There's nothing there

yes, but look how long it took you to actually explain that.

sucks, don't it?

this is why I hate these asshats so much. It takes an hour to explain why their 10 second sound bite is so fucking wrong.

I once tried to post on Dembski's website. My comment was censored. The man is just fractally dishonest.


I'm just speaking out of my hat here, but it appears to me that Dembski's circular argument is solved (at least for him) by using two different definitions of intelligence. But he doesn't indicate which definition applies at which stage in his argument.

I think he wants to say that one form of intelligence is a creator deity (outside of the universe), while the other form of intelligence(which he wants to conserve) is within the universe. How he defines intelligence is beyond me, and I suspect beyond him. It appears that he defines intelligence with 'I know it when I see it' argument.

So a creator deity embeds the active information necessary to conduct an evolutionary search within the entire construct of creation, but once the information is embedded by the creator it cannot be increased or reduced (i.e. it is conserved). Of course, he has no evidence for either assertion; that a creator exists or that intelligence is conserved. Or, in fact, that intelligence is needed to develop complex systems.

So it is a very silly argument, and it is not helped by using different definitions for the same term. But it is consistent with previous Dembski arguments where he tries to shoehorn several meanings into a single term and he picks which meaning he wants to use at different points.

Assigning different meanings to the same term in the same argument would earn a failure in any high-school philosophy course. Philosophers may be known for working with fuzzy terms like truth and beauty, but they are also very rigorous in defining what they mean within the context of their argument. How Dembski thinks he can get away with this trick is beyond me.

About time as a dimension.
I just want to pose the following question:
Are the paths output by the search algorithms growing in all their coordinates?
If you add time as a coordinate, then it is a very special one, because it is rather hard to produce a decrement in that coordinate. Maybe time can only be just another coordinate if you know what is going to happen beforehand... anyway, I think the time coordinate will always show some kind of peculiarity (like the definition of the spacetime interval in special relativity).
What do you think?

#12: "From that, Godel concludes that every formal system is either incomplete or inconsistent."

Nitpick: "From that, Godel concludes that every formal system able to encode arithmetic is either incomplete or inconsistent."

We have very nice complete axiomatic systems (Euclidean geometry, real numbers, etc.).

By Alex Besogonov (not verified) on 12 May 2009 #permalink

Re #21:

No, we don't.

The real numbers contain Peano arithmetic. Using them, we can, therefore, generate an embedded logic, in which arithmetic statements also have logical interpretations; and in the logical system which we can embed in the real numbers, you get the Godel statement - a true statement which can't be proven - so an axiomatic system based on the real numbers is inevitably incomplete.

Euclidean geometry can be used as a basis for defining a system of numbers based on length and proportion. You can, in fact, derive the real numbers using geometric methods. And so, once again, if you've got the ability to define basic arithmetic, you're stuck with Godel incompleteness.

I don't understand how Dembski can make any claim about "smuggled" information in the first place. Doesn't a search, even a random search, learn about the landscape as it progresses? A search that is better than a random walk will make use of the information it gains to improve the next guess, such as climbing the steepest gradient.

I'm not sure that it matters, but information would also be required for a search to perform *worse* than a random walk. Therefore I don't even agree that a random walk is a good standard for comparison. It seems reasonable that nature could evolve a search method to outperform a random walk. My knowledge of biology is rudimentary, but based on a recent read of Carl Zimmer's book Microcosm, I'm pretty sure nature easily outperforms a random walk.

@23: You're assuming some properties of the search landscape, for example that it has a gradient. The No Free Lunch theorem is a valid statement about search _across all possible landscapes_, because for every landscape where an algorithm works well and takes you to a point that's the optimum, there'll be another landscape where that same point is the minimum.

Dembski keeps trying to apply NFL to evolution without apparently noticing that _evolution does not occur in all possible fitness landscapes_. It occurs in those fitness landscapes which actually exist! And here, empirically, mutation and selection beat random walk very handily. The "smuggled information" would have to be "this is the kind of landscape where evolutionary algorithms work well"; but that isn't smuggled into evolution at all; it's just that evolution doesn't happen in situations where it doesn't work :)

By Stephen Wells (not verified) on 12 May 2009 #permalink

"it's just that evolution doesn't happen in situations where it doesn't work"

That's what makes it tidy and generally applicable: it doesn't make any unnecessary assumptions about the physical system in question. What I would like to see more of are simulations which consider the role embodied intelligent/reflexive entities could play in the process, thus considering convergent/parallel factors producing equivalent outcomes. Occam's razor would disapprove, but this is for the purposes of studying non-failure-driven evolutionary modes.

"I think he wants to say that one form of intelligence is a creator deity (outside of the universe), while the other form of intelligence(which he wants to conserve) is within the universe."

The only aspect of the idea that interests me is the fact that we are starting to play 'God' as it were, and may become intelligent designers ourselves (like Dr. Manhattan)

By isotelesis (not verified) on 12 May 2009 #permalink

Naturalism = I can see because I have eyes.
Teleology = I have eyes because my species had the need to see.

Are they mutually exclusive?

"Petroski may well have proposed "form follows failure" as an alternative to the ubiquitous "form follows function". The two are worth comparing. "Form follows failure" suggests that engineers are frequently throwing backward glances to the past as they create their designs; "form follows function" alludes to a forward-thinking designer, carefully anticipating future needs. "Form follows failure" stresses the absence of negative features; "form follows functions" believes in the presence of positive attributes. Although "form follows failure" may well account for the current forms we see around us, "form follows function" has stronger, more positive connotations associated with it."

By isotelesis (not verified) on 12 May 2009 #permalink

I think this says it best: "Clearly, complex adaptive systems have a tendency to give rise to other complex adaptive systems."

Seeking the foundations of cognition in bacteria: From Schrödinger's negative entropy to latent information:

"Gell-Mann refers to top-level emergence (i.e., the basic constituents are not altered during the emergence process itself) in adaptive complex systems as sufficient mechanism together with the principles of the Neo-Darwinian paradigm to explain Life saying that: 'In my opinion, a great deal of confusion can be avoided, in many different contexts, by making use of the notion of emergence. Some people may ask, "Doesn't life on Earth somehow involve more than physics and chemistry plus the results of chance events in the history of the planet and the course of biological evolution? Doesn't mind, including consciousness or self-awareness, somehow involve more than neurobiology and the accidents of primate evolution? Doesn't there have to be something more?" But they are not taking sufficiently into account the possibility of emergence. Life can perfectly well emerge from the laws of physics plus accidents, and mind, from neurobiology. It is not necessary to assume additional mechanisms or hidden causes. Once emergence is considered, a huge burden is lifted from the inquiring mind. We don't need something more in order to get something more. Although the "reduction" of one level of organization to a previous one, plus specific circumstances arising from historical accidents, is possible in principle, it is not by itself an adequate strategy for understanding the world. At each level, new laws emerge that should be studied for themselves; new phenomena appear that should be appreciated and valued at their own level.' He further explains that: 'Examples on Earth of the operation of complex adaptive systems include biological evolution, learning and thinking in animals (including people), the functioning of the immune system in mammals and other vertebrates, the operation of the human scientific enterprise, and the behavior of computers that are built or programmed to evolve strategies for example by means of neural nets or genetic algorithms. Clearly, complex adaptive systems have a tendency to give rise to other complex adaptive systems.'"

By isotelesis (not verified) on 12 May 2009 #permalink

A comment aside from information theory: conservation laws exist in physics, and all have a common property: the conserved property cannot *decrease* or increase.

How can information be constrained to not decrease in a closed system? It's trivial to visualise closed systems losing information. (Burn a dictionary, for example.)

It seems information, however defined, is analogous to entropy. Much discussion on this topic! But of course entropy is not a conserved thermodynamic function.

Dembski's 'Law' is nonsense, of course. But it is obviously garbage if he means 'conservation' in the way it is normally used.

Bob Carroll

By Bob Carroll (not verified) on 12 May 2009 #permalink

Mark, regarding decidability there are a few issues. First, regarding decidability of the reals, it depends somewhat on what you mean by the reals. Similar remarks apply to Euclidean geometry. In fact, the first-order theories of both are not strong enough to include Peano arithmetic. One cannot, purely from the first-order axioms, embed Peano arithmetic into the reals; in order to do so, one needs to appeal to second-order concepts. So what one is really doing is smuggling in a whole lot of set theory that isn't actually part of the reals.

First-order reals are in fact complete; this was shown by Tarski in the late 1940s. (This is in contrast with, for example, first-order rational arithmetic, in which we can model the natural numbers and which is thus not complete; that result is due to work by Julia Robinson.)

Finally, both Alex Besogonov's statement and yours regarding what Godel's theorem says aren't quite right, although Alex's is closer to being correct. There are many formal systems which are complete. However, Godel's theorem doesn't say that any system which can embed the natural numbers must be incomplete. The obvious counterexample is to take such an incomplete system and go through all the well-formed statements in some ordering (we can do this; there are only countably many), and at each one that isn't provable, adjoin either it or its negation as an axiom. This can't be done in any algorithmic way, but the system exists.

It is thus more accurate to say that Godel's theorem shows that if we have a consistent axiomatic system, able to model N, with some algorithm to determine whether any given statement is an axiom of the system, then the system is incomplete.
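The completion construction described above has an algorithmic analogue in propositional logic, where consistency is decidable by brute force (unlike the arithmetic case, where no algorithm exists). A minimal sketch; the two-atom language and the starting theory are invented for illustration:

```python
from itertools import product

ATOMS = ("p", "q")

def satisfiable(constraints):
    # Brute force: is there a truth assignment satisfying every constraint?
    return any(all(c(dict(zip(ATOMS, vals))) for c in constraints)
               for vals in product((True, False), repeat=len(ATOMS)))

def complete(theory):
    # Lindenbaum-style completion: for each atom, adjoin either it or its
    # negation, whichever keeps the theory consistent (satisfiable).
    theory = list(theory)
    decided = {}
    for atom in ATOMS:
        affirm = lambda v, a=atom: v[a]
        if satisfiable(theory + [affirm]):
            theory.append(affirm)
            decided[atom] = True
        else:
            theory.append(lambda v, a=atom: not v[a])
            decided[atom] = False
    return decided

# Start from the consistent but incomplete theory { p OR q }:
print(complete([lambda v: v["p"] or v["q"]]))
```

The loop decides every previously undecidable statement, yielding a complete consistent extension; for systems modeling N, the same construction exists but the satisfiability check at each step is not computable, which is exactly the point of the comment above.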

Posted by: Stephen Wells

The "smuggled information" would have to be "this is the kind of landscape where evolutionary algorithms work well"; but that isn't smuggled into evolution at all; it's just that evolution doesn't happen in situations where it doesn't work :)

That's where Dembski gives away the (evolutionary) store. Now he has to go pester the geologists and physicists about how come the physical world is full of gradients, since it's physical gradients that are the first selective environments. After a number of differentiated populations get going in physical selective environments, of course, they form important parts of each others' selective environment.

"It seems information, however defined, is analogous to entropy. Much discussion on this topic! But of course entropy is not a conserved thermodynamic function."

You mean the discussion on the maximum information capacity of a holographic universe?

"The analogy with the tendency of entropy to increase led me to propose in 1972 that a black hole has entropy proportional to the area of its horizon [see illustration on preceding page]. I conjectured that when matter falls into a black hole, the increase in black hole entropy always compensates or overcompensates for the "lost" entropy of the matter. More generally, the sum of black hole entropies and the ordinary entropy outside the black holes cannot decrease. This is the generalized second law--GSL for short.

The GSL allows us to set bounds on the information capacity of any isolated physical system, limits that refer to the information at all levels of structure down to level X. In 1980 I began studying the first such bound, called the universal entropy bound, which limits how much entropy can be carried by a specified mass of a specified size [see box on opposite page]. A related idea, the holographic bound, was devised in 1995 by Leonard Susskind of Stanford University. It limits how much entropy can be contained in matter and energy occupying a specified volume of space."

By isotelesis (not verified) on 12 May 2009 #permalink
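The entropy-area proportionality in the passage quoted above can be made concrete with a back-of-the-envelope calculation. A sketch using the standard Bekenstein-Hawking formula S = k_B c^3 A / (4 G hbar) for a solar-mass black hole; the constants are rounded approximations:

```python
import math

# Physical constants (SI units, rounded).
G = 6.674e-11      # gravitational constant
c = 2.998e8        # speed of light
hbar = 1.055e-34   # reduced Planck constant
k_B = 1.381e-23    # Boltzmann constant
M_sun = 1.989e30   # solar mass in kg

# Schwarzschild radius and horizon area for a solar-mass black hole.
r_s = 2 * G * M_sun / c**2
A = 4 * math.pi * r_s**2

# Bekenstein-Hawking entropy: proportional to horizon area.
S = k_B * c**3 * A / (4 * G * hbar)
print(f"r_s = {r_s:.0f} m, A = {A:.3e} m^2, S = {S:.2e} J/K")
```

The result, on the order of 10^54 J/K, vastly exceeds the ordinary thermodynamic entropy of a star of the same mass, which is why the generalized second law quoted above is not threatened by matter falling in.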

Posted by: B L Harville | May 11, 2009 3:22 PM

"I thought IDists such as Dembski hated theistic evolutionists. How exactly is Dembski's position here different from a T.E.'s ?"

William Dembski, in an interview with Focus on the Family:
4. Does your research conclude that God is the Intelligent Designer?
"I believe God created the world for a purpose. The Designer of intelligent design is, ultimately, the Christian God."

Regarding the issue of Dembski and theistic evolution: the primary difference is that TE proponents acknowledge that the evolution part is science and the theistic part is religion. Dembski wants his theism to be considered science. Oh, and depending on what day of the week it is, he also seems to reject common descent.

It appears to me that Dembski's circular argument is solved (at least for him) by using two different definitions of intelligence. But he doesn't indicate which definition applies at which stage in his argument.

In which case he's committing the equivocation fallacy.

"The Designer of intelligent design is, ultimately, the Christian God."

He is officially crazy in my book.

By isotelesis (not verified) on 12 May 2009 #permalink

Dembski apparently hasn't recovered from his WEASEL trauma yet...

By infidel_michael (not verified) on 12 May 2009 #permalink

Re 12:

I see Joshua Zelinsky has already beaten me to it, but just to clarify a little:

In the first order theory of real arithmetic, there is no predicate phi(x) such that phi(x) is true iff x is a natural number. To express number theoretic statements, we would need some way of quantifying over the naturals, which we cannot do for the above reason.

So the obvious way in which one might try to embed the theory of the naturals within that of the reals doesn't work.

This doesn't by itself prove that arithmetic can't be embedded somehow - a priori there might be some other encoding (e.g. represent 0 as 1 and successor as doubling or whatever) but in fact there isn't.

(I don't know how to prove this, but just off the top of my head...) I suspect one can show that a set of reals is definable (in first-order real arithmetic) iff it is a finite union of points and (possibly unbounded) intervals. In particular, no countably infinite subsets of R are definable.

@37: There have to be more restrictions on definable sets of reals; your definition would give uncountably many definable sets. (Probably just a statement that each point and each interval endpoint must be individually definable.)

By Carl Witty (not verified) on 13 May 2009 #permalink

Neil, I'm not aware of any nice proof involving what sets are definable in first order arithmetic of the reals. The easiest proof that one cannot embed Peano arithmetic into first order R is to just use Tarski's result with Godel's theorem. An embedding would lead to a contradiction.


You're quite right, but in case you're interested, it looks like there is an easy way to show which sets of reals are definable.

The key lemma in Tarski's proof that the first order theory of reals is complete is establishing quantifier elimination. But this immediately characterises definable sets as those definable by quantifier-free formulas.

Any such formula is just a boolean combination of equations p(x) = 0 and inequalities q(x) > 0 (for polynomials p and q). Hence, definable sets are indeed finite unions of points and intervals.

(Note: This requires "<" to be part of the language.)

By Anonymous (not verified) on 13 May 2009 #permalink
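The characterisation in the comment above (quantifier-free definable sets of reals are finite unions of points and intervals) can be illustrated concretely: between consecutive real roots a polynomial has constant sign, so the solution set of an inequality q(x) > 0 falls out as a union of intervals. A sketch; the choice of q(x) = x^2 - 2 is arbitrary:

```python
import math

def positivity_intervals(poly, roots):
    # Between consecutive roots a polynomial has constant sign, so one
    # sample point per region tells us where poly(x) > 0 holds.
    bounds = [-math.inf] + sorted(roots) + [math.inf]
    intervals = []
    for lo, hi in zip(bounds, bounds[1:]):
        if math.isfinite(lo) and math.isfinite(hi):
            sample = (lo + hi) / 2
        elif math.isfinite(hi):
            sample = hi - 1.0
        else:
            sample = lo + 1.0
        if poly(sample) > 0:
            intervals.append((lo, hi))
    return intervals

# The definable set {x : x^2 - 2 > 0} comes out as two open intervals:
result = positivity_intervals(lambda x: x * x - 2,
                              [-math.sqrt(2), math.sqrt(2)])
print(result)
```

Boolean combinations of such inequalities and equations only union and intersect these pieces, which is why no countably infinite set (like the naturals) can ever be carved out this way.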

I tried to write < there, but it got interpreted as a tag. Oops!

Was going to say 'this requires < to be part of the language'.


I'm obviously too stupid to be using the internet... but anyway, the thing I was trying to say at the end wasn't important :-/


I don't understand how Dembski can make any claim about "smuggled" information in the first place.

I think it goes back to Mark's point about Dembski using circular logic. Abiogenesis did not occur in a vacuum; it occurred on a planet with physical and chemical properties, i.e. a pre-existing environment. Now, most people using the 'information search' analogy would instantly recognize this pre-existing environment as the solution: evolution occurs in one specific landscape, not all possible landscapes, and that one specific landscape supplies the missing piece. It's responsible for the smuggling.

But Dembski can't accept this because he's locked into thinking that only intelligence can create information. So instead of arriving at the right conclusion, he has to conclude with his own premise: the information encoded in the pre-existing landscape must've been put there by an intelligent someone, because only intelligence can produce information.

"the information encoded in the pre-existing landscape must've been put there by an intelligent someone, because only intelligence can produce information."

If he's saying it must have been put there by some cognitive entity, he is basically invoking God, which is a line one should not cross in vain; he should know his Ten Commandments by now.

By isotelesis (not verified) on 13 May 2009 #permalink

isotelesis: The usenet post which you cited contains its own refutation. As such, I'm not sure why you bothered to cite it.

By Michael Ralston (not verified) on 13 May 2009 #permalink

Because it regards information conservation laws.

"The famous bet between physicists Stephen Hawking and John Preskill that Hawking conceded he lost in July 2004 concerned whether physical information is conserved in black holes."

It doesn't support what Dembski's trying to pull off, so I see why you call it irrelevant; perhaps I added it to spark some debate.

By isotelesis (not verified) on 13 May 2009 #permalink

Preskill bet that information was conserved.

By isotelesis (not verified) on 13 May 2009 #permalink

Information exists with or without sentience; the act of information transduction is 'perception', which is facilitated by intelligent systems. The only thing which can be "created" is order and disorder, both of which are important to evolution and complex adaptation.

By isotelesis (not verified) on 13 May 2009 #permalink

Another cool way to model evolution is with swarm-like, global-local feedback-signaling methods; there can be no bacterial 'cognition' without a quasi-intelligent genome metalanguage.

I'd look down, not up, for signs of intelligence.

By isotelesis (not verified) on 13 May 2009 #permalink

"Information exists with or without sentience; the act of information transduction is 'perception', which is facilitated by intelligent systems. The only thing which can be "created" is order and disorder, both of which are important to evolution and complex adaptation."

You're playing a game of semantics with yourself. Information can't exist without someone (namely humans) being able to not only come up with the word 'information', but to make it useful.

Beware of anthropomorphism.

'Information can't exist without someone (namely humans) being able to not only come up with the word 'information', but to make it useful.'

Yes, it's merely an explanatory device, a useful abstraction which corresponds to a physical notion. We can assume that before there was life, there was information, and black holes would still have conserved it even if there were no local perceptors. The possibility of a global self-perceptual mechanism with non-living sentient attributes can't be ruled out or proven. In which case, the logos, the primary teleological operator, the ultimate hyperincursion, the Gott (or some aspect of it) may have been able to actuate information dynamically.

Gene-network regulation mechanisms and dynamics may play an important role in evolution.

Bacteria are small, but not stupid:

By isotelesis (not verified) on 13 May 2009 #permalink

Darek, you are incorrect. In the description of a physical system, we often use the concept of 'information propagation' to try to understand what problems we need to solve at what times, without the need for an observer other than the (purely physical) system itself. Say we'd like to understand what happens in the atmosphere above the beginning of a rock concert with loud amplifiers on the stage. Before the show, we'll have the air being still and undisturbed, and during the show, there will be rock music propagating through it.

When does the changeover happen? We think of the fact that 'the concert has begun' as being information that affects the state of the air - and in this case, the relevant speed of propagation is the speed of sound. If you're further away, it takes longer for you to notice the concert has begun. If it were not a rock concert but a bomb detonating, the relevant speed would be the speed of the shockwave travelling through the air (which could be supersonic).

This particular concept of information propagation is extremely important and well grounded, but does not require a sentient observer.

That paper, isotelesis, is actually pretty interesting. Thanks.

However, the issue is still mainly, in my opinion, a philosophical one. One is holding in advance our current understanding to attribute what we know now to things that would exist regardless of us.

CJ, forgive me if I just don't get it, but I don't see how your analogy escapes the issue I speak of...

It is we (humans) who are making sense of the world. The world does not inherently make sense.

Completeness & consistency of axioms for the reals: to shorten previous comments, the system does not allow picking out the natural numbers.

By Pete Dunkelberg (not verified) on 14 May 2009 #permalink

Well, never mind consistency, and should completeness be changed to decidability?

By Pete Dunkelberg (not verified) on 14 May 2009 #permalink


Technically I didn't give an analogy - I intended to give an honest application of one form of the idea of 'information' - the form I'm familiar with. Let's make up some numbers to give my example a more concrete feel:

Say the speed of sound is 300 metres per second, and there is a loud rock concert which will start at 2pm sharp in Hyde Park. 3km (3000 metres) away from the stage sits a little black cat that is dozing in the sun, with its eyes closed.

At 2pm, the concert starts. But the information that the rock concert has started won't reach the dozing cat for 10 seconds, because of the limit of the speed of sound. At 10 seconds past 2pm the cat will wake up and run away, but not before - the information hasn't propagated to it yet. The cat does not need someone to sit it down and explain that there is a rock concert going on - it just wants to run away. No humans necessary.

Now, I agree that I am giving this example using human terminology and putting it in a context which I expect you will be familiar with, but this is because I am a human talking to another human. The cat will still be unaware for those 10 seconds even if no humans existed in the world (although that raises a question: who is holding the rock concert?).

I expect that this idea of 'information' is relatively crude when compared to the pure mathematics one can presumably do, but it isn't a human idea, any more than gravity is. If you want to say 'anything requires an observer to exist' then you're moving into quite a philosophical area, rather than a scientific one.
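The arithmetic behind the cat example is just distance divided by propagation speed; a one-liner makes the 10-second figure explicit (the 3000 m and 300 m/s are the made-up numbers from the comment above):

```python
def propagation_delay(distance_m, speed_m_per_s):
    # Time for a disturbance to propagate a given distance.
    return distance_m / speed_m_per_s

# The dozing cat: 3000 m from the stage, sound taken as 300 m/s.
print(propagation_delay(3000, 300))  # -> 10.0 seconds
```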

Re #57 and previous:

D'oh. The fact that I'm not a mathematician bites me on the ass again. I'm a computer scientist. So my background is heavily biased towards that side of the universe, and the further I get from it, the more likely I am to mess up.

I didn't remember that the axiomatization of the reals doesn't have any way of selecting the rationals. I keep thinking of reals as axiomatized in the progressive method of set theory: define the naturals; use the naturals to define the integers; use the integers to define the rationals; use the rationals to define the reals. But that formulation includes set theory. (d'oh again).

Thanks for the correction. I appreciate it, and I'll try not to make the same mistake again.


There is some confusion I need to clear up in my defense as I actually see quite clearly now (and we are essentially in agreement) what it is you are saying.

I am not saying that an observer is necessary for things to exist. When I said that 'information can't exist...' I was referring to the fact that information is a construct of human knowledge. Anything can be considered a piece of info - I understand that, but to make use of it in the example you gave requires concepts of human knowledge. That's all I was trying to point out.

You say what we call gravity would exist without humans being here - yes, of course. But would there be such a thing as the concept of gravity without humans? I think the answer is obviously no, because gravity, as a concept, only exists as we know it. That's what I'm getting at.

Perhaps my thinking is flawed or I am going about this the wrong way... if so - your thoughts?

I expect that this idea of 'information' is relatively crude when compared to the pure mathematics one can presumably do, but it isn't a human idea, any more than gravity is. If you want to say 'anything requires an observer to exist' then you're moving into quite a philosophical area, rather than a scientific one.

Why is it assumed that the two domains are mutually exclusive? Especially when science has gone a long way towards solving a lot of "philosophical" problems, and when philosophy provides (or at least tries) an epistemological foundation for science.

I'm of the opinion that science and mathematics (as a mode of discourse among homo sapiens) cannot exist without language, so I tend to side with darek on this one. I'm not saying the referents of scientific terminology wouldn't exist without human beings, but the terminology itself and the theoretical and intellectual frameworks underpinning science certainly wouldn't.

"One is holding in advance our current understanding to attribute what we know now to things that would exist regardless of us."

Our descriptions would not exist without us, but what they refer to has an independent protoexistence; the holoverse may require some level of self-referential interdependence to configure itself to maintain the logical consistency of its sub-parameters. I know this is basically all futile filawzafy, and has nothing to do with science. My only argument is that something more than a lesser observer determines endomorphic cohesion; I'd call it simply the 'equivariant' orientator, and whatever organism can possibly emerge from the 'orientifold' (poetic license) has the ability to refer, to transduce information from the manifold.

Beyond here there be dragons.

By isotelesis (not verified) on 14 May 2009 #permalink

Here is the challenge:

"With the help of Process Physics, it can be realised that much of "modern science" is probably just a religion based on extreme physical materialism."

By isotelesis (not verified) on 14 May 2009 #permalink

My reply to Nietzsche:

Gott ist Geräusch

Process Physics: Modelling Reality as Self-Organising Information

"Process physics is a new and radical approach to the modeling of fundamental physics drawing on information theory. It aims to be a theory of everything by abandoning the space-time construct of Galileo, Newton and Einstein, and by arguing that time can only be modeled as a process. The abandonment of time as a geometrical construct is used to solve the problems with conventional physics such as the incompatibility between the theories of general relativity and quantum mechanics. The model exhibits both gravitational and non-local quantum mechanical behaviour, uniting them in one theory. Space, matter, gravity and time seem to emerge from the model without the pre-existing notion of object or laws built in the model."

By isotelesis (not verified) on 14 May 2009 #permalink

isotelesis, why not get your own blog instead of using Mark's? That way we won't have to scroll past your repetitions.

You're right; sorry about my poor manners. This is a classy blog with a highly educated readership. I'll get back to mathematical linguistics after finishing school.

By isotelesis (not verified) on 14 May 2009 #permalink

Dan L.: Because these philosophical ideas we bandy about aren't testable. The Many Worlds View isn't testable, for example, so I would call it philosophical.

Dan L.: Because these philosophical ideas we bandy about aren't testable. The Many Worlds View isn't testable, for example, so I would call it philosophical.

Is that necessarily the case? If the MW idea is not correct, then isn't some additional mechanism still required to explain the "collapse" of the quantum wave function? If so, that mechanism might be testable. In quantum mechanics, it seems unwise to be too hasty to dismiss ideas as untestable, considering how many things in quantum mechanics, once thought to be untestable, have since yielded to experimental investigation.

@71: Is there any way to distinguish "the wavefunction collapsed to state |X> because of mechanism Z" and "in this world the state collapsed to state |X> because of mechanism Z and in another world it collapsed to |Y>"? I don't think there is, regardless of what Z is; thus testing for a proposed mechanism of collapse could never eliminate many-worlds. Thus MW could be rendered redundant but couldn't be disproved; I think it's empty.

By Stephen Wells (not verified) on 15 May 2009 #permalink

Dan L.: Because these philosophical ideas we bandy about aren't testable. The Many Worlds View isn't testable, for example, so I would call it philosophical.

You're implicitly asserting that all scientific work is testable. While that may be almost true in modern times, a great deal of scientific work in the past has consisted of observation and classification, of pure theory and the development of terminology -- none of which is testable.

Likewise, mathematics is in no way testable. It could even be considered an extension of philosophy as intertwined as it is with logic.

I'm certainly not saying all philosophy is science and all science is philosophy (false prima facie), but I think it would be difficult to argue that there's no overlap between the two. In other words, I don't think they are the same, but nor do I think they're mutually exclusive.

Is there any way to distinguish "the wavefunction collapsed to state |X> because of mechanism Z" and "In this world the state collapsed to state |X> because of mechanism Z and in another world it collapsed to |Y>" ?

My understanding is that in the many-worlds theory, there is no collapse, so no mechanism is required.

Sorry, yes - I was taking 'pertaining to a hypothesis that can be falsified' as my definition of 'scientific'. I might make the further argument that Mathematics has come to be regarded as separate from the other sciences for this very reason.

Agree that classification of specimens is both important and scientific.

@74: You may have noticed that in our everyday experience we do see wavefunction collapse; systems cease to be in a superposition and enter an eigenstate. Whether there is or is not some many-worlds superposition of universes, many-worlds doesn't help to explain why superpositions collapse in any one universe, or equivalently in many-worlds terms, why we experience a universe with a system in state |X> rather than the superposition of universes with systems in states |X> and |Y>.

By Stephen Wells (not verified) on 22 May 2009 #permalink

While he doesn't make any mathematical arguments, it's refreshing to hear non-religious types being skeptical of the present consensus on the means and modes of evolution.

"If you are new to the Dilbert Blog, I remind you that I don't believe in Intelligent Design or Creationism or invisible friends of any sort. I just think that evolution looks like a blend of science and bullshit, and have predicted for years that it would be revised in scientific terms in my lifetime. It's a hunch - nothing more."

"It's a hunch - nothing more." That's even MORE absurdly wrong than to brand Evolution by Natural Selection as "Merely a Theory."

THEORY in Science does NOT mean "hunch." Nor, in the other direction, does it mean "FACT."

Evolution, according to Darwin, is BOTH a FACT and a THEORY. He took pains to explain his evidence and arguments for both of these quite different claims.

Lunatics muddy the otherwise transparent waters.

I agree that life evolves without the foresight of external intelligence, but not that life evolves without intermediary forms of intelligence, such as teamwork.

"Mr Adams has also blindly accepted the claims of the Designists. If it's about "how", then how do the Intelligent Design creationists suggest it was done? Well? Can you list some of their specific hypotheses?
Of course not. They don't have any idea "how". Intelligent Design creationism is exclusively about poking holes in "Darwinism" - just as Adams acknowledges is the creationist goal."

While Mr. Adams seems to like the 'retro-causality' theory, or "us" from the future, the idea that our universe is embedded with intelligence should not be given the straw man treatment and falsely equated with creationism.

"Symbiosis as a Source of Evolutionary Innovation:

A departure from mainstream biology, the idea of symbiosisâas in the genetic and metabolic interactions of the bacterial communities that became the earliest eukaryotes and eventually evolved into plants and animalsâhas attracted the attention of a growing number of scientists.

These original contributions by symbiosis biologists and evolutionary theorists address the adequacy of the prevailing neo-Darwinian concept of evolution in the light of growing evidence that hereditary symbiosis, supplemented by the gradual accumulation of heritable mutation, results in the origin of new species and morphological novelty. They include reports of current research on the evolutionary consequences of symbiosis, the protracted physical association between organisms of different species. Among the issues considered are individuality and evolution, microbial symbioses, animal bacterial symbioses, and the importance of symbiosis in cell evolution, ecology, and morphogenesis.

Lynn Margulis, Distinguished Professor of Botany at the University of Massachusetts at Amherst, is the modern originator of the symbiotic theory of cell evolution. Once considered heresy, her ideas are now part of the microbiological revolution."

By isotelesis (not verified) on 14 Jul 2009

An alternative to intelligent design theory?

"The Gaia hypothesis holds that Earth's physical and biological processes are linked to form a complex, self-regulating system and that life has affected this system over time. Until a few decades ago, most of the earth sciences viewed the planet through disciplinary lenses: biology, chemistry, geology, atmospheric and ocean studies. The Gaia hypothesis, on the other hand, takes a very broad interdisciplinary approach. Its most controversial aspect suggests that life actively participates in shaping the physical and chemical environment on which it depends in a way that optimizes the conditions for life. Despite intial dismissal of the Gaian approach as New Age philosophy, it has today been incorporated into mainstream interdisciplinary scientific theory, as seen in its strong influence on the field of Earth System Science."

By isotelesis (not verified) on 14 Jul 2009

One more intelligent person with an interesting, alternative perspective on the evolution debate:

"I am skeptical of claims for the ability of random mutation and natural selection to account for the complexity of life. Careful examination of the evidence for any such theory should be encouraged. What I do believe is that the evident variations between individuals of any species, however caused, in combination with natural selection, are primary drivers of speciation, in accord with Darwin. Accounting for the complexity of life is a tremendously more challenging task that random mutation and natural selection cannot possibly accomplish on their own. Darwin never claimed his theory did so, and the vast amount we have learned since Darwin about molecular biology, cell biology, and ecology shows that it would have been wildly presumptuous for him to have done so.

The above is my position statement. The first two sentences reproduce almost verbatim the wording of A Scientific Dissent from Darwinism signed by some 700 scientists. The only replacements are the obvious "I am" for "We are," and more importantly "any such theory" for "Darwinian theory" since Darwin did not claim to account for the complexity of life but for the phenomenon of speciation, whereby the characteristics of a population can change over time: the longer the time the greater the number and extent of possible changes. Speciation can contribute to the complexity of life, as Darwin pointed out with admirable clarity, but unless it is the only such contributor it cannot be said to account for it, and nowhere did Darwin claim to have done so."