In comments to [my recent post about Gilder's article][gilder], a couple of readers asked me to take a look at a [DI promoted][dipromote] paper by
Albert Voie, called [Biological function and the genetic code are interdependent][voie]. This paper was actually peer reviewed and accepted by a journal called “Chaos, Solitons, and Fractals”. I’m not familiar with the journal, but it is published by Elsevier, a respectable publisher.
Overall, it’s a rather dreadful paper. It’s one of those wretched attempts to take Gödel’s theorem and try to apply it to something other than formal axiomatic systems.
Let’s take a look at the abstract: it’s pretty representative of the style of the paper.
>Life never ceases to astonish scientists as its secrets are more and more
>revealed. In particular the origin of life remains a mystery. One wonders how
>the scientific community could unravel a one-time past-tense event with such
>low probability. This paper shows that there are logical reasons for this
>problem. Life expresses both function and sign systems. This parallels the
>logically necessary symbolic self-referring structure in self-reproducing
>systems. Due to the abstract realm of function and sign systems, life is not a
>subsystem of natural laws. This suggests that our reason is limited in respect
>to solve the problem of the origin of life and that we are left taking life as
>an axiom.
We get a good idea of what we’re in for with that third sentence: there’s no particular reason to throw in an assertion about the probability of life, but he’s signaling his intended audience by tossing in that old canard without any support.
The babble about “function” and “sign” systems is the real focus of the paper. He creates this distinction between a “function” system (which is a mechanism that performs some function), and a “sign” system (which is information describing a system), and then tries to use a Gödel-based argument to claim that life is a self-referencing system that produces the classic problematical statements of incompleteness.
**Gödel formulas are subsystems of the mind**
So. Let’s dive in and hit the meat of the paper. Section one is titled “Gödel formulas are subsystems of the mind”. The basic argument of the section is that the paradoxical statements that Gödel showed to be unavoidable are strictly products of intelligence.
He starts off by providing a summary of the incompleteness theorem. He uses a quote from Wikipedia. The interesting thing is that he *misquotes* wikipedia; my guess is that it’s deliberate.
>In any consistent formalization of mathematics that is sufficiently strong to
>axiomatize the natural numbers — that is, sufficiently strong to define the
>operations that collectively define the natural numbers — one can construct a
>true (!) statement that can be neither proved nor disproved within that system
In the [wikipedia article][wiki-incompleteness] that this comes from, where he places the “!”, there’s actually a footnote explaining that “true” is used in the disquotational sense, meaning (to quote the wikipedia article on disquotationalism): “that ‘truth’ is a mere word that is conventional to use in certain contexts of discourse but not a word that points to anything in reality”. (As an interesting sidenote, his bibliographic citation for that quote says only that it comes from wikipedia; he *doesn’t* identify the article it came from. I had to go searching for those words.) Two paragraphs later, he includes another quotation of a summary of Gödel, which ends midsentence with an ellipsis. I don’t have a copy of the quoted text, but let’s just say that I have my doubts about the honesty of the quotation.
The reason I believe the removal of the footnote is deliberate is that he immediately starts to build on the “truth” of the self-referential statement. For example, the very first statement after the misquote:
>Gödel’s statement says: “I am unprovable in this formal system.” This turns out
>to be a difficult statement for a formal system to deal with since whether the
>statement is true or not the formal system will end up contradicting itself.
>However, we then know something that the formal system doesn’t: that the
>statement is really true.
The catch, of course, is that the statement is *not* really true. Incompleteness statements are neither true *nor* false. They are paradoxical.
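For reference, the sentence at issue is the one produced by what’s now called the diagonal lemma: for a suitable theory F, one constructs a sentence G such that

```latex
F \vdash\; G \leftrightarrow \neg\mathrm{Prov}_F(\ulcorner G \urcorner)
```

The first incompleteness theorem itself only talks about provability: if F is consistent, then G is unprovable in F; and if F is ω-consistent, then ¬G is unprovable too. Whether you then go on to call G “true” depends entirely on which notion of truth you attach to it, which is exactly what the dropped footnote was flagging.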
And now we start to get to his real point:
>What might confuse the readers are the words *”there are true mathematical
>statements”*. It sounds like they have some sort of pre-existence in a Platonic
>realm. A more down to earth formulation is that it is always possible to
>**construct** or **design** such statements.
See, he’s trying to use the fact that we can devise the Gödel type circular statements as an “out” to demand design. He wants to argue that *any* self-referential statement is in the family of things that fall under the rubric of incompleteness; and that incompleteness means that no mechanical system can *produce* a self-referential statement. So the only way to create these self-referencing statements is by the intervention of an intelligent mind. And finally, he asserts that a self-replicating *device* is the same as a self-referencing *statement*; and therefore a self-replicating device is impossible except as a product of an intelligent mind.
There are lots of problems with that notion. The two key ones:
1. There are plenty of self-referential statements that *don’t* trigger
incompleteness. For example, in set theory, I *can* talk about “the set of
all sets that contain themselves”. There are two sets that meet that
description: one that contains itself, and one that doesn’t.
There’s no paradox there; there’s no incompleteness issue.
2. Unintelligent mechanical systems can produce self-referential statements
that do fall under incompleteness. It’s actually not difficult: it’s
a *mechanical* process to generate canonical incompleteness statements.
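Point 2 can be demonstrated concretely. The same mechanical diagonalization trick Gödel used (write a template, then feed the template its own text) is how you build a quine, a program that prints its own source. A minimal Python sketch (the variable name is mine):

```python
# A quine: a program whose output is exactly its own source code.
# The construction is purely mechanical: embed a template string,
# then substitute the template into itself.
src = 'src = %r\nprint(src %% src)'
print(src % src)
```

Run the two non-comment lines and they print themselves verbatim; no intelligence intervenes anywhere in the substitution.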
**Computer programs and machines are subsystems of the mind**
So now we’re on to section two. Voie wants to get to the point of being able to
“prove” that life is a kind of machine that has an incompleteness property.
He starts by saying a formal system is “abstract and non-physical”, and as such “it is really easy to see that they are subsystems of the human mind”, and that they “belong to another category of phenomena than subsystems of the laws of nature”.
On one level, it’s true; a formal system is an abstract set of rules, with no physical form. But it does *not* follow that formal systems are “subsystems of the human mind”. In fact, I’d argue that the statement “X is a subsystem of the human mind” is totally meaningless. Given that we don’t understand quite what the mind is or how it works, what does it mean to say that something is a “subsystem” of it?
There’s a clear undercurrent of mind/body dualism here; but he doesn’t bother to argue the point. He simply asserts the distinction as an implicit part of his argument.
From this point, he starts to try to define “function” in an abstract sense. He quotes wikipedia again (he doesn’t have much of a taste for citations in the primary literature!), leading to the statement (his statement, not a wikipedia quotation):
>The non-physical part of a machine fit into the same category of phenomena as
>formal systems. This is also reflected by the fact that an algorithm and an
>analogue computer share the same function.
Quoting wikipedia again, he moves on to: “A machine, for example, cannot be explained in terms of physics and chemistry.” Yeah, that old thing again. I’m sure the folks at Intel will be absolutely *shocked* to discover that they can’t explain a computer in terms of physics and chemistry. This is just degenerating into silliness.
>As the logician can manipulate a formal system to create true statements that
>are not formally derivable from the system, the engineer can manipulate
>inanimate matter to create the structure of the machine, which harnesses the
>laws of physics and chemistry for the purposes the machine is designed to
>serve. The cause to a machine’s functionality is found in the mind of the
>engineer and nowhere else.
Again: dualism. According to Voie, the “purpose” or “function” of the machine is described as a formal system; the machine itself is a physical system; and those are *two distinctly different things*: one exists only in the mind of the creator; one exists in the physical world.
**The interdependency of biological function and sign systems**
And now, section three.
He insists on the existence of a “sign system”. A sign system, as near as I can figure it out (he never defines it clearly) is a language for describing and/or building function systems. He asserts:
>Only an abstract sign based language can store the abstract information
>necessary to build functional biomolecules.
This is just a naked assertion, completely unsupported. Why does a biomolecule *require* an abstract sign-based language? Because he says so. That’s all.
Now, here’s where the train *really* goes off the tracks:
>An important implication of Gödel’s incompleteness theorem is that it is not
>possible to have a finite description with itself as the proper part. In other
>words, it is not possible to read yourself or process yourself as process. We
>will investigate how this parallels the necessary coexistence of biological
>function and biological information.
This is the real key point of this section; and it is total nonsense. Gödel’s theorem says no such thing. In fact, what it does is demonstrate exactly *how* you can represent a formal system with itself as a part. There’s no problem there at all.
What’s a universal Turing machine? It’s a Turing machine that takes a description of a Turing machine as an input. And there *is* a universal Turing machine implementation of a universal Turing machine: a formal system which has itself as a part.
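To make that concrete, here’s a toy version in Python. The names (`run`, `main`, and the program-as-text convention) are my own sketch, not anything from the paper: a little “universal machine” that executes any program handed to it as source text, including its own description.

```python
# A toy universal machine: run(program_text, argument) constructs the
# machine described by program_text, then applies its main() to argument.
RUN_SRC = '''
def run(program_text, argument):
    namespace = {}
    exec(program_text, namespace)       # build the described machine
    return namespace["main"](argument)  # apply it to the input

# Viewed *as* a program, the universal machine simply runs its input:
main = lambda arg: run(arg[0], arg[1])
'''

scope = {}
exec(RUN_SRC, scope)
run = scope["run"]

# An ordinary program, given as a description (source text):
DOUBLER = "def main(x):\n    return 2 * x"

print(run(DOUBLER, 21))             # -> 42
print(run(RUN_SRC, (DOUBLER, 21)))  # -> 42: the machine runs its *own*
                                    #    description, which runs the doubler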
**Life is not a subsystem of the laws of nature**
It gets worse.
Now he’s going to try to put things together: he’s claimed that a formal system can’t include itself; he’s argued that biomolecules are the result of a formal sign system; so now he’s going to combine those claims to say that life is a self-referential thing requiring the kind of self-reference that can only be the product of an intelligent mind:
>Life is fundamentally dependent upon symbolic representation in order to
>realize biological function. A system based on autocatalysis, like the
>hypothesized RNA-world, can’t really express biological function since it is a
>pure dynamical process. Life is autonomous with something we could call
>”closure of operations” or a cluster of functional parts relating to a whole
>(see  for a wider discussion of these terms). Functional parts are only
>meaningful under a whole, in other words it is the whole that gives meaning to
>its parts. Further, in order to define a sign (which can be a symbol, an index,
>or an icon) a whole cluster of self-referring concepts seems to be presupposed,
>that is, the definition cannot be given on a priori grounds, without implicitly
>referring to this cluster of conceptual agents . This recursive dependency
>really seals off the system from a deterministic bottom up causation. The top
>down causation constitutes an irreducible structure.
Got it? Life is dependent on symbolic representation. But biochemical processes can’t possibly express biological function, because biological function is dependent on symbolic representations, which are outside of the domain of physical processes. He asserts the symbolic nature of biochemicals; then he asserts that symbolic stuff is a distinct domain separate from the physical; and therefore physical stuff can’t represent it. Poof! An irreducible structure!
And now, the crowning stupidity, at least when it comes to the math:
>In algorithmic information theory there is another concept of irreducible
>structures. If some phenomena X (such as life) follows from laws there should
>be a compression algorithm H(X) with much less information content in bits than
>X.
Nonsense, bullshit, pure gibberish. There is absolutely no such statement anywhere in information theory. He tries to build further argument on top of this statement; but of course, it makes no more sense than the statement it’s built on.
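To the extent that Voie is gesturing at Kolmogorov complexity, it doesn’t do what he needs anyway. A phenomenon generated by simple deterministic “laws” does have a short description (the law itself), but no algorithm can discover or verify that in general: Kolmogorov complexity isn’t even computable, and practical compressors see nothing. A quick sketch (the generator and its constants are my own arbitrary choices: a glibc-style linear congruential recurrence):

```python
import zlib

# A tiny deterministic "law": a linear congruential recurrence.
def law(n, seed=42):
    state = seed
    out = bytearray()
    for _ in range(n):
        state = (1103515245 * state + 12345) % 2**31
        out.append((state >> 16) & 0xFF)  # keep a high-order byte
    return bytes(out)

# The law is a few dozen characters of code, but a megabyte of its
# output is essentially opaque to a general-purpose compressor:
data = law(1_000_000)
packed = zlib.compress(data, 9)
print(len(data), len(packed))  # packed is barely smaller, if at all
```

In the Kolmogorov sense the output *is* compressible, since the law is a tiny description of it; but nothing like Voie’s “there should be a compression algorithm H(X)” follows, because no general procedure can find that short description.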
But you know where he’s going: it’s exactly what he’s been building toward all along, the idea I’ve been mocking throughout. Life is a self-referential system with two parts: a symbolic one, and a functional one. A functional system cannot represent the symbolic part of a biological system; a symbolic system can’t perform any function without an intelligence to realize it in a functional system; and the two can’t work together without being assembled by an intelligent mind, because when the two are combined, you have a self-referential system, which is impossible.
So… To summarize the points of the argument:
1. Dualism: there is a distinction between the physical realm of objects and machines, and the ideal realm of symbols and functions; if something exists in the symbolic realm, it can’t be realized in the physical realm except by the intervention of an intelligent mind.
2. Gödel’s theorem says that self-referential systems are impossible, except by intervention of an intelligent mind. (wrong)
3. Gödel’s theorem says that incompleteness statements are *true*. (wrong)
4. Biological systems are a combination of functional and symbol parts which form a self-referential system.
5. Therefore, biological systems can only exist as the result of the deliberate actions of an intelligent being.
This stinker actually got *peer-reviewed* and *accepted* by a journal. It just goes to show that peer review can *really* screw up badly at times. Given that the journal is apparently supposed to be about fractals and such, the reviewers likely weren’t particularly familiar with Gödel’s theorem or information theory. Anyone with a clue about either would have sent this to the trashbin where it belongs.