Irreducible Complexity in Mathematics?

It's been a while since I've replied to anything over at Uncommon Descent. But this entry from Salvador Cordova really caught my eye.

It is based on this paper by mathematician Gregory Chaitin, titled “The Halting Probability Omega: Irreducible Complexity in Pure Mathematics.”

Goodness! There's irreducible complexity again. Let's check in with Salvador first:

On the surface Chaitin's notion of Irreducible Complexity (IC) in math may seem totally irrelevant to Irreducible Complexity (IC) in ID literature. But let me argue that notion of IC in math relates to IC in physics which may point to some IC in biology...

First off, consider this article archived at Access Research Network (ARN) by George Johnson in the NY Times on IC in physics:

Challenging Particle Physics as Path to Truth

Many complex systems -- the very ones the solid-staters study -- appear to be irreducible.

The concept of “irreducible complexity” has been used by Alan Turing, Michael Behe, and perhaps now by physicists. Behe's sense of irreducible is not too far from the sense of irreducible in the context of this physics. If biological systems take advantage of irreducible phenomena in physics (for example, what if we discover the brain uses irreducible physical phenomena) we will have a strong proof by contradiction that there are no Darwinian pathways for biological systems to incorporate that phenomena.

The possibility of IC in physics may be tied to IC in math and this may have relevance to IC in biology. (Emphasis Added)

Let us ignore the arrogance and absurdity of mentioning Michael Behe in the same sentence with Alan Turing. Instead, let's try to describe all the ways in which that paragraph above is meaningless.

There's no comparison between Behe's notion of irreducible complexity and anything Johnson was talking about in his article. In fact, they are almost diametrically opposed, as I will now show.

According to Behe, irreducible complexity is a property that might be possessed by any multi-part system that can be said to have a clear function. He is fond of illustrating his idea with a mousetrap, you might recall. As applied to biology, an IC system is any biochemical system composed of several well-matched parts, such that the removal of any part destroys the function of the system. This property was said to be important because, according to Behe, any biochemical system possessing this property could not have evolved gradually under the aegis of natural selection.
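To see how narrow this test is, here is a minimal Python sketch of the knockout procedure Behe describes; the mousetrap parts and the works() predicate are hypothetical stand-ins for illustration, not anything drawn from Behe's writing.

```python
# A minimal sketch of Behe's "knockout" test: a system counts as
# irreducibly complex (IC) if the intact system performs its function
# but removing any single part destroys that function. The parts and
# the works() predicate below are hypothetical stand-ins.

def works(parts):
    """Toy functional test: this mousetrap-style system only 'works'
    when all five named parts are present."""
    required = {"platform", "spring", "hammer", "catch", "holding bar"}
    return required <= parts

def is_irreducibly_complex(parts, works):
    """True if the whole system works but every single-part knockout
    fails, which is all Behe's definition actually checks."""
    return works(parts) and all(not works(parts - {p}) for p in parts)

mousetrap = {"platform", "spring", "hammer", "catch", "holding bar"}
print(is_irreducibly_complex(mousetrap, works))  # True
```

Note what the test never looks at: the history of the system. Nothing in it rules out precursors whose parts had different functions, which is exactly the logical gap discussed next.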

Of course, Behe's claim is simply wrong as a matter of logic. Leaving aside the many illuminating things biology has to say about the evolution of complex systems, Behe's notion of irreducible complexity is irrelevant to any determination of whether a system could have evolved. There are a variety of scenarios through which a system meeting Behe's definition could have evolved through gradual stages, and that is enough to refute his central argument.

Johnson, by contrast, is discussing the difference between the approaches of particle physicists and those of solid-state physicists. The idea is that the particle physicists like to break down complex systems into their component parts, and then try to understand the whole by formulating laws governing the interactions of the parts. Solid-staters, according to Johnson, regard certain complex systems as “irreducible”; that is, they display properties that cannot be understood via simple laws governing the interactions of fundamental particles. This, of course, is related to the idea of emergent properties.

So where Behe is saying that something valuable is learned by taking a complex biochemical system, atomizing it into its component parts, and considering the effects of knocking out each part in turn, Johnson is saying this is precisely the sort of thing you must not do.

Moving on, what could the emphasized remark possibly mean? Salvador bases his notion of irreducibility in physics on Johnson's usage in the article linked to above. But Johnson does not apply the word “irreducible” to phenomena. He is talking about complex systems that cannot be studied properly by breaking them into their component parts. In other words, it is systems, not phenomena, that are irreducible. So what could it mean for the brain to take advantage of them? And even if we manage to impart some meaning to that phrase, why would it imply that such systems could not have evolved? I suspect Salvador has no more idea than I do.

So what was Chaitin talking about? Well, that gets a bit complicated. Basically, he is using ideas from algorithmic information theory to try to elucidate the nature of incompleteness in mathematics. Gödel's famous incompleteness theorem showed that for any consistent, finite set of axioms strong enough to include elementary number theory, there are statements that are true but unprovable from those axioms.

A “complete” mathematical theory would be one in which any statement that is true within the theory can be proved from the axioms of that theory. Since Gödel effectively showed this to be impossible, his result is referred to as the incompleteness theorem.

In his paper, Chaitin observes that, as important as Gödel's theorem is, it does not really tell us how serious a problem incompleteness is. In other words, Gödel showed that there must be certain propositions that are true but unprovable. But to do this he had to conjure up a pretty bizarre, self-referential kind of statement. Not exactly the usual, humdrum kind of statements with which mathematicians generally concern themselves. The way mathematicians undertook their work was ultimately little affected by Gödel's discovery. It was possible for professional mathematicians to pretty much ignore what Gödel did.

Chaitin proceeds to make an argument that mathematics, in a technical sense described in the article, can be said to be infinitely complex. Since any finite axiom system can only encompass a finite portion of this complexity, incompleteness is something that permeates the entire mathematical enterprise. This is said to point to a fundamental inadequacy in the axiomatic method.

The “irreducible” part refers to certain bit strings that cannot be realized as the output of any computer program shorter than the string itself. According to Chaitin, most bit strings, that is, most mathematical facts, are of this sort. This sheds new light both on the results of Gödel and on Turing's later results on computability.
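The phrase “most bit strings” can be backed by a simple counting argument, sketched below in Python. The sketch only counts candidate programs; it does not (and cannot) compute Kolmogorov complexity itself, which is uncomputable.

```python
# Counting argument behind the claim that most bit strings are
# "irreducible" (incompressible): there are 2**n strings of length n,
# but only 2**(n-k+1) - 1 bit strings of length at most n - k, so
# fewer than a 2**-(k-1) fraction of n-bit strings can be compressed
# by k or more bits.

def max_fraction_compressible(n, k):
    """Upper bound on the fraction of n-bit strings describable by
    some program at most n - k bits long."""
    programs = 2 ** (n - k + 1) - 1   # number of bit strings of length <= n - k
    return programs / 2 ** n

for k in (8, 20, 32):
    bound = max_fraction_compressible(64, k)
    print(f"compressible by {k} bits or more: at most {bound:.2e} of all 64-bit strings")
```

In particular, since there are only 2**n - 1 programs shorter than n bits, at least one n-bit string has no description shorter than itself; in this counting sense, incompressible strings are the rule rather than the exception.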

There is a lot to mull over in Chaitin's paper; I don't feel that I fully understand all of his points. I also find much to disagree with in his remarks about the proper conduct of mathematics. But for now the relevant thing is that Chaitin's notion of irreducibility has nothing to do with Behe's worthless notion of the same name, and has only a passing connection at best with the sort of irreducibility to which Johnson refers in his article.

Salvador is simply playing word games.

I'd be willing to bet that Chaitin purposefully tunes out the term "irreducible complexity" whenever an IDer uses it, much like physicists tune out when they hear a new-ager use words like "energy" and "field". Like the new-agers, Behe has basically stolen a perfectly useful term with a specific meaning and mis-applied it to something much more vague.
The ID sense of "irreducible complexity" is actually a meaningless term, as MarkCC has pointed out. This point is very important: Until Behe can actually define "irreducible complexity" formally, it cannot be compared to either Turing's sense (computations that do not complete) or Chaitin's generalisation ("messages" whose Kolmogorov complexity is equal to the length of the message itself).
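In symbols, for a fixed universal machine U (this is the standard textbook formulation, not a quote from Chaitin's paper or from Behe):

```latex
% Kolmogorov complexity of a string x relative to a universal machine U
K_U(x) = \min \{\, |p| \;:\; U(p) = x \,\}
% x is incompressible ("irreducible") when no shorter description exists:
K_U(x) \ge |x|
```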
However, that's not the point here. I recommend that you read the New York Times article that's linked; you'll see that that, too, has nothing to do with Chaitin's incompleteness theorem. The irreducibility in that case has to do with the physics of complex systems. Some physicists, and they know they are in the minority, believe that complex physical systems cannot always be accurately modelled by looking at the physics of their parts.
Personally, I find the claim a bit dubious; I think it's more likely that we don't yet have the tools to model such systems mathematically. But you know what? It's a question that we can, eventually, settle. Either the modelling tools will get better (more powerful computers and better simulation techniques may be required; give it time and research), and prove that the complexity isn't a problem after all, or these minority physicists will be proven right, in which case, theoretical physicists have some deep thinking to do, and we'll be on the verge of a new era in physics as profound as the dawn of quantum theory.
Even if we discover that brains use "irreducible phenomena" in the physics sense, all this will prove is that evolution can find new uses for extant features. But biologists knew that already.

By Pseudonym (not verified) on 31 Jan 2007 #permalink

One more thing. In another paper by Chaitin, he points out:

In a way, saying something is irreducible is giving up, saying that it can't ever be proved. Mathematicians would rather die than do that [...]

The whole "field" of ID is, of course, founded on giving up.

By Pseudonym (not verified) on 31 Jan 2007 #permalink

Salvador is simply playing word games.

Indeed. He makes the unwarranted claim that because some phenomena look like they are emergent, they could not be modeled.

He also takes care, in his overlong citation of Johnson, to excise the two parts that describe the emergence of fundamental laws and the universe as a process like an economy (or evolution!), or as reductionism, both because they contradict his argument and because he doesn't like natural explanations.

The very existence and verification of evolutionary theory, which describes the large-scale emergent behavior of life, shows that Sal's unwarranted claim is false. It also shows that life, at least in this respect, isn't irreducible in the sense that Chaitin's Ω is.

Btw, I will enjoy reading Chaitin. I like his idea about the importance of adding observed axioms such as P != NP. (And I think Scott Aaronson argues the same, making an analogy to the second law of thermodynamics.) I also like his suggestion that mathematics is quasi-empirical.

By Torbjörn Larsson (not verified) on 01 Feb 2007 #permalink

Behe has basically stolen a perfectly useful term with a specific meaning

It appears the corresponding biological term is "interlocking complexity". (Note that Johnson mentions "interlocking parts" too.) IIRC it was introduced in the 1930's as a (verified) prediction from evolutionary theory!

The ID sense of "irreducible complexity" is actually a meaningless term, as MarkCC has pointed out.

A nitpick: That article debunks Dembski's term "specified complexity", which in one of his many attempted definitions amounts to simultaneous simplicity and complexity (compressible and non-compressible).

MarkCC debunks Behe's "irreducible complexity" in another article, where he finds that it amounts to (local) simplicity, and that simplicity in general is ill-defined.

"given a system S, you cannot in general show that there is no smaller/simpler system that performs the same task as S." (From Chaitin, btw.) Systems can become interlocked, and also evolve away from that situation, by many known mechanisms. So while interlocking complexity exists and is predicted by evolution, it isn't a barrier to it.

Some physicists, and they know they are in the minority, believe that complex physical systems cannot always be accurately modelled by looking at the physics of their parts.

Another nitpick: I don't think it is emergence in general that Laughlin and Zhang argue for, which may not be such a controversial claim anyway. Chemistry emerges from physics, and biology emerges from both, and we may never be able to connect these areas completely.

What they seem to argue for especially is the emergence of fundamental laws. Polchinski elsewhere seems perfectly all right with observed fundamental laws being chosen anthropically in string theory, but he also seems to see no reason to believe that reductionism should not apply.

By Torbjörn Larsson (not verified) on 01 Feb 2007 #permalink

"Let us ignore the arrogance and abusrdity of mentioning Michael Behe in the same sentence with Alan Turing."

Thank you for getting that out of the way quickly.

The confusion (intentional or otherwise) revolves around the word "complexity."

In mathematics, the competition is between the cluster of meanings used by Shannon et al., as measured by the entropy of an ensemble (set) of messages, and Kolmogorov-Chaitin complexity, which can apply to a single structure, message, or number.
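The distinction is easy to illustrate. In the sketch below (an illustration only, not anything from the papers under discussion), Shannon entropy is computed for a probability distribution over messages, while zlib-compressed length is used as a rough, computable stand-in for the Kolmogorov-Chaitin complexity of one particular string, which is itself uncomputable.

```python
import math
import os
import zlib

def shannon_entropy(probs):
    """Shannon entropy in bits of a probability distribution over an
    ensemble of messages; a property of the distribution, not of any
    single message."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def compressed_length(data: bytes) -> int:
    """Length after zlib compression: a crude upper-bound proxy for
    the (uncomputable) Kolmogorov-Chaitin complexity of one string."""
    return len(zlib.compress(data, 9))

# Entropy describes an ensemble: a fair coin carries 1 bit per toss.
print(shannon_entropy([0.5, 0.5]))

# Complexity describes individual strings: a highly regular string
# compresses to almost nothing, while a typical random string barely
# compresses at all.
print(compressed_length(b"ab" * 1000))      # far less than 2000 bytes
print(compressed_length(os.urandom(2000)))  # roughly 2000 bytes or more
```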

ID points vaguely at both, replaces each by straw-man misrepresentations, and then confuses, conflates, and internally contradicts itself about those straw men.

Information-theoretic and communications-theoretic usages of "information" and "entropy" have a complicated connection with the thermodynamic usage, which is also replaced in ID by a straw man, typically revolving around confusion between equilibrium and nonequilibrium.

If biological systems take advantage of irreducible phenomena in physics (for example, what if we discover the brain uses irreducible physical phenomena) we will have a strong proof by contradiction that there are no Darwinian pathways for biological systems to incorporate that phenomena.

Even setting aside all the other problems, I can't begin to understand why Sal says this (beyond disingenuousness, of course). Let's pretend, for example, that the atom is irreducibly complex (in either a physical or ID sense). Why on earth would this pose a problem for natural selection? What the hell does it have to do with anything?

Or to put it another way, let's pretend that I make hats for a living. I have no idea how to make felt, but I get it from somewhere else and use it in my hats. Does this mean I can't really make hats?

By Ginger Yellow (not verified) on 01 Feb 2007 #permalink

The title says it all. If the title uses phrases like "Halting probability omega," which sounds like it was lifted from bad science fiction, the paper itself is probably bad science fiction.

Thanks for the nitpicking, BTW, Torbjörn. Pure maths I know a bit about, solid state physics and biology not so much. That's one of the reasons why I hang around; you learn a lot from watching other people's mistakes being corrected.

By Pseudonym (not verified) on 01 Feb 2007 #permalink

The title says it all. If the title uses phrases like "Halting probability omega," which sounds like it was lifted from bad science fiction, the paper itself is probably bad science fiction.

Actually, no. Halting probability omega is Greg Chaitin's construction for the probability that a random program will halt. It ties into Kolmogorov formulations of complexity and illustrates an interesting instance of a number that can be defined but not computed.
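For reference, the standard construction (not a quotation from Chaitin's paper): fix a universal prefix-free machine U, so that no valid program is a prefix of another, and sum over the programs p that halt on U:

```latex
% Chaitin's halting probability for a universal prefix-free machine U
\Omega_U = \sum_{p \,:\, U(p) \text{ halts}} 2^{-|p|}
```

Prefix-freeness is what makes the sum converge (by Kraft's inequality) to a real number between 0 and 1, and knowing the first n bits of Omega would settle the halting problem for every program of length up to n, which is why the number is definable but not computable.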

Tyler DiPietro is exactly right. That's why I wrote: "Kolmogorov-Chaitin complexity."

Needless to say, Chaitin's "Omega" bears little resemblance to the cosmic theological evolutionary "Omega Point" of Teilhard de Chardin and his noosphere, or its later speculative physics embedding by Frank J. Tipler, or the science fiction novels by George Zebrowski [The Omega Point Trilogy, Ace, 1983], let alone "I am the alpha and the omega" in Revelation 22:13.

Tyler DiPietro is exactly right. That's why I wrote: "Kolmogorov-Chaitin complexity."

And you forgot to include Solomonoff in there for maximum verbosity, violating a cardinal rule of geekery. ;)

Now I'm confused. Are you saying that irreducible verbosity doesn't apply strictly in geek theory?

By Torbjörn Larsson (not verified) on 01 Feb 2007 #permalink

Now I'm confused. Are you saying that irreducible verbosity doesn't apply strictly in geek theory?

No, the problem is that what is irreducible is only so in that you can effectively reconstruct something. Using anything aside from the maximum level of verbosity possible fails to make you effectively the Biggest Of All Possible Geeks (BOAPG).

Yes, of course. Jonathan must guard his status carefully; there will be many contenders watching out for slip-ups like that. Modern Geek is complicated, with influences from Scythian culture (with important Thracian contributions), Hackish language, and Medieval Geek (Latin). It is a daunting task!

By Torbjörn Larsson (not verified) on 01 Feb 2007 #permalink

Gentlemen, ladies, and non-carbon-based entities: you must realize that the > 10^7 words that I have in print and/or online are just the tip of the irreducibly complex iceberg.

Solomonoff is quoted extensively in my draft monograph, over 100 pages in single-spaced format, two excerpts of which have been presented at international conferences:

"Complexity in the Paradox of Simplicity", Jonathan Vos Post and Philip Fellman, 6th International Conference on Complex Systems,
http://necsi.org/events/iccs6/viewabstract.php?id=248

and

Jonathan Vos Post and Philip Vos Fellman, "The Paradox of Simplicity", Annual Conference of the North American Association for Computational Social and Organizational Sciences (NAACSOS), 22-23 June 2006, Friday 23 June 2006, 10:30-noon.

One of my goals is to have beings on worlds not yet discovered reading my work a thousand years from now, and saying (roughly translated): there must be some error in our database here. How could a human from the mid-20th century have written so cogently on our contemporary situation?

The best way to do this would be to create a single very high quality immortal work of genius, with huge fitness in the plane of ideas.

The road that I seem to be following is: create a large number of medium-quality, rapidly decaying works of geekiness, with small individual fitness in the plane of ideas, and let posterity sort things out by a transtemporal interstellar evolutionary algorithm.

I think the thing that irritates me most about IDers and other creationists is that you can spend a long, LONG time and a lot of effort trying to reach a careful understanding of some scientific paper or mathematical result, but the IDers seem to have no compunction whatsoever about shooting from the hip. "Uh ... that paper doesn't mean what you say it does."

"Sure it does."

"Uh ... no, it doesn't, because of reasons R1 through R3."

"Sure it does."

"But you haven't addressed reasons R1 through R3."

"They're just wrong."

"Why are they wrong?"

"Because the paper means what I said."

"Never mind."