Finally, I have found online a copy of the magnificent culmination of the 20th century's most ambitious work of mathematics: the last page of Russell and Whitehead's proof that 1+1=2, on page 378 (yes, three hundred and seventy-eight!) of the Principia Mathematica. Yes, it's there. The whole thing: the entire Principia, in all of its hideous glory, scanned and made available for all of us to utterly fail to comprehend.
For those who are fortunate enough not to know about this, the Principia was, basically, an attempt to create the perfect mathematics: a complete formalization of all things mathematical, in which all true statements are provably true, all false statements are provably false, and no paradoxical statements can even be written, much less proven.
Back at the beginning of the 20th century, there was a lot of concern about paradox. Set theorists had come across strange things - things like the horrifying set of all sets that don't contain themselves. (If it contains itself, then it doesn't contain itself, but if it doesn't contain itself, then it contains itself. And then really bad actors need to pretend to be robots short-circuiting while Leonard Nimoy looks on smugly.)
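(If you'd like to watch that self-reference bite in a form you can actually run, here's a toy sketch. It's purely my own illustration, not anything from the Principia: it models a "set" as a membership predicate, and shows the question "does the Russell set contain itself?" never settling, rather than producing a formal contradiction.)

```python
# Toy illustration (mine, not from the Principia): model a "set" as a
# membership predicate, and build Russell's "set of all sets that do
# not contain themselves."
def russell(s):
    return not s(s)  # russell "contains" s exactly when s does not contain itself

try:
    russell(russell)  # does the Russell set contain itself?
except RecursionError:
    print("Asking whether it contains itself never settles: infinite regress.")
```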
So some really smart guys named Bertrand Russell and Alfred North Whitehead got together, and spent years of their lives trying to figure out how to come up with a way of being able to do math without involving bad actors. 378 pages later, they'd managed to prove that 1+1=2. Almost.
Actually, they weren't there yet. After 378 pages, they were able to talk about how you could prove that 1+1=2. But they couldn't actually do it yet, because they hadn't yet managed to define addition.
And then, along came this obnoxious guy by the name of Kurt Godel, who proceeded to show that it was all a big waste of time. At which point I assume Russell and Whitehead went off and had their brains explode, pretty much the same way that the bad actors would later pretend to do.
An entertaining post. But perhaps the Principia wasn't quite as pointless as you suggest ...
Wow, I don't think I'd actually seen that before. Metal Machine Music for math geeks. Of course, nowadays machine verification of proofs is a minor industry, and is (in my opinion) leading to greater certainty about the truth of the statements whose proofs can be so verified. For instance, it would be of great interest to have such a verification of the classification of finite simple groups. So I think saying that it was all a waste of time is a bit of an exaggeration.
Rats. I was going to post a nitpick that the name is not Godel, but Gödel. But then it turns out that the crucial second letter of his name is replaced by a question mark in the preview. Probably a bug in the blogging software. Oh, but wait! It works right when I try again, from the preview! How very odd.
Could you please link to an image with better resolution - the text is illegible.
I'd rephrase the last paragraph as...
At which point I assume Russell, Whitehead AND ALL THEIR FELLOW MATHEMATICIANS WORLDWIDE went off and had their brains explode, pretty much the same way that the bad actors would later pretend to do.
About Goedel's conjecture, I recommend this novel.
Correction: Goedel's nice piece of math is a theorem, not a conjecture.
So is 1 + 1 = 2 merely an approximation?
I take it you're not a big fan of number theory, the love of my warped life? *grin*
My favorite part of this story is that Mrs. Whitehead apparently disapproved of Russell, the scruffy, free-thinking socialist, and wouldn't countenance having him around. So the Principia was written in Whitehead's kitchen where the two would meet after Mrs. Whitehead went to bed.
Of course, both men went on to do many more worthwhile things after this, so no exploding brains, I guess.
(Now I can't dig up where I first read this story, but it has stuck in my mind for years.)
Nick:
I actually have a huge amount of respect for Russell and Whitehead. I was very serious when I said that the Principia was the most ambitious work of mathematics of the 20th century. They were seriously setting out to solve a serious problem, and they went about it in an astoundingly intense and difficult way in order to try to fix the problem once and for all.
It's not their fault that what they were working on was actually impossible. No one had a clue that the effort was doomed to failure.
And you can still learn a lot from the methods of the Principia. It is truly both magnificent and hideous: magnificent in that it's one of the most amazing applications of human intelligence, reasoning, and abstraction ever written; hideous in that it spends hundreds of pages in an ultimately futile effort to try to get to the point where the simplest statements of arithmetic become meaningful without allowing contradictions.
Janne:
If you follow the link in the original post, they have the entire Principia online. The page that image comes from is page 378. I can't put a better quality image up; the permissions on the scanned document that I linked to do not allow me to do the image alterations that I'd need to make it legible at a reasonable size. But you can see it - and all of the rest of the Principia - by following the link.
Extreme math indeed. Somewhere you say that math is fun, and it is, but here the notational complexity ruins the fun. Just after (nearly) proving that 1+1=2, the breaking news comes only a couple of pages later (page 382):
I think they were on the right track, but the pace was a bit on the slow side...
I think that the program of making math completely rigorous was a good idea at the time, but it has limited usefulness in practice. Actually, the main accomplishment was probably the (at the time) counterintuitive result that some propositions are undecidable. That's an important result that would have been missed otherwise.
However, when you're trying to get someone to think like a mathematician, the question is not how to get the formalism precise but how to map it into the cognitive machinery of the human brain. There's a big difference between verifying each step of a proof and actually having a sense of why the whole thing is true. In the latter case, it's easy to be led astray, so the test is whether you can apply that sense to similar questions and get the right answer. You shouldn't stop at intuition, but if it is well-placed, the actual writing of a formal proof should be close to a mechanical process, without any major surprises along the way.
I think Polya was on the right track in How to Solve It by suggesting heuristics that obviously aren't sound but actually work well in getting a handle on problems: e.g., solve a simpler related problem first, solve a more general problem first (inventor's paradox), sleep on it (works only if you have really been thinking hard). I think there's not enough emphasis along these lines in math education, with the result that people equate mathematics with mind-numbing pages of formalism that give only the product and not the process of mathematics as practiced by humans with brains ill-suited to the task.
Actually, with the ready availability of computers, we should be able to do both in some cases. Provide humans with a readable summary of the proof steps (what we call a "proof" in a mathematics publication) and then construct a formally verifiable proof--it doesn't have to go down to a minimal set of axioms, just well-established theorems.
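As a rough sketch of what that division of labor can look like (my own example in Lean, not anything the commenter specified): the human-readable claim is just the theorem statement, and the machine-checked part leans on an already-established library lemma rather than going all the way back to raw axioms.

```lean
-- Illustrative sketch: a statement a human can read at a glance,
-- discharged by the checker from a well-established library lemma
-- (Nat.add_comm) instead of from first principles.
theorem readable_claim (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- And the post's running example, checked mechanically from the
-- built-in definitions of Nat, (+), 1 and 2:
example : 1 + 1 = 2 := rfl
```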
If Vulcans are fully logical, wouldn't Russell's paradox make their heads explode too? Maybe the fact that he survived when all those computers died is what led Spock to conclude, "Logic is the beginning of wisdom, Valeris, not the end."
Remember the Futurama episode where Leela tries to destroy the evil Santa Claus robot with a logical paradox? "Ha! My head was built with paradox-absorbing crumple zones!" I also like the Ghost in the Shell: Stand Alone Complex episode where one robot stumps another, less capable one with the Liar paradox.
Uh, re my previous post: It's becoming clear that scienceblogs.com is mixing its charsets in a bad way. It uses latin-1 for the main pages, and hence also for the posting form. But the comment preview uses utf-8, so when I wrote Kurt's last name there, it looked fine. But then it's displayed as latin-1 once more, so it gets screwed up. Guess I get to find out to whom to report the bug. Sorry about the off-topic-ness, but I think great people like Kurt G. deserve to get their name spelled right.
For a similar modern project, see also Metamath's proof that 2+2=4.
PaulC wrote: "I think that the program of making math completely rigorous was a good idea at the time, but it has limited usefulness in practice."
It would have been useful, in that future math proofs that followed their formalism would have been more "trustworthy". The problem is, how do you know that a math proof you have in front of you doesn't implicitly rely on a paradox (like the "set of all sets...")? If the foundations of your field of math are suspect, then all conclusions are similarly suspect.
I suspect it would have been quite useful, if it had worked.
"Actually, the main accomplishment was probably the (at the time) counterintuitive result that some propositions are undecidable. That's an important result that would have been missed otherwise."
But Russell and Whitehead didn't come up with the undecidability result. That was Godel.
Unless you somehow think that Godel wouldn't have worked on his stuff without Principia Mathematica coming first. That's a pretty indirect connection, and not really an "accomplishment" of Russell and Whitehead.
Harald,
It's not all of scienceblogs.com since for example Pharyngula seems to work fine. Maybe it's only this blog.
Don:
My math history isn't the best, but I do think there's a strong connection between the Principia and Godel. I think I heard the story in a talk that Greg Chaitin was giving at work, but I'm not sure. This would have been my first year at my job, which was 10 years ago, so a lot of talks from that long ago have blurred together.
Anyway, if I remember it correctly, the story is that R&W weren't alone. The problem of paradox was *the* math problem that people were interested in. The Principia was the most ambitious approach to trying to solve the paradox problem. Lots of other very famous mathematicians worked on it; particularly important for the history of Godel was Hilbert.
Godel came along when that stuff was in full swing, and was introduced to the field through a course on Russell's work on number theory (given by? using a book by? Russell). So you can make an argument that Russell was a major mover in what led to Godel's incompleteness theorem.
But I think that incompleteness was driven more by a reaction to Hilbert than to R&W.
For the text format problems, I'll check with Seed's technical guru, and see if we can get it fixed up.
Mark,
It seems the history is incomplete. :-)
My name is usually provided by TypeKey, which I started using on Pharyngula. We can all check what happens with my "umlaut o". Or Godel's. :-)
Don Geddis:
Russell and Whitehead were not the only ones interested in developing a way to prove mathematical principles formally starting with axioms. Peano's axioms preceded Goedel. Earlier than that, Boole certainly had the goal of formalizing logical argument.
Absent a program of trying to formalize mathematics, Goedel would have had to do the whole thing himself--not just showing that inference steps could be encoded as numbers, but inventing the whole notion of a formal symbolic proof. He was a brilliant guy, but it seems reasonable to assume that he was helped along by the fact that people already had a notion of what was meant by a formal proof. And, in fact, Hilbert had already asked the question of whether these were sufficient to decide everything. That question only makes sense if you think there could be something interesting or worthwhile about having a formal proof.
In fact, the evidence shows that Goedel was directly, admittedly influenced by Hilbert's Program, http://plato.stanford.edu/entries/hilbert-program/ which was basically a plan to try to decide mathematics formally.
So, in short, yes, I feel confident in saying that unless there had been a significant ferment of mathematicians interested in trying to show that all of mathematics could be proven formally, it is highly unlikely that Goedel would have come in and proved his incompleteness result. I mean, I guess it could have happened, but it is not what actually happened, and what actually happened required the work of many mathematicians over a century or so, most of whom were motivated by the goal of proving completeness rather than incompleteness.
The extent to which Goedel was influenced by Whitehead and Russell in particular wasn't my point. I have no idea, and did not refer to them in particular in my comment.
Paul:
According to Wikipedia, my memory was actually amazingly close :-)
Godel did get exposed to the ideas through one of Russell's textbooks; but Hilbert's attempt in the same vein as R&W actually had more to do with Godel than R&W did. But the info I can find does strongly suggest that without Hilbert, Godel wouldn't have wound up doing the work that led to incompleteness.
Also, thanks for that link; looks like a very nice summary of the Hilbert program.
Of course, the big irony is that it was Russell who had pointed out the fatal flaw in Frege's system of logic; Frege's initial reply to Russell is one of the classics of mathematics...
MJD at http://newbabe.pobox.com/~mjd/blog/math/PM.html discusses this and also has a nifty postscript linking the symbology in PM with the choice of lambda for lambda calculus. An excellent read.
My recollection is that R&W came up with the paradoxical "set of all sets that are not members of themselves," but thought that such paradoxical statements were discrete curiosities that could be safely "walled off" from the rest of mathematics as simply undecidable, like the value of 0/0. Godel came up with a proof by construction that used this concept to demonstrate that unprovable propositions could not be segregated from the rest of mathematics.
tgibbs:
Mostly, but not entirely right.
Russell is the discoverer of the "set of all sets that are not members of themselves"; in fact, it's called Russell's paradox.
It was generally believed by mathematicians around the time that paradoxical constructs were errors caused by inadequate formalisms, and that they could be eliminated by creating a proper basis for mathematics. That's a big part of the central idea behind the Principia and Hilbert's program. They thought that by starting from scratch and building up mathematics entirely in formal logic that they could arrive at a decision procedure: a procedure that could, for any mathematical statement, produce a proof of either its truth or its falsehood.
They didn't believe that the paradoxical statements could be "walled off"; they believed that they were fundamental errors in the basis of mathematics, meaningless statements that could only be constructed because the mathematical formalisms that permitted them were invalid. They thought that by constructing mathematics properly, they would basically "go away": there would be no way to even say them, because they were based on invalid concepts.
Loosely, the idea behind R&W's construction (and Hilbert's program as well, if I'm recalling it correctly) was a system of types: every statement was clearly defined to range over a particular kind of value; and no statement could be defined in a way that allowed it to talk about itself. So, for example, you could have first order predicates that reasoned about base values; second-order predicates that could reason about base values and first order predicates; third order predicates that could reason about base values, first order, and second order predicates, etc. But no predicate could ever reason about itself: an Nth order predicate could never reason about anything beyond the (N-1)th order predicates.
The key to what Godel did was to show that you couldn't make that separation hold: even in the strictest type-theoretic framework, you could represent, for example, first order predicates using natural numbers, and then allow first order predicates to range over first order predicates encoded into numbers. (Actually, I think it was not just natural numbers, but naturals with Peano arithmetic.)
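To make the trick concrete, here's a rough sketch of a Godel-style numbering. This is my own toy encoding, not Godel's actual scheme: formulas over a small alphabet become ordinary natural numbers, and can be recovered from them, so statements about numbers can indirectly be statements about formulas.

```python
# Toy Goedel-style numbering (an illustrative sketch, not Goedel's scheme):
# each formula, read as a list of symbols, is packed into one natural number.
SYMBOLS = ['0', 'S', '=', '(', ')', '+', 'x', 'not', 'forall']
BASE = len(SYMBOLS) + 1  # digit 0 is never used, so decoding is unambiguous

def godel_number(formula):
    """Encode a list of symbols as a single natural number."""
    n = 0
    for sym in formula:
        n = n * BASE + (SYMBOLS.index(sym) + 1)
    return n

def decode(n):
    """Recover the symbol list from its Goedel number."""
    out = []
    while n > 0:
        out.append(SYMBOLS[(n % BASE) - 1])
        n //= BASE
    return list(reversed(out))

# The formula "0 = 0" is now just a number, so a predicate that only
# "talks about numbers" can, in effect, talk about formulas: exactly
# the loophole that defeats the strict stratification described above.
code = godel_number(['0', '=', '0'])
print(code, decode(code))  # 131 ['0', '=', '0']
```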
If you think about doing formalized (machine verifiable) math - think twice. You do a proof, then another one and ... you are hooked. You don't want to do romantic math any more. Every standard proof (informal proof sketch, that is) seems full of errors and missing references. Russell never recovered. Watch out.
Addicted
P.S. As for the usefulness of formalized (and machine verifiable) math, I think the most interesting application is writing formalized proofs in logical systems that are much different from what our brain is naturally wired for (for example, quantum logic; see Metamath's Quantum Explorer). This cannot be done in an informal way - humans cannot reliably verify such proofs. Maybe one day it will turn out that mathematics built on one of them is better at describing the physical world we live in than the one built on classical logic and ZF set theory?
I just got my copy of the April issue of Notices of the American Mathematical Society. The whole issue is essentially a tribute to Kurt G. The whole thing is available on the web. Enjoy! (But is anybody reading comments to a blog post that is already several days old?)
Torbjörn et al:
Thanks to a pointer from the Seed tech guru, I think that the charset issues are fixed, and everything should now be doing UTF-8.
Well, if you want an actual formal proof, see the metamath proof of 2+2=4. (1+1=2 is boring because that's how it defines 2.) This is a really nifty browseable hypertext of a computer-verified proof derived from the axioms of Zermelo-Fraenkel set theory. That is the 5254th theorem in the system, although not all of the preceding 5253 are required for the proof. (In particular, number 1329, the Axiom of Choice, is not a prerequisite.)
This also includes lots of other, more significant math, like that if an infinite series converges to a limit, that limit is unique.
Just to clear up the history a little bit for those who are interested: Gödel's work was done in response to Hilbert's call to mathematicians to prove the consistency, completeness and decidability of arithmetic using finite methods. The title of his all-important paper, however, is "On Formally Undecidable Propositions of Principia Mathematica and Related Systems I" (translation from the German by Martin Davis). So as you can see, R & W do have a direct connection to Gödel. As a footnote, you will notice the title ends with a Roman one (I), indicating that this is only part one of the paper. There never was a part two!
The notation is a killer - see
http://www.seop.leeds.ac.uk/archives/win2004/entries/pm-notation/
But there has been some improvement:
http://us.metamath.org/mpegif/equid.html
Actually, 1 + 1 = 3 for sufficiently large values of 1. The proof is left as an exercise for the reader.
Gödel's paper publishing the incompleteness theorem is "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme," or "On Formally Undecidable Propositions in Principia Mathematica and Related Systems."
So R&W do necessarily figure into his work, even if he only uses their notation and semantics.
The actual page number is 378+1. (Supply your own punch line.)
1+1=2 by definition.
The services rendered by Russell and Whitehead in the clarification of concepts and the cleansing of mathematics of metaphysical notions will not be forgotten for ages to come. No doubt they were founding fathers of modern symbolic/mathematical logic. If they proved 1+1=2 on page 378, they were not fools but dedicated and thorough. Their commitment should be respected instead of being laughed at. The great brains have always been humiliated in this manner. There is always room for error and improvement, but acknowledge the contribution of others without prejudice.
To reply to comment 37: No, 1 + 1 = 2 is not a definition, although it surely seems that way.
We've been told all our lives that 1 + 1 = 2 is true because it's easier to just remember it than to understand it. What Russell and Whitehead (and also Gödel and Landau) attempted to prove was that all the math we are familiar with can be derived from just a few primitive axioms. Surely 1 + 1 = 2 seems an obvious result, but I could ask: "yes, one apple plus one apple is two apples, but how do we know that's true for very, very large numbers?" It is dangerous to make assumptions like that, because sometimes they end up being false. Thus, it is enough to show that adding 1 to, say, a gives you a+1, or, even easier, it suffices to show that adding 1 to 1 gives you 2. That's what all those mathematicians were after. :)
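For the curious, here's roughly what that looks like in a modern proof assistant (a sketch of my own in Lean, using Peano-style definitions rather than R&W's notation): once zero, successor, and addition are defined, 1 + 1 = 2 is a consequence of the definitions, not an assumption.

```lean
-- Sketch (mine, Lean 4): Peano-style naturals built from scratch.
inductive MyNat where
  | zero : MyNat
  | succ : MyNat → MyNat

-- Addition defined by recursion on the second argument.
def add : MyNat → MyNat → MyNat
  | n, MyNat.zero   => n
  | n, MyNat.succ m => MyNat.succ (add n m)

def one : MyNat := MyNat.succ MyNat.zero
def two : MyNat := MyNat.succ one

-- "1 + 1 = 2" now follows by simply unfolding the definitions.
theorem one_add_one : add one one = two := rfl
```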
We've been told all our lives that 1 + 1 = 2 is true because it's easier to just remember it rather than understand it. So what Russel and Whitehead, (also Gödel and Landau) attempted to prove was that all the math we are familiar with could be derived from just a few primitive axioms. Surely 1 + 1 = 2 seems an obvious result but if I were to ask "yes, one apple plus one apple is two apples, but how do we know that's true for very very large numbers?" It is dangerous to make assumptions like that because sometimes they end up being false. Thus, it is enough to show that adding 1 to, say a, gives you 1+a, or even easier, it suffices to show that adding 1 to 1 gives you 2. That's what all those mathematicians were after. :)