One of the reasons that the axiom of choice is so important, and so necessary, is that there are a lot of important facts from other fields of mathematics that we'd like to derive in terms of set theory, but which either require the axiom of choice, or are equivalent to it.
The most well-known of these is called the well-ordering theorem, which is fully equivalent to the axiom of choice. What it says is that every set has a well-ordering, which doesn't say much until we define what a well-ordering means. The reason that it's so important is that the well-ordering theorem means that a form of inductive proof, called transfinite induction, can be used on all sets.
What's well-ordering?
A set S is well-ordered if and only if:
* There is a total ordering relation ◊ on S;
* Every non-empty subset T ⊆ S has a least element l ∈ T: ∀t∈T, t≠l ⇒ l◊t.
Note that these definitions mean that there must be a least element of the set; but there need not be a greatest element.
These two properties mean that every element e∈S (except the greatest, if there is one) has a unique successor under ◊: the least element of the set {x ∈ S : e◊x}. By the same reasoning, every subset of S with an upper bound has a least upper bound: the least element of its set of upper bounds.
It's easy to show that ≤ is a well-ordering of the natural numbers. But it doesn't work for showing that the integers are well-ordered: under ≤, the set of negative integers has no least element. But we can easily define a well-ordering relation on the integers:
* x◊y if and only if one of the following holds:
* x is zero and y is non-zero;
* x is positive and y is negative;
* x and y are both positive and x≤y;
* x and y are both negative and y≤x.
This gives us an ordering of the set 0, -1, 1, -2, 2, -3, 3, ...
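To make the comparison rules concrete, here is a minimal Python sketch (the function name, the use of cmp_to_key, and the small test range are mine, purely for illustration, not from the post) that sorts a handful of integers by ◊. As a commenter points out further down, the order that the rules as written actually produce puts every positive integer before every negative one:

```python
from functools import cmp_to_key

def diamond(x, y):
    """Return -1 if x comes before y under the rules above, 1 if y comes before x, 0 if equal."""
    if x == y:
        return 0

    def before(a, b):
        if a == 0 and b != 0:
            return True            # zero comes before everything else
        if a > 0 and b < 0:
            return True            # positives come before negatives
        if a > 0 and b > 0:
            return a <= b          # positives in their usual order
        if a < 0 and b < 0:
            return b <= a          # negatives in reverse of their usual order
        return False

    return -1 if before(x, y) else 1

print(sorted(range(-3, 4), key=cmp_to_key(diamond)))   # [0, 1, 2, 3, -1, -2, -3]
```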
The real numbers are where the axiom of choice comes in. We can't show a well-ordering of the reals. But the axiom of choice tells us that one exists.
In fact, from the axiom of choice, the well-ordering theorem is provable: every set has a well-ordering. And if every set can be well-ordered, then we can do induction over any set using the following construction, called transfinite induction:
Suppose we have a set S, and a predicate P. We want to prove that P is true for all members of S. We can prove this using the well-ordering relation ◊ of S by showing:
* P(l) is true where l is the least element of S under ◊.
* For all x ∈ S: P(x) is true whenever P(y) is true for every y ∈ S with y◊x.
With the well-ordering theorem, we now have a way of applying induction to *all* sets, and therefore a way to use induction for proofs in any of the mathematical theories derivable from set theory. Without the axiom of choice, we wouldn't have this ability, and without induction, our ability to prove all manner of important theorems would be lost.
Since the axiom of choice is an axiom, presumably there are alternative axioms one can use, analogous to non-Euclidean geometry. Do you get anything interesting or useful if you do not accept the axiom of choice?
Yes, there are alternatives to the Axiom of Choice. The most common is probably the Axiom of Countable Choice, or ACω, which says that every countable collection of nonempty sets has a choice function.
There is also the Axiom of Dependent Choice, or DC, which states that for any nonempty set X and any entire binary relation ~ on X there exists a sequence (xₙ) in X such that xₙ ~ xₙ₊₁.
In terms of strength, we have ACω < DC < AC, where '<' means "strictly weaker than."
You can prove that DC is equivalent to the Baire Category Theorem for complete metric spaces, which means that the Baire Category Theorem is not provable in ZF+ACω, although you can construct the reals in ZF+ACω.
Blargh, even though I used HTML entities the post ate part of my comment.
ACω is strictly weaker than DC, which is strictly weaker than AC.
Not only do you have these weaker versions, but you can reject infinite choice altogether to get an intuitionistic topos. Then you get nice (for analysts) things like "all functions are continuous".
"Not only do you have these weaker versions, but you can reject infinite choice altogether to get an intuitionistic topos. Then you get nice (for analysts) things like "all functions are continuous".
Now there's a thought:-)
I'm one of those people who cannot believe that AC is true in our world (and are not educated in infinity-related mathematics).
Can someone show me some of those "all manner of important theorems" that require the full-strength AC?
I've been trying to understand the distinction between cardinals and ordinals, the well-ordering of real numbers, and the use of AC for several years already, with no success at all. Perhaps if you can show me something useful that requires AC, it might light the bulb.
To me, Cantor's approach to measuring infinities is just not right. There are, clearly, twice as many integers than even integers; there are more points on the whole real line than in the interval 0.0 to 1.0; and there is no decomposition that would allow the Banach-Tarski paradox to happen.
I know you all will think I'm stupid but I just cannot see where my problem is. Is it that the "mathematics" has no connection to the "real world"?
milan_va: I don't think you're stupid, but I think you're relying too much on intuition. Math has *a* relationship with the real world, but it's not an intuitive relationship or one where the real world puts restrictions on what can happen in math.
Mark's original post on the Axiom of Choice has some examples of things you'd lose if you gave it up: http://scienceblogs.com/goodmath/2007/05/the_axiom_of_choice.php
You've made a mistake with your ordering on the integers. What you've described is a well-ordering, but it's not 0, -1, 1, -2, 2, . . .
It's 0, 1, 2, . . . -1, -2, -3, . . .
So it has order-type ω+ω.
Concerning alternatives to choice, without AC, it's possible that the reals are a countable union of countable sets.
milan_va: I'll echo Brian and say that you're not stupid, you're just relying on intuition. Our sense of intuition was honed over evolutionary time to be quite accurate in most contexts. Unfortunately, we don't typically encounter infinite sets in the wild, and so our intuition about their sizes may be a little off. ^_^
For a very simple example, let's just look at what you said about there clearly being twice as many integers as there are even integers. We'll restrict ourselves to just the natural numbers (0 and greater) for now for simplicity, but I hope you can see that it doesn't change anything.
Now, for each natural number, multiply it by two. This is the associated even number. It should be obvious that if we start at 0 and continue on to 1, 2, 3, and so on, we hit every even number, starting at 0 and continuing on to 2, 4, 6, and so on. There aren't any gaps left behind.
Now, if there were really twice as many natural numbers as even numbers, this means that there are half as many even numbers as natural numbers. Thus, we should run out of even numbers when we're only halfway through the natural numbers. But we'll obviously never run out of even numbers. No matter how far along you go on the natural numbers, you can always multiply your number by two to get its matching even number.
Thus, since you can *always* match a natural number to a unique even number, the two sets must be the same size. If one was smaller than the other you'd run out of numbers from it while you still had some left over in the other.
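If it helps to see the pairing spelled out, here is a tiny Python sketch (the ten-number window is mine, purely for display; the point is that the rule n ↦ 2n never skips or reuses an even number):

```python
# Pair each natural number n with the even number 2n.
pairs = [(n, 2 * n) for n in range(10)]    # finite window, for display only
print(pairs)                               # [(0, 0), (1, 2), (2, 4), ..., (9, 18)]

# Going the other way, every even number e comes from exactly one natural: e // 2.
assert all(e // 2 == n for n, e in pairs)
```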
Well, in my professional life I've rarely used the well-ordering theorem, but I frequently use the fact, which the AC is necessary for, that every commutative ring with unit has a maximal ideal; more than that, that every proper ideal in such a ring is contained in (at least one) maximal ideal.
Intuition isn't bad per se, but there are different intuitions about the same thing, some better than others. Your intuition that there are twice as many positive integers as positive evens is due to construing number lines as rigid rulers, with the former ruler having twice as many tick marks as the latter.
But if we think of a number line as an expanse of rubber, then there's no problem. Supposing the tick marks are twice as wide on the even line as on the integer line (to suggest a void where an odd tick mark should be), then assuming the lengths of the rubber pieces were the same and finite, yep, there would be twice as many integers as evens. But all we do then is squish / contract the even (infinite) piece of rubber so that its tick marks are spaced the same width as those on the integer piece. This cuts the evens in half and matches them in a one-to-one correspondence with the integers in the obvious way (e.g., 6 when cut in half is 3, so it's matched with 3).
Since we have an infinite expanse of rubber to work with, then we'll pull in evens that were "too far out to see" before the contraction. That ensures that our two rubber pieces are now exactly the same, and hence must be of equal size.
The rubber view also shows why (0,1) and the set of all reals are of equal size -- stretch out the (0,1) piece infinitely to cover the entire real line. (I read the rubber analogy in Pugh's _Real Mathematical Analysis_, but I don't know if it's due to him or someone else.)
Er, the last sentence of Par.3 should read:
"That ensures that the widths between tick marks of our two rubber pieces..."
OK, so let me try this "oracle" mapping of real numbers to integers:
Given any real number,
a) If it was already mapped, return the same answer as before.
b) If it's a new query, return the first unused integer.
So it would map Pi to 0, 1.334 to 1, 45.3232 to 2, and so on, just as you ask. The reverse mapping could be defined as well.
Now, this maps "every" real number to an integer, yet there are always some unmapped reals.
To me, it's similar to the mapping of integers to even integers: you can map any one integer, yet there are still some awaiting their mapping. Where is the difference?
The difference is that you can *prove* there will be missing reals on your list. On *every possible* list of reals-to-integers. The most famous example is Cantor's diagonalization argument.
First, just for ease, let's assume you're only using the reals between 0 and 1 (including 0, but not 1). Since this range must be equal in size to or smaller than the set of all reals, if we can show that you *must* have missed a number in here, then we know that you would have missed a real if you used the whole set.
Now, take your mapping. We'll construct a number that is guaranteed to not show up on your list. For the first digit, we'll look at the first digit of whatever number maps to 0. If it's a 0, our number will have a 1 there. Otherwise, it'll have a 0. For the second digit, we'll look at the second digit of whatever number maps to 1, and do the same. For the third, we'll look at the third digit of whatever number maps to 2, and so on.
Eventually we go through the entire list and end up with a brand new real number. Because of how we assigned each digit, though, this number is guaranteed to be different from every real number in your mapping by at least 1 digit. Thus, it doesn't map to any integer, and you missed one.
Obviously, you can repeat this process for *any* mapping. It doesn't matter what you do. You can even take this brand new real we just made and add it to your list. We can make another one. And another. And another. You are guaranteed to miss an infinity of real numbers, and so we can always find a new one that you've left off. Thus, you run out of integers before you run out of reals, and so the reals are larger than the integers.
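Here is a minimal Python sketch of that diagonal construction, using the digit rule from above (the particular digit strings are made up just to have something to diagonalize against; any list would do):

```python
# A claimed enumeration of reals in [0, 1), written as strings of decimal digits.
# Only a finite prefix is shown; the argument works the same way on the full list.
listed = ["1415926535", "3333333333", "7182818284", "0000000000", "9999999999"]

# Build a number that differs from the n-th entry in its n-th digit:
# if that digit is 0, write 1; otherwise write 0.
diagonal = "".join("1" if digits[n] == "0" else "0" for n, digits in enumerate(listed))

print("0." + diagonal)
for n, digits in enumerate(listed):
    assert diagonal[n] != digits[n]    # differs from every listed number somewhere
```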
You can't repeat this argument for the mapping between the integers and the even integers. Some mappings will miss numbers, but there's at least one (the one I described, for example) that will connect all of them. You can give me any even number and I'll tell you the natural it maps to. Give me any natural and I'll tell you the even it maps to. In the even-to-integers mapping, I can find a mapping such that, no matter what number you give me, it's in there. In the real-to-integers mapping, no matter what mapping you give me, I can find a number that *isn't* on there. See the difference?
This isn't, for example, related to the fact that the reals are dense (that is, between any two reals you'll always find a third). The integers and even integers aren't dense, but this doesn't actually matter. The rational numbers *are* dense, but they're equal in size to the integers as well (with a really cool proof! to me, at least...).
The important bit is: is it possible to make a list out of them that only uses the "..." once but still gets everything? If so, then they are countable, which is the smallest infinite size a set can have. If not, then they are uncountable, which is larger. We're still not quite sure how large the reals are, exactly, but we know they're bigger than the integers. The size of the integers is often referred to as aleph-zero, and we *think* the reals are the next highest size, aleph-one, but we've never been able to prove it.
Everybody says that the interval (0, 1) can be mapped onto the entire real line, but very rarely is such a map actually provided. So here's one, for reference:
y = log[x / (1 - x)], 0 < x < 1
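For anyone who wants to play with it, here is a small Python check of that map and its inverse (the function names are mine; the formulas are just the one above and the logistic function that undoes it). Note that the formula covers the open interval 0 < x < 1, since the endpoints would send the logarithm to infinity:

```python
import math

def to_real(x):
    """Map x in the open interval (0, 1) onto the real line: y = log(x / (1 - x))."""
    return math.log(x / (1 - x))

def back_to_unit(y):
    """Inverse map: the logistic function 1 / (1 + e^(-y)) lands back in (0, 1)."""
    return 1 / (1 + math.exp(-y))

for x in (0.1, 0.5, 0.9):
    y = to_real(x)
    print(f"{x} -> {y:.4f} -> {back_to_unit(y):.4f}")
```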
The fact that the cardinality of the set of real numbers (constructed through Dedekind sections) is strictly greater than the cardinality of the natural numbers can be proven in ZF without any form of the axiom of choice.
I personally dislike the axiom of choice for esthetic reasons. I think it's only when you do some formalized mathematics that you can see how out of place this statement is in ZF. While ZF itself feels like a coherent system, AC is a contrived add-on. I have yet to see an example that would convince me that it is necessary for something important. For example, if it turned out that you can't formulate any version of stochastic calculus without the axiom of choice, that would convince me.
>Without the axiom of choice, we wouldn't have this ability, and without induction, our ability to prove all manner of important theorems would be lost.
This statement is very misleading. Of course even without AC the induction on natural numbers would still work and that is what is really needed.
The real numbers are where the axiom of choice comes in. We can't show a well-ordering of the reals. But the axiom of choice tells us that one exists.
We can't or haven't? Is it somehow the case that we cannot construct one (not enough time, perhaps)? If so, can you talk about that?
Slawekk: I don't know much at all about stochastic calculus. In ZF without AC, you cannot prove that the set of all reals is not a countable union of countable sets. Would that make any trouble for stochastic calculus?
rcriii: We cannot prove in ZF that there exists a well-ordering of the set R of reals. This has been proven. Roughly, and glossing over some technicalities, the argument goes like this. Assuming there is a model of ZF (which does exist as long as ZF is consistent), one can modify it to produce a model of ZF in which there is no well-ordering of R. Thus, ZF+"There is no well-ordering of R" is consistent. In fact, this was more or less how Cohen's original argument for the consistency of ZF+"not AC" went.
I realize that's probably not a satisfying summary if you don't have a good handle on what is meant by "model of ZF". Sorry.
> Would that make any trouble for stochastic calculus?
I don't know. Stochastic calculus is something that is widely used in applications. If AC is needed to formulate it then AC is needed. But I don't think anybody tried to formulate it without AC and there is a good chance it can't be done.
>in ZF without AC, you cannot prove that the set of all reals is not a countable union of countable sets.
This troubles me. On one hand we have a formally verified fact that the set of reals constructed with Dedekind sections is not countable. On the other hand I think I can show in ZF that a countable union of countable sets is countable (but I don't have a formally verified proof, so I can't be sure).
I think the apparent contradiction comes from what "the set of all reals" means in your statement. It probably means "a complete ordered field". So while it is true that we cannot prove in ZF that a complete ordered field is not a countable union of countable sets, we can do it for some specific models of the reals, like the one obtained with Dedekind sections. This explanation makes sense only if the statement "every two complete ordered fields have the same cardinality", which is true in ZFC, requires AC.
Nope, by "the set of all reals", we mean the reals as constructed by Dedekind cuts (or an equivalent formulation).
On the other hand I think I can show in ZF that a countable union of countable sets is countable
No you can't. If a set is countable then it can be put into bijective correspondence with N, this you know. Note, however, that there will never be only one such bijection. For instance, take one bijection and swap where 1 and 2 get sent to.
The proof that a countable union of countable sets is countable requires you to choose such a bijection for each of the countable sets, and thus requires an infinite number of choices.
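A sketch may help show where the choice is hiding. Once a bijection has somehow been fixed for each set, enumerating the union is completely mechanical: you dovetail along the anti-diagonals. In the Python sketch below, f(i, j) is a made-up stand-in for "the j-th element of the i-th set under its chosen enumeration"; the choice principle is what lets you assume such an f exists at all:

```python
from itertools import count, islice

def f(i, j):
    """The j-th element of the i-th countable set, under that set's *chosen* enumeration."""
    return (i, j)    # a stand-in; in general there is no canonical way to pick this

def enumerate_union():
    """Dovetail along the anti-diagonals i + j = 0, 1, 2, ... to list the whole union."""
    for total in count():
        for i in range(total + 1):
            yield f(i, total - i)

print(list(islice(enumerate_union(), 10)))
# [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0), (0, 3), (1, 2), (2, 1), (3, 0)]
```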
At first glance, it would seem that Cantor's diagonal argument could be used to disprove the existence of a well-ordering of the reals. Given a well-ordering, take its corresponding successor function, and use it to construct a sequence of the reals. A diagonal argument then shows that there must be a missing real.
Why is that argument invalid? Is it in assuming that this sequence of reals should contain every real? Or are the "..."s in the diagonal argument essentially different kinds of "..."s?
> The proof that a countable union of countable sets is countable requires you to choose such a bijection for each of the countable sets
OK, I can see that now, thanks.
>by "the set of all reals", we mean the reals as constructed by Dedekind cuts (or an equivalent formulation).
In what sense are those other formulations "equivalent"? Is the construction as classes of rational Cauchy sequences equivalent?
milan_va:
I am going to go with the rest here, it is intuition and "common sense" that fails. Since we live in something that can be approximated by a manifold we can't see those decompositions demonstrated. (I think; I'm sure the mathematicians here will correct me if I'm wrong.)
Slawekk:
I will (of course :-) take the opportunity to pound on my "model" drum. (As in a physicist's "model".) This makes me worry not about truth, but about whether the results of a theory are useful.
So if ZF+AC delivers more useful results (and more easily), I find it more elegant. Elegance is not parsimony; it is mainly power of expression, IMHO. (Parsimony is, in this view, only used to choose between equally powerful theories.)
But esthetic is different for different persons, so this may not be helpful.
Yes. A well-ordering of the reals would have to contain several (in fact uncountably many) consecutive infinite subsequences, and your method would only give you elements from one of them.
In what sense are those other formulations "equivalent"? Is the construction as classes of rational Cauchy sequences equivalent?
In the sense that they're isomorphic as complete ordered fields. So, among other things, they have the same cardinality. Yeah, the Cauchy sequences construction works.
Given a well-ordering, take its corresponding successor function, and use it to construct a sequence of the reals. A diagonal argument then shows that there must be a missing real.
The problem is that the diagonalization argument relies on the sequence of reals being ordered like the natural numbers: First, Second, Third, Fourth, etc, with no infinite spots on the list.
This is because, in the argument, every number in the sequence works with a certain decimal place, and the decimal places are ordered like that. You have the first, second and third decimal places, but you don't have any decimal places that are infinitely far to the right.
There are plenty of well-orderings that don't look like the natural numbers, though. They start like the naturals, but then they keep going past that and have elements infinitely far along. Such a well-ordering is what the reals have.
Torbjörn:
Well, I know that quantum mechanics uses lots of linear operators applied to infinite-dimensional (but separable) Hilbert space. AC-derived theorems like the Hahn-Banach and Baire Category theorems are generally useful in that sort of situation. I'm pretty sure those theorems are necessary in QM, but not certain. Even if they are, countable AC or Dependent Choice might be sufficient, because of separability. It's not my focus these days, but you might want to take a look.
Xanthir:
"We" do? I don't think there's any consensus on this point. It might be aleph-one (that was Cantor's guess), but apparently there's better reason for taking it to be aleph-two, or weakly inaccessible, or (depending on your notion of "true") ill-defined. I haven't heard any other reasonable answers.
And we never will be able to, unless we expand ZFC somehow. Gödel and Cohen hit us with that one too.
Ah the cuts!
Yet the number of gaps cannot be larger than the number of their ends.
It is difficult to divorce a discussion of AC from infinity of various sorts, being partners in crime so to speak.
Excerpt from a great new blog by a Fields medalist:
http://borcherds.wordpress.com/2007/05/26/my-dad-has-more-money-than-yo…
ε0, which is the ordinal measuring the strength of Peano arithmetic, in the sense that it is the smallest ordinal that Peano arithmetic cannot prove to be well-ordered (Gentzen's theorem). A reasonably natural function at this level, related to Ramsey's theorem, was found by Paris and Harrington as part of the proof of the Paris-Harrington theorem.
In fact, ε0 measures the strength of Peano arithmetic in a somewhat stronger sense: namely, in Peano arithmetic, well-orderedness of this ordinal implies consistency. (The way it is done is as follows: you can measure the complexity of proofs in Peano arithmetic by ordinals less than ε0, and then "streamline" proofs by a "cut-elimination" procedure that reduces the complexity and is formalisable in PA. It is obvious that a "streamlined" proof cannot be a proof of a contradiction; you use well-orderedness of ε0 to prove, uniformly, that every proof can be "streamlined".) I would assume that this holds for some other theories as well.
In fact, one can define a very simple and explicit inductive process, which given a natural number, always converges to 0; but the number of iterations required bounds any function which PA proves to be totally defined.
(A reference for this is a textbook of Takeuti on proof theory, _2nd_ edition.)
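The excerpt doesn't say which process is meant, but the Goodstein sequence is the standard example with exactly this flavour: it always reaches 0, PA cannot prove that, and the usual proof uses induction up to ε0. Here is a minimal Python sketch, on the assumption that it (or something like it) is the process being described:

```python
def bump(m, base):
    """Write m in hereditary base-`base` notation (exponents rewritten recursively),
    then replace every occurrence of `base` with `base + 1`."""
    if m == 0:
        return 0
    result, exponent = 0, 0
    while m > 0:
        digit = m % base
        result += digit * (base + 1) ** bump(exponent, base)
        m //= base
        exponent += 1
    return result

def goodstein(n, steps=8):
    """The first few terms of the Goodstein sequence starting at n."""
    terms, base = [n], 2
    while terms[-1] != 0 and len(terms) <= steps:
        terms.append(bump(terms[-1], base) - 1)
        base += 1
    return terms

print(goodstein(3))   # [3, 3, 3, 2, 1, 0] -- hits 0 almost immediately
print(goodstein(4))   # [4, 26, 41, 60, 83, ...] -- provably hits 0, but only after an astronomically long time
```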
Maya:
Except that the "gaps" between rational numbers don't have ends. Give me a rational number on the "left" side of the gap, and I'll give you a rational number larger than that which is still on the left side of the gap; similarly on the right. Thus the argument fails.
Sort of; if everything is finite, then we don't need to explicitly assume AC. However, AC doesn't play a role in the Dedekind cut construction, if that's what you meant to imply.
Incidentally, regarding "all real functions are continuous": I'm not sure that's such a nice thing to have. Suppose we take the function f where f(x) = 1 for x rational and f(x) = 0 for x irrational. Perfectly well-defined from R to itself, and not continuous anywhere. If we were in one of the topoi where all functions are continuous, then this function wouldn't make sense, or at least would look vastly different, I'm not sure which. Either way, it seems like too high a price to me.
Chad: Thank you for expanding on my words! I'm only a lay mathematician, as you may have guessed, and I love being corrected when it's interesting like that.
Oh, I was thinking of general use of the reals. I'm not especially worried (again :-) about which specific math is useful in physics - as long as any of them are.
But necessity may be an argument to elevate some math as theoretically if not practically better the day we have 'the theory of everything'. And it seems a fairly safe bet that QM is incorporated in it. So, thanks for the pointers!
(Though my understanding, such as it is, was that quantization isn't properly formalized math-wise yet, so it may be premature to discuss necessity in detail.)
i'm very interested in this idea of a well-ordering on the reals. i confess i have never heard of such a thing, despite an undergraduate degree in math. i don't doubt you, but i'm having a hard time imagining what the idea of "successor" could mean in the reals. maybe you could help out by going into more detail and/or providing a proof that uses induction on the reals (or at least an idea of what kind of theorem uses it and a rough idea of how it works).
i suspect i'm not the only one with this question. thanks for bringing it up!
Well, the well-ordering won't look like the regular ordering on the reals, so the successor of a given real could be anywhere.
I don't have a result on hand that uses induction on the reals, but here's something similar: There exists a subset of the plane that intersects every circle exactly 3 times.
The proof uses a well-ordering of the set of all circles in the plane (and also a well-ordering of the set of all points in the plane, although this use is somewhat hidden). First, it's not hard to show that the set of all such circles has the same cardinality as the reals.
So well-order the circles with the order-type of the continuum. This means choose the shortest well-ordering of the circles. The important fact about this well-ordering is that if you pick any circle and look at the set of all circles that come before it in the well-ordering, that set has cardinality smaller than the reals.
Now, we'll build a set X by going through the circles in order. At each circle, if it already intersects X three times, do nothing. Otherwise, add points to X so that it does, but make sure you pick these points so that they don't mess up any circles we've already handled.
Proving that there exist points that work for the current circle without messing up earlier circles is a cardinality argument: Each circle we've handled intersects the current circle either 0, 1 or 2 times. These points of intersection are bad points; all other points in the current circle are good points. But the number of points in the current circle is the same size as the reals, and the number of circles we've already handled is less than the reals. So the number of bad points is less than the reals, and thus there are infinitely many good points.
An example of how the weakening of the Axiom of Choice is important for quantum mechanics is given in a paper by Maitland Wright published in 1973 or so, "All operators on Hilbert space are bounded."
Polymath: "The Axiom of Choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn's lemma?" -- Jerry Bona
As Wikipedia says (in its article on the axiom of choice), this is a joke, as all three of these statements are equivalent (in ZF). The well-ordering principle is considered "obviously false" in the joke, because it's quite difficult to visualize what it means to well-order the reals. But it's not that hard to see where the equivalency comes from.
Deriving choice from well-ordering is obvious: well-order the union of all the sets from which one is choosing, then let the "choice" from each set be the least element of the set under the well-ordering.
Going from choice to well-ordering is a little trickier. Let S be the set one wishes to well-order, and take a choice function on P(S)\{∅}, the set of non-empty subsets of S. The choice function can be used to define a well-ordering -- given any initial segment W of the well-ordering, the supremum of that segment is defined to be the element chosen by the choice function from S - W. If W = S, of course, we have a well-ordering of the entire set S; the fact that the ordinals are a proper class guarantees that we will eventually exhaust all of S. (This is just a sketch of a proof, but you can find the full proof elsewhere on the web.)
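For anyone who wants the recursion in that sketch written out as a formula (this is the same idea in symbols, with f standing for the choice function on the non-empty subsets of S):

```latex
a_\alpha \;=\; f\bigl(S \setminus \{\, a_\beta : \beta < \alpha \,\}\bigr),
\qquad \text{for every ordinal } \alpha \text{ with } S \setminus \{\, a_\beta : \beta < \alpha \,\} \neq \varnothing .
```

Declaring a_β ◊ a_γ exactly when β < γ then gives the well-ordering; since the ordinals form a proper class and S is only a set, the recursion cannot keep producing new elements forever, so every element of S eventually gets listed.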
This argument is clear, but it doesn't really say *anything* about what the well-ordering of a set like the reals looks like...
I was thinking about #34 pretty hard but I just cannot take this one: "if you pick any circle and look at the set of all circles that come before it in the well-ordering, that set has cardinality smaller than the reals."
Trying to understand it, I came up with this idea: if there is a well-ordered set, and there are at most 10 elements before any element, then the set has the next bigger cardinality, i.e. 11.
Take the reverse: If my set has cardinality C, then there are at most C-1 elements before any element.
Now move to infinite sets: If the set has cardinality ℵ1, then there are at most ℵ0 elements before any element, since it's the next lower cardinality before ℵ1. Quite strange, isn't it?
But then, how many elements are before any natural number? It's "finite", but which one?
If the answer is ℵ0, then there would be, likewise, ℵ1 reals before any real, so the quoted claim would be false.
Is there another explanation or an error in my reasoning that I cannot see?
Another problem is that the construction does not save "future" circles from having 4 or more elements of X on them, but that's just a secondary problem.
But then, how many elements are before any natural number? It's "finite", but which one?
It doesn't need to be just one; it can vary from number to number. Consider the standard ordering on the natural numbers. How many elements are there before 4? Four. 0, 1, 2 and 3. How many elements are there before 7? Seven. Etc.
Also, notice that not all well-orderings have this property. If I take two copies of the natural numbers, call the first one "a" and the second one "b", and then put b after a, we get a well-ordering that looks like this:
0a, 1a, 2a, 3a, 4a,... 0b, 1b, 2b, 3b,...
This well-ordering has cardinality ℵ0, but there are ℵ0 many elements before 0b.
So I need to choose the right well-ordering to make the above argument work (which is why it's so hard to make work using Zorn's Lemma instead of the Well-Ordering Theorem).
Another problem is that the construction does not save "future" circles from having 4 or more elements of X on them, but that's just a secondary problem.
You're right, I was a little loose with my construction. We need a slightly more general definition of "bad points": rather than only considering circles we've already been to, we need to consider all circles which have three points in X.
Since the number of circles we've been to so far is fewer than the number of reals, and for each circle we added at most 3 points, X has size smaller than the size of the reals. So the number of 3 element subsets of X is less than the number of reals (this relies on a familiarity with cardinal arithmetic). So the number of circles we need to worry about is less than the number of the reals.
Now the argument proceeds as before.