With ordinals, we use exponents to create really big numbers. The idea is that we can define ever-larger families of transfinite
ordinals using exponentiation. Exponentiation is defined in terms of
repeated multiplication, but it allows us to represent numbers that we
can't express in terms of any finite sequence of multiplications.
As usual, the concept of ordinal exponentiation comes from a concept of set exponentiation: the ordinal α^β, where α=|A|, is the set of positions in the well-ordering of A^β; and A^β is the set of all ordered tuples of length β consisting of members of A. (It should be obvious what a well-ordering on this looks like: it's the lexicographic ordering of the tuples, based on the ordering of elements in A.)
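To make the lexicographic ordering concrete, here's a quick Python sketch - purely my own illustration, with a made-up helper name - that enumerates A^β for a small finite β and prints each tuple alongside its position. (For a finite β, at least; infinite exponents take more care.)

```python
from itertools import product

def tuple_positions(A, beta):
    """Enumerate A^beta - all ordered tuples of length beta over A - in
    lexicographic order based on the ordering of A's elements, and pair
    each tuple with its position in that ordering."""
    ordered_A = sorted(A)  # the ordering of the elements of A
    return list(enumerate(product(ordered_A, repeat=beta)))

# A = {0, 1}, beta = 3: the 2^3 = 8 tuples, in positions 0 through 7.
for position, t in tuple_positions({0, 1}, 3):
    print(position, t)
```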
This shouldn't be too surprising: the basic idea of exponentiation is, as I said, repeated multiplication, so that A^2 = A×A, which is the set of ordered pairs of members of A. To be a bit more formal about what we mean by repeated multiplication:
- α^0 = 1
- α^1 = α
- α^(β+1) = α^β × α
- If β is a limit ordinal, then α^β is the limit of α^γ over all γ < β.
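Just to unwind that definition once (this is my own worked example, nothing deep): start with a finite base like 2 and push it up to the first limit ordinal.

```latex
% Unwinding the inductive definition for 2^omega:
2^0 = 1,\qquad 2^{n+1} = 2^n \times 2,\qquad
2^{\omega} = \lim_{n<\omega} 2^n = \sup\{1, 2, 4, 8, \dots\} = \omega .
```

So at the first limit stage, a finite base just gives you ω back - which is exactly where ordinal and cardinal exponentiation part ways, as we'll see in a moment.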
The main use of exponentiation is to let us express ever larger ordinals. We added ω to the ordinals as the first transfinite ordinal, and we can talk about many extremely large numbers using ω and multiples of ω. But finite multiples of ω can only get us so far; then we need to start talking in exponents. Exponents are where things get seriously large: ω^ω, ω^(ω^ω), and so on. Infinite exponents of ω open up whole new realms of ever larger numbers.
One thing that consistently screws me up is the difference between cardinal exponentiation and ordinal exponentiation. Given how closely related the ordinals and cardinals are, and given how they're both consistent extensions of the naturals, it seems like they should behave the same way under exponentiation - but they don't. Not even close. In ordinal arithmetic, 2^ω = ω; but in cardinal arithmetic, 2^ℵ0 is the cardinality of the reals - at least ℵ1.
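To keep myself honest, here's the contrast spelled out side by side (standard facts, my own rendering):

```latex
% Ordinal exponentiation: a limit of finite stages, so still countable.
2^{\omega} \;=\; \sup_{n<\omega} 2^{n} \;=\; \omega
% Cardinal exponentiation: the size of the whole function space,
% which Cantor's diagonal argument shows is strictly bigger than aleph_0.
2^{\aleph_0} \;=\; \bigl|\{\, f : \mathbb{N} \to \{0,1\} \,\}\bigr| \;>\; \aleph_0
```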
Once we start playing with ordinal exponents, we can find some interesting large objects using powers of ω, but they're all still limited: using ω, we can't construct any ordinal that describes the set of positions in an uncountable set. ω only gives us the ability to find places in countably infinite sets.
But we can still create some interesting things. For example, there are things called ε-numbers. The ε-numbers are fixed-point limit ordinals of exponent chains; they're the first numbers unreachable from ω by exponentiation. The smallest ε-number is the limit of that chain - the first number larger than anything definable using finite exponents of ω:
The ε-numbers are the set of numbers x such that ω^x = x. ε0, the first ε-number, is also the limit of the tower ω^(ω^(ω^...)), where that stack of exponents has length ω.
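Written out (my own rendering of the standard definition):

```latex
% The first epsilon number: the limit of the omega-tower, and a fixed point of x -> omega^x.
\epsilon_0 \;=\; \sup\{\, \omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \dots \,\},
\qquad \omega^{\epsilon_0} = \epsilon_0
```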
Even the ε-numbers are ordinals of countable sets. In general, we don't really worry about ordinals beyond the ε-numbers, because there are results showing that if a transfinite induction proof covers everything up to ε0, then it will also be true for all of the ordinals, including ordinals for uncountable sets.
I knew that - when talking about ordinals - 2^ω is equal to ω. But I believe it is possible to explicitly state an ordinal whose cardinality is greater than that of the integers.
For example, why is ε0 countable?
Can anyone recommend a good book on this stuff (this and the last few posts)?
Jon: You might want to try Infinity and the Mind by Rudy Rucker as a starting point.
How can that be? From your definition, 2^ω would be "the set of positions in the well-ordering of" the real numbers between 0 and 1, since those reals, written in binary, are exactly the ordered tuples of elements of {0,1} of length ω.
How could there be only ω positions in the well-ordering of ℵ1 elements? These things with infinity are really confusing.
BTW, could you explain if and where the surreal numbers fit into all this?
To the best of my (limited) knowledge, the set exponentiation definition given only applies when β is finite.
milan:
2^ω is the set of *ordered pairs* of finite ordinals. It's *not* the same thing as 2^ℵ0 - it's not the set of positions in a well-ordering of the reals. The difference is that all of the members of 2^ω are pairs of *finite* ordinals. So there's no way of representing the irrationals - which are exactly the numbers we use in Cantor's diagonalization to show the difference between infinities, the thing that allowed us to get into the transfinites in the first place.
As for the surreals: they're a different formulation of numbers - I would actually say a better formulation of numbers. The surreals provide a single uniform construction which easily defines the natural numbers, the ordinal and cardinal numbers (including the transfinites), the rational numbers, and the real numbers.
Using the classic formulation, we start defining numbers using the von Neumann construction of the ordinals as a way of getting the natural numbers. Then we augment that with a sign, so that we have a new construct based on a pair of naturals to get the integers. Then we define a new construct, based on pairs, to get rational numbers from the integers. Then we use a construct based on sets of rationals - Dedekind cuts - to define the reals. It's messy: each step means defining some new construct.
With the surreals, we have the left and right sets - and from that, we get the entire system of numbers. We can define the naturals, or the reals, or the integers, or the rationals, or the ordinals - all by restriction over the surreals.
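If you haven't seen the construction, here's a thumbnail sketch of how the left/right sets generate familiar numbers (just a taste - see Conway's or Knuth's books for the real treatment):

```latex
% A surreal number is a pair {L | R} of previously constructed surreals,
% where every member of L is less than every member of R.
0 = \{\;\mid\;\},\qquad 1 = \{0 \mid\;\},\qquad -1 = \{\;\mid 0\},\qquad
\tfrac{1}{2} = \{0 \mid 1\},\qquad
\omega = \{0, 1, 2, \dots \mid\;\},\qquad
\tfrac{1}{\omega} = \{0 \mid 1, \tfrac{1}{2}, \tfrac{1}{4}, \dots\}
```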
A more elegant construct, sure. [Expressive power is mostly at the basis of elegance, I think.]
The classic formulation demonstrates many constructs that you would use in practice at one point or other, so IMHO I would call it more useful.
But better is in the eye of the beholder. :-)
What you define in the second paragraph is cardinal exponentiation. That's why milan's confused.
In order to get ordinal exponentiation, you need to require that the tuples are almost everywhere 0.
Jon:
Naive Set Theory by Paul R. Halmos is excellent.
#7:
I think that "ordered pairs of finite ordinals" would be ω2?
Maybe I've found my answer: 2ω are strings of 0's and 1's of all finite lengths, but not of infinite length.
That's really confusing, but I'm getting into it. Keep writing, Mark!
Can you give a brief hint what it takes, once the reals are constructed, to construct complex numbers? I'm guessing an ordered pair would do?
Is it as simple to define complex numbers via the surreals as it is to do everything else?
Waterbreath:
The complex numbers are, structurally, just pairs of real numbers. Of course, when we refer to the complex numbers, most of the time we're really talking about the *field* of the complex numbers, which also requires the definition of a set of functions for the basic operations of complex arithmetic.
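A minimal sketch of that pair construction (just an illustration I'm adding here - the class and names are made up):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Complex:
    """A complex number as a pair of reals, with field operations defined on the pairs."""
    re: float
    im: float

    def __add__(self, other):
        # (a, b) + (c, d) = (a + c, b + d)
        return Complex(self.re + other.re, self.im + other.im)

    def __mul__(self, other):
        # (a, b) * (c, d) = (a*c - b*d, a*d + b*c)
        return Complex(self.re * other.re - self.im * other.im,
                       self.re * other.im + self.im * other.re)

# i * i = -1, just from the pair arithmetic:
i = Complex(0.0, 1.0)
print(i * i)   # Complex(re=-1.0, im=0.0)
```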
I'm not sure exactly what you're trying to claim here. Induction up to Γ0, for example, is not equivalent to induction up to ε0. The inequivalence of induction up to various large countable ordinals is, after all, the basis of the reverse mathematics program in the foundations of mathematics.
As to the surreals, there are many different ways of defining them, and the generalised cut definition that Mark refers to is certainly elegant, but it's not the only way. One can define something called the Cantor normal form of an ordinal - essentially a transfinitely long polynomial with natural numbers as coefficients. These admit an alternative algebraic structure where modified forms of addition and multiplication are commutative. One can then extend this to get relative ordinals, where the coefficients are integers, giving you additive inverses of everything. By taking the obvious ordered pair construction one can then get rational ordinals, which also admit multiplicative inverses. It is then possible to construct a notion of Cauchy sequence - don't ask me for details on all this, my German isn't good enough - and define transfinite reals. I've never seen an explicit comment to this effect, although, from memory, Conway appears to hint at this at one point, but as far as I know the surreals are just a compactification of the transfinite reals - all those points at infinity and 1/infinity etc.
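(For reference, the standard statement of Cantor normal form is that every ordinal greater than 0 can be written uniquely as a finite decreasing sum of ω-powers with natural-number coefficients:)

```latex
% Cantor normal form: unique representation of any ordinal alpha > 0.
\alpha \;=\; \omega^{\beta_1}\cdot c_1 \;+\; \omega^{\beta_2}\cdot c_2 \;+\; \cdots \;+\; \omega^{\beta_k}\cdot c_k,
\qquad \beta_1 > \beta_2 > \cdots > \beta_k,\qquad c_1,\dots,c_k \in \mathbb{N}\setminus\{0\}
```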
It's not the case that 2^{\aleph_0} is at least \aleph_1, at least not on the basis of the conventional axioms of set theory. By definition \aleph_1 is the least cardinal greater than \aleph_0. Whether this is smaller or larger than 2^{\aleph_0} is independent of ZFC, so there's no basis for asserting it as a fact.
Robert:
Thanks for clearing that up, this series of posts was becoming very contradictory here.
Antendren:
Um, I don't get why almost everywhere 0 is necessary for well-ordering. Can that be derived from the definition? I'm not sure how to start showing that. Or do you mean it must be inserted into the definition itself? If so, the right question is probably: why?
Though intuitively a thinning-out like that would be the difference from cardinal exponentiation. If cardinal exponentiation (c.e.) is what is defined, I can't see that either. The tuples are ordered, and that wasn't required for the cardinals.
(AFAIK the underlying set can be well-ordered by the well-ordering axiom, and that would presumably mean the tuples can too, but as far as I can see that is not required by the given definition of cardinals. And as I understand it, only the former may be required for making the comparison that a measure of cardinality is based on.)
Robert:
I thought that there is a theorem which states that the cardinality of 2^X is strictly greater than the cardinality of X. Are you saying that this theorem requires ZF(C)?
Re a good book on Set Theory. Try "Set Theory and its Philosophy" by Michael Potter. Potter not only develops set theory very clearly and elegantly but also weaves extensive philosophical and historical discussions in amongst the formal proofs. Reading these posts has prompted me to re-read my own copy. Thanks MarkCC!
The cardinality of 2^X (powerset of X) is indeed greater than the cardinality of X. This shows that 2^{\aleph_0} is larger than \aleph_0, but it says nothing about how it relates to \aleph_1.
Another way to say all this is that there are two hierarchies of cardinalities, the \beth hierarchy, which goes up by exponentials, and the \aleph hierarchy, which goes up by "next highest cardinality". The GCH says that the two hierarchies coincide for all ordinals.
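Spelled out, the two hierarchies look like this (standard definitions):

```latex
% The aleph hierarchy: "next bigger cardinal" at successors.
\aleph_0 = |\mathbb{N}|, \qquad \aleph_{\alpha+1} = \text{the least cardinal} > \aleph_\alpha, \qquad
\aleph_\lambda = \sup_{\alpha<\lambda}\aleph_\alpha \ \text{ for limit } \lambda .
% The beth hierarchy: exponentiation at successors.
\beth_0 = \aleph_0, \qquad \beth_{\alpha+1} = 2^{\beth_\alpha}, \qquad
\beth_\lambda = \sup_{\alpha<\lambda}\beth_\alpha .
% GCH: the two hierarchies agree at every index.
\text{GCH:}\quad \aleph_\alpha = \beth_\alpha \ \text{ for all ordinals } \alpha .
```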
Robert:
I'm not following your objection.
The statement that you objected to was that 2^ℵ0 is greater than or equal to ℵ1. Since we know that 2^ℵ0 > ℵ0, and ℵ1 is the first cardinal greater than ℵ0, how can 2^ℵ0 not be greater than or equal to ℵ1?
What am I missing?
Robert, .mau.: No, 2^{\aleph_0} is at least \aleph_1 in ZFC. Also it is always strictly greater than \aleph_0, even in ZF without the axiom of choice. But without choice it might be that 2^{\aleph_0} and \aleph_1 are not comparable to each other, although they must still both be strictly greater than \aleph_0. At least that's my recollection.
Torbjõrn: The "almost everywhere 0" is necessary to make the ordering as usually defined a well-ordering - if you just picked any well-ordering of the set, you would not get a specific ordinal. The usual ordering is lexicographic, comparing first the largest index where two functions from B to A differ, which might not even be well-defined if they are not almost everywhere 0.
Torbjörn: What I meant when I said that c.e. is what's defined is that the output as stated has the same cardinality as cardinal exponentiation. Using the definition presented in the second paragraph, 2^ω has cardinality the continuum (also it's not an ordinal, as Ørjan pointed out). You need almost everywhere 0 to make the output well-ordered and to make it equal to the inductive definition.
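To make the comparison rule concrete, here's a little Python sketch (just an illustration - the representation and names are my own): an almost-everywhere-0 function is a dict from indices to values, with everything unlisted being 0, and two functions are compared at the largest index where they differ.

```python
def compare(f, g):
    """Compare two almost-everywhere-0 functions (dicts index -> value, default 0)
    by the largest index where they differ - the ordering used for ordinal
    exponentiation. Returns -1, 0, or 1."""
    differing = {i for i in set(f) | set(g) if f.get(i, 0) != g.get(i, 0)}
    if not differing:
        return 0
    top = max(differing)  # well-defined: only finitely many positions are nonzero
    return -1 if f.get(top, 0) < g.get(top, 0) else 1

# In 2^omega (ordinal sense): the function that is 1 only at index 5 beats
# the function that is 1 at indices 0..4, because index 5 dominates.
print(compare({5: 1}, {0: 1, 1: 1, 2: 1, 3: 1, 4: 1}))   # 1
```

In the binary picture that matches the obvious reading: the function supported only at index 5 codes 32, the other one codes 31.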
Mark:
Ah yes, by definition. I forgot.
Ørjan:
Let me see, you want us to compare functions from a domain to a codomain, so I guess you mean the range of the functions will represent the well-ordering over infinite sets. Um, by requiring almost everywhere 0 you can keep the well-ordered function values finite, so you can compare them. I think.
Antendren:
Sounds like you mean preordering a set may not help. Stated as integers instead of the cumbersome set definitions, like some large powers (n ≥ 2) of 2 are bound to be larger than some small powers (n = 1) of 3.
If so, we first have to look at the output of the set operations, and then make the output well-ordered lexicographically. I think.
Ørjan: thanks for the explanation. Since I am not used to working without the Axiom of Choice, it did not occur to me that two cardinal numbers might not be comparable.
[blush] Um, oh, I forgot to thank Ørjan and Antendren too. [/blush]
Best book I've (tried to) read is
Frank Drake - Set Theory: An Introduction to Large Cardinals
It's rather technical, this paper today, but still amazing:
arXiv:0707.1818
Title: Large continuum, oracles
Authors: Saharon Shelah
Subjects: Logic (math.LO)
http://arxiv.org/PS_cache/arxiv/pdf/0707/0707.1818v1.pdf
Our main theorem is about iterated forcing for making the continuum larger than aleph_2. We present a generalization of math.LO/0303294 which is dealing with oracles for random, etc., replacing aleph_1, aleph_2 by lambda, lambda^+ (starting with lambda=lambda^{aleph_1}). Well, instead of properness we demand absolute c.c.c. So we get, e.g. the continuum is lambda^+ but we can get cov(meagre)=lambda. We give some applications. As in math.LO/0303294, it is a "partial" countable support iteration but it is c.c.c.