Why I Won't Make It as a Philosopher

I think I missed this the first time around, but this weekend, I watched the bloggingheads conversation about quantum mechanics between Sean Carroll and David Albert. In it, David makes an extended argument against the Many-Worlds Interpretation of quantum mechanics (starting about 40:00 into the conversation).

The problem is, I can't quite figure out what the problem is supposed to be.

The argument has something to do with a thought experiment in which you take a million particles, prepared in a state such that a measurement of their spin will give an equal probability of measuring "up" or "down." The most likely outcome will be for roughly half of the spins to be up, and roughly half of the spins to be down.

There will be one branch of the wavefunction, though, in which every single spin will be up, and one branch in which every spin will be down. The odds against this are astronomical, as a matter of normal probability, but it can happen, so there must be a part of the wavefunction that describes that hugely unlikely event.
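Just to get a sense of the scale involved, here is a quick Python sketch (the numbers are the ones from the thought experiment; the 100-spin comparison is just for illustration):

```python
from math import comb, log10

N = 1_000_000  # number of spin-1/2 particles, each 50/50 up/down

# Born weight of the single all-up branch is (1/2)^N. The number
# itself underflows a double, so express it as a base-10 exponent.
log10_p_all_up = -N * log10(2)
print(f"P(all up) = 10^{log10_p_all_up:.0f}")  # about 10^-301030

# By contrast, the probability of getting exactly half up with 100
# spins (small enough to compute exactly) is quite respectable:
p_half = comb(100, 50) / 2**100
print(f"P(exactly 50/100 up) = {p_half:.4f}")  # about 0.0796
```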

Albert seems to think that this poses some sort of insuperable problem for the Many-Worlds Interpretation. I can't really figure out why, though.

It's certainly true that any observer seeing such an outcome will be surprised, and for good reason: it's a one-in-2^1,000,000 event. It's also true that this outcome must be represented somewhere in the wavefunction of the universe. Albert seems to take this to mean that it's somehow unreasonable for that observer to be surprised by the result, and that this somehow poses an enormous problem for quantum mechanics.

The thing is, though, this isn't a problem that's specific to quantum mechanics. This is just the infinite-number-of-monkeys problem from regular probability. Given a large enough sample of random events, you will eventually find anything you like-- a large enough number of monkeys banging on typewriters will eventually produce the Skinhead Hamlet. If you flip a coin often enough, you will eventually get a run of a million heads in a row.

It's highly unlikely that any set of a million coin-flips will all come up heads, and you'd be absolutely right to suspect something funny was going on. Even with perfectly fair coins flipped in a perfectly random manner, though, somebody, somewhere is going to see a run of a million heads.
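The coin-flip claim can be sanity-checked in a few lines (a sketch: the run length is kept small so the simulation finishes, and 2**(k+1) - 2 is the standard expected waiting time for a run of k heads with a fair coin):

```python
import random

def flips_until_run(k, rng):
    """Count flips of a fair coin until the first run of k heads."""
    flips = streak = 0
    while streak < k:
        flips += 1
        streak = streak + 1 if rng.random() < 0.5 else 0
    return flips

rng = random.Random(42)
k = 3
trials = 20_000
avg = sum(flips_until_run(k, rng) for _ in range(trials)) / trials

# Theory: the expected wait for a run of k heads is 2**(k+1) - 2
# (14 flips for k = 3). A run of a million heads does happen
# "eventually" -- but only after on the order of 2**1000001 flips.
print(f"simulated {avg:.1f} flips vs. exact {2**(k + 1) - 2}")
```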

So I really don't see how this is a killer argument against Many-Worlds. Or, more precisely, I don't see how it's a killer argument against Many-Worlds specifically, as opposed to a killer argument against probability theory generally. But then, down that road lies madness, in the Zeno's Paradox sort of vein, in which you manage to philosophize yourself into believing that things that are manifestly true can't possibly be true (happily, you'll never make it all the way to the end of the road, because first you have to get halfway to madness, and then half of the remaining half, and...).

There are interesting questions to be asked about probability in Many-Worlds-- Sean brings up the most obvious, namely, "How do you recover the Born rule for probabilities from a system in which all outcomes happen somewhere?" That doesn't seem to be what David is talking about, though, because he declines to follow up on that aspect when Sean brings it up. Instead, he goes off on this weird thing about people being surprised by improbable events, and I'm not sure why.

I hope there's something deep and subtle going on here, but I honestly don't see what it's supposed to be. Which is probably why I'm an experimentalist, not a theorist or a philosopher.


It's quite simple, really. If a million quarters all come up heads, there will be one world in which the observer is surprised, and another in which he isn't. (Or an infinite number of each. Who knows?)

I'd like to conduct some tests. Anyone got a million quarters I could borrow?

Rt

By Roadtripper (not verified) on 19 Jan 2009 #permalink

I think David Albert's question is pretty reasonable; if I understand it correctly, it is just a basic question regarding the operational meaning of probability in the MWI.

To paraphrase: if you believe the MWI, then quantum mechanics makes predictions for a probability distribution over all worlds (rather than, for example, a probability distribution for repeated identical experiments in one universe). The question is what the operational meaning of such a prediction is for any one observer, who does not have access to that global probability distribution.

Albert argues, I think, that in effect QM makes no predictions at all for any individual world; it just makes inherently unobservable predictions. For example, if your theory predicts something which you don't observe, you are always in a position to say you are in the tail of the distribution. There is no operational meaning (in a single world) to the statement that this is unlikely. This is in contrast, for example, to probability as defined by repeated experiments -- there you can quantify what you mean by an unlikely event.

The problem has been empirically evaluated. Every refracted photon in principle takes every path weighted by its probability. Minimum action optical paths are simply summed probabilities; bad actors would appear as off-axis optical aberrations. Very large multiples of 10^26-photon samplings are not untoward (the sun is unremarkably imaged). Camera lenses don't contradict the universe. (At worst they get you arrested if she was under 18.)

I'm not sure if this is exactly relevant to the discussion in the video (I confess, I did not watch it), but it is interesting to contemplate life in some of those "unlikely worlds." In some world or worlds, every coin flip always comes up heads. If we were in that world, our understanding of the probability distribution would be horribly flawed. Extending the thought, why should we assume that the true probability distribution for quantum events is what we observe, and not that we might simply be living in a highly unusual world?

Chad Orzel - those were my thoughts *exactly* as I watched that diavlog, except I also yelled at the screen a bit because it seemed like he was just being obtuse :)

Albert argues, I think, that in effect QM makes no predictions at all for any individual world; it just makes inherently unobservable predictions. For example, if your theory predicts something which you don't observe, you are always in a position to say you are in the tail of the distribution. There is no operational meaning (in a single world) to the statement that this is unlikely. This is in contrast, for example, to probability as defined by repeated experiments -- there you can quantify what you mean by an unlikely event.

But again, I don't see why this is a problem for Many-Worlds specifically, as opposed to a problem for any probabilistic theory. It's always possible to be way out in the tail of the distribution, even when you're dealing with classical objects. I don't see how "Many-Worlds predicts that there is some branch of the wavefunction in which wildly improbable things happen" is any different from "Probability theory predicts that a million monkeys banging on typewriters will eventually produce the collected works of Shakespeare."

I think the issue is not the existence of unlikely events; I don't think anybody is particularly impressed with that basic fact about QM. I think the issue is understanding precisely in what sense those events are unlikely, or what precisely is meant by probability in the MWI: what does the statement that some measurement is unlikely mean, operationally, if you advocate the MWI?

To give an example: for the monkey experiment you can say (for example) that when repeating the experiment many times, most times you will not get the collected works of Shakespeare. This, then, is the precise sense in which the event is unlikely (and thus you have a right to be surprised if it happens).

But this kind of thing does not work if your probability is not for the distribution of results in repeated experiments, but for the distribution of results among different potential "worlds", of which you have access to only one. (Incidentally, to my mind, at this point any probabilistic theory can have different "worlds" realizing the different possibilities -- except that when trying to understand what that precisely means, you'd have to struggle, among other things, with the issues Albert is discussing.)

So, what Albert is struggling with when discussing the MWI is defining precisely in what sense measurements assigned small probabilities are "unlikely", or even what the probability of anything means in the MWI, given that we only have access to one world, the one in which the measurement (whatever it is) has occurred.

Go back to Schrodinger's cat. But instead of making it 50-50 that the cat lives or dies, make its death the macro consequence of some very unlikely quantum event. In the many worlds interpretation (in this case, though, just two worlds) we don't know which world we're in until we open the box. We expect that we're in the most likely world, but there's a remote possibility we'll find that we're really in a very unlikely world: when we open the box, we'll find the cat dead.

Operationally, though, we still pick up cat food on the way home.

Let me also add why I think this issue is deadly for the MWI, and then I'll shut up.

We can play the game of defining the probability distributions of QM to be an aspect of some fictional many-world, which we have no access to. Be that as it may, this is NOT why we think QM is correct. Whatever reason we have to believe in the correctness of QM has to do with its accuracy in predicting some aspects of measurements performed in our world, which we may call their "likelihood" or "probability". So in order to understand QM we need to concentrate on that aspect of our measurements which make us confident that QM is correct. Any other definition of "likelihood" is a cop out, it just doesn't answer any interesting question.

Go back to Schrodinger's cat. But instead of making it 50-50 that the cat lives or dies, make its death the macro consequence of some very unlikely quantum event. In the many worlds interpretation (in this case, though, just two worlds) we don't know which world we're in until we open the box. We expect that we're in the most likely world, but there's a remote possibility we'll find that we're really in a very unlikely world: when we open the box, we'll find the cat dead.

This is one of the reasons why I've come around to thinking that the whole "worlds" frame is dangerously misleading. Talking about this in terms of different "universes" creates a false impression of equality among the "universes."

In reality, what you have is a wavefunction with two branches corresponding to the two different outcomes, and one of those branches has a larger amplitude than the other. That larger amplitude means a higher probability of that outcome, according to the Born rule. The two "worlds" are not created equal.

We can play the game of defining the probability distributions of QM to be an aspect of some fictional many-world, which we have no access to. Be that as it may, this is NOT why we think QM is correct. Whatever reason we have to believe in the correctness of QM has to do with its accuracy in predicting some aspects of measurements performed in our world, which we may call their "likelihood" or "probability". So in order to understand QM we need to concentrate on that aspect of our measurements which make us confident that QM is correct. Any other definition of "likelihood" is a cop out, it just doesn't answer any interesting question.

I still don't see how this is a problem.

I agree that Many-Worlds doesn't give a concrete prediction for a single non-repeatable event. No interpretation of quantum mechanics does, or even can.

The only way to establish probability is to repeat a measurement many times with identically prepared systems, and track all the outcomes. Quantum mechanics gives us predictions of the probabilities of events, and as a general matter, we tend to find that repeated measurements have outcomes whose distribution closely matches the predictions. Thus, we have some confidence that the theory is correct.

It's true that there are some branches of the wavefunction in which repeated experiments continually yield results that the theory predicts to be highly improbable (which contain a minuscule fraction of the total amplitude). The physicists in those branches of the wavefunction no doubt begin to question the validity of the theory. They may even decide to chuck the whole science thing, and herd goats for a living.

The existence of those branches doesn't invalidate the theory as a whole, though. The theory makes predictions that agree well with the results of repeated measurements over a huge range of experiments. That looks like a successful theory to me.
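The frequency check described above looks like this in miniature (a toy sketch; the 70/30 state and the sample size are made up for illustration):

```python
import random

# Toy Born-rule check: a qubit prepared so that |amplitude(up)|^2 = 0.7
# should read "up" in about 70% of repeated identical measurements.
p_up = 0.7
rng = random.Random(0)
n = 100_000

ups = sum(rng.random() < p_up for _ in range(n))
freq = ups / n

# The observed relative frequency closely tracks the predicted
# probability -- which is the whole empirical case for the theory.
print(f"predicted {p_up}, observed {freq:.3f}")
```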

Chad, if I understand Albert correctly, and in any event just speaking for myself, the line of reasoning is not doubting QM as a correct theory, but using that fact to raise doubts against the MWI. The argument is basically that probabilities as defined by the MWI are not observable by any single observer, and that in any event, they are not related to the probabilities we measure when we test QM. Let me unpack:

The MWI is a particular interpretation of the probabilities that occur in QM. I think my favorite interpretation of those probabilities is closer to the one you express, the relative frequencies in a series of repeated experiments. This is NOT the interpretation given to the probabilities in the MWI; with the MWI interpretation, repeating experiments gives you no further information.

Furthermore, as in above, the reason we think QM is correct is because it works for probabilities as defined operationally by the frequency-based approach. Any other interpretation we give to the word "probability" is then necessarily divorced from the way we interpret QM in practice.

The MWI is a particular interpretation of the probabilities that occur in QM. I think my favorite interpretation of those probabilities is closer to the one you express, the relative frequencies in a series of repeated experiments. This is NOT the interpretation given to the probabilities in the MWI; with the MWI interpretation, repeating experiments gives you no further information.

I'm with you right up to that last clause. In what sense does repeating experiments not give you further information? Repeating experiments is the only thing that gives you any information at all.

Chad writes:

Quantum mechanics gives us predictions of the probabilities of events, and as a general matter, we tend to find that repeated measurements have outcomes whose distribution closely matches the predictions. Thus, we have some confidence that the theory is correct.

The problem here (and I agree it is as much a problem for classical probabilities as it is for quantum mechanics) is that probabilities are not observables; only relative frequencies are. The theory does not predict that these two numbers will be the same, so measuring relative frequencies does not directly test the theory. The theory only predicts that the probability of the relative frequency differing significantly from the probability is very small (for an experiment repeated a large number of times).

So when we reject or accept a probabilistic theory on the basis of experiment, we are doing something slightly ad hoc and irrational. We arbitrarily pick a cut-off C, and say that if the predicted probabilities match the relative frequencies to within a margin of error C, then we accept the theory, and otherwise we reject it. But that's not a purely logical conclusion, since we could have chosen a stricter cut-off or a less strict cut-off, and we would have reached a different conclusion.

By Daryl McCullough (not verified) on 19 Jan 2009 #permalink
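Daryl's arbitrary cut-off can be made concrete with an exact binomial tail sum (a sketch; the numbers n, eps, and C are all made up, which is exactly his point):

```python
from math import comb

def tail_prob(n, p, eps):
    """P(|observed frequency - p| >= eps) for n independent trials,
    computed as an exact binomial sum."""
    lo, hi = p - eps, p + eps
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1)
               if not (lo < k / n < hi))

n, p, eps = 1000, 0.5, 0.05
C = 0.01  # an arbitrary rejection cut-off -- a different C, a different verdict

p_deviate = tail_prob(n, p, eps)
print(f"P(deviation >= {eps}) = {p_deviate:.4f}; "
      f"reject the theory only if that lands below C = {C}")
```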

Well, suppose you got the wrong measurement result a thousand times, and you believe absolutely, with no doubt, in the correctness of QM and the MWI. Then you'd say that you are in an incredibly unlikely universe. Repeat the experiment one more time and get an unlikely result again: what would you say now?

Said differently, with the MWI you are always in a situation of making one experiment, which could be simpler (consisting of 1000 measurements) or slightly more complex (consisting of 1001 measurements). In both cases you are a member of a single universe, which can be incredibly unrepresentative of the whole distribution, if you insist on interpreting the distribution according to the MWI.

The only thing changing when you get the wrong result 1001 times is the number you assign to your feeling of being incredibly unrepresentative. That number has no operational meaning in your one universe.

(BTW, I think at this stage I am only speaking for myself).

Moshe,

Whether you adopt MWI or not, it's still the case that probabilities are not observable, so there is a mismatch between what is predicted (probabilities) and what is measured (relative frequencies for repeated experiments). So QM is not predicting anything directly measurable. I don't see how the MWI changes anything.

By Daryl McCullough (not verified) on 19 Jan 2009 #permalink

I don't pretend to have a good philosophical theory of probability; I am just putting what Albert said in that context. Furthermore, I am expressing my prejudice that the MWI does not even address most of the issues you'd be interested in. Namely: we all agree that QM is a successful theory because it correctly predicts probabilities of measurements, in some imprecise sense. Making this vague intuition precise is an interesting philosophical project. Inventing some fictional quantity, which we have never seen and never will, and discussing that quantity instead of what we do when we perform measurements in our one world, will not get us anywhere.

The problem here (and I agree it is as much a problem for classical probabilities as it is for quantum mechanics) is that probabilities are not observables; only relative frequencies are. The theory does not predict that these two numbers will be the same, so measuring relative frequencies does not directly test the theory. The theory only predicts that the probability of the relative frequency differing significantly from the probability is very small (for an experiment repeated a large number of times).

So when we reject or accept a probabilistic theory on the basis of experiment, we are doing something slightly ad hoc and irrational.

Sure.
But doing anything else goes down the road to Zeno Land, where you starve to death in an apartment full of food because you believe it's impossible to reach the kitchen. You've got to pick some definition of "agreement with theory," or else you can't get anything done.

Logical consistency is all well and good, but it's no substitute for results.

The only thing changing when you get the wrong result 1001 times is the number you assign to your feeling of being incredibly unrepresentative. That number has no operational meaning in your one universe.

I still don't see how this is a problem specific to Many-Worlds, and not quantum theory generally. What distinguishes the Many-Worlds case from a bad run of luck in the collapse interpretation of your choice? Either way, you only perceive one world, a world in which the results of your experiments stubbornly refuse to agree with a naive expectation of what should happen.

But you never say that you had a bad run of luck, in practice. If you get the same result 1000 times, you just say this is the experimental result, and toss any theory that predicts otherwise. This is the process whose success we are trying to understand and formalize.

I think therefore that taking incredibly small probabilities too seriously is missing the point. As a statement about observations, the idea of probability is an approximation, which only makes sense when combined with the sort of cutoff Daryl mentions. This is easier to achieve in interpretations other than the MWI, since the MWI forces you to take the tail of the distribution quite literally.

Wow, this old chestnut again!

I remember the advertisements for this video blog, when? - eight months ago? - with requests for issues to talk about. I have to say, I found it quite disappointing. I know it was just a blog, not a lecture, and it didn't need to prove anything, but the marketing did promise a great deal (but isn't that always the case...)

Clearly QM works. Clearly it is probabilistic on an operational level (i.e. repetition of the same experiment over and over again produces results with the relative frequencies predicted). The predictions of what is possible work (despite being mathematically dubious for anything much beyond the harmonic oscillator). Nobody has any idea what is going on 'behind the scenes' (although some speak as though they did...).

The vast majority of practicing quantum mechanics don't care, because it works -- and why should they?

I am in complete agreement with you, Chad. He seems to be saying that MWI is basically unfalsifiable because it predicts (nearly) all outcomes. There are two problems with this argument:

1) The argument works just as well when we apply it to regular probability theory or regular QM. If I have a theory that says the probability of a coin-flip being heads is 0.5, then this theory cannot be falsified whether I get heads or tails. Even if I somehow flipped a million heads in a row, this should not be surprising, because my theory predicts that this must happen eventually if I keep on flipping coins. Therefore, probabilities are unfalsifiable non-science, Achilles will never pass the tortoise, the Babel fish disproves God, etc. etc.

2) Let's say we have two competing theories, one of which is MWI and the other which says all particles will be spin up. If we find that all the particles are spin up, of course this will be evidence against the former and in favor of the latter! This is just a simple application of Bayes' theorem. You just have to accept that there exist a small number of worlds in which we draw the wrong conclusions because we got unlikely results. In fact, I understand that this has happened many times in this world in the history of science. So?
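The Bayes'-theorem update described above takes only a few lines (a toy sketch; the two theories, the priors, and the 20-measurement run are all made up):

```python
# Two toy theories for a single spin measurement:
#   T1: P(up) = 0.5 (the quantum prediction)
#   T2: P(up) = 1.0 (the "all particles are spin up" theory)
# Observing a run of ups shifts belief toward T2 by Bayes' theorem.
prior_t1, prior_t2 = 0.5, 0.5
n = 20  # observe 20 ups in a row

like_t1 = 0.5**n   # likelihood of the data under T1
like_t2 = 1.0**n   # likelihood of the data under T2

post_t1 = prior_t1 * like_t1 / (prior_t1 * like_t1 + prior_t2 * like_t2)
print(f"P(T1 | 20 ups in a row) = {post_t1:.2e}")  # about 9.5e-07
```

The improbable run really is evidence against the probabilistic theory; it just isn't a logical refutation of it.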

Moshe:

I suspect that we're hitting the point of mutual incomprehension, here, so I'll stop pushing it. I did want to say thanks, though, because I can see that what you're saying is the same as what Albert is trying to argue in the dialogue. I don't agree that it's a problem, but talking about it here has made it a little more clear to me what he was trying to get at.

Yeah, there is only so much you can achieve discussing things this way, but it is still fun. I have to say in this particular case I succeeded in confusing myself more than anything else, I'm not really sure what Albert is saying any more.... Maybe I'll sit down sometime and try to write down for myself all the reasons I distrust the MWI, could lead to a more coherent discussion.

Let me take a shot at addressing miller's concern (which, I think, is identical to Chad's):

1) The argument works just as well when we apply it to regular probability theory or regular QM. If I have a theory that says the probability of a coin-flip being heads is 0.5, then this theory cannot be falsified whether I get heads or tails. Even if I somehow flipped a million heads in a row, this should not be surprising, because my theory predicts that this must happen eventually if I keep on flipping coins. Therefore, probabilities are unfalsifiable non-science, Achilles will never pass the tortoise, the Babel fish disproves God, etc. etc.

I think there is a difference in premises here. Albert says that it would be surprising if it happened in the coin-flipping scenario, but not if it happens under the MWI (you seem to be saying that it would not be surprising even in the coin-flipping scenario). In the former scenario, we have a frequentist notion of what it means for flipping a million heads to be 'improbable' --- it's just something that rarely happens in practice. So we could say, based on our past observations of coin-flips in this world, it is highly improbable to flip so many heads in a row.

How is this different in MWI? Under MWI, every time you try to measure the spins of one million particles, there is always one 'you' who finds them all up. The thought is that, for that one 'you' who repeatedly, at each branching of worlds, finds them all up, MWI implies that it is not surprising to keep finding a million spins up. For, as a believer in MWI, you know that there is always one of 'you' who will get this result; that there is one such 'you' is a phenomenon you would expect with probability 1. (This is not the case in the one-world coin-flipping scenario; at each flip, your expectation that a version of 'you' will get all heads for that particular sequence of flips is almost zero.) This 'it always happens' feature of MWI, I think, is the root of Albert's claim that there is a problem about not being 'surprised' by such improbable measurement events in MWI. miller notes that even in one world, it 'always happens' eventually, but this does not address the point: if you have just one world, your expectation that the rare outcome will occur is low every time you do the experiment, but under MWI, the rare outcome occurs with probability 1 every time you do the experiment.

One obvious way to get around the 'it always happens' argument is to claim that your 'surprise' is legitimately based on the fact that there are many more 'you's in the spin measurement scenario who do not find all spins up or all spins down, and hence one should be surprised to be the particular person who measures all spins up. But Albert objects to this because he doesn't think it's meaningful to speak of "the probability that I will be the 'me' who measures all spins up". As an intuition pump, he uses the example of the amoeba splitting into two, and asks if it's meaningful to speak of "the probability that the unsplit amoeba will be the amoeba that is on the left after the split". Now, if you disagree with Albert's claim that such a notion is not meaningful, then indeed you would not find it problematic to talk about "the probability that I will be the 'me' who measures all spins up". I haven't thought too deeply about why one should think such a notion meaningful (or not).

Hope that clarifies things a little.

This 'it always happens' feature of MWI, I think, is the root of Albert's claim that there is a problem about not being 'surprised' by such improbable measurement events in MWI. miller notes that even in one world, it 'always happens' eventually, but this does not address the point: if you have just one world, your expectation that the rare outcome will occur is low every time you do the experiment, but under MWI, the rare outcome occurs with probability 1 every time you do the experiment.

See, this is why I've started to think that the usual practice of describing the different branches of the wavefunction as "separate universes" is bad. It creates a false impression of parity between the possible measurement results.

What you've really got is a very large wavefunction with different amplitudes in branches corresponding to the different possible outcomes. The less probable outcomes have smaller amplitudes, which correspond to the probabilities of those outcomes. They're not really equal "worlds."
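The unequal weighting is easy to make concrete (a toy example: ten spins with a made-up Born weight of 0.9 for "up"):

```python
from math import comb

# Ten spins, each "up" with Born weight 0.9. Counting branches treats
# every outcome sequence alike; weighting by amplitude does not.
N, p = 10, 0.9
weights = {}
for ups in (10, 5, 0):
    n_branches = comb(N, ups)  # how many "worlds" have this many ups
    weights[ups] = n_branches * p**ups * (1 - p)**(N - ups)
    print(f"{ups:2d} ups: {n_branches:3d} branches, "
          f"total Born weight {weights[ups]:.2e}")
```

The all-down outcome exists as a branch, but it carries ten orders of magnitude less weight than the all-up one; branch-counting alone hides that asymmetry.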

This sort of argument fails for me, because it attributes too high a degree of reality to the "worlds" of the interpretation. There is no operational difference between finding yourself in the deeply improbable branch of the wavefunction and finding yourself in a single universe whose wavefunction collapses into the deeply improbable state. The only reason to think that there is a difference is a sort of semantic confusion brought on by taking the idea of the different branches as "separate universes" too literally.

"What you've really got is a very large wavefunction with different amplitudes in branches corresponding to the different possible outcomes. The less probable outcomes have smaller amplitudes, which correspond to the probabilities of those outcomes. They're not really equal "worlds.""

This is also how I've always understood MWI, and with this understanding there is no real difference in how "surprised" you should be with the MWI interpretation vs any collapsed wave function interpretation.

What I don't know is what is the actual difference supposed to be, apart from semantics, between the MWI and collapsed wavefunction interpretations.

The surprise that the MWI-adherent, let's call her Molly, encounters is not that the coin landed heads a million times, but that this particular instantiation of Molly was the one to have observed it. It's like a raffle--no one expects to win a raffle, and no one should be surprised that *someone* will win the raffle, but any individual should be surprised that he/she was the one out of so many to have won.

"no one should be surprised that *someone* will win the raffle, but any individual should be surprised that he/she was the one out of so many to have won." An odd statement, in terms of philosophy and biology. Every naturally conceived human individual emerged from a sperm outswimming millions of others to be the first to penetrate an ovum. All of us are unlikely. Each of us has at least one point mutation subsequent to that conception. The "observer" in QM and the "observer" in SR are not modeled in the physical theory, albeit presumed to conform to physical theory at the quantum and bulk levels. We are all winners. Thereafter, it matters not what cards we were dealt, but how we play them.

Probability theory predicts that a million monkeys banging on typewriters will eventually produce the collected works of Shakespeare.

Actually, this is incorrect, as follows:

Let U be the set of k-ary sequence of n characters, and for all sequences A of m

By Avi Steiner (not verified) on 21 Jan 2009 #permalink

I note that Avi Steiner's proof contains a devious yet clever proof of the following corollary:

Probability theory predicts that 10^200000 monkeys banging on typewriters will eventually produce the collected works of Shakespeare.

I have a degree in probability and say that you are correct. It is not an argument against many-worlds.

Now consider that infinite monkeys argument. I never liked that one -- you can't get monkeys to type enough letters -- so let's do something realistic. Come up with some reasonable encoding that translates numbers to letters. Apply that encoding to the decimal digits of pi. With probability one you will find the Complete Works of Shakespeare in there somewhere, letter perfect.

But that is not all. With probability one it will be in there infinitely many times. I figured this out and checked it with my professors to make sure, and they nodded yes.

This is true of ANY finite subset: it will occur infinitely many times. A zillion to the zillionth power zeros in a row in the decimal expansion of pi? Probability one. Make the subset as large as you like, pile powers on top of powers, it makes no difference. As long as the subset is finite, it will be in there infinitely many times with probability one. Once you have accepted that pi is an infinite sequence of random digits, that is the result you get.
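Under the assumption that the digits behave like independent uniform draws (which is unproven for pi), the chance that a fixed digit string appears somewhere can be sketched (a crude approximation that ignores overlapping matches):

```python
def hit_prob(n, N):
    """Approximate probability that a fixed n-digit string appears
    somewhere in N independent uniform random digits."""
    return 1 - (1 - 10**-n) ** (N - n + 1)

# A fixed 6-digit string within the first million digits:
print(f"{hit_prob(6, 1_000_000):.4f}")  # about 0.632

# Any fixed-length string becomes certain as N grows, which is the
# "probability one" claim for random-looking (normal) sequences.
print(f"{hit_prob(6, 100_000_000):.6f}")
```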

But you say, pi isn't random, it's deterministic. I can tell you that "random" simply means "unpredictable." As you watch the digits go by, you can't predict them. If someone else has a printout of the digits of pi then he can predict the digits. It is random to you and not to him. There is no contradiction in this. Randomness is purely subjective.

Here's another one. Try to choose an integer at random, with every integer having equal probability of being chosen. You can't. The probability that you will even be able to denote the integer is zero: there is a largest number you can write down by any finite means, huge though it may be, and with probability one the integer you chose at random is greater than it. You can't denote it by any finite means, and finite means are all you have. So how can you say you know what it is? You don't.
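A related way to see why no uniform distribution on all the integers can exist: a hypothetical bit of arithmetic (not from the comment) showing that the per-integer probability of a uniform distribution on {1, ..., N} shrinks to zero as N grows, while the probabilities must still sum to 1.

```python
# Uniform on {1, ..., N}: each integer gets probability 1/N.
# The total is always 1, but the per-integer probability goes to
# zero -- so a limiting "uniform distribution on all integers"
# would assign zero to every integer, and zeros cannot sum to 1.
for N in (10, 1_000, 1_000_000, 1_000_000_000):
    p = 1.0 / N
    print(f"N = {N:>13,}: per-integer p = {p:.1e}, total = {p * N:.1f}")
```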

This is very much the same as a controversial issue back in the old days. It is called the Axiom of Choice. You assume that it is possible to make infinitely many arbitrary choices from an infinite set. It looks innocent enough, but gives rise to the Banach-Tarski paradox, which says that it is possible to decompose ("carve up") the 3-dimensional solid unit ball into finitely many pieces and, using only rotations and translations, reassemble the pieces into two solid balls each with the same volume as the original. This is pretty silly, which is the point. The "balls" are sets of real numbers, and the uncountable infinities involved have these weird properties. So one could conclude that the whole "real number" idea is an unrealistic fantasy, albeit a useful one. Real balls are not made of real numbers.

One can also come up with a completely consistent mathematics by assuming the Axiom of Choice is FALSE! The results are different. It is up to the user: take it as true, or take its negation as true. Either way works.

So getting back to many worlds, they have to describe the sets and operations they are using in more detail before one can apply any of this classic math.

-----

A concept that most people have trouble with is probability zero. It is not the same as impossibility. The classic example is the probability of a randomly chosen real number being an integer: the probability is zero, but it isn't impossible. If you flip an infinite number of coins, the probability of ANY specific result is zero. Now let's look at this in the language of measure theory. Any specific result is an infinite sequence of H's and T's, and the measure space is the set of all such infinite sequences. Wrap any one result in a set with a single member so it can be treated as a subset: any singleton set is of measure zero, therefore probability zero. Any finite subset is of measure zero, therefore probability zero.
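You can watch probability zero emerge as a limit. A small illustrative sketch (mine, not the commenter's): the probability of one particular sequence of n fair flips is (1/2)^n, positive for every finite n but shrinking toward the measure-zero singleton of the infinite-flip case.

```python
from fractions import Fraction

# P(one specific sequence of n fair coin flips) = (1/2)**n.
# Positive for every finite n, but heading to zero; in the
# infinite-flip limit every individual outcome has probability
# zero -- yet one of them certainly occurs.
for n in (1, 10, 100):
    p = Fraction(1, 2) ** n
    print(f"n = {n:>3}: P(specific sequence) = 1/{p.denominator}")
```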

This is all simple; it is just a matter of getting used to the notation and a different way of thinking about probability. The point of all this is to get the element of time out of it. Advanced mathematics is allergic to the idea that anything can change: X is always equal to X, and this computer-programming stuff of X = X + 1 is not allowed. This was done to make things easier to deal with. The idea of probability as a process is very hard to handle; sets and subsets that never change are the way to think about it.

Assuming you have absorbed that, the set of prime numbers has density zero among the natural numbers. (There is no uniform probability measure on the naturals, for the reason above, but natural density plays the same role.) This is a fancy way of saying, "If you could choose a natural number at random, which you can't, then the chance it would be prime is zero." In case you are growing worried that all subsets have density zero, the set of odd integers has density 1/2, and the set of integers divisible by ten has density 1/10. We don't try to prove this from some prior measure; we just define it that way and then measure other sets by working up from that. Maybe you get the idea.
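These densities are easy to check empirically at a finite cutoff. A minimal sketch, with N = 100,000 as a hypothetical stand-in for the limit (the helper `is_prime` and the cutoff are my choices, not the commenter's):

```python
def is_prime(m):
    """Trial division; fine for small m."""
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

N = 100_000  # finite cutoff standing in for the limiting density

def density(pred):
    """Fraction of {1, ..., N} satisfying pred."""
    return sum(1 for m in range(1, N + 1) if pred(m)) / N

print(f"odd numbers:     {density(lambda m: m % 2 == 1):.4f}")   # 0.5000
print(f"multiples of 10: {density(lambda m: m % 10 == 0):.4f}")  # 0.1000
print(f"primes:          {density(is_prime):.4f}")  # shrinks toward 0 as N grows
```

The prime density at this cutoff is still around 0.1, but it keeps falling (roughly like 1/ln N) as the cutoff grows, heading to the zero density the comment describes.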

So Albert seems to be bothered by sets of measure zero. Tough. There they are. They aren't going to go away. Deal with it. Let me reassure him, though: he will never find himself in a set of measure zero. It is practically impossible, and physics is an applied science, so it is all about practicality. He is worrying about something of no practical significance. It is like those people who are concerned about whether 0.9999... is equal to one. Close enough for physics.

By Patrick Powers (not verified) on 15 Sep 2011 #permalink