This is the last week of the academic term here, so I’ve been crazy busy, which is my excuse for letting things slip. I did want to get back to something raised in the comments to the Born rule post. It’s kind of gotten buried in a bunch of other stuff, so I’ll promote it to a full post rather than a comment.

The key exchange starts with Matt Leifer at #6:

The argument is about why we should use the usual methods of statistics in a many-worlds scenario, e.g. counting relative frequencies to estimate what probabilities we should assign in the future. It is not simply about whether we can find a mathematical formula that obeys the axioms of probability, which we can clearly do just by postulating the Born rule, but rather it is about why observers in the multiverse should have any reason to care about it. Isn’t it obvious that this isn’t obvious?

I responded, probably a little too snippily (but see my earlier remarks about being crazy busy):

I guess I’m just a committed empiricist, but given that my experience of the world involves seeing a series of measurements with well-defined outcomes whose probabilities measured over many repeated experiments give values that match the Born rule, then I think I have ample reason to care about the Born rule.

I realize that, in some other branch of the wavefunction, there is some other version of “me” who saw different outcomes to specific measurements. And even a version of “me” who lives in some “Rosencrantz and Guildenstern Are Dead” branch of the wavefunction in which the repeated sequence of measurements gives results that don’t match the Born rule, or make any sense at all. What’s not obvious to me is why I should care about what they see. Given that they’re in other branches of the wavefunction that are inaccessible to me, the fact that they see something bizarre does nothing to reduce the utility of the Born rule in my little corner of the wavefunction.

Moshe’s comment at #12, though, re-frames this nicely:

I think Matt and yourself are saying the same thing. If you allow me to paraphrase – the Born rule (with frequency based probabilities) is in practice the basis for any empirical test of QM, if the MWI does not reproduce this part of QM then it is simply incorrect. The burden of proof, once you allow things like the “Rosencrantz and Guildenstern Are Dead” branch of the wavefunction, is to explain why the world around us looks nothing like that. In other words, why committed empiricists are almost always right in making deductions based on making repeated observations and looking at the probabilities of possible outcomes.

As you noted in a later comment, we went round and round about this a while back, with no real conclusion. As I said, though, I was probably too short with my reply to Matt, so I’ll use this opportunity to rephrase myself a little.

As in the previous discussion, I think the point where this breaks down, for me, is that I don’t see a distinction between the existence of highly improbable components of the wavefunction in which measurement outcomes defy probability, and the fact that probabilistic systems will necessarily include long runs that “look” like they follow very different statistics.

It’s sort of interesting, I think, that this comes in the same week as the latest Fermilab rumor, about which Sean wrote: “The real reason to be patient rather than excited by the bump at 150 GeV was that it was a 3-sigma effect, in a game where most 3-sigma effects go away.”

On the face of it, that’s kind of a ludicrous statement– if the statistics have any meaning, it can’t be the case that “most” 3-sigma effects go away. A 3-sigma effect should be a statistical fluke well under 1% of the time (around 0.3% for a Gaussian), a far cry from “most.” The vast numbers of measurements involved in particle physics, though, mean that over many years you can accumulate a non-trivial number of cases where a 3-sigma measurement was wrong. That’s why the standard for a claimed detection is much higher: ordinary statistical fluctuations can and do produce situations where a less-than-1% chance of being wrong about a detection comes through. They can, in fact, produce enough of them that physicists become jaded about it.
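Just to put numbers on this, here’s a quick back-of-the-envelope sketch in Python (my own illustration, not anything from the actual Fermilab analysis): the chance of a single 3-sigma Gaussian fluctuation is tiny, but the chance of seeing *at least one* somewhere among many independent measurements is not.

```python
import math

# One-sided probability of a Gaussian fluctuation at or above
# 3 sigma: about 0.13%.
p_3sigma = 0.5 * math.erfc(3 / math.sqrt(2))
print(f"P(>3 sigma) = {p_3sigma:.5f}")  # about 0.00135

# If experiments collectively look at N independent places where
# a bump could appear, the odds of at least one 3-sigma fluke
# somewhere grow quickly.
for n in [1, 100, 1000]:
    p_any = 1 - (1 - p_3sigma) ** n
    print(f"N = {n:5d}: P(at least one 3-sigma fluke) = {p_any:.3f}")
```

With a thousand independent chances, the probability of at least one 3-sigma fluke somewhere comes out around 74%– which is the sense in which “most” 3-sigma bumps that get talked about can be expected to go away.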

(There are people, for the record, who take the position that things like the routine evaporation of 3-sigma results show that we really aren’t handling the statistics properly, and argue for a completely different approach to the estimation of data uncertainties. I heard a long discussion of this from someone at MIT many years ago, but I didn’t follow it well enough to be able to reproduce his alternative method, or even Google up a good discussion of it.)

Given that sort of thing, I don’t see how you can really separate improbable branches in the Many-Worlds Interpretation from cases that really are governed by some underlying probability, but just hit a long run that “looks” like it follows some different rule.

If you look at something like repeated measurements of a 50/50 system– a whole slew of identically prepared spin-1/2 particles, say, or a large number of tosses of an ideal coin– Many-Worlds says that there is some branch of the wavefunction in which you get the same answer 1000 times in a row. And if your measurement came up “tails” or “spin-down” 1000 times in a row, you would be pretty surprised.

But the probability of that happening purely by chance isn’t zero. It’s not very good– around 1 in 10^{301}– but if you flip coins or measure spins long enough, you *will* eventually get a run of 1000 in a row. Or 10,000, or 1,000,000, or whatever ridiculously large number you would like your *absurdum* to reduce to.
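The arithmetic here is easy to check, and a short simulation (my own toy example, nothing more) makes the flip side of the point: freak runs really do show up in any sufficiently long sequence of fair flips.

```python
import random

# Exact probability of 1000 identical outcomes in a row for a
# fair coin: 0.5**1000, roughly 1 in 10^301.
p_run = 0.5 ** 1000
print(f"P(1000 tails in a row) = {p_run:.2e}")  # about 9.3e-302

# Freak runs still happen: find the longest run of tails in a
# million simulated fair flips.
random.seed(0)  # arbitrary seed, just for reproducibility
longest = current = 0
for _ in range(1_000_000):
    if random.random() < 0.5:  # call this outcome "tails"
        current += 1
        longest = max(longest, current)
    else:
        current = 0
print(f"Longest run of tails in 10^6 flips: {longest}")
```

The longest run typically comes out around log2(10^6), or about 20 tails in a row– an outcome with a 1-in-a-million probability for any single starting point, turning up reliably because you gave it a million chances.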

So it’s not clear to me why the existence of an exceedingly unlikely branch of the wavefunction in which 1000 coin-flips come up tails is any different from the observation that in an infinitely large universe in which sentient observers can flip coins, one of them will sooner or later come up with 1000 tails in a row. We wouldn’t say that one freak run of results completely invalidated statistics (provided, at least, that subsequent tests reverted to the expected behavior)– we’d just say that that particular experimenter got really (un)lucky, but the underlying probability distribution really was 50/50 all along.

It might be that some sort of careful statistical analysis would show that unlikely events would be seen more often in a Many-Worlds type universe than they “should” according to non-quantum probabilities. It might be that the statistics of particle physics experiments have already shown this to be the case, conclusively proving Many-Worlds right. But I suspect that the real answer would be that such outcomes turn up exactly as often as they “should” in a collapse interpretation with probabilities given by the Born rule.

This is not to suggest that there’s anything wrong with trying to find ways to derive the Born rule. It’s absolutely something people should be working on, in both Many-Worlds and collapse-type interpretations of quantum mechanics. If there’s a way to get that out of the formalism without assuming it at some point, that would be a fantastic achievement. If there’s some natural way to measure probability by counting wavefunction branches, or through the mechanics of whatever drives the collapse of the wavefunction in some other model, that would be a really strong argument in favor of that interpretation.

Given that none of the obvious things work, though, I don’t think that a lack of such a derivation is a fatal weakness for any particular interpretation. It’s possible that in some sense this is a bigger problem for Many-Worlds than for collapse interpretations, but then I suspect that, in the grand scheme of things, any relative gain over Many-Worlds is offset by the need to find an explanation for the collapse.

(What we really need, of course, is a way to extract testable predictions from this stuff, and then experiments to test them. That’s part of why I find things like large-molecule diffraction measurements, or cavity opto-mechanics systems really fascinating– as you push quantum effects to larger and larger sizes, at some point, you might begin to put some meaningful constraints on some of the proposals for alternative mechanisms for quantum measurement. Which would at least narrow the field a little, if not settle the question.)