The Statistics of the Highly Improbable

This is the last week of the academic term here, so I've been crazy busy, which is my excuse for letting things slip. I did want to get back to something raised in the comments to the Born rule post. It's kind of gotten buried in a bunch of other stuff, so I'll promote it to a full post rather than a comment.

The key exchange starts with Matt Leifer at #6:

The argument is about why we should use the usual methods of statistics in a many-worlds scenario, e.g. counting relative frequencies to estimate what probabilities we should assign in the future. It is not simply about whether we can find a mathematical formula that obeys the axioms of probability, which we can clearly do just by postulating the Born rule, but rather it is about why observers in the multiverse should have any reason to care about it. Isn't it obvious that this isn't obvious?

I responded, probably a little too snippily (but see my earlier remarks about being crazy busy):

I guess I'm just a committed empiricist, but given that my experience of the world involves seeing a series of measurements with well-defined outcomes whose probabilities measured over many repeated experiments give values that match the Born rule, then I think I have ample reason to care about the Born rule.

I realize that, in some other branch of the wavefunction, there is some other version of "me" who saw different outcomes to specific measurements. And even a version of "me" who lives in some "Rosencrantz and Guildenstern Are Dead" branch of the wavefunction in which the repeated sequence of measurements gives results that don't match the Born rule, or make any sense at all. What's not obvious to me is why I should care about what they see. Given that they're in other branches of the wavefunction that are inaccessible to me, the fact that they see something bizarre does nothing to reduce the utility of the Born rule in my little corner of the wavefunction.

Moshe's comment at #12, though, re-frames this nicely:

I think Matt and yourself are saying the same thing. If you allow me to paraphrase: the Born rule (with frequency-based probabilities) is in practice the basis for any empirical test of QM; if the MWI does not reproduce this part of QM, then it is simply incorrect. The burden of proof, once you allow things like the "Rosencrantz and Guildenstern Are Dead" branch of the wavefunction, is to explain why the world around us looks nothing like that. In other words, why committed empiricists are almost always right in making deductions based on repeated observations and looking at the probabilities of possible outcomes.

As you noted in a later comment, we went round and round about this a while back, with no real conclusion. As I said, though, I was probably too short with my reply to Matt, so I'll use this opportunity to rephrase myself a little.

As in the previous discussion, I think the point where this breaks down, for me, is that I don't see the distinction between the existence of highly improbable components of the wavefunction in which measurement outcomes defy probability and the fact that probabilistic systems will necessarily include long runs that "look" like they follow very different statistics.

It's sort of interesting, I think, that this comes in the same week as the latest Fermilab rumor, about which Sean wrote: "The real reason to be patient rather than excited by the bump at 150 GeV was that it was a 3-sigma effect, in a game where most 3-sigma effects go away."

On the face of it, that's kind of a ludicrous statement-- if the statistics have any meaning, it can't be the case that "most" 3-sigma effects go away. A 3-sigma effect should be wrong less than 1% of the time, a far cry from "most." The vast numbers of measurements involved in particle physics, though, mean that over many years, you can accumulate a non-trivial number of cases where a 3-sigma measurement was wrong. Which is why the standard for detection is a lot higher, because ordinary statistical fluctuations can and do produce situations where a less-than-1% chance of being wrong about a detection comes through. They can, in fact, produce enough of them that physicists become jaded about it.
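(To put rough numbers on that: here's a short Python sketch. The number of independent searches is made up purely for illustration-- the point is just how "couldn't happen by chance" scales with the number of places you look.)

```python
from math import erfc, sqrt

# Two-sided probability that pure noise fluctuates past 3 sigma:
p_3sigma = erfc(3 / sqrt(2))            # ~0.0027, i.e. about 1 in 370
# Hypothetical number of independent mass bins/analyses examined over the years:
n_searches = 10_000
print(f"P(3-sigma fluctuation from pure noise) = {p_3sigma:.4f}")
print(f"Expected false 3-sigma bumps in {n_searches} searches: {n_searches * p_3sigma:.0f}")
```

With ten thousand looks, pure noise hands you a couple dozen 3-sigma bumps all by itself.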

(There are people, for the record, who take the position that things like the routine evaporation of 3-sigma results show that we really aren't handling the statistics properly, and argue for a completely different approach to the estimation of data uncertainties. I heard a long discussion of this from someone at MIT many years ago, but I didn't follow it well enough to be able to reproduce his alternative method, or even Google up a good discussion of it.)

Given that sort of thing, I don't see how you can really separate improbable branches in the Many-Worlds Interpretation from cases that really are governed by some underlying probability, but just hit a long run that "looks" like it follows some different rule.

If you look at something like repeated measurements of a 50/50 system-- a whole slew of identically prepared spin-1/2 particles, say, or a large number of tosses of an ideal coin-- Many-Worlds says that there is some branch of the wavefunction in which you get the same answer 1000 times in a row. And if your measurement came up "tails" or "spin-down" 1000 times in a row, you would be pretty surprised.

But the probability of that happening purely by chance isn't zero. It's not very good-- around 1 in 10^300-- but if you flip coins or measure spins long enough, you will eventually get a run of 1000 in a row. Or 10,000, or 1,000,000, or whatever ridiculously large number you would like your absurdum to reduce to.
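(A two-line check of that ballpark, plus the standard waiting-time result for runs of a fair coin:)

```python
from math import log10

p_run = 0.5 ** 1000                     # chance a given block of 1000 flips is all tails
print(f"P(1000 tails in a row) = 1 in 10^{-log10(p_run):.0f}")   # ~1 in 10^301
# Expected number of flips before you first see such a run is 2**1001 - 2,
# so "eventually" is doing some very heavy lifting here.
print(f"Expected wait: ~10^{1001 * log10(2):.0f} flips")
```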

So it's not clear to me why the existence of an exceedingly unlikely branch of the wavefunction in which 1000 coin-flips come up tails is any different from the observation that in an infinitely large universe in which sentient observers can flip coins, one of them will sooner or later come up with 1000 tails in a row. We wouldn't say that one freak run of results completely invalidated statistics (provided, at least, that subsequent tests reverted to the expected behavior)-- we'd just say that that particular experimenter got really (un)lucky, but the underlying probability distribution really was 50/50 all along.

It might be that some sort of careful statistical analysis would show that unlikely events would be seen more often in a Many-Worlds type universe than they "should" according to non-quantum probabilities. It might be that the statistics of particle physics experiments have already shown this to be the case, conclusively proving Many-Worlds right. But I suspect that the real answer would be that such outcomes turn up exactly as often as they "should" in a collapse interpretation with probabilities given by the Born rule.

This is not to suggest that there's anything wrong with trying to find ways to derive the Born rule. It's absolutely something people should be working on, in both Many-Worlds and collapse-type interpretations of quantum mechanics. If there's a way to get that out of the formalism without assuming it at some point, that would be a fantastic achievement. If there's some natural way to measure probability by counting wavefunction branches, or through the mechanics of whatever drives the collapse of the wavefunction in some other model, that would be a really strong argument in favor of that interpretation.

Given that none of the obvious things work, though, I don't think that a lack of such a derivation is a fatal weakness for any particular interpretation. It's possible that in some sense this is a bigger problem for Many-Worlds than for collapse interpretations, but then I suspect that, in the grand scheme of things, any relative gain over Many-Worlds is offset by the need to find an explanation for the collapse.

(What we really need, of course, is a way to extract testable predictions from this stuff, and then experiments to test them. That's part of why I find things like large-molecule diffraction measurements, or cavity opto-mechanics systems really fascinating-- as you push quantum effects to larger and larger sizes, at some point, you might begin to put some meaningful constraints on some of the proposals for alternative mechanisms for quantum measurement. Which would at least narrow the field a little, if not settle the question.)

I'm afraid to ask this question, since it may be stupid or unrelated: Doesn't the Born rule require that you choose a particular basis? What determines that basis in MWI, when in MWI there is no fundamental difference between an observation and normal time-evolution?

A 3-sigma effect should be wrong less than 1% of the time, a far cry from "most."

This doesn't make any sense to me. If the Standard Model is right, then a 3-sigma effect will be wrong all of the time. "3-sigma" is about what percent of all measurements will have such a large discrepancy, not about what percentage of observed discrepancies will turn out to be signals.

"On the face of it, that's kind of a ludicrous statement-- if the statistics have any meaning, it can't be the case that "most" 3-sigma effects go away. A 3-sigma effect should be wrong less than 1% of the time, a far cry from "most.""

This is the standard misconception where p-values are assumed to be posterior probabilities. They aren't (which is why they're useless).

"On the face of it, that's kind of a ludicrous statement-- if the statistics have any meaning, it can't be the case that "most" 3-sigma effects go away. A 3-sigma effect should be wrong less than 1% of the time, a far cry from "most.""

Uh, no. It depends on the relative frequencies of the alternative and the null being true across lots of experiments.

A thought experiment: Somewhere on Earth is a human who has a unique disease that will cause him to explode with enough force to level a medium-size city. The only known test for this disease relies on the level of a chemical which is, in the healthy population, distributed Normally with mean 10 and std. dev. 1. This disease causes the person to have a level of at least 13 - exactly 3 sigma above the mean. Every person on earth is tested... there will be roughly 9 million people who have levels of at least 13, of which exactly 1 will be the diseased person.

The statistics have enabled us to reduce the false-positive probability from roughly 1 - 1/(7 billion) to roughly 1 - 1/(9 million), which is a huge win, but we are still going to see roughly 9 million "wrong" 3-sigma effects for every "right" 3-sigma effect.
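(The arithmetic behind those numbers, as a small Python sketch; the 7 billion population and the exact 3-sigma threshold are the assumptions stated above.)

```python
from math import erfc, sqrt

population = 7_000_000_000              # rough world population
p_positive = 0.5 * erfc(3 / sqrt(2))    # P(level >= 13) for a healthy person, ~0.00135
false_positives = population * p_positive

print(f"Healthy people flagged: ~{false_positives/1e6:.1f} million")   # ~9.4 million
print(f"P(a flagged person is the diseased one): ~1 in {false_positives:.0f}")
```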

Chad, all: As I explained in The Born Equivocation thread, highly unusual events aren't the main problem with MWI. The main problem is whether it actually produces the correct (observed) statistics, given its formulation. You start getting to the point by asking, "If there's some natural way to measure probability by counting wavefunction branches [in MWI]...." Yes, there is, but first some critique of the idea that we could just assume the Born Rule is correct even in MWI. One has to show that the consequences of a theory are consistent with the BR or any other feature we wonder about. You can't specify a description, and then say "because you include the Born rule as a foundational assumption of the theory." Sorry, that is not how logic works. If you assume "A", that has consequences. You can't just assume A and then add whatever B you want. It's like saying "I say it's a quadratic equation, but I need five solutions so that's what there are." No, if you can find five solutions it means you were wrong to assume it was a quadratic equation in the first place.

First you come up with a theory. Then you ask what it predicts as a logical consequence, you don't just get to make up a description and then demand that will produce whatever you want. Then you ask if that's what really happens.

Again, consider the case where we have unequal probabilities like 36% and 64% for outcomes A and B (different meaning than before.) MWI gives the wrong answer because it postulates that both outcomes happen (or else what is the point?) each time, instead of A happening 36% of the time and B happening 64% of the time. What you have if you do the experiment n times is a branching network, like a tree forking each time. The only intelligible way to define the equivalent of "chance" for an observer/version (ie, frequentist by counting in the abstract) is to take "how many different outcome paths" like AAABAABBB... etc. Well, that gives you the statistics for 50:50 chance instead of 36:64. In effect, the differential "strength" of the two waves goes to waste, since each becomes simply an existential entity for counting and not the occasion for genuine statistics as a further breakdown into various instances. There is then no meaning for "less likely version" of a given "world path." They all just "are." Otherwise we need to divide them up into more literal fragments of some kind. More complicated wrangles might help, but ruin the simplicity and supposed advantage of the essential concept, no?

I am not the first person to think of these problems, and I would like to see more acknowledgment of their severity (with thanks to dzdt and A Different Paul for trying to directly address these concerns in the BE thread, even if IMHO without success.)

Hi Chad, the short answer is that I agree with you (with some qualifications) that in the MWI equipped with an appropriate derivation of the Born rule, unlikely events should not be any different from unlikely branches of the wavefunction. But, in emphasizing how crucial the Born rule is to the MWI, I was referring to the current understanding of the MWI - just unitary evolution and nothing else, without an accepted derivation of the Born rule. In which case I claim that: A. Despite the superficial similarity to QM, there is actually nothing to support the MWI experimentally and B. Worse, it is a framework where you cannot make any inference from any set of observations.

In other words, unless you come up with a way by which the a priori probability amplitudes of QM translate to relative frequencies of repeated experiments done in a single "branch," it's all gibberish. Note that even if you come up with a rule that assigns some conditional probabilities (or relative propensities) to different branches, there is an additional step to translate those unmeasurable concepts to actual frequency-based probabilities for repeated experiments done in a single branch (which is what's accessible empirically). It is not clear to me that this process of translation is always unproblematic, but I am sure that smart people with more experience have thought about it and have something to say (maybe Matt or someone else can comment).

One way around all of this is to simply postulate that this is the case, adding the Born rule as an additional postulate beyond just unitary evolution. But that would be disappointing - in this case it's true that the MWI would not be any worse than a collapse model or any other "interpretation," but it won't be any better either. In fact it will probably be exactly equivalent to them (up to "what really exists" type questions, which I see no conceivable way of answering). In my mind the only reason you'd be tempted to live with all the metaphysical baggage of the MWI is that it is a more economical and minimal framework involving only unitary evolution.

I guess it comes down to me thinking about the MWI as being something more ambitious than just an interpretation, more as a generalization of QM which is potentially more logically coherent, so maybe that justifies the higher bar I want to put on it. For example, decoherence kind of explains how you can get something like effective collapse (approximately and under the right set of circumstances, which means in effect it is a generalization of QM in the same sense statistical mechanics is a generalization of thermodynamics). I'd hope that something like that holds for the Born rule as well, otherwise a lot of the potential charm of the MWI is lost for me.

Great - now we have a bunch of non-philosophers trying to make sense not only of the philosophy of quantum mechanics, but also the analytic philosophy of probabilistic statements.

Every time I try to think about what "The probability of X happening is 78%" really means, I get hopelessly confused. (That's not entirely true; in some circumstances, it really means no more than "We did experiment Z n times and observed X 0.78*n times," in which case I do understand the statement.) I'd venture any honest person with the possible exception of a few philosophers who have seriously looked into this should answer the same.

By quasihumanist (not verified) on 02 Jun 2011

Moshe: I guess it comes down to me thinking about the MWI as being something more ambitious than just an interpretation, more as a generalization of QM which is potentially more logically coherent, so maybe that justifies the higher bar I want to put on it. For example, decoherence kind of explains how you can get something like effective collapse (approximately and under the right set of circumstances, which means in effect it is a generalization of QM in the same sense statistical mechanics is a generalization of thermodynamics). I'd hope that something like that holds for the Born rule as well, otherwise a lot of the potential charm of the MWI is lost for me.

I think that's really the key difference. I'm looking at it as an interpretation like any other, so I'm not particularly bothered by adding the Born rule as an axiom (which is more or less what you have to do in any other interpretation). Since I have lower expectations generally, I don't see that as a problem.

There's a vague sense in which the discussion of the difficulty of finding probabilities seems excessively abstract to me. That is, the whole origin of the interpretation comes from considering the unitary evolution of the wavefunction, but for some reason, when it comes to discussion of probabilities, the wavefunction is sort of discarded in favor of odd discussions about counting branches and so on, as if they all have equal weight. But you've got the wavefunction right there, and one branch clearly has more mathematical weight than the others, so I don't understand why it's not obvious that there should be a Born-like rule that makes use of those mathematical weights. But I'm too tired to explain this in a coherent way right now.

quasihumanist: Every time I try to think about what "The probability of X happening is 78%" really means, I get hopelessly confused. (That's not entirely true; in some circumstances, it really means no more than "We did experiment Z n times and observed X 0.78*n times," in which case I do understand the statement.) I'd venture any honest person with the possible exception of a few philosophers who have seriously looked into this should answer the same.

I would agree with this. I'm always thoroughly puzzled by discussions of probability that don't use it to mean "If we did the experiment N times we expect P*N of those trials to come out this way, give or take."

As noted previously, though, I would never make it as a philosopher.

Well, thanks, Chad, for addressing the branching problems (and hey, give me a personal nod once in a while ;-) but I think you still don't appreciate how problematical it is (and yes, philosophical training does help.) The WF is not being discarded by critics of MWI; it just can't do the same job it used to if both (for simplicity) components of it continue to exist. In the old QM, the components gave different actual chances of A vs. B happening. Weird or not, that was a coherent (pun as you wish) way to think.

But if both components continue, their "existence" is all you have to work with. Without literally having 36% of many examples of A and 64% of B anymore, the amplitudes and their squares (moduli) in effect "go to waste." I can't see any intelligible way to use that "mathematical weight" if there really aren't the right number of events. It is that attempt to use "weight" without real frequency which is "excessively abstract", not the straightforward attempt to find something to actually count like the branches. And as you say, counting is indeed what we need.

Some struggle to make this work, and get entangled, so to speak, in messy additional factors and refinements (which ruins the whole appeal as a conceptually simple way out anyway), but it impresses me little. And if you just keep both branches, then counting branches is the only intelligible way to compare statistics (like I said, counting up AABBABABB ... and BABABBBA ... etc.), and that gives the wrong answer of "50/50" every time.

Here's another issue of concern: if decoherence is so important in why we don't see superpositions etc., then how come we can see "quantum statistics" too? IOW, how would we even find the statistics generated by interference (like double slit, MZI that *is* coherent, etc) but inferred from patterns of exclusive "hits" instead of staying a superposition? If we get statistics of either kind at the end, what's the point of something making a difference? See a proposed experiment at the name link.

(BTW, ALeyram's recent post in Turkish was not spam, it was relevant. I have it saved if anyone wants the text, including the OP. It expressed similar concerns about branches and probability as many have noted.)

Neil,

Your mental model of probability seems purely discrete, relying always on counting a finite number of discrete instances. It seems like you would have philosophical problems with a probability of sqrt(2)/2 or pi/4.

Physical theories do come up with these sorts of probabilities, so having a mental model of probability that doesn't make sense of them seems problematic.

By quasihumanist (not verified) on 03 Jun 2011

Quasihumanist, you misunderstand my point. Sure, a probability really can be 1/sqrt(2) etc., which means that if we do very many trials we will approach a ratio of 0.7071... But in order to verify that, we have to count the instances! The question is not what kinds of probability "can exist" but how they are specifically generated in different models. In traditional CI the different squared moduli (as I meant to write earlier) can indeed produce any real-number probability from 0 to 1 inclusive. That's because the WF branch somehow creates "a chance" of something happening. We need the discrete instances just to test the probability, not literally to define or generate it except as an abstract, unlimited, and ever-growing potential number of cases.
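(To make that concrete, a toy simulation of the kind of counting I mean: a "detector" that clicks with probability 1/sqrt(2) per trial, with the counted frequency converging on 0.7071... The seed is arbitrary.)

```python
import random

random.seed(0)
p = 2 ** -0.5                           # an irrational probability, 1/sqrt(2)
for n in (100, 10_000, 1_000_000):
    clicks = sum(random.random() < p for _ in range(n))
    print(f"n={n}: counted frequency = {clicks / n:.4f}")   # approaches 0.7071
```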

However, in MWI we are asking "what probability does the model generate?" Despite the misapprehensions of Chad and others that we can simply stick on the BR as an extra assumption, the BR does not logically *follow* from the description of the model nor is it even compatible. The model produces "branches" which are discrete per each "trial." If you count them up, they give the wrong answer of 50/50 (for two-way split) each time for the outcomes. Indeed, that's always literally the ratio of how many times A and B appear in any run of n times! REM that in MWI each experiment "really" turns out the same each time, but various selves just "end up" in one branching tree or the other.

And like I said, there is no intelligible concept of "strength" of a branch if it continues to exist - how can that make a different number of outcomes? If I'm in the weaker-amplitude "less likely" branch do I feel thin and tired? And if you add extra branches per shot you either have an arbitrary number, or infinite sets which are not commensurable.

That's what the model "gives" as such. It is not capable of producing other probabilities like irrational ratios even though such probabilities can be generated in other ways. IOW, the straightforward application of MWI is literally falsifiable, and gives the wrong answer. It would have to be tweaked in complicated ways to work.

No - I would argue that in your understanding of probability, you cannot assign different meanings to a probability of 1/sqrt{2} rather than a probability of 233,806,732,499,933,208,099 / 330,652,652,075,543,841,260(*)

(*) I googled for rational approximations to sqrt{2}.

Although I don't know any better ways of thinking about probability myself, I feel that thinking of probability as doing lots of experiments, counting occurrences, and dividing is not adequate here. It seems to me that if you think of probability only in a discrete frequentist way, then you should require as a mathematical axiom that all probabilities be rational numbers with denominators of less than a few dozen digits.

By quasihumanist (not verified) on 03 Jun 2011

Quasi, with all due respect, you shouldn't be lecturing someone on what their understanding of something, like statistics, is. (Not to be confused with whether what they evidently do say is true or false, etc.) My essential point was that MWI gives *the wrong* probability, and it is 50/50, even if some other mechanism somewhere could produce other kinds. It's like saying "dice produce fractional probabilities of the proportions of getting various rolls," e.g. a roll of two dice produces a 3/36 = 1/12 chance of getting "four," from the permutations 1,3; 2,2; 3,1. (Similar for hands of cards, etc.) That is a feature of dice, dice, baby; not my mind!

I am well aware that in principle irrational-number probabilities can exist, but that is not what MWI produces! It is not my fault that MWI produces the wrong, fractional answer when what we want is various probabilities according to the Born rule. You got mixed up and thought that because I was using the equivalent of dice or cards, I literally couldn't imagine I.N. probabilities. That's careless of you, not some dullness on my part. And even if I couldn't imagine such probabilities, MWI is still wrong to provide "0.5" when we need either 1/sqrt(2) or 0.7071... (And why "a few dozen" - did I say anything about practical trial numbers? No.)

I really wish you could quit wasting time about trivial diversions that aren't even valid, and appreciate that a demonstrably false concept is popular among physicists - can you do that? How's all that splitty-chancy stuff workin' out for ya?

BTW, what do *you* think the "meaning" of a probability of 1/sqrt(2) is, given finite resources to build up answers and an operationalist critique of your idealism? Oh, I didn't say I was one of those; just askin ...

but for some reason, when it comes to discussion of probabilities, the wavefunction is sort of discarded in favor of odd discussions about counting branches and so on, as if they all have equal weight. But you've got the wavefunction right there, and one branch clearly has more mathematical weight than the others, so I don't understand why it's not obvious that there should be a Born-like rule that makes use of those mathematical weights.

I suspect that this isn't a QM problem so much as it is a stats problem. That is, I teach stats and this sort of nonsense pops up repeatedly. The cure is - as is usually the case - a specific experiment. The one I usually give is rolling a pair of dice, since it's so easy to make a 6x6 table and enumerate the outcomes. There are 11 different outcomes for the sum of the dice, two through twelve, as seen on the universal chart. But there are only two ways to roll a three, for a weighting of 1/18, while there are six ways to roll a seven, for a weighting of 1/6. I've found that when I put it like that, most students get it the first time around, and find it to be almost intuitive.
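(The whole table fits in a few lines of Python, if anyone wants to play with it; this just enumerates the 36 equally likely outcomes and tallies the sums.)

```python
from collections import Counter
from fractions import Fraction

# Tally the sums over all 36 equally likely (die1, die2) outcomes.
counts = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))
for total in range(2, 13):
    print(total, Fraction(counts[total], 36))   # e.g. 3 -> 1/18, 7 -> 1/6
```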

Now, if only I could describe an MW experiment that depended on the fall of a pair of quantum dice . . . There's someone famous who had a memorable quote about most conceptual problems of a certain type actually being conceptual problems about statistics, but I can't remember who. Is there anybody here who has an idea as to the person or quote?

By ScentOfViolets (not verified) on 03 Jun 2011

Every time I try to think about what "The probability of X happening is 78%" really means, I get hopelessly confused. (That's not entirely true; in some circumstances, it really means no more than "We did experiment Z n times and observed X 0.78*n times," in which case I do understand the statement.) I'd venture any honest person with the possible exception of a few philosophers who have seriously looked into this should answer the same.

I would agree with this. I'm always thoroughly puzzled by discussions of probability that don't use it to mean "If we did the experiment N times we expect P*N of those trials to come out this way, give or take."

Again, this seems to be a problem not with QM, but with understanding basic stats. Again, look at dice, this time one roll of a single die. Every outcome is weighted the same (not that it matters), and so the average, i.e. the expectation value, is 3.5.

Now, when someone says that the mean or expectation value of a single roll is 3.5, do they really mean there's a spot marked with 3.5 pips on the die? Of course not. And this is purely unremarkable, everyday statistics on a discrete set.

Why, when going from this case to QM, non-integer probabilities all of a sudden become some sort of booga-booga problem in some people's minds is beyond me.

By ScentOfViolets (not verified) on 03 Jun 2011

Chad et al.,
So, I thought about this for a while (since the prior post), after I realized I had a negative reaction towards MWI and couldn't place my scientific/mathematical fingers on why. It seems to me that there's nothing wrong with many worlds as an interpretation of quantum mechanics, as long as one is sure to think of it just as such - if a wavefunction contains vectors orthogonal to every vector I have access to, then it shouldn't matter that they're there in the first place.
But I think my problem (and maybe others') is philosophical (and as such, completely unfounded in serious study) and also science-PR related. In this interpretation, since every possible choice of quantum state of the entire universe since its formation is represented, there are included "universes" where every interaction since the big bang has defied what we know as the laws of physics. This is perhaps not a huge problem: they will be extremely unlikely, they will exist somewhere, nothing will work like we expect, and that's that. But imagine a universe where a particular motion causes one small difference in how we consistently measure physical laws - an extremely low probability, but according to the interpretation, it exists absolutely. My first problem, then, is that I'm uncomfortable with the idea that an interpretation of quantum mechanics could provide such a possibility to undermine empiricism; my second comes from the idea that it might undermine OTHERS' empiricism. How do we definitively get our message across that "that's how the universe works" when the way we describe how the universe works leaves a finite probability that it's actually NOT how the universe works, we've just been monstrously (un)lucky? Since we have no method of testing "which world" we're in, we have to rely on Dr. Pangloss's prediction, that we're in (one of) "the best of all possible worlds".
I know... tl;dr, eh?

Ah, SOV, I refer to that very sort of dice experiment in my previous comment (did you read it?) That is indeed just how we investigate the chances of things happening in MWI. But remember, MWI is not like a die rolling a one XOR a two XOR a three ..., because the whole point is that all outcomes happen. MWI has its intrinsic weakness; it is not a booga-booga confusion among its critics.

In a binary case, we have "A and B" for the first trial, then the A-world adds A and B to itself and so on to make ever more branches. So if we flip three times we have AAA, AAB, ... BBB and you know that gives half A and half B in the set of eight permutations (the eight different "worlds" of experimental results.) That means 50:50 chances (I get tired of typing that) for A:B if I consider myself a split soul ending up in each branching.
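(Spelled out as code, that count looks like the sketch below. To be clear, it simply formalizes the equal-weight branch counting I'm describing; whether every branch should count equally is exactly the point under dispute.)

```python
from itertools import product

n = 3
branches = list(product("AB", repeat=n))        # AAA, AAB, ..., BBB: all 2**n paths
a_count = sum(b.count("A") for b in branches)   # A outcomes across every branch
print(a_count / (n * len(branches)))            # always 0.5, whatever the amplitudes
```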

But in the real world we need other relative probabilities too, for various wave amplitudes - we just can't justify them in MWI. It has nothing to do with imagining non-integer chances or not. (BTW, the misguided quasihumanist falsely charged (very silly after being corrected) that I could not imagine irrational ratios in principle, not "non-integer" chances or outcomes. That's his booga-booga problem; I'd say ask him, but it isn't worth the trouble.)

Like I keep saying, these are "instances". The "weight" Chad and other defenders of DI and MWI ask about does not intelligibly make frequentist sense. It's like imagining more mystical "weight" to a die face, so that when a three comes up 1/6 of the time you imagine it more "chancy" than a two coming up 1/6 of the time (and nothing to do with the magnitude of the number written on the face; just imagine letters then, OK?) Yeah, I can literally weight faces to make an unfair die, but that means *actually producing more of one outcome than others*. So again: MWI, actually "used" instead of having inconsistent postulates stuck on it (as if that could be done), gives the wrong answers.

Neil - just to make it clear - I am just arguing against your frequentist interpretation of probability, and I agree that MWI is inconsistent with your interpretation of probability. It strikes me as strange that there can't be a die where one side comes up exactly pi times (and not 3.14159 times) as frequently as the others when there is (or - at least - could be given a sufficient understanding of the dynamics of dice) mathematical description of the density distribution needed for such a die. (If it matters, dice and dynamics are classical in the previous sentence.)

SOV - Yes I claim I do not understand the philosophical foundations of statistics. (That is different from understanding or not understanding statistics itself or its mathematical foundations.)

Just to record my philosophical positions:

The probability of 1/sqrt{2} has no definite meaning at all, but rather a family of meanings which depend on context and agree sufficiently with each other that we use the same phrase for all those meanings. Theorists like to play certain games which produce 1/sqrt{2} (and I don't understand what probability means in these games), and 1/sqrt{2} is sufficiently close to experimental results (for which probability *is* interpreted in a frequentist way) that we have agreed to call it *the* probability.

Furthermore, the mere question of what interpretation of quantum mechanics is correct is a meaningless question, akin to asking whether colorless green ideas sleep furiously or calmly. This is because quantum mechanics itself is just a series of mathematical statements produced by games that theorists like to play, and the statements and games are generally accepted because they are sufficiently close to experimental results.

I am not claiming privileged positions for either mathematics or experiment here; they are playing their own games which happen to interact with the games of theorists in certain ways. Nor am I claiming these games are arbitrary; clearly they have evolved to serve our culture reasonably well.

By quasihumanist (not verified) on 04 Jun 2011

"Furthermore, the mere question of what interpretation of quantum mechanics is correct is a meaningless question, akin to asking whether colorless green ideas sleep furiously or calmly." - you are correct if the subject is indeed merely "interpretations" of the same data. There is such a thing as "shut up and calculate" and I give Chad credit for acknowledging the usefulness of just doing that and not pretending we understand "what's really going on." But a model isn't just an interpretation, it makes predictions derived from its own properties. MWI is often couched as "an interpretation" but actually it is a model which (as usually described) makes a statement about how the wave functions evolve, and that has the consequence of producing the wrong probability for events if understood straightforwardly. That is the case regardless of whether you are right about non-rational probability values.

Neil,

Again, consider the case where we have unequal probabilities like 36% and 64% for outcomes A and B (different meaning than before.) MWI gives the wrong answer because it postulates that both outcomes happen (or else what is the point?) each time, instead of A happening 36% of the time and B happening 64% of the time. What you have if you do the experiment n times is a branching network, like a tree forking each time. The only intelligible way to define the equivalent of "chance" for an observer/version (ie, frequentist by counting in the abstract) is to take "how many different outcome paths" like AAABAABBB... etc. Well, that gives you the statistics for 50:50 chance instead of 36:64.

I actually don't think that is an intelligible way to define the "chance" in this case, because a given observer only ever sees one outcome path (or history.) Counting up different paths is meaningless; it doesn't correspond to any empirically possible set of measurements.

The correct way to predict probabilities in a frequentist fashion is, as Moshe said, to count along a single history. Does MWI constrain the paths through your network such that, along any one allowable path, the proportion of experimental outcomes always tends to 36:64?
Well, yes, and this is why branch weights matter. If you do the experiment n times, each possible path snakes along n branches, and its weight is (roughly speaking) proportional to the product of the branch weights. For large n, the set of paths where the proportion of experimental outcomes is close to 36:64 has a much larger total weight than the set of remaining paths.

In the limit, as n goes to infinity, the set of all paths with measured proportion different from 36:64--in other words, the histories where you could detect a definite violation of the Born rule--has total weight zero. Those paths simply don't exist.* And behavior in the limit is the only thing that matters; probabilistic predictions apply to limiting frequencies of infinite runs of measurements. (As Moshe and Chad have been discussing, we pretty much always interpret these predictions to say something about finite runs, and you could write a book about whether and why it's valid to do that--but that's not really MWI's problem, it's a general issue in the philosophy of probability.)

*[At least, this is true if you accept that quantum states with zero weight (or zero probability density, depending on how many states you're looking at) are inaccessible. This is a sort of superweak Born rule. OTOH, it's hard to imagine how anyone couldn't accept this, since otherwise quantum theory would be completely impossible to apply!]

To summarize, in terms of your network example: MWI gives you the correct 36:64 probability, not by pruning the network so that it contains 36% A-branches and 64% B-branches, but by prohibiting all infinite paths through the network which fail to contain 36% A-branches and 64% B-branches.
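(A numerical illustration of that claim, using the 36:64 weights from the running example: weight each length-n path by the product of its branch weights, then sum the weight on paths whose A-fraction lands within 5% of 0.36. The 5% window is arbitrary; the trend as n grows is the point.)

```python
from math import comb

p_a = 0.36                              # Born weight of outcome A
for n in (10, 100, 1000):
    # Total weight of all paths containing k A's is C(n,k) * p_a**k * (1-p_a)**(n-k).
    weight_near = sum(comb(n, k) * p_a**k * (1 - p_a)**(n - k)
                      for k in range(n + 1)
                      if abs(k / n - p_a) <= 0.05)
    print(f"n={n}: weight on Born-like paths = {weight_near:.4f}")
```

As n goes up, the printed weight climbs toward 1, which is the finite-n shadow of the weight-zero statement above.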

By Anton Mates (not verified) on 05 Jun 2011

As Moshe and Chad have been discussing, we pretty much always interpret these predictions to say something about finite runs, and you could write a book about whether and why it's valid to do that--but that's not really MWI's problem, it's a general issue in the philosophy of probability.

If nothing else, maybe MWI is a way to get people interested in probability. Most of my students seem to think it's just a bit of methodology you have to learn in the service of doing the Good Stuff :-( And I'll say it again: A lot of them demonstrate precisely this type of confusion between possible occurrences and distribution of occurrences for what is bog-standard nothing-to-do-with-MWI probability and statistics. Oh, and with a side dish of some of the paradoxes and pitfalls of (non-mathematical) inductive reasoning.

By ScentOfViolets (not verified) on 05 Jun 2011

Maybe I should expand on that. I suspect that Chad's lament is at the heart of the matter: that MWI is the many worlds interpretation. It gives the impression that complete and whole new universes are popping into being every second. If that were really the case, I'd expect people like Neil to ask where the energy to do that is coming from (and indeed I've had many people ask me with some puzzlement as to whether or not MWI violates conservation of energy.) I'll concede that thinking like this does lead one naturally to some notion of an integral number of new worlds created from discrete events. This doesn't make sense, of course - the Poisson distribution, for example, where a singular event takes on a continuous value, would lead one to believe that an infinite number of worlds would be created in a finite and short amount of time.

But let's look at why MWI is a red herring. Let's look at 1024 people tossing a fair coin ten times. Now in the ordinary run of things, people understand and expect that if you flipped a fair coin often enough it would come up heads ten times in a row. However, they are often fairly suspicious if their "fair" coin comes up heads ten times in a row at the beginning of the sequence of flips. In fact, getting a run of ten heads in the first ten flips is just 1/1024. But if you have 1,024 people doing this, you have a better than 63% chance that one of them will do so - irrespective of whether or not they are initially entirely different people or if they are the result of 10 splits in an MWI scenario.
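(The exact figure, for anyone who wants to check:)

```python
p_run = 0.5 ** 10                        # 1/1024: one person's chance of opening with ten heads
p_someone = 1 - (1 - p_run) ** 1024      # chance at least one of 1024 people does it
print(f"{p_someone:.3f}")                # ~0.632, i.e. better than 63%
```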

This is where many probability and inductive paradoxes come into play, and is also the basis of an old and classic con: A person receives from out of the blue a stock tip via a phone call or letter. A stock tip that turns out to be correct. In the following days, they receive more tips, 2, 3, 4, or more which all amazingly pan out. They are then asked to "contribute" some money on the next call and are given an elaborate explanation as to why their benefactor can't pony up the money on their own.

You see what's happening of course: The con consists of an initial pool of, say, 1,024 marks, none of whom are in contact with each other :-) Half are told the stock will rise, half that it will fall, and only the half that got the "correct" tip hears from the con man again. Notice also that the split doesn't have to be 50-50 each time. It could be a one-in-four chance, a thirteen-in-forty-seven chance, etc.

And this is the other horn of the paradox, as it were. According to classic statistical theories, there is no purely logical way to distinguish between a finite run of random but improbable outcomes and the action of a nonrandom effect. Flip a coin ten times, and it comes up heads ten times. Is it a fair coin? Possibly. Flip it ten more times, and it comes up heads ten more times. Now what do you think? Flip it ten more times, and it comes up heads ten more times. Do you still think this is a fair coin? The problem on the philosophical side is that logically, you can't make that distinction (just as, logically, you can't make the distinction between one flip of an unfair coin coming up heads and one flip of a fair coin coming up heads.) This conflicts with gut empiricism, which will lead most people to conclude that twenty coin flips in a row coming up heads implies an unfair coin - and 99.99-plus percent of the time, they're right.
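(One way to make that gut empiricism quantitative is a Bayesian update with an explicit prior. The one-in-a-thousand trick-coin prior below is invented purely for illustration; the logical point stands that you need *some* such prior to make the inference at all.)

```python
prior_trick = 1 / 1000                  # made-up prior: 1 coin in 1000 is double-headed
heads = 20
like_fair = 0.5 ** heads                # P(20 heads | fair coin) ~ 9.5e-7
like_trick = 1.0                        # P(20 heads | double-headed coin)
posterior = prior_trick * like_trick / (
    prior_trick * like_trick + (1 - prior_trick) * like_fair)
print(f"P(trick coin | 20 straight heads) = {posterior:.4f}")   # ~0.999
```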

Sorry for the ramble, most of which everyone already knows. But after all the back-and-forth, I still don't understand the objections some people have to MWI as a theory/interpretation of QM. I don't mean that I understand the objections and disagree with them, I mean that I don't get the objection, period. Maybe this little bit of discussion will clarify things for the objectors, and they can now rephrase what they mean so I can understand what they're getting at. Who knows? Maybe they're right :-)

By ScentOfViolets (not verified) on 05 Jun 2011

Ah, one last bit on allegiances: to the extent I have a particular interpretation of QM, it's the usual shut-up-and-calculate one. It's true that if I were forced to pick another one, in some branch I'd say MWI :-)

But I think in these sorts of discussions, it's important to consider motivations. Some people just plain like the idea. Other people are not so much for MWI as they are against some other interpretation. And, uh, that'd include people like me. If you must pick between alternatives, well, MWI may be "nonparsimonious" in some respect. But at least it's a physical theory. It seems to me that the more "parsimonious" theory of some ill-defined observer collapsing the wave function doesn't even rise to the level of a physical theory.

Of course, some people take it that collapsing the wave function is a physical theory, albeit by a mechanism that is as mysterious now as it was early in the last century, and I can't really gainsay them. But it seems to me that the sorts of people who go for this notion intersect significantly with the sorts of people who believe in ESP, astral projection, etc. (at least as an explicable though still unknown physical phenomenon.) Maybe my distaste comes down to who I have to rub elbows with, a sort of class snobbery as it were.

By ScentOfViolets (not verified) on 05 Jun 2011

Oops - an addendum to the addendum: I see this starts back at a post on Wilson's superb "Divided by Infinity". I suspect that there are significant numbers of people emotionally invested in MWI precisely because you get comforting fables of the you-will-never-die sort.

Trust me, I'm not one of them. Unless I missed it in the discussion, what makes "Divided by Infinity" great as opposed to merely an old story retold by a good writer is that this is a horror story. Not a nice sensawunda one.

Would anyone be surprised if I said that early on I came to the exact same you-will-never-die conclusion that MWI sorta implies? I hope not - I imagine this implication is routinely and independently discovered thousands of times every semester by bright young undergrads. And at least some of them, as was the case with myself, keep following the logic to reach the conclusion that the most probable end state is some sort of consciousness existing in a state of white light, white noise . . . for the rest of eternity. Does this seem like a desirable outcome? Sounds more like Hell to me.

No, I'm not fond of this particular implication of MWI at all. Not even in the Deepak Chopra sense.

By ScentOfViolets (not verified) on 05 Jun 2011

SOV, you seem to have outdone my record for consecutive posts in a thread there. OK, here's a simple retort: the branching wavefunctions in a binary choice experiment, regardless of labeling "intensity" (which has little meaning *if you don't let them do the work of statistical breakdown for real*), set up a branching tree of A`s and B`s. (I'm starting to use " ` " for plural since " ' " is "wrong" and As is arsenic ...) Well, the numbers of A`s and B`s are equal, equally likely, by any true *counting* concept, because of the *symmetry* of the geometry of it all. Like I said, the "intensity" goes to waste if you don't let it literally determine exclusive outcomes!

MWImbroglio fans: How's that branchy-chancy thing workin' out for ya?

PS: Reply to Anton soon.

Neil, point one: I have already told you that your objection doesn't even make sense to me. Point two: you are rather robotically repeating your objection without making any attempt to rephrase it in a way that I can understand.

And if you can't be bothered to even try to explain what you mean even after I (and it looks like several other people) tell you that we can't make heads nor tails of what you're getting at, well, all I can say is, you'd make a damn poor teacher.

That and since you can't even be bothered with the basic courtesy of trying to explain what you mean, and yet you want people to keep guessing, well, I'm not going to waste any more of my time on you.

Are we quite clear on this? Your problem isn't that you have odd notions about a particular subject. Your problem is that you're being an inconsiderate jerk.

By ScentOfViolets (not verified) on 05 Jun 2011

SOV, I think you and others need to take responsibility for your understanding of what people say. I have gotten many compliments here and there from those who have no itch to disagree (which so easily turns into seeming inability to understand.) Here I also had to put up with some red herrings and other misdirections. I keep repeating myself because many here keep repeating the same rebuttal attempts or claims they don't get my point, and I address each person. If neither of us changes, then why is it my fault? When are other people going to make clear, definite descriptions of their own here that we can follow?

My argument is the simple idea of counting branches - a point many other thinkers have made as the same criticism. Look up for example Adrian Kent's objections cited at Wikipedia and his similar overview from 1997. Anton just above accepts that much, he just thinks that extra tweaking can save the model. This challenge must bother people, it isn't complicated. You already have a clue with "make heads or tails of ..."

I can't really see how to better reword something as simple as "binary choices lead to a branching tree with half of choice A and half of choice B found all the way through." That is equivalent to chance, or what is? There, was that hard to "get"? I already said it again earlier, even if too much (which can't then also be, not enough), right up there and very politely in #25! That was just the sort of thing you say I don't do.

I don't know, after all the illustrations, what else I could do. And each equivalent MWI experiment *is the same "tree" each time.* Your discussion about the coins etc. doesn't apply. So, since we need non-equal chances in many experiments, there's a contradiction. That should be easy to get. You need to work harder on it, not me.

And if you mean I'm too sarcastic etc., well, that's another issue, but I wasn't most recently. It's disingenuous of you to suggest I "can't even be bothered" and didn't even try to explain, when I did so AFAICT as directly as possible and just previously. That doesn't even fit with saying I repeated myself, and in many permutations (all those nice puns.) Nor are my ideas on this "odd", since I make the basic objection - it's the epicycle-like attempts to defend MWI with bells (I just can't help it) and whistles that are "odd." Finally: after your inconsiderate name-calling indulgence, I doubt clarity is possible. I tried anyway since I'm not a jerk; maybe I'm too nice.

Anton, thanks for a considered reply, which nevertheless IMHO fails to point toward a valid solution. SOV, read this too. Let's go over some things first:

1. Some say that MWI is just "an interpretation" of the same facts, but that is not really true (or is a half-truth, not good enough.) MWI posits a model, it makes a claim about what goes on. Models have consequences that derive from their assumptions and the workings thereof. You can't, as Chad and many seem to think, just tack on an axiom of your choice to make things "work" because that axiom may not be logically consistent with what the model leads to.

2. That model is, briefly and perhaps simplistically: "The Schroedinger equation simply continues to evolve" (unitary evolution with no special collapse events.) I have big problems* with whatever supposedly keeps the outcomes effectively separated, but let's charitably assume that can happen. As a simplification, an asymmetric BS split gives, say, a superposition of 0.6|A⟩ and 0.8|B⟩. That is "two components" of the superposition, right .... Let's call that SMWI (simple, or simple-Schroedinger-realist, MWI) and take it as a baseline, and say the burden of proof is on those who want even more complications (but isn't the goal "to simplify"?)

3. In our "apparent" (what I would call "real" instead) world, those waves actually produce exclusionary36:64 "chances" of detector clicks (defined in our actual world and by real experiment, not to be assumed a logical consequence of the model!) We (whoever the heck "we" are) "find" this by counting up hits, with variations but the usual statistical convergence etc. And various runs give varying results, just like tossing coins.

4. But in SMWI, there just "are" these two components of a superposition, and the lower-amp "other self" is just as real as the stronger one. They can't sense their relative "thinness." Note that we don't need over-hyped interference to model a split from a BS, right? (Only to prove the model at first, we don't need to keep on mucking with it!)

5. As we keep doing experiments, the counter results presumably (without additional ad-lib, over-imaginative epicycles) make up a tree. This tree is of course all the permutations of A with B. Remember that in all (?) MWI, every equivalent experiment *is the same experiment over again.* In SMWI it consists of the "structure" defining all the branching outcomes. You can want the WF strengths to matter, but ironically their mutual continued existence ruins that for them. For three tries there is AAA, AAB, ABA, ABB, BAA, BAB, BBA, BBB. That's going to hold up for the result and proportions thereof, whether finite or imagined as a limit to infinity, etc. (Sorry to repeat myself yet again, but for the record a complete list is here.)

6. I project a "self" along each of these. This effectively defines the probability "I see", or what else does it mean? Sure, each "self" means one of those like BAA, but put them all together and the proportion and the symmetry do and must give 50/50 chances as such. (Just consider the symmetry of counting, since the "strength" goes to waste when the components aren't "collapsed"; the presumed amplitude becomes more like the color of a bunch of switches ... I do admit amplitude affects things like what orientation of combined polarization etc., but that's still not "chance" anymore.) So SMWI or any "honest" branch structure cannot constrain the paths to get any frequentist result other than 50:50. Yet you say, "But wait, there's more ..."

7. First, you know there can't be a special "real me" that travels along some of these paths and not others, or with adjusted chances - like a ghost differentially animating zombies - that ruins the whole point of it all being real as the continued evolution, plus adds a weird extra being into it all. (Hey, believe that only real souls experience results and God tweaks those into the BR, with the rest "zombies" that feel nothing - naaa ...)

8. So your talk of branch "weights" and constraining of paths is contrived, and hard to get out of a continued evolution that simply makes a big bunch of complicated waves. (And just forget the degenerate case of density zero; that's the crusty way of saying the WF branch just doesn't exist.) It saddens me that supporters keep adding these questionable epicycles and forced, unnatural axioms as it were. You can't just "prune" some branches, finite or infinite as they may be, in order to get results you like. That's the conceptual equivalent of scholastics discarding "incorrect" dissections. Sure, the universe does act other than naive 50/50, but that shows the weakness of the theory. You have to show they ought to be pruned and how, as a logical consequence of the model! And if the model is too much of a Rube Goldberg contraption, that is suspicious and detracts from the pretense of parsimony.

9. Enough for this comment, but I ask you to read my *proposal paper at the name link about a way to test the claim that decoherence effectively converts a formerly coherent superposition into a mixture. Meanwhile, here's a dig at decoherence ideas for a while: how come we still get "statistics" that show interference (i.e., hits and not continued superposition) in the case of e.g. a coherent MZI, if "decoherence" is supposedly why we find "hits" and not continued superpositions?

PS Thank you for accepting my FBF request! People can be friends and still disagree on things.

Sigh. Neil. No, the responsibility is on you for clarifying what you mean. When people say they don't understand you, please take them at their word. Don't get snippy.

Now. I assumed you understood and accepted this since you never disputed it, but are you clear on the concept that there is no literal "branching" like forks on a river? If I talk about a "branch" of a function, it doesn't literally branch now, does it? I could just as easily talk about the "branches" of velocity distributions in a heterogeneous gas, and I don't think you would get the impression that the different molecules literally branch off. It's at best a metaphor, nothing more.

Are we quite clear on this? You need to say either "Yes, I agree that there is no real branching" and stick to it, or you need to say that "No, I disagree, and there really are literally new worlds branching off from the old one in MWI. And here's why." And it's only when you do the latter that you can make the arguments you've been making.

Do you understand this? You aren't allowed to say that the branching really is metaphorical and then turn right around and present the arguments you have been making, which assume that the branching is real. People have been getting the impression, I think, that you understand that there is no real branching, and so they're confused when you present an argument that assumes the branching is literally true.

By ScentOfViolets (not verified) on 05 Jun 2011 #permalink

This is starting to get unduly snippy, in a manner that was, admittedly, entirely predictable. So here's your sixteen-hour notice: I will close comments on this post at noon tomorrow. Make your final arguments now, and keep them civil. Escalation of personal insults will be met with summary disemvowelling. Attempts to continue the argument in comments to posts other than this one will lead to stronger countermeasures.

Apologies, Chad. I seem to recall a time not too long ago when you were talking about decoherence with an interferometry example and wondering why you seemed unduly short with this fellow Neil. Having had first-hand experience with what's going on now (I didn't really pay attention before), I can see why.

There's something peculiarly frustrating in having to go back and explain the same point over and over when you think that this time the confusion has been resolved.

But anyway, just so we're all on the same page (I thought this was something everyone already knew), the "branches" of MWI aren't literal branches, any more than, say, the branches in the possible pathways of a chemical or nuclear reaction are actually branches.

If anybody thinks these are literal branches, well, all I can say is they are arguing about something different from what is commonly considered to be the MWI interpretation.

By ScentOfViolets (not verified) on 05 Jun 2011 #permalink

Neil,

You can't, as Chad and many seem to think, just tack on an axiom of your choice to make things "work" because that axiom may not be logically consistent with what the model leads to.

I think everyone's aware of that. But most people here (and, for what it's worth, most physicists working in the area) disagree with you about what the empirical consequences of this model are. They consider it an "interpretation" because they find its empirical consequences to be virtually indistinguishable from those of any other mainstream QM interpretation.

I have big problems* with whatever supposedly keeps the outcomes effectively separated, but let's charitably assume that can happen.

Generally speaking, of course, the outcomes aren't kept effectively separated. Worlds merge as often as they split; just as any random process with multiple possible outcomes represents a splitting of worlds, any random process with multiple possible histories represents a merging of worlds.

3. In our "apparent" (what I would call "real" instead) world, those waves actually produce exclusionary 36:64 "chances" of detector clicks (defined in our actual world and by real experiment, not to be assumed a logical consequence of the model!) We (whoever the heck "we" are) "find" this by counting up hits, with variations but the usual statistical convergence etc. And various runs give varying results, just like tossing coins.

Correct, as long as we're talking about finite runs. Infinite runs all* converge to the same result: 36:64, exactly.

*or "almost all," in the the sense that the exceptions form a set of measure zero. But in your paragraph 8. you seem to accept that this is equivalent to "all".

4. But in SMWI, there just "are" these two components of a superposition, and the lower-amp "other self" is just as real as the stronger one. They can't sense their relative "thinness."

Yep, which of course fits with what we observe. When I happen to measure an improbable value in some system, I don't feel particularly "thin" or unreal.

5. As we keep doing experiments the counter results presumably (without additional ad-lib, over-imaginative epicycles) make up a tree. This tree is of course all the permutations of A with B. Remember that in all (?) MWI, every equivalent experiment *is the same experiment over again.* In SMWI it consists of the "structure" defining all the branching outcomes. You can want the WF strengths to matter, but ironically their mutual continued existence ruins that for them. For three tries there is AAA, AAB, ABA, ABB, BAA, BAB, BBA, BBB. That's going to hold up for the result and proportions thereof whether finite or imagined as limit to infinity, etc.

Nope. It will hold up in all finite cases, but not in the infinite limit. The distinction between finite and infinite behavior is critical here.

In the infinite case, each logically possible outcome can be written as an infinite sequence like AABABBB.... The set of these outcomes is infinite, and uncountably so. (This is easy to see, since the set of paths can be one-to-one mapped to the set of real numbers, as expressed in binary notation.) Essentially, as you proceed down your tree and the number of parallel branches goes to infinity, the branches blur into a continuous spectrum. It no longer makes sense to talk about the wave function "strengths" of individual outcomes at this point, since they're all infinitesimal. Instead, one talks about the total wave function magnitude (squared) corresponding to sets of outcomes. (Analogously, you don't ask about the probability of an electron being detected at a particular point in space, but rather about the probability of its being detected within a particular region of space--an infinite set of points.)
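Spelling that mapping out, in case it's unclear: code A as 0 and B as 1, so an infinite outcome sequence (b_1, b_2, b_3, ...) becomes the binary expansion of a real number in [0,1]:

$$x = \sum_{i=1}^{\infty} \frac{b_i}{2^i}, \qquad b_i \in \{0, 1\}.$$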

And in the limit, the total magnitude of all the outcomes that deviate significantly from the expected 36:64 ratio is zero. Again, this is totally different from the finite case. When there's only a finite number of experiments, even the weirdest outcomes have nonzero magnitude, so you're perfectly right that they do have "mutual continued existence." But when the number of experiments goes to infinity, that no longer holds. No magnitude, no existence.
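Here's the standard calculation behind that claim, assuming each A-branch carries squared amplitude p = 0.36 and each B-branch q = 0.64 (these are the "weights"). A particular sequence of n outcomes containing k A's has squared magnitude $p^k q^{n-k}$, there are $\binom{n}{k}$ such sequences, and the total magnitude of all sequences whose A-frequency deviates from p by more than any fixed $\varepsilon$ obeys

$$\sum_{k:\, |k/n - p| \ge \varepsilon} \binom{n}{k} p^k q^{n-k} \;\longrightarrow\; 0 \quad \text{as } n \to \infty,$$

which is just the weak law of large numbers. For finite n, every sequence has nonzero magnitude; in the infinite limit, the deviant ones collectively have none.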

There are lots of mathematical systems where life is very different when you're working with infinite sets/sequences rather than finite ones. This happens to be one of them!
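And since the disagreement keeps circling back to counting versus weighting, here's a small brute-force enumeration (Python again, with illustrative parameters) that shows both at once:

```python
from itertools import product
from math import isclose

def branch_stats(n=10, p_a=0.36):
    """Enumerate all 2**n outcome sequences of A's and B's for n trials,
    then compare naive branch counting with Born weighting, where a
    sequence containing k A's gets weight p_a**k * (1 - p_a)**(n - k)."""
    q = 1.0 - p_a
    naive_mean = 0.0     # mean freq(A) if every branch counts equally
    weighted_mean = 0.0  # mean freq(A) if branches carry Born weights
    total_weight = 0.0
    for seq in product("AB", repeat=n):
        k = seq.count("A")
        w = p_a**k * q**(n - k)
        naive_mean += (k / n) / 2**n
        weighted_mean += w * (k / n)
        total_weight += w
    assert isclose(total_weight, 1.0)  # weights sum to (p + q)**n = 1
    print(f"naive branch-count mean freq(A): {naive_mean:.4f}")    # -> 0.5000
    print(f"Born-weighted mean freq(A):      {weighted_mean:.4f}")  # -> 0.3600

branch_stats()
```

Counting branches democratically gives 50/50, exactly as you say; weighting each branch by its squared amplitude gives 36:64. The question is which number tracks what an observer should expect, and the weights are the only quantity the wave function actually supplies.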

6. I project a "self" along each of these. This effectively defines the probability "I see", or what else does it mean? Sure, each "self" means one of those like BAA, but put them all together and the proportion and the symmetry do and must give 50/50 chances as such.

But you can't put them all together. What would putting them all together mean? You can't call up your alternate selves and ask them what measurements they recorded. You--the you that's capable of remembering a particular set of measurement outcomes--are the self projected along a single history, and the probability you expect to see should be calculated with respect to that history.

7. First, you know there can't be a special "real me" that travels along some of these paths and not others, or with adjusted chances - like a ghost differentially animating zombies - that ruins the whole point of it all being real as the continued evolution, plus adds a weird extra being into it all.

Yep. There's one of you on every permissible infinite path, and although each of you sees a different sequence of experimental results, each of you sees their sequence eventually converge to the same 36:64 ratio.

But not all logically possible paths are physically permissible. MWI doesn't imply an Ultimate Ensemble multiverse, where every world that logically could exist does exist.

8. So your talk of branch "weights" and constraining of paths is contrived, and hard to get out of a continued evolution that simply makes a big bunch of complicated waves.

On the contrary, every wave has an amplitude (which is what the "weights" really are), and if the amplitude's zero in a particular region of Hilbert space, then, well, there's no wave there. It seems pretty straightforward to me!

You can't just "prune" some branches, finite or infinite as may be, in order to get results you like.

Ah, but no branches are pruned. Every branch is found along some infinite path; it's just that not all infinite combinations of branches are permitted.

You should be familiar with this idea from, say, the EPR experiment. QM doesn't forbid any particular spin value on either of two entangled electrons, but it does forbid the two from exhibiting particular combinations of spins.
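For concreteness, the spin-singlet state of that experiment is

$$|\psi\rangle = \frac{1}{\sqrt{2}}\bigl(|{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle\bigr),$$

where either electron alone can come out up or down, but the amplitudes for $|{\uparrow\uparrow}\rangle$ and $|{\downarrow\downarrow}\rangle$ are zero, so those combinations simply don't occur when both spins are measured along the same axis. Same pattern: every individual branch exists, but not every combination of branches does.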

9. Enough for this comment, but I ask you to read my proposal paper at name link about a way to test the claim that decoherence effectively converts a formerly coherent superposition into a mixture.

I'd be glad to. It might take a while, though, as I've got coding to do...

Meanwhile here's a dig at decoherence ideas for a while: how come we still get "statistics" that show interference (i.e., hits and not continued superposition) in the case of e.g. a coherent MZI, if "decoherence" is supposedly why we find "hits" and not continued superpositions?

Take the following with several grains of salt, because my total formal training on decoherence took up about thirty-five minutes at the end of intro QM, and that was about a decade ago. But as I understand it, it's a question of bases. A pure state, as expressed in one basis, is usually a superposition of states when expressed in other bases. Therefore, a system can't decohere in all bases simultaneously, and the "hits" we find with respect to the measurement basis will, necessarily, correspond to superpositions in some other bases. And it's those other bases in which our statistics can show interference.
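A quick numerical illustration of that "pure in one basis, superposed in another" point, using a single qubit for simplicity (take it with the same grains of salt):

```python
import numpy as np

# A state that's pure in the computational {|0>, |1>} basis...
psi = np.array([1.0, 0.0])

# ...re-expressed in the Hadamard {|+>, |->} basis, where
# |+> = (|0> + |1>)/sqrt(2) and |-> = (|0> - |1>)/sqrt(2).
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
psi_pm = H @ psi

print(psi_pm)               # [0.7071 0.7071]: an equal superposition
print(np.abs(psi_pm) ** 2)  # [0.5 0.5]: 50/50 weights in the new basis
```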

Let's apply this to the classic double-slit experiment (using a beam of electrons, say.) There are two bases of interest. The first basis includes (among other things) the position of the electron as it passes through the slits--did it pass through the left slit, or the right slit? The second basis includes the position of the electron when it smacks into the detector screen--did it hit here or here or here?

If both slits are open when you fire an electron through, decoherence doesn't occur until the electron hits the detector screen. Then the "hit" we record is a pure state in the second basis... we can say that the electron hit exactly here. But that state is, itself, a superposition in the first basis, because the electron could have gone through either slit before ending up at that spot on the screen. And the statistical distribution of "hits" demonstrates interference between the left-slit and right-slit states in the first basis.

If, on the other hand, you block one slit or stick a particle detector in it, you force decoherence with respect to the first basis (since now the external environment "knows" which slit the electron went through.) That's why your "hits" pattern no longer indicates interference between the states of the first basis.
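The same contrast in toy-model form, if you prefer numbers to words (this is a cartoon of the two-slit amplitudes, not a real diffraction calculation):

```python
import numpy as np

# Toy double slit: amplitudes from the left and right slits at screen
# position x, with a relative phase set by the path difference.
x = np.linspace(-1.0, 1.0, 9)
k, d = 40.0, 0.1    # wavenumber and slit separation, arbitrary units
psi_L = np.exp(1j * k * d * x / 2)
psi_R = np.exp(-1j * k * d * x / 2)

# Both slits open, no which-path info: add amplitudes, then square.
coherent = np.abs(psi_L + psi_R) ** 2
# Which-path info recorded (decoherence in the first basis):
# add probabilities instead.
decohered = np.abs(psi_L) ** 2 + np.abs(psi_R) ** 2

print(coherent.round(2))    # oscillates between ~0 and 4: fringes
print(decohered.round(2))   # flat 2.0 everywhere: no fringes
```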

Now, when you say "a coherent MZI" (I assume you mean MWI?), I'm not sure what you mean. MWI features decoherence too; it just defines decoherence as entanglement between the states (in whatever basis) of a quantum system and the states (in a "perceptual basis") of the observer. Decoherence still only occurs with respect to a particular basis of the measured system, so you can still have your "hit" statistics reveal superpositions of states in other bases.

PS Thank you for accepting my FBF request!

No problem.

By Anton Mates (not verified) on 05 Jun 2011 #permalink