What's the Matter with Making Universes?

In a comment on a post from last week, Neil B. asks a good question about my snarky response to the "make-your-own-universe" kit:

[Y]ou never explained why this "universe creator" could be considered based on a misapprehension. Considering the way multi-worlds QM theory is usually presented, IIUC, why would you (anyone?) say it doesn't work as advertised?

The short and unhelpful answer to this is "See Chapter 4 of my book when it comes out." I spent a lot of time wrestling with the best way to understand this stuff, and I think it came out all right.

The longer answer is, well, complicated.

Let's look at the description of the device again:

If two events are possible, quantum theory assumes that both occur simultaneously - until an observer determines the outcome. For example, in Schrödinger's famous thought experiment, in which his cat may have been killed with a 50 per cent probability, the cat is both alive and dead until someone checks. When the observation is made, the universe splits into two, one for each possible outcome. For example, Schrödinger's cat would be alive in one universe and dead in the other universe.

According to the theory, any kind of measurement causes the universe to split and this is the basis of Keats' new device. His universe creator uses a piece of uranium-doped glass to create a stream of alpha particles, which are then detected using a thin sliver of scintillating crystal. Each detection causes the creation of a new universe.

For one thing, I think it's a bit of a stretch to call this "quantum theory" as if it were the only thing out there. Many-Worlds is not "quantum theory"-- it's still just one interpretation of the theory among many (approximately as many as there are people who have thought deeply about this stuff).

The bigger issue, though, is the claim that this device creates some discrete and countable set of universes-- twenty trillion, or some such. This is presumably a count based on the number of uranium nuclei in the glass, assigning two universes for each possible decay. That's not the right count, though, even in the sort of Copenhagen/Many-Worlds mash-up interpretation they're describing. The actual number of "universes" created here is infinite.

Let's think about a much simpler example-- a single radioactive nucleus, inside a detector that will detect a decay with 100% probability. Let's say you put the nucleus inside the detector, and then sit and watch it. One second later, you get a "click" from your detector (the canonical term, even though I don't think I've ever made an actual measurement with a clicking detector). The way it's described in the explanatory text, that counts as the creation of two universes-- one in which you detected a decay, and one in which you didn't.

But if you think about it, that doesn't really make sense. After all, the decay is probabilistic, so there was a chance that the decay would happen after only half a second. So there was a new universe created there. You just happened to be in the bit that didn't notice the new universe.

But there was also a probability of decay at a quarter of a second, giving rise to a new universe then. And there was a probability of decay at an eighth of a second, a sixteenth of a second, and all the way down the Zeno's paradox regression to infinity. In the one second that you watched the detector waiting for the "click," you actually "created" an infinite number of "universes," one for each infinitesimal instant of time after the start of the experiment.
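
(For the simulation-minded, here's a quick Python sketch of this point-- purely illustrative, with a made-up one-second mean lifetime. Each draw is one "run" of the watch-the-detector experiment, and because the decay time comes from a continuous exponential distribution, no two runs branch at quite the same instant: there's a possible outcome for every instant in the continuum.)

```python
# Illustrative sketch only: sampling the decay time of a single nucleus.
# The mean lifetime TAU is an assumed value, chosen for the example.
import random

TAU = 1.0  # assumed mean lifetime, in seconds

# Each draw is one "run" of the experiment; the exponential distribution
# is continuous, so every run lands on a different decay instant.
decay_times = sorted(random.expovariate(1.0 / TAU) for _ in range(10))
for t in decay_times:
    print(f"decay detected at t = {t:.6f} s")
```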

You don't need to hear a click or see a scintillation to create a new universe-- the act of not detecting a decay does the job as well. You're creating new universes all the time.

(You could make something that would fit the given description better by using something with only two discrete measurement outcomes. A pair of polarized sunglasses does the trick nicely-- each photon hitting the glasses is either transmitted or not, leading to two universes per photon. If you'd like an active detection component, a polarizing beamsplitter and two detectors would do.)
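
(A hedged sketch of that two-outcome version, in Python-- the polarizer angle and photon count below are invented for the example. Each photon's fate is a single yes/no trial, with the transmission probability given by Malus's law.)

```python
# Illustrative sketch: each photon hitting a polarizer is either
# transmitted or blocked -- two discrete outcomes per photon.
import math
import random

theta = math.radians(30)            # assumed angle between photon polarization and polarizer axis
p_transmit = math.cos(theta) ** 2   # Malus's law: single-photon transmission probability

outcomes = ["transmitted" if random.random() < p_transmit else "blocked"
            for _ in range(20)]
print(outcomes.count("transmitted"), "of", len(outcomes), "photons transmitted")
```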

Of course, an infinity of universes, each differing only in the decay time of a particular nucleus, is even harder to get your head around. Which is why the "creating universes" language is a little unfortunate-- what we really have is a single universe, whose wavefunction is growing exponentially over time, spawning new branches every time a measurement occurs. Which is still a lot to swallow, but is at least self-consistent.

The other problem with the explanation as written is that it seems a little Copenhagen-ish in the emphasis on measurement. It implies that new universes are only created when you watch the detector, which heads down the "Is the Moon there when I'm not looking at it?" path to madness very quickly.

Anyway, that's what I can offer by way of a relatively concise explanation of why I look askance at the "make-your-own-universe kit." Others may be able to elaborate further, or just tell me that I'm an idiot in the comments.


When I thought I understood many-worlds it was how I think you are describing it. One wave function, many aspects to that wave function.

When I measure the cat it's not that it suddenly chooses whether to be dead or alive, it's that the bit of the wave function that somehow represents me holding a dead cat has become orthogonal to the bit that has me holding the alive one.

It should be clear I don't really get it, but then Copenhagen makes even less sense when you think about it too much as well.

I never got where the energy came from to create a new universe in the first place. Also wouldn't there have to be 2^N universes created every delta of time, where N=# of particles in ALL the universes and delta is the theoretical quantum unit of time?

By Zippy the Pinhead (not verified) on 07 Nov 2008

I've always been under the impression that pretty much all interpretations of quantum mechanics are wrong. I'm not saying this because I think I've solved this problem and everyone else is an idiot. It's much more just the feeling I have that the puzzle stems from the failure of our language to capture something like this. Human language is just so classical at its core (since we are) that it just can't give a good description of quantum mechanics.

The words "measurement" and "observation", I find suspect. They have too much metaphysical baggage associated with consciousness or whatnot. Even the word "interpretation" is pretty weighed down with metaphysical content. I don't really have a better suggestion of how to think about this, but my philosophy is just not to take any "interpretations" of quantum mechanics very seriously. Not bad for discussion at the bar, but otherwise it's pretty much just going to lead you into pointless confusion. And this comes from a theoretical physicist who loves pointless discussions.

I never got where the energy came from to create a new universe in the first place. Also wouldn't there have to be 2^N universes created every delta of time, where N=# of particles in ALL the universes and delta is the theoretical quantum unit of time?

That's one of the reasons why the whole "creating universes" formulation is problematic. It gives the impression that you have whole vast universes coming into existence, with new matter created out of God knows what, which is absurd.

It's more accurate to say that we have one universe, described by a single wavefunction with infinitely many branches to it. Those branches do not interact with one another in any measurable way, making them effectively separate "universes," but there's only one universe worth of mass-energy at any given time.

That's not a perfect description, either, but it's closer.

Chad, thanks for "hoisting" me from comments (and I am also reminded how much better 2008 is than 2000!) You've got a point about the continuity issue and radioactive decay - why don't MW enthusiasts get a better grip on taking that into account? You seem to contradict yourself, by first saying that alternatives exist for all the continuous uncertain times when the nucleus decays, in and of itself. But then you later say "what we really have is a single universe, whose wavefunction is growing exponentially over time, spawning new branches every time a measurement occurs." Well, do you really need a "measurement" or not?

BTW I don't believe the MW idea or the decoherence solution is viable, since the original waves should just keep interacting as waves (like classical waves in a ripple tank) unless and until some collapse "put in by hand" meddles and makes them localize (see my jeremiads against deco in the Cosmic Variance thread "Quantum Hyperion.")

One of the oddities of the whole problem is: if the decay time of a nucleus is uncertain, then the wave function is constantly leaking out instead of being a "shell" emitted at a certain time? Does that mean versions emitted at greatly separated times can interfere with each other, and also if using light? It seems that makes a mess of the WF, which can be spread out over space but at least needs some reasonable interval to have been created during, to get appropriate frequency structure, etc.?

Well, I think a challenge for the measurement problem is: Consider a MZ interferometer. A photon hits the first beamsplitter, and we know from interference that the WF splits up into two legs, which can interfere in a second BS before hitting detectors (we know because we can get bright-fringe all-A-detector hits etc.) But I ask, how come interacting with the BS doesn't "collapse" the photon? After all, those are metallic atoms and redirect it, they just don't absorb it if it goes through. Why must absorption be what localizes it, and is that then what really makes "a measurement": absorption? But reflection is a sort of absorption/re-emission, so the distinction isn't clear cut. Maybe, when something "dwells" inside something else? - but that is more a poetic metaphor than a clear physical theory.

PS: This talk of "portions of wavefunctions becoming orthogonal" etc, and making separate worlds just doesn't cut it. The waves themselves just add up by simple superposition, and should stay that way all in one common universe as was always indicated by Schrodinger evolution. Our distinctions about ways they can be separated out and isolated into parts, about "mixtures" etc, are just ad hoc ways to talk about what measurements and collapses force after the fact. Like I said at CV, attempting to introduce such talk and even interpreting the waves as "probability" takes the results of collapse for granted surreptitiously, and makes using them to "explain" collapse a circular argument. (Finally, when I see defenders of deco say "appear to" collapse I recognize a double-talk alert phrase.)

You seem to contradict yourself, by first saying that alternatives exist for all the continuous uncertain times when the nucleus decays, in and of itself. But then you later say "what we really have is a single universe, whose wavefunction is growing exponentially over time, spawning new branches every time a measurement occurs." Well, do you really need a "measurement" or not?

The latter use of "measurement" is an unfortunate phrasing, caused by writing this late at night. The wavefunction spawns new branches every time an event that causes a split in the wavefunction occurs-- that could be lots of different things.

I don't believe the MW idea or the decoherence solution is viable, since the original waves should just keep interacting as waves (like classical waves in a ripple tank) unless and until some collapse "put in by hand" meddles and makes them localize (see my jeremiads against deco in the Cosmic Variance thread "Quantum Hyperion.")

This is the bulk of Chapter 4, and something I spent a lot of time trying to get my head around. They do interact and interfere with one another-- decoherence doesn't make them no longer behave like waves. What decoherence does is obscure the interference effects that would let us detect the interaction.

The example that makes it clear to me is a Mach-Zehnder interferometer that you send single photons into. You can't see an interference pattern with a single photon, so how do you know that interference occurs?

Well, you repeat the experiment many times, and you build up the pattern out of many single-photon detections. Because the interference happens the same way every time (provided you've set your interferometer up properly), repeating the experiment lets you determine the probability of a photon being detected at each of the output ports.

Adding decoherence to the system is like adding a random phase shift on one arm of the interferometer. Each photon passing through gets a random shift, and that shift is different every time. In this situation, you can't build up a pattern by repeating the experiment, because the interference pattern would be different each time. When you try to add those patterns together, they smear out, and you end up with nothing.

It doesn't mean that the photons have stopped interfering, though, or that they've stopped behaving like waves. They interfere just fine, but you can't see the effects because of the random phases. If you keep track of the phase shift, though, you could go back to the data and select out only those photons that got the same shift, and you'd find an interference pattern, just like you did in the first case.
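
(Here's a rough numerical sketch of that argument in Python-- purely illustrative, not anything from the book. With a fixed phase, repeated single-photon runs build up the interference pattern; giving each photon a random extra phase washes the statistics out to 50/50; and post-selecting on the recorded phase brings the pattern back.)

```python
# Illustrative sketch: single photons through a Mach-Zehnder interferometer.
# For a phase difference phi between the arms, the probability of a photon
# landing at output port A is cos^2(phi/2).
import math
import random

def detect_at_A(phi):
    """One single-photon run: True if the photon exits at port A."""
    return random.random() < math.cos(phi / 2) ** 2

N = 100_000
phi0 = 0.0  # interferometer aligned for the bright fringe at port A

# Coherent case: every photon sees the same phase, and the pattern builds up.
p_coherent = sum(detect_at_A(phi0) for _ in range(N)) / N

# "Decoherent" case: each photon picks up a different random phase shift.
shifts = [random.uniform(0.0, 2.0 * math.pi) for _ in range(N)]
hits = [detect_at_A(phi0 + s) for s in shifts]
p_decoherent = sum(hits) / N

# Keeping track of the shifts lets you post-select the photons that got
# (nearly) the same shift, recovering the interference pattern.
kept = [h for h, s in zip(hits, shifts) if s < 0.1 or s > 2.0 * math.pi - 0.1]
p_recovered = sum(kept) / len(kept)

print(f"fixed phase:    P(A) = {p_coherent:.3f}")    # ~1.0
print(f"random phases:  P(A) = {p_decoherent:.3f}")  # ~0.5
print(f"post-selected:  P(A) = {p_recovered:.3f}")   # ~1.0 again
```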

Decoherence works the same way. The different branches of the wavefunction still behave like waves, and still interfere with one another, but interactions with a larger environment add random phase shifts that obscure the interference pattern. Decoherence doesn't supersede the normal laws of physics; it just prevents us from detecting the effects through repeated experiments.

OK, thanks, and I'll need to look into that more. Will your book be online? I still don't think decoherence can explain collapse; I don't think it can suddenly place a "hit" of a photon or electron in one spot and keep interaction away from every other spot in the universe. Those interfering/interacting waves are still spread out over a wide area, and "measurement" is supposed to suddenly pack them into a little spot - that doesn't sound like what waves do with each other as such.

Remember also that the detectors may be many km apart with no material or radiative exchanges. Also, you still haven't said why interaction with a beamsplitter in say a MZ doesn't collapse the photon - we know it doesn't, because the hits on channel A and B would be equally likely after the second BS. BTW, since MZ interference can be set up for assured A channel hits and no B channel hits, the pattern of hits isn't really probabilistic - it is an assured result. That is easy to forget when thinking of buildup of spots on a screen etc.

Also, remember that in quantum "seeing in the dark," placing an object in the MZ can make a hit appear at B even though no photon was absorbed by the object - so we would know that interference had been *stopped* at least. This stuff really just doesn't make sense, maybe we should just live with that.

Let me sharpen the point about the mystery of not collapsing at the first beamsplitter: The first BS in an MZ is a glass cube with a metal film across a diagonal. The photon in effect is absorbed and re-emitted by the metal atoms, but the wave is split down two paths, as we know from getting the bright "fringe" of all-A channel hits. But I don't have to have absorbing "detectors" at A and B; I could have frosted screens instead. If I did, I would then get an exclusive flash at one or another screen (if, say, I adjusted the paths so the interference wasn't the same as before). I could even put the screens up before the wave could reach the second BS, and get a hit on either one but not both.

Furthermore, I could use either frosted screen and see the photon directly (well, only for sure with a perfect eyeball right up against it, but the point works in principle), or I could use UV photons and phosphor, etc. Well, that is "absorbed and re-emitted." So why does the photon collapse then? The first BS was a metal film; why shouldn't the photon "decide" there which way to go instead?

Will your book be online?

If you mean "Will it be available in ebook form?", that is a decision that will be made above my pay grade. It'll be coming out from Scribner, sometime next year (barring catastrophe), and what's available when and where will be up to them.

I still don't think decoherence can explain collapse; I don't think it can suddenly place a "hit" of a photon or electron in one spot and keep interaction away from every other spot in the universe. Those interfering/interacting waves are still spread out over a wide area, and "measurement" is supposed to suddenly pack them into a little spot - that doesn't sound like what waves do with each other as such.

You're mashing together bits of Many-Worlds and Copenhagen here. The whole point of Many-Worlds is that there isn't a "collapse" of the wavefunction.

My understanding of it (which I would not attempt to pass off as complete, by any means) is that Many-Worlds theories explain the measurement process as an entanglement phenomenon. That is, when a photon passes through an interferometer (it's cleaner to think about this in terms of a Mach-Zehnder), the wavefunction splits into two pieces. When the photon is detected at one of the two detectors, the state of the photon becomes entangled with the state of the detector-- the states are now "(photon at detector 1)(detector 1 recording a photon)" and "(photon at detector 2)(detector 2 recording a photon)". The wavefunction for the photon itself still extends over a wide range of space, but the different pieces are now entangled with different detector states.

This is what gives rise to the whole idea of the different results as separate universes-- the piece of the wavefunction corresponding to the photon going one way is entangled with wavefunction components corresponding to detector readings, and the mental states of scientists recording those readings, and so on. All of that stuff is distinct from the piece corresponding to the photon going the other way.
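
(If it helps to see that state written out, here's a toy version in Python-- a sketch assuming numpy, with basis labels of my own choosing rather than anything standard. The key feature is that the only surviving components are the two correlated "branches.")

```python
# Illustrative sketch: the entangled photon-detector state, as plain vectors.
# Basis: index 0 = "photon at / detector reading 1", index 1 = "... 2".
import numpy as np

photon_at_1 = np.array([1.0, 0.0])
photon_at_2 = np.array([0.0, 1.0])
detector_reads_1 = np.array([1.0, 0.0])
detector_reads_2 = np.array([0.0, 1.0])

# Equal superposition of (photon at 1)(detector reads 1) and
# (photon at 2)(detector reads 2) -- the two branches.
state = (np.kron(photon_at_1, detector_reads_1)
         + np.kron(photon_at_2, detector_reads_2)) / np.sqrt(2)

print(state)  # [0.707 0. 0. 0.707]: no "photon at 1, detector reads 2" component
```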

Strictly speaking, if you were engaged in some quixotic project to write down the wavefunction for the entire universe, you would need to keep all of these separate branches full of complicated entangled quantum objects around. Their presence would mean that any subsequent measurement would, on some level, involve the interaction of bazillions of different possible photon states entangled with other things. We can't see that, though, because of decoherence-- any attempt to detect an interference effect is doomed because of the random shifts that add up to prevent the formation of an interference pattern.

For the sake of everyone's sanity, we treat the measurement outcomes as if they are completely distinct and independent-- different universes, as it were. This is really just a matter of mathematical convenience, though.

OK, thanks Chad for more illumination on decoherence w.r.t. Many Worlds. Still, I don't want to sound too repetitious, but what about the issue of "collapse" happening later in the chain rather than at the first beamsplitter, etc.? Regardless of whether there's a real collapse/whatever and how to interpret it, there is still a problem of why it should happen farther down the chain of interactions (most explicitly argued in #9).

Chad: whether I agree or not (I'm in a mixed state), I very much appreciate your comment #10.

I still wonder what science fiction you most like or dislike that bears on the subject.