A Tale of Two Slits

(It’s just good, clean physics, I swear!) Last week, my buddy Lucas was watching a BBC documentary about petulant musician Mark Oliver Everett (of the Eels), and his quest to understand his father, the late physicist Hugh Everett.

Hugh Everett is famous as the discoverer of the many-worlds interpretation of quantum mechanics, which is a fascinating — although speculative — idea about how the Universe fundamentally works.

Imagine you have a wall with two slits — very close together but distinct — with a screen behind it. If I throw very small grains of sand at this wall, we can predict what’s going to happen. Some of the grains will go through one slit, some of the grains will go through the other slit, and the rest of the grains will get blocked by the wall. If I take a look at the screen, I will find two neat piles of sand stacked up against it.

But what if, instead of particles like sand, I sent waves, like light or water waves, through the two slits? Because they’re waves, they can interfere with one another. Instead of two neat piles, we get a complex interference pattern. In some places there’s constructive interference, and the light appears more intense on that part of the screen. In other places there’s destructive interference, and there’s less light (or even no light at all).
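To make the fringe math concrete, here is a minimal numerical sketch of that pattern. It assumes the standard far-field (Fraunhofer) approximation, in which the two-slit intensity at screen position x goes as cos²(π·d·x/(λ·L)) for slit separation d, wavelength λ, and screen distance L (the single-slit envelope is left out for simplicity):

```python
import numpy as np

def two_slit_intensity(x, wavelength, slit_sep, screen_dist):
    """Far-field (Fraunhofer) two-slit fringe intensity, normalized to 1
    at the central maximum; the single-slit envelope is ignored."""
    phase = np.pi * slit_sep * x / (wavelength * screen_dist)
    return np.cos(phase) ** 2

# green light through slits 0.1 mm apart, with the screen 1 m away
x = np.linspace(-5e-3, 5e-3, 11)                       # screen positions (m)
intensity = two_slit_intensity(x, 500e-9, 1e-4, 1.0)
```

Positions where d·x/L is a whole number of wavelengths land on bright fringes; odd half-wavelengths land on dark ones.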

So, here’s where things get weird. What if — instead of sand or light — we shot electrons through those double slits? Electrons are supposed to be particles, right? So they should make two neat little piles. But they don’t. They interfere with each other, and make the pattern on the screen that only waves are supposed to make.

Well, physicists are clever, so they decided to try this little trick: let’s shoot the electrons one at a time at these two slits. Most electrons that you fire smack against the wall, but a few make it through. After a few hundred electrons, you can’t really tell what’s happening, but after tens of thousands make it through and you add up where they landed, here’s what you find:

You still get an interference pattern! Somehow — and this seems nuts — each electron is interfering with itself! How is that possible? Is part of the electron passing through one slit and part passing through the other slit?
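One way to see how single electrons can build up an interference pattern is to mimic the experiment on a computer: treat the fringe pattern as a probability distribution and draw landing spots from it one electron at a time. This is only an illustrative sketch (the cos² profile and the three-fringe screen are made-up stand-ins for a real wavefunction):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def sample_hits(n_electrons, grid_points=400):
    """Sample landing positions one electron at a time, using a cos^2
    fringe profile as the probability distribution (a toy stand-in for
    the real electron wavefunction on the screen)."""
    x = np.linspace(-1.0, 1.0, grid_points)
    prob = np.cos(3 * np.pi * x) ** 2      # three fringes across the screen
    prob /= prob.sum()                      # normalize to a probability mass
    return rng.choice(x, size=n_electrons, p=prob)

few = sample_hits(300)        # a histogram of these looks like noise
many = sample_hits(100_000)   # a histogram of these shows clear fringes
```

Each individual hit lands at a single definite point; the fringes only emerge in the statistics, just as in the real experiment.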

Well, we can do one more experiment: this time, we shoot electrons one at a time at this wall, but at each slit, we shine a bit of light, and detect which slit the electron goes through. As each electron is fired, one (but never both) of the detectors goes off, telling you which slit the electron went through. But — and here’s the crazy part — the pattern on the screen now shows no interference; instead we just get two separate peaks corresponding to the two “classical”, particle-like paths the electrons could have taken.

What are the possible explanations for this? Well, the standard quantum mechanical interpretation (i.e., Niels Bohr’s Copenhagen interpretation) says that everything is always a wave, and it’s only when an observation is made that this wave collapses and things act like a particle. Everett’s idea, the many-worlds interpretation, holds that everything is always a particle, but whenever there are multiple possible outcomes, they all happen, and we don’t know which one happened in our universe until we “look”.

The weirdest part? These two incredibly different interpretations — one postulating googols of other Universes, one postulating that the laws of physics are not deterministic — both give rise to the same observations. In other words, these interpretations are not only both valid, but as far as we can tell, are indistinguishable in our Universe. Wrap your minds around that one!

Comments

  1. #1 toad
    June 1, 2009

    This has always been my favorite example of quantum mechanics. Thanks for covering it!

  2. #2 Eamon
    June 1, 2009

    Seconded!

  3. #3 RF
    June 1, 2009

    Hello, Ethan, I have a short n00b astronomy question for you [maybe this isn't the spot for such questions, but it is the one most likely to get a quick response].

    This morning, around 0:30 I went out with my camera, because there was a beautiful night sky. I took some 30 second exposures to see how my camera deals with the night sky and one of the brightest objects that I captured is a very nice blue dot with what appear to be rings. Due to slight movement of the camera and the slow drift of the night sky the rings are very uncertain and could be something else. I am having some trouble finding adequate software to determine if it is possible, so I am asking you if this was Saturn? [from what I have been able to determine so far, it isn't, but I am a n00b and you are a nerd :) ]

    position:
    51° 29′ 13″ N
    3° 10′ 14″ W

    looking straight up, maybe just slightly tilted to the North-East.

    PS – If it isn’t Saturn, what is it then? I will somehow send you a 5-min image sequence [10 30-second exposures] probably in a few hours.

  4. #4 chezjake
    June 1, 2009

    Very nicely explained. Chad and Emmy will be envious.

  5. #5 Gingerbaker
    June 1, 2009

    In his book Timeline, Michael Crichton talks about the two slit experiment using single electrons and posits the idea that each single electron is not interacting with itself, but rather with electrons from another universe, which is a fun idea. And useful for his plot. :)

    BTW, Ethan, you promised us an entry on the ‘spooky effect at a distance’ a couple of weeks ago. My blood spooky index is falling into the danger zone – will you be posting on this soon? (We can only hope) :)

  6. #6 Lobster
    June 1, 2009

    It’s that whole determinism thing that spooks me. If there is such a thing as actual chaos, it allows for free will (good) but it also allows for causality violations (WTF!). It means that there are things which are unknowable, not by us, not by anyone or anything. Minor, unimportant things like where a particle will land, but still things my gut tells me we ought to be able to predict with enough information.

    But then, the beauty of quantum physics is it tells your gut to go play with blocks while the grown ups talk.

  7. #7 BenHead
    June 1, 2009

    A better way (and certainly, in modern times, the more common way) of considering many-worlds is that everything is always a WAVE. There simply is no collapse, because there need not be. The linearity of the equations of QM and the way terms separate when a system is in a superposition and/or entangled state obviates it. Look…

    If the electron is in a superposition of having gone through slit 1 and slit 2 – a|e1>+b|e2> – and it doesn’t interact with anything else, those terms can interfere with one another if they spread and cover the same spacetime coordinates (which is exactly what happens). But if I look at which slit it went through first (remember, assuming no collapse), then my state and its state become entangled – a|e1I1>+b|e2I2> – and now the electron has no independent state, so it can’t interfere with itself, so long as the “I” portion of that remains distinct. (And of course it’s not just “I”, it’s the macroscopic detector that told me, the air particles that were heated by one bulb or the other lighting up or whatever, etc, so those states will effectively always be distinct.)
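The cancellation of the cross term that this comment describes can be checked numerically with a toy model: two Gaussian packets reaching the screen, one per slit, with the interference (cross) term scaled by the overlap of the which-path states. The packet widths, centers, and momenta below are arbitrary illustrative choices, not derived from any real apparatus:

```python
import numpy as np

x = np.linspace(-10, 10, 2001)   # screen coordinate
dx = x[1] - x[0]

def packet(center, momentum):
    """Normalized Gaussian amplitude arriving at the screen from one slit
    (a toy model; real packets would follow from the actual geometry)."""
    psi = np.exp(-(x - center) ** 2 / 8) * np.exp(1j * momentum * x)
    return psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

psi1 = packet(-1.0, -2.0)   # amplitude from slit 1
psi2 = packet(+1.0, +2.0)   # amplitude from slit 2

def screen_prob(overlap):
    """P(x) on the screen when the two which-path (detector) states have
    the given inner product: overlap=1 means the paths are completely
    indistinguishable, overlap=0 means perfect which-path information."""
    a = b = 1 / np.sqrt(2)
    cross = 2 * np.real(np.conj(a) * b * np.conj(psi1) * psi2) * overlap
    return abs(a) ** 2 * np.abs(psi1) ** 2 + abs(b) ** 2 * np.abs(psi2) ** 2 + cross

fringes = screen_prob(1.0)   # cross term survives: interference fringes
smooth = screen_prob(0.0)    # cross term killed by entanglement: two humps
```

With overlap 1 the probability at the screen center doubles and deep dips appear between the humps; with overlap 0 the cross term vanishes and only the smooth incoherent sum remains, which is exactly the fringes-versus-no-fringes difference in the experiment.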

    The moniker “Many-Worlds” is very pop-sciencey; it was not created by Everett, but by those later who popularized his interpretation. There’s only one world, but it exists in an incredibly complex superposition of states which cannot interact with one another due to entanglement and decoherence. But there’s no vagueness about what these worlds are; they’re very strictly mathematically defined, and done so exactly by the good old Schrödinger Wave Equation.

    For those who argue that they are not, in fact, in a superposition, check out something freaky. If I’m in a state like a|e1I1>+b|e2I2> and you don’t ask me, “Which slit did it pass through?” (because then you’d end up entangled, too) but instead, “Do you know for sure which slit it passed through?” look what happens… Because the equations are linear, each term can be treated separately. So in the term with coefficient “a” I’ll be sure it went through slit 1 and answer, “Yes,” and in the term with coefficient “b” I’ll be sure it went through slit 2 and answer, “Yes.” Assuming our brains and minds are bound to rules of quantum mechanics, we’d have no way of knowing if we existed in a quantum superposition (except of course for the quantum suicide method – DON’T TRY IT AT HOME!)

    I love this stuff…

  8. #8 Damon B.t
    June 1, 2009

    Great post, Ethan.

    RF:
    Download and install Stellarium (http://www.stellarium.org/). It’s free! It will help you figure out where you’re looking and what you’re looking at. And it’s a bargain; did I mention it’s free?

  9. #9 Andrew
    June 1, 2009

    You explained that experiment way better than my physics professor this past semester, and it’s still tough to get my head around. Got to love quantum physics!

  10. #10 Bob
    June 1, 2009

    How do we know that “shining a light” on the electron as it goes through the slit doesn’t change what it does? Is it possible that the mere act of looking doesn’t change anything or collapse a wave, but that the measuring tools do something physical to interfere with whatever an electron normally does? I may not know what I’m talking about.

  11. #11 acce245
    June 1, 2009

    Maybe Bob is onto something y’all missed: perhaps photons interact with electrons. Perhaps Schroedinger and Heisenberg have a say. Remember, you can’t know exactly where they are and their velocity. Perhaps the photons affect this velocity when they show you where the electrons are? Also, the electrons may or may not be there, so it could be you are missing some when you aren’t adding in photons too. Like, some actually go through the wall instead of the slit or something.

  12. #12 Vedapushpa
    June 1, 2009

    Could this ‘two-slit’ physics/physical phenomenon… have any reference or similarity to the emotion-intellect, mind-brain ‘differential cognition’ factor??

    I seek your pardon if you should feel that I, with my psycho-philosophical approach, have made bold to state this perspective.

    Thanks.
    Vedapushpa
    India

  13. #13 david
    June 1, 2009

    Indeed, I don’t think anything spooky is going on here. This can be tested by performing variations of the experiment. For instance, reduce the observation apparatus incrementally: first remove the human observation of which slit the electron passes through, then remove any electronic recording, then disable the detector but keep the light emitter active, etc.
    My guess is this has already shown us that the phenomenon described is merely physical systematic error, and nothing metaphysical.

  14. #14 eddie
    June 2, 2009

    I think it’s helpful to think of the electron not just as a wave, but as a wave packed, as per deBroglie. The electron is localised because of the interference between the individual members of the packet; each member being a pure sine wave.

    As the individual sine waves that make up the packet have definite momentum, the uncertainty principle means that they have indeterminate position; they have equal probability of being anywhere. This is how the uncertainty principle describes photons.

    Now I’m not sure if the electron packet can be said to be made up of this collection of photons, or just to contain them in some way, but it’s definite that it interacts with the world around it by exchanging the photons with other charged particles. This is what QED describes.

    In the double-slit experiment, the electron packet is interfering with the packets of all other charged particles in the apparatus and with the detector, and when you introduce a test of which slit it went through, it similarly interacts through exchanging photons with the particles in the test device.

    What we consider as a ‘result’ of the experiment is when a particular photon from the packet is exchanged with the detector. It is considered to be random which photon is exchanged but it may not ultimately be; we don’t know yet.

  15. #15 eddie
    June 2, 2009

    s/packed/packet

    Also, it’s because the electron packet, which is spread out, exchanges photons only one at a time, that it appears to be a point particle. The individual photons may have indeterminate position, but the exchange events have definite position.

  16. #16 Jared Grubb
    June 2, 2009

    There is an EXCELLENT video on YouTube that explains this experiment in simple terms, even for laymen… I highly recommend it:
    http://www.youtube.com/watch?v=DfPeprQ7oGc

  17. #17 Craig
    June 2, 2009

    The BBC documentary is called Parallel Worlds – Parallel Lives, you can watch it here
    http://www.onlinedocumentaries4u.com/2009/02/parallel-worlds-parallel-lives.html

  18. #18 Savage
    June 2, 2009

    This is the best description of the double slit experiment that I have seen. Even better than Richard Feynman’s in “The Character of Physical Law”.

  19. #19 varada
    June 2, 2009

    real nice description… :)

  20. #20 JJXanadu
    June 2, 2009

    I’ve always been fascinated with the double slit experiment, and I like this explanation. However, what about a third interpretation?

    Bohmian Mechanics very easily resolves the question of whether an elementary particle is a particle or a wave. What Bohmian Mechanics proposes is that the trajectory of an electron (always a particle) is affected by a wave as given by the Schrödinger Equation, rather than the electron having to ‘select’ whether to be a wave or a particle. Hence, the distribution of electrons would still be similar to the standard approach.

    Some contemporary physicists believe this will be the way we discuss quantum mechanics in the future (I just finished a course with Brian Greene, and he said this was the approach that he prefers).

    Do a little research and see for yourself. Why then, you might ask, if it works so simply don’t we use this approach? Politics and ease of use (the math behind Bohmian Mechanics is a bit more difficult and not as ‘clean’). However, a great benefit with Bohmian Mechanics is that we don’t have to turn a blind eye to this unknown thing called wave collapse, which is something that physicists just throw their hands up at and ignore…

  21. #21 Andrew
    June 3, 2009

    Regarding Bohmian Mechanics… that’s very interesting. I hadn’t heard of that interpretation before, and although it’s expectedly heavy reading, I’m making headway.

    My question, though, is that Wikipedia (as well as other sources) regularly describes Bohmian Mechanics as “non-relativistic”. Isn’t this an even bigger problem? We are after all fairly sure of relativity thanks to observational measurements, aren’t we?

  22. #22 torvald stnersen
    June 3, 2009

    I read about this kind of experiment for the first time in David Deutsch’s “The Fabric of Reality”. That’s from 1997, and he wasn’t talking about electrons but about photons going through slits, and as I recall his conclusion was that this PROVED the existence of parallel universes, and that because of the implications of this experiment alone, quantum computers were theoretically possible. His explanation was very thorough yet accessible to laymen like myself. Recommended!

  23. #23 E
    June 3, 2009

    Petulant? No, you are.

  24. #24 duane
    June 3, 2009

    What about the walls of the slits?
    There is a thickness to the walls and maybe
    1) The electrons are repelled by, or even hit, those edges and change trajectory
    2) The material that makes up the walls affects the flight-path of the electrons
    3) Does this material eject electrons?

  25. #25 Regis Chapman
    June 3, 2009

    Hi. As a Vedantic-minded person who knows about science, I have a logical bone to pick with this statement:

    “The weirdest part? These two incredibly different interpretations — one postulating googols of other Universes, one postulating that the laws of physics are not deterministic — both give rise to the same observations. In other words, these interpretations are not only both valid, but as far as we can tell, are indistinguishable in our Universe. Wrap your minds around that one!”

    This to me seems very simple, and I am not sure why it can be thought to be confusing so that I would need to wrap my head around it. Maybe it’s because in Vedanta, we are used to considering logic to serve a conclusion of the oneness of everything, so it’s possible it becomes more obvious from this perspective.

    In any case, if one postulates that the laws of physics are not deterministic, this implies a variety of outcomes available — and the first interpretation precisely provides an outlet for this. It seems that the only difference between them (and this is also quite simple) then becomes the wave vs. particle conclusions. Yet for me, I would say that the second implication of this is that it’s both.

    So, since I am neither Niels Bohr, Einstein, nor Everett, I must have missed something.

    Thanks,
    Regis

  26. #26 lenny
    June 3, 2009

    Bohr? I could never trust anyone who claimed to have a knowledge of physics but still couldn’t hit a curveball..
    What is Ted Williams’ opinion? Someone ask his head.

  27. #27 qoJ
    June 4, 2009

    I think Duane may be on to something with his post:

    What about the walls of the slits?
    There is a thickness to the walls and maybe
    1) The electrons are repelled by, or even hit, those edges and change trajectory
    2) The material that makes up the walls affects the flight-path of the electrons
    3) Does this material eject electrons?

    /Does this mean Starfleet will have to upgrade the transporters’ Heisenberg Compensators to Everett Rectifiers?

  28. #28 Sophos
    June 4, 2009

    My question is this: has anyone tried shooting photons one at a time at the double slit? What was the result?

  29. #29 Kriegsfall
    June 4, 2009

    There are tons of references to the photon and buckyball double slit experiments on the web, so I’m not going to bother posting links. Instead, a short summary:

    The double slit experiment has been reproduced with a variety of bits of matter and radiation, most notably photons, electrons, and buckyballs (molecules of 60 carbon atoms, among the largest objects ever shown to interfere). Light was shown to behave like a particle when individual photons were observed passing through the double slits. Buckyballs were observed behaving as a wave when fired through the slits and not measured.

    Perhaps one of the most famous double slit experiments was the single photon experiment, in which (not surprisingly) single photons of light were fired through the double slits. Just like electrons, when fired one at a time the photons naturally created an interference pattern, and a two-column particle pattern when measured.

    There is no possibility that the detectors themselves could alone cause the behavior observed in all the double slit experiments. The experiment has been performed many different times by many different groups with widely varying methods of detection and subject matter. Regardless of what you use, be it electron, proton, photon or molecule, or what equipment you use (very different detectors must be used for each type of subject matter), the results are always the same: Unobserved, matter behaves as a wave, and when observed it behaves as a particle.

  30. #30 zayzayem
    June 5, 2009

    Oh my… my head just exploded. This is why I steer clear of physics and stick with genetics, viruses and small children (generally not all at the same time).

    Those little electrons, they get up to mischief when people aren’t looking.

  31. #31 llewelly
    June 8, 2009

    If there is such a thing as actual chaos, it allows for free will (good) but it also allows for causality violations (WTF!).

    Typical (but not all) definitions of free will require causality violations to be under your control. Any causality violations not under your control decrease your free will. Additionally – all definitions of free will require the ability to make useful forecasts. Causality violations make the universe much harder to predict – greatly hampering forecasts. (Yes, this does mean that most definitions of free will contain an element of inherent contradiction.)

    Note that chaos does not allow for free will. Nor does it require a non-deterministic system; the overwhelming majority of work on chaos has been studying chaos within deterministic systems.

    Note most (all?) quantum physicists are confident quantum mechanics does not imply causality violations. (Though some experiments appear to imply causality violations to those who understand only classical physics.)

    It means that there are things which are unknowable, not by us, not by anyone or anything.

    An alternative interpretation is that asking for the ‘exact’ location of a particle is a nonsense question, like ‘What happens when an unstoppable force meets an immovable object?’ .

    In any case – any likely thinking entity will necessarily have a brain made of a number of particles (or energy) which is tiny compared to the number of particles (or amount of energy) in the universe. Said brain’s memory capacity and computational power will necessarily be very small relative to the total number of particles in the universe (to say nothing of their myriad possible interactions). Thus, there is plenty that is unknowable without quantum mechanics. (In math it is sometimes useful to distinguish between what is unknowable for pragmatic reasons, and what is unknowable even if such pragmatic considerations could be ignored. But when a philosopher tries to distinguish between the two, it usually means he’s headed off into the weeds and likely to emerge with a paper whose meaning is unknowable.)

  32. #32 Neil B ♪
    July 12, 2009

    BTW, decoherence (mentioned above) is a false path to understanding why our world isn’t found to be composed of superpositions. IOW, decoherence can’t even come close to explaining (away) the collapse of the wave function (from extended superposed states into a localized state representing only one of the original combination). Interested readers can delve into the discussion I started at Tyrannogenius (“Dish on MWH and decoherence”). Briefly, I charge that deco is a circular argument and has other flaws. It indulges several fallacies in the form it is often touted. Now of course, decoherence can affect the patterns or information status etc. of hits and the interaction of waves. It has a role. And yes, I know proponents say deco doesn’t really/finally “explain collapse” anyway. But I’m saying it can’t tell us even a little about why and how the waves don’t just stay all mixed up together in an extended state. Below are some of my rebuttals.

    One decoherence argument looks at e.g. randomly-varying, relative phase shifts between different instances of a run of shots of single photons into a Mach-Zehnder interferometer. Their case goes, the varying phases cause the output to be random from either A or B channel instead of any guaranteed output (into e.g. A channel), that is otherwise dictated by interference – in the normal case where phase is strictly controlled. They tend to argue, such behavior has become “classical.” Somehow we are thus supposedly moved away from even worrying about what happened to the original superpositions that evolution of the WE says typically come out of both channels at the same time – until they get “zapped” by interaction with a detector.

    Well, that argument is fallacious for many reasons. First and foremost is the very idea of using what may or may not happen in preceding or subsequent events of an experiment, to argue the status of any given event. I mean, if the phase between the split WFs happened to be 70°, then the output amplitude in channel A = 0.819…, and the output amplitude in channel B = 0.573576… . In another case, with a different relative phase, the amplitudes would be different, umm – so what? There is still a superposition of waves, and the total WF exists in both channels until “detection” works its magic. That’s what the basic equation for evolution of the WFs says. They don’t have a post-modernist escape clause that if things change around the next time and the next time you run the experiment, then any one case gets to participate in some weird “socialized wave function” (?!)
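As a sanity check on the amplitudes quoted above: under the usual convention that a balanced Mach-Zehnder interferometer with relative phase φ between its arms sends amplitude cos(φ/2) to one output channel and sin(φ/2) to the other, a 70° phase indeed gives roughly 0.819 and 0.574. The convention is an assumption on my part, since the comment does not state which one it uses:

```python
import math

def mz_outputs(phase_deg):
    """Output amplitudes of a balanced Mach-Zehnder interferometer for a
    given relative phase (in degrees) between the two arms, assuming the
    cos(phi/2) / sin(phi/2) splitting convention."""
    half = math.radians(phase_deg) / 2.0
    return math.cos(half), math.sin(half)

a, b = mz_outputs(70.0)
# a ≈ 0.8192 (channel A) and b ≈ 0.5736 (channel B);
# the two output probabilities a**2 + b**2 still sum to 1
```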

    And, what about the case where we don’t have messed up phases but a consistent e.g. 70° phase delta across instances – then what? So there really isn’t or shouldn’t be a collapse then, but waves remaining in both output channels? That isn’t what happens, you know. Chad said, the other WF doesn’t have to go away (like to “another world”), they just don’t interfere anymore. But that isn’t really the issue: the issue is that the calculation says there’s amplitude in both channels – and then how the photon ends up condensed at one spot.

    The use of the density matrix doesn’t really solve or illuminate any of this either. One trouble with the DM is, it’s a sort of two-stage mechanism (in effect.) First, you start with the “classical” probabilities of various WFs being present. OK, that makes sense for actual description because we don’t always know what WFs are “really there.” But then there’s mishandling of two types. First, the actual detection probabilities are usually compiled out of the WF interactions (squared combined amplitudes.) But that takes a “collapse” mechanism for granted and can’t be used later in an argument attempting to “explain” it. If we just have Schrödinger evolution, the DM would just tabulate the likelihood of having various combinations of amplitudes, and that’s all! Without the supervention of a special collapse process, the DM has to be just a tabulation of the chances of having various amplitudes, not of the “probabilities” that only collapse can create IMHO. There wouldn’t be any “hits” to even be trying to “explain.”

    Briefly, roughly: the decoherence argument is largely an attempt to force an implicit ensemble interpretation on everyone, despite the clear conflict of the EI v. any acceptance of a “real” wave function each instance, that evolves according to e.g. a Schrödinger equation.

  33. #33 Neil B ♪
    July 12, 2009

    BTW, decoherence (mentioned above) is a false path to understanding why our world isn’t found to be composed of superpositions. IOW, decoherence can’t even come close to explaining (away) the collapse of the wave function (from extended superposed states into a localized state representing only one of the original combination.) Interested readers can delve into the discussion I started at Tyrannogenius (Dish on MWH and decoherence. Briefly, I charge that deco is a circular argument and has other flaws. It indulges several fallacies in the form it is often touted. Now of course, decoherence can affect the patterns or information status etc. of hits and the interaction of waves. It has a role. And yes, I know proponents say deco doesn’t really/finally “explain collapse” anyway. But I’m saying it can’t tell us even a little about why and how the waves don’t just stay all mixed up together in an extended state. Below are some of my rebuttals.

    One decoherence argument looks at e.g. randomly-varying, relative phase shifts between different instances of a run of shots of single photons into a Mach-Zehnder interferometer. Their case goes, the varying phases cause the output to be random from either A or B channel instead of any guaranteed output (into e.g. A channel), that is otherwise dictated by interference – in the normal case where phase is strictly controlled. They tend to argue, such behavior has become “classical.” Somehow we are thus supposedly moved away from even worrying about what happened to the original superpositions that evolution of the WE says typically come out of both channels at the same time – until they get “zapped” by interaction with a detector.

    Well, that argument is fallacious for many reasons. First and foremost is the very idea of using what may or may not happen in preceding or subsequent events of an experiment, to argue the status of any given event. I mean, if the phase between the split WFs happened to be 70°, then the output amplitude in channel A = 0.819…, and the output amplitude in channel B = 0.573576… . In another case, with a different relative phase, the amplitudes would be different, umm – so what? There is still a superposition of waves, and the total WF exists in both channels until “detection” works its magic. That’s what the basic equation for evolution of the WFs say. They don’t have a post-modernist escape clause that if things change around the next time and the next time you run the experiment, then any one case gets to participate in some weird “socialized wave function” (?!)

    And, what about the case where we don’t have messed up phases but a consistent e.g. 70° phase delta across instances – then what? So there really isn’t or shouldn’t be a collapse then, but waves remaining in both output channels? That isn’t what happens, you know. Chad said, the other WF doesn’t have to go away (like to “another world”), they just don’t interfere anymore. But that isn’t really the issue: the issue is that the calculation says there’s amplitude in both channels – and then how the photon ends up condensed at one spot.

    The use of the density matrix doesn’t really solve or illuminate any of this either. One trouble with the DM is, it’s a sort of two-stage mechanism (in effect.) First, you start with the “classical” probabilities of various WFs being present. OK, that makes sense for actual description because we don’t always know what WFs are “really there.” But then there’s mishandling of two types. First, the actual detection probabilities are usually compiled out of the WF interactions (squared combined amplitudes.) But that takes a “collapse” mechanism for granted and can’t be used later in an argument attempting to “explain” it. If we just have Schrödinger evolution, the DM would just tabulate the likelihood of having various combinations of amplitudes, and that’s all! Without the supervention of a special collapse process, the DM has to be just a tabulation of the chances of having various amplitudes, not of the “probabilities” that only collapse can create IMHO. There wouldn’t be any “hits” to even be trying to “explain.”

    Briefly, roughly: the decoherence argument is largely an attempt to force an implicit ensemble interpretation on everyone, despite the clear conflict of the EI v. any acceptance of a “real” wave function each instance, that evolves according to e.g. a Schrödinger equation.

  34. #34 Neil B ♪
    July 12, 2009

    BTW, decoherence (mentioned above) is a false path to understanding why our world isn’t found to be composed of superpositions. IOW, decoherence can’t even come close to explaining (away) the collapse of the wave function (from extended superposed states into a localized state representing only one of the original combination.) Interested readers can delve into the discussion I started at Tyrannogenius (Dish on MWH and decoherence. Briefly, I charge that deco is a circular argument and has other flaws. It indulges several fallacies in the form it is often touted. Now of course, decoherence can affect the patterns or information status etc. of hits and the interaction of waves. It has a role. And yes, I know proponents say deco doesn’t really/finally “explain collapse” anyway. But I’m saying it can’t tell us even a little about why and how the waves don’t just stay all mixed up together in an extended state. Below are some of my rebuttals.

    One decoherence argument looks at e.g. randomly-varying relative phase shifts between different instances of a run of shots of single photons into a Mach-Zehnder interferometer. Their case goes: the varying phases cause the output to be random from either the A or B channel, instead of the guaranteed output (into e.g. channel A) that is otherwise dictated by interference in the normal case where phase is strictly controlled. They tend to argue that such behavior has become “classical.” Somehow we are thus supposedly moved away from even worrying about what happened to the original superpositions that evolution of the wave equation says typically come out of both channels at the same time – until they get “zapped” by interaction with a detector.

    Well, that argument is fallacious for many reasons. First and foremost is the very idea of using what may or may not happen in preceding or subsequent events of an experiment to argue the status of any given event. I mean, if the phase between the split WFs happened to be 70°, then the output amplitude in channel A = 0.8192…, and the output amplitude in channel B = 0.5736…. In another case, with a different relative phase, the amplitudes would be different, umm – so what? There is still a superposition of waves, and the total WF exists in both channels until “detection” works its magic. That’s what the basic equation for evolution of the WFs says. It doesn’t have a post-modernist escape clause that if things change around the next time and the next time you run the experiment, then any one case gets to participate in some weird “socialized wave function” (?!)
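
The specific amplitudes quoted above are easy to check numerically. The sketch below assumes an ideal, lossless Mach-Zehnder interferometer with 50/50 beam splitters (so the channel amplitudes for relative phase φ are cos(φ/2) and sin(φ/2)); it also shows the ensemble-level step the decoherence argument relies on, namely that averaging the channel-A detection probability over random relative phases gives 1/2, even though every individual run still has definite amplitudes in both channels:

```python
import numpy as np

# Ideal Mach-Zehnder: for relative phase phi between the two arms,
# the output amplitudes are cos(phi/2) in channel A and sin(phi/2) in B.
phi = np.deg2rad(70.0)
amp_A = np.cos(phi / 2)          # ~0.8192, the "0.819..." quoted above
amp_B = np.sin(phi / 2)          # ~0.5736, the "0.5736..." quoted above
prob_A, prob_B = amp_A**2, amp_B**2
assert abs(prob_A + prob_B - 1.0) < 1e-12   # probabilities sum to 1

# Averaging over runs with random relative phases ("dephasing") gives
# a 50/50 split between the channels -- the ensemble-level statement
# that the decoherence argument appeals to.
rng = np.random.default_rng(0)
phis = rng.uniform(0.0, 2 * np.pi, 100_000)
mean_prob_A = np.mean(np.cos(phis / 2) ** 2)   # close to 0.5
```

Real interferometers add fixed phase offsets at the beam splitters, but that only shifts where φ = 0 sits; the structure of the argument is unchanged.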

    And, what about the case where we don’t have messed up phases but a consistent e.g. 70° phase delta across instances – then what? So there really isn’t or shouldn’t be a collapse then, but waves remaining in both output channels? That isn’t what happens, you know. Chad said, the other WF doesn’t have to go away (like to “another world”), they just don’t interfere anymore. But that isn’t really the issue: the issue is that the calculation says there’s amplitude in both channels – and then how the photon ends up condensed at one spot.

    The use of the density matrix doesn’t really solve or illuminate any of this either. One trouble with the DM is that it’s a sort of two-stage mechanism (in effect). First, you start with the “classical” probabilities of various WFs being present. OK, that makes sense for actual description, because we don’t always know what WFs are “really there.” But then there’s mishandling of two types. First, the actual detection probabilities are usually compiled out of the WF interactions (squared combined amplitudes). But that takes a “collapse” mechanism for granted, and so can’t be used later in an argument attempting to “explain” it. Second, if we just have Schrödinger evolution, the DM would just tabulate the likelihood of having various combinations of amplitudes, and that’s all! Without the supervention of a special collapse process, the DM has to be just a tabulation of the chances of having various amplitudes, not of the “probabilities” that only collapse can create, IMHO. There wouldn’t be any “hits” to even be trying to “explain.”
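
As a concrete version of the bookkeeping being criticized here, consider a toy two-channel model (a sketch with made-up numbers, not anyone’s official formalism): the density matrix of a single run is a pure-state projector whose off-diagonal entries carry the phase, and averaging over random phases zeroes the off-diagonals while leaving the diagonals at 1/2. Whether those diagonal entries are “probabilities of hits” or merely “a tabulation of amplitudes” is exactly the interpretive question raised above; the arithmetic itself is just this:

```python
import numpy as np

def rho(phi):
    """Density matrix of the pure state (|A> + e^{i*phi}|B>) / sqrt(2)."""
    psi = np.array([1.0, np.exp(1j * phi)]) / np.sqrt(2.0)
    return np.outer(psi, psi.conj())

# Single run: off-diagonal entries have magnitude 1/2 (full coherence).
single = rho(np.deg2rad(70.0))

# Ensemble average over random phases: off-diagonals cancel out,
# diagonals stay at 1/2 -- the "dephased" (decohered) density matrix.
rng = np.random.default_rng(1)
avg = np.mean([rho(p) for p in rng.uniform(0, 2 * np.pi, 50_000)], axis=0)
```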

    Briefly, roughly: the decoherence argument is largely an attempt to force an implicit ensemble interpretation on everyone, despite the clear conflict between the EI and any acceptance of a “real” wave function in each instance, one that evolves according to e.g. a Schrödinger equation.

  41. #41 Blah Blah
    July 27, 2009

    What would happen if you just measured one slit?

  42. #42 dalani
    September 26, 2009

    Two questions:

    Should I post an answer?

    if Yes say Yes
    If No say nothing

    If I post, will I help a physicist’s career
    but not mine?

  43. #43 Young Entrepreneurs
    March 23, 2011

    I love Quantum Physics.
    The weird thing is that these seemingly unreal and illogical explanations are intuitively very logical. I can’t explain that.
    You say:
    “Everett’s idea, the many worlds interpretation, holds that everything is always a particle, but whenever there are multiple possible outcomes, they all happen, but we don’t know which one happened in our universe until we “look”. ”

    But isn’t it true that they all happen all the time? Situations have multiple outcomes, at all times.

    Write more about Quantum Physics Ethan
    What books would you suggest on this topic?
    I have “The Schrödinger’s Cat”, but haven’t read it yet.

    Martyna

  44. #44 Faux Brick
    July 15, 2011

    Hey
    I remember watching “What the Bleep Do We Know!?” and feeling amazed by the possibilities: mostly our human possibilities, but also the world’s opportunities.
    amazing!

  45. #45 Erik
    Netherlands
    March 7, 2014

    I’m posting this without any knowledge of quantum physics, or even physics for that matter. Just thinking about this article and trying to find a more rational explanation.
    But could it be that:
    The electrons smack against the wall between the two slits and split into multiple particles, some of which enter the slits. And because of gravity, a vibration (wave) arises on impact, and thus an interference pattern is the end result?
    This assumes that the interference is only visible when electrons miss one of the slits. If it is also visible when the electrons are certain not to miss, this theory does not apply.

  46. #46 Jim Slater
    March 26, 2014

    “…we can do one more experiment: this time, we shoot electrons one-at-a-time at this wall, but at each slit, we shine a bit of light, and detect which slit the electron goes through. As each electron is fired, one (but never both) of the detectors goes off, telling you which slit the electron went through. But — and here’s the crazy part — the pattern on the screen now shows no interference, and instead we just get two separate peaks corresponding to the two “classical”, particle-like paths the electrons could have taken.”

    Okay, I’ve heard this repeated many times. But can someone tell me, where is the PROOF that when we detect, or record, or measure the electrons going through the slits, we get two “bands” on the screen? I have never seen a single reference to any experiment which clearly demonstrates this. If and when I do, I will be convinced that this actually does happen. Otherwise, it is just unsubstantiated conjecture. And believe me, I really DO want to be convinced. So, can someone, anyone, point me the way to the actual physical experiment where the detectors are turned on and the interference pattern collapses and two bands, or peaks, appear on the screen???
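
For reference while hunting for that experiment, the prediction itself is easy to state numerically. In a far-field sketch (illustrative numbers, not data from any real run): without which-path detection the two slit amplitudes add before squaring and fringes appear; with which-path detection the two single-slit probabilities add and the interference term is gone.

```python
import numpy as np

# Far-field double slit, plane-wave approximation (illustrative numbers).
wavelength = 1.0
k = 2 * np.pi / wavelength
slit_sep = 5.0        # distance between the slits
screen_dist = 1000.0  # distance from slits to screen
x = np.linspace(-200.0, 200.0, 2001)   # positions along the screen

# Path-difference phase picked up on the way from each slit to point x.
phase = k * slit_sep * x / (2.0 * screen_dist)
a1 = np.exp(+1j * phase)   # amplitude via slit 1
a2 = np.exp(-1j * phase)   # amplitude via slit 2

no_detector = np.abs(a1 + a2) ** 2              # amplitudes add: fringes
with_detector = np.abs(a1)**2 + np.abs(a2)**2   # probabilities add: no fringes
```

In this stripped-down sketch the which-path case comes out perfectly flat; in a real apparatus each slit also contributes a smooth single-slit envelope, which is what turns “flat” into the two broad peaks the quoted passage describes.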