In chapter 2 of How to Teach Physics to Your Dog, there’s a footnote about the ubiquity of uncertainty principle analogies in the mass media:

To give you an idea of the breadth of subjects in which this shows up, in June 2008, Google turned up citations of the Heisenberg Uncertainty Principle in (among others) an article from the Vermont Free Press about traffic cameras, a Toronto Star article citing the influence of YouTube on underground artists, and a blog article about the Phoenix Suns of the NBA. Incidentally, all of these articles also use the Uncertainty Principle incorrectly–by the end of this chapter, you should hopefully understand it better than any of them.

Add the Washington University in St. Louis psychology department to the list of institutions that don’t understand quantum mechanics as well as my dog– in an otherwise interesting New York Times article about study skills, Benedict Carey writes:

Dr. [Henry L.] Roediger [III] uses the analogy of the Heisenberg uncertainty principle in physics, which holds that the act of measuring a property of a particle alters that property: “Testing not only measures knowledge but changes it,” he says — and, happily, in the direction of more certainty, not less.

Once again, that’s not really what the Uncertainty Principle says. It’s a semi-classical analogy used to make the fundamental physics of uncertainty more palatable to classically trained physicists in the 1930s. Quantum uncertainty is really about the fundamental nature of matter, and is an unavoidable consequence of the dual particle and wave nature of quantum objects. You cannot measure both the position and momentum of a particle arbitrarily well, not because the act of measuring one perturbs the system, but because those quantities do not exist.

OK, “do not exist” is a little strong, but then, I needed something provocative to get people to click through to the full post. Werner Heisenberg probably would’ve been OK with that phrasing, though– he was a big proponent of the idea that it doesn’t make any sense to talk about what quantum objects are doing between measurements. I’m not quite that radical, though.

Provocative phrasing or no, the origin of uncertainty really does spring from the idea of particle-wave duality rather than any ideas related to the act of measurement. It comes from the fact that, fundamentally, the position of a quantum object, like an electron or a photon, is a particle-like characteristic, while its momentum is associated with the wave nature of the object. Mathematically, the momentum of a quantum object is given by Planck’s constant divided by its wavelength (or, equivalently, the wavelength associated with a quantum object is determined by Planck’s constant divided by its momentum).
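To put a number on that relation, here’s a quick Python sketch (the constants are the standard CODATA values; the choice of an electron at 1% of light speed is just an arbitrary example):

```python
# Quick numerical illustration of the de Broglie relation lambda = h / p,
# using an electron as the example particle.
H = 6.62607015e-34      # Planck's constant, J*s (exact in the SI)
M_E = 9.1093837015e-31  # electron mass, kg
C = 2.99792458e8        # speed of light, m/s

def de_broglie_wavelength(momentum):
    """Wavelength (in m) associated with a quantum object of the given momentum."""
    return H / momentum

# An electron moving at 1% of light speed (slow enough to ignore relativity):
p = M_E * 0.01 * C
lam = de_broglie_wavelength(p)
print(f"wavelength = {lam:.2e} m")  # ~2.4e-10 m, comparable to the size of an atom
```

A tenth-of-a-nanometer wavelength is why electrons diffract off crystal lattices, and why everyday objects, with enormous momenta, show no detectable wave behavior.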

You can’t have a quantum object with both an arbitrarily well-defined position and an arbitrarily well-defined momentum because that is a contradiction in terms. An arbitrarily well-defined position would be described, mathematically, by a wavefunction that looks like a spike– the probability of finding the particle at a given position in space would be zero everywhere except the one point where the particle is really located. There’s no way to assign a wavelength to a spike, though– it doesn’t oscillate in any meaningful way– so the momentum of such a particle is completely undefined.

If you want to come at this from the other direction, a particle with a perfectly well-defined momentum would look like a perfect sine wave extending through all of space. Anywhere you looked, you would find the wavefunction oscillating up and down in a perfectly regular manner. Such a perfect wave, though, has a completely undefined position– the probability of finding the particle at any given position is the same non-zero value for every point in space.

Thus, there is absolutely no way to describe something that has a well-defined position at the same time as a well-defined momentum. The best you can do is to construct a “wavepacket,” a wavefunction that oscillates in a fairly regular manner in some region of space, but is zero outside that region. This necessarily leaves both position and momentum uncertain, though– the oscillation region needs to be big enough to contain a few wavelengths, which means the position is uncertain by at least that much, and the fact that the oscillation is constrained in space leaves the wavelength slightly uncertain as well. You can arrange for both of these uncertainties to be fairly small, if you put your wavepacket together in the right way, but you can never make either of them zero.
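If you’d like to see that construction in action, here’s a minimal NumPy sketch (all units and parameter values are arbitrary choices for illustration): superpose plane waves whose wavenumbers cluster around a central value, and the sum oscillates near the origin but dies away outside a finite region.

```python
import numpy as np

# Build a wavepacket by summing plane waves with wavenumbers clustered
# around k0 (with spread dk), then measure how localized the result is.
# All quantities are in arbitrary illustrative units.
x = np.linspace(-50.0, 50.0, 4001)
dx = x[1] - x[0]
k0, dk = 5.0, 0.5
ks = np.linspace(k0 - 3 * dk, k0 + 3 * dk, 121)
weights = np.exp(-((ks - k0) ** 2) / (2 * dk**2))  # Gaussian weighting of components

# Superpose the components; the packet oscillates near x = 0 and dies off outside.
psi = (weights[:, None] * np.exp(1j * ks[:, None] * x)).sum(axis=0)
prob = np.abs(psi) ** 2
prob /= prob.sum() * dx  # normalize to a probability density

x_mean = (x * prob).sum() * dx
x_rms = np.sqrt(((x - x_mean) ** 2 * prob).sum() * dx)
print(f"position spread: {x_rms:.2f}")  # finite, of order 1/dk: localized, but not a point
```

The position spread comes out around 1/dk, so squeezing the spread of wavenumbers tighter necessarily fattens the packet in space, and vice versa.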

(If you’d like more math jargon and less hand-waving, formally, the position and momentum descriptions of the wavefunction are Fourier transforms of one another. The Fourier transform of a delta function is a constant, which gives you the result for the extreme cases. The minimum uncertainty state is a Gaussian wavepacket, whose Fourier transform is another Gaussian, and if you work through the math, you find their widths are related in exactly the way you expect from the Uncertainty Principle.)
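You can check the Gaussian claim numerically (a sketch, not a rigorous derivation; units with hbar = 1, so the Heisenberg bound reads sigma_x times sigma_k of at least 1/2; the grid and width values are arbitrary):

```python
import numpy as np

# Verify numerically that a Gaussian wavepacket saturates the uncertainty
# bound: its Fourier transform is another Gaussian, and the product of the
# two RMS widths comes out to 1/2 (units with hbar = 1).
n = 2**14
x = np.linspace(-40.0, 40.0, n, endpoint=False)
dx = x[1] - x[0]
sigma_x = 1.3                           # chosen position width (arbitrary value)
psi = np.exp(-x**2 / (4 * sigma_x**2))  # |psi|^2 is a Gaussian with std sigma_x

def rms_width(grid, amplitude):
    """RMS width of the probability density |amplitude|^2 on the given grid."""
    p = np.abs(amplitude) ** 2
    p /= p.sum()
    mean = (grid * p).sum()
    return np.sqrt(((grid - mean) ** 2 * p).sum())

phi = np.fft.fftshift(np.fft.fft(psi))             # momentum-space amplitude
k = np.fft.fftshift(np.fft.fftfreq(n, d=dx)) * 2 * np.pi

sx, sk = rms_width(x, psi), rms_width(k, phi)
print(f"sigma_x * sigma_k = {sx * sk:.3f}")  # ~0.500, the minimum allowed
```

Any non-Gaussian packet you feed through the same check will give a product strictly above 1/2.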

(I should also insert the obligatory disclaimer here about this being the state of affairs for conventional quantum mechanics. A proponent of Bohmian mechanics would say that the position and momentum of any individual particle are, in fact, perfectly well-defined, but they are not known to the observer. And the initial position and momentum of any individual particle are randomly distributed over a small range of values, so that repeated measurements of the properties of many individual particles will give you exactly the same probability distribution that you would find from using conventional quantum mechanics. The end result is the same, but the path to it is a little different.)

Incidentally, this particle-wave argument is also the basis for the famous “zero point energy,” the energy that any quantum system has when it’s in the lowest energy state available. The lowest energy state is the one that makes the energy as close to zero as you can get while still allowing the object to have both particle and wave nature. This energy cannot be extracted, because it’s a fundamental consequence of the dual particle and wave nature of every object in our universe. Anybody claiming to have a device that produces free energy by extracting the zero-point energy by whatever means (a famous scam involves a “state below the ground state” in hydrogen, for example) is a fraud and very likely an evil squirrel.
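One way to watch the zero-point energy emerge from the math is to diagonalize a discretized harmonic oscillator Hamiltonian (a sketch in units where hbar = m = omega = 1, so the exact ground state energy is 1/2; the grid size and spacing are arbitrary choices):

```python
import numpy as np

# Diagonalize H = p^2/2 + x^2/2 on a grid (hbar = m = omega = 1) and check
# that the lowest energy eigenvalue is ~0.5, not zero: the zero-point energy.
n = 800
x = np.linspace(-10.0, 10.0, n)
dx = x[1] - x[0]

# Kinetic energy via the standard three-point finite-difference Laplacian.
diag = np.full(n, 1.0 / dx**2)
off = np.full(n - 1, -0.5 / dx**2)
hamiltonian = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1) + np.diag(0.5 * x**2)

energies = np.linalg.eigvalsh(hamiltonian)  # eigenvalues in ascending order
print(f"ground state energy: {energies[0]:.4f}")  # close to 0.5, never 0
```

No matter how fine you make the grid, the lowest eigenvalue never drops to zero: a state with zero energy would need both a definite position (bottom of the well) and a definite momentum (zero), which is exactly what the wavepacket argument forbids.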

So, quantum uncertainty does not, in fact, have anything to do with perturbing a system by measuring it. It’s true that measuring a quantum system changes its state, but that’s a different part of the theory, and nothing to do with Heisenberg’s famous principle. Quantum uncertainty is built into the deep structure of the universe, and is an inescapable consequence of things having both particle and wave characteristics.

Of course, that doesn’t make as catchy and educated-sounding a reference for lazy writers who aren’t physicists, so I don’t expect to see the measurement-based version of this analogy disappear any time soon. But now you know more than lazy writers (and psychology professors), so you’ll understand why they’re wrong, and why it makes me mutter angrily every time I hear this line trotted out.

(If you’d like a longer version of this explanation, with graphs of probability distributions, and discussions of bunnies, have I got a book for you…)

Comments

  1. #1 Matt Leifer
    September 7, 2010

    I guess I have been criticizing your quantum blog posts for long enough now that you have taken to inserting a standard “except in Bohmian mechanics” caveat. Whilst I could nit-pick the description of how things work in the Bohmian case, I will avoid that because I don’t want people to get the impression that I actually believe in Bohmian mechanics, rather than just using it as a counterexample.

    Your description of the rigorous version of the uncertainty principle that we teach to undergrads is perfectly good. This version is actually due to Robertson rather than Heisenberg, and is probably the most succinct way of capturing the HUP rigorously. As you mention, Heisenberg’s original argument, based on the microscope thought experiment, is semi-classical and so it is inapplicable to general quantum systems. On the other hand, Heisenberg’s argument is phrased in terms of measurements causing disturbance to the system, so the people who use the HUP as an analogy aren’t actually misquoting Heisenberg too badly. Of course, Bohr later browbeat Heisenberg into changing his language when describing the uncertainty principle, and Bohr’s own description of the uncertainty principle given in his Lake Como lecture is essentially the same as the one you gave above.

    Nevertheless, there are versions of the HUP that do quantify the disturbance caused by a measurement (see http://arxiv.org/abs/quant-ph/0609185 for example). It is possible that Heisenberg might have preferred one of these formulations rather than the Robertson one had they been available in his day, since they seem to capture his original intuition more closely. It is not clear to me which one we ought to think of as the modern version of the HUP and, therefore, it is not obvious to me whether we should bother calling out all usages of the HUP as an analogy.

    One final point is that all these formulations are phrased operationally, in terms of probability distributions for observables, so they don’t really imply anything about the underlying “state of reality”, whatever that may be, i.e. they don’t imply that nature is fundamentally indeterministic or that measurements are really disturbing the “state of reality”. In a hidden variable framework you can prove that measurements must disturb the hidden variable state, otherwise you couldn’t reproduce Stern-Gerlach experiments for example, but as far as I know nobody has tried to quantify the extent to which the hidden variable state must be disturbed by a measurement. I think it would be interesting to try. In particular, I would be intrigued to see if and where hbar would show up in the argument.

  2. #2 Raskolnikov
    September 7, 2010

    I guess the confusion about the “perturbing argument” is the following:

    Even for a classical system, a measurement will perturb its state, but for classical systems, we can hope to reduce the perturbation. In principle, we can make it arbitrarily small.

    This is true for a quantum system as well, but the HUP tells us that while we reduce the perturbation on one variable, we increase it on the “canonically conjugate” variable. I remember Feynman using this often in his introductory lectures on quantum mechanics to explain the HUP in the context of the two-slit experiment.

  3. #3 maxwell
    September 7, 2010

    ‘If you’d like more math jargon and less hand-waving, formally, the position and momentum descriptions of the wavefunction are Fourier transforms of one another.’

    He swings and misses!

    The reason why there is an uncertainty in nature is not because there is a Fourier transform between these variables. The bandwidth of a pulse of light is not the result of the Heisenberg Uncertainty Principle.

    Heisenberg came to his unintuitive result by using a fundamental theorem in linear and matrix algebra, on which he based his version of quantum mechanics: the Cauchy-Schwarz inequality. Because both position and momentum are operators in the Hilbert space of a particle of interest, they have a commutator. Because this commutator is non-zero (the operators do not commute), there is an inherent uncertainty in knowing both simultaneously due to the Cauchy-Schwarz inequality.

    To further drive this point home, the operators that correspond to the projections of the total orbital angular momentum of a particle onto different directions in a coordinate system are not related to one another via a Fourier transform. Yet despite this fact, there is an inherent uncertainty in knowing the projection of the total orbital angular momentum in every direction simultaneously. This is because the commutators between these different operators are non-zero. Therefore, there should be an inherent uncertainty as a result of the Cauchy-Schwarz theorem.

    This is one thing about quantum mechanics texts that drives me crazy. Almost all refer to the ‘time-energy uncertainty principle’ despite the fact that THERE IS NO OPERATOR IN ANY HILBERT SPACE FOR TIME! How can we have a true uncertainty principle with no operator when the uncertainty principle was derived for operators?!

    It’s all this sloppy language that leads non-experts to use the HUP improperly in a given situation. If we make it explicitly hard for them to understand by using operators and esoteric linear algebra theorems, they might stop bothering with the quantum metaphors.

  4. #4 Clay B
    September 7, 2010

    How does this jibe with the more familiar notion of momentum = mass * velocity? It seems that if you can measure the position accurately (more than once), you know its velocity accurately, and hence its momentum.

  5. #5 Guy Incognito
    September 7, 2010

    Yikes! As an undergrad at Washington University in St. Louis (albeit in Physics/Bioengineering) let me assure you that at least some people have some intuition towards the Heisenberg Uncertainty Principle!

  6. #6 Peter Morgan
    September 7, 2010

    I second Matt Leifer’s comments.

    I add that (as well as being comfortable with a Bohmian and other descriptions) we should also be comfortable with a description of an experiment that is instrumental enough that we don’t talk about “measurements”, but instead talk about actual records of the state of parts of the apparatus (which may well be records of times at which the state of an Avalanche PhotoDiode changed) — at least on one or two days of the week. In such a description, we can’t record what perturbation a given recorded change of state might have caused, because we have no access to what the record would have been if the given change of state had not happened. The idea of a perturbation requires knowledge of what would have happened if something else that in fact did happen had not happened, which is not accessible to an experimenter.

    On other days of the week, if we want there to be something in the places between the experimental apparatus that causes what we see in the apparatus, I suggest that we might think in terms of the relatively restrained causality afforded by a quantum field rather than in terms of particles.

  7. #7 Moshe
    September 7, 2010

    I sort of disagree…the physical statement is about measurements, and the famous Heisenberg microscope thought experiment is one way to demonstrate this in an intuitive way. Those are the indisputable facts, and then there is the formalism that incorporates those facts, your favourite one involving wave functions (as opposed to PVOMs, Bohmian QM, or unicorns and rainbows). But that formalism is not unique, different people like different formalisms, all of which are identical for all practical purposes (when measurements are concerned), but supplement this information with different degrees of unmeasurable quantities. As long as you make statements about physical measurements, you can circumvent discussion of which unmeasurable quantities “exist” (whatever that may mean).

  8. #8 Matt Leifer
    September 7, 2010

    Maxwell,

    No, what you described is the Robertson uncertainty relation, which is the one taught in undergraduate courses. Heisenberg never derived this. Heisenberg derived his result by extrapolating from his microscope argument, and this argument is fundamentally semi-classical. The description that Chad gave originates in Bohr’s Como lecture. It is perfectly correct if you are only interested in position-momentum uncertainty, but of course does not apply to observables that are not canonically conjugate quadratures. There are also versions of the time-energy uncertainty relation that make sense, albeit ones that do not rely on the Robertson derivation, e.g. uncertainty in a time-of-flight measurement vs. uncertainty in energy. However, I do agree that time-energy uncertainty is often used in a handwaving way in situations where it does not have a rigorous interpretation.

    Clay B,

    If you do the experiment you describe, i.e. measuring position twice, then you will obviously get *a* velocity, i.e. a vector with dimensions L/T, but this cannot be interpreted as the velocity that the particle had before the experiment. Suppose you do start in a definite velocity, or equivalently momentum, state and then make a position measurement. The state will become sharply defined in position, which means that the velocity will become extremely uncertain. The wavefunction will then quickly spread out and become very broad in position, due to the uncertainty in its momentum. The second measurement will give an essentially random outcome, unrelated to the original velocity of the particle.

  9. #9 Matt Leifer
    September 7, 2010

    Moshe,

    I’m surprised to see POVMs (which is what I am assuming a PVOM is) lumped in with Bohmian mechanics and unicorns. POVMs can be implemented in the lab and, arguably, every actual measurement made in a lab to date by a human being has been of a POVM rather than of an ordinary Hermitian observable. Hermitian observables are just a special case of POVMs, so you are not losing anything by going to that formalism. Also, there is no problem having both POVMs and wavefunctions in the same formalism because one is a type of observable and the other is a type of state. Just because you don’t happen to need them to calculate scattering amplitudes in high energy physics doesn’t mean that they ain’t part of the very fabric of the usual quantum formalism.

    Finally, a `discussion of which unmeasurable quantities “exist” (whatever that may mean)’ is fun so I’d rather not circumvent it. I haven’t given up on the idea that physics is supposed to describe an actual pre-existing world rather than simply correlating data.

  10. #10 Brian H
    September 7, 2010

    @maxwell: You seem to have swung and missed yourself. Saying position-momentum uncertainty comes from the Cauchy-Schwarz theorem is only strictly true in the operator formulation of QM. If one chooses to quantize via the path integral then it clearly comes via Fourier transform. Or if one does QFT then both the position/momentum and energy/time relationships are simple results of the Fourier transform relationship.

    One formulation of the physics doesn’t have a monopoly on the truth.

    I would also point out that [A,B]~1 is equivalent to the existence of a Fourier-like relationship between A and B. So while it doesn’t cover every kind of uncertainty in QM, invoking the F.T. does just fine for position/momentum uncertainty.

  11. #11 Moshe
    September 7, 2010

    Matt, POVMs (yes, I should not be using acronyms, typos really mutilate them) are measurable, and everything else in that list has some degree of unmeasurable information (e.g. the phase of the wavefunction), added on for one reason or another. Statements involving only measurable quantities are something we could all agree on; everything else is mostly about semantics.

    I am sympathetic to your last paragraph, but I never figured out for myself what is precisely the goal of that sort of discussion and what is the methodology to achieve that goal. I am sure people closer to the subject have their own set of answers to these questions. Without a clear idea of this, the discussion seems sort of arbitrary, which is only fun to a degree.

  12. #12 Kasparov
    September 7, 2010

    There is nothing called wave-particle duality; it is only mentioned in the first chapter of any book and never again in the later chapters. It is only mentioned for historical accounts. These quantum particles are not waves, they are quantum particles, and one detects them as particles. That is why fundamental physics is the physics of elementary particles, not waves. Please stop the wave-particle duality crap. These historical concepts are spread through almost all quantum mechanics books and introductory modern physics books, and they have already caused lots of harm and damage to students’ intuitions.

  13. #13 bomoore
    September 7, 2010

    One thing is certain – verbal communication about physics is as uncertain as it gets! How can there be such disagreement between physics-math people over the results of one experiment? Could this be a philosophical problem, a “supernatural thinking” problem? That is, are people trying to draw conclusions, to find meaning, in what is simply an experimental result?

  14. #14 Clay B
    September 7, 2010

    Matt (#8),
    I believe you, but that sounds like what Chad objects to in the third-from-last paragraph.

    Clay

  15. #15 ADD
    September 7, 2010

    The online version of the article has been revised to say:

    Dr. Roediger uses the analogy of the Heisenberg uncertainty principle in physics, which holds that the act of measuring a property of a particle (position, for example) reduces the accuracy with which you can know another property (momentum, for example): “Testing not only measures knowledge but changes it,” he says — and, happily, in the direction of more certainty, not less.

    A note at the bottom says:

    This article has been revised to reflect the following correction:
    An earlier version of this article described incorrectly the Heisenberg principle of uncertainty. The principle holds that the act of measuring a property of a particle reduces the accuracy with which you can know another property, not that the act of measuring such a property alters that property.

  16. #16 CCPhysicist
    September 7, 2010

    My recollection is that the German word used by Heisenberg could also be interpreted as “fuzzy” or “blurry”, which is a much better articulation of what is going on. “Uncertain” always seems to carry the connotation that we have no idea at all where something is, when the reality is that our picture is always — fundamentally — out of focus. Not because of any flaw in the lens, or even because of the finite aperture, but because nature itself is blurry. We can, in fact, take very sharp pictures that clearly show the specific and precise outline of that blur: the charge density of the H atom, of a nucleus, whatever.

    Matt @ 8:

    Heisenberg’s published microscope illustration was actually wrong (and retracted by him in proof), so it is a good thing the Uncertainty Principle was derived from his quantum mechanics rather than from that argument! If the problem were as simple as having disturbed the velocity when you measured the position, you could just measure the change in momentum of the ejected particle and reconstruct what you claim to have destroyed. Quantum mechanics says it wasn’t there in the first place.

  17. #17 maxwell
    September 7, 2010

    Brian H,

    ‘If one chooses to quantize via the path integral then it clearly comes via fourier transform. Or if one does QFT then both the position/momentum and energy/time relationships are simple results of the fourier transform relationship.’

    That’s great, but either way what you are talking about is NOT the Heisenberg Uncertainty Principle, which was the topic of conversation.

    Sure, any two variables related by a Fourier transform will have an ‘uncertainty’ associated with them, as I pointed out is true of the bandwidth of a laser pulse.

    But the Heisenberg Uncertainty Principle is something physically distinct from a Fourier transform. I used the example of the projection of the orbital angular momentum of a particle to make this point perfectly clear. Those quantities are not related by a Fourier transform, yet there is still an inherent, physically real uncertainty in knowing the projection onto each direction simultaneously.

    Did you skip that part of my comment?

    That said, in QFT how does one deal with the fact that the projection of the spin angular momentum of an electron cannot be known in all directions simultaneously? I’d be interested to know how renormalization deals with that one via Fourier transforms.

  18. #18 onymous
    September 7, 2010

    That said, in QFT how does one deal with the fact that the projection of the spin angular momentum of an electron cannot be known in all directions simultaneously? I’d be interested to know how renormalization deals with that one via Fourier transforms.

    In QFT, as in QM, angular momentum has an operator algebra satisfying commutation relations that correspond to the Lie algebra of the rotation group. Does that answer your question? It’s really no different from QM, except that it fits into the larger structure of Lorentz invariance.

    I don’t know what you’re getting at with the second sentence about renormalization and Fourier transforms. Maybe it’s worth pointing out that conserved quantities, like angular momentum, are not renormalized.

  19. #19 maxwell
    September 8, 2010

    onymous,

    Brian H claimed that uncertainty, a la the Heisenberg Uncertainty Principle, in QFT comes only from Fourier transforms. I was using the example of angular momentum projections, either orbital or spin, to show that Fourier transforms cannot account for the inherent uncertainty in knowing the projections of the total angular momentum onto the different directions of a coordinate system. As you point out, this is also the case in QFT.

    Thanks.

  20. #20 Joseph
    September 8, 2010

    Does Dr. Roediger have a fake goatee, by any chance? Fuzzy ears and a bushy tail?

  21. #21 Alex
    September 12, 2010

    Good explanation! One question. You say:

    Once again, that’s not really what the Uncertainty Principle says. It’s a semi-classical analogy used to make the fundamental physics of uncertainty more palatable to classically trained physicists in the 1930s. Quantum uncertainty is really about the fundamental nature of matter, and is an unavoidable consequence of the dual particle and wave nature of quantum objects. You cannot measure both the position and momentum of a particle arbitrarily well, not because the act of measuring one perturbs the system, but because those quantities do not exist.

    When, and more importantly how, was it discovered that Heisenberg’s Uncertainty Principle is not a result of the measurement altering what is being measured, but instead is more fundamental? How did they drop this “semi-classical” formulation?