Quantum Monty

The big Monty Hall book is rapidly coming together. I may even have the first draft done in the next few weeks. It's certainly been a lot more work than I expected when I began. Originally I envisioned a straight math book, where each chapter would present a different variation of the problem followed by a discussion of the sorts of mathematics needed to solve it. To a large extent it is still that, but I was a bit taken aback by the sheer quantity of academic literature that has been produced on the subject. My bibliography is likely to contain more than a hundred items. A discussion of some representative sample of this literature seems called for. Alas, that means first reading it, and then understanding it.

One of the more intriguing angles that arose during my research was a series of papers written by physicists developing quantum mechanical versions of the problem. After reading these papers, however, it became clear that the level of mathematics and physics involved was well beyond anything I could reasonably discuss in the book. Still, I wanted to say something about these papers, if only to acknowledge their existence.

I decided to write a short section explaining why physicists might find it worthwhile to consider a quantum mechanical version of the Monty Hall problem. The current draft of that section is below. It occurred to me, however, that my own understanding of quantum mechanics and quantum information is decidedly rudimentary, which made me worry that I might have said something foolish in writing this section.

So why not turn it over to the EvolutionBlog brain trust? Have a look and let me know what you think. If I've stepped in it anywhere, please let me know in some gentle and feeling-sparing way. I have left the few bits of TeX code unaltered, but it will not be difficult to read around them.

Physicists have likewise taken notice of the Monty Hall problem, and the professional literature records several quantum mechanical versions of the basic game. Consider \cite{dariano}, \cite{zander}, and the references contained therein, for example. Sadly, the level of mathematics and physics involved in presenting the details of these versions is well beyond anything I care to discuss in this book. That inconvenience notwithstanding, a few words ought to be said about what the physicists are up to.

In section $4.7$ we took a brief look at information theory. Our concern there was Shannon's insight that the amount of information received upon learning that an event has taken place is inversely related to probability; we learn more from hearing that a highly improbable event has taken place than we do from learning that something commonplace has occurred. This, however, is not the only issue with which information theorists concern themselves. There is also the problem of storing and manipulating information in efficient ways.
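
To make Shannon's insight precise: he measured the information conveyed by an event $E$ as the logarithm of the reciprocal of its probability. This is the standard definition, stated here in bits, meaning the logarithm is taken base two:

\[ I(E) \;=\; \log_2 \frac{1}{P(E)} \;=\; -\log_2 P(E). \]

An event of probability $\frac{1}{2}$ thus conveys one bit of information, while an event of probability $\frac{1}{1024}$ conveys ten.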

Everyday experience tells us that information may be stored in any physical medium that can exist in at least two distinct states. Computers, for example, store information in the form of sequences of zeros and ones. By altering the precise sequence of zeros and ones stored in the computer's memory, we also alter the information recorded there.

Storing information in this way requires assuming, first, that each digit in the sequence has a definite value, either zero or one. There is no bizarre intermediate state where the digit is simultaneously partially zero and partially one. Furthermore, any given place in the sequence records a zero or one independent of what we know about the sequence. I may not know whether, say, the fifteenth digit in the sequence is a zero or one, but it has a definite value nevertheless. If I examine the sequence and look specifically at the fifteenth digit, I will then learn a fact I did not previously know (i.e. whether that digit is zero or one). My examination will disclose to me the state of a certain physical system, but it will not by itself cause any change in that system.

We further assume that the value of any particular digit in the sequence is independent of all the other digits. Knowing that the fifteenth digit is a zero does not tell me the value of the forty-third digit, for example. Were this not the case the quantity of information I could store would be severely compromised. If the value of the forty-third digit were determined by the value of the fifteenth, then I could not use the forty-third digit to store any new information.

These ideas feel so natural they hardly seem like assumptions at all. Just think about a printed book. If I pull a book at random off my shelf I will not know what is on page fifty-one until I open it and start reading. But no one doubts that \textit{something} is printed there independent of my state of knowledge. It is not as though the book consists of a lot of blank pages, only to have words suddenly appear from nowhere as soon as I open it. Equally obvious is that the words printed on page fifty-one do not in some way determine the words printed on page eighty-three. Of course they are independent. How could it be otherwise?

These assertions, obvious to the point of being trivial for large-scale physical systems, are simply false for systems the size of an atom. Physicists have devoted quite a lot of time to studying atoms, and the data they have collected can be explained only by abandoning the dictates of common sense. Consider, for example, the electron. If we view the electron as a simple particle, we might assume that it has certain attributes independent of our knowledge of what those attributes are. It has a certain position, for example, or momentum, or spin. It would be absurd to think these properties simply spring into existence the moment we decide to take a measurement.

And yet, modern physics tells us this is precisely what happens with subatomic particles. The electron does not have a definite position until we presume to measure it. Prior to taking a measurement there is only a probability distribution that tells us how likely we are to find the electron in a given region of space. Likewise for any other attribute you would care to mention. It is as if the digits stored in our computer are no longer either zero or one. Instead they have some probability of being zero and some other probability of being one, only deciding which value to take when we actually go and look. So long as no one is looking there is only a so-called ``superposition'' of possible states. It is no longer zero \textit{or} one, but rather zero \textit{and} one simultaneously.
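
In the standard notation of quantum information theory (a sketch only; the papers cited above develop the formalism with far more care), such a quantum digit, or ``qubit,'' is written as

\[ |\psi\rangle \;=\; \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1, \]

where $|\alpha|^2$ is the probability of finding a zero upon measurement and $|\beta|^2$ is the probability of finding a one.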

Then there is the phenomenon of entanglement. Under certain circumstances the attributes of one subatomic particle can be correlated with those of another in such a way that knowledge of one immediately gives you knowledge of the other. For example, entangled subatomic particles might be such that if we discover that one of them has an upward spin, we know immediately that the other has a downward spin, regardless of any physical separation between the particles. By itself this is not so puzzling. Consider, though, what we said in the previous paragraph. Neither particle has a definite spin until we measure it. They have only a superposition of all possible spins, each held with a certain probability. Nonetheless, upon measuring the first we learn something about the second. But how does the second particle know the first has been measured?
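
The standard textbook example, again presented only as a sketch, is the so-called singlet state of two spin one-half particles,

\[ |\psi\rangle \;=\; \frac{1}{\sqrt{2}}\Bigl(\,|{\uparrow}\rangle|{\downarrow}\rangle \;-\; |{\downarrow}\rangle|{\uparrow}\rangle\,\Bigr), \]

in which neither particle has a definite spin of its own, and yet measuring the first particle as ``up'' guarantees that the second will be found ``down,'' and vice versa.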

There is nothing like a superposition of states or quantum entanglement in our everyday experience with relatively large objects. These are precisely the sorts of considerations that lead many physicists to speak of ``quantum weirdness.'' And if our common sense notions of the behavior of matter go out the window in considering subatomic particles, so too do our intuitions about information. It was from this realization that the field of quantum information theory was born.

Does all this strike you as strange? Counterintuitive? Preposterous, even? If it does, then you are not alone. Most physicists share your feelings. Quantum information theory is an entirely different beast from what we are used to, and even professionals in the field can find it difficult to keep their bearings. What is needed is a sort of anchor, something that will allow us to keep one foot in more familiar conceptual territory while we explore this new terrain.

And that is where the Monty Hall problem comes in.

Careful study of the classical problem revealed much regarding the nature of probability and classical information. At first the problem is beyond the intuition of all but the most savvy customers, but it rewards the effort put into understanding its subtle points. By finding quantum mechanical analogs for each aspect of the problem, we create an environment for studying the nature of quantum information in which we have at least some intuition to guide us. Quantum Monty Hall problems are not studied to help us determine optimal strategies in subatomic game shows. Rather, the insight gained from studying them aids us in applying modern physics to more practical pursuits.

The history of mathematics and science records many instances of this. Probability theory is nowadays an indispensable part of many branches of science, but for centuries it was studied solely in the context of gambling and games of chance. In the early days of computer science and artificial intelligence, much emphasis was placed on the problem of programming a computer to play chess. Insight into difficult mathematical or scientific problems often begins with the earnest consideration of trivial pursuits.

We have already seen how the Monty Hall problem opens the door to virtually every aspect of elementary probability theory. The next two chapters will describe the impact of the problem among cognitive scientists, psychologists and philosophers. Now it seems it is helping shed some light on issues in quantum physics. Is there nothing the Monty Hall problem cannot do?


I'm too braindead today to comment heavily on the detailed content, but it does seem quite interesting, and akin to the standard 'talking to non-physicist' explanations I've heard before. Which is good.

Stylistic point: First paragraph, you have "the level [...] is well beyond anything I care to discuss in this book."

I was a little confused to see the "I" in there, and you don't use it elsewhere.

Very cool, though.

By Aaron Lemur Mintz (not verified) on 28 Mar 2008

Were you ever able to contact Monty Hall to see if he would be up for writing a cover blurb or short foreword? According to IMDB he's still alive.

As someone who spends their days in the quantum world...

I liked everything except the paragraph with
"And yet, modern physics tells us this is precisely what happens with subatomic particles. The electron does not have a definite position until we presume to measure it."
Technically an electron can have a well-defined position; it's just that it then doesn't have a well-defined momentum! (Actually "technically" a particle can't be well localized due to the infinite amount of energy required to do this...but this comes from the physics of quantum field theory, not from textbook quantum mechanics.) The next sentence is also a bit problematic:
"Prior to taking a measurement there is only a probability distribution that tells us how likely we are to find the electron in a given region of space."
Prior to taking a measurement there really is a probability distribution, but there is also information in the phase of the wavefunction. So really there is _more_ than a probability distribution prior to a measurement.

I think I would try to rewrite the offending paragraph to explain that an electron can have a definite value for one of its properties, but that other properties you could measure are then not determined. It's actually kind of hard to compare this to classical information systems...what other measurement on a classical bit can you perform besides asking whether it is zero or one?

I like your choice of wording for entanglement (hard to do in a few short paragraphs!) and your explanation for why you should study quantum versions of the Monty Hall problem is spot on.

Enjoyed reading that. I second Dave Bacon's motion for the amendments he proposed, as if my second were actually needed here. The only other picky point that popped out at me was the statement "Likewise for any other attribute you would care to mention." Okay, I'll mention charge. Doc Bacon has already explained how the gain of definite knowledge of one variable implies no definite knowledge for some other variable. This is true for numerous pairs like position/momentum, energy/time, etc. They share a non-commutative relationship: measuring A then B isn't guaranteed to be the same as measuring B then A. But there are some things that aren't stuck in one of these obnoxious non-commutative relationships. Such as electrical charge. You can pin that down as tight as you want without it costing you info about some other variable. Thus "... any other attribute you would care to mention" left you open to this little gotcha. And if I found it, you KNOW what's gonna happen if it appears in a book...
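
(For the formula-minded: the standard statement for position and momentum is the commutator $[\hat{x},\hat{p}] = \hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar \neq 0$, and that failure to vanish is exactly what makes sharp knowledge of both impossible at once. Charge, by contrast, commutes with everything else you might measure, which is how it escapes the gotcha.)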

By Emory Kimbrough (not verified) on 28 Mar 2008

Not relevant to QM stuff, but I don't like the popular claim that the information associated with an event is related to that individual event's probability.

Here's one reason why--it leads to incorrect answers.

Information transmission is reduction in entropy of a random variable. Say variable S has 10 outcomes (a ten-sided die), one outcome S1 with probability 1/10000, another S2 with probability 1/10, and suppose the entropy of S is 3 bits. How much information is transmitted when you are told that S1 happens? Or how much when told that S2 happens? The exact same amount--the entropy of S, conditioned on this message, is zero, so 3 bits of information are transmitted by either message. Their priors don't matter.
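
In symbols, with the probabilities stipulated above: the prior uncertainty is $H(S) = -\sum_i p_i \log_2 p_i = 3$ bits, the entropy conditioned on either message is $H(S \mid \text{message}) = 0$, and so the information transmitted is $3 - 0 = 3$ bits in both cases, whatever the prior of the particular outcome named.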

So while I have seen many popularizations (e.g., Dembski's book [!!!]) say that individual events have associated information which is simply log(1/p) for that event, I think this is quite sloppy. I am happy to talk about the 'surprise' associated with that event, and entropy as the average surprise. But the idea of 'information' really doesn't make sense until you have already built up to the concepts of entropy and entropy reduction.

This is discussed superficially here.

If you want me to take a look at your bit on infotheory in the earlier chapter, just email me (here is my web site). It's something I have spent a lot of time on (e.g., I wrote this paper when working out how information theory and another framework relate to one another).

Dave and Emory-

Many thanks for the helpful suggestions. I will revise the section accordingly.

John-

Let me finish the book first, then I'll worry about the jacket material! :)

Aaron-

Thanks for the comment. I unabashedly use the first person throughout the book, especially in the introductory paragraphs of sections.