The recent discussion of reviews of The God Delusion has been interesting and remarkably civil, and I am grateful to the participants for both of those facts. In thinking a bit more about this, I thought of a good and relatively non-controversial analogy to explain the point I’ve been trying to make about the reviews (I thought of several nasty and inflammatory analogies without much effort, but I’m trying to be a Good Person…). Unfortunately it requires me to explain a bit of physics… Please, please, don’t throw me into that briar patch.
Some people say that the last really significant thing Einstein did in his career was a paper with Boris Podolsky and Nathan Rosen, called “Can quantum-mechanical description of physical reality be considered complete?” (Wikipedia’s EPR paradox page points to a pdf version here). In the paper, they point out that for a particular sort of quantum state, there are strong correlations between measurements made on two different particles, even though quantum theory says that the state of the individual particles is indeterminate. One interpretation is that the measurement of one particle determines the state of the other, but these correlations should be observed even if the time difference between the measurements is too short for information to travel from one measurement position to the other at the speed of light. Einstein referred to this as a “spooky action at a distance,” a wonderfully colorful phrase that physicists are contractually obligated to drop into every discussion of the EPR paradox. It’s also the origin of the title of the Classic Edition post on the subject that I put up a little while ago.
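To make “strong correlations despite indeterminate states” concrete, here’s a minimal numpy sketch of the photon-polarization version of the EPR state. The state and the rotated-basis arithmetic are textbook quantum mechanics; the variable names and sample angles are just my own choices for illustration:

```python
import numpy as np

# The EPR-type state for photon polarization, written in the H/V basis:
# |phi+> = (|HH> + |VV>) / sqrt(2)
H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])
phi_plus = (np.kron(H, H) + np.kron(V, V)) / np.sqrt(2)

def joint_probs(theta):
    """Outcome probabilities when BOTH photons are measured in a
    linear-polarization basis rotated by angle theta."""
    h = np.array([np.cos(theta), np.sin(theta)])    # rotated "H" axis
    v = np.array([-np.sin(theta), np.cos(theta)])   # rotated "V" axis
    settings = {"HH": (h, h), "HV": (h, v), "VH": (v, h), "VV": (v, v)}
    return {k: float(np.abs(np.kron(a, b) @ phi_plus)) ** 2
            for k, (a, b) in settings.items()}

# For ANY common analyzer angle: P(HH) = P(VV) = 1/2, P(HV) = P(VH) = 0.
# The two polarizations always agree, even though neither photon has a
# definite polarization before the measurement.
for theta in (0.0, 0.3, 1.1):
    print(round(theta, 1), joint_probs(theta))
```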
Einstein favored a type of theory now known as a “Local Hidden Variable” (LHV) theory, in which each particle in the pair has a definite state that is not known to the experimenter. Other people, most notably Niels Bohr, held to the orthodox quantum interpretation that the state is completely indeterminate until a measurement is made. This was thought to be nothing but an abstract philosophical debate, until John Bell came up with an ingenious theorem that allowed people to make a testable prediction about what would happen in certain types of EPR experiments. Bell showed that there are perfectly general things you can say about the correlations you would expect to find in a LHV theory (that is, without specifying the details of any particular theory, the mere fact that it has local hidden variables puts certain limits on the possible outcomes), and that for certain experiments quantum theory predicts results that are outside those bounds.
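To put numbers on that, here’s the most commonly tested form of Bell’s theorem, the CHSH inequality (a reformulation of Bell’s original result, and the version relevant to the experiments below). Any LHV theory must give |S| ≤ 2 for the combination of correlations computed here, while quantum mechanics predicts 2√2 for the right choice of analyzer angles. This is just a sketch of the textbook numbers, not anybody’s actual data analysis:

```python
import numpy as np

# Quantum prediction for the polarization correlation between
# analyzers at angles a and b: E(a, b) = cos(2(a - b)).
def E_quantum(a, b):
    return np.cos(2 * (a - b))

# The standard angle choices that maximize the quantum violation:
a1, a2 = 0.0, np.pi / 4
b1, b2 = np.pi / 8, 3 * np.pi / 8

S = E_quantum(a1, b1) - E_quantum(a1, b2) \
    + E_quantum(a2, b1) + E_quantum(a2, b2)
print(S)            # 2*sqrt(2), about 2.83
print(abs(S) <= 2)  # False: outside the range any LHV theory allows
```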
In the 1980s, Alain Aspect did a series of experiments to test Bell’s theorem, which earned him a spot in the Top Eleven Physics Experiments of All Time, and proved that LHV theories can’t possibly describe reality. Or, rather, they almost proved that LHV theories can’t work, which is the whole point of this post, but you’ll have to click below the fold to see why…
Aspect did three experiments, each more sophisticated than the last, but each experiment left a loophole. The loopholes get smaller as you go on, but to the best of my knowledge, they haven’t been fully closed.
The first experiment sets up the basic parameters. Aspect and his co-workers used an atomic cascade source that they knew would produce two photons in rapid succession. According to quantum mechanics, when these two photons are headed in opposite directions, their polarizations are correlated in exactly the same way as the EPR states. Each photon could be polarized either horizontally or vertically, but no matter what polarization it has when it’s measured, the other will be measured to have the same polarization.
Since the goal here is to measure correlations between polarizations, Aspect set up two detectors, with polarizers in front of each detector, and measured the number of times that the two detectors each recorded a photon for different settings of the polarizers. The results showed that the measured correlation was nine standard deviations outside the limits Bell’s theorem sets for LHV theories, which means the odds of that happening by accident are far smaller than one in a billion.
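For contrast, here’s what a concrete LHV model does with the same combination of correlations. This is a toy model of my own devising (not anything Aspect considered): each pair carries a hidden polarization angle fixed at the source, and each analyzer deterministically reports +1 or -1 using only that angle and its own setting. It lands right at the Bell limit of 2, never beyond it, while the quantum prediction is about 2.83:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy deterministic LHV model: each pair carries a hidden polarization
# angle lam, fixed at the source, and an analyzer at angle a reports
# +1 or -1 based only on lam and a (purely local information).
def outcome(lam, a):
    return np.sign(np.cos(2 * (lam - a)))

def E_lhv(a, b, n=200_000):
    lam = rng.uniform(0, np.pi, n)   # hidden variable, set at the source
    return np.mean(outcome(lam, a) * outcome(lam, b))

a1, a2 = 0.0, np.pi / 4              # analyzer angles on one side
b1, b2 = np.pi / 8, 3 * np.pi / 8    # analyzer angles on the other

S = E_lhv(a1, b1) - E_lhv(a1, b2) + E_lhv(a2, b1) + E_lhv(a2, b2)
print(S)  # ~2.0: right at the Bell limit, while QM predicts ~2.83
```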
So, LHV theories are dead, right? Well, not really. There’s a loophole in the experiment, because the detectors weren’t 100% efficient, and there was some chance that they would miss photons. Since the experiment used only a single detector and a single polarizer for each beam, they could only infer the polarization of some of the photons– a vertically polarized photon sent at a vertically oriented polarizer produces a count from the detector, while a horizontally polarized photon produces nothing. In some cases, then, the absence of a count was the significant piece of information, and was taken as a signal that the polarization was horizontal when the polarizer was vertical.
This leaves a small hole for the LHV theorist to wiggle through, as it’s possible that either through bad luck or the sheer perversity of the universe, some of those non-counts were vertically polarized photons that just failed to register. If you posit enough missed counts, with just the right counts going missing, you can make the results consistent with LHV theories.
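Here’s a toy illustration of why the no-count inference is shaky. The model and the 80% efficiency figure are invented for illustration, not taken from the real apparatus: even with both polarizers set to the same angle, so that every pair “should” be recorded as matching, imperfect detectors turn some genuine counts into phantom “horizontal” results:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: both polarizers at the same angle (0 degrees), so every
# pair "should" be classified as matching.
n, efficiency = 100_000, 0.8
lam = rng.uniform(0, np.pi, n)   # shared hidden polarization angle
passes = np.cos(2 * lam) > 0     # both photons pass, or both are blocked

# A "count" requires the photon to pass AND the detector to fire:
count_1 = passes & (rng.random(n) < efficiency)
count_2 = passes & (rng.random(n) < efficiency)

# The experimenter reads "no count" as "horizontally polarized", so a
# missed count on one side looks like a polarization mismatch:
inferred_match = count_1 == count_2
print(inferred_match.mean())     # ~0.84 instead of the true 1.0
```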
So, they did a second experiment, to close the detector efficiency loophole. In this experiment, they used four detectors, two for each photon, and polarizing beamsplitters to arrange it so that each photon was definitely measured. A vertically polarized photon would pass straight through the beamsplitter, and fall on one detector, while a horizontally polarized photon would be reflected, and fall on the other detector. Whatever the polarization, and whatever the setting of the polarizer, each photon will be detected somewhere, so there are no more missed counts.
They did this experiment, and again found results that violate the limits set by Bell’s Theorem, this time by forty standard deviations. The probability of that occurring by chance is so small as to be completely ridiculous.
So, LHV theories are dead, right? Well, no, because there’s still a loophole. The angles of the polarizers were set in advance, so it’s conceivable that some sort of message could be sent from the polarizers to the photon source, to tell the photons what values to have. If you allow communication between the detectors and the source, you can arrange for the photons to have definite values, and still match the quantum prediction.
So, they did a third experiment, again using four detectors. This time, rather than using polarizing beamsplitters, they put fast switches in each of the beams, and sent the photons to one of two detectors. Each detector had a single polarizer in front of it, set to a particular angle. The switches were used to determine which detector each photon would be sent to, which is equivalent to changing the angle of the polarizer. And the key thing is, the switch settings were changed very rapidly, so that the two photons were already in flight before the exact setting was determined. A signal from the detector to the source would need to travel faster than light (effectively backward in time) in order to assign definite values to the photon polarizations, and that kind of signal is exactly what a local theory rules out.
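The arithmetic behind “already in flight” is simple enough to show directly. The numbers below are roughly those quoted for the 1982 experiment (about 12 m between the two switches, settings changed every 10 ns or so), so treat them as ballpark figures rather than exact values:

```python
# Ballpark timing for the switched experiment (approximate figures;
# see Aspect, Dalibard, and Roger, 1982).
c = 3.0e8                      # speed of light, m/s
separation = 12.0              # distance between the two switches, m
light_time = separation / c    # time for any signal to cross: ~40 ns
switch_interval = 10e-9        # settings changed roughly every 10 ns

print(light_time * 1e9)               # ~40 ns
print(switch_interval < light_time)   # True: settings change mid-flight
```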
This version of the experiment, like the other two, produced a violation of the limits set by Bell’s theorem for LHV theories. The violation is smaller, only five standard deviations, because the experiment is ridiculously difficult, but it’s still not likely to occur by chance.
So, LHV theories are finally dead, right? Not really, because the experiment only used a single polarizer for each detector. This re-opens the detector efficiency loophole, and lets LHV theories sneak by in the missed counts.
Aspect quit at this point, though, because closing both of these loopholes at once would require eight detectors to go with the fast switches, and, really, who needs the headache? More recent experiments have improved the bounds, but to the best of my knowledge, none have completely closed all the possible loopholes (and there are plenty of them).
There aren’t many people still seriously pushing LHV theories these days. Pretty much everyone regards Aspect’s experiments as having settled the EPR question in favor of quantum theory, and more recent experiments have only squeezed the range of possible LHV theories down further. While it’s theoretically possible to find some local hidden variable theory that would fit the current limits, it would require such a fortuitous arrangement of detector efficiencies and spooky actions that nobody really thinks it would work.
The point is, though, that those loopholes are still there. Any responsible treatment of the subject has to acknowledge them. And, more importantly, anyone who wants to design a new experiment to test Bell’s theorem needs to account for those loopholes. Tightening the existing bounds is all very nice, and there are much more efficient ways to do the experiments these days than what Aspect did back in 1982, but those aren’t breaking new ground. The loopholes that are left seem faintly ridiculous, which is why you don’t find many people working at closing them, but they’re there, and you need to deal with them.
And that’s the analogy to Dawkins and the things Eagleton and Holt said about his book (I bet you were wondering whether I had forgotten about that…). The modern versions of the “ontological argument” for God may be awfully intricate, but they’re not really any worse than the loopholes in experimental tests of Bell’s theorem (in fact, divine intervention is probably about as credible an explanation of the results as some of the proposed loopholes). Ridiculous and complicated as they may seem, those are the arguments that need to be addressed, in the same way that a new Bell’s theorem experiment would need to deal with the faintly absurd loopholes that remain in the existing experiments.
I don’t personally find the various loopholes all that convincing– when I lecture about them, I refer to the messages from detector to source as being carried by invisible quantum gremlins– any more than I find the “ontological argument” credible. Intellectual honesty demands that those loopholes be addressed when discussing the results, though, and intellectual honesty demands that somebody writing a book that purports to dismantle the arguments for the existence of God deal with the strongest modern versions of the “ontological argument.” If Dawkins does blow that off, as both Eagleton and Holt claim, then he’s failed to meet his obligations as an author, and they’re exactly right to call him on it in their reviews.