In comments to my review of “The Race for Absolute Zero”, I promised to try to write up an explanation of BEC on the blog. A bit of preparatory Googling demonstrates, though, that I already did, in the fall of 2006, talking about identical particles, Pauli Exclusion, and BEC. You might’ve thought I would remember doing that, given that I have a mind like a steel wossname….

Having spent a bit of time thinking about this in the last day or so, though, I don’t really want to waste that effort, so I’ll repeat a little of the earlier discussion in a slightly different way, starting with an explanation of Pauli exclusion that I’m ripping off from Feynman (always steal from the best…).

The whole business of quantum statistics starts from the fact that it’s impossible to tell fundamental particles apart. You can illustrate this pretty nicely by thinking about a really simple experiment, illustrated at right. You have two identical particles– let’s call them electrons, to be concrete– and two detectors. For convenience, we’ll label the electrons 1 and 2, depending on their starting position, and label the detectors A and B.

So, you send these two electrons at the two detectors, and you get two clicks. Boringest. Experiment. Ever. Right? Yes and no. It’s not an exciting experiment, by any stretch, but it turns out to involve some awfully subtle physics, if you look at it closely.

Let’s think about what really happened here. We know that we had two electrons, and both were detected, but we can’t say for sure which detector detected which electron. The electrons don’t really have labels on them– that’s just something we do by convention when we draw the diagrams. The detector at position A may very well have detected the electron from position 1 (case I in the figure above), or it may have gotten the electron that started at position 2 (case II). There’s no way to say for sure, since it’s impossible to mark the electrons in any way.

Because it’s impossible to tell the difference between case I and case II, whatever description we write down for this system can’t depend in any way on the labels we gave things. Somebody else coming along later looking at the same set-up might choose to label things in a different way, calling our electron 1 electron 2, and vice versa. They should see exactly the same result in the experiment, though, so when we write a wavefunction to describe the experiment, the labels can’t matter.

So, let’s look at how we would go about this. It’s easy to imagine writing a wavefunction for each of the two individual cases– the details don’t matter, so let’s just call them |case I> and |case II>. The probability of detecting the two electrons by either of those methods is given by the square of the appropriate wavefunction. (The probability doesn’t have to be 100%, as there are other things you could imagine happening– both electrons going to the same detector, one or both not being detected at all, etc.)

We want to end up with a wavefunction that doesn’t depend on the labels at all, so let’s look at what happens when we swap the labels on the two electrons. We’ll call these case I* and case II*, and they’re shown in the figure at right. But if you look at these, you see that case I* is really the same thing as our original case II, and case II* is the same as case I. Which means that the wavefunctions will also be the same:

|case I*> = |case II>

|case II* > = |case I>

So whatever we do with the wavefunction, all we really need to worry about is the original set of two functions, |case I> and |case II>.

If you fiddle around with these for a bit, you can easily convince yourself that there are two and only two ways to put these together so that nothing we measure changes when you swap the labels. These involve adding or subtracting the two functions to make two new functions:

|sym> = |case I> + |case II>

|anti> = |case I> – |case II>

“Wait a minute,” you may be thinking. “If I switch the labels on the |anti> state, I end up with a wavefunction that’s different than what I started with. In fact, it’s the negative of what I started with– |case II> – |case I>.”

That’s true, but it turns out not to affect anything we can measure. Measurable quantities all depend on the **square** of the wavefunction, and the square of -|anti> is the same as the square of +|anti>.
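If you like, you can check this label-swapping arithmetic yourself. Here’s a toy sketch in Python, treating |case I> and |case II> as single complex amplitudes (the particular numbers are made up purely for illustration):

```python
# Toy model: stand in for |case I> and |case II> with two complex amplitudes.
# The specific values are arbitrary -- any pair works.
case_I = 0.6 + 0.2j
case_II = 0.3 - 0.5j

sym = case_I + case_II    # |sym>  = |case I> + |case II>
anti = case_I - case_II   # |anti> = |case I> - |case II>

# Swapping the particle labels just exchanges case I and case II:
sym_swapped = case_II + case_I
anti_swapped = case_II - case_I

print(sym_swapped == sym)    # True: |sym> is completely unchanged
print(anti_swapped == -anti) # True: |anti> picks up a minus sign...

# ...but the measurable probability (the squared magnitude) is identical:
print(abs(anti_swapped)**2 == abs(anti)**2)  # True
```

The minus sign on |anti> is real, but it lives entirely in the wavefunction; the moment you square to get a probability, it drops out.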

This does provide a handy way of dividing the universe up into two types of particles, though: there are particles that you find in states like |sym> (called “bosons” after the Indian physicist Satyendra Nath Bose), and then there are particles that you find in states like |anti> (called “fermions” after the Italian physicist Enrico Fermi). This picture also lets you see the origin of the so-called “Pauli exclusion principle,” which says that two electrons can never be found in exactly the same place. This is the bit I’m swiping from Feynman (in QED), and it’s typically clever.

Think about what happens when you let the two detectors get really close to one another, almost on top of each other, as shown in the picture at left. In that case, there’s really almost no difference between case I and case II– each electron leaves its starting position, and goes to exactly the same place. I didn’t say how you would go about writing down the wavefunction for the two different cases, but however it’s done, I hope you’ll agree that when the two detectors are right on top of one another, the wavefunction for case I has to be the same as the wavefunction for case II:

|case I> = |case II>

Given that, let’s look at what happens to our two exchange-independent wavefunctions:

|sym> = |case I> + |case II> = 2 |case I>

|anti> = |case I> – |case II> = 0

The symmetric wavefunction, where we added the two cases together, gets bigger, so you’re more likely to find the two particles hitting the two detectors. It’s actually four times as likely as for either case I or case II alone (because you square the wavefunction to get the probability). Bosons are more likely to be found in the same place than in two different places.

The anti-symmetric wavefunction, on the other hand, goes to zero– when case I and case II become the same, the difference between them is zero. There is no chance at all in this model of finding two fermions in exactly the same place at the same time. There’s a slight loophole in that you can put two fermions in the same position if they have different internal states, something that we’ve neglected to this point, but that’s easy enough to add to the model, and the general rule still holds: two fermions can never exist in exactly the same state (including both position and internal state) at the same time. This is the famous “Pauli exclusion principle,” and it’s the reason why we have chemistry.
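The coincident-detector limit is easy to check numerically, too. When the detectors sit on top of each other, the two case amplitudes are equal, and everything above follows in a couple of lines (the amplitude value here is, again, an arbitrary stand-in):

```python
# Detectors on top of each other: case I and case II are the same process,
# so their amplitudes are equal.  Pick an arbitrary complex value:
a = 0.5 + 0.1j
case_I = a
case_II = a

p_sym = abs(case_I + case_II)**2   # bosons: |sym> = 2 |case I>
p_anti = abs(case_I - case_II)**2  # fermions: |anti> = 0
p_single = abs(case_I)**2          # either case on its own

print(p_anti)             # 0.0 -> fermions never land in the same place
print(p_sym / p_single)   # approx. 4.0 -> bosons are four times as likely
```

That factor of four is just the square of the factor of two: doubling the wavefunction quadruples the probability.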

Electrons, protons, and neutrons are all fermions, and can only ever be found in wavefunctions like |anti>, that change sign when you swap the particle labels. This holds for larger numbers of particles– it’s harder to see how to write down such a state for more than two particles, but it can be arranged through a rather irritating mathematical technique called a “Slater determinant”. This determines the structure of nuclei, the arrangement of electrons in atoms, the electrical properties of materials, and a host of other phenomena.
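For the two-particle case, the Slater determinant isn’t actually that irritating: it’s just a 2×2 determinant of single-particle wavefunctions, and you can verify the sign-flip and exclusion properties directly. A minimal sketch, using two made-up single-particle states (rough harmonic-oscillator-like shapes, chosen only for illustration):

```python
from math import exp, isclose

# Two hypothetical single-particle states, evaluated at a position x:
phi_a = lambda x: exp(-x**2)
phi_b = lambda x: x * exp(-x**2)

# Two-particle Slater determinant (up to normalization):
#   psi(x1, x2) = phi_a(x1)*phi_b(x2) - phi_b(x1)*phi_a(x2)
def slater(x1, x2):
    return phi_a(x1) * phi_b(x2) - phi_b(x1) * phi_a(x2)

# Swapping the two particles flips the sign, as an |anti>-type state must:
print(isclose(slater(0.5, 1.5), -slater(1.5, 0.5)))  # True

# And two fermions at the same position give zero -- Pauli exclusion:
print(slater(0.7, 0.7))  # 0.0
```

For N particles you build an N×N determinant the same way, which is where the “rather irritating” part comes in; the antisymmetry under any swap, and the vanishing when two rows match, come for free from the properties of determinants.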

Bosons, on the other hand, are more likely to be found in the same state than in different states. Photons are bosons, and you can think of a laser as a collection of a huge number of photons all occupying exactly the same state. You can also make composite bosons by sticking together an even number of fermions, which is the basis for superconductivity (electrons in a superconductor “pair up” to form bosons, which then condense), superfluidity in liquid helium (⁴He atoms are bosons, and ³He atoms can “pair up” in the same way that electrons do), and Bose-Einstein Condensation (various isotopes of a half-dozen different atoms are composite bosons, and can be put into a state where large numbers of atoms occupy the same quantum state).