When one of the most recent issues of Physical Review Letters hit my inbox, I immediately flagged these two papers as something to write up for ResearchBlogging. Then I looked at the accompanying Viewpoint in Physics, and discovered that Chris Westbrook already did most of the work for me. And, as a bonus, you can get free PDFs of the two articles from the Physics link, in case you want to follow along at home.

Since I spent a little time thinking about these already, though, and because it connects to the question of electron spin that I talked about yesterday, I think it’s still worth writing these up. So:

OK, what are these papers about? Essentially, they’re both talking about a new method for measuring the temperature of extremely cold samples of fermionic atoms, by looking at fluctuations in the number of atoms at different points in the cloud.

So, they’ve just made a better thermometer? Is that all it takes to get into PRL these days? Actually, it’s more complicated than you might think. We’re talking about samples of a few million atoms, at temperatures a few billionths of a degree above absolute zero. This isn’t a matter of sticking a tiny little mercury bulb or thermocouple contact into a sample of atoms. There aren’t enough of them to measure with any macroscopic object, and bringing them into contact with anything would destroy the sample completely. To measure temperatures at this level, you need to go back to the fundamental definition of the temperature of a gas.

OK, temperature is a measure of the average kinetic energy of the particles in a gas. So, you just need to measure how fast the individual atoms are moving. Big deal. That’s how it’s traditionally been done, but when you start talking about the particular systems they’re dealing with, the situation becomes much more complicated. The quantum statistics of the particles comes into play, and it turns out that when you’re dealing with ultra-cold fermions, measuring the average energy is no longer very effective.

OK, what? Let me back up a bit. The definition of temperature in terms of average energy is a perfect description of the problem if you’re talking about an ideal gas: hard-sphere particles that are distinguishable from one another rattling around in a box. When you put quantum mechanics into the mix, though, some weird things start to happen when you get to very, very low temperatures.
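To make the classical starting point concrete, here’s a minimal sketch of the “temperature is average kinetic energy” definition, using the relation ⟨KE⟩ = (3/2) k_B T per atom. The atom mass is roughly that of lithium-6, and the velocity spread is an invented illustrative number, not data from the papers:

```python
# Hedged sketch: for a classical ideal gas, temperature is just a
# rescaled mean kinetic energy: <KE> = (3/2) k_B T per atom.
# The velocity distribution below is made up for illustration.
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant, J/K
m = 1.0e-26         # roughly the mass of a lithium-6 atom, kg

# Pretend we measured each atom's velocity components (m/s);
# the spread here is an arbitrary ultracold-scale choice.
rng = np.random.default_rng(0)
velocities = rng.normal(0.0, 0.05, size=(100_000, 3))

mean_ke = 0.5 * m * np.mean(np.sum(velocities**2, axis=1))
T = 2.0 * mean_ke / (3.0 * k_B)
print(f"Inferred temperature: {T:.2e} K")  # microkelvin scale
```

With these made-up numbers you land in the microkelvin range; the point is just that, classically, measuring speeds *is* measuring temperature.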

Right, like a Bozo Condensate. That’s Bose-Einstein Condensate (BEC), and that’s one example of the sort of thing I’m talking about. BEC is a phenomenon that takes place when you get a large number of the right sort of atoms together at very low temperatures. Atoms with an even number of particles making them up (protons, neutrons, and electrons) behave as bosons, and “want” to be in the same state. When the temperature gets very low, all of the atoms will “condense” into a single state, generally the lowest-energy state available to them.

At that point, the average-energy method of temperature measurement gets a little dodgy. All of the atoms have the same energy, and it’s the lowest energy they could possibly have.

So they’re at absolute zero? No, because there are still some fluctuations in the sample– some atoms will wander in and out of the condensate, occasionally taking on a higher energy. But it becomes damnably difficult to measure the actual temperature, because you’re talking about looking at small fluctuations in the energy, rather than the energy itself. It’s very tricky.

So that’s what they’re doing here? Sort of. They’re using fluctuations to measure the temperature, but the atoms they’re dealing with aren’t bosons, but fermions, which makes things much more complicated.

How so? Well, unlike bosons, fermions can’t stand to be in the same state. They’re subject to the Pauli exclusion principle, which says that no two fermions can occupy exactly the same state. So, when you get a lot of them together, you necessarily end up with them occupying higher energy states.

The basic physics involved is the same thing that gives you chemistry. Electrons are fermions, so you can never have two electrons in exactly the same state. When you start adding electrons to an atom, then, the first one goes into the lowest energy state, the second goes into the same energy state with the opposite value of spin, but then that fills up the lowest energy state. The next electron in has to go into the second-lowest energy level, and so on. As you move up through the periodic table, the last electron added to each atom is in a higher and higher energy state, and the exact energy state of that last electron is what determines the chemical behavior of the different elements.

Something similar happens when you add fermionic atoms to an atom trap. The first atom in goes into the lowest possible energy state, the next one into the second-lowest state, and so on. The difference is that we’re generally dealing with thousands or millions of atoms, way more than the tens of electrons in the atoms of the periodic table, so we tend to talk about this as more of a continuous distribution, because it’s too difficult to count all the individual atoms. The situation is described by something called the “Fermi-Dirac distribution function” (sometimes just “Fermi function,” for compactness), and it looks like the graph at right, which I grabbed from this page.

The Fermi function gives you the probability of a given state containing an atom, and it’s usually plotted as a function of energy. At zero temperature, it’s equal to 1 for all the states up to a certain energy, the Fermi energy, determined by the total number of atoms you’re working with, indicating that all the states below that energy contain exactly one atom. Above the Fermi energy, the Fermi function is zero, indicating that none of those states contain any atoms at all.

So you’re saying that the last atom in your low-temperature sample of fermions actually has a high energy? It’s not high in an absolute sense, but it’s much higher than it would be in a system of bosons, where every atom would have the lowest possible energy.

That must play hell with measuring the temperature. Exactly. Once you get down to a low enough temperature, the average energy of your sample stops really changing. The last atom in has an energy determined by the Fermi energy, and that doesn’t change. You can’t measure the temperature by measuring the individual energy any more, you need to look at fluctuations.

What do you mean? Well, if you look at the graph above, you’ll see that there are a bunch of different lines on the plot, corresponding to the Fermi function for different values of the temperature. You’ll notice that as you raise the temperature, the step at the Fermi energy where the function drops from 1 to 0 gets more rounded.

That rounding of the edge occurs because when you have a finite temperature, atoms in your sample can move up and down in energy by an amount related to the temperature. An atom just below the Fermi energy can move up to a state just above the Fermi energy, which reduces the probability of finding an atom occupying the just-under-the-Fermi-energy state, and increases the probability of finding an atom occupying the just-over-the-Fermi-energy state.

This temperature effect shows up in real space as a fluctuation in the number of atoms at a given position. The low-energy states in a trapped sample of atoms correspond to atoms that are near the center of the trap, while the high-energy states correspond to atoms that spend most of their time near the outer edge of the trap. If you take a picture of the trapped cloud of atoms, you get a cross-section of energies, and you can measure how many atoms there are at various energies by looking at how the number of atoms varies with position.

So, you would expect a bigger variation in the number of atoms out at the edge of the cloud, where the high-energy states are? Exactly. The low-energy states are always fully occupied, because there’s nowhere for the atoms to go– they could use their thermal energy to jump up a small amount, but the state they would jump into is also occupied, so they can’t do that. Out on the edge, though, the thermal energy can be enough to take an atom past the Fermi energy into an unoccupied state. That means the number of atoms in those states can fluctuate depending on exactly which atoms have jumped up or down.

What you expect to see, then, is that the fluctuation in the number of atoms you see in the center of a trapped sample of fermions at very low temperatures should be much, much lower than you would expect from normal counting statistics, while at the outer edge of the cloud, you would expect to see more fluctuations. And that’s exactly what these two papers see.
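You can see this suppression directly in the statistics: each single-particle state holds either 0 or 1 fermion with probability f, so the number variance per state is f(1−f). Deep below the Fermi energy f ≈ 1 and the variance is nearly zero; the fluctuations peak right at E_F. A hedged sketch, with arbitrary illustrative units:

```python
# For fermions, a state is occupied (n=1) with probability f and
# empty (n=0) otherwise, so Var(n) = f * (1 - f) per state.
# Energies and temperature are arbitrary illustrative units (k_B = 1).
import numpy as np

def fermi(E, mu, T):
    return 1.0 / (np.exp((E - mu) / T) + 1.0)

E_F, T = 1.0, 0.05
E = np.array([0.2, 0.9, 1.0, 1.1, 1.8])  # deep inside -> far outside
f = fermi(E, E_F, T)
var = f * (1.0 - f)   # per-state number variance

for Ei, vi in zip(E, var):
    print(f"E = {Ei:.1f}  Var(n) = {vi:.6f}")
```

The variance is essentially zero for the deeply bound states (the “nowhere to go” regime) and largest right at the Fermi energy, which is why the interesting fluctuations live out at the edge of the cloud.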

How do they measure the fluctuations, though? Pretty much the simplest way you can imagine. They take pictures of the cloud of trapped atoms, and divide those pictures up into pixels. Then they count the number of atoms they see at each pixel, and repeat the measurement 20-odd times. Averaging all the different atom counts together gives a good measurement of the average number of atoms in each part of the cloud, and they can get the fluctuations by looking at the standard deviation of all their individual measurements.
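The statistics of that procedure can be mocked up in a few lines. This toy version uses invented numbers (20 shots, ~400 atoms per pixel, and a crude Gaussian model for the suppressed center pixel), not the papers’ actual data; the point is just the comparison between shot-to-shot variance and the Poissonian expectation:

```python
# Toy version of the analysis: simulate repeated images, count atoms
# in one "edge" pixel (ordinary counting statistics) and one "center"
# pixel (fluctuations suppressed, modeled crudely as a narrow Gaussian),
# then compare variance to mean. For Poisson statistics, Var(N)/N = 1.
import numpy as np

rng = np.random.default_rng(1)
n_shots, mean_atoms = 20, 400

# Edge pixel: normal counting statistics.
edge = rng.poisson(mean_atoms, size=n_shots)
# Center pixel: fluctuations well below the sqrt(N) Poisson level.
center = rng.normal(mean_atoms, 0.2 * np.sqrt(mean_atoms), size=n_shots)

for name, counts in [("edge", edge), ("center", center)]:
    ratio = np.var(counts, ddof=1) / np.mean(counts)
    print(f"{name}: variance/mean = {ratio:.2f}")
```

The edge pixel comes out near 1 (Poissonian), while the center pixel sits far below it, which is the signature both groups extract from their images.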

Isn’t that awfully tricky? I mean, lots of things could happen to make the number of atoms in a given pixel change. That’s right. In order for this measurement to succeed, they need to have absolutely everything locked down as well as possible– laser intensity, atom loading, cooling efficiency, etc. It’s a major technical challenge, but these papers are from two of the very best experimental groups in AMO physics (Wolfgang Ketterle’s group at MIT, and Tilman Esslinger’s group in Zurich). And that’s why this gets into Physical Review Letters.

So this method works well? Yep. They get fluctuation distributions that match exactly with what you would expect from the Fermi distribution, and they can work backwards from the distribution to obtain the temperature, which agrees very well with their estimates by other methods. In principle, at least, this can be used to accurately measure lower temperatures than any other method.

That’s pretty cool. Isn’t the fact that they can directly observe these fluctuations at all pretty awesome by itself, though? Yes, yes it is. Quantum mechanics is cool and weird, and this gives you a direct look at one of the coolest and weirdest aspects of the whole thing.

Westbrook, C. (2010). Suppressed fluctuations in Fermi gases. Physics, 3, 59. DOI: 10.1103/Physics.3.59

Müller, T., Zimmermann, B., Meineke, J., Brantut, J., Esslinger, T., & Moritz, H. (2010). Local Observation of Antibunching in a Trapped Fermi Gas. Physical Review Letters, 105 (4), 040401. DOI: 10.1103/PhysRevLett.105.040401

Sanner, C., Su, E., Keshet, A., Gommers, R., Shin, Y., Huang, W., & Ketterle, W. (2010). Suppression of Density Fluctuations in a Quantum Degenerate Fermi Gas. Physical Review Letters, 105 (4), 040402. DOI: 10.1103/PhysRevLett.105.040402

Comments

  1. #1 Grep Agni
    July 27, 2010

    I can’t see the full papers without paying for them, and in any case I have always found PRL papers completely opaque.

    Could you explain a bit more about how a picture is taken? For example, does taking the picture disrupt the gas by adding energy to the system? If the experimenters used laser trapping to cool the atoms, I suppose it might be possible to use the trapping lasers as a probe, but I have no idea how that would work (especially since AIUI the laser frequency is chosen specifically NOT to interact with the atoms of interest).

  2. #2 Sophie
    July 27, 2010

    Note you can access the PDFs of the PRL articles for free if you do so via “Physics” website. Click on “accompanying Viewpoint in Physics” in the first paragraph of the blog and then click the Download PDF link beneath the articles’ titles at the top of the Viewpoint article.

  3. #3 agm
    July 28, 2010

    Chad, thank you for writing these up. And for the Research Blogging in general.