# How Do You Make Negative Temperatures, Anyway?

Last week’s post talked about the general idea of negative temperature, with reference to this much-talked-about Science paper (which also comes in a free arxiv version from which the figures used here are taken). I didn’t go into the details of how they made a negative temperature gas, though, and as it’s both very clever and hard to follow, I figure that deserves a post of its own.

Right, so last time you said that negative temperature just means you’re more likely to find fast-moving atoms than slow ones, so all they need to do is whack these atoms in the right way? Right? No, it’s more complicated than that. You can’t just do something that excites all the atoms to higher velocities, because that’s just the gas-in-an-airplane situation we talked about last time. If you just make all the atoms move faster, you’ve just shifted your thermal distribution upward without changing its width, which doesn’t do anything to the temperature.
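To see that a uniform boost leaves the temperature alone, here's a quick numpy sketch (the velocity numbers are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Thermal (Maxwell-Boltzmann) 1-D velocity sample: the width is what
# encodes the temperature. 100 m/s is an arbitrary illustrative width.
v = rng.normal(loc=0.0, scale=100.0, size=100_000)  # m/s

# "Gas in an airplane": boost every atom by the same velocity.
v_boosted = v + 250.0  # m/s

# Temperature tracks the *spread* of velocities, not the mean, so the
# boost moves the whole distribution without changing its width.
print(np.mean(v), np.std(v))
print(np.mean(v_boosted), np.std(v_boosted))
```

The standard deviations come out identical; only the mean has moved, which is exactly the "shifted thermal distribution" in the text.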

What you need is a way to make the probability of finding fast-moving atoms increase exponentially in a way that mirrors the usual probability distribution. But as I said at the end of the last post, if you’re going to do that, you need a maximum possible energy, so free atoms aren’t going to get the job done.

Why is that? Well, if you’re thinking about it in terms of the probability of finding atoms at different speeds, a system where the energy can increase without limit has the enormous practical problem that the probability of finding an atom would need to keep growing as the energy goes up, without bound, so the probabilities could never add up to one. You need to introduce some sort of cut-off to make the system work.
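You can see the problem directly from the Boltzmann factor exp(-E/kT): with a negative kT, the weight grows with energy instead of shrinking. A toy sketch, with energies in arbitrary units:

```python
import numpy as np

kT = -1.0  # a negative temperature, in energy units

# Boltzmann weight exp(-E/kT): with kT < 0 this GROWS with energy.
E = np.linspace(0.0, 50.0, 501)
w = np.exp(-E / kT)

# Without a maximum energy, the normalization sum keeps growing as we
# admit higher-energy states -- the distribution can't be normalized.
partial = np.cumsum(w)
print(partial[100], partial[300], partial[500])  # blows up

# With a cap E_max, the sum is finite and a sensible (inverted)
# probability distribution exists.
E_max = 10.0
w_capped = w[E <= E_max]
p = w_capped / w_capped.sum()
print(p[-1] > p[0])  # True: the highest energies are the most probable
```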

Dropping the word “practical” in there suggests there’s some more fundamental problem that we haven’t talked about. Well, yeah. As a number of other good explanations have described (see Matthew Francis and the sci.physics FAQ; h/t Bryan Feir in comments to my earlier post), you can understand all this in terms of entropy. Negative temperature means that entropy decreases as energy increases, and the only way that can happen is if there’s a maximum possible energy. I don’t find the concept of entropy quite as intuitive as thinking about distributions of moving atoms, though, so I didn’t go with that explanation.

So, how do you put a cap on the energy of a bunch of moving atoms? Deputize Maxwell’s Demon to hand out speeding tickets? That’s an amusing mental image, but not what they did here. The trick they used for this experiment was to put the atoms in an “optical lattice,” which is a regular pattern of bright and dark spots created by interfering multiple laser beams. For reasons that aren’t worth going into, the atoms have a lower energy when they’re in the dark spots, which creates a periodic array of sites where the atoms would like to be, separated by regions where they’d rather not be. This is very much like the situation experienced by electrons in a solid, which have lower energy when they’re near the atoms in the solid lattice, and higher energy in between atoms.

This analogy, where atoms in an optical lattice behave like electrons in a solid, makes these systems very interesting to condensed matter physicists, and you can find lots of posts talking about these sorts of systems in the condensed matter category for this blog.

And how does this give you a maximum energy? It has to do with the “band structure” of the lattice, which is what happens to quantum mechanics when you stick lots and lots of atoms together. If you have a single atom, an electron can be bound to that atom in one of a large number of discrete energy states, but will never be found with an energy in between those states. If you bring two atoms close together, an electron could be bound to either of them, and thus the rules of quantum physics tell you that it must be split between them. It turns out that there are two different ways to share an electron between them, with a very small difference in their energies. Bring in a third atom, and you’ve got three states, separated by a tiny difference in energy, and so on.

As the number of atoms increases, it becomes impractical to keep track of all the individual states, so the overall effect is to spread out those initial discrete states into broad bands of allowed energies. Rather than saying that an electron bound to an atom can have an energy of exactly -1.2 electron volts, you end up with a system where an electron in a solid containing a large number of atoms can have any energy between -1.0 and -1.4 electron volts. In an extremely narrow technical sense, there are really a bazillion discrete individual states corresponding to different allowed energies, but it’s a distinction that makes no difference, unless you’re Brian Cox trying to sell a book.
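The crowding of discrete levels into a band is easy to see in the standard tight-binding toy model, where you diagonalize a chain of sites coupled by a hopping term (a generic textbook model, not the paper's specific system):

```python
import numpy as np

def site_energies(n_sites, hopping=1.0):
    """Eigen-energies of a 1-D chain of n_sites coupled by
    nearest-neighbor hopping. Textbook tight-binding toy model."""
    H = np.zeros((n_sites, n_sites))
    for i in range(n_sites - 1):
        H[i, i + 1] = H[i + 1, i] = -hopping
    return np.linalg.eigvalsh(H)

for n in (1, 2, 3, 50):
    E = site_energies(n)
    print(n, "levels from", E.min(), "to", E.max())
# 1 site: a single level.
# 2 sites: two levels split by the hopping.
# Many sites: the levels crowd into a band spanning roughly -2J to +2J.
```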

And what does this have to do with negative temperature? Well, the atoms in the optical lattice behave rather like electrons in a solid, so their energies are properly described by band structure. Which means that there’s a maximum energy they can have while still remaining within a given band, then a big gap in energy before the next band. And these atoms are coming out of a Bose-Einstein Condensate, so they’re at an extremely low initial temperature, which means they’re essentially all within the single lowest-energy band of the optical lattice.

In that scenario, if you cock your head sideways like a puzzled dog and squint, this looks like a system of atoms that have a maximum allowed energy associated with their motion. The energy states within the band resemble the states of atoms moving through space, because quantum-mechanical tunneling lets them move from one lattice site to another, and thus move through the lattice. States with higher energy correspond to atoms that move through the lattice faster, up to a point. When they reach the upper end of the band, though, there’s no way for the atoms to increase their energy by just a little bit– they would need a big jump to get to the next band, and there’s no way to get that energy.
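In that tight-binding picture, the lowest band has a simple cosine shape as a function of quasimomentum, which makes the energy cap explicit (again, a generic sketch rather than the experiment's actual band structure):

```python
import numpy as np

J = 1.0  # tunneling energy between neighboring lattice sites

# Quasimomentum across the first Brillouin zone (in units of 1/a):
q = np.linspace(-np.pi, np.pi, 201)

# Lowest-band dispersion in the tight-binding approximation:
E = -2 * J * np.cos(q)

print(E.min(), E.max())  # bounded between -2J and +2J
# An atom at the top of the band (E = +2J) cannot gain "a little more"
# kinetic energy; the next allowed states sit a full band gap higher.
```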

That’s what they’re showing in the upper part (part B) of the “featured image” above, which I’ll reproduce here for those reading via RSS:

Part of Figure 1 from the arxiv preprint of the paper discussed in the post.

Those little cartoons of squiggly lines represent the lattice, and the green balls are the atoms inside. The lattice doesn’t just go up and down uniformly in energy, because there’s also a trap which holds the atoms in a specific region of space; the trap superimposes the sort of “bowl” shape you see, raising the energy of some sites relative to others.

So, you put the atoms into this lattice, and that gives you a maximum energy, and thus a negative temperature. It’s not quite that simple. Making a negative temperature would mean that most of the atoms in the sample have high energies, corresponding to relatively rapid motion, and thus they would all fly away, and your “negative temperature” sample wouldn’t stick around long enough to be interesting. To make something useful, you need to keep them around.

So, how do you do that? Well, you take advantage of the fact that the atoms interact with each other. If you have two atoms in the same lattice site, that interaction will either increase or decrease the energy of that pair of atoms. If it increases the energy, then, loosely speaking, the atoms repel each other; if it decreases the energy, they attract each other. This interaction can be tuned from repulsive to attractive by applying a magnetic field, for reasons that don’t really matter here (the term to Google is “Feshbach resonance” if you want to know more). If you arrange the right sort of attractive interaction, the atoms’ mutual attraction will counteract their tendency to fly apart, keeping the negative-temperature sample stable.
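In Bose-Hubbard language (the standard model for atoms in a lattice, used here purely as a sketch), the energy cost of putting n atoms on one site is U n(n-1)/2, and the sign of U is exactly this repulsive-versus-attractive knob:

```python
def onsite_energy(n_atoms, U):
    """Bose-Hubbard on-site interaction energy U * n * (n - 1) / 2.
    Standard textbook form; a sketch, not the paper's full Hamiltonian."""
    return U * n_atoms * (n_atoms - 1) / 2

# Repulsive (U > 0): putting two atoms on one site costs energy.
print(onsite_energy(2, U=+1.0))  # positive -- atoms avoid sharing a site

# Attractive (U < 0): sharing a site lowers the energy.
print(onsite_energy(2, U=-1.0))  # negative -- atoms prefer to clump

# A single atom per site pays no interaction energy either way.
print(onsite_energy(1, U=5.0))
```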

That’s pretty clever. So, they just dial in this attractive interaction, and presto, negative temperature? If only it were that simple. In fact, there are two additional problems, both related to the fact that atoms with an attractive interaction want to clump together. When you have these atoms in the lattice, their lowest-energy configuration is for all the atoms to be in a single site. But when they do that, they also realize that they’d be happier bound together in a solid than existing in the gas phase, and the whole thing implodes.

So, how do you fix that? By inverting your trap, to artificially increase the energy of the central sites of the lattice– basically, you go from the upward-facing “bowl” in the left side of the figure to the downward-facing bowl on the right side. That tends to push the atoms outward, balancing their attraction.

OK, so that fixes the clumping problem. What’s the other problem? Well, if you have atoms that want to clump together, you can’t make a large Bose-Einstein Condensate to load into your lattice. And if you try to use an inverted bowl trap to stop them from imploding, you can’t trap anything for very long, which again means you don’t have any atoms to start with.

But there’s a way to fix this? Yep, which exploits another idea from condensed matter physics. What you do is make a large BEC using atoms with a repulsive interaction in a normal trap. Then you load them into an optical lattice, but you crank the lattice up really high, much higher than you need to make a negative temperature. When you do that, you create a situation where the atoms can’t easily move from one site to another, and where there’s a significant energy cost to having two atoms in the same lattice site. If you’re really careful about how you prepare your sample, this produces a “Mott insulator” phase, which is a term from condensed matter physics that here means you have one and only one atom at each lattice site.

And this helps you, how? Well, the Mott insulator phase occurs in both the attractive and repulsive interaction cases, provided you choose your parameters carefully. So, it allows you to get the best of both worlds– you can use repulsive interactions while making your condensate, which lets you collect a large number of ultra-cold atoms. Then you can put them into a Mott insulator phase by cranking up the power of your lattice laser beams. Once they’re in the Mott insulator, you switch the magnetic field to change the interactions from repulsive to attractive, and invert your trap, then reduce the lattice laser power. If all goes well, this gives you a situation where the atoms want to be in a stable negative temperature state– they want to have energies close to the maximum for the lowest energy band, but they also want to stick together, because of their attractive interactions.
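The reason cranking up the lattice freezes the atoms in place is that the tunneling energy J falls off exponentially with the lattice depth. Here's a sketch using a standard deep-lattice estimate from the cold-atom literature; the formula is approximate, and the numbers are not the paper's:

```python
import numpy as np

def tunneling_J(s):
    """Approximate tunneling energy J (in units of the recoil energy)
    for a 1-D lattice of depth s recoil energies. A standard deep-lattice
    estimate from the cold-atom literature; approximate only."""
    return (4 / np.sqrt(np.pi)) * s**0.75 * np.exp(-2 * np.sqrt(s))

# Making the lattice deeper kills tunneling exponentially, which is what
# locks the atoms into the one-atom-per-site Mott insulator:
for s in (5, 10, 20):
    print(s, tunneling_J(s))
```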

That sounds… really complicated. Yeah, well, that’s why it’s in Science. It’s a very involved process, but then, you wouldn’t expect making a negative temperature to be easy.

It’s probably the most confusing point of the paper, though, because they rush through it very quickly, lightly tossing off a Mott insulator reference in a way that’s utterly baffling unless you’ve followed their work closely. Which I have, because I used to do very similar experiments, back when I was a post-doc– nothing this complicated, but I’ve followed the field ever since, and thus was able to piece together what was going on.

So, how do they know that they’ve made this negative temperature state, anyway? It’s actually surprisingly simple: they prepare the state, then they shut off the lattice and the trap, and let the cloud of atoms expand. After a long-ish time (milliseconds, which is a long time when you’re talking atomic physics), a picture of the expanding cloud looks like a picture of the momentum distribution– atoms that are moving slowly (low momentum, low energy) cluster near the center of the cloud, while atoms that are moving quickly (high momentum, high energy) are toward the outside.
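A back-of-the-envelope for that time-of-flight mapping, with illustrative numbers rather than the paper's: after the trap shuts off, each atom flies ballistically, so its final position is just (momentum / mass) times the expansion time.

```python
# Time-of-flight expansion: position maps onto momentum via x = (p/m) * t.
# All numbers below are rough and illustrative, not taken from the paper.
m = 6.5e-26        # mass of a potassium-39 atom, kg (approximate)
t = 7e-3           # expansion time, seconds (the "milliseconds" scale)
hbar = 1.055e-34   # J*s

p = hbar * 1e7     # a lattice-scale momentum, hbar*k with k ~ 1e7 per meter
x = (p / m) * t
print(x)           # of order a tenth of a millimeter -- easily imaged
```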

That’s what’s going on in the lower part of the figure above, with the giant 3-D peaks. On the left, when they have repulsive interactions, they see a big lump of atoms in the center of the cloud (a big spike in the 3-D density plot). When they switch to the attractive interactions case, where they expect a negative temperature, that central peak goes away, replaced by four peaks at the corners of a box drawn around the center of the cloud. (This box is the “Brillouin Zone,” pronounced something like “Bree-won,” if you want to wow people with your fancy physics words, but the details aren’t that important.) You can see it a little more clearly in their Figure 3:

Figure 3 from the arxiv paper discussed in the text. The graph shows the probability of finding atoms at a given energy; the insets show the density profiles corresponding to positive (blue) and negative (red) temperature clouds of atoms.

The insets here show both the real data and the theoretical fit for the density of a cloud with positive temperature (blue outline) and a cloud with negative temperature (red outline). These are the two-dimensional data that gets turned into the eye-catching three-dimensional plot in the figure above.

The graph shows, roughly speaking, the probability of finding an atom at a given energy within the band. The blue points are for a positive temperature sample, where as you would expect, the low-energy states are more likely to be occupied, while the negative temperature sample has the atoms mostly in high-energy states.
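You can reproduce the qualitative shape of both curves with nothing but Boltzmann factors over a bounded band; this is a cartoon of the idea behind the figure, not a fit to the data:

```python
import numpy as np

def band_occupation(kT, J=1.0, n_q=201):
    """Boltzmann occupation of a bounded tight-binding band.
    A cartoon of the physics in the figure, not a fit to the data."""
    q = np.linspace(-np.pi, np.pi, n_q)
    E = -2 * J * np.cos(q)       # energies bounded in [-2J, +2J]
    w = np.exp(-E / kT)
    return E, w / w.sum()

E, p_pos = band_occupation(kT=+1.0)  # positive temperature
_, p_neg = band_occupation(kT=-1.0)  # negative temperature

# Positive T: the weight piles up at the bottom of the band.
print(p_pos[np.argmin(E)] > p_pos[np.argmax(E)])  # True
# Negative T: the weight piles up at the top instead.
print(p_neg[np.argmax(E)] > p_neg[np.argmin(E)])  # True
```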

Why does the– I have no idea why they’ve plotted kinetic energy as negative here. I assume they’re measuring the energy relative to the center of the band for some reason, but it’s not obvious why they did that, and there’s only so much time I can spend parsing these things, OK?

OK, I guess. So, that convincingly looks like a negative temperature sample. What’s this good for, again? Well, right at the moment, it’s good for getting an article in Science. This is basically a proof-of-principle kind of experiment– a very impressive proof of a very cool principle. They don’t do anything with it other than show that they made a negative-temperature gas, and that it’s reasonably stable– figure 4, which I won’t reproduce here, shows that they can keep these things around for half a second or so, which is pretty impressive in its own right.

As for applications, in the Abstract of the arxiv version, they tease the reader with the idea of Carnot engines with efficiency above 100%; somebody at Science must’ve balked at that, because in the published article, it’s moved to a citation in the introductory material. In theory, though, having a negative-temperature reservoir to incorporate in your heat engine would allow you to get more work out than you ought to. Given that they’re working with about 100,000 potassium atoms at a few billionths of a degree above absolute zero, though, this is highly unlikely to revolutionize our use of energy.
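The “above 100%” tease is just the Carnot formula taken at face value with a negative hot-reservoir temperature (illustrative numbers only):

```python
# Carnot efficiency: eta = 1 - T_cold / T_hot. With a negative-temperature
# "hot" reservoir, the ratio goes negative and eta formally exceeds 1.
# The temperatures below are illustrative, not the experiment's.
T_cold = 300.0   # kelvin
T_hot = -300.0   # a negative-temperature reservoir

eta = 1 - T_cold / T_hot
print(eta)  # 2.0 -- formally "200% efficient"
```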

They also tease a possible cosmology analogue in the conclusion, because if you cock your head sideways and squint, the combination of the tendency to collapse due to attractive interactions and the balancing outward push due to the negative temperature sort of looks like a dark energy kind of scenario. Maybe.

In a more plausible vein, this is a fairly unique system in terms of both thermodynamics and condensed matter physics, and the kind of thing that physicists love to play with. It lets you look at a range of parameters that are easy to study on a computer, but very difficult to reach experimentally. Which is the kind of thing that makes theorists’ eyes light up. It’ll probably generate a lot of discussion and toy models; what effect this will ultimately have on condensed matter physics is harder to predict, but I’m sure it will keep people occupied for a while.

Braun, S., Ronzheimer, J., Schreiber, M., Hodgman, S., Rom, T., Bloch, I., & Schneider, U. (2013). Negative Absolute Temperature for Motional Degrees of Freedom. Science, 339(6115), 52-55. DOI: 10.1126/science.1227831
