I'm an experimentalist through and through, and have always known better than to attempt real theory. On two occasions, though, I've been forced to do a little bit of computer simulation work in order to interpret my results. One of these was for the time-resolved collisions experiment, and worked out well. The other was when I was a post-doc, and was... less useful.
The situation we were dealing with in my post-doc work was a Bose-Einstein condensate of rubidium that we chopped into several pieces with an optical lattice. Whenever you do this, there is necessarily some uncertainty in the number of atoms in each of the pieces, and quantum mechanics lets you relate that to the uncertainty in the phase of the wavefunction corresponding to each piece. By adjusting the height of the lattice and the density of the BEC, you can manipulate these uncertainties, which lets you explore all sorts of nifty phenomena.
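The number-phase tradeoff here is just the usual uncertainty relation, ΔN·Δφ ≳ 1/2. As a minimal numerical sketch (not anything from the actual experiment), here's the scaling for a coherent state, where the number fluctuations are Poissonian and the relation is saturated:

```python
import math

def coherent_state_uncertainties(n_mean):
    """For a coherent (Poissonian) state with mean atom number n_mean,
    the number uncertainty is sqrt(n_mean), and the phase uncertainty
    saturates the number-phase relation dN * dphi ~ 1/2."""
    d_n = math.sqrt(n_mean)
    d_phi = 1.0 / (2.0 * d_n)
    return d_n, d_phi

# Bigger pieces have larger absolute number fluctuations, but a
# sharper phase; squeezing the number (raising the lattice) smears
# the phase out, and vice versa.
for n in (10, 100, 1000):
    d_n, d_phi = coherent_state_uncertainties(n)
    print(f"N = {n:5d}: dN = {d_n:6.2f}, dphi = {d_phi:.4f}")
```

Raising the lattice suppresses tunneling and squeezes ΔN below this Poissonian value, at the cost of a correspondingly larger Δφ.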
What I was trying to simulate was the dynamic behavior of this system after a sudden change in the lattice potential. The steady-state case is relatively easy, but looking at changes in the distribution turns out to be a nightmare, at least in the realistic case. There are well-established techniques for solving this problem in the case of two pieces, or for a large number of pieces provided that each piece contains a large number of atoms, but neither of those applies. In our experiments, we had a dozen or so pieces, with a large variation in the number from the center of the condensate out to the edge, and that's a really hard problem. I never made much headway, and when the grad student on the project finally did get it worked out, they ended up using a technique I'd never heard of before (the "Truncated Wigner Approximation," whatever that is).
I was reminded of all this this morning by Doug Natelson's report on his recent Nature paper.
The work in question has to do with the behavior of junctions between nanometer-sized pieces of nickel, which turn out not to behave like big pieces of nickel:
The surprising result in our case is that we see indications of this Kondo process in atomic-scale junctions between chemically homogeneous (e.g., all the atoms are Ni) ferromagnetic metals. The data are pretty clear, and indicate that this spin-related process competes with ordinary ferromagnetic exchange in these nanostructures. It would appear, from accompanying theory calculations by our coauthors, that the very act of whittling the ferromagnetic metal down to the atomic-scale junction is enough to mess with the electronic properties of the metal that we'd ordinarily consider to be intrinsic.
As I remarked not too long ago, dealing with lots of atoms is difficult, and requires a lot of abstract and hard-to-explain theoretical apparatus. Doug's post and paper are a nice reminder, though, that for all the technical heavy lifting, the results of condensed matter theory are incredibly good, and work to describe a huge range of phenomena.
What's really difficult is the case where you have too many particles for single-particle techniques to be useful, but not enough for the large-number techniques to work. Numbers between 3 and 10^15, say.
It's kind of an inverse Goldilocks story-- the "just right" case is one of the extremes, not the case in the middle.
I happen to be currently doing some truncated Wigner calculations for Bose-Einstein condensates in lattices. If you like I can write a blog post on the technique at the weekend. Essentially what you do is put noise on the system by adding Bogoliubov modes (excitations) onto the ground state, with amplitudes sampled from the relevant distribution, reproducing the correct quantum statistics.
The name truncated Wigner comes about because the calculations are done in the Wigner representation (i.e., the field operator is represented by a quasiprobability distribution over phase space), and the equations of motion are truncated at low order (which makes the calculation possible).
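The sampling step described above can be sketched in a toy single-mode case. This is only an illustration of the initial-noise idea, not the full method (no dynamics, no Bogoliubov modes): for a coherent state, the Wigner function is a Gaussian with half a quantum of vacuum noise per quadrature, and averages over samples reproduce symmetrically ordered expectation values.

```python
import random

def sample_wigner_coherent(alpha0, n_traj, rng):
    """Draw phase-space samples from the Wigner function of a coherent
    state |alpha0>: a Gaussian centered on the mean field, with
    variance 1/4 (std dev 1/2) in each quadrature -- the "half a
    quantum" of vacuum noise added to the classical field."""
    return [complex(alpha0.real + rng.gauss(0.0, 0.5),
                    alpha0.imag + rng.gauss(0.0, 0.5))
            for _ in range(n_traj)]

rng = random.Random(1)
alpha0 = complex(2.0, 0.0)  # mean field amplitude, <n> = |alpha0|^2 = 4
samples = sample_wigner_coherent(alpha0, 100_000, rng)

# Wigner-function averages are symmetrically ordered, so the sampled
# <|alpha|^2> converges to <n> + 1/2, not <n>.
mean_abs2 = sum(abs(a) ** 2 for a in samples) / len(samples)
print(f"<|alpha|^2>_W ~ {mean_abs2:.3f} (symmetric ordering gives "
      f"{abs(alpha0)**2 + 0.5})")
```

In the real method, each sampled field is then evolved under the classical (Gross-Pitaevskii-like) equations, and quantum statistics are read off from averages over the ensemble of trajectories.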
I hope your ignorance of the technique is feigned since your name is on the paper ;-)