I spent an hour or so on Skype with a former student on Tuesday, talking about how physics is done in the CMS collaboration at the Large Hadron Collider. It's always fascinating to get a look at a completely different way of doing science-- as I said when I explained my questions, the longest author list in my publication history doesn't break double digits. (I thought there was a conference proceedings with my name on it that got up to 11 authors, but the longest list ADS shows is only eight). It was a really interesting conversation, as was my other Skype interview with a CMS physicist.
Toward the end of our talk, he asked a really good question that never would've occurred to me to ask, namely: "Why do you AMO guys do experiments in lots of little labs? Why not just build one really amazing Bose-Einstein Condensate machine, and have people bring their detection systems there to do their experiments?"
It's an interesting question, because the "user facility" model is very common in a lot of other branches of physical science. Astronomers, after all, do most of their work by applying for telescope time on a smallish set of large instruments, and a lot of condensed matter work is done with giant magnets or synchrotron facilities where people prepare samples and bring them to a large instrument to be tested. And, of course, particle and nuclear physics are done at a handful of huge accelerator facilities.
There are advantages to the user facility model, of course. You can have full-time technicians to keep everything operating at peak performance, and not suffer the regular setbacks that small labs do when the senior student graduates and moves on, or the post-doc who wrote the LabVIEW code to automate everything takes a permanent job back in his home country. If something breaks, you can have a dedicated team on hand to repair it quickly, rather than waiting for a new batch of students to figure out the fix on the fly.
But then, as I pointed out, there are disadvantages as well, as you can readily see in particle physics these days. The LHC is currently down for a year and a half of upgrades, and during that time, nobody is getting any new data at the high-energy frontier of particle physics. (They have terabytes' worth of old data to sift through, of course, but if somebody has a brilliant idea about a new way to detect something, there's nothing they can do about it.) Similarly, if your small experiment that plans to use the particle beam from some accelerator or reactor breaks just before your scheduled beam time, well, try again in six months or a year, assuming they give you another block of time. And if your observing run at a big telescope happens to coincide with a stretch of bad weather, ditto.
In a small-scale experiment, where everything fits in one lab, a single equipment failure sets you back a few days, maybe. In the user facility model, it can knock you out for months. If Wolfgang Ketterle's labs flood, they're out of commission for weeks or months, but Eric Cornell and Bill Phillips and Debbie Jin and Immanuel Bloch and Markus Greiner and a host of others are still up and running and science marches on; if the LHC goes down, nobody in the international particle physics community gets new data until it's fixed. That has a big impact on the careers and psychology of the people working in the field-- I suspect I'd go nuts if I had to work in a user facility model where I didn't have full control over my own experiment.
There are also advantages to the small science model. A greater diversity of labs allows for cleaner replication and extension of results. The Higgs boson was confirmed thanks to detection in two different experiments, but those are really just two different detectors on the same particle accelerator. Nobody would seriously claim that what they see is some odd effect peculiar to Switzerland, but on a philosophical level, having major discoveries depend on a single apparatus is a little problematic. Small science allows for the replication of important results in lots of little experiments in labs at all sorts of institutions. We're absolutely certain that BEC was achieved in 1995 by Cornell and Wieman because within a couple of years there were a dozen other labs that also produced BEC.
The huge diversity of labs also allows a wider range of types of experiments, each optimized for a particular kind of physics. If you want a good, dependable source of ultra-cold atoms, rubidium is unquestionably the way to go-- "God's atom," according to a joke that probably originates with Eric Cornell. But there are good reasons to go with other atoms, if you want to study other kinds of phenomena. Lithium has very different collisional properties and a fermionic isotope; strontium has several stable isotopes of both quantum statistical characters, and can be laser-cooled to much lower temperatures; dysprosium offers a gigantic spin angular momentum for the study of dipolar interactions; and so on. You can do a little bit of everything with rubidium, but other elements offer options to do specific experiments really well.
And then there are further optimizations around what kind of technique you want to use. If you want a huge number of atoms, you probably want to use a magnetic trap, but that limits the range of states you can study, and the rate at which you can repeat experiments. If you want to study different magnetic states, you want an optical trap, but that generally limits you to smaller numbers of atoms. If you want to study collisional interactions, you need to apply big and uniform magnetic fields, but that tends to restrict the optical access to the system; if you want to look at optical lattices, you need a system with a lot of room to get extra laser beams in, but that limits your ability to tune the collisional properties. If you want single-site imaging, you need a large-aperture lens right up next to the atoms, but that limits what kind of trap you can make. If you want to look at low-dimensional physics you need a different kind of trap than if you want to look at bulk superfluid properties, and on and on. There isn't a single way to make BEC that's optimal for everything; instead, there's a host of different ways of making and studying the condensates, each with its own advantages and disadvantages.
And, of course, making a BEC is only the first step. Most of the work in ultra-cold atom physics comes after the condensate is created-- each experiment involves an intricate sequence of additional magnetic fields, laser pulses, microwave pulses, changes in trap parameters, etc. It's not just a matter of bringing in a new detection system (the vast majority of BEC experiments use fairly similar detection technology, based on imaging of the distribution of atoms in or from the condensate); it's a whole room full of extra apparatus.
Now, there are parallels to this in the particle physics community-- the LHC collides lead ions for a few weeks every year to look at a different regime of physics, and there are experiments like LHCb that are carefully tailored to studying a very specific range of things. But there's a fundamental difference in the distribution of experimental effort between high-energy physics and AMO physics. High-energy physics is interested in, well, high energy-- the main concern is to slam particles together with the greatest energy possible, and track everything that comes flying out at high speed. Cold-atom AMO physics deals with particles that are stable and will stick around for a long time, so it is much more concerned with poking and probing and manipulating the states of the products. There's much less standardization, for lack of a better word.
Ultimately, though, the reason for doing small science the way we do comes down to resources: small science is cheap enough that there's no reason not to run a ton of little labs. A really high-end BEC experiment might run you ten million dollars; another LHC will cost you ten billion. For the money it takes to build a single top-of-the-line high-energy facility, you can set up a thousand top-of-the-line AMO labs, each optimized for its own particular corner of cold-atom physics. The cost of small science, even with the small inefficiencies of the local lab model, is just so much lower than big science that it makes sense to have a broad, diverse set of little experiments rather than trying to concentrate things in a smaller number of bigger facilities.
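To spell out the back-of-the-envelope arithmetic behind that "thousand labs" claim (using the round, order-of-magnitude figures above, not actual budget lines):

\[
\frac{\text{cost of one LHC-class facility}}{\text{cost of one high-end BEC lab}} \approx \frac{\$10\ \text{billion}}{\$10\ \text{million}} = \frac{10^{10}}{10^{7}} = 1000 .
\]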
(The "featured image" up top is the title slide from an overview talk I gave at DAMOP a couple of years ago. Ultracold atoms are only one piece of that, but I'm too lazy to piece together another graphic for this post, so this will suffice.)
"Why do you AMO guys do experiments in lots of little labs?" - because you can!
That is a rather worrying question. My instinctive reaction is that the primary, if not only, reason for massive shared facilities is necessity: people build them because they must.
There's also a bureaucratic/organizational tendency for all efforts to grow. In space astronomy it's hard to get funding for the "small" (~$200M) Explorer satellites relative to a single blockbuster "Flagship" mission, such as JWST. And those missions must fly and be successful, so they tend to gobble up any other funds around. Their unified goal helps them advocate, while the diverse community wanting small missions has a hard time coming together and being heard.
Not all of astronomy requires big facilities. There are a ton of telescopes out there working every night, and there is some really interesting stuff coming out of them (looking for things that go kaboom in the night, looking for stars flickering because of planets passing in front of them, etc.)
When you look at scientific impact, though, big projects seem to have a bigger impact per dollar. The Sloan Digital Sky Survey has a huge number of citations because it was a large data set (though a small telescope by modern standards) that everyone could use.
Martin @2: There is also the minor detail that your hundreds of person-years and millions of dollars of effort can be all for nought due to some factor beyond your control, like a first stage motor anomaly (been there, done that) or some critical system failing before you can collect your science data (ditto). And in this era of constrained budgets, there is no guarantee that you will get another chance.
This is why you should not assign Ph.D. projects that depend on data from spacecraft which haven't been launched yet, or if you do, have a backup plan ready to deploy in case of launch failure. It's less of a problem for ground-based user facilities because you are more likely to get another shot at it, but the problem still exists in these cases.
I tend to agree that big projects are user facilities mostly because there is no way for individual researchers or groups to get the funding needed for things like accelerators or large telescopes. Interestingly, with AMO physics looking to move into space, researchers are actually pursuing the same user facility ideas, e.g. with the Cold Atom Laboratory (http://coldatomlab.jpl.nasa.gov/) on the ISS.
@Steinn #1. Martin's comment about small projects tending to grow is an important one, and the transition can be extremely painful for the participants used to a different model. I'm currently part of the SuperCDMS dark matter search.
The original CDMS experiment was really small (a couple of dozen people, I think), CDMS-II is around 70, and SuperCDMS is looking to end up at 100 or so. We are in the process of formalizing and bureaucratizing a lot of our processes, and abandoning the informal collegiality the experiment used to have. That's really hard, and even though the collaboration agrees that it's necessary, we don't all like it.
Why is it necessary? Because we're building a huge, complicated, and extremely expensive experiment: A couple of hundred pure, single-crystal germanium hockey pucks, instrumented for cryogenic readout deep inside a six-stage cryostat (which goes from room temperature down to 40 mK), all surrounded by 15 to 20 tons of shielding. Oh, and that whole thing will be installed 2 km down a mine in Ontario.
We have to have bureaucracy and structure to make sure the project gets built; to make sure all the software is developed, maintained, and usable by all our collaborators; to make sure the papers written by small analysis groups are properly vetted and approved by the whole author list _before_ they escape into the wild; and so on.
These are the same early stages that particle physics went through back in the 1970s. There are still lots of fixed-target experiments with single-digit groups. But for collider experiments, the size, material, and readout channel requirements all scale as some power of the energy, and the number of people you need to create such devices scales the same way.
Reminds me of this essay from Freeman Dyson's "From Eros to Gaia":
http://www.dynamist.com/tfaie/bibliographyArticles/dyson2.html