One of my current thesis students has been plugging away for a while at the project described in the "A Week in the Lab" series last year, and he's recently been getting some pretty good data. I've spent a little time analyzing the preliminary results (to determine the best method for him to use on the rest of the data), and I thought I'd explain a little of the process here.
Here's the key graph from the first set of results:
What we're working with here is a system where we feed krypton gas into a vacuum system, and illuminate it with light from two sources: a really expensive ($7000) vacuum ultraviolet lamp at 123 nm, and infrared light from a diode laser at 819 nm. The combination of those two wavelengths should excite some of the krypton atoms into a "metastable state," that is, a particular atomic energy level with an extremely long lifetime. To detect the metastable atoms, we shine in another laser at 811 nm, a wavelength that only atoms in the metastable state will absorb, and look for the fluorescence as the excited atoms re-emit 811 nm photons. The graph above shows the amount of fluorescence as a function of the pressure of the gas in the source region, which is the key measure of how well we're doing: the pressure tells us how much gas we're putting into the system, and the fluorescence tells us how many metastables we've made.
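If you want to put rough numbers on what those two photons are doing, the energies follow from E = hc/λ. Here's a quick back-of-the-envelope sketch in Python-- just standard constants and the wavelengths quoted above, nothing specific to the actual krypton level scheme:

```python
# Back-of-the-envelope photon energies for the two excitation wavelengths.
# Standard physical constants only; nothing here is specific to krypton.
h = 6.626e-34    # Planck's constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electron volt

def photon_energy_eV(wavelength_nm):
    """Photon energy E = hc/lambda, converted to electron volts."""
    return h * c / (wavelength_nm * 1e-9) / eV

e_vuv = photon_energy_eV(123.0)   # vacuum ultraviolet lamp
e_ir = photon_energy_eV(819.0)    # infrared diode laser
print(f"VUV photon:            {e_vuv:.2f} eV")
print(f"IR photon:             {e_ir:.2f} eV")
print(f"Both photons together: {e_vuv + e_ir:.2f} eV")
```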
Of course, getting to that graph takes a bit of effort...
The raw data files we acquire look like this:
What we see here is a plot of data saved from a fancy digital oscilloscope, showing the signal from a photo-multiplier tube (PMT) as a function of time. The PMT picks up light emanating from the region where we're making the metastable atoms, and puts out a voltage proportional to the amount of light. The particular tube we're using gives negative output voltages, so downward-going spikes are peaks in the intensity.
We sweep the frequency of the 811 nm laser back and forth over a small range around the atomic resonance frequency, which accounts for the multiple peaks in the signal. We see four complete peaks, as the frequency increases, then decreases, then increases, then decreases again. There are also two partial peaks at the edges, because the peak happens to be close to the turnaround point for the particular laser parameters used for this run.
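If it helps to picture where the repeating peaks come from, here's a toy simulation of a triangle-wave sweep carrying the laser across a fixed resonance. The sweep range, resonance position, and peak width are all invented for illustration-- they're not our actual laser parameters:

```python
import numpy as np

# Toy model of the frequency sweep: a triangle wave carries the laser back and
# forth across a fixed resonance, so the detuning hits zero twice per sweep
# period and the fluorescence shows one peak per crossing.  (In the real data,
# the partial peaks at the edges come from the resonance sitting close to a
# sweep turnaround.)  All numbers here are invented.
t = np.linspace(0.0, 2.0, 2000)          # two full sweep periods, arbitrary time units
sweep = np.abs(2.0 * (t % 1.0) - 1.0)    # triangle wave between 0 and 1, arbitrary frequency units
resonance = 0.45                         # where the atomic resonance sits within the sweep range
width = 0.05                             # width of the (Doppler-broadened) peak

signal = np.exp(-((sweep - resonance) ** 2) / (2.0 * width ** 2))

# Two periods times two crossings per period = four complete peaks.
crossings = np.count_nonzero(np.diff(np.sign(sweep - resonance)))
print("resonance crossings (= complete peaks):", crossings)
```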
Going back to the raw trace, you can also see that the signal doesn't go all the way to zero-- the least negative value, corresponding to the smallest amount of light hitting the PMT, is about -280 in the screwy units that the digital scope uses for its saved files. This represents the background light level, due to light scattering off various surfaces in the system and finding its way to the PMT. The actual peaks are much smaller than the background for this data set-- the peak height is something like 40 units, compared to a background of 280 units. To get at what we actually want to measure, we need to separate out the peaks from the background.
We do this by fitting peaks to the data, as shown in this graph:
We read the raw data files into a data analysis program (SigmaPlot, for those who care), and chop each file into four pieces, each containing one complete peak. Then we use a built-in fit routine to determine a mathematical formula that describes the data (a Gaussian peak, because of the Central Limit Theorem). The points in this graph are a subset of the raw data shown above, and the solid line represents the best fit the computer could generate, which is pretty good in this case.
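For anyone who'd rather not use SigmaPlot, the same kind of fit is easy to do with SciPy. This is only a sketch-- the two-column file format and the initial-guess choices are assumptions for illustration-- but the fit function, a Gaussian peak on a constant background, is the same one we use:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_with_offset(t, height, center, width, background):
    """A Gaussian peak sitting on a constant background level."""
    return background + height * np.exp(-((t - center) ** 2) / (2.0 * width ** 2))

# Load one chunk of a scope trace containing a single complete peak.
# (The two-column time/voltage layout here is an assumption, not the real format.)
time, voltage = np.loadtxt("scope_trace_chunk.txt", unpack=True)

# The PMT output is negative, so flip the sign to make the peaks point upward.
signal = -voltage

# Rough initial guesses: background from the lowest point of the flipped trace,
# height and center from the extremum, width from the span of the chunk.
guess = [signal.max() - signal.min(),       # peak height above background
         time[np.argmax(signal)],           # center position
         (time.max() - time.min()) / 10.0,  # width
         signal.min()]                      # background level

params, covariance = curve_fit(gaussian_with_offset, time, signal, p0=guess)
height, center, width, background = params
errors = np.sqrt(np.diag(covariance))

print(f"height     = {height:.1f} +/- {errors[0]:.1f}")
print(f"center     = {center:.4g}")
print(f"width      = {abs(width):.4g}")
print(f"background = {background:.1f}")
```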
From the fit, we extract four numbers: the height of the peak, the width of the peak, the center position of the peak, and the background level. We get four fits from each raw data file, and all of these numbers are potentially useful. To get the full range of data we're after, we repeat this process for twenty-odd data files, and collect the results together to make the following graph:
This shows the peak height (which is the best measure of the number of metastables created) as a function of the elapsed time since the start of the experiment. Each file yields four data points, and the scatter in those points gives you some idea of the quality of the data-- there are four points at each time value, and they're fairly close together except at the very end, when the signal gets small and the fit routine can't generate sensible results any more (I did fits to more files than this, but left out the ones that were obviously garbage).
Why does the signal vary in time? Well, the way the experiment works at the moment is that we open a valve to let gas into the system, which results in a very high initial pressure. As that gas gets pumped away, the pressure drops, and the signal changes depending on the pressure. The drop is very rapid early on, and slower at later times, and we record the pressure reading for each data point at the time the file is saved. Replacing the time values with the pressure readings gets us back to the graph at the top of the post:
This reverses the horizontal axis, so the group of points all the way out to the right is from three minutes in, and the lousy thirteen-minute data is all the way on the left. The early high-pressure points are stretched out a bit, and the later low-pressure ones are bunched together. There's a clear peak at about 70 mTorr, which is a little higher than I'd like, but makes perfect sense, given what's going on here.
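For completeness, the bookkeeping for those last two graphs is simple: each raw file contributes four fitted heights, one timestamp, and one pressure reading, and the final graph just swaps which column goes on the horizontal axis. A rough sketch (the file name and column layout are hypothetical; the real analysis lives in SigmaPlot):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical results file: one row per raw data file, holding the elapsed
# time, the pressure reading recorded when the file was saved, and the four
# fitted peak heights from that file.
elapsed, pressure, h1, h2, h3, h4 = np.loadtxt("fit_results.txt", unpack=True)
heights = np.vstack([h1, h2, h3, h4])

# Peak height vs. elapsed time (the intermediate graph)...
for h in heights:
    plt.plot(elapsed, h, "ko")
plt.xlabel("time since start (minutes)")
plt.ylabel("fitted peak height (scope units)")
plt.show()

# ...and the final graph: the same heights, with the pressure readings on the
# horizontal axis instead of the elapsed time.
for h in heights:
    plt.plot(pressure, h, "ko")
plt.xlabel("source pressure (mTorr)")
plt.ylabel("fitted peak height (scope units)")
plt.show()
```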
But I'll save the explanation of what this graph means for another post...
"To detect the metastable atoms, we shine in another laser at 811 nm, a wavelength that only atoms in the metastable state will absorb, and look for the fluorescence as the excited atoms re-emit 811 nm photons."
My understanding of ordinary fluorescence is that the emitted photon is always lower energy (and hence longer wavelength) than the absorbed photon. Is there something special about the metastable state that allows the atom to emit a photon with the same energy as it absorbed?
It depends on the orbitals used in the transition. If an electron is excited from energy level A to level B and then falls back down, the photons on either side had better be the same energy and hence wavelength.
However, if it is a more complex process, such as A -> B -> C -> A where E_A < E_C < E_B, then the incoming photon needs energy (E_B - E_A) and the outgoing photon will have energy (E_C - E_A) = (E_B - E_A) - (E_B - E_C), and hence a lower energy and longer wavelength.
"My understanding of ordinary fluorescence is that the emitted photon is always lower energy (and hence longer wavelength) than the absorbed photon. Is there something special about the metastable state that allows the atom to emit a photon with the same energy as it absorbed?"
That may be true for fluorescence in complicated molecules, where there are vibrational modes and the like to take up some of the energy, but in atoms, there are fewer places for the energy to go, so there's almost always a high probability of emitting the same frequency that was absorbed.
For the proper choice of "ground" and excited states, it can work out that the only option for the decay of the excited state is to return to the "ground" state. That's called a "cycling" (or sometimes "closed") transition, and those states are the basis for laser cooling. "Ground" gets scare quotes because it doesn't need to be the real ground state of the atom-- the metastable state that we use acts as an effective ground state for the 811 nm cooling transition, which is a cycling transition.
There are other transitions that start in the metastable state, though. One is at 770 nm; the upper state of that transition decays back to the metastable state 77% of the time, with the remaining 23% going to a different state via emission of an 819 nm photon. You still get the same wavelength back about 3/4 of the time.
"That may be true for fluorescence in complicated molecules, where there are vibrational modes and the like to take up some of the energy, but in atoms, there are fewer places for the energy to go, so there's almost always a high probability of emitting the same frequency that was absorbed."
That explains my understanding, as the only fluorescence I have any experience with is of complex organic molecules. Thanks for the more-detailed explanation.
Isn't it a little worrying that you are throwing out experimental data because your peak finding algorithm fails to work? It would be nice to see graphs of these runs (I presume you are looking at them anyway) so as to check that there isn't something different going on.
Are you sure the peaks are Gaussian because of the CLT? I suspect they're Gaussian just because that's a convenient shape to fit to your peaks. The CLT applies to the distribution of a mean, which I don't think you have here.
"Isn't it a little worrying that you are throwing out experimental data because your peak finding algorithm fails to work? It would be nice to see graphs of these runs (I presume you are looking at them anyway) so as to check that there isn't something different going on."
I didn't show them here, but I do look at the data files, and there are no discernible peaks in the signal. The fit algorithm gives nonsense results, because it's just fitting noise.
"Are you sure the peaks are Gaussian because of the CLT? I suspect they're Gaussian just because that's a convenient shape to fit to your peaks. The CLT applies to the distribution of a mean, which I don't think you have here."
The Central Limit Theorem reference is an obscure joke, from my grad school days. Whenever we had a peak, the default fit function was a Gaussian, because it's a convenient shape to work with. Every now and then, somebody would ask "Why are you fitting a Gaussian to that?" and the answer was "The Central Limit Theorem," which for idiot experimentalists translates to "everything's a Gaussian"...
There is a sense in which the signal we're looking at is the mean of a random distribution-- we're looking at the fluorescence of a large number of atoms, with a Maxwell-Boltzmann distribution of velocities, which means that the atoms all have different Doppler shifts, leading to the broad peak that we see. The distribution of velocities is the result of random collisions between atoms, and so on, so it's actually appropriate to fit a Gaussian to it (actually, it should be a Gaussian convolved with a Lorentzian, but the natural width of the Lorentzian atomic line is so narrow that it might as well be a delta function), and not completely ridiculous to invoke the CLT as a reason.
But mostly it's because SigmaPlot has a built-in Gaussian fit function.
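(For anyone who wants a number to go with that hand-waving, here's a quick SciPy sketch. The widths are made up-- a few hundred MHz of Doppler broadening against a few MHz of natural width, which is the right ballpark for a room-temperature gas on an optical line, but not our actual krypton numbers-- and it just shows how little the Lorentzian part changes the line shape.)

```python
import numpy as np
from scipy.special import voigt_profile

# Quick numerical check on the "might as well be a delta function" claim.
# Illustrative widths only: Doppler (Gaussian) FWHM of ~500 MHz against a
# natural (Lorentzian) half-width of ~3 MHz.
sigma = 500.0 / 2.355    # Gaussian standard deviation in MHz (FWHM ~ 500 MHz)
gamma = 3.0              # Lorentzian half-width at half-maximum in MHz

detuning = np.linspace(-1500.0, 1500.0, 3001)   # MHz
voigt = voigt_profile(detuning, sigma, gamma)
gauss = np.exp(-detuning ** 2 / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))

# Largest deviation between the full Voigt profile and a pure Gaussian,
# as a fraction of the peak height -- it comes out at the percent level.
print("max deviation / peak height:", np.max(np.abs(voigt - gauss)) / gauss.max())
```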
Personally, I don't like the term fluorescence for atomic light scattering. It seems to confuse students ... and causes a lot of spelling difficulty. Fluorescence is linked with a wavelength change, in my head anyway.
For imaging of a MOT or BEC I'd refer to it as 'scattered light'.