As I’ve mentioned before, I’m scheduled to teach a class on “A Brief History of Timekeeping” next winter term as part of the Scholars Research Seminar program. Even though I have a hundred other things to do, I continue to think about this a lot.
One of the goals of the course is to introduce students to the idea of doing research. This was primarily conceived as a humanities/social sciences sort of thing, so most of the discussion I’ve seen has been in terms of library research. Of course, as a physicist, I very rarely need to look things up in the library. When I think about research, I think about measuring stuff. So, I’m thinking about ways to incorporate some timekeeping measurements into the course– asking students to either do something that enables a precise measurement of time, or to do something that evaluates a timekeeping method.
To that end, I did a measurement over the last few weeks to see how plausible this was. The materials were really simple: a cheap timer/stopwatch from Fisher Scientific, borrowed from the teaching lab stockroom, and the Internet. Specifically, the time readout on the NIST webpage.
NIST helpfully provides a display on their webpage of the official US time, synced to NIST atomic clocks. I started the timer at exactly 1 pm on May 2, and then compared the reading on the timer to the NIST time at various intervals over the next few weeks. As you can see from the picture, as of the last measurement on May 25, the stopwatch was running slow by 20 seconds.
All told, I made six measurements, shown on the following graph:
This plot shows the number of seconds the timer was behind the NIST time as a function of the elapsed time according to the NIST clock. The points are my measurements of the delay, and the solid line is a linear fit to the data.
There are a couple of really nice things about this measurement. First, even though the measurement apparatus is ridiculously simple– I estimated the delay time by holding the timer up next to the screen, as seen in the picture above– you get a really precise measurement. The slope of the line in the graph is 0.000010 seconds per second, or about one part in 100,000. This works out because the measurement is extended over three weeks: at 86,400 seconds per day, that’s roughly two million seconds of elapsed time, so even reading the 20-second offset to the nearest second pins down the drift rate very well.
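The arithmetic here is simple enough to sketch in a few lines of code, using only the two numbers quoted above (a 20-second offset accumulated between May 2 and May 25, i.e. 23 days):

```python
# Drift estimate from the final measurement quoted in the post:
# the stopwatch was 20 s slow after running for 23 days (May 2 to May 25).
SECONDS_PER_DAY = 86_400

elapsed_days = 23
offset_s = 20.0

elapsed_s = elapsed_days * SECONDS_PER_DAY  # about 2 million seconds
fractional_rate = offset_s / elapsed_s      # seconds of drift per second elapsed

print(f"fractional rate: {fractional_rate:.1e}")  # about 1.0e-05, one part in 100,000
print(f"drift per day: {fractional_rate * SECONDS_PER_DAY:.2f} s")  # about 0.87 s/day
```

The single-point estimate lands right on the fitted slope, and converting it to seconds per day gives the roughly 0.9 s/day figure used in the comparison below.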
In addition to giving a good sense of the sort of precision that can relatively easily be obtained in time measurements, this is a nice lead-in to the discussion of atomic clocks, and specifically an analogy for why the fountain clock technique is so useful– it allows a much longer time between measurements, which makes it possible to do incredibly precise measurements of time and frequency.
It’s also interesting to look at this in terms of the advancement of technology. This is a fairly cheap timer, and it loses about 0.9 s per day. Amazingly, that’s comparable to the famous watches of John Harrison from the 1760s. The very best performance of one of Harrison’s watches was accurate to within 0.08 s/day, while a second test found a somewhat more reasonable 0.83 s/day– still three times better than the official standard for the longitude prize (which Harrison was screwed out of for political reasons, as recounted in Dava Sobel’s Longitude), and an astonishing achievement at the time. And now, that’s the performance of a cheap throwaway stopwatch.
Anyway, I was very pleased with the outcome of this test. I might check a couple of other timers, and different types of timers, but I think this sort of thing could definitely work as an assignment for the timekeeping class: ask the students to measure the performance of some sort of timer with a seconds readout, and see how different things stack up. Another possibility would be to check the performance based on environmental factors– if I threw this timer into the fridge, how would that affect its operation?
Hmmm….. Check back in a few weeks, and we’ll see what we see.