Uncertain Pop Quiz Results

We had 45 responses to yesterday's poll/quiz question-- thank you to all who participated. The breakdown of answers was, by a quick count:

How do you report your answer in a lab report?

  • 0 votes A) 4.371928645 +/- 0.0316479825 m/s
  • 3 votes B) 4.372 +/- 0.03165 m/s
  • 18 votes C) 4.372 +/- 0.032 m/s
  • 21 votes D) 4.37 +/- 0.03 m/s
  • 2 votes E) Some other answer that I will explain in comments.

So, it's a narrow victory for D, among ScienceBlogs readers.

The correct answer and the reason for the poll are below the fold.

As far as I'm concerned, the correct answer is D). There's absolutely no reason to report digits of the answer past the first digit of the uncertainty, because the 0.002 you'd be tacking on is much smaller than the 0.03 uncertainty. Even if you report that next digit, it's way smaller than the error associated with the measurement, and serves only to give a false impression of precision.

C) is close, but I think that the proper procedure is to round the uncertainty to one significant figure, and round the reported value to the same number of decimal places as the first digit of the uncertainty. This is what we have agreed upon as a department, and this is the procedure that is spelled out in our lab writing guides.

(Someone in comments mentioned the "rule of 19," which is that you keep two digits of uncertainty when the first is a 1. I'm pretty much ok with that, though I'd still round down to one digit for anything less than 15.)
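To make the rule concrete, here's a minimal sketch in Python (the function name and the rule-of-19 toggle are my own illustration, not anything from our lab guides):

    import math

    def format_measurement(value, uncertainty, rule_of_19=False):
        """Round the uncertainty to one significant figure, then round the
        value to the same decimal place (option D above). With rule_of_19=True,
        keep two digits when the uncertainty's leading digit is a 1."""
        exponent = math.floor(math.log10(abs(uncertainty)))  # place of the leading digit
        digits = 2 if rule_of_19 and uncertainty / 10**exponent < 2 else 1
        decimals = digits - 1 - exponent
        if decimals < 0:  # uncertainty of 10 or more: round to tens, hundreds, ...
            value, uncertainty = round(value, decimals), round(uncertainty, decimals)
            decimals = 0
        return f"{value:.{decimals}f} +/- {uncertainty:.{decimals}f}"

    print(format_measurement(4.371928645, 0.0316479825))             # 4.37 +/- 0.03
    print(format_measurement(4.371928645, 0.0142, rule_of_19=True))  # 4.372 +/- 0.014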

The reason for the question is that I've been grading labs recently, and my students almost universally choose the equivalent of B) (those that don't go for A, at least). I get ridiculous numbers of uncertain digits reported, all the time. And even when I take ten minutes of class time to go over the rules, they stick with A) or B). Even when they've had multiple lab classes explaining this procedure, they mostly stick with B), and I've had senior physics majors give me lab reports taking option A), which is just mind-blowing.

For some reason totally beyond my comprehension, "Round the uncertainty to one significant figure, and the reported value to the same number of decimal places" just baffles my students, year in and year out. I can't for the life of me understand why-- it seems like common sense to me-- so I thought I'd try polling people on the Internet to see if there's some sort of deep-set attachment to B) that I just don't share. Maybe there's some evolutionary advantage conferred on savannah-dwellers who like lots of digits in their math, and I'm the result of a late-arriving mutation (get me Steven Pinker, stat!)...

It turns out, though, that you all are mutants, too, so that's out. But if anybody out there can shed any light on this difficulty, or suggest some way to teach this that will actually be effective (docking points isn't enough-- believe me, I've tried), I'm at the end of my rope, here.

If you answered D)-- or even C), I'd take C)-- how did you learn that rule? And what would you suggest for helping students break out of B)?


Basically I just learned that there's no point in reporting any more significant digits than you can report accurately. If your error of measurement corresponds to the third significant digit, that's what you report, because it means you really have no idea that the fourth digit is accurate. Since you have no idea, there's no point in reporting it -- you're just wasting ink.

Your students probably report an absurd number of decimal places for the same reason mine do: they copy and paste from Excel without editing.

Related point: I mentioned to my students that when they report numbers with decimals, they should keep the number of decimal places consistent (sig figs aren't an issue because they are always measuring integers -- i.e., number of animals) because it looks better. They stared back blankly, wondering why I was lingering on such a trivial point. They obviously don't get aesthetics (or maybe I'm just a bit OCD). In case you're wondering, decimal places are an issue even though they are measuring integer values, because they calculate proportions and expected numbers.

When I was working in the California state treasurer's office, we received a paper from a university mathematician on the indexation of tax brackets. For purposes of his model, the professor gave an example in which he said something like, "Assume a 20% inflation rate over a five-year period, yielding an average annual inflation rate of 3.713728934%." We just laughed and put it aside because no one could take it seriously. In those days, calculators were still fairly new, and the poor guy was just copying down all the digits on the display of his HP-65. His "hyper-accuracy" made his paper look ridiculous.
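For what it's worth, the professor's digits are just the compound-growth arithmetic carried to the calculator's full display; a couple of lines of Python reproduce them (assuming annual compounding):

    # Average annual rate equivalent to 20% total inflation over five years:
    rate = (1.20 ** (1 / 5) - 1) * 100
    print(rate)            # 3.7137289337... -- every digit the display offered
    print(round(rate, 1))  # 3.7 -- all the precision the example could support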

Looking at the available options, one striking point is that D has the fewest digits in both the answer and the uncertainty.

There may be no quick fix for teaching. It's strange how small things can reveal a lot about deep understanding.

Some speculative suggestions as to what's not understood:

- Scientists don't over-claim. If a scientist says something, they should be able to back it up with confidence. I think many undergraduates, let alone the general public, don't appreciate how much checking, double-checking, and cross-checking routinely goes on in research.

- Deep familiarity with numbers as tools. As has been noted, extra digits are meaningless. But to understand that you have to first understand the digits that are meaningful. I think the temptation to include extra digits may arise from treating all numbers as some sort of magic charm - the more the better.

If these sorts of things are at the root of the problem, then teaching this should be about helping to develop scientific maturity as much as technical recitation of the rules. (Of course I'm sure lots of teachers see it this way already - the trick is getting the students to.)

I think April gets it exactly right. For most of us, that understanding comes with time and experience.

By Mark Paris (not verified) on 28 Apr 2006 #permalink

I've encountered a similar problem on geology field trips with students using GPS to locate themselves - they'll copy a grid reference down which is effectively giving their location to the nearest centimetre, when their machines are usually telling them that the accuracy is (at best) +/-10 m or so. Cue lecture on the difference between 'precision' and 'accuracy'...

Joe, please excuse me. Joe gets it exactly right. I stupidly saw the date and put that as your name. I'm an idiot.

By Mark Paris (not verified) on 28 Apr 2006 #permalink

I can't see that you can do anything but keep docking those marks, and dock more for these errors than for getting the actual value wrong. I seem to remember that was how I had such things hammered into my skull. It's the equivalent of a language student using the wrong tense of a verb.

By Chris Surridge (not verified) on 28 Apr 2006 #permalink

I said in the earlier thread that I would consider C or D acceptable (as Chad apparently does), but I prefer C (while he prefers D). To be honest, when I was grading undergrads just a couple years ago, I might have crossed out the extra digit (but not taken off points) for C, turning it into D.

I think it depends on your philosophy of what a lab report is.

If it is only to demonstrate knowledge, then D shows a knowledge of the significance of the answer. Someone writing it as C may understand that the last digit is insignificant, but D better demonstrates that they have this knowledge. C assumes both the writer and reader understand the true significance attached to the answer. D requires no assumption by the reader (grader in this case) that the writer understands.

If it is as a step on the way to true scientific communications (journal articles and the like), I'd argue C is the better answer, largely for the sake of rounding errors. True, as Chad said, that last 2 is a small digit, but rounding errors can add up. In my CRC's list of constants, an extra "insignificant" digit is retained, even when that digit is a 2 (or a 1 or a 0). Writing C doesn't "over-claim," because a scientifically literate audience should know exactly what the significance is.
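A hypothetical illustration of the rounding-error point (the numbers are invented): if several one-digit-rounded uncertainties feed a later quadrature sum, the rounding compounds:

    import math

    # Ten independent contributions, each with a true uncertainty of 0.014:
    combined_full = math.sqrt(10) * 0.014    # 0.0443, from the unrounded pieces
    combined_rounded = math.sqrt(10) * 0.01  # 0.0316, after rounding each to one digit
    print(combined_full, combined_rounded)   # the rounded version is nearly 30% low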

Someone in the other thread made a point about whether it is an intermediate or final number. True, no one is likely to perform more calculations using the results of an undergraduate's lab report, but this just brings us back to the question of what the point of having students write lab reports is. If it is practice for professional publications, where the numbers might be used by others, I'd still argue C.

As I said, I would have preferred D a couple years ago when I was actually grading undergraduate reports and had more recently been writing them myself. Now, from the viewpoint of someone who isn't actually grading lab reports (my teaching has decreased and shifted to graduate students and written and oral exams), but is writing journal articles, I prefer C.

All of that said, a uniform department policy is a good thing, and if the policy is D, so be it.

I was fortunate enough to get significant digits and error estimates introduced in a kickass high school physics course, so it made sense in university labs. But probably it was only getting marks docked that pushed me into the habit of actually thinking about it before I wrote.

People are lazy, and usually won't put effort into things they don't care about if they don't have to. It probably takes some time working with physics to really care about these things.

I wonder if it has less to do with students' understanding and more to do with grades. When I was a student, I vaguely remember having the idea that if I wrote down more digits it would show that I wasn't lazy and maybe bump up the grade.

Another explanation might be the idea that your report should reflect the amount of work you put in (a myth that exists even beyond college). I bet that some students can't believe that they worked so hard and spent so much time measuring that darn velocity, and now they're only supposed to write a few digits.

By Anonymous (not verified) on 28 Apr 2006 #permalink

I've noticed similar problems with far too many people, and I believe that it is often because of the approach professors employ when discussing error analysis. I don't think the way these things are often taught is the best way. Talking of these as tricks and rules of thumb, with sayings like "Round the uncertainty to one significant figure, and the reported value to the same number of decimal places," is in my opinion not the best way to go about it, because it gives the whole subject of error analysis a magical aura. I think it is worth the time and effort to go into a bit of statistics and talk of random variables as probability distributions, which are to a good degree characterised by their expectation value and variance.

The trouble, you see, is that what your students are doing is perfectly correct if they see it as maths over the real numbers. But in reality an experiment doesn't give you one number; it gives you an ordered pair (number, error). So what you are working with is not the field of real numbers but these ordered pairs, and you have to define addition, multiplication, etc. over them in a sensible way, and explain to your students how all that comes about.
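A minimal sketch of what that pair arithmetic might look like, in Python (the class is my own illustration; the quadrature rules assume independent Gaussian errors):

    import math

    class Measured:
        """An ordered pair (value, error) with arithmetic that propagates
        the error, assuming independent Gaussian errors."""
        def __init__(self, value, error):
            self.value, self.error = value, error

        def __add__(self, other):
            # Absolute errors combine in quadrature under addition.
            return Measured(self.value + other.value,
                            math.hypot(self.error, other.error))

        def __mul__(self, other):
            # Relative errors combine in quadrature under multiplication.
            value = self.value * other.value
            rel = math.hypot(self.error / self.value, other.error / other.value)
            return Measured(value, abs(value) * rel)

        def __repr__(self):
            return f"{self.value} +/- {self.error}"

    print(Measured(4.372, 0.032) + Measured(1.250, 0.010))  # roughly 5.622 +/- 0.034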

I'm sorry for going on like a mathematician, but I feel that extra effort in going into abstraction and formalism is often rewarded by deeper understanding.

By Yagnavalkya (not verified) on 28 Apr 2006 #permalink

Anonymous, that goes back to what Joe mentioned. I see this in the new college grads that I sometimes work with. They run a simulation and, if they get an answer, that's what they put down. It doesn't matter if it's twice what experience tells me it should be, or even an order of magnitude more than it should be. They lack an understanding of what the numbers mean and how they relate to reality. They also see a variation of 10 out of 1,000 and scale their plots to show that, when experience tells me that for our kind of work, that difference is meaningless for several reasons (the sensors we use can't see it most of the time, plus our simulations have so many uncertainties I consider it a victory to get within 10 percent). I have to tell them to plot from zero to show that the value actually changes very little.

By Mark Paris (not verified) on 28 Apr 2006 #permalink

I didn't have time to post yesterday, but I think C is the correct answer, although I admit that D is almost always an acceptable answer.

Here's an extreme example that shows why.

The manager of a chemical plant wants to maximize profits by running the plant at the maximum temperature. If the maximum temperature is exceeded, the plant explodes. If you reported the wrong temperature, you lose your job, assuming you survived the explosion. If you reported the right temperature, the manager loses his job.

Your numbers won't work because 4.372 - .032 = 4.340 = 4.34.

So, I'm going to slightly change your numbers. I'm also going to move the decimal point to give more realistic numbers for temperature: I'll use 437.0 instead of 4.372, and 3.4 instead of .032.

437 - 3 = 434
437.0 - 3.4 = 433.6
437.0 + 3.4 = 440.4

So the real number, the temperature at which the plant will really explode, is between 433.6 and 440.4. Let's say that the actual number is 433.7.

You report that the maximum value is no less than 434. The temperature controller reads 4 digits, so the plant manager sets it to 433.7, figuring that he can turn on the emergency coolant if the temperature increases to 433.8. He thinks this still gives him another tenth before the actual explosion. He knows he's on the edge, but he thinks he's on the safe side. However, 433.7 is the actual maximum temperature and the plant explodes.

Of course this example is extremely unlikely to happen. The odds of making a critical mistake by reporting D instead of C are about the same as the odds of winning the lottery. So, I have no problem with any organization adopting D as the standard. However, I wouldn't call D correct unless such a standard was adopted.

When I taught significant figures, I found that my example stopped almost everyone from choosing A. I don't recall a problem with people choosing B, but I'd address it by forcing people to report the number twice early in the semester: 4.372 +/- 0.032 m/s and 4.340 m/s to 4.404 m/s. Once people get this right you could stop requiring the second version.
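A trivial helper in that spirit (the function name and the fixed three-decimal formatting are mine, chosen for this example's numbers):

    def both_forms(value, unc, unit="m/s"):
        """Report a measurement both ways, per the exercise above."""
        print(f"{value:.3f} +/- {unc:.3f} {unit}")
        print(f"{value - unc:.3f} {unit} to {value + unc:.3f} {unit}")

    both_forms(4.372, 0.032)
    # 4.372 +/- 0.032 m/s
    # 4.340 m/s to 4.404 m/s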

I think one reason so many students botch this is that they simply don't understand that the whole concept of significant figures is there to indicate how much uncertainty there is in a measurement. This is mainly because they don't usually understand that measurements have any uncertainty to begin with.

Textbooks don't help matters by immediately launching into significant-figure rules with little or no demonstration of what the heck these rules of thumb are supposed to do.

What I usually do is, before I mention the first word in class about significant figures, take the students into the lab and have them repeatedly make some measurements with simple lab instruments - measuring the same amount of water poured from a graduated cylinder using an analytical balance works pretty well for this. Then I ask them to report an average mass of water, but to do it in such a way as to demonstrate how confident they are of the results. Usually, they'll write the measurement correctly on their own at this point - without word one having been said about sig figs.

If they discover the concept on their own before they try to memorize sig fig rules, then they seem to be better at using the rules once they learn them.
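A quick simulation of that water-weighing exercise, with invented numbers, shows the kind of report they converge on:

    import random, statistics

    # Pretend the "true" poured mass is 24.68 g, with about 0.15 g of
    # scatter from the pouring and the balance (both numbers invented):
    readings = [random.gauss(24.68, 0.15) for _ in range(10)]

    mean = statistics.mean(readings)
    sem = statistics.stdev(readings) / len(readings) ** 0.5  # standard error of the mean
    print(f"{mean:.2f} +/- {sem:.2f} g")  # e.g. 24.71 +/- 0.05 g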

I would take NIST as a decent source for significant figures since a big part of their job is setting standards. They seem to go for C):

http://physics.nist.gov/cgi-bin/cuu/Value?md|search_for=atomnuc!

Since the uncertainty is often a standard deviation from repeated measurements, it has its own uncertainty (more measurements will change your standard deviation too, especially for a small number of measurements), so I think C) makes sense.
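A quick simulation makes that parenthetical concrete (the sample sizes are arbitrary; the relative scatter of a sample standard deviation goes roughly as 1/sqrt(2(N-1))):

    import random, statistics

    random.seed(1)
    true_sigma = 0.032

    for n in (5, 50, 500):
        # Scatter of the sample standard deviation over 1000 repeated experiments:
        stdevs = [statistics.stdev([random.gauss(0, true_sigma) for _ in range(n)])
                  for _ in range(1000)]
        print(n, round(statistics.stdev(stdevs) / true_sigma, 2))
    # prints roughly: 5 0.36, 50 0.1, 500 0.03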

Alfred,

I'm afraid I wouldn't trust you to run a chemical plant.

What is an acceptable probability of explosion for a plant to operate on? How do you think that value would relate to standard error bars as discussed in this post?

Joe

In my intro astronomy class (general science), at one point I had a question where I asked students (working in groups of two) to make a scale model of the Milky Way and the Andromeda galaxy using quarters to represent the galaxies. I gave them the "real" numbers (to about two sig figs, which if anything is an overestimate), and said a quarter is about 1cm across.

Most people got out calculators and came up with numbers. One pair of kids didn't have a calculator, but eyeballed it, saw that the size/distance ratio was quite close to 2/3 times 100, and said "70 cm".

After the exercise was over, I talked to the whole class (about 100). I told them two different ways of working it out -- one that gave something close to 70 cm but with 6 or 8 significant figures, and the way where they came up with 70 cm. I asked them which was the better answer, and almost *unanimously* they said the one with 6 or 8 sig figs. I asked why (trying not to give away the fact that I thought they should be laughing at the 6 or 8 sig figs), and the answer they gave was that it was "more precise".

Now, I'd given them "rules" on sig figs before (which I don't fully stick to, as long as they're within one-- we don't do uncertainties in this class, so I don't ask for "precise" estimates of uncertainties, but I want them to be reasonable), but somehow it didn't sink in. I explained again. I often use the example that I'm 5'10" tall. Then I pull a hair off of the top of my head (which doesn't really work any more, since I don't have any left); I ask how tall I am now. 5'10" minus 10 microns?

I remember in 8th grade thinking that sig figs were a bunch of arcane and annoying tax-code-like rules. The key to understanding them is understanding uncertainties, at least in the broad general sense. But too many kids think that "uncertainty" is "error"-- which can be a synonym, but to them "error" means "being wrong". The key question is, how well do we know? It's just not a natural question to ask, I think. It is to those of us who are scientists -- and the readership of this blog is highly biased that way -- but we've had it pounded into us over years.

BTW, in response to the question, I might answer C in this case. The difference between +-0.026 and +-0.034 is a quarter the size of the uncertainty, and thus *might* be significant (depending on your "uncertainty on the uncertainty"), arguing for an extra digit in uncertainties. My own preference would be to accept either +-0.03 or +-0.032 on a lab report, as a good understanding of what uncertainties mean could lead to either answer. I wouldn't put in an arbitrary rule favoring one over the other.

In my own work, I *do* sometimes report "too many" digits of uncertainty. The only reason is table formatting-- if I have a bunch of quantities, some more certain than others, in a table, it's neater if they all line up in the column, and sometimes it's nice to have all of the values in a column have the same number of digits. This will mean that some of the values have two or even three digits in the uncertainty.... That's a style over substance thing, which in general I abhor, but I also abhor absolutism....

-Rob

In my undergraduate analytical chemistry textbook, I read that it was common practice to report figures as in C), but to put the extra insignificant figure on the quantity and its uncertainty in subscripts. This might be a variant that is worth considering as a reporting standard.

I choose C) because there are several occasions where you might need relative uncertainties, and one s.f. on the uncertainty is often not accurate enough to work out a relative uncertainty that can be compared to other relative uncertainties.
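For instance (with made-up numbers): a one-digit uncertainty reported as 0.03 could stand for anything from 0.025 to 0.035, and any relative uncertainty computed from it inherits that slop:

    value = 4.37
    for unc in (0.025, 0.030, 0.035):       # all of these report as "0.03"
        print(f"{100 * unc / value:.2f}%")  # 0.57%, 0.69%, 0.80%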

Alfred, just to emphasize what Joe said, error bars are not absolute. They represent Gaussian errors with the plus minus value being the standard deviation. You expect to be outside the range of error about a third of the time.
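(That "third of the time" is just the Gaussian tail integral:)

    import math

    # Probability that a Gaussian measurement lands outside +/- one sigma:
    print(1 - math.erf(1 / math.sqrt(2)))  # about 0.317 -- roughly a third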

(OTOH, I've seen values reported with different values for the plus than the minus. I assume this means some sort of skewed distribution, though I'm not sure exactly what it's supposed to represent.)

By Aaron Bergman (not verified) on 28 Apr 2006 #permalink

I didn't vote because I couldn't make up my mind between C and D. My gut was telling me D, because that's what I'd want as an engineer. My brain was whispering that there was some official standard that would have made it C.

Regardless, as for how to fix this....

If what you want is to get them to do it your way, I will give you a solution to your problem that should get at least a 90% improvement rate over three weeks. I call this the Geometric Progression Method.

Simply announce that, for the next lab they turn in, errors in dealing with significant digits will incur an automatic 10% penalty. Doesn't matter how many errors on the lab, any error of that sort docks 10%. Then, the next week, similar errors are 20%. Then, the third week, 40%. Then....

I myself have never needed to go past 20%. Yes, I am a shithead. But we knew that. What made it effective was that very few people had any doubt that I was a sufficient shithead to follow through on that threat. Very few people upped the ante to 20%, and no one ever took it to 40%. (Of course, I gave out a 0% grade early in that semester, as I recall. Please note, the recipient of that 0% got a perfectly respectable B+ for the semester.)

Someone else mentioned elemental laziness as the problem, and that's probably what it is. "Is it worth my time to remember this crap? Aaaah, probably not." Well yes, Bucko, yes it is!

Now, as for getting them to understand the importance of it... you probably need to get them to experience a situation where the problem actually means something. I suspect that at a practical level, that's impossible, and they're just going to end up getting brutalized by their first graduate advisor or boss.

It's a lot easier to come up with weird precision errors in computer science. Are your undergrads required to take a course on numerical computation? If so, you might put your heads together with the guy who teaches that, and try to come up with a scenario where it bites them in the ass in the numerical methods class.
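Two classic demos of that sort, if the numerical-methods instructor wants something short (standard floating-point examples, not specific to any course):

    # Floating-point arithmetic happily manufactures digits nobody measured:
    print(0.1 + 0.2)        # 0.30000000000000004
    # And subtracting nearly equal numbers silently destroys the real ones:
    print(1.0000001 - 1.0)  # about 1.00000000058e-07, not exactly 1e-07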

By John Novak (not verified) on 28 Apr 2006 #permalink

(OTOH, I've seen values reported with different values for the plus than the minus. I assume this means some sort of skewed distribution, though I'm not sure exactly what it's supposed to represent.)

I've done this. Gaussian error bars are almost always an approximation, Central Limit Theorem be damned. The real probability distribution for some given physical quantity of interest given the data frequently isn't Gaussian.

When I've given asymmetric error bars, I give the 68%ile level on either side-- what *would be* one sigma if the errors were Gaussian. Look, for instance, at this paper, and in particular the bottom-left panel of Figure 12. If you marginalize over Omega_M to get errors on just w, clearly the errors are asymmetric (the probability distribution stretches more down than it does up). In this case, I quoted the best-fit value of w to be -1.05+0.15-0.20. (There are better, newer results on w in the 2006 Spergel WMAP paper that you can find on arXiv.org.)
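A sketch of that percentile prescription, with a made-up skewed sample standing in for the real marginalized posterior:

    import random, statistics

    random.seed(0)
    # Toy stand-in for a skewed posterior on some parameter w:
    samples = sorted(-(0.8 + random.lognormvariate(0, 0.5)) for _ in range(100_000))

    best = statistics.median(samples)
    lo, hi = samples[int(0.16 * len(samples))], samples[int(0.84 * len(samples))]
    print(f"{best:.2f} +{hi - best:.2f} {lo - best:.2f}")  # e.g. -1.80 +0.39 -0.65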

-Rob

My example is meant to teach freshmen, including freshmen who have no intention of being science majors, about significant figures. It is not meant to teach anyone how to run a chemical plant. I'm intentionally playing fast and loose in order to make the point to my intended audience, which appears to be Chad's intended audience as well.

I've changed careers, and it's been a long, long, time since I taught science. However, I don't recall learning about Gaussian errors in any of my undergraduate science classes, and I certainly didn't teach it to my freshman chemistry students. So, I would expect the students we are talking about here to think that the errors are absolutes and that the probability of an explosion jumps from 0% to 100%.

As I said, I'm trying to teach students why significant digits is an important concept, not how to run a chemical plant.

If I remember correctly, I used to report too many figures because it "didn't hurt" and I thought the numbers should always be reusable later, as perhaps the NIST standard argues.

However, both for papers and for similar industry reports, that has been seen as misleading by most, which is why I changed from C to D. The argument being that if it isn't obvious that the number will be reused, the inquirer should reproduce it to his own needs. (Which could in a few cases mean asking the authors for hard-to-get raw data for his/her own analysis.)

By Torbjörn Larsson (not verified) on 28 Apr 2006 #permalink

Whenever the question is of the form: "Why do students...." one can do no better than to read the Tall, Dark, and Mysterious archives.

I'm generally pretty candid and introspective regarding my flaws as a teacher, but dammit, there is NO WAY that I could possibly be so bad as to bear any responsibility whatsoever for this one student of mine - the one who needs a B - writing, by way of interpreting a confidence interval on a quiz, "What this means is that we are 95% sure that the mean, which is equal to 123, is between 15.05 and 16.95." No amount of bad teaching can produce such nonsense. I may not have been presenting the finer points of sampling as clearly as I might have, but, as God is my witness, I have not been unteaching my English-speaking, college-aged students how the positive real numbers are ordered.

Look on the bright side. At least your students are writing down their answers. If they were allowed, they might just tape their calculators to the lab report and avoid the pain of writing altogether.

By Peri_P_Laneta (not verified) on 28 Apr 2006 #permalink

I had a history teacher once that told us, when writing papers, to write a thesis statement and then back it up with argumentation. But when actually grading our papers, what he graded on was HOW MANY FACTS we included. So we all learned to ignore his lectures about how he wanted a well-argued thesis, and gave him unorganized fact dumps.

Similarly, in high school science classes, experiments received a grade based on how far our answer was from the "real" answer. If the real answer was 3.14, and we wrote down 3, we'd lose points. If we wrote down 3.1634123412, we wouldn't lose points. Pretty obviously, we erred on the side of spurious precision. (Also pretty obviously, none of the experiments actually worked, so we usually just back-calculated our results by taking the real answer and adding like 8.3% error to it, but that's not relevant to your point...)

If you want people to really care about appropriate precision, you need to give them an incentive to care by marking them down for spurious precision and imprecision both.

Mike Kozlowski is exactly right -- and it's a more general thing. You have to tune your assessment to the learning goals, or it will happen the other way around. Students figure out what they will be tested on, and that is what they will learn. If you say you want them to understand, but then test recall, they will memorize all the terms, and not bother thinking deeply about them.

(Of course, the same is true for professors... if their teaching is evaluated entirely on student evaluations that don't measure how much the students learned, but how happy the students were, canny professors start adjusting their classes to primarily make students happy.... Hence the "hidden curriculum".)

-Rob

Incentive to fix the problem (the problem being to understand why they are required to do things a certain way): dock them credit if they don't discuss, in plain, non-technical English, what they're doing, why, and what the numbers tell them. If they try to obscure things with jargon to hide the fact that they didn't complete the lab or don't understand what's happening and were too lazy to come ask, if the explanation doesn't match the numbers or what they did with the numbers, or if there's no way in hell the numbers support what they claim, hit them hard and explain why. It takes more time up front, but you make it up on the back side of the semester, when either they've largely stopped doing such dumb things, or they persist but you've already established precedent and thus don't have to stress over giving partial credit. And they can't even justify the claim that you're being an arse, since you wrote down the explanations.

This, matched with judicious use of discretion and allowing them to make up credit by hunting me down and demonstrating that they did the work/figured things out, worked well for me in the lab reports and classwork I've assessed for the profs I've worked for. And you'll essentially never have to adjust a grade, because most students are simply too lazy or too busy to come hunt you down. I was absolutely profligate with the opportunity to make up credit at my old school and at my current one, and in three years I've had three people ever come to me about it.