Imagine that you are doing a physics lab to measure the velocity of a small projectile. After making a bunch of measurements to four significant figures, and doing a bunch of arithmetic, you get a value of 4.371928645 m/s. After yet more gruelling math, you find the uncertainty associated with this number to be 0.0316479825 m/s.
How do you report your answer in a lab report?
(There was talk a while back about getting ScienceBlogs some fancy poll software that would allow me to do this with radio buttons and automatic counting, but I don't know how to do that yet, and I'm curious about the answer now, so we'll do this old-school. Choose from the list of answers provided below the fold, and post a comment giving your answer.)
(Please note that you don't need to be a scientist, or have taken a physics class in order to answer this. In fact, the less physics background you have, the better-- I'm genuinely curious about what the average person thinks the right answer should be. The correct answer, and the reason for the question, will be posted here tomorrow.)
- A) 4.371928645 +/- 0.0316479825 m/s
- B) 4.372 +/- 0.03165 m/s
- C) 4.372 +/- 0.032 m/s
- D) 4.37 +/- 0.03 m/s
- E) Some other answer that I will explain in comments.
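(If it helps to see the arithmetic, here's a minimal Python sketch of what options B, C, and D work out to from the raw numbers above. The round_sig helper is just something cooked up for this illustration, not any standard routine.)

```python
from math import floor, log10

value = 4.371928645
uncertainty = 0.0316479825

def round_sig(x, n):
    """Round x to n significant figures (ad hoc helper for this example)."""
    return round(x, n - 1 - floor(log10(abs(x))))

# B) four significant figures in both numbers
print(round_sig(value, 4), "+/-", round_sig(uncertainty, 4))  # 4.372 +/- 0.03165
# C) value to four sig figs, uncertainty cut at the same decimal place
print(round_sig(value, 4), "+/-", round(uncertainty, 3))      # 4.372 +/- 0.032
# D) uncertainty to one sig fig, value rounded to match
print(round(value, 2), "+/-", round_sig(uncertainty, 1))      # 4.37 +/- 0.03
```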
D was the way I learned this in college. Or maybe it was C -- that was 20 years ago. But I'm sticking with D.
E) 4.37 +/- 0.04 m/s
Better round up. Or at least that's what I recall from physics classes.
C. I like to go to three decimal places (biologists don't give a damn about your sig figs). Seriously though, you said 4 sig figs, and it does not make sense for your error to have more decimal places than your estimate. That's why I'd pick C over B.
C. 4 sig figs in the measurement means 4 sig figs in the answers.
I'd use D, as I was always told that taking errors to more than one significant figure was pointless. I can't give a convincing reason why C isn't strictly correct, but I'd regard anyone who used it as overly pedantic and/or a physicist.
but 4.37 +/- 0.04 is too conservative. The range (to an inappropriately large number of decimal places) is 4.3402806625 to 4.4035766275
I used to know this, in my Chemistry days. But instead of guessing between C (4 sig figs) and D (why give more places than you are sure of) based on faulty memory, let me think about the information that is meant to be conveyed. The uncertainty could be taken in two ways: first we could say that it is an exact boundary of possible correct answers, in this case from 4.340 to 4.404 (using C as the solution). Second, we could be giving an indication as to how accurate the answer is, so we are positive about the 4 m/s, and pretty sure about the 0.3, everything else is up for grabs. D gives a better answer in that perspective.
So it depends what the answer is meant for. If the measurement will be used for further calculations, C is the best answer. If the measurement is the final answer that is meant to be communicated to the public, D is the best answer.
I'd guess B. 4 sig figs in the result plus 4 in the uncertainty.
Geez. Too long since my college physics labs, which I gorked up anyway.
But, taking a guess, I would have to go with C. Four sig figs in the answer and a similar precision for the uncertainty. D is appealing but seems like less uncertainty than actually was found. Seems dishonest.
I say D. Since the first digit of uncertainty tells us the limits of knowledge, any extra digit by definition is not significant.
That is, of course, unless the leading digit in uncertainty is 1. Then I usually keep the second one. The relative difference between 1.0 and 1.4 is huge. The difference between 1.5 and 1.9 is still pretty big. One might even be able to argue to keep two digits if the leading digit is 2, but it's really pointless beyond that. The last useful digit of uncertainty (whether one or two) necessarily sets the last digit of the reported mean.
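(A quick sketch of how that rule could be automated, for anyone curious; the report function here is purely illustrative, not anything standard.)

```python
from math import floor, log10

def report(mean, sigma):
    """Keep one digit of uncertainty (two if its leading digit is 1),
    then round the mean to the same decimal place."""
    exponent = floor(log10(abs(sigma)))       # decimal position of the leading digit
    leading = int(abs(sigma) / 10**exponent)  # leading digit of the uncertainty
    n_digits = 2 if leading == 1 else 1
    decimals = -exponent + (n_digits - 1)
    return round(mean, decimals), round(sigma, decimals)

print(report(4.371928645, 0.0316479825))  # (4.37, 0.03)
print(report(4.371928645, 0.0136479825))  # (4.372, 0.014)
```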
A more important justification for keeping only one digit of uncertainty is this: the calculation of uncertainty is probably based on statistical assumptions that can't be justified with your data set anyway. (Seriously, is the distribution of measurements even close to being Gaussian?) This might be an argument for Flaky's answer, but I doubt such a large relative round-up is really necessary.
I bet if I perused my measurement uncertainty books, I would find several different answers to this. That's why I always tell my students, "uncertainties are not Holy Writ -- they're best guesses."
D. Any more digits convey precision that isn't really there.
C would be acceptable, but I'd mark down a student who gave A or B, or at least add a note pointing out what's wrong.
Partly, it's because I'm an economist, and our uncertainties tend to come out of statistical models that probably don't hold exactly, so I think it is better to be conservative about what we report. I suppose I might go with C if there were some meaningful difference between, say, 4.40 and 4.404 (the upper limits one would calculate from C and D). But again, we don't have anywhere near enough precision to distinguish those, so acting as if we can is misleading.
(I realize that "precision" has a technical meaning; that's not how I'm using it here...)
I'd go for C, although I'd prefer to express the uncertainty to fewer figures.
I'd go with C. BTW, I note that the computations required to perform the "grueling math" were done with a calculator. In my day we didn't have fingers^H^H^H^H^H^H^Hcalculators. All the comps would have been done w/4 sig figs based on the precision of the measuring instrument (and why bother hand-calculating stuff you're just going to throw away?)
I vote D, but with some trepidation. I'm operating under the assumption here that by saying that the measurements were "to four significant figures," you mean that each individual measurement was made with an instrument with an inherent precision compatible with four significant figures. The uncertainty of the overall measurement, though, looks compatible with only three sig figs, so that's all I'd report. Reporting more than one sig fig in the measurement error seems a bit off to me.
On a related note, I particularly enjoy it in the literature when someone reports error bars on a number that amounts to a wild-assed guess. In some recent papers on high-efficiency gamma scintillators that I've been reading, one group has a tendency to say that they estimate their collection efficiency to be 0.95+-0.05. So, not only are they estimating their collection efficiency, they're estimating the quality of their estimate, and they're including unity (perfect collection efficiency) in their error bars!
E. The answer is 42 with infinite precision because it says so in the Bible. Or was that the Hitchhikers Guide to the Galaxy? I always get them confused.
Sorry to comment again but I seem to recall using the notation 4.37(2) +/- 0.03(2) to show to how many figures a result was secure while providing an additional one if using the result for ongoing calculations.
I would go with D, but to a certain extent I might consider other values depending on exactly what's being measured and exactly how dependent your requirements are on the level of precision. In other words, if the goal of the experiment can be satisfied with X.X, then I would definitely give D.
I'm operating under the assumption here that by saying that the measurements were "to four significant figures," you mean that each individual measurement was made with an instrument with an inherent precision compatible with four significant figures.
That is correct. Each of the measurements going into the calculation was recorded to four significant figures.
I'm going with B for the same reason as one of the other comments: four significant digits in both the result and uncertainty.
Since the magnitude of your uncertainty measurement is subject to the same constraint as your initial measurement of velocity, the most precise and accurate answer is 'C'.
What I used to explain was that the '0's were part of the sig. figs, so that what you would have is:
4.373 * 10^(1), and 0.032 * 10^(1)
Mike
D, without any other info. I would be mindful of precision vs. accuracy: the process under consideration may be inherently variable, and the "error" actually represents this variability. Also, by convention we count in decimal, in which a long string of numbers may be quite short in some other base.
This requires some care. The best answer is D, although for the problem as stated I can't rule out that special circumstances might justify some other answer.
"Measured to four significant figures" is a red herring. The uncertainty (which we assume to be correctly calculated) is telling us that there are other sources of error, which are more strongly limiting.
Does it matter? Imagine we attempt to improve our results by making our measurements more precise, say to 6 or 7 significant figures. We would find that we are just measuring some other jitter or noise in the system more precisely. That needs to be reduced before we can improve our answer.
(Some experimental physics behind me - hope I got it right...)
D.
There's no particular reason the number of sig figs in the numbers going in should be the same as the number of sig figs in the answer.
(Of course as a theorist, my thoughts immediately go to wondering about the difference between propagating flat errors versus Gaussian errors, but whatever.)
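(Not the point of the poll, but for anyone wondering what that aside amounts to in practice, here's a toy Monte Carlo comparison; the distance, time, and spreads are made up and don't come from the lab data in the post.)

```python
import numpy as np

rng = np.random.default_rng(0)
d, t = 1.234, 0.282            # made-up distance (m) and time (s)
sigma_d = sigma_t = 0.005      # made-up measurement spreads
n = 100_000

# Gaussian errors versus flat (uniform) errors with the same standard deviation
v_gauss = rng.normal(d, sigma_d, n) / rng.normal(t, sigma_t, n)
half_width = np.sqrt(3)        # a uniform spread of +/- sqrt(3)*sigma has std dev sigma
v_flat = (rng.uniform(d - half_width * sigma_d, d + half_width * sigma_d, n)
          / rng.uniform(t - half_width * sigma_t, t + half_width * sigma_t, n))

print("Gaussian errors:", v_gauss.std())
print("Flat errors:    ", v_flat.std())
```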
I'd personally go with C, as there's no point giving the error to a higher number of decimal places than the data it's related to, but there's little justification in giving it to a lower number IMO.
Flaky's right to mention rounding, though; the different schemes have different biases, and I forget which is best.
To be honest, I can't remember significant figures ever being much of a concern in my physics classes. I can tell you the answer from an analytical chemist's standpoint, though. Either C or D is acceptable, and I prefer C.
The number of digits reported in A isn't meaningful.
The error and measurement don't match up in B. A number of people have made comments about them having the same number of sig figs, but this doesn't matter in addition and subtraction.
Skipping over C for the moment...
D reports the value to the least significant digit and the error to the same decimal place, which is often the standard.
I guess my answer kind of is E.
Getting back to C. The reason I say it is acceptable is that, in the real world, the number you report may be used for further calculations. Rounding errors can add up. Some texts recommend making the last digit (the first insignificant digit) a subscript, but I don't think this is necessary as long as the error is provided. Where I would expect this is in a place where the error isn't strictly given (e.g. tables of measured constants) and the last digit would otherwise be assumed to be significant.
Regarding the sig figs in the initial measurement: just based on what we've been told, it doesn't appear to matter, but it could. With noise significantly larger than the last digit of your initial measurement, it is possible to get a valid answer with more digits than your measurement device can display (although it would take a lot of measurements). However, if whatever you're measuring is very stable relative to the digits displayed by your instrument, the calculated error can be, well, erroneous. You'd have to look at the distribution of your data to see whether your error calculation was valid. Anyhow, that doesn't really have anything to do with this measurement, as the calculated uncertainty suggests variation well above the digits of the display.
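(To illustrate the point about noise bigger than the last display digit, here's a toy simulation; the "true" value, scatter, and number of readings are invented for the example, not taken from the post.)

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = 4.3719   # invented "true" speed, m/s
noise = 0.03          # scatter well above the 0.001 m/s display step
n = 10_000

# Instrument that only displays four significant figures (0.001 m/s resolution)
readings = np.round(rng.normal(true_value, noise, n), 3)

mean = readings.mean()
sem = readings.std(ddof=1) / np.sqrt(n)
print(f"{mean:.5f} +/- {sem:.5f} m/s")  # the mean is pinned down far below the 0.001 step
```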
I'm going to say C even though I have no theoretical underpinning for that answer.
This is an interesting question and I had to go back to my data to look. In my excel file I have the equivalent of C as the answer, but my SD are much smaller (100x) than my rates. I would probably report D or a mixture of D and C (4.37 +/- 0.032) in an article.
I tell students to round if they use the equivalent of A.
D.
I'm not an average person - starting grad school in fall
Joe is absolutely correct to point out that the uncertainty points to sources of error larger than the precision of the measuring device.
Significant digit calculations are too simplistic anyway for me to put too much stock in. I like to quote Cliff Swartz's excellent text, Used Math, in which he calls significant digits "a first approximation to error analysis."
Coincidentally, this morning I made some measurements for a new intro lab on linear relationships that involved measuring the mass and dimensions of solid PVC cylinders. A quick least-squares fit in Excel gives a density of 1.43469623781085 g/cm^3 with a 95% uncertainty of 0.068889222. I would normally have my students report the answer as 1.43 +/- 0.07 g/cm^3 or 1.43(7) g/cm^3.
If we go to kilograms per cubic meter, on the other hand, all of the numbers would be multiplied by 1000. I find myself at least tempted to report the numbers as 1435 +/- 69 kg/m^3. Why? Because to leave the uncertainty to one digit requires going to scientific notation which is pretty silly. It's convenient to terminate the number at the decimal point in this case.
I'd say 1.43 +/- 0.07 g/cm^3 is the best way to report it, but I would not object to 1435 +/- 69 kg/m^3 if a student handed it in. Of course I know very few students who would be proactive enough to make such a choice. They tend to want to be told what units you want the final answer in. Given the spectacular mistake they sometimes make, I usually oblige.
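(Just to make the rounding explicit, here's the arithmetic from that example as a few lines of Python; the fit itself isn't reproduced, only the numbers it spat out.)

```python
raw_density = 1.43469623781085   # g/cm^3, straight from the least-squares fit
raw_uncert = 0.068889222         # 95% uncertainty from the same fit

# In g/cm^3: one digit of uncertainty, value rounded to match
print(f"{round(raw_density, 2)} +/- {round(raw_uncert, 2)} g/cm^3")          # 1.43 +/- 0.07

# In kg/m^3: multiply by 1000; terminating at the decimal point gives two digits of uncertainty
print(f"{round(raw_density * 1000)} +/- {round(raw_uncert * 1000)} kg/m^3")  # 1435 +/- 69
```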
What you're trying to do is present the answer your measurements will support. That you can only measure to 4 sf's means you can't present an answer with >4 sf's, so that establishes one boundary and A is out. I don't see how you could give a meaningful estimate of error to more sf's than the measurements that generated it, which means B is out.
Now, C or D? Aaron says he can't give a convincing reason why C isn't strictly correct, but I can think of one reason: if the fourth sf is reliable, why not provide it? Given that you can measure to 4 sf's, D seems overly conservative to me, so I'm going to say C.
Note, though, that to a biologist the correct answer is "about 4.5", since most biological processes laugh off any error less than about 10%.
C. But I'd keep A written down in my lab book for future reference.
Hmmm. I said above that you can't present an answer with more sf's than your measurements will support, but Colst points out that, in fact, you can -- you just need to justify it by having a lot of measurements and showing how the errors are distributed. Given that we don't have that info from the question as given, I'm sticking with C.
What I don't understand is the obsession with significant digits. Significant digits are a rough rule of thumb, for quickly estimating error in measurements, but when you are doing a proper error analysis, there is no sense in bothering about significant digits.
D is what I'd normally report, though C is probably a better balance between unnecessarily overreporting, and broadening the error bar.
D, if a lab report. But Colst has a point for such numbers that may be reused later.
If it were an intermediate result, then it wouldn't be reported, only recorded. So, I think it's fair to assume that this value is what is to be reported as the final result. I think the way the problem is stated is somewhat ambiguous ("measurements to four significant figures"), so some assumptions are justified in dealing with that. Does it mean that the instrument reports in four digits, and the additional digits result from averaging? Or does it mean the instrument reports with many digits but the results are only reliable to four?
I'll go with D...I have background, but I've forgotten all of it.
D. You don't have four significant figures.
I'd use C, but understand the arguments in favor of D. My preference is to see one digit more than may be strictly required, since it's trivial to adjust.
As it happens, I do have an undergrad degree in Physics, but I'm afraid my days of calculating and propagating errors in lab experiments are some years behind me.
I'd say D, if I remember correctly from physics lab (which, I'm ashamed to say, was last semester; already, the information has drained from my head...). The error should already have taken into account any "significant figures" related issues.
The lab didn't really teach us error analysis very rigorously, which is kind of unfortunate, but the rules of thumb I vaguely remember say something like "keep only the first digit of error unless the second digit is big enough to affect rounding, so if it's bigger than 4 or 5", or some sort of order-of-magnitude separation of that sort.
For C, the range of values within error is 4.340 to 4.404, which when rounded off to the first digit of error, gives you 4.34 and 4.40. This is the same bound of error as part D, so the second digit of error doesn't change the rounding. Thus, drop the second digit of error and report the first three digits of the answer along with the first digit of error. Any other numbers are already noise relative to the error.
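(The same check in a few lines of Python, for anyone who wants to see the arithmetic; the rounding in the print statements is just to keep floating-point clutter out of the output.)

```python
# Option C: 4.372 +/- 0.032 and option D: 4.37 +/- 0.03
c_low, c_high = 4.372 - 0.032, 4.372 + 0.032
d_low, d_high = 4.37 - 0.03, 4.37 + 0.03

# Rounded to the first digit of the error, the two ranges coincide
print(round(c_low, 2), round(c_high, 2))  # 4.34 4.4
print(round(d_low, 2), round(d_high, 2))  # 4.34 4.4
```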
D
Partly it depends on what the math is. There may be 4 sf in the measurements, but by the time you've finished the computations, you've probably lost precision.
My initial pick was D, so you should use that for your tallying. However, reading Flaky's comment before commenting, I was more tempted by his solution (4.37 +/- 0.04) or solution C.
C before reading the comments, although I have now been convinced by the arguments for B.
What I don't understand is the obsession with significant digits.
What I mean to indicate by sf's is the smallest unit my device is capable of measuring. So 4 sf's means that we can measure the distance the projectile travelled to 0.001m and the time of flight to 0.001s -- distance in millimeters, time in milliseconds. That's why I say it doesn't make sense, absent lots of measurements and an analysis of error distribution, to report velocity in units smaller than mm/ms, that is, m/s to three decimal places.
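(Here's what that looks like as a standard quotient propagation, with made-up readings at exactly that resolution; the particular distance and time are invented for the sake of the example.)

```python
from math import sqrt

# Invented readings at the stated resolution: distance to 0.001 m, time to 0.001 s
d, sigma_d = 1.234, 0.001
t, sigma_t = 0.282, 0.001

v = d / t
# For a quotient, relative uncertainties add in quadrature
sigma_v = v * sqrt((sigma_d / d) ** 2 + (sigma_t / t) ** 2)
print(f"{v:.3f} +/- {sigma_v:.3f} m/s")  # about 4.376 +/- 0.016 m/s
```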
C: Sigfigs imposed by observation. Report variance with the same resolution as the mean.
C.
The rules for significant figures that are taught in intro chem would lead you to answer C.
D. Errors should only be 1 sig fig.
C, because both figures are reported to 4 sig figs.
C.
It's sort of weird, because all of my intro classes made a relentlessly big deal about grading based on paying attention to sigfigs... and then everything after that, they just didn't care. To the extent that when I couldn't remember or was confused, even the TAs couldn't tell me.
So if the right answer is B, I blame the American educational system.
Clearly D, since that's the one that truncates at the uncertain digit. It's not a question of the precision, but rather where you start being "uncertain" in the train of numbers.
D, for the reasons stated above.
I agree with Bill Hooker's answer above - though I would like to qualify it along these lines...
It all depends on what the application of the reported number is going to be. Is it some quantum-mechanical measurement? Is it a measurement of some machine-tool head motion which needs to hold tolerances on the order of microns?
Based on that, one would choose... Nonetheless, errors on the order of a fraction of an angstrom unit, hmm, that is pretty accurate.
Is this error estimate simply the outcome of a mathematical expression involving some nice numbers like pi, e, etc., or is the estimate itself based on other experiments? This would become really important if the experiment were in the realm of the nanosciences... I would not trust an error estimate that is purely the outcome of a mathematical expression; the error estimate must be tempered by the realistic capabilities ("least counts") of the instrumentation and technique involved in the experiment.
I wonder if it's a Canadian/US thing, but you guys are all saying "significant figures"; I'm a bio kid, not a physics kid, and I was always taught with talk of "significant digits". Also, though the right answer's already out, I would have gone with C as well, the whole keep-it-to-four-sig-digs thing.
errr... I meant to say "bio cum philosophy kid" up above.
Personally I believe the correct answer is D, written 4.37(3) m/s, but you have to watch out for "the rule of 19", which is to say that if the most significant digit of the error is "1", then you use two significant figures instead of one in the error, and you would write it as 4.372(19), for example.
Sadly "the rule of 19" is arbitrary, and you could equally well adopt "the rule of 29" or "the rule of 39", etc. It's up to you. Just be consistent.
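(A sketch of that parenthesis notation with a tunable "rule of N", purely as an illustration; the helper name and its threshold argument are my own invention.)

```python
from math import floor, log10

def parenthesis_notation(mean, sigma, threshold=19):
    """Format as value(uncertainty in last digits); keep two digits of
    uncertainty when the two-digit value is at or below the threshold."""
    exponent = floor(log10(abs(sigma)))
    two_digit = round(sigma / 10 ** (exponent - 1))  # uncertainty to two digits
    if two_digit <= threshold:
        decimals, err = 1 - exponent, two_digit
    else:
        decimals, err = -exponent, round(sigma / 10 ** exponent)
    return f"{mean:.{decimals}f}({err})"

print(parenthesis_notation(4.371928645, 0.0316479825))  # 4.37(3)
print(parenthesis_notation(4.371928645, 0.0186479825))  # 4.372(19)
```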
I suppose I would choose C, because the data is to 4 significant figures.
A is clearly wrong, and B doesn't make sense: there's no point in showing more decimal places in the uncertainty than in the final answer.
Practically speaking I don't see any real difference between C and D. Though, if you were going to throw away the uncertainty and just give a number, the 4.37 from D would probably be best.
D:
Going through:
A) 4.371928645 +/- 0.0316479825 m/s - too many sig figs
B) 4.372 +/- 0.03165 m/s - your certain answer should not extend past the first digit of the error.
C) 4.372 +/- 0.032 m/s - see above
D) 4.37 +/- 0.03 m/s - error begins where answer terminates.
E
The answer should be expressed as (4.37 +/- 0.03)m/s
You need the parentheses because multiplication takes precedence over addition and you can't add a speed to a pure number.