“After all, facts are facts, and although we may quote one to another with a chuckle the words of the Wise Statesman, ‘Lies – damned lies – and statistics,’ still there are some easy figures the simplest must understand, and the astutest cannot wriggle out of.” –Leonard Courtney, 1895
“The first and worst of all frauds is to cheat oneself.” –Philip James Bailey
In the study of any scientific field, there are two great perils you have to be careful to avoid: fraud and incompetence. Incompetence can be as innocuous as a simple mistake in your analysis, a contamination of your data set or samples, or some other generally honest error.
In science, we have all sorts of ways of correcting for incompetence. We demand that experiments and observations have their methods detailed, and that the experiments be reproducible. We have multiple teams check their work and search for the claimed effect. Results are accepted not on authority, but only after the verified soundness of hundreds or even thousands of tests, trials, and analyses. That’s why science requires that results and methods be transparent: so that they can be checked.
But even after all that, you might ask yourself, “well, okay, those might be your conclusions, but how sure are we that they’re correct?” Fortunately, we have a system in place for testing it.
In particular, that system is math, and the way we quantify our confidence in a result is through statistics. While it’s often said that statistics can be used to prove anything, the truth is that we have — as scientists — standardized methods that we use to calculate our confidence in models. We have standard tests that tell us when to accept or reject data, and since we record everything we do, if you give any number of competent scientists the same data sets, they will not only give you the same answers for what the data say, they will give you the same confidence levels attesting to the significance of the results.
Unless, of course, they’re acting unethically.
And when that happens, this goes beyond an innocent mistake, or even gross incompetence, and into the realm of fraud. Scientific fraud is generally thought of as deliberate falsification or misuse of data to arrive at a misleading, dishonest, or simply untrue conclusion.
And perhaps one of the most dangerous places for fraud to appear is in a scientific context that impacts the health, safety, and security of our world. And that’s why, when it comes to the most contentious scientific issue of our times, climate science and global warming, it’s all the more important to expose any fraudulent claims that are made.
Because we only get one Earth, and it’s important to get the science concerning it right. So if the Earth is experiencing global warming, we want to know. And if the warming has stopped, we want to know that, too. So last month, when I wrote about the largest global temperature study ever done, I was unsurprised at the firestorm that took place in the comments section. (500+ and counting!) After all, there were previous studies done that claimed to have measured global average temperature.
Although the vast majority of climate scientists accepted these results, there were a sizable number of vocal objections to possible errors that may have unfairly biased these results. And so the largest study ever done was undertaken: the Berkeley Earth Surface Temperature project, or BEST.
A number of scientists, many of them avowed skeptics that the Earth was, in fact, warming, led this project. And, as I reported last month, they not only released their findings and results, they also opened up the entirety of their data to the public, so that anyone could analyze it!
What did they find?
A stunning agreement with the prior results, and confirmation that all the teams involved did a great job accounting for the potential pitfalls that the BEST team was worried about.
And yet, if you were to listen to the words of Judith Curry, one of the BEST team members and authors, you might come away believing that somehow, this data indicates that the warming has stopped. As she herself said, in an interview with the UK’s Guardian:
This is “hide the decline” stuff. Our data show the pause [in temperature rise], just as the other sets of data do. Muller is hiding the decline. To say this is the end of scepticism is misleading, as is the statement that warming hasn’t paused.
Those are some very strong statements! (And although Curry claims she was taken out of context at times, she also stands by these particular statements, quoted above.) The “hide the decline” graph she refers to is this one, also published by BEST.
Her contention, it would appear, is that taking a ten-year average is masking the fact that, over recent times, the temperature hasn’t risen, or at least that the warming has paused!
But we have the data, and so we can check this for ourselves.
The above graph shows that the temperature, since 1970, has risen at an average rate of about 0.25° Celsius per decade. If the temperature hasn’t risen — or hasn’t risen as quickly — over the most recent times, then perhaps this is something to legitimately look at. But if the data indicates no recent “decline” or “slowing” at all, then this is a fraudulent contention. Let’s get right into it.
But we don’t just eyeball it; this is science. So rather than use the full data set, let’s cut off all of the pre-2001 data, and then let’s analyze it.
So, only looking at this tiny fraction — around 9 years’ worth — of data, we know that there are going to be significant statistical uncertainties. Nevertheless, we still want to do our best fit to this data set, and see what it says. Anyone can do it themselves, but I’m going to borrow the graphs of tamino, who has done the same standard statistical analysis that I would. In fact, this is no different than the statistical analysis that any undergraduate trained in even a 100-level science or statistics course would use.
And what do we find? The slope — which indicates the rate of temperature rise — is only 0.03° C per decade, with an uncertainty of ± 0.13° C per decade.
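For anyone who wants to try the fit themselves, here’s a minimal sketch of that standard analysis in Python: an ordinary least-squares straight line, with a 1-sigma uncertainty on the slope. The monthly anomalies below are synthetic, made-up numbers for illustration only — they are not the actual BEST data, which you’d need to download from the project itself to reproduce the published figures.

```python
import numpy as np

# Ordinary least-squares trend fit, the standard analysis described above.
# The anomalies here are SYNTHETIC illustrative values, not the BEST data.
rng = np.random.default_rng(0)
t = np.arange(2001.0, 2010.5, 1.0 / 12.0)          # monthly time axis, ~9 years
anomaly = 0.003 * (t - t[0]) + rng.normal(0.0, 0.15, t.size)

def ols_trend(t, y):
    """Return (slope, 1-sigma slope uncertainty) of a straight-line fit."""
    n = t.size
    X = np.column_stack([t, np.ones(n)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    s2 = resid @ resid / (n - 2)                    # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)               # parameter covariance
    return coef[0], np.sqrt(cov[0, 0])

slope, err = ols_trend(t, anomaly)
print(f"trend: {slope * 10:+.3f} +/- {err * 10:.3f} deg C per decade")
```

One caveat: a naive error estimate like this treats every month as independent, but real monthly anomalies are autocorrelated, so a careful analysis corrects for that and the true uncertainty comes out somewhat larger.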
That slope is small enough that, as Curry stated, it might seem fair to say that, based on this, the warming has stopped.
Except, if this is the data you used, you’re committing scientific fraud. Because those temperature readings are all very reliable, except for two data points. You need to look not only at the data points from this data set, but the reliability of those points. Which they published, by the way.
So, let’s take a look.
Those last two data points have temperature uncertainties of 2.8° and 2.9° C, respectively, while the next largest uncertainty is a mere 0.21° C! Why’s that? The April and May 2010 data points are based on data from only 47 stations, all located in Antarctica, as opposed to the prior month (March 2010), which had data from 14,488 stations!
So what do you do, if you’re a responsible scientist? You don’t use those data points. You throw those two unreliable points out. And if you do that, do you know what happens?
Two things: the slope of the line increases to 0.14° C per decade, and the uncertainty drops to ± 0.11° C per decade. Well, that’s a big difference! You might contend, based on this, that over the last nine years, perhaps the warming has slowed a little, but it certainly hasn’t stopped.
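The effect of those two unreliable points can be illustrated with a weighted least-squares fit, where each month is weighted by 1/σ²: points with 2.9° C error bars then contribute almost nothing, which is nearly equivalent to throwing them out. Again, the numbers below are synthetic stand-ins for illustration, not the published BEST values.

```python
import numpy as np

# Weighted least-squares sketch: each point is weighted by 1/sigma^2, so
# the two huge-uncertainty months barely influence the fit. SYNTHETIC data.
t = np.arange(2001.0, 2010.5, 1.0 / 12.0)
sigma = np.full(t.size, 0.1)
sigma[-2:] = 2.9                                   # the two Antarctic-only months
rng = np.random.default_rng(1)
y = 0.014 * (t - t[0]) + rng.normal(0.0, sigma)

def wls_slope(t, y, sigma):
    """Weighted straight-line fit; returns (slope, slope uncertainty)."""
    W = np.diag(1.0 / sigma**2)
    X = np.column_stack([t, np.ones(t.size)])
    cov = np.linalg.inv(X.T @ W @ X)
    coef = cov @ X.T @ W @ y
    return coef[0], np.sqrt(cov[0, 0])

slope_w, err_w = wls_slope(t, y, sigma)
# An unweighted fit (all sigmas equal) lets the two bad points drag the slope:
slope_u, _ = wls_slope(t, y, np.ones_like(sigma))
print(f"weighted:   {slope_w * 10:+.3f} deg C/decade")
print(f"unweighted: {slope_u * 10:+.3f} deg C/decade")
```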
But it gets even worse for claims that the warming has slowed. Because as the Berkeley team themselves showed — in agreement with other teams — nine years is not enough time to make accurate measurements. Have a look at what year-to-year variations show:
As you can verify for yourself, there are plenty of intervals as long as 13 or even 15 years where the temperature doesn’t appear to rise. As the BEST team themselves note:
Some people draw a line segment covering the period 1998 to 2010 and argue that we confirm no temperature change in that period. However, if you did that same exercise back in 1995, and drew a horizontal line through the data for 1980 to 1995, you might have falsely concluded that global warming had stopped back then. This exercise simply shows that the decadal fluctuations are too large to allow us to make decisive conclusions about long term trends based on close examination of periods as short as 13 to 15 years.
And this agrees with that other paper I linked to, above, which says:
Our results show that temperature records of at least 17 years in length are required for identifying human effects on global-mean tropospheric temperature.
So let’s do just that, and take the most recent 17 years on record.
Now the slope is + 0.36° C per decade, which appears to even be higher than the longer-term, 40-year trend. In fact, tamino has gone a step further, and calculated what the warming (or cooling) trend is, up to the present day, if you go back to any given year, starting as early as 1975 or as late as 2005! What do we find?
It’s actually remarkably consistent, and you need to take a time period as short as five years, which is certainly not statistically significant (look at those error bars!), in order to see the warming appear to stop. Curry claimed she was taken out of context, but came back with a joint statement (with Muller, lead author of BEST) that stated the following:
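The start-year scan tamino performed can be sketched the same way: fit a trend from each candidate start year through the end of the record, and watch the error bars balloon as the window shrinks. The series below is synthetic, with a steady 0.25° C-per-decade trend built in, purely to illustrate the method.

```python
import numpy as np

# Start-year scan: fit a trend from each start year to the end of the
# record. SYNTHETIC data with a built-in 0.25 deg C/decade trend.
rng = np.random.default_rng(2)
t = np.arange(1975.0, 2011.0, 1.0 / 12.0)
y = 0.025 * (t - t[0]) + rng.normal(0.0, 0.15, t.size)

def trend_with_error(t, y):
    """OLS slope and its 1-sigma uncertainty, in deg C per decade."""
    n = t.size
    X = np.column_stack([t, np.ones(n)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    s2 = resid @ resid / (n - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return coef[0] * 10, np.sqrt(cov[0, 0]) * 10

for start in (1975, 1985, 1995, 2000, 2005):
    mask = t >= start
    slope, err = trend_with_error(t[mask], y[mask])
    print(f"{start}-2010: {slope:+.2f} +/- {err:.2f} deg C/decade")
```

Notice how the uncertainty grows as the window shortens: that is exactly why a five-year “pause” is statistically meaningless.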
We have both said that the global temperature record of the last 13 years shows evidence suggesting that the warming has slowed. Our new analysis of the land-based data neither confirms nor denies this contention. If you look at our new land temperature estimates, you can see a flattening of the rise, or a continuation of the rise, depending on the statistical approach you take.
But why would you say such a thing? Remember that other thing you said about 13-year periods? Remember? I quoted it above, but I’ll quote it again:
…the decadal fluctuations are too large to allow us to make decisive conclusions about long term trends based on close examination of periods as short as 13 to 15 years.
Yes, you can see a flattening, but only if you do the scientifically unethical thing: take an insignificant portion of the data and present it as significant. You also need to make two huge statistical errors: keeping the data points that you know are bad, and cherry-picking your starting year and month to be April 1998 (or just a couple of months before), which happened to be the hottest month recorded worldwide, at the time, since the invention of the thermometer. (And even if you do all that, you still see warming, just by a slightly smaller amount.)
But if you’re a scientist who knows better than to claim there’s a flattening (or worse, a decline that’s being hidden), and you do it anyway, that’s not an honest mistake. Don’t just take my word for it:
- Go and get the raw data from the source — BEST — itself,
- do your own analysis of how the temperature has changed over time,
- and see what you get.
Because you’ll find that there is a game being played, but it’s quite the opposite of “hide the decline.” There isn’t a decline to hide; when you look at the scientifically reliable data, the incline is all there is. The only game being played is the fraudulent cherry-picking of data to play “hide the incline,” and I refuse to sit by silently while this dishonest game is played.