Corrections to the McKitrick (2002) Global Average Temperature Series

Last week I wrote about Paul Georgia's review of Essex and McKitrick's Taken by Storm. Based on their book, Georgia made multiple incorrect statements about the physics of temperature. Of course, it might have just been that Georgia misunderstood their book. Fortunately Essex and McKitrick have a briefing on their book, and while Georgia mangles the physics even worse than they do, they do indeed claim that there is no physical basis for averaging temperature. They present two graphs of temperature trends that purport to show that you can get either a cooling trend or a warming trend depending on how you compute the average. McKitrick was recently in the news for publishing a controversial paper claiming that an audit of the commonly accepted reconstruction of temperatures over the past 1000 years showed it to be incorrect, so it only seems fair to audit Essex and McKitrick's graphs. As we will see, both of their graphs are wrong, and their results go away when the errors are corrected.

In their briefing, Essex and McKitrick claim that physics provides no basis for defining average temperature and:

"In the absence of physical guidance, any rule for averaging temperature is as good as any other. The folks who do the averaging happen to use the arithmetic mean over the field with specific sets of weights, rather than, say, the geometric mean or any other. But this is mere convention."

Physics does, in fact, provide a basis for defining average temperature. Just connect the two systems that you want to average with a conductor. Heat will flow from the hotter system to the colder one until the temperatures are equalized. The final temperature is the average, and it is a weighted arithmetic mean of the original temperatures (weighted by the heat capacities of the systems). That is why the folks doing the averaging use weighted arithmetic means rather than the geometric mean.
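
To make this concrete, here is a minimal sketch (my own illustration, not anything from Essex and McKitrick or from the temperature groups) of the equilibrium temperature of bodies placed in thermal contact, assuming constant heat capacities and no losses:

    # Energy conservation: C1*(Tf - T1) + C2*(Tf - T2) + ... = 0, so the final
    # temperature Tf is the heat-capacity-weighted arithmetic mean of the
    # starting temperatures.
    def equilibrium_temperature(temps_kelvin, heat_capacities):
        total_c = sum(heat_capacities)
        return sum(c * t for c, t in zip(heat_capacities, temps_kelvin)) / total_c

    # Two identical bodies at 0 C (273.15 K) and 20 C (293.15 K) settle at 10 C:
    print(equilibrium_temperature([273.15, 293.15], [1.0, 1.0]))  # 283.15

With equal heat capacities this reduces to the plain arithmetic mean; unequal weights just reflect different amounts of stuff. Nothing in the physics produces a geometric mean or a root mean square.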

[Figure: Essex and McKitrick's arithmetic-mean temperature series (essexwarm.png), showing a warming trend.]

Of course, even if they were right and there were other equally valid ways to calculate the average temperature, they would still need to show that it actually makes a difference, so Essex and McKitrick present an example that purports to show that whether you use the arithmetic mean or some other mean can affect whether or not you find a warming trend. They constructed the graph above by taking monthly observations from ten weather stations and averaging them with the arithmetic mean. They found an overall warming trend of +0.17 degrees Celsius per decade.


[Figure: Essex and McKitrick's root-mean-square temperature series (essexcool.png), showing a cooling trend.]

They next present a graph where they

"treat each month as a vector of 10 observed temperatures, and define the aggregate as the norm of the vector (with temperatures in Kelvins). This is a perfectly standard way in algebra to take the magnitude of a multidimensional array. Converted to an average it implies a root mean square rule."

Note that nobody, but nobody, averages temperatures this way. Anyway, when they calculated the trend they found an overall cooling trend of -0.17 degrees Celsius per decade.

They triumphantly conclude:

"The same data can't imply global warming and cooling can they? No they can't. The data don't imply global anything. That interpretation is forced on the data by a choice of statistical cookery. The data themselves only refer to an underlying temperature field that is not reducible to a single measure in a way that has physical meaning. You can invent a statistic to summarize the field in some way, but your statistic is not a physical rule and has no claim to primacy over any other rule."

I looked at their graphs and something seemed wrong to me. Their root mean square average gives almost the same answer as the arithmetic mean. For example, it gives the mean of 0 and 20 degrees Celsius as 10.2 instead of 10 degrees. It didn't make sense to me that it could make as big a difference to the trend as they found.
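
You can check that arithmetic in a couple of lines (my own quick check, not part of their spreadsheet):

    import math

    temps_k = [0.0 + 273.15, 20.0 + 273.15]  # 0 C and 20 C expressed in Kelvins
    arith = sum(temps_k) / len(temps_k)                           # 283.15 K = 10.0 C
    rms = math.sqrt(sum(t * t for t in temps_k) / len(temps_k))   # about 283.33 K = 10.2 C
    print(round(arith - 273.15, 1), round(rms - 273.15, 1))       # 10.0 10.2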

McKitrick kindly sent me a spreadsheet containing the data they used and I almost immediately saw where they had gone wrong. You see, some stations had missing values, months where no temperature had been recorded. When calculating the root mean square they treated the missing values as if they were measurements of 0 degrees. This is incorrect, since the temperature was not actually zero degrees. Because the overall average temperature was positive this meant that the root mean square was biased downwards when there were missing observations. And since there were more missing values in the second half of the time series, this produced a spurious cooling trend.

When calculating the arithmetic mean they treated missing values differently. If only eight stations had observations in a given month, they just used the average of those stations. This isn't as obviously wrong as the other method they used, but the stations in colder climates were more likely to have missing observations, so this biased the average upwards and produced a spurious warming trend.
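
Both effects are easy to see with a toy example (these are made-up numbers, not their station data): plugging in 0 degrees drags the root mean square down whenever the missing reading would have been above zero, while dropping a missing cold station drags the arithmetic mean up.

    import math

    def rms_c(temps_c):
        kelvins = [t + 273.15 for t in temps_c]
        return math.sqrt(sum(t * t for t in kelvins) / len(kelvins)) - 273.15

    def mean_c(temps_c):
        return sum(temps_c) / len(temps_c)

    # A warm station at 20 C and a colder one at 5 C; the colder one goes missing.
    print(rms_c([20.0, 5.0]), rms_c([20.0, 0.0]))  # ~12.6 vs ~10.2: zero-filling biases the RMS down
    print(mean_c([20.0, 5.0]), mean_c([20.0]))     # 12.5 vs 20.0: dropping the cold station biases the mean up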

I filled in the missing values by using the observation for that station from the same month in the previous year and recalculated the trends. Now both mean and root mean square averaging produced the same trend of -0.03 degrees Celsius per decade, which is basically flat. When analysed correctly, their data show neither warming nor cooling, regardless of which average is used. The different trends they found were not because of the different averaging methods, but because of inconsistent treatment of missing data.

I also calculated the trend with their root mean square average and ignoring missing values, and with the arithmetic mean and replacing missing values with zero (spreadsheet is here). As the table below shows, the averaging method made almost no difference, but the treatment of missing values made a large one.

Trend (degrees Celsius per decade)

Missing values          Mean    Root Mean Square
Ignored                  0.16    0.15
Treated as 0 degrees    -0.15   -0.17
Previous year used      -0.03   -0.03
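
For anyone who wants to redo the calculation, here is a rough sketch of the procedure in Python (using NumPy; the array layout and function names are my own, and the numbers in the table come from the spreadsheet, not from this sketch):

    import numpy as np

    def monthly_aggregate(temps_c, rule, missing):
        """temps_c: months x stations array in Celsius, with NaN where no reading exists."""
        t = temps_c.astype(float)
        if missing == "zero":
            # E&M's treatment in the root mean square series: a missing reading counts as 0 degrees.
            t = np.where(np.isnan(t), 0.0, t)
        elif missing == "previous_year":
            # The fix used above: reuse the same station's reading from the same month a year earlier.
            for m in range(12, t.shape[0]):
                gap = np.isnan(t[m])
                t[m, gap] = t[m - 12, gap]
        # missing == "ignore": leave the NaNs and average over whatever stations reported.
        kelvins = t + 273.15
        if rule == "mean":
            agg = np.nanmean(kelvins, axis=1)
        else:  # "rms"
            agg = np.sqrt(np.nanmean(kelvins ** 2, axis=1))
        return agg - 273.15

    def trend_per_decade(monthly_c):
        months = np.arange(len(monthly_c))
        slope_per_month = np.polyfit(months, monthly_c, 1)[0]
        return slope_per_month * 120  # 120 months per decade

    # e.g. trend_per_decade(monthly_aggregate(temps, "rms", "previous_year"))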

I emailed McKitrick to point out that arithmetic mean and root mean square did not give different results. He replied:

Thanks for pointing this out. It implies there are now 4 averages to choose from, depending on the formula used and how missing data are treated, and there are no laws of nature to guide the choice. The underlying point is that there are an infinite number of averages to choose from, quite apart from the practical problem of missing data.

Incredible, isn't it? He still doesn't understand basic thermodynamics. And he seems to think that there are no laws of nature to guide us in estimating the missing values, so that it is just as valid to treat them as zero as any other method, even for places where the temperature never gets that low.


Nice work Tim.

And nice to see they can find some time in between bagpipe lessons to botch more data analysis.

I wonder if Heartland or Heritage will pony up funds to send these two amateurs to a couple of science courses at the local Community College.

D

"You see, some stations had missing values, months where no temperature had been recorded. When calculating the root mean square they treated the missing values as if they were measurements of 0 degrees."

Oh dear God. Say it isn't so.

How is the actual average temperature calculated? It seems like there ought to be some sort of numerical integration done over area and altitude, with density of measurements higher in areas where the temperature gradient is known or expected to be greater (e.g. locations where coastal areas abruptly meet mountainous areas).

In any case, a simple average doesn't seem like it passes muster. Maybe it makes more sense to calculate and focus on an estimate for total thermal energy in the atmosphere. Then just look to see trends in this measure, up, down, stable, rate of change or whatever.

Ben, the total thermal energy in the atmosphere varies with height and airmass. Vertical profiles of the atm with rawinsonde balloons (depicted as a 'Skew-T' - Skewed Temperature) attempt to get a shot of this profile in order to better analyze and forecast the energy differences.

It is very common to have more thermal energy in one layer than in another, and also for airmasses to be totally different aloft than at the sfc - occluded fronts are fronts not necessarily reflected at the surface, and the stacking of low pressure systems differs as well (a negatively tilted trof axis, for instance, runs SE-NW in the northern hemisphere, whereas a 'normal' trof axis runs SW-NE, but of course trof axes can run straight N-S and then tilt, etc.).

D

I know that the thermal energy varies with height. I also understand that the atmosphere has somewhat distinctive layers. So why not compute the total energy in each layer (as they are defined) and the thermal energy for the atmosphere as a whole, and look at that, rather than look at temperature? Seems to me a trend up or down or stable in total thermal energy (and in each layer) would be more interesting scientifically than the average temperature, however it is computed. After all, if the "average" global temperature is rising, would this not be reflected in the total thermal energy of the atmospheric system (or its layers)?

er, replace "thermal" with "thermodynamic". i.e. total internal energy of the atmospheric system.

Ben, you're talking about having decent measurements of moisture, wind (OK with balloons) and friction (not OK with balloons).

What you are talking about is currently done in a certain sense using the term 'vorticity'. But the computation does not extend to the entire atmosphere.

To get to what you want, first you already have folks competing for computing time to run their models. What you envision - beyond vorticity - is likely years away, but you are correct - there is far more useful information in your approach. It just has to wait.

In the meantime, the current atmospheric models use vorticity.

D

Yeah, but you should be able to get those effects of viscosity on some larger scale using some sort of mean, the way it is done in CFD codes to take care of turbulence in the boundary layer. Calculating the turbulent flow in the boundary layer is intractable for a big problem, but using some averaging techniques you can deal with some of its effects. Can't this be done to get an "estimate" of the internal energy in the atmosphere?

This is good, ben. Fun!

OK. We're back to scalar issues and computing power again. The other issue is the equations you use to solve for the analysis.

This is the current 00Z map of a particular model's (ETA) solution of the northern hemisphere 500 hpa (mb) isobar.

This is the current 00Z map of another particular model's (AVN) solution of the northern hemisphere 500 hpa (mb) isobar.

Open them in 2 windows and reduce the windows to place them side by side. Look at the 24 hr forecast (upper right).

The absolute vorticity is the best way to proxy the inertial energy of the atm at a particular point (which happens to be a solution for a grid - the grids nest at a particular granularity depending upon the forecast model). The scale at the left is, simplified, the upward velocity (seen, for our purposes, as 'positive' inertia).

So. Look at, say, SE Minnesota.

The blue blob is a forecasted upward vorticity area that likely is indicative of possible thunderstorm activity, but you need the amount of moisture in the lower atmosphere to know whether there is enough moisture to condense and then have strong lift.

Anyway, see the difference between the first URL I gave you (the AVN model) and the second (the ETA model) solution over SE MN? The ETA has a spot of green. I've never forecast weather in that area, so I don't know whether that much lift can cause hail. But if there is ample low level moisture, I might look hard to see whether there are other indications for hail. The AVN doesn't have green, so that solution says there is less lift in that area.

Which model is right? Wait a day.

Maybe the AVN is a better solution today, because that set of equations dealt with conditions better. Maybe tomorrow the ETA will do a better job.

The point?

There is not always one best solution.

Viscosity is very tricky at these scales due to the heterogeneity of the earth's surface, which contributes unevenly to the effects on the lower and successively higher layers, which have different densities and are moving at different speeds.

I appreciate your averaging/mean comments - that is what is done at the climate scale: aggregated averaging, if you will, for a point on the earth's surface.

My point is that we are still not at the level of cheap computing power to solve for your intuitive sense of how to visualize the system. We are getting there. But because of the vast scale involved, we have a ways to go.

Climate is what you expect, weather is what you get.

OK, that was fun and a nice diversion from my work.

Best,

D

"I looked at their graphs and something seemed wrong to me. Their root mean square average gives almost the same answer as the arithmetic mean.You know very little about statistics - you are confusing a linear regression with an arithmetic mean.Both are summaries using the root-mean square - one over another variable - TIME, the other not.

Louis, you just left thirteen comments. You seem to have left a comment every time you had a thought. I deleted all of them except for one. To encourage you to collect your thoughts before commenting, I am limiting you to one comment at a time.

I have not confused linear regression with arithmetic mean. McKitrick constructed the first graph by first taking the arithmetic mean of the ten temperatures for each month, and then doing a linear regression on those mean temperatures.

yarg, I wasn't really talking about simulation anyway. All I was saying is to use existing data and fit it to a model for internal energy. Can't this be done? I'm not talking about forecasting, I'm talking about backcasting.

Ben, you don't need to do anything that complicated. You don't want the average over the whole atmosphere, just the average temperature at the surface. So you just do an integration over area, using the stations that are near to each point on the surface to interpolate temperatures. It's all explained here.
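
If it helps to see the idea in code, here is a toy version of the area-weighting step (this assumes the station readings have already been interpolated onto a regular latitude-longitude grid; the procedures the temperature groups actually use are more involved):

    import numpy as np

    def area_weighted_mean(grid_c, lats_deg):
        """grid_c: latitude x longitude array of surface temperatures, NaN where there is no coverage."""
        weights = np.cos(np.radians(lats_deg))  # a latitude band's area is proportional to cos(latitude)
        zonal = np.nanmean(grid_c, axis=1)      # average around each latitude circle
        ok = ~np.isnan(zonal)
        return np.sum(zonal[ok] * weights[ok]) / np.sum(weights[ok])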

Essex and McKitrick's argument is far stupider than I imagined. I figured they just had some weighting they thought was better physically motivated than the standard one. But to claim that the root-mean-square temperature, with missing data treated as zero, is just as good as an arithmetic integration? How could anyone even take that seriously?

I suppose what they're really trying to do is just create lots of confusion in order to argue that the whole notion of average temperature is meaningless. But this is in itself incredibly ignorant.

This seems to keep happening to me-- I try to be fair and end up giving these people way more credit than they turn out to deserve.

So Essex and McKitrick say "treat each month as a vector of 10 observed temperatures, and define the aggregate as the norm of the vector (with temperatures in Kelvins). This is a perfectly standard way in algebra to take the magnitude of a multidimensional array. Converted to an average it implies a root mean square rule."

This is utter nonsense because it implies that the magnitude of a vector is somehow related to something that might be considered the midpoint - and the mean, under some circumstances, is a very good estimate of the midpoint of a distribution. The word "magnitude" does not imply midpoint. Obviously, E&M don't understand that you choose a statistic in a certain situation because it has certain properties, not because it is used elsewhere.

Thanks for debunking yet another piece of disinformation!

By BoulderDuck (not verified) on 22 May 2004 #permalink

To continue Paige's comment, doing what E&M did makes no sense even if there are no missing data points because temperatures separated by three days or so are not independent of each other.

Let me do a Louis here. I just looked at the graphs again. The little dishonest buggers did not remove the annual cycle. Nuff said

uh-oh Tim, Lott has another article on guns on foxnews.com. Better hop-to on that one :)

I'm gonna do a Louis too... Most of the article is fine, but then he whips out some of his dubious statistics at the end. The article would be just fine without them, too bad.

This is the kind of stuff that would pass the Bush administration's Data Quality Act with flying colors. These guys are to science as calligraphy is to journalism. I have the strange feeling that some sort of context is missing here, but then when they say that temperature has no physical meaning ???

Sorry, ben.

Just trying to illustrate graphically the difficulty in characterizing work across such scales.

Vorticity is work up or down (sometimes both in a parcel of air, esp. a thunderstorm). Advection is 'work' horizontally. But you want temperature, I think, to reflect useful work. Water vapor is potential work - another complication.

Anyway, back to my computing power argument & I'll shut up now. thanks for the indulgence.

D

ah, I see, I didn't know the atmospheric science lingo. I was thinking of vorticity as I understand it in terms of basic fluid mechanics.

Excellent work!

By James Lindgren (not verified) on 26 May 2004 #permalink

I want to add one more comment about Essex and McKitrick. This has nothing to do with thermodynamics; it is simply a comment on the statistical arguments made by E&M. They can't seem to understand the usage of the arithmetic average; they argue strenuously against it and try to have you believe that there are many, many ways to compute an average that are just as good as the arithmetic average - in effect they want to throw out the use of any average - and then they make use of a regression? That makes no sense at all.

But there's a larger, more worrisome context. Just as many on the far right have used "sound science" to mean no science at all, I believe that E&M are beginning to lay out the groundwork to throw out all statistical arguments. They can say they are using "sound statistics" or whatever, while throwing out all legitimate statistics for bogus reasons. In fact, one can imagine the right-wingers eventually arriving at a set of arguments to disprove any statistical analysis. Imagine if E&M's attempts to make using the mean meaningless catch on with the right-wingers. Since basically all of environmental science, medical science, and many other fields are essentially statistical in nature, any time a scientist comes up with a result that the right-wingers don't like, they can say "Well, there are many ways of calculating a statistic, so that doesn't prove anything."

You nailed it Paige. They attack science all the time. Any way they can.

Just take a look at Tech Central Station any day of the week - one of their apparent agendas is to cast doubt on any finding that prevents their sponsors from enjoying unfettered profitability.

That blog that louis writes for is another example. Follow their little links around and note how many times you can find an example of exactly what you said above [and how many rubes fall for it].

D

Tim,
editing my comments makes my case.

Your bat.

I read some of Tim's defense of temperature averaging. I am not sure however how correct it is or what assumptions he is making. Please feel free to point out any errors I make and feel free to correct anything (I took thermodynamics about 4 years ago and to be honest I don't have a very solid grasp of the subject), but I believe that in order to average temperatures you need to do something a little more complicated than just take an arithmetic average.

For instance consider a system of air and vacuum held in a partitioned box such that the air is on one side of the partition and the vacuum is on the other (assume that the box and partition are perfect thermal insulators). The volume of box on either side of the partition is the same. Now the arithmetic mean of the temperatures is what exactly for this system? Can you even define an arithmetic mean for this system or are arithmetic means only for systems which are in some sense homogeneous.

Also, if I take away the partition and assume that no heat is lost or gained by the system, then you have an adiabatic process in which, for an ideal gas, (T*V)^(gamma-1) = constant, where gamma = Cp/Cv = 1.4 for air. Cp and Cv are the specific heats at constant pressure and constant volume respectively. So the formula for the new temperature of the whole system is something like

T2 = T1*V2^0.4/(V1^0.4) = T1*(2V1)^0.4/(V1^0.4) = T1*2^0.4 = 1.31*T1

I have obtained the formula for adiabatic processes from http://stp.clarku.edu/notes/chap2.pdf

I have no idea of what possible weighted arithmetic formula could lead to anything like the temperature I have just given for the whole system when the partition is removed. You may object that I have averaged with a vacuum, but I have only used a vacuum because it's simple. In fact a vacuum is just the special case of no air; if you put a very small amount of air in the second partition it approximates a vacuum. You may also object that I am not using samples at the same pressure. But isn't air at different places at different pressures? Doesn't the pressure at sea level vary over time and space? Otherwise why do barometers exist? And pressure varies with humidity, and variations in pressure are basically what lead to climate, so how can climate exist if pressure is the same everywhere? Is the justification for arithmetic averaging that the pressure variations are small?

Temperature as far as I know only makes sense when a system is in equilibrium meaning that all time derivatives for state variables are basically zero (there are no changes in state, pressure, volume etc). Let me quote Richard Tolman:

" It is of interest first of all to point out once more that the temperature of a system is in any case a quantity to which we can assign precise meaning only for systems which are in a condition of equilibrium" pg. 563 The Principles of Statistical Mechanics Richard C. Tolman

Secondly, you can perform an even more interesting experiment, although I am not sure what the outcome is. Take the same partitioned system but this time make the volumes different. The volume of the box on one side is 1/100 the volume on the other. Air is in both boxes and temperatures are the same, which implies the pressure in the smaller box is 100 times greater than in the bigger box. Now remove the partition. What happens? I don't know. Someone with a better thermodynamics background who is reading this can answer. But my feeling is that the temperature will change, which of course doesn't make sense from the point of view of arithmetic averaging.

By Should be writ… (not verified) on 13 Aug 2005 #permalink

Sorry, I made some mistakes in my previous comment, but my argument is untouched. Namely, the formula and calculation should be

T*(V^(gamma-1)) = constant instead of

(T*V)^(gamma-1) = constant

and

T2 = T1*(V1/V2)^0.4 = T1*(1/2)^0.4 = 0.757*T1

instead of

T2 = T1*2^0.4 = 1.3*T1

But anyway, I don't see how this comes about from arithmetic averaging.

By Should be writ… (not verified) on 13 Aug 2005 #permalink

"For instance consider a system of air and vacuum held in a partitioned box such that the air is on one side of the partition and the vacuum is on the other (assume that the box and partition are perfect thermal insulators). The volume of box on either side of the partition is the same. Now the arithmetic mean of the temperatures is what exactly for this system? Can you even define an arithmetic mean for this system or are arithmetic means only for systems which are in some sense homogeneous."

Well, these publications discussing the "frigid cold" of outer space have always been a pet bugaboo of mine. Vacuum has no temperature, since temp is average kinetic energy and the average kinetic energy of zero mass is undefined. Sometimes, you see discussions of the "scorching heat" of the naked sun in outer space, which is a little more reasonable; in fact, when you do encounter a particle out there it's zipping along at such a good clip that the average temp is actually enormous; total heat is minimal, though.

So no, you can't define an average temp for the system you describe, but it's not because it's not "in some sense homogeneous"; it's because one of the heterogeneous temperatures is undefined, by definition. Of course, "in some sense homogeneous" may be defined as "homogeneous in avoiding the complete absence of mass", in which case things would be fine.