McKitrick update

McKitrick has added a correction to the page describing his paper, which I posted on here, that purports to find economic signals in the warming trend. McKitrick admits to mixing up degrees and radians but claims:

There was a small error in the calculation of regression coefficients in our paper. Our conclusions were not affected by this problem

As I noted in my post, correcting the error halves the size of the economic signal in the warming trend, reducing it from 0.16 (out of 0.27) to 0.09. McKitrick’s correction states:

Outside the dry/cold regions the measured temperature change is significantly (previous: primarily ) influenced by economic and social variables.

That’s quite a difference, so how can he say that their conclusions were not affected? Well, all the conclusion says is that there were socioeconomic effects, without mentioning their size. The sizes of the effects, which change substantially, are mentioned only in the body. And the “bombshell” nature of the paper touted by Michaels et al in their TCS article depends on socioeconomic effects being the primary cause of the warming trend, something that McKitrick has now retracted.

McKitrick has also failed to correct or even acknowledge another serious problem in his paper—he has not corrected his standard errors for clustering. This is required because his socioeconomic variables are all the same for the stations in the same country. This means he will find some variables to be statistically significant when they are not really so.
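For readers unfamiliar with the clustering problem, here is a minimal synthetic sketch (invented numbers, not McKitrick’s data): because every station in a country shares the same country-level regressor, residuals are correlated within countries, and naive OLS standard errors come out far too small compared with cluster-robust (Liang–Zeger) ones.

```python
import numpy as np

rng = np.random.default_rng(0)
n_countries, per = 20, 30
n = n_countries * per
country = np.repeat(np.arange(n_countries), per)

# Country-level regressor (identical for all stations in a country)
# and a country-level shock, mimicking shared socioeconomic variables.
x = np.repeat(rng.normal(size=n_countries), per)
shock = np.repeat(rng.normal(size=n_countries), per)
y = shock + rng.normal(size=n)  # true coefficient on x is zero

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
bread = np.linalg.inv(X.T @ X)

# Naive homoskedastic OLS standard error for the slope
naive_se = np.sqrt(resid @ resid / (n - 2) * bread[1, 1])

# Cluster-robust standard error, clustering by country
meat = np.zeros((2, 2))
for c in range(n_countries):
    s = X[country == c].T @ resid[country == c]
    meat += np.outer(s, s)
cluster_se = np.sqrt((bread @ meat @ bread)[1, 1])

print(f"naive SE: {naive_se:.3f}  cluster-robust SE: {cluster_se:.3f}")
```

The naive standard error treats all 600 stations as independent observations; the cluster-robust one recognizes there are effectively only 20 independent country-level draws, so it is several times larger.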

Nor has McKitrick explained why he decided to take the cosine of the absolute latitude in the first place. Calculating it correctly makes no difference to the model, while calculating it incorrectly makes the model fit worse. There does not seem to be any theoretical or empirical justification for this change to his model. As John Quiggin observes:

a trawl back through the files makes it pretty clear that this error was not exactly an innocent mistake. It seems pretty clear that McKitrick tried some regressions with (absolute) latitude as the explanatory variable, didn’t like the results he got and switched to the cosine (note that, if you were starting here, you wouldn’t need to take the absolute value, since cosine is a symmetric function). Because of the degrees-radians mistake, this variable came out insignificant, as desired, and McKitrick didn’t do the checks that would have revealed the error. Asymmetric error-checking is a standard problem with cherry picking, as illustrated by the work of John Lott.
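The mix-up itself is easy to reproduce (a purely illustrative one-liner): standard trig functions such as Python’s `math.cos` expect radians, so feeding them latitude in degrees silently gives a very different number.

```python
import math

lat_deg = 45.0
correct = math.cos(math.radians(lat_deg))  # cosine of 45 degrees, about 0.707
wrong = math.cos(lat_deg)                  # treats 45 as radians, about 0.525
print(correct, wrong)
```

The two values differ by roughly 25% at this latitude, and the error varies erratically with latitude, which is why a regression using the wrong version fits worse rather than just being rescaled.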

Comments

  1. #1 Louis Hissink
    September 9, 2004

Be that as it may, and factoring in my removal from Sydney to Perth, and taking into account other comments in this blog, I will in the near future write an essay, or article, as you wish, on the use of intensive variables and the resultant errors associated with basing quantitative estimates on such variables. We in the mining industry know it as sample volume variance, and the problems related to it.

As we in the mining industry are focussed on making profits, any statistical technique which fails to stand up to empirical test is resoundingly rejected. Our livelihood depends on it.

As far as the statistical analysis of atmosphere temperatures is concerned, you should carefully consider whether you are analysing the statistics of the thermometers or of the atmosphere.

    Think on it. I have.

    I will publish the essay in due time, and seriously suggest that if you wish to criticise me, make sure you are familiar with the geostatistical literature.

    LAGH

  2. #2 Tim Lambert
    September 9, 2004

    Louis, if you want to write something about temperature it might be a good idea if you made yourself familiar with basic thermodynamics. I suggest a first year physics text.

  3. #3 Louis Hissink
    September 9, 2004

Tim, no need, I have those texts. You, like many of your cohorts, can’t distinguish models from reality.

  4. #4 dsquared
    September 10, 2004

    If this essay is going to try to establish that thermometers don’t measure the temperature it’s gonna be an uphill struggle …

  5. #5 Eli Rabett
    September 10, 2004

    Well, in thermodynamics thermometers establish temperature, or at least the “ideal gas thermometer” does:) Stat mech is a newcomer, and you can show that the thermodynamic temperature scale is equivalent to the ideal gas scale.

  6. #6 John Quiggin
    September 10, 2004

    By the way, I saw on another blog (can’t remember which) that Bizarre Science had stopped, and when I visited, it had disappeared altogether. Perhaps gravett.org is short of server space.

  7. #7 Scott Church
    September 11, 2004

All, Statistics isn’t one of my stronger areas, and I’m not sure what Louis means by sample volume variance. But if I’m reading him right, we’re discussing whether McKitrick’s analysis is evaluating actual atmospheric temperatures or statistical variance in thermometer calibration, measurement methods, etc. – whether we’re measuring statistical fluctuations or atmospheric temperatures. From where I sit, it looks like all this misses the point. The issue here isn’t temperatures – it’s correlation. McKitrick and Michaels aren’t trying to prove that temperature measurements are statistically flawed and inaccurate. They’re taking the existing surface temperature record as a given and attempting to prove that it is due not to actual atmospheric temperature trends but to an “economic signal”, which is sort of a generalization of the urban heat island concept to all human activity – farms, factories, etc. are locally raising temperatures near weather stations and making the atmosphere appear to be warming when it is not. Instrument flaws play somewhat of a role in their effect, but generally, the point about statistical variance and samples seems to me to be largely beside the point. The existing surface record is already quite large and fairly well distributed, so I doubt there’s a concern with sample size. The issue here is where the temperatures we’re measuring (and that we believe) are coming from.

To this end, McKitrick is attempting to demonstrate that global warming is not causing the observed temperature signal; human activity near weather stations is. He’s trying to do this by using multiple regression methods to demonstrate that human activities and the surface record are strongly correlated. The real problem with all this (apart from blunders with datasets) is that to be really reliable, multiple regression techniques must account for all variables that might impact the outcome, and do so in a manner that guarantees ahead of time that the sampling of each is truly independent. In practice, such analyses regularly stumble over the principle that correlation alone does not imply causation: it is too easy to fall far short of the needed inclusion of all variables, and then to try to make up the difference with guesswork – which all too often has a way of leaning favorably toward the result that the analyst wants to get.

    The case of latitude clarifies this. It has been shown numerous times that global warming is strongly tied to latitude – it is much more noticeable in the northern hemisphere than the tropics or southern hemisphere, and this has been shown to follow strictly from the physics of the oceans and lower atmosphere. We reached that conclusion independent of any economic factors and would have expected it regardless of corrections for urban heat island or economic signals. Likewise, economic activity is also tied to latitude, but for completely unrelated geo-political reasons. Growing seasons are best in temperate latitudes, not the tropics or poles, economies and development have followed the distribution of wealth, etc. Economies are driven by a bewildering number of factors ranging from Wall Street to war, as well as climate. The upshot is that atmospheric warming and economic activity will both be affected by latitude, but for completely different reasons. So they will likely correlate to some extent regardless of whether or not they’re causally related.

What McKitrick and Michaels need to do is show that their econometric analysis truly accounts for all factors that drive global economies and climate dynamics without spurious cross-correlations. This strikes me as a herculean task. We all remember what happened to Lott when he tried to use similar methods to explain firearm use!
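The confounding argument above can be made concrete with a toy simulation (entirely synthetic numbers, using a crude monotone latitude dependence for both series, purely for illustration): two variables that are each driven by latitude end up correlated even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(2)
lat = rng.uniform(0, 90, size=1000)  # station latitudes

# Both series depend on latitude plus independent noise;
# neither has any causal effect on the other.
warming = 0.02 * lat + rng.normal(scale=0.3, size=1000)
economy = 0.10 * lat + rng.normal(scale=5.0, size=1000)

r = np.corrcoef(warming, economy)[0, 1]
print(f"correlation: {r:.2f}")  # clearly positive despite no causal link
```

A regression of warming on economic activity that omits latitude (or mishandles it, as in the cosine blunder) would happily report a significant “economic signal” here, even though by construction there is none.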

  8. #8 Eli Rabett
    September 11, 2004

    Hi Scott and Tim, I don’t disagree with what you are saying, but I believe what Louis was getting at is that if you want to assign a global temperature and you sample temperature at points, you have to then assign an area over which the temperature is representative. This IS tricky and is the subject of a great deal of thought among those who compile surface temperature records and also among those who compile temperature fields from the (A)MSU. I would put more faith in them than Louis (well maybe not Christy and Spencer, who have been wrong now what 5 or 6 times??)

  9. #9 Scott Church
    September 11, 2004

    Eli, Yes, I agree completely! This is a problem, and not just for the surface temps, but the radiosonde records as well. It’s less of an issue with the MSU products because they get a pretty decent global view (that’s their real strong point), but it is a big deal for the radiosonde record in particular, and this is one of the biggest problems with both of the Singer, Michaels, & Douglass papers we’ve been following recently. They present the UAH Ver. D analysis as being verified by radiosondes without addressing the fact that the sondes do no such thing because of precisely what you describe. This needs to be addressed as well as the question of correlation.

  10. #10 Tim Lambert
    September 11, 2004

My comments about Louis were based on his previous writings about temperature. His blog has been deleted, but you can get an idea here.

  11. #11 Eli Rabett
    September 15, 2004

One reads more and more. If McKitrick used the same socioeconomic variables for all areas of each country, and two of those countries were Russia and Canada, the whole thing is crap. It’s Bill Gates walking into the room and the average income going to a billion bucks, over and over again.

Take a look at the distribution of income in those two large countries. They occupy just about all of the world north of 60 north, where most of the warming is (and should be, according to models). It gets a lot poorer as you go to the far north. Pah

  12. #12 Louis Hissink
    September 15, 2004

Bizarre Science closed because Aaron and I have had changed circumstances – work is occupying our attention – so we don’t have the time to post to the blog. If we were inhuman that would not have been a problem. So the blog was stopped.

    We have not run out of server space, but time to run a blog.

    As for estimating the earth’s atmospheric temperature, Eli Rabett has pointed to the problem.

    This is not news at all for us in the mining business. It is called ORE-RESERVES.

    Conventional statistics do not work in this case because the unit of statistical analysis, “an individual object”, does not exist.

    Calculating the temperature of the atmosphere is the same as cutting up a human body into smaller bits and measuring each bit to estimate the global temperature.

    Now do we understand ?

  13. #13 Eli Rabett
    September 16, 2004

    The atmosphere is well mixed, the mantle is not

  14. #14 Louis Hissink
    September 16, 2004

    Eli,

    This is the problem, is it not? What on earth has the mantle to do with ore-reserves, earth temperature and what not?

    May I suggest we close this discussion?

  15. #15 Eli Rabett
    September 17, 2004

The tendency to be cryptic is hard to restrain. What it means is that the atmosphere is much more homogeneous than the ground, and that temperatures measured at any point in the atmosphere, whether at the surface or in the middle troposphere, will be very similar to temperatures measured at points even far away. An interesting task is to determine the minimum number of stations one needs to measure global temperature to some accuracy. The number is relatively small, as can be judged from the fact that the very small number of instrumental records extending back to 1820 can still be used to produce a meaningful global surface temperature record.

    Thus, any set of temperature measurements in a global network is strongly overdetermined. On the other hand, I think it would be dangerous to sample at 100 points and use that to determine the composition of the entire earth.

  16. #16 Louis Hissink
    September 17, 2004

    Eli,

The fact is that we get widely varying temperatures in the earth’s atmosphere – negative values in some places, extremely high positive values in deserts, for example – which surely must negate your assertion.

    So the “mixing” mechanism seems pretty mixed up.

  17. #17 Eli Rabett
    September 18, 2004

    Well, to make the point more clearly, I’ll quote a post from Paul Farrar on sci.environment. However, before I do, let me point out that it is the local correlation of temperatures, where local can be even non local as long as the temperatures are correlated over time. This is long, but good.

    In sci.environment 1999/07/13 by Paul Farrar

    On this I would say “quite good enough” back to around 1860, the year depending on how picky you are. Large scale programs of measurement with standardized, centrally-calibrated instruments (unfortunately with national differences) really took off in the 1850s, especially after the Brussels Convention of 1852. It doesn’t matter how over-represented some areas are because you still don’t assign those values to other regions of the globe. It matters, though, how undersampled some regions are. However, for global average temperature the sampling requirements are far, faaaarrrrrr less than one might think on first consideration. The reason is that, for time-averaged temperature anomalies, the values at different locations, often far apart, are highly correlated. And as you increase the time averaging periods the correlated areas assume large, coherent patterns, with surprisingly few degrees of freedom.

Several posters, such as Russell and Grumbine (both of whom, not coincidentally, actually work in climate-related jobs), have mentioned the issue of degrees of freedom in the climate system. For temperature this can be interpreted as the number of thermometers you need to describe the climate system. As you increase the time and space averaging, the number drops precipitously. For example, Kaplan et al. (1997, Reduced space optimal analysis for historical datasets: 136 years of Atlantic sea surface temperatures, _J. of Geophys. Res., 102_, 27,835–27,860) needed about 30 to adequately describe the monthly variation for the Atlantic. However, Mann et al. (1998, _Nature, 392_, 779–787) needed only 5 to describe the modes contributing to global average temperature in the 20th century.

That’s right — you need 5 thermometers to measure the global temperature change of the earth.

In practice, however, you need many more than this, if for no other reason than to show that 5 are enough. Since the 1850s we have had a lot of sites (by the end of the 1860s, total measurements were in the millions). Although there have always been large areas with few measurements, there have always been enough. The extra sites also reduce error. Even though they may be in different locations, because of the large-scale correlations, any measurement, anywhere, reduces the error of the global averages. A series in Sydney, for instance, would help cancel the errors of Seattle. This error cancellation is enhanced by the nature of the error statistics characterizing near-surface thermometers; i.e., largely random, normal.

    A good, very recent, review article dealing with the issues is the cover article for the current (May) _Geophysical Research Letters_, which I highly recommend, especially for the references, which should be sampled as needed. For the subject of this post, see especially the Jones, Osborn, & Briffa (_J. Climate, 10_, 2548-2568) one on sampling error.

    Urban warming is a legitimate issue, and the source of a small real error, but it tends to pop up here (sci.env) as a red herring. The issue for global change is not whether urban stations warm (or sometimes cool), but whether this affects to any significant degree the global average computations — and repeated studies show it doesn’t. See the refs from Jones.

The above, and the factors you mention, are why we have a sufficiently accurate and reliable measured surface temperature record. It is not perfect, but it is quite good enough to show the existence of climatically significant global warming over the past 150 years. (Whatever the cause.) The possible errors are small compared to the trends. This is not true for Spencer and Christy’s satellite “temperatures.” Whatever the merit of S&C’s data may prove to be, it will not, to the huge majority of oceanographers and meteorologists who study this issue, be either a replacement for or a refutation of the surface measurement record.

  18. #18 Louis Hissink
    September 18, 2004

    Eli,

    Your quote suggests that 5 thermometers are all that is needed to measure the global change in the earth’s temperature.

    My point has always been that temperature is not a quantitative measurement, and from reading your comments, this fact remains obscure.

    As for the temperature record, as long as that record is mapped to the individual thermometers, I have no disagreement with the conclusions.


    Louis
