Short term temperature trends

I've realised that I've been dismissing a load of nonsense merely on the grounds that it's discussing short-term trends, without troubling to look at those trends. But everyone else is talking about them, so why shouldn't I?

Anyway, Lucia says that trends since 2001 are negative, based on a fitting procedure no-one has ever heard of. John V says they are positive.

Looking at the yearly numbers from 2001, they look positive. Ditto from the graph.

So I'm rather confused as to where this whole "temperatures are falling this century" meme comes from. Has anyone bothered to try to sort this out? Someone must have blogged this... please point me to where.

[Update. Perhaps I've conflated the wackos' "no warming this century" with Lucia and Tamino's "since 2001". It probably makes a difference whether you include 2001 or not (as my commenters point out). The obvious point then is that if whether you use an individual year or not, or the first 2 months of 2008 or not, strongly affects your trend, then... your data series is too short -W]

It comes from the simplest possible place: People using numbers to lie.

They cherry-picked the warmest year ever (1998, ignoring the fact that it was a statistical tie with 2005, and that it's just an annual temperature rather than a long-term climate average anyway) and say "See? It has gotten colder since 1998!" Tada! See? It's been _cooling_ for the last 10 years. ;)

[You weren't listening. This is 2001 on, not starting at 1998 -W]

It's the same trick they pulled back in January when they proclaimed that "100 years of warming wiped out in one year": they used short-term variability to argue that long-term averages were going in the direction they want them to go. If you are allowed to cherry-pick the size of the interval (and, of course, make sure it is short enough that normal variation is larger than the signal) you can get any result you want.

By Benjamin Franz (not verified) on 12 Apr 2008 #permalink

OT, but the boyz at the Climate Sceptic mailing list are accusing a "man named Connolley" of censoring Wikipedia, and are considering exposing him. You're in shit now, William.

"In the past I have read of one section of Wikipedia being
censored to prevent any global-warming sceptics from publishing our point of view, by a man named Connolly, IIRC. Now we have evidence (following URL)
detailing how virtually any aspect of global warming covered by wikipedia is censored to show only the warmist point of view"
- I. McQueen

[Should be fun, if they can learn to spell. This is Oreskes, I think -W]

A human lifetime should still be considered short-term variability when it comes to climate fluctuations. Those who nitpick at microscopic variations and misrepresentations of numerical data aren't concerned with the science or truth, just with having their political agenda represented.

By 1nfinite zer0 (not verified) on 12 Apr 2008 #permalink

Can you point to the specific comment at which John V concludes that he has corrected Lucia's calculations?

The last comment that I see from John V is Comment 1016, the last sentence of which is:

BTW, now that we've cleared up the coincidence of OLS trends from Jan2002 matching C-O trends from Jan2001 I'm pretty confident that your calculations are correct. The spreadsheet *looks* right.

Thanks

[I'm using his original comments. Quite what the above means I'm not sure. Anyway, have you seen this properly analysed anywhere? -W]

Isn't Lucia breaking her own little rule about only using data after the prediction is made? Yes, SRES dates from 2000, but the 0.2C/decade in the IPCC 4AR can't be based solely on SRES, or else the prediction wouldn't have changed from 3AR. To satisfy her rule I'd guess she would need to use the .15 to .2C/decade prediction from 3AR.

This is in addition to Frank's argument with her.

[I'm less interested in that than in the validity of her temperature trends, which simply don't look plausible. Someone must have looked into this -W]

When I calculate the trend using OLS on the hadcrut3gl temperatures from Jan 2002 (not 2001!) through Feb 2008 I get a negative value.

[OK, but from 2001 they are positive. Looking at the (yearly) data I can't see how they can be negative. Is the fitting procedure she is using heavily overweighting the endpoints? -W]
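
A minimal sketch of the kind of start-date check being argued about here, assuming a list anoms of monthly global-mean anomalies beginning January 2001 has already been loaded from whichever record you prefer; the function name and layout are illustrative, not anyone's actual code:

    import numpy as np

    def ols_trend_c_per_century(anoms, start_index):
        """OLS slope of monthly anomalies from start_index onward, in C/century."""
        y = np.asarray(anoms[start_index:], dtype=float)
        t = np.arange(len(y)) / 12.0              # time in years
        slope_per_year = np.polyfit(t, y, 1)[0]   # degree-1 fit: [slope, intercept]
        return slope_per_year * 100.0

    # anoms[0] is Jan 2001; sliding the start month shows how unstable the trend is:
    # for k in range(14):
    #     print(2001 + k // 12, 1 + k % 12, round(ols_trend_c_per_century(anoms, k), 2))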

Tamino did an analysis.

And lucia has a response, saying that using the surface records versus all temp sets (including the satellite records) is cherry picking, among other things.

As I read Comment 1016, John V explicitly says:

... I'm pretty confident that your calculations are correct.

What's not to understand about that???

Or am I totally screwed up on this?

[He does say that. But what does it mean? Are his own calculations wrong? Is it indeed true that the method Lucia is using can pull implausible trends out of plausible data? I want to know more -W]

Lucia also says she's in the process of teaching herself statistics, and it's odd that she thinks that she's better at it than a professional statistician like Tamino.

As luck would have it I have been having this exact argument over at Andrew Dessler's site. And it does seem to be true: if you use the monthly data you will get a lower trend than if you use the annual values. I did the calculation for the GISS L/O data, and with the annual you get +0.15. With the monthly you get +0.02. Probably because with the monthly you get to include the low Jan and Feb data. I will note that it is not quite apples-to-apples since I generally use the D-N annual values.

John

[Re annual/monthly: so what if you do the fit over the same period, and compare annual and monthly? -W]

By John Cross (not verified) on 13 Apr 2008 #permalink
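
For what it's worth, the annual-versus-monthly comparison asked about in the inline reply is easy to sketch, again with placeholder data: monthly holds the Jan 2001 to Dec 2007 anomalies, and the annual series is just the calendar-year means of the same numbers, so both fits cover exactly the same period.

    import numpy as np

    def trend_per_decade(values, steps_per_year):
        """OLS slope in degrees per decade for an equally spaced series."""
        y = np.asarray(values, dtype=float)
        t = np.arange(len(y)) / float(steps_per_year)   # time in years
        return np.polyfit(t, y, 1)[0] * 10.0

    # monthly = [...]  # 84 monthly anomalies, Jan 2001 .. Dec 2007 (placeholder)
    # annual = [np.mean(monthly[i:i + 12]) for i in range(0, len(monthly), 12)]
    # print(trend_per_decade(monthly, 12), trend_per_decade(annual, 1))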

William:
If you read the series of comments after the one you link, you will see that John V and I discuss the date and method issue at length. He ends with:

John V March 11th, 2008 at 7:25 am

I've been able to confirm that we were working on the same numbers. (A small step, but definitely important). I will try to find time to understand C-O so I can check the rest. Today does not look good though...

BTW, now that we've cleared up the coincidence of OLS trends from Jan2002 matching C-O trends from Jan2001 I'm pretty confident that your calculations are correct. The spreadsheet *looks* right.

As for the method of fitting the trend: It's called Cochrane-Orcutt. It is suitable when the lagged residuals exhibit AR(1) noise. If you google, you'll find the first hit is Wikipedia. As you know, that online encyclopedia has volunteer editors who work tirelessly to ensure only good information is included. Their discussion is here.

The derivation of the transformation is rather transparent. But after it's done, one can apply ordinary least squares to the transformed variables. The main advantage of the method over OLS is that you get less scatter about the "true" underlying trend. This permits one to get out of the region of high "beta" error quickly.

The method of calculation is discussed in the blog article. (You should also be aware that if you wish to repeat this, you must update with the most recent data, particularly for GISS. Their method of determining the monthly values means they keep changing for quite a long time after they are first recorded.)

Of course, one needs to check the lagged residuals. In the first analysis, they were near the margin of the 95% confidence intervals, suggesting there might be remaining autocorrelation in the residuals. However, that changed in the later analysis, mostly due to the various agencies updating their monthly data, but also for other reasons. You can see the various methods I applied to examine the lagged residuals here:
Accounting For ENSO: Cochrane Orcutt
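
For readers who want to see what Cochrane-Orcutt actually does, here is a minimal sketch written from the textbook description above rather than from Lucia's spreadsheet: estimate the lag-1 coefficient of the OLS residuals, quasi-difference both variables with it, and re-run OLS on the transformed series, iterating a few times. The function and variable names are made up for the example.

    import numpy as np

    def cochrane_orcutt_trend(y, t, n_iter=10):
        """Iterated Cochrane-Orcutt fit of y = a + b*t with AR(1) errors; returns (b, rho)."""
        y = np.asarray(y, dtype=float)
        t = np.asarray(t, dtype=float)
        slope, intercept = np.polyfit(t, y, 1)          # start from the plain OLS fit
        rho = 0.0
        for _ in range(n_iter):
            resid = y - (slope * t + intercept)
            # lag-1 coefficient of the residuals (the AR(1) estimate)
            rho = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)
            # the C-O quasi-difference transformation (drops the first point)
            y_star = y[1:] - rho * y[:-1]
            t_star = t[1:] - rho * t[:-1]
            slope, const = np.polyfit(t_star, y_star, 1)
            intercept = const / (1.0 - rho)             # undo the transformation of the intercept
        return slope, rho

    # y: monthly anomalies; t = np.arange(len(y)) / 12.0 gives the slope in C/year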

Brian
The issue of selecting the start year is important.
For what it's worth, 0.15 C/decade also lies outside the uncertainty intervals if we begin analysis in 2001.

Also, I initially did say that ordinarily, I would evaluate the AR4 based only on data after 2007. But readers pointed me to references showing the provenance of those particular projections is in an earlier document and so a suitable date seems to be 2001.

If you'd like to suggest a date, I'd be happy to run it alongside. It's just a matter of running another spreadsheet. People are interested, and I have no objections to running two start dates. (Running a zillion would be too much work, and difficult to communicate.)

So, feel free to suggest one.

Even just starting with 2007 would be interesting, as I promised my readers a discussion of beta error, and using a start of 2007 would be useful. Plus, should the 2001 falsification be an outlier, running each month will show it flipping to "failed to falsify" and then never flipping back. If 2C/century is wrong, but say 1C/century is right, we should see flipping back and forth. But, with respect to discussing beta errors, running both together would probably interest them. I can do both some time this week -- assuming NOAA has their numbers available.

FWIW: I also wrote that projections should hind-cast. So, it is a problem if they don't, and in that sense, I could pick the date the AR4 itself specifies as the beginning of projections. The figures with the projections show data up to 2001. I can't think of any other start dates that could be justified, but if you know some reason, unrelated to the data itself, why some other date would be a good candidate, suggest it.

I do think it's a problem if the analyst can just pick a year of their choice. For now, I'm sticking with starting in 2001. If we start in 2002, the slope is more negative because we were at the peak of ENSO. If we go back to, say, 1998... well... I think the choice of that date is clearly not justifiable. (Many have begged me to pick that.)

Starting in January 2001 happens to put the beginning of the period in a slight "low" relative to the mean behavior, thus giving less negative trends than most choices. But I didn't pick 2001 because of the features of the data. I picked it because either 2007 or 2001 make the most sense to me.

Also, if you have other features that you think should be included in the test -- tell me. I added ENSO. If you have concrete information about PDO, AMO or anything else, I'd be happy to either a) include it or b) discuss it qualitatively. (There are some difficulties in incorporating MEI into the mix; I'll be happy to admit them. I'm trying to figure out the best way to incorporate these, but for now using the MEI seems more or less reasonable.)

Boris:
I did not say using surface temperature is cherry picking. We discussed this at some length, and you know I also used surface measurements.

When Dan mentioned Tamino's analysis, I did state this in the comments:

Steve-
It's Easter. My family is here. I haven't seen a trackback, so presumably I'm not linked, which suggests Tamino must not want his readers to stray over to read alternative views.

How much you wanna' bet the post will exhibit:
a) cherry picking by selecting GISS- the data set with the highest slope.
b) ignoring the existence of other data sets. Doing this ramps up the uncertainty introduced by "instrument error". Imagine if you measure the same thing with 4 different instruments, then average: this reduces the data error. In contrast, picking only one increases the uncertainty due to measurement error.

Of course, picking the instrument that best exhibits what you wish after the data come in is... well... as I asked before, "Rainiers or Bings?"

So, it's picking only one instrument set out of the five well-respected sets, and doing so knowing this is the case, that is cherry picking. It appears I managed to guess the data set he picked. :)

When I started the open discussion thread on my blog back in January (or December) I didn't know which time periods or data sets gave which results. But at this point, having blogged a while, I certainly do. At this point, I know that nearly any climate blogger knows that, recently, if you use GISS you get either the most "up" trend (or least down) and if an analyst picks Hadcrut you get the most down.

However, I am showing results not including ENSO using five data sets individually, and then doing the full analysis with and without ENSO using the average of the five data sets. I am using the latest data -- not stopping in December 2007 -- and I'm using a method that is specifically appropriate when the residuals from an Ordinary Least Squares fit show serial autocorrelation.

[I'm not convinced by your use of C-O, on the grounds that people don't use it for climatology. One thing I'm rather unclear about is whether it just gives you different confidence limits or if it affects the trend line too. If the latter, what trends do you get from LS?

Second, I'm still unsure what years you are using. You say "since 2001", does that include 2001? -W]

Lucia is using Cochrane-Orcutt regression to remove the effect of autocorrelation. I'm not sure that this is a good thing, but what happens if you do it is that you get a much larger error range, and this she does. The trend is negative, but less significant. So it is not really advancing her argument; the error ranges for trends from the C-O analysis include the ranges for simple regression, so they are less effective for disputing the IPCC estimates.

Her analysis is done on a simple mean of the four data sets. As someone there pointed out, this is doubtful, because they aren't independent. You can use her spreadsheet to do the individual datasets, with divergent results.

The C-O analysis does pull down the trend. I think that is because the latest dip in Jan-Feb (she did not have March figures) shows little correlation with previous numbers, and so stands out after correlation removal.

I think I now have a better handle on Lucia's analysis. I agree that it is better to use an AR(1) model, with C-O regression. The reason is, as she says, that otherwise the variability of the slope will be underestimated, and the IPCC value will be spuriously rejected. That could happen even with C-O.

I tinkered with her spreadsheet, and tested the slope based on data finishing in Oct 2007. The ordinary regression, at 95% gave a positive gradient of 0.6, range -0.4 to 1.6, C/century. C-O gave grad 0.1, range -2.0 to 2.2. So the IPCC value is not rejected for C-O.

The change of slope given another three months data is large, and this should not happen with regression. The statistics show what happened. The sd for the residuals for C-O is 0.077, but the residuals for Nov, Dec and Jan are 0.1, 0.13 and 0.27. These are outliers, the last being about 3.5 sd. That most likely means that the errors are not truly normally distributed - ie the regression model (even with the C-O adjustment) is not appropriate.

Maybe the ENSO adjustment can fix this; I'm not sure. The key observation is that whatever happened in January can't plausibly be explained by a conventional linear model.

[OK. So my comment is, who says the data is AR(1)? It doesn't seem especially likely a priori. If C-O is affecting the mean slope, then I don't trust it.

The usual thing people do is use LS, but then adjust the d.o.f. by (1-ac) when calculating the confidence limits. It's not clear why Lucia isn't doing this. Is she just picking methods randomly off the shelf? -W]
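
A minimal sketch of the "OLS plus deflated degrees of freedom" approach mentioned in the reply, using the common effective-sample-size correction n_eff = n(1-r1)/(1+r1); the exact adjustment factor varies between authors, so treat this as illustrative rather than as anyone's published recipe.

    import numpy as np
    from scipy import stats

    def ols_trend_ar1_adjusted(y, t, conf=0.95):
        """OLS slope with a confidence interval widened for lag-1 autocorrelation."""
        y = np.asarray(y, dtype=float)
        t = np.asarray(t, dtype=float)
        n = len(y)
        slope, intercept = np.polyfit(t, y, 1)
        resid = y - (slope * t + intercept)
        r1 = np.sum(resid[1:] * resid[:-1]) / np.sum(resid ** 2)     # lag-1 autocorrelation
        n_eff = max(n * (1.0 - r1) / (1.0 + r1), 3.0)                # deflated sample size
        sigma2 = np.sum(resid ** 2) / (n_eff - 2.0)                  # residual variance with fewer d.o.f.
        se = np.sqrt(sigma2 / np.sum((t - t.mean()) ** 2))
        half = stats.t.ppf(0.5 + conf / 2.0, n_eff - 2.0) * se
        return slope, slope - half, slope + half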

lucia,

So, it's picking only one instrument set out of the five well-respected sets, and doing so knowing this is the case, that is cherry picking. It appears I managed to guess the data set he picked. :)

But Tamino uses GISS and CRU data, so you didn't even guess right after the fact. :)

Also, doesn't the IPCC project surface temperatures? Do you have any evidence that averaging in temperatures for a huge swath of the troposphere (satellite measurements) is a valid comparison?

And, these data sets use some of the same instruments, so I don't think your "instrument error" point is valid either.

Twice now, I am violating my normal rule about leaving ridiculously long comments. (I think, generally, long comments should be turned into posts rather than hogging other people's comments. But this content doesn't seem post-worthy, or I've already blogged about it.)

Pliny--
All that is required to reduce the uncertainty due to "measurement noise" is for the measurement errors in one instrument to be uncorrelated with those in the others. I don't analyze each independently, average those, and base my uncertainty intervals on that.

Averaging over the instruments cannot reduce the uncertainty in the determination of the trend due to weather variations. That weather signal dominates the data reported by all instruments and does introduce strong correlation. I have explained that in a post here.

With regard to falsification: It is useful to reduce the "white" aspect of instrument errors, as that is not expected to have the same spectral properties as the actual weather. Averaging over all instruments is an appropriate way to do this. Not averaging maximizes the amount of instrument noise in the data that are analyzed.
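
Lucia's point about averaging instruments is easy to see with a toy simulation: give five synthetic records a shared "weather" wiggle plus independent instrument noise, and the average removes much of the independent noise while leaving the shared weather variability alone. The noise sizes below are made-up illustrative values, not estimates for any real record.

    import numpy as np

    rng = np.random.default_rng(1)
    n_months, n_instruments = 86, 5
    weather = np.cumsum(rng.normal(0.0, 0.03, n_months))       # shared "weather" signal (toy)
    records = [weather + rng.normal(0.0, 0.05, n_months)        # each record adds its own noise
               for _ in range(n_instruments)]
    mean_record = np.mean(records, axis=0)

    # Independent instrument noise shrinks by roughly sqrt(5) in the average...
    print(np.std(records[0] - weather), np.std(mean_record - weather))
    # ...but the shared weather variability is untouched, so it still dominates the trend.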

Also, I discussed Tamino's article about the uncertainty intervals here. Tamino's discussion explains why, if you use OLS when there are serial autocorrelations, the uncertainty intervals for OLS are very high.

They are. That's why one shouldn't use OLS. It results in an unnecessarily large amount of uncertainty in calculating the trend because it's the wrong method.

You will note that I demonstrated that the residuals remaining after applying the C-O fit to all four data sets appear white. The link is given above.

William: If we correct for serial autocorrelation, the estimates of the trend don't jump around as they do with OLS. Much of the jumping around is due to using OLS when serial autocorrelation exists. Also, introducing the MEI index further reduces the uncertainty intervals. I believe you suggested I should consider ENSO? :)

Boris:
In his first post addressing my falsification of the 2C/century trend, Tamino used only GISS. That's here. In later posts, which coincidentally were posted after I commented on his decision to use only GISS, he included Hadcrut.

I have blogged on what he said in both those posts. His arguments don't reverse the falsification.

William (again):

On this:

Perhaps I've conflated the wackos' "no warming this century" with Lucia and Tamino's "since 2001".

You actually need to be careful responding to that argument too. Because as a matter of statistical fitting, using a confidence interval of 95% (or even 90%) there is no warming this century. The difficulty with that argument is that that test has no power. The beta error is high. (And, FWIW, the low power is also a reason why the trends jumping around doesn't actually mean the falsification is wrong!)

One of the reasons you are having trouble dispelling the "no warming this century" claim is that you guys have been communicating the proof of AGW as "failure to falsify", with little additional information. (Or worse, gooey claims like "17 years is enough!") So, it's your own incomplete explanations that are biting you in the hindquarters.

You need to stop with the "17 years is enough" etc. and start discussing the power of a test. Power is 1 - beta error. "Failure to falsify" means nothing when the power of a test is low, and the "no warming" argument is a failure-to-falsify argument. That's the correct way to counter that argument -- not "17 years is enough!" (I discussed this in Falsifying is hard to do.)

(If you look at the curve, if the "true" underlying trend for this decade is 2C/century, and I had only annual data, and used one series only, the beta error is about 50%. So, half the time, you would "fail to falsify" 0 C/century. If you look at the beta error results for 17 years, you may suddenly realize that some of your intuition about long times has to do with interpreting "failed to falsify" as meaningful.)
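
A minimal Monte Carlo sketch of the beta-error point, under assumed numbers: a true trend of 2C/century (0.02 C/year), 86 months of data, AR(1) monthly noise with made-up parameters, and a naive OLS test at roughly 95%. None of these values are Lucia's; they only illustrate how often a real trend of that size fails to be detected in so short a record.

    import numpy as np

    def power_to_detect_trend(true_slope=0.02, n_months=86, rho=0.6,
                              sigma=0.1, n_sims=2000, seed=0):
        """Fraction of simulations in which OLS rejects a zero trend at ~95%."""
        rng = np.random.default_rng(seed)
        t = np.arange(n_months) / 12.0
        detections = 0
        for _ in range(n_sims):
            eps = rng.normal(0.0, sigma, n_months)
            noise = np.zeros(n_months)
            for i in range(1, n_months):
                noise[i] = rho * noise[i - 1] + eps[i]    # AR(1) "weather" noise
            y = true_slope * t + noise
            slope, intercept = np.polyfit(t, y, 1)
            resid = y - (slope * t + intercept)
            se = np.sqrt(np.sum(resid ** 2) / (n_months - 2)
                         / np.sum((t - t.mean()) ** 2))
            if abs(slope) > 1.96 * se:                    # naive interval, ignores autocorrelation
                detections += 1
        return detections / n_sims                        # power; beta error is 1 minus this

    # print(power_to_detect_trend())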

As for short data sets: There are some inherent difficulties with both short data sets and statistical tests in general. The main problem for short data sets is beta error, which means that "failed to falsify" teaches us nothing.

There are other difficulties with short data sets. It's just not the one you seem to be suggesting.

Now, for the most important bit:

With regard to the falsification that bothers you so much: the falsification exists. Statistical outliers happen. In physical systems they generally happen for a reason. So, yes, the falsification at 95% either means a) 2C/century is wrong or b) "weather" events that don't happen very often happened. And it just so happens, it happened during the period selected for analysis.

So, unless you honestly believe 2001 is cherry-picked, or there is something wrong with not throwing out recent data, then either (a) or (b) seems to have occurred.

I've always thought the most likely reason for a false positive for this test would be the PDO shift. My "falsification at 95% confidence" is still a true statement even if the cause is the PDO. But that doesn't mean we can't try to figure out if it is the PDO. Identifying whether a falsification is a true positive (that is, 2C/century really is false) or a false positive is the normal response to getting a falsification. It's not to simply deny statistical falsification occurred. (That false positives happen for physical reasons is indisputable. The full variability of GMST has physical causes, and so events that happen 1% of the time still have physical causes.)

The possibility that the false positive is due to the PDO can be explored. To do so requires some estimates about the relationship between the PDO and GMST. If the PDO is, like baby bear's bed, of "just the right" magnitude, it will be big enough to explain a short outlier due to a shift that occurred at just the "wrong" time, but not be so strong as to overturn the empirical support for AGW based on measurements since the mid or late 70s.

Doing such an analysis has the potential of simultaneously getting rid of one denialist argument for why the run-up since the 70s means nothing, while identifying a known, named physical oscillation that could result in the data we see. But, to achieve that potential, someone has to suggest quantitative estimates of the PDO (or other possible oscillations).

If you, or anyone, has such numbers, I'd be happy to do an analysis. You know Tamino will look at whatever I do. So, there is some cross-checking on the analysis. If realistic values of the PDO can explain the outlier, I'd be perfectly happy to say that. (And estimate the period of ambiguity to boot.)

But right now, you need to realize that there is a flat spot. It's not just one month, one day etc. Pointing to downturns due to volcanoes isn't going to explain it away. Something at least somewhat quantitative needs to be done to convince others that the flat spot is consistent with 2C/century. (Or, we can just wait for 17 years until the PDO switches again; meanwhile, this will oscillate in and out of falsification. Every time it unfalsifies, I'll say "Could be beta error!", and I'll state the beta error and explain. I've already promised my readers another post on what to expect if the falsification is a true positive, and what to expect if it's a false positive. I've thought through the discussion, but I need to run the numbers.)

[That was a long comment, and you've probably attempted to do too much in it.

To me it seems clear that you haven't justified using C-O over LS. Moreover, using the different methods produces very different results. Asserting, as you appear to, that C-O is obviously correct won't do. If you've even demonstrated that the data are AR1 (which would be tricky, since they aren't...), I've missed it. The usual thing is to use LS and then deflate the d.o.f. - if you've got a reason for not doing that, you haven't explained it.

As to "So, yes, the falsificatino at 95% either means a) 2C/century is wrong or b) "weather" events that don't happen very often, happened. And it just so happens, it happened during the period selected for analysis" - yes, I think (b) is entirely likely. After all, if you'd done the analysis and nothing interesting had come out, you wouldn't have published it, or if you had no-one would have bothered read it. As far as I can tell, if you do the fit from 2001-7, you get nothing interesting. Now that March is in and warm, its entirely likely that your analysis collapses anyway -W]

lucia,

Regardless of who posted what when, Tamino uses both GISS and CRU. Yes, he did use GISS at first, but, as I noted on your blog, he has stated why he thinks GISS is a better measure of temp. Your initial prediction that Tamino would use GISS could have been based on knowing Tamino's preference for all I know. After all, you made that statement after Tamino had indicated he preferred GISS. Let's not assume bad faith, however.

I've always thought the most likely reason for a false positive for this test would be the PDO shift. My "falsification at 95% confidence" is still a true statement even if the cause is the PDO.

Sigh. And, yes, it's still an apples-to-oranges comparison with no meaning.

All you've managed to "prove" is that a prediction of a uniform rising trend without noise of 0.2C per decade isn't going to be tracked by the actual temperature record. The projections were never meant to be used this way. All you've "proved" is that you've falsified an unintended interpretation of the projections.

Hey -- what's my name doing over here? :)

My stats knowledge is limited to a single undergrad course many years ago, so I'm not qualified to have an opinion on OLS vs C-O. I was very surprised that the computed trends would be so different. My original comment to lucia (linked in the post above) was triggered by a coincidence -- the OLS trend from Jan2002 closely matches the C-O trend from Jan2001.

I'm curious about the results using March2008 data. I'm also curious about the trends using other regression techniques. (IIRC, lucia did mention one other technique before this became such a hot topic).

I've been too busy with my real job(s) to dig any deeper. Hopefully someone, preferably someone with a crowd around their soap box, has the time and inclination to update the trends.

[Ah, jolly good, I was hoping you might comment. Your comments have been taken as endorsing Lucia's position, which I somewhat doubt. No-one seems to have the time to do a proper comparison, though Tamino has done a number of different periods -W]

There's a page out there in Google that says:
"Sorry William, but statistically, your funky little graphs tell us practically nothing..."

By Hank Roberts (not verified) on 14 Apr 2008 #permalink

[Your comments have been taken as endorsing Lucia's position...]
On statistical grounds, I can neither endorse nor refute lucia's position. I merely checked her Excel spreadsheet for glaring errors and reported that I found none.

I've committed myself to learning R so I don't have to wait for others to do these analyses. I just need to find the time...

Sorry, lisa, but anyone who claims that 20% of the world's cities will be collapsing (affecting 107 million Americans, among other things) within four years is just a bit out there ...

Neither here nor there on lisa's link, dhogaza, but we should bear in mind that human societies have a rather poor track record when it comes to anticipating and adjusting to major disruptions. Tracking the response of Californians to the new *promise* (OK, only a 99.97% chance) by scientists that there will be another big one within the next thirty years will be instructive in that regard.

By Steve Bloom (not verified) on 15 Apr 2008 #permalink

One large earthquake in California within the next thirty years is on a slightly different scale than the collapse of modern civilization starting as soon as four years from now, which is more or less the claim of the site referenced...

Below table lists the world's cities that are likely to collapse completely or partially by or before 2012¹ in the first wave of collapse...

Is your city safe?

To prevent misuse of data, commercial exploitation, or property speculation, the project coordinators are withholding names and specific details of the first phase of world's collapsing cities until further notice. See table below for general information.

Followed by a table claiming that 500+ cities will collapse partially or completely by or before 2012.

That's out there.

The near-certainty of Another Big One in CA isn't!

I guess my point is just that if such a thing really was imminent our response to it would be less than optimal.

By Steve Bloom (not verified) on 15 Apr 2008 #permalink

To prevent misuse of data, commercial exploitation, or property speculation,

I can deal with a few hundred thousand people dying, but I'll be damned if I'll abide property speculation.

Our models use data from multiple sources, some of which are believed to be reasonably unbiased and accurate.

Well, there you go. If only AGW denialists were so honest.

"The warmth of 1998 was too large and pervasive to be fully accounted for by the recent El Nino. Despite cooling in the first half of 1999, we suggest that the mean global temperature, averaged over 2-3 years, has moved to a higher level, analogous to the increase that occurred in the late 1970s."

I think this should win an award for short-term analysis.
hansen99

By steven mosher (not verified) on 19 Apr 2008 #permalink

Why? He doesn't state that those 2-3 years prove a trend, he's suggesting that they're *consistent* with a trend, that the extreme value is a result of adding a strong El Niño to an existing trend computed over a long period of time.

And, oh, on the "global cooling has started" front, land temps in March were the highest recorded, while temps at sea (obviously, since the global average was only the third highest recorded) weren't - entirely consistent with the current La Niña cycle we're in.

Dhog.

"we suggest that the mean global temperature, averaged over 2-3 years, has moved to a higher level."

[I'm not about to defend H99 -W]

You need to lighten up. I don't see the word "consistent" in this quote; I see "moved to a higher level".

And no, they didn't prove it. They suggested it. What lucia did was show that the trend for the past 6 years was inconsistent, at some level of confidence (say 95%, 90%, whatever), with a prediction of 0.2C per decade. Nothing more, nothing less.

[No. Lucia has not done this. Lucia has used one, rather non-standard method, on one very carefully selected timeframe. Using the standard obvious methods on 2001-and-onwards produces a completely different result. Her methods are not robust or reliable -W]

No suggestion on her part (or mine) that the downward trend would continue or that the downward trend was somehow disproof of AGW. In fact, I've repeatedly stated that I expected the warming to pick up again in due course. But as a matter of fact, one can do statistics on 74 months of data, one can construct wide error bars for that, and those error bars would rule out, at some level of confidence, trends of a certain magnitude. You can do analysis on short periods. It just happens to be very uncertain. The shorter, the more uncertain. But if you predicted the next 10 years would see a 10 degree C increase, then after a few years I could probably rule that out with some confidence if the actual figure were, say, 1C.

By steven mosher (not verified) on 19 Apr 2008 #permalink

William,

Lucia has not used ONE METHOD that is unreliable. She used OLS, and reported those results. She used C-O, a standard method, cookbook stuff, and reported those results. Some of her readers suggested modelling MEI and throwing that in; she did that. Other people complained about averaging all five temperature series, so she did the analysis 6 ways. Now, JohnV is suggesting some other work.

IT'S CALLED OPEN INQUIRY. What if we do it this way, what if we do it that way? Does this method work? Is that method robust? Here is my data. Here is my method. Does our answer change if we change methods? Which method is better? Since Lucia is a practicing engineer she is used to doing this kind of six-ways-from-Sunday analysis.

By steven mosher (not verified) on 23 Apr 2008 #permalink

Since Lucia is a practicing engineer she is used to doing this kind of six-ways-from-Sunday analysis.
Posted by: steven mosher | April 23, 2008 8:44 PM

I always thought that "analysis" involves understanding, not just throwing numbers into black boxes you don't understand to see if the results match long-term trend projections that don't take into account short-term variability due to ENSO, volcanic eruptions etc and therefore were never intended to be short-term forecasts in the first place ...

All she's proven is that the system is noisy and therefore can't be expected to fit a linear trend short term, as though the world's climate scientists have ever claimed anything to the contrary.

But the likes of you just lap that stuff up like it's ambrosia, don't you?

dhog,

"All she's proven is that the system is noisy therefore can't be expected to fit a linear trend short term, as though the world of climate scientists have ever claimed anything to the contrary."

She hasn't proved anything of the sort. What she has done is apply a standard method to a dataset. She has estimated the trend in that dataset and noted the errors associated with that estimate. And she has noted that the IPCC projection for the same time period is outside of those error bars. That's it. This will change in the future. You should actually read what she writes. She doesn't bite.

WRT finding trends: it is true that if your goal is to FIND or confirm that a small trend (0.2C/decade) is present in a noisy signal, you need a lot of data, so as to avoid beta errors or type II errors. To RULE OUT a specific trend, say 1C per year -- to RULE OUT or falsify such a hypothesis -- you can do this with less data; see alpha error.

So, the amount of data required to identify a small trend is larger than the amount required to rule out certain trends.

Anyway, you and I won't ever agree on much, even when we agree.

By steven mosher (not verified) on 25 Apr 2008 #permalink

Well, one nice thing about standards is that there are so many of them, as they say in computing.

Why does William say "relatively nonstandard" while Steven says "standard" about this particular method? Basis? Evidence of use of this method in other publications?

[The method Lucia used is not standard, it is essentially unheard of within climatology. There are other, standard, methods of accounting for autocorrelation, which have the advantage of not assuming the data is AR1. Why Steven thinks it's std is a mystery to me - is he claiming to understand this area? I didn't think he did - W]

By Hank Roberts (not verified) on 26 Apr 2008 #permalink

She hasn't proved anything of the sort.

Her original claim was to have proven to a 95% confidence level that IPCC projections are wrong.

Now, perhaps her journey has included enough education for her to realize her original claim was crap. I haven't followed her journey. Not worth the time or effort since the *premise* that a six-year interval including a strong La Niña can be meaningfully compared to IPCC projections is bogus in the first place. It's a misrepresentation to claim that the IPCC projects monotonically increasing temperatures with no short-term variability due to ENSO, volcanos, etc.

You can smear as much lipstick on that pig as you want, it's still going to be a pig.

And, no, I'm unlikely to agree often with a man who compares Mann's work to the Piltdown Hoax.

William, C-O is unheard of in climate science? Do you want an opportunity to check on that?

Environmental Statistics, Chapter 7, page 269:

www.stat.unc.edu/postscript/rs/envstat/intro.ps

I haven't had the time to run down all the cites in that chapter, but I did see one reference to Tom Karl.

There are more, of course

http://www.agu.org/pubs/crossref/2005/2005JD005895.shtml

So, not unheard of.

The technique is standard cookbook statistics. The method has been used and explored in many areas of climate science. That does not make it a correct method for all time series, but it is surely not unheard of. Present company excepted.

By steven mosher (not verified) on 28 Apr 2008 #permalink

Belated thanks to William for the inline reply:

[The method Lucia used is not standard, it is essentially unheard of within climatology. There are other, standard, methods of accounting for autocorrelation, which have the advantage of not assuming the data is AR1. Why Steven thinks it's std is a mystery to me - is he claiming to understand this area? I didn't think he did - W]

Did you ever get any sensible answer about this stuff? New people (or new userids, anyhow) keep appearing proclaiming the truth of whatever it is Lucia is supposed to have done to refute climatology and pointing vaguely to her site as evidence (sigh). There's one now at RC going on about it.

Not that I think this stuff is ever laid to rest.

[No, I never got an answer, or expected one -W]

By Hank Roberts (not verified) on 05 Aug 2009 #permalink