McIntyre's irrational demands

In a comment to my post on the Barton letters, Ed Snack claimed that

Michael Mann made an error in MBH98, he confused the square root of the cosine of the latitude with the cosine

Now if you look at MBH98, cosine latitude is only mentioned here:

Northern Hemisphere (NH) and global (GLB) mean temperature are estimated as areally-weighted (ie, cosine latitude) averages over the Northern hemisphere and global domains respectively

I did a bit of searching and found that Snack's source is this
statement in the supplementary material for von Storch et al.'s paper
"Reconstructing Past Climate from Noisy Data" DOI:
10.1126/science.1096109:

Our implementation of the MBH method essentially follows their
description in their original paper (S17). The statistical model was
calibrated in the period 1900-1980. Monthly near surface-temperature
anomalies were standardized and subjected to an Empirical Orthogonal
Function Analysis, in which each grid point was weighted by (cos
φ)^(1/2), where φ is the latitude (Mann et al. 1998
erroneously use a cos φ weighting).

But the area of the grid cells that MBH use is proportional to cosine
latitude and not to the square root of cosine latitude so I posted a
comment
suggesting that von Storch was mistaken.

Steve McIntyre then pounced on
my comment, presenting evidence that von Storch was correct. He even
stated that my comment was more worthy of criticism than McKitrick's
mixing up of degrees with
radians
in a journal paper
touted as a bombshell that refuted global warming.

It seems that if you want the output from PCA to be weighted by area,
the input has to be weighted by the square root of area. I don't know
enough about PCA to know for sure who is correct here, but certainly
von Storch's criticism has not been refuted, so I
retracted
my comment.
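As far as I can tell, that's because PCA operates on the covariance matrix of the data, where each weight gets applied twice - once for each copy of the data matrix. A toy check in Python (my own sketch; this is not Mann's code):

import numpy as np

rng = np.random.default_rng(0)
nlat, ntime = 5, 1000
X = rng.standard_normal((nlat, ntime))                       # anomaly series at 5 latitudes
w = np.cos(np.deg2rad(np.array([0., 20., 40., 60., 80.])))   # area weights ~ cos(latitude)

Y = np.sqrt(w)[:, None] * X    # weight the *input* by sqrt(cos latitude)
C = Y @ Y.T / ntime            # covariance matrix that the PCA/EOF analysis works on

# Each weight appears twice (once per copy of Y), so the covariance - and
# hence the variance that the EOFs apportion - carries the full cos weight:
print(np.allclose(np.diag(C), w * (X ** 2).mean(axis=1)))    # True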

Neither von Storch nor McIntyre seems to think that the weighting issue
is very important. Von Storch just mentions it in passing and McIntyre
has not bothered to find out what effect it has on the final
reconstruction.

Nonetheless McIntyre repeatedly demanded that I post a ferocious
denunciation of Mann's weighting error. He felt that I was obliged to
do this because my single post on McKitrick's mixing up degrees with
radians when calculating the cosine of latitude meant that I
specialized in cos latitude problems. Now his demand is rather
irrational. Firstly, "cos latitude problems" is a gerrymandered
category engineered to create a false equivalence between McKitrick's
error of using degrees when he should have used radians in a linear
regression and Mann's error of not taking the square root of his
weights in an Empirical Orthogonal Function Analysis. Secondly, one
post out of almost 800 on this blog does not make me a specialist on
that topic. Thirdly, even on a topic where I do specialize, like,
umm, Lott, I still don't have to post on every little move Lott makes.

I explained this to McIntyre, but he insisted that I was this strange
"cos latitude specialist" thing. I don't think he was doing it to annoy me---he seemed to have completely convinced himself. He then felt entitled to
deliver a stream of jibes and insults, accusing me of hypocrisy, of
being petulant and of being a troll. He does this to others as well,
calling Gavin Schmidt and Caspar Ammann "Dumb and Dumber".

He also falsely claimed that I attributed McKitrick's degrees/radians
mix up to McKitrick and McIntyre and falsely claimed that my
criticism of Essex and
McKitrick
was "mostly just
belligerence". Nor would he correct these falsehoods.

If McIntyre's dealings with climate scientists have been anything like
his behaviour towards me, with his irrational demands and unpleasant
manner, I can certainly understand why they might not wish to
correspond with him.


If McIntyre's dealings with climate scientists have been anything like his behaviour towards me, with his irrational demands and unpleasant manner, I can certainly understand why they might not wish to correspond with him.

Absolutely. Stevie Mac's acting like an a-hole.

Note the responses to the Barton letter - the science community is saying they make their information freely available to colleagues, not amateurs acting like a-holes because they have a character assassination site, which presumably should cow or goad someone into making a mistake.

All the people that conduct themselves in this manner are cowards. They should be treated with contempt. Hit them back. This false umbrage about Mann acting uncooperative is just a manufactured tactic of intimidation, nothing more. Mann cooperates with his colleagues all the time - he is in no way obligated to spend hours and hours on amateurs who act like a-holes. No one expects any other public figure - mayor, council member, senator - to spend an inordinate amount of their time on a minor constituency (that is: someone not conversant in an issue, but wanting to argue endlessly about that issue).

D

From what I can see, this boils down to three questions: 1) Is Von Storch right? 2) Did Mann et al misuse the cos weighting? and 3) What if they did?

1) The point of an Empirical Orthogonal Function Analysis is to project a spatio-temporal dataset (in this case) onto a set of orthogonal basis vectors for which the residual variance is minimized. If I understand it correctly (and I'm not at all certain that I do) this involves generating the EOFs from the covariance matrix of the observational data in question, which amounts to maximizing [e^T]R[e] subject to the condition that [e^T][e] = 1, where e is an eigenvector that will be one of the EOFs, R is the covariance matrix of the observational dataset, and ^T denotes "transpose". The covariance matrix will be given by R = (1/N)[X][X^T] where N is the number of observational data points and [X] is the dataset matrix.

So, here goes a big leap of faith that may require correction from someone more knowledgeable than me about these things.

If I'm reading all this right, we will be generating our basis eigenvectors not from our observations [X] but from their variance [X][X^T], and if we want the end result to come out dimensionally correct (areally weighted by latitude) we'll have to weight data points by cos(L)^0.5 rather than by cos(L).

Again, I am in over my head here, so if any of this needs correcting I'm all ears. In any case, even if I missed the mark, from what I've seen most EOF-type analyses of this sort do seem to use cos(L)^0.5 and not cos(L), so it appears that von Storch is correct.

2) That said, based on the quote Tim gave from Mann et al, it's not at all clear to me that they did use cos(L) rather than cos(L)^0.5. The language could be taken to mean that they put cos(L) directly into their EOF analysis, or that they simply did the calculation so that they'd end up with a cos(L) correction overall after the method had been applied. The language they used lands on my ears either way. Do we know for a fact that they really did use cos(L)?

3) Even if they did use cos(L) and not cos(L)^0.5, I doubt the impact would be significant. If the scenario I just laid out is anywhere near the truth, the overall impact would be latitude corrections that are "areally" weighted as cos(L)^2 rather than as cos(L). Both vary from 0 to 1 between the equator and the poles, and the "weighting" curves are of a similar shape; their profiles differ somewhat in proportion, so the overall effect would be a set of EOF eigenvectors that are also somewhat different in their relative proportions but otherwise very similar to what would have resulted from using cos(L)^0.5 in the original analysis. The suggestion that an error like this would lead to something radically different from the Hockey Stick and undermine Mann et al's overall conclusions is not very convincing.

By contrast, confusing radians and degrees in a cos function throws the argument of the function off 57-fold from what it should be, causing a function that should vary smoothly from 0 to 1 to oscillate wildly, changing sign many times in the process. That absolutely WILL ruin the integrity of an areally weighted latitude analysis!
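A quick numerical sketch (Python; the latitude grid is invented for illustration) shows just how destructive that is:

import numpy as np

lats = np.arange(0.0, 90.1, 2.5)    # a made-up latitude grid, in degrees
right = np.cos(np.deg2rad(lats))    # degrees converted to radians first
wrong = np.cos(lats)                # degrees fed straight into a radians cos

# "right" falls smoothly from 1 to 0; "wrong" whips through roughly 14 full
# cycles between the equator and the pole, going negative along the way
print((right < 0).sum())    # 0
print((wrong < 0).sum())    # many negative "weights"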

By my lights, McIntyre is way out of line here (what a surprise :) ). There is no rational basis for the claim that Mann et al's cos(L) "error" is even remotely as serious as McKitrick and Michaels' even if it is real, and it doesn't change a thing regarding the Hockey Stick.

Scott, I looked at his source code and it certainly looks like Mann used cos rather than sqrt cos.

read (1,*) idum1,idum2,lon(i),lat(i)
cs(i) = dcos(lat(i)*pi/180.d0)

...

cslat(jc)=cs(j)

...

c set gridpoint weight on instrumental data
c
c default weight = cos(latitude)
c
do j=1,iabv
weight(j)=cslat(j)
end do

I think you may well be right that this doesn't make much difference. Moberg reported that their reconstruction is the same whether they do it unweighted or area weighted.

Why shouldn't scientists make their information free to A-holes (critics, amateurs) as well as colleagues (union card holders, buddies)? Who cares if someone constructs a specious argument with it. In the end truth will out.

And I think it is perfectly reasonable of the Energy subcommittee to look into quality of energy related science.

Tim,

If memory serves me, you have access to a graph-rendering program of sorts that can produce curves from data points. It would be instructive to post plots of cos(L) vs. cos(L)^0.5 and/or cos(L) vs. cos(L)^2 together. This would give a pretty good visual of the differences that would result from this error. Regardless of whether we're talking about the difference in magnitude of the two or the difference in proportional weight assigned to the various EOFs, I'll bet the impact would be minimal.
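Something along these lines would do it (a matplotlib sketch; I'm assuming only the plotting library, nothing about your actual setup):

import numpy as np
import matplotlib.pyplot as plt

lat = np.linspace(0.0, 90.0, 181)
c = np.cos(np.deg2rad(lat))

plt.plot(lat, c, label="cos(L)")
plt.plot(lat, np.sqrt(c), label="cos(L)^0.5")
plt.plot(lat, c ** 2, label="cos(L)^2")
plt.xlabel("latitude (degrees)")
plt.ylabel("weight")
plt.legend()
plt.show()

# All three curves fall from 1 at the equator to 0 at the pole; they differ
# only in how fast, which is why I'd expect the impact to be modest.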

Steve McIntyre seems intent on demonstrating that he is a man of very little quality. Shameful.

Re: TCO #4

Simply because it isn't just a matter of handing over the data. In the case of M&M, a combination of corrupted data that they had no way of recognizing as corrupt, an inability to read and understand enough of the background research to even follow what MBH did, and an unseemly rush to get their attack paper to press caused far more problems than it was ever worth.

Given that, can you think of a single reason why any competent researcher would, after making their data and methodology available to the research community - the experts - then turn around and hold the hands of a couple of amateurs who you suspect are only doing their "audit" in an effort to skewer you? How much time do they devote to the next 10 crackpots that decide to tackle the problem? How far onto the back-burner do you put your own on-going research in an effort to hold the hands of every wing-nut denialist out there?

As far as politicians meddling in "assessing" the merits of scientific studies, you might have a point if politicians weren't largely scientific illiterates. Barton is merely quoting chapter and verse from the denialists' handbook and as such his review is nothing more than a thinly veiled attempt at continuing the hatchet-job that started with M&M.

By David Ball (not verified) on 09 Aug 2005 #permalink

Nonetheless McIntyre repeatedly demanded that I post a ferocious denunciation of Mann's weighting error. He felt that I was obliged to do this because my single post on McKitrick's mixing up degrees with radians when calculating the cosine of latitude meant that I specialized in cos latitude problems.

Really? It seems to me you have 11 posts regarding McKitrick, and you mention the degrees-radians thing in at least 3 of those posts.

Totally disagree. Hand over the data. Then argue about the interpretation/misinterpretation.

TCO: The data is in the public domain, posted/deposited on various web sites, and always has been, although there has been migration since 1998. Are you talking about something else?

By Eli Rabett (not verified) on 09 Aug 2005 #permalink

Yeah, I would include the details of the methodology as well.

I'm sorry, but I haven't kept track of all the details of data (all data, some, etc.), which studies, computer programs vs. algorithms, responses to Congress's questions, etc.

My point is more philosophical and is a response to the earlier poster who said one should not share information about one's work with a critic if one thinks that critic is doing poor criticism of you. I say instead to share the information and let the fight occur. I realize this is idealistic, but I think it's mainstream philosophy of science.

Let's say I had done experiments showing no effect of Cold Fusion (or demonstrating HighTc or whatever...in any case something that we both agree is true). And let's say that I have a very pernicious critic who is criticizing my work in (I believe) a flawed, perhaps even tendentious manner. I would still share all the details and then fight it out on the details or let third parties judge the work and the criticism.

TCO, I guess that depends on who the critic is. Data and methodology information should always be made available, but how far does one go? The intent of this free access to information is to allow other experts in the field to repeat and possibly extend the original work. It is not intended to be put in a form that the man on the street can follow from point A to B. The operative word here is "peer" review. McIntyre is hardly a peer and his ham-handed handling of the whole MBH study clearly shows this.

By David Ball (not verified) on 09 Aug 2005 #permalink

Totally disagree.

I don't care if an "unqualified critic" sees the materials. Conversely, it is all too easy to say that some people who disagree with me are not "peers" and so can't see the details. At least have that argument when looking at the results of the criticism, not in an effort to forestall it by denying data, methods, etc.

I believe in evolution and were I doing research there, would have no problem letting the ID silly-billies see all the details of my work. Even if they can construct some silly arguments from finding out that I mislabeled a fossil or two, it's not going to validate ID or devalidate evolution. I have nothing to fear.

Disagree away. It doesn't change the facts. These materials are made available for peer review. They are also there for other interested parties to explore, like myself. Having said that, the authors are not under any obligation to go beyond making their data and methodology available.

I also don't care whether M&M agree or disagree with what MBH's work has shown. The real problem is that even after their "audit" was shown to have more holes than a Swiss cheese, they continued to make baseless allegations about the MBH study. Given the lack of rigor displayed in their original study, I'm not surprised that the experts want to have little to do with them.

McIntyre's current demands are merely another attempt to gain some legitimacy for their (M&M's) "work" at the expense of someone else. The many and varied mistakes by McKitrick documented by Tim here in this forum clearly show a pattern of scientific illiteracy that cannot be ignored.

Let's be clear what we're talking about. Any first year student knows that the inputs to most trigonometric functions are in radians, not degrees. It doesn't matter whether you are writing in Perl, C, R (I think), or any of a number of programming languages. For a supposed expert to make a fundamental mistake like this is alright - mistakes happen - but then to claim that such an error has little or no impact on the results is a joke. If you look at the supposed error by MBH, and it may well be one, no-one can say with any certainty what the impacts on their results are. Add to that that we're talking about a now 7-year-old study, and it's pretty clear that McIntyre is simply looking for some publicity. If an error is eventually confirmed in the MBH study, that's great. That's the nature of science: it's self-correcting, but that correction will have little or nothing to do with the agitating coming from a couple of strident amateurs.

By David Ball (not verified) on 09 Aug 2005 #permalink

I have no problem with someone (especially an economist of time) deciding to prejudge an argument by a person based on their credentials. But I see no reason to keep materials secret from such a person.

E. Bright Wilson's AN INTRODUCTION TO SCIENTIFIC RESEARCH is the classic in the field of scientific method/ethics. There is lots of stuff in there about sharing enough details to allow complete reconstructions, about not using appeals to authority, etc. NOWHERE DOES HE SAY "share supplemental materials only with 'peers'".

If McK is full of guff, fine. But that is no reason to restrict the source materials or details of the methods. Let his fallacies speak for themselves. What are you afraid of? Could you imagine Michelson and Morley restricting access to their data or to all the details of their experiment...even if the person asking for it was a crank?

BTW, I have my union card. Have published, etc.

TCO: You keep missing the point. It is not about keeping materials secret from someone. Rather it is about how far one goes and how much time one wastes to provide people with the material that they think they need. And, this is particularly true when they seem to need more material than others need...precisely because they are out of their field.

If Mann et al. had an infinite amount of time at their disposal, then you might have a case that they should use some of this infinite time in this manner. However, they have a finite amount of time and they have the right to decide how to make the most productive use of it. And, they may decide that helping someone who is just trying to hit them over the head with a 2-by-4 (analogously speaking) is not such a productive use of their time.

By Joel Shore (not verified) on 09 Aug 2005 #permalink

That is a new argument: it is too much work to share the details. Of course at some point and in some cases this argument will obviously be right. Regardless, I still disagree with the earlier poster who said that details should be shared with peers, but not with duffers. Note that if you're going to share details with a select group, the "too much work" rationale goes out the window.

TCO, feel free, at any time, to show where information has been restricted/withheld. You need to figure out the backstory to this before making statements like this.

By David Ball (not verified) on 09 Aug 2005 #permalink

That is a new argument: it is too much work to share the details.

I tire of the poor rubes falling for this sophomoric argument.

TCO, if you pester the plumber with dozens of ill-informed questions while he is working, he will, eventually, tell you to GFY.

Why? You are an ignorant idiot preventing him from doing his work. Your daddy should have taught you to let the man do his work.

Now. Change 'plumber' to 'climate scientist' and 'ignorant idiot' to M&M.

The world works the same anywhere on the planet. No difference.

I can't make it plainer than that.

HTH,

ÐanØ

Disclosure in the peer-review process is all well and good, and I think everyone here has made some good points in both directions. Certainly, ignorance should never be encouraged or given a stage presence that will grant it the appearance of having more credibility than it deserves. But on the other hand, the scientific process is self-correcting and as TCO says, ignorance will eventually be revealed for what it is.

But all this assumes that we're debating the merits and demerits of scientific ideas in open forums.

Here, I believe we've crossed over to something different. McIntyre is not merely debating Tim about a scientific point, or even the impact of an error in research on a point that's mutually important to both of them (as Tim pointed out, he doesn't appear to have even investigated the actual impact of the error). He's demanding that Tim pen a denunciation of another scientist's methods, regardless of whether the criticisms are compelling to him or not--and not on the basis of any specific criticisms he has of those methods, or even of the error itself, but because in a rather twisted mutation of the argument-from-authority fallacy he's identified Tim as a "cos latitude specialist".

If this has anything at all to do with the peer-review process or Socratic dialogue, I'm not seeing it.

Furthermore, Barton and the Far-Right in American government have aligned themselves with McIntyre as an ally and an authority--one who can be, and will be, used as an authority for justifying all sorts of policy decisions.... and inquiries of a more calculated and political nature.

We might as well face it. Whether anyone admits it or not, we all know that Barton's inquiries are not about scientific debate. He's not a scientist. He does not participate in scientific forums. And most tellingly, his inquiries directed at Mann et al. were sent personally to them under the covers--not presented in the open where they would be subject to the sort of reasonable debate process we're assuming here. They got out in the open mainly because they were leaked. He certainly didn't gird up his loins and declare his complaint publicly.... until he had to. And.... McIntyre is at the center of that whole fray.

When scientific arguments turn into demands of other scientists, personal attacks, and politicians making "inquiries" with implicit threats behind them, the battlefield has changed.... and the Geneva Convention no longer applies! Trofim Lysenko did not rise to ascendency via peer-review or the free debate of ideas.

While this situation is certainly not that severe (no one is suggesting that Barton is a Stalin, or that McIntyre is a Lysenko), it strikes me as part of a larger trend in American politics that bears some chilling similarities. It's no accident that in last fall's U.S. presidential election, virtually every American Nobel laureate in a scientific or medical field endorsed John Kerry rather than Bush!

  1. I'm sure there are lots of groups that would prefer not to share data with Congress when Congress asks for it, not trusting Congress to do the right thing. Tough.
  2. Dave, I'm responding to Dano to make it clear that I disagree with the policy of withholding data from "aholes" and sharing it with "colleagues". I'm not clear that this has occurred. Perhaps Dano was just urging the policy, but it was not implemented.

"I'm sure there are lots of groups that would prefer not to share data with Congress when Congress asks for it, not trusting Congress to do the right thing. Tough."

We're not discussing Congress. We're discussing a particular Congressman--who is driven by extreme ideology and scientific illiteracy rather than the scientific process that I thought was the point here. And also, a scientist who appears to have grown weary of that process and is getting a rush out of confrontational passion fueled by the same ideology.

"Tough?" That's what Cotton Mather and the judges said during the Salem witch trials. Barton and McIntyre are on little better footing than them, and I'll bet good money that history will also judge them accordingly.

I'll trust this "Congressman" and his political and "scientific" colleagues to "do the right thing" when they prove that they're worthy of that trust. In the meantime, they join the ranks of Cotton Mather.

David Ball, you say:

"I also don't care whether M&M agree or disagree with what MBH's work has shown. The real problem is that even after their "audit" was shown to have more holes than a Swiss cheese, they continued to make baseless allegations about the MBH study."

I've followed the M&M versus MBH debate quite closely, not least because PCA falls within my area of expertise.

Rather than their audit having "more holes than Swiss cheese" I would characterize their work as quite devastating to MBH. Specifically:

  1. M&M have demonstrated that the hockey-stick is sensitive to the presence/absence of the controversial North American bristlecone pine (BCP) series. The debate about centered and non-centered PCA and the application of Preisendorfer's N-rule is all about getting the BCPs into the reconstruction.
  2. The R2 statistic is near-zero for the MBH reconstruction for the 15th century. McIntyre has shown (from the recently disclosed code) that the R2 stat was calculated but not reported by MBH. In his response to Barton, Mann states that the RE statistic is "preferred". Essentially, he plays down the R2 stat by saying a high value isn't sufficient to establish significance (true), but avoids the fact that a low (or zero) R2, as McIntyre has calculated, is a huge red flag. It's also at odds with Mann's claim that the MBH reconstruction survives cross-validation statistics.

It's difficult to believe that Mann doesn't understand this.

McIntyre might be an "amateur" as a climate scientist, but he is a good statistician, and MBH is primarily a statistical exercise.

By James Lane (not verified) on 09 Aug 2005 #permalink

James, I looked at this analysis by the two researchers McIntyre called "Dumb" and "Dumber", and it seems that you get a hockey stick whether you do centred or uncentred analysis. Yes, excluding the bristlecone pines gets rid of the hockey stick. But what you then get is anomalous warming in the Little Ice Age, and I don't think anyone believes that is correct.

As for r2, perhaps I'm missing something here, but if a test is not significant it doesn't mean that you accept the null hypothesis.

Dano
How about M&M = plumber, Mann et al = idiots?
Mann et al are not statisticians, they are climate scientists!
If you see a plumber doing something wrong you query it, I do when I renovate my house, after all the plumber wants to get out and get paid, I have to live in the house.
M&M are not arseholes and have the right to ask questions, they are not dumb questions they are asking. Upsetting the AGW brigade is certainly not politically correct, tough, live with it.
Regards from New Zealand
Peter Bickle

By Peter Bickle (not verified) on 10 Aug 2005 #permalink

Tim,

Using uncentred PCA, you get a hockey stick using the first two PCs (the BCP series load on the first PC). Using conventional PCA, you need the first four PCs to get a hockey stick, as the BCPs load on the fourth PC. You invoke Preisendorfer's N-rule to justify inclusion of the fourth PC.

Regardless, the hockey stick is totally sensitive to the inclusion of the BCP series, that is to say, the MBH reconstruction is dependent on a handful of high altitude North American tree ring series. These series exhibit a 20th century growth spurt that pretty well everyone accepts is not temperature related (including Graybill and Idso, who did the fieldwork).

Whether a reconstruction without the BCPs produces anomalous temperatures in the LIA or elsewhere is completely beside the point. It's apparent that the MBH98 reconstruction lacks skill.
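For anyone who wants to see the mechanism, here's a toy sketch (Python; the series are fabricated, so this illustrates short-centred PCA in general, not a rerun of the actual MBH network):

import numpy as np

rng = np.random.default_rng(1)
n_years, n_series, n_cal = 200, 20, 50
X = rng.standard_normal((n_years, n_series))
X[-n_cal:, 0] += 2.0    # one series with a 20th-century-style growth spurt

# Variance about the full-period mean vs. about the calibration-period mean
var_full = X.var(axis=0)
var_short = ((X - X[-n_cal:].mean(axis=0)) ** 2).mean(axis=0)
print(var_full[0], var_short[0])    # the trending series' variance is inflated
print(var_full[1], var_short[1])    # an ordinary series is barely affected

def pc1_loadings(data, rows):
    # centre each series on its mean over the chosen rows, then take PC1
    centred = data - data[rows].mean(axis=0)
    return np.linalg.svd(centred, full_matrices=False)[2][0]

full = pc1_loadings(X, slice(None))            # conventional centring
short = pc1_loadings(X, slice(-n_cal, None))   # MBH-style short-centring

# The inflated variance means the trending series should grab a much
# larger share of PC1 under short-centring than under full centring
print(abs(full[0]), abs(short[0]))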

"As for r2, perhaps I'm missing something here, but if a test is not significant it doesn't mean that you accept the null hypothesis."

Literally, that's true, but to have a significant RE stat and a near-zero r2 is a huge flag that something is wrong. You can't simply pick and choose statistical tests that support your argument. Further, to calculate a poor r2, ignore it, and not report it is extremely poor practice. And then to go on and claim that your reconstruction is supported by cross-validation statistics is pretty well outrageous.

By James Lane (not verified) on 10 Aug 2005 #permalink

The R^2 figure strikes me as a silly thing to calculate here. We know from simply eyeballing the data (and the nature of the task) that what we are doing is trying to construct a rough proxy from a lot of disparate and noisy series. Therefore the R^2 is going to be low (one thing that we do know is that climate change is of the order of one degree C (or one degree M) and the variance of annual temperature data is much more than that). What matters is whether these disparate and noisy series have common components that show the hockey stick shape.

to have a significant RE stat and a near-zero r2 is a huge flag that something is wrong

I don't think it is; I am willing to be convinced otherwise on this but my intuition would be that this is exactly what you would expect if you are looking for a small but genuine effect in some noisy data.

Re: James Lane #24

To "audit" something means that you follow the author's methodology by the book. That is the claim that M&M originally made. They didn't do what a number of other author's did and try and arrive at MBH's results using independent means. They claimed to have "audited" MBH which is completely silly not least of which because they: didn't have the proper data, didn't understand the author's methodology, didn't RTFR material to understand the back story, didn't use the author's own software, ...

Once they'd arrived at their "results", results that fly in the face of every climate study I've ever read - imagine getting strong warming during the LIA - M&M failed to ask the one question that any competent analyst should ask: "where did I screw up?". That has to be absolutely the first thing anyone doing any analysis does when they arrive at results that fly in the face of conventional wisdom, yet these "experts" failed that simple test. Instead, they marched off to a third-rate journal known for its highly questionable editorial practices and had their results published. Anything else they may have done, and make no mistake, MBH did make some errors in disclosure of their data and methodology, has been colored by M&M's very questionable treatment of their original "audit".

As for your points about the BCP series, that is somewhat debatable. I won't debate your expertise about PCA techniques. I'm a dedicated amateur, but I have played around with both MBH's original data and that supposedly used by M&M, and there are a lot of questions to be asked. I'm honestly puzzled how and why MBH selected some of their data sets, why they used Rule N given that it often underestimates the number of significant EOFs, ...

That is in the nature of scientific inquiry, however: different scientists debating different methods of doing things. What M&M have done goes far beyond that and IMHO has cheapened the process, and silly demands of the kind McIntyre makes here do nothing to make me feel better about their motives or their science.

By David Ball (not verified) on 10 Aug 2005 #permalink

It appears that McIntyre and McKitrick made a mistake in their calculations of RE because they did not compensate for the variance of the instrumental record during the training period. The effect of this mistake is that MM's conclusion that the AD 1400 step of MBH98 is without statistical significance is itself incorrect: that step does have significance using MM's methods.

http://web.mit.edu/~phuybers/www/Hockey/Huybers_Comment.pdf

The preprint also contains a suggestion for a better normalization of the No Am Tree ring series and some useful comments on the biases induced by normalizations in both the Mann Bradley and Hughes papers and the McKitrick and McIntyre ones.

By Eli Rabett (not verified) on 10 Aug 2005 #permalink

David Ball, it's a core principle of scientific philosophy that results be reproducible. If they aren't, then either the original result is wrong or the methods have not been adequately disclosed. cf. E. Bright Wilson's classic AN INTRODUCTION TO SCIENTIFIC RESEARCH. cf. the Cold Fusion sillies.

If the authors are not going to share their computer code (the easiest thing), then they should share an algorithm. But of course they shared an algorithm that did not specify every single thing the code did. And then you blame the people who try to replicate the work? That makes no sense.

And spare me from some argument that MM are just too dumb to replicate stuff. Sheesh.

TCO,

"If the author's are not going to share their computer code (easiest thing), then they should share an algorithm."

They DID share their code. Tim even reproduced part of it in post #3 below. Where do you suppose he got it from? McIntyre?

Nice try, TCO, but if I want reproducible results on matters scientific I go to a scientist. I don't go to a florist. That is the essence of "peer" review and any way you slice it, you can't make either of M&M into a peer.

As I suggested before, you should get your facts straight about who made what available. Frankly, you're working with information that has little basis in fact. Get the back story before making unequivocal statements about who did what.

By David Ball (not verified) on 10 Aug 2005 #permalink

Eli,

The Huybers paper you link to is an unpublished letter to GRL, and I imagine that M&M will be given an opportunity to reply. As such it is not the "last word" and it is appropriate to wait for the response for a full discussion.

Nevertheless, it is interesting that Huybers confirms two of M&M's most important findings.

First, Huybers agrees that MBH's unconventional PCA results in a highly biased PC1. He goes on to criticize M&M's own approach, arguing for a "third way" that involves rescaling the proxies to standardize their variance. That isn't a "standard" procedure in my experience. It will be interesting to see what McIntyre has to say about it.

Second, Huybers confirms the non-significance of MBH's r2 statistic. The rest of the discussion is about whether or not the (significant) MBH RE statistic is spurious (as M&M argue). However the latter is a sideshow. The important point is that you can't have a good model with a cross-validation r2 near-zero.

Scott,

Mann has only recently released his code, coincident with his reply to the Barton letters.

By James Lane (not verified) on 10 Aug 2005 #permalink

Dave, I've got my union card and I've done useful work across fields. Maybe I shouldn't have? Do I have to be a tenured professor? I can tell how relevant someone's analysis is by looking at it. I know senior VP-level F50 Ph.D.-trained execs whose technical intuition I don't trust. I know others who got a BS late in life, who used to be technicians, and who are brilliant. Yes, this is not often the case. But it happens plenty. I see no need to prejudge and then keep people from looking at the data, method, details (and I'm not sure if that was done by the MBH dudes...my initial point was against even the argument from poster number one that this is reasonable). Who cares if MM are unwashed. Let them have their swing at the bat. I'm no genius, but I can smell brains. I get the impression that Tim and M and M and M and B and H all have enough. Why not just let them duke it out on the actual math/science technique issues?

Ok, if they already shared the code, then fine. I haven't kept good track of all the he said, she said what happened when. Somehow, I had the impression that the code was being withheld for a while. That there was some big kerfuffle about not having to share it and all.

I'm not prejudging anything, TCO. I've followed this from the beginning. I've gone through the original MBH98 paper and I know there were flaws in it, not in the methodology, which seems fairly robust, but in disclosing some of the data that were used.

I'm also very aware of the gross errors perpetrated by M&M, both through their own sloppy techniques and poor analysis but also their own stupidity, and I use the latter term with all its intended vigour. Tim has extensively documented many of the nonsense statements of McKitrick. Read through them. See if they make any sense to you.

This isn't a case of the big bad scientists ganging up to close ranks around some of their own in an attempt to keep the diligent amateurs from finding out about their nefarious plot. It's rather the opposite, and this latest effort by McIntyre does nothing to change my view.

By David Ball (not verified) on 10 Aug 2005 #permalink

James, it seems to me that with respect to McIntyre and McKitrick's claims, the point about their making a mistake in calculating RE for the NoAm tree ring series is the most important. If this claim is true then a lot of the to and fro goes away.

It is also my opinion that the amount of bias introduced by Mann's normalization is exaggerated by McKitrick and others. That is pretty much confirmed by Huybers.

BTW, if you like von Storch, you like a fairly high climate sensitivity, much higher than implied by MBH98.

By Eli Rabett (not verified) on 10 Aug 2005 #permalink

David, you say:

"I'm also very aware of the gross errors perpetrated by M&M, both through their own sloppy techniques and poor analysis but also their own stupidity, and I use the latter term with all its intended vigour."

You keep asserting that M&M's work is full of errors. That doesn't seem to be the case for me. Why don't you itemise them so we can have a look?

By James Lane (not verified) on 10 Aug 2005 #permalink

Eli,

"James, it seems to me that with respect to McIntyre and McKitricks claims, the point about their making a mistake in calculating RE for the NoAm Tree ring series is the most important. If this claim is true then a lot of the to and fro goes away."

Not at all. The important point is that the cross-validation r2 statistic is near-zero. You can't have a true model in those circumstances. A significant RE is necessary but not sufficient.

"It is also my opinion that the amount of bias introduced by Mann's normalization is exaggerated by McKitrick and others. That is pretty much confirmed by Huybers."

I would hesitate to say this is confirmed before we see M&M's response to Huybers. As I said, Huybers' view that the proxies should be standardised for variance seems peculiar to me. (Of course, all this avoids the BCP problem.)

But both these issues appear to be beside the point. Using Huybers' preferred procedure, the variance explained by PC1 is still much closer to M&M than it is to MBH. For the significance issue, it's the poor r2 that is important.

I don't understand your reference to von Storch. I don't believe I've mentioned him?

By James Lane (not verified) on 10 Aug 2005 #permalink

James, you're expressing yourself very unclearly on the subject of R^2.

r-squared isn't a significance test and doesn't have significance levels. It's the ratio of the regression sum of squares to the total sum of squares. It's a measure of goodness of fit of a model, not one of statistical significance. You can have a model with a statistically significant effect and a low fit - in fact, for small effects in noisy data it's the only kind of model you can have (the first cohort of univariate lung cancer/smoking studies had r-squared figures around 0.2).

Could you be a bit clearer on what you mean when you say (repeatedly) that the R^2 figures in the Mann et al study are "not significant" since you can't be using "significant" in its normal technical sense?
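To put numbers on it, here's a throwaway simulation (Python; the effect size is invented) in which the slope sails through a significance test while the fit stays terrible:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 2000)
y = 0.5 * x + rng.standard_normal(x.size)   # small true effect buried in noise

res = stats.linregress(x, y)
print(res.pvalue)       # comfortably below 0.05: the effect is "significant"
print(res.rvalue ** 2)  # yet r-squared comes out around 0.02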

d-squared,

That's a fair cop, except that I don't think I used that terminology "repeatedly", only in comment #36, which was careless.

r2 is indeed a measure of model fit, and at a value of near-zero I would suggest that the MBH model is worthless.

By James Lane (not verified) on 10 Aug 2005 #permalink

Oh, boy, where to begin, James. This isn't really the forum for this, and I don't want Tim to put the kibosh on it, but I'll post one thing to give you an idea. The original E&E article contained a host of questions that they claimed poked holes in the original MBH98: inappropriate data shifts, truncations, fills, ...

Many of the details were itemized at length on usenet, but frankly the noise level there was so high at the time that getting at the information is
a bit of a pain.

For example, one of the questions that was asked was whether there was an inappropriate truncation to MBH's series #11 from Central Europe. The following is what was itemized on usenet about it:

Series 11 is the Central Europe historical record, originally published
by Pfister. pcproxy lists values between 1550 and 1987. The series
goes to 1525. The data can be picked up from
ftp://ftp.ngdc.noaa.gov/paleo/historical/switzerland/clinddef.txt
and information on the compilation can be found in the same folder as
readme_swissindices.txt

The references for this data are:

Pfister, C. (1984): Das Klima der Schweiz von 1525-1860 und seine Bedeutung in der Geschichte von Bevoelkerung und Landwirtschaft. Bern

Pfister, C. (1992): Monthly temperature and precipitation patterns in Central
Europe from 1525 to the present. A methodology for quantifying man made
evidence on weather and climate. In: Bradley R.S., Jones P.D. (eds.)
Climate since 1500A.D., pp. 118-143. London

Pfister C., Kington J., Kleinlogel G., Schuele H., Siffert E. (1994):
The creation of high resolution spatio- temporal reconstructions of past
climate from direct meteorological observations and proxy data.
Methodological considerations and results. In: Frenzel, B., Pfister C.,
Glaeser, B. (eds), Climate in Europe 1675-1715.

of which I could locate the second. On page 121, second paragraph from
the bottom, one reads:

"The evidence increases in volume, density and diversity over time. For
the period 1525-1549 the entries originate mainly from chronicles and
annals. Accordingly, weather sequences are mainly described at a
seasonal level; information is missing for 43% of the months and the
emphasis is on anomalous rather than ordinary weather. In the second
period 1550-1658 monthly data from weather diaries and personal papers
are abundant....."

Again, on balance this illustrates the principle of RTFR and the danger
of someone unfamiliar with an area trying to do an "audit". To use the
central European index before 1550 would clearly have been a mistake. Note also that Bradley was one of the two book editors, so he surely knew a great deal about these series. Phil Jones was the other.

M&M shouldn't even have had to ask the question, James. If they'd read the references, references that were clearly available, they would have had the answers to the question. It's this kind of sloppiness that I'm referring to.

By David Ball (not verified) on 11 Aug 2005 #permalink

r2 is indeed a measure of model fit, and at a value of near-zero I would suggest that the MBH model is worthless.

But why? We know this is noisy data. Why is the model fit more important than the significance of the PCs? I am nobody's idea of an expert on principal component analysis, but it seems to me that this is a noisy signal extraction problem being penalised for having a low signal/noise ratio. It doesn't mean the signal isn't there.

All you have to do to understand what d^2 is saying is compute r^2 for a straight line y = a + bx + c*random, with random drawn uniformly between -0.5 and +0.5, and look at what happens to r^2 as a function of b/c.
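For instance (a literal rendering of that suggestion in Python; the constants are arbitrary):

import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 1000)
a, c = 2.0, 1.0

for b in [0.1, 0.3, 1.0, 3.0, 10.0]:
    y = a + b * x + c * rng.uniform(-0.5, 0.5, x.size)
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    print(b / c, r2)    # r^2 climbs from near 0 toward 1 as b/c grows

# With x and the noise both spanning unit ranges, the expected r^2 here is
# b^2 / (b^2 + c^2): pure signal-to-noise, saying nothing about whether the
# signal is real.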

By Eli Rabett (not verified) on 11 Aug 2005 #permalink

David Ball claims that you should be able to work out which data series MBH'98 used from inspecting the references:
"To use the central European index before 1550 would clearly have been a mistake."
Trouble is, nowhere in MBH'98 or the reference does it say these records should not be used prior to 1550; it just says the quality of the records goes down. Nowhere are the criteria set out; according to David Ball, we should "guess" what MBH did.

Trouble is, MBH's approach to the quality of the records is mixed. When it comes to the Gaspe tree records, they are so desperate to get the series in that they invent an extra four years of data ("padding") so that it will be included in the calculations, even though the Gaspe tree record includes only one or two trees for forty years. Now here, there is no ambiguity; it is quite clear that this sort of data should not be used! And with an expert dendrochronologist on board MBH, how did this get missed?

yours
per

I think Tim L raises the substantial point himself.
When McKitrick makes an error, Tim Lambert highlights it "McKitrick screws up yet again".
When someone suggests that Mann has made an error, "Clearly sqrt(cos) is wrong." "Looks like Mann is right and Ed Snack is wrong." "Looks to me like von Storch is the one who is wrong here."
When Mann is wrong, then [von Storch] "found an error in MBH98, though he does not seem to think that it was important".

It is very noticeable that TimL does not use the same language with Mann that he does with McKitrick. I wonder why.

yours
per

Trouble is, MBH's approach to the quality of the records is mixed. When it comes to the Gaspe tree records, they are so desperate to get the series in that they invent an extra four years of data

Ahhh...here's lil' per again, with his lil' language. Desperate. That's a big clue, right there.

Have you read the original paper yet, per, or are you still tap-dancing around that impediment to informed comment?

D

"Have you read the original paper yet, per..."
so tell me Dano-
do MBH add four years of data that don't exist to the Gaspe series?
This is a nice and simple question- one so simple that you should be able to answer yes or no.
And as for desperate- tell me for how many other series do MBH invent data?
And once you have looked at the data, perhaps you will tell us why the Gaspe series gets that special treatment? Or is this just another anomaly you cannot understand?

yours, per

Re #47.

No, Per, I know that when one goes to do any analysis one has to understand the data one is working with, its nuances and its limitations. That concept is the hallmark of doing good analysis. Good analysis is far more than mindlessly slapping numbers into an equation and looking at the output. It takes time and effort. You most certainly do not go through the process then raise flags in a third-rate journal questioning another author's work when you have been too lazy to read and understand all the documentary evidence pertaining to the data you are proposing to use. That is simply sloppy.

By David Ball (not verified) on 13 Aug 2005 #permalink

"No, Per, I know that when one goes to do any analysis one has to understand the data one is working with, it's nuances and it's limitations."
that is very good David. If you understand that, you will have no problem answering my question:
Did MBH add imaginary data points to the Gaspe series in MBH'98 ?
this is very simple; it is a matter of fact.
Where does anyone say that it is good practice to use dendrochronologies when there is only one or two trees ? This is straightforward bad practice; even the originating authors, Jacoby and D'arrigo, do not use this part of their data set for anything substantive !
So perhaps you can explain how these facts fit in with your theories about people who "have been too lazy to read and understand all the documentary evidence pertaining to the data".
per

Sorry, Per, I'm not getting into this again with you. This isn't the forum for it, and Tim has been very patient. I'll also make no secret of the fact that I believe - though can't prove it - that you are either a sock puppet for one of M&M or are affiliated with them in some way. The fact that you won't come clean about the relationship makes it very difficult to carry on a cogent discussion with you.

By David Ball (not verified) on 13 Aug 2005 #permalink

Dear David, you said:
"the gross errors perpetrated by M&M...their own stupidity..."
when you are invited to exemplify some of these gross errors, you can't identify a single, clear-cut example.
Indeed, when you start telling us about how important it is to read the references, and avoid using bad data, it becomes very quickly apparent that MBH have used sample sets which would fail your criteria for basic science.
I think there is a much simpler reason why you are running away.
yours, per

re: 31 from Mr. Rabett, on the Huybers May'05 preprint comment on MM05

"It appears that McIntyre and McKitrick made a mistake in their calculations of RE because ..."

Followed by Mr. Lane's reply in 36:

"The Huybers paper you link to is an unpublished letter to GRL ... M&M will be given an opportunity to reply."

Well, I was curious if M&M had replied; so I simply GOOGLED "huybers mcintyre mckitrick comments", and found this link (3rd search entry) to a thread dated 18-Jul-05:

http://www.climateaudit.org/?p=265

Mr. Rabett - this ClimateAudit thread started over 3 weeks in advance of your poorly researched cheap shot. Given our little gasoline exchange in the other thread, I again wonder whether you knew that what you were posting here in 31 was already rejected/rebutted/false?

Read the thread. In addition to a GRL rejection (with rebuttal comment on-line by M&M, to match Huybers' premature release), it looks like Mr. McIntyre believes that what Huybers brought up, when rebutted, actually makes the MM05 case stronger against MBH'98. Unlike you, I will wait for Huybers' reply to M&M before deciding who's ahead in this argument.

But I do know one thing -- you're behind!

By John McCall (not verified) on 13 Aug 2005 #permalink

Comment #55 by John McCall perhaps requires some clarification.

The "on line rebuttal" in the linked ClimateAudit post is not for Huybers, it's for another (un-named) comment already rejected by GRL.

McIntyre does make a few remarks about Huybers in the comments thread, but also notes that Huybers' submission (and another by von Storch) probably will be published, and that M&M have replies for both of them prepared. However, neither of these replies is public - McIntyre says that it's his understanding that according to GRL policy they shouldn't be released before publication. So we still have to wait to see M&M's full response.

To be fair to Eli, when I made my own comments on Huybers here, I was unaware of (or didn't make the connection to) the linked thread on Climate Audit. However, I doubt that McIntyre's comment on that thread is anything like M&M's full reply to Huybers.

By James Lane (not verified) on 13 Aug 2005 #permalink

Thank you for the clarification -- my post's usage of "rejected/rebutted/false" was to take into account at least the most likely of outcomes, based on the on-line comments from "Steve", as follows:

"Steve: This one* and one by von Storch and Zorita are both in play. We have really nice replies to both articles. It's frustrating to me that Huybers has put this Comment on the internet - I didn't think that you're allowed to do this and haven't posted up our Replies, leaving this Comment unrebutted, when there are really excellent reply points.

On the RE point in Huybers, I did new simulations adding in a re-scaling step as MBH98 did it with white noise series and a simulated PC1. These also had a spurious RE statistics. This neatly added to our previous results and completely refuted Huybers' point."

*(my asterisk inserted above) refers to the Huybers_Comment.pdf

By John McCall (not verified) on 13 Aug 2005 #permalink

In fact, John McCall, you falsely claimed that it had been rejected, just as you have repeatedly made false claims about Moberg.

So, Tim
good to see you are following the thread. I have a question for you.
you are a computer expert, so I guess you must have some familiarity with maths- tho' I am not so sure about your statistics :-)
So Mann has now published code which shows that he calculated the R2 coefficient for his analyses in MBH'98, and knew when he published that these values showed inadequate model fit.
So what do you think about this? Is this acceptable practice mathematically, and is it all right to simply hide data which don't support your case?
yours, per

per will keep up this kind of thing for a very long time, *[Deleted: see comment policy TDL]* - as per has a veritable arsenal of little wordy-tricks to sow FUD:

Is this acceptable practice mathematically, and is it all right to simply hide data which don't support your case ?

See, what per is handwaving away from is that the rubes who fall for this stuff aren't sharp enough to know that:

  1. this was a first crack at it,
  2. subsequent studies all show recent warming is unprecedented.

Hide data my *ss.

per has a veritable arsenal of little wordy-tricks to sow FUD.

We can make our clever comments all we like, but may I suggest focusing on whether the FUD-sower has actually read the paper in question?

D

Dano- maybe you missed my previous question?
"so tell me Dano- do MBH add four years of data that don't exist to the Gaspe series?
This is a nice and simple question- one so simple that you should be able to answer yes or no."

You said, "Hide data my *ss."
I am a bit confused. MBH have published their code, and this code shows that they calculated the R2 statistic; yet nowhere do they publish this for the tree PCs in MBH'98. However, the R2 statistic has been calculated by others to be ~0 for the AD1400 step, and hence is seriously deficient. It seems to me to be inescapable that MBH knew of the defective statistic, and did not publicise it.
Since you are accusing me of "mendacicizing", which specific part of this is a lie?
yours, per

dsquared said:
"Could you be a bit clearer on what you mean when you say (repeatedly) that the R^2 figures in the Mann et al study are "not significant" since you can't be using "significant" in its normal technical sense?"
I understand that M&M have done a series of monte carlo simulations using red noise for the NoAmer PC1; they then tested these against the calibration period. From this, you can estimate the lower bound of the 95% confidence intervals for R2, RE, etc. This is published in Geophysical Research Letters.
yours, per

re: 58 -- I have not "falsely claimed that it (Huybers) had been rejected!"

On 11-Aug-05, Mr. Rabett posted a link to a (May'05) pre-published Huybers'05 comment to MM05 -- by implication Mr. Rabett offers at least the possibility that 1) Huybers will be published and is factually true. This is clearly premature, in light of common peer-review comment policies (e.g. GRL's) plus the Jul'05 ClimateAudit thread I linked, which Mr. Rabett (and Mr. Lane) missed. My post 55 then stated the 3 other likely possibilities as "already rejected/rebutted/false?" Note the 3 "/" and 1 "?" in post 55.

Given Mr McIntyre's "in play" comment at ClimateAudit, 2) already "rejected" is unlikely (my error), but 3) already "rebutted" and either still true or 4) rebutted and false are also likely possibilities for disposition of Huybers'05 comment at GRL, wouldn't you agree?

My apologies if post 55 was confusing, it could have been worded better. We all will soon see which of these 4 disposition possibilities is the actual case, but Mr. Rabett's # 1 is not going to be one of them.

=====

I also made no "false claims," re: Moberg'05. Moberg's multiproxy reconstruction temperatures do not exceed the peaks found in the MWP portion of his (RED) curve - this is a fact of both Moberg'05 plots and text. It is the Huang'04 reconstruction in RED-ORANGE that does (misleading color choice RED-ORANGE to RED). The same can also be said of the Hadley instrument record in BLACK! Please study the Wikipedia/Connelley reconstruction, that you yourself linked in your post 68 of http://scienceblogs.com/deltoid/2005/07/barton3.php (where you made your "misrepresenting Moberg" accusation of me). For further clarification, refer to my responses in posts 71, 72, and 75.

=====

I also have a reply to what you term a "red herring" preferential hockey stick question ending your post 74 in /barton3/. But I will respond to that there.

By John McCall (not verified) on 14 Aug 2005 #permalink

Dear John,

You may call me Eli of course.

Your #1 conflates two issues: whether the pre-print will be published and whether it is correct. One of the reasons I pointed to it is that a number of people posting to this list profess to be expert in statistical methods and I was interested in their comments. I have pointed to this paper much earlier on Quark Soup, so belay the silliness.

I see that I also owe James Lane a comment about climate sensitivity. Roughly speaking, climate sensitivity is the response of global surface temperature to any forcing. For greenhouse gas forcing this is usually stated as the response to a doubling. As von Storch et al. point out, Mann's various reconstructions imply a climate sensitivity on the extreme low end of GCM predictions, and thus would imply that surface temperature increases would be on the low end of GCM predictions (2-5 C) for doubling CO2 in the atmosphere. If I were a denialist I would have embraced MBH98 with ardor. It is pretty clear that a large number of people have not read von Storch et al.

By Eli Rabett (not verified) on 15 Aug 2005 #permalink

per:

I am a bit confused.

No you're not.

MBH have published their code, and this code shows that they calculated the R2 statistic; yet nowhere do they publish this for the tree PCs in MBH'98.

So what? Do they supply the calcs to colleagues if asked? (Those embarking on character assassination campaigns are not colleagues.) You'll want to show they don't in order to show... whatever it is you imply... maybe that not having this tidbit makes the paper invalid or something. Whatever.

However, the R2 statistic has been calculated by others to be ~0 for the AD1400 step, and hence is seriously deficient.

So what? Science moves on in the details, but the larger findings remain.

And so on. Not naming "others" is a big clue.

The whole FUD campaign is re-drawing the issue. The FUDders can't compete on the existing playing field, so they have to make one of their own.

Bah.

John Mc:

Moberg's multiproxy reconstruction temperatures do not exceed the peaks found in the MWP portion of his (RED) curve - this is a fact of both Moberg'05 plots and text.

You'll want to read Moberg et al.'s text before making this claim:

We find no evidence for any earlier periods in the last two millennia with warmer conditions than the post-1990 period—in agreement with previous similar studies ^1-4,7. [pg 617]

Moberg et al. 2005. Highly variable Northern Hemisphere temperatures reconstructed from low- and high-resolution proxy data. Nature 433 pp. 613-617 doi:10.1038/nature03265.

Eyeballing the chart (any chart) doesn't help much. You want to read the actual text.

HTH,

D

Dano said:
"You'll want to show they don't in order to show.whatever it is you implymaybe that not having this tidbit makes the paper invalid or something. Whatever."
What i said is very clear; they got the results of their calculation, and didn't publish it in their paper. Last time I looked, deliberately suppressing bad data is scientific misconduct.
"However, the R2 statistic has been calculated by others to be ~0 for the AD1400 step, and hence is seriously deficient. " Dano "So what?"
the calculations have been published by M&M, but given that MBH have belatedly released their source code, we will all be able to find out exactly what they got pretty soon :)
I am bemused by your "so what". If this is true, then MBH had reason to believe that part of their reconstruction failed statistical tests, which would mean completely different conclusions. And they just didn't tell the journal.
you brush this off as if it is not important. That tells me a lot about you.
yours, per

Dano -- maybe you missed my previous question?
"so tell me Dano -- do MBH add four years of data that don't exist to the Gaspe series? This is a nice and simple question -- one so simple that you should be able to answer yes or no."
This is the third time of asking; I thought this would be easy for you to answer, since you have (presumably) read the paper.
yours
per

per, quit wasting our time by repeating already refuted arguments. The r2 value is not a statistical test. This has already been pointed out. Nor is it "bad data" or even particularly relevant. You might want to read an earlier post of mine about r2.

per, your enabling of the FUD campaign is cutely done. So charming, so innocent-sounding.

Slicing lil' out-of-context chunks down into crumbs and then declaring them a picnic is certainly a way to go about it. Lemme guess: you're a detail guy.

Why, one could run around all day chasing down tiny points of seeming contention!

Big picture: there's plenty of evidence that MBH was on the right track.

Heck, even Ronald Bailey says there's warming, fer chrissake (didn't he get the memo?!?). Crikey, are you going to enable the Reason smear campaign next?

Ah, well.

As to your Gaspe question (another crumb), I don't know - it is not mentioned in the paper (you mean the icon you are trying to crush, MBH98, right?). The supplementary info [data summary] does not mention it either. Nor does the Zorita. Nor did the datalists I checked. Nor their reply to M&M. Maybe it's buried in a larger paleo dataset somewhere - it's either not overly important or a big conspiracy.

Let me guess which one you believe, but provide no evidence to back.

As I am neither a paleo guy nor a character assassin with a web site (nor someone who reads such sites), I am not aware of individual locations of all such datasets and their components to be able to comment as an amateur on professionals' work. Apparently you aren't either, as you haven't shown the work.

Nonetheless, there are a dozen or so multiproxy global records to audit that all say the same thing, so you'd better get on it.

And then, the single-proxy indicator papers that provide evidence for same - why, they number in the hundreds.

You're wasting time posting here, per, when you have so many papers to audit and authors to smear. Gosh, this one paper is taking years to FUD - what ever ya gonna do about the remainder?

ÐanØ

Dano said: "As to your Gaspe question (another crumb), I don't know - it is not mentioned in the paper"
A good reply; I should have been specific. The cedars of the Gaspe peninsula, in Canada; the actual site name is St Anne River and the associated name is Edward Cook; cana036. The series starts in 1404, so how can it be included in a PC calculation which requires values for the years 1400-1403?
The series only has one or two trees for the first ~40 years, so how come it is used at all for that time?
As always, you beat around the bush. But even at realclimate, they accept that if you take out the bristlecones and Gaspe cedars, it completely changes their results (see qu. 6).
So will you accept that they invented figures for this data series?
yours
per

What the FUD campaign counts on is having a full complement of rubes to dupe. This ain't Powerline, mate.

These rubes have to not know, somehow, that 1) Golly, things can be improved upon with further knowledge and 2) there are oodles of other things we can read that say the same thing as MBH.

The core of the FUD campaign can be summed up thusly:

Tim Lambert: per, quit wasting our time by repeating already refuted arguments.

And to which I reply: Tim, if they don't recycle oft-refuted arguments, what else will they have?

Ah, well, Dave. Lather, rinse, repeat away! Hope you don't mind if'n I don't reply to your spin cycle any more, as I see, 'per', that you still conveniently ignore big stuff to focus on little things to trump up, hoping to dupe someone here, convert a rube there.

D

timlambert said: "The r2 value is not a statistical test. This has already been pointed out. Nor is it "bad data" or even particularly relevant. You might want to read an earlier post of mine about r2."
I am bemused about this; the RE and R2 are statistics about the correlation, and MBH represent the RE statistic as defining statistical significance, so I don't see any problem about this. Indeed, in Geophysical Research Letters 2005 there is a publication which states that "In the case of MBH98, unfortunately, neither the R2 and other cross-validation statistics nor the underlying construction step have ever been reported for the controversial 15th century period. Our calculations have indicated that they are statistically insignificant."

Are you really saying that if you got a value of ~0 for R2, that would not be a problem? This amounts to failing an essential test statistic.
I think the subtext of your argument, that MBH calculated an irrelevant statistic for fun, casts an interesting light on things! You seem to be arguing that it is okay to withhold bad results -- but surely not!
You did point to a previous post, but I couldn't see anything relevant there.
yours
per

Dano:"These rubes have to not know, somehow, that 1) Golly, things can be improved upon with further knowledge "
In 1998, MBH was an incredible piece of work, and it formed a central part of the IPCC TAR. Further knowledge now tells us:

  • 1. MBH invented data points and used weak data when convenient
  • 2. their reconstruction for the 1400s lacks statistical significance
  • 3. MBH knew that they got a bad R2 statistic, and they didn't tell anyone

"2) there are oodles of other things we can read that say the same thing as MBH."
Are you really arguing that, because they got the "right" result, anything they did to get there is okay? It is notable that -- yet again -- you fail to answer any specific questions about MBH!
yours, per

OK, one more:

lookit all the proof. Proof, proof, proof.

Not FUD, evidence this time.

My god! All the evidence to back your specious claim! Why, it'll be positively months before I wade thru it all! All the proof besides the one source, that is. Repeated many places, it surely is, judging from the force of your argument.

Right. Carry on. Lather, rinse, repeat. May require extra rinsing in hard water.

D

per, an insignificant result does not prove anything--it does not mean that you should accept the null hypothesis.

I seem to recall that you were similarly ignorant of the concept of independence in the discussion of the Lancet study.

Tim, regarding r2, let's try again. Your most recent post provides a link to a link, which is this:

http://www.cmh.edu/stats/ask/rsquared.asp

This link provides a very clear explanation of the r2 statistic, and says by way of summary:

"R squared measures the relative prediction power of your model. It compares the variability of the residuals in your model (SSerror) to the variability of the dependent measure (SStotal). If the variability of the residuals is small then your model has good predictive power."

Conversely, if the variability of the residuals is large (SSerror) compared to the variability of the dependent measure (SStotal) the model has poor predictive power.

r2 = 1 - SSerror/SStotal

In the case of MBH, for the 15th century, the r2 statistic is ~0.0, in other words, it has no predictive ability at all.

We know, from the recently disclosed code, that MBH calculated their r2, didn't report it, and went on to say that their RE stat was supported by cross-validation statistics. Do you think this is good practice?

Please note that I haven't used the phrase "statistical test" in any of the above.

By James Lane (not verified) on 15 Aug 2005 #permalink
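
[The quoted definition is easy to verify numerically. A minimal sketch of r2 = 1 - SSerror/SStotal for a toy linear fit -- invented data, nothing to do with the MBH proxies:

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 10.0, 50)
    y = 2.0 * x + rng.normal(scale=5.0, size=x.size)  # noisy linear relation

    slope, intercept = np.polyfit(x, y, 1)  # least-squares line
    y_hat = slope * x + intercept

    ss_error = np.sum((y - y_hat) ** 2)      # variability of the residuals
    ss_total = np.sum((y - y.mean()) ** 2)   # variability of the dependent measure
    r2 = 1.0 - ss_error / ss_total
    print(f"r^2 = {r2:.3f}")  # a value near 0 means essentially no predictive power

]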

James, on r2 see: Rosenthal, R. & Rubin, D. (1983). A note on percent of variance explained as a measure of the importance of effects. Journal of Applied Social Psychology, 9, 395-396.

It's summarized here.

Tim, that example is simply pathetic, in the context of your argument. It makes me wonder if you have any grasp of statistics at all.

A simple chi-square test would demonstrate the significance of the referenced dataset.

Don't you understand that different statistical methods are applied to different kinds of data? I think that is the point being made by the site you linked to.

Are you arguing from the link that Pearson, r2 and RE are worthless in any situation?

By James Lane (not verified) on 15 Aug 2005 #permalink

I understand the example perfectly.

Are you going to answer my questions? In case you missed them, here they are again:

1.Don't you understand that different statistical methods are applied to different kinds of data? I think that is the point being made by the site you linked to.

2.Are you arguing from the link that Pearson, r2 and RE are worthless in any situation?

By James Lane (not verified) on 15 Aug 2005 #permalink

re: 74
Dan0 - As Dr. Lambert has said, "quit wasting our time." Please answer at least one question put before you in 73.

re:71
Dan0 -- As Dr. Lambert has said, "quit wasting our time." Please answer at least one question put before you in 70.

re: 69
Dan0 -- As Dr. Lambert has said, "quit wasting our time." Please answer at least one question put before you in 66,67.

re: 65
Dan0 -- I thought you were an expert on (multi)proxy studies -- please refer to the IPCC definition of proxy:

"A proxy climate indicator is a local record that is interpreted, using physical and biophysical principles, to represent some combination of climate-related variations back in time. Climate related data derived in this way are referred to as proxy data. Examples of proxies are: tree ring records, characteristics of corals, and various data derived from ice cores."

I read and understand both the text and diagrams of Moberg'05 -- I even quoted some of the Moberg'05 multiproxy relevant text in post 71 (text in BOLD) of /barton3/, which I repeat here.

"ACCORDING TO OUR RECONSTRUCTION, HIGH TEMPERATURES-SIMILAR TO THOSE OBSERVED BEFORE 1990-OCCURRED AROUND AD 1000 TO 1100, and minimum temperatures that are about 0.7 K below the average of 1961-90 occurred around ad 1600. This large natural variability in the past suggests an important role of natural multicentennial variability that is likely to continue."

If, as you said, you actually read Moberg, you certainly didn't cite text that is relevant to Moberg's multiproxy reconstruction, WHICH ENDED IN 1979 - your text refers to the instrument record stretching beyond '79. No matter, I have already addressed that point in my post 70, last para, when I stated:

"the Moberg'05 PROXY RECONSTRUCTIONS are as I have posted, PEAKING HIGHER in the MWP than at end (1979, or even 1990 with a .1 oC/decade rise)!"

Please save your sanctimonious "multiproxy" bluster (of only a week ago) for someone else -- you couldn't stay consistent if you argued with yourself. Let me suggest you do something useful, like pushing for proxy updating to the present (or at least to IPCC'01); then we will see how the proxies reacted to the warmest decade in the millennium.

=====

re: 64

Eli -- my apologies; you've showed me the path of my conflatedness. I now know you weren't using Huybers to attack MM05; you were merely trying to elicit comment about Huybers from those on this thread. Please allow me to rephrase 55 in that spirit:

Mr. Rabett -- re: Huybers, Mr. McIntyre (and McKitrick) are already aware of Huybers' pending comment and critique points; they are in the process of formally rebutting those criticisms. For a brief preview, please see:

http://www.climateaudit.org/?p=265

By John McCall (not verified) on 15 Aug 2005 #permalink

I understand that M&M have done a series of monte carlo simulations using red noise for the NoAmer PC1; they then tested these against the calibration period. From this, you can estimate the lower bound of the 95% confidence intervals for R2, RE

Are you sure they did this? I don't understand why anyone would think that this was a worthwhile thing to do for an r^2 figure, since it seems to me (if you have described this correctly and if I have understood it correctly) that the "confidence interval" for r^2 would be a meaningless transform of the variance of the red noise. I have only looked at a couple of the M&M submissions on their websites, but in all of them they appear to be using the Monte Carlo simulation to calculate significance levels for the RE figure, not for r^2 (although they call the r^2 figure "not statistically significant" in an offhand remark).

Btw, could people please stop saying that the R^2 figure is "~0.0" and give us the actual number? I've been searching for ages with no luck.

Please append to 81

And from the NOAA, another institutional definition of "Proxy" for your consideration:

proxy - Substitute. Paleoclimatologists use proxy evidence in place of direct measurements of climate parameters such as temperature, for times before instrumental measurements were made. Ocean sediments, glacial ice, and tree rings all contain proxies for climate conditions.

Dan0 -- Gloves off -- all this time, have you really considered "direct measurements," such as those in the instrument record, a proxy? It would explain a lot in your posts. Do you believe the majority of AGW proponents think similarly?

By John McCall (not verified) on 16 Aug 2005 #permalink

dsquared, MBH haven't released sufficient code for an exact replication. The quoted r2 stats are for the emulation, and that is why they are referred to as ~0.0.

By James Lane (not verified) on 16 Aug 2005 #permalink

I'm really not sure, John Mc, why you want to highlight these assertions again, but as I've said before: if this is the best the denialists can do, I feel a lot better on the whole.

Dan0...all this time, have you really considered "direct measurements," such as that in the instrument record, a proxy? It would explain a lot in your posts. Do you believe the majority of AGW proponents think similarly?

No. Proxies are things used in place of something else; e.g. for temp paleoclimate folk use various things such as tree rings, sediment cores, boreholes, etc. I would think very few AGW proponents think similarly to your question.

Please answer at least one question put before you in 73.

I find only one question and a number of evidenceless assertions. The question, at the end, is based on purported ignorance. That is: the questioner pretends not to have read the list of paleoclimate studies, or pretends that they are all incorrect, hence the implication that Mann's conclusions are based on false results (or some sort of accusation that 'the ends justify the means'). No evidence is given that the paleo studies in the list are incorrect, hence the premise of the question is specious, to say the least. A rather crude tactic and quite transparent.

As Dr. Lambert has said, "quit wasting our time." Please answer at least one question put before you in 70.

I don't know about the dataset. I'd be happy to examine some evidence, esp. from someone not involved in a character assassination.

But anyway, I fail to see how this small, temporally-limited dataset negates the conclusions of a dozen papers that estimate temps over a half-millennium or more - do they all perform the same calculation to achieve their results?

As Dr. Lambert has said, "quit wasting our time." Please answer at least one question put before you in 66,67

66 depends on a big if, with the 'if' being speculation with no evidence; I don't waste anyone's time in such a manner and I don't expect others to either. As to 67, I don't know and there's no evidence given to show this is so. I'd be happy to examine some evidence, esp. from someone not involved in a character assassination.

I thought you were an expert on (multi)proxy studies — please refer to the IPCC definition of proxy:

I'm not an expert, I just read the journals. Apologies if I have stated somewhere that I'm an expert.

Nevertheless, the text you include:

According to our reconstruction, high temperatures-similar to those observed before 1990-occurred around AD 1000 TO 1100, and minimum temperatures that are about 0.7 K below the average of 1961-90 occurred around ad 1600. This large natural variability in the past suggests an important role of natural multicentennial variability that is likely to continue.

doesn't negate Moberg et al.'s conclusion that recent temps are unprecedented in the past 2 millennia.

As this is a letter and thus does not have methods/discussion/conclusion format, I can understand how you'd miss that.

But, if you feel you're correct and the abstract does negate the conclusions, you'll want to write Nature right away, as you've discovered something no one else has, and you'll make quite a name for yourself, I'm sure.

Cc us on the letter, will you? Thx.

Best,

ÐanØ

Re: #54

Per, you have turned yourself into such a caricature with such silly comments. The many gross errors that M&M produced have been more than adequately documented, yet here you are with your fingers in your ears going, "La, la, la, I can't hear you!! What errors?" When you can't even read the references to know what to do with the data, you have no business claiming to do an "audit". Pretty simple, really, and if you had an ounce of integrity, you'd acknowledge that simple fact. Like I said before, it's time for you to come clean about your affiliation with M&M, at least if you want people to take you seriously.

By David Ball (not verified) on 16 Aug 2005 #permalink

There is at least one issue we should push off the table. The proxy contribution to a reconstruction ends at the beginning of the training period. It is perfectly logical for Moberg, Mann, Jones and your mother-in-law to extend their conclusions into the present based on accumulating surface temperature records.

By Eli Rabett (not verified) on 16 Aug 2005 #permalink

Eli, it's much more fun to have someone examine your object on the table and declare that this object is a priceless gem.

You may know it's a bauble, sir, but it's a useful bauble. This bauble allows us to understand much about those who gaze on this bauble and declare it to be, say, the Hope diamond.

Best, sir,

ÐanØ

Moberg does not come to that conclusion. Instead, he says he finds no evidence of temperatures in the last two millennia as warm as recently. This is a wholly different conclusion.

Well, we sure have concluded we can call me 'Draino'. That's good.

And, my, what a...flexible world we live in!

Why, we can assert that "We find no evidence for any earlier periods in the last two millennia with warmer conditions than the post-1990 period—in agreement with previous similar studies" (concluding the recent warming is unprecedented) doesn't mean the authors have necessarily concluded that recent temps are unprecedented in the past 2 millennia.

Silly me! it's wholly different indeed. wholllllly different.

Ahhh... such is the stuff that FUD campaigns are made of...

D

I'm moved to comment by the simply _dreadful_ logic used by some commentators in this thread. (I.e. I'm venting :)

Dano: The purveyor of straw men! When 'per' asks if not publishing the R2 statistic is OK, talking about the other proof for global warming is a classic straw man. It's utterly irrelevant to the point at hand (at least comments #85, #71, #69, #65, etc., etc.). Example: "I fail to see how this small, temporally-limited dataset negates the conclusions of a dozen papers that estimate temps ...". Possibly true, but it's not dealing with the issue at hand.

Dano: Refusing to answer questions because you don't like the person asking them is a variant on an 'ad hominem' argument. I.e. it's a junk response.

Tim: The math failure. "The r2 value is not a statistical test". Umm. Actually, that's exactly what it is. It tests for significance. If the point you wanted to make was that the r2 value is a sufficient condition for rejecting the null H, but not a necessary one, then say so. But don't deny that it's a statistical test. That's just silly.

In addition, #75: An insignificant result means that you have no indication. It doesn't mean you reject or accept the null H; it means you probably just can't tell with the data at hand. If you get an insignificant result, it's normally a big hint not to bother publishing, because you're not adding any useful evidence.

A few things that become blindingly obvious in this thread:

1. There's no question that the earth is warming and that global warming is a serious issue.
2. There's no question that there's a lot of quality evidence for it.
3. It seems pretty obvious that MBH is _not_ quality evidence. At best, they're sloppy researchers.
4. Many of the people defending MBH seem to be defending them because they 'believe' in global warming. MBH put up evidence for global warming, and thus MBH must somehow be beyond reproach because they got the right result. I can throw _dice_ and get the right result some of the time; it doesn't make me a competent researcher!

People: Learn to focus on the issue at hand and stop conflating unrelated items. It's ok to say that MBH churned out sloppy research. It doesn't harm the case for global warming.

What does harm the perception of the case for global warming is a knee-jerk defence of the indefensible.

Adding to 90...

A few more things that are obvious (other than that, as interpreter/paraphraser or even mind-reader of Moberg et al., Mr. Dano is not very good):

1) Dr. Moberg knows his (and von Storch's and others') reconstruction variability is much larger than MBH's, with both the MWP and LIA being much more pronounced. This is important because, the greater the variability of the past (including an MWP and LIA), the greater the contribution weight one must concede to natural forcings (vs. anthropogenic).

2) Dr. Moberg et al. (coincidentally or not?) agree with Steve McIntyre that update/calibration of the proxies must take place to verify/calibrate proxy registration of the "unprecedented warming" that MBH, Mr. Dano and others are so fond of highlighting. The training period is just too short (ending in 1980), and the recent 1990+ warming claims so unprecedented (more than a decade after training ends), to, as Mr. Rabett puts it in post 87:

" we should push off the table. The proxy contribution to a reconstruction ends at the beginning of the training period. It is perfectly logical for Moburg, Mann, Jones and your mother-in-law to extend their conclusions into the present based on accumulating surface temperature records."

"Logical," Eli, one can argue without scientific rigor. However, one strains scientific legitimacy when splicing millennium multiproxy reconstructions 10-20 years short of the business end of a 100+ year instrument records that includes the so called "warmest decade of the millennium" - especially when that business end is interpreted as being so alarming. Dr. Moberg and other climatologists know (as does Mr. McIntyre), one must have updated millennium proxy records to help determine the relative AGW vs. NGW weighting of the multiproxy recorded past. In fact, the extreme claims of recent warming in this millennium, by MBH and others, demand it!

By johnmccall (not verified) on 20 Aug 2005 #permalink

Re JM's 91#1.

First, forcings have no hair, in physics-speak (as in electrons have no hair, i.e. you cannot tell one electron from another). That means it does not matter if a forcing is natural, supernatural or a result of people's actions. The results will be the same.

Second, variability has a sign determined by the sign of the forcing. Variability resulting from a positive change in solar forcing will always be positive. This means that if you prefer Moberg's reconstruction, or von Storch's climate modeling, you expect a large increase in global temperature from greenhouse gas forcing. Von Storch et al. specifically state what the climate sensitivity in their model is. You could look it up. Have you signed on to that?

By Eli Rabett (not verified) on 21 Aug 2005 #permalink

Wow. I go away for a few days and a flame war breaks out.

Dano: The purveyor of straw men! When 'per' asks if not publishing the R2 statistic is OK, talking about the other proof for global warming is a classic straw man. It's utterly irrelevant to the point at hand (at least comments #85, #71, #69, #65, etc., etc.).

Au contraire, Michael. You seem to think that the r^2 has been deliberately hidden. There is no evidence for this.

per's tactic is to paint false images of deliberate deception, which therefore negates the findings, which means that AGW is not happening.

Get it?

That is the tactic, and that is what I'm addressing.

Get it?

Good.

Now,

Example "I fail to see how this small, temporally-limited dataset negates the conclusions of a dozen papers that estimate temps ". Possibly true, but it's not dealing with the issue at hand.

This is where you are incorrect, sir.

The tactic is to use agitprop and FUD to make some undereducated people believe that the IPCC relied on a falsely-gotten conclusion.

This is incorrect, as others have reached the same conclusion.

Get it?

Dano: Refusing to answer question because you don't like the person asking them is a variant on an 'ad hominem' argument. I.e. it's a junk response.

Huh? Do you mean my turning the issue back to tactics (which per studiously avoids addressing)? And who said anything about dislike?

Dreadful logic, indeed.

ÐanØ

I agree with Michael both that belief in a larger-scale issue (GW) should not lead one to be tendentious over arguments on points that support it (Mann), and that DanO is nonresponsive.

Dano: The purveyor of straw men! When 'per' asks if not publishing the R2 statistic is OK, talking about the other proof for global warming is a classic straw man. It's utterly irrelevant to the point at hand (at least comments #85, #71, #69, #65, etc., etc.).

Au contraire, Michael. You seem to think that the r^2 has been deliberately hidden. There is no evidence for this.

I'm hoping this was clever humor. Alas, the probability is low.

Could it be that switching the subject to the "r^2 being 'deliberately' hidden" is itself a strawman argument!?

To add to the inadvertent humour, it's actually a straw man in two ways: #1 The original subject was, apparently, "is failing to publish the r^2 stat right or wrong?" It doesn't matter if it was deliberate or accidental. So raising the debate about it being 'deliberate' or not is a poor attempt to change the subject, i.e. a strawman. #2. Even better, my point was the extensive use of strawman arguments, which is blithely sailed past with an irrelevant point. Two birds with one stone! yay!

As is the rest of the diatribe: none of it has anything to do with the actual point. (Which, for those in need of subtitles, was the extensive use of strawman arguments in a failed effort to simulate actual logic and reason.)

Ok. Maybe that was just a little harsh. But only just. :)

Note for the hard of reading: I don't give a tinker's cuss if the r^2 stat was published or buried, deliberately or accidentally, with or without being painted purple, before or after being savagely mauled by wild teddy bears.

My point is that you (the collective 'you') do your cause and yourself a serious disservice when you use invalid arguments to advance your cause. It makes bystanders wonder if you're lacking a real argument.

Could it be that switching the subject to the "r^2 being 'deliberately' hidden" is itself a strawman argument!?

Well, that was what I understood the implicit claim by per to be. Since he has multiple posts implying deliberate hiding of data, it's not unjustified.

To add to the inadvertent humour, it's actually a straw man in two ways: #1 The original subject was, apparently, "is failing to publish the r^2 stat right or wrong?"

Failing to publish it doesn't negate the conclusions.

It doesn't matter if it was deliberate or accidental. So raising the debate about it being 'deliberate' or not is a poor attempt to change the subject, i.e. a strawman. #2. Even better, my point was the extensive use of strawman arguments, which is blithely sailed past with an irrelevant point. Two birds with one stone! yay!

Well, one must acknowledge persistence, and good on you.

I also acknowledge the doggedness of blowing up your talking points into seemingly important factors.

You may want to check posts # 59, 61, 66 for the 'deliberateness' thingy you're on about.

Anyway, for your r^2 argument to be a good one (humoring you for a bit since you think it's important and we wouldn't want you to get upset about me not running around for hours chasing this point), your team leader, when constructing the talking point, should have provided you lads with some sort of statistic that shows the percentage of papers that mention the uncertainties (or display them by graphing) but didn't actually have text that included an r^2. See, having that would give a lot of weight to this argument. Or not. But hey.

Next, your team leader should have provided a statistic of how many of the papers in the negative published a supplemental with the r^2 to make up for this serious deficiency, deliberate or no. Or maybe, how many times an author refused to provide it when asked by a colleague - there, that's an easier one (not that M&M are colleagues...).

Even better would be for your team leader to provide a list of papers that had their conclusions negated because the r^2 wasn't published. Oh, wait: no it wouldn't. You wouldn't have a talking point. Never mind.

Lastly, your team leader should have given you a talking point to rebut the question: how come it has taken 7 years to notice this big, huge deal? Golly, no one on the IPCC noticed it, and we all know from this discussion that the lack of this figger means the whole paper is shot, and therefore the IPCC is... well, whatever.

Right? Isn't this what you're getting at?

Well, we know Mann claims MBH didn't use r^2, but hey, let's not construct strawman arguments, right boys?

As is the rest of the diatribe: None of it has anything to do with the actual point. (Which, for the those in need of subtitles, was the extensive use of strawman arguments in a failed effort to simulate actual logic and reason)...My point is that you (the collective 'you') do your cause and yourself a serious disservice when you use invalid arguments to advance your cause. It makes bystanders wonder if you're lacking a real argument.

The actual point of Tim's post was that one of the authors didn't know what they were arguing about, and Tim explained why and how it degenerated.

My actual posts on this thread - presumably the ones with per that you are referring to - have actually been spent trying to unframe per's framing. Per has spent a lot of time constructing distraction devices and making picnics out of crumbs. I'm pointing this out on this thread. That's what I'm doing: unframing a reframe. I'm sorry you didn't catch it until now.

If you think folk should waste their time chasing around tiny, irrelevant or inconsequential points, well that's what per wants; that's how this whole game works - try to find something inconsequential, blow it up into something important, and hammer away at that to deconstruct a particular paper.

I ain't biting at the chasing around inconsequential points. I'm sorry you didn't catch it until now. That's what I'm not doing.

You appear as if you're upset about that. Well. Now you know the story.

Best,

ÐanØ

Please do "bite at discussion of the inconsequential points" since they are the ones in contention.

It's interesting that you can find other papers that don't include rsq, but not a good response to the question: "do you think a low rsq is a relevant point about the MBH paper that should be included in publication so that people are aware of possible limitations of the fit?" If you're incapable of answering the question, just say so. I make no bones that I'm incapable of a sophisticated response on the suitability of rsq as a metric (note, it does have SOME usefulness in SOME situations, I know that much). There may be all kinds of flaws and caveats in rsq. But there is also meaningful relevancy (at least at some times). I don't get a great feeling from either you or Tim that you are making a sophisticated argument about when rsq means something and when it doesn't. Because to be honest, I don't think either of you understands stats deeply (and I don't either).

Once again, please do "engage on the specifics". This is the only way to move forward. Don't be scared of being caught in a wrong statement. Better to state assertions, and then we can all engage on validating them or proving them false. But at least we move forward. That is how science and discovery works. Better to sharpen your case by honing the supports than to refrain from examining subpoints because you're worried about the impact on the higher issue. If the whole process "hurts your case", so be it.

Please do "bite at discussion of the inconsequential points" since they are the ones in contention.

Yes, and they are elevated way up there. I only have limited time, and as the strategy is to inflate inconsequential points, I'd be acknowledging the validity of the tactic. Nice.

It's interesting that you can find other papers that don't include rsq, but not a good response to the question: "do you think a low rsq is a relevant point about the MBH paper that should be included in publication so that people are aware of possible limitations of the fit?" If you're incapable of answering the question, just say so.

It's a relevant point. I'm quite sure that if it were an issue, colleagues called MBH and were sent the figures.

Again, the conclusions have been found to be valid by other researchers.

I don't get a great feeling from either you or Tim that you are making a sophisticated argument about when rsq means something and when it doesn't. Because to be honest, I don't think either of you understands stats deeply (and I don't either).

Tim understands them way better than I do, me being a guy who needed three stats classes to make it thru grad school.

But the argument is not whether it means something, the proposition put forth is that MBH were hiding something, implying the findings were not robust.

Again, the conclusions have been found to be valid by other researchers.

Once again, please do "engage on the specifics". This is the only way to move forward.

I have been quite assiduous in engaging in the relevant specifics.

The MBH conclusions have been found to be valid by other researchers. I see no character assassination against those other researchers with their more recent and finer-grained analyses (and Moberg's 4 pages of supplementary material that surely contains errors in depicting recent warming as greater than MWP).

More relevant is that it would be easier to say S+C were hiding something, judging from the re-posting of their MSU datasets.

Those that clamor for clarity and concentrate on tiny points to elevate might have a field day with those boys. Just a thought.

BTW, TCO, I like your comments and note that they were met with some disdain at Stevie Mac's place. Not marching in lockstep with the rugged individualists has its price, I see.

Best,

D

I really don't know what they're on about, Eli. MBH98 even has a figure that shows the verification r2.

Maybe Stevie Mac's PosseTM thinks the calculations should be shown or something. Certainly the depiction is a bit broad for my tastes, but that's not what is being argued here, is it? (Saaaay, Steve, have you corrected your Posse's incorrect argumentation?)

Certainly lil' per's broad assertion that MBH knew that they got a bad R2 [presumably he means r2 -D] statistic, and they didn't tell anyone is full of shhh...er...ahem...well, I don't want to waste Stevie Mac's bandwidth on another flame war about how I sign my name.

There's a figger 3 that tells everyone - but that's per's game and it takes a while to show it (which is hard to do when he's banned).

So I'm not sure I answered your question, sir.

Best,

ÐanØ

Both R2 and r2 are accepted usages: R2 is the more common usage in econometrics and statistics, while paleoclimatologists and dendrochronologists tend to use r2.

The Figure in MBH98 is for the AD1820 roster with 112 "proxies", including 11 actual temperature series. The cross-validation R2 for this step was quite high, as indicated in this figure. Mann obviously had no reluctance to disclose and even feature the R2 statistic in a step when it was favorable for his reconstruction.

The dispute is over the earlier reconstruction and, in particular, the 15th century step in the stepwise reconstruction, with only 22 proxies and no instrumental series as input. Mann has never released a digital version of this step and has refused to disclose it. Both my emulation and my run-through of the Wahl-Ammann emulation for this step show that the cross-validation R2 of the AD1400 step is ~0.0. The source code shows that Mann calculated this, but did not report it. The absence in the Supplementary Information is quite striking.

In our GRL article, we showed that simulations on red noise using MBH98 methods led to hockey stick shaped PC1s which generated high RE statistics and R2 of ~0.0 when fitted against NH temperature - a pattern identical to that of the actual MBH98 reconstruction using the 15th century proxy network. Hence the conclusion that this network has no statistical skill.

By Steve McIntyre (not verified) on 22 Aug 2005 #permalink
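
[For readers trying to follow the red-noise point, here is a highly simplified sketch of the "short-centering" step at issue -- my own assumptions for series lengths and the AR coefficient; an illustration of the technique as described, not M&M's actual simulation code. Centering each series on the calibration segment alone, rather than on its full-length mean, is what lets noise series whose ends drift away from their long-term mean dominate PC1:

    import numpy as np

    rng = np.random.default_rng(2)
    n_years, n_series, n_cal = 581, 70, 79  # e.g. 1400-1980 with a 79-yr calibration

    def ar1_series(n, phi=0.7):
        # AR(1) "red noise" series of length n.
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.standard_normal()
        return x

    X = np.column_stack([ar1_series(n_years) for _ in range(n_series)])

    # "Short" centering: subtract the mean of the calibration segment only,
    # instead of each series' full-length mean as in conventional PCA.
    X_short = X - X[-n_cal:, :].mean(axis=0)

    # PC1 via an SVD of the short-centered data matrix.
    u, s, _ = np.linalg.svd(X_short, full_matrices=False)
    pc1 = u[:, 0] * s[0]

    # The centering choice pins PC1 near zero over the calibration segment
    # while leaving the rest of the series offset -- the hockey-stick shape.
    print("PC1 calibration-segment mean:", round(float(pc1[-n_cal:].mean()), 3))
    print("PC1 full-series mean:        ", round(float(pc1.mean()), 3))

]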

Re #96: Dano, you said: "Well, we know Mann claims MBH didn't use r^2, but hey, let's not construct strawman arguments, right boys?"

How do you reconcile Mann's statement with the Figure in Nature showing r2 statistics?

By Steve McIntyre (not verified) on 22 Aug 2005 #permalink

Michael, you're also using really quite confusing language about R^2. You write:

The math failure. "The r2 value is not a statistical test". Umm. Actually, that's exactly what it is. It tests for significance. If the point you wanted to make was that the r2 value is a sufficient condition for rejecting the null H, but not a necessary one, then say so. But don't deny that it's a statistical test. That's just silly.

R^2 is a measure of goodness of fit. That's not the same thing as a "statistical test", if that phrase is to have its ordinary meaning. Proof: if it were a statistical test, then there would be tables of critical values of R^2 so that people could check whether their estimates passed an R^2 test. There are no such tables because there is no such test. R^2 doesn't "test for significance" and, as I keep pointing out, it is possible to have a correct model with highly significant coefficients which has a low R^2, because it is a model of a noisy process.

Stephen McIntyre: I've asked this before but I guess it got lost in the thread; could you give the actual number for the R^2 figure in your tests, rather than saying "~0.0"? Zero is quite obviously a special number in this case. Also, it strikes me that the R^2 from an exercise like the one you describe would be a more or less meaningless transform of the variance of your red noise. In particular, in such an exercise a zero R^2 is nowhere near the worst that things could get; it is not uncommon in econometric applications to get an out-of-sample fit with a negative R^2 (i.e. a mean-squared forecast error which is greater than the variance of the data).

A couple of asides:

1. Economists also use adjusted R2 measures (Rbar^2), for which the critical value is zero.

2. Economists got used to high R2 values in macroeconomics because most of the early analysis involved time series with a common trend, but other analyses on large data sets from household surveys and so on produce R2 close to zero, even though coefficients of interest are highly significant. I imagine the climate series are closer to the latter case.

By John Quiggin (not verified) on 23 Aug 2005 #permalink
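
[The aside above about negative out-of-sample R^2 is easy to demonstrate numerically. A small sketch with invented numbers and a deliberately poor forecast: with R^2 defined as 1 - MSFE/variance, any forecast whose squared errors exceed the spread of the data lands below zero:

    import numpy as np

    rng = np.random.default_rng(3)
    actual = rng.normal(size=100)
    forecast = actual.mean() + rng.normal(scale=2.0, size=100)  # a bad forecast

    msfe = np.mean((actual - forecast) ** 2)  # mean squared forecast error
    r2_oos = 1.0 - msfe / np.var(actual)      # out-of-sample R^2
    print(f"out-of-sample R^2 = {r2_oos:.2f}")  # comfortably negative here

]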

Dano, you said: "Well, we know Mann claims MBH didn't use r^2, but hey, let's not construct strawman arguments, right boys?"

How do you reconcile Mann's statement with the Figure in Nature showing r2 statistics?

A. Steve, I was trying to nail down per on anything so I could show that he either hadn't read the Nature or was...er...misrepresenting the paper (it takes a while, as I found out on sci.env, to figure out a tactic to this end).

I should have done a better job at constructing that sentence, since I knew MBH98 used r2 in the paper. Apologies for the confusion.

B. That said, I don't know the answer to your question. I suspect the initial work was tested with r2 and sometime later they learned RE was a better test.

As you no doubt are keenly aware, MBH98 was an early work. Subsequent work has improved upon it (as is often the case), yet the overall conclusion still stands, as the evidence from later papers indicates. Certainly the error bars were useful in the depiction, as some of the MBH98 uncertainty looks pretty good in later depictions.

Knowledge marches on!

Best, sir,

ÐanØ

Both R2 and r2 are accepted usages: R2 is the more common usage in econometrics and statistics, while paleoclimatologists and dendrochronologists tend to use r2.

How quickly one forgets. You are correct Steve.

My training is in the natural sciences, which uses r2, and we see r2 all the time. I also see I used R2 in an Urban Econ class not that long ago calendrically, but ages ago mentally.

BTW, your glacier discussion in the comments is fascinating.

Best,

ÐanØ

I hesitate to quibble with Prof. Quiggin but I have elected myself as the "R^2 Terminology Police" here so I have to keep insisting that R-bar-squared, like R-squared, doesn't have "critical values"; the value of zero is an important one for R-bar-squared because a value greater than zero indicates that there is at least some genuine fit to the model over and above what you would get arithmetically by eating up degrees of freedom, but it isn't a "critical value" in the sense in which 1.96 is a critical value for a t-ratio. JQ's substantive points remain.

I yield to no man, btw, not even Tim, in my obsessive nerdish Lambert completism. I've just remembered something ...

Here's an example of how "making a fetish out of r-squared" can lead you up the garden path (note that Tim is gently mocking Lott here with the tables; it is not a serious critique of Lott)

dsquared - MBH98 certainly used r2 (OK, I'll use this form here instead of R2) as a test of statistical significance. MBH98 stated:

"For the r2 statistic, statistically insignificant values (or any gridpoints with unphysical values of correlation r , 0) are indicated in grey. The colour scale indicates values significant at the 90% (yellow), 99% (light red) and 99.9% (dark red) levels (these significance levels are slightly higher for the calibration statistics which are based on a longer period of time)." The color code shows that they treated 0.06 as being 90% significant; 0.14 as 95% and 0.20 as 99% significant.

Their usage of the term statistical significance follows dendrochronological practices in Cook et al [1994] and Fritts [1976; 1990], and is a little different from what one is used to in the general statistical literature, but can be followed. They use RE in a similar way. What's unique about MBH98 is the selectivity of the reporting of verification statistics.

dsquared: we reported the following values of the verification statistics standard in the trade for our emulation of the 15th century step of MBH98 (additional to the RE statistic) - R2: 0.02; CE: -0.26; Sign Test: 22/48; PM Test: 1.54. I've modified the emulation a little since then to adjust a scaling step after seeing the Wahl-Ammann code (where we were virtually identical in construction of RPCs). The verification statistics for my run-through of the Wahl-Ammann version were: R2: 0.02; CE: -0.24; Sign Test: 0.54; PM: 0.91.

As noted above, the statistical terminology of MBH98 and the dendrochronological literature is a little idiosyncratic, and I'll try to post a note up at climateaudit on reconciling it to the more usual statistical terminology of null hypotheses, with a view to discussing exactly what the null hypothesis is.

BTW I've discussed the MBH98 confidence intervals at climateaudit a few months ago. There's a lot of hair on these calculations as well.

By Steve McIntyre (not verified) on 23 Aug 2005 #permalink
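
[For contrast with the dendro conventions Steve describes, here is the standard textbook route from a correlation to a significance level via the t distribution -- a sketch under my own assumptions about sample size and one-sidedness, so it will not reproduce the MBH thresholds exactly, which is part of the point about idiosyncratic usage:

    import numpy as np
    from scipy import stats

    def min_significant_r2(n, conf=0.90):
        # Smallest r^2 significant at `conf` (one-sided) with n pairs,
        # inverting t = r * sqrt((n - 2) / (1 - r^2)) for r.
        t_crit = stats.t.ppf(conf, df=n - 2)
        r = t_crit / np.sqrt(n - 2 + t_crit ** 2)
        return r ** 2

    for conf in (0.90, 0.95, 0.99):
        print(conf, round(min_significant_r2(48, conf), 3))  # 48-yr verification period

]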

Hey Steve, can you kill the spam filter on my IP addy in your comments?

For some reason, it flagged me. No idea why. I'm being overly considerate to the PosseTM today, since it's your house an' all.

Thanks!

D

Interesting way to communicate. The Spam Karma tends to penalize new posters. I'll check on it. I agree that you've been quite civil while you've been visiting. I appreciate the compliment about the glacier posts.

By Steve McIntyre (not verified) on 23 Aug 2005 #permalink

Still no good, Steve, at ~0045GMT.

Off on another vacation, cheers,

D

re: 92 Eli Rabett

"it does not matter if a forcing is natural, supernatural or a result of people's actions."

Of course it matters; the first two forcings are largely unaddressable, save questionable actions such as blocking solar radiance before it reaches earth. In addition, if AGW forcings are minimal in contribution, then an already minimally effective AGW-reversing initiative like Kyoto "would be less effective than thought."

This brings me to your second point, my preference toward Moberg's reconstruction or von Storch's modeling isn't the issue — it's Esper J, Wilson RJS, Frank DC, Moberg A, Wanner H, Luterbacher J (in press) Climate: past ranges and future changes. Quaternary Science Reviews.

They are the ones calling for updating the proxies (adding to Steve McIntyre's voice), among other things to address the "amplitude puzzle" of recent (including MBH'99) reconstructions:

"data from the most recent decades, absent in many regional proxy records, limits the calibration period length and hinders tests of the behaviour of the proxies under the present 'extreme' temperature conditions. Calibration including the exceptional conditions since the 1990s would, however, be necessary to estimate the robustness of a reconstruction during earlier warm episodes, such as the Medieval Warm Period, and would avoid the need to splice proxy and instrumental records together to derive conclusions about recent warmth.

So, what would it mean, if the reconstructions indicate a larger (Esper et al., 2002; Pollack and Smerdon, 2004; Moberg et al., 2005) or smaller (Jones et al., 1998; Mann et al., 1999) temperature amplitude? We suggest that the former situation, i.e. enhanced variability during pre-industrial times, would result in a redistribution of weight towards the role of natural factors in forcing temperature changes, thereby relatively devaluing the impact of anthropogenic emissions and affecting future predicted scenarios. If that turns out to be the case, agreements such as the Kyoto protocol that intend to reduce emissions of anthropogenic greenhouse gases, would be less effective than thought. This scenario, however, does not question the general mechanism established within the protocol, which we believe is a breakthrough."

While we're waiting for AGW proponents to retire MBH'99 from the Wikipedia plot you all are so fond of posting, can we update the proxy records so the training period is enhanced up to and including the "warmest decade in the millennium"?

Oh, and I've checked RealClimate and ClimateAudit - it doesn't appear that consideration has been given to this Jul'05 accepted paper yet -- perhaps if/when it's published? Scott Church seems to have insight into the latest RealClimate positions - maybe he knows if the paper is under consideration at their blog?

By John McCall (not verified) on 25 Aug 2005 #permalink

Oh and Dan0 --

You should read the paper as well. You and our learned host are so fond of throwing out and (IMO) misreading the millennium relevance of that Wikipedia reconstruction summary; and that includes the last 10-35 years of proxy vs. instrument record in the plots. The paper will give you some things to think about regarding "splicing" ...

By John McCall (not verified) on 25 Aug 2005 #permalink

See the Huybers comment about the dendroclimatic proxy correlation to local temperature measurements. (he was referencing a ?Jones? paper iirc)

By cytochrome sea (not verified) on 25 Aug 2005 #permalink

Wait; I reread it, scratch that last post.

By cytochrome sea (not verified) on 26 Aug 2005 #permalink

Where is this paper available?

By Steve Bloom (not verified) on 26 Aug 2005 #permalink

RE #113

The article was published 8/10/05 on Science Direct.

"Persisting controversy (Regalado, 2005) surrounding a pioneering northern hemisphere temperature reconstruction (Mann et al., 1999) indicates the importance of such records to understand our changing climate. Such reconstructions, combining data from tree rings, documentary evidence and other proxy sources are key to evaluate natural forcing mechanisms, such as the sun's irradiance or volcanic eruptions, along with those from the widespread release of anthropogenic greenhouse gases since about 1850 during the industrial (and instrumental) period. We here demonstrate that our understanding of the shape of long-term climate fluctuations is better than commonly perceived, but that the absolute amplitude of temperature variations is poorly understood. "

"When matching existing temperature reconstructions (Jones et al., 1999; Mann et al., 1999; Briffa, 2000; Esper et al., 2002; Moberg, et al., 2005) over the past 1000 years, although substantial divergences exist during certain periods, the timeseries display a reasonably coherent picture of major climatic episodes"

I guess I don't understand your bluster. The article argues proxy reconstructions are good (including MBH) but could be refined. Not really big news.

re: 118 Oh, you mean that, in addition to having difficulty reading/understanding the summary quotation cited in 113, you also didn't grasp what the quote "although substantial divergences exist during certain periods" from your own citation meant?

TRANSLATION:

"substantial divergences ... certain periods" = Mann'99 during MWP. Please study Figure 1, just estimate what happens if Mann'99 is dropped from the averaging. Even though it's the steepest at the business end of the hockey stick, dropping Mann'99 would have less affect on that end (1990s instrument data warming); but the MWP proxy period average would rise significantly. Because of it's icon-status, Esper and Moberg (like von Storch before them) realize the hockey stick must be included (even if it's "rubbish") - and it's inclusion sure dampens the average amplitude of the MWP end of the millennium, while making the modern end steeper?

By JohnMcCall (not verified) on 26 Aug 2005 #permalink

I understand the paper well, but then I am not the one making irrational demands of it or its authors. Nor am I the one using my magical mystery powers to read deeply into & "translate" a mundane paper in an obscure journal.

The paper is not introducing the theory of relativity, it summarizes the current lit and sets out a generalized research agenda. Yawn.

I "demand" nothing -- it's obvious for those who read and understand, that the summary statement is the more definitive of the citations.

Had you taken some time to analyze figure 1 before you posted your quote, you would have seen the clear reference to Mann'99. There are others in the figure, but your intellectual laziness drove you to swing wildly, posting of "demands" and "magic," rather than drawing on modest observation and insight to post along that line.

But you're right about one thing (although you didn't go far enough); as far as you're concerned, everything about the article is obscure.

By JohnMcCall (not verified) on 30 Aug 2005 #permalink