Chronicle on Hockey Stick

William Connolley has a few comments on the Chronicle of Higher Education's article on the hockey stick wars.

There is also a question and answer session with Gerald North of the NRC panel. I liked this question, from one Patrick Frank:

The original hockey stick has been shown not just flawed but wrong. Why was the NAS committee unable to clearly state that?

You can just see Frank spluttering with indignation. Why didn't the NAS committee agree with me, why? As North puts it:

There is a long history of making an inference from data using pretty crude methods and coming up with the right answer. Most of the great discoveries have been made this way. The Mann et al., results were not 'wrong' and the science was not 'bad'. They simply made choices in their analysis which were not precisely the ones we (in hindsight) might have made. It turns out that their choices led them to essentially the right answer (at least as compared with later studies which used perhaps better choices).


The original hockey stick has been shown not just flawed but wrong.

Hee, hee! A classic in the making!
In comments on this blog and elsewhere, I see climate skeptics establishing their credentials early on by opening with some suitably derisive comment about the HS, e.g. "Now that the hockey stick has been broken into a thousand pieces ..."
Skeptics get obvious pleasure by thus stoning the devil. Indeed, this and similar phrases seem to serve the same purpose as other ritual acts of devotion. Perhaps in a few years, every skeptic statement will open with one of a few standard formulae, such as

Now that the hockey stick has been shown conclusively to be a complete and utter fraud, Tuvalu is not under water.

Wow...that's not exactly a ringing endorsement, is it?

By Dennis Williams (not verified) on 06 Sep 2006 #permalink

Dennis: Well, is the standard now that the hockey stick not only has to be right but that the methods used almost 10 years ago to obtain it have to be rigorous and perfect, or otherwise we must doubt its conclusions (even if they have been essentially verified in other later studies) and everything else about anthropogenic climate change?

And personally, I would call "Most of the great discoveries have been made this way" a pretty strong endorsement, unless your standard is to disavow those great discoveries as well because the initial work on them used pretty crude methods.

By Joel Shore (not verified) on 06 Sep 2006 #permalink

Might I suggest using "global warming denier" rather than "skeptic"? It better fits the behavior on exhibit, and avoids devaluing skepticism, which is a generally desirable attitude.

Moreover, if we also lump creationists and Intelligent Design advocates into the category of "evolution deniers", it not only simplifies discussion but also introduces a symmetry and emphasizes the overlap in membership between the two groups of deniers.

John Quiggin is also using "denialist".

Allow me to claim a bit of credit for being an early adopter of "denialist":

"Ah Tim, congratulations, you now have a dialog with the eminent Dr. Sonja Boehmer-Christiansen editor of Energy and Environment, house journal of denial. Denialist being much more accurate than skeptic."

The stick in the moorland Stoat, didn't think it would have legs at the time....

I don't get it. Is North saying that basically Mann et al made some bad choices in their analysis and just happened to come up with the right answer? Like, by chance or something?

So, another set of "choices" would have led to a different result?

Doesn't this open the door to the possibility that Mann et al made the choices that they did SO THAT their results would show a hockey stick shape?

What was the basis for the "choices"? Does anyone know?

By nanny_govt_sucks (not verified) on 06 Sep 2006 #permalink

I don't get it. Is North saying that basically Mann et al made some bad choices in their analysis and just happened to come up with the right answer? Like, by chance or something?

So, another set of "choices" would have led to a different result?

It's more to do with statistical rigour, I think... the precise method that Mann et al. used to combine all the various proxies is (we now know) not the best way to do it - because they were the first people to really try.

As I understand it, more recent studies have reduced the uncertainties but essentially give the same result. North's point is that this is not unusual... when you try a novel technique, the uncertainties and limitations of your results are often not well understood, simply because of its novelty. The major political/cultural ramifications of climate change have meant that you have people willing to loudly both (a) downplay the uncertainties in the original result and (b) misrepresent the attempts to understand them for rhetorical purposes.

... the precise method that Mann et al. used to combine all the various proxies is (we now know) not the best way to do it - because they were the first people to really try.

Bingo!!!!

This is all the denialists and contrascientists have - bashing on an old, first paper.

Let us keep that fact in the forefront of our mind when we see someone bashing that totem. See, if a totem didn't exist, they'd have to invent one.

Best,

D

The other problem as I understand it is that two of the tree species used for tree-ring proxies were problematic - not useless, but just not as useful as other proxies and given unwarranted confidence (in hindsight). Corrections to my understanding would be welcomed. I'd also love to see some clarification as to whether later reconstructions completely removed this problem - North seemed unsure about that.

Folks who are confused about the "hockey-stick" wars and who want to see how deniers "pull fast ones" should google up a PDF copy of the Wegman Report and have a look at figure 4.1. A major argument used against Mann et al is that Mann's data-centering convention "mines" noisy data for "hockey stick" leading principal components. To make that case, M&M generated a big set of random noise time-series and computed principal components from it using Mann's data-centering convention. And yes, in many cases, leading principal components computed from this sort of random noise do have that "hockey stick" shape. But there's a *big* catch here, and someone with sharp eyes should have no trouble spotting it.

To see what I mean, check out the Wegman Report Figure 4.1. Figure 4.1 shows Mann's "hockey-stick" plotted right next to a "noise-only" hockey-stick. They look pretty similar, don't they? Looks pretty bad for Mann, doesn't it? But take a closer look at fig 4.1 -- in particular, look at the Y-axis scales of the two "hockey-stick" plots. You'll see something **very** fishy.

By caerbannog (not verified) on 07 Sep 2006 #permalink
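For readers who want to poke at this themselves, here is a minimal sketch (in Python/NumPy, my own illustration rather than anyone's published code) of the experiment caerbannog describes: generate AR(1) "red noise" series, centre them only on a trailing "calibration" window, and take the leading principal component. The series count, length, window size and AR(1) coefficient are arbitrary assumptions chosen for illustration, not values from MBH or M&M.

```python
# Sketch of "short centering" applied to pure red noise; all settings here are
# illustrative assumptions, not values from MBH or M&M.
import numpy as np

rng = np.random.default_rng(0)
n_series, n_years, calib_len, phi = 50, 600, 80, 0.9

def ar1(n, phi, rng):
    """One AR(1) red-noise series of length n."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

# Data matrix: years down the rows, one noise "proxy" per column.
X = np.column_stack([ar1(n_years, phi, rng) for _ in range(n_series)])

# "Short" centering: subtract the mean of the last calib_len years only,
# instead of the full-series mean.
X_short = X - X[-calib_len:].mean(axis=0)

# Leading principal component of the short-centred matrix via SVD.
U, s, _ = np.linalg.svd(X_short, full_matrices=False)
pc1 = U[:, 0] * s[0]

# With short centering, pc1 typically shows a flat shaft and a late excursion
# (a "hockey stick"); with full centering (X - X.mean(axis=0)) it usually does not.
print("PC1 variance fraction:", round(s[0] ** 2 / (s ** 2).sum(), 3))
```

Rerunning the same sketch with conventional full-series centering is the quickest way to see the point of the eigenvalue questions raised further down the thread.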

"But take a closer look at fig 4.1 -- in particular, look at the Y-axis scales of the two "hockey-stick" plots. You'll see something very fishy."

Not to mention the choice of autocorrelation coefficient used to create the "noise-only" hockey-stick, which is 0.9 (equivalent to a "decorrelation time" of 19 years). Real tree-ring proxies have autocorrelation coefficients around 0.15 (equivalent to a "decorrelation time" of 1.35 years). One wonders why McIntyre made such a blatantly biased choice of autocorrelation coefficient for his supposedly unbiased "noise-only" hockey stick.

By Chris O'Neill (not verified) on 07 Sep 2006 #permalink
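For what it's worth, the decorrelation times quoted above follow from the standard AR(1) relation; here is a quick check (my own sketch of that assumed relation, not code from any of the papers discussed):

```python
# Decorrelation time of an AR(1) process, tau = (1 + phi) / (1 - phi).
def decorrelation_time(phi: float) -> float:
    return (1 + phi) / (1 - phi)

for phi in (0.9, 0.15):
    print(f"phi = {phi}: decorrelation time ~ {decorrelation_time(phi):.2f} years")
# phi = 0.9 gives 19 years, phi = 0.15 gives about 1.35 years, as stated above.
```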

There is nothing "fishy" about Wegman's Fig 4.1. The y-axes are different. Panel 1 is the red-noise simulation of PC1 under the MBH method. Panel 2 is the MBH reconstruction expressed in degrees centigrade.

The purpose of the figure is to demonstrate the imprint of the PC1 onto the reconstruction, as is made clear in the text. The shape is important, not the scale.


The purpose of the figure is to demonstrate the imprint of the PC1 onto the reconstruction, as is made clear in the text. The shape is important, not the scale.

OK, what percentage of the variance of the data is represented by M&M's red-noise PC1? What would an eigenvalue plot of M&M's red-noise data (from which they derived their hockey-stick PC1) look like? How do M&M's red-noise eigenvalues compare with MBH's "hockey-stick" eigenvalues?

By caerbannog (not verified) on 07 Sep 2006 #permalink

While James is deriving the answers that he says are demonstrated by that graph -- good questions, by the way; I'm sure James will be able to answer them if he's right about the validity of presenting the pictures with such grossly different Y-axis scales. I suspect he's double- and triple-checking his answers right now, perhaps confounded by the magnitude of the effects illustrated.

Meanwhile -- on the Chronicle piece.

Is the 'Patrick Frank' from Stanford, the man who asked that "not just flawed but wrong" question, the one by that name working at SLAC (the Linear Accelerator)? Just curious.

For those who didn't get the reference to Ben Santer, this link sums up that sorry early chapter in the denialist assault on scientists. He was treated much the same way that Rachel Carson was by the chemical industry.

http://www.ucar.edu/communications/quarterly/summer96/insert.html

Re the different scales on figure 4.1, there isn't anything wrong here except a poorly described diagram; in the process the graph would have been rescaled. I think the better question is to look at the eigenvalues.

But James, you raise an important point. You claim that there is nothing wrong about figure 4.1 since the context is made clear in the text. I have long felt that there is nothing wrong with using the hockeystick graphic wherever since it is described in the text. I am glad you agree with this line of thought.

By John Cross (not verified) on 07 Sep 2006 #permalink

By the way, here's Pat Michaels attacking Santer and predicting, in 2000, a significant cooling trend through the year 2007:

"Then there's the problem of Santer's ending the study on a high note, as it were, with annual average data from 1998, which just happens to be the big El Niño spike, now departed. Everyone knows that even after adjusting for the problems of orbital decay found in the satellite data last year, there is no significant overall warming unless this decidedly singular year is included (which leads us to predict a significant cooling trend from 1998 through 2007 in these data)." ...
-- http://www.heartland.org/Article.cfm?artId=9785

caerbannog,

You are missing the point of the diagram. It illustrates the tendency of the Mann method to prefer hockey-stick shaped series. Eigenvalues are irrelevant in this context. If it's the PC1, it by definition explains the highest proportion of the variance.

If you want to look at results on the same scale, how about Fig 3 in MM05 (GRL) (or Fig 4.3 in Wegman), which compares Mann's PC1 with the PC1 derived from conventionally centred data?

The first figure shows that Mann's algorithm mines for hockey-stick series. The second figure shows the effect on PC1.

Anticipating your next post: "but wait! the effect on the reconstruction doesn't matter, as long as the correct number of PCs are retained!". The "correct number" of PCs retained is defined as any number that is sufficient to include the bristlecone/foxtails, which now limp in at PC4, explaining 8% of the variance. As the subsequent regression stage "doesn't care" about the order of the PCs, once they are in, they're in, so the reconstruction is largely unaffected.

So, if you want to accept a reconstruction of temperatures pre-1600 based on a handful of North American tree ring sites, not correlated with local temperatures in the instrumental record, and disallowed as a temperature proxy by the corers themselves (Graybill & Idso), be my guest. I won't join you.

John Cross,

But James, you raise an important point. You claim that there is nothing wrong about figure 4.1 since the context is made clear in the text. I have long felt that there is nothing wrong with using the hockeystick graphic wherever since it is described in the text. I am glad you agree with this line of thought.

There is a matter of emphasis. I first saw the Hockey-stick on the front page of my local newspaper. And despite the use of the word "uncertainty" in the title of MBH99, there isn't much discussion of uncertainty in the paper itself.

Beating up on 10-year-old (Mann, 1998) and even 20-year-old (Hansen, 1988) scientific papers seems to be a very popular pastime among some.

It is a little like now harping on the arithmetical errors Johannes Kepler made when he was figuring out the shape of the planetary orbits.

Who really cares? The critical thing is that he got the orbits right.

Same with Mann and Hansen. Their work basically supported the idea that AGW is real and lots of other work done since backs this up.

Scientists work in the present. Those who are obsessed with the past are not scientists. They are historians.

James: I haven't done the research so I can't back up the statistics on it, but I suspect that most of the people who were seeking further clarification of the figure would come across the title of the paper at some point (probably fairly early on). So having the word "Uncertainty" in the title is probably sufficient.

But let's say that they had their eyes shut for the first page - phrases like "NH reconstructions prior to 1400 exhibit expanded uncertainties" and "the 1990s are likely the warmest" or even "More ... data ... are needed before more confident conclusions can be reached" are more than adequate to provide context.

By John Cross (not verified) on 07 Sep 2006 #permalink

I think the criticism about Mann's standardisation method is a bit off the mark. As I understand it from M&M's paper, the dodgy standardisation method involved using the pre-'blade' data to construct the mean for standardisation, and the post-'blade' data to construct the standard error. This does seem rather flaky, but in a counter-intuitive way it seems to work very well.

By this I mean simply that, if there was no change (as the denialists claim) in global temperatures at any point in the data, it shouldn't matter if you use the whole series to standardise or just a very large part of it (since the mean will be the same either way). If, on the other hand, there was a big change (starting at about the point the 'blade' started) then Mann's method will tend to emphasize this change, and M&M's critique will tend to reduce the magnitude of that change and its explanation of variance. Which is exactly what happened. Standardisation methods are important but they aren't magic, and in this case it's clear that if there was no change in the series, Mann's method would be just as good as M&M's. To those who would attack the Mann hockey stick on the basis of this 'dodgy standardisation' I give this thought experiment: 1) you say there was no change in the environment in the last 30 years; 2) doesn't that mean that in a data series of 1000 years we can leave out the last 30 years in calculating the mean, since they're hardly important? 3) if we can't, because those 30 years have a special effect on standardisation, doesn't that mean they're special? 4) why are they special? Is it because the temperature was different and so were the forcings?

M&M have also managed to show that in their adjusted method using the whole series, they get Mann's solution 15% of the time (from memory of the paper). This could be taken to mean in some democracy of ideas that Mann's method is not the optimal solution to the model; but alternatively we could infer that Mann's solution is to some extent *standardisation independent*. Seems a bit like gold to me - an effect so strong that no matter how you skew and fiddle with the data it still appears in the solution at least some of the time.

I think the criticism about Mann's standardisation method is a bit off the mark. As I understand it from M&M's paper, the dodgy standardisation method involved using the pre-'blade' data to construct the mean for standardisation, and the post-'blade' data to construct the standard error. This does seem rather flaky, but in a counter-intuitive way it seems to work very well.

That's not even close enough to be wrong.


Eigenvalues are irrelevant in this context. If it's the PC1, it by definition explains the highest proportion of the variance.

Let's see.... say we have a data-matrix with 100 columns of noisy time-series data whose eigenvalues are 0.012, 0.0115, ..., 0.001. Then the leading PC accounts for just a little over 1 percent of the variance. PC1 may account for a larger proportion of the variance than any other individual PC, but it still accounts for a tiny fraction of the total variance. (In MBH's case, the leading PC accounted for a large share of the variance -- something like 0.4 or so.)

In order to convince me that MBH's results really can be coaxed out of red noise, you are going to have to show me not just a hockey-stick-shaped leading PC whose associated eigenvalue is somewhere between 0 and 1; you are going to have to show me a hockey-stick-shaped temperature reconstruction derived from that red noise, a reconstruction with a dynamic range that matches MBH's and with a blade that nicely overlays the instrumental temperature record (like MBH's). An unscaled principal component with an unknown eigenvalue magnitude just won't do.

I have used SVD-based techniques in my work (totally unrelated to climatology), and in our applications, eigenvalue magnitudes mattered -- a *lot*.

If I am completely off-base here, could someone knowledgeable like Dano or Eli Rabett set me straight?

(And of course, we haven't even gotten to the autocorrelation issue raised by Chris O'Neill.)

By caerbannog (not verified) on 07 Sep 2006 #permalink
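A toy illustration of the variance-fraction point caerbannog is making (the numbers below are invented for illustration, not taken from MBH or M&M):

```python
import numpy as np

# Nearly flat spectrum: 100 eigenvalues running from 0.012 down to 0.001.
flat = np.linspace(0.012, 0.001, 100)
print("flat spectrum, PC1 fraction of variance:", round(flat[0] / flat.sum(), 3))
# roughly 0.02 -- the leading PC carries only a sliver of the total variance

# Spectrum dominated by its leading eigenvalue (hypothetical values).
dominant = np.array([0.40, 0.10, 0.05] + [0.01] * 45)
print("dominant spectrum, PC1 fraction of variance:", round(dominant[0] / dominant.sum(), 3))
# roughly 0.4 -- the situation caerbannog attributes to the MBH leading PC
```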

"If I am completely off-base here, could someone knowledgeable like Dano or Eli Rabett set me straight? "

may i preempt dano:

"It's all mendiscisizing atroturf oilpaid shills tactics"

yeah, knowledgeable.

By Hans Erren (not verified) on 07 Sep 2006 #permalink

Hans,

You employ the standard strategy of the political right wing - you don't criticize your opponents with broad empirical evidence, you denigrate them. I have seen this tactic applied to the arguments of critics of western foreign policy such as Chomsky, Pilger, Fisk, Herman and others. The strategy creates the impression, without any documentation, that those with views opposed to the 'mainstream' must be crazy.

Reams of evidence - if you cared to look at it - reveals that industry has spent many millions of dollars trying to debunk the science it hates. Corporate-funded think tanks like the GCMI, the CEI, Hudson Foundation, CAT etc. don't give a rat's ass about the science behind climate change. As they represent self-valorizing amoral tyrannies whose sole concern is short-term profit, they must distort science to bolster a political agenda and a pre-determined worldview. They are acting like lawyers paid to represent a client. Corporate planners are paid to think in terms of quarterly profit margins, not in terms of burgeoning problems, however plausible, that may happen in ten or twenty years.

There is no doubt that a lot of money is floating around the anti-environmental slush fund, money that is being invested to ensure that the status quo is maintained, and damn the science. You denigrate yourself by making frankly feeble attempts to label critics of corporate-funded scientists and lobbying groups as 'conspiracy theorists'. The fact that you do this reveals that they have hit a nerve, because the facts they present are sound.

By Jeff Harvey (not verified) on 07 Sep 2006 #permalink

Let's see.... say we have a data-matrix with 100 columns of noisy time-series data whose eigenvalues are 0.012, 0.0115, ..., 0.001. Then the leading PC accounts for just a little over 1 percent of the variance. PC1 may account for a larger proportion of the variance than any other individual PC, but it still accounts for a tiny fraction of the total variance. (In MBH's case, the leading PC accounted for a large share of the variance -- something like 0.4 or so.)

Have you ever seen a PCA where the leading component has an eigenvalue anything like as small as 0.012? I don't think you know what you're talking about. As for asking Dano about it, you might as well ask your refrigerator.

Have you ever seen a PCA where the leading component has an eigenvalue anything like as small as 0.012?

Um, I think that is the point. From what I recall the eigenvalues for the random "hockey-sticks" were very small.

By John Cross (not verified) on 07 Sep 2006 #permalink

caerbannog: If I am completely off-base here, could someone knowledgeable like Dano or Eli Rabett set me straight?

You are completely off-base there and elsewhere. You simply do not understand what the true PCA is, what the motivation for it is, and the fact that Mann's "PCA" has nothing to do with it. It is completely meaningless to talk about eigenvalues or explained variance in the context of Mannian PCA as they have no meaningful statistical interpretation. If you really want to understand this issue, instead of blindly parroting propaganda fed to you in places like this blog, please study Appendix A of Wegman's report until you understand every line of it.

You get basically a hockey stick shape out of any data set with Mann's method if a single series has a higher (or lower) mean in the "standardization period" than the overall mean. I posted Matlab code over ClimateAudit a long time ago, try it yourself if you do not believe. The hockey stickness does not result from redness per se, but with autocorrelated time series you are more likely (than with white noise) to have series with the above mean property. Mann had bristlecone pine series to do the trick.

sg, and others: It is not that Mann used a "different" method, which just did not happen to be "optimal". Mann used methods that are unknown to others, whose properties are yet to be established, and did not even bother to tell people about it. That's not science. If you were caught doing that in my field, you'd be an outcast for the rest of your career. That seems not to be the case in climate science.

[Tim: If you do not let this through unmodified, do not EVER come anywhere to complain about censorship.]

Jean S,

You said, referring to Michael Mann, "If you were caught doing that in my field, you'd be an outcast for the rest of your career. That seems not to be the case in climate science".

Great. So please tell me what should be done with the likes of Sallie Baliunas, Willie Soon, Pat Michaels, Fred Singer, the Idso clan and all of the other buffoons who have been mangling science for years? Have you ever read, for example, how Craig and Sherwood Idso twist, mutilate, and distort peer-reviewed studies on their abysmal web site in order to argue that the more carbon dioxide we spew into the atmosphere, the better? What they write is pure and utter tripe (speaking as a population ecologist who works on plant-animal interactions), so let's come clean here: what are your thoughts on these clowns?

By Jeff Harvey (not verified) on 08 Sep 2006 #permalink

Jeff: As far as I can tell, they ARE outcasts in their fields.

"So, if you want to accept a reconstrtuction of temperatures pre-1600 based on a handful of North American tree ring sites"

I don't want to and I don't.

If you exclude this "handful of North American tree ring sites" you can still produce a statistically valid reconstruction of Northern Hemisphere average temperature back to 1450.

I just wish people would stop the lies about pre-1600 reconstructions requiring this "handful of North American tree ring sites".

By Chris O'Neill (not verified) on 08 Sep 2006 #permalink

Jean S,

Thanks for clarifying my point.

If Mann did indeed screw up, he should be criticized. What concerns me is the attention the sceptics get from the media. In almost every debate on AGW one of the usual suspects pops up: if it ain't Singer, it's Lindzen, and if it ain't Lindzen it's Baliunas, and if it ain't Baliunas it's Michaels, and if it ain't Michaels it's Balling and so on. In 1996, a memo was leaked from the American Petroleum Institute by a whistleblower in which the API claimed to be 'concerned' that they were having to rely on the same coterie of scientists (Michaels, Baliunas, Singer etc) as denialists in the climate change debate. The point was that the API was worried that the public would become wary of seeing the same names used repeatedly to support the corporate view that AGW is overblown or non-existent and that the API should therefore aim to recruit a new batch of what they termed 'independent scientists' for this purpose. Ten years later, and which scientists do we still see used in media reports to deny AGW? Michaels, Singer, Baliunas etc. And why is this? Because the vast majority of climate scientists don't support the 'corporate view'.

By Jeff Harvey (not verified) on 08 Sep 2006 #permalink

Jean S, so when you say that someone in your field is outcast for the rest of their careers, you mean that they would keep their academic position, continue to publish papers, be named to head up organizations and get heavily funded by industry to boot?

I would very much like to know which field you work in as I feel a career shift coming on.

Thanks
John

By John Cross (not verified) on 08 Sep 2006 #permalink

Jeff, I don't want to get involved too much in this discussion, but here's my (last) few cents: I really do not understand why people like Tim continue to support Mann's work (MBH9X). It is so obviously garbage. I also happen to know that there are some serious problems to be uncovered in his later work. So why in the heck blindly defend Mann? IMO, fighting the battle for the "hockey stick" to the bitter end is a disservice to the "AGW camp": it only gives weapons to the "septics". Why not dissociate from it, admit it is garbage, and really move on? The "case" of the AGW camp does not really need or rely on Mann's work.

John, I only wish my field was "heavily industry funded". Maybe then I would not need to write here under a pseudonym, as I would be expecting a nice salary from the industry ;)

"I really do not understand why people like Tim continue support Mann's work (MBH9X). It is so obviously garbage"

The first comment vindicated.

By Peter Hearnden (not verified) on 08 Sep 2006 #permalink

Peter, I don't exactly understand what you mean, but I suppose you wanted to attack me somehow. Thanks, I've always liked you, and never (on purpose at least) offended you.

Well, I got my time (thanks Tim!), and I'm off here.

Ah, George Aiken come back as Jean S. But Jean darling, you are still full of yourself and other things.

There are people here who understand considerably more about the statistical issues than I do. Our host and Chris O'Neill are two of them, but I do want to point out that by the nature of the beast a lot of physically important results initially depended on shaky math. The guys who walk behind the elephant cleaned up the details later, and guess what, the results were pretty much the same. North made this point also.

And yes, James, ordinates do matter. If they did not matter, why were they scaled in the first place?


You get basically a hockey stick shape out of any data set with Mann's method if a single series has a higher (or lower) mean in the "standardization period" than the overall mean.

How well do those artificial "hockey-sticks" replicate the instrumental record over the 150 years (or so) for which instrumental data are available?

By caerbannog (not verified) on 08 Sep 2006 #permalink

Jean S said: "I really do not understand why people like Tim continue to support Mann's work (MBH9X). It is so obviously garbage."

Funny, I read the recent National Academy of Sciences report (on temperature reconstructions) and don't recall anything in there that referred to Mann's work as "garbage" or anything close to it.

Then again, perhaps Jean just knows much more about climate science than the members of the NAS panel.

I do appreciate all the thoughts here (esp. the James and Hans comedy), thank you all.

wrt evil bunny's 'off base' question, no. This is the question exactly, and what Ritson and others are saying.

If this is such a blockbuster find, you'd find folk such as David Stockwell publishing all over the place, making a name for himself.

Best,

D

wrt HSness and totem-propping, I should have mentioned the "I posted Matlab code over ClimateAudit a long time ago, try it yourself if you do not believe."

Another instance of 'who cares'.

If its blockbusterness is so relevant, I'm sure some journal will evaluate its print-worthiness.

Perhaps Jean can get in on the ground floor of CA's new journal that will print all the blockbuster discoveries arising out of comments: Galileo: The Journal of CA NewScience.

Best,

D

Can I just urge those of you on the side of AGW to tone down the ad homs. Please make the case based on strong science, sound statistics, and publish code and data to allow replication. If we are to win this vital battle, we need to win because the science is sound. Intense usage of ad homs actually suggests that those who use them are not confident of the science. Is that the impression that we want to give?

I am DEEPLY concerned that the poor science advanced by Mann et al serves to discredit AGW, and gives comfort to those who would prefer to take no action.

By concerned of Berkely (not verified) on 08 Sep 2006 #permalink

Can I just urge those of you on the side of AGW to tone down the ad homs.

Can I just urge those of you on the side of denialism to learn the def of ad hom.

Ad hom: "you are an idiot, therefore your argument is faulty"

Not ad hom: "your argument is faulty because x, y, z, and by the way you are an idiot".

Knowing the def will have the unfortunate result of taking away an essential rhetorical tactic for some, but still.

Best,

D

Denialist? Rather than sceptic?

By concerned of Berkely (not verified) on 08 Sep 2006 #permalink

A point very well made "Concerned"
Climate science has been a scientific backwater, but now, with increased scrutiny, the science had better be good.
Re your second point: Mann has tried to rewrite history (MWP), statistics, and the scientific method. To disagree with him is not even scepticism, let alone denialism!

Maybe this is a good place to ask some skeptics: As I understand it, M&M claim that (a) the MBH method mines for hockey sticks and (b) you won't get a HS without the bristlecones (or whatever). These appear to be incompatible claims, to me.

Slight note on eigenvalue magnitude. Size does matter in this case, as does the number of eigenvalues. Once an eigenvalue is accepted as relevant, we have to take the data coded there seriously. White, pink or red noise lack the markers of meaningful data. The classic paper for some of these issues is "Derivation of theory by means of factor analysis or Tom Swift and his electric factor analysis machine", by Scott Armstrong, available at
http://repository.upenn.edu/marketing_papers/13/
CA mentioned it, but missed the point (how do you discriminate meaningful from random data), and that it focused on PCA (called principal factor analysis in this paper). Oddly, the technique criticized for eigenvalue selection by Armstrong was essentially the same as that used by M&M. Given that glacier data, bio proxy data, and boreholes have shown the same story, I'd think the replications are done, and claiming that a small coterie of advocates have overwhelmed the process is nonsensical. I do see a small clique, but not on the side seeking and using data.

William, you seem to be playing "dumb" here. Do you honestly not get it? ClimateAudit is flush with this information. Do you not read blogs with a differing point of view?

MBH mines for hockey stick shapes, and the bristlecone pines are the hockey stick shaped series that get promoted by the MBH methodology.

Remove the MBH methodology and do a simple average of all the series and there's no hockey stick shape.

Remove the bristlecones, and run the MBH method (see Mann's "CENSORED" folder) with the rest of the series and you don't get a hockey stick.

Put them both together and the MBH method mines for and emphasizes the hockey-stick shaped bristlecone series.

By nanny_govt_sucks (not verified) on 09 Sep 2006 #permalink

Good thing some folk fetishize an 8-year-old debunked quibbled-to-death first paper.

Think of all the other researchers who will be bogged down by ululating group-hug astroturfers once this totem fetish wears off.

Best,

D

"MBH mines for hockey stick shapes" when you turn up the de-correlation time to one-fifth of the callibration period, according to an unreviewed claim. For some reason the person making this claim has shown no interest in using a real proxy like de-correlation time such as one-sixtieth of the callibration period. He doesn't seem to care about appearing to be biased.

"Remove the bristlecones, and run the MBH method (see Mann's "CENSORED" folder) with the rest of the series and you don't get a hockey stick."

What a blatant lie. Remove the bristlecones after 1450 and you still get a hockeystick AND a statistically valid reconstruction. Adding the bristlecones back in makes no significant difference to the hockeystick obtained after 1450, even though according to some, these bristlecones cause a huge bias.

By Chris O'Neill (not verified) on 09 Sep 2006 #permalink

"Remove the bristlecones after 1450 and you still get a hockeystick AND a statistically valid reconstruction."

Remove Mann's papers and you still have AGW. NAS said so. Even Wegman said so.

In the grand scheme of things, it makes no difference whether Mann made statistical or other errors in a paper published almost a decade ago. For some to be so obsessed with Michael Mann is nothing short of Mann-iacal.

I don't know what they have to say,
It makes no difference anyway,
Whatever it is, I'm against it.
No matter what it is or who commenced it,
I'm against it.

Your proposition may be good,
But let's have one thing understood,
Whatever it is, I'm against it.
And even when you've changed it or condensed it,
I'm against it.

I'm opposed to it,
On general principle, I'm opposed to it.

[chorus] He's opposed to it.
In fact, indeed, that he's opposed to it!

"There is a long history of making an inference from data using pretty crude methods and coming up with the right answer. Most of the great discoveries have been made this way. The Mann et al., results were not 'wrong' and the science was not 'bad'. "

This just in: Einstein disproves F=MA! Entire field of Newtonian physics revealed to be a sham!

Good thing some folk fetishize an 8-year-old debunked quibbled-to-death first paper.

Would "some folk" be William? He brought it up.

By nanny_govt_sucks (not verified) on 10 Sep 2006 #permalink

Mann's is not the only old paper that the GW Deniers (sounds like a baseball team, doesn't it) keep bringing up, of course. Hansen's 88 paper is the other one.

Neither paper makes the least bit of difference to the validity of the AGW argument, but apparently those who keep trotting out the same tired "Mann/Hansen (Mannsen?) was wrong" claim do not appreciate this.

Neither paper makes the least bit of difference to the validity of the AGW argument, but apparently those who keep trotting out the same tired "Mann/Hansen (Mannsen?) was wrong" claim do not appreciate this.

I respectfully disagree.

The mendacicizers know exactly what they are doing and the dupes like na_g_s parroting the message pass it on. It's the modern version of the telephone game.

Best,

D

Dano,
You may be right when it comes to some of these people, but I was affording everyone the same benefit of the doubt with regard to being a "mendacicizer".

Understood, JB:

but I like to distinguish between the folk who purvey FUD and the dupes who like the words of the purveyors because they've chosen their identity according to that worldview.

That is: the purveyors are the users and the dupes are being used by the purveyors. That's a critical difference in my view. You'll never change the minds of the used, so it's best to deconstruct the arguments of the users.

Just a thought, sir.

Best,

D

It is interesting to see that the only hockey stick papers which the denialists are attacking are the ones by Mann et al. There are many other papers reporting on different types of temp reconstructions which have also shown that the hockey stick is valid. Is it that the denialists have something personal against Mann, or is it that they are too shallow to actually look in the scientific literature to see whether other data support or contradict Mann? It would appear that they are only able to parrot talking points given to them by other denialists rather than think and analyse for themselves.

Ian Forrester

By Ian Forrester (not verified) on 11 Sep 2006 #permalink

Steve McIntyre seems to claim that any study which uses tree rings is invalid. Or maybe it is only tree ring data which shows 20th century warming...

By John Sully (not verified) on 11 Sep 2006 #permalink

So John. You are clearly not a gardener. If you were, you would understand that vegetative growth (tree ring thickness) is maximised when conditions are optimal - that is, temperature is neither too low nor too high, moisture levels are right, pH is right, soil conditions are right, shade levels are right etc etc. If temperatures are too low, then growth is low giving thin tree rings. If temperatures are too high, then the plant is stressed, and the tree rings are thin. That is, if temperature were the only factor, the relationship between tree ring thickness and temperature is inverse quadratic.

That means that you cannot assume a linear relationship between tree ring thickness and temperature such that the thicker the tree ring, the higher the temperature. Not so.

Maybe this issue is addressed in the dendro-chronology papers, but my understanding is that it is assumed that there is a linear relationship between tree ring thickness and temperature. If that assumption does not hold up, then there is a significant question as to whether tree ring studies can ever tell us much about past temperature.

What would I know though. I am just a simple gardener!

there is a significant question as to whether tree ring studies can ever tell us much about past temperature. What would I know though. I am just a simple gardener!

You'd better tell this to the folks who do this for a living, as they'd like to hear your answers to your 'significant' question. I bet they'd listen to a simple gardener. Would you like to have me log you in to a dendro listserv and you can try to defend your assertion to folk who do this for a living?

Let me know.

Best,

D

"You are clearly not a gardener. If you were, you would understand that vegetative growth (tree ring thickness) is maximised when conditions are optimal - that is, temperature is neither too low nor too high"

Places such as, for example, the Arctic northern treeline or the high-altitude treeline where trees are growing in the coldest places that they are capable of growing. Now where was it that they got their temperature proxy tree-rings from again?

By Chris O'Neill (not verified) on 11 Sep 2006 #permalink

caerbannog: How well do those artificial "hockey-sticks" replicate the instrumental record over the 150 years (or so) for which instrumental data are available?

As well as MBH9X, although I do not think the MBH "fit" is particularly good. You have to understand that the "proxy network" (which includes MannPCA-obtained "hockey sticks") is "trained" against the instrumental series. In essence, the best linear combination of proxies matching the instrumental series is found. The problem here is not the method itself; the problem here is the fact that MBH fails the usual validation statistics. In other words, the "fit" to the instrumental series is spurious. See the Burger and Cubasch papers for more illustration of this problem.
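To make the "trained against the instrumental series" idea concrete, here is a stripped-down least-squares sketch with synthetic placeholder data (my own generic illustration, not MBH's actual algorithm, which uses a more elaborate regression): fit weights over the calibration window, then apply them to the whole proxy record.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, n_proxies, calib_len = 600, 20, 100

proxies = rng.standard_normal((n_years, n_proxies))    # stand-in proxy matrix
instrumental = rng.standard_normal(calib_len)          # stand-in temperature record

# Fit weights on the calibration window only (here: the last calib_len years).
A = proxies[-calib_len:]
weights, *_ = np.linalg.lstsq(A, instrumental, rcond=None)

# Apply the same weights over the full period to get a "reconstruction".
reconstruction = proxies @ weights

# The catch Jean S points to: the calibration-window fit can look fine even
# for junk predictors, so skill has to be judged on withheld verification data.
calib_corr = np.corrcoef(A @ weights, instrumental)[0, 1]
print("calibration-period correlation:", round(calib_corr, 2))
```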

Chris: Remove the bristlecones after 1450 and you still get a hockeystick AND a statistically valid reconstruction.

Any reference? Statistically valid, in other words you are saying that the bristlecones cause MBH to be statistically invalid?!?!

mtb: Maybe this issue is addressed in the dendro-chronology papers

Good questions. They actually are addressed in the dendro papers, and in general, the problems are recognized. Only a few researchers are making these "magic" temperature reconstructions assuming not only a linear relationship, but for instance using precipitation proxies, teleconnections (no correlation to local temperature), SH proxies for NH temperature etc. I do not need to name those researchers.

If you are interested in these dendro issues, IMO Samuli Helama's recent PhD thesis is a good starting point.

The Climate Marshall has spoken:
The hockey stick is broken.
Michael Mann is on the run,
Now go and get your gun.
We gotta hunt him down,
So he don't come back in town.
We'll string him from a tree,
How fitting that will be.
The law is on our side,
Mount up men, let's ride.
There'll be time enough for dancin'
When we get both Mann and Hansen.

Jean S said: "Chris: Remove the bristlecones after 1450 and you still get a hockeystick AND a statistically valid reconstruction.

Any reference? Statistically valid, in other words you are saying that bristlecones causes MBH to be statistically invalid?!?!"

Re-read what Chris said more slowly and you will see that he said that removing the bristlecones after 1450 resulted in both a hockey stick and a valid reconstruction. Was this just a slip up or was it a deliberate attempt at obfuscation?

Ian Forrester

By Ian Forrester (not verified) on 12 Sep 2006 #permalink

Jean S:

Helama also found a relationship between tree-ring width and temperature in a paper that likely resulted from his dissertation. I presume, then, that you agree with the authors that tree-rings are likely valid for temp reconstructions? [mtb, feel free to pipe in any time]

Thank you in advance for your reply,

D

Say, Jean S:

Can you help me out with this passage from the Helama you linkied above:

Interestingly, the 20th century AD was shown to be one of the warmest spells during the past millennium, and amongst the warmest centennial period during the entire reconstruction. On the other hand, coldness during the preceding century, 19th AD, was shown to be contrastingly amongst the severest periods during the past millennium.

[pg 21 sec 3.1]

and how this...fits...with your statement above that some papers' "fit" to the instrumental series is spurious. and whether the Helama...fits...in this category?

That is: did his curve-fitting in this paper remove the noise from the series adequately to make this judgement?

Thank you in advance for your reply,

D

Ian: Chris was claiming that you get a statistically valid reconstruction (MBH). It is untrue whether you have bristlecones or not.

Dano #1: Yes, it is part of his thesis (see the list of papers on p. 8). I do agree; have I stated otherwise? Especially the ones close to the treeline, where temperature is a true limiting factor (as Chris noted above), might make a good temp proxy. What I do not agree with is that you can just take some chronologies, throw them into a bag, shake a bit, and voila, you have a NH temperature reconstruction.

Dano #2: The above passage refers to his publication I. IMO, it is rather adequately justified; see the publication. So Helama does not fit in that category. The verification stats are not so wonderful, though, that we could claim the reconstruction to be the "final word" in July temp reconstruction of Finland. Anyhow, before you start trumpeting the passage any more, I seriously suggest that you actually take a look at the original paper (the graph).

Also, if you are truly interested in these matters, I suggest that you also see his latest (?) publication:

Helama et al: "Extracting long-period climate fluctuations from tree-ring chronologies over timescales of centuries to millennia", International Journal of Climatology, 25(13), pp. 1767-1779, 2005. http://dx.doi.org/10.1002/joc.1215

Abstract: For a long time, tree-rings have been thought of as containing almost no variation at timescales of centuries and millennia, i.e. at low frequencies. Here, we show that this might be an issue of data analysis rather than an actual lack of variability. A data set of subfossil and living Scots pines from northern Fennoscandia was examined by means of their ring-width time series. The premise was that the growth trends of individual time series could be quantitatively determined and decomposed into their different elements. It was shown that not all the components of growth trends were invariant over long periods of time, and that consequently the use of a single-curve standardization (i.e. Regional Curve Standardization, RCS) may result in temporally inflated and deflated indices of ring-widths. Observed non-climatic bias in tree-ring indices was probably due to gradually changing conditions in the pine population of the forest-limit ecotone. Changes in population density seem to have hampered the previous attempts at palaeoclimate reconstruction by masking the actual low-frequency climate variability. A new approach, expected to yield unbiased tree-ring indices, was proposed. The new chronology constructed by this approach showed consistency with multi-centennial variations that are based on independent palaeoclimate evidence.

Jean S "What I do not agree is that you can just take some chronologies, throw them into a bag, shake a bit, and vola you have a NH temperature reconstruction."

Are you saying that this is what Michael Mann did? Or just implying?

Thank you Jean S. Yesterday I did an ISI search on the author. I found a number of other interesting papers also.

The discipline is alive and well, undaunted by blog comments.

Best,

D

"Chris: Remove the bristlecones after 1450 and you still get a hockeystick AND a statistically valid reconstruction.

Any reference? Statistically valid, in other words you are saying that bristlecones causes MBH to be statistically invalid?!?!"

Looks like I have to be pedantic for some people:

Remove the bristlecones after 1450 and you still get a hockeystick AND still get a statistically valid reconstruction.

These tests were done by Wahl and Ammann in their paper to be published in Climatic Change available here. There's a nice summary of the results in this testimony to the House Committee on Energy and Commerce.

"See Burger and Cubash papers for more illustration of this problem."

Yes Burger and Cubasch don't have any serious problems with their work at all.

By Chris O'Neill (not verified) on 13 Sep 2006 #permalink

Thanks Chris for the link to the testimony. You are correct and Jean S is simply mistaken.

I just wish you had included the following excerpt for those who are too lazy to click on the link you provided to the testimony:

"MM's third methodological criticism surrounding the inclusion of the bristlecone/foxtail pine series was rejected for several reasons. The right frame in Fig. 2 illustrates that excluding these series has little effect on the MBH98 reconstruction, except to force it to begin in 1450 instead of 1400, because of lack of a data. Since the exclusion had little effect, and losing these data series would hinder reconstructions of earlier climate, WA06 rejected this criticism"

which is followed by a graph showing that there is almost no difference between the Mann result and the "fixed" result.

And the last line of the abstract Jean S posted is:
"A new approach, expected to yield unbiased tree-ring indices, was proposed. The new chronology constructed by this approach showed consistency with multi-centennial variations that are based on independent palaeoclimate evidence."

Dano: No problem. As I know you are an active follower of CA, I don't know if I'm repeating myself to you, but take a look at the abstracts in HOLIVAR: http://www.holivar2006.org
I found them pretty interesting. They also appear to be a quick way of finding out what really is going on in the "reconstruction fields". Googling the authors led me to many interesting papers. I tried to market the conference over at CA, but, as so often happens, people were more interested in "other things"/fighting :(

Chris: You can find from this thread a few adjectives describing your style of "discussion". WA does not "validate" MBH. If you again don't believe me, go to a statistics professor of your choice, and ask him/her about the verification results in WA tables 1S/2S. And of course, Mann's (realclimate) view on BC work is truly independent and unbiased... could you actually point out how the BC work is invalid regarding their main criticism of MBH: the spuriousness/non-robustness of its regression?

JB: As you clearly do not understand these matters yourself, just let it be.

hank: And your point is?

""MBH mines for hockey stick shapes" when you turn up the de-correlation time to one-fifth of the callibration period, according to an unreviewed claim. For some reason the person making this claim has shown no interest in using a real proxy like de-correlation time such as one-sixtieth of the callibration period."

When one does actually use a realistic correlation co-efficient (phi), such as 0.15 per year, instead of 0.9 per year, in McIntyre's R code (NAP report Appendix B), his "hockeystick" (more like a jump) nearly completely disappears. I wonder what metaphor we could use to describe a blatantly biased choice of parameter that causes a hockeystick to appear. Hmm. Oh yes, mining for a hockeystick.

By Chris O'Neill (not verified) on 14 Sep 2006 #permalink

Chris, if your noise is white, then with high probability there are no "hockey sticks" to mine for: likely the means of the calibration period are close to the overall means. Is it so hard to understand that it is not the autocorrelation per se from which the MBH method creates hockey sticks (re-read Wegman's Appendix A)? Wegman illustrates hockey sticks with the AR parameter 0.2. I encourage you to test on real tree-ring data how realistic the AR coefficient 0.2 really is: it's too low.

And fundamentally, who cares if you can or cannot produce an MBH-like curve with the MBH methodology from noise of your choice? The verification statistics are for testing how good a reconstruction is: MBH fails those and no twiddling is going to change that fact.

"WA does not "validate" MBH."

What happened to the problem with the "bristlecone bias"? Have McIntyre's years of argument come to nothing?

"If you don't again believe me, go to a statistics professor of your choice, and ask him/her about the verification results in WA tables 1S/2S."

And who's the statistics professor of your choice, "professor" McIntyre?

The point about validation of a reconstruction is to show that it (the reconstruction) is substantially better than noise, e.g. better than 99% or more examples of noise. The noise in this case is generated using the statistical properties of the proxies, i.e. you make a series of noise pseudo-proxies and use these to make a series of reconstructions. If the real reconstruction is any good then its validation measures will be better than those of the noise-based reconstructions in the vast majority of cases (e.g. better than 99% of cases). So the argument then is that the proxies are very unlikely to be just noise and very likely (99% likely) to indeed have responded to the variable being reconstructed.

One thing McIntyre is still claiming is that the RE validation score needs to be at least 0.54 to claim 99% likelihood, while Wahl and Ammann say their results only require a score of 0 to claim 98.5% likelihood. W&A's reconstructions give an RE of 0.44 or higher for their and the MBH proxy networks back to 1400. Even with McIntyre's claim, I'd guess that an RE of 0.44 would still give a likelihood of at least 95%. Even with McIntyre's disputed RE estimates, his claim that the hockeystick reconstructions lack likelihood looks pretty shaky.
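A sketch of the validation logic described two comments up, using the textbook RE definition and synthetic placeholder data (this is not Wahl and Ammann's code, and the significance thresholds being argued about depend on how the noise benchmarks are generated):

```python
import numpy as np

def reduction_of_error(obs, rec, calib_mean):
    """RE = 1 - SSE(reconstruction) / SSE(calibration-period mean as predictor)."""
    return 1.0 - np.sum((obs - rec) ** 2) / np.sum((obs - calib_mean) ** 2)

rng = np.random.default_rng(2)
n_verif = 50
obs = np.cumsum(rng.standard_normal(n_verif)) * 0.1   # synthetic verification target
calib_mean = 0.0                                       # assumed calibration-period mean

candidate = obs + 0.2 * rng.standard_normal(n_verif)   # a "skilful" reconstruction
re_candidate = reduction_of_error(obs, candidate, calib_mean)

# Monte Carlo benchmark: RE scores of pure-noise "reconstructions".
re_noise = np.array([reduction_of_error(obs, rng.standard_normal(n_verif), calib_mean)
                     for _ in range(1000)])

print("candidate RE:", round(re_candidate, 2))
print("fraction of noise benchmarks beaten:", np.mean(re_candidate > re_noise))
```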

"And of course, Mann's (realclimate) view on BC work is truly independent and nonbiased."

It wasn't an opinion. It was detail of mistakes that were actually made.

"could you actually point out how BC work is nonvalid regarding their main critisim of MBH"

You should not be lazy; read it yourself. Basically, the climate model they used to generate their simulated proxies was defective.

By Chris O'Neill (not verified) on 14 Sep 2006 #permalink

Chris, the only thing I'm too lazy to do is teach you basic statistics through a blog. Especially as you seem to be unwilling to learn anything. Your description about validation is pseudo-statistical BS.

The bottom line in this whole issue is and remains that the verification correlations in MBH are close to zero and the calibration ones much higher. I'm hopeful that if you really try and study hard, you'll someday understand what it means. Until then I wish you a happy life in uncompelled ignorance.

"Your description about validation is pseudo-statistical BS."

My description about validation statistic significance is based on material written in papers published in thoroughly reviewed scientific journals. For someone who has demonstrated their ignorance of the massive flaw in McIntyre's bristlecone bias argument to complain about BS shows staggering hypocrisy. McIntyre's validation argument is just another in a long line of wrong arguments that he puts up with no scientific motivation. (BTW, you could at least keep up with "professor" McIntyre's latest argument about validation statistics, but no doubt that's pseudo-statistical BS to you.) How can anyone who realizes the defects of McIntyre's bristlecone bias argument give his arguments much credibility?

By Chris O'Neill (not verified) on 15 Sep 2006 #permalink

So what is the answer about the appearance of different units, or different sizes of units, in Figure 4.1 of the Wegman report? This duffer, endlessly re-reading your handy pocket comment on what to watch out for in stats, notes that a variation in units is a typical trick. Poster caerbannog raised the issue and I see there is quite a difference between the two graphs shown. What is the purpose of such a difference, is there a good reason for it, should it have been noted by the author, and what impact does it have on the apparent size of the difference between what MBH98 said and what the bobsey twins claimed? See poster James, who then says there is nothing fishy and that the y axes are different. Well, yes, is that not the point?

If it is wrong to expect 10-year-old papers to stand up to beating on, does that mean that you concede the flaws in Mann's paper?

"Figure 4.1 of the Wegman report"

A couple of things about this figure 4.1 in the Wegman report and the closely related figure 9-2 of the NRC report.

Figure 9-2 of the NRC report, which was intended to show how "Mann's method" creates hockey sticks from random data, was generated by averaging a large number of random series that were selectively weighted. As you might guess, this selective weighting weighted each series towards a positive hockeystick, i.e. series that had a higher average in the calibration period than overall average were weighted +1 while series that had a lower average in the calibration period than overall average were weighted -1. McIntyre's program for doing this is given in Appendix B of the NRC report. With little surprise this program gives the hockeystick of figure 9-2 of NRC.

This weighting choice makes the assumption that the reconstruction procedure from climate proxies does the same thing, but this assumption is entirely false. The weighting that reconstructions apply depends on the correlation with the instrument record during the calibration period, and this depends on a lot more than just the average value of each proxy during the calibration period. In fact, the weighting of each proxy has absolutely nothing to do with the change in average value between non-calibration and calibration periods, because the calibration process completely ignores the average value during the non-calibration period. McIntyre's pseudo-reconstruction process leading to figure 9-2 of the NRC report is thus completely false.

The other thing about figure 9-2 of NRC is that (as you can see from looking at its generating program in Appendix B of NRC), the correlation coefficient that generates it (phi) is 0.9 while real tree-ring proxies are around 0.15. Guess what happens if you put a correlation coefficient of 0.15 in McIntyre's generating program? The hockeystick nearly completely disappears in the noise. Was McIntyre worried that an honest choice of correlation coefficient wouldn't have any impact?

By Chris O'Neill (not verified) on 30 Sep 2006 #permalink
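An illustrative sketch of the weighting scheme described above (my own reconstruction of the idea, not the Appendix B code; the series counts and lengths are arbitrary assumptions): weight each AR(1) series +1 or -1 according to whether its calibration-window mean exceeds its overall mean, then average, and compare phi = 0.9 with phi = 0.15.

```python
import numpy as np

rng = np.random.default_rng(3)
n_series, n_years, calib_len = 200, 600, 80

def ar1(n, phi, rng):
    """One AR(1) series of length n with lag-1 coefficient phi."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

def sign_weighted_average(phi):
    series = np.column_stack([ar1(n_years, phi, rng) for _ in range(n_series)])
    # +1 if a series runs high in the calibration window, -1 otherwise.
    high = series[-calib_len:].mean(axis=0) > series.mean(axis=0)
    weights = np.where(high, 1.0, -1.0)
    return (series * weights).mean(axis=1)

for phi in (0.9, 0.15):
    avg = sign_weighted_average(phi)
    blade = avg[-calib_len:].mean()   # elevation of the calibration window
    shaft = avg[:-calib_len].std()    # background wiggle before it
    print(f"phi={phi}: blade/shaft ratio ~ {blade / shaft:.1f}")
# With phi = 0.9 the ratio is large (a clear "hockey stick"); with phi = 0.15
# the blade is barely distinguishable from the background noise.
```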

A. 0.15 is an underestimate of the AR1 of the proxies in that step. 0.6ish is the correct value. Just do a simple, vanilla AR1 calculation and you will see (a plain lag-1 estimate is sketched after this list). I think you are going off of Ritson, who did not calculate AR1 properly.

B. SM claims to have used ARFIMA red noise, so just looking at AR1 is not what he did. The p-value will be different for ARMA, ARIMA, or ARFIMA than it would be if we just look at AR1 processes.

C. Still waiting for a response to my question about the beating. Why is it wrong to find flaws in a 10-year-old paper? Math is math. If there is a flaw in a 100-year-old paper, we can find it and analyze it.

D. Agreed that looking at the recon (versus the PC1) lowers the impact and changes things. However, the problem is that Mann did not adequately document his methods so it becomes a bit of an abstract question as to how to do "Mann's reconstruction" with slightly changed inputs. For instance, he never documented the off-centering. Never documented a few other steps.

E. In any case, even if the impact on the recon is less than what SM claims, Mann's method still mines to some extent, and his documentation did not disclose this (and he will still not even admit whether the off-centering was an inadvertent error or on purpose). In addition, the impact on PC1 remains regardless, and there are parts of the paper that talk about PC1 in particular, about the "dominant mode" and such. Given that his method of PC calculation was inappropriate and undisclosed (not even formally PCA), those parts of the paper talking about PC1 specifically are in error.
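Since point A invites readers to do the calculation themselves, here is a plain lag-1 autocorrelation estimator (a generic sketch run on a synthetic series, since the proxy data themselves are not reproduced in this thread):

```python
import numpy as np

def lag1_autocorr(x):
    """Vanilla lag-1 autocorrelation estimate of a 1-D series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

# Demo on a synthetic series; substitute a real proxy series to check the
# 0.15-versus-0.6 claim being argued above.
rng = np.random.default_rng(4)
demo = np.cumsum(rng.standard_normal(500)) * 0.05 + rng.standard_normal(500)
print("lag-1 autocorrelation:", round(lag1_autocorr(demo), 2))
```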

I haven't seen Mann's method applied to any sort of random data so I'm not going to go along with any hypothesis that says that Mann's method mines to some extent. What I have seen is a putative equivalent of Mann's method that does indeed mine to some extent and which just happened to be used with a parameter value that gives the desired result. It's not enough to just make up a putative Mann method, McIntyre has to carefully select the parameter to get the right result.

But regardless of the parameter, McIntyre's putative Mann method does not correspond with the calibration process at all. Calibration does not use any proxy data from before the calibration period to decide how to weight proxies in the reconstruction. However, McIntyre's putative Mann method weights proxies depending on how each proxy's average value differs between pre- and intra-calibration periods. I have never heard of a calibration process that uses data from outside the calibration period. But apparently McIntyre has invented a new method.

And yes there are flaws in Newton's laws of motion.

By Chris O'Neill (not verified) on 30 Sep 2006 #permalink

Why was Mann's transform not documented in the methods?

I don't know whether Mann's transform, whatever that is, was or was not documented in the methods. What I do know is that the person making the loudest complaints about Mann's documentation has made a fraudulent representation of one of Mann's methods.

By Chris O'Neill (not verified) on 01 Oct 2006 #permalink

Go read the paper and see if it was reported. I assert that it was not. Have a take.

"I assert that it was not."

I have no interest in your assertions.

By Chris O'Neill (not verified) on 01 Oct 2006 #permalink

You don't seem to have any interest in figuring it out yourself or in addressing it, if it makes your side look bad.

I have an interest in finding out for myself who is making a fraudulent argument. So far I have found McIntyre has made some of these. I haven't implemented MBH98 from their algorithm description, but I and others have run Wahl and Ammann's implementation of MBH and found no evidence of fraud. McIntyre just doesn't seem to get the point of Wahl and Ammann. I guess if someone is capable of making frauds the way he has, then he might find it difficult to have his concept of reality challenged.

By Chris O'Neill (not verified) on 02 Oct 2006 #permalink