It’s been over two years since John V used the surfacestations.org data to show that the warming trends were the same for “good” and “bad” weather stations. Since then they’ve collected data on more stations, but still have not published their own comparison. It would be cynical of me to suggest that the reason is that the data doesn’t show what they want, but now Menne et al have published a peer-reviewed paper analysing a more extensive set of stations, and surprise, surprise, the “bad” stations have a cooling bias. John Cook has the details.

Comments

  1. #1 sod
    January 27, 2010

    On the off chance Josh is full of shit, and the box works fine measuring ambient temp (the position I’m leaning toward), how would trees affect it, except through their innate, temperature-regulating transpiration?

    denialists don’t understand anomalies, and they also don’t understand trends.

    a growing tree will of course affect the temperature TREND.

    as will a house that is casting shadow on the box.

  2. #2 Bernard J.
    January 27, 2010

    All this speculation about the possibly forthcoming Pielke Snr and Watts paper (“PaW10”?) has me speculating about what must be percolating in the minds (such as they are) of said gentlemen – and I use that term generously…

    Given the focus that so much of the Denialati has bestowed upon Watts’ (and by association, Pielke Snr’s) endeavour over the last several years, and given the devastating damnation to eternal irrelevance that Menne’s paper has wrought upon continuance of surfacestations.org’s mission, I suspect that Pielke Snr and Watts have a lot of poopy in their nappies. Their paper will be picked apart to the last comma and full-stop, and if ol’ Anthony and Roger had performance anxiety before, they’re going to have it worse now.

    There must be scathing rebuttals already sitting in the out-trays, simply waiting for them to dare publication. They can hardly postpone their paper, however: that in itself would be an admission, by omission, of defeat.

    Onward to the gallows of professional irrelevance and humiliation gentlemen…

  3. #3 Bernard J.
    January 27, 2010

    [Sod](http://scienceblogs.com/deltoid/2010/01/so_thats_why_surfacestationsor.php#comment-2230672):

    denialists don’t understand anomalies, and they also don’t understand trends.

    a growing tree will of course affect the temperature TREND.

    as will a house that is casting shadow on the box.

    Is that a house of the growing sort?! ;-)

  4. #4 Harold Brooks
    January 27, 2010

    Re: 83

    Because we know a microsite bias must be there even though the data don’t show it? Sorry, you’ve got Occam tripping over his beard on that one.

    Ah, ha! Caught you! William of Ockham was clean-shaven! He must be part of the conspiracy, too.

  5. #5 Neven
    January 27, 2010

    Well, it seems a compendium of sorts has been released by Watts and D’Aleo, sponsored by the ever-trustworthy SPPI, home of the Monckton. It’s about the surface temperature records:

    As many readers know, there have been a number of interesting analysis posts on surface data that have been on various blogs in the past couple of months. But, they’ve been widely scattered. This document was created to pull that collective body of work together.

    Of course there will be those who say “but it is not peer reviewed” as some scientific papers are. But the sections in it have been reviewed by thousands before being combined into this new document. We welcome constructive feedback on this compendium.

    Oh and I should mention, the word “robust” only appears once, on page 89, and its use is somewhat in jest.

    The short read: The surface record is a mess.

    About the collaboration with Pielke Sr., Watts says the following:

    Well this isn’t a journal paper, just a compendium of important issues discovered about the surface temperature record, and yes a full journal “peer reviewed” article is being worked on.

    As found on WUWT (We Use Wishful Thinking).

  6. #6 Andrew Dodds
    January 27, 2010

    Bernard J. -

    Problem is, even if the paper is immediately shot fuller of holes than a moth-eaten string vest, it will still be touted as definitive by those who are already convinced. No argument is too feeble for the echo chamber.

    Neven -

    I like the new definition of peer review: ‘I published this on a blog and censored anyone who disagreed. Everyone else counts as a reviewer’. Think I’ll start on my ‘peer reviewed’ articles on String Theory and why it’s wrong because you can’t tie very small things, like grains of rice, in knots.

  7. #7 TrueSceptic
    January 27, 2010

    83 carrot,

    You want < ..>, not [..] :)

  8. #8 TrueSceptic
    January 27, 2010

    97 carrot,

    Read what I was responding to ;)

  9. #9 Deech56
    January 27, 2010

    Bernard J @102: Good analysis. IMHO, the paper will need to be submitted to a non-E&E journal (given Pielke Jr’s quote about the journal, how can Pielke Sr. go there?). Are there compliant editors out there? If rejected, I guess the complaining can begin, but I would insist on transparency – show the reviews. If the conclusions do not agree with the John V/NCDC/Menne analyses, there’s some ‘splainin’ to do. Certainly such a ballyhooed study needs to be completed and published.

    Interesting that Menne and crew have left an opening for extending the time period back further.

  10. #10 Paul H
    January 27, 2010

    Neven quotes the blurb on Watts and D’Aleo’s report:

    “Of course there will be those who say “but it is not peer reviewed” as some scientific papers are. But the sections in it have been reviewed by thousands before being combined into this new document. We welcome constructive feedback on this compendium.”

    Do you think that by “thousands” they are referring to blog comments? The thousands of fawning and barely legible comments of the Wattbots! They neglect to mention that they delete critical posts, edit out sections of posts presenting criticism, and simply ignore the points made.

  11. #11 carrot eater
    January 27, 2010

    Andrew: If it’s written by a couple of clueless guys, and it’s been reviewed by thousands of their clueless fans, I suppose it’s been reviewed by their peers.

    Deech56: You could extend the analysis further back, but it isn’t necessary for testing the Watts hypothesis; for that, you only need to examine the trends since 1975 or 1980 or so. Going further back, I’d expect the ‘good’ and ‘poor’ stations to converge even more, because you’d end up in the time period before any MMTS stations were installed. Many of the ‘poor’ stations probably would have been rated ‘good’ back then.

    107: TrueSceptic: Thanks for that. Although I rather like the look of the incorrect tags; they draw even more emphasis than italics would have.

  12. #12 dhogaza
    January 27, 2010

    Do you think “by thousands” they are referring to blog comments?

    Yes. This is the Founding Principle of Blog Science, after all.

  13. #13 luminous beauty
    January 27, 2010

    D’Aleo & Smith are making the claim that there are only 35 Canadian stations and only 1 above 65°N for 2009 in the NOAA/WMO data set. I’ve tried looking at Smith’s webpage, but between the hand-waving assertions and computer glibberish, I have no idea where he is getting this information. According to NOAA/NCDC/Deutscher Wetterdienst there are more than [80 Canadian stations](http://bit.ly/bxJWg6) (#s 71017 – 71990) reporting monthlies and 20 or more above 65°N for 2009.

    WUWT?

  14. #14 Eli Rabett
    January 27, 2010

    Giorgio Gilestro has shown that homogeneity adjustments to the GHCN data are symmetric about the mean (actually they are more highly peaked than a normal distribution, although the wings appear Gaussian), so yup, there are lots of stations with negative biases in the raw data, and as Eli said to Steve and Tony a long time ago, yeah, Tony, you’re right, we can handle that in software.

    Hat tip to Zeke Hausfather for the link

    And, oh yeah, trees transpire, the evaporating water cools the area underneath.
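
    Back on the adjustments point: the symmetry check is easy to reproduce if you have a table of per-station trends from the raw and adjusted records. A minimal sketch in Python, with made-up numbers standing in for the GHCN files (under Gilestro's result the difference histogram should come out roughly symmetric rather than skewed warm):

    ```python
    import numpy as np

    # Hypothetical inputs: per-station trends (deg C/decade) from the raw and the
    # homogeneity-adjusted versions of the same records. Real values would come
    # from fitting the GHCN raw and adjusted series; these are just made up.
    rng = np.random.default_rng(3)
    raw_trend = rng.normal(0.20, 0.10, size=5000)
    adj_trend = raw_trend + rng.normal(0.0, 0.03, size=5000)  # symmetric adjustments

    adjustment = adj_trend - raw_trend
    counts, edges = np.histogram(adjustment, bins=21)

    print(f"mean adjustment: {adjustment.mean():+.4f}, median: {np.median(adjustment):+.4f}")
    for c, lo, hi in zip(counts, edges[:-1], edges[1:]):
        print(f"{lo:+.3f} to {hi:+.3f}  {'#' * (int(c) // 50)}")
    ```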

  15. #15 MarkB
    January 27, 2010

    Pielke/Watts have a mutually beneficial relationship – one that ultimately hurts the once decent reputation of Pielke as a legit scientist.

    The benefit to Pielke is that he gets much greater public exposure and promotion of his work and blog than he might get working within the scientific community. Pielke also doesn’t like the fact that many of his colleagues don’t think too highly of his work, and/or have pointed out his errors. He’s certainly not ever one to admit mistakes, so he seeks venues that cheerlead his work and encourage, rather than discourage, the sort of behavior exposed here:

    http://www.realclimate.org/index.php/archives/2009/07/more-bubkes/

    See also my first comment in the RC link.

    Although it’s speculation, politics might play a role in his recent rhetoric. I see him opining about policy quite frequently, including the EPA decision to declare atmospheric greenhouse gases a danger to public health.

    Perhaps he also enjoys the relative fame contrarianism brings.

    As for Watts, although his message makes him popular among fanatics, his obvious weakness is a lack of scientific cred or appearance of legitimacy among more careful observers. Pielke promoting him (and his “excellent website”) gives the false impression that Watts is credible, and is doing real objective scientific work.

    For Watts, the relationship has no real practical drawbacks. A potential drawback would be if Pielke behaved like the legitimate scientist his credentials imply and stopped cheerleading Watts, instead taking a critical look at his work and inane rhetoric, but that seems unlikely to happen. For Pielke, the drawback is being associated with an extreme ideologue and hack, which could alienate him from his colleagues. Perhaps he’s felt that’s already happened. Much of his work is considered rather shoddy.

  16. #16 carrot eater
    January 27, 2010

    I think perhaps Pielke and maybe Spencer are just jumping the shark now. They maybe just don’t care anymore, so they’ll throw in their lot with people of low credibility and ability.

    Eli: I haven’t done the analysis myself yet, but I bet that histogram would have a slight positive bias if you only used US stations, due to the systematic change in many stations switching to MMTS and their TOB at the same time. And of course, we all know that showing US numbers and pretending they are global numbers, or that they apply globally, is a favorite of the denier camp. That said, this particular thread actually is about the US.

  17. #17 Eamon
    January 27, 2010

    MarkB@115

    As for Watts, although his message makes him popular among fanatics, his obvious weakness is a lack of scientific cred or appearance of legitimacy among more careful observers.

    And the fact that he posts any old thing that supports his view, however logically absurd it is. There was a post late last year claiming that because nuclear subs had surfaced ‘at the North Pole’ in the 60s the ice coverage must have been similar to today’s.

  18. #18 dhogaza
    January 27, 2010

    There was a post late last year claiming that because nuclear subs had surfaced ‘at the North Pole’ in the 60s the ice coverage must have been similar to today’s.

    Someone with too much time on their hands should really do a site similar to “fundies say the darndest things”, but for WUWT: original posts and, of course, the commentary.

  19. #19 jakerman
    January 27, 2010

    Papertiger asks:
    >*[...] Josh contends that thermometers that are housed in aerated boxes, painted with reflective white latex, specifically to give them an absolute perpetual shade, are somehow cooled by surrounding trees. Now how is that supposed to work?*

    But papertiger partially answers this question with:

    >*how would trees affect it, except through their innate, temperature-regulating transpiration?*

    Others add to papertiger’s partial explanation:

    >*[...] shading a thermometer permanently will help reduce variations, but if it’s shaded by trees they will still reduce the temperature further, as they will reduce the temperatures immediately outside the box, too.*

    Furthermore, as a tree gets bigger, the volume of atmosphere under its influence increases, extending the temperature gradient and producing a cooling bias over the period in which the tree grows.

  20. #20 John McManus
    January 27, 2010

    I gave in and looked at surfacestations.org. What a hoot!

    I assume that all the comments are meant to criticise because Watts is Watts. I saw bad siting because there was a light bulb in an enclosure: it was burned out. Another enclosure was censured for having a light bulb; as in the first case, it was switched off. My conclusion is that people who have not eaten enough carrots (sorry, Eli) turned the light on to read the thermometers. Maybe they turned it off when they left. By the way, one of these enclosures didn’t seem to contain a thermometer.

    Another photo was criticised for having an airplane nearby! The jet in question was a vintage job and would need a brave pilot indeed. Elsewhere firemen had a barbeque in the frame. It was on wheels, leading one to wonder where it was when it was hot.

    It seems that an elephant , labouring mightily and for some time, has produced a mouse.

    John McManus

  21. #21 dhogaza
    January 27, 2010

    Oh, John, you poor soul :)

  22. #23 Boris
    January 28, 2010

    OMG, the NCDC invited Anthony to participate, but did not send him a letter of invitation. Are you kidding me? These jackbooted scientists and their MFing reluctance TO SEND A FRICKIN’ LETTER? YOU MOTHER****ERS! YOU WHINING HYPOCRITICAL TOADIES! THE DENIAL OF A ****SUCKING LETTER OF INVITATION MAKES ME SO ****ING MAD I COULD–aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaarrrrgggh! YOU BASTARDS? NO LETTER!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! WRO GH0000000000000010RHWVIO; RRRRRRRRRRRRRRMCAWCGOMIWWJIO; HHHVWRO CWRRRRRRRRRRRRRRRHGOQWWWWWWWWWWWWWWWWWWWWWWWVJKNM,2BN489 BVP

  23. #24 Bernard J.
    January 28, 2010

    Lars Karlsson points to [Watts' response to Menne 2010](http://scienceblogs.com/deltoid/2010/01/so_thats_why_surfacestationsor.php#comment-2232740).

    Oh dear.

    Oh dear, oh dear, oh dear oh dear…

    Watts sounds like someone telling his teenage children “Yes, of course there is a Santa Claus. And he lives with the Easter Bunny!”…

    From the 40-watt bulb himself:

    Texas State Climatologist John Nielsen-Gammon suggested way back at 33% of the network surveyed that we had a statistically large enough sample to produce an analysis. I begged to differ then, at 43%, and yes even at 70% when I wrote my booklet “Is the US Surface Temperature Record Reliable?”, which contained no temperature analysis, only a census of stations by rating.

    Was it 33% or 43%, at first? Either way, Watts should have taken note of a professional. He might have thought to beg to differ, but he should have acquainted himself with the concept of sampling power.

    His excuse provides a delightful example to undergraduate statistics students about how to determine sample size, and about oversampling. Indeed, one day (if it’s not happening already) stats educators will take Watts’ growing collection of surface stations and use them exactly for this purpose – to set the task of calculating what sample size was required to answer the underlying basic question of surfacestations.org.

    The problem is known as the “low hanging fruit problem”. You see this project was done on an ad hoc basis, with no specific roadmap on which stations to acquire. This was necessitated by the social networking (blogging) Dr. Pielke and I employed early in the project to get volunteers. What we ended up getting was a lumpy and poorly spatially distributed dataset because early volunteers would get the stations closest to them, often near or within cities.

    The urban stations were well represented in the early dataset, but the rural ones, where we believed the best siting existed, were poorly represented. So naturally, any sort of study early on even with a “significant sample size” would be biased towards urban stations.

    And if he were even a half-competent analyser he would have been feeding new stations into a spreadsheet as they were surveyed, and have had the results of any relevant statistical calculation instantly returned for any and all to see. That’s how I work, and I am sure that it is the same for any scientist who, for whatever reason, frequently resamples.

    And if he and his volunteers were selectively biased in favour of sampling the stations most likely to give a result that indicated a significant upward distortion of a continental warming trend, and no such result was actually being evidenced in the growing database, then he would have had an effective answer months or years previously.

    Actually, he did have such an answer. He just didn’t want to know that he already had it.

    We also had a distribution problem within CONUS, with much of the great plains and upper midwest not being well represented.

    This is why I’ve been continuing to collect what some might consider an unusually large sample size, now at 87%.

    As [carrot eater](http://scienceblogs.com/deltoid/2010/01/so_thats_why_surfacestationsor.php#comment-2227012) pointed out, Watts is patently clueless about degrees of freedom, as well as with other aspects of sampling procedure and sample size. Does he really think that something profound is going to shift in the data at this late hour?!

    Watts’ towering Dunning-Kruger affliction is blinding him to the obvious reasons why Menne and other real scientists at the NCDC would be leery of “collaborating” with him.

    He might be assiduously ignoring the taps on his shoulder, and that shadow from the looming scythe, but the fact remains that the rumours of the demise of surfacestations’ fundamental raison d’être are not premature…

    Off with his head!

  24. #25 carrot eater
    January 28, 2010

    blather, blather.

    And then this gem:

    “yes even at 70% when I wrote my booklet “Is the US Surface Temperature Record Reliable?”, which contained no temperature analysis, only a census of stations by rating.”

    Did you ever stop to consider that you couldn’t draw any conclusions without doing any analysis? And you expect professionals to take you seriously? And yet you are still plastering your unfounded conclusions all over the internet, without having done the analysis?

    Re degrees of freedom: Even without doing a formal statistical consideration, he could have done as Bernard J says – set up code to automatically re-analyse as each new station came in. When you see that the results have converged and adding new stations doesn’t much change the result, you have a good idea that you’ve gotten a decent sample.
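
    A minimal sketch of that re-analyse-as-each-station-arrives check (made-up station trends and an arbitrary convergence threshold, nothing to do with the actual surfacestations data):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical inputs: one warming-trend estimate per surveyed station
    # (deg C/decade). In a real analysis each value would come from fitting
    # that station's anomaly series; here they are just made-up numbers.
    station_trends = rng.normal(loc=0.25, scale=0.15, size=1000)

    # Running mean trend as stations are added one at a time.
    running_mean = np.cumsum(station_trends) / np.arange(1, station_trends.size + 1)

    # Call the estimate "converged" once it moves by less than `tol`
    # over the last `window` stations added.
    window, tol = 100, 0.005
    for n in range(window, running_mean.size):
        recent = running_mean[n - window : n + 1]
        if recent.max() - recent.min() < tol:
            print(f"Stable at ~{running_mean[n]:.3f} C/decade after {n + 1} stations")
            break
    else:
        print("Estimate still drifting; keep surveying")
    ```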

  25. #26 Bernard J.
    January 28, 2010

    [Carrot eater](http://scienceblogs.com/deltoid/2010/01/so_thats_why_surfacestationsor.php#comment-2232855).

    On our point about automatic updates of analyses, it is one of those little things from which scientists garner a simple and inordinate amount of pleasure. I know that I sit with a little forward hunch as my graphs automatically update, or, where applicable, as my macros run.

    The fact that Watts has not ever mentioned, to my knowledge, either the peculiar satisfaction that such a process provides – or the results thereof – indicates to me that either he has not seriously attempted a proper analysis (unbelievable!), or he has, and hasn’t liked what he has seen.

    Or that he simply has no idea how to process the data in the first place, in which case one wonders how he is able to insist that a greater sample size is required.

    No matter which way it is sliced and diced, Watts does not come out looking competent.

    But that’s hardly a surprise, huh?!

  26. #27 carrot eater
    January 28, 2010

    Bernard J:

    Watts barely knows what an anomaly is; I doubt he’s gridding up anything. That part is for Pielke and his students.

    “When you have such a small percentage of well sited stations, it is obviously important to get a large sample size, which is exactly what I’ve done. Preliminary temperature analysis done by the Pielke group of the data at 87% surveyed looks quite a bit different now than when at 43%.”

    We’ll assess that when you publish it, Mr. Watts. Until you publish an analysis, you might refrain from producing think-tank published pamphlets that draw conclusions about the surface record.

  27. #28 Connor
    January 28, 2010

    Oh, look, it’s all Menne et al.’s fault for being mean to Anthony, awwwww

    “In the summer, Dr. Menne had been inviting me to co-author with him, and our team reciprocated with an offer to join us also, and we had an agreement in principle for participation, but I asked for a formal letter of invitation, and they refused, which seems very odd to me. The only thing they would provide was a receipt for my new data (at 80%) and an offer to “look into” archiving my station photographs with their existing database. They made it pretty clear that I’d have no significant role other than that of data provider. We also invited Dr. Menne to participate in our paper, but he declined.”

    http://wattsupwiththat.com/2010/01/27/rumours-of-my-death-have-been-greatly-exaggerated/

    Apparently Watts is beyond reproach but NOAA are guilty of ‘exaggeration’ – hilarity ensues! :D

  28. #29 Derecho64
    January 28, 2010

    Watts’ “NCDC was mean” and “43% isn’t enough” memes have been taken at face value by Revkin over at Dot Earth.

    Sheesh, Watts sure is an ignorant crybaby.

  29. #30 dhogaza
    January 28, 2010

    What does that make Revkin, then?

  30. #31 carrot eater
    January 28, 2010

    You guys are unduly harsh on Revkin. Looks to me that he just pasted whatever Watts wrote, so that the reader could judge. I don’t see Revkin taking a position on any of it, beyond noting that some other sceptic thought the Menne paper looked pretty good for now, and that it’s up to Watts and Pielke to finally publish something.

  31. #32 GFW
    January 28, 2010

    Someone upthread mentioned E.M. Smith, aka “Chiefio”. There’s a hilarious post of his here: http://chiefio.wordpress.com/2010/01/27/temperatures-now-compared-to-maintained-ghcn/

    I posted the following comment, which is still in “moderation”:

    That map is pretty funny.

    Let’s see. You take a data set that is strongly biased towards land in general and northern hemisphere land in particular (and there’s a lot more northern hemisphere land to start with), and then take December 2009, which is
    a) Northern hemisphere winter, and
    b) A single month that just happened to have an Arctic Oscillation that pushed cold air into two of the regions well covered by the data set while warming the high arctic (mostly not in the data set).
    You then compare this unusually cold NH winter month to a 15-year baseline – which completely removes seasonal effects from the baseline.

    Brilliantly composed cherry picking. Well done sir!

    Another way to put it is that he’s discovered a “trick” to “create the decline”.

  32. #33 GFW
    January 28, 2010

    Er, I guess I lost formatting inside the blockquote. Oh well.

    I have to say, Revkin bent over backwards to give Watts’ point of view – almost twice as much of Watts’ long note as the entire rest of the story.

  33. #34 Boris
    January 29, 2010

    More laughing at the chiefio:

    Further, if you select a low point of one of those cycles for your start of baseline, your ‘anomalies’ will, again by definition, show a lot of “warming” that is not really warming. It is just the placement of your baseline. So is the placement of the baseline ‘special’? From personal experience and second hand from the reports of my parents, the 1930’s and early ’40s were warm. The 1960s to 1970s were particularly cold. (It snowed in my home town for the first time in dozens of years). So what I see is a simple “cherry pick” of a cold point to place the baseline, straddling that cold time.

    This guy actually thinks that the choice of baseline matters. Does he think that no warming has occurred if we choose the last 20 years as a baseline and temperature anomalies rise from -0.7 to zero? Is he really this dumb?

  34. #35 John
    January 29, 2010

    “Is he really this dumb?”

    Yes.

  35. #36 Lars Karlsson
    January 29, 2010

    Boris,
    If we use the last 2 decades as the base period, then we don’t get much warming, we merely get plenty of uncolding. It isn’t warming until it’s above 0. Yessir!

  36. #37 carrot eater
    January 29, 2010

    Boris:

    When somebody goes around averaging absolute temperatures, you can be pretty sure that person doesn’t understand the point of anomalies.

    Try this on him: have him go back to the GISS page, and have him read this sentence: “Base period: Time interval to which anomalies are relative. This input is not used for trend maps.”

    Maybe if he thinks about that really really hard, he might figure out his idiocy. One of his many idiocies, at least.

    The deniers sure know how to pick their leaders.

    GFW: Be careful there. I could be wrong, but I would think that the baseline in his plot is of Decembers only. You generally compare apples to apples, and I would expect that the GISS page does the same. So if you plot Dec 2009 against a baseline of 1991-2006, the baseline should only be Decembers of 1991-2006.
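
    To make the base-period point concrete: anomalies computed against different baselines differ only by a constant, so the least-squares trend cannot change. A minimal sketch with made-up December temperatures (not GISS data):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    years = np.arange(1950, 2010)

    # Made-up December temperatures: a small warming trend plus noise.
    decembers = 0.02 * (years - years[0]) + rng.normal(0.0, 0.3, size=years.size)

    def anomalies(series, yrs, start, end):
        """Subtract the mean over the chosen base period (a constant offset)."""
        return series - series[(yrs >= start) & (yrs <= end)].mean()

    for base in [(1951, 1980), (1991, 2006)]:
        slope = np.polyfit(years, anomalies(decembers, years, *base), 1)[0]
        print(f"base period {base}: trend = {slope:.4f} C/yr")

    # Both printed slopes are identical: changing the base period only moves
    # the zero line up or down; it cannot create or hide a trend.
    ```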

  37. #38 GFW
    January 29, 2010

    Carrot eater: Y’know, I actually thought that was possible, but assumed he’d have said so were it the case.

  38. #39 GFW
    January 29, 2010

    The strange non-linear color scale makes it hard to tell, but you’re probably right. That would make some of the features of the map more likely. Even so, he still cherry-picked one month with a special feature (the AO) disproportionately affecting the data set. That it was also a peak or near-peak of El Niño is totally ignored because of basically zero ocean coverage.

  39. #40 MapleLeaf
    January 29, 2010

    This is pasted from another thread here on Tim’s page. I’d appreciate your thoughts on Smith’s (D’Aleo’s and Anthony’s programmer buddy who likes to be called ‘Chiefio’) wild allegations made over at WUWT.

    You can read his rebuttal of some of the accusations at:
    http://www.skepticalscience.com/news.php?p=3&t=154&&n=123
    Go to comment 150 by MarkJ.

    I’m not familiar enough with the GISStemp code to know if he has some valid points or if these are just the ramblings of a madman. I’d really appreciate someone in the know speaking to his allegation that:

    Chiefio rants:

    “The PAPERS that support the Reference Station Method and the anomaly process may well have done “selfing” but that is NOT what is done in GIStemp. This is a common delusion among warmers and one they cherish dearly.
    The reality is that a thermometer anomaly IS calculated against a “basket of others”. It is also a reality that this is done long after all the ‘in-fill’, homogenizing and UHI calculations. Basically, the “anomaly” can not protect you from all the broken bits done before it is calculated.

    Until they get past the fantasy of what they believe is being done and look at what the code actually does do, they will get nowhere.

    This “Belief in the Anomaly” is just that. A “Faith Based Belief” in how they think the world works. It is not based on an inspection of what the GIStemp code actually does. ”

    Thoughts? He seems to be claiming that GIStemp calculates the anomalies using the wrong data.

  40. #41 carrot eater
    January 29, 2010

    GFW: He made the graph at the GISS webpage, so you can cut him out of the loop in understanding what he did.

    http://data.giss.nasa.gov/gistemp/maps/

    Generally, monthly anomalies are just that: the baseline period is specific to that month. Somebody can correct me if I have that wrong.

    So his plot indeed does show that for parts of the NH, it was an unusually cold December. That’s fine. That he thinks that means the entire earth is cooling is just stupidity.

    Seriously, if anybody is interacting with him, ask him to draw the trend map over his period of interest. See how long it takes him to figure it out.

  41. #42 dhogaza
    January 29, 2010

    The PAPERS that support the Reference Station Method and the anomaly process may well have done “selfing” but that is NOT what is done in GIStemp

    The Clear Climate Code people – who *are* highly skilled software engineering experts – have been rewriting GISTEMP in Python, with the goal of creating a better-structured, more readable version of the software (the old FORTRAN version of NASA GISTEMP suffers, among other things, from being written to break up the processing into several discrete steps, most likely due to limited computer capacity back in the 1980s).

    As part of their effort to understand the methodology and the original code, they’ve been studying the reference papers as well.

    They’re of the opinion that GISTEMP probably does correctly implement the methodology documented in the various papers.

    Shorter version: EM Smith’s a documented idiot; based on that alone I’d doubt his claim. Given the work done by the Clear Climate Code folk and their (somewhat tentative, but informed) opinion, I’d say it’s clear his claim is absolute garbage.

  42. #43 MapleLeaf
    January 29, 2010

    Carrot Eater @ 141:

    Just trotted over to Mr. Smith’s blog. Wow, blatant deception on his part. What you and others have said is correct, so I won’t rehash that.

    Unfrickin’ believable, and he has the audacity to accuse GISTemp of using smoke and mirrors and not having a clue. Yet another Dunning-Kruger victim in the denialist camp.

  43. #44 MapleLeaf
    January 29, 2010

    Dhogaza @142, Thanks.

    I hope that you are correct. It would be nice if Gavin Schmidt could put “Chiefio” out of his misery on this one, b/c it is a pretty serious accusation. That said, I’m not sure the impacts would be significant even if there were an error. Maybe I should ask the same question over at RC and CCC…..

    Anyhow, it is probably a load of BS, again, on Smith’s part.

    This all reminds me, I have to learn Python…. too much to do, too little time.

  44. #45 carrot eater
    January 29, 2010

    MapleLeaf: He’s a totally incompetent madman; we’ve seen that plenty already.

    Incoherent as well, so here is my best guess at what he’s talking about:

    I’m guessing that by “selfing” he means the following method of calculating the anomaly: Take an individual station, find its mean over 1950-1980 or whatever, and subtract that mean from the actual temperature record. You can then go on to combine the different stations into spatial averages.

    He is correct that GISS does not do this (at least, last I checked). But this is clearly described in [Hansen/Lebedeff](http://pubs.giss.nasa.gov/abstracts/1987/Hansen_Lebedeff.html) (1987), on pages 13349-13350.

    The problem with combining the stations as above is that it does not allow the use of stations with data gaps in the reference period (1950-1980, or whatever). So GISS instead uses the period of overlap between each individual station and the neighboring stations as the reference period for the combining. This period of overlap does not have to be continuous, but it needs to include at least 20 years.

    So that’s why he thinks the anomalies are “calculated against a basket of others”.

    If this is confusing, just look at equations 1-3 in the paper. It’s a perfectly fine way of combining anomalies. The result is equivalent to what I’m guessing he means as “selfing”, except that it allows more stations to be used in the combination.
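
    In toy form, the overlap-based combining looks something like the sketch below (made-up numbers, and a deliberate simplification of equations 1-3 rather than what GISTEMP actually runs; the real code works through many stations and weights by distance from the grid-box centre):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    years = np.arange(1940, 2011)

    # Two made-up station records sharing the same underlying signal but with
    # different absolute levels; station B only reports from 1971 onward.
    signal = 0.01 * (years - years[0]) + rng.normal(0.0, 0.2, size=years.size)
    station_a = 12.0 + signal
    station_b = np.where(years >= 1971,
                         9.5 + signal + rng.normal(0.0, 0.2, size=years.size),
                         np.nan)

    # Offset B so that its mean over the years where both report matches A's,
    # then average the two. No fixed 1951-1980 window is needed for this step.
    overlap = ~np.isnan(station_a) & ~np.isnan(station_b)
    shifted_b = station_b - (station_b[overlap].mean() - station_a[overlap].mean())
    combined = np.nanmean(np.vstack([station_a, shifted_b]), axis=0)

    print(combined[:3], combined[-3:])
    ```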

  45. #46 MapleLeaf
    January 29, 2010

    Carrot eater @ 145. Thanks Carrot. I think that you interpreted his incoherent rambling correctly.

    OK, so he is just out to lunch again it seems. Surprise, surprise.

  46. #47 carrot eater
    January 29, 2010

    Well, it sounds like he saw something in the code, didn’t understand it, didn’t go double-check the papers, and went straight to making claims on the internet.

    Everybody should be sceptical as they learn new information, but this isn’t exactly the way to go about it.

    A student may be having some trouble learning how to, say, draw free-body diagrams in physics class. But physicists should not be expected to stop their research and wait for him to figure it out. Nor need the student go online and claim Newton was a fraud. Nor should the student’s learning process become a public spectacle, and be mistaken by the public as being something relevant.

    Yet this is what we get with some of the deniers.

  47. #48 GFW
    January 29, 2010

    Carrot eater. Yup, you’re right.
    The GISS site is nice and easy to use. It easily demonstrates that changing the base period has absolutely zero effect on a trend. Using the same land-only 250km approach Chiefio did, the 2000-2009 December trend is 0.33C. Heh, sticking to land in northern hemisphere winter is actually a cherry-pick in the warming direction, giving almost twice the global-annual trend.

    I’d go over to his place and retract the first half of my critique (seasonal variation), while using the above to bolster the second half (blatant cherry pick of very strong negative AO). However, he never released my comment from moderation, so why bother.

  48. #49 carrot eater
    January 29, 2010

    GFW: I think it’s best to go back and retract/revise anyway. It’s best to immediately retract if you are being snarky and get it wrong. Plus, you’re correct overall except for that one bit, so that’s good.

  49. #50 MapleLeaf
    January 29, 2010

    Aah, yes ‘retracting’ and issuing corrections. Something the denialists simply refuse to do, period.

  50. #51 GFW
    January 29, 2010

    Ok, retraction/clarification submitted. Not that either the original or the followup are likely to see the light of day over there.

  51. #52 Bernard J.
    January 29, 2010

    [GFW](http://scienceblogs.com/deltoid/2010/01/so_thats_why_surfacestationsor.php#comment-2236867).

    You could always just post your two pieces here, so that ‘Chiefio’ has to live with the evidence of his censorship. It probably wouldn’t faze him too much, but it’d still be another thorn in his side.

  52. #53 Eamon
    January 30, 2010

    dhogaza@142

    The Clear Climate Code people – who are highly skilled software engineering experts – have been rewriting GISTEMP in Python, with the goal of creating a better-structured, more readable version of the software (the old FORTRAN version of NASA GISTEMP suffers, among other things, from being written to break up the processing into several discrete steps, most likely due to limited computer capacity back in the 1980s).

    And surprise, surprise – E.M. Smith more-or-less banned Nick Barnes, one of the Clear Climate Code guys, for having the temerity to try to [explain](http://chiefio.wordpress.com/2009/03/05/mr-mcguire-would-not-approve/#comment-2393) things on his blog!

    More on the basic matter at Nick Barnes’ [blog](http://nickbarnes.livejournal.com/114757.html) – which distills to “Chiefio don’t want to understand stats.”

  53. #54 doskonaleszare
    January 30, 2010

    Until they get past the fantasy of what they believe is being done and look at what the code actually does do, they will get nowhere.
    This “Belief in the Anomaly” is just that. A “Faith Based Belief” in how they think the world works. It is not based on an inspection of what the GIStemp code actually does.

    This is hilarious.

    In Chiefio’s audit of STEP3 he couldn’t figure out what the to.SBBXgrid.f file is for, so he skipped that part (his summary: “It looks like it mostly just smears data over time and space to fill in missing chunks”). All Mr Programming Expert could do was whine about the naming and file path conventions used in GISTEMP, without a glimmer of understanding of what this code really does.

  54. #55 carrot eater
    January 30, 2010

    It’s one thing for him not to realise he’s hopelessly out of his element (though again, looking at his own code with goto statements, one wonders what his strength is).

    But it’s quite ridiculous that all his friends don’t realise it either.

  55. #56 dhogaza
    January 30, 2010

    And surprise, surprise – E.M. Smith more-or-less banned Nick Barnes, one of the Clear Climate Code guys, for having the temerity of trying to explain things on his blog!

    Oh, that’s great, and thanks for posting it, and the link to Nick’s blog. I tried reading EM Smith’s blog once, but decided it’s dangerous to my sanity and haven’t been back, so I missed the bit with Nick over there.

  56. #57 Derecho64
    January 30, 2010

    If EM Smith is truly a software engineer, then McIntyre and Watts are genuine climate scientists.

  57. #58 carrot eater
    February 1, 2010

    EM Smith is embarrassing himself at WUWT now. Take a look and laugh. I’m curious how many replies I’ll need to get him off this one.

  58. #59 carrot eater
    February 1, 2010

    Can anybody recommend the most convenient way to archive pages at WUWT? Just curious.

  59. #60 P. Lewis
    February 2, 2010

    Can anybody recommend the most convenient way to archive pages at WUWT?

    IIRC, the best way is to put it in the trash can!

    The stuff there has been recycled so many times that there’s nothing of substance left (not that there was much in the first place — unless you value comedy material).

    However, you could try using Zotero, with which you can take snapshots of web pages to save locally.

  60. #62 bluegrue
    February 2, 2010

    If you are looking for a value-added local copy, give the Firefox add-on [Mozilla Archive Format](https://addons.mozilla.org/de/firefox/addon/212) a try. It saves the page into a MAFF file, which is a zip archive of the webpage plus some metadata such as the time of saving and the original URL. So instead of an HTML file plus a subdirectory with lots of files, you just have a single file. MAFF can contain single pages or collections of pages. The add-on also supports the use of MHTML files, which store single pages using MIME encapsulation (the same as is used for e-mails).
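
    For anyone who wants the same single-file idea without the add-on, the page-plus-metadata-in-one-zip approach is only a few lines of scripting. A minimal sketch (hypothetical file names; not the exact MAFF/RDF layout the add-on writes):

    ```python
    import zipfile
    from datetime import datetime, timezone

    # Hypothetical inputs: the saved page source and the URL it came from.
    page_html = "<html><body><p>archived WUWT post</p></body></html>"
    original_url = "http://wattsupwiththat.com/example-post/"

    # One zip holding the page plus a small metadata file, in the spirit of a
    # MAFF archive (the real format keeps its metadata in an RDF file).
    with zipfile.ZipFile("archived_page.zip", "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("page/index.html", page_html)
        zf.writestr(
            "page/metadata.txt",
            f"url: {original_url}\nsaved: {datetime.now(timezone.utc).isoformat()}\n",
        )
    ```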

  61. #63 P. Lewis
    February 2, 2010

    Hey bluegrue, that MAF link is good. Think I’ll stick with Zotero, but I’m definitely going to investigate/use MAF.

  62. #64 carrot eater
    February 2, 2010

    Thanks, all.

    In any case, EM Smith found a way to trip up the map plotter on the GISS website so that it assigns a color for grid boxes that don’t have data. Which is a useful observation, and something he could point out to the GISS people. But he goes on to speculate that this is the cause of Arctic warming, which is just insane.

  63. #65 Anthony
    February 18, 2011

    Just noticed this post and visited http://surfacestations.org. Still no published data on the “warming bias”. Is the Heartland Institute hiding the decline?

  64. #66 Bernard J.
    February 18, 2011

    [Serendipitous post](http://scienceblogs.com/deltoid/2010/01/so_thats_why_surfacestationsor.php#comment-3325413), Anthony.

    I was just looking for this thread to ask the very same question, given Watts’ promotion of [Joanne Codling's nonsensical attack on the Australian BoM and the CSIRO](http://scienceblogs.com/deltoid/2011/02/the_war_on_the_bureau_of_meteo.php).

    I am itching to know exactly when Watts anticipates his paper’s release, and what power analysis he used to decide how many stations he requires for a significant result.

    Release the data, Watts! We demand to see your correspondence!

  65. #67 Trey
    January 30, 2013

    Please, whatever you do, DO NOT look at the actual site locations by way of photograph. It might be too much empirical data to deal with for the warmist / hockey stick view.