This week in Global Warming Denial

Four of the six government members on a committee examining geosequestration put out a dissent denying the existence of anthropogenic global warming. I was going to write a post on what they got wrong, but I realised that since they rely on Bob Carter so much, I'd already done it. John Quiggin comments on the affair:

But now that the disinformation machine has been created, it's proving impossible to shut it down. Too many commentators have locked themselves into entrenched positions, from which no dignified retreat is possible. The problem has been reinforced by developments in the media, where rightwing talk radio and blogs have formed a closed circle of tribal loyalty, in which hostility to science is taken for granted. Spurious talking points are picked up and amplified by these groups, eventually finding their way into the opinion columns of writers like Andrew Bolt and Miranda Devine, and then into the opinions of conservatives in general.

See also Guy Pearse and Trevor Cook.

The disinformation machine is still desperately trying to hype the correction to NASA temperatures. Yesterday both Sydney newspapers had articles (Tim Blair in the Terror, and Michael Duffy in the SMH). They both seem to be reading from the same script, because they both confused US temperatures for global temperatures. In the US, 1934 and 1998 are in a virtual tie, and the correction didn't change this. Globally, 1998 was much warmer than 1934, and the correction didn't change this. Penguin Unearthed corrects Duffy, but it also works for Blair. James Hansen comments on the anti-NASA campaign:

What we have here is a case of dogged contrarians who present results in ways intended to deceive the public into believing that the changes have greater significance than reality. They aim to make a mountain out of a mole hill. I believe that these people are not stupid, instead they seek to create a brouhaha and muddy the waters in the climate change story. They seem to know exactly what they are doing and believe they can get away with it, because the public does not have the time, inclination, and training to discern what is a significant change with regard to the global warming issue.

Meanwhile Joel Schwartz at Planet Gore is excited by a Stephen Schwartz paper that he hopes will overturn the consensus in one fell swoop. It takes James Annan five minutes to find a fatal flaw:

Perhaps a better way of putting that would be to suggest applying the analysis to the output of computer models in order to test if the technique is capable of determining their (known) physical properties. Indeed, given the screwy results that Schwartz obtained, I would have thought this should be the first step, prior to his bothering to write it up into a paper. I have done this, by using his approach to estimate the "time scale" of a handful of GCMs based on their 20th century temperature time series. This took all of 5 minutes, and demonstrates unequivocally that the "time scale" exhibited through this analysis (which also comes out at about 5 years for the models I tested) does not represent the (known) multidecadal time scale of their response to a long-term forcing. In short, this method of analysis grossly underestimates the time scale of response of climate models to a long-term forcing change, so there is little reason to expect it to be valid when applied to the real system. ...

Changing Schwartz' 5y time scale into a more representative 15y would put his results slap bang in the middle of the IPCC range, and confirm the well-known fact that the 20th century warming does not by itself provide a very tight constraint on climate sensitivity. It's surprising that Schwartz didn't check his results with anyone working in the field, and disappointing that the editor in charge at JGR apparently couldn't find any competent referees to look at it.
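(For readers who want to see what Annan's five-minute check involves: Schwartz's time scale comes, roughly, from fitting an exponential decay to the lagged autocorrelation of a detrended temperature series. Here is a minimal sketch of that style of estimate in Python, run on synthetic data rather than Annan's actual GCM output; the time constant, noise level and trend below are invented for illustration.)

```python
# Sketch of a lag-autocorrelation "time scale" estimate of the kind
# Schwartz used, applied to a synthetic AR(1) series instead of real
# GCM output. All numbers are illustrative assumptions.
import numpy as np

def estimate_timescale(series, max_lag=8):
    """Fit ln r(lag) = -lag/tau to the autocorrelation of a detrended series."""
    t = np.arange(len(series))
    resid = series - np.polyval(np.polyfit(t, series, 1), t)  # remove linear trend
    lags = np.arange(1, max_lag + 1)
    r = np.array([np.corrcoef(resid[:-k], resid[k:])[0, 1] for k in lags])
    keep = r > 0                        # log only defined for positive r
    slope = np.polyfit(lags[keep], np.log(r[keep]), 1)[0]
    return -1.0 / slope                 # tau, in years

# A 100-year annual series with a built-in 15-year memory plus a trend.
rng = np.random.default_rng(0)
tau_true = 15.0
phi = np.exp(-1.0 / tau_true)
x = np.zeros(100)
for i in range(1, 100):
    x[i] = phi * x[i - 1] + rng.normal(0.0, 0.1)
series = x + 0.007 * np.arange(100)     # roughly 0.7C/century of warming

print(f"fitted time scale: {estimate_timescale(series):.1f} y (built in: {tau_true} y)")
```

Annan's test was simply to run this kind of fit on model output whose multidecadal response time is known in advance; if the fitted tau comes out at about 5 years there too, it is the method, not the climate, that is being measured.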

And the most clueless article award goes to Robert Bryce:

Here's my review [of An Inconvenient Truth]: it is an overly simplistic look at a complex problem and it concludes with one of the single stupidest statements ever put on film. Yes, that's harsh criticism. But it's the right one, given that just before the final credits, in a segment addressing what individuals can do about global warming, the following line appears onscreen: "In fact, you can even reduce your carbon emissions to zero."

This statement is so blatantly absurd that I am still stunned, weeks after watching Gore's movie, that none of the dozens of smart people involved in the production of the movie - including, particularly, Gore himself - paused to wonder aloud something to the effect of, "Hey, what about breathing? Don't we produce carbon dioxide through respiration?"

Bryce writes a piece about Global Warming and has no clue about the carbon cycle. Sir Oolius is suitably scathing.

Tim, you should have continued the quote from Bryce's piece:

'The answer, is yes, we do. Thus, by including the claim that you can "reduce your carbon emissions to zero" the film's producers might as well have hung a sign around Gore's neck that says "I'm an idiot."'

No, Bryce. The only idiot around here is you!

He continues:
"Don't get me wrong. I'm not saying that there's no global warming nor am I claiming that carbon dioxide has no effect on the atmosphere. I've read the IPCC's latest report for policymakers. I've also read a good bit of what the skeptics have to say. All of it leaves me confused."

Yeah, listen to scientists and flat-earthers and assume both sides are credible, and you will be genuinely confused. This guy's greatest criticism of the movie is essentially a nit-picking point. Keep up the good work, Bryce.

"They both seem to be reading from the same script"
Bingo. I enjoy googling new rightwing memes verbatim (with quotes) to see that 53 sites have overnight sprung up with identical wording, and not one has paraphrased them.

Yep, the right must feel persecuted by the evil liberals who control everything in the world. Here we have the brave Stephen McIntyre standing up to the evil liberal-communist beast, NASA. And they are so afraid of revealing their junk data that they block him from accessing it. Or could it be that the program he wrote to gather the data was hammering data.giss.nasa.gov with thousands of requests which violated their rules? Eh, the former explanation is more conspiratorial and makes the righties feel more like underdogs. The latter makes too much sense and can't be used to paint NASA as the bad guy.

LOL. I guess that's why they're so skeptical about AGW. They can see into the future and see that nothing catastrophic will happen.

Chad, why don't you read this:
http://www.climateaudit.org/?p=1584

This is the only source of information on what happened with Mc and this "DOS attack". Anything else you read about this is just someone else's take on what happened. There is a lot of misinformation about what happened. I will concede that Mc made mistakes and there are things you could say about him if you wanted to. But you are mischaracterizing what happened. First, you can see in the first few responses that the 'robots.txt' file that he was supposed to be following was not even accessible to him. So even if he had known to check for the robots.txt file (something you could certainly take Mc to task for, if you would like - though based on your comments, it doesn't sound like that is what you want to take him to task for), NASA was preventing people who didn't have the correct user agent from reading it (note that the poster who tried to download the robots.txt using a command line program had to know enough to 'pretend' that the command line program was really the user agent "Netscape/4").

Next, you say that he wrote the script. Mc explicitly refers to the fact that he got the script from someone else who had already used it to download all of the data that Mc was trying to get. The author of the script gave some description of how the script worked and how it did not cause a DOS attack. At any one time the script added no more than 1 additional request to the load on the NASA server - the equivalent of you clicking buttons on your browser very quickly. In addition, the webmaster comments that Mc had triggered 16000 requests in the period when Mc was running the script. Mc states that this was over an 8 hour period. 2000 requests an hour (much less than 1 per second) is not "hammering" any computer built in the last several years. I'd be willing to bet the server software could be installed on the desktop computer you made that post from and it would handle that load. The fact that Mc was able to execute this many requests was because the NASA server was able to process them all. The script had also been executed at least once, possibly many times, by the original author and others that it may have been shared with.
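For scale, 2000 requests an hour is one request roughly every 1.8 seconds. A sketch of what a deliberately polite fetcher at that rate looks like - the base URL, user agent string and station paths are hypothetical, not the actual GISS endpoints:

```python
# Minimal polite-fetcher sketch: consult robots.txt, identify yourself,
# and throttle to well under one request per second. The base URL,
# user agent and station IDs are hypothetical, not real GISS endpoints.
import time
import urllib.robotparser
import urllib.request

BASE = "https://example.org"            # hypothetical data server
AGENT = "station-fetcher/0.1"

robots = urllib.robotparser.RobotFileParser(BASE + "/robots.txt")
robots.read()

def fetch(path):
    url = BASE + path
    if not robots.can_fetch(AGENT, url):
        raise PermissionError(f"robots.txt disallows {url}")
    req = urllib.request.Request(url, headers={"User-Agent": AGENT})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

for station_id in ("0001", "0002", "0003"):   # illustrative station IDs
    fetch(f"/station/{station_id}")
    time.sleep(2)   # at most ~1800 requests/hour, one at a time
```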

People have claimed that the fact that Mc encountered errors was proof that he was causing a DOS attack and the server was returning errors. However, Mc also explicitly states that his errors were 403 status codes. A 403 error is an access error; it means you are trying to access something forbidden to you. This started after the NASA webmaster restricted Mc's access via his IP address. The errors were not caused by server load, but because his access was restricted.
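The distinction is visible in code: an overloaded server answers with 5xx errors or timeouts, while a 403 is an access-control response. A minimal sketch (the URL is hypothetical):

```python
# A 403 means "forbidden" (access control), not "overloaded"; overload
# shows up as 5xx errors or timeouts. The URL is hypothetical.
import urllib.error
import urllib.request

try:
    urllib.request.urlopen("https://example.org/data")
except urllib.error.HTTPError as e:
    if e.code == 403:
        print("blocked by access rules, not by load")
    elif e.code in (500, 502, 503, 504):
        print("server-side trouble, possibly load")
```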

Finally, people have stated that Mc should have just asked for the data, instead of running the script. Well, when that finally happened, the people responsible for the data just told him to continue using his script in the same manner he had been using it.

The worst Mc was guilty of was what Lee continually referred to as 'whinging'. Ok. Is that really the point you were trying to make? That when he got the data that pointed out the NASA error, he was 'whinging' about it?

By oconnellc (not verified) on 19 Aug 2007 #permalink

It seems especially unfortunate that we can't create a non-politicized committee to examine geosequestration. Were these political appointees? The area has enough skeptics from the left - who are afraid it might fail, or that it allows us to continue to use fossil fuels. It doesn't need a whole additional pack of people who deny that AGW is even a problem.

oconnellc is, as usual, wrong. McIntyre was getting "a few missing records" before the webmaster blocked him. This is good evidence that he was overloading their system. And yes, you could build a system that could handle such a load, but they hadn't, because they did not anticipate or desire bots to download every page.

McIntyre is not naive -- he knows how the right-wing media will present this stuff. His motivation here seems to be malice against NASA.

oconnellc- I read the link you provided. It was my mistake to use the words "program he wrote". So I retract that. The point I was making, in a very sarcastic way, was that many on the right were quick to see it as a conspiracy. Like NASA had something to hide. I was writing that after reading the link that boris posted and the link going to CA, "GISS Interruptus" from another Deltoid post.

Mc's accounting of what happened, as related in that townhall piece, is, to put it kindly, bullshit.

As was pointed out to him at CA, at the time, by even some of those who in general support him.

And oconnellc is smart enough to know this.

In fact, I'm certain oconnellc does know this.

The worst Mc was guilty of was what Lee continually referred to as 'whinging'.

No, he's guilty of lying about what happened to the townhall blogger.

He's also guilty of lying about the significance of the changes to the weather data to the townhall blogger.

"Were these political appointees?"

Yes. And not only that, they are conservative government politicians. And if you knew their track record on other issues you would be laughing (or crying) even harder.

By Obdulantist (not verified) on 19 Aug 2007 #permalink

Does anyone know how Dana Vale and Jackie Kelly ended up on the science committee?

Dana Vale is best known for two incredible gaffes. The first was the proposed Gallipoli "theme park" on the Mornington Peninsula... because it kinda looks like the Turkish coast. The other was her bizarre claim in parliament that Australia would become a nation with an Islamic majority in 50 years because we Aussies "are aborting ourselves almost out of existence".

http://www.smh.com.au/news/national/abortion-will-lead-to-muslim-nation…

Jackie Kelly once made the famous remark about her north-western Sydney electorate: "no-one in my electorate is interested in a university education, Penrith is pram city". Someone so disparaging of tertiary education, particularly in less wealthy regions, should not have input into Australia's science and technology policy.

Then there is Dave Tollner, who thinks the best way to stop cane toad infestations, rather than a broad based approach, is to whack the bloody things with a cricket bat. He also got done taking alcohol into a dry indigenous community:

http://www.theaustralian.news.com.au/story/0,25197,22125374-601,00.html

The man is clearly an idiot.

As for Dennis Jensen, he should know better.

Thanks for the link and the roundup - I think John Quiggin is right - the whole thing has got a life of its own beyond anything that makes any sense.

Just having a brief look at the dissenting report. I notice that the only scientific journal used to reference the "it's warming on other planets" claim is the Hammel and Lockwood article (GRL 34, 2007), which has been shown to contain some serious flaws, including the use of an out-of-date TSI database and very low formal correlation. All the other references come from popular publications.

They also make the old claim that the "scientific consensus" position in the 1970s was that the Earth was going to enter another ice age (which William Connolley at Stoat has pretty comprehensively demolished). Their only reference is an online article by George Will (http://www.realclearpolitics.com/articles/2006/04/cooler_heads_needed_o…) at the denialist site realclearpolitics.com. The article does not cite a single reference, and contains many factual errors. Opinion pieces like this should never, ever be used as a basis for policy.

The report also dredges up some old favourites: water vapour is the dominant greenhouse gas; climate sensitivity to a doubling of CO2 is low (their reference is personal communication with Richard Lindzen); and the temperature of the globe peaked in 1998. They also make the bizarre claim that the mass balance of the Greenland ice sheet is positive, which they justify with a single paper from 2002, published pre-GRACE measurements.

I found these errors after all of 10 minutes of reading while waiting for noodles to soak. God knows what calamities an in-depth read could dredge up.

Tim, I hope you live in a marginal Liberal electorate come November, so we can vote these morons out (and replace them with other morons, but that's a whinge for another day).

There is no best way to deal with cane toads, which are a pest in Hawaii and Florida as well as Australia (and where, just as in Australia, the buggers don't even eat the insect pests they were introduced to control).

Here's a good piece:

http://news.mongabay.com/2005/0417-tina_butler.html

Sorry, just found another howler.

The report claims that current sea level rise is simply a rebound from the last glacial period, and cite this GISS website:

http://www.giss.nasa.gov/research/briefs/gornitz_09/

Which does not support their case at all, but discusses, in depth, past sea level rises and the current accelerating sea level rise:

"The current phase of accelerated sea level rise appears to have begun in the mid/late 19th century to early 20th century, based on coastal sediments from a number of localities. Twentieth century global sea level, as determined from tide gauges in coastal harbors, has been increasing by 1.7-1.8 mm/yr, apparently related to the recent climatic warming trend. Most of this rise comes from warming of the world's oceans and melting of mountain glaciers, which have receded dramatically in many places especially during the last few decades."

The article then goes on to warn of the dangers of melting the West Antarctic ice sheet or the Greenland ice sheet for sea level rise, stating that:

"A global temperature rise of 2-5°C might destabilize Greenland irreversibly. Such a temperature rise lies within the range of several future climate projections for the 21st century"

These people didn't even read their references. Nowhere does this article mention a recovery from the last glacial as an explanation for modern sea level rise.

I weep that these people are my elected representatives.

Bad, bad Boris, for making me read this:

"After I was blocked and I explained myself they still didn't want to let me have access to the data," McIntyre lamented.

He continued: "They just said go look at the original data. And I said no, I want to see the data you used. I know what the original data looks like. I want to see the data that you used. But one of the nice things about having a blog that gets a million and half hits a month is that I then was able to publicize this block in real-time and they very quickly withdrew their position and allowed me to have access."

I can't tell what's worse -- the paranoia or the naked self-congratulation.

It's like taking a bite of something with a revolting taste and a disgusting smell, then trying to distinguish between the two.

Well, I have been trying to comment here on the subject of the post but your webmaster will have none of it! Fair enough, but he doesn't bother to tell me why despite me sending him an e-mail. I assume from the success of my test comment above that I am not banned?

David Duff:

I find your comments strange. I have known Tim to ban only on very rare occasions, and usually not related to a single post but a series. He seems quite willing to put up many opposing comments, and if a comment is offensive he usually disemvowels it as opposed to deleting it. If your comment did not appear it is safe to say he didn't get it. Why don't you post it at your own blog and link to it.

By John Cross (not verified) on 21 Aug 2007 #permalink

John, thank you for your suggestion but a) I actually wrote: "I assume from the success of my test comment above that I am not banned?" which means what it says; and I also wrote that "he [the webmaster who invites one to e-mail him if you have difficulties] doesn't bother to tell me why despite me sending him an e-mail" which also means what it says. But in case you're still confused, I am simply trying to tell our host that there appear to be difficulties getting through.

David Duff: I am still confused. Your initial statement was:

Well, I have been trying to comment here on the subject of the post but your webmaster will have none of it!

That seems to be a statement that the webmaster is not allowing your comments on here. Compare that with:

I am simply trying to tell our host that there appear to be difficulties getting through.

Perhaps you could explain how the two are the same? Thanks,

John

By John Cross (not verified) on 21 Aug 2007 #permalink

McIntyre is one of those people who are satisfied to spend life diddling with details.

They manage to convince themselves that what they are doing is very important when it really means nothing.

Well, I know I am a skeptic, but it's kinda fun watching people bash Steve McIntyre when he has been proven right almost every time. The alarmists still feel the need to hide the data that they say supports their findings, but Steve McIntyre has shown repeatedly that they have a good reason for this.

Let's take Gavin Schmidt from RC, who made the statement:

It has been estimated that the mean anomaly in the Northern hemisphere at the monthly scale only has around 60 degrees of freedom - that is, 60 well-placed stations would be sufficient to give a reasonable estimate of the large scale month to month changes. Currently, although they are not necessarily ideally placed, there are thousands of stations - many times more than would be theoretically necessary.

Which is little more than a lie wrapped up in 'estimation' when the facts are not quite so good for the alarmist position. NOAA/CRN says that a minimum of 100 stations are needed for the USA alone, not the whole NH! To get a higher confidence of the climate, 95 percent, a full 300 stations are needed just for the USA. Saying only 60 stations are needed is just untrue and a climatologist should know this.

Further, Gavin goes on to say that there are 'thousands of stations - many times more than would be theoretically necessary' when this is a lie. The fact remains that for surface stations GISS, where Gavin works, uses Hansen et al (2001) to adjust the station trends. For UHI this means that only ~250 stations are used to adjust the other ~1000 stations within the USA.

Now why does Gavin misrepresent reality like this? Because surfacestations.org is showing that a significant number of stations do not meet NOAA/NWS/WMO site guidance. Studies have found that failure to meet site guidance will lead to 1-5 degree C errors for the station.

The impact of this is that errors at a small number of 'rural' surface stations will inject a warming trend that is just an artifact of the errors. Hansen pointed this out in his 2001 paper but does not want to believe it now.

This is even worse for the rest of the world. South America for example, only has six 'rural' stations and half of them are islands.

So the 'accelerated' warming from instrumented readings that diverge from the proxies and cause the divergence problem could be nothing more than a warming bias due to improperly sited stations.

Oh, and this 'accelerated' warming is the reason given that solar could not be the cause of warming, but in the USA, which has the best 'rural' stations network, there is almost no warming trend in the 20th century.

So why would an alarmist want to release their data and work when so much of it can be proven to be bad assumptions, lies, and poor statistics?

"Studies have found that failure to meet site guidance will lead to 1-5 degree C errors for the station."

What studies would those be? Can you provide a citation? And do those studies report a + or - error in degree C?

Well, one such study, 'The Role of Rural Variability in Urban Heat Island Determination for Phoenix, Arizona' (2006), shows 1-5 degree C variability based on siting.

Vernon, let me do this real slow for you again:

temperature does NOT matter for global warming. what matters is a CHANGE (TREND) in temperature.

the temperature in Texas will quite often be a few degrees hotter than in Alaska. but that is completely IRRELEVANT.
because Alaska stations show the same trend that Texas stations show.

so the urban heat island effect will only influence the TREND if a station that was not under the UHI effect before suddenly becomes affected by it. (growing city "eats" former rural station)
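The level-versus-trend point is easy to verify numerically: a constant siting bias shifts a station's readings but drops out of the fitted slope. A toy check, with all numbers made up:

```python
# Toy check: a constant warm bias changes a station's level, not its trend.
# The trend, noise level and 3C bias are invented for illustration.
import numpy as np

years = np.arange(1900, 2000)
rng = np.random.default_rng(1)
signal = 0.007 * (years - years[0]) + rng.normal(0, 0.2, years.size)

good_site = signal               # well-sited station
biased_site = signal + 3.0       # same weather, parked next to asphalt

slope_good = np.polyfit(years, good_site, 1)[0]
slope_biased = np.polyfit(years, biased_site, 1)[0]
print(np.isclose(slope_good, slope_biased))   # True: the offset drops out
```

The real dispute, as the rest of the thread shows, is whether siting biases stay constant or grow over time; only a growing bias contaminates the trend.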

THINK. TYPE. POST.

"Well, one such study is ..."

That study shows that individual rural sites differ with respect to temps, which cannot be a surprise, but does not appear to discuss departures from 'site guidance'. Also, the temperature variations are both + and -. As the published analyses by Hansen et al deal with temperature changes over time, I don't see how this study supports your case. Any other studies?

sod, let me do this real slow for you. You have a small number of stations and right now, based on the census that is underway, it would appear that 50 percent of them do not meet site guidance. These are the ~250 (lights = 0) stations that Hansen considers 'rural'.

Now why are you wrong? Well, the consensus of peer reviewed studies on microsite issues shows the exact opposite of what you claim. Pielke's study, Oke's study, Gallo's study.... Very simply, UHI causes can also be seen at the microsite level. The causes are the same and the scale does not matter.

The causes are:

1. Canyon effects: these happen at the "urban level" and the microsite level. Think multipath radiation.
2. Sky view impairment: happens at the urban level and microsite level.
3. Wind shelter: happens at both scales.
4. Evapotranspiration: asphalt doesn't breathe like soil. 10 miles of asphalt or 10 feet.
5. Artificial heating: essentially a density question. Same cause, different scales.

the sign of bias for UHI and microsite is the same because the causes are the same. So basically the TREND you're so big on is UHI on a smaller scale.

If you read Hansen (2001) you would see that he tossed out any station that did not show warming (there were five). Talk about loading the results.

So, what 'accelerated' warming trend are you seeing in Texas and Alaska? The USA warming trend is 0.12 degree C for the 20th century.

So, where does the ROW big trend come from, well, there are not a lot of long term stations. There are few rural stations (all of South America has six). Six stations are determining the trend for all of SA.

Basically it comes down to not having enough stations to actually get the data needed to determine the trends, and not good enough quality control of the stations to determine if what is being seen is the actual trend or microsite UHI.

That is why making the statements about thousands of stations is a lie.

Oh and sod, how many years worth of observations are the basis for this unusual amount of melting? Let me guess, it is 30 years worth of satellite imagery? I always love the fact that we just start measuring something and since we have not seen it before, it is unusual. Myself, I do not know if it is unusual or not. We are coming out of the LIA and I do not know how much ice there should be. I did read the UC Irvine study that indicates that a significant portion of the melting is due to dirty snow.

Vernon, put your glasses on and have a look at this graph of the comparison of surface temperatures with two data sets from two different satellites.

http://tinyurl.com/2zlf8e

Please explain to me how it is possible that the micro-site errors you keep claiming to exist allow the three graphs to be superimposed.

By Ian Forrester (not verified) on 05 Sep 2007 #permalink

"If you read Hansen (2001) you would see that he tossed out any station that did not show warming (there were five)."

I can't find that in Hansen 2001. Got a page number to document that?

That is amazing, not! Notice I said the 20th century, not the last 20 years. Gee, if you take those same charts and go from 1998 to 2005 you see cooling! It is amazing! What does it prove, why nothing, well except that there is both warming and cooling going on.

Since you did not bother to show what the charts are referencing, would that be the globe, the USA, Spain? I do know that the satellites did not show the correct warming to begin with, until an 'adjustment' was found. I have not done any studies into that so I freely admit I do not know.

I do know that my original post was about alarmist mis-representing facts. I have not seen anyone address what I am talking about. Sorry.

Oh, and so you're not too confused... from the 1980s the USA-only and the ROW records do match. It is what is shown for the ROW prior to the 1980s that is the issue. Well, that and the fact that most of the world does not have many rural stations in that time period, so the trend that is applied to all the rest of the stations is suspect.

richard, Try page 7.

The strong cooling that exists in the unlit station data in the northern California region is not found in either the periurban or urban stations either with or without any of the adjustments. Ocean temperature data for the same period, illustrated below, has strong warming along the entire West Coast of the United States. This suggests the possibility of a flaw in the unlit station data for that small region. After examination of all of the stations in this region, five of the USHCN station records were altered in the GISS analysis because of inhomogeneities with neighboring stations (data prior to 1927 for Lake Spaulding, data prior to 1929 for Orleans, data prior to 1911 for Electra Ph, data prior to 1906 for Willows 6W, and all data for Crater Lake NPS HQ were omitted), so these apparent data flaws would not be transmitted to adjusted periurban and urban stations. If these adjustments were not made, the 100-year temperature change in the United States would be reduced by 0.01°C.

Now if you read Hansen (2001), page 21, graph f (all unlit stations), that is the actual temperature trend for stations that are rural. It would appear that the 'warming' trend in Hansen's work is an artifact of his adjustments, since the rural temperature should, like the current NOAA/CRN, be showing what the trend is without UHI impacts.

richard, Hansen found 5 stations in northern California that had trends way, way off from surrounding stations, and dropped them.

This is localized to that small region, and (from memory - I don't have the paper on this computer) keeping that strong cooling island in a sea of warming would have caused only a few thousandths of a degree change in the US trend.
---
Vernon, those "charts" are clearly labeled. you said: "Since you did not bother to show what the charts are referencing." Vernon, one is clearly labeled "global", the other "USA(48)". Try not to also be an idiot when you're being an asshole.

Of course the overlap only spans the time the satellites have been up. That does not change the fact that the comparison clearly shows that the adjustments Hansen uses are managing to get the surface station record corrected to within a very small margin, 0.1C or so.

"from 1998 to 2005 you see cooling!" Bwaaaaahaaaaaa. Dude, cherry picking like that labels you for what you are, as clearly as anything can.

"most the world does not have many rural stations in that time period" If youa re goign to make such claims, please documetn how many is "not many" and how the number is not sufficient to analyze temp trends. Go for it.

Lee, sorry my bad, missed the labels. And once again I say, so what that they match over the very short period? The issue that you're trying to ignore is that:

The alarmists still feel the need to hide the data that they say supports their findings, but Steve McIntyre has shown repeatedly that they have a good reason for this.

Let's take Gavin Schmidt from RC, who made the statement:

It has been estimated that the mean anomaly in the Northern hemisphere at the monthly scale only has around 60 degrees of freedom - that is, 60 well-placed stations would be sufficient to give a reasonable estimate of the large scale month to month changes. Currently, although they are not necessarily ideally placed, there are thousands of stations - many times more than would be theoretically necessary.

Which is little more than a lie wrapped up in 'estimation' when the facts are not quite so good for the alarmist position. NOAA/CRN says that a minimum of 100 stations are needed for the USA alone, not the whole NH! To get a higher confidence of the climate, 95 percent, a full 300 stations are needed just for the USA. Saying only 60 stations are needed is just untrue and a climatologist should know this.

All the chanting about why surfacestations.org's census does not matter because there are so many stations that there is over-sampling... which is not true. Which you want to ignore. Which you want to drag off on a tangent to keep from having to deal with.

The answer is that nothing you're saying has addressed this. The fact remains that people like Steve McIntyre have shown that the facts are not what the 'alarmists' are presenting.

vernon, back that up. Cite it.

Because, among other things, when you say:
"To get a higher confidence of the climate, 95 percent, a full 300 stations are needed just for the USA."
you are spouting statistical gibberish. And you have already shown, over and over, that you haven't got a clue what you are talking about. Do you even understand what Gavin means when he says '60 degrees of freedom, 60 well-placed stations'? Do you know what a confidence interval is? That '95 percent confidence' means zip, nada, zilch in this context, without a stated confidence interval? That you are writing sentences that have no meaning?

"richard, Try page 7. "

So is your claim that including the data from these five stations would significantly change the results of the data analysis?

If that is the case, then it would seem to me that you could demonstrate this by carrying out a re-analysis and demonstrate that the warming is, as you claim, an artifact. Have you done this? Would that not be the simplest approach?

sod, let me do this real slow for you. You have a small number of stations and right now, based on the census that is underway, it would appear that 50 percent of them do not meet site guidance. These are the ~250 (lights = 0) stations that Hansen considers 'rural'.

that is all nice, but has nothing to do with the argument i made.
whether a site is hot or cold does not matter. because we only want the TREND over years!

Now why are you wrong? Well, the consensus of peer reviewed studies on microsite issues shows the exact opposite of what you claim. Pielke's study, Oke's study, Gallo's study.... Very simply, UHI causes can also be seen at the microsite level. The causes are the same and the scale does not matter.

of course there is NO consensus on "microsite issues", in the context of climate change.
but you can point me to some of those peer reviewed articles. 500, perhaps?

there is a phenomenon called "micro climate". it's not the same.

1. Canyon effects: these happen at the "urban level" and the microsite level. Think multipath radiation.
2. Sky view impairment: happens at the urban level and microsite level.
3. Wind shelter: happens at both scales.
4. Evapotranspiration: asphalt doesn't breathe like soil. 10 miles of asphalt or 10 feet.
5. Artificial heating: essentially a density question. Same cause, different scales.

again, all of these effects will only affect a station in the context of climate change, if it will change the TREND of the station.

1. yes, there is a "canyon effect".

2. "sky view impairment" rather obviously is a term, that is only used by YOU! in general, it will provide a COOLING TREND for the station. i ve seen little discussion of cooling "microsite" effects so far. have you done any study on this?

(btw, a google search of your strange terms brought up the realclimate page, where you had exactly the same argument rebutted!)
http://www.realclimate.org/?comments_popup=469#comment-51236

4. the claim that 10 feet or 10 miles of asphalt would have the same effect on a station is simply total nonsense.

the sign of bias for UHI and microsite is the same because the causes are the same. So basically the TREND you're so big on is UHI on a smaller scale.

NO! the difference is simple:

it's reasonable to assume that cities get bigger and hotter. this would lead to a slow increase in temperature over time that could be mistaken for a climate change effect, when it really only is a microclimate change (the UHI got bigger).

for a majority of rural stations, there does not exist a similar explanation. while changes might occur, they will often happen suddenly, and the effect will generally be smaller by orders of magnitude and might as well be negative.

If you read Hansen (2001) you would see that he tossed out any station that did not show warming (there were five). Talk about loading the results.

lol. "any!" lol.

Basically it comes down to not having enough stations to actually get the data needed to determine the trends, and not good enough quality control of the stations to determine if what is being seen is the actual trend or microsite UHI.

NO!
there are different types of measurement around. satellites, trees, plant growth, animals, glaciers, sea ice and ice cores all support the thesis of climate change, none of them is explainable by UHI.

Oh and sod, how many years worth of observations are the basis for this unusual amount of melting? Let me guess, it is 30 years worth of satellite imagery? I always love the fact that we just start measuring something and since we have not seen it before, it is unusual. Myself, I do not know if it is unusual or not. We are coming out of the LIA and I do not know how much ice there should be. I did read the UC Irvine study that indicates that a significant portion of the melting is due to dirty snow.

i always like how sceptics tend to come up with different explanations (sea ice cycles vs. dirty snow), contradicting each other. both explanations are false, of course!

people have been watching, and trying to ship through, the arctic for more than 30 years.

sea ice grows and melts every year. it would need to get dirty every year anew!

hm, this part got lost somehow:

1. yes, there is a "canyon effect". are you trying to imply, that a significant amount of rural stations had it s location affected by some canyon appearing recently?

Well, let's see. Where to begin. Where to begin. First, the precise statistical definition of the 95 percent confidence interval is that if the sampling was conducted 100 times, 95 times the results would be within the confidence intervals and five times they would be outside. NOAA/CRN says in order to get the confidence interval of 95 percent, 300 stations are needed. This is more than just the possible range around the estimate. It also tells you about how stable the estimate is. A stable estimate is one that would be close to the same value if the survey were repeated.

Secondly, I am not implying that Hansen dropping 5 stations would have a major impact, but that he still removed them, even though per Hansen they would not have a large impact, because they did not match a preconceived trend - and that is an indicator.

sod, the canyon effect is multipathing. If you do not know what multipathing is, google it. Basically it means that heat in the form of radiation is both arriving directly and indirectly. I will clue you in: if you put your surface station next to a structure as the site guides tell you not to, then you get the canyon effect (multipathing) which increases temperatures, you get possible shading which decreases temperatures, you get wind shelter which increases temperatures, you get asphalt which increases temperatures, and you get artificial heating which increases temperatures.

NOAA/CRN seems to think that the site matters and is taking exceptional steps to find sites that will not be subject to local microclimatic interferences such as might be induced by topography, katabatic flows or wind shadowing, poor solar exposure, the presence of large water bodies not representative of the region, agricultural practices such as irrigation, suspected long-term fire environments, human interferences, or nearby buildings or thermal sinks.

These will all cause a bias to the trend you're trying to find. With a limited number of stations and a significant number of stations that do not meet site guidance, making any claims about 'thousands' of stations and over-sampling is a lie, which is my point.

Finally, the number of rural stations. I suggest you go to GISS and look it up yourself. I offered the following: there are only six rural stations that meet Hansen (2001) in all of South America. Want to make a guess at the confidence interval that would have?

But since you alarmists are hell-bent on ignoring my argument to chase rabbit trails: anyone want to cite me anything that says only 60 stations are needed to get the climate trend within the northern hemisphere? Anyone want to cite me any study that shows that station site guides do not matter?

hm.

NOAA/CRN says in order to get the confidence interval of 95 percent, 300 stations are needed.

i have some doubts whether you understand what a confidence interval is.

the 300 stations are irrelevant, because you do not understand how the correction for urban stations is made.
they are NOT dropped entirely. (that's what you seem to think)
instead, an adjustment is made to make the TREND at the urban station more similar to the trend of close rural stations. so you can't simply rule out the urban stations, which gives you a much higher station count than 300.

if the urban station in daisyville, texas didn't change a lot, it will show a similar TREND to the surrounding rural stations in cowcounty. NO adjustment will happen!

check page 5 of the Hansen paper.
(note btw, that he mentions Phoenix, Arizona as a place with extreme urban heating. not surprising. simply imagine building a city somewhere in a desert..)

Secondly, I am not implying that Hansen dropping 5 stations would have a major impact,

hm. you started by making the claim that he excluded ANY station that had a negative trend.
now you admit that he only left out 5 stations, and you don't think that those would make a difference?
wouldn't dropping all stations with a negative trend make a difference?

sod, the canyon effect is multipathing. If you do not know what multipathing is, google it. Basically it means that heat in the form of radiation is both arriving directly and indirectly. I will clue you in: if you put your surface station next to a structure as the site guides tell you not to, then you get the canyon effect

what you are doing is bizarre.
you take the exceptional heat island effect, measured at two different sites in Phoenix, Arizona. then you claim that a similar effect happens when a structure is built near a rural station. that is nonsense!

again: changes at a rural station only change the TREND if there are multiple changes over time, ALL causing HIGHER temperature.
the effect will only become similar to a UHI if a METROPOLITAN area is built around it. rather unlikely.

here's an example of how i think it is done:

imagine a NO global warming scenario. (yep, try hard.)
you have a rural and an urban station, very close together.

the urban station (due to city growth) has a temperature increase of 1°C every year, over a 10 year period.

the rural station of course has no increase. but after 5 years, a new building next to it makes its temperature jump by 5°C.

to adjust the urban station, you will compare the changes every year.
this will lead to a DOWNWARD correction of the temperature increase for the first 4 years.
in the fifth year, the temperature increase at the urban station (1°C) will be increased UPWARD (because the rural station shows a 5°C change).
the change (TREND!!!!) in the remaining years will be corrected DOWNWARD again.
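A toy version of the trend-matching adjustment being argued about here - an illustration in the spirit of sod's example, not the actual Hansen/GISS homogenisation code:

```python
# Toy urban adjustment: remove the urban record's excess linear trend
# relative to the mean of its rural neighbours, keeping the urban
# station's own year-to-year variability. All numbers are invented;
# this is not the actual GISS procedure.
import numpy as np

def adjust_urban(urban, rural_mean, years):
    excess = np.polyfit(years, urban - rural_mean, 1)[0]  # C/yr of UHI growth
    return urban - excess * (years - years[0])

years = np.arange(1950, 2000)
rng = np.random.default_rng(2)
rural_mean = 0.010 * (years - 1950) + rng.normal(0, 0.1, years.size)
urban = rural_mean + 0.020 * (years - 1950) + rng.normal(0, 0.1, years.size)

adjusted = adjust_urban(urban, rural_mean, years)
print(np.polyfit(years, adjusted, 1)[0])   # ~0.01 C/yr, matching the rural trend
```

Note that the adjusted urban series keeps its own interannual wiggles, which is why the urban stations still count as data rather than being dropped.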

sod, you my friend are loud but of limited value in these discussions. You make statements but offer no facts, studies, or cites to such. Please, either present supporting documentation, as I do, or be prepared to be ignored. The ~250 stations are used to adjust all the other stations' trends. If all the other stations' trends are modified to meet the trend of those ~250 stations, which they are, then the other stations do not matter.

You are saying that if I change the trends from all the stations to match the Hansen lights=0 rural stations, I am doing more than just measuring the trends from those ~250 stations. Huh? How?

Once again I say:

But since you alarmists are hell-bent on ignoring my argument to chase rabbit trails: anyone want to cite me anything that says only 60 stations are needed to get the climate trend within the northern hemisphere? Anyone want to cite me any study that shows that station site guides do not matter?

sod, you my friend are loud but of limited value in these discussions. You make statements but offer no facts, studies, or cites to such.

nice try.
i pointed you to page 5 of the hansen paper above. my answers to you included links to your comments on realclimate and to the sea ice center.

The ~250 stations are used to adjust all the other stations' trends. If all the other stations' trends are modified to meet the trend of those ~250 stations, which they are, then the other stations do not matter.

that is total nonsense.

look at what Hansen writes:

Indeed, in the global analysis we find that the homogeneity adjustment changes the urban record to a cooler trend in only 58% of the cases, while it yields a warmer trend in the other 42% of the urban stations.

(page 5 again, btw)

the results of the urban stations ARE important. you cannot simply drop them as datapoints.

sod, how does what you said change anything? GISS and Hansen 'adjusted' the trends of the non-rural stations to match the rural stations and discussed how that 'adjusted' trend differed from the station's own original trend. And this proves what? The value of using non-rural stations which have had their trends 'adjusted' to match the local rural stations adds what? Why, nothing. Why? Because the only station trends being used are the rural ones.

Once again I say:

But since you alarmists are hell-bent on ignoring my argument to chase rabbit trails: anyone want to cite me anything that says only 60 stations are needed to get the climate trend within the northern hemisphere? Anyone want to cite me any study that shows that station site guides do not matter?

Oh, and the non-rural stations are worthless for measuring the actual climate trends; hence the USCRN network, which will accurately measure climate trends within the USA, will have no non-rural stations.

" Please, either present supporting documentation, as I do, or prepared to be ignored."

That's funny. I see assertions, not supporting documentation.

Hansen and his colleagues have presented data analyses supporting AGW. Their work has been peer-reviewed and accepted by the climate science community. If you and your colleagues want to overturn their conclusions, then the only way to do it is via the peer-review process. Unless you and your colleagues are willing to carry out your own analyses, your complaints will not have any standing. Government policy decisions will follow from the IPCC reports; in the U.S., at least, the political allies of the AGW skeptics are getting fewer and weaker.

If you and your colleagues really believe you are correct, then surely it is in your interest to proceed with your own data analyses and get them into the peer-review literature. Why haven't you done that? You could start by showing that sites you think are questionable actually have produced biased data and that these significantly affect the trends shown by Hansen etal. Asserting that is so does not make it so.

The work of Hansen et al remains unchallenged in the literature; the blogosphere noise does not change that.

Station records are examined for sudden jumps and corrected; this is part of the NOAA homogeneity corrections.

By Eli Rabett (not verified) on 06 Sep 2007 #permalink

Gee richard, I guess this means that:

a. you have no studies or cites that indicate that Gavin was telling the truth - that you only need 60 stations for the whole NH.

b. you have no studies or cites that indicate that using a limited number of rural stations to adjust all the other stations is the same as having thousands of stations.

c. you have no studies or cites that indicate Hansen was wrong when he says that his work is dependent on the accuracy of the data he gets from the surface stations.

d. you have no studies or cites that indicate that not meeting site guidance does not matter.

Well gee, what did you offer? I know - an appeal to authority.

Eli, BS. Since you're reputed to be a scientist, please show me:

how you can over sample when there are not enough stations to even get a 95 percent confidence interval?

Please explain how applying the trend from Hansen (2001) light = 0 rural stations to all the non-lights = 0 stations is doing any more than just using the trends from the ~250 stations?

How about you provide a cite that shows that only 60 stations are needed to measure NH climate trends?

How about a cite that proves that not meeting stations site guidance does not matter?

Finally, how does adjusting for sudden jumps address the problems with badly sited stations? Please provide a study or some cite that shows that badly sited stations will show sudden jumps?

How about addressing my argument rather than trying to play with red herrings?

"d. you have no studies or cities that indicate that not meeting site guidance does not matter."

Since you are obsessed with 'guidance', then perhaps you should adhere to the guidance of scientific principles. Produce some data analyses for peer-review. Otherwise you have nothing, just assertions. Assertions that no one will heed because you don't have the analyses to back them up. You are asserting there are problems with the data: prove it via peer-reviewed data analysis.

"an appeal to authority"

Actually, no, it's an appeal to the science. Rather than an appeal to assertions; that is your specialty.

richard, you're so funny. My whole argument is that gavin lied by saying that only 60 stations are needed to measure the climate trend in the NH and that, with only ~250 stations being used to 'adjust' the trends of all non-lights = 0 stations, there are not thousands of stations to over sample.

What kinda data do I get to prove a negative? Gavin makes claims that I am calling him on. He will not address them on RC.

Your spin is that if I do not get published I am not right. Sorry, but science does not require me to publish anything. I am not the one making the claims, I am the one saying that the evidence does not support his claims and I do present the evidence, which is more than anyone here has done.

Yay for the ad hom attack! I was pretty sure it would get around to that, since no one here actually seems to have facts, studies, or other evidence.

You forgot to point out that I am being paid off by the oil industry!

Remember, it is faith that matters, ignore those pesky facts.

"My whole argument is that gavin lied .."

Well, you can take that up with Dr Schmidt. However, your 'argument', as presented here, is based on assertions, not facts. When you have completed your own data analysis and submitted it to peer-review, then perhaps you will have something to blog about.

Oh, and, I think that if you review your own posts you will find that you certainly have made more ad hom attacks than I ever have.

"Sorry, but science does not require me to publish anything. I am not the one making the claims"

Well, you are making plenty of claims. You just can't back them up with any peer-reviewed data analyses. That's why you are not taken seriously. The peer-reviewed science is there; if you want to challenge it and get the attention of the climate science community, then get your own peer-reviewed publications. Why can't you do that?

So richard, you say:

blah blah blah blah blah blah blah blah blah blah blah blah

and yet you do not address any fact I have presented nor do you present any of your own. Do you have a point in this?

"do not address any fact I have presented "

You have not presented any facts. Just assertions. Don't you know the difference?

It's a fact that ScienceBlogs is getting closer to the 500,000th comment, but it is only your assertion that there is a significant problem with Hansen et al's data analyses.

vernon said:
"Please, either present supporting documentation, as I do, or prepared to be ignored."

guffaw. Yes, vernon, we are laughing at you.

---

Vernon, what Gavin said was that there are only about 60 degrees of freedom in surface temperatures, and that 60 stations in the proper place would be sufficient to sample the US surface temperature record.
You are leaving out critical parts of that statement when you dispute it - either because you don't understand it, or because you are being dishonest.

So, vernon, would you tell us in your own words what it means that US surface temperatures have only 60 degrees of freedom? Because I don't think you have clue one.
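The 'degrees of freedom' idea can be illustrated with synthetic data: when stations share a common large-scale signal plus correlated noise, extra stations are largely redundant, and the averaging error stops shrinking long before the station count reaches the thousands. A toy sketch; all noise levels are invented:

```python
# Toy illustration of effective degrees of freedom: stations share a
# common anomaly plus correlated regional noise, so adding stations
# stops helping once the shared part dominates. Numbers are invented.
import numpy as np

rng = np.random.default_rng(3)
months = 1200
signal = rng.normal(0, 1.0, months)     # the large-scale anomaly we want
shared = rng.normal(0, 0.3, months)     # noise correlated across stations

def network_error(n_stations):
    local = rng.normal(0, 0.5, (n_stations, months))  # independent site noise
    estimate = (signal + shared + local).mean(axis=0)
    return np.std(estimate - signal)

for n in (10, 100, 1000):
    print(f"{n:4d} stations: error {network_error(n):.3f}")
# The error flattens near 0.3: past a few dozen stations, the correlated
# noise, not the station count, limits the estimate.
```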

-

Vernon, your definition of a confidence interval is wrong in a very critical way. Even more important, your use of the concept continues to be simply balls-up gibberish. If there were only 100 stations, or 25 for that matter, there would still be a 95% confidence interval - by definition. For you to state that Hansen needs 300 stations to get a 95% confidence interval is a statement with no rational content. For you to continue to say that Hansen said he needed 300 stations to get a 95% confidence interval, repeatedly, without bothering to cite where he supposedly said such a thing, again makes me think you are either utterly clueless or dishonest.
--
Vernon, are we doing a repeat of your claim that Hansen started his analysis by applying an 'urban offset' subject to up to 5C error based on site locations? I notice you have dropped that argument - have you yet acknowledged, even to yourself, that you were simply, hopelessly wrong?

Lee, try to get your facts right. Gavin said:

It has been estimated that the mean anomaly in the Northern hemisphere at the monthly scale only has around 60 degrees of freedom - that is, 60 well-placed stations would be sufficient to give a reasonable estimate of the large scale month to month changes.

Notice he said NH, not USA. As for the USA, NOAA/CRN says that it takes 100 stations to sample the US surface temperature record. To actually get a good result (0.1 degree C) takes 300 stations. So, want to try again?

Once again look at what I am saying. I am saying that Gavin and the rest of you alarmists are saying that we can ignore the station site errors because of over sampling. I am pointing out that since it takes 300 stations to get a 95 percent confidence interval, the mere ~250 used in Hansen (2001) is not enough to qualify for 'over sampling'. I never said that Hansen says he needed 300; do you bother to read what I post? I said that NOAA/CRN said that.

Oh and if you were to read: Janis et al (2004) Station Density Strategy for Monitoring Long-Term Climatic Change in the Contiguous United States
http://ams.allenpress.com/perlserv/?request=get-abstract&doi=10.1175%2F…

A network of 327 stations for the contiguous United States satisfied a combined temperature-trend goal of 0.10°C per decade and a precipitation-trend goal of 2.0% of median precipitation per decade.

So to get 0.1 degree C you actually need 327 stations. GISS cheats: they apply the UHI adjustment from ~250 stations to all the non-rural stations and then act like the trend is from all the stations instead of the ~250 rural stations.

Now, what part did I get wrong? That 60 stations are enough to measure the climate trend in the NH? That there are enough stations to over-sample and still get results measured in the 0.01 degree C range? I do not think so.

Oh, almost forgot, 60 degrees of freedom is the number of observations minus the estimated parameters. I would like a cite to what study says that only 60 stations are needed.

Oh Lee, I quit when I realized that you refuse to either present facts or accept any. There are many studies that indicate the problems with sites and the effects of not accounting for the site issues; hence, there are site guides. You, on the other hand, have yet to present any evidence to the contrary. There is a limit to how much time I will waste on someone unwilling to actually discuss an issue.

I know, I know, you're going to say I do not agree with your facts... but then I do not see much in the way of facts, evidence, studies, etc. here.

I am willing to present the basis for my opinion. Why do you feel that you do not?

"I quit when I realized that you refuse to either present facts or accept any"

The facts in favor of AGW are presented in the IPCC reports. You have not presented any data to refute the data in those reports. You have made assertions that there are problems with the data, but you have not provided any data analyses or facts to show that the conclusions in these reports are wrong. You still don't seem to understand the difference between an assertion and a fact.

richard, richard, richard... sigh, why do you bother to tell untruths? I present arguments that are based on facts and logic. I present my cites to the studies and sometimes the IPCC documentation. You, on the other hand, keep popping up saying that if I do not publish then I am not allowed to offer an argument or point out where some of you alarmists are just flat-out not telling the truth.

How about you actually show where my argument is wrong rather than appealing to authority (IPCC) and taking it on faith that I must be wrong.

so, vernon:
"Oh Lee, I quit when I realized that you refuse to either present facts or accept any."
So, you still claim that Hansen starts his analysis by applying an "urban offset" subject to 1-5C errors based on where the stations are sited? Is that what you are saying? Because I repeatedly asked you for a citation to any source showing that procedure, and you have never, not once, not EVER shown me such a citation. The citation you did supply says no such thing. Your 'facts' are simply wrong in many cases, and you refuse to show where you are getting them from.
So stop blathering about showing sources until you actually show some RELEVANT sources.

"60 degrees of freedom is the number of observations minus the estimated parameters."
Oh, good fricking god....

----
"it takes 300 stations to get a 95 percent confidence interval"
To get WHAT 95% confidence interval? You are still spouting nonsense, vernon. ANY number of stations greater than 1 will give a 95% confidence interval. To say that you need some minimum number of stations to get a 95% confidence interval is gibberish; it means nothing, it has no logical content. Hansen does say something about a 95% confidence interval - do you know what the 95% confidence interval is for Hansen's surface station analysis?
---
"It has been estimated that the mean anomaly in the Northern hemisphere at the monthly scale only has around 60 degrees of freedom - that is, 60 well-place stations would be sufficient to give a reasonable estimate of the large scale month to month changes."
Then, vernon, why are you disputing this by talking about the number of station in the US?

--
You aren't stating facts, vernon - you are saying things that simply are not true, and simply claiming that some authority supports your position, without citation for that claim, or making citations that often say something completely different from what you claim.

"I present arguments that are based on facts and logic. I present my cites to the studies and sometimes the IPCC documentation."

You have not provided any 'cites' to demonstrate your assertion that Hansen et al's data analyses are incorrect. The IPCC reports support and are consistent with Hansen et al's analyses.

You are asserting that there are problems with certain weather sites. But you have not presented any data to show that there really are any problems with the data, nor have you presented a peer-reviewed re-analysis of data to support your position.

Once again, if you think your case has merits, you are free to carry out whatever data re-analysis you wish and submit it to peer review. Why can't you do that?

Lee, since you do not bother to read my posts I guess this is not going to matter, but... as I cited, it takes 300 stations to have a 95 percent confidence interval for temperature within the USA with a resolution of 0.1 degree C. Now, since you cannot read, you may not have noticed that I am saying two different things.

1. Gavin said that only 60 stations were needed for the whole NH. That is crap; if it is not, please provide any evidence for it.

2. Gavin and alarmists like you are constantly spouting that surfacestations.org's census does not matter because of over sampling. Well, in order to get a 0.1 degree C resolution within the USA (lower 48) it takes ~300 stations.

This means that since all stations other than the rural (lights = 0) ones are 'adjusted' to the rural lights = 0 trend, those ~250 stations are the trend. There is no way to over sample with ~250 stations. The other thousand do not matter since their trends are the trends of the ~250.

Now, which part of this is too hard for you to understand?

And as for the 1-5 degree impact of failure to meet site guidance: NOAA/CRN says in the Site Identification, Survey, and Selection FY 02 Research Project For The NOAA Regional Climate Centers (RCC) at pages 10 and 11:

ftp://ftp.ncdc.noaa.gov/pub/data/uscrn/site_info/CRNFY02SiteSelectionTa…

Class 1: Flat on horizontal ground surrounded by a clear surface with a slope below 1/3 (<19 degrees). Grass/low vegetation ground cover <10 cm high. Sensors located at least 100 meters (m) from artificial heating or reflecting surfaces, such as buildings, concrete surfaces, and parking lots. Far from large bodies of water, except if it is representative of the area, and then located at least 100 meters away. No shading when the sun elevation >3 degrees.
Class 2: Same as Class 1 with the following differences. Surrounding vegetation <25 cm. No artificial heating sources within 30m. No shading for a sun elevation >5 degrees.
Class 3 (error 1 C): Same as Class 2, except no artificial heating sources within 10m.
Class 4 (error >/= 2 C): Artificial heating sources <10m.
Class 5 (error >/= 5 C): Temperature sensor located next to/above an artificial heating source, such as a building, rooftop, parking lot, or concrete surface.

Now, looking at the pictures from surfacestations.org, I see a lot of stations that are class 3, 4 or, mostly, 5.

Once again, if you think your case has merits, you are free to carry out whatever data re-analysis you wish and submit it to peer review. Why can't you do that?

Indeed.

Why can't any of these chimps do that? Where are the hypotheses? Numbers? Papers? Journal articles? Scribblings on napkins? Idle party chatter?

No-f'n-where. Nowhere.

They got nothin'.

Ignore these people.

Best,

D

""A network of 327 stations for the contiguous United States satisfied a combined temperature-trend goal of 0.10°C decadeâ1 and a precipitation-trend goal of 2.0% of median precipitation per decade.""

Vernon: "So to get a 0.1 degree C you actually need 327 stations."

No, the citation doesn't mean you need 327 stations. It means 327 stations are sufficient. Try to understand the logical difference between "necessary" and "sufficient".

By Chris O'Neill on 06 Sep 2007

Once again... I do not see anyone addressing my argument here. I guess lee, richard, dano, and chris cannot do that.

Chris, do you have a clue why 300 stations matter or are you just breezing in to toss off a comment?

Vernon, do you have a clue what "necessary" and "sufficient" mean?

By Chris O'Neill on 06 Sep 2007

Chris, what is your point? Mine is that Gavin at RC says that only 60 stations are needed to do the whole NH, and mine is that 60 stations are not enough to do the lower 48 of the USA. So please, what is your point?

sod, how does what you said change anything? GISS and Hansen 'adjusted' the trends of the non-rural stations to match the rural stations and discussed how that 'adjusted' trend differed from the station's own original trend. And this proves what? What does using non-rural stations which have had their trends 'adjusted' to match the local rural stations add? Why, nothing - because the only station trends being used are the rural ones.

My quote of the Hansen paper was a hint at the existence of stations with little need of adjustment.
Unless you assume stations to need either a strong negative or a strong positive adjustment, the fact that nearly 50% of stations lean each way makes the existence of stations with little/no need of adjustment very likely.

So if Hansen, instead of adjusting stations, had simply dropped those stations needing a big adjustment, and the number of remaining stations had stayed above 300, everything would be fine for you?

Even though the result would obviously be a WORSE assessment of climate status?

Chris, what is your point? Mine is that Gavin at RC says that only 60 stations are needed to do the whole NH, and mine is that 60 stations are not enough to do the lower 48 of the USA. So please, what is your point?

Your question (again) doesn't make any sense.

So how many people do we need to poll to predict the outcome of the next presidential elections? X says 1000 are enough, while I say that 1000 aren't even enough to determine the outcome of Ohio.

With the information given it's IMPOSSIBLE to decide BOTH cases!

It's important how ACCURATE the result is supposed to be, and how much KNOWLEDGE (in the form of ADJUSTMENTS) you are going to put into the raw data.

It's not unreasonable to assume that a handful of stations might suffice in a few decades.
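To put rough numbers on the poll analogy, here is a minimal Python sketch (the 50/50 vote share and the even 50-state split are my assumptions, purely for illustration): the same 1000 respondents pin down the national result but are useless for a single state.

    # Sketch of the polling analogy above (made-up numbers): a national
    # estimate from 1000 respondents is tight, but those same 1000 people
    # spread over 50 states leave only ~20 per state - far too few for Ohio.
    import math

    p = 0.5  # assumed vote share; worst case for sampling error

    def moe(n):
        """95% margin of error for a proportion estimated from n respondents."""
        return 1.96 * math.sqrt(p * (1 - p) / n)

    print(f"national, n=1000: +/- {moe(1000):.1%}")   # about +/- 3.1%
    print(f"one state, n=20:  +/- {moe(20):.1%}")     # about +/- 21.9%

The point carries over directly: a national (or hemispheric) mean needs far fewer samples than resolving every subsample.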

"I do not see anyone addressing my argument here"

You have yet to make a logical argument, so there is nothing to address. It is not logical to infer that a photograph of a site will invalidate data from that site. If you believe Hansen et al's analyses are incorrect, you are free to carry out a re-analysis and submit the results to peer review. Why can't you do that?

richard, it must be nice to be so grounded in the dogma of your faith. I will say that your statement indicates that you are not up to the logical thought needed for this discussion, so I will lay it out for you.

Now my train of thought is to first show that failure to meet site guidance will introduce errors. NOAA shows that failure to meet site guidance will introduce 1-5+ degrees C error. Further, they state that 'Temperature sensor located next to/above an artificial heating source, such as a building, rooftop, parking lot, or concrete surface' are class 5 sites. Class 5 sites inject 5+ degrees of error, and the error is all warming. Using the station census underway at surfacestations.org, it is easy to pick out the stations that are not in compliance, and further to identify the class of the station, hence to know what the sign of the error is going to be.

Now, why can a picture not show that a station is next to/above an artificial heat source?

"Now my train of though is to first show that failure to meet site guidance will introduce errors. "

No, failure to meet site guidance MIGHT introduce errors. You will have to analyze the data from the sites you feel are questionable and demonstrate that they need further adjustment or need to be discarded. Why can't you do that?

You are using the old tobacco "fatal flaw" argument. It didn't work for the tobacco industry, and it won't work for you. You need to do the data analyses and present your case through the peer-review system.

richard, you still cannot confront the truth. NOAA/CRN already did the work, which I am quoting and which I am guessing you did not bother to read. Oh, and it has already been peer reviewed.

Leroy, Michel. 1998a. "Meteorological Measurements Representativity: Nearby Obstacles Influence", 10th Symp on Meteorological Observations & Instrumentation (pp. 233-236), AMS, Phoenix, AZ (Jan 11-16, 1998). Also see Leroy, 1998b. "Climatological Site Classification", at the same conference.

Janis et al (2002) Site Identification, Survey, and Selection FY 02 Research Project for the NOAA Regional Climate Centers (RCC), found at: http://www1.ncdc.noaa.gov/pub/data/uscrn/site_info/CRNFY02SiteSelection…

Lee, it took me a while to figure out what you did not understand. Basically you want to know what the margin of error for the 95 percent confidence interval is. A common convention in science and engineering is to express accuracy and/or precision implicitly by means of significant figures. Here, when not explicitly stated, the margin of error is understood to be one-half the value of the last significant place.

Since Janis et al (2004) Station Density Strategy for Monitoring Long-Term Climatic Change in the Contiguous United States http://ams.allenpress.com/perlserv/?request=get-abstract&doi=10.1175%2F…

A network of 327 stations for the contiguous United States satisfied a combined temperature-trend goal of 0.10°C decade⁻¹ and a precipitation-trend goal of 2.0% of median precipitation per decade.

did not list the error margin, the implied one is .05 degree C. So it takes ~300 stations to measure the climate trend within 0.1 degree C +/- 0.05 degree C. Does that answer what you're asking?

richard, I forgot to point out to you that per NOAA a class 5 station will have error >/= 5 C. There is no 'might' about it: at least 5 degrees C of error is the minimum, it could be more, and it will be a warming error. This is not a 'might', no matter how much you want it to be.

"I forgot to point out to you that per NOAA that a class 5 station will have error >/= 5 C. "

No, they MIGHT have errors. You have to identify the particular site with a 'guidance' issue and demonstrate that there is a problem with the data. What is so hard to understand about that? There is no way around it, YOU have to do a data analysis. Why can't you do that?

So what is your basis for saying that NOAA is wrong? NOAA says that it will have a minimum error of 5 degrees C. Please indicate what source you have that proves this is not correct.

"So what is your basis for saying that NOAA is wrong"

You should go back and actually read what you are citing.

There are hundreds of sites but only a few classes. Any two sites might be classified the same but have different data 'errors'. Let's say you had two class 1 sites. One might have some attributes of class 2 but be classified as 1; the other might not. So they won't have the same data errors. Also note that sites can be mis-classified, even by an experienced investigator (let alone from random photographs). That is why you have to look at the data from individual sites, and do your own analysis. Your analysis will have to show that the data trends significantly depart from that shown by Hansen et al. Now, why can't you do that?

richard, that is so much garbage.

'Let's say you had two class 1 sites. One might have some attributes of class 2 but be classified as 1; the other might not.'

Do you know how lame this sounds? If one site had attributes of a class 2 site, it would be a class 2 site. DUH! It would not be two class 1 sites.

So your whole argument is: I do not have any facts or logic to disprove what NOAA says, but I feel that it takes more than a picture to see 'Temperature sensor located next to/above an artificial heating source, such as a building, rooftop, parking lot, or concrete surface.' Yeah, I can see where it takes an expert to spot the hidden building, rooftop, parking lot, or concrete surface. Pretty hard to spot a class 5 station.

So basically, you have nothing but 'I do not like the fact that you appear to be right, and that goes against my beliefs!'

re 82, vernon. Yes, that answers my question. I asked if you were being clueless, or dishonest. The former, I see. The confidence interval in this case is the range around the stated temp anomaly within which one is 95% confident that the true temp anomaly falls. You have repeatedly stated that one needs 300 stations to get a 95% confidence interval - with no further clarification, despite being repeatedly asked. You have finally, finally noticed that mentioning a confidence interval without stating the actual interval is simply gibberish. The 95% confidence interval Hansen mentions for his analysis, with this level of sampling, is +/- 0.1C. And no, this is not a "margin of error for the 95 percent confidence interval."
---
You are equally full of - well, I'll be generous and call it confusion - on Gavin's '60 stations,' and you show this when you continually refuse to accurately report what he actually said - not to mention your absurd definition of what degrees of freedom actually are. "The number of observations minus the estimated parameters." Indeed.

What Gavin said is that the real-world temp anomaly record for the NH has only 60 degrees of freedom - this because of the spatial correlation of temp anomalies across large areas. And that therefore, in theory, 60 'well sited' stations (ie, each station in exactly the right place, spaced exactly perfectly from each other station, and with no measurement error) would be enough to get the "gross" temp trend. What would be lost is sufficient local sampling to get local trends - what one could in principle (not in practice) get is the gross NH trend.

Note that this does NOT mean that Gavin said that 60 real-world stations are sufficient - he was pointing out the quite small number of 'virtual' stations that need to be represented by sampling (ie, degrees of freedom) to get good hemispheric trends. The fact that one needs more than 60 in the real world, where historical stations are not perfectly placed and perfectly homogeneous over time, does not mean that Gavin was "lying" as you claim. It means that you are too frickin' clueless to understand Gavin's perfectly valid point.
----
And one MORE fricking time, on the 5C error. TEMPERATURE IS NOT THE SAME FUCKING THING AS TEMPERATURE ANOMALY!!!!!!!! It does not matter if a station is 5C "warmer" or might be 5C warmer. Or cooler. The very first thing that is done is that absolute temps are converted to temp anomaly, so that the effect of differing absolute temp is removed, and all that remains is changes from a baseline temp. The METEOROLOGICAL impact of siting in warmer or cooler locations is removed. Only differences from a common baseline period remain. It doesn't matter that paving might make the station read warmer than if it were in a meadow, UNLESS that also alters the change in temps over time. And you have presented NO evidence that paving impacts changes over time. EVERY ONE of the things you have cited has looked at impact on measured temps, not on changes in temp anomaly over time. They are IRRELEVANT to climate analysis.
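A toy Python sketch of the point, with synthetic data (the 0.02 C/yr trend, the +5C bias, and the 1950-1979 baseline are all invented for illustration): a constant siting bias shifts the measured temperature but cancels out of the anomaly exactly.

    # Synthetic example: a constant +5C siting bias changes the measured
    # TEMPERATURE but drops out of the temperature ANOMALY entirely.
    years = list(range(1950, 2001))
    true_temp = [10.0 + 0.02 * (y - 1950) for y in years]  # 0.02 C/yr warming
    biased_temp = [t + 5.0 for t in true_temp]             # parking-lot station

    def baseline(series):
        return sum(series[:30]) / 30                       # 1950-1979 mean

    anom_true = [t - baseline(true_temp) for t in true_temp]
    anom_biased = [t - baseline(biased_temp) for t in biased_temp]

    # Maximum difference between the two anomaly series: 0.0
    print(max(abs(a - b) for a, b in zip(anom_true, anom_biased)))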
----
And vernon, this is not ignoring your arguments. I am, once more, responding DIRECTLY to what you say, showing how and why you are wrong. Claiming that I am ignoring your argument, while repeating exactly the same thing I just demolished, is not a response to what I'm saying.

richard, I forgot to point out to you that per NOAA that a class 5 station will have error >/= 5 C.

Vernon, will you please clarify what sort of error Leroy is talking about?

average error?
maximal error?

and in comparison to what?

here your link again:
http://tinyurl.com/25v7o9

So let us take a look at the Asheville location again. Please take a look at the paper quoted by McIntyre.

But before you do, guess how much effect the presence of an AIRFIELD RUNWAY close to the station has on the daily maximum temperature.

Please guess before you look at Figure 1 on page 2 of the paper!

http://ams.confex.com/ams/pdfpapers/71791.pdf

Lee, actually I failed to understand that you did not understand some of the basics. The default, unless stated otherwise when not explicitly stated, the margin of error is understood to be one-half the value of the last significant place. Sorry for not realizing you did not understand this faster.

Additionally, your contention is that there is no basis for micro climates having the same trends as urban heat islands. That is not supported by the facts. The main cause of the urban heat island is modification of the land surface by urban development; waste heat generated by energy usage is a secondary contributor. Once again, going back to the NOAA, class 5 sites have been 'located next to/above an artificial heating source, such as a building, rooftop, parking lot, or concrete surface.' These modifications of the land surface are reflections of urban development at the micro climate scale.

I am sure that you have something other than your rant to prove this wrong?

As to the 60 degrees of freedom, provide the cite.

sod, not sure what you want me to see and be shocked about. Since you have the url, look it up yourself.

" If one site had attributes of a class 2 site, it would be a class 2 site. DUH! It would not be two class 1 sites."

Please try reading, it might help you. I said 'some' attributes. Site classification means making decisions; not all sites will have all attributes of a given class. The site classifiers will place sites in classes but sites in any one class will not be identical. That is why you have to look at the data from individual sites, and do your own analysis. Your analysis will have to show that the data trends significantly depart from that shown by Hansen et al. Now, why can't you do that?

richard, did you bother to read what how the classifications break down. It is not a, if you get 2 out of 3 your ok. It is you must meet the criteria and you keep moving down the list till you do.

"you must meet the criteria and you keep moving down the list till you do"

No two sites will be the same, regardless of class. That is why you have to look at the data from individual sites, and do your own analysis. Your analysis will have to show that the data trends significantly depart from that shown by Hansen et al. Now, why can't you do that?

"Chris, what is your point."

I already said it:

"No, the citation doesn't mean you need 327 stations. It means 327 stations are sufficient."

So your conclusion from the citation:

"So to get a 0.1 degree C you actually need 327 stations"

is wrong.

By Chris O'Neill on 07 Sep 2007

vernon,

Stop being an idiot. If you can.

A confidence interval has NOTHING WHATSOEVER to do with "one-half the value of the last significant place." +/- 1/2 the value of the last sig fig is simply the range of the possible values of the next digit, if one were to report another sig fig. It expresses possible rounding error - it has NOTHING WHATSOEVER to do with confidence intervals. The confidence interval is a function of the number of observations and the variance of the observations. Stating repeatedly, as you did, that one needs 300 stations to get a 95% confidence interval shows that you either don't understand this basic fact, or that you are incoherent. If one samples only 25 stations, one will still have a 95% confidence interval, but that interval will be wider than if there were 300 stations. And it will be wider even if the temp anomaly is still reported to 0.01C. Hansen reports his temp anomaly values to 0.01C, with a 95% confidence interval of approximately +/- 0.1C.
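A minimal Python sketch of this, with a made-up station spread (only the scaling with n matters): every sample size yields a 95% CI; more stations just shrink it.

    # Any sample size yields a 95% confidence interval; more stations only
    # make it narrower. The station spread (sigma) is hypothetical.
    import math

    sigma = 0.5  # assumed std dev of station anomalies, deg C
    for n in (25, 100, 300):
        half_width = 1.96 * sigma / math.sqrt(n)   # normal-approx 95% CI
        print(f"n={n:3d} stations: anomaly +/- {half_width:.3f} C (95% CI)")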

vernon, I have never said that microsite effects don't matter. They matter both ways - they can lead to both heating and cooling over time. In many cases, effects due to microsite changes can be corrected - adding a paved parking lot next to a station will cause a jump in temp in that year, maintained over the remaining record, and that can be detected and corrected in Hansen's analysis. Most of his analysis is devoted to doing exactly this kind of detection and correction.
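A toy sketch of that kind of step correction, in Python (this is NOT Hansen's actual algorithm; the data are synthetic and, for brevity, the breakpoint year is taken as known rather than detected): differencing against a clean neighbour exposes the jump, and subtracting it restores the shared trend.

    # Toy step-change correction: a station picks up a +0.8C jump at index 30
    # (a "parking lot" appearing); comparing to a neighbour recovers it.
    # Synthetic data; the breakpoint index is assumed known for brevity.
    n, brk = 50, 30
    neighbour = [0.02 * i for i in range(n)]              # clean anomaly series
    station = [v + (0.8 if i >= brk else 0.0) for i, v in enumerate(neighbour)]

    diff = [s - r for s, r in zip(station, neighbour)]
    jump = sum(diff[brk:]) / (n - brk) - sum(diff[:brk]) / brk

    corrected = [s - jump if i >= brk else s for i, s in enumerate(station)]
    print(f"estimated jump: {jump:.2f} C")                # ~0.80
    residual = max(abs(c - r) for c, r in zip(corrected, neighbour))
    print(f"max residual after correction: {residual:.2f} C")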

But this is not what you have been saying - you have NOT been presenting arguments about microsite changes and effects on trends. You have been pointing to papers that show possible siting effects on temperature - not temperature anomaly - either due to gross location or due to local environment. You have been claiming that those show that there is major error in the measured and corrected trend in temp anomaly that Hansen reports. This is simply not true - those issues regarding siting effects on temp are simply irrelevant to trends in temp anomaly. You are arguing that there is an uncorrected warm-biased spurious trend in temp anomaly introduced by changes in microsite (I think - you are so incoherent it's hard to be sure), and you are trying to support that by citing papers showing effects of siting decisions on temperature. Those are simply not relevant to the temp anomaly trend issues. You have not made any supported arguments as to whether there are spurious trends, whether they are uncorrected, whether they are biased toward warming - IOW, you have made NO ARGUMENTS on the issues relevant to whether Hansen's trend analysis of temp anomalies is correct.

The fact is that Hansen applies a consistent correction algorithm to the entire data set, and that during the period when that can be compared to an independent data set, the satellite record, they are nearly identical. The correction algorithm works, and you have presented exactly NO, NONE, NADA, ZIP, ZILCH evidence or argument to even imply that it might not work just as well going back before that period of overlap. You are citing irrelevant work on the effects on meteorological temps of siting decisions, NOT on temp anomaly.

You spent dozens of posts earlier decrying a 5C "UHI offset" that you were claiming that Hansen introduced as his first step, while citing a paper that describes no such thing but does describe what he actually did. You don't know basic stats - you use buzzwords, but without the underlying knowledge, even to the point of confusing rounding error with confidence intervals. You are clueless, dishonest, off target, and frankly, a waste of space here.

"You are clueless.."

But also funny. I mean who could read this:
"The default, unless stated otherwise when not explicitly stated, the margin of error is understood to be one-half the value of the last significant place. Sorry for not realizing you did not understand this faster."

and not laugh?

sod, not sure what you want me to see and be shocked about. Since you have the url, look it up yourself.

The effect of an airport RUNWAY next to a station on daily maximum temperature is ZERO on over 70% of days.

I think that this puts the majority of your "mini-UHI" effects into perspective.

Oh, and no, I can't figure out what sort of error Leroy is talking about.

Please help me to figure it out. You surely know, as you have been quoting his error numbers ">5°C" several times...

Here's the link again:
http://tinyurl.com/25v7o9

pages 10+

sod, I have been quoting Janis et al (2002) Site Identification, Survey, and Selection FY 02 Research Project for the NOAA Regional Climate Centers (RCC), found at: http://www1.ncdc.noaa.gov/pub/data/uscrn/site_info/CRNFY02SiteSelection…
I am sure you think you have a point with this, but what evidence do you have that Janis (2002) is wrong? If you do, when do you plan on doing the study and getting it published/peer reviewed?

Eli, ad hom away but I noticed that you cannot address my argument.

richard, gotta point?

Lee, I am not going to go haring down that rabbit trail. I just want to point out that nothing you're saying in any way disproves my argument. I am saying that to get a measurement with enough accuracy and precision to actually measure warming in the NH takes more than 60 stations. In Janis et al (2004) Station Density Strategy for Monitoring Long-Term Climatic Change in the Contiguous United States http://ams.confex.com/ams/pdfpapers/58599.pdf he shows that getting a resolution of 0.125 degree C, which just happens to be pretty close to the actual delta for the USA, takes 167 stations, which should enable NOAA to reduce climate uncertainty to about 95%. So I will agree you could do all of the NH with one station, but the precision, accuracy, and uncertainty of such a reading would make it nearly useless. The same is true for the 60 stations which Gavin alludes to. So please present some evidence that 60 stations will give meaningful results when it takes 167 just to get a resolution that will show the trend within the USA.

"richard, gotta point?"

Obviously, you don't have one. Your posts make it clear you do not have a clue what you are talking about; you do not know what an error term is, nor a confidence interval, nor how to calculate them.

"The default, unless stated otherwise when not explicitly stated, the margin of error is understood to be one-half the value of the last significant place. Sorry for not realizing you did not understand this faster."

Before embarrassing yourself further with posts like the above quote, I suggest you take some time off and try to learn something.

When you have, you might be able to figure out why you have to look at the data from individual sites, and do your own analysis. Your analysis will have to show that the data trends significantly depart from that shown by Hansen et al. Now, why can't you do that?

richard, all you do is pop up and say I am wrong, you never have facts or citations. Please present something more than your normal ad hom attack to show where I am wrong.

sod, I have been quoting Janis et al (2002) Site Identification, Survey, and Selection FY 02 Research Project for the NOAA Regional Climate Centers (RCC), found at: http://www1.ncdc.noaa.gov/pub/data/uscrn/site_info/CRNFY02SiteSelection… I am sure you think you have a point with this, but what evidence do you have that Janis (2002) is wrong? If you do, when do you plan on doing the study and getting it published/peer reviewed?

I asked you a simple question:
what sort of error is he talking about???

You did not answer that question. My question does not imply that Leroy is wrong. Instead, I simply think that you do NOT know what you are talking about.

Look, there is a difference. If I say:
in the posts of Vernon, error>90%
it could imply a lot of things.

I could be speaking about a maximum error:
there was a post of Vernon with an error>90%.

Or an average error.

Or the errors in a specific number of your posts.

To fully evaluate what "error>5°C" means, you need to know what error he is talking about!

Oh, and looking at your reply to Lee, I think you quoted a good paper:
http://ams.confex.com/ams/pdfpapers/58599.pdf

But from taking only a short look, I get the impression that they are trying to get the temperature right in every CELL of the grid.
That is not the same as getting the US trend temperature right!

You need significant oversampling if you want to make predictions for subsamples.

Again, I took only a quick look and might be wrong. Please check it again!

104, sod,

Yep, the Janis et al paper Vernon cited to me is looking at the minimum density necessary to detect decadal trends with a resolution of <= 0.1C/decade, at EVERY LOCATION IN THE CONTIGUOUS US.
Somehow, I doubt that Vernon realizes this is a different question from the gross trend across the entire contiguous US.

sod, NOAA says that a class 5 site will have error >= 5 degrees C. That is a class 5 station - which part of that is hard to understand? It is not all the class 5 stations together, or all the stations plus class 5 stations, but simply that a class 5 station will have error >= 5 degrees C.

sod, I think that you may want me to be wrong, but look at what the definition of a class 5 station is, what the error associated with a class 5 station is, and tell me why looking at a picture will not show whether a station is class 5 or not.

Now, since the problems identified in class 5 stations are all due to heat sources, 5+ degrees hot would be expected based on what NOAA says.

Actually, in that paper they are trying to get the temperature right for the USA, not each cell. What I read them as saying is that measuring the temperature for the USA to 0.1 degree C with 95 percent confidence takes 233 perfectly placed stations, distributed across the cells.

Now will anyone try to address my issues?

That Gavin saying that 60 stations will measure climate trends for the entire NH is a gross misrepresentation of fact.

That since Hansen (2001) is used by GISSTEMP to adjust all the stations, and only the ~250 lights = 0 stations are used to adjust all other stations for UHI, there is no over sampling. This means that any claims that over-sampling will detect and correct (either fix or reject) errors are not true.

I do not see anyone showing that I am wrong. Lots of name calling, but no evidence, no facts.

re 103, Vernon,

Dude, we HAVE been telling you where you are wrong. You are simply too stupid, or dishonest, to understand that.

Vernon, have you noticed yet how many of your arguments you have simply abandoned after they became embarrassing to you - but without acknowledgment on your part that you were wrong? We have.

"richard, all you do is pop up and say I am wrong, you never have facts or citations. Please present something more than your normal ad hom attack to show where I am wrong."

Please refer to the top post and work your way down. You have been repeatedly shown that you are wrong, misguided and unable to understand the simplest concepts. You still persist in thinking that a photograph is definitive and confuse site classification with site data. You continue to confuse temperature with temperature anomalies.

You have to look at the data from individual sites, and do your own analysis. Your analysis will have to show that the data trends significantly depart from that shown by Hansen et al. Now, why can't you do that?

"sod, I think that you may want me to be wrong but look at what the definition of a class 5 station is, what the error associated with a class 5 station is, and tell me why looking at a picture will not show why a station is class 5 or not?"

That is typical of your gibberish. How many times do you need to be told this? Error is determined by examining and comparing the data. No station will perfectly fit the class definitions: EVERY SITE IS DIFFERENT.

If you can't figure out why a picture taken without regard to any properly devised standard can't be used to assign class, you need to go back to school.

Vernon, dude...

Reread the paper you cited to me. Find all the occurrences of 'per grid cell', 'within a grid cell', and so on. Here is a key quote - but it is not the only one:
"Grid cells characterized by high spatial variability in temporal variability characteristics will need more stations to estimate trends FOR THAT CELL to within the desired tolerances."

The paper is about the required station density to meet the guidelines FOR EVERY GRID CELL.

Are you simply unable to understand plain written English, vernon?

Lee and richard, yes, you say 'you're wrong' a lot; you just do not happen to provide any facts or evidence. richard, if you do not have any peer reviewed work that shows that NOAA and Janis are wrong, then shut up. Your entire argument is that nothing I say matters since I did not do what you wanted, and you want to ignore all the work that NOAA has already done. Lee, you ignore the facts and logic to go racing off on a tangent; I am not going to do that any more. Either address my facts and logic by presenting citations that show I am wrong, or you too can shut up.

All I hear from you alarmists is 'where are your facts!' and when I present them, you ignore them.

Yes, Lee, part of the station density paper is how many stations are needed in a cell to measure the temperature of the cell. Please, do you know another way to measure temperatures?

Lee, of course they are doing it on a cell basis. How else would you do it? Please, I would like to hear that one.

richard, your stupid. That pretty will describes my opinion of you at this point. You are saying that NOAA and Janis et al are wrong because you say so, but cannot present any citations to back up your claim. If you think they are wrong, then you re-do their work and get it published/peer reviewed and until then... .

Have you bothered to read the definition of a class 5 station? What is hard about a picture showing whether a station is 'Class 5 (error >/= 5 C): Temperature sensor located next to/above an artificial heating source, such as a building, rooftop, parking lot, or concrete surface.'

richard, I am pretty sure that I did not claim a picture could be used to categorize every station, but class 5 stations sort of stand out in pictures. So, do you want to explain why a picture will not show that a station is located next to/above an artificial heating source?

Vernon, I'm finished with you. You can't read plain English, you can't respond to a plainly stated point. That quote shows that they are looking at the number of stations per grid cell to meet tolerances FOR THAT GRID CELL. Your response is that they are measuring temps per cell - meaning you haven't got a fricking clue what you just read. Go away - and no, the fact that your abysmal ignorance and stupidity leaves people unwilling to talk with you any more does not mean you won.

Lee, since you have no clue what you're saying, please tell me why Janis and NOAA figuring out how many stations are needed per cell to meet a national requirement invalidates anything, asshat.

Initially, a network of about 250 CRN stations was thought to be sufficient for capturing the climatic signal for the contiguous US. This network density was projected from a study that estimated a network of 182 stations could reasonably reproduce the 1910-1996 trend in annual precipitation computed from the climate division data set (Karl and Knight 1998). The purpose of the present study is to refine the estimate of the spatial density and total number of stations required over the contiguous US.

This is what they say they are doing... now, in that twisted head of yours, do you take this to be anything more than what they say:

A national network meeting a monitoring goal of 0.10°C/decade consists of 233 stations with an average station separation of 242 km. With nearly 100% of all cells requiring fewer than 4 stations to meet this monitoring goal, little spatial variability in network density results

It goes on to show the actual stations per cell based on the climate variation within the cell. So, Lee, what stupid point are you trying to make?

Lee, I am pretty sure I have won when no one presents any facts or cites any studies that prove me wrong. In fact, once Eli drops in to ad hom me while ignoring my facts and logic, I know I have won.

In fact, no one has presented any facts or citations to prove me wrong this time.

It goes on to show the actual stations per cell based on the climate variation within the cell. So, Lee, what stupid point are you trying to make?

He makes the claim that the high number of stations is needed because this network is supposed to be accurate in ALL subsamples.

This is a crucial quote from the paper:

"Grid cells characterized by high spatial variability in temporal variability characteristics will need more stations to estimate trends FOR THAT CELL to within the desired tolerances."

If I want to find the most clueless poster on this page in recent weeks, I'll pick a random sample of about 50 posts.
It would include 2 or 3 of yours. I would easily figure out that you are the guy I was looking for.

If I wanted to find the most clueless poster under each topic of this page in recent weeks, I would need a MUCH BIGGER sample.

Notice how I do NOT need to know the most clueless poster under each topic to figure out who is the most clueless poster on this site!

It goes on to show the actual stations per cell based on the climate variation within the cell. So, Lee, what stupid point are you trying to make?

Sorry, but you don't have the slightest clue what an error is.

This is what Leroy says:
Class 5 (error >/= 5 C): Temperature sensor located next to/above an artificial heating source, such as a building, rooftop, parking lot, or concrete surface.

Now let us look at two stations in Alaska. Station A was built on a grassy surface, fulfilling all criteria for a class 1 station.
Station B is 100 meters away, and was built on a concrete surface.

What will B be classified as?

Now both stations are covered with 50 centimeters of snow, 9 months of the year.
What difference will there be in the temperatures during those 9 months???

"richard, your stupid. That pretty will describes my opinion of you at this point. You are saying that NOAA and Janis et al are wrong because you say so, but cannot present any citations to back up your claim. "

In your following post, you accused someone else of throwing ad homs at you. What a hypocrite. Even your grammar sucks.

I have not said that NOAA or Janis et al are wrong, because they agree with me. Why don't you try actually reading the articles you cite? The class errors you are babbling about do not invalidate the data from any station. You have to examine the data first, then decide what to do with it.

"In fact, no one has presented any facts or citations to prove me wrong this time."

Go to the top of the posts and read down. Your arguments have been demolished multiple times. You have not presented any facts; just false assertions. When your errors are pointed out, you just repeat them again and pretend no one has done so.

Sorry, the first quote in the second post should have been this one:

sod, NOAA says that a class 5 site will have error >= 5 degrees C. That is a class 5 station - which part of that is hard to understand? It is not all the class 5 stations together, or all the stations plus class 5 stations, but simply that a class 5 station will have error >= 5 degrees C.

And to look at that last comment by Vernon:

Lee, I am pretty sure I have won when no one presents any facts or cites any studies that prove me wrong.

The problem is that you do NOT understand the studies you are citing.

This has been shown by Lee and me. Fact.

sod, I only have one question. The paper determines how many stations are needed per cell (along with spacing) in order to get a national network meeting a monitoring goal of 0.10°C/decade. The process was a polynomial regression between MAE (0.1 degree C) and LSR network size (Ns). I fail to see how this has much impact. There are 110 cells in the contiguous USA. How does:

Grid cells characterized by high spatial variability in temporal variability characteristics will need more stations to estimate trends for that cell to within the desired tolerances.

Well, I did find that I made a mistake earlier... the national goal is 0.10°C/decade, which implies +/- 0.005 degree C, where the cells are at 0.1 degree C +/- 0.05 degree C.

So, doesn't this make my point that 60 stations are not enough to accurately measure the contiguous USA, much less the whole NH?

As fun as this is, does anyone want to actually address my arguments, namely:

That Gavin saying that 60 stations will measure climate trends for the entire NH is a gross misrepresentation of fact.

That since Hansen (2001) is used by GISSTEMP to adjust all the stations, and only the ~250 lights = 0 stations are used to adjust all other stations for UHI, there is no over sampling. This means that any claims that over-sampling will detect and correct (either fix or reject) errors are not true.

Please, how about a cite that shows that adjusting all other stations by the ~250 does any more than measure the 250, or better yet, show me a citation that shows only sixty stations are needed for the entire NH.

sod, great work with Alaska, but since we are talking about the contiguous USA, I still have to say:

You are asserting that there are problems with certain weather sites. But you have not presented any data to show that there really are any problems with the data, nor have you presented a peer-reviewed re-analysis of data to support your position.

Once again, if you think your case has merits, you are free to carry out whatever data re-analysis you wish and submit it to peer review. Why can't you do that?

sod, I especially love when vernon refutes himself with his direct quotes, as in 115.

Estimation of Spatial Degrees of Freedom of a Climate Field

Xiaochun Wang (a) and Samuel S. Shen (b)

a. Department of Meteorology, School of Ocean and Earth Science and Technology, University of Hawaii at Manoa, Honolulu, Hawaii
b. Department of Mathematical Sciences, University of Alberta, Edmonton, Alberta, Canada

ABSTRACT

This paper analyzes four methods for estimating the spatial degrees of freedom (dof) of a climate field: the χ² method, the Z method, the S method, and the B method. The results show that the B method provides the most accurate estimate of the dof. The χ² method, S method, and Z method yield underestimates when the number of realizations of the field is not sufficiently large or the field's mean and variance vary with respect to spatial location. The dof of the monthly surface temperature field is studied numerically. The B method shows that the dof of the Northern Hemisphere (NH) has an obvious annual cycle, which is around 60 in the winter months and 90 in the summer months. The dof for the Southern Hemisphere (SH) varies between 35 and 50, with large values during its winter months and small ones during its summer months. The dof of the global temperature field demonstrates a similar annual cycle to that of the NH. The dof estimated from the observational data is smaller than that from the GFDL GCM model output of the surface air temperature. In addition, the model output for the SH shows the opposite phase of the seasonal cycle of the dof: large dof in summer and small ones in winter.

Thank you, Lee, you make my point: "90 in the summer months" is not 60.

90 dof for northern hemisphere at max.

US is ~4% of the area of the northern hemisphere.

4% of 90 dof in NH is ~3.6 dof for the entire US. Let us be generous and call it 4 dof for the contiguous US.

~250 rural stations in the contiguous US.

250 rural stations / 4 dof = 62.5 stations per dof.

---

So, the US is oversampled by about 60 times.

BTW, that 60x oversampling is the minimum - it assumes that the only contribution to sampling is from the rural stations. Recall that many of the urban and sub-urban stations get minimal correction - there IS a contribution to the record from those stations, diluted by the rural correction, but not absent. So the oversampling is in fact larger - likely much larger - than 60x in the contiguous US.
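For anyone who wants to see why spatial correlation collapses the degrees of freedom, here is a minimal Python sketch with synthetic data (the exponential correlation, its length scale, and the moment estimator n_eff = (sum of eigenvalues)² / sum of squared eigenvalues are illustrative choices, not Wang and Shen's B method):

    # Synthetic demo: 100 "stations" on a line with spatially correlated
    # anomalies behave like far fewer independent observations. Uses the
    # moment estimator of effective DoF from the eigenvalues of the
    # correlation matrix - a standard estimator, not the paper's B method.
    import numpy as np

    rng = np.random.default_rng(0)
    n_sites, n_months, length_scale = 100, 600, 20.0

    x = np.linspace(0.0, 100.0, n_sites)                  # station positions
    corr = np.exp(-np.abs(x[:, None] - x[None, :]) / length_scale)
    field = rng.multivariate_normal(np.zeros(n_sites), corr, size=n_months)

    evals = np.linalg.eigvalsh(np.corrcoef(field, rowvar=False))
    n_eff = evals.sum() ** 2 / (evals ** 2).sum()
    print(f"{n_sites} stations, ~{n_eff:.0f} effective degrees of freedom")

With a long correlation length, the effective DoF comes out at a small fraction of the station count - which is exactly the sense in which a hemispheric field with 60-90 DoF is heavily oversampled by thousands of stations.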

Thank you, Lee, you make my point: "90 in the summer months" is not 60.

Lol, but 60 is 60. So 60 stations are enough to do the northern hemisphere in winter.

Just a short time ago, you seemed to be claiming that you need several hundred stations to assess the USA.

Well, it took me a while to work through the paper; it has been a few years for me. Now, having read it and the supporting papers, two major points come to light. First, the underlying assumption made is that the best-fit distribution represents the 'true' data well enough that the effects of noise in the simulated data and in the real data are the same. A second assumption made is that the error distribution of each data point is Gaussian.

The impact of these assumptions is that while they say stations in the paper, they are referring to cells. As a further note, the cell size is quite large, so basically they are saying that if you have a station that can represent a cell, only 60 cells are needed to measure a trend. However, their results are only valid if the individual cells' error distribution is Gaussian. This would only be true if there were no UHI, as Jones and CRU propose, but Hansen and GISS say there is one. In this case, while I am not a fan of the way Hansen applies his UHI offset, and with the GISSTEMP code release I hope to have a better understanding of it, there is a UHI, which is a warming bias.

So my understanding is that using the MCM they proposed, only 60-90 cells +/- 5 cells are needed to determine the temperature trend; they do not say what the precision of the results will be, nor can they by definition know which cells. This would indicate that every cell has to be measured at a high level of precision, that all errors must be Gaussian, which cannot be claimed right now, and that while they discuss 'stations' they are actually talking about cells.

I therefore say that Gavin's claim that 60 'optimally' placed stations could measure the whole NH is not supported by this paper.

Lee, as to the 4 stations for the contiguous USA, it is 4 cells of 4.5 x 7.5 degrees. Please note that you cannot take 4 percent of the whole NH as what the contiguous USA DoF is. That is not a valid assumption, but let it stand. If there were a station that could accurately measure a cell, then it could be 4 stations, but NOAA/CRN says that it takes ~12-24 stations to do a cell that size with any accuracy (depending on the location of the cell). Four cells would be anywhere from 48-96 stations. Since the MCM method is based on random cell selection, all the cells that can be selected must be measured, which indicates that NOAA/CRN's 233 stations would be needed. But even if it were only four cells, the sampling is just at 2x.

Oh, and I almost forgot: since you're using so few observations (cells), you lose the Law of Large Numbers (LLN), so each cell has to have a higher precision. Current stations are accurate to within 0.3 degree C. The NOAA stations-per-cell requirement gets the precision to 0.1 degree C per cell. That is not addressed in the DoF argument either.

and that while they discuss 'stations' they are actually talking about cells.

Sorry, vernon, but you understand neither cells nor stations. It is totally bizarre that you're trying to tell a peer-reviewed paper what it is talking about.

Again:
no single station (or cell) needs to be precise to get the total correct, if you are NOT interested in any subsample!!

sod, did you read the paper? Did you read my post?

The impact of these assumptions is that while they say stations in the paper, they are referring to cells. As a further note, the cell size is quite large, so basically they are saying that if you have a station that can represent a cell, only 60 cells are needed to measure a trend.

What did I get wrong? Please point it out to me since you claim I do not know the difference.

vernon remains as amusing as always.

Remember, folks, this began with vernon's charge that Gavin was lying when he said:
"It has been estimated that the mean anomaly in the Northern hemisphere at the monthly scale only has around 60 degrees of freedom - that is, 60 well-place stations would be sufficient to give a reasonable estimate of the large scale month to month changes."

Vernon was arguing that several hundred stations are necessary just in the US, that the 250 rural stations are not an oversample, and that they are in fact not sufficient for determining a gross temp trend. vernon asked for a study supporting the claim about the DoF.

I supplied such a paper. There are only 60-90 DoF in the northern hemisphere. The northern hemisphere and the US are heavily oversampled - which in context was precisely Gavin's point. There are large numbers for statistics - the idiocy from Vernon about losing the law of large numbers is simply absurd.

And the Wang and Shen paper is not about either stations or grid cells - vernon is simply engaging in another flight of fantasy. It is about Degrees of Freedom in the entire northern hemisphere temperature field.

vernon has now made a fool of himself on the alleged "urban offset" that he argued was improperly applied - but turns out not to even exist and was entirely a figment of his imagination. He has made a fool of himself by conflating potential temperature errors in stations with potential temperature anomaly errors. He has made a fool of himself on the sampling necessary for accurately determining the overall US trend vs that required for accurately determining a trend in every grid cell. He has made a fool of himself by conflating rounding error with confidence intervals. He has made a fool of himself with his definition of degrees of freedom. He has made a fool of himself by confusing grid cells with degrees of freedom.

Personally, I'd say we have ample evidence that vernon is simply a fool.

Lee, so much for a reasonable discussion. Read the paper you cited. They did an MCM on 1/4 of the cells for the NH, with each cell being 4.5x7.5 degrees. The results are that 90 cells +/- 5 cells (that is, 90 +/- 5 DoF) will give reasonable results for the surface temperature of the NH.

So Lee, if you could pop your head out of your anus, this is not about stations. The authors use cells and stations interchangeably, which seems sloppy to me. They say stations but everything they do comes back to the 4.5x7.5 grid.

I also noticed that you must not have understood the paper or my response, since you immediately went into an ad hom instead of pointing out anything I got wrong. Sorry to be you, dude.

Oh, I made a typo, each cell is 5x5 degrees.

...and vernon dives deeper into the shallow end of the pond.

Yes, Wang and Shen use the extant grid cell data to estimate the NH DoF at 60-90. So what?!?! The fact is, there are 60-90 DoF in the NH, and Gavin is correct in pointing out that the NH is greatly oversampled. The methodology that Wang and Shen used to calculate the DoF is irrelevant to this - what matters to Gavin's statement, and to the adequacy of current NH sampling, is the DoF number.

vernon is either being dishonest and attempting to divert from the actual point, or too fricking stupid to realize that the Wang and Shen methodology is irrelevant to the USE of the DoF values they calculate - or both.

Lee, please, the DoF only applies to the 5x5 grid, nothing else. The cells of the grid are not stations, despite what the authors call them.

sod, did you read the paper? Did you read my post?

Lol, no. Actually I saved myself the time. I knew that your argument would fall apart as soon as I opened it.

Now that I have popped it open, indeed it does:

Another example is the question of how many stations are needed to measure the global average annual mean surface temperature. Researchers previously believed that an accurate estimate required a large number of observations. Jones et al. (1986a,b), Hansen and Lebedeff (1987), and Vinnikov et al. (1990) used more than 500 stations. However, researchers gradually realized that the global surface temperature field has a very low dof. For observed seasonal average temperature, the dof are around 40 (Jones et al. 1997), and one estimate for GCM output is 135 (Madden et al. 1993). Jones (1994) showed that the average temperature of the Northern (Southern) Hemisphere estimated with 109 (63) stations was satisfactorily accurate when compared to the results from more than 2000 stations. Shen et al. (1994) showed that the global average annual mean surface temperature can be accurately estimated by using around 60 stations, well distributed on the globe, with an optimal weight for each station.

http://tinyurl.com/ypxv7u

So yes, if every time one of those peer-reviewed papers mentions STATIONS it really meant to say "CELLS", then you are right.

What did I get wrong? Please point it out to me since you claim I do not know the difference.

Everything.

In short:
1. You are not contradicting the 60 stations claim.

As you see in the quote above, they are using DoF and stations in a similar way, as a good station will be sufficient.

2. All your explanations are wrong.

The last time you made claims about station numbers, it turned out that you did not understand that they were OVERSAMPLING cells, because they wanted to get the SUBSAMPLE (cell) right as well.

Now you are claiming that a UHI effect in a 5x5 grid is unavoidable. Please take a look at a world map with such a grid!

http://tinyurl.com/3doqfd

That argument, like most brought up by you, does not make the slightest sense.

3. The claim that a good station (+adjustments) will not provide a good estimate for a certain grid size is plain wrong.

Your claim about stations needed per grid accumulates all the errors you made before: you use a comparison with a work that is interested in getting a SUBSAMPLE right. You add up, instead of noting redundancies. You ignore whatever you don't like.

sod, why don't you try following what the authors said they were doing? They are taking the temp for the cells of the 5x5 grid and doing the MCM on that. That is not the output of the stations; it is the output of the cells in the grid. That is what they say they are doing once you get past the opening paragraphs you were quoting. Lee, you're worse. Let me guess: you're a couple of eco freaks with no background in math or modeling?

sod, vernon clearly does not know what a DoF is in this context, and has no clue what the implications are of a low DoF for the NH temperature field.

Earlier he defined DoF as:
'60 degrees of freedom is the number of observations minus the estimated parameters.'

The man is beyond hope - he refuses to realize just how little he actually understands.

sod, why don't you try following what the authors said they were doing? They are taking the temp for the cells of the 5x5 grid and doing the MCM on that. That is not the output of the stations; it is the output of the cells in the grid.

Most of those calculations would get much more complicated if you used single stations.

And other people did exactly that:

Jones (1994) showed that the average temperature of the Northern (Southern) Hemisphere estimated with 109 (63) stations was satisfactorily accurate when compared to the results from more than 2000 stations.

The way they use DoF and stations in the opening paragraph leaves me with the impression that they assume a 1 to 1 translation could at least be found. That seems to be what your opponent is implying as well.

And yes, I noticed your EXTREMELY selective reply (not an answer, as you NEVER answer anything), again.

My first sentence after the Vernon quote was:

they used the best available data, because they were trying to do a comparison.

It got lost because I had a tag wrong, I guess....