Revenge of the pollsters

The pollsters in question being Bray and von Storch, who get ratty on Prometheus about Gavin “too sexy for my model” Schmidt’s RC piece dissing their latest survey.

Slightly confusingly, although the piece is signed by B+vS, it says "I am a sociologist and Hans von Storch is a climate scientist… I attend first to the blog posting…", which suggests to me that Bray wrote it. I'm going to go with the asserted attribution for the moment. Anyway… oh, before I head off, there is also "First a thanks to those… who contributed favorable comments on the RealClimate blog". Yup, that's science for you: only favourable comments are thanked; those who pointed out valid flaws (but how could they? There were none…) are not thanked.

But first some background; you can find more on wiki. There are three surveys, in 1996, 2003 and 2008. As far as I can tell, the 1996 one is relatively uncontroversial, and being done on bits of paper it was hard to bias. The 2003 one was strongly controversial, in part because it was done electronically with no means whatsoever of knowing who the replies came from. The 2008 one is ongoing (or is it done? The posting has some previews); the RC criticisms of the way the questions are worded apply to the 2003 survey too.

B+vS defend the 2003 sample with "The link and password to the survey was distributed on a number of on line lists (CimList [sic] for example) to at least limit the responses to members of those lists and not open to the general public". This is a curious thing to say, and calls their honesty into question. Failing to acknowledge that the link and password were distributed (not by B+vS) on a septic mailing list is not open of them (note that the loss of the passwords isn't in dispute; Bray even discusses it). Asserting that their distribution method limited the responses is just wrong.

Even curiouser, they continue "Negative, preemptive comments were almost immediately online (see Useless Online Survey of Climate Scientists)". Weeeellll… you can see how B+vS may have been stung by the crit; but a response in May 2005 to a survey done in 2003 is hardly "almost immediately" (though it's hard to track the timeline; this asks you to cite it as "2003", but it was rejected for publication in Science in 2004).

No matter. The point we're arguing now is the wording of the 2008 survey. Part of the problem is that the sort of questions you're interested in as a sociologist aren't the same as those a climatologist would ask. B+vS are pretty snarky about RC (they manage to say "In a hurried attempt to perhaps discredit the survey…"). Unfortunately B+vS don't really address the RC post in any detail; I assume they cherry-picked the two points they could best respond to, but even those responses are unimpressive:

B+vS sez "The question quite explicitly asks how well can a current model estimate the temperature 10 years into the future – isn't that what models are intended to do?" This is very strange. No, it's not what the models are intended to do (at least, not the standard models that went into IPCC AR4; we're not talking about Smith et al here; or are we? the question is vague). von S knows that full well, so I think this is more evidence that B is writing all this alone. Similarly, in response to Q52, B+vS have largely ducked what Gavin said.

After that they start taking on the commenters at RC, which is odd in itself; they don't seem to have realised that the authors of blogs aren't responsible for the comments (except in well regulated fora like this one, of course!). But that does elicit a comparison of some 2003 results with the preview of 2008. These are apparently "similar". Yet in 2003, about 30% were under halfway on cl-ch-is-anthro; in 2008, only about 10% are (this is at least in part because the 2003 question was particularly badly phrased).

Conclusion: B+vS are well p*ss*d off and they allow it to show, to the detriment of their dignity.

[Tim Lambert comments]

Comments

  1. #1 Eli Rabett
    2008/10/14

I particularly liked the part where you accept responsibility for the comments on your blog. That being the case I was very disappointed in the BS survey, mostly because whoever wrote the questions left them so open-ended that the answers are worthless.

    [Yep, I'm happy to take responsibility for that comment :-) -W]

  2. #2 Gavin
    2008/10/14

You don’t want to take that too far though. Why, you might even end up being responsible for my comments, and that will get you into all sorts of trouble…

  3. #3 Simon D
    2008/10/15

    Questions aside, the study has a potentially huge selection bias. Only 18% of the people responded to the survey. The authors claim a low response rate is “normal” for online surveys. Fine. The real issue, however, is whether the low response rate in this case could confer a bias. The fact that all online surveys have a low response rate does not prove this survey population is representative of the scientific community.

    In order to properly evaluate whether the poll results reflect the views of the climate science community, the reader needs data contrasting the survey sample population with the greater population. Otherwise, we can’t evaluate whether there’s been a selection bias.

    For example, this line in the Prometheus post is particularly worrying: “some respondents took the courtesy to respond saying they were not climate scientists and were hence discounted from the possible sample”. I wouldn’t be proud of that fact. It suggests poor filtering: there could be many other respondents who were not climate scientists but DID fill out the survey.

    [Ah, I thought "some respondents took the courtesy to respond saying they were not climate scientists and were hence discounted from the possible sample" was a snark against Oreskes -W]
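Simon D's worst-case point can be made concrete with a little arithmetic. A minimal sketch in Python, assuming a simple agree/disagree question: the 18% response rate is from the post, but the 90% agreement figure below is purely hypothetical, for illustration.

```python
def nonresponse_bounds(p_obs, response_rate):
    """Worst-case bounds on the population proportion agreeing, given the
    proportion p_obs observed among respondents and the survey's response
    rate. Without data on the non-respondents, they could all disagree
    (lower bound) or all agree (upper bound)."""
    lo = p_obs * response_rate
    hi = lo + (1.0 - response_rate)
    return lo, hi

# 18% response rate (from the post); 90% agreement among respondents is
# a made-up figure for illustration
lo, hi = nonresponse_bounds(0.90, 0.18)
print(lo, hi)  # the true proportion could be anywhere from ~0.16 to ~0.98
```

With an 82% non-response, the observed answers pin down almost nothing unless you can show the non-respondents resemble the respondents, which is exactly the missing data Simon D asks for.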

  4. #4 Steve Bloom
    2008/10/15

    If Bray is indeed a sociologist he has absolutely no excuse on the survey procedure issues. That’s bread and butter for those folks.

  5. #5 Fergus
    2008/10/16

    There’s no excuse for a poor sample. I’ve got a list of several thousand people who have either published in CS related journals or presented CS papers. And quite a lot of their email addresses.

It is hard, though, to get them to respond. No problem with that; MR surveys have low response rates; this can be compensated for by the quality of the sample, and there is a standard methodology for calculating error relative to the number of respondents.

    But I am struggling to justify to myself the effort and time required to set up, design and analyse a good quality survey on this question, now.

    I suspect that we need to recognise that the important questions with regard to scientific opinion on AGW have been addressed and answered sufficiently. There are more pressing questions to ask scientists’ opinions on. My suggestion would be geo-engineering. That would get a few interesting responses.

    [Hmm yes, I think you may have hit the major point there: just what is the point of all this surveying anyway? -W]
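The "standard methodology" Fergus mentions is presumably the usual sampling error for a proportion. A minimal sketch, assuming simple random sampling; the respondent and population counts below are made-up round numbers, not figures from the survey.

```python
import math

def margin_of_error(p, n, N=None, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated from
    n respondents, assuming simple random sampling. If the size N of the
    sampled population is known, apply the finite population correction.
    Note this quantifies sampling noise only; it says nothing about
    selection bias from non-response."""
    se = math.sqrt(p * (1.0 - p) / n)
    if N is not None:
        se *= math.sqrt((N - n) / (N - 1.0))
    return z * se

# hypothetical round numbers: 30% agreement among 500 respondents
# drawn from a list of 3000 names
print(margin_of_error(0.30, 500, 3000))  # roughly +/- 0.037
```

The catch, as the earlier comments note, is that this error bar is only meaningful if the respondents are a random draw from the list; it cannot rescue a self-selected sample.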

  6. #6 Joseph O'Sullivan
    2008/10/17

    [Hmm yes, I think you may have hit the major point there: just what is the point of all this surveying anyway? -W]

All this surveying reminds me of the cartoon about having to stay up late to comment on a blog because someone said something wrong on the internet.

It also gives RPjr an excuse to argue for argument's sake, which is 90+/-5% of what he does on the internet. It's no wonder that Bray wrote his post on Prometheus.

  7. #7 Raymond Arritt
    2008/10/17

    Further to Fergus (hey, I like the sound of that…), it would be interesting to follow up with the non-respondents and ask them why they didn’t respond. I made a deliberate decision not to respond to the Bray and von Storch survey because I thought many of the questions were ill posed. Other people would have declined for other reasons — skepticism that the results could be misused, simply couldn’t be bothered, or whatever. (I mentioned this to Gavin S. and he noted that one would then need to follow up to the non-respondents to the query on non-response, etc…)

Something I’d really like to see on these surveys is questions that help gauge whether the respondent actually understands the facts he’s being asked about. For example, if a survey asks whether someone thinks the IPCC is too conservative/sensational, there could also be questions like “According to the IPCC, the likely range of sea level rise expected over the next century is about (a) 1 mm to 10 cm; (b) 10 cm to 1 m; (c) 1 to 10 m; (d) 10-50 m.” We might find a strong correlation between those who thought the IPCC was overly gloom-and-doom and those who answered (c) or (d). Or we might not, but it would be interesting to see.
