AAPOR alleges Gilbert Burnham violated AAPOR's code

The American Association for Public Opinion Research (AAPOR) has put out a press release alleging that Gilbert Burnham (who is not a member of the AAPOR) violated the AAPOR's code of ethics. What did he do? Their press release states:

Mary E. Losch, chair of AAPOR's Standards Committee, noted that AAPOR's investigation of Burnham began in March 2008, after receiving a complaint from a member. According to Losch, "AAPOR formally requested on more than one occasion from Dr. Burnham some basic information about his survey including, for example, the wording of the questions he used, instructions and explanations that were provided to respondents, and a summary of the outcomes for all households selected as potential participants in the survey. Dr. Burnham provided only partial information and explicitly refused to provide complete information about the basic elements of his research."

That seems to be more than a little misleading. Burnham has released the data from the study. This report goes into a fair bit of detail on how the survey was conducted. And here is the survey instrument which includes the "wording of the questions he used".

The AAPOR press release fails to specifically state what information was not provided. Nor does there seem to be any sort of report available from the AAPOR web site. I've emailed them asking for this information, but so far have received no reply.

Update: Reaction to AAPOR press release.


AAPOR's investigation of Burnham began in March 2008, after receiving a complaint from a member.

A complaint from a member? Sloppy research? False information?

I am holding my breath...

Tim Lambert writes:
And here is the survey instrument which includes the "wording of the questions he used"

Well, that's the "Iraq Mortality Survey Template" posted on the National Journal site. But is it the questionnaire that was actually used? Nobody seems to know, as Burnham and Roberts (so far, to my knowledge) have declined to confirm or deny it. Other researchers, eg Fritz Scheuren, have requested copies of the questionnaire, but have apparently been refused by the Lancet authors.

By Robert Shone (not verified) on 04 Feb 2009 #permalink

Naturally, this is front-paged at RedState who gloss it as:

[T]he author of that "study" [i.e. Burnham] has been officially censured by his professional peers for not meeting either scientific or professional standards in that "work"

Love those scare quotes on "study" and "work". And of course they fail to note that epidemiologist Burnham is not even a member of AAPOR (what does a mortality survey have to do with opinion polls anyway?). They follow up with this gem:

This is getting to be drearily predictable - political propaganda is given a spray-paint-coating of scientific imitation, and then is "marketed" as being science. It's good that someones with better expectations of professional responsibility had the guts to ask for more details, and then reacted appropriately when the basic standards of "science" were clearly being violated.

As an aside, the same situation holds with much (most?) of the so-called "science" associated with "global warming" (or whatever it's being called this week). A great deal of professional censure is required there as well...

Tim writes: "Burnham has released the data from the study." This is untrue for two reasons. First, Burnham has never released any data to Spagat or to any of his co-authors. Second, although Burnham released some of the data to me and other selected researchers, he has not released anywhere near enough for an outsider to judge the quality of the work.

As always, none of the critics are asking for information that would allow one to identify a particular respondent or interviewer. But, among other things, we would like to know which teams conducted which interviews. We don't need the names of anyone involved. But if Team A had results that differed significantly from Team B, then concerns would be raised. Note that Roberts promised to release this data to Fritz Scheuren more than a year ago at the Joint Statistical Meeting in Salt Lake City. Roberts reneged.
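
(As an editorial aside: a minimal sketch of the kind of between-team check being described here. The counts are entirely hypothetical, since no team-level data has been released.)

```python
# Illustrative only: hypothetical team-level counts, not real survey data.
from scipy.stats import chi2_contingency

# rows = survey teams, columns = [violent deaths, non-violent deaths]
counts = [[40, 20],   # Team A (made up)
          [22, 38]]   # Team B (made up)

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
# A very small p-value would flag the kind of between-team heterogeneity
# that critics say they want to be able to test for.
```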

Until Burnham et al have released the data to all critics and have made enough data available to allow others to judge their work, it is incorrect to claim that "Burnham has released the data from the study." At best, Tim could truthfully claim that Burnham has released some of the data to selected outsiders.

And, for those who care, I am not a member of AAPOR and did not initiate the complaint.

Well, here's what they have to say on that:

we feel the time is now right to make the data set available to academic and other scientific groups whom we judge have the technical capacity to objectively analyze the data.

We won't freely distribute the data.

Conditions for the Release of Data from the 2006 Iraq Mortality Study

These data will be released on request to recognized academic institutions or scientific groups with biostatistical and epidemiological analytic capacity.

1. The data will be provided to organizations or groups without publicly stated views that would cause doubt about their objectivity in analyzing the data.

2. The data will remain the property of Johns Hopkins Bloomberg School of Public Health, and will be provided only on condition that the datasets are not shared with others.

What #1 means is "we won't give the information to anyone who might disagree with us."

#2 means "we won't let the people we give it to give it to anyone who might disagree with us."

That's not the behavior of scientific researchers, it's the behavior of political whores masquerading as scientific researchers.

If the data honestly supported their conclusions, they'd be willing to let those who disagree with them see it. Their behavior is almost perfectly aligned with what you would expect from people engaging in fraud.

Greg Q:

> > 1. The data will be provided to organizations or groups without publicly stated views that would cause doubt about their objectivity in analyzing the data. [...]

> What #1 means is "we won't give the information to anyone who might disagree with us."

Are you saying that "non-objectivity" equals "objectivity"?

Tim and the AAPOR appear to agree on this, at least; not releasing the information upon which you base your conclusions is a problem.

By slickdpdx (not verified) on 04 Feb 2009 #permalink

This in a country (the United States) where lying about Iraq and Iraqi casualties is a cottage industry.

They cannot censure him. They can say they don't agree with his methodology. They can refuse him admission if he applies. But they have no standing.

Hence, they're wingnut assholes, regardless of any other merits they may possess.

By Marion Delgado (not verified) on 04 Feb 2009 #permalink

I, too, am concerned about the way this case was handled. If the researcher isn't a member of the association, may not even be familiar with the association, or may think of them as irrelevant and inconsequential, why should he/she release details??? A peer-reviewed journal (?) already found his methods to be sufficiently comprehensive!

Perhaps he chose not to acquiesce to what appeared to be intimidation tactics. Now look, his reputation, possibly his career, is ruined. Maybe his research wasn't sound, maybe it was. This should have gone through "The Lancet." Without knowing all of the details, this seems like a very irresponsible reaction from AAPOR.

By Witch-hunt (not verified) on 04 Feb 2009 #permalink

Cool. Can we get AAPOR to investigate Inhofe's list?

> Perhaps he chose not to acquiesce to what appeared to be intimidation tactics.

Well, we may never know, since the AAPOR hasn't released the exact words they wrote to Burnham, and the exact responses they received. Now, this isn't strictly part of the AAPOR code of ethics, but really, it'll be good for them to release these details for the sake of openness.

I just did a search on the AAPOR site for the following terms:
"epidemiology": zero hits.
"age structure": zero hits.
"mortality": zero hits.
"opinion": 84 hits.

Burnham was not doing a public opinion survey - he was doing an epidemiological survey, which comes with its own different set of ethical constraints. He was not a member of this organization, nor a signatory to its code of ethics - properly so, as he was doing epidemiology - nor subject to its enforcement apparatus. The organization has no authority over Burnham, Burnham refused to cooperate with an absurd investigation by an organization which has no business investigating him, and then he was censured by it.

Witch-hunt, indeed.

I want to know if AAPOR has substantiated the claims that 6 million Jews were killed in the Holocaust. Is that accurate? And, were the methods used to deduce that number scientific? Is it even OK to question those figures? Is it true that in Germany it is illegal to question those numbers? And, if so, does that qualify as having explicitly refused to provide that information for review? Also. Is AAPOR as aggressive and critical in their research of public opinion when it is pro-western? Or is that allowed to slide because it fits their public opinion agenda?

By James Hovland (not verified) on 04 Feb 2009 #permalink

Hey, has anyone noticed that Johnson, Spagat and Gourley have finally got their made-up fudge-factor paper published in the esteemed:

Journal of Peace Research 45:5, 653, 2008 'Bias in Epidemiological Studies of Conflict Mortality'

By Jody Aberdein (not verified) on 05 Feb 2009 #permalink

The 'main street bias' paper in fact received The Journal of Peace Research Article of the Year Award.
http://dissident93.wordpress.com/2008/12/15/journal-of-peace-research-a…

Perhaps its critics at Deltoid could pool their informed, scientific comments ("bogus", "fudged", "pulled out of ass", etc) into a paper which they could get peer-reviewed and published?

Or perhaps they could listen instead to genuine authorities such as Jon Pedersen, who wrote (in an email to me, 4/12/06): "I very much agree with the MSB-team that there is some main stream bias, and that this is certainly an important problem for many surveys - not only the Iraq Lancet one... The MSB people have come up with some intriguing analysis of these issues".

By Robert Shone (not verified) on 05 Feb 2009 #permalink

Of course. And probably shameful.

By Robert Shone (not verified) on 05 Feb 2009 #permalink

Does anybody know whether the published version of the MSB paper was an improvement on the drafts previously discussed here? I'm not going to fork out 20 dollars to find out.

By Kevin Donoghue (not verified) on 05 Feb 2009 #permalink

Journal of Peace Research? Tim L's comment sums it up.

Robert, you should read my comment on the thread above this. I'd like to know why Bush, Blair and co. didn't invest the time and money into doing a survey themselves, given that many pundits estimated that the invasion would lead to massive death totals. Why don't all governments do this? The answer is twofold: first, they don't really care how many die when they are promoting an alternative agenda. Second, if the accurate death toll were known, even if it was "only" 200,000, then this would appear horrific enough. But so long as the actual total remains as clear as mud, the concern can be dismissed. In other words, without 100% unequivocal proof the problem does not exist. I've had to debate all kinds of climate sceptics: those who downplay biodiversity loss, those who dismiss acid rain, climate change etc. And they employ the same strategy as those who defended the US invasion of Iraq: without concrete, iron-clad evidence of a process, it just does not exist. So why would the US and UK governments be stupid enough to fund a survey that eventually shows the death toll in Iraq to be (for argument's sake) 200,000 plus or minus 5,000? Such a total could not be so easily sanitized, so they ignore it, and any studies that suggest totals to be exceptionally high are summarily dismissed.

By Jeff Harvey (not verified) on 05 Feb 2009 #permalink

Tim writes "That's very embarrassing for the Journal of Peace Research."

Well, opinions differ on that. But let's make some progress. Instead of merely asserting that this is "embarrassing," why not participate in a round-table on the topic? Deltoid would be the perfect location for that debate. I (and perhaps some of the authors) would be willing to participate. Tim (and other critics) could write something. The authors (and their supporters) could write a response. And so on. That's the way that science should work. To merely assert that something is "embarrassing," without being willing to participate in a debate on the topic, is not productive.

Here is the editorial board of the JPR. Is there any reason for us to think that they are less able to judge the quality of Spagat et al's work than Tim Lambert is? Perhaps! But only if Tim is willing to discuss the topic in detail.

By David Kane (not verified) on 05 Feb 2009 #permalink

David Kane: Tim (and other critics) could write something. The authors (and their supporters) could write a response.

Tim did write something about the MSB paper - in fact he devoted several threads to it. You responded, as did others, some of whom thought the paper wasn't as bad as Tim made out. AFAIK none of the authors are banned from commenting.

You want a forum for discussion? Here it is.

By Kevin Donoghue (not verified) on 05 Feb 2009 #permalink

Most of the criticism from Tim Lambert and others was directed at one set of parameter values which was presented only as an illustrative example by the MSB authors (who later added an exploration of the parameter space).

In other words the criticism missed the point that the actual bias could be determined only as a result of disclosure by the Lancet authors on basics such as sampling procedures and main streets selected as starting points, etc.

So this brings us back, in a way, to the AAPOR thing. The Lancet authors still haven't disclosed the basic level of information which is obviously necessary to assess how their claim of giving all households an equal chance of selection holds up.

If you're extrapolating from 300 actual violent deaths to 601,000 estimated violent deaths, based on this claimed sample-randomness, then it would seem pretty important that the sampling scheme could be assessed in some way. Currently it can't be, because nobody outside the Lancet team knows what that sampling scheme entailed.
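
(For scale, a rough back-of-envelope sketch, assuming the study's published sample size of roughly 12,800 individuals and an Iraqi population of about 27 million. The published estimate used person-years and cluster weighting rather than this naive ratio, so this is only an order-of-magnitude check.)

```python
# Naive order-of-magnitude check of the extrapolation discussed above.
# Figures are approximate; the published estimate was not computed this way.
violent_deaths_in_sample = 300
people_surveyed = 12_800          # approx. individuals in the 2006 survey
iraq_population = 27_000_000      # approx. mid-2000s population

naive_scale_up = violent_deaths_in_sample / people_surveyed * iraq_population
print(f"{naive_scale_up:,.0f}")   # ~630,000: same order as the 601,000 figure
```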

By Robert Shone (not verified) on 05 Feb 2009 #permalink

Robert Shone [often as anna plurabella] leaps on anything he can to sledge the Lancet studies [and medialens].

mostly from his hideyhole at mediahell using sockpuppets.

he's not to be trusted an inch

By Stillan Darkwater (not verified) on 05 Feb 2009 #permalink

I'm used to David Edwards/Cromwell (editors of Medialens) stalking me (under the pseudonym "Woofles") at mediahell.org, but I hope they'll use their real names on a respectable "science" blog.

By Robert Shone (not verified) on 05 Feb 2009 #permalink

Tim: Wasn't that discussion about a draft of the paper? I believe that the published version is different. So, why not re-open the discussion in a new thread? Again, I am not suggesting more back and forth deep in a comment thread (as fun as that is!) but a proper round table, similar to what Crooked Timber occasionally hosts, would be worthwhile. If you aren't willing to back up claims like "very embarrassing," then you shouldn't make them.

I don't claim to be an expert in mortality surveys, but the authors of the paper and the editors of the journal don't seem to be, either.

How do you know what the editors of the journal are experts in? Do you doubt their academic credentials? Do you deny that the paper was peer-reviewed by qualified reviewers? Again, it is one thing to argue with the paper or the authors, but to impugn the editors when (AFAIK) you have not even read the published article seems a bit much. Or is anyone who disagrees with you automatically suspect?

By David Kane (not verified) on 05 Feb 2009 #permalink

David Kane: I believe that the published version [of the MSB paper] is different.

You believe it's different? Do I take it, then, that you aren't willing to pay the 20 bucks for it either?

I certainly hope it is different for the sake of the journal's reputation, but I seem to recall Michael Spagat claiming that the paper was going to be published without substantial changes.

By Kevin Donoghue (not verified) on 05 Feb 2009 #permalink

bi -- IJI:

No, I'm saying they're lying sacks of poor quality fertilizer.

The people who wrote that study are left-wing zealots. Their ability to judge the "objectivity" of anyone else is nil. Given their complete lack of objectivity, their insistence that they won't give the information to anyone else who "lacks objectivity" is just another fraud.

Real scientists routinely give their data to people who want to prove them wrong, or who want to use that data for their own agendas (i.e. a lab that is competing with yours, on the same research subjects). The only reason for refusing to give data, used in a published paper, to anyone and everyone who wants to examine it is that your "research" was a fraud, and releasing the data will show it. See, for example, Michael Bellesiles.

There are no ifs, ands, maybes, or buts. If you want to do science, then you have to be willing to give the data that underlies your published research to anyone who wants to see it, especially those who want to see it so they can prove you wrong.

The refusal to do so is tantamount to a signed admission of fraud.

What would your response have been to John Lott saying "Tim Lambert disagrees with me, and wants to prove me wrong. Therefore he's not objective, and I will not give him any of the data that underlies my claims"? How would that be even the slightest bit different from what the Johns Hopkins people are doing?

What would your response have been to John Lott saying "Tim Lambert disagrees with me, and wants to prove me wrong. Therefore he's not objective, and I will not give him any of the data that underlies my claims"? How would that be even the slightest bit different from what the Johns Hopkins people are doing?

Did John Lott share some data with Tim? Just curious...

Greg Q- Could you wipe the spittle from your keyboard please?

The refusal to do so is tantamount to a signed admission of fraud.

Ethics guidelines established for such research only exist to encourage fraud, in other words.

Tim writes: "A reader has kindly sent me a copy of the published paper. My criticism stands."

You realize, of course, that much of your criticism can no longer stand precisely because we know a lot more about the actual sampling now than we did when you wrote your criticism. (Or, rather, we now know for a fact that much of what the Lancet said they did, they did not, in fact, do.) Tim wrote then:

n, the size of the unsampled population over the size of the sampled population. The Lancet authors say that this number is 0, but Johnson et al speculate that it might be 10. This is utterly ridiculous. They expect us to believe that Riyadh Lafta, while trying to make sure that all households could be sampled, came up with a scheme that excluded 91% of households and was so incompetent that he didn't notice how completely hopeless the scheme was. To support their n=10 speculation they show that if you pick a very small number of main streets you can get n=10, but no one trying to sample from all households would pick such a small set. If you use n=0.5 (saying that they missed a huge chunk of Iraq) and use their other three numbers, you get a bias of just 30%.

There are so many problems with this criticism that it is hard to know where to start. Do you really stand by it? We now know that many of the streets were not included in the sampling frame. Do you deny this?
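
(Editorial aside: to make the dispute over n concrete, here is a toy two-zone calculation. It is a simplification of my own, not the four-parameter formula in the Johnson et al. paper, and the relative-risk value is purely illustrative.)

```python
# Toy two-zone model (a simplification, not the published MSB formula).
# n = unsampled population / sampled population
# r = violent death rate in the sampled zone relative to the unsampled zone
def bias_factor(n, r=3.0):
    """Ratio of the rate seen in the sample to the true population rate."""
    true_rate = (r + n) / (1 + n)   # population-weighted average (unsampled rate = 1)
    return r / true_rate            # sampled zone's rate over the true rate

for n in (0.0, 0.5, 10.0):
    print(f"n = {n:>4}: bias factor ~ {bias_factor(n):.2f}")
# n = 0 gives no bias at all, and the factor only becomes large when n is
# large, which is the crux of the disagreement quoted above.
```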

I've said it before and I'll say it again. Since David Kane keeps asking the same question there's no reason why he shouldn't get the same answer. Really David, if you now claim to know "a lot more about the actual sampling" can you tell us just what it is you know that you didn't know the last time you brought this up, or the time before that? That's not a rhetorical question - I haven't heard anything new about that since Burnham gave his talk at MIT (which you seemed quite happy with at the time). Did I miss some revelation?

By Kevin Donoghue (not verified) on 05 Feb 2009 #permalink

Kevin: There has been a big change in our knowledge of the sampling between Tim's post of December 1, 2006 and today. Perhaps the simplest way to summarize it is for you to tell us what you think the sampling was and we can tell you why you're wrong. The short version is that we (and I honestly think this includes Burnham) don't really know the sampling plan.

Tim: I am agnostic on their n = 10 number. Given what we now know (but did not know then), it is plausible. But your n = 0 is almost certainly wrong. Correct? Anyway, the most productive way forward is to start a new thread on the MSB paper. Feel free to just repeat your critique. Or, if you want me to go first, I will send you something to quote. That is the way you make progress in science.

Kevin,

I just checked those links. Any other readers who do so will be confused as to your meaning. So, let's simplify! Here is the exact question I asked last March:

Were all streets included in the sample (including back alleys) or just streets which intersected main streets?

Well? You didn't choose to answer this question a year ago. Would you like to answer it now? And, if you can, please provide a reason for your belief: something from the Lancet paper or the supplementary materials or the authors' public statements.

I don't think you can. And it's not your fault! My honest belief is that even Gilbert Burnham (who I think is a good guy in a tight spot) does not know. He knows what he told his Iraqi colleagues to do. But he has all sorts of reasons for thinking that they didn't actually do that. So, he is in a bind.

By the time I was finishing J School classes, we were heavily intermingled with advertising, marketing and PR students. And the McCardles are the product our system wants to turn out.

Just as tasty and filling as the real thing. People's choice!

By Marion Delgado (not verified) on 05 Feb 2009 #permalink

Of course I did answer your question, albeit in a different thread.

So, once again, will you answer my question? What is this new information you have obtained which makes it necessary for Tim to open a new thread on the MSB study? You are saying nothing now that you weren't saying in that thread and in many another before it.

You don't seem to notice that you contradict yourself. You say (1) that Burnham's team won't answer questions that nobody else can answer but (2) we have new information about what they did so we need to revisit the topic. Does not compute.

Contrary to your claim (no. 39), science does not progress by running around in circles.

By Kevin Donoghue (not verified) on 05 Feb 2009 #permalink

Kevin writes:

So, once again, will you answer my question? What is this new information you have obtained which makes it necessary for Tim to open a new thread on the MSB study? You are saying nothing now that you weren't saying in that thread and in many another before it.

A fair question! What we know now that we did not know in March 2008 is that the Lancet authors are no longer standing by the explanations that they gave before people started looking more closely at their work.

Consider this example from April 16, 2008. Summary: Burnham and Roberts made a bunch of false claims about their data in a letter to the National Journal. They published those claims on their web page. I (and others) pointed out that those claims were false. They then deleted the letter and refused to apologize, pretending as if the entire incident never happened. See the link for full details.

Why is this important? Because it means that we can no longer rely on Burnham's MIT presentation! We now know for a fact that some of the things that Burnham believed (in all honesty) to be true pre-April 2008 are not, in fact, true. Even worse, these untrue things are not admitted to. They are washed down the memory hole.

And, even if you ignore this problem for a moment, it is still the case that science (or at least I!) move slowly. Although it is true that my initial impression of Burnham's MIT talk was favorable, it was only in June 2008 that I was able to create a transcript of the talk and look closely at the slides. Although this should have been clearer to me before, it became clear then that his explanation of the sampling scheme was gibberish. To quote myself:

Pages 21-24 highlight a different version of the sampling plan than is described in the paper. Burnham claims that they did not restrict the sample to streets that crossed their main streets. Instead, they made a list of "all the residential streets that either crossed it or were in that immediate area." This is just gibberish.

First, if this was what they actually did, why didn't they describe it that way in the article? Second, given the time constraints, there was no way that the teams had enough time to list all such side streets. Third, even if the interviewers did do it this way, the problem of Main Street Bias would still exist, except it would be more of a Center Of Town Bias. Some side streets are in the "immediate area" of just one main street (or often in the area of none) and other side streets (especially those toward the center of a town or neighborhood) are near more than one. The latter are much more likely to be included in the sample.

Again, maybe these problems should have been obvious to me before, but they only became obvious in June 2008. Apologies for the delay.

Is it really your claim that the Iraqi survey teams had time to drive into a city they had never been to, create a listing (without maps!) of every main street, cross street and back alley, and then randomly select among them?
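
(Editorial aside: a toy numerical illustration of the "near more than one main street" point quoted above. The one-dimensional street layout is invented purely to show the mechanism, not to model any real sampling scheme.)

```python
# Invented street layout, only to illustrate the mechanism described above.
main_streets = [20, 50, 80]   # positions of three "main streets"
side_streets = range(100)     # positions of candidate side streets
reach = 20                    # a side street this close counts as "in the immediate area"

def inclusion_prob(s):
    """P(side street s is chosen): pick a main street uniformly at random,
    then pick a side street uniformly from that main street's list."""
    p = 0.0
    for m in main_streets:
        nearby = [t for t in side_streets if abs(t - m) <= reach]
        if s in nearby:
            p += (1 / len(main_streets)) * (1 / len(nearby))
    return p

print(f"edge street (pos 5):     {inclusion_prob(5):.4f}")
print(f"central street (pos 35): {inclusion_prob(35):.4f}")
# The central street sits in two main streets' lists, so its chance of
# selection is roughly double that of the street near the edge of town.
```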

see david, you are using a typical denialist tactic of a huge list of completely random and independent lines of attack against the lancet papers. you can neither give reason nor evidence of fraud or big errors in the study.

the main street bias attack is one without any substance. you can easily check this for yourself:

just walk down a huge road (why not choose a shopping lane?) and ask every person you meet whether they live in a road intersecting this one or not.

the yes/no ratio you get is the "mainstreet bias" ratio for the WORST CASE scenario of polling behaviour by the Lancet team.

i am looking forward to read your numbers!

David Kane: ...the Lancet authors are no longer standing by the explanations that they gave before people started looking more closely at their work.

So Burnham no longer stands by the description he gave at MIT? You haven't produced any evidence of that at all.

Again, maybe these problems should have been obvious to me before, but they only became obvious in June 2008.

As will be apparent from the thread I previously linked to, you started kicking up about this no later than 4 March 2008. You said then that you "became convinced that this was a real issue" some time previous to that. If you want to go on nitpicking Burnham's statements for inconsistencies it might be best to get your own story straight. I'm not suggesting that because you give conflicting descriptions of your researches nothing you say about them can be trusted. But that's the kind of deduction you go in for and it does you no credit.

Anyway, at some point you made the startling discovery that the procedure described by Burnham doesn't give all households in a district an equal chance of being selected. Now if that was news to Tim Lambert it might make sense for him to open a new thread to discuss the matter. But he probably saw that difficulty the moment he first set eyes on the relevant paragraph of the Lancet paper.

Is it really your claim that the Iraqi survey teams had time to drive into a city they had never been to, create a listing (without maps!) of every main street, cross street and back alley, and then randomly select among them?

I don't know where you got your information about which Iraqi cities a group of Iraqis (whom you haven't met) have been to, or what maps they carry in their cars. I'm quite sure Gilbert Burnham never claimed that they created listings including every back alley.

By Kevin Donoghue (not verified) on 06 Feb 2009 #permalink

Kevin writes:

So Burnham no longer stands by the description he gave at MIT? You haven't produced any evidence of that at all.

You don't read my blog closely enough! I first blogged about Burnham's Feb 2007 MIT presentation in March 2007 but didn't get around to creating and posting a transcript until June 2008. Apologies for the delay.

Now, if that were the last word from the Lancet authors as to how the survey was conducted, you might have a point. But it wasn't. Instead, they posted a Q&A about the survey in early 2008. And that contradicts Burnham's presentation. In the Q&A, they claim that:

Sampling in the 2006 study was designed to give all households in Iraq an equal chance of being included.

That might be true, but it is inconsistent with Burnham's presentation.

But, if that were the Lancet authors' last (and final) statement, we might still make progress. But it isn't! Instead, they took that statement down and replaced it with nothing.

It is impossible to know what the Lancet authors claim the sampling procedure to be. If you have a link, provide it.

Kevin goes on:

Anyway, at some point you made the startling discovery that the procedure described by Burnham doesnât give all households in a district an equal chance of being selected. Now if that was news to Tim Lambert it might make sense for him to open a new thread to discuss the matter. But he probably saw that difficulty the moment he first set eyes on the relevant paragraph of the Lancet paper.

If you feel like diving into the intellectual history about why it takes me so long to figure things out, I am happy to go along for the ride. In the meantime, you are exactly correct. It was/is obvious that there is no way for every household in Iraq to have an equal chance of being sampled. Then why were the Lancet authors claiming the opposite as late as spring 2008?

Best part about this debate? The "alleges" in the title of this post. Is it really a matter of factual dispute that Burnham has violated AAPOR standards? Now, you may argue that Burnham has no obligation to follow these standards, you may argue that the standards are stupid, you may argue that the whole controversy has been spawned by the evil neocon conspiracy, but there is no way to deny that Burnham has violated them. Let me walk Tim through this slowly.

Here are the AAPOR standards.

2. The exact wording of questions asked, including the text of any preceding instruction or explanation to the interviewer or respondents that might reasonably be expected to affect the response.

Burnham et al have not released the exact wording of the questions, much less the interviewer scripts. Ergo, they have violated the standards. There is nothing alleged about it.

Don't believe me? Here is a quote from Mary Losch, AAPOR standards chair. AAPOR

requested the survey instrument (including consent information) and it was not provided. The template did not appear to be much beyond an outline and certainly was not the instrument in its entirety.

And there you have it. Tim ought to strike out the "alleges" from the post title.