The Australian's War on Science V

The Australian doesn't just make war on climate science; it doesn't like epidemiology either, printing Anjana Ahuja's hatchet job on the Lancet study.

Greta at Radio Open Source has posted a response from Les Roberts:

The two main criticisms which were in both the Nature article and The Times article are completely without merit. They said there wasn't enough time to have done the interviews. We had eight interviewers working ten hour days for 49 days, they had two hours in the field to ask each household five questions. They had time.

The other criticism was that our people stayed close to the main streets of towns to conduct their surveys. They say that bombs disproportionately go off near the main streets - the car bombs, the IEDs. But the vast majority of these deaths are Iraqis shooting Iraqis, or from coalition forces. I'd have to check the figures, but I think less than 15 percent of deaths are from car bombs and IEDs.


I believe that it was settled on the previous thread that the proper appellation for anyone who still utilizes either of the above criticisms of the Lancet study is "Ass-plucker".

Definition: Ass-plucker: "one who plucks parameters and other pseudo-scientific gibberish out of his/her ass and uses it as evidence of something other than last night's meal".

One thing mentioned in the piece there is: "In January, the Iraqi government added its own count -- 50,000 civilians dead since 2003."

This seems to come from a "recent op-ed" by Roberts in which he claims that: "The government in Iraq claimed last month (January 2007) that since the 2003 invasion between 40,000 and 50,000 violent deaths have occurred"

Can you direct me to the source for this? I'm not sure exactly where this one comes from, and I've spent too much time in the past on wild goose chases tracking down the origin of various factoids from Roberts.

Iraqi officials come out with that kind of tripe quite frequently. After Burnham et al was published Reuters reported one as putting "the total Iraqi death toll since the war started at 40,000" and it isn't unlikely that they were still plugging that line in January for the benefit of anyone who is fool enough to take them seriously. The US State Department has a web page pushing IBC figures in the same region.

Josh, if playing silly games of Gotcha with Les Roberts is taking up too much of your time, there's a simple solution: do something more constructive.

By Kevin Donoghue (not verified) on 07 Mar 2007 #permalink

"it isn't unlikely that..." there might be a source for Roberts' claim somewhere.

Thanks for that info. Now on to more constructive things.

Roberts' reply, or at least their quote of him, is ambiguous and seemingly more rhetorical than focused on clearing up misconceptions.

"We had eight interviewers working ten hour days for 49 days, they had two hours in the field to ask each household five questions. They had time."

First, Lancet 2006 had eight interviewers working in two-person teams [one male, one female], according to Nature's interview with one of the Iraqi interviewers, and according to Roberts' reply when Nature questioned him about the apparent contradiction between one- and two-person teams.

So they only had four teams heading into houses at a time asking questions in 2006, not eight, according to the Nature article's quotes of both Roberts and his Iraqi interviewer. Yes, there were eight interviewers, but they didn't interview any more quickly than four would, except, I assume, for setting up whatever interview material a little faster than one person could.

Then there are at least two interpretations of the timeframe in his somewhat ambiguous sentence, plus a third figure from Burnham.

1) The eight worked independently for 10 hours and had 2 hours per home to ask five questions. This interpretation is incompatible with the Nature article and with some other comments of Roberts' about the shorter length of his 2006 study compared with the UNDP ILCS at 80-odd minutes per household; we can discount it unless he is up for some major revisions of past statements.

2) Four teams worked ten-hour days, of which 2 hours were devoted to asking questions in the field. 2 hours x 4 teams x 49 days = 392 hours [23,520 minutes] of total interviews for the roughly 2000 homes in the sample [some were dropped from the final analysis for being in the wrong area]. 23,520 / 2000 = 11.76 minutes, on average, spent interviewing each household. This interpretation is inconsistent with Hicks' critique: once you add getting in, obtaining informed consent, explaining the process, setting up, cleaning up, saying goodbye, etc., you're hitting about 15 minutes per non-problematic household, and potentially much longer for households with chatty interviewees or where there has been a traumatic death. It is also a bit less than the 12 to 14 minutes Roberts asserted the 2004 survey took, but not by much.

3) Burnham says about 20 minutes of interviewing per household in 2006. 20 minutes x 2000 houses = 666 and 2/3 hours of interviewing.

Burnham's figure is almost twice Roberts' and allots roughly 275 additional hours' worth of interviews to the teams over those 49 days. Why are their public statements of timeframes for interviews in the same study so inconsistent?
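The arithmetic behind these two conflicting estimates is easy to check directly. A minimal sketch, using only the figures quoted in the statements above (teams, days, hours in the field, and sample size are all taken from those statements, not from the study itself):

```python
# Back-of-envelope check of the two conflicting interview-time figures.
TEAMS = 4           # eight interviewers working in two-person teams
DAYS = 49           # length of the 2006 fieldwork
FIELD_HOURS = 2     # hours per team per day interviewing (Roberts' figure)
HOUSEHOLDS = 2000   # approximate sample size (some clusters were dropped)

# Interpretation 2: Roberts' figures imply this many minutes per household.
roberts_total_min = TEAMS * DAYS * FIELD_HOURS * 60
roberts_per_house = roberts_total_min / HOUSEHOLDS
print(f"Roberts: {roberts_total_min} min total, "
      f"{roberts_per_house:.2f} min per household")

# Burnham's stated ~20 minutes per household implies this total.
burnham_per_house = 20
burnham_total_hours = burnham_per_house * HOUSEHOLDS / 60
print(f"Burnham: {burnham_total_hours:.1f} hours of interviewing in total")

# The gap between the two public statements, in hours over the 49 days.
gap_hours = burnham_total_hours - roberts_total_min / 60
print(f"Gap: about {gap_hours:.0f} extra hours")
```

Running this gives 11.76 minutes per household under Roberts' figures versus 666.7 total hours under Burnham's, a gap of roughly 275 hours.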

Then there's also this inconsistency asserted by Nature:

"Roberts and Gilbert Burnham, also at Johns Hopkins, say local people were asked to identify pockets of homes away from the centre; the Iraqi interviewer says the team never worked with locals on this issue."

Roberts then says they only asked five questions. He makes no mention of these questions being multi-part, of looking for death certificates, of cross-checking facts in the case of apparent conflict, of obtaining informed consent, of asking the questions with sensitivity, or of many other things mentioned in the actual Lancet study. Again, his reply seems more focused on making a rhetorical point than on clearing up confusion. I'd be curious to see the actual survey the interviewers were working from, to see how closely the claim of just five questions coheres with their stated research methodology and with the actual survey instrument.

Hey Tim, since he apparently likes your "close reading" of this debate, maybe you can get him to reply about:

1) Why don't his figures match Burnham's?

2) Why did he respond to Nature's follow up question on how 4 teams could have met the timeframe in 2006 with a reply about 2004?

3) How was having effectively 6 interviewers in 2004 supposed to rebut a query about the sufficiency of 4 teams in 2006?

4) Why did they switch from 1 interviewer per house in 2004 to 2 in 2006?

5) Why include a male and female interviewer on each team in 2004 and then split them up? Why include one of each in 2006 and not split them?

6) How was the survey worded and how was further information elicited from respondents?

7) What training were the interviewers given in how to be tactful, sensitive, how to obtain informed consent?

8) What's his estimate of how long getting informed consent should take?

9) What's his estimate on average interview times for households with deaths vs. households with none in the 2006 study?

10) Why didn't the Lancet study cite a min/max range of interview length or a mean interview length? Was this recorded by the interviewers?

11) Why was Roberts' and Burnham's claim about the interviewers asking locals where homes away from town center could be found contradicted by an interviewer?

While this wouldn't settle issues like 'main street bias' [the data should do that] or the possibility of bias in the interviewers themselves [baseless without evidence] it certainly would explain an awful lot of methodological inconsistencies and lacunae that make the study look iffy as is. As it stands, it sounds a bit like Burnham and Roberts don't know what the researchers in Iraq actually did, where they did it or how long it took them to do it.

I was glad to see that the radioopensource site carrying this Roberts piece, which tells us to forget all these "old arguments" like bias in the sampling, also gets to issues that matter elsewhere, such as this piece worrying over whether the 'shuffle' of an iPod is *truly random*:
http://www.radioopensource.org/the-ghost-in-your-machine/

Kevin, I have just one question for you.

1. Why do you keep asking questions that have been answered?

Oh, and from my post on the Nature story:

>>"The US authors subsequently said that each team split into two pairs, a workload that is doable," says Paul Spiegel, an epidemiologist at the United Nations High Commission for Refugees in Geneva, who carried out similar surveys in Kosovo and Ethiopia.

>I don't think that the October paper implied that they had four people at each interview, but you'd hope that this finally puts the matter to rest.

Incidentally, the ILCS interviews averaged 82 minutes. There were about 150 questions about the household and 100 questions about each individual in the household. Why haven't Hicks and co accused them of fraud?
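The per-question pace those ILCS figures imply can be sketched in the same back-of-envelope fashion. Note the average household size below is an assumption for illustration only; the actual ILCS figure may differ:

```python
# Rough per-question pace implied by the ILCS figures quoted above.
AVG_INTERVIEW_MIN = 82      # average ILCS interview length (quoted above)
HOUSEHOLD_QUESTIONS = 150   # questions about the household (approx.)
PER_PERSON_QUESTIONS = 100  # questions about each individual (approx.)
AVG_HOUSEHOLD_SIZE = 6      # ASSUMED for illustration, not an ILCS figure

total_questions = HOUSEHOLD_QUESTIONS + PER_PERSON_QUESTIONS * AVG_HOUSEHOLD_SIZE
pace = total_questions / AVG_INTERVIEW_MIN
print(f"~{total_questions} questions in {AVG_INTERVIEW_MIN} min "
      f"(~{pace:.1f} questions per minute)")
```

Under that assumed household size, the ILCS interviewers were getting through on the order of nine questions a minute, which makes a handful of questions in ten-odd minutes look unremarkable.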

Hicks has apparently not done so -- not in so many words, at least: "The fact that they can't rattle off basic information suggests they either don't know or they don't care." -- but others clearly have.

What some of these people don't appreciate is that scientific fraud is not proved by idle "I can't believe it" speculation. Then again, perhaps it is not so much their goal to prove fraud as it is to plant doubt.

Tim,

Where were any of those questions I listed answered? Especially, where does Roberts or Burnham explain why one's estimate implies about 12 minutes and the other's about 20?

Perhaps Hicks doesn't have a problem with the ILCS since it doesn't look as slapdash as the Lancet study.

The ILCS trained its core staff for three weeks, followed by two weeks of training for the local staff of 500. Lancet managed two days. It sampled over 21,000 houses compared with under 2000. The ILCS is described as a rapid-response survey, specifically designed to elicit fast responses while remaining comprehensive. Still, it took 82 minutes on average. The Lancet survey took some still-indeterminate length of time, because the public statements of two of its authors conflict.

Further, it looks like the ILCS questions were very tightly focused compared to the Lancet's, and didn't require recall of specific dates two years in the past. The timeframes in the ILCS are frequently "the past two weeks", "the past month", or immediate.

And again, the reason one knows all this about the ILCS is that they actually measured the length of their interviews and released their survey data and actual questions in a nice tabular form, unlike the Lancet team.