Anjana Ahuja has written an extraordinarily one-sided article attacking the Lancet study. She drags out the same criticisms that were covered in the Nature story, but even though she cites the Nature piece, she carefully avoids mentioning the Lancet authors' replies, or the opinions of the researchers supporting the study. Ahuja also makes many factual errors, even going so far as to claim that one of the interviewers contradicted Burnham when, in fact, they supported him. All of Ahuja's errors are in the direction of supporting her case, suggesting that she is biased.
Ahuja begins:
>Iraq Body Count, an antiwar web-based charity that monitors news sources, put the civilian death toll for the same period at just under 50,000,
This is untrue. The IBC just counts deaths reported in the media. It is not a count of the total number of deaths.
>One critic is Professor Michael Spagat, a statistician from Royal Holloway College, University of London. He and colleagues at Oxford University point to the possibility of "main street bias" -- that people living near major thoroughfares are more at risk from car bombs and other urban menaces. Thus, the figures arrived at were likely to exceed the true number. The Lancet study authors initially told The Times that "there was no main street bias" and later amended their reply to "no evidence of a main street bias".
Spagat is an economist, not a statistician, and has no experience in conducting surveys. His colleagues at Oxford are physicists who also have no experience in conducting surveys. Spagat and co know full well that most of the deaths occurred outside the home, so it matters little where the respondents lived. Spagat et al's contrived analysis was only able to make the alleged "main street bias" matter by making absurd assumptions, such as that the survey sampled only 10% of the population and that 90% of the population of Iraq virtually never used main streets for travel or shopping.
And don't you just love the way Ahuja plays "Gotcha!" with the slight change in wording from the authors?
>Professor Spagat says the Lancet paper contains misrepresentations of mortality figures suggested by other organisations, an inaccurate graph, the use of the word "casualties" to mean deaths rather than deaths plus injuries, and the perplexing finding that child deaths have fallen.
The authors acknowledged that their graph labeled "casualties" as "deaths" and erroneously compared rates and accumulated counts. So that's two errors that Spagat correctly reported, but the next mistakes are Spagat's:
The word "casualties" does not even appear in the body of the paper. And the study actually found that child deaths increased. In one paragraph Spagat made as many mistakes as he was able to list from the Lancet study.
"The authors ignore contrary evidence, cherry-pick and manipulate supporting evidence and evade inconvenient questions," contends Professor Spagat, who believes the paper was poorly reviewed. "They published a sampling methodology that can overestimate deaths by a wide margin but respond to criticism by claiming that they did not actually follow the procedures that they stated." The paper had "no scientific standing". Did he rule out the possibility of fraud? "No."
I guess Spagat thinks that they should have got economists and physicists to review the paper instead of statisticians and epidemiologists. I think the reviewers were able to figure out what the sampling methodology was from the paper. Unlike Spagat, who came up with the interpretation that they had sampled only 10% of the country and, when told that he had misunderstood what they had done, accused them of not following the procedures that they had stated. And can we rule out the possibility of fraud in Spagat's work? No.
>If you factor in politics, the heat increases. One of The Lancet authors, Dr Les Roberts, campaigned for a Democrat seat in the US House of Representatives and has spoken out against the war. Dr Richard Horton, editor of the The Lancet is also antiwar.
And The Times, like every single Murdoch paper, was stridently pro-war. This might explain why Ahuja's article is biased.
>Dr Richard Garfield, an American academic who had collaborated with the authors on an earlier study, declined to join this one because he did not think that the risk to the interviewers was justifiable. Together with Professor Hans Rosling and Dr Johan Von Schreeb at the Karolinska Institute in Stockholm, Dr Garfield wrote to The Lancet to insist there must be a "substantial reporting error" because Burnham et al suggest that child deaths had dropped by two thirds since the invasion. The idea that war prevents children dying, Dr Garfield implies, points to something amiss.
No, Burnham et al do not suggest that child deaths dropped since the invasion (they increased), and Garfield did not say that the study suggested that either. Garfield suggested that because the child mortality rate was much lower than in surveys conducted in the 90s, the study had undercounted child deaths. And Ahuja has cherry-picked her quotes from Garfield. Here is what he thinks about the accuracy of the study:
>I am shocked that it is so high, it is hard to believe, and I do believe it. There is no reasonable way to not conclude that this study is by far the most accurate information now available.
Back to Ahuja's story:
>Professor Rosling told The Times that interviewees may have reported family members as dead to conceal the fact that relatives were in hiding, had fled the country, or had joined the police or militia. Young men can also be associated with several households (as a son, a husband or brother), so the same death might have been reported several times.
However, they were able to produce death certificates, so it is not credible that they invented the deaths. And if you wanted to conceal that someone had joined the militia, why not just not say that they had joined the militia instead of concocting a lie? As for double counting, that is very easy to check against and the researchers made sure that deaths were not counted twice.
>Another critic is Dr Madelyn Hsiao-Rei Hicks, of the Institute of Psychiatry in London, who specialises in surveying communities in conflict. In her letter to The Lancet, she pointed out that it was unfeasible for the Iraqi interviewing team to have covered 40 households in a day, as claimed. ...
>Professor Burnham says the doctors worked in pairs and that interviews "took about 20 minutes". The journal Nature, however, alleged last week that one of the Iraqi interviewers contradicts this.
Only if by "contradicts" you mean "confirmed". Here's the Nature story:
>The US authors subsequently said that each team split into two pairs, a workload that is "doable", says Paul Spiegel, an epidemiologist at the United Nations High Commission for Refugees in Geneva, who carried out similar surveys in Kosovo and Ethiopia. After being asked by Nature whether even this system allowed enough time, author Les Roberts of Johns Hopkins said that the four individuals in a team often worked independently. But an Iraqi researcher involved in the data collection, who asked not to be named because he fears that press attention could make him the target of attacks, told Nature this never happened. Roberts later said that he had been referring to the procedure used in a 2004 mortality survey carried out in Iraq with the same team (L. Roberts et al. Lancet 364, 1857-1864; 2004).
So the Iraqi researcher told Nature that they worked in pairs, which Spiegel says is doable. But Ahuja, after reading that very paragraph, carefully avoids mentioning Spiegel's opinion, presumably because he is an epidemiologist with experience in such surveys as opposed to the psychiatrist Hicks. And she falsely claims that the Iraqi researcher says that they didn't work in pairs when they did.
Ahuja's piece is a disgrace.
Extraordinary, isn't it? I liked this bit best:
>Several academics have tried to find out how the Lancet study was conducted; none regards their queries as having been addressed satisfactorily
So out of the population of people who have outstanding queries, 100% of them haven't had their queries resolved? When someone puts this in their third paragraph, you pretty much know what level of understanding they are working at when it comes to sampling theory.
The thing that confuses me is that the Garfield letter is printed in the Lancet correspondence, and all of Spagat's objections are (garbled versions of) letters in the correspondence. But the author doesn't mention Burnham's "reply to critics" at all - she actually quite strongly implies that it didn't exist, in claiming that Burnham hasn't responded to Garfield etc. on birth rates. Did Spagat not tell her that it existed, did he tell her and she didn't bother to read it, or did she read it and not bother to mention it?
A spot of "Kaplan's Fallacy", btw - Garfield clearly says in the letter that the undercounting of child deaths might account for +/- 30% variance in the estimate, but the article argues as if it's definitely -30%.
The title says it all: "Could 650,000 Iraqis really have died because of the invasion?"
It's called "proof by incredulity" and is typically used when all other efforts to dismiss the evidence have failed (usually miserably).
Here are some titles in a similar vein:
"Could men really have landed on the moon in 1969?"
"Could humans [as opposed to aliens] really have built the pyramids 5000 years ago?"
Oh, and let us not forget the most oft-cited "Proof by Incredulity" of all:
"Could human beings ever have evolved from single-celled organisms as Darwin proposed?
The IBC is obviously wrong, if they are only counting what gets reported in the media. I don't know that every death in *America* gets reported in the media.
Much less every death in a country that is going through a civil war and that doesn't appear to have a strong history of media independence.
It just seems obvious.
dsquared writes:
This is vaguely off topic, but I know several academics with concerns about the Lancet studies (especially the survey details), who have sought (unsuccessfully) to get satisfactory answers to their questions. Do you know any such academics who have received satisfactory answers? (There are lots of academics who have always thought (and said) that the Lancet studies are wonderful. I am curious about those with concerns that have had their concerns addressed.)
If you don't know any, wouldn't that suggest that the statement is accurate?
By the way, there is some hope that the data will be available in the future. Kudos to all those who have fought this lonely fight (and to the Lancet authors for agreeing to this reasonable request).
IBC, I think, would say that their count includes the government and hospital death tolls, so it comes down to whether one thinks those numbers are complete or better than 50 percent complete, as opposed to missing 90 percent or more. (Or some intermediate figure--if the Lancet papers were discredited tomorrow it wouldn't necessarily mean IBC was right.)
From the IBC's own FAQ:
"It is likely that many if not most civilian casualties will go unreported by the media. That is the sad nature of war."
So it's pretty obvious that they are lowballing the true number. At best, their count can be treated as a floor. But any newspaper that reports their number without taking this into account is doing a horrible job of reporting.
If the mainstream media are so interested in accuracy, then where are all the news stories that are as critical of IBC as they are of the Hopkins study?
David Kane wrote:
>If you don't know any, wouldn't that suggest that the statement is accurate?

Perhaps, but I do know at least a couple, so that makes the statement inaccurate.
David Kane asks: If you don't know any [academics who have received satisfactory answers], wouldn't that suggest that the statement is accurate?
Not in any sense of the words "suggest" and "accurate" that a science reporter ought to be using. As for the kudos to people who have "fought this lonely fight", is it seemly to be congratulating yourself thusly? (I take it that accusing people of fraud counts as fighting; it certainly smacks of looking for a fight.) And what do you propose to do with the data? If you are right and the whole thing was cooked, do you suppose Burnham et al made such a hash of it that the data will incriminate them?
It's not hard to guess what will happen when the data is made available. The carpers will say that what we really need is to hear the Iraqis describe in detail just how they implemented the instructions they were given. One of the interviewers is dead and it isn't likely that the others will be in any hurry to identify themselves in public. Certain powerful people have made it very clear that they don't approve of what they were doing. So the MSB squad will still have their Devastating Critique ((C)d-squared), data or no data.
"Do you know any such academics who have received satisfactory answers?"
"Satisfactory" is in the eye of the beholder and not all beholders are equally qualified to assess the validity of the results.
If an academic with no understanding of sampling methodology questions the results, it is basically meaningless.
Tim,
The headline should read:
"London Times hatchet job on last shreds on Lambert credibility"
There are few people in Australia that are so consistently wrong as you are. Perhaps John Quiggin might have you covered in that regard. From DDT to Global Warming to Iraq death counts you manage to end up on the side that can't discern opinion from truth, an emerging phenomenon in the Internet Age.
One positive development is that Josh Dougherty's contribution to the Lancet correspondence is a lot more sensible and civilly worded than his occasional contributions to comments threads here.
Perhaps it's worth pointing out that, according to his CV, Prof Spagat enjoys lucrative consultancy contracts (totalling at least $300,000) with a company called Radiance Technologies. I assume this is the outfit based in Huntsville AL, which supplies sensor systems to the US military in Iraq.
I knew that "boot" means "trunk" in the UK, but had no idea that "contradict" and "confirm" were synonyms there. The things you learn from the respectable news.
Jack L, you forgot how wrong we got it on evolution, and how the Republican sweep in the 2006 election indicated that the US public has seen through our absurd claims that Iraq is not, in fact, the safest place in the world, except for the possibility of tripping over the sweets and flowers with which US visitors are daily garlanded.
Jack Lacton said
"last shreds on Lambert credibility"
Not to be nitpicky or anything, but shouldn't that be "last shreds of"?
"Few people in Australia that are so consistently wrong"
and shouldn't that be
"Few people in Australia that are as consistently wrong "?
Where do these people learn their English anyway?
Tim,
Perhaps you are simply a very subtle reasoner, but I didn't quite get how Les Roberts altering his story after being contradicted by one of his interviewers is implicit support of the integrity of the research. According to the quotation you offered, he made a claim about the interviews in response to being questioned, then retracted it after an actual interviewer contradicted him. To me, that somehow doesn't smack of integrity.
And does Hicks being a psychiatrist specializing in surveying communities in conflict disqualify her from making a professional judgement on the Lancet study? If you think psychiatry is irrelevant to the topic, I must say epidemiology doesn't seem that much more [or less] pertinent to war.
JB - I tried to fix the 'of' but you can't post twice within a short space of time so I didn't worry about it. I don't know where you learned English but your 'so consistently' vs 'as consistently' is flat out wrong. Both are grammatically fine.
Prof Quiggin - Who argues evolution? Only those that made #10 on my list of institutions that ruin the world at http://tinyurl.com/yt8uv5, and who argues that it's difficult in Iraq. If you believe that we weren't greeted with cheers and garlands initially then you're not dealing with truth. The best comment that I've heard about Iraq was from a man who was a janitor (or some such thing) who went to work every day in a very dangerous part of Baghdad in which he was not of the right religion for the area. When asked whether things were better now than when Saddam was in charge, he replied that they absolutely were. When asked why, he responded that under Saddam you went about your business, might get killed, and had no hope. Now you go about your business knowing that you might get killed, but there's hope for a better future.
Kevin, here's what Roberts wrote in their reply in the Lancet:
>Sampling in each cluster involved two teams working together. With this arrangement, sampling 40 households in a day was indeed feasible.
ie the four interviewers worked in two teams of two interviewers. This was confirmed by the interviewer that spoke to Nature. Apparently Roberts also made a statement about the 2004 study that was misunderstood to refer to the 2006 study. Correcting a misunderstanding is not the same as making a retraction.
"Apparently Roberts also made a statement about the 2004 study that was misunderstood to refer to the 2006 study. Correcting a misunderstanding is not the same as making a retraction."
People questioned the methods used to draw the 2006 sample. In response to this, Roberts says the "teams" all went off individually, so those questions about the 2006 study are "not valid".
It was "misunderstood" because it was a misleading answer (while the only correct answer would have been "I have no idea how to answer these questions").
A nitpick: If you factor in politics, the heat increases. One of The Lancet authors, Dr Les Roberts, campaigned for a Democrat seat in the US House of Representatives and has spoken out against the war. Dr Richard Horton, editor of the The Lancet is also antiwar.
That would be a Democratic seat; the use of the term "Democrat" in this context is nearly universally understood to be a (childish) slur in the United States. (Members of the Democratic Party are Democrats, but things related to the party are Democratic. It would have been correct to say "a seat as a Democrat" but not "a Democrat seat".)
A fairly telling exposition of the author's bias.
Tim,
It still seems mysterious to me how Roberts either mistook a question about the feasibility of the 2006 survey for a question about the 2004 survey such that he either gave a wrong answer or said something so vague it misled the reporter. I do agree retracting and correcting misunderstandings are different things.
I found this in the introduction of the Lancet paper:
"Recently, Iraqi casualty data from the Multi-National
Corps-Iraq (MNC-I) Significant Activities database were
released.5 These data estimated the civilian casuality [sic] rate at 117 deaths per day between May, 2005, and June, 2006, on the basis of deaths that occurred in events to which the coalition responded."
This seems close enough to be prima facie support of Spagat's critique, regarding the paper containing mistaken references to casualties. Mistakenly pluralizing a term that was plainly in the intro seems significantly less a problem than conflating casualty and death.
And just generally, given the Lancet's rundown of what was asked, the estimate of 20 minutes per interview seems unlikely to me. 20 minutes to enter someone's house, explain your purpose and caveats of your survey, get settled in for note-taking, then discuss an emotionally and politically touchy subject like the death of a loved one and chatter inducing subjects like family births, deaths, migrations and what have you, wait for someone to get a death certificate in 8 of 10 households, read it, discuss it, maybe ask more questions in case of conflict and then take one's leave sounds improbable. The possibility of someone just breaking down and crying alone makes this figure sound low. And if Les Roberts didn't cite the correct number of groups going out and doing interviews or couldn't make himself plain enough to be understood, might he have accidentally misstated the time needed for interviews or misled the reporter by commenting on the time it takes him to shower in the morning rather than answering the question at hand?
Here's the [exact quote](http://www.zmag.org/content/showarticle.cfm?ItemID=11309)
>During my DRC surveys I planned on interviewers each interviewing 20 houses a day, and taking about 7 minutes per house. Most of the time in a day was spent on travel and finding the randomly selected household. In Iraq in 2004, the surveys took about twice as long and it usually took a two-person team about three hours to interview a 30-house cluster. I remember one rural cluster that took about six hours and we got back after dark. Nonetheless, Dr. Hicks' concerns are not valid as many days one team interviewed two clusters in 2004.
The thing that gives it away that Roberts was talking about the 2004 survey was the fact that he said "in 2004". Twice. And it seems I have to spell out the relevance of the statement about the 2004 study for you. If in 2004 a two-person team interviewing individually took about three hours for a 30-house cluster, how many hours would a four-person team working in pairs and asking the same questions be expected to take for a 40-house cluster in 2006?
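For concreteness, here is that arithmetic as a quick sketch (the 2004 figures are from Roberts' quote above; treating each 2006 pair as one interviewing unit at the same per-house pace is an assumption):

```python
# Roberts' 2004 figures: a two-person team, working individually,
# took about 3 hours (180 minutes) for a 30-house cluster.
mins_per_house_2004 = 180 / (30 / 2)   # 12 minutes per interview

# Assumption for 2006: four interviewers form two pairs, each pair
# acting as one interviewing unit at roughly the 2004 per-house pace.
houses_per_pair = 40 / 2               # 20 houses per pair
hours_2006 = houses_per_pair * mins_per_house_2004 / 60

print(f"{hours_2006:.1f} hours for a 40-house cluster")   # 4.0 hours
```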
Oh, and Josh, will the IBC be contacting Ahuja about her misrepresentation of the IBC number in her article?
Getting back to Tim's own ironically-titled hatchet job, the author for the Times appears to make a couple minor mistakes that Tim tries to make a lot of hay over to serve his own biased agenda.
She confuses the points over which the Lancet authors have been caught fibbing. She may also have confused the child deaths issue with the finding of non-violent deaths going down quite a bit over the first two or three years after the invasion (while L1 finds them going up in the equivalent period - another one of those "striking similarities" with the two studies, providing "strong validation" for both).
Tim can't help making a few errors of his own either:
"the next mistakes are Spagat's: The word "casualties" does not even appear in the body of the paper."
As usual, the mistake is Tim's. Whether the word appears or not is a red-herring of Tim's to begin with, but it appears in the exact way described by Spagat, with "casualties" and "deaths" being interpreted as the same thing: "These [DoD] data estimated the civilian casuality rate at 117 deaths per day between May, 2005, and June, 2006".
(Apparently Tim looked only for the plural in his Expert word search on the document).
Before this, Tim starts his hatchet job off with an absurd complaint over a reference to IBC which he says is "untrue" and supports by referencing a fraudulent analysis he'd written distorting news stories to help support a disgraceful and disinformative smear campaign against IBC in which both he and Expert Les Roberts were up to their eyeballs.
In fact, the Times author's reference is perfectly reasonable and more accurate than the reference written in the 'peer-reviewed scientific paper' that we're all supposed to worship:
"The best known is the Iraq Body Count, which estimated that, up to September 26, 2006, between 43,491 and 48,283 Iraqis have been killed since the invasion."
The Times author gets the fact that IBC refers to civilians, while the peer-reviewed paper from the Experts is, if anything, rather less accurate. It is more inaccurate still if one considers this lie the Experts added for good measure: "Estimates from the Iraqi Ministry of the Interior were 75% higher than those based on the Iraq Body Count from the same period."
Perhaps this and many other peer-reviewed lies are what Spagat was referring to when he said their paper contains: "misrepresentations of mortality figures suggested by other organisations".
Tim also issues a bunch of red herrings, misleading claims and speculations to dismiss MSB, like the assertion (from data which the Lancet did not collect) that most deaths were outside the home. But of course MSB already considers this. And then Tim speculates that some of the assumptions tested out in the MSB paper (and on which MSB itself does not depend) are "absurd", even as he (like Roberts) has no idea how a "main street" was defined here, or what percentage of Iraqis from which locations might use them, or how often.
The MSB paper I read (and which Tim appears to mostly have not read, if the 'analysis' to which he links is any indication) makes it clear that there is no way to know how much the biased sampling scheme might have biased the results until more is known. Until then, one can only speculate and use various assumptions to try to get some idea of what the effect might be. Tim chooses to speculate that one set of such assumptions tested out in the MSB paper are "absurd". But Tim has no idea if they're "absurd" or not, and neither do the Lancet authors. What is obvious though is that Tim will speculate or say anything at all that will puff up this study and deflect criticisms of it. That seems to be his major role in all this and has been for several years now. In any case, Tim's dissembling about MSB was addressed pretty well in the linked article by an amateur (Robert Shone) who tried in vain to set Expert Tim straight on all the things he got wrong.
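To make that dependence on unknown parameters concrete, here is a toy version of the main-street-bias argument - not the Johnson et al. model, and every number in it is a placeholder:

```python
# Toy sketch only. Suppose a fraction q of households are reachable by
# the sampling scheme (the "main street" zone), and the death rate there
# is r times the rate elsewhere. Sampling only the reachable zone then
# overstates the population-wide rate by the factor below.
def bias_factor(q: float, r: float) -> float:
    """Ratio of sampled-zone death rate to population-wide death rate."""
    return r / (q * r + (1.0 - q))

# Placeholder parameter choices, showing how much the answer swings:
for q, r in [(0.9, 2.0), (0.5, 2.0), (0.1, 3.0)]:
    print(f"q={q:.1f}, r={r:.1f} -> bias factor {bias_factor(q, r):.2f}")
# q=0.9, r=2.0 -> 1.05;  q=0.5, r=2.0 -> 1.33;  q=0.1, r=3.0 -> 2.50
```

Nothing in this settles who is right; it just shows that the headline bias factor is driven almost entirely by q and r, which are exactly the quantities nobody in this dispute knows.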
The rest of Tim's hatchet job consists mostly of ad hominems, which I don't know are correct or not (judging by them being from Tim it's probably safer to assume not), but are the standard ad hominem diversions from the substance of the arguments.
Tim, the passage you quote is not the quote, let alone an "exact quote", of Roberts saying, in response to questions/criticisms about sampling in the 2006 study, that the "teams" split off with each member working independently ("teams" of 1).
Furthermore, the passage you quote was addressed by Hicks here, and Roberts' claims there only make matters worse:
http://www.hicn.org/research_design/rdn3.pdf
As someone who conducts surveys for a living, I will second Kevin's comments. I find almost every aspect of the fieldwork as described implausible, specifically the response rate, the interview rate (time per interview) and the incidence of death certificates being produced.
The Lancet paper itself refers to the difficulties the field teams encountered, including stops at roadblocks and lengthy explanations being required to gain the confidence of respondents.
There was no attempt to validate the interviews (in market research in Western countries typically 10% of interviews are audited). The study authors were not even in the country at the time, so nobody knows what the fieldworkers actually did. I suspect that faced with a difficult and dangerous task, they simply made many of the interviews up.
I know that if I ever got a survey back with a claimed response rate of over 98% and the kind of daily completion rate as claimed, I'd order an audit in a second.
Finally, anticipating Tim telling me that 98% response rates are typical in Iraq, I don't believe it. People decline to be interviewed for all sorts of reasons which are not culturally dependent - they need to go to the toilet, they're about to eat, they don't feel well, they are deaf, blind or dumb, they are infirm, they're about to have sex, they are mentally ill, they have friends over, they are working to a deadline etc etc
98% response rate, dream on.
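James's incredulity can at least be quantified under his own premise - independent household refusals at Western-typical rates; whether that premise transfers to wartime Iraq is precisely what is in dispute:

```python
from math import comb

def p_at_least(n: int, k: int, p: float) -> float:
    """P(at least k of n households respond), refusals independent."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 1849                      # households in the 2006 sample
k = int(0.98 * n)             # the reported ~98% response rate
for p in (0.90, 0.95, 0.99):  # assumed per-household response propensity
    print(f"p={p}: P(>=98% respond) = {p_at_least(n, k, p):.2g}")
# At p=0.90 or 0.95 the reported rate is essentially impossible; it
# requires the underlying propensity itself to be about 98%.
```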
Shorter Josh: "Anyone who disagrees with me is a liar".
This time he has added to his list Michael O'Hanlon and Jason Campbell who [wrote](http://www.brookings.edu/fp/saban/iraq/index.pdf):
>"estimates for civilian casualties from the Iraqi Ministry of the Interior were 75 percent higher than those of our Iraq Body Count-based estimate over the aggregate May 2003 - December 2005 period."
So James, you think that all surveys ever conducted in Iraq have been fraudulent. Got it.
I haven't added anyone to the list Tim. Roberts was already on it.
What does "our Iraq Body Count-based estimate" mean.
O'Hanlon and Campbell explain directly beforehand (in a passage Roberts must have seen) that this means a number significantly reduced from the IBC figure (with morgue entries and other things removed), and is NOT the "best known" figure you would have seen on the IBC website and which is deceptively quoted and put alongside this claim in the Lancet paper. The actual IBC figures, as opposed to "our" (O'Hanlon/Campbell's) reduced version, were higher than the Iraqi figures being compared to it.
You will not see the explanation given by O'Hanlon/Campbell in the Lancet report because Les Roberts is attempting to deceive his readers about IBC by wrenching a factoid out of context, putting it alongside IBC's actual figures, and using it to misrepresent the "best known" IBC as being lower than everything else, which is right in line with the nonsense he concocted in his ludicrous "sensitivity analysis" of "8 independent studies" from 2005.
Not at all, Tim. But having been in the industry for 30 years, forgive me if I am unsurprised if market research companies operating in the third world exaggerate their response rates (I'm not talking here about the Lancet authors).
I think my point is nobody gets 98% response rates for door-to-door work anywhere in the world, for reasons that I have explained.
Finally, I never mentioned "fraud". I'm saying that nobody knows what the fieldworkers did, but that what they claimed to have done doesn't add up.
>I think my point is nobody gets 98% response rates for door-to-door work anywhere in the world
Really?
Kevin enumerated a vast sequence of steps that could not possibly be carried out in 20 minutes ...
[And just generally, given the Lancet's rundown of what was asked, the estimate of 20 minutes per interview seems unlikely to me. 20 minutes to enter someone's house, explain your purpose and caveats of your survey, get settled in for note-taking,]
shall we say maybe 5 minutes so far?
[ then discuss an emotionally and politically touchy subject like the death of a loved one and chatter inducing subjects like family births, deaths, migrations and what have you, wait for someone to get a death certificate in 8 of 10 households (sic - dd) , read it, discuss it, maybe ask more questions in case of conflict and then take one's leave sounds improbable.]
certainly does seem improbable that this last bit would only take 15 minutes. However ...
Kevin appears to have missed the fact that not every house in the sample would have had a death in it. In fact there were a total of 629 deaths (547 post-invasion, 82 pre-invasion). Therefore, even in the worst case in which each of those deaths took place in a different household, 1220 of the households surveyed would have reported no deaths (that's 66% of all households).
If 1220 households took an average of 5 minutes to survey (rounding up to make the arithmetic easier), but the total sample of 1849 households took an average of 20 minutes to survey, then how long did the (maximum of) 629 households with a death in them take to survey? This is a GCSE maths question. And the answer is [(1849 x 20) - (1220 x 5)]/629 = 49.09 minutes.
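The same sum as a script, using dsquared's assumed averages (the 5-minute figure for a no-death interview is his guess, not a measured time):

```python
households, avg_all = 1849, 20      # interviews, claimed average minutes each
no_death, quick = 1220, 5           # worst-case no-death households, assumed minutes
with_death = households - no_death  # 629 households reporting a death

mins = (households * avg_all - no_death * quick) / with_death
print(f"{mins:.2f} minutes per death-reporting interview")   # 49.09
```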
The main point that one should make of the Times article is that it rigidly conforms to a pattern that epitomizes our corporate-state 'mainstream' media, which must forever downplay or ignore western crimes and forever highlight crimes committed by officially designated enemies, even if the evidence for the latter is fragmentary. This is because part of the foundation of the western 'creed' is that we are, by definition, the 'good guys' who support noble values such as social justice, peace, freedom, and democracy in our foreign policy, and thus our media acts as a conduit for this myth of our basic benevolence. The media will occasionally admit that our governments make mistakes in carrying out noble deeds, but the idea that our leaders are calculating liars who have an alternate agenda and who are complicit in mass murder is beyond the pale - never to be acknowledged. When the established order is as divided as it was in the US-UK war of aggression against Iraq, this makes it increasingly difficult to promote the western 'creed' of benevolence. This explains why the media (including the hatchet job done by the Times) has flexed such muscle in downplaying the carnage in Iraq - in effect it is rallying around the political elites in power who are responsible for the slaughter, because of the damage this has done to their reputation and to the myth of western nobility.
Note how the same media only emphasized Saddam Hussein's crimes after he had invaded Kuwait and had 'slipped the leash'. Until then his crimes were largely ignored, because he was 'a man with whom we could do business', in the famous words of Margaret Thatcher, defending Saddam after the Halabja massacre. Similarly, the crimes of other western proxies such as Suharto (whose crimes dwarf even those of Saddam) were mostly excluded from the western media, at least until he became uppity in 1998 and started challenging IMF rules. Suddenly, the mainstream media shook off its collective amnesia and regained its mental faculties with regard to this world-class torturer and mass murderer.
If the study had been conducted using exactly the same methods but had estimated the body count of an aggressive war carried out by an officially designated enemy, I have no doubts that our media would have promoted the study to the hilt, giving it prominent coverage over an extended time. But the utter destruction of Iraq must be downplayed because we are the culprits. Have hundreds of thousands died in Iraq since March 2003? Almost certainly. Did hundreds of thousands of civilians die under the sanctions regime that preceded it? Almost certainly. Was the bombing of Iraq in 1991 aimed at destroying the country's civilian infrastructure? Most definitely. But to acknowledge the fact that our western governments are criminal entities is taboo, hence why the Times and most of the other western MSM sources have either downplayed or attacked the conclusions of the Lancet study. What else do we expect when the main aim of the MSM is to 'support and defend the political, economic and social agenda of the privileged groups that dominate society and the state', which is the propaganda model of Herman and Chomsky (1988).
Can anyone explain the point in the Times piece that the Lancet survey actually had the child body count lower than before the war started?
This is an interesting stat to ponder.
JC: the explanation of this point is that someone (almost certainly the Times journalist) has got their facts wrong. The Lancet paper does not find this, as you can see in Table 2. For infant deaths, 11 were recorded before the war (15 months) and 29 after the war (40 months), giving an unchanged rate of 0.73/month in the sample. For total deaths of children under 15, the figures were 14 and 66, meaning that the number of deaths per month rose from 1.3/month to 1.65/month. They didn't fall.
Von Schreeb, Rosling and Garfield note that the overall infant and child death rate (for children under 15) is quite a lot different from *other* surveys of the *under-5* death rate. This is an interesting point, and it is acknowledged in Burnham's response, but the Times has just got the wrong end of the stick.
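The infant-death arithmetic above can be checked in a couple of lines (counts and period lengths as given in the previous comment, not re-derived from the paper):

```python
# Infant deaths per Table 2: 11 over the ~15 pre-invasion months,
# 29 over the 40 post-invasion months.
pre_rate, post_rate = 11 / 15, 29 / 40
print(f"pre: {pre_rate:.2f}/month, post: {post_rate:.2f}/month")
# pre: 0.73/month, post: 0.73/month -- unchanged, not fallen
```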
Jeff Harvey: "Most definitely. But to acknowledge the fact that our western governments are criminal entities is taboo, hence why the Times and most of the other western MSM sources have either downplayed or attacked the conclusions of the Lancet study."
This has a bit of a conspiracy tone to it, but there is a great deal of truth to some of it. I don't think that the MSM is somehow trying to prop up our government, but they are complicit for different reasons.
The main reason is simply that most of the sources on a story like this come from government officials, even if they do not appear in the story. We can see that this happened with Weapons of Mass Destruction, in which the media was completely wrong, because it was relying on its cadre of contacts inside the government.
So the media is never going to be too critical of government policy, simply because of the nature of the relationship between media and government. There's just very little independence.
There's also the problem of the American media's timidity, the need to appear "balanced," and the fear among journalists of appearing unpatriotic.
Thom,
Thanks for some insightful points. However, I don't think what we read in the media has anything to do with a conspiracy at all. I believe that most journalists truly believe what they write, even if it appears to be exceedingly biased. But because the media is often owned by big corporate interests, or depends on these same entities for advertising revenue, then I think what we read is, to partially quote (I think) the Glasgow University Media Group, not a 'natural or neutral phenomenon but a manufactured product of ideology'. In other words, there is a filtering process through which media selects for certain views and perspectives. These views are in line with the underlying political ideology in which we live; those with differing views are marginalized.
There's no doubt that a few excellent journalists do challenge the prevailing wisdom, even in the mainstream 'liberal' press, but the system ensures that the numbers are stacked against them. I suggest a read of the media chapter of Mark Curtis' excellent book on British foreign policy, 'Web of Deceit', or the equally excellent 'Guardians of Power' by David Edwards and David Cromwell of Media Lens. Their analyses provide empirical evidence that the MSM bolsters a specific agenda while marginalizing or ignoring alternate views. By the way, they also mention the reliance of the MSM on 'official' sources, but part of the problem they allude to is that the MSM rarely challenges these sources. This at least partially explains why the Iraq war party was able to build up such popular support in the U.S. for its illegal war.
>Spagat is an economist and not a statistician
You might want to reconsider this point......
Sebb
P.S. To spell it out - economics IS statistics.
Sebb wrote:
And geography IS climatology.
Most economists use statistical methods (many badly). This is v2.1 of math is not a science.
Informed verdicts on MSB:
Tim Lambert: [Main street bias is] "bogus".
Stephen Soldz [Main street bias is] "dishonest".
Jon Pedersen: [Main street bias is] "certainly an important problem for many surveys - not only the Iraq Lancet one".
Robert, among the informed verdicts was this one you got from Robert Chung:
"I'm sure that you are correct, and that many people are simply dismissing [MSB] out of hand- but I don't believe many professionals do. I do, however, believe that for this particular situation many professionals dismiss an overall bias factor of 3."
The merit of this comment is that it distinguishes between the problem (which may perhaps be what Pedersen had in mind) and the model presented by Johnson et al. Your brief anthology confuses these very different things.
It's a remarkable thing that five people got together to write a paper to tell us something we all knew already: that when you pluck the parameters of a model out of your ass, you can always get the conclusions you want.
Thanks for your insightful and scientific remarks, Kevin. I can now update my "anthology":
Tim Lambert: [Main street bias is] "bogus".
Stephen Soldz: [Main street bias is] "dishonest".
Kevin Donoghue: [Main street bias is] "ass-plucking".
Any more contributors of scientific wisdom to this "science" blog discussion?
JoshD: Thanks for the link to Hicks' piece. It was far more carefully considered than my laundry list but I was gratified to see some of my concerns mirrored there.
James: Out of curiosity, do you have any experience with average times for obtaining "informed consent" from interviewees? That piece alone sounds like a stumbling block for the proposed length of time in the Lancet interviews.
DD: I would recommend that Hicks article to you. It clarified the points of contention much more accurately than I managed in one run-on sentence. And you're right, I should have written that death certificates were produced, according to the Lancet, in 8 of 10 houses *which claimed a death*. Further, I would recommend reading the actual methodology the Lancet published for yourself, or perhaps rereading it, because their explanations of the care and sensitivity with which the interviews were conducted will give you a more accurate grasp of the issue than my vastly curtailed summary. I would submit that even sans a death in a household, the other questions, explanations and assurances would either last significantly longer than the five minutes you've posited or the interviewers would have to be taking shortcuts in the stated methodology. After getting settled in, the questioning starts, which even if there was no death would take a fair amount of time given what they were asking.
Moreover, in the Hicks article, her analysis of Les Roberts' explanations leading to a figure of 3 to 6 minutes average per household is interesting. Obtaining informed consent, implying both that the interviewee consents to the interview without feeling rushed or coerced and is adequately informed to make a rational decision, can according to her take more than 6 minutes alone.
So I salute your spirited rebuttal, but I really think the evidence weighs in favor of some skepticism on Burnham et al. 2006, at least as far as the claims and counter-claims have been presented on this blog and in various FAQs I've seen cited. Maybe the authors of the study could clear it up and end our armchair speculating.
>So the media is never going to be too critical of government policy, simply because of the nature of the relationship between media and government. There's just very little independence.
>There's also the problem of the American media's timidity, the need to appear "balanced," and the fear among journalists of appearing unpatriotic.
And let us not forget the desire of some within the media (Judith Miller, Bob Woodward) to feel self-important and to rub elbows with those making the life and death decisions.
Hicks' paper is flatly wrong in her analysis of Roberts. Here's the quote again:
>During my DRC surveys I planned on interviewers each interviewing 20 houses a day, and taking about 7 minutes per house. Most of the time in a day was spent on travel and finding the randomly selected household. In Iraq in 2004, the surveys took about twice as long and it usually took a two-person team about three hours to interview a 30-house cluster. I remember one rural cluster that took about six hours and we got back after dark. Nonetheless, Dr. Hicks' concerns are not valid as many days one team interviewed two clusters in 2004.
Roberts says that in 2004 the surveys took "about twice as long" as 7 minutes. That's about 15 minutes. Double check: Each interviewer does 15 houses in three hours, 180 minutes/15 is 12 minutes per interview. Most households did not have a death to report, so the interview can easily be done in 10 minutes.
Main street bias in and of itself is not "bogus, dishonest, ass-plucking", but claiming that the Lancet results are rendered null and void by main street bias, before one even has a clear idea of the methodology that was used in the study (as Spagat et al did) certainly qualifies as "bogus (if not dishonest) ass-plucking".
And claiming (or even implying) that the IBC count is anything other than an underestimate is certainly "bogus, thoroughly dishonest, ass-plucking."
Tim,
Call me slow, but I like having implications spelled out; it makes for clear discussion. Your quote of Roberts' comment still doesn't seem relevant to your initial blog entry, which was the focus of my comment.
The Nature quotation you provided has Roberts responding to Nature itself that 'sometimes the four individuals worked independently.' I see no implication in this other than that Roberts meant they worked singly (I don't know of another way for individuals to work independently of one another), and that Roberts was replying to Nature directly.
Your 'exact quote' to help me clear up my confusion has this provenance: "Joe Emersberger from Canada, who follows this issue closely, collected some of the expert criticisms of the report and a selection was put to Mr Roberts." As far as I can tell Mr. Emersberger is a Canadian engineer and not a writer for Nature, Jim Giles wrote the Nature article [no clue who interviewed Roberts], and Les Roberts was replying directly to Emersberger's question in your 'exact quote.'
Your 'exact quote' doesn't quote Roberts saying the individuals worked singly or independently, which is the only way Hicks would be flatly wrong in her 3 to 6 minute claim re: 2004. It apparently has nothing to do with the Nature question and response at all.
You didn't offer an exact quote of Les Roberts' response to Nature about the teams going it alone and it's irrelevant to the issue at hand. How did Nature's interviewer manage to misunderstand Roberts saying his interviewers went singly in 2006 when he meant they went singly in 2004, given that Nature asked if even two teams of 4 splitting in pairs could possibly do the job in the 2006 study? An exact quote of what Roberts was asked and responded would help settle this; a quote of an irrelevant reply to a different question won't.
The things that "gives away" that you were not offering an exact quote of Roberts to the journal Nature is the Media Lens article you cited stating Paul Reynolds of the BBC was forwarding reader questions [inclu. Emersberger's] to Les Roberts.
Also, I did see what Les Roberts said regarding the length of time it took in the 2004 study.
Here is the stated methodology for that study:
http://www.thelancet.com/journals/lancet/article/PIIS0140673604174412/f… [requires a free registration]
Among other things, Roberts here asserts study teams of three people [team leader and male and female interviewer], a similar methodology of obtaining consent and informing interviewees about the purpose of the survey as in the 2006 study, asking quite a few specific and involved questions that would require a fair amount of recall on the part of the interviewee, even in the event they had no deaths, and an attempt to confirm deaths via death certificate for 2 deaths per cluster.
Now taking Roberts' 3-hour, 30-house cluster, this means a team of two did all this and polished off a cluster of 30 houses, traveling door to door, in 6 minutes per house, including all incidentals relating to actually moving from house to house, being greeted, getting inside, asking questions, etc., and every so often waiting for a death cert. Roberts says on many days one team polished off 2 clusters, i.e. 3 minutes per house according to the conjunction of his claims.
You have yet to quote, and I've yet to see, anything else asserting the teams in 2004 worked singly. Why include a male and female interviewer in each team, presumably for the sake of putting the interviewee at ease regardless of gender, if they were going to split up? Moreover, why split them individually in 2004 and not in 2006?
For argument's sake, if they did work singly then, we get 6 minutes per household for those double-cluster days in 2004, which falls significantly beneath your estimate of 10 minutes, and which is incredibly optimistic for what they claimed to do and ask. We get this from conjoining your analysis and Roberts' in the Media Lens article.
Maybe they didn't perform a sensitive, informed consent style of survey on those double cluster days?
Here's a sample informed consent template for a no-risk [to the respondent] survey:
http://www.umass.edu/research/comply/surveystemplate.doc
I assume one for at-risk respondents would have to be longer and more detailed than this, and take longer than this to complete given the face to face nature [questions from interviewees being possible] of the Lancet interviews.
Maybe the teams did conduct an ethical survey in 2004 but didn't meet Roberts' purported timeframe or target number of houses, i.e. he doesn't understand his own teams' field ops or got bad data. That's about the best face one can put on it without assuming some dissembling by Roberts.
After reading the 2004 and 2006 methodology, the basics of informed consent and Hicks claims about the length of respondent's replies and sensitivity of the topic necessitating some circuitousness, how realistic a timeframe does the most generous interpretation of Roberts' claim at 6 minutes per household on double cluster days in 2004 sound? Not very.
Kevin, you entirely ignored everything I wrote in my last comment. Roberts said that each interview took about twice as long as 7 minutes. That's about 15 minutes. And when he said that they did two clusters in a day, he did not mean that they did two clusters in three hours, but that they did three hours of interviewing, drove to another cluster, and did three more hours.
And why do you think he would say something different to Nature than he would say to the BBC in response to the same question?
[Obtaining informed consent, implying both that the interviewee consents to the interview without feeling rushed or coerced and is adequately informed to make a rational decision, can according to her take more than 6 minutes alone. ]
Kevin, is this where we have ended up? Is it really? We started off breathing fire, brimstone and accusations of scientific fraud. Now we're quibbling about whether the disclaimers and data protection principles were read off the card correctly. It's all a bit of a comedown, isn't it? I'm sure that you will be able to work yourself up into a righteous rage, treating what will no doubt become "fundamental questions of research ethics" with the sensitivity of a maiden aunt, but really.
What are these "detailed questions" you're talking about, by the way? I'm just seeing age and sex for 7 family members, plus births and deaths. I just conducted a mock interview with the guy at the next desk to me and it took 3 minutes.
Robert Shone asked:
>Any more contributors of scientific wisdom to this "science" blog discussion?

Yes. Would you kindly add me to the "ass-plucking" category? Thanks.
Robert Chung wrote:
> Would you kindly add me to the "ass-plucking" category?
It does seem that most critics of MSB in this "science" blog fall into the "ass-pluckers" category.
dsquared wrote:
> What are these "detailed questions" you're talking about,
> by the way? I'm just seeing age and sex for 7 family
> members, plus births and deaths.
Funny, I thought they asked questions about deaths in a bit more depth than, say, the ILCS study. I thought the whole "problem" with the ILCS study (according to the Lancet authors) was that the questions about deaths were few and brief (in contrast to the Lancet study).
Try to keep up, Robert. Only (a maximum of) a third of households surveyed had any deaths at all. If the ones with no deaths only took about 5 minutes, the ones with deaths took nearly an hour. I thought we were agreed on this, viz my comment of March 6, 2007 04:13 AM and Kevin's reply of March 6, 2007 08:34 PM
You miss the point. Establishing whether there were any deaths which fit the criteria would seem to take more time depending on the level of questioning (contrast this with ILCS).
Guess what, the Australian ran the story as well.
It seems that unless one is talking about a significant discrepancy between the stated times for the interviews and reasonable interview times (reasonable to those who have actually conducted such interviews in a war setting), one is engaging in "disproof by incredulity".
Is this really what the criticisms of Lancet have come to?
This is particularly pathetic in the case of those Lancet detractors who have never even conducted such a survey interview, since they are basically pulling their "reasonable" estimate out of thin air.
JB wrote:
>they are basically pulling their "reasonable" estimate out of thin air

I think we're trying to standardize on the term "plucking it out of their asses."
Robert:
It is possible -- at least in principle -- (though, I agree, highly unlikely) that my terminology ("pulling out of thin air") and yours ("plucking it out of their asses") are not inconsistent.
Of course, if one has one's head up one's ass, then it is not likely that there is any room left for air (thin or otherwise).
..unless one is an "airhead", that is.
dsquared keeps saying "5 minutes" for interviews for households with no deaths, but that seems like pretty wishful thinking. The steps involved would seem to be:
1. Determining which house to sample next (apparently involving the field team estimating the "nearest front door" to the house they just did in some way, from a range of nearby candidate houses).
2. Walking to it, knocking on the door, and waiting for it to be answered
3. introducing themselves, presumably giving a brief explanation of their purpose and then getting invited inside
4. giving their "lengthy explanation" about the survey and how it works.
5. getting consent to do the interview
That's before you even get to any of the survey questions, and would itself seem to involve easily more than "5 minutes".
Then you have even the basic questions which would appear to include asking them to list about seven different household members, to list the birth dates and sex for each and to list every exit or entry from the house going back several years (ie, explaining moves to or from the house by those seven members, or by others who don't live there anymore).
Also, dsquared says that the houses with deaths (and therefore with a host of other questions and issues) could take 45 minutes, which may be a reasonable estimate, but they had about 7 or 8 deaths per cluster. If this is usually 1 per house, that would be over 5 hours just to do those houses. But his "5 minutes" for the no-death houses doesn't seem at all reasonable.
This is all before you get into whatever the heck they did in the morning to draw the sample. This would appear to involve driving around and within the entirety of the selected area to locate and make up a list of all of whatever they chose to call "main avenues" in the area and every street that intersected with those.
Subsequent statements claiming to have included "all streets equally" in the "random" selection process - in contradiction to the published account - would mean they would have had to locate and add to the list every street in the area, not just the "main avenues" or the streets intersecting with those, before they could even select a street. After they finally do get to select the street, it would appear to involve going the length of the street selected to enumerate all the houses on it so that they could "randomly" choose one house on the street to start with. Before this they also speak of delays with checkpoints and other things travelling to and from the clusters. How long does all this seem like it would take before they even get to knock on the first door? Maybe dsquared will say about 15 minutes.
This is all going on at the height of the summer in 130+ degree heat too, which suggests they would have to take at least a lunch break and other breaks for refreshments and things during the day to keep from passing out. The timing issues do seem a bit odd, as others have suggested.
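Since the whole dispute turns on per-step guesses, a small budget calculator makes the sensitivity explicit. Every input below is a placeholder - none comes from the study - and the point is only how far the totals move when the guesses move:

```python
# Wall-clock time for one 40-house cluster: two pairs interviewing in
# parallel; sample drawing, travel and breaks counted as shared overhead.
def cluster_hours(no_death_min, death_min, overhead_min,
                  houses=40, death_houses=7, pairs=2):
    interviewing = (houses - death_houses) * no_death_min \
                   + death_houses * death_min
    return (interviewing / pairs + overhead_min) / 60

print(f"dsquared-ish inputs: {cluster_hours(5, 49, 60):.1f} hours")   # 5.2
print(f"joshd-ish inputs:    {cluster_hours(15, 60, 120):.1f} hours") # 9.6
```

One set of guesses fits comfortably inside a field day, the other does not, which is why the argument keeps circling back to the unmeasured step times.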
On another issue, both Tim and the Times reporter seem to get the Nature article confused, thinking the issue is whether the teams broke into 2's. In the Nature article, the Lancet authors contradict themselves about breaking into 1's (and also, on a separate issue, contradict themselves about using locals to help them include obscure streets that would be excluded by their published account of their methodology).
Tim quotes something where Les Roberts says they broke into 2's but this is not the relevant quote. In response to people pointing this out, such as Kevin, Tim says "why do you think he would say something different to Nature than he would say to the BBC in response to the same question?"
Well, they seem to often give different answers to the same questions (like how they selected the houses), but in this case the answer appears to be that Giles was asking him a follow-up question and not the same question. According to Giles' article, what went on is:
1. Giles questions Roberts about concerns over time needed for interviews in the 2006 study. Giles puts to him that even breaking into 2's (even considering the kind of answer Tim quotes from many months ago), it is still a stretch to think they could accomplish all those interviews.
2. Roberts deflects this by saying that they actually broke into 1's to do the interviews.
3. Giles asks the guy in the field about breaking into 1's and he tells him that they never broke into 1's.
4. Giles asks Roberts about this and he responds that the field-team guy is right, and says that he was referring to the 2004 study when he said that they broke into 1's.
Joshd said: "The timing issues do seem a bit odd, as others have suggested."
I have a simple question.
Have you ever conducted a survey like the one in question, Josh?
If so, please tell us what it was for and whether the results were ever published.
I will assume no response to mean "no" -- ie, that you have never conducted such a survey.
Nice evasion JB. Tell it to James above, and to Hicks, who seem to see the same 'odd' things with the timings. Maybe in their case you should get to work thinking up some other way to evade the substantive points other than with ad hominem red herrings.
Good God, so we've moved on to trying to determine if the interviewers were lying about how much time it took to conduct interviews? And the best evidence for this is someone saying it would be "a stretch"?
I know a solution. Write your own similar survey, get on a plane to Iraq, conduct your own interviews and then report back. Any volunteers? Hello? Hell-ooooo?
Anything else is speculative and not, in fact, scientific.
The claim that "the interviews could never have been carried out in the stated times" implies one thing and one thing only: there was fraud involved.
Apparently, people like Josh would have us believe that it's all a conspiracy on the part of Johns Hopkins, MIT and the Lancet.
The researchers didn't really do the survey at all but made the whole thing up in a hotel room somewhere -- probably not even in Iraq, but at some resort in Hawaii or Bermuda.
The people they are talking about -- libeling, really -- are respected researchers at two of the best Universities in the US: MIT and Johns Hopkins.
It's absurd.
Josh,
I'm not the one doing the evading. You are -- and everyone can see that.
I asked a simple question: "have you ever conducted such a survey?" and you evaded it.
The fact that more than one person questions the timings is meaningless, because if one has never done such a survey, one is in no position to say how much time it took. None.
So, back to the simple question: have you ever conducted such a survey?
The bit I like best in Josh's description of the process is "getting invited inside". Is that really the way such things are done in America? I worked as an enumerator for a census years ago. I was invited inside just once, when an illiterate man needed me to complete the form for him.
This is a bit like the incredulity regarding the alleged difficulty of digging so many graves. People just don't seem to have much experience of digging holes, canvassing for elections, collecting for charity or even delivering things door-to-door.
JB, no, you're evading all the points I mentioned and not pointing to anything at all wrong with them. If there's something wrong with them, the way to go would be to say what it is.
You're doing the same thing many here do to evade inconvenient points: constructing an ad hominem in which the topic (be it a poll or an introduction or interview) is some kind of brain surgery that only a small handful of people could make any simple deductions about, or even be in a "position" to raise questions about. And then even this latter category is usually limited further to only those people who agree with Les Roberts and Tim Lambert.
I assumed your question to me was a rhetorical device, so I didn't answer it. I haven't done surveys but I've been interviewed in some. And I've also walked back and forth between houses and knocked on doors. I sometimes also have to introduce myself to new people. And I've also asked and answered questions. So where's the brain surgery?
But this is, as I said, irrelevant diversion. It's only ever the voices on this blog who disagree with its Party Line that ever get this kind of ad hominem evasion thrown at them. And this evasion is still more lame here because my comment was mostly just agreeing with things others have said in this very thread and elsewhere, to whom your ad hominem diversion can't be applied.
Talk about irrelevant diversions. Your entire "argument" ("Disproof by incredulity") falls in that category.
Not only that, as I said above, by questioning whether the surveys could feasibly have been carried out in the stated times, you and others are basically impugning the honesty of those who said they carried them out -- implying that fraud was involved.
That's a serious charge. If it were me, I would make damned sure that I had proof before even insinuating as much. I certainly would not engage in such innuendo if all I had to go on was some uninformed guess about how much time I think a survey should take.
JB, your claim about "disproof by incredulity" is disingenuous. Most of the Party Line about MSB here is "disproof by incredulity": ie, 'I can't believe there could be much bias', 'I can't believe they could have screwed up the sampling that bad', 'I can't believe that people would spend any less than 21 hours away from home each day', etc. etc.
If you have a problem with these kinds of arguments, then all positions on things like MSB in either direction are "without merit", whether arguing for a zero or small bias factor or a large one. So we'd be just left with this issue here where there's this obvious bias in the sampling scheme favoring certain areas over others, leading to a sample that is not random, and nobody can say how much this might bias the results, but we know that the published estimates and CI's all rest entirely on one of these 'ass plucked' speculations (zero bias factor).
On top of this, the authors repeatedly change their story in ways that contradict each others' accounts and their published account, but all purport to "refute" any speculations but those speculations on which all their estimates rest. If you're worried about people making "charges", which I don't think anyone has done, perhaps less dissembling would help.
Kevin Donoghue: This study was a different animal entirely from census taking. People participating had reason to fear for their lives simply by participating. The surveyors had to obtain informed consent from each person without them feeling rushed or coerced, making sure they understood the purpose of the survey and the lack of individual identification. The situation was very much unlike taking a census poll on a doorstep. If they were performing this on a doorstep, they would not have been able to ensure, even to the limited degree they did, that the participants were being honest. Even so, the whole enterprise was so risky that one of the 2004 authors refused to participate in 2006 on the basis of risk to the interviewers. How does this cohere with your census taking? Nicely congruous?
Double D: Perhaps you started off full of brimstone and scientific fury. I started off puzzled, dubious and vaguely frustrated with the quality of argumentation supporting the Lancet study, and that's about where I am now.
The detailed questions they asked, in addition to age, sex etc.:
"Respondents were also asked to describe the composition of their household on Jan 1, 2002, and asked about any births, deaths, or visitors who stayed in the household for more than 2 months. Periods of visitation, and individual periods of residence since a birth or before a death, were recorded to the nearest month. Interviewers asked about any discrepancies between the 2002 and 2004 household compositions not accounted for by reported births and deaths. When deaths occurred, the date, cause, and circumstances of violent deaths were recorded."
Maybe it's just me, but this would require a fair amount of thinking and recall on my part, and looking at calendars, checking records and the like. Now if the interviewers didn't care what quality of data they received, and I grant that is absolutely possible, then perhaps their subjects rattled off answers without a second's hesitation. Still, especially in households of 7 or more, I can see that tracking all the ins and outs of uncles, cousins, births and deaths would take some thought. Also, Hicks points out many replies would not be in the form of facts, but stories, which again would draw out the time.
I personally think research ethics on informed consent are tripe except for the real risk posed to the subjects of the interviews. That and if the Lancet didn't bother to follow professional ethics it would probably be a bit hypocritical and not very well received by their peers.
Tim,
Why should I expect that Roberts gave incoherent answers to different sources? Maybe because he gave incoherent answers to a single source? Maybe because I like relevant proof when facts are in dispute?
JoshD summed it up nicely; your reply wasn't relevant to my point.
On a follow-up about the feasibility of only 4, rather than the even more improbable 2, groups successfully completing the 2006 survey, Roberts replies to Nature that they went singly. On being apparently contradicted by one of his interviewers, he replies that he meant they went singly in 2004, not 2006. Either Roberts was ignorant or dissembling, or he bizarrely decided to trash his own case by stating that in 2004 he worked with 2 more teams than he had in 2006, despite being questioned about the improbability of the 2006 team doing what he claimed they did even with 4 teams.
Why reply about 2004 when questioned about 2006, and why reply that he had more interviewers in 2004 if he was attempting to prove 2006 was feasible despite further doubts? How odd to assuage doubts about too few interviewers by pointing out he used 50% more groups in a previous study. Why change interview methodology from 1-person teams in 2004 to 2-person teams in 2006? Why not mention this in the report itself; in fact, why report "Sampling followed the same approach used in 2004, except that selection of survey sites was by random numbers" if another change in sampling [which includes data collecting] was a switch to two-person groups? Why specify a male and a female in each team in 2004 when they were being split up?
Any way you slice it, the interpretation you are striving to defend is incoherent as put forth and an accurate but irrelevant quote from a source other than Nature doesn't help explain why Roberts asserts single interviewers in reply to a follow up question about 2006, why the report claims no change in sampling, why switch team numbers at all, etc.
Even Roberts is non-responsive to the question asked in your 'exact quote from Nature except from another source than Nature.' He is asked by the BBC about Hicks' critique of his 2006 report and replies with a non sequitur about 6 individuals' performance in 2004. Maybe he just utters a lot of non sequiturs?
Overall, when I compared the Lancet article, its disclosure and methods, the Lancet job seems not very well fleshed out compared to another Iraq survey. The UNDP publication on Iraqi living conditions actually listed mean and min/max ranges [how unprecedented!] for the times their interviews took in the body of the report, rather than spouting half-assed "it took about twice as long as some studies I did elsewhere" or "about 20 minutes" in reply to skeptics long after publication. Plus, Hicks stated she doubted 15 minutes was sufficient for even a short questionnaire, and was very doubtful about one on violent death. Isn't it part and parcel of surveying to take note of the length of the interview? Of course it is. Interview length appears to practically be its own subfield in the science of surveying. But getting a straightforward answer on this topic out of the Lancet bunch is like pulling teeth.
You're completely correct that I shouldn't have assumed he meant 2 clusters in three hours. Still, about twice as long as seven minutes is fourteen minutes [or are we using the new math?], and since about 3 hours, 2 interviewers and 30 houses each equals about 12 minutes, then 12-14 minutes, let's say 13 per house in 2004, and Burnham stated about 20 per house in 2006. I note that neither Roberts nor Burnham, although Hicks does, makes the entirely reasonable allowance that you and dsquared have made that houses without deaths would take less time and the others longer.
If one 2006 team of four split into two subteams does 40 houses at 20 minutes apiece, we get 6 hours 40 minutes of solid interviewing per subteam each day, without factoring in picking your spot, travelling to the place, finding out who is home, determining if it is risky, going door to door, waiting for an answer, breaks, longer emotional-breakdown interviews, etc.
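A quick check of that figure, under the same assumptions (one cluster of 40 houses per day, two subteams, Burnham's ~20 minutes per interview):

```python
# Back-of-envelope workload check (assumptions as stated above).
houses_per_day, subteams, mins_per_house = 40, 2, 20
solid_mins = houses_per_day * mins_per_house / subteams  # per subteam
print(solid_mins / 60)  # ~6.67 -> about 6 hours 40 minutes of interviewing alone
```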
Your estimate of an easy 10 minute interview coheres with *none* of the published replies by the professionals on this set of studies, neither Burnham, nor Roberts, nor Hicks.
Even in houses with no deaths, they have to go through physically moving from place to place, finding people at home, getting inside, setting up, explaining their purpose, getting informed consent from individuals and questioning individuals in these decently populated households about a variety of facts on inhabitants with specific dates, concluding the interview, gathering their material and leaving.
From what they've written and implied, it doesn't sound like doing an interview in someone's doorstep was desirable or fit their protocol. According to Hicks "The time required for these tasks should not be significantly shortened by the authors' use of "word of mouth" in the community about their survey since epidemiological researchers are expected to explain their study and to obtain informed consent directly from every interviewee on an individual basis, as well as securing privacy for household interviews so that interviewees' refusal or agreement to participate, and their answers, are kept confidential."
To maintain the ethical standards they claimed, they had to go inside and talk to the head of household or spouse to get informed consent and then do the interview with them in a reasonably private place.
And the more of Les Roberts' statements I read, the less impressed I am with his general credibility. He is a biased and vitriolic debater. His statement that he has rushed the results of every mortality study he's ever done, rarely taking more than a week to publish [since death is so critical], is rubbish.
In fact, death is critical enough he should take a little time and check his work rather than rushing and making mistakes like conflating death and casualty.
His statement that all producers of mortality studies dislike the cause of their study seems like a facile way to dismiss his anti-war stance as a source of doubt on the results of his study. I don't think anyone would think of investing some personal animus against malaria [unlike his apparent feelings toward the Iraq war] in their epidemiology study, so his claim hardly seems relevant.
Kevin, if you require "a lot of thinking and checking calendars" to work out who lived in your house two years ago, then I put it to you that you live in a quite atypical household (perhaps a commune or squat?). In my case, the answer would be "my family, same as today" and I suggest that this is typical.
Come on everyone, this isn't cold fusion, it's an experiment you can try at home. I did it and reported the results above. It took me three minutes to ascertain from the guy at the next desk to me a) the composition of his household b) how it had changed since 2004 and c) whether anyone had died in it. That leaves two minutes for the sequence of 5 steps which Josh enumerates above, which I perhaps carelessly summarise as "walking next door".
Along with much else, Kevin wrote this: "If they were performing this on a doorstep, they would not have been able to ensure, even to the limited degree they did, that the participants were being honest."
How does going indoors ensure greater honesty? It might if the local militiamen are actually watching the proceedings but in that case closing the door would probably arouse their suspicions, increasing the risks for all concerned. I quite agree that the Iraq survey isn't like census taking, but if you think respondents to a census don't have concerns which need to be addressed you are mistaken. The point is that people with a job to do work a lot faster than you think and they don't usually hang around having cups of tea in every house. The fact that doing this kind of thing in Iraq is risky is all the more reason to get it done quickly.
Apart from Josh's say-so, have you any reason at all to believe that many of the interviews were in fact conducted indoors?
Kevin Donoghue wrote:
I think we're trying to standardize on the term "plucking it out of their asses."
If the Lancet study is so flawed and unreliable as those trashing it say, why are you making these weak and tortured arguments about how long interviews take?
If the interviews did take longer, then you are accusing the interviewers of filling out questionnaires on their own. With no real evidence except "Gosh, it seems to me, sitting in front of my Dell in a nice safe warm building, that those assholes over there didn't have enough time to do this survey, even though I've never conducted a survey, particularly one in Iraq."
It's not a compelling argument.
Josh said: "JB, your claim about "disproof by incredulity" is disingenuous. Most of the Party Line about MSB here is "disproof by incredulity":
No, just your arguments and those of a few others on this blog and elsewhere (eg, Ahuja).
There is a reason that people like Les Roberts spend years at University learning how to carry out scientific studies -- and the word "scientific" actually means something.
If you really are under the impression that science is all about sitting around speculating -- and repeating "I can't believe" -- about this that and the other, then I suggest that you take a science class at the local community college, because nothing could be further from the truth.
Kevin Donoghue wrote:
> have you any reason at all to believe that many
> of the interviews were in fact conducted indoors
I've asked Gilbert Burnham about this indoor/outdoor thing, but no reply so far (I think he's currently in Afghanistan). In the past I asked him about the sampling method used to select minor streets, but received only a repeat of the assertion that they included "all" residential streets (without providing anything on the selection process). Why is everyone having to speculate on issues which are fairly fundamental to assessing the study? Why don't the authors just release these details on a website, rather than letting them trickle out in inadequate and contradictory fashion in answers to journalists, etc? They had "space" constraints in the Lancet journal. There are no space constraints on the web.
Robert Shone asks: "Why is everyone having to speculate on issues which are fairly fundamental to assessing the study?"
Whether interviews took place indoors isn't fundamental to anything, except in the sense that Josh seems to have drawn that particular assumption from his fundament. (I hope the other Robert will settle for that formulation.) Nobody has to speculate about such matters. Frankly I'm surprised that you are pestering Gilbert Burnham with such trivia. The man has work to do, you know. I too would like to know more about the procedures followed by the interviewers, particularly how they handled the selection of streets; but I can readily see why indulging my curiosity isn't a high priority at JHU. And hounding a man with downright silly questions is no way to persuade him to be more forthcoming.
"hounding a man with downright silly questions is no way to persuade him to be more forthcoming."
The Lancet authors' reluctance to talk to every Tom, Dick and Harry who emails them is not surprising.
It is also not surprising, therefore, that Josh and others are making their "case" on this and on other blogs. It is the only place where they will ever get a forum (for which they should be thanking Tim Lambert).
No researcher or journal editor in his or her right mind would ever take such "I can't believe it" speculations seriously. It's not science.
I suspect that the researcher/editor would have a good laugh before they threw the speculation in the circular file though.
DD: You are forgetting that the average household in this study contained 6.9 inhabitants, much closer to a 'commune or squat' than Mom, Pop and 2 kids. I am going to submit that larger households are more prone to change than you are allowing, especially in the middle of a guerilla war.
Moreover, in my case the answer would not be "my family, same as today". Family members have moved in and family members have died. If you asked me exactly how many members composed my household on Jan. 1, 2004, I do know, and would know pretty fast, because January 1st is New Year's, the last three years have been turbulent, and Christmases have been important to us. But I don't think most Iraqis celebrate Christmas, and January 1st isn't their New Year; it's just the first day of January.
I could certainly dash out a number if you asked me for some random date, if accuracy wasn't an issue, but without an old calendar and a bit of thought I wouldn't know the exact date they moved in or died [and likely wouldn't know it after]. I am lucky enough when I recall birthdays and anniversaries, much less dates of immigration and death. Maybe your house is just atypically boring.
It certainly isn't that complicated, but unless you want crap for results you had better give people time to give you a correct answer. Why shouldn't the goal of the Lancet have been to give respondents as much time as needed, and access to whatever journals, datebooks, family bibles, whatever, to give as accurate an answer as possible, rather than aiming for a 5-minute in-and-out job [which in itself doesn't seem very likely in the face of obtaining informed consent and doing the interview in private]? Several of their questions are time-dependent: has X been here for three months, what date did Y die, was it less than Z time period after he or she moved in, etc. Unless their data was irrelevant to them, which I grant is entirely possible, or few households experienced any change, which is just false, this should take more time than 5 minutes door to door, plus informed consent, judging personal safety and making it private.
Kevin Donoghue:
> hounding a man with downright silly questions is no
> way to persuade him to be more forthcoming
Please don't accuse me of "hounding" people. Burnham doesn't regard my few, brief queries as "hounding". You're starting to sound like the hysterical types over at MediaLens, with respect.
[Why shouldn't the goal of the Lancet have been to give respondents as much time as needed, access to whatever journals, datebooks, family bibles, whatever, to give as accurate as answer as possible, rather than aiming for a 5 minute in and out job [which in itself doesn't seem very likely in the face of obtaining informed consent and doing the interview in private]? ]
Kevin, are you aware that there has been a war in Iraq recently? It is quite important to the study.
James said: "I suspect that faced with a difficult and dangerous task, they simply made many of the interviews up."
...and what concrete evidence do you have for that, James, good fellow?
You claim to "do surveys for a living" but your low standard for proof (or in this case disproof) indicates otherwise.
Stating that "They could not have done the surveys in the stated time" is not proof -- not even close.
I am curious. What is your education/training in the area of surveys? Surely, you are not doing them for a living sans education/training, are you?
I wrote: "hounding a man with downright silly questions is no way to persuade him to be more forthcoming."
Robert Shone responded: Please don't accuse me of "hounding" people.
Fair enough, "hounding" was out of order; I apologise. I'm sure the tone and manner of your questioning was as polite as one could wish. But the question itself - whether interviews were conducted indoors - isn't at all pertinent. It's quite possible that Burnham doesn't know. There is no obvious reason why he should have stipulated that interviews be held indoors.
JB,
Why aren't questions about sampling methods [how'd you pick streets?] and survey methods [indoor or outdoor?] 'science' when considering the validity and possible biases of a type of survey? Why aren't they relevant when attempting to guesstimate the length of interviews, given that the answers provided by the Lancet authors are contradictory?
It's also funny you say that when precisely those types of "I can't believe it" questions from Hicks were chosen for publication by the Lancet's editors. Are you equating the Lancet with the circular file, or were you just making an unfortunately ironic point?
DD:
Funnily enough, I was aware of that. I am also aware of the study by the UNDP in Iraq that used 500 interviewers with an average time of 82 minutes and a max time of something like 106 for their survey, with people going back to those houses when the supervisors thought points were unclear.
Kevin Donoghue:
The reason for indoor interviews, as asserted by Hicks, would be to prevent any perceived coercion and keep both interviewer and respondent safe. Important points in a guerilla warzone.
Kevin Donoghue wrote:
> But the question itself - whether interviews were conducted indoors
> - isn't at all pertinent. It's quite possible that Burnham doesn't know.
Burnham mightn't know the answer, but that doesn't make it a trivial question. It has implications for both the duration of the interviews and also the response rate (and a few other things). And it's an easy matter for Burnham to clear up - a simple "yes"/"no"/"a mixture"/"don't know" would suffice.
Huh, my mother died on May 5th, 1983, and I'm unlikely to ever forget that date until Alzheimer's sets in.
You really don't remember when family members living with you have died?
Iraqis are reputed to be family-oriented people. What makes you think they'll forget when a family member dies? Or moves in or out of their house, for instance?
Kevin
You missed my point, so let me re-re-state it (AGAIN): concluding that the "survey could not have been carried out in the stated times" (and implying that there was therefore fraud involved) based on little more than uninformed speculation about what a reasonable time might be -- as you and others have done here -- is not science.
If you believe otherwise, perhaps you might send your critique of the Lancet study off to the editor of a scientific journal and see if they will publish it.
Be sure to include your statement that "the more of Les Roberts' statements I read the less impressed I am with his general credibility. He is a biased and vitriolic debater." I'm sure that will convince the journal editor to publish your critique of the Lancet study if nothing else will.
JB, Kevin's currently a little tied up single-handedly rewriting the textbooks on climatology and economics on the "Rightwing bloggers" thread.
Give him a day or so to finish that and polish his Nobel acceptance speech(es) and I'm sure he'll be only too happy to do the same for statistics and epidemiology.
Kevin wrote: The reason for indoor interviews, as asserted by Hicks, would be to prevent any perceived coercion and keep both interviewer and respondent safe.
As I noted earlier, if the local militiamen are watching you, you don't ensure your safety by inviting strangers into your house and closing the door so that you can't be observed. Hicks doesn't seem to be the sharpest pencil in the box. If I wanted advice on how to survive in Iraq, I think I'd turn to a Baghdad physician before I'd ask a London psychiatrist. YMMV of course.
JB:
My speculations about some apparent inconsistencies are not my final judgement, nor have I claimed or implied they are. Some of the inconsistencies are likely only apparent, some are maybe just things I don't understand but which may make sense to epidemiologists, and some are just outright inconsistent. That's why I ask questions rather than calling the Lancet study fraudulent.
I listed most of the problems I have; if you can clear them up, have at it. If not, stop trying to imply I am libelling the study authors by noting that about 11 and 3/4 is not equal to about 20, and try to figure it out.
Questioning the validity of a study is logical and reasonable, which is necessary if not sufficient for science. Asking why Burnham's and Roberts' statements on interviews are inconsistent speaks to the validity of the study. For my part, I don't know if the inconsistency is due to misstatement, ignorance, fraud or some other reason. I know that 20 minutes is not 11 and 3/4 minutes; I have yet to find out where and in what context Burnham gave the figure of 20 minutes. I know the interviewers collectively either did or didn't ask for local help identifying dispersed clusters of homes. Nature asserts the interviewer they interviewed disagrees with Burnham and Roberts on that. You should be curious about why, as I am.
JB,
I have relevant tertiary qualifications in social psychology and statistics. I've been involved in social and market research for nearly 30 years. I've conducted thousands of surveys, many involving door-to-door cluster sampling. What about you?
I do not find the fieldwork as described believable. I also agree with comments by Kevin about average interview length. This is an opinion. I have no proof of anything and neither do you.
James,
I am a scientist by training (physics) but, unlike your background, mine is irrelevant in this case (as is the question of whether I have proof) because I am not the one implying that "the surveys could not have been carried out in the stated time" (and that there was therefore fraud involved).
The fact is, I have not passed judgment on the Lancet study, other than to point out that the authors are certainly well qualified and the Universities where they work well respected.
You readily admit that you "have no proof" and yet feel free to make statements like "I suspect that faced with a difficult and dangerous task, they simply made many of the interviews up."
Sorry to say it, but that ain't science. Not even close.
JB, you're not listening. I didn't say it was "science", I said it was an opinion.
As Kevin points out, there are inconsistencies in the details of the fieldwork as described by the authors. That's never a good sign, although perhaps not surprising in that the study authors were not even in the country.
It's interesting that in the appendix, the authors describe the difficulty and danger of the operation, including interviewers being detained at roadblocks en route to the location, and that
"lengthy explanations of the purposes of the survey--and that it would help the Iraqi people--were necessary to allay fears."
That doesn't sound like an environment where one would expect quick interviews and a 98% response rate.
I think that we have all missed a trick here. I therefore withdraw every single argument I have made based on mathematics, statistics, quotations from the study or geographical and demographic facts about Iraq and substitute the single assertion:
"It seems to me that the the death rate in Iraq massively increased as a result of the war".
You will notice because of my helpful use of the bold font, that I have now said "it seems to me" and therefore cannot be gainsaid.
James, several times you have stated that the 98% response rate as well as the average interview times claimed by Roberts and Burnham are implausible. Do you also think that the 98.5% response rate and 80 minute average interview times claimed by Pedersen for the ILCS survey are implausible?
Here lies the problem -- opinions get bandied about and some people end up taking them as fact. So when are the Lancet sceptics going to do their own study, using whatever methodology they prefer?
Gilbert Burnham has informed me that "all" interviews were conducted on the doorstep. For "security reasons" the teams didn't enter homes.
They aren't. It's much easier and much more fun to attempt to poke holes in published work rather than bother publishing yourself.
It's armchair, op-ed science.
For "security reasons" the teams didn't enter homes.
I am gloating quietly. My faith in the common sense of Baghdad physicians was well founded.
Kevin Donoghue wrote:
> I am gloating quietly.
I'm glad that the answer to my "pointlessly silly" question (as you characterised it) had some use.
Robert Shone wrote:
Thanks for reporting the response -- though I had suspected that to be the answer (and in fact that the indoor/doorstep thing was just a red herring), it's always good to get confirmation of one's intuition.
Since you went to that trouble, let me clarify why I think no professional thinks MSB is bogus but not many think an overall bias factor of 3 is reasonable.
As you know, the main methodological difference between the 2004 and 2006 sample designs was in how the starting house was chosen. In 2004 it was by GPS, in 2006 by qualifying main street. Spagat and Johnson rightly point out that the estimated violent death rate from the 2006 study was about 3x higher than in the 2004 study -- but the problem in this case isn't the violent mortality rate, it's the corresponding non-violent rate, which decreased in just the right proportion so that overall mortality from all causes for equivalent periods was almost the same between the two studies.
So demographers and epidemiologists, especially those who specialize in cause-specific mortality, ask themselves which of these hypotheses makes more sense: that a main street bias inflated the violent death rate by a factor of about 3, with the non-violent rate coincidentally falling by just the right amount to leave all-cause mortality unchanged; or that all-cause mortality was measured consistently in both studies, and the apparent shift mostly reflects how deaths were allocated between violent and non-violent causes.
Note that the last hypothesis doesn't imply that there is no MSB, only that it needn't be as large as 3. Furthermore, demographers and epidemiologists are very much used to seeing reporting ambiguities in the allocation of cause of death, even in countries where data systems are good. Given these options, most professionals lean toward the last hypothesis. And that's the basis for my statement that no professional thinks MSB is bogus, but no professional (that I know) thinks an overall bias factor of 3 is credible.
BTW, to the extent that Spagat and Johnson's hypothesized MSB doesn't apply to the 2004 study (and it doesn't), it tends to affirm the earlier estimate.
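To see why the constant all-cause rate carries so much weight here, a toy illustration (the numbers are invented for clarity and are not taken from either study): reallocating deaths between the violent and non-violent columns can triple the violent rate without changing the total, whereas a genuine 3x oversampling of violent deaths would inflate the total as well.

```python
# Toy cause-specific rates, deaths per 1000 person-years (illustrative only).
rates_2004 = {"violent": 3.0, "non_violent": 7.0}  # total 10.0
rates_2006 = {"violent": 9.0, "non_violent": 1.0}  # total 10.0

print(rates_2006["violent"] / rates_2004["violent"])        # 3.0x violent rate
print(sum(rates_2006.values()) / sum(rates_2004.values()))  # 1.0, all-cause flat

# If instead violent deaths were oversampled 3x while non-violent deaths were
# measured correctly, the all-cause total would rise too: 7 + 3*3 = 16, not 10.
print(rates_2004["non_violent"] + 3 * rates_2004["violent"])  # 16.0
```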
Robert wrote:
> most professionals lean toward the last hypothesis.
Robert, you often claim to know what "most professionals" think. Given the number of people that fall into the category "professionals", I think you're generalising to an absurd extent. I also think you're missing the point on MSB. Whether or not you (or "most professionals") agree with one outcome (of plugging values into the MSB equation) doesn't alter the fact that the Lancet team have (to date) failed to demonstrate how they prevented any main street bias (ie they haven't supported their claim of zero main street bias in their study).
We're left with one story on sampling methodology -- the account published in the Lancet (the "main street" selection process). But Burnham and Roberts claim that "all" houses were included in the sampling method -- in contradiction to what is published. This is the fundamental issue which MSB raises and which so far hasn't been addressed.
Robert Shone wrote:
Well, that's probably because you're unaware of how concentrated the core of this field is, and how much we talk to each other. I will simply say that I don't know anyone who thinks MSB has an overall effect of 3, and I'm reasonably sure that if there were a substantial number of those people, I'd have run across them.
No, I haven't missed that point. I've always said that MSB is something to worry about. However, both by training and by selection our field tends to be pretty pragmatic. What I've been explaining is why you'll have a hard time finding any professional who thinks MSB is a dominating problem.
Robert Shone wrote: I'm glad that the answer to my "pointlessly silly" question (as you characterised it) had some use.
Indeed; and many thanks for posting it. I think the reason for my characterisation of the question will become apparent if you now consider the obvious follow-up question: what difference does it make?
Robert wrote: Spagat and Johnson rightly point out that the estimated violent death rate from the 2006 study was about 3x higher than in the 2004 study....
Not if we take account of the Fallujah cluster in the 2004 study. Watching Burnham's recent presentation at MIT, I noticed a slide showing that the 2006 study includes a few clusters which are exceptionally violent. This brings to mind dsquared's "minefield" analogy from 2004 (link below). Could it not be the case that in 2004 they hit just one big "mine" and (sensibly) treated it as an outlier, but in 2006 they hit a few smaller "mines" - clusters which pushed up the average violent death rate, but which were not bad enough to be treated as outliers?
http://crookedtimber.org/2004/11/11/lancet-roundup-and-literature-revie…
Robert wrote:
> if there were a substantial number of those people,
> I'd have run across them
With this kind of backwards logic, you sound exactly like MediaLens. Your claim of overwhelming support for your views among "professionals" has mutated into: "Well, if anyone disagrees with me, I haven't run into them". So let me put it this way: how many "professionals" (whom you claim share your verdict on MSB) can you name?
Kevin Donoghue wrote:
> what difference does it [doorstep interview] make?
Well, the notion that Lancet 2 came up with a vastly higher figure than ILCS due to differences in the ways the interviews were conducted (Lancet focusing solely on death; ILCS not) looks to me even more implausible given the idea of these brief interviews conducted on the doorstep. 5-15 minutes on the doorstep, from a fresh start with total strangers, yielding a deaths figure four times higher than a question about deaths in the middle of a 90-minute interview, after the interviewees are presumably a bit more settled, etc. How would that work?
Actually Sean Gourley has said the ratio of violent deaths between L1 and L2 is 2.4 -- not 3.
I've explained in the thread below (based on some info obtained from Les Roberts) why I believe the ratio should only be 1.5
http://www.medialens.org/forum/viewtopic.php?t=1949&sid=4303fbe8842a784…
One key reason is that GPS was not used in L1 or L2 for the Anbar governorate. Anbar needs to be taken out of L1 and L2 to compare the violent death tolls produced by the different methodologies.
At any rate, the methodology was intended to measure deaths from all causes -- which is the point I believe Robert touches on above.
Also, I can't help noticing how people in this thread assume the accuracy of Nature's paraphrasing of what Roberts said is beyond question.
Regarding Robert Shone's question about ILCS etc.:
If someone had been asking me questions for 90 minutes the desire to get them out of the house would be pretty strong. Clearly if I "forget" the death of poor old Ahmed there will be no supplementary questions about the cause of death; we move right along to the next page of the questionnaire. AFAIC a one-topic interview, which probably won't last long even if there have been several deaths in the family, is much less of an imposition.
Anyway unless I've missed something we don't know how many deaths in total were recorded by the ILCS; we don't know how many in categories other than infants, maternity-related and "war related" deaths; and we don't know what that particular term means. We don't know very much about when the deaths took place or about their geographical distribution - if we knew those things we might have some insight into the Iraqis' interpretation of "war related".
Unless the UNDP can tell us a bit more I can't see much point in trying to reconcile ILCS figures with Lancet 1 or Lancet 2.
Robert Shone asked:
Hmmm. Haven't counted. But here's the thing: there are actually very few places where PhDs are trained to do this kind of research in third world countries, and I've been teaching at one of them, I split my time with a second, and I used to work at a third. Plus, I've been in this field for a pretty long time. Because I have either trained, trained with, trained under, or worked with so many people I think I'm modestly qualified to know what goes on in the field. Now it's your turn. How many demographers, epidemiologists, or biostatisticians who have been trained to do this kind of research in third world countries do you know who think that MSB has an overall bias factor of 3? 'Cuz I haven't run across any.
Kevin Donoghue wrote:
Very possibly. It's not likely that mortality was a rising tide that uniformly lifted all clusters. There was actually quite a bit of variation across clusters in the 2004 study, even excluding Falluja.
Kevin Donoghue wrote:
Yeah, this is what I've been saying ever since I got a good look at the ILCS survey. They're too different. The only person who seems to be pushing the comparison is joshd, and he's pushing it 'cuz it's about all he's got.
Robert Shone wrote:
So let's see: if Burnham had said they conducted the interviews inside, it would mean that his study was unreliable, but since he said they conducted the interviews on the doorstep, it means that his study was unreliable. Hmmm.
Robert, I've never seen such crass credentialism as your posts consistently express. You never back up your endless assertions that your views are supported by "most professionals". I'm asking you to name these anonymous "professionals" whom you repeatedly invoke to buttress your claims. And don't bother reflecting the question back at me, because unlike you I'm not claiming support from some mass of anonymous Experts. I wonder: have you been taking lessons from the MediaLens School of Appeal to Authority? It sure sounds like it.
Kevin Donoghue wrote:
> I can't see much point in trying to reconcile ILCS
> figures with Lancet 1 or Lancet 2.
The point is that both Les Roberts and Gilbert Burnham do make the comparison - they attempt to explain away the huge discrepancy with this point about focus of interview questions. And to use Robert's line of "argument", how many "professionals" have you "run across" who think the difference in interview question focus would account for the vast discrepancy in the way that Roberts and Burnham claim? Go ahead and name them...
Robert wrote:
> if Burnham had said they conducted the interviews inside,
> it would mean that his study was unreliable but since he
> said they conducted the interviews on the doorstep, it
> means that his study was unreliable.
You're confusing two separate issues - 1) whether the account of the interviews is plausible at all; 2) whether the brief doorstep interview chimes with the Roberts/Burnham contrast with ILCS (supposedly explaining away the huge discrepancy with ILCS).
Robert Shone:
The interviews happening on the doorstep certainly makes the timeline much less problematic, but how is this supposed to control for potential coercion or peer pressure? The point Hicks was making was that you can't control for these without getting the respondent away from external observation.
From the photos on the UNDP website, it would seem the ILCS survey took place indoors. I wonder why security concerns weren't a concern in that instance?
Did Burnham happen to mention why his estimate of the interview length was 20 minutes against about 12 for Roberts?
James: Is there any quantitative means of controlling for the possibility of bias from coercion? Is Hicks' assertion correct? Would the interviews have gotten more accurate results if conducted privately?
There are legitimate (statistical) means for discovering inconsistencies within the results of a study (if not outright scientific fraud -- as some have implied the Lancet study involved).
But alas, they involve more than mere speculation about how much time a survey should/would have taken. As with all legitimate means of evaluating the study results, one has to actually analyze the survey results (if you can imagine that).
I asked Roberts and he says he agrees with Burnham about the interview length: 15-20 minutes.
Robert Shone wrote:
Oh dear. Robert, over the last several months we've had conversations not only on Deltoid, but I think on two (possibly three) other sites. Why haven't you expressed before that you've never seen such crass credentialism as my posts consistently express? Could it be that your claim is, oh, I don't know, maybe a tad hyperbolic?
Robert Shone also wrote:
No, I don't think I'm confusing those issues at all; in fact, they're essential to the interpretation of your behavior. I'm pointing out that whether Burnham's answer was "indoors" or "on the doorstep", you would take it as questioning the study results. This is not the behavior of an open-minded man.
Robert, it's time for you to take a little step back and calm down. You're hurting yourself.
"Robert [Shone], it's time for you to take a little step back and calm down."
That's also my (entirely unprofessional) prescription. When somebody tells you why scientists hold a particular belief, that's not an appeal to authority. Or if it is, my copy of Essential Cell Biology is just one long appeal to authority. Try looking at the argument on its merits.
Similarly, when I tell you why I'm not much bothered by the ILCS, there's no point in telling me that Burnham and Roberts take it very seriously. When I see them present a good case for worrying about it, I will. Roberts says that Pedersen's criticisms bother him more than anyone else's. Well, that's his problem. I wouldn't know Pedersen if he showed up on my doorstep.
AFAIC the "huge discrepancy" you keep going on about doesn't exist. How many war-related deaths (as per ILCS) are there in the Lancet samples? I don't know and I don't believe anyone else does either.
Tim,
I'm glad Roberts and Burnham can finally agree on the interview time. Why did Roberts' last claim about his 4 true teams being in the field for 2 hours a day for 49 days contradict this most recent claim? 11.78 minutes is very unlike an average of 15 to 20.
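For what it's worth, here is where the 11.78-minute figure appears to come from (my reconstruction; the ~2,000-household denominator is an assumption on my part):

```python
# Reconstruction of the ~11.8-minute figure (assumed inputs).
teams, field_hours_per_day, days, households = 4, 2, 49, 2000
minutes_per_household = teams * field_hours_per_day * days * 60 / households
print(minutes_per_household)  # 11.76 -- the "about 11 and 3/4" figure, vs 15-20
```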
Kevin wrote:
> When somebody tells you why scientists hold a
> particular belief, that's not an appeal to authority.
Right. But when somebody repeatedly claims their views are supported by "most professionals" (without backing that up with evidence - names, details, etc) - then that is appeal to authority. I suggest it's also an appeal to authority to boast about one's own credentials whilst remaining anonymous (so perhaps Robert would like to unveil himself, so we can corroborate his claims of expertise).
And the advice to "calm down" is, with respect, somewhat rich coming from people who accuse the MSB authors of "plucking" their assumptions out of their "asses" (and I say that with a compassionate wink and a smile, just so you don't falsely attribute "uncalm" emotional states to me) ;)
...so perhaps Robert would like to unveil himself....
He has, several times. You really don't pay attention, do you? In any case, the reasoning he attributes to "most professionals" is sound enough, whether it is as widely held as he claims or not. Perhaps you'd like to respond to it, instead of bitching incessantly about invalid forms of argumentation while resorting to them yourself at every turn?
I say, in all serenity, that the MSB authors plucked their assumptions out of their asses because they presented no valid arguments in support of them. See Tim Lambert's critique, which you may have noticed they did not respond to.
If you now reply with an attack on Tim Lambert's credentials, interlaced with expressions of disdain for credentialism, that won't surprise me in the least.
Kevin Donoghue wrote:
> He has, several times. You really don't pay
> attention, do you?
I don't read everything on this blog, no. Perhaps someone could direct me to where he provides the relevant information (to save me trawling through millions of words). Thanks.
Kevin, Roberts didn't say that they were in the field for two hours a day. He said that if you worked out the total person-hours devoted to the project, there were two person hours per household.
Kevin Donoghue wrote:
> Perhaps you'd like to respond to it, instead of bitching incessantly
I responded, at length, some time ago. Here's an excerpt:
The "n=10" is based on representations (eg graphically on Iraqi street maps) of the street selection scheme published in the Lancet (plus the Lancet authors' additional detail about spilling over into side streets). The only assumption here seems to be regarding the Lancet authors' definition of main streets as "major commercial streets or avenues". The MSB team considered conservative and liberal interpretations of this. The result is illustrated by the maps they've published (which, to me, depict clearly that n=10 is reasonable, even for liberal interpretations of "main street").http://www.rhul.ac.uk/economics/Research/conflict-analysis/iraq-mortality/Iraqmaps.htmlI don't see any "wild" assumptions here. On the contrary, they've merely assumed that the survey was indeed conducted according to the description of it provided by the Lancet authors. The Lancet authors' assertion that all streets were included in the selection process (which would result in n=0) flatly contradicts the methodology as published. The Lancet authors could instantly clarify this issue by providing the following (so why haven't they?):(a) The list of main streets from which they randomly sampled.(b) A full and detailed description of exactly how they sampled streets not connected to a main street.Until they do so, anyone reviewing the Lancet study must rely on what the Lancet authors have so far published. This is what the MSB team have done.Moving on, what "wild assumptions" underlie "f=15/16"? The MSB team make the assumption that women, children and the elderly stay close to home, whilst allowing for two working-age males per average household of eight, with each spending six hours per 24-hour day outside their own zone. This yields f=6/8+(2/8x18/24)=15/16. Any "wild assumptions" here?Women, children and the elderly staying close to home? Is this a wild assumption? Almost certainly not. To quote a TIME magazine reporter (Bobby Ghosh) based in Baghdad:"Iraqi politics is now dominated by Islamicist parties - Shi'ite and Sunni. And many neighborhoods are controlled by religious militias or jihadi groups. Some of them openly demand that women confine themselves to their homes. Even where there are no such "rules", many women say they feel safer staying indoors, or wearing the veil and abaya when they step out"Does allowing for two working-age males (per average household of eight) to each spend six hours per day outside their own zone require any "wild" assumptions? To give some context, the UN puts the unemployment rate at 27%, but the Washington Post (for example) says it's much higher. Many others have part-time or irregular jobs. One can assume that many men who are outside their homes aren't at work. They may, for example, be in cafes or obtaining groceries (they're more likely to choose local cafes/groceries, etc). And if you are a sunni, you probably avoid entering shi'ite areas (and vice versa) - generally you don't go far from your neighbourhood unless you have to.The MSB authors suggest a value for f of 15/16. This seems reasonable to me based on the above. It could be a little lower, perhaps, but not much. Tim Lambert suggests a value of 2/16. This implies that the average Iraqi (including women, children and the elderly) spends only 3 hours out of each 24-hr day in their own home/zone (presumably sleeping), and spends the other 21 hours outside their zone. Since this is clearly ludicrous, I imagine Tim is redefining "f" in an unspecified way, thus changing the whole equation (in a manner unknown to us). 
Tim might have grounds for doing this, but if so, can he please state what those grounds are, and on what assumptions they're based (unknown assumptions are worse than "wild" ones).Finally, since we're debating "wild assumptions", one might ask what assumptions underlie the Lancet authors' (so far undemonstrated) claim that their methodology of selecting cross streets manages to include "all" streets in the selection process.
http://scienceblogs.com/deltoid/2006/12/main_street_bias_paper.php
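For anyone skimming past the excerpt, the f = 15/16 figure follows mechanically from the quoted assumptions (household of eight; women, children and the elderly always in their home zone; two working-age males each outside the zone six hours a day). A sketch restating that arithmetic:

```python
# f under the MSB authors' quoted assumptions (their inputs, not mine).
household = 8
always_home = 6         # women, children, elderly: assumed always in their zone
workers = 2             # working-age males
hours_in_zone = 24 - 6  # each worker assumed outside the zone 6 hours per day

f = always_home / household + (workers / household) * (hours_in_zone / 24)
print(f)  # 0.9375 == 15/16
```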
Someone asked me for my opinion on indoor/outdoor interviewing. I'm not familiar with Iraqi customs, but it wouldn't seem to me to be a source of bias. I am surprised by the claim that all the interviews were on the doorstep, as interviewers often get invited in by respondents in Australia, and once again I suspect that the study authors don't really know what happened in the field. Again, I'm not familiar with Iraqi customs, but it seems odd that you'd get a 98% response rate and nobody would invite you in?
BTW, as I'm generally critical of the Lancet paper, I should note that I think the main street bias argument is really weak.
That said, the claimed interview length is suspect. You might think that "it's only a few questions", but the information obtained by the Lancet study is quite complicated to ask and code in a way that will make the results usable.
Actually, what I'd like to see is the questionnaire (in English please). Give me that and I'll estimate the interview length.
Robert S, they get n=10 with a conservative interpretation of main street. This is not plausible as I explained before.
As for fo, you don't seem to have read what I wrote about it. I redefined it and I explained why.
Tim Lambert wrote:
> As for fo... I redefined it and I explained why.
You didn't "redefine" it. You made a few assumptions ("So the relevant probabilities for f are for the times when folks are outside the home..."), and then you wrote: "Hence a reasonable estimate for fo is not 15/16 but 2/16."
Nowhere did you indicate how you calculated 2/16, or how anyone could produce a value based on your "redefinition" (since your "redefinition" doesn't exist).
On n=10, your "explanation" for why it wasn't plausible was: "if you pick a very small number of main streets you can get n=10, but no-one [who] was trying to sample from all households would pick such a small set".
That's a strange argument. No-one who was trying to sample from "all households" would use the published Lancet methodology, period. If you were trying to sample from "all" households you wouldn't use the main street selection process. But since the Lancet team did use that selection process, they must take responsibility for the potential of bias from "busy streets" (a bias which Gilbert Burnham has acknowledged exists). What isn't plausible is for them to assert there was zero main street bias.
If the above isn't clear enough, consider the following... the Lancet authors said that by "main streets" they meant "major commercial streets or avenues". What proportion of all streets (including back alleys) would fall into that category? What proportion of "all" households would be sampled from roads which intersected those "major commercial streets"?
Tim says: "[the MSB authors] expect us to believe that Riyadh Lafta, while trying to make sure that all households could be sampled, came up with a scheme that excluded 91%"
If Riyadh Lafta wanted to make sure "all" households were selected, then the selection process described in the Lancet would have been utterly useless. Conversely, if Lafta did use the published selection process, then the notion that a large proportion of households was excluded isn't "ridiculous" as Tim asserts.
Can anyone help me with the car bombs? I quote from the late lamented Drinking From Home:
"The survey sample of 300 violent deaths from March 2003 to July 2006 (page 7 of the survey) is extrapolated to approximately 600,000 for the country as a whole. Of the 300 violent deaths, 30 (10%) were the result of car bombs in the year June 2005-June 2006. Using the survey's methodology, I believe that equates to 60,000 people killed by car bombs in one year. The most recent data available on the Iraq Body Count website lists 15 car bombs in the first half of September (ignoring bombs which targeted non-Iraqi forces); taking the highest figure for reported deaths, these bombs killed 75 people. That's an average of 5 people killed per car bomb. On that basis, 60,000 deaths would require 12,000 car bombs in one year, or 33 per day. Either that or there are hundreds of massive car bombs killing hundreds of people which are going totally unreported."
Robert Shone wrote:
I hadn't realized I was veiled. I'm pretty sure I'm not.
Hmmm. You also wrote, "I've never seen such crass credentialism as your posts consistently express." How would you know how consistently I express my crass credentials if you haven't been paying attention? I asked earlier if your claim was hyperbolic but now it is evident that hyperbole doesn't cover it: you actually sling around charges without doing sufficient research, and then you ask other people to do your leg work for you. Double Hmmm.
Okay. Really, whenever you've seen the name "Robert" it's either you or me (not just on Deltoid but also on Crooked Timber, Soldz's site, and elsewhere that these papers are being discussed) including in this very thread and I'm moderately sure that I don't offer my [background](http://www.demog.berkeley.edu/courses/fall2006courses.html) unless someone asks.
But that's all a red herring, and you ought to know it. Even were I not who I am, you should be wondering why so few demographers, epidemiologists, and biostatisticians are up in arms about MSB if they believed the overall bias factor were truly 3. I've given you my reason but since you doubt me, why don't you call up your nearest school of public health or demography department and find someone to ask? As you have demonstrated above, gathering and weighing evidence before making up your mind isn't something you usually do so maybe you'd enjoy a little change of pace.
James wrote:
James, just to clarify, are you no longer saying that the 98% response rate is implausible?
I've said before that I don't know exactly how the teams collected their data, but in other contexts where one needs event histories with person-months of exposure (as in this case), researchers have used a gridded worksheet with small boxes for months across the columns and each row dedicated to an individual. Everyone who was present for the entire study period gets a horizontal line through all the boxes. Births and people who enter the household have lines that begin part way through and continue to the right; deaths and people who leave the household have lines that end before the right-hand edge. Some people will have lines that begin and end within the study period. If there's room on the page, the rows can include some individual identifiers or characteristics, such as relationship to the respondent, sex, and age at either the beginning or the end of the period. Robert Shone will now go off on a rant about how I can't possibly know how data like these are collected.
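For what it's worth, here is a minimal sketch of how rows on such a gridded worksheet translate into person-months of exposure. The household, dates, and study length below are entirely hypothetical illustrations, not data from the study:

```python
# Minimal sketch: computing person-months of exposure from event-history rows
# like those on the gridded worksheet described above. Entry/exit months are
# indexed 0..T over the study window; None means "before start" / "after end".
# Every name and number here is a hypothetical illustration.

STUDY_MONTHS = 40  # length of the study window, in months (illustrative)

def exposure_months(entered, left):
    """Months of exposure contributed by one individual within the window."""
    start = 0 if entered is None else entered        # birth / moved in
    end = STUDY_MONTHS if left is None else left     # death / moved out
    return max(0, end - start)

household = [
    ("head, male, 44", None, None),  # present throughout: full line of boxes
    ("son, male, 19",  None, 31),    # died in month 31: line ends early
    ("infant, female", 35,   None),  # born in month 35: line starts late
]

total = sum(exposure_months(entered, left) for _, entered, left in household)
print(total, "person-months")  # 40 + 31 + 5 = 76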
Tim,
>>Kevin, Roberts didn't say that they were in the field for two hours a day. He said that if you worked out the total person-hours devoted to the project, there were two person hours per household.
Roberts':
"We had eight interviewers working ten hour days for 49 days, they had two hours in the field to ask each household five questions. They had time."
His quote is ambiguous. Take the claim that they had two hours per household:
8 [interviewers] x 10 [hours] x 49 [days] = 3920 [labor hours]
3920 [labor hours] / 2000 [houses] = 1.96 [hours per house]
I grant that the hours now match his claim, and further that a 15-20 minute timeframe per house easily fits into a 2 hour budget per house.
1) This latest interpretation contradicts the Nature article yet again. Nature cited Roberts claiming the interviewers worked separately, in reply to a follow-up question about 2006. Nature checked with an interviewer, and the interviewer said Roberts' claim was wrong; they worked in pairs. Then Nature came back to Roberts and he said he was actually referring to 2004. Apparently not, or he's misstating now, on your interpretation of his ambiguous claim. There is no way for him to claim two hours of interviewing power per household without the eight going separately. Burnham says they worked in pairs; the Iraqi doctor says they worked in pairs. Roberts' claim only makes sense if the four pairs did in fact work separately; otherwise it's only one hour per house. A pair of interviewers works no faster on one interviewee than an individual would; tag-team questioning isn't going to finish an interview in half the time. So did Roberts misstate now, or did Burnham and the Iraqi interviewer misstate then?
2) In case there is yet one more interpretation, or an as-yet-unpublished reply, that makes the above make sense:
If his experience in 2004 was 3 hours for 2 people [working independently] to do a thirty-house cluster, with interviews that took up to 20 minutes, it seems an oddly budget-busting assumption [unless his workers worked 10-hour days for 49 days free] to plan 2 hours of labor per household. Overestimating your own high-end estimate of a job by a factor of 6 just sounds weird. If 4 teams actually spent 10 hours in the field daily, and interviews took 20 minutes, what were they doing for 1 hour 40 minutes per house? Planning 2 hours per house for a job that, just 2 years prior, took 1 hour per 5 houses per interviewer makes little sense.
Why is it so incredibly hard to come up with a consistent timeframe in this study? Given that you have heard directly from Roberts yourself, and that he has addressed this specific issue publicly before, it seems unbelievable that it is this hard to produce a clear statement which resolves all the contradictions. I grant that even 1 hour per house seems sufficient time to me. However, that means, at best, Roberts habitually utters misleading [but rhetorically supportive of his case] non sequiturs when defending his work.
When asked about 2006, and whether even 4 interviewing pairs were sufficient for the time specified, he replies ambiguously about 2004 teams working independently. When pressed further about the 2006 timeframe, he replies with a total of labor hours that doesn't reflect the labor that needed to be performed.
"Can anyone help me with the car-bombs?"
The main thing is to make sure you wear safety-goggles and a sturdy pair of rubber gloves. Also, don't assume that you will always get the right answer by linear extrapolation from small samples. For example, if you test a kilo of plastic explosive and the only casualty is the neighbour's cat, do not assume that 1,000 kilos will kill 1,000 cats.
See any good statistics book for further guidance.
Robert wrote:
> I'm moderately sure that I don't offer my background
> unless someone asks.
Thanks for supplying the link concerning your background. Can we take it, then, that the posts in this thread from both "Robert" and "Robert Chung" are yours? I have no problem with that, but it wasn't apparent from the posts.
Robert [Chung] wrote: '...including in this very thread....'
He even italicised it.
Robert Shone asked: 'Can we take it, then, that the posts in this thread from both "Robert" and "Robert Chung" are yours?'
Tim Lambert: So that's is two errors that Spagat correctly reported, but the next mistakes are Spagat's: The word "casualties" does not even appear in the body of the paper.
This is true. The reason for it is that, on the first page of the paper, the word is misspelled as 'casuality'.
Returning to Laban Tall's question about car bombs. I haven't checked whether IBC figures imply an "average of 5 people killed per car bomb", as stated, but if they do then IBC needs a serious reality check. It's hard to get reliable figures for car bomb fatalities because most of them happen in places where nobody is counting, but a glance at Wikipedia's page confirms that it's not hard to kill much larger numbers than that when mass killing is the aim of the operation:
http://en.wikipedia.org/wiki/Car_bomb
The main point remains, of course, that you can't derive a meaningful nationwide figure (for such a specific cause of death) from the Lancet sample.
Kevin -
This article:
http://www.iraqbodycount.org/press/pr14/1.php
says it's 7-8 killed per car bomb, taken over one year.
'Lancet estimates 150 people to have died from car bombs alone, on average, every day during June 2005-June 2006. IBC's database of deadly car bomb incidents shows they kill 7-8 people on average. Lancet's estimate corresponds to about 20 car bombs per day, all but one or two of which fail to be reported by the media. Yet car bombs fall well within the earlier-mentioned category of incidents which average 6 unique reports on them.'
I would just like to state that Roberts et al have never shown invoices for the number of pencils which would be necessary for the surveyors to tabulate this data. Certainly the Achilles heel in their argument that people are dying in this civil war.
...and what about all those pairs of worn-out tennis shoes that the interviewers must have gone through?
In fact, I bet we could gauge whether they are telling the truth simply by looking at the calluses (or lack thereof) on their feet.
And if they don't have any calluses, they should have to endure a month in solitary, listening 24/7 to the unbelievable "arguments" made by people here, piped in over a loudspeaker.
Robert Shone asked:
Holy Sweet Moses, I guess I should be happy that you're finally trying to double-check things but are you really truly afraid that I searched around on the internet to find a demographer with the first name Robert and am appropriating his credentials? 'Cuz if I were going to do that I damn sure would've tried to find one with a better looking CV. I'm thinking this shows how easily distracted you are by low probability red herrings. Sort of like the claim that MSB has an overall bias effect of 3.
"Can anyone help me with the car-bombs?"
http://www.iraqbodycount.org/press/pr14/1.php
There's an old saying, 'extraordinary claims demand extraordinary proof'. The Lancet study makes very extraordinary claims throughout, but provides absolutely no proof, let alone extraordinary proof. It just says "Trust us."
I think not. I'll wait for some proof.
Josh (quoting Josh et al.): There's an old saying, 'extraordinary claims demand extraordinary proof'.
Josh et al.: IBC's database of deadly car bomb incidents shows they kill 7-8 people on average.
Now that's a truly extraordinary claim. Of course, it's utterly unsupported. (Or if Robert prefers, it was rectally derived.) What IBC's database shows (assuming their arithmetic is sound) is that on average 7-8 people are reported killed by those deadly car bomb incidents IBC happens to know about.
That's a very different thing.
The only rectally derived claim made in this thread on this topic was:
"It's hard to get reliable figures for car bomb fatalities because most of them happen in places where nobody is counting"
Incidentally Lopakhin, the paper you link to (as did Josh, one of its authors) provides a good example of the fallacy I alluded to in my earlier comment. Tim Lambert also pointed this out in his critique of it.
To follow up on what KD calls "truly extraordinary claims":
"Lancet's estimate corresponds to about 20 car bombs per day, all but one or two of which fail to be reported by the media. Yet car bombs fall well within the earlier-mentioned category of incidents which average 6 unique reports on them."
For KD: the Lancet claim implies that while the supposedly tiny portion of deadly car bombings per day that do get reported average about 6 unique reports each, the vastly larger number that supposedly occur (according only to L2 and *nothing else on the face of the earth*) somehow all consistently average 0 reports, day in, day out.
If someone can't even come to terms with the fact that this is at least an extraordinary claim, they have lost their marbles somewhere along the way.
Lancet estimates 150 people to have died from car bombs alone, on average, every day during June 2005-June 2006.
In case Josh is fooling anyone other than himself, this so-called "estimate" is from Josh et al., not from Burnham et al.
In the real report (as distinct from Josh's fantasy version) car bombs are mentioned five times. Nowhere does it say that 150 people died from car bombs alone, on average, every day during June 2005-June 2006.
KD, your postings make me laugh.
Before getting into substantive issues like whether or not the paper's method implies massive numbers of unreported car bombs and massive numbers of unrecorded death certificates issued, can we attempt to settle something as fundamental as how many people were actually interviewing?
Roberts lately implies that 2 labor hours had been allotted per house in this study, and that as a result the interviewers had ample time for the interviews as claimed. Apparently, however, the figure of 2 hours per house is either entirely irrelevant, as it doesn't represent actual time spent interviewing or even the number of interviewers accurately, or it conflicts with his co-author's and one of the interviewers' claims about the groups interviewing in pairs.
We also have Burnham and Roberts claiming that the interviewers consulted locals in determining centers of population away from the main streets, and the interviewer denying this, according to Nature; and Roberts offering a non sequitur about his 2004 team interviewing singly when asked about the feasibility of the 2006 claims, then clarifying his story when confronted with the denial of an interviewer, also thanks to Nature.
We also have security reasons being cited for why the interviews were conducted quickly and on the doorstep, when the similarly timed UNDP survey sent interviewers inside homes for an average of 82 minutes.
Taking this at face value, if security risks were so disproportionately high for the Lancet interviews that they needed to interview on a doorstep and ask a bunch of multipart questions involving years of recall, unlike the ILCS, maybe the caliber of their survey data is not the best. It sounds like they needed to maintain a quick pace and literally fear for their lives and the lives of the respondents; not a great atmosphere for getting accurate information out of people.
Let's spell it out for Josh. The references to car bombs in Burnham et al. (2006) are the following:
The "Interpretation" section of the Summary states: Gunfire remains the most common cause of death, although deaths from car bombing have increased.
This is hardly controversial and it clearly does not commit the authors to any bizarre view of what has been happening in Iraq.
Two tables (Table 2 and Table 4) include a breakdown of deaths in the sample households, which included 38 deaths due to car bombs, 30 of them in the period June 2005 - June 2006. It is made plain to the meanest intelligence that the percentages in brackets, following these numbers, are percentages of deaths in the sample. Nobody who is capable of reading and doing simple arithmetic could possibly imagine that these percentages are supposed to represent percentages of deaths in the population as a whole.
Finally, car bombs are mentioned twice in this passage:
Of the 302 violent deaths, 274 (91%) were of men, and within this group, deaths concentrated in the 15-29 and 30-44 year old age groups (figure 1). Most violent deaths were due to gunshots (56%); air strikes, car bombs, and other explosions/ordnance each accounted for 13-14% of violent deaths. The number of deaths from gunshots increased consistently over the post-invasion period, and a sharp increase in deaths from car bombs was noted in 2006.
Once again, it is perfectly clear that what is being described here is the sample. The authors do not suggest, or hint, nor is it at all likely that they could entertain the possibility, that one can safely extrapolate from these figures to the population as a whole. Needless to say, that didn't inhibit Josh.
Josh refers to "the Lancet claim which implies...." and, predictably, what follows is garbage.
Josh, you won't be able to state accurately what the report implies until you get to grips with statistical inference. It's that simple.
Good night.
Kevin has figured out that this stuff comes from the sample. Big shocker there. That's where everything about this study comes from.
The purpose of the sample is to represent the population as a whole. If it did that accurately, it implies, among other things, about 60,000 car bombing deaths in June05-June06, or about 150 per day. If it didn't, then it didn't.
You don't clarify what constitutes "safe" or "unsafe" extrapolations from the sample. You seem to want it all ways all the time, and want to confuse people and stop them from looking at the man behind the curtain (ie, what's actually making up these big estimates). Not much else.
If it did that accurately, it implies, among other things, about 60,000 car bombing deaths in June05-June06, or about 150 per day.
What's the confidence interval on this estimate, Josh?
Be very careful in giving your answer, as you have embarrassed yourself on this blog on questions of confidence intervals in the recent past.
What's the confidence interval on this estimate, Josh?
What's the relevance of the question? Is it that if it has a wide confidence interval it's a meaningless dartboard that nobody should take seriously? Or what?
Doesn't your question rebound on you, Josh? The difference between you and the Lancet authors is that they put their "dartboard" on view, where Fred Kaplan could have his little jibe at it. Your dartboard is safe in the hands of "the man behind the curtain" as you put it.
Surely it hurts when you punch yourself in the face like this?
I'm just trying to get a straight answer from someone around here about the relevance of the question, a very difficult task apparently.
While we're on the topic though, I failed to notice the Lancet authors' dartboard for "Most disturbing and certain about the results is that more than 80 percent of violent deaths were caused by U.S. forces"? I don't recall ever seeing one, but you say they provided them. So where can I find it?
Josh, you don't understand what you're doing (because you don't understand statistics) but you're actually trying to carry out a hypothesis test here. You're trying to say that since the car bombs statistic is improbable under the hypothesis that the survey was carried out properly, it is therefore probably not the case that the survey was carried out properly.
Since in order to make this claim you would need the standard deviation of the car bomb deaths statistic (and it would obviously be very wide indeed, since this is a subset of the total deaths result, which itself had a very wide confidence interval), and you don't have it, the point I was trying to make is that you're making meaningless statistical arguments without realising that you're doing so. Please stop doing so, if only for the self-interested reason that it reduces your own credibility when you make other, valid points like the one made in your letter to the Lancet.
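To make dsquared's point half-concrete, here is a crude back-of-envelope in Python. It is only a sketch: the published L2 violent-death totals are quoted from the paper, the sampling SE treats the 302 violent deaths as a simple random sample (no design effect, which would widen everything further), and the end-to-end combination of the two intervals is deliberately rough:

```python
from math import sqrt

# Crude sketch of dsquared's point: the car-bomb estimate inherits
# uncertainty both from the sample share (30 of 302) and from the
# already-wide total. The totals below are the published L2 figures;
# the combination rule is a deliberate simplification, and no design
# effect is included (it would widen the range further).

total, total_lo, total_hi = 601_027, 426_369, 793_663  # violent deaths, 95% CI
k, n = 30, 302                                          # car-bomb deaths in sample

share = k / n
se_share = sqrt(share * (1 - share) / n)   # simple-random-sampling SE only
share_lo = share - 1.96 * se_share
share_hi = share + 1.96 * se_share

print(f"point estimate: {share * total:,.0f}")
print(f"rough range: {share_lo * total_lo:,.0f} - {share_hi * total_hi:,.0f}")
```

Even on these generous assumptions, the 60,000 figure carries a range of very roughly 28,000 to 106,000 before the cluster design is accounted for, which is the "very wide indeed" dsquared is pointing at.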
Josh D: While we're on the topic though, I failed to notice the Lancet authors' dartboard for "Most disturbing and certain about the results is that more than 80 percent of violent deaths were caused by U.S. forces"? I don't recall ever seeing one, but you say they provided them. So where can I find it?
I don't know where you got that from, but Les Roberts says quite the opposite here:
http://www.abc.net.au/rn/nationalinterest/stories/2006/1778810.htm (click on 'Show transcript')
'Peter Mares: And what were the main causes of those violent deaths?
Les Roberts: By far the main cause was people being shot and probably primarily by Iraqis shooting Iraqis.'
dsquared,
Are you trying to awaken Josh from his false consciousness? (Mind you, I don't actually know what "false consciousness arguments" are, but I recall your saying that they don't cut it.) Anyway I think Josh understands his interests pretty well. He wants to put the boot into Les Roberts.
If his plan is to do this by showing that a sample with lots of car-bomb deaths in it must have been cooked he is surely backing a loser. But the big problem isn't that he doesn't have the standard deviation of the car-bomb deaths statistic. He can get around that by using the direct method. Josh "knows" the probability that an Iraqi will be killed by a car bomb, because on his planet the media report nearly all of them and he has added up the numbers. He "knows" the probability for each province and each sub-period covered by the Lancet study. So he can calculate the probability of finding such deaths in a randomly selected cluster. Things get a little messy at that point, because if a cluster contains any car-bomb deaths in it, the odds are that it will contain more than a few - that's the nature of clusters. Your "minefield model" kicks in. Nonetheless, Josh can work out the probability that a random sample (with, let's remember, 12 clusters in Baghdad alone) will pick up 30 car-bomb victims in the period June 2005 - June 2006. If that's very low he will have "proven" to his own satisfaction that the sample is not random.
If he does that I'm sure the likes of Megan McArdle and Anjana Ahuja will give him lots of publicity but I fear he will find the scientific community as a whole won't pay much attention. It seems to me his real problem is that he will have walked straight into the logical trap which Richard Feynman nicely illustrated in one of his lectures:
"You know, the most amazing thing happened to me tonight. I was coming here, on the way to the lecture, and I came in through the parking lot. And you won't believe what happened. I saw a car with the license plate ARW 357. Can you imagine? Of all the millions of license plates in the state, what was the chance that I would see that particular one tonight? Amazing!"
dsquared, what you write is pretty typical: straw man and sophistry. The CI is irrelevant. The argument is that evidence outside the universe of the study's sample suggests that the point estimate for car bombings is an absurd inflation of reality. You can agree or disagree with that, but that is the argument, that the error appears to go quite clearly in one direction. And yours is an argument that, whether you know it or not, tacitly concedes the point.
Whether reality does or doesn't fall somewhere in the lower end of an unknown wide CI around car bombings is not the point. The point being made is that the point estimate in fact does not appear to be the "most likely" representation of reality, and that remains the case in *whatever* range of uncertainty going in either direction might be derived from that same data.
Similarly, if someone was arguing that the 2004 study's '100,000' was inflated, does it matter to the argument if reality is above 8,000? No. If someone was arguing that certain things (like statistically meaningless outlier data, for example) suggest the '100,000' was an underestimate, does it matter if reality is still probably under 194,000? No again. Not the point.
Furthermore, since L2's sample was not random, there's not much "statistical meaning" to any CI's you'd draw around extrapolations from it. So I'm not sure of the value in making up any CI for car bombings or anything else, because that "meaning" is based on a hypothetical presumption that the data is something it isn't.
And furthermore, I fully understand why you want me to "please stop". You've spent the last three years investing yourself in doing PR for Pepsi and I'm pointing to an argument that suggests Pepsi sucks. End of story.
If I wasn't "embarrassing myself" in the eyes of you and a few of the other crackpot pompous ass "experts" around here, I'd be doing something very, very wrong.
Lopakhin says: "I don't know where you got that from, but Les Roberts says quite the opposite here"
What I quoted was from a 2005 MIT paper talking about the 2004 study:
http://web.mit.edu/cis/pdf/Audit_6_05_Roberts.pdf
He says something quite the opposite in what you cite because there he's talking about the 2006 study, which showed something quite the opposite.
[joshd](http://www.apa.org/journals/features/psp7761121.pdf) wrote:
Shorter [joshd](http://www.apa.org/journals/features/psp7761121.pdf): "R.A. Fisher? A know-nothing bum."
What's the most likely number of excess deaths in Iraq over either of the Lancet study periods Josh?
I previously said, "what you write is pretty typical: straw man and sophistry".
Then Robert, as if on cue...
This method of Josh's has fascinating implications. You can use it to prove that the numbers generated by any random number generator are non-random. Simply generate a sample from a uniform distribution on the interval (0,1) and count the numbers in the sub-intervals (0, 0.1), (0.1, 0.2), etc. Then, out of these sub-samples pick the one with the greatest frequency. Since there are clearly too many there, the random number generator has to be a dud.
Obviously, this method of proving that the numbers are non-random works best if in reality they are random.
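A toy simulation of exactly this fallacy, assuming numpy and scipy are available (the sample size and bin count are arbitrary choices):

```python
import numpy as np
from scipy.stats import binom

# Toy version of the fallacy described above: generate genuinely random
# numbers, pick the fullest bin *after looking*, then test that bin as if
# it had been specified in advance.

rng = np.random.default_rng(1)
draws = rng.random(1000)
counts, _ = np.histogram(draws, bins=10, range=(0, 1))

worst = counts.max()                        # the cherry-picked "suspicious" bin
# Naive p-value, pretending this bin was chosen before looking at the data:
p_naive = binom.sf(worst - 1, n=1000, p=0.1)
print(f"fullest bin: {worst}/1000, naive p-value: {p_naive:.3f}")
# The honest question is how big the *maximum* of 10 such counts tends to
# be, and the answer is: "suspiciously" big quite often. Selecting the bin
# after the fact eats the surprise.
```

Run it a few times with different seeds and the naive p-value will regularly look small even though the generator is fine, which is Kevin's point.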
dsquared said: "You're trying to say that since the car bombs statistic is improbable under the hypothesis that the survey was carried out properly, it is therefore probably not the case that the survey was carried out properly."
That's a great statement, dsquared. I had to read it twice, but therein lies its beauty.
"Disproof by Incredulity" can be a most wondrous thing.
[The CI is irrelevant. The argument is that evidence outside the universe of the study's sample suggests that the point estimate for car bombings is an absurd inflation of reality]
Josh, you really can't make the claim in the second sentence quoted there unless you know what the confidence interval is for the estimate.
dsquared,
I really think you are mistaken in claiming that Josh needs a confidence interval to support his argument. Actually, if I understand him correctly, it's a nice argument - rather like Anselm's argument for the existence of God, which nearly everyone agrees is bollocks but they don't agree on why it's bollocks. Let me state the case as I understand it.
Josh knows that in June 2005 the probability that any given Iraqi would be killed by a car bomb before the end of June 2006 was 0.0002. (Or thereabouts; let him supply his own figure if he wants to.) Now let's go into Introductory Probability mode. Suppose 12,801 coloured balls are drawn at random from a very large barrel containing 27,139,584 coloured balls. A red ball signifies death by car bomb and there are just 5,428 red balls in the barrel. What is the probability that the sample will contain 30 or more red balls? It's as near to zero as makes no difference. We don't need no stinkin' confidence interval, see? The sample must be rigged.
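For concreteness, the tail probability Kevin is gesturing at can be computed directly from his numbers (a sketch assuming scipy; the answer is indeed "as near to zero as makes no difference"):

```python
from scipy.stats import hypergeom, binom

# Kevin's barrel, with his numbers: the probability of drawing 30 or more
# red balls in a simple random sample of 12,801 from a barrel of
# 27,139,584 balls containing 5,428 reds.

M, n_red, N = 27_139_584, 5_428, 12_801

# Exact hypergeometric tail, P(X >= 30); expected count is only ~2.6:
print(hypergeom.sf(29, M, n_red, N))   # astronomically small, ~1e-21

# Binomial approximation (the barrel is huge, so it barely differs):
print(binom.sf(29, N, n_red / M))
```

Which makes the next question the interesting one: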
Now for the fun part: what's wrong with this argument? The most obvious weakness is the assumption that Josh has sufficiently accurate information regarding car bomb deaths in 2005/2006 to support his claim that p = 0.0002 (or whatever he thinks it is). To my mind though, the bigger problem is the one suggested by Feynman. When your theory is inspired by the sample, you can't present the sample as evidence confirming your theory.
The main problem with Josh's line of argument is that, while an argument from probability may cast doubt on a particular survey finding, it can not rule it out (and therefore can -- and should -- not form the basis of a "proof" that the survey was fraudulent).
The reason for this is that, as long as the probability is not zero, a series of events could nevertheless still have happened as described, no matter how improbable that series might have been ahead of time.
One needs more than mere probability to disprove the survey results. One needs actual numbers. For the case of car bombings, one would have to prove that the total number of car bomb deaths claimed in responses to the survey exceeded the total number in all of Iraq over the same period. Or, if one has specific information about where the people making the claims lived, when their relatives had died, and the number of car-bomb deaths for their particular region during the period covered, one would have to prove the same thing for that region.
In fairness to Josh, he doesn't allege fraud, though he comes pretty close at times. And on reflection I don't think he has flouted Feynman's commandment (thou shalt not test a theory using the data which suggested it).
Josh's theory, as I understand it, is that Roberts taught bad habits to the Iraqi interviewers in 2004, so that any sample they produce is likely to be unrepresentative. Using the 2006 data to test that theory is legitimate. The dubious premise is Josh's claim that the proportion of the Iraqi population killed by car bombs in 2005/2006 is certainly less than some (suitably small) number p. If he can establish that, he can reasonably conclude that the sample is non-random.
This is worth looking at. (Certainly it beats speculating about how many interviews a team of doctors can do in a day.) As I remarked in the Megan McArdle thread, the fact that a Baghdad blogger reports "a wave of aerial bombing" audible in central Baghdad, which mainstream reporters don't seem to have bothered even to investigate, looks bad for Josh's claim that he can establish a low upper bound for deaths due to certain causes.
Kevin Donoghue commented "In fairness to Josh, he doesn't allege fraud,"
I don't wish to get into a debate about whether Josh alleges (suggests, implies) fraud with regard to car bombings, interview times, or anything else. I come away with the opposite impression after reading many of his posts, in addition to those on car bombs. (Or perhaps my impression is not actually "opposite" your impression, since you qualified with "he comes pretty close at times".)
But whether one believes that or not is irrelevant to the main part of my statement above (before the parentheses which contain the reference to fraud):
"The main problem with Josh's line of argument is that, while an argument from probability may cast doubt on a particular survey finding, it can not rule it out"
One cannot disprove things that are clearly possible (ie, have non-zero probability) based on probability alone.
People make the mistake all the time of believing that they can disprove something by showing that an event (or sequence of events) is so unlikely to occur that it can not have occurred (and will not occur in the future).
This is nonsense, of course. I once got dealt a natural royal straight flush in poker (I kid you not -- no wild cards) -- but I did not say to myself "Hmm, this can't have happened".
I would agree with the point you make about fruitful lines of investigation (as opposed to endless and fruitless speculation about "interview times"), however.
Short of actually doing another study, using statistics (and probability) to look for inconsistencies within the data is the best avenue of approach for discovering potential problems with the Lancet study (and any study). I emphasize potential, since without showing outright contradictions between survey results and "reality"*, (eg, with regard to deaths from car bombs), one can not disprove the survey results.
*which is damned hard to show in this case, since everyone seems to have a different version of reality and even the "official" Iraq govt numbers are suspect. Sadly (tragically, actually) no one seems to know what the reality is in Iraq.