The Iraq Study - how good is it?

Update: 14 October.

I just posted another entry on the topic, responding to some of the comments on this post. My conclusions have changed a bit as the result of some of those comments.

As many of you probably know, a study published today in the journal Lancet estimates that just over 650,000 Iraqis have died as a result of the war there. Tim Lambert, Mike the Mad Biologist, and Mark Chu-Carroll have weighed in on the study already (in Tim's case several times), and they think that the estimate and methods were pretty good. Ordinarily, I'd be reluctant to disagree with them, but in this case I think there are some legitimate concerns with the study. Some are methodological concerns, others relate to the tone of the paper.

Before I get to the specific concerns, there are a couple of things that I'd like to make clear.

First, I am not in any way trying to endorse any other estimate for the total death toll. I agree with the authors of the study that the "official" estimates provided by the Iraqi and American governments are quite likely to be massive understatements of the real damage. I think that there are some methodological issues with this latest study, and that these issues might systematically overstate the actual death toll. Still, this study is almost certainly the best one available right now, and it is much more methodologically sound than the official estimates.

Second, and in the interests of full disclosure, I do have a significant complaint about the paper that is unrelated to either the methodology used or the numerical conclusions drawn. On pages 6-7 of the article, the authors write:

Across Iraq, deaths and injuries from violent causes were concentrated in adolescent to middle age men. Although some were probably combatants, a number of factors would expose this group to more risk - eg, life style, automobile travel, and employment outside the home. The circumstances of a number of deaths from gunshots suggest assassinations or executions. Coalition forces have been reported as targeting all men of military age.

They cite two sources - articles in The Christian Science Monitor and the Washington Post - to support that last statement. (I could not locate the article on the WaPo site, but did locate this AP article with the same title, date, and author.) Both of the cited articles refer to the same case, in which a single group of soldiers accused of murdering Iraqi civilians claimed that they were ordered to kill all military-aged males.

Let me make this crystal clear: I am not, in any way, attempting to ignore, obfuscate, justify, excuse, or minimize this or any other case where US forces have murdered Iraqi civilians. Such acts have occurred, and that is quite simply inexcusable. I do not object to the authors pointing that out. I do object to their decision to refer to a single case where soldiers accused of murder claim to have been following orders to kill all military men as a possible reason for the large number of gunshot deaths, while simultaneously making absolutely no mention of the role of sectarian violence in this context.

To put it bluntly, I think that the decision to single out coalition forces for mention as a cause of gunshot deaths was absolutely outrageous. The morgue in Baghdad receives hundreds of people a day who have been shot and killed in the ongoing sectarian violence. That received no mention. A group of soldiers who are trying to use "following orders" as an excuse for murder are mentioned, and that incident is cited twice.

Were that the only place in the paper where coalition forces are singled out for mention or examination, I would write that off as me being overly sensitive. Unfortunately, that is not the case. The word "coalition" appears on some 19 pages of the paper, and the authors attempted to separate "coalition-caused" deaths from other types of violent deaths. Their criterion for classifying a death as coalition-caused was relatively simple, by the way: if everyone in the household agreed that it was coalition-caused, then it was listed as coalition-caused. In contrast, the word "sectarian" appears once, on the first page of the paper, and no attempt appears to have been made to determine if deaths were the result of sectarian bloodshed. Call me crazy, but I think writing a paper about current death rates in Iraq without discussing sectarian violence is like writing a paper about theme parks without mentioning Disney.

Now that I've gotten that off my chest, I think my own bias in this matter is clear. I've done my best to keep this firmly in mind while looking at the methods used in the paper. I probably haven't entirely succeeded, but I've given it my best shot.

Methodological Concerns:

1: Baseline population numbers.
In a prior study, the same authors obtained estimates for the populations of the various Governorates as of January 2004 from the Iraqi Ministry of Health. In the current study, they use estimates as of mid-2004 taken from a joint Iraqi/UN population study. The population numbers in the two studies are quite different:

Province       2003    2004   change (#)   change (%)
Baghdad        5139    6554        +1415       +27.53
Ninwa          2349    2554         +205        +8.73
Dehuk           650     472         -178       -27.38
Sulaymaniya    1100    1716         +616       +56.00
Arbil          1100    1392         +292       +26.55
Tamin           703     854         +151       +21.48
Salah ad Din   1099    1119          +20        +1.82
Diala          1436    1418          -18        -1.25
Anbar          1261    1329          +68        +5.39
Babil          1828    1494         -334       -18.27
Karbala        1047     787         -260       -24.83
Najaf          1021     978          -43        -4.21
Wasit           815     971         +156       +19.14
Qadisyah        779     912         +133       +17.07
Dhi Qar        1537    1472          -65        -4.23
Muthanna        514     555          +41        +7.98
Basrah         1330    1798         +468       +35.19
Missan          685     763          +78       +11.39
Total         24393   27138        +2745       +11.25

(all population figures are in thousands of individuals)

Over the 18-month interval separating the two sets of population estimates, the national population is estimated to have increased by about 11%, but the changes are far from uniform from province to province. To me, this says that one of three things is going on here: one or both of the population estimates are inaccurate, there was a great deal of migration both into and within Iraq, or, most likely, some combination of the two. (The authors of the paper do not attempt to explain, or, for that matter, even mention, the differences between the two sets of population estimates.)

Why is that important? There are a couple of reasons. First, these population figures were used to determine how many clusters to examine in each Governorate. Inaccuracies in the population figures used can result in some Governorates being oversampled and others undersampled. The 2004 population figures were also used, in combination with the estimated death rates in the various Governorates, to calculate the total number of excess deaths. Even if we ignore population differences in the Governorates and just examine the total population, you can see where this could result in problems - if the total population was at the 2003 estimate instead of the 2004 estimate, the total number of excess deaths would drop by more than 10% (from 655,000 to 583,000).
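To make the size of that effect concrete, here is a back-of-envelope version of the scaling in Python. It simply rescales the headline estimate by the ratio of the two national totals from the table above; the 583,000 figure in the text presumably comes from a per-governorate adjustment, so this crude national-level version only lands in the same neighborhood.

```python
# Crude sensitivity check: rescale the headline excess-death estimate by the
# ratio of the two national population totals (in thousands, from the table
# above). This ignores governorate-level differences, so it is a ballpark
# figure rather than the 583,000 quoted in the text.
EXCESS_DEATHS_2004_BASE = 655_000  # headline estimate, using 2004 populations
POP_2003 = 24_393                  # thousands, earlier estimate
POP_2004 = 27_138                  # thousands, Iraqi/UN mid-2004 estimate

scaled = EXCESS_DEATHS_2004_BASE * POP_2003 / POP_2004
print(round(scaled))  # roughly 589,000 -- a drop of about 10%
```

Either way, the choice of baseline population moves the final number by tens of thousands of deaths.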

It's probably also worth mentioning that, according to the UNDP report that the authors took the population data from, their 2003 estimate for the population of the country was about 1.6 million more than either a UN estimate or a US Census Bureau estimate. Both the UN and US estimates were about 24.7 million; their source's estimate was 26.3 million. Both the UN and US estimates are similar to the figure of 24.3 million that they had obtained for their 2004 paper. The authors do not explain why they used the highest of the three total population estimates, and they do not mention that the other two exist.

2: Selection of clusters.
Cluster sampling is a well established method for this type of study, and I think that disputing the validity of the study based on their use of cluster sampling is just plain silly. However, I do think that the way that they selected clusters might cause some problems. They assigned the clusters to the different Governorates based on the population of the Governorate - the more people who live there, the more clusters are sampled. Within each Governorate, the authors say that they listed the "constituent administrative units" by "population or estimated population" and again sampled randomly proportionate to population size. (It's probably worth mentioning here that the source they used to obtain the populations of the Governorates does not list the populations of any other type of administrative unit, and the authors do not provide any description of how they did obtain those estimates.)

I'm not that concerned about sampling the Governorates randomly proportionate to population size, since they were able to assign at least one cluster to each Governorate. I am more concerned with their decision to sample the subdivisions randomly proportionate to population size, since in most cases they had fewer clusters than they had administrative subunits. Assigning the clusters in this way would seem likely to sample population centers much more heavily than more rural areas. This isn't a problem if population density isn't a factor in the death rate, but I'd be surprised if it isn't a factor. I'd guess that sectarian violence, in particular, is most likely to occur in higher-population density areas, where the different sects are packed into close contact with each other.
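The concern about probability-proportional-to-size (PPS) assignment can be made concrete. Here is a minimal sketch of systematic PPS sampling - the unit names and populations are hypothetical, not the study's actual data - showing how subunits that are small relative to the sampling interval can end up with no clusters at all:

```python
import random

def pps_assign(units, n_clusters, seed=42):
    """Systematic probability-proportional-to-size assignment of clusters.

    units: list of (name, population) pairs.
    Returns a dict mapping each unit name to its cluster count. Units whose
    population is small relative to the sampling interval frequently get
    zero clusters, which is the rural-undersampling worry discussed above.
    """
    rng = random.Random(seed)
    total = sum(pop for _, pop in units)
    interval = total / n_clusters
    start = rng.uniform(0, interval)
    points = [start + i * interval for i in range(n_clusters)]

    counts = {name: 0 for name, _ in units}
    cumulative, idx = 0, 0
    for name, pop in units:
        cumulative += pop
        while idx < n_clusters and points[idx] <= cumulative:
            counts[name] += 1
            idx += 1
    return counts

# Hypothetical governorate: one city of 800,000 and four villages of 50,000.
subunits = [("city", 800_000), ("village_a", 50_000), ("village_b", 50_000),
            ("village_c", 50_000), ("village_d", 50_000)]
print(pps_assign(subunits, n_clusters=5))
```

With five clusters to assign, the city always gets four of them and exactly one of the four villages gets the fifth; the other three villages are never visited. That is the sense in which PPS assignment concentrates sampling in population centers.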

After assigning the clusters within the administrative subunits, they randomly selected a main street from a list of all main streets in the subunit, then randomly selected a residential street that crosses the selected main street. (They do not define either "main streets" or "residential streets," and they do not explain how these lists were obtained.) The houses on the residential street were numbered, one was randomly selected, and that house and the next 40 houses were surveyed. This method also seems likely to oversample population centers and undersample more rural areas - in my experience, towns tend to have more main streets per unit of area than rural areas do.

The bias in the final site selection was avoidable. In the earlier study, they used GPS coordinates to randomly select start points for clusters. They decided not to do that in this case because of (entirely legitimate) concerns that being seen with a GPS unit could get them shot by local residents acting out of fear that the GPS unit was being used to pinpoint targets for an airstrike, or by coalition forces acting out of fear that the GPS was a device being used to trigger a roadside bomb. That is a legitimate concern, but I'm not sure why they didn't just randomly select the coordinates, plot the coordinates on the map, and select the closest intersection on the map.

3: Non-violent excess deaths.
The number of non-violent excess deaths was obtained by subtracting the pre-invasion non-violent death rate from the post-invasion non-violent death rate. I'd have no problem with that approach if they had shown that the post-invasion increase was significant, but they did not. In fact, when they conducted a significance test, they obtained a p-value of 0.523, which is not at all significant. (If you aren't familiar with p-values: this means that even if there were no real change in the non-violent death rate, you would expect to see a difference at least this large about 52% of the time, just by chance.) Given the total lack of statistical significance, I don't think using the difference in non-violent death rates to calculate "excess" was justified. Excluding those deaths lowers the number of excess deaths by about 54,000. Excluding those deaths and using the lower population levels lowers the number by well over 100,000. I have no way of knowing what effect, if any, biasing the sample toward more urban areas would have, but I suspect that it, too, would result in an overestimation of the total number of deaths.
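For readers wondering where numbers of this size come from at all, the underlying arithmetic is just (post-invasion rate minus pre-invasion rate) times person-years at risk. A sketch using rough, rounded versions of the study's headline figures - crude mortality near 5.5 per 1,000 per year before the invasion, about 13.3 after, roughly 40 months of exposure, and a population around 26 million; treat all of these as approximations, not exact values from the paper:

```python
# Excess deaths = (post-invasion rate - pre-invasion rate) x population x years.
# All inputs are rough, rounded approximations of the study's figures.
PRE_RATE = 5.5 / 1000    # deaths per person per year, pre-invasion (approx.)
POST_RATE = 13.3 / 1000  # deaths per person per year, post-invasion (approx.)
POPULATION = 26_100_000  # mid-period population (approx.)
YEARS = 40 / 12          # roughly 40 months between invasion and survey

excess = (POST_RATE - PRE_RATE) * POPULATION * YEARS
print(f"{excess:,.0f}")  # on the order of 650,000-700,000
```

The point of the sketch is that the final estimate is linear in both the rate difference and the population figure, which is why the baseline-population and significance concerns above translate directly into large swings in the headline number.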

After looking at all of these concerns, there is one thing that I think is clear: even if I am correct, and all of these errors result in overestimates of the total number of deaths, the number is still going to be much higher than the "official" totals. The population of Iraq is being harmed by this war, and it is being harmed much more than either the Iraqi or US governments are willing to admit.

Finally, in fairness, I have to admit that I am not an expert in either statistics or demography, and it is entirely possible that some or all of my concerns are simply the result of my misunderstanding the techniques that were used. If that is the case, I suspect that I'll be learning about it soon in the comments.


wolfwalker: The role of a peer-reviewed journal is to publish all findings which can be defended. It's then the role of the community to critique studies, and the role of the authors to defend them. The fact that the Lancet has published studies which were then shown to be erroneous has no bearing on the Lancet's credibility or lack thereof. You are mistaking scientific journals for opinion journalism.

You should look at this study on its merits, just as Mr. Dunford has done here.

As for credibility, it's pretty hard to believe the government numbers, given that government officials are mostly unable to go outside the Green Zone. How many of the death certificates issued in Fallujah make their way to Baghdad? How would that happen? Iraq is a country without infrastructure. The assumptions one would make about the validity of official numbers in, say, the U.S. simply don't apply to Iraq. And pretty much all of the other estimates are based on official numbers.

The Lancet study has the right idea; the execution might have been flawed, but that's secondary. The study needs to be scientifically critiqued, and re-done with those critiques in mind. It should not be dismissed simply because it's politically inconvenient to those currently in power.

By Denis Robert (not verified) on 14 Oct 2006 #permalink

The editor of The Lancet, Richard Horton, has recently been appearing at Stop the War Coalition rallies here in the UK. Stop the War Coalition is strongly affiliated with the Socialist Workers party, various far left organisations and assorted Hizbollah, Iraqi 'resistance' and 9/11 conspiracy theory supporters.

It is of course his right to attend such marches (and I share some of his concerns), but it arguably raises doubts about his impartiality. Is it unwise that the editor of a leading journal takes an overtly political stance? A link can be found below

http://hurryupharry.bloghouse.net/archives/2006/10/12/fair_and_balanced…

Disclaimer; this doesn't have any relevance to the accuracy, or otherwise, of the research.

Criticisms of the count should estimate an error. Is the count off by a factor of 2? Then there have been "only" 325,000 deaths. Or is it off by a factor of 5? That means "only" about 130,000 deaths, or roughly twice the estimated civilian death toll during WWII in the United Kingdom, where Germany intentionally bombed civilian population centers. By any reasonable count, the US invasion has led to a death toll that any civilized nation would be ashamed of having caused.

I think writing a paper about current death rates in Iraq without discussing sectarian violence is like writing a paper about theme parks without mentioning Disney.

I wonder whether it was necessitated to some extent by the sensitivity of the interview process. The survey was designed to be conservative in its attribution of deaths to coalition forces, but it was also designed to allow respondents more room to attribute blame to the array of sectarian forces. Put simply, I would be wary of an Iraqi interviewer pushing me too hard on just who was responsible for any deaths in my family.

From my perspective, I didn't see the same emphasis on coalition deaths, other than as a result of the survey's being aimed at an Anglo-American readership. By themselves, the numbers suggest that sectarian violence accounts for the greatest rise in violent death rates.

You don't need to bend over backwards. To critically examine the statistics is NOT to endorse the war. By the same token, someone with potentially (I'm not a statistician either, I'm a chemist) fishy numbers should not get a pass because we endorse her politics. Peer review is not an imprimatur. If you see that the emperor is naked, or if his socks just don't match his outfit, you need to speak up.

I generally enjoy scienceblogger writing, but a derisive tone and name calling, while fun, are ultimately unhelpful. Saying someone is a stupid poopyhead (or the moral equivalent) is, well, not really scientific debate, or even a scientific takedown of the dummies who deserve it. Your tone is quite measured, from what I read, but some of those you refer to go way wide of the track on occasion.

The recent trend amongst progressives of wanting to tar and feather others over disagreements about science is startling, to say the least. (George Monbiot foresees Nuremberg trials for climate change denial. There are people still not convinced of the germ theory of disease, and we are not locking them up, and we should not, despite their being measurably harmful. I hope we haven't thrown off religious Inquisitors just to get scientific ones.)

By Dave Eaton (not verified) on 12 Oct 2006 #permalink

It seems quite a leap to go from calling someone a poopyhead to putting someone on trial over their views on global warming. I'm not aware that anyone has suggested that.

see http://gristmill.grist.org/user/David%20Roberts
for a little of the Nuremberg idea. Which he retracts, I guess, but nevertheless, the idea isn't beyond the pale for some environmentalists, and it should be.

My point, which I don't want to get lost, is that it is important to not confuse methodological criticism with political view, which is of course how it will be misused (by poopyheads). I worry sometimes that science people, pros and amateurs, refrain from airing concerns because they fear they will be misused.

By Dave Eaton (not verified) on 12 Oct 2006 #permalink

The study claimed that 650,000 Iraqis have died as a direct or indirect result of the invasion, right?

Has it occurred to any of the study's defenders that 650,000 amounts to 2.5% of Iraq's total prewar population? That the study claims that most of the deaths are among males of military age -- which would make the death toll almost nine percent of that demographic group? And yet no other source anywhere gives a figure that's within an order of magnitude of the study's figure. No one else seems to have even noticed the disappearance of one out of every eleven adult Iraqi men.

I don't believe it. I wouldn't believe it even without the timing. With the timing -- two years to the month since the first study, and again within days of a major US election -- I have no hesitation whatever in writing it off as a systematic lie cooked up by deranged morons who have sacrificed their honor, their integrity, and their credibility on the altar of their rabid hatred of George Bush.

By wolfwalker (not verified) on 12 Oct 2006 #permalink

What impartial source is there? There seems to be little effort by the coalition to ascertain a realistic body count.

I also suspect that, in a time of turmoil, if only a cursory count is done, many deaths will remain unreported. Also, what constitutes a death attributable to the war/invasion? Are the counts only of those who have died violent deaths due to bombings, shootings, etc.?

Or do they include those who may have died as a result of unclean drinking water, disease etc, which may not have occurred, had the coalition not invaded?

I don't know. But I think the body count is higher than any of the members of the coalition are politically willing to admit.

And yet no other source anywhere gives a figure that's within an order of magnitude of the study's figure.

Therein lies the rub. No one else has even tried to do the same thing, so this is the only estimate we've got.

However:

No one else seems to have even noticed the disappearance of one out of every eleven adult Iraqi men.

How would someone even go about noticing this? Journalists certainly aren't in a position to do so; they tend to keep to safe places. Iraqi civilians wouldn't have the broad perspective to know the actual numbers, nor would they have the means to get their message out if they did know.

Regardless of whether the numbers are correct, this is the only good-faith estimate we have. The obvious solution is for someone else to take a whack at it; I hope someone does.

Let's drop the argument from implausibility. Reality has been known to be implausible.

How would someone even go about noticing this? Journalists certainly aren't in a position to do so; they tend to keep to safe places. Iraqi civilians wouldn't have the broad perspective to know the actual numbers, nor would they have the means to get their message out if they did know.

Iraqi bloggers, of whom there are many, would be able to find out and get the word out. Incidentally, one of the leading Iraqi blogs is "Iraq the Model," where regular blogger Omar has excoriated the Johns Hopkins study as not only wrong but grossly offensive to the memory of the Iraqis who truly have died in the ongoing violence.

Arab journalists, of whom there are also many, would also get the word out. For all its pro-terrorism, anti-Western slant, al-Jazeera is not a tinkertoy operation. It's a professionally run and equipped organization that does a fairly effective job of collecting information and getting its message out. If southern/central Iraq had lost nine percent of its workforce to violence, al-Jazeera would know, and they'd be reporting on it.

Consider also the fact that the Western news services such as AP and Reuters rely largely on local "stringers" and freelance reporters for field reporting in places like Iraq. Many of those stringers are sympathetic to the anti-Western insurgency. Any news that makes the "occupation" look bad is highly likely to get reported. An average death count of almost six hundred people a day would get reported.

Regardless of whether the numbers are correct, this is the only good-faith estimate we have.

No, it isn't. We have the daily tolls that come from the Iraqi government and do get reported in the Western press. You choose to disbelieve those purely out of personal bias: you assume that anything said by Bush or with Bush's approval is automatically a lie. If anyone had posted a comment here saying that the Johns Hopkins study must be a lie solely because the people who put it out are political liberals, you'd treat that person with well-deserved scorn, because an argument from political bias with no supporting data is fallacious. Why, then, do you accept an argument from political bias with no supporting data as a valid argument in favor of the study?

By wolfwalker (not verified) on 13 Oct 2006 #permalink

Thank you so much for attempting to look at the data clearly, calmly, and fairly. I've been very disappointed (well, I guess that's naivete on my part) at the way a lot of people have talked about the study without really looking at it.

...regular blogger Omar has excoriated the Johns Hopkins study as not only wrong but grossly offensive to the memory of the Iraqis who truly have died in the ongoing violence.

Ahh, that's the same Omar making the ridiculous argument eviscerated over at Good Math, Bad Math. I read what he had to say, and it's hardly an argument at all. It's another "I don't believe this number" statement.

No, it isn't. We have the daily tolls that come from the Iraqi government and do get reported in the Western press. You choose to disbelieve those purely out of personal bias: you assume that anything said by Bush or with Bush's approval is automatically a lie.

Poison the well much, jackass? I at least criticized your actual argument, whereas you saw fit to ascribe some political bias to me based on no actual statement of mine.

I don't choose to disbelieve the tolls coming from the government and the media; however, I see no reason to assume that either of these have complete (or even close to complete) knowledge of the actual death tolls. The numbers they give are lower bounds on the death toll, since these are essentially certain deaths.

Why, then, do you accept an argument from political bias with no supporting data as a valid argument in favor of the study?

You'll note that I never once said I thought the result was correct. And I don't even know what you mean by "an argument in favor of the study." The question to ask is whether their methodology was correct. Mike has some good honest criticism here. Most of the critics out there seem to be playing this bullshit game of "attack the result", though, and that's what I object to. Your cited "excoriation" of the study is more of the same.

I have no hesitation whatever in writing it off as a systematic lie cooked up by deranged morons who have sacrificed their honor, their integrity, and their credibility on the altar of their rabid hatred of George Bush.

I would buy this argument, but I don't think you used enough derisive language. Perhaps in the future if you use even more name-calling, wolfwalker, you will be more convincing.

About journalists noticing or not noticing something - anybody following the news from Iraq for the past three years has got to know that foreign journalists are increasingly restricted in where they can go. Iraqi stringers have increasingly been under more and more restrictions. Baghdad is a place where you can be tortured and killed after being picked up at a 'police' (read: militia) check point, for having a name associated with the wrong sect.

... a systematic lie cooked up by deranged morons ...

Well, at least wolfwalker has squarely faced what Daniel Davies has identified as the central implication of Lancet denial: without any deep methodological flaw having been found in the study (or likely to be), there's really nothing for it but to call the researchers a pack of liars.

OK, so on one hand we have a group of highly qualified epidemiologists who claim to have performed a careful, well-designed study, and reported its results in one of the world's most respected medical journals.

On the other hand, we have a commenter who just can't believe the study, and thus figures that its authors are
"deranged morons."

Hmmm ... who, pray tell, am I to believe?

[chuckle.wav]

Yes: who, indeed, are you to believe? Folks, you are just too funny for words. The Lancet has a history of publishing "formal medical studies based on the best scientific methods" that were instantly ripped to shreds by the people who were familiar with the raw evidence.

It happened two years ago with the last study they did of "casualties in Iraq."

It happened with a 1998 study that allegedly connected the MMR vaccine to autism. That one was so bad the authors were forced to retract it!

It happened with a study of the 'dangers' of genetically modified foods in 1999.

It happened with a study last January that claimed "selective abortion" was responsible for the abortion of 10 million female fetuses in India over the last twenty years.

And now we have this study, which claims numbers completely out of line with every other source, numbers challenged by people who live in Iraq and see the violence around them every day, numbers that don't even pass the giggle test (ten percent of the workforce killed off? six hundred 'undetected' additional deaths per day? Give me a bloody break!).

Very simply, The Lancet is not a trustworthy source anymore, and the sooner you realize that, the better.

So yeah, I have no trouble concluding you believe this study only because you're BDS victims and it tells you what you want to hear. I also have no trouble concluding that if the tables were turned -- if this study had produced a death toll lower than the official numbers -- you wouldn't believe it for a second.

And if you find that conclusion offensive, well, that's your problem, not mine. I'm only going where the evidence leads me, as any good scientist should.

By wolfwalker (not verified) on 13 Oct 2006 #permalink

wolfwalker: I had forgotten about the autism article that had to be retracted.

(As a practicing physician, I am amused by the suggestion that by being published in an academic medical journal, a study is beyond criticism. One hopes the initial publication is only the beginning of critically assessing the data related to a particular topic......)

Who were those researchers who won the medical Nobel Prize for their H. pylori work? That's one of my favorite medical stories!

The Lancet has a history of publishing "formal medical studies based on the best scientific methods" that were instantly ripped to shreds by the people who were familiar with the raw evidence.

Since a few studies published in The Lancet were found to be flawed, we should assume this one is as well. Hooray logical fallacies!

Here's a hint -- this isn't how you go about showing a study is flawed. You do that by actually finding the flaws and then pointing them out. Which is how those earlier studies were discovered to be incorrect.

If you actually had an argument demonstrating some deep errors in the methods of this particular study, or could cite some, then we'd be having an honest-to-goodness discussion on the matter. I'm willing to accept that the study could very well be in error, given sufficient evidence to back such a claim. However, thus far your entire argument comes down to name-calling and baseless accusations.

As a practicing physician, I am amused by the suggestion that by being published in an academic medical journal, a study is beyond criticism.

Where has anyone actually made this suggestion?

Maybe jre (commenter above) didn't mean for his/her? comment to read that way, but a part of it did....read the comment again. Read the last part of it.

I am very interested, as a pathologist, in how the death certificates are generated and how the cause of death is determined. Note, I say, interested, not skeptical or disbelieving or trying to make a point. Just interested. I am also interested in the exact nature of the interview process - but I have to say, I have to read the study first, so I'll stop right here or I will make a fool of myself, if not already.

Take care, all :)

So -- wolfwalker's dismissal of the Johns Hopkins epidemiologists as "deranged morons" is credible ... exactly why, again? Because he doesn't like the Lancet either? Do Johns Hopkins and MIT fall into the "not credible" category as well?

I'm only going where the evidence leads me, as any good scientist should.

Hogwash. You made it crystal clear in your earlier comments that you just could not believe the study, because you distrusted everybody and everything associated with it. You have yet to offer a single item of evidence contradicting it, choosing to cite instead a grab-bag of statistics purporting to make its conclusions implausible. Rather than engage in trench warfare over each and every factoid that wolfwalker considers to be refutation of the study's conclusions, we should focus on the study, and the actual evidence available. Any serious problems with the study's methodology? Nobody's found any yet. Any reason to believe that the data were fabricated? Well, just that wolfwalker is sure that the study's authors are Bush-haters, and therefore liars. It is clear that wolfwalker feels passionately about this, but passion is not a substitute for reason. If wolfwalker had some substantive objection to the study, I suspect we would have heard it by now. Instead, the transmission seems to have signal-to-noise ratio approaching zero: ...deranged morons ... rabid hatred of George Bush ... [fizz, buzz] political liberals ... [crackle] ...
There's just not a lot of there there, if you know what I mean.

Maybe jre (commenter above) didn't mean for his/her? comment to read that way, but a part of it did....read the comment again. Read the last part of it.

You mean this part?

On the other hand, we have a commenter who just can't believe the study, and thus figures that its authors are "deranged morons."

Hmmm ... who, pray tell, am I to believe?

I don't think you have to find a study "beyond criticism" in order to claim that peer-reviewed studies are more convincing than name-calling and unsupported accusations.

I have very little knowledge of statistics, and no way of judging the validity of the study. On the other hand, you don't have to know a lot of statistics to understand that many of the criticisms of the study were ad hoc, uninformed, and motivated by mad-dog partisanship.

However, Mike's criticisms here seem quite reasonable.

If the authors had a political motive, that's fine. This is a major political issue, and people who have information relevant to the issue should absolutely bring it forward, and should do so before the elections rather than afterwards. I do not understand the reasoning of those who object to the very fact that the authors have an opinion about the issue they're writing about, especially because those objecting most loudly are rabidly partisan in the other direction. What they're saying is authoritarian nonsense.

If the authors slanted their study, that was a mistake (though it happens all the time, and "making the best case" is not the same thing as deception). On the other hand, the study would have been viciously attacked no matter how scrupulous it had been. Scientists often think that political issues can be decided by truth and fact, but politics only works that way very, very slowly. Tobacco and lung cancer took decades.

Note that the swarm of criticisms, legitimate or not, has had its effect: this study isn't getting much media play, and when it does it is labelled as "controversial". And that was what was at stake.

I've said many times that the attempt to put a number on the excess deaths was politically useless, because it allowed the debate to be shifted to questions about the exact number and the methodology used. I think that a very convincing case has been made that the earlier estimates are far too low, and also that the Iraqis, as of right now, are significantly worse off.

The qualitative reports we have, as well as a lot of numerical data on such things as electrical production, already told us that Iraq today is a hellhole. This estimate just adds a bit more detail, and the number doesn't have to be very exact to be significant. I really think that this is a place where scrupulosity about quantification and accuracy is a detriment. The argument really shouldn't be about whether the number is 600,000 or 200,000.

I should also note that many critics use the fact that Iraq is in chaos to argue that the report is too pessimistic (since data gathered in chaos can't be very good). Yeah, but maybe the report is too optimistic, too.

As an innumerate war opponent, what have I concluded? I'm left with two possible conclusions:

1.) A very good report has been smeared by mad-dog partisans. This happens all the time, and a lot of the critics fit the description.

2.) A flawed report has been attacked by mad dog partisans and others.

As many have pointed out, very similar statistical studies from Darfur, the Congo, or Rwanda did not suffer this kind of skeptical assault.

No one literally said that publishing an article in an academic medical journal makes it beyond criticism.

Wolfwalker named several studies that were published in The Lancet and subsequently discredited, and he concludes that The Lancet is therefore discredited. It seems to me that one can only reach that conclusion if one does not see an academic journal as the starting point of peer review. If one holds that a journal is untrustworthy because some of its articles were rightfully criticized, then one presumably believes a journal is trustworthy only if its articles are beyond criticism.

So Wolfwalker is not actually saying it, but he is implying it.

But this is spending far too much energy on Wolfwalker, who does not seem able to separate his political views from his academic ones.

Regarding Dave Eaton's comment: David Roberts was referring to:

"a network of fake citizens' groups and bogus scientific bodies has been claiming that science of global warming is inconclusive" (http://www.guardian.co.uk/climatechange/story/0,,1875760,00.html)

Not about legitimate scientists engaged in serious debate.

Roberts used overly strong language to state his belief that these groups should be held accountable for knowingly disseminating false information. If you believe that the tobacco industry should be held legally liable for its coverup and dissemination of false information, perhaps these groups should receive the same treatment. The Nuremberg trials analogy was way over the top, but lawsuits may be justified. At the very least these groups should face serious media scrutiny and public condemnation.

By petewsh61 (not verified) on 14 Oct 2006 #permalink

I am a former math teacher and have a little understanding of the methodology. I became a programmer in the early 1960s, when computers were just coming into widespread use in American businesses. Without using personal interviews, we often used methodology similar to that of the Iraq study in making product sales studies. So I have a little experience with the effectiveness of the method. I don't know George W. Bush, but I have known some of his closest associates for decades. One of them was a student of mine. I know George W. Bush and his tyranno-associates are liars. The study will rise or fall on its merits. The rabid criticism of the tyranno-Right will fall on its merits as it should, but only after having shouted down real criticism of a failed policy, thereby contributing to the increased death rate of innocents. Every rabid shout is equivalent to a bullet fired at an innocent heart.

Let's put together the quantitative estimates of the size of the effects you are discussing.

Any study, including the most precise 20-digit physics measurements, includes a number of inconsistencies and imperfections. The key is that the systematic problems implied by these inconsistencies and imperfections are small compared to the much larger statistical and other systematic uncertainties. The existence of some inconsistencies and imperfections does not invalidate a study, as long as they are quantitatively small.

The study provides a measurement of a quantity (601k), and provides an associated quantitative uncertainty (92k standard deviation if treated as a gaussian distribution). If you believe systematic effects cause a quantitative shift, or a systematic increase in the uncertainty, you need to provide a data-based estimate of the size of the effect.

For instance, entirely by eye I'd say your table implies the possible (possible) existence of a 25% uncertainty. However, it's entirely possible that the variations you identify are precisely of the size that are expected given the original study sample size. When you split up a sample into fine pieces they all show larger variation than the total sample result. This is a quantitative question I can't simply eyeball, but it's a check that can be done. If indeed they are consistent with that statistical variation, then there is no basis to assess an uncertainty based on the table.

You identify two possible systematic shifts, one of 10%, the other of between 50k and 100k. In both cases, I would judge that both your view and the authors' views are defensible; let's split the difference and assign a uniform ("square box") uncertainty of 1/sqrt(12) of the range. This shifts the results by 5%+75k, and adds two uncertainties, one of 3%, the other of 6k.

The systematic shifts lower 601k to 496k; and in addition to the original 92k uncertainty, it adds uncertainties of 25% (which, as I describe earlier, could very well be 0), 3%, and 6k. This increases the uncertainty from 92k to 154k (assuming the systematic effects you identify are uncorrelated). If the table does turn out to be consistent with the expected statistical variations of the subsamples, then it would change the uncertainty from 92k to 93k.

So the net result is to change the measurement from 601k+-92k to 496k+-154k (or 496k+-93k if the subsample variation is consistent with statistical uncertainties).
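Foland's error propagation can be reproduced with a quick quadrature sum. This is a sketch using only his stated inputs (the 92k statistical uncertainty, the possible 25% subsample term, and the 3% and 6k systematic terms); the small difference from his quoted 154k is rounding:

```python
from math import sqrt

def quadrature(*terms):
    """Combine independent uncertainties in quadrature (root sum of squares)."""
    return sqrt(sum(t * t for t in terms))

central = 601.0                       # thousands: violent excess deaths
shifted = central * (1 - 0.05) - 75   # apply the 5% and 75k systematic shifts -> ~496k
stat = 92.0                           # original statistical uncertainty (k)
subsample = 0.25 * shifted            # possible subsample variation (could well be 0)
sys_pct = 0.03 * shifted              # 3% systematic term
sys_abs = 6.0                         # 6k systematic term

with_subsample = quadrature(stat, subsample, sys_pct, sys_abs)   # ~155k
without_subsample = quadrature(stat, sys_pct, sys_abs)           # ~93k
print(round(shifted), round(with_subsample), round(without_subsample))
```

So 601k +- 92k becomes roughly 496k +- 155k (or 496k +- 93k if the subsample scatter turns out to be purely statistical), matching Foland's figures to within rounding.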

You'll have to decide for yourself whether the conclusions you draw depend on whether it is 601k+-92k or 496k+-154k.

Mr Dunford notes at the end of his critique:
"Finally, in fairness, I have to admit that I am not an expert in either statistics or demography, and it is entirely possible that some or all of my concerns are simply the result of my misunderstanding the techniques that were used. If that is the case, I suspect that I'll be learning about it soon in the comments."
I am a demographer and a statistician, and I offer one major criticism of his attack on this paper. He fails to show how any of the points he lists would result in any substantial change in the final estimate. For example, he repeatedly raises concerns about research procedures that are described in detail in the journal article. He notes that the authors failed to describe how they distinguished between major and minor roads, when almost anyone can look at a map and see that major roads have a lot of minor roads connected to them. The complaint about using the higher census numbers is thrown out the same way. Demographers have regularly found that better censuses find more people, and knowledgeable reviewers would not have questioned that decision. But here again, it is thrown out as if it were an effort to increase the numbers.

In general Dunford is using a standard PR technique: creating the impression that there are a lot of problems with the study and that it should not be believed, without identifying a single real problem that, if corrected, would change the results. The more sentences noting apparent problems the uninformed reader encounters while the article is being discussed, the less that reader is going to trust the findings reported in the article. Remember, the goal of science is to discover new knowledge, while the goal of public relations is to create a false image to manipulate the public. It is clear that the Johns Hopkins study is science and Dunford's tract is pure PR.

By James H Gundlach (not verified) on 14 Oct 2006 #permalink

Reasonable criticism, but some remarks. First, as to the excess nonviolent deaths: those were not measured independently; therefore the "significance" of the findings for this subgroup of deaths seems irrelevant to me. The only thing that counts is the significance of the total number of excess deaths. All your argument says is "well, it's still most probably 650,000 excess deaths, but we can't be sure that 50,000 of those were nonviolent".

Second, I have no reason to believe that the rate of violent deaths would be higher in urban areas than in rural areas; nor do I follow your argument as to why urban areas may have been oversampled.

Third, total population and population distribution over provinces: this seems like a valid criticism to me. Not because I have any reason to believe one estimate over another, but because I don't know if the uncertainty in the population estimates - which probably comes from a sample too - was figured into the uncertainty of the result. At least, it should have been mentioned.

However, the conclusion remains: barring outright fraud by the researchers, their sample is consistent with their figures and clearly not consistent with the ones given by IBC or the Iraqi ministry of health; and that is not surprising, as it is fairly well-established that casualties are severely undercounted and underreported in war zones.

By christian h. (not verified) on 14 Oct 2006 #permalink

I'm surprised no one has really taken up Mike's offer to discuss the content of his criticisms of the Lancet study. Maybe I'm not the best one to do it, because my training is in history rather than a mathematical science, but I do have a couple of comments.

First, Mike makes a good point that the authors should have discussed methodologically their decision to accept the UN/Iraq 2004 population estimate. That said, however, I'm not certain why it's a problem to use the most recent numbers available generated by credible governing institutions. The 2004 census Mike questions, after all, was conducted jointly by the UN and the Iraqi government -- not exactly a guy with a calculator and the back of an envelope. My bias as a researcher would be to use more recent numbers over older ones, under the assumption that the more recent study is able to correct flaws discovered in previous studies.

That is, Mike raises a good question here. The authors of the Lancet study should have provided an explanation of why they chose that set of population numbers. Their use of that set, however, does not in and of itself indicate a serious flaw in their methods or conclusions. Rather, Mike's question really is "how good was the 2004 UN/Iraqi census?" It's a legitimate question, and answering it would help nail down how solid the Lancet death rate numbers are.

Second, Mike wonders if the sampling method may have produced a bias towards urban over rural areas. Here I am not really certain how that bias is supposed to have worked. The study reports that clusters were selected within governorates "proportionate to population", which makes it sound like they attempted to sample more and less densely settled population units.

I think Mike's concern is that the start point of clusters was determined by the intersection of streets, which he seems to think indicates an urban locale rather than a rural one. This doesn't make a whole lot of sense to me. First, streets do intersect even in highly rural areas -- I can take Mike out to the far reaches of the rural midwest, where the closest house to the intersection of two county roads is a good half-mile away. Certainly not much density there.

More importantly, however, is that Iraqi settlement patterns differ substantially from American ones. The US model of family farms, where the farmers actually live on the land they work, is unusual in the world. In most peasant cultures, of which Iraq still is one, people tend to live in villages and travel out to their fields to work. Siting a cluster around the intersection of a main street with a crossing residential street could still place you in a very rural locale -- especially if you are selecting clusters "proportionate to population size."

Finally, Mike tells us that the p-value of the increase in non-violent post invasion deaths is insignificant. I'll take his word on that, because I know significance is a highly technical term with a precise mathematical definition. In the much fuzzier world of the social sciences, anything over fifty percent looks good to me. I realize that argument doesn't go anywhere with the folks who actually know the math.

Iraq would also have a deficit of males of an older generation because of the 1980-88 Iraq-Iran war. Since supposedly there were a few hundred thousand casualties in the military alone, this should have produced quite a large deficit of males. Has any news organization "noticed" the deficit?

The WaPo says

" A study earlier this year from Baghdad University, in collaboration with nonprofit women's rights groups, estimated there were 400,000 widows in Baghdad alone, according to the Iraqi organization Women Against War. Before two wars with the United States, a generation of Iraqi men was lost during the brutal eight-year-war with Iran in the 1980s."

I agree that the increase in non-violent deaths is not statistically significant. That is, it could be that non-violent deaths haven't increased at all, and the apparent increase is an artifact of the survey design. But I don't think it follows that we should subtract 50,000 or so from the overall estimate.

To see why, imagine the case had gone the following way. For each of the deaths recorded, the researchers were able to figure out a precise cause of death. Moreover, these were very widely distributed. (This isn't actually the case - gunfire is an overwhelmingly large cause.) It's possible in that case that we would find (a) a highly significant measure that the death rate overall had risen, but (b) for each category of type of fatality, we would only have an insignificant rise in the number of deaths in that category. If we followed Mike's advice and subtracted out each of the categories for which we didn't have a significant rise, we'd end up throwing away a perfectly good measure.

(Consider another case. A national poll of, say, Presidential approval ratings will never find a statistically significant change in the rating in any given state, because the sample is too small. If we subtract out from the sample all the states in which the change is insignificant, we'll never conclude that polls can show a significant change in ratings. But clearly they can.)
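The poll analogy can be made concrete with a two-sample z statistic for a change in proportions. This is a sketch with hypothetical numbers (a 3-point drop in approval, measured in a national sample of 10,000 versus a 200-person state subsample):

```python
from math import sqrt

def z_score(p_before, p_after, n):
    """z statistic for the change between two independent polls of size n each."""
    se = sqrt(p_before * (1 - p_before) / n + p_after * (1 - p_after) / n)
    return (p_before - p_after) / se

# hypothetical approval ratings: 50% before, 47% after
national = z_score(0.50, 0.47, 10_000)  # full national sample: z well above 1.96
state = z_score(0.50, 0.47, 200)        # one state's slice: z well below 1.96
print(round(national, 1), round(state, 1))
```

The identical 3-point swing is decisive in the aggregate and invisible in every subsample, which is exactly why subtracting out the "insignificant" categories throws away a perfectly good measurement.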

So I think the sensible thing to do is to keep the original number (with its large error bars) and say it is probable but a long, long way from certain that non-violent deaths have risen. Which, I think, is exactly what Burnham et al do.

Having said all that, I think there is a reason for being more sceptical of the numbers concerning the non-violent death rate. That's that there is an extra level of uncertainty involved, because we have to calculate both the pre- and post- invasion death rates. If there are possibly systematic biases in the survey, this might lead to an overestimate of the *increase* in non-violent deaths. At the least, it is a reason to say that we could use more raw data before we can say something stronger than that non-violent deaths seem to have increased.

Violent deaths, on the other hand, have increased, and increased by a lot. And nothing in the critiques here or elsewhere makes me any less confident in that conclusion.

Ignore the comments made by wolfwalker. He is displaying truthiness in action. If any of the readers watch the Colbert Report on Comedy Central, you may be familiar with the word, "truthiness." Truthiness was the 2005 American Dialect Word of the year.

According to Wikipedia, "Truthiness is a satirical term coined by Stephen Colbert in reference to the quality by which a person claims to know something intuitively, instinctively, or "from the gut" without regard to evidence, logic, intellectual examination, or actual facts."

Rather than analyze the specific methodology used to construct the study, he relies on personal attacks, innuendo, circular logic, and irrelevant arguments to attack the authors of the study. From a "truthiness" perspective the study must be incorrect: anything that contradicts what President Bush has said must be due to the fact that the authors "hate President Bush". Pathetic.

Arguing with Wolfwalker is pointless, therefore why waste your energy?

First, the number, 601k, in the preceding post by Foland refers to the number of excess deaths the study attributed to direct violence: "601 027 (426 369-793 663)". Note that the study's confidence limits span a range closer to 601k +- 175k, a greater range than Mr. Foland quoted.
But why concentrate only on the deaths attributable to direct violence? For excess deaths due to ALL causes, the Lancet study used a pre-war baseline estimate of deaths/1000 and compared this to a post-war estimate of the same to calculate all excess deaths attributable to the war. This number of excess deaths from all causes was "654 965 (392 979-942 636)". Note an even broader range of confidence limits here, the low-end estimate of 393k for deaths from all causes being about 8% lower than the low estimate for violent deaths.
Even assuming the lowest estimates, which Mike Dunford's criticisms don't yet justify, this estimate is 3-4 times the closest alternate estimates, based on morgue body counts, news reports, household surveys, or a combination of these sources.
Now consider that in Darfur the US State Department has estimated a possible range of 98,000-181,000 total deaths over 23 months, from March 2003 to January 2005. Other estimates, based on sampling methods similar to the Lancet study, found 134,000 total deaths in Darfur and Eastern Chad over the 17 months from September 2003 to January 2005. The Lancet Iraq study covers a briefer period, March 2003-September 2004, and estimates at least double the number of Iraqi excess deaths as in Darfur.
Finally, the US government has deemed that the Darfur situation meets the standard of a genocide. Can there be any doubt as to why George Bush, and his amen choir in the media and blogistan, have peremptorily dismissed the Lancet study?

By peter thom (not verified) on 14 Oct 2006 #permalink

"Were that the only place in the paper where coalition forces are singled out for mention or examination, I would write that off as me being overly sensitive. Unfortunately, that is not the case. The word "coalition" appears some 19 times in the paper, and the authors attempted to separate "coalition-caused" deaths from other types of violent deaths. Their criteria for classifying something as a coalition-caused death was relatively simple, by the way. If everyone in the household agreed that it was coalition-caused, then it was listed as coalition-caused. In contrast, the word "sectarian" appears once, on the first page of the paper, and no attempt appears to have been made to determine if deaths were the result of sectarian bloodshed. Call me crazy, but I think writing a paper about current death rates in Iraq without discussing sectarian violence is like writing a paper about theme parks without mentioning Disney. "

a. The Coalition was directly responsible for a third of the deaths.

b. The Coalition, by failing to provide security, is indirectly responsible for most of the rest (e.g., these deaths would not have taken place under Saddam Hussein, to put it crudely). Long before sectarian violence became a significant factor as per the news channels, criminal violence was a significant factor in post-invasion deaths.

c. The author of this blog neglects to mention : "Deaths were not classified as being due to coalition forces if households had any uncertainty about the responsible party; consequently, the number of deaths and the proportion of violent deaths attributable to coalition forces could be conservative estimates. Distinguishing criminal murders from anti-coalition force actions was not possible in this survey."

The question for the Iraqis and the rest of the world to answer is - are we better off with Saddam Hussein removed? One would like the answer to be yes, but objective evidence is that the answer is No.

Amen, Arun.

The 19 mentions of the coalition:

1.
(Summary) Interpretation The number of people dying in Iraq has continued to escalate. The proportion of deaths ascribed to coalition forces has diminished in 2006, although the actual numbers have increased every year.

2. (Introduction)
There has been widespread concern over the scale of Iraqi deaths after the invasion by the US-led coalition in March, 2003.

3.
These {MNC-I} data estimated the civilian casualty rate at 117 deaths per day between May, 2005, and June, 2006, on the basis of deaths that occurred in events to which the coalition responded.

4.,5.
Violent deaths that were directly attributed to coalition forces or to air strikes were classified as coalition violent deaths.

6.,7.,8.
Deaths attributable to the coalition accounted for 31% (95% CI 26-37) of post-invasion violent deaths. The proportion of violent deaths attributable to the coalition was much the same across periods (p=0.058). However, the actual
number of violent deaths, including those that resulted from coalition forces, increased every year after the invasion.

9.
No difference in the proportion of violent deaths in men of military age was noted between deaths attributed to the coalition or other/unknown sources (p=0.168).

10.
{ In table 4, causes of violent death given as Coalition, Other, Unknown.}

11.
We estimate that between March 18, 2003, and June, 2006, an additional 654 965 (392 979 - 942 636) Iraqis have died above what would have been expected on the basis of the pre-invasion crude mortality rate as a consequence of the coalition invasion.

12. {Discussion}
We estimate that, as a consequence of the coalition invasion of March 18, 2003, about 655 000 Iraqis have died above the number that would be expected in a non-conflict situation, which is equivalent to about 2.5% of the population in the study area.

13.,14.,15.,16.,17.
The proportion of violent deaths attributed to coalition forces might have peaked in 2004; however, the actual number of Iraqi deaths attributed to coalition forces increased steadily through 2005. Deaths were not classified as being due to coalition forces if households had any uncertainty about the
responsible party; consequently, the number of deaths and the proportion of violent deaths attributable to coalition forces could be conservative estimates. Distinguishing criminal murders from anti-coalition force actions was not possible in this survey.

18.
Coalition forces have been reported as targeting all men of military age.

19.
Deaths could have been over or under-attributed to coalition forces on a consistent basis.

Reference 5 is available here:
http://www.defenselink.mil/home/features/Iraq_Reports/Index.html
"Measuring Stability and Security in Iraq" - DoD quarterly report to the Congress.

Eyeballing the graphs, around July 06
there were around 790 attacks a week
117 civilian casualties a day
1 infrastructure attack per week
500 "sectarian incidents" with 2000 casualties (presumably mostly civilian?)

ie approx 3500 (117*30 days) casualties in around 3200 attacks (790*4 weeks),
of which 500 attacks were sectarian incidents, with 2000 casualties.

i.e., the DoD figures imply that roughly 16% of the attacks were sectarian, and that they produced about 57% of the civilian casualties.
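Redoing the eyeballed arithmetic with the chart numbers as read above (note the shares come out to roughly 16% of attacks and 57% of casualties):

```python
attacks_per_week = 790
casualties_per_day = 117
sectarian_attacks = 500        # per month, as read off the DoD charts
sectarian_casualties = 2000    # per month

monthly_attacks = attacks_per_week * 4        # ~3160 attacks/month
monthly_casualties = casualties_per_day * 30  # ~3510 casualties/month

attack_share = sectarian_attacks / monthly_attacks          # ~0.16
casualty_share = sectarian_casualties / monthly_casualties  # ~0.57
print(round(100 * attack_share), round(100 * casualty_share))
```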

As I quite clearly state, my +- is the standard deviation, which in physical sciences is the conventional way to state the uncertainty. Peter Thom's +- convention is just about twice the standard deviation.

And just to address Darfur--Peter Thom seems to be comparing absolute numbers of deaths in Iraq and Darfur to cast doubt upon the Iraq numbers. This only makes sense if Darfur and Iraq have about the same absolute population. Darfur has about 1/5th the population of Iraq.
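To put Foland's point in per-capita terms, here is a sketch using the 134,000 Darfur/Eastern Chad estimate cited above and his rough 5:1 population ratio:

```python
darfur_deaths = 134_000   # sampling-based Darfur/Eastern Chad estimate cited above
pop_ratio = 5             # Iraq's population is roughly 5x Darfur's (Foland's figure)
iraq_equivalent = darfur_deaths * pop_ratio
print(iraq_equivalent)    # Darfur's toll scaled to Iraq's population: 670000
```

At Iraq's population, Darfur's per-capita toll would correspond to roughly 670,000 deaths, so the absolute comparison cuts the other way.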

The Iraqi/UN study linked to by the author in the text conducted the bulk of its surveys from March to May 2004. Based on those surveys they estimate 24,000 deaths in the aftermath of the invasion, with a 95% confidence interval ranging from 18,000 to 29,000 (Analytical Report, page 54). They say that this is an underestimate because no correction was applied for the case of households where all members were lost.

Assuming that the 24,000 number is not an extrapolation to the April 2005 release date of the report, and given that the violence in Iraq has only since accelerated,
I would guesstimate from the Iraqi/UN figures

the lower bound of deaths three and a half years after the invasion:
24,000 * 3.5 = 84,000
if we say the rate of killing doubled in the second year, and doubled again
in the third year, and counting the half year at the same rate as the third year
24,000 * ( 1 + 2 + 4 + 0.5 * 4 ) = 216,000.
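Arun's back-of-the-envelope bounds, spelled out. The doubling schedule is the commenter's assumption, not a measured rate:

```python
first_year = 24_000   # UN/Iraqi survey estimate for the first ~year of the war
years = 3.5

# lower bound: killing continues at the first-year rate for 3.5 years
lower = first_year * years                  # 84,000

# upper bound: rate doubles in year 2, doubles again in year 3,
# and the final half-year runs at the year-3 rate
upper = first_year * (1 + 2 + 4 + 0.5 * 4)  # 216,000
print(int(lower), int(upper))
```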

Going back to the DoD reference ("Measuring Stability and Security in Iraq," given in a previous comment), page 32 of the August 2006 report has a chart of daily average civilian casualties. Eyeballing:
1 Jan - 31 Mar 04 : 25
27 Nov 04 - 11 Feb 05 : 50
11 Feb 06 - 19 May 06 : 82
20 May 06 - 11 Aug 06 : 117

"Casualty data reflect updated data for each period and are derived from unverified
reports submitted by Coalition elements responding to an incident; the inconclusivity
of these numbers constrains them only to be used for comparative purposes."

Presumably comparative purposes allow us to estimate the rate, and overall casualty rate reflects the death rate, and so the first doubling of rate is clear, the second doubling took more than a year, and so perhaps 216,000 is a reasonable upper bound(?).

Even the lower bound of 84,000 is much larger than the iraqbodycount.org maximum count of 48,783. However, compare the 24,000 from May 2004 by UN/Iraq survey (min 18,000, max 29,000) to the iraqbodycount release of Feb 7, 2004 :

"As many as 10,000 non-combatant civilian deaths during 2003 have been reliably reported so far as a result of the US/UK-led invasion and occupation of Iraq. These reports provide figures which range between a minimum of 8,235 and a maximum of 10,079 as of Saturday 7th February 2004".

Then, whether we believe the Lancet or Iraq/UN, direct news reports of deaths - the iraqbodycount.org way of estimating - are an underestimate by a factor of at least 3.

I think if we say that at least a couple of hundred thousand Iraqi civilians have died in the aftermath of the invasion, that will not be an untruth.

Arun, I think your final estimate is a fine attempt, but if I had to choose between such back-of-the-envelope calculations and a serious cluster sample for estimating any other demographic or epidemiological trend, I'd choose a cluster sample. Maybe there's something extra-screwy going on in Iraq, but in general, quite a lot of information in most parts of the world is estimated using cluster sampling. Most of the public figures critiquing the study have approvingly cited cluster-sampled results in the last couple years (unknowingly, I presume) and will again in the next couple.

I think most of your objections are fairly small, except for the possibility of sampling bias. In particular, having done some small-scale sampling in third-world countries, it is almost impossible not to be biased towards more urban areas, no matter how heroic your efforts are not to be. Furthermore, the 2004 population estimates you cite place a greater percentage of the population in urban areas than the 2003 population estimate--which if true, would mitigate some of the effects of the urban-sampling-bias, and if not true, would exacerbate them. And even if you get the urban-rural mix right generally, their street-based sampling is bound to over-emphasize town centers over "suburbs" and seriously undeveloped areas where roads are unnamed and maps are hard to come by.

All that said though, I don't have any way to estimate a correction for such biases--and at the very least, I'm sure that these experts are quite familiar with them themselves. So rather than try to tweak it myself--and certainly rather than going with some back-of-the-envelope calculation of my own--I think the best response is just to take the lower bound of their 95% confidence interval, 400,000. If they did a responsible cluster sample, trying to get the population distribution as close to accurate as possible--and almost every peer in their field seems to say they did--then it would be pretty unlikely, even with the additional biases towards urban areas, that the true number would be less than 400,000. This estimate is certainly better than any of the estimates extrapolated from media reports, and is better than your decent effort, which makes a number of reasonable assumptions that could easily be off by a factor of 2.

Sorry, I realize I conflated Arun's estimates with Mike Dunford's caveats. Mike doesn't try to make his own estimate, which is playing it safe. For all those who do argue with some of the biases in the report, I guess the main message is just to try to work with the Lancet estimate and, if you must, push its estimate down, rather than trying to come up with your own back-of-the-envelope estimate, which is bound to be much more uncertain than even a somewhat biased careful attempt at a cluster sample.

Arun's post listing the 19 mentions of "coalition" shows that number doesn't mean much.

MikeD: "The word "coalition" appears some 19 times in the paper, ... In contrast, the word "sectarian" appears once, on the first page of the paper,..."

Contrary to what MikeD says, there are mentions of sectarian violence ("sectarian" appears 7 times), and these do note the connections between sectarian violence and the rise in gunshot deaths. Here are the most interesting:

Summary: "The survey also reflects growing sectarian violence, ..."

Introduction: "Controversy and uncertainty surround the number of Iraqis killed by continuing actions by coalition forces and by the escalating sectarian and criminal violence." and "As the war has continued unabated and as sectarian violence has escalated,"

Appendix F: "The large rise in sectarian violence, and the survey's findings regarding gunshots being the principal cause of death, correlate closely. They also reflect the reports of widespread assassinations."

Appendix G (Policy Implications of High Mortality): "The sharp rise in the respondents' attribution of violent deaths to forces other than those of the U.S.-led coalition clearly tracks with reports of growing sectarian and ethnic violence, as does the steady rise in deaths by gunshots."

It may not assuage MikeD's concern, but it can hardly be said that the report ignores the role of sectarian violence.
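For what it's worth, the word-count comparison itself is trivial to reproduce. Here's a minimal sketch using a stand-in snippet rather than the actual paper text, so the counts shown apply only to this stand-in:

```python
import re

# Stand-in snippet, not the actual paper text.
text = """Controversy and uncertainty surround the number of Iraqis
killed by continuing actions by coalition forces and by the escalating
sectarian and criminal violence. As the war has continued unabated and
as sectarian violence has escalated, coalition forces ..."""

def count_word(word, text):
    # \b word boundaries so "coalition" doesn't match inside longer tokens
    return len(re.findall(r"\b" + re.escape(word) + r"\b", text, re.IGNORECASE))

print(count_word("coalition", text))  # 2 in this stand-in snippet
print(count_word("sectarian", text))  # 2 in this stand-in snippet
```

Of course, as Arun says, the raw counts tell you nothing; you have to read the mentions in context.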

As it is, the Lancet report is, again, the best estimate available and even the lower figures you get by tweaking are so much higher than the official US and Iraqi numbers that you know there are shenanigans going on there.

One of the salient points in all this is that a military cannot monitor how well it is meeting its obligations of proportionality and precision in the application of force, where civilians are likely to become casualties, unless it collects good information. Unless the US military is using numbers very different from the ones Bush has been mouthing, it is in no position to know whether or not it is living up to its international law obligations to protect civilians.

Roger,
My point was simply that the Iraq/UN methodology was also a survey; it would likely come up with a lower number; nevertheless, the number is much higher than the news reports would indicate; whatever one thinks of the Lancet report, it is probably appropriate to talk of "hundreds of thousands" instead of "tens of thousands".

Mike,
Yes, the point was that the 19 mentions of "coalition" (versus one or seven mentions of "sectarian") don't mean a thing; read the 19 mentions and decide for oneself.

I should note that the paper as retrieved from the BBC link to it has no appendices and has "sectarian" precisely once.

The footer is
"www.thelancet.com Published online October 11, 2006 DOI:10.1016/S0140-6736(06)69491-9"

So perhaps we're referring to different things.

wolfwalker's writings map perfectly to the behaviors of a Right-Wing Authoritarian (RWA). If you have not heard this term, Google it.

Note first his tendency to trust or distrust evidence based on whether the source is "with him or against him". Second, note his tendency to accuse his opponents of his own behaviors -- that is, projection.

You cannot argue with an RWA. So don't try.

By pragmatist (not verified) on 14 Oct 2006

Pragmatist, I immediately knew wolfwalker was not to be taken seriously when he cited "Iraq the Model" as a leading Iraqi blog. That comment could only have been made by somebody who's not only swallowed Bush's PNAC Kool-Aid, but fills his swimming pool with it.

The best Iraqi blog, in my own limited experience, was "Baghdad Burning", run by "Riverbend". I say "run by" because I think -- or hope -- that she's finally fled what's left of her country.

Before the US-led invasion, "Riverbend" had a job as a computer programmer. She wore jeans to work and rode the bus around town, listening to tunes on her CD player. She lost her job when her boss decided that he didn't want to offend the religious gangs that were already sprouting in the wake of the invasion. The last we'd heard of her, she was wearing a hijab and couldn't leave the house without two or more of her male relatives with her.

As many have pointed out, very similar statistical studies from Darfur or the Congo or Rwanda did not suffer this kind of skeptical assault.

In retrospect, these should have undergone more critical review. Aren't some of the authors common to all of the studies?

BTW, Iraq Body Count (hardly a right-wing organization) has weighed in on this debate.

Roger,

it is often said that cluster sampling is well validated.

Yet, when I look at the ILCS, which is a very large cluster sample and, due to its sample size in particular, the best recent one for Iraq in my opinion, I see some serious contradictions. Firstly, internally: neonatal mortality for the South is much lower than for the Centre region over the last 15 years. Secondly, it directly contradicts surveys of infant mortality done in the 1990s.

I have tried hard to find information, including in the peer-reviewed literature, on how cluster sampling for mortality in crisis zones has been validated, and I have come up empty.

I would also argue that mortality is fundamentally more difficult to sample than, say, political opinions. You cannot sample the dead directly in a poll, and death is a relatively rare event.
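That rarity point can be made quantitative: for a simple random sample of size n, the relative standard error of an estimated proportion p grows as p shrinks, so rare events need far larger samples for the same relative precision. A minimal sketch, where the sample size of 12,000 individuals is a rough assumed figure for illustration:

```python
import math

# Relative standard error of a binomial proportion estimate:
# sqrt(p*(1-p)/n) / p. Illustrative only.
def relative_se(p, n):
    return math.sqrt(p * (1 - p) / n) / p

n = 12_000  # assumed rough number of individuals surveyed
print(relative_se(0.5, n))    # ~0.009: a common "opinion" is pinned down tightly
print(relative_se(0.005, n))  # ~0.13: a rare event carries ~13% relative error
```

And that is before any design effect from clustering, which inflates the variance further.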

The most interesting take on the whole issue I've seen is on the following blog:

http://notropis.blogspot.com/

What I found particularly interesting are the questions raised there about training people for surveys, and how this can cause trouble.

For the period covered by the ILCS, the ILCS gives a much lower number for "war related" deaths than either Lancet study gives for "excess death". In principle, this could be explained by a lot of "excess death" not being "war related". I've got two problems with that. Firstly, it's pretty hard to make that case when most (first Lancet study, for the ILCS period) or more than 100% (second Lancet study, for the ILCS period) of excess death is violent. Secondly, for a calculation of the toll of the war, wouldn't it be quite appropriate to count only "war related" deaths?

Overall, at the moment, I don't trust cluster surveys of mortality in war zones. I have looked at the issue enough that I am not willing to accept appeals to authority ("it's well proven", "everybody is doing these surveys" -- really? How come I've found only a handful of such studies of mortality in war zones, and zilch coming even close to validation?). Where the Lancet surveys are concerned, that distrust is only amplified. Given a procedure that depends on getting the sampling and the interviews right, and that is questionable even when the authors and the people employed to do the actual interviews are not biased, I essentially think the Lancet estimates are completely worthless -- ie, I am not willing to trust any conclusion coming from them that wouldn't already be validated through other information.

Haven't read the study. A study author was interviewed on radio shortly after the results were announced and was asked about the timing of the study's publication. I can't recall the exact words, but it was something along the lines that it was partly due to the journal and partly because he was worried about the consequences to a friend of his if the report came out late -- which makes me wonder whether he had a similar worry about the consequences if the results of his study came out wrong. I think I recall the interview being on NPR, though it might have been Pacifica.