The Lancet has published a study of mortality in Iraq, a follow-up to a similar study from a year ago. In this study, the authors estimated that over 650,000 more people died in Iraq during the US occupation than would have died otherwise.
The Questionable Authority has some objections. I’ll start off by pointing out that he isn’t disputing the basic conclusion. Mike writes:
even if I am correct, and all of these errors result in overestimates of the total number of deaths, the number is still going to be much higher than the “official” totals. The population of Iraq is being harmed by this war, and it is being harmed much more than either the Iraqi or US government are willing to admit.
And his critique raises some valid points. Emphasizing mortality caused by coalition forces while not emphasizing sectarian violence is a mistake. It’s worth noting that the sectarian violence is itself a symptom of the occupation, and given that the militias are civilians, it’s hard to separate sectarian violence from other forms of violent death (the study splits violent death into three categories: coalition-caused, caused by others, or unknown). American forces wear uniforms and drive American military vehicles, so people who report that a violent death was caused by coalition forces have some basis for that assessment.
There are other valid reasons to focus on coalition-caused deaths. We control the coalition, and the coalition is supposed to be helping people. If we aren’t better than the terrorists, why are we even bothering? However, those objections are political, not scientific, and that’s Mike’s point. And he’s right.
His methodological critiques are less accurate. He makes four points:
- The baseline population estimates are different from those used in the previous study.
- The number of clusters sampled in an area is proportionate to population.
- The street intersection sampled is chosen randomly.
- Differences were computed between nonviolent mortality rates that are not statistically distinguishable.
The last point is right, but he gets a statistical point wrong.
The number on non-violent excess deaths was obtained by subtracting the pre-invasion non-violent death rate from the post invasion non-violent death rate. I’d have no problem with that approach, if they had shown that the post-invasion increase was significant, but they did not. In fact, when they conducted a significance test, they obtained a p-value of 0.523, which is not at all significant. (If you aren’t familiar with p-values, that more or less means that there’s a 52% chance that the difference between pre- and post-invasion rates was just due to the luck of the draw.)
The parenthetical is wrong. A p-value of 0.523 does not give the probability that the two populations (pre- and post-invasion) are different; it tells you how likely you’d be to get a result at least this extreme by chance alone if there were no difference.
That matters because it actually strengthens his point. A p-value of 0.99 would indicate that the two samples are surprisingly similar, perhaps even more similar than you’d expect by chance. A p-value of 0.5 says that nothing at all unusual is going on: there’s no evidence of any difference. And as Mike observes, removing that effect drops the number of excess deaths by 50,000. “Only” 600,000.
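Since the p-value point is easy to get backwards, here’s a quick simulation of what it actually measures: assuming no real change in the nonviolent death rate, we count how often chance alone produces a rate difference at least as large as the one observed. The rates and person-year counts below are made up for illustration; they are not the study’s figures.

```python
import random

random.seed(1)

# Illustrative numbers only (not the study's actual person-year counts):
# suppose both periods truly share a nonviolent death rate of 5 per 1,000
# person-years, and each survey period covers 2,000 person-years.
true_rate = 0.005
person_years = 2_000
observed_diff = 0.0014  # the rate difference we happened to measure

def sample_rate():
    """Simulate one survey period and return its estimated death rate."""
    deaths = sum(1 for _ in range(person_years) if random.random() < true_rate)
    return deaths / person_years

# p-value: how often does chance alone produce a difference at least
# as large as the observed one, when the true rates are identical?
trials = 1_000
extreme = sum(1 for _ in range(trials)
              if abs(sample_rate() - sample_rate()) >= observed_diff)
p_value = extreme / trials
print(f"simulated p-value: {p_value:.3f}")
```

A result near 0.5 means a difference of that size turns up by chance about half the time even when the true rates are identical, which is very different from saying there’s a 50% chance the rates differ.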
The other three points Mike raises seem less valid.
Choosing a random street corner to start with (a random main street intersecting a random residential street) introduces no bias. Mike writes that “towns tend to have more main streets per unit of area than rural areas do,” but since they are just choosing one location, that doesn’t seem to matter much.
Choosing a higher population estimate does inflate the mortality estimate to some extent, but unless there’s some reason to think the UNDP estimate is worse than the alternatives, there’s no basis for claiming that this is a flaw, or even a bias (except in comparisons between this study and the earlier one).
Sampling clusters in proportion to population seems entirely reasonable to me. Sampling populations by choosing a random geographical point (as done in the 2004 study) seems like it would bias you towards rural areas which would tend to deflate your estimate of excess mortality. Sampling proportional to population will tend to sample households (as opposed to geography) at random, which seems more accurate to me.
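The contrast can be sketched with a toy example. The governorate names, populations, and areas below are entirely hypothetical; the point is only that allocating clusters by population gives every household roughly the same chance of selection, while dropping uniform random points on a map hands most clusters to sparsely populated rural areas.

```python
import random

random.seed(0)

# Hypothetical regions: (name, population, land area in km^2).
regions = [
    ("urban-A", 5_000_000, 1_000),
    ("urban-B", 3_000_000, 800),
    ("rural-C", 1_000_000, 50_000),
    ("rural-D", 1_000_000, 60_000),
]

def allocate_pps(regions, n_clusters):
    """Allocate clusters proportional to population size."""
    total_pop = sum(pop for _, pop, _ in regions)
    return {name: round(n_clusters * pop / total_pop)
            for name, pop, _ in regions}

def allocate_by_area(regions, n_clusters):
    """Allocate clusters by dropping uniform random points on the map:
    each region's expected share is proportional to its area, not its people."""
    total_area = sum(area for _, _, area in regions)
    counts = {name: 0 for name, _, _ in regions}
    for _ in range(n_clusters):
        x = random.uniform(0, total_area)
        for name, _, area in regions:
            if x < area:
                counts[name] += 1
                break
            x -= area
    return counts

print("proportional to population:", allocate_pps(regions, 50))
print("uniform geographic points: ", allocate_by_area(regions, 50))
```

With these made-up numbers, the population-proportional allocation sends half the clusters to urban-A (half the people live there), while the geographic allocation sends nearly everything to the two rural regions, which hold only a fifth of the population.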
This is all borne out by the paper’s own internal checks. The graph here shows that this analysis and the previous analysis are very close to identical (compare the green line and the purple one). The authors point out:
Application of the mortality rates reported here to the period of the 2004 survey gives an estimate of 112,000 (69,000–155,000) excess deaths in Iraq in that period. Thus, the data presented here validates our 2004 study, which conservatively estimated an excess mortality of nearly 100,000 as of September, 2004.
They got nearly identical results for identical time periods, which means that the changes to their technique did not unduly alter their analysis.
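The arithmetic behind that kind of check is simple: multiply the rise in the death rate by the person-years of exposure. The inputs below are rounded figures roughly in the neighborhood of those the study reports (the study itself works from person-years measured in its sample, not a single national population figure), so treat this as a back-of-the-envelope sketch, not a reproduction of the analysis.

```python
# Rounded, illustrative inputs (roughly the magnitudes the study reports):
pre_rate = 5.5 / 1000    # deaths per person-year before the invasion
post_rate = 13.3 / 1000  # deaths per person-year after the invasion
population = 26_000_000  # approximate population of Iraq
years = 3.3              # roughly March 2003 to mid-2006

# Excess deaths = (rise in death rate) x (person-years of exposure).
excess = (post_rate - pre_rate) * population * years
print(f"back-of-envelope excess deaths: {excess:,.0f}")
```

Even this crude calculation lands in the same general range as the study’s 650,000 figure, which is the point of the validation exercise: the headline number follows directly from the measured rate increase.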