This is the question to always keep at the front of your mind when arguments are being slung around (and it is the general question one should always be thinking of when people talk statistics): How Would One Get This Sample, If The Facts Were Not This Way? There is really only one answer: that the study was fraudulent.
When someone dies, you get a death certificate, in your hand, from the hospital, morgue or coroner. This bit of the death infrastructure is still working in Iraq. Then the person who issued the death certificate is meant to send a copy to the central government records office, where they collate them, tabulate them and compile the overall mortality statistics. This bit of the death infrastructure is not still working in Iraq. (It was never great before the war, broke down entirely during the year after the invasion when there was no government to send the copies to, and has never really recovered; statistics agencies are often at the bottom of the queue, after essential infrastructure, law and order and electricity.) Therefore, there is no inconsistency between the fact that 92% of people with a dead relative could produce the certificate when asked, and the fact that Iraq has no remotely reliable mortality statistics and quite likely undercounts the rate of violent death by a factor of ten.
Lindsay Beyerstein rounds up warblogger reactions and comments:
Cowards, all of them. They own this war, but they won't face up to the fact that their little adventure helped kill over half a million people.
Over here at ScienceBlogs we have revere:
Given the security situation, a population-based random sample is not feasible, but this still leaves some scientific methods available, one of which, cluster sampling, is a well accepted and conventional technique. In this case it also produced data that is consistent with data from other sources. For example, the paper estimates pre-invasion mortality of 5.5 deaths/1000 population (Crude Mortality Rate), in line with independent estimates from the CIA and the US Census Bureau. I haven't had a chance to read the paper in detail, yet, but the authors are extremely competent, and their statistical consultant (Scott Zeger), chair of the Biostatistics Dept. at Hopkins, is someone I have been on committees with. He was just elected to the Institute of Medicine, part of the National Academies of Science. These folks are not scientific lightweights. They know what they are doing and The Lancet is one of the world's premier medical journals.
and Mark C. Chu-Carroll:
Believe me, nothing would make me happier than being wrong about this. I really don't want to believe that my country is responsible for a death toll that makes a homicidal maniac like Saddam Hussein look like a pansy... But facts are what they are, and the math argues that this mind-boggling death toll is most likely all too real.
and Mark C. Chu-Carroll again:
The Lancet study is far from perfect. And there are people who have come forward with legitimate questions and criticisms about it. But that's not the response that we've seen from the right-wing media and blogosphere today. All we've seen is blind, pig-ignorant bullshit - a bunch of innumerate jackasses screaming at the top of their lungs: "IT'S WRONG BECAUSE WE SAY IT'S WRONG AND IF YOU DISAGREE YOU'RE A TRAITOR!".
The conclusion that I draw from all of this? The study is correct. No one, no matter how they try, has been able to show any real problem with the methodology, the data, or the analysis that indicates that the estimates are invalid. When they start throwing out all of statistical mathematics because they don't like the conclusion of a single study, you know that they can't find a problem with the study.
and Alex Palazzo:
Will people wake up to the reality there? Americans are upset about the whole war, but I'm afraid that the truth is worse than the vast majority believes.
and Mike Dunford has some methodological concerns which Josh Rosenau addresses.
I don't approve of those who won't take a serious look at this study, but it would probably help if the timing of this study's release, like the last one, wasn't so utterly dubious.
I also believe some of the figures, based on facts we can to some extent verify against other sources (like the airstrike data), seem highly unlikely.
I would also like to remind Mr. Lambert that those who trumpet this study are often as innumerate as those who criticize it.
genocide
If I understand correctly, the study concludes that the invasion has left non-violent mortality more or less unchanged. I find this surprising. Deteriorating infrastructure can be expected to increase infant mortality and infectious disease mortality. Also, I was under the impression that the first Lancet survey had about half the excess mortality come from non-violent deaths.
Can someone comment? Thanks.
Daniel Davies seems to be very good at issuing assertions, but not very good at supporting them with anything.
"Can someone comment? Thanks."
Just a conjecture, but that might have something to do with the sanctions put on Iraq after the first Gulf War. It's commonly accepted that those killed around half a million civilians, so perhaps the infrastructure was already pretty much useless before the invasion.
I'm pretty disappointed that the pseudo-intellectuals at "climateaudit" haven't seen fit to "audit" this study. I mean, c'mon, the stats should supposedly be right up their alley to pick apart and impress themselves with! All they've got is an opinion piece from Screamin' Bill Gray!
Would anyone like to comment on potential selection bias in this study? I am not saying it's less accurate than government figures (it's surely more accurate), but that does not mean there hasn't been significant selection bias as a result of:
1. Urbanity: The survey is spread across provinces (which is good), but from the commentary I've read seems heavily urbanized (which is bad for the representativeness of the sample, and probably skews the numbers upward).
2. All Statistics is Political: The paper acknowledges that clusters had to be altered based on the situation on the ground at the time. What sort of bargains did they have to strike with certain factions to gain access, and how could political conditions have affected the results?
My personal feeling is that the analysis is sound but the collection contains significant selection bias (which other experts all but admit is inevitable), and the number is high by a factor of 2-3...which is still 200,000-300,000 dead (200-250 dead/day). Which is still a humanitarian tragedy.
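As a quick check on that discounting, here is the arithmetic as a minimal Python sketch; the ~655,000 headline figure is the survey's, while the bias factor and the ~1,200-day window are rough assumptions:

```python
# Scale the survey's headline estimate down by cwendt's assumed bias
# factor of 2-3 and express it as deaths per day over the study window
# (roughly March 2003 - June 2006, ~1,200 days).
EST = 655_000
DAYS = 1_200  # rough length of the study window in days

for factor in (2, 3):
    adj = EST / factor
    print(f"factor {factor}: ~{adj:,.0f} dead, ~{adj / DAYS:.0f}/day")
# factor 2: ~327,500 (~273/day); factor 3: ~218,333 (~182/day),
# roughly bracketing the 200,000-300,000 and 200-250/day quoted above.
```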
cwendt - as far as I understand it, Iraq is a heavily urbanised country, with most of the population living in cities, which is why the survey went to so many.
Sortition - there was an increase in heart attacks and some other problems, which I would suggest was due to stress, lack of medicines and perhaps problems with the electricity supply, meaning a lack of air conditioning.
From the paper: http://www.thelancet.com/journals/lancet/article/PIIS0140673606694919/f…
"Sampling followed the same approach used in 2004, except that selection of survey sites was by random numbers applied to streets or blocks rather than with global positioning units (GPS), since surveyors felt that being seen with a GPS unit could put their lives at risk. The use of GPS units might be seen as targeting an area for air strikes, or that the unit was in reality a remote detonation control. By confining the survey to a cluster of houses close to one another it was felt the benign purpose of the survey would spread quickly by word of mouth among households, thus lessening risk to interviewers."
So, isolated homesteads were excluded but not, I think, villages. But I would imagine that living in an isolated homestead would be even more dangerous in modern Iraq. So there is potential for bias, but this bias is probably downwards.
So, here we have a situation where Iraqi Doctors are too scared to walk down the street holding a GPS unit in case they get killed, and people are saying that 500 deaths a day doesn't fit their 'smell test'? Some people are just crazy.
Tim, D Squared:
Hi, it's been a while. Hope you're both doing well and enjoying life. Since my main man Heiko hasn't weighed in on this by now, I guess I'll have to sub in.
Sortition beat me to it, but didn't follow through. Everyone seems to be taking at face value the repeated assertions of the study authors that the 2006 survey corroborates the findings of the 2004 survey. To quote from the companion document to the 2006 study:
" That these two surveys were carried out in different locations and two years apart from each other yet yielded results that were very similar to each other, is strong validation of both surveys."
The problem is, the two surveys didn't yield "very similar results."
The authors are basing this supposed mutual corroboration entirely on the similar excess death figures derived from the studies, for the time frame covered by the 2004 survey (100,000 excess deaths in the 2004 survey and 112,000 from the 2006 survey).
However, the composition of the excess death tolls differs radically from study to study. As I argued here in 2005 when comparing Lancet 1 to the UNDP survey, deaths from various causes are not interchangeable when using one study's bottom line to bolster the bottom line of another. In other words, one can't simply say "we'll make up for a shortfall in coalition air strike deaths in one study with some heart attack deaths from the other."
Recall that in the 2004 survey, the headline-grabbing 100,000 excess death figure owed much of its punch to the 40,000-plus excess deaths attributed to non-violent causes. The 2006 survey tells an exceedingly different tale of mortality in Iraq during the first 18 months of the war. Not only is all of the excess death toll in the 2006 survey the result of violence, it's actually greater than the entire 112,000 excess death figure, because the death rate from non-violent causes is significantly less than the baseline non-violent death rate. My rough math indicates an extrapolated violent death toll of more than 130,000 for the second survey, for the same time frame covered by Lancet 1.
As I recall from the 2004 discussions and debates, the study authors and their supporters here and elsewhere attributed the large non-violent excess death toll to the inevitable effect on infrastructure, health care, etc arising from the chaos of invasion and subsequent insurgency.
That sounded reasonable enough at the time, assuming one accepted the premise that the invasion and occupation had severely degraded and impaired Iraq's domestic infrastructure. But now, in a complete contradiction of the defence of their earlier work, the study authors are telling us that said chaos of war had no effect on the overall post-invasion non-violent death rate until the beginning of 2006, and in fact the post-invasion non-violent mortality rate was actually lower than the pre-invasion rate for more than two years after regime change.
Obviously, there are two serious problems arising from the comparison of the data sets for the two studies. First, unless I've completely botched the math, the 2006 survey extrapolates approximately 75,000 more violent deaths and 63,000 fewer non-violent deaths than the first survey. This is a monstrous discrepancy, considering the baseline annual mortality figure is approximately 120,000 deaths.
Second, if there is a correlation between extreme violence in a society and an increase in the non-violent death rate, why does one study confirm the correlation, while the second rebuts it?
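To put numbers on the discrepancy being alleged here, a minimal sketch of Mike H's comparison; every input is a figure quoted in this thread (the ~132,000 violent extrapolation is his, and is worked through in detail further down), not an independent reading of the papers:

```python
# Compare the composition of the two surveys' excess-death estimates
# over the same 18-month window, using the figures quoted in-thread.
lancet1_violent = 57_000      # 2004 survey: excess violent deaths
lancet1_nonviolent = 41_000   # 2004 survey: excess non-violent deaths

lancet2_total = 112_000       # 2006 rates applied to the same window
lancet2_violent = 132_000     # Mike H's extrapolation (see below)
lancet2_nonviolent = lancet2_total - lancet2_violent  # forced negative

print(f"violent-death gap:     {lancet2_violent - lancet1_violent:+,}")
print(f"non-violent-death gap: {lancet2_nonviolent - lancet1_nonviolent:+,}")
# => +75,000 violent, -61,000 non-violent (Mike H quotes 63,000; the
#    difference is rounding in the inputs). Similar totals, very
#    different compositions: that is the whole dispute.
```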
Re non-violent mortality: if you crunch the numbers provided, you can determine that non-violent mortality among women and the elderly actually rose significantly post-war among the sample group. Infant mortality stayed steady, and the non-violent mortality among adult males went down (at the same time as their violent mortality rate went through the roof).
There are a couple obvious possible explanations for this, none of which the Lancet authors evidently felt the statistics were strong enough to commit to.
BruceR wrote:
One of the obvious ones is competing risks, but I suspect critics will settle on charges of fraud. Nonetheless, allocation of the cause of death shouldn't distract from the fact that the deaths occurred.
Bruce:
Is that an observation unrelated to my post, or are you suggesting that your comment addresses the huge discrepancy between the two studies in the context of violent/non-violent deaths?
I'm not trying to be snarky. I'm a big fan of your blog, and a fellow Canuck to boot. I wish you'd pop in more at Daimnation.
"Nonetheless, allocation of the cause of death shouldn't distract from the fact that the deaths occurred.'
Is there a " take back " feature available for commenters here Tim? Robert is in dire need of same.
I can't believe you said that Robert.
Mike: "I don't approve of those who won't take a serious look at this study, but it would probably help if the timing of this study's release, like the last one, wasn't so utterly dubious."
Well, the release of the 'Invade Iraq Now!' campaign occurred at a very politically convenient time for Bush, back in 2002. Should we have dismissed the case for the invasion out of hand?
"I also believe some of the figures, based on facts we can, to some extent, verify against other sources (like the airstrike data) seem highly unlikely."
Please clarify - so far, 'gut' and 'feel' have been almost totally wrong.
"I would also like to remind Mr. Lambert that those who trumpet this study are often as innumerate as those who criticize it."
Now that's just plain wrong. We've seen numerous numerate defenses of the study. There's been an incredible tidal wave of extremely innumerate attacks, many of which don't even make 'common' sense, let alone display Stat 101 knowledge.
Mike H, given the wide CI for the first Lancet study, the close agreement (98k vs 110k) with the second study is surprisingly good. I'm expecting someone to seize on this and accuse them of cooking their statistics any time now. When you break it down into violent and non-violent deaths things get more uncertain. I'm pretty sure that if you did the test you'd find no statistically significant differences in the violent and non-violent rates between the two studies.
There's a 50% chance that the differences will have opposite signs, and partly cancel, which is what happened.
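For anyone wanting to see what "the test" could look like for the headline numbers, here is a back-of-envelope sketch that backs approximate standard errors out of the published 95% CIs (98k with CI 8k-194k for the 2004 study; 112k with CI 69k-155k for the 2006 rates applied to the same window). Cluster-survey intervals aren't really symmetric normals, so treat this as illustrative only:

```python
# Normal-approximation comparison of two estimates via their 95% CIs.
from math import sqrt

def se_from_ci(lo, hi, z95=1.96):
    """Approximate standard error from a symmetric 95% CI."""
    return (hi - lo) / (2 * z95)

est1, se1 = 98_000, se_from_ci(8_000, 194_000)     # Lancet 2004
est2, se2 = 112_000, se_from_ci(69_000, 155_000)   # Lancet 2006, same window

z = (est2 - est1) / sqrt(se1**2 + se2**2)
print(f"z = {z:.2f}")
# => z ~ 0.27, nowhere near the ~1.96 needed to call the two headline
#    estimates inconsistent with each other.
```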
Mike H wrote:
Hmmm. You're saying that allocation of cause of death should distract from the fact that the deaths occurred?
First off, one can argue that in both studies the pre-war period they chose to compare to is not representative as it did not contain any of the major death toll events of the Saddam era. Since the purpose of the war was to remove Saddam, in large part explicitly because of those very events, it seems fair to include them, but of course we know from their statements that the authors had no interest in being fair (see the wiki).
The major methodology problem in both studies was the reliance on reporting from citizens, which assumes that a population culturally prone to exaggeration, and in some areas very hostile to the war effort, is reliable in giving estimates of war dead. Hell, take a poll of Americans who claim to have been at Woodstock and you'll probably find 8-16 times as many as were actually there. Death certificates are hardly reliable in a country where it has been widely reported that all kinds of identity documents are being forged. And since American forces typically compensate the victims of violence, there is financial incentive to exaggerate claims as well, and to have paper "proof" of those claims. And of course even that also assumes our biased researchers are being honest in how they sample.
Remember, too, that the major violence is primarily restricted to just three provinces plus Baghdad. That means this would be 600K excess deaths out of about 10M, or 6%. We would also expect to see about 5 times as many who are merely seriously wounded. That means 3.6 million dead and wounded, out of 10 million, or more than a third.
Are they joking? Baghdad would be a ghost town with numbers like that. Hospitals would be crammed full and spill into the street with casualties. Morgues would be stacking bodies twenty feet high in the midday sun. The smell alone would drive people out.
Worst of all, perhaps, the "scientists" doing the study didn't even pretend to be objective, and have now twice published their results right before an election with the express intention of affecting the result. Rarely is even the worst agenda science this shameless. This entire episode reflects very badly on The Lancet and Johns Hopkins.
"Are they joking? Baghdad would be a ghost town with numbers like that. Hospitals would be crammed full and spill into the street with casualties."
TallDave, did you notice that their figure for births is quite a bit higher than their figure for deaths? So, are you claiming that the report can't possibly be right because the maternity wards just couldn't cope?
BTW that's not the only bit of your argument which is childish. But life is short.
TallDave are you claiming that they forged the death certificates on the spot?
Kevin, if the survey claimed to find 650,000 excess births as compared to the prewar period, you might conceivably have come close to having a point.
No Tim, as I said, many could have been pre-forged for unrelated reasons, though it's impossible to know for sure.
The Iraqi officials whose job it is to monitor these things have been quite adamant that the survey is wrong, as have the people at Brookings who monitor Iraqi statistics. While things are chaotic, it is hard to imagine they could all be off by an order of magnitude. 90% of the bodies in an area do not disappear unnoticed.
Steve Sailer has updated his original post and offers an interesting theory on the incredibly low refusal rate for this survey:
http://isteve.blogspot.com/2006/10/updated-depressing-news-of-day.html
"So, here we have a situation where Iraqi Doctors are too scared to walk down the street holding a GPS unit in case they get killed, and people are saying that 500 deaths a day doesn't fit their 'smell test'? Some people are just crazy."
SG is a perfect example of the innumerate defenders of this study I referred to. Many people wouldn't be caught dead walking down the streets of a rough neighborhood in Washington, Los Angeles, New Orleans, or New York, yet no one would claim that most of these neighborhoods have anywhere near the violent crime rate that is being discussed as being conceivable in the worst areas of Baghdad in this study.
Is that two different Mikes there? Steve appears to think the numbers are okay. To quote: "More analysis is necessary, but, after a few hours of kicking the tires, these numbers don't strike me as obviously implausible."
(I do love the turning of the "innumerate" label onto the people defending the study. Somebody has their Rove playbook out....)
Yes Justin, there are two different Mikes in this thread.
Tim, you said " I'm expecting someone to seize on this and accuse them of cooking their statistics any time now."
I see this exactly opposite. In my view, this proves the authors weren't fudging their numbers, because the statistical variance I noted causes the study serious credibility problems. If the study authors were going to cheat, they surely would have patched up the difference prior to the release of the study. They didn't do that, but they certainly seem to have gone out of their way to avoid mentioning it, let alone addressing it.
I've read both the study and the companion paper, albeit rather quickly. I see one mention of the fact that the earlier study found nearly half of the excess deaths to be non-violent. I didn't see any mention of that in the 25-page companion paper (although I may have missed such a mention). In the case of the comment in the study itself, there isn't any juxtaposition with the absence of an increase in the non-violent rate in the new study, much less an attempt at explanation.
"When you break it down into violent and non-violent deaths things get more uncertain."
Tim, given how stridently you and many others defended the integrity of the death subsets when we debated the original study, I don't think your response passes muster.
" I'm pretty sure that if you did the test you'll find no statistically significant differences in the violent and non-violent rates between the two studies."
You've lost me there Tim. What test are you referring to? As you may recall, I'm no statistician. However, from my layman's perspective, it seems obvious that a rough agreement in the breakdown of violent/non-violent deaths would be a fundamental benchmark criterion for confidence in the accuracy of the studies. I can see cutting the authors some slack in terms of the various death subsets beyond violent and non-violent, but the numbers I cite are ridiculously far apart.
By the way, speaking in your capacity as a numerate, have I got them about right?
"Hmmm. You're saying that allocation of cause of death should distract from the fact that the deaths occurred?"
Absolutely, Robert.
An Iraqi may die of cancer, or he may die in a car bombing, but he can't die from both to appease two different survey results.
The death subsets have been extrapolated by the study authors to make some very bold claims about the nature of the excess death, not just the extent of it. My argument in relation to the second study is the same as it was in the first. I'm not disputing that the study was conducted using peer-accepted statistical methodology. What I am disputing is the ability of the methodology to deliver an accurate estimate of both the overall number of deaths as well as their breakdown.
When I debated this in 2004 and 2005 at Deltoid, I said on several occasions that additional studies would likely be all over the map in terms of results. The dramatic difference between the two studies in the breakdown of non-violent versus violent deaths is evidence of this. As I mentioned to Tim above, this is too fundamental a benchmark for consistency to be sloughed off.
If the numbers were similar from study to study, it would certainly weaken efforts for critics of the studies to further parse the differences between the two in relation to attribution for violent death. However in my view, that has become completely moot because the huge statistical variance between the studies pops up before we even get to an analysis of attribution for violent death.
To put this another way, I'm not claiming the vast majority of deaths recorded in either survey were improperly or inaccurately recorded. But once you start swapping deaths between violent and non-violent categories to the extent we see with the two studies, the reliability of the study's extrapolations comes unraveled, in my opinion.
I find on my home computer that every time I jump from Name to email to comment, the computer tries to send the message and gives me an error message. This happens at home, not at work, so I'm not sure what causes it. (Not that I would know anyway.)
Mike, as a fellow nonstatistician, I think the problem is that when you break down a relatively small data set into subsets, it becomes less and less reliable. So (assuming one accepts the data), the overall death toll is fairly solid, but the breakdown according to causes is much less solid. So maybe there were about 100,000 excess deaths in the period of the first study, but whether it was 60,000 violent and 40,000 nonviolent (which seems more plausible, since you'd expect some increase in nonviolent deaths) or mostly violent wouldn't be settled by either study. You'd need a much bigger study with larger numbers to settle it. One could probably invoke the UN survey, which according to the IBC critics gave a figure that should be extrapolated to about 30,000 war-related deaths (excluding criminal violence) for the first 18 months. That was only about 30 percent less than the first Lancet midrange figure (if you stick to the same category of numbers). Then this current study gives a violent death toll which is too high for the first 18 months.
I'm no expert, but it seems to me that in other fields it's common for scientists to do various studies which flatly contradict each other and the final resolution takes years. And people get quite heated about it, even when it's not a matter of life and death. I'm not sure why this question would be any different.
"Mike, as a fellow nonstatistician, I think the problem is that when you break down a relatively small data set into subsets, it becomes less and less reliable. So (assuming one accepts the data), the overall death toll is fairly solid, but the breakdown according to causes is much less solid."
Sorry Donald, I don't buy that at all. The problem between the two studies isn't a function of the bottom-of-the-chain subsets; it arises within the two main subsets, violent and non-violent. All deaths, whether excess or not, fall into one or the other of the two, without exception. I cannot stress enough the fact that the two studies aren't just slightly out of whack in this regard. They're hugely contradictory.
How can you suggest that the "overall death toll is fairly solid," when the two studies have so little in common about the nature of their respective excess death tolls?
Donald, indulge me if you will, and go back to my first post in this thread. Can you offer me an opinion on this observation:
"Second, if there is a correlation between extreme violence in a society and an increase in the non-violent death rate, why does one study confirm the correlation, while the second rebuts it?"
The authors and defenders of the first study pointed to a dynamic where violence begets more non-violent deaths. Apparently, the results of the second study rebut that dynamic, to a point where we now have magically more than 20,000 less deaths from non-violent causes than we had pre-invasion. These studies are painting two very different pictures of the nature of violence in Iraq during the first 18 months. The bottom line similarities are therefore largely meaningless.
"I cannot stress enough the fact that the two studies aren't just slightly out of whack in this regard. They're hugely contradictory."
This is incoherent. Suppose you have a "population" of deaths, some proportion of them violent, the rest non-violent. You draw two samples and one of them is mostly violent while the other is mostly non-violent. In what sense are the samples "contradictory"?
Mike H, can you explain your comment:
"Apparently, the results of the second study rebut that dynamic, to a point where we now have magically more than 20,000 less deaths from non-violent causes than we had pre-invasion."
As far as I can see from reading the Lancet paper, there are more non-violent deaths in the post-invasion period than before: post-invasion deaths are 247, pre-invasion 80. I assume that you can imagine that in a more stressful situation, with badly damaged water and food supplies, as well as a lack of decent care facilities, non-violent deaths would increase.
"This is incoherent."
What are you talking about?
As I said in an earlier post, an Iraqi can die of cancer, or he can die in a car bombing, but he can't die both ways to appease two different study results. That seems very straightforward Kevin, and I've no idea what you're playing at here.
More than 40% of the excess deaths from the first study were from non-violent causes. The second study erases all those deaths, takes away better than 20,000 more from the baseline mortality extrapolation, and then replaces all of them with violent deaths.
Evidently, you see no problem with this. I'm anxious to hear a detailed explanation as to why.
Guthrie:
The reason there are more non-violent deaths in the post-invasion period than the pre-invasion period is the time frames involved. The post-invasion period is 3 times longer. If you check the study, you'll see the survey data suggests the increase in the non-violent death rate after the invasion, as opposed to the pre-invasion period, doesn't kick in until more than two years after regime change commenced.
My question is " what took so long?" Especially since the first Lancet study detected a significant increase in the non-violent death rate immediately. If we accept your premise that " ....in a more stressful situation, with badly damaged water and food supplies, as well as lack of decent care facilities, non violent deaths would increase," then why did the second study show a decrease rather than an increase, for such a long period of time after the start of the invasion?
My understanding from the media etc. is that it took over a year for the insurgency/murders/terrorism to get going properly, since the cells had to be organised, supplied with weapons etc. And it took that long to build up a current of resistance to the USA. Actions such as chucking the army out of work didn't help, and the heavy-handed approach of the USA in various situations also built up tensions.
You do agree that violence takes time to take its toll on infrastructure and people's lives?
I don't have an explanation as to why the second study wouldn't show an increase in non-violent deaths. (I haven't read and reread the thing very closely yet, so I'm not really up on the details of it, but I'll take your word for this.) In fact, it doesn't make too much sense to me that there wouldn't be an increase in nonviolent deaths and I find the first study's results much more plausible.
There are a number of things about this second study that seem a little weird to me, if I have the details right. I think you brought up the number of car bomb deaths--13 percent or 80,000. I would have thought that we would get fairly good coverage of car bomb deaths in the press, but maybe not. Someone else (Steve Sailer?) mentioned on a blog about the extremely high response rate--that is, less than 1 percent of households refused to participate. Maybe there's some explanation in terms of Iraqi culture for this, but I am amazed by that. In a situation where anywhere from tens to hundreds of thousands of people are being murdered, I don't think I'd be too trusting if strangers showed up at my door and started asking questions and frankly, if I did participate I'd feel free to lie about details if I thought it would make me safer.
Anyway, I think rightwingers are wrong to dismiss the study and that they do so out of bad faith, and I also think it's quite possible the death toll really is as high as this paper says, astonished as I first was when I read it, but I'm agnostic about this study's results. I'd like to see another study done by a different group on a larger scale with US government support (but absolutely no control over the study itself). It's what every decent American should call for. The US has just been accused by a credible source of direct responsibility for 200,000 deaths (many or maybe most civilians) and indirect responsibility for 400,000 more, so determining if these numbers are correct should be a very high priority for all sorts of reasons. It is not surprising to me that in mainstream American political culture there is no serious interest in this question. I'm awaiting with breathless anticipation the NYT editorial that will call for such an investigation, but I think I will have a long wait. And I don't think rightwingers want it at all.
"You do agree that violence takes time to take its toll on infrastructure and peoples lives."
No, I don't Guthrie. But more importantly, neither do the authors of the 2004 study, or many of its supporters I encountered here and elsewhere. The increase in the non-violent death rate was partly attributed by both the study authors and defenders of the study to the initial weeks of the invasion itself, and the degradation of emergency services resulting from combat between coalition forces and the Iraqi army and irregular Iraqi forces.
In any event, even if we accept your contention (and I don't) that it took somewhere around a year to start to see a rise in the non-violent mortality rate from the pre-invasion rate, that still leaves you up the creek in relation to the 2006 Lancet survey, which measured a decrease in the non-violent death rate more than 2 years after U.S. forces invaded.
Mike H suggested: "Donald, indulge me if you will, and go back to my first post in this thread."
So I went back there, to his first post in this thread.
Mike H, I'm confused. First, I can't find a breakdown in the 2006 paper to the first 18 months after March 2003 (i.e., to September 2004): Table 3 shows a breakdown from March 2003 through Apr 2004 and then from May 2004 to May 2005. Second, Table 3 says that the post-invasion nonviolent mortality rate for those two periods is not significantly less than the pre-invasion period. Am I not looking where you are looking?
"In fact, it doesn't make too much sense to me that there wouldn't be an increase in nonviolent deaths and I find the first study's results much more plausible."
Well that's it, isn't it Donald? It doesn't make "much sense" that there wouldn't be an increase in non-violent deaths if you more than double the violent death rate from study 1 to study 2, for the same time frame. But let's take this to its logical conclusion. Can you come up with a worthy expression to describe just how non-sensical it is that the non-violent death rate actually went below the pre-invasion rate for more than 2 years, according to the much more violent second study?
In spite of that, you put the blinders back on and come out with this:
"Anyway, I think rightwingers are wrong to dismiss the study and that they do so out of bad faith, and I also think it's quite possible the death toll really is as high as this paper says, astonished as I first was when I read it, but I'm agnostic about this study's results."
If you think a violent death toll of 600,000 (no, let's set that straight: 650,000. Iraq has experienced at least 50,000 more violent deaths since the study's June 2006 cutoff, if the study is to be believed) is "quite possible," then you're no agnostic, Donald.
You believe what you will.
Robert:
On page 6 of the survey, you'll see this statement:
" Application of the mortality rates reported here to the period of the 2004 survey gives an estimate of 112,000 (69000 - 155000) excess deaths in Iraq in that period. Thus, the data presented here validates our 2004 study, which conservatively estimated an excess mortality rate of nearly 100,000 as of September 2004."
Check out the bottom-left mortality rates table on page 4. You'll see that non-violent mortality for the pre-invasion period was measured at 5.4 per thousand. The non-violent mortality rate for the first year after regime change fell to 4.5 per thousand. Why the authors regard this drop as insignificant is puzzling. It clearly isn't. The non-violent mortality rate for the second year after the commencement of the invasion was still below the baseline figure, coming in at 5.0 per thousand.
You can also go to the companion paper published by the authors (Tim linked to it in one of his first posts). Check the paragraph under "non-violent death rates," page 7. You'll find what you're looking for there.
Mike, it's plausible to me that the total death toll could be in the hundreds of thousands--600,000 is more than I would have guessed possible a few days ago. But I don't know. I'm surprised (well, not really) how certain so many people are about the issue. Anyway, if the study had come out and said the total was 300,000, with 200,000 from violence, I wouldn't have batted an eye. 600,000 makes me uneasy.
With few exceptions, I've noticed that the critics of the report don't call for an independent investigation into the real total. I noticed you skipped over that part and focused instead on the all-important question of my personal gullibility--if I'm not with you 100 percent I'm against you, I suppose is your logic. One would think that all people of good will would unite in supporting a call for an independent large-scale investigation to determine both the true death toll and whether the US is killing civilians by the tens or hundreds of thousands, but that doesn't seem to interest you much.
"600,000" makes me uneasy--- That sounds cold-blooded, along with the previous statement of not batting an eye over 300,000 deaths with 200,000 from violence. I meant I wouldn't have been surprised by those results. 650,000, with 600,000 from violence, was very surprising, because I wouldn't have guessed the total could have been that high. Anyway, I think I've babbled enough on this topic for one day.
Donald:
If you'd like, you can waste a pile of time going back into Tim's archives in late 2004 and early 2005, or you can take my word for it when I tell you that I was completely in favour back then of additional, broader studies. If I "skipped over that part" it was certainly not done to be evasive. I've no reason to be on the question of additional surveys.
It was my belief then, and it is now, that the best method for determining the death toll is an actual count, when conditions on the ground permit. Every deceased Iraqi had a name, a family, a place of residence. Putting names to as many of the deceased as possible will give us a definitive minimum, and then an assessment can be made in terms of how close the minimum might be to the true, inevitably higher actual toll.
Donald, you're a nice guy. But you are fanatically, viscerally opposed to regime change and the occupation. That's fine. You have good reason to be, and I've certainly soured on the exercise as well, although not to the extent you have.
However, because of the level of your displeasure with regime change, you are reflexively inclined to want to adopt the worst case death estimate that comes down the pipe. When arguments questioning the credibility of the numbers are presented, you waffle for a while, something along the lines of "yeah............., maybe....., but I still think the worst case scenario looks like the right one," even when you have no explanation for serious discrepancies, as you admitted earlier. I recall similar discussions with you back in 2004.
In this thread alone, you've called the 600,000 figure "quite possible," and later said "it makes me uneasy."
Mike H wrote:
That's *exactly* the table I was pointing at. The numbers under the 4.5 describe the CI around it. They mean that the drop in nonviolent mortality is not significant.
"That's exactly the table I was pointing at. The numbers under the 4.5 describe the CI around it. They mean that the drop in nonviolent mortality is not significant."
Robert, it is significant. The difference works out to a reduction of more than 20,000 non-violent deaths from Lancet 2004 to Lancet 2006.
"Mike H, given the wide CI for the first Lancet study, the close agreement (98k vs 110k) with the second study is surprisingly good."
Actually, they don't seem to agree at all. The 98k had 57,000 from violence and 33,000 from non-violence. The new one has vastly more violence and vastly less non-violence.
If "given the wide CI" for the first means this is all ok, and this does not show that the estimates in one of these two studies were pretty badly off, then it would appear that those who said the first study was basically a "dartboard" which could mean anything were pretty much right.
Furthermore, I recall that the first study was "vindicated" by supposedly lining up with the estimate of violent deaths in the UNDP survey [UNDP good].
Yet it's obvious that the second study doesn't even remotely line up with the UNDP estimate. And indeed the UNDP study, where it is only mentioned briefly in the supplement, is rubbished and quickly dismissed [UNDP bad].
The first study supposedly lined up with the UNDP, while the second obviously does not; indeed, the authors say as much in the supplement. So how can the first and second be considered to neatly corroborate? Obviously they have produced quite different estimates here, unless the first is a "dartboard" after all, and can be said to "agree" with pretty much anything.
[Note to Tim: The Party line has obviously changed. Proceed at once to camp #2 as described here: http://www.iraqbodycount.org/editorial/defended/app.3.6.a.php, and remember: You were never fighting Eastasia.]
I lied. I'm back. And I came back to say what hit me while I was gone, but Josh beat me to it. The UNDP survey is in sharp disagreement with this paper. One or the other is wrong, and the first Lancet paper was in rough agreement with the UNDP survey, so I'd say this second one is likely to be wrong.
The UNDP survey said there were 24,000 violent deaths excluding criminal murders in the first 13 months or so of the occupation and this 2006 Lancet paper says there were about 90,000 violent deaths in that same period. (45 violent deaths in Table 4). The numbers aren't quite comparable, because the 90,000 would include criminal murders, but it's very very hard to believe that's supposed to make up the difference. And we're talking about a factor of four here. Less than that if you figure out how to factor in the criminal violence.
I have no clue why this second paper is off, and maybe it's the UN survey that was wrong, but I'd like to hear why. I don't think there's any reason to take the 400-900,000 confidence interval seriously if, on the one figure where we have something of an independent check, we find this rather drastic difference. The statistical methodology might be great in theory, but something may be seriously biasing it in practice. It's to the credit of the Lancet authors that they point out so many ways their results could be biased.
Josh, you could try making your points without being a jerk. You were doing great until the last paragraph.
Mike, if you're in favor of an independent investigation I don't have a quarrel with you here. Well, probably not much of one.
Despite all evidence to the contrary, Mike H insisted: "Robert, it is significant."
Oh dear.
Robert, as Mike recognized, my own bias is to think the worst of the American occupation, but to us laymen it does look like the numbers don't match on the one figure where you can apparently compare the 2006 report with other surveys. In theory these guys could have just decided to measure the violent death rate in the first 18 months of the occupation, just to see if they'd get what they got before. Or they could compare the first 13 months to the ILCS estimate to see if they match. They don't, or it doesn't seem like it to us amateurs. Maybe there's something wrong with the UN survey, or maybe there's something wrong with my reasoning. I wouldn't have too much trouble believing that, but it needs to be pointed out to me. Doesn't have to be you either--anyone is welcome to do this.
Not that this is going to change my moral opinions one way or the other--I've been going around thinking that Bush was a horrific war criminal with the 200,000-300,000 excess death guesstimate in my head. I might think slightly more kindly of the US effort if I took the media figures compiled by IBC at face value, with their amazingly low US-inflicted death tolls in most months, but those seem implausible to me.
I really don't expect the numbers to match that much; a difference of 20,000 is not that great after all. But Robert could do the statistically illiterate of us a favour by explaining why Mike H's contention is wrong.
"Oh dear."
And.................?????
"BUt RObert could do the statistically illiteate of us a favour by explaining why Mike H's contention is wrong."
Yes, he could, Guthrie. I'm still waiting.
Let me go over the same ground again, in a slightly different manner. Again, I'm comparing the 18-month period covered by Lancet 2004 to the same time period covered by Lancet 2006. I apologize for the length.
The 2006 Lancet study settled on a pre-invasion crude mortality rate of 5.5/1000 per year, slightly more than the 5.0 figure used in the 2004 study. Had the war not occurred, the estimated total number of deaths in Iraq for the 18 month period covered by the two studies should have been approximately 198,000, based on a population of 24 million. According to the study, violent deaths were a statistical non-entity (0.1 per 1000) in the pre-invasion period, so nearly all of the deaths that would have occurred had there been no war can be considered to be the result of non-violent causes.
For the first 14 months of the war (March 03 - April 04), the 2006 study revealed a non-violent mortality rate of 4.5 per 1000 (that's nearly 1 per 1000 less than the pre-invasion rate), and a violent mortality rate of 3.2 per 1000. On page 6 of the companion paper for the study, in the "Non-violent death rates" section, the authors state "Immediately post-invasion, the death rate due to non-violent causes dropped slightly, then stayed level for the next period, but began to rise in the period from June 2005 until June 2006."
When the authors refer to "immediately after the invasion," they're actually referring to a 14-month period where the non-violent death rate had "dropped slightly." Any way you slice it Robert, you're still left with an admission from the authors that the non-violent death rate was less than the pre-invasion rate, for 14 months. Even for the next 14-month period, while the authors characterize the non-violent rate as "staying level," it was still pegged at 5.0 per 1000, which is still less than the pre-invasion rate, albeit very slightly.
The authors give an excess death figure of 112,000 for the 18-month period from the 2006 survey, all of it from violence. With a violent death rate of 3.2 per 1000 per year for the first 14-month period of the war, that works out to just under 90,000 violent deaths. But we still need to factor in the excess deaths for the first 4 months of the next period (May 04 - May 05) to get through the full 18-month period where the two studies overlap.
The study doesn't provide the necessary data to determine how many excess deaths occurred in those 4 months, but we do know two things. First, they will all be violent deaths, because the non-violent mortality rate for this period is still less than the pre-invasion rate. Second, we know that the violent mortality rate for the next period is more than double (3.2 to 6.6) the rate for the period that rendered 90,000 violent deaths. A violent death rate of 6.6 per 1000 leaves a total violent death figure of nearly 185,000 for the second 14-month period. If we take a statistical liberty and simply take a 28% chunk of this figure (4 divided by 14), we get around 52,000 more violent deaths. Combine the two, and the violent death figure (all of it excess death) for the 18-month period covered by the two studies is 142,000. In keeping with the study's premise that violence is increasing for the most part as time progresses, I'm not suggesting that it is exactly 142,000. It is, however, safe to say, in my view, that the number the authors would give us, if asked, would be well over 130,000.
Since the study gives a total excess death figure of 112,000, something obviously has to give at the other end, the non-violent side of the equation. It's my belief that this number is at least 20,000, meaning that at least 60,000 (20,000 + the 40,000 the first study revealed) fewer non-violent deaths were recorded in the 2006 study than in the 2004 study.
To give this a big picture comparison, here are the numbers extrapolated from the first study in 2004:
The study used a crude mortality rate pre-invasion of 5.0 per 1000 per year. That extrapolates to approximately 180,000 deaths, nearly all non-violent, for the 18 month post-invasion period, had there been no war.
The study settled on approximately 98,000 excess deaths, approximately 57,000 violent and 41,000 non-violent.
In rough figures, this is how the two studies line up, side by side:
2004 study - approximately 278,000 total deaths (180,000 baseline + 98,000 excess) for the first 18 months of the war, 218,000 from non-violent causes and 60,000 from violence (3,000 being pre-invasion).
2006 study - approximately 310,000 total deaths (198,000 baseline + 112,000 excess) for the first 18 months of the war, approximately 178,000 from non-violent causes and approximately 132,000 from violence.
And the authors claim these two studies are corroborative of each other?
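Since several people have asked to see the working, here is the extrapolation above mechanised as a short Python sketch. The rates, the 24 million population and the period lengths are the ones Mike H quotes; the code only automates his arithmetic and does nothing to validate the inputs:

```python
# Mike H's rate-to-count arithmetic for the first 18 months of the war.
POP = 24_000_000
PER_RATE_UNIT = POP / 1_000  # deaths/year per 1 unit of rate (per 1000/yr)

def deaths(rate_per_1000_yr, months):
    return rate_per_1000_yr * PER_RATE_UNIT * months / 12

baseline = deaths(5.5, 18)            # no-war expectation: ~198,000
violent_p1 = deaths(3.2, 14)          # Mar 03 - Apr 04: ~90,000
violent_p2 = deaths(6.6, 14)          # next period at 6.6/1000: ~185,000
violent_first4 = violent_p2 * 4 / 14  # its first 4 months: ~53,000

violent_18mo = violent_p1 + violent_first4
print(f"baseline (no-war) deaths:     {baseline:,.0f}")
print(f"implied violent deaths, 18mo: {violent_18mo:,.0f}")

excess = 112_000  # the survey's excess-death figure for the window
print(f"implied non-violent excess:   {excess - violent_18mo:,.0f}")
# The negative result is the non-violent 'deficit' that Mike H puts
# conservatively at 20,000 or more.
```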
Donald:
Thanks for your honesty.
For what it's worth, I think it's important to remember the UNDP survey was a much bigger cluster survey than the Lancet efforts. In effect, it is the big, mega-survey that Lancet critics and defenders alike were calling for when first debating the 2004 study. As a result, its numbers shouldn't be taken lightly.
I didn't agree when the UNDP study was released that it was all that corroborative of Lancet 2004. My biggest quibble back then was the fact that the UNDP study, in addition to recording civilian deaths, also seemed to be intended to capture the deaths of Iraqi soldiers killed during the conventional fighting. I believe it very likely did capture such deaths. Since the Lancet found no soldier deaths, this would dilute the corroborative effect of the UNDP study for the Lancet, in terms of civilian casualties, since the UNDP's 24,000 figure for year 1 would consist in some part of regular Iraqi troops.
But now, when you compare these two studies to the latest Lancet effort, well, suddenly the UNDP and Lancet 2004 look a lot more corroborative to me than they once did.
Nobody is setting me straight, so I'll give it a shot. Table 3 says the violent mortality rate in the March 03-April 04 period was 3.2 per 1000, with a CI of 1.8 to 4.9. The 3.2 figure would be about 80,000 deaths per year and the 1.8 would be 45,000 per year. Add another 5000 for the 13th month and you'd have 50,000 for that period at the low end of the confidence interval. The UNDP report gives 19,000-29,000 war-related deaths for that period, so if you took the highest UNDP number and tacked on 20,000 criminal murders, you could get the highest end of the UNDP range to agree with the very lowest end of the latest Lancet paper.
Which to me suggests, once again, that the two surveys don't agree very well. One or the other seems to have had a bias problem of some kind. (Or maybe both.)
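A sketch of that conversion, for anyone who wants to vary the inputs. The 3.2 rate and its 1.8-4.9 CI are the Table 3 figures quoted above; the 25 million population is an assumption chosen to match the round numbers in the comment:

```python
# Convert the Lancet 2 violent mortality rate for Mar 03 - Apr 04 into
# death counts over the 13-month window also covered by the UNDP/ILCS.
POP_IN_THOUSANDS = 25_000  # assumed population of 25 million
MONTHS = 13

for label, rate in [("CI low", 1.8), ("point", 3.2), ("CI high", 4.9)]:
    n = rate * POP_IN_THOUSANDS * MONTHS / 12
    print(f"{label}: {n:,.0f} violent deaths")
# => ~49,000 / ~87,000 / ~133,000. Even the bottom of the Lancet CI
#    sits well above the UNDP/ILCS range of 19,000-29,000 war-related
#    deaths, before adding any criminal murders to the latter.
```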
Donald (and Mike H and guthrie):
Sorry for the delayed response--family commitments.
I can appreciate that this can be hard to understand for laymen: everyone is a layman in some area or another. However, I'm a little hamstrung in trying to explain what is essentially a statistical concept without resorting to the use of statistics. If you'll cut me some slack, I'll try.
The basic issue is that small pieces of larger aggregates are often less well-estimated than the aggregate as a whole. We can pretty much estimate how many deaths occur on the nation's highways each month, but there will be more variation in how many will occur on any particular day, or on a particular road, or by single-car accident vs. multi-car accident. Day-to-day variation in those counts doesn't invalidate an overall guess at the total number for all causes for all locations.
That's the basic issue. There are several side issues that have to do with identifying cause-specific counts and the cause-specific death rate that make calculating differences in trends in cause-specific rates harder (this is especially true when the causes are complementary--for example, violent vs. non-violent--because they are not then independent), but they all go in the same direction: estimates of parts of things aren't as reliable as estimates of the total.
The bottom line is that there may indeed be problems with the 2006 Lancet study, but focusing on differences in the violent vs. non-violent death rate in the 2004 and 2006 studies is probably not the most convincing one.
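A toy simulation may make the point concrete. Everything below is synthetic (only the 47-cluster count echoes the survey design); it shows only the generic effect that, from the same sample, a cause-specific subset is estimated with more relative noise than the total:

```python
# Simulate repeated cluster surveys of a synthetic population and
# compare the stability of the total-death estimate with that of the
# violent-death subset.
import random

random.seed(1)
N_CLUSTERS, N_SAMPLED, N_SURVEYS = 1_000, 47, 2_000

clusters = []
for _ in range(N_CLUSTERS):
    total = random.randint(0, 20)              # deaths in this cluster
    p_violent = random.betavariate(1, 2)       # violence is clumpy: the
    violent = sum(random.random() < p_violent  # violent share varies a
                  for _ in range(total))       # lot from cluster to cluster
    clusters.append((total, violent))

def one_survey():
    sample = random.sample(clusters, N_SAMPLED)
    return sum(t for t, v in sample), sum(v for t, v in sample)

totals, violents = zip(*(one_survey() for _ in range(N_SURVEYS)))

def rel_spread(xs):  # coefficient of variation across simulated surveys
    m = sum(xs) / len(xs)
    sd = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return sd / m

print(f"relative spread, total deaths:   {rel_spread(totals):.1%}")
print(f"relative spread, violent subset: {rel_spread(violents):.1%}")
# The subset estimate is noticeably noisier than the total, even though
# both come from exactly the same 47 sampled clusters.
```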
Robert:
You're overlooking the fact that the issue I raise doesn't involve the "small aggregates," as illustrated in your traffic accident analogy.
We're dealing with the two biggest numbers in the study (aside, obviously, from the overall combined excess death number), those being violent and non-violent deaths. All deaths fall into those two categories first, then trickle down into various subsets.
If a methodology is experiencing great difficulty getting consistency from study to study in relation to the attribution of violent vs non-violent deaths, then I consider that a serious problem.
I thought mortality studies were as much about the "how" as they were the "how many."
Mike H wrote:
Actually, it does. In fact, it's exacerbated because the two categories "violent" and "non-violent" are complements: first, one identifies deaths, then one splits them into two complementary categories that together add to 100%. That means that, subject to the overall constraint, any change in one must be mirrored by an opposite change in the other. In the study of cause-specific mortality, these are "gross" rates as opposed to "net" or "independent" rates.
"That means that, subject to the overall constraint, any change in one must be mirrored by an opposite change in the other."
Does that make sense to you, in a real world application, Robert? As I mentioned just now at Crooked Timber, this smacks of resurrecting from the dead tens of thousands of victims from Lancet 1, only so they can be killed off again in Lancet 2 in an entirely different way, in order to balance the bottom lines of the two studies.
As I mentioned above, if the "how" varies so much from one survey to the next, why should one feel confident of the "how many?"
Mike H asked:
Yup. In this particular study, the interviewers first asked about deaths, then they asked about the cause. The allocation is secondary to the total count, so any allocation error has "double" the weight in terms of estimating cause-specific rates.
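A tiny worked example of that double weight, with made-up numbers:

```python
# The survey fixes the total first, so misclassifying k deaths moves
# one category up by k and the other down by k.
total = 300                      # deaths found (made-up)
true_violent, k = 180, 30        # true split, and deaths misallocated
true_nonviolent = total - true_violent

obs_violent = true_violent + k
obs_nonviolent = true_nonviolent - k

assert obs_violent + obs_nonviolent == total  # total is untouched
print(f"error in violent count: {obs_violent - true_violent:+}")
print(f"error in the violent/non-violent gap: "
      f"{(obs_violent - obs_nonviolent) - (true_violent - true_nonviolent):+}")
# => +30 in the violent count, but +60 in the gap between categories.
```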
Robert,
I understand your point about double weight, but given non-violent causes contributed 40% of excess deaths in Lancet 1, what are the chances they would contribute 0% in Lancet 2?
One of the arguments of the more-informed nay-sayers is not that there is anything intrinsically wrong with the methodology, but that some systemic bias has crept into the counting (e.g. the fact that main roads are sampled, etc.). If these biases were avoided, what chance the interviewers would find themselves in front of such a completely different audience second time around?
Even if these differences can be explained, they remain differences and at the very least cast a shadow over confident claims of mutual corroboration between Lancet 1 and Lancet 2.
Robert, I do understand that much--that there will be much less certainty when you start looking at subsets of the data, or try to estimate what percentage died because of what cause. Mike and I are picking at the study from slightly different angles--he's comparing it to the first Lancet paper, which I'm not, because that first one had a very large error bar.
I'm comparing the CI for the violent mortality rate in the March 2003-April 2004 period in the 2006 paper with the war-related mortality rate in the UNDP study which covered exactly the same period and had a much narrower CI. Unfortunately the numbers don't refer to exactly the same thing, since the UN study left out criminal murders. But it does appear that you'd have to have 20-30,000 criminal murders that first year to bring the UNDP estimate (19-29,000) up to the very lowest end of the CI for the 2006 Lancet papers.
I won't claim that proves very much, but it is the closest thing I can think of to a direct check of the Lancet paper to something else quantitative. How much it shows I don't know, of course. I'd have liked to have seen a discussion of the difference in the paper itself.
Anyway, thanks for the response or for any further responses.
Robert:
Like Brownie, I understand that allocation of the cause of death is secondary to the total count. That doesn't refute the point I was making about the shortcomings of the Lancet studies in terms of consistency.
When one moves further down the subset chain of causes of death from the top (violent and non-violent), there will inevitably be good-faith, honest errors in attribution.
If an Iraqi told an interviewer that their loved one died of a heart attack, when he actually passed away from cancer, that isn't statistically significant. However, I find it hard to believe that an interviewee would get wrong the fact that their loved one died in the street from a bullet in the head, and instead believe they drowned.
It's extremely unlikely that the vast majority of those interviewed were mistaken about whether their family members died violently or not.
When I debated the original Lancet study here, defenders of the study, many of them highly knowledgeable, attributed the 40,000 excess deaths from non-violent causes to the effects of war. There were many lengthy exchanges over the question of whether the war had exacerbated infant mortality, accidental deaths, and heart disease. There was near-unanimity back then among defenders that of course these excess deaths were legit, actual and expected.
Well now, in Lancet 2, they're gone. And I'm being told by these same people it doesn't matter. Forget about it, they say. You're not a statistician.
Hardly convincing.
"Unfortunately the numbers don't refer to exactly the same thing, since the UN study left out criminal murders. But it does appear that you'd have to have 20-30,000 criminal murders that first year to bring the UNDP estimate (19-29,000) up to the very lowest end of the CI for the 2006 Lancet papers."
We can't really be sure of that, Donald. I fully expect there is some overlap between "criminal homicide" deaths from Lancet 1 and "war-related" deaths from UNDP.
If your husband was a local-level Saddam intelligence service thug, and he was killed shortly after Saddam was ousted by some people whom he hadn't been very nice to prior to the war, then I expect that death would show up as a criminal homicide in Lancet 1, and I also expect it quite likely that his widow would consider it a "war-related" death if surveyed by the UNDP, and report it as such.
That, and the point I made earlier about Iraqi army fatalities from the invasion being captured by the UNDP survey, makes it suspect to label the 24,000 war-related deaths as entirely violence from war, and overwhelmingly civilian.
Mike, neither the first nor the second Lancet report singled out civilians for measurement. You get that impression especially in the first because of the famous claim that most people being killed were women and children from air strikes, but that was because of the Fallujah outlier.
Gotta go.
Donald:
I'm not saying the Lancet reports singled out civilians for measurement. They didn't.
What I am saying is the first Lancet report didn't identify any non-civilian violent deaths. For all intents and purposes, their excess death estimate of 100,000 was entirely civilian in composition.
I'm also saying that one needs to be cautious about claiming the UNDP 24,000 figure contains no criminal homicides. In addition, there is a distinct possibility that the UNDP figure includes deaths of Iraqi soldiers during March and April 2003, which the first Lancet study definitely does not.
I have been studying the Lancet article, and intend to write something more substantial, but here are a few concerns.
1. Cluster sampling can be a very good way of getting data in wartime situations, but it can also be very problematic. The most basic methodological error of this study is that it extrapolates from more violent regions to less violent ones: it generalizes to the entire population from data gathered in specific areas that are more violent. The study should have been conducted by breaking Iraq into three different regions - Kurdistan, Central, and South. A solid researcher would have operationalized measures of violence based on observables (military actions, number of troops in a region, degree of stability) and then looked more closely at each region with the same methodological strategy. Anyone who is familiar with Kurdistan should know that one cannot extrapolate from Baghdad to that region with any degree of validity. (The statistical point about extrapolation is sketched just after this comment.)
2. Another major flaw: the researchers simply picked clusters of households that were close to one another for the sake of their own safety. Well, I'm glad, in a country where there are supposed to be 650,000 dead bodies (where are they? I mean - the media is everywhere in Iraq and I don't know how they could miss that many bodies), that the researchers weren't killed (what is the probability that they would have been, if the numbers are that high?). My concern is the haphazardness with which they selected the samples: from what I could tell, they just chose streets randomly from a map and then interviewed people who live close to one another. Now, did they control for factors such as bombs having gone off in the area? How do we know for sure they did not select certain neighborhoods that were more violent? Then there is the fact that they said some households did not participate. How many? We aren't told this. Why not? They say they saw the death certificates, and I have no reason to doubt this, but a truly scientific study, one which would be published in a reputable journal such as the Lancet, would demand that we have copies of these in order to verify the data. There are so many claims and data that cannot be falsified that the level of scientific integrity is seriously in question.
Perhaps the most glaring problem is that the results are not replicable. If scientific findings are not replicable or falsifiable, then they are not valid.
3. It is unconscionable that the authors did not distinguish the status of the dead -- if you are a good Quaker, and you sincerely believe that all death, even of vicious Baathist and al-Qaeda thugs, is lamentable, or if you are Ronald Reagan and think that SS officers were "victims" of Hitler and the Nazis, then I suppose that you would not want to distinguish between victims and perpetrators. This sort of thing happened in Bosnia as well. People would say that many people on "all sides" were killed, and this was true, but for the most part it was Bosnian Muslims who were being killed by nationalist Serbs. The point here is that the study insinuates, and the political uses of this bear it out, that the "insurgents" themselves are somehow victims of the US invasion, rather than active agents who have exploited the fog of war in their own attempt to terrorize Iraq away from its path to democracy. We all need to think about causation here: did the US "cause" the deaths? Most of the dead are victims of Islamic terrorists, and to be a terrorist is to specifically make yourself an agent of destruction. So we need to think about this more. One can never get away from the fact that the war set up the context in which they could do their killing, but we need to remember that the majority of Iraqis (as evidenced by voting in free and fair elections and by public opinion polls) want democracy, and it is they who are the targets of the terrorists.
4. If you look carefully at the provenance of the study and the political language of the authors, it is blatantly clear that they have partisan positions against the war. You can actually see it in the language they have used subsequent to the publication of the study. In the Lancet, they have put on the holy mantle of science, but in their comments they belie a deep hostility toward the war that I think motivated the study. They are asking us to put blind faith in the scientific method, as if it can resist all political biases. I sincerely believe, from my battles with the anti-war crowd, that many of them have adopted the same kinds of duplicitous tactics as the Bush administration. We are supposed to take these scientists at their word, but I really must at least raise the question, since the Lancet has a strong reputation for publishing politically loaded scientific findings.
In a sense, I think that people who are criticizing the study but not really tucking into some of the shadier aspects of the methodology are basing their criticisms on the fact that the findings are counterintuitive. An MD friend of mine calls it the "interocular sensitivity test." The data are not backed up by bodies that can be seen and counted, the samples are not valid because they were too random (!), and they don't square with any other reports of either a formal or informal nature. The anti-war factions have always prophesied gloom and doom for Iraq, and have always ignored good news coming from the country. Given the ideological corruption I have seen on American campuses in relation to the war, and given the ideological proclivities of the researchers themselves, as expressed in their own words in defense of the study, I cannot -- but wish I could -- rule out the possibility that the researchers produced a set of figures that would bear out their most dire prophecies. There are enough methodological infelicities in the study to make me have to honestly consider that sordid hypothesis.
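The extrapolation worry flagged in point 1 above is easiest to see in a toy comparison of a pooled versus a region-stratified estimate. A minimal sketch follows, with entirely hypothetical populations, cluster counts, and rates -- none of these are Iraq figures, and whether the Lancet sampling actually over-weighted violent areas is precisely what is in dispute:

```python
# Toy illustration of the regional-extrapolation worry.
# All populations, cluster counts and death rates are hypothetical;
# they are NOT estimates for Iraq or figures from the study.
regions = {
    #            (population, clusters sampled, deaths per 1,000 in sample)
    "Kurdistan": (4_000_000,  1,  2.0),
    "Central":   (15_000_000, 12, 14.0),
    "South":     (8_000_000,  4,  9.0),
}

total_pop = sum(pop for pop, _, _ in regions.values())
total_clusters = sum(n for _, n, _ in regions.values())

# Naive pooled extrapolation: cluster-weighted average rate applied
# to the whole country -- over-weights regions with more clusters.
pooled_rate = sum(n * rate for _, n, rate in regions.values()) / total_clusters
naive_deaths = pooled_rate / 1000 * total_pop

# Stratified estimate: each region's sampled rate applied only to its
# own population, then summed.
stratified_deaths = sum(rate / 1000 * pop for pop, _, rate in regions.values())

print(f"pooled:     {naive_deaths:,.0f}")       # ~327,000
print(f"stratified: {stratified_deaths:,.0f}")  # 290,000
```

If clusters are allocated in proportion to population, the two estimates converge; the divergence appears only when the sample over-represents the more violent strata, which is the commenter's (contested) premise.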
That's mostly irrelevant political stuff, Thomas Cushman. I despise Bush and think he's a war criminal, but I also mistrust this study and not for the reasons you give. No sane person denies that Iraq is in a pretty horrible state right now--the question is whether it's quite as horrible as this study claims.
Iraq Body Count has just come out with their analysis, which Josh links to in another comment section. Now that contains some relevant reasons for skepticism.
Donald, when you write "UNDP" do you mean the 2004 ILCS study? I just went back and looked at the questionnaire from that. First, that's a kind of long questionnaire, and there are only two relevant questions (HM01 and HM05) on it. It's true that the sample size was larger, but the detail is missing -- for example, you cannot tell from the ILCS anything about timing or pre- vs. post-invasion mortality. I don't know whether it is the case here, but it's quite common that a smaller survey that focuses deeply on a single topic can turn out to be more reliable than a larger general-purpose survey.
That's the one, Robert. And I'm open to the possibility that it is the ILCS study that is wrong and has some hidden bias, and that just because it is bigger and has a smaller CI doesn't necessarily make it right--my point is that we do have one other independent survey to compare the second Lancet paper to, and they don't seem to agree.
IBC also has a large number of criticisms out on their webpage--I thought they made good points, but will sit back and see what others have to say in response.
Donald, I respect your opinion that what I say is politically irrelevant, but disagree. I think it is relevant that the PIs in this study are deeply partisan and, in the case of Roberts, have indicated in public that they are willing to play fast and loose with interpretations for political purposes. Also, what is striking is the "fast track" publication at the Lancet - any standard medical article would have been vetted far more seriously, and even the editor, in his editorial about the article, belies his political sympathies. This study could have been much more comprehensive, with much more data, and some triangulation would have made it stronger.
I do agree that whatever the case, the situation is dire, and we all need to take that into consideration. On the other hand, if it is indeed the insurgents who are doing most of the killing (and that is the case), then from a public health standpoint it would be contraindicated to leave the country and let them have it to do more killing -- in other words, the logical conclusion of the Lancet study, if they were to acknowledge who is doing most of the killing, is to intensify the effort to stop them, train the Iraqi army, and then leave when it can do the job itself.
In any case, you are right.
I think it is relevant that the PIs in this study are deeply partisan and, in the case of Roberts, have indicated in public that they are willing to play fast and loose with interpretations for political purposes.
When and where did Roberts express this willingness?
Donald,
You're right, Bush and many in the civilian leadership of the US-UK war party are war criminals. But, for the record: how bad does Iraq have to be to be described as 'Hell on Earth', or even worse?
The first Gulf War destroyed the civilian infrastructure of the country. Here is the tally: the US and its allies destroyed Iraq's water, sewage, and water purification systems and its electrical grid. Nearly every bridge over the Tigris and Euphrates was demolished. Twenty-eight hospitals were bombed and thirty-eight schools were destroyed. All eight of Iraq's hydropower dams were hit. Grain silos and irrigation systems were also attacked. Farmlands near Basra were inundated with saltwater after allied attacks. More than 95% of Iraq's poultry farms were destroyed, along with 3.5 million sheep and more than 2 million cows. The US and its allies bombed textile plants, cement factories, oil refineries, and pipeline and storage facilities, all of which contributed to an environmental and economic disaster that continued unabated for more than 12 years.
When an American general was confronted by the press with news reports of Iraqi women carrying back buckets of filthy water from the Tigris river that was contaminated with raw sewage from the bombed treatment plants, he replied, 'People say, "You didn't recognize that the bombing was going to have an effect on water and sewage." Well, what were we trying to do with sanctions: help out the Iraqi people? What we were doing with the attacks on the infrastructure was to accelerate the effect of the sanctions.'
Let's be honest here: with Iraq's civilian and military infrastructure in ruins, the intent of the sanctions was to keep Iraq not only from rebuilding its army but the very foundations of its economy and society. And then the US launches a further invasion - in a country already flattened after the first war and after more than a decade of sanctions that resembled a medieval siege. Let's assume that the war party are only accountable for half of the death total alluded to in the Lancet study. Isn't 300,000 people a lot of carnage? Even genocide? And remember that this total ignores civilian deaths between 1991 and 2003.
At the end of the day, people are seen in terms of their usefulness to 'us'. There are worthy victims, such as the Kurds in Iraq, and unworthy victims, such as the Kurds in Turkey. There are 'good' tyrants, like Suharto, one of the biggest torturers and mass murderers of the second half of the twentieth century, and 'bad' tyrants, like Saddam Hussein, at least when he 'slipped the leash'.
When I read comments from the likes of KevinP and Ben, making all kinds of apologetics for US crimes, either by suggesting they don't occur (KP) or by comparing them with crimes committed by officially designated enemies (Ben), I cringe. I know that the US drip-feeds much of its population with ideas of US exceptionalism from the cradle, but heck, KP and Ben, you sound like smart enough guys to look beyond the lies and propaganda. I suggest that Ben reads up a little on US involvement in destroying populist-based change in Latin America in the 1980s and 1990s. When a senior US adviser to the regime in El Salvador advised the military to 'go primitive' in dealing with opposition to the regime there, he wasn't suggesting a rough camping weekend. All of the horrors Ben described being perpetrated by Saddam were common practice among the CIA-trained militias in Nicaragua, El Salvador and Guatemala. Ben should read about US involvement in the massacre at El Mozote by the Atlacatl battalion, an elite bunch of thugs trained by the CIA.
At the end of the day, the truth is, like it or not, that 'our side' is also guilty of terrorism and mass murder. To be fair, the US is only the latest imperial power to embrace barbarism to attain political objectives abroad. Imperial Rome, Britain and France (and more recently, the Soviet Union and Germany) all have had great experience in slaughtering distant peoples while claiming they were on 'civilizing missions'.
Donald wrote:
I think I was one of the minority that thought the 2004 ILCS report wasn't good evidence either for or against the 2004 Lancet study, and that a possible explanation is that cause of death as collected in these types of retrospective studies isn't terribly robust. Still unanswered is whether poor allocation of cause of death (if that is what is happening) is an indicator of deeper problems with the overall counts of all-cause deaths. The 2004 and 2006 Lancet studies seem roughly consistent in pre-invasion rates and total all-cause rates, so I have to think about what mechanisms might allow those to happen and still bias the totals high.
Thanks for the reply, Robert. I think that what would be good would be for epidemiologists and human rights experts to get together and compare the Lancet case to IBC's case--I think it'd be fair to let IBC represent the strongest objections to the Lancet case. (Although the two sides are being polite to each other now, there's been something of a blood feud going on between them.) Maybe this will happen now that IBC has come out with their response.
Jeff, I mostly agree with your political views on Iraq and US foreign policy. I'm just not so sure about this paper's validity, but I could be wrong.
A response to Robert's query about Les Roberts, by way of explaining how he approached Iraq before the study: he ran for Congress for a New York seat on a specifically anti-war platform. An interview with him can be found at www.thatsmycongress.com/lesroberts.html. Now, if those of you reading this, including Robert, honestly believe that this man did not have a conflict of interest (between his avowed ideological opinions and his science), then I probably cannot persuade you, but let me try:
1. He notes: "For example, I did a study about how many civilians had died in Iraq in 2004. It was funded by Johns Hopkins University. I had worked for them for eleven years at that point. I had a bank account here in Cortland that I have had for the last four and a half years, and when Hopkins wired $20,000 to pay for the expenses of that study to me, my bank called me up and said, 'We'd like you to come in here, physically, and explain to us why you just got $20,000 from Johns Hopkins with a note that says, "RE Iraq Study", and we have to inform the federal government before we can deposit this into your account.' You know, here, in Cortland, New York, the Big Brother phenomenon had extended down to my relationship with my bank. So, I agree with Gore about civil liberties being impinged upon by our national security concerns."
The federal government had a law BEFORE Bush became president that requires banks to report all deposits above $10,000. So he is spinning Bush into Big Brother; nonsense.
2. In his interview he notes: "Sure. I spent the month of September, 2004 in Iraq. I never was protected by anyone with a gun. I stayed healthy and free, just by trying to be invisible and by the good will of the Iraqi people. Of every Iraqi I spoke to who could communicate in English, or when my driver was there and I could communicate with them through my driver and he felt it was safe for me to speak English in public, I would ask, "Why do you think the coalition came?" No one, not one person said anything related to Saddam. They thought we had come for two reasons: We came for oil and we came to create a setting of anarchy so that every nefarious element in the region would come and fight us there, rather than fighting us in North America or fighting us in Israel. When, a few months ago, the British military commissioned a poll, they found 82 percent of Iraqis want the coalition gone now, I think it shows us that we do not have either the moral high ground or the minimal support required to be a force of stabilization."
This is complete and utter spinning of interpretation in his own partisan direction. I was and am deeply involved in interpreting survey research from reputable (and much less partisan) sources than the Lancet and their "team". In September 2004 a large majority of the Iraqis favored the war, did so because it freed them from Saddam, and were very optimistic about the future. Even in September 2005, 70 percent of Iraqis were optimistic about the future (this was established by several surveys; see in particular Oxford Research International). Roberts ignored all of that research, as did most of the anti-war movement, a fact that I as a writer will never let them forget. It is patently clear that he was defining the situation for his own political purposes, and I have every reason, as a skeptic and a scientist, to believe, based on concrete evidence of things that the researchers have said before and after the study, that they were not neutral and that their study is a misuse of science for ideological purposes. It is shabby, hurried, methodologically weak, and untrustworthy. Many people besides me have criticized it, people who are far more methodologically savvy than I am (see iraqbodycount.org and a recent article by a methodologist in the WSJ, just to name two). My role, as I see it, is to cast a skeptical eye on the ideological dispositions and tendencies of those who did the study, to point out that they were NOT disinterested, and that this is yet a further case of how those who oppose the war will, like those who started it, resort to bad data and analysis in order to foster their cause.
3. Finally, and very briefly: in one of his interviews he notes a cab driver who cried and mournfully mentioned Abu Ghraib, and he then projected that as the view of most Iraqis. From what we know, most Iraqis cared very little about Abu Ghraib (again, see the longitudinal studies of Oxford Research International); it was a cause celebre among anti-war factions in the West. I agree it was awful that it happened, but it is also clear to me that Abu Ghraib obfuscated the more general and positive processes of democratization that the left WANTED to ignore because they opposed the war.
One more thing, if I might: several people in other places have pointed out that the study failed to properly estimate the pre-war total by specifically IGNORING the deaths which occurred as a result of the sanctions. This is ironic, because before the war left-wing analysts constantly argued for lifting the sanctions because of the excess mortality. Now they have conveniently forgotten these data. As a columnist in National Review, obviously a partisan mag, pointed out, if you count in these figures and imagine that they continued that way, then the net result, even if the Lancet study is true, is that the war has SAVED 100,000 lives. I'm not sure I concur with this logic, but if you do the math that way, it works. It all depends on where you are standing.
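The "do the math that way" move is just a shift of the counterfactual baseline, and a minimal sketch makes that mechanical. All numbers below are illustrative placeholders, not figures from the Lancet studies or the National Review column:

```python
# Minimal sketch of the baseline-shift argument. "Excess deaths" is
# defined relative to an assumed no-invasion baseline mortality rate,
# so raising that baseline mechanically shrinks -- or flips the sign
# of -- the excess. All numbers here are illustrative only.
population = 26_000_000   # rough population, illustrative
years = 3.3               # length of the post-invasion window
postwar_rate = 13.0       # assumed post-invasion deaths per 1,000 per year

for baseline in (5.5, 13.5):  # low baseline vs. a sanctions-inclusive baseline
    excess = (postwar_rate - baseline) / 1000 * population * years
    print(f"baseline {baseline}/1000: excess = {excess:,.0f}")
```

With the low baseline the excess is in the hundreds of thousands; with the higher, sanctions-inclusive baseline it goes negative, which is the "war saved lives" accounting. The dispute is entirely over which baseline is the right counterfactual, not over the arithmetic.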
Thomas, [I already refuted your National Review based talking points](http://scienceblogs.com/deltoid/2006/10/goldblatt_on_lancet_study.php).
Thanks for demonstrating that you don't understand the statistics.