IFHS study on violent deaths in Iraq

A new study of violent deaths in Iraq has been published in the NEJM. You can read it here. Here's the abstract:

Background Estimates of the death toll in Iraq from the time of the U.S.-led invasion in March 2003 until June 2006 have ranged from 47,668 (from the Iraq Body Count) to 601,027 (from a national survey). Results from the Iraq Family Health Survey (IFHS), which was conducted in 2006 and 2007, provide new evidence on mortality in Iraq.

Methods The IFHS is a nationally representative survey of 9345 households that collected information on deaths in the household since June 2001. We used multiple methods for estimating the level of underreporting and compared reported rates of death with those from other sources.

Results Interviewers visited 89.4% of 1086 household clusters during the study period; the household response rate was 96.2%. From January 2002 through June 2006, there were 1325 reported deaths. After adjustment for missing clusters, the overall rate of death per 1000 person-years was 5.31 (95% confidence interval [CI], 4.89 to 5.77); the estimated rate of violence-related death was 1.09 (95% CI, 0.81 to 1.50). When underreporting was taken into account, the rate of violence-related death was estimated to be 1.67 (95% uncertainty range, 1.24 to 2.30). This rate translates into an estimated number of violent deaths of 151,000 (95% uncertainty range, 104,000 to 223,000) from March 2003 through June 2006.

Conclusions Violence is a leading cause of death for Iraqi adults and was the main cause of death in men between the ages of 15 and 59 years during the first 3 years after the 2003 invasion. Although the estimated range is substantially lower than a recent survey-based estimate, it nonetheless points to a massive death toll, only one of the many health and human consequences of an ongoing humanitarian crisis.
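As a quick sanity check, the abstract's rate-to-count conversion is easy to reproduce. The population and period length below are my own rough assumptions, not figures taken from the paper:

```python
# Back-of-envelope check: deaths = rate x person-years.
# Population and period length are assumptions, not from the paper.
rate_per_1000_py = 1.67      # violence-related deaths per 1000 person-years
population = 27.8e6          # assumed average Iraqi population
years = 3.25                 # March 2003 through June 2006
deaths = rate_per_1000_py / 1000 * population * years
print(round(deaths))         # close to the paper's 151,000
```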

I'll put my comments in a separate post, but here are some comments from Les Roberts:

1) There is more in common in the results than appears at first glance.

The NEJM article found a doubling of mortality after the invasion; we
found a tripling. The big difference is that we found almost all of the
increase was from violence, while they found only half of the increase
was from violence.

IBC adds to their estimate for months after a given date; back at the
end of June 2006, IBC estimated 41,000 deaths (my notes suggest 38,475
to 42,889 on June 24, 2006). This new estimate is 4 times the "widely
accepted" number of that moment; our estimate was 12 times higher. Both
studies suggest things are far worse than our leaders have reported.

2) There are reasons to suspect that the NEJM data had an
under-reporting of violent deaths.

The death rate they recorded for before the invasion (and after) was
very low....lower than neighboring countries and 1/3 of what WHO said
the death rate was for Iraq back in 2002.

The last time this group (COSIT) did a mortality survey like this they
also found a very low crude death rate and when they revisited the exact
same homes a second time and just asked about child deaths, they
recorded almost twice as many. Thus, the past record suggests people do
not want to report deaths to these government employees.

We confirmed our deaths with death certificates, they did not. As the
NEJM study's interviewers worked for one side in this conflict, it is
likely that people would be unwilling to admit violent deaths to the
study workers.

They found a roughly steady rate of violence from 2003 to 2006. Baghdad
morgue data, Najaf burial data, and our data all show a dramatic
increase over 2005 and 2006.

Finally, their data suggest that 1/4 of deaths over the occupation through
6/06 were from violence. Our data suggest a majority of deaths were
from violence. All graveyard reports I have heard are consistent with
our results.


I'm putting on my David Kane hat here:

the household response rate was 96.2%.

FRAUD!!! Where is the data!?

After adjustment for missing clusters

They haven't revealed their methods of adjustment! It's a trick! FRAUD!

the overall rate of death per 1000 person-years was 5.31

That's way too low for any country! My god, it's lower than the Great US of A!! We can't take this study seriously!

This rate translates into an estimated number of violent deaths of 151,000 (95% uncertainty range, 104,000 to 223,000)

That confidence interval isn't symmetric! They used a cunning statistical trick I don't understand to hide their real findings, which were that people were resurrected! I can't trust this study till I see the data and all the programming, and get to interview all the recruiting personnel myself!!

SG, you missed this:

>Only 0.4% of households declined to complete the questionnaire.

Kane will be along to accuse them of fraud any second now.

only 0.4%? I bet those interviewers fabricated data - I demand to see the record of interviews! Not the names mind you, just cluster-level data which they are forbidden from revealing for privacy purposes. And if they don't give it to me straightaway... FRAUD!

You will notice that this study is getting a far wider (and very different) response than the Lancet study did. Why could this be, do we think? Here's a hint (in the increasingly right-wing New Scientist).

'Iraqi war death toll slashed by three quarters.
The number of dead since the US invasion may be far lower than previously claimed, say a team working for the Iraqi Ministry of Health.'

Is how they reported it.

The Guardian is also going strong on the 'You see? It was a humanitarian intervention after all!' line.

SG, if the Lancet study had never been undertaken, making the NEJM study the highest estimate of Iraqi deaths, that's pretty much exactly the response it would have received.

As it is, of course, the NEJM study will be praised to the high heavens by people who up until now were trying to claim the IBC figures weren't a serious undercount.

By Ian Gould (not verified) on 09 Jan 2008 #permalink

"2) There are reasons to suspect that the NEJM data had an under-reporting of violent deaths.

The death rate they recorded for before the invasion (and after) was very low....lower than neighboring countries and 1/3 of what WHO said the death rate was for Iraq back in 2002."

Does that make any sense?
Using a lower pre-war death rate would attribute a greater proportion of the current death rate to the post-invasion environment.

How could it make sense to suggest that this could contribute to an undercounting of deaths attributable to post-invasion factors?

Personally I always assumed the problem with Roberts' Lancet estimates was their use of a too-low pre-war death rate, which leaves you with a lot more estimated dead bodies than anyone knows anything about.

Surely the bottom line is that this study has a larger sample and is therefore more likely to have an estimate of mortality closer to the real figure (which we will never know). I'm sure even Les Roberts would have been pleased when he was undertaking his study, if he had been told he could have had ten times more sampling clusters.

No one study can be considered to be the truth. They are estimates. There are examples of epidemiological work in other areas that was later found to be less reliable after further studies were carried out.

There is a danger that in defending the Lancet study, one can end up making it the "truth". For example, are people who consider this new study to be more reliable because of its numbers "denialists"?

All the studies published need to be looked at as a whole. Setting up a particular study, which happens to have the highest estimate, as the truth, and seeing the others as attacks on it is not a sensible way of looking at this.

Even if you accept the IFHS figure, I don't think critics of the war need to be concerned that it can be used as a stick to beat them with. As the authors state, "Although this number is substantially lower than that estimated by Burnham et al., it nonetheless points to a massive death toll in the wake of the 2003 invasion - and represents only one of the many health and human consequences of an ongoing humanitarian crisis". It is notable that one of the authors is listed as deceased, having been killed on his way to work in August of last year.

It would be regrettable if this study were merely seen as another attack on the Lancet study akin to that of Kane or the National Journal article you link to in the previous post.

"Kane will be along to accuse them of fraud any second now."
Posted by: Tim Lambert | January 10, 2008 3:59 AM

And I hope you will be along shortly to defend or criticise this estimate with the same vigour you did for the Lancet estimates, solely on the basis of whether their methodology was sound.

[I'm sure even Les Roberts would have been pleased when he was undertaking his study, if he had been told he could have had ten times more sampling clusters]

Not necessarily if he was then told that 11% of his clusters would be too dangerous to visit and would have to have their results inferred on the basis of IBC data. That's the definition of "informative censoring", and I don't understand why the authors seem to downplay the effect of this. It looks like a reasonably good survey and it has similar qualitative conclusions to the Roberts et al. one (i.e., that the hypothesis that the Iraq invasion wasn't a disaster can be rejected with a high degree of confidence), but I don't think it's grounds for completely throwing out Roberts et al. and Burnham et al. I think Les Roberts' point that this group did have a big undercount in their child mortality data is a good one.

You agree then that this estimate is probably, in the balance of things, a more reliable estimate of the number of deaths - since it does not influence your own, or Les Roberts', "qualitative conclusion" from his quantitative study that the Iraq war was a disaster?

SG said: "I'm putting on my David Kane hat here"

You mean your "dunce cap"?

Go sit in the corner, SG, and don't come out until you have written "Just because I can't calculate pre- and post-invasion CMR's for Iraq doesn't mean no one else can" 100 times.

So can we at least bury the IBC? I think the scientific debate will be interesting to watch. At least now there are published results to work with.

You agree then that this estimate is probably, in the balance of things, a more reliable estimate of the number of deaths - since it does not influence your own, or Les Roberts', "qualitative conclusion" from his quantitative study that the Iraq war was a disaster?

No and no. I don't necessarily agree it's a more reliable estimate of the deaths, because I don't understand the way in which they've dealt with the censored clusters and I think Roberts' point about past undercounts makes sense. And my assessment of whether it is reliable or not would be a cause, not a consequence, of my conclusion that the Iraq war was a disaster.

Why bury the IBC? You just need to be aware of the nature of the data they have collected. I really do not understand this desire to bury things. The same goes for dsquared's comment about not completely throwing out Roberts et al. and Burnham et al. He is right, you don't have to. The better option is to judge each thing on its merits and potential weaknesses and make a judgment about which is the more likely to provide the best estimate. Obviously the IBC then becomes one of the less likely estimates.

The good thing (or rather bad thing in reality) about this debate in comparison to other scientific issues, is that the conclusion is pretty much clear from an overview of published data we have. A large number of people have died and it has been a tragedy for the Iraqi people and their civil society. You don't have to start being a partisan for the IFHS study or the Lancet study to make this point.

The point that Hidari makes in comment four about this new study being seen by some as vindication, which it is not, is a natural result of the politicisation of The Lancet article by all sides - including, rather unfortunately in my own opinion, the editor of the Lancet. Given the nature of the study, it is hard to say that this was avoidable. However, I think the best thing to do now is deal with the data and the methods.

my assessment of whether it is reliable or not would be a cause, not a consequence, of my conclusion that the Iraq war was a disaster.

Glad to hear it, I expected nothing less. I was somewhat surprised by the hypothesis you put forward though in comment 9, as though there might be a threshold of deaths you hold below which Iraq might not have been a disaster.

Anyway, it would be a shame, would it not, if this study were judged as an attack on the Lancet study similar to those that Tim has highlighted on his blog. I suspect there are some who are highly attached to a figure of 650,000, in a less dispassionate way than you no doubt are.

"You agree then that this estimate is probably, in the balance of things, a more reliable estimate of the number of deaths - since it does not influence your own, or Les Roberts', "qualitative conclusion" from his quantitative study that the Iraq war was a disaster?"

Anthony, even if the IBC figures were correct- and clearly now they are not - the Iraq War would have been a disaster.

I can't speak for others but my concern throughout the debate on the Lancet studies has been to defend the reputation of reputable scientists from unreasonable attack and to defend what appeared to be valid scientific work.

By Ian Gould (not verified) on 10 Jan 2008 #permalink

On a brief look I think it seems like a decent effort. My main concerns about it would be that:

1) adjustment for missing clusters was based on IBC, which is assuming a lot, especially since they then use their results to conclude IBC gives a reliable measure of trends.

2) it's going to be very vulnerable to claims of trickery from the wingnut world, because the final result is based on Monte Carlo simulations to find the most likely death rate given the many uncertainties. Everyone knows that simulations are just a way of massaging figures! My God, global warming science is based on simulations, and look how wrong that is!

3) the authors repeatedly make the point that their death counts are massively underreported

I also note that the authors of this paper had access to what they describe as "microdata" from the Lancet studies. Yet they don't have any fraud accusations to make ... it's a conspiracy, I tell you!! All them epidemiomoliogolists are protecting each other!
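For what it's worth, an asymmetric uncertainty range is exactly what Monte Carlo propagation of a multiplicative adjustment tends to produce. Here is a minimal sketch of the general technique, with made-up parameters rather than the paper's actual model:

```python
import random

random.seed(42)

# Propagate uncertainty in a multiplicative underreporting factor by
# simulation. The lognormal parameters are invented for illustration;
# they are not the paper's actual inputs.
base_rate = 1.09  # reported violent-death rate per 1000 person-years
draws = sorted(base_rate * random.lognormvariate(0.43, 0.2)
               for _ in range(100_000))
lo, mid, hi = draws[2_500], draws[50_000], draws[97_500]
print(f"{lo:.2f} {mid:.2f} {hi:.2f}")
```

Because the adjustment multiplies rather than adds, the simulated interval comes out wider above the midpoint than below it; no trickery is required to get a non-symmetric range.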

[I was somewhat surprised by the hypothesis you put forward though in comment 9, as though there might be a threshold of deaths you hold below which Iraq might not have been a disaster]

A negative number of excess deaths would obviously and unambiguously have been a triumph - in utilitarian terms I would be tempted to say that even a positive excess deaths figure similar in scale to the Northern Ireland casualty count could be a reasonable trade-off for the introduction of democratic institutions and removing the risk of some future Saddam massacre. But as Ian says, even at the IBC level, there are far too many broken eggs given the quality of the omelette delivered.

Anthony, even if the IBC figures were correct- and clearly now they are not - the Iraq War would have been a disaster.

Ian, my point in comment ten was to note that whether 150,000 died or 650,000 died dsquared's view of the war in Iraq will be that it was a disaster. Which is precisely the same point you are making with regard to your own views. Therefore, political considerations should be cast aside in judging this new study, since the point is well made that the consequences of the Iraq war have been appalling for Iraqi civil society regardless of which study gives the most reliable estimate of deaths.

"even if the IBC figures were correct- and clearly now they are not"

Why do people keep saying things like this, as though there has ever been a question raised as to what the IBC figures represent?

Even if the freaking IBC didn't explain to you clearly what their figures were, you could realise just by looking at it that they are tabulating media reports of deaths.

Do you read your own newspaper and feel the need to tell people that the sport scores in it don't represent all the sport that was played that weekend? More than once? Continually for years on end?

But as Ian says, even at the IBC level, there are far too many broken eggs given the quality of the omelette delivered.

This is, of course, an entirely rational position to take. However, it is a judgment, not a scientific conclusion. The main point of my posts in this thread is to suggest that the WHO report does not need to be attacked in a politicized and partisan manner, nor does it need to be undermined to protect the Lancet paper, or emotional energy expended in defending it, in order to support any thesis that the war in Iraq was a disaster (in your terms).

Present company excluded, I suspect many may find that hard. Especially people who rant at anti-war protests about the "axis of Anglo-American imperialism".

Anthony says: "Surely the bottom line is that this study has a greater number of samples and therefore is more likely to have an estimate of mortality closer to the real figure (which we will never know)"

Not necessarily. If Saddam Hussein himself had done a survey asking Iraqis whether they liked having him as their leader, how many people do you suppose would have answered "No"?

...which, of course, is the point Les Roberts is making when he says

"past record suggests people do not want to report deaths to these government employees.

We confirmed our deaths with death certificates, they did not. As the NEJM study's interviewers worked for one side in this conflict, it is likely that people would be unwilling to admit violent deaths to the study workers."

I'd have to say that is a very serious criticism -- perhaps the most serious of all.

We now have another apparently competently conducted study to add to the mix. There certainly are issues to be explored, as there are with the Lancet studies.

One point on Les Roberts' claim that:

The last time this group (COSIT) did a mortality survey like this they also found a very low crude death rate and when they revisited the exact same homes a second time and just asked about child deaths, they recorded almost twice as many. Thus, the past record suggests people do not want to report deaths to these government employees.

I raised this issue with Jon Pedersen when I spoke with him after Lancet 2. He felt that there was no relevance of the undercount in child mortality estimates to adult mortality. He stated that demographers are well aware that child mortality is often underreported in such surveys, which, if I remember correctly, was exactly why they examined the figures in time to resurvey. I'm certainly not an expert here, but do feel that this point should be rebutted before using the undercount of child mortality to discount the ILCS estimates of adult mortality.

That said, let the examination of the new study proceed!

BTW, does anyone know what happened to the ORB survey's new sampling, which was to be reported in early October and never was? And did they ever put out any methodological details that can be examined?

Yes, the number of deaths remains huge, and yes, Ian Gould, you were just defending 'what appeared to be valid scientific work', but you were wrong, and so was Deltoid, on a grand scale.

The IBC estimates, whatever their faults, were closer to the best estimates now available than Lancet.

On the matter of missing clusters, I notice that the authors of this NEJM article compared the proportion of deaths in Baghdad for their paper with Lancet 2. For them it was about 50%, for Lancet 2 26%. Which makes me think that a much higher rate of death for the Lancet 2 paper occurred in exactly the areas which the NEJM under-sampled for security reasons. They then adjusted their death rate for the missing clusters using the IBC data, which also had about 50% of deaths in Baghdad. This would have introduced a bias towards the null compared to the Lancet paper.

I suspect that adjusting the missing clusters using Lancet 2 data would have given marginally higher death rates, but they didn't want to do this because Lancet 2 was a cluster survey too, so presumably risked the same biases. It's a shame they didn't do a sensitivity analysis for some of their assumptions, though, or include estimates using Lancet 2 mortality distributions (which they didn't, as far as I can see).
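The kind of ratio adjustment being discussed can be sketched in a few lines. This is a toy illustration of the general idea, with invented numbers, not the paper's actual procedure:

```python
# Toy missing-cluster imputation: scale observed deaths by a reference
# source's ratio of deaths in unvisited vs. visited areas. All numbers
# here are invented for illustration.
def impute_total(observed_deaths, ref_share_visited, ref_share_missing):
    imputed = observed_deaths * ref_share_missing / ref_share_visited
    return observed_deaths + imputed

# If the reference count puts 80% of deaths in areas the survey visited
# and 20% in the skipped high-violence areas:
print(impute_total(1000, 0.80, 0.20))  # 1250.0
```

If the reference source undercounts deaths in the skipped areas more severely than in the visited ones, this imputation is biased downward, which is exactly the worry raised above.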

You guys are so funny!

1) I have been told that the data and code from IFHS will be made available. If it isn't, you will hear many complaints from me. But if it is made available, can we all agree that the behavior by the L1 authors is ridiculous?

2) I tried to get the data for L1 out of Roberts for a year before accusing anyone of anything. Science takes time.

3) Response rates! Indeed, this will be a fun topic. For background, see endless discussions here. The short version is that response rates are made up of two parts: contact rates and participation rates. The participation rates in L2, while extremely high, have been, at least, in the realm of the possible. Once you are talking to someone in Iraq, they seem willing to answer your questions. The contact rates in L2 were always the key red flag, especially since L2 required talking to either the head of household or the spouse, and L2 only made one contact attempt at each dwelling. I have not studied this new paper closely (and I do not think that it reports all the data that we need), but we will return to this topic in due course. If we see similar response rates between L2 and this, I will retract my criticisms. If L2 rates are much higher than here, will you join me in my suspicions?
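The decomposition described above can be written out directly; the numbers below are illustrative, not figures from either survey:

```python
# Overall response rate = contact rate x participation rate.
# Both inputs are illustrative values, not from either survey.
contact_rate = 0.97         # share of sampled dwellings actually reached
participation_rate = 0.996  # share of contacted households that answered
response_rate = contact_rate * participation_rate
print(round(response_rate, 3))  # 0.966
```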

More to come.

And, just to be clear, isn't Roberts a bit misleading above? IBC measures only civilian deaths. So, assume that IFHS is correct and 150,000 violent deaths occurred in a period in which IBC reported 40,000 civilian deaths. Isn't it plausible that the 110,000 violent deaths that IBC does not report are combatants (soldiers, police, insurgents and so on)? In other words, IBC and IFHS match up pretty darn well, depending on what you think the ratio of civilian/combatant deaths would be.

By David Kane (not verified) on 10 Jan 2008 #permalink

HC: "The IBC estimates, whatever their faults, were closer to the best estimates now available than Lancet."

At this point, I'd withhold judgment on whether the NEJM study represents the best estimate under discussion until issues such as the missing clusters sort themselves out.

Furthermore, the IBC count, IIRC, was around 40,000 for this period.

The median value for the NEJM study was 151,000 and the Lancet 2 median value was ca. 600,000.

So IBC was under by a factor of 4, Lancet 2 was over by a factor of 4 - assuming as I say that NEJM is validated.
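The "factor of 4 either way" arithmetic can be checked against the figures quoted in this thread:

```python
# Ratios between the three estimates quoted above.
ibc, ifhs, lancet2 = 40_000, 151_000, 601_000
print(round(ifhs / ibc, 1))      # about 3.8: IBC low by roughly 4x
print(round(lancet2 / ifhs, 1))  # about 4.0: Lancet 2 high by roughly 4x
```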

By Ian Gould (not verified) on 10 Jan 2008 #permalink

The IBC estimates, whatever their faults, were closer to the best estimates now available than Lancet.

Actually, we don't know that yet. We know that a new paper CLAIMS this to be true. Perhaps you should wait until the paper's been scrutinized a bit more closely before leaping to your conclusion?

Or do you only demand scrutiny when you don't like the numbers a study delivers?

The IBC estimates, whatever their faults, were closer to the best estimates now available than Lancet.

what do you mean by "closer"?
who decides what the "best" estimate is?

the paper is good, and it is further evidence of the amount of death and chaos that the invasion brought to Iraq. I have three points that worry me a little:

1. the use of the paper to contradict the Lancet paper.
big parts of the paper actually support the Lancet paper. in many places, the authors mention a potential bias towards the estimates being too small.
they should have made this clearer. instead, by constantly mentioning the IBC count and by not clearly attributing the increase in non-violent deaths to the war and the situation in Iraq, they provide ammunition to those who prefer to belittle the effect.

2. stable number of violent deaths.
this is simply unbelievable. look at fig 1 on page 490. even the US army noticed the increase in violence. if a survey doesn't show it, serious questions need to be asked!

3. displacement
while it looks as if they included displacement numbers in the population count, I don't think that a survey of mortality in Iraq can be done now without taking a very close look at this issue.
it most likely is the explanation for the lower violent death rate found in this survey.

And, just to be clear, isn't Roberts a bit misleading above? IBC measures only civilian deaths. So, assume that IFHS is correct and 150,000 violent deaths occurred in a period in which IBC reported 40,000 civilian deaths. Isn't it plausible that the 110,000 violent deaths that IBC does not report are combatants (soldiers, police, insurgents and so on)? In other words, IBC and IFHS match up pretty darn well, depending on what you think the ratio of civilian/combatant deaths would be.

a pretty wild guess for someone who claims to be doing statistics.

let me just state this clearly:
you just made the assumption that EVERY SINGLE violent civilian death in Iraq is reported in the newspapers?!?

sorry David, but even for you that is an idiotic claim.

David, we will not agree about the behaviour of the L1 authors until you accept the possibility that they have honest, privacy reasons for not releasing that data.

Also, given the L2 authors appear to have given "microdata" to the NEJM authors, who apparently were satisfied with it, perhaps it's time you recognised that the problem in this little dialogue was the chap standing around screaming FRAUD. Which, from memory, you did very early on.

Given that the response rate for this paper was 96.2%, and 0.4% of those contacted refused to participate, doesn't that make the contact rate up around 97% of households? And wasn't your issue with L2 that its contact rate was 98%?

Have you checked all the RRs in this new paper against the count data, to see that they don't contradict? Presumably your radical new theory about ratios of normal distributions applies equally to this paper...? And aren't you a little bit concerned about that non-symmetric confidence interval? Surely there's a trick there...

"Why bury the IBC? You just need to be aware of the nature of the data they have collected."

Probably most people agree with that, but if you were around here or at medialens for the Lancet/IBC wars, you'd have found that IBC representatives were usually extremely reluctant to acknowledge an undercount of more than a factor of two, despite the lip service they pay to limitations of their data on their webpage. Sloboda did acknowledge the outside possibility of a factor of 4 once, but in the anti-Lancet debates they generally weren't that generous in their concessions, or they wouldn't have tried so hard to discredit Lancet1.

I've provided a link to one page of the IBC response to Lancet2, which gives you some idea of their perspective. They cite the Iraq MOH statistic of 60,000 wounded in the period from mid 2004 to mid 2006 and they argue that this is likely to be a fairly complete count, since hospitals would want to be accurate on how many people they've treated. It's a little hard to believe that 150,000 people died from March 2003 until mid 2006, but only 60,000 were seriously wounded from mid 2004 to mid 2006.

http://www.iraqbodycount.org/analysis/beyond/reality-checks/4

By Donald Johnson (not verified) on 10 Jan 2008 #permalink

David thinks most of the 150,000 dead are non-civilians.

As for the number of non-civilian deaths, the US claims to have killed 19,000 insurgents from June 2003 through the summer of 2007--

http://www.usatoday.com/news/world/iraq/2007-09-26-insurgents_N.htm

As for police, etc, I'm not sure, but IBC may include police as civilians and count their deaths. I haven't looked for the number of Iraqi soldiers killed, but would they show up in a household survey?

By Donald Johnson (not verified) on 10 Jan 2008 #permalink

Off the top of my head I would make the following propositions:

1. The IBC represents the absolute floor for deaths in Iraq. It is overwhelmingly likely that it seriously underreports the number of deaths.

2. The most dangerous areas in Iraq are likely to be the ones where the violent death rate is highest.

3. The NEJM did not survey the most dangerous 11% of areas in Iraq, but instead used IBC data.

4. The NEJM is therefore substituting data which likely seriously undercounts the violent deaths in the very areas where we would expect them to be highest.

By McKingford (not verified) on 10 Jan 2008 #permalink

Let's suppose David's clearly ridiculously low estimate of 40,000 civilian deaths in Iraq 2003-2006 was accurate. How would this translate to the United States, if it had been invaded and occupied by another country? If we are measuring death toll in terms of proportionately comparing the Iraq and US populations, this would translate to about 500,000 dead US civilians. Half a million. In other words, total carnage.
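The proportional translation above is straightforward to verify; the population figures below are rough mid-2000s values that I am assuming, not numbers from any of the studies:

```python
# Scale a death toll by the ratio of the two populations. Population
# figures are rough assumed values, not from any of the studies.
iraq_deaths = 40_000
iraq_pop, us_pop = 26e6, 300e6
equivalent = iraq_deaths * us_pop / iraq_pop
print(round(equivalent, -4))  # on the order of half a million
```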

Moreover, David appears to suggest that the death of those non-civilians in Iraq opposed to the US occupation - people he calls 'soldiers, police, insurgent and so on' is legitimate. How utterly convenient of him.

By Jeff Harvey (not verified) on 10 Jan 2008 #permalink

The NEJM did not survey the most dangerous 11% of areas in Iraq, but instead used IBC data.

Well, they used IBC data to compare ratios, which is not an unreasonable method. But it does have a possible downward bias, as they note:

This adjustment involves some uncertainty, since it assumes that completeness of reporting for the Iraq Body Count is similar for Baghdad and other high-mortality provinces.

I would say that this isn't a good assumption since areas that are too violent for researchers would probably be too violent for journalists as well.

BTW, I would refer to #36 as the "No way am I going down that f***ing street" bias. :)

"Let's suppose David's clearly ridiculously low estimate of 40,000 civilian deaths in Iraq 2003-2006 was accurate. How would this translate to the United States, if it had been invaded and occupied by another country? If we are measuring death toll in terms of proportionately comparing the Iraq and US populations, this would translate to about 500,000 dead US civilians. Half a million. In other words, total carnage."

In my opinion, it beats living under a brutal dictatorship for the rest of one's life. The Iraqis had no chance on their own to get out from under Saddam's thumb. I'd rather be dead than live under a dictatorship such as that.

ben, actually, if it came down to it...I'm betting you would rather actually be alive than dead, no matter what your government.

Speaking personally, I wish that Saddam had been removed in the first Gulf War. There was a good case for doing so, which got ignored largely because many Western governments felt they would get more profit out of having his secular government (which still owed a lot of Western arms dealers a lot of money) in place. However, this doesn't (any of it) nullify the fact that then the US (and UK, to be fair) invaded another country which had not performed an act of aggression towards us, on marginally legal [ok, legal if you squint and flop one ear back and look at it in dim light] grounds, and that said invasion was incompetently planned for and incompetently handled and has resulted in the complete disintegration of the invaded country into a morass of internecine warfare.

And whether you say 150,000 or 655,000 casualties, I think any way you cut it that translates to too damn many. And no, I don't have any suggestions at this point on how they might sort it out. Some things just don't have a good answer. However, I think that many Iraqis have a good point in stating that their chances for survival were probably greater under Saddam, and I don't blame them for being pissed off at how this has been handled.

By Luna_the_cat (not verified) on 10 Jan 2008 #permalink

In my opinion, it beats living under a brutal dictatorship for the rest of one's life. The Iraqis had no chance on their own to get out from under Saddam's thumb. I'd rather be dead than live under a dictatorship such as that.

But ... Ben! Just a moment! You're a LIBERTARIAN.

Don't you think that you ought to be able to decide that for yourself, rather than have GOVERNMENT, a foreign one at that, make that decision for you?

Gotta love these freedom-loving libertarians who don't want government to impose the people's will on them, but are happy to see their government impose their will on the citizens of another country entirely.

Thanks, dhogaza -- I was thinking that too, but I couldn't think how to say it.

By Luna_the_cat (not verified) on 10 Jan 2008 #permalink

And whether you say 150,000 or 655,000 casualties, I think any way you cut it that translates to too damn many.

Let's be clear, we're talking DEATHS, not casualties. Casualties include those who are injured but not killed, and that figure is surely much, much higher.

True, and noted. Apologies for my imprecision.

By Luna_the_cat (not verified) on 10 Jan 2008 #permalink

> 150,000

The 150,000 figure is for violent deaths only. Excess mortality for the period, according to this study, is about 400,000. This is the number that matters - it hardly matters to an Iraqi if the invasion caused him to be shot, blown up, or die of disease through breakdown of critical infrastructure and social arrangements. Western corporate media, of course, choose to play up the lower number.

[The fact that all excess mortality was attributable to violence was a serious problem with the findings of Lancet 2. It seems likely that some problem with Lancet 2 caused a large number of the non-violent deaths to be counted as violent.]

Excess mortality for the period, according to this study, is about 400,000.

I wonder how long it will take the RW blogosphere, the WSJ and all the rest that are so quickly and eagerly piling on "this debunks the study published in the Lancet" to realize that they are endorsing an excess mortality figure of 400,000 rather than 600,000+?

This really destroys the argument that the Lancet figure was exaggerating many times over the "real" figure.

This is from the new report--

"Recall of deaths in household surveys with very few exceptions suffer from underreporting of deaths. None of the methods to assess the level of underreporting provide a clear indication of the numbers of deaths missed in the IFHS. All methods presented here have shortcomings and can suggest only that as many as 50% of violent deaths may have gone unreported. Household migration affects not only the reporting of deaths but also the accuracy of sampling and computation of national rates of death."

If I understand them correctly, does this mean their own violent death count might be too low by a factor of two?

If so, for those of us who aren't experts, there's something misleading about citing the CI as though it represents the likely range in which the true value could be found. Of course the same criticism could possibly be made of Lancet2. Maybe the truth is somewhere in-between the ranges given by the two studies.
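For scale, the paper's own numbers show what different underreporting fractions would imply: the abstract reports an observed violent-death rate of 1.09 and an adjusted rate of 1.67 per 1,000 person-years, and the passage quoted above allows for up to 50% of violent deaths going unreported. A quick sketch (the 35% midpoint is just an illustrative value, not a figure from the paper):

```python
# If a fraction f of violent deaths goes unreported, the true rate is
# observed / (1 - f). The observed rate is from the IFHS abstract.
observed = 1.09  # violent deaths per 1,000 person-years

for f in (0.0, 0.35, 0.50):  # 0.35 is an illustrative midpoint
    adjusted = observed / (1 - f)
    print(f"underreporting {f:.0%} -> adjusted rate {adjusted:.2f}")
```

At roughly 35% underreporting the adjusted rate comes out near the paper's 1.67; at the 50% upper bound it would be 2.18 — the factor-of-two undercount asked about above.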

By Donald Johnson (not verified) on 10 Jan 2008 #permalink

I don't think the IFHS authors will endorse an estimate of excess deaths. They will argue, not completely inappropriately, that the prewar mortality estimate is clearly way too low. Again to refer to Jon Pedersen's [I don't know if he was involved in this -- his name isn't on the list, but he was lead on the ILCS] comments to me, he expressed skepticism about the ability to retroactively assess prewar mortality rates, feeling that anything much beyond 6-months to a year was bound to be low. That was, again if I recall our conversation correctly, the reason he focussed upon violent deaths -- you don't need an accurate prewar estimate.

So we know that there's underreporting here. We don't know how much. But we do know that it is likely severe for the prewar period. Thus, all the estimates are exactly that, estimates. But an estimate of "excess deaths" from this data is bound to be flawed, more flawed than the estimates of violent deaths.

I think there are a number of concerns here. The 11% of clusters not visited, the method of estimating missing data, the likely prewar mortality undercount, the extent of mortality from Baghdad [I believe the survey that asked about deaths in one's household could be used to estimate the geographic distribution -- my recollection is that it had a much lower rate from Baghdad, consistent with L2], and the fact that this was conducted by the Iraqi government, with possible difficulties in data collection. On the other hand, the sample is large and it's not as if L2 was flawless.

Alas, as scientists, and as concerned citizens, we may never know the true figures, given the mass exodus in recent years. This whole episode is a complex case study in how to use imperfect data on important policy questions. The only thing I suspect we know pretty well is that enormous numbers of Iraqis have died, probably at least 200,000 [given the last violent years]. This shows that the invasion has been a humanitarian disaster.

By Stephen Soldz (not verified) on 10 Jan 2008 #permalink

> I don't think the IFHS authors will endorse an estimate of excess deaths.

Playing coy is not an option - like the IBC numbers, the IFHS numbers are going to be interpreted as indicating the cost of the invasion in human lives. 400,000 may not be a very reliable number, but it is a much more realistic reference point than 150,000.

It is also important to remember that the 400,000 deaths estimate covers only 3/03-6/06.

"I'd rather be dead than live under a dictatorship such as that."

Yeah, it's real easy to decide other people would be better off dead.

Personally I think Americans would be better off dead than continue to struggle on without universal health care.

Incidentally, while nominally elected, the current Iraqi government is every bit as bad as Saddam's in various measurable ways - violence and discrimination against women and gays have both increased; torture by the security forces is worse now than it was in the 1990s.

By Ian Gould (not verified) on 10 Jan 2008 #permalink

"IBC measures only civilian deaths."

IBC TRIES to measure only civilian deaths.

When groups of bodies of adult males are dumped showing signs of torture and murder by death squads, or when hundreds of adult male bodies are reported to have turned up in a morgue over a period of months, dead from bullet wounds, it's a reasonable inference that some of them are in fact insurgents.

Similarly, there are a LOT of suicide bombings where the minimum and maximum casualty figures vary by one; the simplest explanation is that some of the media sources they rely upon include the bombers in the figures.

Furthermore there are some incidents in the database which just seem blatantly to contradict their "civilian deaths" criterion such as reports of Iraqi troops or police being killed by car bombs.

By Ian Gould (not verified) on 10 Jan 2008 #permalink

"In other words, IBC and IFHS match up pretty darn well, depending on what you think the ratio of civilian/combatant deaths would be."

So do IBC and Lancet "depending on what you think the ratio of civilian/combatant deaths would be".

By Ian Gould (not verified) on 10 Jan 2008 #permalink

I think that for practical purposes (arguing for or against the war), a vague but undoubtedly correct claim that the war has caused hundreds of thousands of excess deaths is good enough. We may never know if the excess (for 2003-2006) is in the low hundreds or high hundreds of thousands. I'd like to know how solid the numbers are for the claims given for Saddam's killing--HRW claimed 300,000 dead from his security forces, and then sometimes people throw in hundreds of thousands for the war with Iran and then (for people who ignore the Western responsibility) there are the sanction deaths (with estimates for those varying all over the place), so you end up with people blaming a million or more deaths on Saddam, with no one expected to produce strong evidence for any of these numbers.

Going back to David Kane's statement

"Isn't it plausible that the 110,000 violent deaths that IBC does not report are combatant (soldiers, police, insurgent and so on)? "

I take it we're supposed to assume IBC is counting most or all the noncombatants. The US claims to have killed 15,000 insurgents from June 2003 to the end of 2006 (see the link I provided in # 33). That still leaves nearly 100,000. So if David is right, Iraqi combatants are killing each other at a rate many times greater than the US military has managed and not in death squad activities or car bomb attacks, because those are included in IBC's data. This is surprising--one gets the impression that most of the killing in Iraq is of civilians, but actually, as David now informs us, most of it must be occurring in pitched battles that have completely escaped the attention of the press.

By Donald Johnson (not verified) on 10 Jan 2008 #permalink

#44: Where did you find that the excess mortality from this new study was around 400,000? I couldn't find it in the paper itself, but it would be really interesting if it were true...

If you derived this figure from data in the paper, would you mind unpacking your reasoning a little bit for the lay people amongst us (myself included)?

The BRussells Tribunal recently presented an important new survey, "Deterioration of Women's Rights and Living Conditions Under Occupation", written by Iraqi scientist Dr. Souad N. Al-Azzawi, a specialist on the subject of depleted uranium and a member of our Advisory Committee: (http://www.brusselstribunal.org/pdf/WomenUnderOccupation.pdf)

The conclusions of that study confirm earlier studies, polls and surveys like the Lancet survey and ORB's poll. (http://www.justforeignpolicy.org/iraq/iraqdeaths.html) And even worse: the survey concludes that the total number of deaths amongst the 4.5 million internally displaced or forced migrated people both inside and outside the country is estimated to be 868,500.

The survey, composed of 21 questions, was distributed in two major cities:

Inside Baghdad, Iraq in the Karada District, and Kudsiya area in Damascus, Syria where more than 200,000 Iraqi refugees live.

The questionnaire was distributed in the selected areas by two teams. Each team consisted of PhD, MS, and BSc holders who work, or used to work, with the author at Baghdad University.

The 150 women who answered the survey were a part of 150 families or households composed of a total of 502 Iraqis.

Further indications that the WHO figures are way too low:

"A study of 13 war affected countries presented at a recent Harvard conference found over 80% of violent deaths in conflicts go unreported by the press and governments. City officials in the Iraqi city of Najaf were recently quoted on Middle East Online stating that 40,000 unidentified bodies have been buried in that city since the start of the conflict. When speaking to the Rotarians in a speech covered on C-SPAN on September 5th, H.E. Samir Sumaida'ie, the Iraqi Ambassador to the US, stated that there were 500,000 new widows in Iraq . The Baker-Hamilton Commission similarly found that the Pentagon under-counted violent incidents by a factor of 10. Finally, a week ago the respected British polling firm ORB released the results of a poll estimating that 22% of households had lost a member to violence during the occupation of Iraq, equating to 1.2 million deaths. This finding roughly verifies a less precisely worded BBC poll last February that reported 17% of Iraqis had a household member who was a victim of violence.

There are now two polls and three scientific surveys all suggesting the official figures and media-based estimates in Iraq have missed 70-95% of all deaths. The evidence suggests that the extent of under-reporting by the media is only increasing with time." (Les Roberts, 20 Sep 2007 - http://www.globalresearch.ca/index.php?context=viewArticle&code=ROB2007…)

Can we please have a criticism of this study's methodology by David Kane, preferably pointing out where they made the same errors as Roberts and where they have got it better, and why this study is more reliable than IBC whereas Roberts' ones weren't? I've been really looking forward to it and we're already 50 posts in.

J

By jodyaberdein (not verified) on 10 Jan 2008 #permalink

If you derived this figure from data in the paper, would you mind unpacking your reasoning a little bit for the lay people amongst us (myself included)?

The table on page 489 lists death rates (per 1,000 person-years):

Before the war, all causes (all ages): 3.17
After the invasion, all causes (all ages): 6.01

Before the war, violent (all ages): 0.10
After the invasion, violent (all ages): 1.09

So the missing increase of roughly 2.0 in the death rate should be from non-violent causes. If a 1.0 increase translates into about 150,000 additional (violent) deaths, the 2.0 increase should give about another 300,000.

(A pretty crude calculation, but I think you can do the exact math with the data provided.)
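The crude calculation can be written out explicitly; a minimal sketch using the rates above (treating roughly 150,000 deaths per 1.0/1,000 person-years of excess violent rate as the scaling factor is the same rough assumption as in the comment):

```python
# Rates from the table (deaths per 1,000 person-years).
pre_all, post_all = 3.17, 6.01    # all causes, before / after invasion
pre_viol, post_viol = 0.10, 1.09  # violent causes, before / after

excess_total = post_all - pre_all                  # ~2.84
excess_violent = post_viol - pre_viol              # ~0.99
excess_nonviolent = excess_total - excess_violent  # ~1.85

# Rough linear scaling: ~150,000 violent deaths per ~1.0 of excess rate.
scale = 150_000 / excess_violent
print(round(excess_nonviolent * scale, -3))  # non-violent excess, ~280,000
print(round(excess_total * scale, -3))       # total excess, ~430,000
```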

Jody said: "Can we please have a criticism of this study's methodology by David Kane"

Yes, I too am waiting with bated breath for Kane to weigh in with his Fields Medal-worthy analysis.

You know, educate us mere mortals about the finer points of calculating CMRs and the like...

Well, maybe not CMRs, but you know what I mean.

...he expressed skepticism about the ability to retroactively assess prewar mortality rates, feeling that anything much beyond 6-months to a year was bound to be low. That was, again if I recall our conversation correctly, the reason he focussed upon violent deaths -- you don't need an accurate prewar estimate.

Sorry, but that sounds bogus to me! We are talking about people who live in the same household! How can you assume that someone wouldn't remember that his son died 7 months ago?

Someone (Burnham?) wrote a very good explanation of how to get these numbers right: people forget the year of a death, but they will mostly get the date right relative to an important event. (The invasion. Hint, hint!)

PS: death dates should also be written down on the death certificate. Surely David Kane will applaud the Lancet team for having the wits to ask for those, and will attack this study for failing to do so. David?

sod #58: someone (Burnham?) wrote a very good explanation of how to get these numbers right: people forget the year of a death, but they will mostly get the date right relative to an important event. (The invasion. Hint, hint!)

So you believe that pre-war mortality in sanctions-hit Iraq was less than half that of Iran and Syria?

From the article:

http://content.nejm.org/cgi/content/full/NEJMsa0707782

The pre-invasion rates of adult mortality from any cause per 1000 person-years were 2.0 for men and 0.8 for women in the IFHS, with a relatively small proportion of deaths attributed to violent causes. In a regional comparison for 2002, a study by the WHO estimated that in Syria and Jordan, the rates of death for adults were 4.2 for men and 2.8 for women. In Iran, the rates were 4.7 and 2.9, respectively. If we assume that the rate of death in Iraq would have been at similar levels without the invasion, underreporting of adult deaths in the IFHS would be as much as 55% for men and 70% for women for reported deaths occurring in 2001 and 2002. The underreporting of deaths was expected to be lower for more recent years.

So you believe that pre-war mortality in sanctions-hit Iraq was less than half that of Iran and Syria?

No, I assume this paper produced an undercount, as it did with violent deaths.

> If you derived this figure from data in the paper, would you mind unpacking your reasoning a little bit

I figured as sod (#56) indicates.

The rate of violent death is 1.09 deaths/KP-Y. The total excess mortality rate is 6.01 - 3.17 = 2.84 deaths/KP-Y, or about 2.60 times the violent rate alone. Scaling the 151,000 by 2.60, I get about 393,000.

The mortality rates are from table 3. The 151,000 is from the abstract.

150,000 is a massive difference from 600,000. That deserves a serious explanation. It will be interesting to watch this debate.

Luna_the_cat wrote:

Speaking personally, I wish that Saddam had been removed in the first Gulf War. There was a good case for doing so, which got ignored largely because many Western governments felt they would get more profit out of having his secular government (which still owed a lot of Western arms dealers a lot of money) in place. However, this doesn't (any of it) nullify the fact that then the US (and UK, to be fair) invaded another country which had not performed an act of aggression towards us, on marginally legal [ok, legal if you squint and flop one ear back and look at it in dim light] grounds, and that said invasion was incompetently planned for and incompetently handled and has resulted in the complete disintegration of the invaded country into a morass of internecine warfare.

Who are you kidding? Do you honestly think internecine warfare wouldn't have been a problem if Gulf War I had resulted in Saddam's removal? Any war to turn a tribal, sectarian, endogamous Mideastern country like Iraq into a democracy was a mistake in concept, not merely execution.

You are all missing the point. This study does not report 151,000 deaths in Iraq. It reports 151,000 violent deaths through June 2006. If you do the calculations, as I have done here, you'll find what the report actually reports is 610,000 "excess deaths" in Iraq through today. A number very much in line with, if not identical (it would be shocking if it were, given the imprecision in both) to the numbers from the Johns Hopkins study. Yes, there are differences in details, which is also hardly surprising given the difficulties in performing either study, the assumptions required to analyze the data, and so on. But considering that the report talks about "a massive death toll," it can hardly be considered a whitewash or some kind of attempt to discredit the Johns Hopkins study, or to diminish the extent of the tragedy in Iraq.

Sortition, I agree that the difference in the ratio of violent to non-violent deaths between IFHS and L2 seems bad, but I think it could be a consequence of the study's affiliations and the clearly non-random missing data. This study recorded more violent deaths in Baghdad than did L2, which could be because the people in Baghdad were more willing to cooperate with Health Dept-affiliated employees than were the people in Anbar (where I think L2 found relatively more violent deaths). I imagine Sunnis in Anbar thought twice about telling a representative from a Shiite agency that their relative was murdered by a Shiite death squad. Also, possibly the highest rates of violent death in Anbar were in precisely the areas that weren't sampled.

But I agree, this study could give an indication that L2's violent death rates were too high. However, since Iraq is so dangerous, I suppose we will never know.

You are all missing the point. This study does not report 151,000 deaths in Iraq. It reports 151,000 violent deaths through June 2006. If you do the calculations, as I have done here, you'll find what the report actually reports is 610,000 "excess deaths" in Iraq through today. A number very much in line with, if not identical (it would be shocking if it were, given the imprecision in both) to the numbers from the Johns Hopkins study. Yes, there are differences in details, which is also hardly surprising given the difficulties in performing either study, the assumptions required to analyze the data, and so on. But considering that the report talks about "a massive death toll," it can hardly be considered a whitewash or some kind of attempt to discredit the Johns Hopkins study, or to diminish the extent of the tragedy in Iraq.

Unfortunately, I don't think that is accurate. The authors of the Lancet study themselves concluded this:

We estimate that between March 18, 2003, and June, 2006, an additional 654,965 (392,979-942,636) Iraqis have died above what would have been expected on the basis of the pre-invasion crude mortality rate as a consequence of the coalition invasion. Of these deaths, we estimate that 601,027 (426,369-793,663) were due to violence.

Either approximately 600,000 deaths were due to violence or the Lancet study is incorrect. It is a cop-out to defend the Lancet figure as reliable all this time and then, when presented with contradictory evidence, to exclaim "Well, a lot of people did die, somehow!"

"Don't you think that you ought to be able to decide that for yourself, rather than have GOVERNMENT, a foreign one at that, make that decision for you?"

Er, um, like you think you actually have a choice when you live under a brutal dictatorship? In that case, all bets are off.

Thanks, Sod and Sortition, for your replies!

So, am I missing something here?

**Lancet 2**

Pre-invasion mortality rates = 5.5 per 1000 people per year

Post-invasion mortality rate (40 months) = 13.3 per 1000 people per year

Total Excess mortality = 13.3 - 5.5 = 7.8

**Increase in mortality rate post invasion = 13.3/5.5 = 2.4**

**NEJM**

Pre-invasion mortality rates = 3.17 per 1000 per year

Post-invasion mortality rate (40 months) = 6.01 per 1000 people per year

Total Excess mortality = 6.01 - 3.17 = 2.84

**Increase in mortality rate post invasion = 6.01/3.17 = 1.9**

Violent death-rate Pre-Invasion = 0.10

Violent death-rate Post-Invasion = 1.09

**Excess Violent Deaths = 1.09 - 0.10 = 0.99 ≈ 1**

Conclusion: 1 per 1000 per year = 151,000 (violent) deaths

Scaling Total Excess Mortality = 2.84 * 151,000 = 428,840

**Summary:**

JHU:
2.4 increase in mortality post invasion
650,000 excess mortality

IMH:
1.9 increase in mortality post invasion
430,000 excess mortality

The only major discrepancy seems to be in the attribution of the cause of death in the two studies. Am I wrong in thinking this?

Oops. "IMH" in the summary above should read "IFHS".
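For anyone who wants to check the comparison, the arithmetic in the summary above is easy to reproduce (rates as quoted from the two studies; scaling by roughly 151,000 deaths per 1.0/1,000/year of excess rate is the same crude assumption used in the calculation above):

```python
# Headline rates (deaths per 1,000 people per year) from the two studies.
studies = {
    "JHU (Lancet 2)": (5.5, 13.3),
    "IFHS (NEJM)":    (3.17, 6.01),
}

for name, (pre, post) in studies.items():
    ratio = post / pre        # relative increase post-invasion
    excess_rate = post - pre  # excess deaths per 1,000 per year
    print(f"{name}: mortality ratio {ratio:.1f}, excess rate {excess_rate:.2f}")

# Crude scaling: ~151,000 deaths per 1.0/1,000/year of excess rate.
ifhs_excess = (6.01 - 3.17) * 151_000
print(round(ifhs_excess))  # 428840, as in the summary
```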

SG,

There are many ways to try and explain the discrepancies between Lancet 2 and IFHS. Like all real life studies, both have a lot of points which can be called into doubt. I don't think that the differences between the findings of the studies are jarring to the point that they could be seen as contradicting each other. In fact, despite pretenses in the media, this is a vindication of the "incredible" estimate of Lancet 2.

That said, the lack of increase in non-violent deaths in the Lancet 2 findings seemed unlikely to be true a priori, even before seeing the IFHS data.

I agree, the question I suppose becomes whether it was due to misreporting by study subjects, and if so in which study. One would assume that the Lancet was more accurate, given it collected death certificates, but the IFHS more precise, since it had a larger sample. I think that, sadly largely because of the level of destruction which has made these studies necessary, we will never know the truth.

"Er, um, like you think you actually have a choice when you live under a brutal dictatorship? In that case, all bets are off."

Ben, you argued that Iraqis should be happy to die rather than live under Saddam.

Guess what, in order to exercise that particular "choice" all you had to do was say or do anything critical of Saddam.

Most Iraqis chose to live (as did, for example, most Russians and Eastern Europeans), then the US government decided to make that choice for them.

By Ian Gould (not verified) on 10 Jan 2008 #permalink

Mike C: "Either approximately 600,000 deaths were due to violence or the Lancet study is incorrect."

To my mind, this is the most credible critique of the Lancet studies.

Most wars see far more deaths from disease, starvation and exposure than from violence.

Iraq may be an exception to that generalisation and there are some reasons why that could be the case.

For example, many Iraqi refugees and internally displaced people seem to have been taken in by members of their extended families rather than ending up in refugee camps.

Further, the near-collapse of the infrastructure of the country under sanctions seems to have had some perversely beneficial effects post-invasion. For example, due to the collapse of the water treatment infrastructure many Iraqi families were already boiling their drinking water. This prevented any major outbreaks of cholera, normally one of the big killers in conflict zones.

By Ian Gould (not verified) on 10 Jan 2008 #permalink

Also, Ian, if the majority of violent deaths are death-squad related, many people could be dying violent deaths without any consequent breakdown in the type of infrastructure required to prevent child deaths and disease. In such a case the main cause of increased non-violent deaths would be internal displacement, and as you say these people could be being taken in by relatives. It's the sort of pattern one might expect in a developed nation that had suddenly and inexplicably descended into sectarian violence (I wonder how that could have happened...)

To my mind, this is the most credible critique of the Lancet studies.

Most wars see far more deaths from disease, starvation and exposure than from violence.

This is a good point. While Iraq isn't exactly World War I, the ratio of violent deaths to deaths as a result of (the well publicized) lack of medical supplies and the like does seem remarkably high.

Following a comment by donald johnson on the [CrookedTimber thread](http://crookedtimber.org/2008/01/10/post-invasion-deaths-in-iraq/#comme…), I decided to see if it was possible to replicate the results from the first Lancet study using the data from this recent NEJM.

According to Table 4 from the Supplementary Materials:

Mortality Rate All causes Pre-Invasion = 3.17

Mortality Rate All causes for Mar03 to Dec04 = 5.92

The excess deaths for the Mar 03 to Sep 04 period (= the Lancet 1 period):

(5.92-3.17)/1000 * (17.8/12) * 24400000 = 99531

Did I make a huge mistake somewhere, or do the results from this new NEJM article and Lancet 1 match almost perfectly?

Disclaimer: I'm not an epidemiologist nor a statistician, I just used the [formula](http://scienceblogs.com/deltoid/2007/08/robert_chung_on_david_kane.php#…) that Robert used to show David Kane how it was possible to calculate the 98000 estimate excess deaths found by Lancet1.
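A minimal sketch of that arithmetic (rates from Table 4 of the supplementary materials; the 24.4 million population and 17.8-month window are the Lancet 1 parameters):

```python
# Excess deaths over the Lancet 1 window, using the IFHS all-cause rates.
pre_rate = 3.17   # pre-invasion rate, per 1,000 person-years
post_rate = 5.92  # Mar 2003 - Dec 2004 rate, per 1,000 person-years
months = 17.8     # length of the Lancet 1 study period
population = 24_400_000

excess = (post_rate - pre_rate) / 1000 * (months / 12) * population
print(int(excess))  # 99531, close to Lancet 1's estimate of ~98,000
```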

Yes dalazal, the agreement is almost perfect.

There can be only one conclusion -- L1 is a fraud. Les Roberts used a time machine to find out what the results of the NEJM study would be and cooked the L1 numbers so they agreed. What other explanation is possible?

Maybe the resurrected dead created by the Fallujah cluster are like, Quantum resurrected dead, which can appear at any point in time or space, dead or alive, to prove or disprove any paper according to Kanian ZombieDynamics? These Quantum Undead have a kind of intrinsic spin, if you will, which certain right wing academics are capable of harnessing to prove whatever they want.

Most wars see far more deaths from disease, starvation and exposure than from violence.

I am not a big fan of these rules of thumb (like that cursed "3:1 wounded/killed" ratio which has made its way from the world of tabletop wargames to the Lancet debate). Conflicts can all be very different and when we're talking about death squad and ethnic massacres rather than battlefield wars, this might not be true - the limiting case would be something like the Rwandan civil war where practically none of the deaths came about from these ancillary causes.

Mike C says, "Any war to turn a tribal, sectarian, endogamous Mideastern country like Iraq into a democracy was a mistake in concept, not merely execution".

Good God Mike, do you honestly believe that was what the US/UK aggression was about? Turning Iraq into a democracy? The fact is that those with concentrated wealth and power loathe democracy, because it puts power into the hands of the majority of the ordinary population. Thomas Carothers, a member of Reagan's government whose portfolio was 'democracy enhancement', explained it well when he said that the US supports democracy if and only if 'it is in line with US economic and military interests. When it isn't', he said, 'it is downplayed or even ignored'. He went on to say that there is a 'strong line of continuity' in US foreign policy in 'supporting limited, top-down forms of government that does not risk upsetting the traditional structures of power with which the US has long been allied'.

As Pepe Escobar has explained, the US models in Iraq and Afghanistan, are no nation building, chaos, and puppet governments. The State Department asserted in 1950 that the Middle East oil-producing region is the 'greatest material prize in history' and a 'source of stupendous strategic power'. Further, given that senior planners like George Kennan stated openly that any country controlling the region has 'veto power over the global economy', and historical precedents, one can only wonder how anyone with a half-functioning brain can honestly believe that a major US aim in invading Iraq was to bring democracy. The US has an historical record of undermining many democracies wherever and whenever they threatened to break out. Three classic examples: Mossadegh. Arbenz. Allende. The real enemy to US elites has always been indigenous nationalism; that countries will attempt to develop their economies independent of US control. Moreover, the US is more than happy to support repressive dictatorships when it suits their economic and political agenda. Look at Saudi Arabia and Egypt; again, there are countless historical examples across Latin America.

As far as Luna's point regarding removal of Saddam in 1991, this could have occurred had the US military not done everything in its power to ensure that the status quo was retained. When Brent Scowcroft said that there was no way the US was going to allow the Shia majority to take over, he wasn't kidding. The US military did everything in its power to prevent a coup, because they feared the alternative. They denied the Shia access to munitions stores. They impeded the movement of Shia militias and watched as Saddam massacred them in their thousands. Robert Fisk details the betrayal in his lengthy 2006 tome, "The Great War for Civilization".

By Jeff Harvey (not verified) on 10 Jan 2008 #permalink

"As far as Luna's point regarding removal of Saddam in 1991, this could have occurred had the US military not done everything in its power to ensure that the status quo was retained. When Brent Scowcroft said that there was no way the US was going to allow the Shia majority to take over, he wasn't kidding. The US military did everything in its power to prevent a coup, because they feared the alternative. They denied the Shia access to munitions stores. They impeded the movement of Shia militias and watched as Saddam massacred them in their thousands."

They also allowed repeated violations of their self-declared southern no-fly zone to allow Saddam to use US-supplied helicopter transports to ferry troops around the south.

By Ian Gould (not verified) on 10 Jan 2008 #permalink

Ian,

Absolutely. As Tom Friedman said in one of his NYTimes pieces, as I recall, the best option from the US perspective at the time was to have an iron-fisted ruler in Iraq who did as he was told, a kind of 'Saddam-lite'. The US demands obedience from its client states, and Saddam had slipped the leash. This is why he eventually 'had to go' - not because of his many brutal atrocities, which are well documented, but because he was no longer a reliable tyrant. The story was similar when Suharto, one of the world's biggest torturers and mass murderers, fully supported by Washington through almost all of his brutal regime, became uppity in 1998 during the collapse of the SE Asian economies and challenged IMF dictates. A Clinton aide had called him 'our kind of guy' only two years earlier, in spite of full knowledge of his vast crimes and brutality. But when he started standing up to Washington, he had to go. His removal was far easier, but in the end the death toll under his regime probably far exceeded even Saddam's in Iraq. But this information is of little interest to the western media, anxious to spread the usual myths of our 'noble intent' and 'basic benevolence'.

By Jeff Harvey (not verified) on 10 Jan 2008 #permalink

Jeff, this was the so-called "Sunni General" scenario so beloved of the George HW Bush and Clinton administrations in which Saddam would be overthrown by an internal bloodless coup by a more moderate and malleable Sunni dictator.

This scenario nearly worked when Saddam's two sons-in-law made their attempted coup in the mid-90's.

On a purely pragmatic basis, the world would probably be better off today if they'd succeeded.

As you know, I generally think you're too harsh in your analysis of American foreign policy.

I had a reasonably up-close perspective on the Australia/Indonesia relationship in the 1990's and I believe the US position was similar.

It wasn't so much "who cares what Suharto does so long as he follows orders" as "He's a thug and a monster but he has a reasonable degree of popular support and any attempt to remove him would probably make the situation worse."

It's not so much that Suharto's defiance of the IMF made him expendable, it's more that his eroding domestic support gave the US a chance to get rid of him.

Take a look at Marcos, the US had no more loyal stooge right up to the last - but once his position became untenable the US was only too happy to see the back of him.

By Ian Gould (not verified) on 10 Jan 2008 #permalink

"Either approximately 600,000 deaths were due to violence or or the Lancet study is incorrect."

There is actually a third possibility.

Interpretation of the wording of the survey.

I have not seen the questions on the L2 survey, but it is at least possible that the question determining whether a death was violent or not was worded in a way that inclined people to attribute the vast majority of deaths to violence.

When one is surrounded by violence that one had not previously known, and someone asks whether the death of one's child (from an illness for which one has no explanation) was due to violence, one may be inclined to say yes, unless it is made very clear that "death from sickness" (even an undiagnosed sickness that could not be treated for lack of doctors during the war) does not fall into the "death from violence" category.

If that were the case, it would be a problem with the study, but it would not affect the total number of excess deaths, which is really the most important thing from the standpoint of impact of the war.

It would certainly not "negate" the results of the study, at any rate.

Hi Ian,

I see many of your points. However, let us not read too much into the scenario of the US 'tolerating' Suharto until it was easy to dispose of him. After all, Paul Wolfowitz, whose influence on different US administrations cannot be overstated, was ambassador to Indonesia for many years and a great supporter of Suharto, even after he was deposed. Moreover, it was the US, in combination with Australia and the UK, who fomented the coup to overthrow the Sukarno government in 1965. General Suharto was supplied with lengthy lists of names of PKI members and Sukarno supporters by the CIA, and the US/Australia/UK fully supported the resulting carnage, as declassified documents from the period show. Suharto, in fact, might have been the first to implement absolute free market policies after he came to power at the behest of US and European multinationals.

My criticism of US foreign policy is that it is not, and never has been, based on noble intentions of spreading democracy, freedom and human rights. To be fair, neither is, nor ever has been, the foreign policy of the UK. The aim has always been to expand western markets and to ensure that capital flows in the 'right' direction. Moreover, real democracy from the 'bottom up' is in complete conflict with the absolutist agenda of free-market orthodoxy, because wherever Washington Consensus policies have been implemented, they have driven between 30 and 60% of the population into absolute poverty. The wealth that is generated by this doctrine, which is promised to 'trickle down', never does. Hence, implementation of free market absolutism at home or abroad by successive US governments has generally required (a) some kind of brutal, authoritarian regime, as in Chile, (b) a war fought on the basis of disinformation, but promoted using the same old 'noble' propaganda, as in Iraq, or (c) a natural disaster, such as the Asian tsunami or hurricane Katrina.

By Jeff Harvey (not verified) on 11 Jan 2008 #permalink

Lopakhin:

Thanks.

That certainly seems to rule out the explanation I proposed above.

Just a couple of very quick thoughts:

I think I've seen a few comments on the possibility that deaths are higher than expected in proportion to injury and maiming, for a couple of reasons. The first is the prevalence of sectarian killings and assassinations. A lot of the violence has the purpose of making sure that a target is dead, not just "out of action". This is certainly going to skew things away from previous war situations, where it might be most "effective" for a side to make sure that there were more wounded than dead, to chew up resources. Certainly, the sheer number of individual revenge killings is going to skew this a bit. The other possibility to take into account is that even the less targeted efforts, such as suicide bombings and car bombs, are being built to maximise deaths, not just injury, by upping the amount of explosives and deadly shrapnel -- and medical supplies for the treatment of injury and infection are somewhere between "thin on the ground" and "nonexistent", which probably boosts the fatality rate. Hospitals were looted early, and it is my understanding that many of them have not been able to recover the ground they lost.

The other quick thought is that while it is entirely possible that removing Saddam during or right after Gulf War I might have resulted in the suppressed sectarian hostility flaring (as it did in parts of the former Soviet Union), as other posters have noted, it probably would have been resolved much more quickly -- had we not hung the internal elements who wanted to remove Saddam out to dry. In addition, the current situation is exacerbated by the fact that dedicated Islamic extremists from a number of other countries are actively at work there right now to keep the sectarian violence as high and vicious as possible, at least in part to enhance hostility against the US as well. I can't see that the same situation would have existed in 1992.

By Luna_the_cat (not verified) on 11 Jan 2008 #permalink

Good God Mike, do you honestly believe that was what the US/UK aggression was about? Turning Iraq into a democracy?

It doesn't matter what our intentions were. The result would still be the same regardless: tribal warfare and sectarian violence.

In some ways, Mike C it matters very much.

There's an emerging right-wing anti-war isolationist position that says "We went in there with the noblest of intentions but those dirty Iraqis don't want to be free, serves them right if they end up with another Saddam."

The truth is far more complex: most of the post-war violence could have been avoided with adequate planning. As a quick example, the original deployment plans involved sending several MP units to Baghdad specifically to maintain public order. Rumsfeld personally intervened to stop that happening. (Because, you know, once they'd rolled over Iraq in three months in another brief, glorious and near-bloodless war like Afghanistan, those units were going to be needed in Syria, Iran, Libya and elsewhere.)

By Ian Gould (not verified) on 11 Jan 2008 #permalink

In #23, Stephen Soldz wrote:

I raised [estimation of adult mortality] with Jon Pedersen when I spoke with him after Lancet 2. He felt that there was no relevance of the undercount in child mortality estimates to adult mortality. He stated that demographers are well aware that child mortality is often underreported in such surveys, which, if I remember correctly, was exactly why they examined the figures in time to resurvey. I'm certainly not an expert here, but do feel that this point should be rebutted before using the undercount of child mortality to discount the ILCS estimates of adult mortality.

Hmmm. It is true that undercounting child mortality isn't necessarily a reason to discount estimates of adult mortality; however, it is also true that accurate estimation of child mortality is no reason to think that adult mortality is also accurately estimated. The ILCS question cannot be used to produce an accurate estimate of the rate of adult mortality.

Luna_the_Cat, I posted up a link to a study during the last round of this debate, which showed ratios of violent to non-violent deaths in different situations. One of them had a ratio of 20:1 or something similar; I think it was the Port Arthur Massacre. The implication was that the 3:1 figure is a complete waste of time.

But that 3:1 figure is irrelevant anyway since it's possible that in Iraq most of the wounded die anyway, and the non-violent deaths figure has nothing to do with the injured victims of death squads. (But I know you knew this anyway).

So, David Kane says that even though the IFHS paper includes data which makes it possible to roughly calculate excess deaths using their survey results, we can't do so because they didn't do so.

That's astonishing.

And since the authors don't mention L1, we are not allowed to mention that the results of L1 and IFHS seem quite similar.

That's also astonishing.

Also astonishing, but refreshingly honest (how many times do we get to say "honest" and "David Kane" in the same post?), Kane no longer waffles with "50% chance of fraud" type statements.

He clearly and unambiguously states his charge:

People like me think that the reason for this is that the L2 data from those provinces is fraudulent,

And Kane has another post where instead of using the authors' number for violent deaths in L1, he does his own calculation and includes Falluja to come up with a much higher number. All to avoid admitting that L1 and IFHS agree.

So Kane says it isn't kosher for us to do our own calculations using data in the IFHS paper for comparison with L1, while at the same time he does his own calculations using the L1 data...

And he wonders why we think he's dishonest.

Tim claims that I want to "avoid admitting that L1 and IFHS agree." Not really. What I want to do is examine closely whether or not L1 and IFHS agree just as the IFHS authors examined closely whether or not L2 and IFHS agreed. That is the way that science moves forward. But, to do this, we need the L1 data at the same level of detail as the L2 data. Will Tim join me in asking Les Roberts to provide that data? Without it, it is hard for anyone to know whether or not L1 and IFHS agree.

By David Kane (not verified) on 12 Jan 2008 #permalink

Will Tim join me in asking Les Roberts to provide that data?

You're on record claiming that the study's a fraud.

Why would Les Roberts do anything to help you?

What I want to do is examine closely whether or not L1 and IFHS agree just as the IFHS authors examined closely whether or not L2 and IFHS agreed. That is the way that science moves forward.

you're pretty fast in accusing the lancet of fraud.

you're pretty slow in confirming that IFHS confirms big parts of the Lancet results.
you're slow in looking at the response rate as well.
or at the fact that IFHS does not show any increase in violence in 2006.

you are moving science forward in a rather special way!

David Kane is right to question how Lancet defenders (NEJM "denialists"?) are making up their own estimates and imputing them to WHO/NEJM. It seems pretty clear they do this basically because the violent death estimate that has actually been published in NEJM sharply contradicts the violent death estimate published in Lancet. So they want to move the goal posts to "excess deaths" (for which there is no estimate given in NEJM), where they think they can draw some semblance of similarity with Lancet.

This is a difficult proposition though, considering that the interpretation of the violent death estimate in NEJM is pretty complex. They do not estimate violent deaths simply by extrapolating the raw data to the population. They do a number of things which wind up raising the violent death estimate up quite a bit above what the raw data alone would have produced.

So when the Lancet defenders move the goal posts to "excess deaths" by creating their own estimate of this to impute to NEJM, are they doing this by just extrapolating the raw data alone (assuming no similar adjustments are necessary for non-violent deaths)? Are they assuming the upward adjustments used for the violent estimate apply across to the non-violent data? Etc.

What the NEJM report does say is:
"Overall mortality from nonviolent causes was about 60% as high in the post-invasion period as in the pre-invasion period. Although recall bias may contribute to the increase, since deaths before 2003 were less likely to be reported than more recent deaths, this finding warrants further analysis."

The "although" - and the need for "further analysis" - seems to disappear when the Lancet defenders move the goal posts and create their own "excess deaths" estimates to impute to NEJM. If recall bias is substantial in pre-war, and less so post-war, as the authors imply, or if this bias has an increasing effect as the recall period moves further back in time, then large numbers of the Lancet-defender-imputed "excess" nonviolent deaths would disappear.

It should be noted that when the Lancet defenders (including Roberts) do this, they are making up their own estimates for this and answering all these kinds of questions the way they want.

So they want to move the goal posts to "excess deaths" (for which there is no estimate given in NEJM), where they think they can draw some semblance of similarity with Lancet.

There is no "excess death" estimate in NEJM, however they DO publish a "total death rate".

We are supposed to accept their calculation of the violent death rate and pretend that the excess death rate does not exist?

Why?

Is your claim that this figure is inaccurate?

If so, then why would we assume the violent death rate is inaccurate?

They did a survey. On the survey was a bunch of questions that attribute cause of death to violent and non-violent causes.

You and Kane argue that we must ignore the survey results that include deaths from non-violent causes.

Again, why?

There's absolutely no reason to do so.

The "although" - and the need for "further analysis" - seems to disappear when the Lancet defenders move the goal posts and create their own "excess deaths" estimates to impute to NEJM.

That's not "moving the goalposts", that's simply running the numbers on the data given by the paper.

It should be noted that when the Lancet defenders (including Roberts) do this, they are making up their own estimates for this and answering all these kinds of questions the way they want.

They aren't "making up their own estimates", they're doing straightforward calculations on the data provided within the paper.

To make it simple ...

NEJM gives a "before the shit hit the fan" rate of deaths.

They give two rates for the "after the shit hit the fan" timeframe.

1. violent death rates
2. non-violent death rates

(2+1-normal) is a perfectly reasonable way to compute the total "after the shit hit the fan" death rate.

Essentially what we're hearing here is that two of the three death rates can't be used, but #1 above is golden.

Why only #1? Why aren't all three, or none of the three, golden?
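For concreteness, the "(2 + 1 - normal)" arithmetic above can be sketched in a few lines of Python. This is a hedged illustration, not a calculation from the NEJM paper: the only grounded figure below is the abstract's adjusted violent-death rate of 1.67 per 1000 person-years; the pre-war rate, non-violent rate, population, and time span are placeholder assumptions you would replace with the published values.

```python
# Sketch of the "(2 + 1 - normal)" excess-death arithmetic. Rates are per
# 1000 person-years. All inputs except the 1.67 violent rate (from the
# abstract) are ILLUSTRATIVE assumptions, not the paper's figures.

def excess_deaths(pre_war_rate, post_violent_rate, post_nonviolent_rate,
                  population, years):
    """Excess deaths = (post-war total rate - pre-war rate) x exposure."""
    excess_rate = (post_violent_rate + post_nonviolent_rate) - pre_war_rate
    return excess_rate * population * years / 1000.0

# Illustrative run with assumed pre-war and non-violent rates:
est = excess_deaths(pre_war_rate=3.0, post_violent_rate=1.67,
                    post_nonviolent_rate=5.0, population=27_000_000,
                    years=3.25)
print(round(est))
```

The point of the sketch is only that every input already appears in the paper's tables, so the "excess deaths" figure is a few clicks away from the published rates.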

"They aren't "making up their own estimates", they're doing straightforward calculations on the data provided within the paper."

It's still making up your own estimates.

The authors reject a simple straightforward calculation for the violent death estimate, which would have produced a lower violent death estimate. So if you think a few calculator clicks based on the raw data is fine, you should be saying the estimate of 151,000 violent deaths is too high, and a lower one is more appropriate. Or is it only fine for "excess deaths" but not for violent deaths?

The authors do not give any estimate of "excess deaths", presumably in part because of their warning with nonviolent deaths that "recall bias may contribute to the increase, since deaths before 2003 were less likely to be reported than more recent deaths, this finding warrants further analysis."

They're essentially saying that a straightforward calculation would probably exaggerate the rate of increase - exaggerate an "excess deaths" estimate - and this needs further analysis. IOW, it would appear it's not so straightforward doing that kind of estimate in the view of the authors (even if it is in the view of those imputing these estimates to NEJM.)

The estimates you are making tend to disagree with the little the authors have actually said, and rest on your own interpretations which would hold that a straightforward calculation from the data is right for "excess deaths" (but not for violent ones?), and that these do not need further analysis before drawing conclusions, as the authors suggest.

The reason seems pretty clear. The direct violence to violence comparison would imply that L2 overestimated violent deaths by 450,000, which is not very flattering for L2, while moving the goal posts to "excess deaths" by manufacturing your own estimate to impute to NEJM creates at least the appearance that they're somewhat closer.

[btw, this sort of reminds me of a previous case where a death estimate was manufactured and imputed to an NEJM paper, also against the even more clear judgment of its authors, and again to create the appearance of support for Lancet. Though in that case it was manufactured just entirely from whole cloth: http://scienceblogs.com/deltoid/2006/06/ibc_vs_les_roberts.php ]

They're essentially saying that a straightforward calculation would probably exaggerate the rate of increase - exaggerate an "excess deaths" estimate - and this needs further analysis.

the excess DEATHRATE is in the paper. the death rate doubled. fact.

the claim about a "recall bias" doesn t convince me either. we are not talking about getting the death of a distant relative several decades ago right to the date.
instead we are talking about household members and less than 5 years. and basically the only important information is, whether he/she died before the invasion.

i like the paper, but it doesn t seem really neutral to me. isn t it strange that it seems to practically endorse IBC while it is pretty sceptical of the Lancet results?
with the latter being a scientific study (and pretty close in methodology to this one) and the former an internet project (on a completely different basis, which ENSURES a massive undercount?)?

i have seen little discussion from guys like you and David about the "50% of violent dead might have been missed" part.

and how about finding no real increase in the june 05-06 period? do you really believe that the time before and after the Samarra bombing was NOT different in death toll? at all?

will we finally hear someone of you comment on their failure to look at death certificates? praise for the Lancet for doing it?

silence, again?

1) I am not sure I understand the "50% of violent dead might have been missed" part. Walk us through it.

2) The (lack of) death certificates is interesting. First, note that this survey was done before L2 was published. So, perhaps the IFHS authors did not realize that every family in Iraq keeps its death certificates close to hand! Second, perhaps they, like the IBC folks, think that the L2 interviewers did not really ask for or see death certificates. (I take no position on this.) So, why ask for something you don't think exists? Third, just because something is not mentioned in this paper does not mean that the IFHS authors don't have an opinion on the topic. You think that this was their last word? Think again.

What will you say if the IFHS authors (or someone else) report asking a sample of Iraqi families for death certificates for some 2005 deaths and can't find them?

The IFHS authors made it very clear that they think the L2 estimates are very wrong. They will have much more to say. Stay tuned.

By David Kane (not verified) on 12 Jan 2008 #permalink

David,

1) up to 50% of the violent dead may have been missed. The paper says so. Is it that you can't read, or you only read what you want to hear?

2) are you now suggesting that the L2 authors faked the death certificates?

The IFHS authors have nothing more to say about the L2 estimates than that they are an overestimate. Anything else they say will, undoubtedly, be twisted heroically by you. I'm sure we all can't wait.

And where is your commentary on the response rate, Mr. Fraud? You have been asked many times.

If I understand the explanation of the IBC-based adjustments in the paper correctly, then it is a major weakness of the study.

The explanation (p. 486) is quite terse and leaves some matters unclear (for example, the description only talks about adjusting Baghdad and Anbar, and remains silent on the matter of adjusting Nineveh and Wasit - in which some clusters were not sampled as well).

However, as far as I can understand the description, the adjustment procedure for Baghdad and Anbar is such that the data collected in all of Baghdad and all of Anbar province is in effect discarded and the entire estimate for those areas is made by scaling the estimates from three other provinces ["the three provinces that contributed more than 4% each to the total number of deaths reported for the period from March 2003 through June 2006" - likely 3 out of the following 4 provinces: Babylon, Basra, Diyala, and Salahuddin], by factors derived from the IBC data.

That is, the estimate given by IFHS is based solely on the data collected outside of Baghdad and Anbar, and on IBC.

The extrapolation from those areas to Baghdad and Anbar (which are estimated to contain more than 50% of the deaths) is based on the assumption that the IBC coverage of Baghdad and Anbar deaths is identical to the average coverage in the 3 reference provinces. Any deviation from this assumption (which seems an unlikely assumption, a priori) would cause significant errors in the estimates.

1) I am not sure I understand the "50% of violent dead might have been missed" part. Walk us through it.

bottom of page 491.
All methods presented here have shortcomings and can suggest only that as many as 50% of violent deaths may have gone unreported.

you surely read the paper, didn't you?
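As a back-of-the-envelope check on what that underreporting bound does to the estimate (my own arithmetic, not a calculation taken from the paper): if a fraction m of violent deaths goes unreported and the adjusted rate is observed / (1 - m), then the abstract's move from 1.09 to 1.67 violent deaths per 1000 person-years implies m of roughly 0.35, comfortably inside the "as many as 50%" bound.

```python
# Back-of-the-envelope check of the underreporting adjustment. The model
# adjusted = observed / (1 - missed_fraction) is my assumption about the
# arithmetic, not the paper's stated method; the two rates are from the
# abstract (violence-related deaths per 1000 person-years).

observed = 1.09   # reported rate
adjusted = 1.67   # underreporting-adjusted rate

implied_missed_fraction = 1 - observed / adjusted
print(round(implied_missed_fraction, 3))  # roughly 0.35
```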

2) The (lack of) death certificates is interesting. First, note that this survey was done before L2 was published. So, perhaps the IFHS authors did not realize that every family in Iraq keeps its death certificates close to hand!

for someone who comments so much on the lancet papers and someone who has such a determined opinion about it being based on fraud,
you are remarkably unfamiliar with the papers!

death certificates were used to confirm the results of lancet 1 already:

Within clusters, an attempt was made to confirm at least two reported non-infant deaths by asking to see the death certificate. Interviewers were initially reluctant to ask to see death certificates because this might have implied they did not believe the respondents, perhaps triggering violence. Thus, a compromise was reached for which interviewers would attempt to confirm at least two deaths per cluster. Confirmation was sought to ensure that a large fraction of the reported deaths were not fabrications.

(page 3)

that lancet 2 was not published shouldn't have stopped the authors from contacting the other team of researchers, who had experience with polling death rates in Iraq.

However, as far as I can understand the description, the adjustment procedure for Baghdad and Anbar is such that the data collected in all of Baghdad and all of Anbar province is in effect discarded and the entire estimate for those areas is made by scaling the estimates from three other provinces

the relevant table in the appendix is labeled "E-Table 1: adjustment for missing clusters using IBC data".
so i would hope that they adjusted only the missing clusters.
http://content.nejm.org/cgi/data/NEJMsa0707782/DC1/1
(pdf page 8)

i think a look at the questionnaire being used for this survey provides a LOT of information:

http://www.emro.who.int/iraq/pdf/ifhs_household_questionnaire.pdf

as David Kane surely will notice in his next review of it, it violates one of the most important rules of polling:
keep it short
(the one page rule)

the Household Mortality questions start on page 16.
David will surely note as well, that it is quite difficult to compare numbers from a DEDICATED mortality survey to those of some byproduct of a general health survey.

you will look beyond that first page David?

David Kane says ...

Third, just because something is not mentioned in this paper does not mean that the IFHS authors don't have an opinion on the topic. You think that this was their last word? Think again.

What will you say if the IFHS authors (or someone else) report asking a sample of Iraqi families for death certificates for some 2005 deaths and can't find them?

The IFHS authors made it very clear that they think the L@ estimates are very wrong. They will have much more to say. Stay tuned.

This is starting to sound more and more like a coordinated attack ...

Either that or pure bluster on Kane's part, since he's clearly hinting that he believes the IFHS team will be supporting his claims of fraud. I'm reaching a bit, but I think that's a very reasonable interpretation of what Kane's written above.

This has been a very, very nasty piece of work on Kane's part thus far, and I'm left with the opinion that it's going to be getting much worse.

And somewhere in these threads, over the last day or two, Kane's made a comment regarding Roberts' refusal to turn over "data and CODE" as being very suspicious.

Again, right out of the Climate Audit playbook. "free the code"...if the software source isn't released, why, the paper's a fraud!

"One would assume that the Lancet was more accurate, given it collected death certificates"
Posted by: SG | January 10, 2008 8:29 PM

What on earth are you talking about?
The Lancet study claimed their surveyed deaths were supported by death certificates to a 90% degree, from which they produced an estimate that was supported to a 10% degree by death certificates.

That was an indication of accuracy in their work was it?

What on earth are you talking about? The Lancet study claimed their surveyed deaths were supported by death certificates to a 90% degree, from which they produced an estimate that was supported to a 10% degree by death certificates.

And how would you argue that a survey unsupported by death certificates at all, used to produce an estimate of total mortality unsupported by even a single death certificate, would be as accurate?

Both papers are based on surveys, not an exhaustive census.

On the IFHS survey, can anyone find or reliably intuit the longitudinal data on non-violent deaths? The flat rate of violent deaths is extremely peculiar, and can be explained by the reluctance of many Iraqis at the height of the conflict to speak honestly to MoH interviewers. The increases in death rates for things like road accidents and the odd category of "unintentional injuries" were striking. Such things could happen in wartime, of course. But then someone could be run off the road by an army convoy and have it called a road accident. Without accounting for this flat rate---this is why the absence of death certificates is important---the IFHS estimate for violent deaths is not credible.

I suggest that getting down in the weeds on matters like this is more important right now than fussing with the obsessive Lancet critics, high-mortality deniers, and those afflicted with Soros Derangement Syndrome.

By John Tirman (not verified) on 13 Jan 2008 #permalink

1) I thank sod for reminding me that L1 asked for death certificates as well. I had forgotten that! (Of course, L1 was not as successful, if memory serves, in finding death certificates as L2.) I do not know why, if death certificate information is easily available in Iraqi households, the IFHS team would not ask for it. An excellent question. Why do you think they didn't ask? I am honestly curious.

2) SG asks "[A]re you now suggesting that the L2 authors faked the death certificates?" No, although the IBC folks have certainly argued that they did. And, to be clear, only one L2 "author" was in Iraq and there were two survey groups. So, for half the time, the interviewers were working without any supervision by an L2 author. They could have reported the existence of imaginary death certificates with no one being the wiser. And, as I document (pdf), the interviewers tended to "forget" to ask for death certificates in a highly non-random fashion, so there is something screwy going on.

But, again, I am not asserting that the reported certificates were faked. I don't know enough about death certificates in Iraq. Really! But the thing that I love about the death certificate issue is that it provides a great opportunity for replication. A different group should go do a survey in which they just ask about deaths (without asking for cause or anything else) and then ask to see the death certificates. If they find them at the 90% rate that L2 claimed, I will adjust my priors toward L2's results substantially. (And I will think much less of the professionalism of IFHS.) If death certificates are not present in Iraqi households at anywhere near a 90% rate (as the IBC folks predict will happen), then we should all be highly suspicious of the data underlying L2.

By David Kane (not verified) on 13 Jan 2008 #permalink

I agree with John Tirman that "getting down in the weeds" of nitty-gritty data analysis is important. I have been told (third-hand) that the data/code behind IFHS will be released. Since professionals provide their data/code to other researchers, and since the IFHS authors are clearly professionals, I expect that this will happen, although I am unsure of the time-line since I think that more papers are forthcoming.

So, perhaps John Tirman will join me in calling for the release of the data/code behind IFHS. That is by far the best way to understand "the longitudinal data on non-violent deaths."

But wait! Tirman just called the IFHS reports "not credible." The L2 team has refused to share data with some folks, like Spagat, on the grounds that they have offered opinions in opposition to L2. Does Tirman expect IFHS to share their data with him while he refuses to share his data with Spagat? Why?

And, even better, there is a lot of data from L1, especially the demographic variables that Roberts claimed were collected, that I (and others, probably including the IFHS authors) would like to see. Surely Tirman will join us in asking Roberts to release it.

By David Kane (not verified) on 13 Jan 2008 #permalink

DK sez:

>If they find them at the 90% rate that L2 claimed, I will adjust my priors toward L2's results substantially.

How's the adjustment of your priors going on the response rate thing?

The IFHS survey is "not credible" in the sense that they report a flat rate for violent deaths throughout the occupation and every other observer and measurement report escalating violent deaths for the same period. So IFHS is not believable. Please note I'm not accusing them of fraud or lying. What I am saying is that something is obviously amiss. How to explain? The most obvious explanation is that an interviewee reports a death in the household, but is worried (reasonable worry, given the circumstances) that reporting the death as a result of violence would place him/her in jeopardy. So getting run over by a convoy is said to be a road accident, or being shot to death is a heart attack or renal failure or "unintentional injury." Getting more data from IFHS would not likely clarify this, in part because they did not obtain death certificates. This is not fraud. This is Iraq. This is Iraq bludgeoned by war. But we still need to sort it out as best and logically as we can, even if the data are not available.

By John Tirman (not verified) on 13 Jan 2008 #permalink

David Kane, you seem to think that every aspect of the Lancet studies was fraudulent - the demographics, the death certificates, the data collection. Why do you think every aspect was fake? I recall in October 2006 the reason you thought everything was fraudulent was the high response rate. What is your current reason?

At post #114 John Tirman says:
"I suggest that getting down in the weeds on matters like this is more important right now than fussing with the obsessive Lancet critics, high-mortality deniers, and those afflicted with Soros Derangement Syndrome."

How about a thread where such issues can be sensibly discussed, but any post going on about how wrong the Lancet studies are is deleted? That may of course require too much moderation to be practical, which would be understandable.

Since professionals provide their data/code to other researchers and since the IFHS authors are clearly professionals...

Why does every post by David Kane have to include a smear of Les Roberts et al?

John Tirman -

Hi. You mention the possible reluctance of many Iraqis at the height of the conflict to speak honestly to MoH interviewers.

That seems right, but only if the interviewers identified themselves as (or were known to be) MoH/government employees. Do you know whether they did?

Another question for all the scientists - given that the MoH was reportedly running death squads during the survey period, what methodological safeguards are there to prevent them from "checking and editing" the data in a creative fashion?

> so i would hope that they adjusted only the missing clusters.

The location data in IBC is not accurate enough to allow imputation within the missing clusters only. It seems that the imputation was done so that the ratios Baghdad/(3 ref governorates) and Anbar/(3 ref governorates) match the IBC ratios. For example, the text says that to reach the 1.98 in E-table 1 [BTW, the text has this as 1.97 - clearly an indication of fraud ;-)], they imputed "a rate of death in the missing clusters that is 4.0 times as high as that in the visited clusters".

If this is indeed the case, this means that the data in the clusters that were collected have no effect on the total estimate, since the imputation in the missing clusters is done in such a way that it would just fill in the difference between the rate in the collected clusters and the rate estimated using the IBC ratios.

Saying this again with symbols:

Let's say that IBC has rates IBC-B and IBC-ref in Baghdad and in the reference governorates respectively. IFHS has rates IFHS-B and IFHS-ref, but IFHS-B is only for a subset of Baghdad. So IFHS imputes

IFHS-B^ = IFHS-ref * (IBC-B / IBC-ref).

The difference IFHS-B^ - IFHS-B is then used to impute deaths in the missing clusters, but this will not affect the estimate for the total IFHS-B^. IFHS-B^ is calculated using only IFHS-ref (i.e. IFHS data from outside Baghdad) and the ratio from IBC. IFHS-B is used only to determine the allocation of deaths within Baghdad.
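The algebra in the comment above can be illustrated with a small numeric sketch. All of the rates below are invented placeholders (not the actual IFHS or IBC figures), chosen only to show that the rate observed in the visited Baghdad clusters cancels out of the total:

```python
# A numeric sketch of the algebra above: the imputed full-Baghdad rate
# IFHS-B^ depends only on IFHS-ref and the IBC ratio, so the rate
# observed in the visited Baghdad clusters never enters the total.
# All rates here are made-up placeholders, not actual IFHS/IBC figures.

def imputed_total_rate(ifhs_ref, ibc_b, ibc_ref):
    """IFHS-B^ = IFHS-ref * (IBC-B / IBC-ref)."""
    return ifhs_ref * (ibc_b / ibc_ref)

ifhs_ref = 0.5             # hypothetical rate in the reference governorates
ibc_b, ibc_ref = 4.0, 1.0  # hypothetical IBC rates (Baghdad vs. reference)

total = imputed_total_rate(ifhs_ref, ibc_b, ibc_ref)
print(total)  # 2.0, whatever the visited clusters showed

# The visited-cluster rate only fixes how deaths are split between
# visited and missing clusters, e.g. assuming equal population shares:
for visited_rate in (0.5, 1.0, 1.5):
    missing_rate = 2 * total - visited_rate  # fills in the difference
    print(visited_rate, missing_rate)        # the average is always 2.0
```

Whatever value `visited_rate` takes, the missing-cluster imputation absorbs the difference, so the combined estimate stays pinned to the IBC-derived ratio.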

1) I thank sod for reminding me that L1 asked for death certificates as well. I had forgotten that! (Of course, L1 was not as successful, if memory serves, in finding death certificates as L2.)

successful enough to make the team ask for it in EVERY case in L2.

I do not know why, if death certificate information is easily available in Iraqi households, the IFHS team would not ask for it. An excellent question. Why do you think they didn't ask?

because this was NOT a mortality poll. they were on page 16 of their questionnaire when the mortality question came up. everybody wanted this to finally end.

That seems right, but only if the interviewers identified themselves as (or were known to be) MoH/government employees. Do you know whether they did?

they did. the start of the questions is like this:

HOUSEHOLD INFORMATION PANEL
We are from the Ministry of Health and the Central Organization of Statistics & Information Technology. We are conducting a
survey on the health of families, women and children.

http://www.emro.who.int/iraq/ifhs.htm

It seems that the imputation was done so that the ratios Baghdad/(3 ref governorates) and Anbar/(3 ref governorates) match the IBC ratios

sounds like this was the way. did anyone check with some numbers?

Les Roberts: "There are reasons to suspect that the NEJM data had an under-reporting of violent deaths."

Indeed there are.

It is interesting to compare the adjusted post-invasion violence-related death rate for Iraq found by IFHS ( NEJM) -- 1.67 per 100,000 -- with the homicide rate for countries throughout the world.

As you can see from the table in the linked-to site, the adjusted post-invasion violence-related death rate for Iraq (1.67) is comparable to the homicide rates of Australia (1.8 per 100,000), Belgium (1.7), Canada (1.7) and Kuwait (1.7), just slightly higher than the Netherlands (1.2), and actually lower than places like New Zealand (2.0), Switzerland (2.7) and Finland (2.9).

I bet Tim Lambert and Jeff Harvey did not know they were living in places as dangerous as Iraq!

Be careful, Tim and Jeff, next time you walk out the door!

PS: everyone knows Iraq is much less dangerous than the US, with its 9.4 per 100,000 homicide rate -- which is why I never leave my house (thank goodness for Peapod grocery delivery is all I can say)

Note: the Iraqi violence-related death rate actually includes causes of death in addition to just homicide, so if one were to remove the deaths due to such causes, re-calculate the death rate and then do the comparison, it would reduce the violence-related death rate for Iraq, actually making it lower than the homicide rate for most of those countries that I listed above as comparable.

Err, JB. The NEJM finds it to be 1.67 per 1,000. That's 167 per 100,000. Which is more than the homicide rate in any other country.

Sod - many thanks.

That greeting would inspire terror in a great number of Iraqis.

Notice how the main unsurveyed areas - Anbar & Ninevah provinces & Karkh (West Baghdad) - are all Sunni majorities?

That's unsurveyed by a Health Ministry controlled by Shia loyalists of cleric Moqtada al-Sadr; whose Deputy Minister, Hakim al-Zamili, was arrested for murder and kidnap; and whose militia was described by U.S. commanders and diplomats in July '06 as constituting "one of the gravest threats to Iraq's security".

How come everyone suddenly forgets all that, and is happy that such a ministry gets to do the "final checks and editing"? (IFHS p12)

"David Kane says ...
Third, just because something is not mentioned in this paper does not mean that the IFHS authors don't have an opinion on the topic. You think that this was their last word? Think again.
What will you say if the IFHS authors (or someone else) report asking a sample of Iraqi families for death certificates for some 2005 deaths and can't find them?
The IFHS authors made it very clear that they think the L2 estimates are very wrong. They will have much more to say. Stay tuned.

dhogaza replies--
This is starting to sound more and more like a coordinated attack ...
Either that or pure bluster on Kane's part, since he's clearly hinting that he believes the IFHS team will be supporting his claims of fraud. I'm reaching a bit, but I think that's a very reasonable interpretation of what Kane's written above.

It sounds that way to me too. Very ugly.

By Donald Johnson (not verified) on 13 Jan 2008 #permalink

Why would someone seriously interested in determining the violent death rate in Iraq in the middle of a civil war have survey teams telling people they work for the government? A government which is infiltrated by death squads and runs torture centers.

I wonder what Lancet skeptics would have said if Saddam's government had done a similar survey at the end of 1991 and found that the number of violent deaths was lower than that found by a group from outside Iraq.

By Donald Johnson (not verified) on 13 Jan 2008 #permalink

Donald Johnson,

You are a sensible guy. What is "very ugly" about this? I had nothing to do with IFHS. I haven't met any of the members. I have exchanged a few e-mails with M. Ali inviting him to present at JSM in August. There is no "coordination" going on. (Not that there would be anything wrong if there were.)

I am just trying to predict what is going to happen, given the amount of data that IFHS has collected and the manner in which their paper is written. Feel free not to believe me. I don't care. Recall what I wrote last month:

Where is the debate going? I sometimes worry that, like so many other left/right disputes, this will never be resolved, that we will never be sure whether or not the Lancet articles were fraudulent. Will these estimates be the Chambers/Hiss debate of the 21st century? I hope not. Fortunately, other scientists are hard at work on the topic, reanalyzing the data produced in L2 and conducting new surveys. Both critics and supporters of the Lancet results should be prepared to update their estimates in the face of this new evidence. If independent scientists publish results that are similar to those of the Lancet authors, then I will recant my criticism. Will Lancet supporters like Lambert and Davies do the same when the results go against their beliefs? I have my doubts.

If another and then another and then another peer-reviewed article insists/argues/shows that the L2 results are too high, will your opinion change?

Elsewhere, David Kane claims that "there are lot of very serious folks who are now gunning for L2." If these folks' marksmanship is on a par with David's grasp of statistics I don't think the Burnham-Roberts gang have much to worry about, but if the Iraqi MoH guys show up with electric drills it might be wise to head for the hills.

Tim,

Since you devote ample space to Lancet-bashers, would you not consider devoting a post to the NEJM-bashing of Pierre Sprey? I suspect he's a nutcase, but I'm sure he is more numerate than Seixon or David Kane and you've given plenty of attention to them.

By Kevin Donoghue (not verified) on 13 Jan 2008 #permalink

"If another and then another and then another peer-reviewed article insists/argues/shows that the L2 results are too high, will your opinion change?"

That's a reasonable question, so I'll answer it.

Yes, it would, if I was convinced the people running the studies did a competent job and there were not methodological problems. I'd have to rely on the word of experts for that. I'm not in the L2 camp in the first place, but a fencesitter. (I have been in the L1 camp, because an IBC undercount of a factor of 3-4 seemed easy to believe. Also, there've been claims of Iraqi groups doing bodycounts, one in 2003 and one in 2005, and their numbers of 37,000 and 128,000, if true, meant L1 was right on the money. L2's estimate of 600,000 violent deaths was a huge surprise to me.)

I don't know what to make of this latest paper. Why would someone trying to determine the violent death rate in Iraq have the surveyors tell the interviewees they worked for the Iraq Ministry of Health? The government is not some neutral party respected by all in Iraq.

I've been wanting people to do surveys in Iraq, but they've got to be seen as coming from a neutral party, and they've got to be able to survey the refugees and the most violent areas, or it's not going to settle the issue. Well, it might put a floor on the lowest plausible number of deaths, which is what I take this latest survey to have done.

By Donald Johnson (not verified) on 13 Jan 2008 #permalink

Of course one reason the survey teams would tell the interviewees that they worked for the Ministry of Health is because it was true, but it's not what I'd call a confidence-building measure in the Sunni areas, and maybe not in all the Shiite areas either. Look at how paranoid you rightwingers get at the mere mention of the name of Soros, and Soros doesn't even run any death squads or torture centers, AFAIK.

By Donald Johnson (not verified) on 13 Jan 2008 #permalink

Kane babbles:

If another and then another and then another peer-reviewed article insists/argues/shows that the L2 results are too high, will your opinion change?

But which studies are these, David? Orb put the death toll higher, and this study's total excess deaths fall within L2's confidence intervals. It also confirms L1. So which studies are showing that L2 is too high?

Well, this study argues (or shows, depending on your point of view) that L2 is too high. Has there been another peer-reviewed article on the topic? Not that I know of. The Munro article mentions one. I know of another, mentioned by the discussant at Roberts's presentation last August.

Once there are three peer-reviewed articles (all by different authors) which argue that the L2 violent death estimates are very wrong (and no peer-reviewed articles arguing otherwise), would your faith in L2 be shaken?

By the way, if you think that this study "confirms" the violent death estimate from L1, then you really aren't paying attention.

By David Kane (not verified) on 13 Jan 2008 #permalink

David, the IFHS study confirms the L2 excess deaths. It disputes the size of the sub-category of "violent deaths", but didn't collect death certificates (whereas L2 did), so has a greater risk of misattribution.

Your little rant about L1 relies, as others have observed, on including a data point which the authors explicitly stated that they had excluded. A data point whose inclusion you have elsewhere argued should lower the death rate towards the null, yet here strangely increases it.

Even if the authors hadn't excluded it, the comparison would still not be valid, since the IFHS didn't sample the most violent areas of Iraq (as Sortition has shown above, their method may even be equivalent to excluding Fallujah).

So yeah, I haven't been paying attention to your wild attempts to compare apples with elephants. And I don't think you were paying close attention when you wrote them.

SG is lying yet again. The IFHS study makes no estimate even of excess deaths from violence, let alone of excess deaths from all causes. The IFHS estimate of total violent deaths is smaller than the L2 estimate of excess violent deaths by a factor of four.

Jason, you do understand what a crude mortality rate is don't you? You do understand it can only be calculated by calculating a number of deaths, and a sample size in which they occurred, don't you?

Further, you are aware aren't you, that the figure of 400,000 calculated for excess deaths comes from the same table as the figure of 150,000 for violent deaths? Using numbers calculated by the same method?

I get the impression you think that the rates themselves are the raw data, but really, they're not.

SG,

Your made-up number of 400,000 doesn't come from any table, and your made-up calculation doesn't come from any procedure described in the study. You have invented them out of whole cloth. The authors of the IFHS study give no estimate of total excess deaths and only vaguely describe the complex methodology they used to produce their estimate of 151,000 violent deaths. Your attribution of an estimate of 400,000 total excess deaths to the IFHS study is a lie. And your claim that this fabricated, falsely-attributed number agrees with the total excess deaths estimate from Lancet 2 (655,000) is another lie.

Completely wrong, Jason. The figure for 151,000 comes from multiplying the rate (1.09) by the population of Iraq and the study period.

The rate comes from "the complex methodology they used to estimate the 151,000 violent deaths".

You seem to be under the impression that the rates given in table 4 are not calculated from the "complex methodology". (By Table 4, btw, I am referring to the table in the supplementary appendix, not the main NEJM paper.)

A question for the statistically literate: if you have two studies of the same phenomenon which produce results with overlapping 95% confidence ranges, is it reasonable to infer that the correct result is likely to be somewhere in the overlap?

By Ian Gould (not verified) on 13 Jan 2008 #permalink

"I thank sod for reminding me that L1 asked for death certificates as well. I had forgotten that! (Of course, L1 was not as successful, if memory serves, in finding death certificates as L2.) I do not know why, if death certificate information is easily available in Iraqi households, the IFHS team would not ask for it. An excellent question. Why do you think they didn't ask? I am honestly curious."

Well for one thing, Roberts et al are the worldwide preeminent experts in conducting mortality surveys in conflict zones and were conducting a survey focusing specifically on measuring excess mortality whereas the IFHS were conducting a much broader survey covering a whole range of topics into which they shoe-horned a couple of questions about mortality.

By Ian Gould (not verified) on 13 Jan 2008 #permalink

David Kane, in his usual weasel-worded approach to "truth", claims:

I am just trying predict what is going to happen, given the amount of data that IFHS has collected and the manner in which their paper is written. Feel free to not believe. I don't care.

However earlier he said ...

Just because something is not mentioned in this paper does not mean that the IFHS authors don't have an opinion on the topic. You think that this was their last word? Think again.

This is not a "prediction". This is a definitive statement. This is NOT their last word. They DO have an opinion on the topic. Kane doesn't tell us how he knows that, but he does definitively state that he does.

Likewise:

The IFHS authors made it very clear that they think the L2 estimates are very wrong. They will have much more to say. Stay tuned.

Again, not a "prediction", but a definitive statement of definite knowledge (stated redundantly by yours truly).

So, David, when were you lying? In the original post when you claimed definite knowledge of the authors' thoughts?

Or in your last post, when you claimed that you were just making a "prediction" and really don't know?

Kane is one of those people who lie so frequently it appears to be habitual, and so habitual that he can't keep his stories straight.

And, no, we can't simply call this a "misunderstanding", either.

There's nothing to be misunderstood in the statements you made.

If you don't want us to read your words literally, there's a simple solution - STFU and crawl back into whatever damp, stinking pile of crap you call home.

[A question for the statistically literate: if you have two studies of the same phenomenon which produce results with overlapping 95% confidence ranges, is it reasonable to infer that the correct result is likely to be somewhere in the overlap? ]

it might not be a bad rule of thumb for a lot of real-world cases, but it's not really valid statistical reasoning so I'd caution against going too far down that route.

The figure for 151,000 comes from multiplying the rate (1.09) by the population of Iraq and the study period.

No, it doesn't. 151,000 deaths / 1.09 deaths per year per 1000 / 3.25 years = population of 42.6 million, which is far too high. The 151,000 figure comes from an adjusted death rate of 1.67 per year per 1000, which accounts for under-reporting. That gives a much more sensible population of 151,000/1.67/3.25*1000 = 27.8 million. Using that figure with the rates in table 4 gives: 27.8 million people * 6.01 deaths per year per 1000 * 3.25 years = 540,000 deaths. But Lancet 2 measured excess deaths, so we need to subtract the pre-invasion death rate of 3.17, for an estimate of around 250,000 excess deaths, before accounting for under-reporting.

Finally, the paper estimates completeness of reporting of deaths at 62% (compare to 1.09/1.67 = 65%), which gives an adjusted total of around 410,000 excess deaths, consistent with the lower end of the L2 range. Crude, yes. 'Made-up,' no.
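MartinM's chain of arithmetic can be reproduced directly. The inputs below are the figures quoted in the comment (151,000; 1.67; 6.01; 3.17; 62%), and this is of course the same crude back-of-envelope calculation, not the IFHS authors' actual method:

```python
# Reproduce the back-of-envelope chain in the comment above.
# All inputs are the figures quoted in the thread, not fresh data.
deaths_violent = 151_000  # IFHS adjusted violent-death estimate
rate_adjusted = 1.67      # violent deaths per 1000 person-years, adjusted
years = 3.25              # March 2003 through June 2006

population = deaths_violent / rate_adjusted / years * 1000
print(f"implied population: {population / 1e6:.1f} million")  # ~27.8

rate_total_post = 6.01    # post-invasion crude rate (supplementary table 4)
rate_pre = 3.17           # pre-invasion crude rate
total_deaths = population * rate_total_post / 1000 * years
excess_deaths = population * (rate_total_post - rate_pre) / 1000 * years
print(f"total post-invasion deaths: {total_deaths:,.0f}")    # ~543,000
print(f"excess deaths (unadjusted): {excess_deaths:,.0f}")   # ~257,000

completeness = 0.62       # reported completeness of death reporting
adjusted_excess = excess_deaths / completeness
print(f"excess, adjusted: {adjusted_excess:,.0f}")           # ~414,000
```

The last figure is the "around 410,000" in the comment; the hidden assumption, as noted, is that under-reporting is comparable pre- and post-invasion.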

Thank you Martin, my mistake. The original 410,000 figure was taken by scaling up the 150,000 on the assumption of similar underreporting. 1.09 is the unscaled figure, but taking 1.09 as representative of 151,000 deaths has the same result. It hinges, of course, on the assumption of equal rates of underreporting of violent and non-violent deaths.

It hinges, of course, on the assumption of equal rates of underreporting of violent and non-violent deaths.

I think it's a reasonable one. The underreporting issues they were correcting for didn't have anything to do with "possible fear of labeling a death as due to violent causes rather than non-violent causes". Just pure underreporting.

This was a long, general health questionnaire, with "any deaths?" just being one out of a long list of questions, and "cause of death" being keyed after a positive response to the "any deaths?" question ...

""any deaths?" just being one out of a long list of questions, and "cause of death" being keyed after a positive response to the "any deaths?" question .."

I would suspect some people who had lost family members to violence would start lying at that point, especially if they feared the next question might be about who killed their family member and why. They knew this survey team was working for the government.

By Donald Johnson (not verified) on 14 Jan 2008 #permalink

It hinges, of course, on the assumption of equal rates of underreporting of violent and non-violent deaths.

Well, the 62% completeness figure is for all deaths, so if we accept the 151,000 figure for violent deaths, we should also accept the 540,000 figure for total deaths. But to get from that to excess deaths, we have to assume comparable under-reporting pre- and post-invasion; IMHO, that's the weak point, not the violent/non-violent comparison.

On the other hand, 62% completeness pre-invasion yields a corrected death rate of 5.11, which is comfortably within the L2 range. And even if the reported pre-invasion rate is too low by a factor of 2, that's still more than a quarter million excess deaths.

Another issue that seems problematic is the adjustment for under reporting. The abstract says:

> After adjustment for missing clusters, the overall rate of death per 1000 person-years was 5.31 (95% confidence interval [CI], 4.89 to 5.77); the estimated rate of violence-related death was 1.09 (95% CI, 0.81 to 1.50). When underreporting was taken into account, the rate of violence-related death was estimated to be 1.67 (95% uncertainty range, 1.24 to 2.30).

The post adjustment estimate (1.67) and CI endpoints (1.24, 2.30) are simply the pre-adjustment values times 1.53. This corresponds to assuming that the reporting rate is known and equal to 65%.

But in the section "statistical methods", the authors claim that they accounted for an uncertainty in the reporting rate:

> We assumed that the level of underreporting was 35% (95% uncertainty range, 20 to 50), and its uncertainty was normally distributed. (p.487)

This does not seem to fit with the calculation in the abstract. Such a wide range of possible under-reporting rates should have resulted in a much wider post-adjustment interval.
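Sortition's point can be checked with a quick Monte Carlo sketch. The normal model for the underreporting fraction is the one stated in the paper's methods section; the lognormal approximation of the sampling CI (0.81, 1.50) is my own assumption, used only to illustrate how much wider a fully propagated interval would be than a simple x1.53 rescaling:

```python
import math
import random

random.seed(0)
N = 200_000

# Sampling uncertainty: approximate the reported rate's 95% CI
# (0.81, 1.50) around 1.09 with a lognormal -- my assumption,
# not the IFHS authors' model.
sd_log = math.log(1.50 / 0.81) / (2 * 1.96)

# Underreporting fraction: Normal(0.35, sd) with 95% range 0.20-0.50,
# as stated in the paper's methods section.
sd_u = 0.15 / 1.96

draws = []
for _ in range(N):
    rate = 1.09 * math.exp(random.gauss(0.0, sd_log))
    u = min(max(random.gauss(0.35, sd_u), 0.0), 0.9)  # clamp to sane range
    draws.append(rate / (1 - u))

draws.sort()
lo, hi = draws[int(N * 0.025)], draws[int(N * 0.975)]
print(f"combined 95% range: ({lo:.2f}, {hi:.2f})")
# The published range (1.24, 2.30) is just (0.81, 1.50) x 1.53;
# propagating both sources of uncertainty gives a wider interval.
```

Under these assumptions the simulated interval extends below 1.24 and above 2.30, which is Sortition's objection: the published range behaves as if the reporting rate were known exactly.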

> [A question for the statistically literate: if you have two studies of the same phenomenon which produce results with overlapping 95% confidence ranges, is it reasonable to infer that the correct result is likely to be somewhere in the overlap? ]

> it might not be a bad rule of thumb for a lot of real-world cases, but it's not really valid statistical reasoning so I'd caution against going too far down that route.

Actually, the intersection of two 95% confidence regions is at least a 90% confidence region (by the union bound). Thus, the overlap of the two intervals should contain the true value of the parameter in at least 90% of cases.
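For two studies, this follows from the union bound: the probability that both 95% intervals cover the truth is at least 1 - 0.05 - 0.05 = 0.90 (exactly 0.95^2 if the studies are independent), and whenever both cover, the true value lies in the overlap. A toy simulation with known-variance normal intervals illustrates it:

```python
import random

random.seed(1)
TRUE_MEAN, SD, N, TRIALS = 10.0, 2.0, 30, 20_000
Z = 1.96  # two-sided 95% normal quantile
both_cover = 0

for _ in range(TRIALS):
    covered = []
    for _study in range(2):  # two independent studies of the same mean
        sample = [random.gauss(TRUE_MEAN, SD) for _ in range(N)]
        m = sum(sample) / N
        half = Z * SD / N ** 0.5  # known-sd interval, for simplicity
        covered.append(m - half <= TRUE_MEAN <= m + half)
    if all(covered):  # true value lies in the overlap of both intervals
        both_cover += 1

frac = both_cover / TRIALS
print(f"both intervals cover: {frac:.3f}")
# approximately 0.95**2 = 0.9025 here; the union bound guarantees >= 0.90
```

Note this is a coverage statement about the procedure, not a probability statement about any one realized pair of intervals, which is dsquared's caution above.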

MartinM,

You cannot assume equal underreporting rates pre-invasion and post-invasion. The pre-invasion underreporting rate is likely to be higher because the deaths occurred longer in the past. You also cannot assume equal underreporting rates by cause of death. You also cannot assume a fixed population, either in total size or age/sex/geographic distribution across the reporting period.

Since you don't have access to the data, assumptions and methods the IFHS authors used to calculate their estimate of 151,000 post-invasion violent deaths, you're not in a position to calculate a corresponding IFHS estimate for total post-invasion deaths, let alone excess deaths. Your 400,000 "estimate" isn't just crude, it's worthless.

The only counts in IFHS and L2 that can be meaningfully compared are the IFHS violent death count and the L2 excess violent death count, and those numbers differ by a factor of four. This enormous discrepancy is further evidence that L2 is very seriously flawed and its estimates therefore worthless, as its critics have maintained from the beginning.

[This enormous discrepancy is further evidence that L2 is very seriously flawed]

No it isn't. The fact that you can say this sort of thing, Jason, really eats into your credibility on other points.

Since you don't have access to the data, assumptions and methods the IFHS authors used to calculate their estimate of 151,000 post-invasion violent deaths, you're not in a position to calculate a corresponding IFHS estimate for total post-invasion deaths, let alone excess deaths. Your 400,000 "estimate" isn't just crude, it's worthless.

Jason, without a SINGLE assumption, just by using the numbers from the paper, you end up with an excess death estimate of 316,000.

http://tinyurl.com/34s8tr

your argument would get stronger, if you would try to contradict points being made, and not just repeat the same nonsense.

The only counts in IFHS and L2 that can be meaningfully compared are the IFHS violent death count and the L2 excess violent death count, and those numbers differ by a factor of four.

you have not provided a SINGLE reason, why the death rates are incomparable. they are there, real numbers in the paper.
they show an increase: the mortality rate in Iraq DOUBLED.

no assumptions, no calculations, no guess work. simple fact.

Jason, without a SINGLE assumption, just by using the numbers from the paper, you end up with an excess death rate of 316000.

Nonsense. You cannot do any kind of calculation of death counts using the numbers from the paper without making assumptions. It's all just an exercise in speculation and guesswork.

Jason's right, sod. You're assuming that addition is commutative, for example.

By Kevin Donoghue (not verified) on 14 Jan 2008 #permalink

dsquared,

What is the probability distribution across the 95% CI for total excess deaths in L2?

What is the probability distribution across the 95% CI for total excess deaths in L2?

Normally the person claiming a paper is trash is supposed to do the work showing so themselves.

And, BTW, dsquared is one of those here who took David Kane to statistics school 101 a few months ago.

So be sure you do a good job, OK? You wouldn't want to look any more foolish than you already do, right?

I heart your self-esteem and would hate to see it suffer.

dhogaza,

Normally the person claiming a paper is trash is supposed to do the work showing so themselves.

Your capacity for non sequitur is nothing short of astonishing.

> What is the probability distribution across the 95% CI for total excess deaths in L2?

This question reflects a fundamental misunderstanding of the statistical analysis. In a frequentist framework (which is the standard methodology of statistical analysis, used both in the Lancet studies and the IFHS study) parameters are unknown but not random, and thus there is no "probability distribution for total excess deaths in L2".

If there is no probability distribution within the 95% CI, how can there be a "best" or "most likely" estimate?

You cannot assume equal underreporting rates pre-invasion and post-invasion.

Didn't I just say that?

The pre-invasion underreporting rate is likely to be higher because the deaths occurred longer in the past.

That's one possibility. Another is that respondents were more likely to lie to representatives of the new regime about deaths under said new regime.

You also cannot assume equal underreporting rates by cause of death.

I don't have to; the figure of 62% completeness is for all deaths. It's also almost identical to the figure of 65% used for violent deaths.

You also cannot assume a fixed population, either in total size or age/sex/geographic distribution across the reporting period.

I can and I have. Given that the reported population, adjusted for migration, rises from 26m in 2003 to just under 29m in 2006, any effect there is going to be minor.

Since you don't have access to the data, assumptions and methods the IFHS authors used to calculate their estimate of 151,000 post-invasion violent deaths, you're not in a position to calculate a corresponding IFHS estimate for total post-invasion deaths, let alone excess deaths. Your 400,000 "estimate" isn't just crude, it's worthless.

And yet using the same crude assumptions that went into the 400k estimate, I can estimate the violent death rate. Let's see how that works out.

First, take the population data from table 2, and average it, weighting for incomplete years 03 and 06:

(26388081*.75 + 26769584 + 27597117 + 28514649*.5)/3.25 = 27.2 million

Then 27.2 million people * 1.09 violent deaths per 1000 per year * 3.25 years / 0.62 completeness = 155,000 violent deaths. Alternatively, 0.65 completeness yields 148,000 violent deaths.

Either way, we're within 3% of the study's actual point estimate; not bad for a worthless methodology.

Bugger. Two multiplication signs were swallowed somehow; first line of maths should read:

((26388081 * .75) + 26769584 + 27597117 + (28514649 * .5))/3.25 = 27.2 million
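MartinM's weighted average and the resulting counts check out numerically. The population figures are the ones quoted from table 2 in the comment, and this is the same crude sketch, not the IFHS procedure:

```python
# Person-time-weighted average population over March 2003 - June 2006,
# using the annual population figures quoted from table 2 above.
pops = [(26_388_081, 0.75),  # 2003: last three quarters
        (26_769_584, 1.00),  # 2004
        (27_597_117, 1.00),  # 2005
        (28_514_649, 0.50)]  # 2006: first half
years = sum(w for _, w in pops)  # 3.25
avg_pop = sum(p * w for p, w in pops) / years
print(f"average population: {avg_pop / 1e6:.1f} million")  # ~27.2

rate = 1.09  # reported violent deaths per 1000 person-years
for completeness in (0.62, 0.65):
    deaths = avg_pop * rate / 1000 * years / completeness
    print(f"completeness {completeness}: {deaths:,.0f} violent deaths")
# ~155,000 at 62% and ~148,000 at 65%, within 3% of IFHS's 151,000
```

As the comment says, reproducing the study's own point estimate this closely is a reasonable sanity check that the same crude inputs can't be dismissed out of hand for the total-deaths calculation.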

If there is no probability distribution within the 95% CI, how can there be a "best" or "most likely" estimate?

I think you'll find the answer in the phrase "frequentist framework", and if you don't like it, take it up with the authors of the IFHS study you're so enamored with.

MartinM,

That's one possibility. Another is that respondents were more likely to lie to representatives of the new regime about deaths under said new regime.

What assumptions do the IFHS authors make about the rate of underreporting pre-invasion vs. post-invasion? How do you know? The question is crucial to any calculation of excess deaths.

I can and I have. Given that the reported population, adjusted for migration, rises from 26m in 2003 to just under 29m in 2006, any effect there is going to be minor.

You don't know that. The change in population size you mention is almost 12%. And changes in the age, sex and geographic distribution of the population may have a much larger effect, since the causes of death vary by age, sex and geographic distribution. Unless you know how the authors handle these variables in their simulation, you don't know what effect they would have on an estimate of excess deaths.

And yet using the same crude assumptions that went into the 400k estimate, I can estimate the violent death rate.

Just because your crude assumptions and calculation yields an estimate of total violent deaths close to the estimate of total violent deaths made by the IFHS authors doesn't mean the same thing would happen for excess deaths. Since you don't know how close your methods are to theirs you're not in a position to make any predictions about what their estimate would be. The 400,000 number is yours, not theirs.

dhogaza,

I think you'll find the answer in the phrase "frequentist framework",

I see no answer in the phrase "frequentist framework."

Do you have an answer? No, I didn't think so.

> If there is no probability distribution within the 95% CI, how can there be a "best" or "most likely" estimate?

That is a surprisingly deep question with no easy answer. To the extent that a "best" estimate exists, it would depend on prior beliefs or biases.

It is also interesting to note that one could construct many 95% confidence intervals (or more generally, confidence regions) for the same experiment, so that even the 95% CI at hand should not be seen as set in stone, but a matter of convention. Theoretically speaking, the crucial point is to set your decision procedure (how you are going to act on a specific finding) before you carry out the experiment and stick to it rather than indulge in post-hoc reshaping of procedures.

Jason, feel free to argue that the underreporting is different for violent deaths. Is it higher? You won't like that, it makes the total excess deaths figure more Lancet-like. So it must be that the underreporting of total deaths is lower than for violent deaths.

In which case, you are arguing that a study which did not visit the most violent areas of Iraq, which did not collect death certificates, and which was less likely to record a violent than a non-violent death, can be used to dispute the figures of a study which did visit the most violent parts of Iraq and did collect death certificates.

Feel free to argue that.

Sortition, re the underreporting adjustment: my guess is that the multiplicative factor is a coincidence, though it seems mighty strange. Alternatively maybe their Monte Carlo simulations were designed to identify the "most likely" model, and they just presented the results of that.

That is a surprisingly deep question with no easy answer. To the extent that a "best" estimate exists, it would depend on prior beliefs or biases.

So is 655,000 the "best" or "most likely" point estimate from L2 for total excess deaths, or isn't it? Is it better or more likely than the low end of the CI, or isn't it? What "prior beliefs or biases" does the answer depend on?

> So is 655,000 the "best" or "most likely" point estimate from L2 for total excess deaths, or isn't it? Is it better or more likely than the low end of the CI, or isn't it?

Not in any rigorous or objective sense, without further assumptions or a specification of your criterion for "best". If you set out to make an interval estimate then it is the interval that you should be working with rather than choosing points within the interval. There is a theory of point estimation - but unlike interval estimation, choosing specific points requires specifying distances between points and how costly it is to make mistakes.

> What "prior beliefs or biases" does the answer depend on?

You could work in a Bayesian framework and specify a prior distribution for the unknown parameter - of course, in that case, your prior may differ from mine so we may disagree on any conclusions. Alternatively, you could be working within the point estimation framework above and try to minimize the worst case (minimax) cost of your estimate.

All this may be somewhat frustrating, but that is just how the theory works out. And this is just the half of it: in practice, non-theoretical issues that are hard to quantify (like the unknown rate of under-reporting, or how imputations are done for the missing, dangerous clusters) often overwhelm any uncertainty that is in the quantitative model.

It is important to be aware of all these issues and to be humble about what you know and what you don't know: "O men, he is the wisest, who, like Socrates, knows that his wisdom is in truth worth nothing." Plato, Apology.

Sortition,

Not in any rigorous or objective sense, without making further assumptions or specification of what is your criterion for "best".

So there is no rigorous or objective statistical basis for claiming, on the basis of L2, that the Iraq War caused 655,000 excess deaths through July 2006 rather than only 393,000.

You need to get word out to the defenders of L2, who keep throwing around the 655,000 number as if it is the "best" or "most likely correct" or "most accurate" estimate of excess deaths from that study.

And there is no rigorous or objective statistical basis for claiming, on the basis of IFHS, that the Iraq War caused 151,000 violent deaths through July 2006 rather than only 104,000.

I wonder if dsquared is reading this. He's the same Daniel who wrote this, right? Your statement about no "rigorous or objective sense" for preferring the midpoint estimate of the CI would seem to vindicate Fred Kaplan's "dartboard" criticism of the Lancet studies, which Daniel claims is specious. According to Daniel, not only is there a probability distribution within the 95% CI, but the midpoint is its peak. According to you, this means that Daniel has a "fundamental misunderstanding of the statistical analysis" of the Lancet studies.

MartinM,

What's the low end of the 95% CI for your estimate, using numbers from IFHS, of 400,000 excess deaths?

> You need to get word out to the defenders of L2, who keep throwing around the 655,000 number as if it is the "best" or "most likely correct" or "most accurate" estimate of excess deaths from that study.

I would say that the number 655,000 should be seen as shorthand for the entire interval. If you find "only 393,000" to be very different in its implications than "655,000" then I could understand your complaint. To me both those numbers mean wide-scale criminal carnage. I generally use the phrase "hundreds of thousands dead" instead of using any specific number.

> dsquared

Davies's reply to the dartboard argument does not make any reference to a distribution of the unknown parameter. His argument regarding the fact that zero is outside the interval is exactly along the lines of arguments that I do see as valid: Negative excess deaths seems like a reasonable test for the war as a humanitarian effort. The evidence points against this possibility. Making post-hoc excuses about not knowing if there are a few thousand excess deaths or more than 100,000 excess deaths is intellectually dishonest.

Sortition,

In response to Kaplan's "dartboard" argument against Lancet 1, Daniel (dsquared) writes here:

The confidence interval describes a range of values which are "consistent" with the model. But it doesn't mean that all values within the confidence interval are equally likely, so you can just pick one. In particular, the most likely values are the ones in the centre of a symmetrical confidence interval. The single most likely value is, in fact, the central estimate of 98,000 excess deaths.

So Daniel thinks not only that there is a probability distribution across the CI, but that the midpoint of the CI is the "single most likely value." This contradicts your statements that there is no probability distribution across the CI, that to think there is betrays a "fundamental misunderstanding of the statistical analysis," and that there is no "rigorous or objective sense" in which the midpoint value of the CI is statistically better or more likely than the low-end value.

> re the underreporting adjustment: my guess is that the multiplicative factor is a coincidence, though it seems mighty strange. Alternatively maybe their Monte Carlo simulations were designed to identify the "most likely" model, and they just presented the results of that.

I made some approximate calculations:

If the mortality rate is M and assuming, with the authors, that the underreporting U is distributed N(0.35,(.15 / 2)^2) then the adjusted mortality is M/(1 - U).

Politely ignoring the little matter that (1 - U) may be negative, we get that the variance of the adjusted mortality should be equal to about

Var[M] / 0.65^2 + Var[1/(1 - U)] E[M]^2 =

Var[M] (1.53^2 + 0.18^2 * E[M]^2/Var[M]) =

Var[M] (1.53^2 + 0.18^2 (1.09 / ((1.5-0.81)/ 4))^2) = Var[M] (1.53^2 + 1.13^2).

The authors used 1.53^2 Var[M]. Therefore, their interval should be about (1.53^2 + 1.13^2)^.5 / 1.53 = 1.25 times as long as they have it (i.e., about 25% longer).
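The delta-method arithmetic in the comment above can be sanity-checked with a quick Monte Carlo. This is a sketch under the same assumed models (M normal with mean 1.09 and SD (1.50 - 0.81)/4, U ~ N(0.35, (0.15/2)^2)); it is not the IFHS authors' actual simulation, and it politely ignores the negligible chance of a negative (1 - U) just as the comment does:

```python
import random
import statistics

random.seed(0)

# Assumed inputs, taken from the comment above: violent-death rate M with
# mean 1.09 and SD (1.50 - 0.81) / 4; underreporting fraction U distributed
# N(0.35, (0.15 / 2)**2). The adjusted rate is M / (1 - U).
m_mean, m_sd = 1.09, (1.50 - 0.81) / 4
u_mean, u_sd = 0.35, 0.15 / 2

adjusted = []
for _ in range(200_000):
    m = random.gauss(m_mean, m_sd)
    u = random.gauss(u_mean, u_sd)
    adjusted.append(m / (1 - u))

sd_sim = statistics.stdev(adjusted)

# Candidate 1: scale SD[M] by 1 / (1 - E[U]) only -- the bare 1.53 factor.
sd_scaled = m_sd / (1 - u_mean)

# Candidate 2: the delta-method SD from the comment, which also carries the
# variance contributed by the uncertainty in U itself.
sd_delta = (sd_scaled ** 2 + (u_sd / (1 - u_mean) ** 2 * m_mean) ** 2) ** 0.5

print(sd_sim, sd_scaled, sd_delta)
```

The simulated SD of the adjusted rate lands close to the delta-method value and roughly 1.24 times the bare scaled value, consistent with the "about 25% longer" conclusion.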

> Daniel (dsquared) writes here:

>> The confidence interval describes a range of values which are "consistent" with the model. But it doesn't mean that all values within the confidence interval are equally likely, so you can just pick one. In particular, the most likely values are the ones in the centre of a symmetrical confidence interval. The single most likely value is, in fact, the central estimate of 98,000 excess deaths.

You are right - the argument you quote here is incorrect. His original argument is the correct one.

On this question I'm sympathetic to Jason--confidence intervals seem designed to confuse people. If that's the intention, it works.

I'm about to go to sleep, but I'm pretty sure I've seen Lancet authors themselves treat the CI the way dsquared did in that quote--the central value is the most likely, and the ones at the edges are less so. In fact, in L1 I think I recall a claim that there was a 90 percent chance the excess deaths had to be greater than 45,000 (or something like that).

By Donald Johnson (not verified) on 14 Jan 2008 #permalink

I would say that the number 655,000 should be seen as shorthand for the entire interval. If you find "only 393,000" to be very different in its implications than "655,000" then I could understand your complaint. To me both those numbers mean wide-scale criminal carnage. I generally use the phrase "hundreds of thousands dead" instead of using any specific number.

I do think 393,000 is very different in its implications than 655,000. If a point estimate is to be given, and 655,000 is no more likely than 393,000, then 393,000 will do just as well. News reports, blog posts and other discussions of the study that prominently feature the 655,000 number, rather than using a description of the CI (your "hundreds of thousands") are likely to create a largely false impression regarding the study's findings among their readers.

His argument regarding the fact that zero is outside the interval is exactly along the lines of arguments that I do see as valid: Negative excess deaths seems like a reasonable test for the war as a humanitarian effort. The evidence points against this possibility.

I think that in most cases, including the Iraq War, it is totally unreasonable to expect the humanitarian benefits of a war to be clear during or in the short-term aftermath of major combat operations. World War II caused millions of "excess deaths" and its humanitarian benefits were only clear many years or even decades later. Europe and Japan were devastated and millions died but few people today seem to believe it was wrong for the Allies to fight it.

On this question I'm sympathetic to Jason--confidence intervals seem designed to confuse people. If that's the intention, it works.
I'm about to go to sleep, but I'm pretty sure I've seen Lancet authors themselves treat the CI the way dsquared did in that quote--the central value is the most likely, and the ones at the edges are less so. In fact, in L1 I think I recall a claim that there was a 90 percent chance the excess deaths had to be greater than 45,000 (or something like that).

Yes, Daniel quotes them to that effect further down in the same post.

So, who is right on this matter, Daniel and the Lancet authors, or Sortition? Maybe someone could provide a link to an authoritative source that would resolve the question.

> confidence intervals seem designed to confuse people

I guess the same can be said about quantum physics.

> Lancet authors themselves treat the CI the way dsquared did in that quote

This would not surprise me.

> I think that in most cases, including the Iraq War, it is totally unreasonable to expect the humanitarian benefits of a war to be clear during or in the short-term aftermath of major combat operations.

Ok - so now we moved beyond the statistical issue. That's a whole different discussion. I disagree completely of course, but I will save this discussion for another occasion.

Sortition,

On the "dart board" issue, you say that dsquared's "original argument is the correct one." That argument is as follows:

"This isn't an estimate. It's a dart board." The critique here, from Slate, is that the 95% confidence interval for the estimate of excess deaths (8,000 to 200,000) is so wide that it's meaningless. It's wrong. Although there are a lot of numbers between 8,000 and 200,000, one of the ones that isn't is a little number called zero. That's quite startling. One might have hoped that there was at least some chance that the Iraq war might have had a positive effect on death rates in Iraq. But the confidence interval from this piece of work suggests that there would be only a 2.5% chance of getting this sort of result from the sample if the true effect of the invasion had been favourable. A curious basis for a humanitarian intervention; "we must invade, because Saddam is killing thousands of his citizens every year, and we will kill only 8,000 more".

How is this a rebuttal of Kaplan's critique? Daniel claims Kaplan says the study finding is "meaningless." But Kaplan doesn't say that. His complaint is that the study is not useful, not that it has no meaning. He likens it to a poll predicting that Bush will win something between 4 and 96 per cent of the vote in an upcoming election. Such a poll isn't "meaningless," it just has very little value. Ditto for a study whose "estimate" of excess deaths ranges from 8,000 to 194,000. Yes, that range does at least exclude very small or negative values for excess deaths. Just as the Bush poll excludes very small numbers of votes. But if that's all it tells you, it has virtually no practical value. That's Kaplan's point.

"If another and then another and then another peer-reviewed article insists/argues/shows that the L2 results are too high, will your opinion change?"

Conversely, if multiple credible reports validate L2 will you have the decency to admit it and retract your allegations of fraud?

Furthermore, if Lancet 2 is proven to be wrong, that would still not justify in any way the baseless and vicious claims of deliberate fraud directed at its authors.

Science progresses through the falsification of hypotheses.

By Ian Gould (not verified) on 14 Jan 2008 #permalink

"So there is no rigorous or objective statistical basis for claiming, on the basis of L2, that the Iraq War caused 655,000 excess deaths through July 2006 rather than only 393,000.

You need to get word out to the defenders of L2, who keep throwing around the 655,000 number as if it is the "best" or "most likely correct" or "most accurate" estimate of excess deaths from that study.

And there is no rigorous or objective statistical basis for claiming, on the basis of IFHS, that the Iraq War caused 151,000 violent deaths through July 2006 rather than only 104,000."

Correct, now tell us the rigorous or objective statistical basis for rejecting the upper bound of the 95% CI in both cases.

By Ian Gould (not verified) on 14 Jan 2008 #permalink

"I think that in most cases, including the Iraq War, it is totally unreasonable to expect the humanitarian benefits of a war to be clear during or in the short-term aftermath of major combat operations. World War II caused millions of "excess deaths" and its humanitarian benefits were only clear many years or even decades later. Europe and Japan were devastated and millions died but few people today seem to believe it was wrong for the Allies to fight it."

"Who remembers the Armenians now?" - Adolf Hitler.

By Ian Gould (not verified) on 14 Jan 2008 #permalink

Jason, as I understand it (and I recommend you listen to Sortition's opinion, not mine), there is no distribution of values of our estimate. Rather, we have calculated a number, we make some assumptions about the process by which our observations were created, and on that basis we can say that given the particular numbers we observed, if our assumptions are true, there is a certain range of values around our observation within which the real value is most likely to lie.

The probability that the true value lies within a smaller range will always be lower than the probability that it will lie in a wider range. This is the reasoning behind different levels of confidence. Exactly how the interval size changes with our preferred level of confidence depends on the assumptions we make, but it is not the case that there is a higher chance that the real value lies closest to the value observed. I think this is a common misconception about confidence intervals which stems from the fact that we can compare a 90% and 95% confidence interval, and the former is always narrower than the latter.

(Sortition, please correct me if I am wrong!!!)
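The frequentist reading described above can be illustrated with a toy simulation (a made-up normal population with a known true mean, nothing to do with the Iraq data): the "95%" attaches to the procedure, in that roughly 95% of the intervals computed over repeated samples cover the true value, while any single interval either contains it or doesn't.

```python
import random

random.seed(1)

TRUE_MEAN, SD, N = 5.0, 2.0, 100  # toy population, known only because this is a simulation
Z95 = 1.96                        # standard normal critical value for 95%

trials, covered = 2000, 0
for _ in range(trials):
    sample = [random.gauss(TRUE_MEAN, SD) for _ in range(N)]
    sample_mean = sum(sample) / N
    half_width = Z95 * SD / N ** 0.5  # known-sigma interval, for simplicity
    if sample_mean - half_width <= TRUE_MEAN <= sample_mean + half_width:
        covered += 1

coverage = covered / trials
print(coverage)
```

The printed coverage comes out near 0.95: the guarantee is about the long-run behaviour of the interval-building recipe, not about a probability distribution over any one computed interval.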

Kaplan's argument is a false analogy. When examining George Bush's vote, we don't care about the exact value between 0 and 1. What we care about is whether he will get more than his rival. So we divide his predicted vote by his rival's, to get a ratio greater than 1. Then we compare the confidence interval for this ratio with 1. Alternatively we use the difference of votes and compare it to 0. The analogy Kaplan gave is only valid if we are trying to predict that at least one person will vote for Bush, in which case a confidence interval of 4 to 96% is very informative, since it tells us that at least 1 person will. Obviously if our question is "exactly how many people will vote for Bush", then this confidence interval is not informative. But statistics does not usually concern itself with specific values, only with hypotheses about certain important values. The question, for example, "will George Bush get 51% of the vote" is not well-answered by an estimate of his vote that has a confidence interval of 4 to 96. But we shouldn't be trying to answer that question with statistics, and it's not even interesting. That's why these papers all deal in excess deaths, rates and the like - so they can answer important and statistically valid questions.

Yes Jason, your view of WW2 is a bit jaundiced. The benefits for us of fighting the war may have taken years to realise, whatever they were; but there were never any benefits to the Germans or Japanese. Sure, benefits were sold to those nations by their leaders, but none ever materialised. Even the removal of their leaders was not a benefit of the war, since it would have happened anyway, minus the complete destruction of their nations. To compare this situation with WW2 you either have to compare the Iraqis to the allies (but the Iraqis lost) or us to the Nazis (but the Nazis lost, and didn't care about their victims as we claimed to do). The analogy just doesn't work.

Our current war was spruiked and defended on the basis of its benefits, which people clearly claimed were going to include a reduction in deaths (remember Saddam's people-shredding machine?). But now there is no credible estimate of excess deaths which does not show that post-invasion deaths exceed those of Saddam's era, unless we go with the most crazy and unsubstantiated of the claims of his critics. So for the war's supporters it's a case of either 1) accept the evidence and say sorry or 2) deny the evidence.

> [Kaplan's] complaint is that the study is not useful

As I wrote: the evidence points against the hypothesis that the invasion saved lives. This is very useful since it undermines the claim that the invasion is justified on humanitarian grounds. Kaplan's post hoc reframing of the issue is intellectually dishonest. His analogy is deceitful as well. The appropriate analogy is an exit poll saying that a candidate has won between 51% and 60% of the votes - i.e., the range is between a small margin and a landslide, but a majority in either case. Not useful?

It is highly unlikely that it will ever be possible to make a valid humanitarian argument for the Iraq War.

Quite simply consider the opportunity cost associated with the $1 trillion or so the war has cost to date.

Anyone think there weren't better ways to use that money?

By Ian Gould (not verified) on 14 Jan 2008 #permalink

They could have given it to me! I wouldn't have killed anyone with it, and my discretionary spending on cocaine, wine and dancing girls would have had a definite trickle-down effect (pun intentional) on several small economies...

"As I wrote: the evidence points against the hypothesis that the invasion saved lives. This is very useful since it undermines the claim that the invasion is justified on humanitarian grounds. Kaplan's post hoc reframing of the issue is intellectually dishonest"

I'm surprised to find myself on Jason and even Kaplan's side again in a limited way. There's a huge difference between an Iraq War that caused 8,000 excess deaths and one that caused 100,000-200,000. There's an excellent chance the Iraqis really would have been grateful to be relieved of a totalitarian thug in the first case--they might be willing to pay some price in lives for freedom, if that's what they'd end up with. OTOH, if the invasion led to chaos and violence on a scale that equals or exceeds Saddam's worst years, then they have gained nothing.

On CI's, I'm not qualified to say much about the subject, but I do know that Bayesians and frequentists have had some rather heated arguments about them. Here's E.T. Jaynes (some day maybe I'll get around to reading the book) in "Probability Theory: The Logic of Science"--

"Confidence intervals are always correct as statements about sampling properties of estimators, yet they can be absurd as statements of inference about the values of parameters. For example, the entire confidence interval may lie in a region of the parameter space which we know, by deductive reasoning from the data, to be impossible."

That's from a footnote on page 674. In the text itself he says that "confidence intervals are satisfactory as inferences only in those special cases where they happen to agree with the Bayesian intervals after all." The Bayesian intervals he's talking about are Bayesian posterior probability intervals, and a few sentences earlier he says that the CI you get at a given level will be identical to the Bayesian posterior probability interval at the same level if you use an uninformative prior.

As I say, I haven't read this book and have only a rudimentary knowledge of probability and statistics by the standards of people around here, but it sounds to me like it'd be more useful to have a prior probability distribution for the excess death toll that's nice and broad and includes negative values in case the invasion made things better immediately, and then calculate the Bayesian posterior probability interval, which, if I understand these things (and maybe I don't), means we could talk about the probability of the death toll being such and such. Maybe there's some reason this wasn't done, but to hear Jaynes tell it, one should never bother with frequentist methods except in those cases when they'd agree with Bayesian methods anyway. I can see his point, if the L1 CI of 8000-194,000 means you can't say anything about which value in that range is more likely to be true.
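The agreement Jaynes describes can be checked numerically in the simplest case. The sketch below (toy data, not the Lancet numbers) estimates a normal mean with known sigma: it computes the frequentist 95% CI from the usual formula, then builds the flat-prior posterior on a grid (posterior proportional to the likelihood) and reads off the 2.5% and 97.5% posterior quantiles. The two intervals coincide up to grid resolution:

```python
import math
import random

random.seed(2)

# Toy data: a sample from N(10, sigma^2) with sigma treated as known.
sigma, n = 2.0, 50
data = [random.gauss(10.0, sigma) for _ in range(n)]
xbar = sum(data) / n

# Frequentist 95% CI for the mean (known sigma).
half = 1.96 * sigma / n ** 0.5
freq_lo, freq_hi = xbar - half, xbar + half

# Flat-prior Bayesian posterior, computed numerically on a grid: the
# posterior is proportional to exp(-n * (mu - xbar)^2 / (2 * sigma^2)).
grid = [xbar + (i - 2000) * 0.001 for i in range(4001)]
weights = [math.exp(-n * (mu - xbar) ** 2 / (2 * sigma ** 2)) for mu in grid]
total = sum(weights)

cdf, bayes_lo, bayes_hi = 0.0, None, None
for mu, w in zip(grid, weights):
    cdf += w / total
    if bayes_lo is None and cdf >= 0.025:
        bayes_lo = mu
    if bayes_hi is None and cdf >= 0.975:
        bayes_hi = mu

print(freq_lo, bayes_lo, freq_hi, bayes_hi)
```

With an informative prior, or in less regular models, the two intervals would no longer agree, which is where the frequentist/Bayesian dispute has teeth.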

By Donald Johnson (not verified) on 15 Jan 2008 #permalink

For anyone interested, Jaynes doesn't say much about confidence intervals in the book I cited. He references a paper he wrote in 1976, which is online (pdf file) at

http://bayes.wustl.edu/etj/articles/confidence.pdf

I'm sorta curious to know if the Bayesian/frequentist debate still rages on, but I haven't been curious enough to try and find out.

By donald Johnson (not verified) on 15 Jan 2008 #permalink

I think it does, Donald, but everyone pretends they haven't picked a side... I don't really know much about this stuff beyond interpreting frequentist confidence intervals (which I'm probably wrong about anyway) so I don't bother getting involved...

Thanks for that link, Donald. I've heard much talk of the Frequentist-Bayesian wars and wondered what it's all about. My teachers gave me the impression that it's a lot of fuss about nothing and I've been happy to take their word for it, but one of these days I will read that paper and form a view of my own.

But does anyone seriously contend that Les Roberts is at fault for ignoring this theoretical dispute and adopting a traditional frequentist approach? Is he really supposed to get involved in that debate when his actual field of research literally relates to matters of life and death?

I don't see anything at all wrong with dsquared's statement that "the single most likely value is, in fact, the central estimate of 98,000 excess deaths" - to me all he is saying is that that figure is derived from the maximum likelihood estimates for pre- and post-invasion mortality-rates. Well, that's true isn't it?

Likewise, I don't see how any of this gets Kaplan off the hook. It's one thing to say that the CI is wide, it's quite another to pretend that the lower half is more worthy of consideration than the upper half.

By Kevin Donoghue (not verified) on 15 Jan 2008 #permalink

Donald,

> There's an excellent chance the Iraqis really would have been grateful to be relieved of a totalitarian thug in the first case--they might be willing to pay some price in lives for freedom.

Really? How many American lives would you be willing to pay to be relieved of an elected (and I use that word in a very loose sense) thug?

Kevin,

> I don't see anything at all wrong with dsquared's statement that "the single most likely value is, in fact, the central estimate of 98,000 excess deaths" - to me all he is saying is that that figure is derived from the maximum likelihood estimates for pre- and post-invasion mortality-rates. Well, that's true isn't it?

If "most likely" means to you "equal to the maximum likelihood estimate" then you are a very exceptional person. Most people have some conception of "likely", yet they have never heard the term "maximum likelihood estimate".

You also may be unaware that the maximum likelihood estimator often produces extremely unreasonable results (e.g., when estimating the parameters of a Gaussian mixture). I would not want my conception of "likely" to be tied to such a device.

You're welcome, Kevin. And I'm certainly not criticizing the Lancet authors. Virtually everyone in the field said they used the best practices in that area. I'm aware of the frequentist/Bayesian polemics, but don't know enough to follow them, or know just enough to be confused.

Come to think of it, if memory serves, the curves Roberts derived from the L1 data were probability density curves for the excess mortality with and without Fallujah. I think what he did was run a bootstrap (apologies if I'm botching the terms) over and over again and then plot a histogram of the results; the largest number fell around the 100,000 mark and dropped off from there (if you left Fallujah out). Then the 95 percent CI was just the two endpoints, centered at the mean (or the peak?) at very close to 100,000, which included 95 percent of the runs. If that's the case then maybe the Bayesian/frequentist polemics are irrelevant for L1 and 98,000 can be termed the most likely result.
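The resampling procedure described above can be sketched on invented data. Everything here is hypothetical (the `clusters` values are made up, not the L1 cluster estimates): resample with replacement many times, collect the statistic from each resample, and read a percentile-bootstrap 95% interval off the middle 95% of the runs.

```python
import random

random.seed(3)

# Toy per-cluster rates standing in for survey data; these numbers are
# illustrative only and are not taken from any of the studies discussed.
clusters = [4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 7.2, 3.1, 5.8, 4.4,
            6.5, 5.0, 4.7, 5.9, 3.6, 6.8, 5.3, 4.1, 5.6, 4.8]

def mean(xs):
    return sum(xs) / len(xs)

# Resample with replacement many times and keep each resampled mean;
# sorting the results is what "plotting the histogram" buys us here.
boot = sorted(
    mean(random.choices(clusters, k=len(clusters))) for _ in range(10_000)
)

# Percentile bootstrap: the 95% interval spans the middle 95% of the runs.
lo, hi = boot[249], boot[9749]
print(mean(clusters), lo, hi)
```

Under this reading the bootstrap histogram really does peak near the point estimate and thin out toward the interval's edges, which is the intuition Donald attributes to the L1 curves.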

By Donald Johnson (not verified) on 15 Jan 2008 #permalink

So, I'm still mystified regarding this dispute over confidence intervals. Is there a probability distribution across a CI or isn't there? For L1, can we say 98,000 is more likely to be the correct number of excess deaths than 8,000 or can't we? For L2, can we say 655,000 is more likely to be the correct number than 393,000 or can't we? For IFHS, can we say 151,000 is more likely to be the correct number of violent deaths than 104,000 or can't we?

If there is no authoritative answer to this question, then I guess we're free to pick whichever value we like.

Sortition--

I don't want to defend a hypothetical, since I think it's clear that far more than 8,000 excess deaths occurred in the first 18 months of the invasion. But my impression from the press is that initially many Iraqis were glad to see Saddam gone and might have forgiven the 7,000 civilian deaths that Iraq Body Count claims occurred, if the US had kept order in the next few months while Iraqis set up a government and then left immediately. But the chances of that happening were zero--true humanitarian interventions are pretty rare and this was nothing of the sort. What happened after the invasion phase was a low-level insurgency, plus looting which the US did nothing to stop; the US then replied with brutal, ham-fisted tactics, as even the mainstream press acknowledges, and any theoretical chance of the war going well ended very quickly. And the US intent seems to have been to set up someone like Chalabi.

By Donald Johnson (not verified) on 15 Jan 2008 #permalink

The figure of 7,000 deaths referred to above was IBC's count for civilian dead in the invasion phase.

By Donald Johnson (not verified) on 15 Jan 2008 #permalink

Sortition,

As I wrote: the evidence points against the hypothesis that the invasion saved lives. This is very useful since it undermines the claim that the invasion is justified on humanitarian grounds.

I just don't know why you think that is useful. Was anyone who favored the war seriously arguing that it was likely to reduce the loss of life during and immediately after major combat operations, as compared to the period immediately before the invasion? I don't remember seeing any such claim. I certainly wouldn't have made it. The humanitarian argument, at least as I would make it, rests on consideration of much longer timescales. Saddam had a 20-year record of mass murder and aggression. He had invaded two neighboring countries, causing two major wars. Our 13-year-long sanctions policy had caused the deaths of hundreds of thousands of Iraqis. If we had continued that policy indefinitely into the future, hundreds of thousands more would likely have died. If we had removed the sanctions Saddam would likely have rearmed and continued his record of death and destruction indefinitely into the future. That's the argument.

SG,

Yes Jason, your view of WW2 is a bit jaundiced. The benefits for us of fighting the war may have taken years to realise, whatever they were; but there were never any benefits to the Germans or Japanese.

Really? Liberal democracy, and 50 years of peace and prosperity for both nations. That's not a benefit?

Anyway, Sortition's right. This shouldn't become a political debate, so I'll say no more on this.

>> This is very useful since it undermines the claim that the invasion is justified on humanitarian grounds.

> I just don't know why you think that is useful. Was anyone who favored the war seriously arguing that it was likely to reduce the loss of life during and immediately after major combat operations, as compared to the period immediately before the invasion?

I don't doubt that in the absence of the Lancet or similar studies, we would be having exactly this kind of argument. People would argue (just like you do) that Saddam killed many people over the years, and then would argue that the killing was ongoing, and that it is only because of the invasion that this continuous bloodbath was stopped. Now, in retrospect this seems like a completely untenable position, but things always look different in retrospect.

Of course, the other use of Lancet 1 is that it raised the possibility that the death count was in the tens of thousands, and maybe over 100,000. Before Lancet 1 was released, everybody could pretend that such levels of carnage were unimaginable to any "serious" observer.

> [ ... Saddam evil ... would rearm ... ]

This line of argumentation is sanctimonious nonsense, but, again, I'll pick up these issues on a different occasion.

Sortition: If "most likely" means to you "equal to the maximum likelihood estimate" then you are a very exceptional person.

Rightly or wrongly I shall take that as a compliment; thank you. I don't say that I never use that expression in any other sense, but when a statistician, an economist or a highly numerate stockbroker uses the term without further elaboration when discussing a research paper that's what I take it to mean. Since dsquared is all of the above it seems reasonable to read his words that way.

I'll take your word for it that there are cases where maximum likelihood estimators give unreasonable results but I can't see why a mortality estimate should be such a difficult case - it's a pretty straightforward process, in principle anyway, to estimate the national mortality rate from a sample.

Jason: Is there a probability distribution across a CI or isn't there?

I honestly don't know what you mean by this question. Briefly, the following is the situation as I see it; but take note that Sortition has described me as exceptional, which might just be a polite way of saying I'm hopelessly deluded.

The problem is to get a sensible estimate of an unknown parameter (such as the risk of death in a particular country). The solution is to take a random sample and apply some formula (which may be as simple as calculating a sample mean) to the numbers in the sample. In the jargon, this function of the numbers in the sample is called an estimator. Since chance will determine the numbers in the sample an estimator (being a function of random variables) is itself a random variable. Once you have a sample the actual value the estimator gives you (the estimate) is of course a constant. Like any random variable the estimator has a distribution of some sort - hopefully a well-behaved distribution like a normal distribution or a t-distribution. A CI is, in essence, the researcher's understanding of what the distribution of the estimator looks like. Of course the true distribution of the estimator is unknown, as is the true value of the parameter.
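The point that an estimator is itself a random variable is easy to see in a toy simulation: repeat the "survey" many times and the same formula spits out a different number each time. All the numbers below are made up for illustration; nothing here comes from the actual survey data.

```python
import random

random.seed(1)

# True (in real life, unknown) death rate per 1000 person-years - made up.
TRUE_RATE = 5.0

def draw_sample(n):
    # One hypothetical survey: n noisy cluster-level rates around the truth.
    return [random.gauss(TRUE_RATE, 2.0) for _ in range(n)]

def estimator(sample):
    # The estimator here is simply the sample mean.
    return sum(sample) / len(sample)

# Repeating the survey shows the estimator's own distribution:
# each repetition yields a different estimate, clustered around 5.0.
estimates = [estimator(draw_sample(100)) for _ in range(1000)]
print(min(estimates), max(estimates))
```

Each entry in `estimates` is one realisation of the estimator; the spread of that list is exactly the distribution a CI is trying to summarise.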

I don't know whether that answers your question and I won't be in the least surprised or offended if Sortition tells me it's rubbish - provided he dispenses some free tuition as well of course.

By Kevin Donoghue (not verified) on 15 Jan 2008 #permalink

> Rightly or wrongly I shall take that as a compliment; thank you.

It was meant neither as a compliment nor as an insult - simply a statement of fact. I believe that for most people, the usage of the expression "most likely value" in Davies's post is misleading. You are the exception.

> A CI is, in essence, the researcher's understanding of what the distribution of the estimator looks like.

Your explanation of what an estimator is is correct. A CI is a pair of estimators with the property that the probability that the true, unknown value of the parameter lies in the interval spanned by the pair is high - say 95%. Note that the fact that CIs can be constructed is a small miracle, since the distribution is unknown - the CI condition has to hold across all the probability distributions being considered.
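That coverage property - the interval is the random entity, not the parameter - can be checked directly in a toy simulation. The data model (normal observations, a made-up fixed true mean) is purely illustrative:

```python
import random, math

random.seed(2)

TRUE_MEAN = 10.0  # the fixed, unknown parameter (fixed by us, for the demo)

def ci_95(sample):
    # The CI is a pair of estimators: two functions of the data.
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    half = 1.96 * math.sqrt(var / n)
    return mean - half, mean + half

# Repeat the experiment: the interval jumps around, the parameter stays put.
trials = 2000
hits = 0
for _ in range(trials):
    sample = [random.gauss(TRUE_MEAN, 3.0) for _ in range(50)]
    lo, hi = ci_95(sample)
    if lo <= TRUE_MEAN <= hi:
        hits += 1
print(hits / trials)  # close to 0.95
```

Roughly 95% of the randomly constructed intervals trap the fixed true value, which is all the "95%" in a 95% CI promises.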

"A CI is a pair of estimators with the property that the probability that the true, unknown value of the parameter lies in the interval spanned by the pair is high - say 95%."

If the Lancet authors redid the calculation, but for confidence levels of 50 percent or 90 percent or whatever, would the intervals be centered on the same spot (98,000)? My understanding is that you choose the CI for a given confidence level to be as short as possible, since I suppose there was also a 95 percent chance the true value lay between, say, 20,000 and infinity. (20,000 made up, of course.)

If this is wrong and it's too complicated to explain why, which I suspect it would be, a simple "No" will do and I'll try to find the explanation elsewhere when I can.

By Donald Johnson (not verified) on 15 Jan 2008 #permalink

A CI is a pair of estimators with the property that the probability that the true, unknown value of the parameter lies in the interval spanned by the pair is high - say 95%.

Surely you mean a pair of estimates, not estimators? If so I can't disagree with that, but I think it's fair to say that when commenters on this blog refer to the Lancet 1 ex-Fallujah excess-death CI, for example, we usually mean not just the endpoints (8,000 and 194,000) but the entire bell-shaped bootstrap distribution. The fact that it's higher in the middle is the important point here! That's what I meant by describing the CI as "in essence, the researcher's understanding of what the distribution of the estimator looks like". Maybe there is a case against thinking of a CI in that way; I'm certainly open to correction in that regard.

Incidentally this 70-page PDF file is a good introduction to bootstrap CIs for Jason or anyone else who is interested. It's Chapter 14 of a textbook, so familiarity with the usual guff about t-tests and such is assumed, but given that background it's very readable.
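For anyone who doesn't want to start with the PDF, the basic percentile bootstrap is easy to sketch. The sample below is invented (it is not the survey data), but the mechanics are the standard ones: resample with replacement, recompute the statistic, and read the interval off the resulting bell-shaped distribution.

```python
import random

random.seed(3)

# A made-up sample of 50 cluster-level death rates (illustration only).
sample = [random.gauss(5.0, 2.0) for _ in range(50)]

def bootstrap_ci(data, reps=5000, level=0.95):
    n = len(data)
    means = []
    for _ in range(reps):
        # Resample the data with replacement and recompute the mean.
        resample = [random.choice(data) for _ in range(n)]
        means.append(sum(resample) / n)
    means.sort()
    # The sorted list IS the bootstrap distribution; the CI just
    # marks off its middle 95%.
    alpha = (1 - level) / 2
    return means[int(alpha * reps)], means[int((1 - alpha) * reps) - 1]

lo, hi = bootstrap_ci(sample)
print(round(lo, 2), round(hi, 2))
```

The endpoints are the least interesting part of the output: the whole distribution of resampled means, higher in the middle, is what people have in mind when they talk about the bootstrap CI.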

By Kevin Donoghue (not verified) on 15 Jan 2008 #permalink

It is true that various 95% CIs can be constructed. In most situations, however, the (expected) length of the CI would depend on the true, unknown distribution - and so cannot be minimized (except, maybe, in a minimax sense).

It is also true that a family of nested CIs with levels 0% - 100% can be set up so that they all contain a certain point estimator - say the maximum likelihood estimator. This does not necessarily imply anything special about that estimator.

BTW, while it is true that a CI of the form (LCB,infinity) can be constructed (LCB = lower confidence bound), your sentence
> I suppose there was also a 95 percent chance the true value lay between, say 20,000 and infinity. (20,000 made up, of course.)

is incorrect. Again, the true value is not assumed random - the CI is the random entity. Once the data has been collected and the estimators calculated no randomness remains - the true value is either more or less than 20,000 - no chance is involved.

> Surely you mean a pair of estimates, not estimators?

No - it is estimators. "Estimate" is the value of an estimator at a specific sample point. The CI condition is a condition imposed on a pair of estimators rather than on specific estimates. Very little can be said about specific estimates. In fact, it is easy to construct a CI that for certain observations (but not too many of them) would evaluate to an empty interval.

Kevin,

I honestly don't know what you mean by this question.

I thought it was pretty clear. Can we assign probabilities to the values (and perhaps also ranges of values) within the CI? Can we say that, for instance, the midpoint value is more likely to be correct than the endpoint values? dsquared says yes. Sortition says no.

Applying the question to L2, is the midpoint value, 655,000, more likely to be correct than, say, the low end value, 393,000?

I'd really like to see a link to an authoritative source addressing this question.

Jason: Can we say that, for instance, the midpoint value is more likely to be correct than the endpoint values? dsquared says yes. Sortition says no.

Whereas I play the C.E.M. Joad gambit: it depends what you mean by "more likely". That's a sensible enough expression which we all use, but to translate it into the language of mathematicians is pretty difficult. For example, I've no idea what the average age of the US population is, but what does it mean to say that it is "more likely to be 32 than 34"? We surely can't mean that Prob(A=32) > Prob(A=34) because it isn't a random variable, it's a specific number. Or if you prefer you can think of it as a random variable, but with a degenerate distribution: it takes a specific value with probability = 1 and all other values with probability = 0. (Sortition might gag at that but it's fine by me, I'm no purist.) So it's only true to say that Prob(A=32) > Prob(A=34) in the (unlikely!) event that the US average is indeed precisely 32.

But suppose we decide to take a random sample of 100 Americans and calculate the sample mean, which is say 33.26. We are certainly entitled to say something like this: if the true value of A is over 40, the probability of getting a sample with a mean in the vicinity of 33.26 would be tiny; however if the true value is 34 the probability of getting such a sample would be quite high.
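That comparison can be made concrete: hold the data fixed and ask how probable a sample mean near 33.26 would be under each candidate value of the true mean. The numbers below (sample size 100, an assumed population standard deviation of 15 for ages) are made up to match the example, not taken from any real data.

```python
import math

sample_mean = 33.26   # the observed sample mean (hypothetical)
n = 100
sigma = 15.0          # assumed population sd of ages (made up)
se = sigma / math.sqrt(n)

def density_of_mean(true_mean):
    # Normal density of observing this sample mean if the
    # true average age were true_mean.
    z = (sample_mean - true_mean) / se
    return math.exp(-0.5 * z * z) / (se * math.sqrt(2 * math.pi))

# A sample mean near 33.26 is vastly more probable if the true
# mean is 34 than if it is 40 - that comparison of likelihoods is
# the only sense of "more likely" on offer here.
print(density_of_mean(34) > density_of_mean(40))   # True
print(density_of_mean(34) / density_of_mean(40))   # a very large ratio
```

Nothing in this calculation assigns a probability to the parameter itself; it only ranks parameter values by how well they explain the data, which is exactly the maximum-likelihood way of speaking.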

So over to you: in those circumstances, are you happy to say 34 is "more likely to be correct" than 40? If so you are with dsquared, if not you are with Sortition.

That's the issue as I see it - but to tell the truth my eyes glaze over when statisticians start arguing about the correct way to formulate these questions.

By Kevin Donoghue (not verified) on 15 Jan 2008 #permalink

> So over to you: in those circumstances, are you happy to say 34 is "more likely to be correct" than 40? If so you are with dsquared, if not you are with Sortition.

I would not completely agree with that. I would say that we should decide before doing the sampling what procedure we are going to use for the interpretation. That procedure can be constructed so that it has certain desirable properties. We may, for example, agree to use a certain procedure to calculate a 95% CI and call anything outside that interval "unlikely", and anything within the interval "likely". Then the chance that we will call the true value "unlikely" is no more than 5%.

If you insist, you can agree beforehand to use the data to create a complete ordering of "x more likely y" for all possible parameter values. That's also fine, although what that would give you is not clear - which is exactly the reason it is not usually done. What is not fine is to collect the data first and then start massaging the interpretation to fit with your pre-conceived biases.

I'd really like to see a link to an authoritative source addressing this question.

Statistics are not my strongest point, but the only authoritative source I see on this point is the authors.
I think I remember the Lancet authors making a comment about this. I'll leave the search to you, as most of you "skeptics" would profit big time from reading what they wrote...

I understand that your purpose in asking this question is your hope that, with this information, you can calculate how "unlikely" it is that the true number is in the overlapping interval.
If this is your intention, I'd advise you to skip this work, as we all know this and the number will have little (most likely no) meaning.

Kevin,

Whereas I play the C.E.M. Joad gambit: it depends what you mean by "more likely".

I really don't understand your confusion. I mean it in the same sense that dsquared meant it when he said, of the 95% CI for excess deaths in Lancet 1:

The single most likely value is, in fact, the central estimate of 98,000 excess deaths

Is this assertion true or isn't it? If it is true, what is the statistical basis for it? Is it true for all CIs that "the single most likely value is the central estimate," or does it vary? If it does vary, what does that variation depend on? How do you calculate the "most likely value" within a CI?

Or do you, perhaps, think that dsquared's statement above is neither true nor false, but just meaningless?

Jason, I think Sortition took dsquared to task over that comment at the time (in a later comment), because it is not true. There is a restricted sense, I think, in which it is true, but it is irrelevant. If your model assumptions are correct, and your sample was the most likely sample to occur from the true distribution of the process you are measuring, then the estimate you calculated is the most likely estimate provided you used a maximum likelihood method. This is because the maximum likelihood method gives the most likely value of the estimator given the data, so if the data were the most likely data, then... But we can never know whether the data we observed is the sample with the highest probability of being drawn from the process. (Further I would argue that the process never represents the truth, so such ideas are irrelevant).

(Again, however, listen to Sortition's opinion of this over mine).

I think the confusion about "most likely" values is easily made because, for a fixed interval of data, we can always be more confident that the real value lies within the interval symmetric around the central value than in any other interval. For example, in L1 the central value is 98000. We can be more confident that the true value is between 88000 and 108000 than we can that it is between 4000 and 24000. But this is about confidence, not likelihood, since (as Sortition observed) the true value is where it is, and has 0 probability of being anywhere else.

In fact I think in a sense dsquared's claim as stated there is quite contradictory, since you could construct a 10% confidence interval around 98000, and this would give us a very low confidence that the true value lies very close to dsquared's "most likely" value.

The thing is that we have to make claims about reality. So we assume our model is best (I mean, who wouldn't?) and our sample well-behaved, and then it seems reasonable to claim the value we observed is the "most likely". But this statement comes with so many caveats it's not really worth using.

Well, SG, you certainly cleared that up!

Each time I ask a simple yes/no question or attempt to clarify the meaning of the study findings, I seem to get a lengthy reply full of technical statistical terms and qualifications like "it depends..." or "in a sense..." or "if we assume..." that provides no clear answer to my query.

If the meaning of the "estimates" produced by these studies is so subtle and arcane that it cannot be communicated without this kind of lengthy technical discussion, then I think the studies are worthless for consumption outside the rarefied community of professional statisticians. Certainly, the vast majority of people are not going to see or want to read any such discussion. They're just going to see a headline like "Lancet study estimates 655,000 excess deaths from Iraq War" and interpret that in a plain-language, common-sense way to mean what it says. But, apparently, it doesn't mean that. It doesn't mean anything close to that.

In fact, given this confusion, these studies are worse than useless. They confuse more than they clarify, creating a largely false impression among ordinary people about the consequences of the war.

"But my impression from the press is that initially many Iraqis were glad to see Saddam gone "

I hope that view isn't colored by the infamous fake pictures of Iraqis cheering as a statue of Saddam was pulled down.

You know the ones where US troops first cordoned off the square and kicked out the locals then bussed in a bunch of members of convicted con-man Ahmed Chalabi's political party for the photoshoot.

By Ian Gould (not verified) on 15 Jan 2008 #permalink

"I just don't know why you think that is useful. Was anyone who favord the war seriously arguing that it was likely to reduce the loss of life during and immediately after major combat operations, as compared to the period immediately before the invasion?"

Cheney was talking about a maximum loss of Iraqi lives of 10,000 - including battlefield casualties.

He also implied that the entire occupation would be over in a matter of months.

Hell, official plans were for the majority of US troops to be withdrawn within three months.

By Ian Gould (not verified) on 15 Jan 2008 #permalink

Oh and let's take note that had the Iraq war gone to plan, the whole reason the majority of US troops would have been withdrawn was that they would have been needed for the follow-on invasions of Syria and Iran.

Anyone think those would also have been brief, glorious and, on the US side, nearly bloodless?

The neo-con agenda was war with the whole Arab world and Iran. They happened to get bogged down in Iraq, had that not happened there then almost inevitably then would have gotten bogged down in Syria or Iran or Sudan or Somalia or Saudi Arabia or Yemen or...

By Ian Gould (not verified) on 15 Jan 2008 #permalink

"Our 13-year-long sanctions policy had caused the deaths of hundreds of thousands of Iraqis."

It's wonderful to see left-wing anti-American propaganda recycled by right-wing Americans for their own purposes.

The claim that one hundred thousand Iraqi children died due to sanctions is crap. The fact that Madeline Albright once failed to reject it during a TV interview doesn't make it any less crap.

The claim is based on the argument that if not for sanctions Iraq's child mortality rate would have continued to fall throughout the 90's.

Guess what - infant mortality rates stalled in most of the Arab world in the 1990's due to the fall in oil prices and resultant cuts in health spending. That's before we consider the other possible factors contributing to Iraq's child mortality rate - like environmental pollution from oil-well fires; the destruction of lots of infrastructure during the war; the disruption to health services caused by the Shia and Kurdish uprisings; and the hundreds of thousands of displaced persons resulting from the establishment of the autonomous Kurdish region.

Additionally, the sanctions regime had been significantly reformed prior to the invasion with the oil-for-food deal. On the limited information available to us, it appears Iraqi death rates fell substantially in the post-1998 period. (This is based on comparing death rates from the 1998 WHO study with immediately pre-war death rates reported by L1, L2, ILCS, IFHS, etc.) Despite the corruption involved in oil-for-food, it did result in several billion dollars' worth of food and medicine reaching ordinary Iraqis every year.

We also need to consider whether war/status quo were the only options in 2002. The vast bulk of sanctions-busting involved export of oil by road via Jordan and Turkey - with US knowledge and approval (including Security Council vetoes, by both the Clinton and George W. Bush administrations, of motions to censure Jordan). In the lead-up to war, France proposed putting armed UN sanctions enforcers on the Iraqi border to stop that trade. They also proposed reforming the oil-for-food program so that the Iraqi government would have no role either in issuing export permits or spending the money.

Finally, let's note that the neo-cons running the "sanctions cost more lives than invasion" line were the most vociferous advocates of harsher sanctions during the 90's and amongst the strongest critics of the claims that 100,000 Iraqi children had died as a result of sanctions. Were they lying then or are they lying now?

By Ian Gould (not verified) on 15 Jan 2008 #permalink

"They confuse more than they clarify, creating a largely false impression among ordinary people about the consequences of the war."

Yes, The people are simple and easily confused. They should simply be silent and obey The Leader. Criticism of The Leader only confuses and upsets The People and therefore should not be allowed.

Jason, I did a single first year University statistics course 30-odd years ago and barely passed it, I have no difficulty following most of the discussion of statistics here.

By Ian Gould (not verified) on 15 Jan 2008 #permalink

Ian Gould,

Your claims about the effects of the sanctions and the oil-for-food program are contradicted by various studies and reports. See, for example, the references cited here and here.

""But my impression from the press is that initially many Iraqis were glad to see Saddam gone "
I hope that view isn't colored by the infamous fake pictures of Iraqis cheering as a statue of Saddam was pulled down."

No, it's not. One source is from an antiwar journalist named Aaron Glantz. I read his book a few years ago--in some places it annoyed me (I think he cites the claim that 300-400,000 bodies from Saddam's era had been uncovered in mass graves), but he was actually in Iraq and he says there was much initial good will towards Americans among the Iraqis that was quickly thrown away and I believe him on things he saw for himself. His website is

http://www.aaronglantz.com/thebook.html

I've also read about how there was some initial Iraqi goodwill elsewhere. I read about the fraud of the statue being pulled down at various blogs, as you probably did, and wasn't basing my opinion on that stupid propaganda stunt for the television news.

I partially disagree with you on sanctions--it's probably true they weren't killing children in large numbers at the end, but the sanctions combined with the bombing of infrastructure in the Gulf War were intended to increase the death rate and probably did so at first, from what I've read. There's an article by Barton Gellman in the June 23, 1991 Washington Post where Pentagon targeting planners told him the bombing in the Gulf War was intended to work with the sanctions to provide leverage against Saddam. The leverage would come because the sanctions would prevent repair to the bombed infrastructure.

By Donald Johnson (not verified) on 15 Jan 2008 #permalink

Donald Johnson,

I partially disagree with you on sanctions--it's probably true they weren't killing children in large numbers at the end,

According to this article, Richard Garfield (the same Richard Garfield who was a co-author of Lancet 1, I believe) estimated that, as a result of the sanctions, "the most likely number of excess deaths among children under five years of age from 1990 through March 1998 to be 227,000." Garfield later estimated that by the end of 2000, that number had increased to 350,000. This was several years after the oil-for-food program had gone into effect. By 2002, according to this article, Garfield's low estimate of the death toll was 400,000. And that's just deaths of children. The sanctions undoubtedly also caused the deaths of many adult Iraqis from malnutrition and lack of medicine, especially the sick and elderly.

Dennis Halliday, the first UN Humanitarian Coordinator in Iraq, who was in charge of administering the oil-for-food program, resigned from his position in disgust in October 1998 so that he would be free to criticize the sanctions policy. He wrote: "I don't want to administer a programme that satisfies the definition of genocide." His successor, Hans von Sponeck, lasted two years on the job and also resigned in disgust. In 2001, von Sponeck decried the proposed "Smart Sanctions" reform intended to reduce the loss of life as follows: "What is proposed at this point in fact amounts to a tightening of the rope around the neck of the average Iraqi citizen." And he claimed that the sanctions were causing the death of 150 Iraqi children per day.

"According to this article, Richard Garfield (the same Richard Garfield who was a co-author of Lancet 1, I believe) estimated that, as a result of the sanctions, "the most likely number of excess deaths among children under five years of age from 1990 through March 1998 to be 227,000.""

Well yes but this was obviously a deliberate fraud to further his extreme anti-American far-left pro-Saddam position, as I'm sure David Kane will be glad to aver.

I'd also be interested in seeing how Garfield differentiated the effects of sanctions from all the factors I already cited.

Oh and let's note that all available evidence is that deaths in Iraq are higher than during the sanctions period. So if sanctions were so terrible what does that say about the current situation?

By Ian Gould (not verified) on 15 Jan 2008 #permalink

Jason, your point at 225 is not a counter-factual to the war. An alternative would have been to end the sanctions and welcome Iraq back into the international community, either before or after concluding it had no WMDs (which we now know it didn't). You can't compare the situation in 2000 with the situation after the war because another choice we could have made in 2003 (or 4, or 5) was to simply drop the sanctions. Also bear in mind that the IFHS study puts excess non-violent deaths at probably around 250,000, which is about the rate at which you quote people dying from the sanctions. So there is no real way you can say the Iraqis have benefitted.

As regards the confidence intervals, after all my waffle it's very simple. The estimated number of dead was 655,000. If the sample was not unusual, the methods were good and the assumptions sound, that number is not far from the truth. Since we don't know if the sample was unusual, we can be 95% confident that the true value lies somewhere within the confidence interval. The study authors probably think the sample was close to the most likely data, and so their estimate is close to the most likely value, but they would say that - they're the study authors. We have to be a bit more skeptical, but that's okay because we don't need to confirm that the number was 655,000 vs. 632,000. We need to confirm it wasn't 0. And we are very sure that the number of excess deaths was not 0, i.e., the invasion killed a lot of people and the alternative of not invading was probably better.

Jason: do you, perhaps, think that dsquared's statement above is neither true nor false, but just meaningless?

As I explained above, I understood dsquared to be using "single most likely value" as a synonym for the maximum likelihood estimate. I say his statement is both meaningful and true. Sortition says it's true when interpreted in that way, but lots of people don't know what a MLE is so it is "likely" to mislead them. That is also true, but then Deltoid isn't read by lots of people; it's a science blog and some familiarity with statistics is assumed. As Ian Gould points out you really don't have to have taken advanced courses; even an introductory course should mention MLE.

As to your other questions, they are just the sort of questions that a good introductory course in statistics should cover. You aren't going to find out, for example, what the statistical basis for MLE is, by asking in comment threads. So if you are really interested - and you had better be really interested because statistics is a tough subject - it's clear what your next step should be. I just did a bit of Googling to see if there is anything helpful online, but even the Wikipedia entry assumes you know what is meant by "probability distributions parameterized by an unknown parameter θ (which could be vector-valued)" and I suspect that the θ isn't the only part of that phrase which is Greek to you. So sign up for those night classes.

By Kevin Donoghue (not verified) on 16 Jan 2008 #permalink

On the sanctions -- the reason children were dying in Iraq is the same reason children die when food or money is delivered so many, many other places around the world -- because the local government grabs the food and money and distributes it to the army and buys weapons with it. If Saddam had cut his military budget, he could have fed those children. It just wasn't a priority for him, any more than it was a priority for Mengistu Haile Mariam in Ethiopia or Dear Leader Kim in North Korea.

Barton, there's no need to limit oneself to one cause for the deaths of children in Iraq. If Iraq had been run by Swedish social democrats in 1991, I don't doubt they'd have done a better job lowering mortality than Saddam did. The US wanted leverage over Saddam--they thought civilian suffering would pressure him into being a more compliant and useful dictator, or even that the suffering would lead to his overthrow. It's the same logic some of the insurgents use, apparently.

Here is a relevant link to the question of malign US intent, which in turn contains links to other places, one of them the Washington Post article I mentioned.

http://www.scn.org/ccpi/infrastructure.html

By Donald Johnson (not verified) on 16 Jan 2008 #permalink

If Saddam had cut his military budget, he could have fed those children.

You're aware, I hope, that he never rebuilt the military to the level it was at before GWI?

If we were to cut our military budget, perhaps we could feed all of our children and provide them, and their parents, with universal health care, too.

I'd say that Iraq had more reason to fear invasion than the US does, come to think of it.

Okay, now dhogaza is defending Saddam Hussein. I'd better not say anything bad about the Nazis, or he'll probably think up a defense for them. In fact, I can probably get him to take any position at all just by taking the opposite position. It must be love. (Theme to "Romeo and Juliet" up)

SG,

Jason, your point at 225 is not a counter-factual to the war. An alternative would have been to end the sanctions and welcome Iraq back into the international community, either before or after concluding it had no WMDs (which we now know it didn't).

As I have said, Saddam had a 20-year record of extreme military aggression towards Iraq's neighboring countries, and mass murder and gross human rights abuses of his own people. That's why the sanctions were imposed in the first place. Lifting the sanctions, allowing Saddam to expand his domestic power and rebuild his military forces, would most likely have resulted in a continuation of this appalling record, indefinitely into the future. There is no indication that Saddam would have been overthrown by forces within Iraq if we had not invaded.

Also bear in mind that the IFHS study puts excess non-violent deaths at about probably 250,000, which is about the rate at which you quote people dying from the sanctions. So there is no real way you can say the Iraqis have benefitted.

I do not claim that the war is a clearly superior policy to any possible alternative. I merely claim that there are good reasons to think things could have been even worse if we had not invaded, based on the evidence of the effects of the sanctions and Saddam's record in power. There was no good way of dealing with Saddam. There were just a bunch of bad ones, and the war may be the least-bad of them.

SG,

The estimated number of dead was 655000.

But according to Sortition, there is no "rigorous or objective sense" in which the 655,000 "estimate" is "better" or "more likely" than 393,000. I think the average person is highly unlikely to understand the statement "The estimated number of dead was 655,000" as "The number of dead was estimated with 95% confidence to be somewhere between 393,000 and 943,000." If the findings were presented in that way, I think most people's reaction would be the same as Fred Kaplan's: "That's not an estimate. It's a dart board."

Jason, regarding 233, you may or may not be right (I don't agree with you), but my point is not that you are or aren't right, just that the comparison of the actual situation now with the status quo is not good enough. You also need to present the alternatives, and one of them was relaxing the sanctions and readmitting Iraq into the world community. To assess the consequences of that you need to consider the possibility of Iraq attacking its neighbours, supporting terror, suppressing internal dissent, etc. You could be right about these things being worse than the war, but these arguments are not being had by "policymakers" or the right-wing commentariat, and they cannot be had unless the human cost of the war is properly understood. Which is why the right-wing commentariat's response to the Lancet studies makes it so clear that they don't care for Americans or Iraqis, since they aren't interested in debating either what the US could have done better for Iraqis, or how the US might have achieved better outcomes at lower cost of blood and treasure.

Your point at 234 may be true (I shan't presume to judge others' science skills), but irrelevant. The Lancet papers are scientific inquiry, not propaganda, so it doesn't matter that their real meaning is too complex for mass consumption, and entirely unsurprising that the complexity washes out in the resulting mass media furore.

> I think the average person is highly unlikely to understand the statement "The estimated number of dead was 655,000" as "The number of dead was estimated with 95% confidence to be somewhere between 393,000 and 943,000."

I think that the average person is quite aware that there is uncertainty about any estimate. The only way in which using the number 655,000 is misleading is that by using 655,000 rather than 650,000 the impression is given that the uncertainty is in the order of thousands rather than hundreds of thousands. For this reason using the number 650,000 seems better.

As I said above, I use the phrase "hundreds of thousands", which to my mind is very effective in communicating both the position and the width of the CI.

"As I have said, Saddam had a 20-year record of extreme military aggression towards Iraq's neighboring countries, and mass murder and gross human rights abuses of his own people. That's why the sanctions were imposed in the first place. Lifting the sanctions, allowing Saddam to expand his domestic power and rebuild his military forces,"

Right because the US obviously wouldn't have maintained forces in the gulf to deter any such aggression.

By Ian Gould (not verified) on 16 Jan 2008 #permalink

Jason's entire argument pretty much reduces to:

The US is good.

The US invaded Iraq.

Therefore invading Iraq is good.

By Ian Gould (not verified) on 16 Jan 2008 #permalink

How about a thread where such issues can be sensibly discussed, but any post going on about how wrong the Lancet studies are is deleted?

The post introducing this study is accompanied by Roberts' criticisms of it. So, what you'd be looking for here is an unfavorable comparison of this study to Lancet and god forbid any criticism going in the other direction.

How about a thread just consisting of pictures of arseless chaps instead, if you are looking for something that gay.

Tim
Since you devote ample space to Lancet-bashers, would you not consider devoting a post to the NEJM-bashing of Pierre Sprey? I suspect he's a nutcase, but I'm sure he is more numerate than Seixon or David Kane and you've given plenty of attention to them.

Posted by: Kevin Donoghue | January 13, 2008 6:23 PM

*crickets*

And how would you argue that a survey unsupported by death certificates at all, used to produce an estimate of total mortality unsupported by even a single death certificate, would be as accurate?
Posted by: dhogaza | January 13, 2008 5:30 AM

I wouldn't need to argue this if the only point I was making was that the usage of death certificates wasn't something that added credibility to the Lancet figures, considering how they were used.

Have these anomalies been addressed yet?
http://www.iraqbodycount.org/analysis/beyond/reality-checks/

I'm going to guess "no" considering the number of people I see here saying "the IBC authors make this criticism but I have no opinion about that".

god forbid any criticism going in the other direction.

Right ... censorship on this site is SO extreme. Why, Mr. Lambert has never allowed a SINGLE post by David Kane! Or Tim Curtin! Or Steve McIntyre!

Oh, wait, I've made a boo-boo.

I've referred to that set of criticisms several times, Sans--page 4 is a sort of reductio ad absurdum of the Iraq Body Count case. They point to Iraq Ministry of Health statistics showing 60,000 people treated for wounds in the two year period from mid 2004-mid 2006, when every poll or survey taken implies casualties (dead or wounded) many times larger. This latest IFHS study is another blow to IBC's argument--IFHS found 120 deaths per day, so if IBC's 3/1 wounded to dead ratio is true, that'd be 360 wounded per day, 130,000 per year or 260,000 in the time period when the government statistics show 60,000. Yet IBC argues that the count for wounded should be more complete than the count for the number of dead.

By Donald Johnson (not verified) on 20 Jan 2008 #permalink

I hit post too soon.

I realize that there is a wide CI for the IFHS survey's death rate, but the bulk of that CI plus the 3/1 wounded/dead ratio is clearly incompatible with IBC's argument. Even a 1/1 ratio still puts IFHS at roughly 120,000 wounded in two years if you use the midrange figure from IFHS--if you pick the lowest number from the CI (which someone else cited in one of these threads) and the 1/1 ratio, you might get a number compatible with IBC.
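The arithmetic in the last two comments can be checked with a quick back-of-envelope sketch (the 120 deaths per day is the IFHS-derived figure quoted above, the two-year window is mid-2004 to mid-2006, and the ratios are the ones being debated in-thread; treat it as a rough consistency check, not a precise calculation):

```python
# Rough check: implied wounded totals under different wounded-to-dead ratios,
# compared with the Iraq Ministry of Health figure of 60,000 treated for wounds.
DEATHS_PER_DAY = 120        # IFHS-derived midrange figure cited in-thread
DAYS = 2 * 365              # mid-2004 to mid-2006
OFFICIAL_WOUNDED = 60_000   # Ministry of Health count for the same period

for ratio in (3, 1):        # IBC's 3:1 ratio, and a conservative 1:1
    implied = DEATHS_PER_DAY * ratio * DAYS
    print(f"{ratio}:1 ratio -> {implied:,} implied wounded "
          f"({implied / OFFICIAL_WOUNDED:.1f}x the official count)")
```

Even the conservative 1:1 ratio implies well over the official 60,000, which is the point of the reductio: either the ministry's wounded count is badly incomplete, or the survey-based death rates are far too high, and IBC relies on the former being accurate.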

There have also been other surveys implying very high casualty figures, like the one sponsored by several news agencies in early 2007 which found one household out of six had suffered someone dead or wounded--you can't get a precise number out of that, but it does imply casualties in the many hundreds of thousands, if not more.

Anyway, IBC's argument boils down to this claim--a government bureaucracy controlled by one of the factions in a civil war, working with a medical system which has broken down, can be expected to give an accurate tally of dead and wounded. It has to come down to that, because IBC's count must depend either on official sources or else on trusting that reporters will somehow be able to independently count the dead and wounded themselves. No reporter is insane enough to claim that ability for the press, so we're left with those trustworthy government sources.

Which is not to say that Lancet2 is right, just that IBC's arguments against them are very weak.

By Donald Johnson (not verified) on 20 Jan 2008 #permalink