The Lancet Report - Criticizing my Criticisms

My post on the Lancet article has attracted a fair amount of comment, both in the comments here and on other blogs. On the whole, those who have addressed my criticisms have disagreed with them. I've read the criticisms and re-read both the new and the 2004 Iraqi death toll studies a couple more times. Between the two, I've become convinced that some (but not all) of my earlier concerns were unjustified.

In this post, I'm going to try to respond to most of the substantive criticisms (and a few of the other comments). I'll let you know where my views have changed, and I'll try to clarify or restate my position where they haven't. Further discussion is, as always, more than welcome. I'd always rather have someone let me know that I'm saying something stupid than walk around ignorant of that fact. Well, almost always, anyway. Just try not to bruise my ego too badly, OK?

1: Extraordinary results:

I'll start with a comment that doesn't actually address anything I said, but which does illustrate something important. Wolfwalker wrote (in the comments here):

The study claimed that 650,000 Iraqis have died as a direct or indirect result of the invasion, right?

Has it occurred to any of the study's defenders that 650,000 amounts to 2.5% of Iraq's total prewar population? That the study claims that most of the deaths are among males of military age -- which would make the death toll almost nine percent of that demographic group? And yet no other source anywhere gives a figure that's within an order of magnitude of the study's figure. No one else seems to have even noticed the disappearance of one out of every eleven adult Iraqi men.

This latest Lancet study does make a claim that is clearly at odds with almost all of the limited information that was previously available on the death toll in Iraq. The paper does point out that the number of excess deaths that they calculated represents about 2.5% of the population of the country. The death toll that they calculated is in fact an order of magnitude greater than any other estimate. Those are extraordinary claims.

I think it is reasonable to subject extraordinary claims to an increased level of scrutiny - and that's why I've put so much time into examining this paper. What's not reasonable is the approach that Wolfwalker, President Bush, and many others have been taking. An extraordinary claim that is supported by evidence-based study needs to be addressed on its merits. It should not be dismissed out of hand simply because the conclusions don't fit someone's pre-existing view of reality.

2: Non-violent excess deaths:

In the earlier post, I argued that including the increase in the non-violent death rate since the start of the war was unjustified, since that increase was found not to be statistically significant. Josh, over at Thoughts from Kansas, thinks that this is a valid criticism. Several of my commenters disagree, pointing out that if you try hard enough, you could probably subdivide the causes of death finely enough that no individual cause would show a significant increase. Another commenter points out the difficulties involved in identifying causes of death.

Those criticisms are legitimate and compelling, but there is an important point that they fail to address. I am not the one who separated out non-violent deaths and tested the increase for significance. The authors of the study did. They were confident enough in their ability to identify non-violent deaths to list them separately, and they decided to test that particular increase for significance. Looking at the companion (non-peer-reviewed) article, it appears that they did so in an attempt to detect evidence for deaths resulting from the deterioration of the health care system, and had they found a significant increase, I'm sure that they would have talked about health care deterioration in the Lancet article.

If they had not tested the increase in non-violent deaths, I would have no complaint. However, I think that including those deaths after they tested for, and found no, significance is highly questionable. I think this is especially true given that the non-violent death rate was actually lower than the prewar estimate in two of the three periods they covered in the study.

3: Selection of cluster sites.

I was concerned that the way in which the clusters were selected resulted in a heavy bias toward urban areas. I got hammered on this one about eighteen different ways, and my critics were right. The site selection process was reasonable, and probably resulted in a sample that was representative of the population of the country.

That having been said, I still would have been happier if the authors had used a stratified sampling approach, and examined urban and rural areas separately. One of the commenters here did not think that there was any reason to think that the death rate would vary between urban and rural regions, but I think there might be. Sectarian violence is going to be more likely to occur, I think, where there are groups from the rival sects living in close proximity to one another. That is probably going to happen more often in cities. I do realize, however, that logistical and (more importantly) safety concerns would make it difficult to use that type of sampling design.
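The stratified approach described above can be sketched in a few lines: estimate the death rate separately in urban and rural strata, then weight each rate by its stratum's share of the population. All population shares and rates below are invented for illustration; they are not figures from the study.

```python
# Stratified estimate: measure death rates separately in urban and
# rural strata, then weight each rate by the stratum's population
# share. All numbers here are invented for illustration.

strata = {
    "urban": {"share": 0.70, "rate": 14.0},  # deaths per 1000/yr
    "rural": {"share": 0.30, "rate": 8.0},
}

national_rate = sum(s["share"] * s["rate"] for s in strata.values())
print(national_rate)  # ~12.2 per 1000/yr
```

The payoff of stratifying is that a difference between the urban and rural rates is measured directly rather than being averaged away, at the cost of needing enough clusters in each stratum.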

4: Choice of population figures

I was concerned by the study authors' decision to use one estimate for the population of Iraq without mentioning two other ones. Opinion seems to be split on this one. Josh's criticism was the clearest:

Choosing a higher population estimate does inflate the mortality estimate to some extent, but unless there's some basis for thinking the UNDP estimate is worse than other estimates, there's no basis for claiming that this is a flaw, or even a bias (except in comparisons between this study and the earlier one).

The thing is that I do think there's some basis for at least questioning whether the UNDP estimate was better than the others. As far as I can tell, three independent studies of the population were conducted. I don't know what techniques were used in any of them, but I'm reasonably certain that none was a census. Two of the three produced estimates that were similar to each other - within a few hundred thousand individuals. The third produced an estimate that was 1.6 million individuals greater than the higher of the other two. With two similar estimates and one dissimilar one, I think it is quite reasonable to ask why the dissimilar one was favored over the other two - particularly when selecting one of the others would have resulted in an estimate of about 75,000 fewer deaths. One reader did point out that better censuses usually give higher numbers. However, it does not appear that any of these estimates was derived from a census, and I'm not sure that better estimates are always larger.
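The arithmetic behind that sensitivity is straightforward: excess deaths are the excess crude rate multiplied by person-years, so the estimate scales linearly with whichever population figure is chosen. A sketch with illustrative numbers (assumed for the example, not the study's exact inputs):

```python
# Excess deaths = (post-invasion rate - pre-invasion rate) x
# person-years, so the estimate scales linearly with the assumed
# population. Rates and populations below are illustrative only.

def excess_deaths(pre_rate, post_rate, population, years):
    """Rates are crude death rates per 1000 per year."""
    return (post_rate - pre_rate) / 1000.0 * population * years

pop_high = 27_100_000  # a higher, UNDP-style estimate (assumed)
pop_low = 25_500_000   # a lower estimate, ~1.6 million less (assumed)

high = excess_deaths(5.5, 13.3, pop_high, 3.3)
low = excess_deaths(5.5, 13.3, pop_low, 3.3)
print(round(high), round(low), round(high - low))
```

With these invented inputs the lower population trims several percent off the estimate; the exact size of the reduction depends on the real rates and the real gap between the population figures.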

5: Responsibility for the deaths

I was a bit upset when I was writing the earlier post, with the result that my writing probably wasn't as clear as it should have been. A number of people have pointed out that we are responsible for the deaths that have resulted from sectarian violence, and they are right. The United States bears the bulk of the responsibility for what has happened and is still happening in Iraq. The reason I was upset that the actions of coalition troops were singled out for criticism in the paper while sectarian violence was ignored is that there is a difference between what we, as a nation, are responsible for and what the troops on the ground are responsible for. As a nation, we're responsible for pretty much the whole bloody mess. We managed to conduct a well-orchestrated military operation, but as a direct result of unbelievable bungling totally screwed up everything after that. The disastrous occupation set up the conditions for all of the ongoing violence. That does not, however, mean that the troops on the ground are responsible for all of the deaths. That might not seem like an important distinction to everyone, but it is to me. I'll have a separate post talking about some of the reasons for that tomorrow.

6: Me as a PR hack

One of the comments today was a bit harsher than most:

I am a demographer and a statistician, and I offer one major criticism of his attack on this paper. He fails to show how any of the points he lists would result in any substantial changes in the final estimate.

Actually, I pointed out that using the lower population figures would lower the number of excess deaths by about 10%, and that excluding the non-violent excess deaths would knock another few percent off. At the time, I wasn't sure what effect the methods used to select the clusters would have, and said so. (And as I mentioned already, I've since figured out that I was wrong on that one anyway.) You might not think that those changes are substantial. If so, I'd honestly be interested in hearing why.

In general Dunford is using a standard PR technique to create the impression that there are a lot of problems with the study and that it should not be believed, without identifying a single real problem that, if corrected, would change the results. The more the uninformed reader reads sentences that note apparent problems while discussing the article, the less the uninformed reader is going to trust the findings reported in the article. Remember the goal of science is to discover new knowledge while the goal of public relations is to create a false image to manipulate the public. It is clear that the Johns Hopkins study is science and Dunford's tract pure PR.

That is absolutely not what I was attempting to do. In fact, I explicitly said at the start of the article that I thought the new study represents the best available estimate. At the end of the post, I said that I was convinced that the real death toll is much higher than the official estimates. I wrote the post for two reasons, and two reasons only. The first is that I was genuinely interested in finding out if the things that I thought were problems with the study really were - and I don't know of many better ways of finding that out than throwing the questions out there for others to read and evaluate. The second reason is that I think that there is a great deal to be learned through critical examination, and little to be gained from not evaluating things critically.

A final note:

The more I read the Lancet paper, and the more I examine my own concerns with that article, the more reasonable their conclusions appear. I still think that their number is likely to be on the high side, but I don't think it's going to be high by as much as I would have liked to believe. In fact, I think that the actual number is quite likely to fall within their stated 95% confidence interval. It's painful to think that we have so much blood on our hands as a nation, but the concerns that I still have with the paper's methodology are almost certainly not severe enough to account for even half of their estimate.

For a while I've been of the opinion that we could not morally withdraw from Iraq, because withdrawing would result in more harm to the Iraqi people than staying. It looks like I've got some hard, hard thinking to do about that now.


A number of people have pointed out that we are responsible for the deaths that have resulted from sectarian violence, and they are right.

No, they're not. The people responsible for the sectarian violence are those Sunnis and Shiites directly involved in it. No one else.

By AJ Mackenzie (not verified) on 14 Oct 2006 #permalink

I suspect you are right to be asking questions.

In Scotland in 2001, there were 5.064 million people and 57,382 deaths, for 11.3 deaths per 1000 (and Scotland isn't that far out of line with European mortality rates). Burnham et al state that in Iraq, the death rate was 5.5 per 1000 "pre-invasion".

Now you can explain this discrepancy a number of ways. One is that life expectancy in Iraq is twice as long as in Scotland. I hope I won't be treated like Wolfwalker if I speculate that I believe that to be unlikely, and that I further believe that life expectancy in Iraq (pre-invasion) was shorter than in Scotland. It could be that the population age spectrum is vastly different in Iraq than in Scotland (I don't have data to know if this contributes, but clearly it could). Another issue is that sampling methodologies have their problems, and that these are fairly well understood.

Given the power of the investigation (80% to detect a doubling of the original 5.5 per 1000 rate), there was no significant increase in non-violent deaths. Nonetheless, given the collapse of much infrastructure, it seems inevitable that lack of food, infection, and damage to medical facilities and provision would have a severe impact on non-violent death, and it is just a little surprising that the non-violent death rate has gone up by only ~10% (i.e., essentially no effect at all).

per

The people responsible for the sectarian violence are those Sunnis and Shiites directly involved in it. No one else.

The Allied Forces are and were responsible for the safety of the people of Iraq after the invasion.

By Wim Prange (not verified) on 14 Oct 2006 #permalink

per: "It could be that the population age spectrum is vastly different in Iraq than in Scotland".

Of course the population age structure in Iraq is different. Comparing unadjusted mortality rates with Scotland is just bizarre. The mortality rate of 5.5 is similar to that in neighbouring countries such as Syria.

OK, some stats wonk, riddle me this, because while I am a chemist, and generally not innumerate, my statistical knowledge is weak:

The Lancet study claims that there were death certificates available for 92% of the 'excess deaths' that they record, which, extrapolated to the total number of deaths that the study estimates, would total over 600,000 death certificates that the ministry of health (or whichever authority does this) should be able to produce. I assume that the health ministry cannot in fact produce these, or the survey would be far less contentious.

If one cannot produce that many death certificates because the record keeping is bad, is it the case then that the clusters they surveyed were at least non-representative in that they kept unusually good records? Does this say anything about the primary conclusions of the survey? The rate of death and the rate of death certification are independent, no doubt, but are we not assuming that these clusters are in some way representative of the whole population? How does one know that the death rate is, but the death certification rate is not?

My question is this: IF the death certificates for 92% of the hundreds of thousands of deaths cannot be produced, is this then a characteristic of the original sample that does not scale to the population, as presumably the death rate does? And can any conclusion be drawn about the relationship between the actual number of death certificates and the actual death numbers? If not, how is the difference in scaling characterized?

The fact that statistically savvy people are generally supportive of the findings suggests that the methodology is sound, but the questions about how representative the samples are still seem unclear to me, and I haven't seen those addressed as clearly.

By Dave Eaton (not verified) on 14 Oct 2006 #permalink
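The extrapolation Dave Eaton is describing is just the sample certification rate applied to the national estimate. A sketch, using the figures quoted in the thread illustratively rather than as verified data:

```python
# If the 92% certification rate observed in the sample is
# representative, the implied number of certificates nationwide is
# simply rate x estimated deaths. Figures are those quoted in the
# thread, used for illustration only.

cert_rate = 0.92
estimated_deaths = 655_000  # the study's headline estimate

implied_certificates = cert_rate * estimated_deaths
print(round(implied_certificates))  # ~600,000
```

Whether those implied certificates should be findable in one central registry is exactly the point of contention in the comments that follow.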

Blame doesn't just lie with the US or with local insurgents - it also lies with the French and the Russians and their decision not to get involved with overthrowing Saddam (because, one presumes, of their lucrative oil deals with his regime).

When the US and a few of their friends invaded it was seen by some as an occupation, and by infidels at that, which therefore needed to be resisted. Had the French and Russians been onboard it would have been possible for a much larger number of countries to participate and it would have looked much more like the world coming in to help than an occupation and it would have been harder, although not impossible, for the xenophobes and conspiracy theorists to oppose it.

And trying to use these numbers, real or not, to make judgements about the moral worthiness of the war is also silly. The comparison you need to make, in order to make such a judgement, isn't between then and now, but between now and how things would be now if we hadn't gone to war. Analysing this putative parallel timeline is difficult to do with any accuracy, but it still seems plausible that it may ultimately be the worse timeline. No one in the anti-war movement seems to have any realistic or plausible suggestion on how else the situation with Saddam, which wasn't sustainable, could have been solved.

I, too, am inclined to give credence to the Lancet study over published "body counts".

It's good to see that comment here is better than the generally mathophobe press.

But:

"If one cannot produce that many death certificates because the record keeping is bad, is it the case then that the clusters they surveyed were at least non-representative in that they kept unusually good records?"

In Iraq, it seems, families do acquire and KEEP death certificates. But the authorities (both Iraqi and Coalition) have every reason to minimize casualty numbers -- not just, or primarily for propaganda, but more to keep some hope of progress. And I think that they are acting on it. Over the past year or so, Western reporters were able to get some numbers on the dead in Baghdad by talking to the morgue. This has led both to the assumption that most sectarian violence is in Baghdad (not true according to the Lancet study, where Baghdad is "moderate" in death rate), and to the recent report that morgue and hospital officials will now be prohibited from talking to the press.

lkr

Dave Eaton said:

The Lancet study claims that there were death certificates available for 92% of the 'excess deaths' that they record, which, extrapolated to the total number of deaths that the study estimates, would total over 600,000 death certificates that the ministry of health (or whichever authority does this) should be able to produce.

I think what may be wrong here is the assumption that there is a central authority that manages this. I suspect that the death certificates in question are issued locally, by hospitals or morgues and that they are not "rolled up" (at least not in any robust, efficient way) into some central system. I don't really know the answer here, but it's not hard to imagine that the accounting process is difficult in Iraq today.

By Eric Wallace (not verified) on 14 Oct 2006 #permalink

I wonder how much effect the mathophobic press or innumerate critics really have. My guess is that a) those inclined to be suspicious of anything the press or science say will go to sources that support their skepticism, and b) debunking and counter-debunking is probably followed by people who are largely aligned prior to any argument being made (to first order - some people really just want to know better how things were figured out, and how well the truth is being approximated). I'm not saying that the effort to clarify things is not valuable, just that one can't expect to convince many people of anything.

Despite the low signal-to-noise ratio of the blogosphere, the threads of discussion that can be followed from one ideological extreme to another, with the occasional island of science and rationality, are invaluable. Learning a little statistics along the way is great, but watching what people do with results is fascinating.

By Dave Eaton (not verified) on 14 Oct 2006 #permalink

Of course the population age structure in Iraq is different
hi tim

One would imagine that to make such a clear and definitive statement, you would have to have access to some excellent data; after all, we are talking about a twofold difference in rate. So how do you know this? What sources do you have for your assertion?

yours
per

It's an interesting post, Mike, and you make a decent case ... but even after consideration, I'm afraid I still think the study is snake-oil, and I'm not buying.

If you spend enough time doing any sort of science that involves data collection or correlation, after a while you develop a sense of how things fit together. Call it the voice of experience, or the subconscious, or intuition, or The Force, or whatever you want -- the fact is that it's there, and it's usually trustworthy. When a database query yields 1000 hits where I expected 10, I know I made a mistake even before I go back and recheck the query conditions. Back in '97 when Darwin's Black Box came out, I didn't need to be a master biochemist to know that Behe had missed something somewhere in his analysis. When I see somebody pushing a new version of the Pogue Carburetor, I don't need to know the thing's exact construction in order to know it's a fraud. When I see some armchair "scientist" claiming they found a flaw in Special Relativity, I don't need to be a master physicist to know they missed or misunderstood something.

I get the same sense of "wrongness" off this study. Iraqi bloggers say it's wrong. Iraq-veteran milbloggers say it's wrong. The numbers being reported day to day in the press say it's wrong. Cross-checking the study's numbers against other known data -- for example, Iraq's estimated prewar population -- produces nonsensical results. So there must be a mistake in it somewhere. I don't know what the mistake is. I don't know where it is. I don't know how or why they made it, or why they didn't see it themselves. But I'm absolutely certain that it's there.

By wolfwalker (not verified) on 15 Oct 2006 #permalink

I don't know what the mistake is. I don't know where it is. I don't know how or why they made it, or why they didn't see it themselves. But I'm absolutely certain that it's there.

This is not a useful statement in methodological studies. I hear the same statement from creationists. I think you need to bring more to the table than this.

(Note: I'm not saying you're wrong; I'm just saying the criticism is not particularly strong with respect to this study).

I think you need to bring more to the table than this.
hmmm, that is a little like hearing a physicist tell me he can measure the size of atoms with a ruler, and then telling me that I need to disprove the methodology...
it's a cluster sampling study; what more do you need to say? It is not the most reliable protocol, it samples a tiny percentage of the whole population, and it is open to all sorts of systematic bias. There are many examples of cluster sampling methodologies failing spectacularly.

As wolfwalker points out, all sorts of anomalies fall out. The pre-invasion death rate is half of what it is in the UK, apparently! Post-invasion non-violent death doesn't go up! So many death certificates, but the Iraqi authorities don't know about them! Etc.

it is clearly a piece of work, but giving a death figure to 6 significant figures, when your errors are ±30+%, clearly overstates the precision of the measurement tool.

cheers
per

Per challenged, regarding age structure in Iraq vs. Scotland:

One would imagine that to make such a clear and definitive statement, you would have to have access to some excellent data; after all, we are talking about a twofold difference in rate. So how do you know this? What sources do you have for your assertion?

Well, here's the CIA World Factbook on the age structure in Iraq and in the UK (sorry, not Scotland):

Iraq: 0-14 39.7%; 65+ 3%
UK: 0-14 17.5%; 65+ 15.8%

They look pretty different to me.
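The effect of age structure on crude rates can be made concrete with a small sketch: give two populations identical age-specific death rates but different age distributions, and their crude rates diverge sharply. All rates and shares below are invented, not real Iraq or UK figures.

```python
# Identical age-specific death rates, different age structures,
# very different crude rates. All numbers invented for illustration.

rates = {"0-14": 1.0, "15-64": 3.0, "65+": 50.0}  # per 1000 per year

young_pop = {"0-14": 0.40, "15-64": 0.57, "65+": 0.03}  # Iraq-like
old_pop = {"0-14": 0.18, "15-64": 0.66, "65+": 0.16}    # UK-like

def crude_rate(shares, age_rates):
    # Crude rate is the population-weighted mean of age-specific rates.
    return sum(shares[age] * age_rates[age] for age in age_rates)

print(crude_rate(young_pop, rates))  # ~3.6 per 1000
print(crude_rate(old_pop, rates))    # ~10.2 per 1000
```

This is why demographers compare age-standardized rather than crude rates: a young population can have a much lower crude death rate without anyone in it being any healthier.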

This is not a useful statement in methodological studies. I hear the same statement from creationists.

I think in this case it is useful because I can (and have) listed reasons to think there's a mistake. Creationists can't give any rational reason to doubt evolution (or any of the other science that they reject), only their own prejudices.

I think you need to bring more to the table than this.

Very well, how about this comment from a post over at Jane Galt's blog Asymmetrical Information:

The data in the study appear to provide confirmation for two hypotheses. One is that the random variable "excess deaths" in Iraq has a central tendency of about 650,000 (leaving aside the issue of discarding outliers). The other is that 91% of respondents could produce death certificates for those family members who have died since the invasion. This tends to confirm the validity of studies based upon counting the death certificates issued by hospitals, morgues, etc. Those studies yield estimates closer to the 50,000 level.

So there is a serious inconsistency here.

Elsewhere in the same thread, another commenter points out that many of Iraq's provinces are quite peaceful, and most of the violence is focused in a handful of provinces plus Baghdad. This means that those 650,000 (alleged) dead actually come from an area that contains only a fraction of the country's total population. Of course, the smaller the affected area, the higher the local body count has to be, and the less likely it seems that such an immense manpower drain would go unreported.

In another comment on another blog (sorry, I don't remember which one), someone said that Iraq is an "extended family" sort of culture, so when asked "has anyone in your family died as a result of violence?" they might answer "yes" even for a distant relative -- that is, not just for a spouse or a child, but also for an aunt, an uncle, a cousin, a remote cousin ... And the more distant a degree of relationship is involved, the more likely "multiple counting" becomes.

If you want me to accept the study, you're going to have to either provide reasonable answers to every one of these criticisms, or show me other studies by different groups using different methods that produce similar numbers. And I'm afraid that "the people who did it are experts, so take their word for it" isn't a reasonable answer. I've seen too many "experts" commit monumental blunders and never notice til it was too late.

By wolfwalker (not verified) on 15 Oct 2006 #permalink

I'll give the answer to the third of your questions and leave the rest to others: the study only counted the family's casualties if the relative had lived with them for the past three months.

By the way, though it is good that you have stopped your angry remarks, which I hope resulted from a first gut reaction, you still seem not to have read the study. You have gotten your information from other blogs.

It seems as if you've been through quite a lot to find arguments that support your gut feeling. I remember you saying you didn't have the time to read the whole study and find proper mistakes in the study on your own.

Very good, Anne. One down, two to go.

I remember you saying you didn't have the time to read the whole study and find proper mistakes in the study on your own.

That's interesting ... since I don't recall saying any such thing, and don't see it in the previous post comment thread. In fact I skimmed the article enough to satisfy myself that a) the quotes I've seen in various blogs were accurate and b) I haven't the background or experience to properly analyze the complexities of the study data myself. I'm not an expert in reading public health studies. I know only that the numbers people (including Mike) have quoted from this study don't make sense. As a database analyst and programmer I've developed an ability to tell when something ain't right with the data I'm looking at and/or the conclusions being drawn from it. It's an ability I trust, because it's saved me from major mistakes several times. And with this study, that ability is screaming that something isn't right. Like I said before: I may not be able to tell exactly where the mistake is, but I'm quite sure there is a mistake somewhere.

By wolfwalker (not verified) on 15 Oct 2006 #permalink

wolfwalker wrote:

One down, two to go.

2. There's no simple way to go to a single place and tell how many death certificates have been handed out, especially on quick turnaround. The US system works pretty well but we've just published the 2003 data and we don't have an active war going on to mess up record collecting. 2005 and 2006 data? Forget it. As far as I know, no systematic effort has been made to survey Iraqi hospitals and morgues for death certificates (that probably wouldn't be a good way to estimate total excess deaths anyway).

many of Iraq's provinces are quite peaceful, and most of the violence is focused in a handful of provinces plus Baghdad. This means that those 650,000 (alleged) dead actually come from an area that contains only a fraction of the country's total population.

3. Uh, no it doesn't. It means that many of those dead come from an area that contains only a fraction of the country's land area. Sure lots of provinces may have been relatively peaceful, but provinces don't die, people do. The clusters were allocated by population because the right way to get an estimate of deaths is to survey where the people are.

That's pretty weak, Robert. If death certificates aren't reliable in Iraq, then why did the surveyors ask for them and use them as a validation check? Conversely, if they were valid for the surveyors, why aren't they valid for anyone else trying to count deaths? Who in Iraq has authority to write death certificates? Only the central government? Local official morgues? Any doctor? Why, in short, might these people have death certificates that didn't get into anybody else's data?

As for your answer #3, I didn't say it was a small fraction, did I? :-) Baghdad alone is some 25% of the entire country, population-wise. A large fraction, but still only a fraction. I don't see anything in the paper that breaks down the actual observed deaths by province. It looks like they just derived actual mortality rates for their 47 sample clusters, then multiplied by the population factor to get numbers for the whole country. If I do that, using their numbers, I get something like 162,000 deaths in Baghdad alone. Do you really think 162,000 additional deaths to violence could go unnoticed in Baghdad, where most of the foreign press is concentrated? If I factor in the fact that Baghdad is a disproportionately violent area, that number jumps by two or three times. 300,000 unreported deaths in Baghdad, with all those journalists around? I don't think so.

Rambling around this morning I found another post that draws attention to something I hadn't noticed before: under the crude scale-up the study seems to have used to get from surveyed deaths to projected deaths, comparing their calculated "deaths due to car bombs" to known car bomb incidents and deaths caused by them results in a conclusion that for the Lancet study to be right, something like 90% of car bombings must go unreported in the press. 90%? Single murders, accidental deaths, or even small mass killings by gunfire might go unreported, but car bombs? Thousands of car bombs going unreported? I don't think so.

Even worse is the number of deaths attributed to coalition air strikes. By a similar chain of figuring you can calculate that more than 90% of "air strikes" go unreported by other sources. Even the study's defenders should get a bit of a "say what?" reaction from that one!

By wolfwalker (not verified) on 16 Oct 2006 #permalink

wolfwalker wrote:

If death certificates aren't reliable in Iraq, then why did the surveyors ask for them and use them as a validation check? Conversely, if they were valid for the surveyors, why aren't they valid for anyone else trying to count deaths? Who in Iraq has authority to write death certificates? Only the central government? Local official morgues? Any doctor? Why, in short, might these people have death certificates that didn't get into anybody else's data?

Death certificates are handed out locally: either by a hospital if the patient died in a hospital, or by a local coroner if the person died outside of a hospital. There are hundreds of those places spread out over the country. Eventually copies of the certificates have to get forwarded up to some central repository, but the question is how many of them get forwarded in what period of time. In the US, where the system works well, we just published the 2003 vital statistics. At this moment, you cannot go to any central place in the US and find out how many deaths occurred in 2005.

As for your answer #3, I didn't say it was a small fraction, did I? :-) Baghdad alone is some 25% of the entire country, population-wise. A large fraction, but still only a fraction.

Oh, so you're saying that your third point was just a red herring, and wasn't a serious criticism? Okay, I can live with that.

If death certificates aren't reliable in Iraq, then why did the surveyors ask for them and use them as a validation check?

Robert didn't say death certificates weren't reliable in Iraq. He said [t]here's no simple way to go to a single place and tell how many death certificates have been handed out, especially on quick turnaround.
Let me explain this to you in rather simple terms: filling in and signing death certificates and handing them out to families is not that difficult to do. Keeping a tally of every death certificate that has been handed out is a very different thing, and much more difficult, especially in chaotic situations. You need either a copy of the certificate or an entry in a local death-certificate register. All those local registers then need to be collected nation-wide and processed. I'm sure you can imagine that this does not have the highest priority for Iraqi government officials.

By Willem van Oranje (not verified) on 16 Oct 2006 #permalink

Oh, so you're saying that your third point was just a red herring, and wasn't a serious criticism?

Nah. The point is still valid. The wording was a booby trap, and it worked. You read what you wanted to read, not what I actually wrote. Take it as a lesson, and start thinking about your assumptions before you type.

Willem,

No single place for the entire country -- that, I understand. But no central place for a particular province? for a particular city? for a particular neighborhood? Nowhere that you can even get a simple count for how many certificates have been issued? No chain of reporting, neighborhoods to city center to provincial center to national center? That strikes me as rather unlikely, to be honest.

Noted in passing: this morning the site "Iraq Body Count," considered reliable by many people even though its own data sometimes seems open to question, posted a critique of the Lancet study.

By wolfwalker (not verified) on 16 Oct 2006 #permalink

wolfwalker wrote:

The wording was a booby trap, and it worked.

Oh, I see. It was a trap. So you're saying you weren't interested in a real dialogue but rather were intent on insincere posturing? OK, I can live with that. Ciao.

You walk back your criticism that the sample is urban-biased, but you don't say why. That struck me as one of the more reasonable criticisms. Could you (or someone else) explain why you think the sample was representative of the population of Iraq?

Pithlord, the clusters were assigned in such a way that, at the outset, as far as possible every household in Iraq had an equal probability of inclusion. Admittedly an isolated farm or village might have had no chance, but no survey really lives up to the textbook ideal of a random sample. I can't see that a household in a small town containing a hundred homes or so would have been any less likely to get included than a household in Baghdad.
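The equal-probability property Kevin describes is the standard result of allocating clusters in proportion to population. A minimal sketch of that allocation step, using made-up province populations (not the study's actual sampling frame):

```python
# Population-proportional cluster allocation, sketched with a
# hypothetical sampling frame (made-up numbers, not the study's).
provinces = {"A": 6_000_000, "B": 2_000_000, "C": 1_500_000, "D": 500_000}
n_clusters = 50

total_pop = sum(provinces.values())

# Allocate clusters in proportion to population.  (A largest-remainder
# method would handle rounding more carefully; simple rounding suffices
# for the sketch.)
allocation = {p: round(n_clusters * pop / total_pop)
              for p, pop in provinces.items()}
print(allocation)
```

Because each province's cluster count tracks its population, and cluster start points within a province are likewise drawn proportionally, ex ante a household in a hundred-home town has (approximately) the same chance of inclusion as one in the capital, which is Kevin's point.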

If isolated farms and small groups of houses are very safe (or unsafe), that would bias the mortality estimate upward (or downward) by an amount which depends on the proportion of Iraq's population living in such places. AFAIK that proportion is very small.

I suspect Mike's main reason for de-emphasising that concern is that he doesn't know which way the bias (if any) goes. Most likely nobody does.

By Kevin Donoghue (not verified) on 17 Oct 2006 #permalink

If anyone reading this is still open-minded on the Johns Hopkins study, then this op-ed at OpinionJournal raises two more questions that really need answers:

1) why so few clusters? Other surveys of the same type have used several times as many clusters. Too few clusters in a survey like this yields the same result as too few individuals in any ordinary poll: a sample that may not be representative, and results that can be seriously different from reality.

2) why didn't the surveyors collect any demographic data at all? Without this data, there's no way to run any sort of subset queries or tell if the sample really was representative.

By wolfwalker (not verified) on 18 Oct 2006 #permalink

A number of these points have been addressed by one of the study authors in this interview.
Regarding wolfwalker's new point 1, the low number of clusters is one of the reasons the confidence interval is so wide, on the order of 30%. More clusters mean a narrower confidence interval. However, as long as the number of people/households selected per cluster is sufficient, the results should be valid.
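The clusters-versus-precision trade-off can be illustrated with a small simulation. This is purely synthetic (the per-cluster death counts and spread are invented for illustration, not taken from the study); it just shows the half-width of the confidence interval shrinking roughly as one over the square root of the number of clusters:

```python
import random
import statistics

random.seed(0)

def ci_halfwidth(n_clusters, mean_deaths=10, sd=6, trials=2000):
    """Approximate 95% CI half-width for the mean cluster death count,
    estimated by simulating many repeated surveys.  Synthetic numbers
    only -- not the study's data."""
    estimates = []
    for _ in range(trials):
        clusters = [random.gauss(mean_deaths, sd) for _ in range(n_clusters)]
        estimates.append(statistics.mean(clusters))
    # Spread of the survey-to-survey estimates, scaled to a 95% interval.
    return 1.96 * statistics.stdev(estimates)

for k in (47, 100, 200):
    print(k, "clusters -> CI half-width", round(ci_halfwidth(k), 2))
```

Doubling the cluster count cuts the half-width by roughly a factor of 1.4, which is why a 47-cluster design produces such a wide interval even when the point estimate is unbiased.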

By Antiquated Tory (not verified) on 20 Oct 2006 #permalink

It is a curiosity that for studies of how we have changed the world around us by ongoing activities, our estimate of the world remains fixed in our minds long after the report. So, the first Iraq mortality survey estimated 100,000 excess deaths through May 04, and 2 weeks ago we pretended that 100,000, an estimate over 2 years old, was either too high to be credible or shocking and indicative. Well, the war in Iraq has been no less violent for Iraqis, and recently not for US soldiers either. Now, we have a new estimate of excess Iraqi deaths before us, 655,000, and nobody, including concerned believers that this is a best, unbiased estimate, has updated that estimate to include likely experience since data collection was completed. Quickly, take the estimated excess deaths from January to June 06, when the last deaths were recorded by the survey teams, and multiply by .5 to get an estimate of the additional deaths for July, August and September. Add that to 655,000.
I won't do it for you, you won't do it for yourself, and we will all keep 655,000 in mind until we are shocked, shocked, in 2 years that the cumulative total is higher. (Of course, if you are intent to disbelieve the recent study report, should the next estimate be, say just 680,000 excess deaths, you will harumph that the previous estimate has now been proven to have been too high.)
What I take from this is that we care very little about the number, and unfortunately look away from the unpleasant part of the process by which this number, which was supposed to have grown smaller, as Iraqis enjoyed safer lives in a democracy, instead grows larger.
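The update Harry describes is simple arithmetic. A sketch, assuming a placeholder value for the January-to-June 2006 excess deaths, since the comment does not supply that figure:

```python
# Sketch of the forward extrapolation described above.
# jan_jun_excess is a PLACEHOLDER -- the comment does not state the
# actual Jan-Jun 2006 figure from the survey.
cumulative_through_june = 655_000
jan_jun_excess = 180_000          # placeholder, illustrative only

# Three months (Jul-Sep) = half of the six-month Jan-Jun span,
# assuming the death rate held steady.
jul_sep_estimate = 0.5 * jan_jun_excess
updated_total = cumulative_through_june + jul_sep_estimate
print(f"Updated estimate through September: {updated_total:,.0f}")
```

The point being made is not the particular total but that nobody bothers to roll the estimate forward at all.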

By Harry Travis (not verified) on 20 Oct 2006 #permalink