More answers from Les Roberts

The BBC did not publish all of Les Roberts' answers. Here are the rest:

It seems the Lancet has been overrun by left-wing sixth formers.
The report has a flawed methodology and deceit is shown in the counting process. What is your reaction to that? --Ian, Whitwick, UK

Almost every researcher who studies a health problem is opposed to that health problem. For example, few people who study measles empathize with the virus. Thus, given that war is an innately political issue, and that people examining the consequences of war are generally opposed to the war's conception and continuation, it is not surprising that projects like these are viewed as being highly political. That does not mean that the science is any less rigorous than a cluster survey looking at measles deaths. This study used the standard approach for measuring mortality in times of war, it went through a rigorous peer-review process and it probably could have been accepted into any of the journals that cover war and public health.

The Lancet is a rather traditional medical journal with a long history and is not seen as "left-wing" in the public health and medical communities. The types of reports it carries (medical trials, case reports, editorials) have been staples of the journal for scores of years. The Lancet also has a long history of reporting on the adverse effects of war, and the world is a more gentle place for it.

Why is it so hard for people to believe the Lancet report? I am an Iraqi and can assure you that the figure given is nearer to the truth than any given before or since. -- S Kazwini, London, UK

I think it is hard to accept these results for a couple of reasons. First, people do not see the bodies; in the UK there are well over 1,000 deaths a day, yet people do not see the bodies there either. Secondly, people feel that all those government officials and all those reporters must be detecting a big portion of the deaths, when in actuality, during times of war, it is rare for even 20% of deaths to be detected. Finally, there has been so much media attention given to the surveillance-based numbers put out by the coalition forces, the Iraqi Government and a couple of corroborating groups, that a population-based number is a dramatic contrast.

Why do you think some people are trying to rubbish your reports, which use the same technique as used in other war zones for example in Kosovo? Another group, which uses only English-language reports - Iraq Body Count - constantly rubbishes your reports. Again, why do you think that is? --Mark Webb, Dublin, Ireland

I suspect there are many different groups with differing motives.

Lancet 2 found a pre-invasion death rate of 5.5 per 1000 people per year. The UN has an estimate of 10. Isn't that evidence of inaccuracy in the study?

The last census in Iraq was a decade ago and I suspect the UN number is somewhat outdated. The death rate in Jordan and Syria is about 5. Thus, I suspect that our number is valid. Note that if we are somehow under-detecting deaths, then our death toll would have to be too low, not too high, both because a) we must be missing a lot, and b) the ratio of violent deaths to non-violent deaths is so high.

I find it very reassuring that both studies found similar pre-invasion rates, suggesting that the extra two years of recall did not dramatically result in under-reporting ... a problem recorded in Zaire and Liberia in the past.

The pre-invasion death rate you found for Iraq was lower than for many rich countries. Is it credible that a poor country like Iraq would have a lower death rate than a rich country like Australia?

Yes. Jordan and Syria have death rates far below that of the UK because the population in the Middle East is so young. Over half of the population in Iraq is under 18. The elderly make up a much larger share of Western population profiles, and they die at a much higher rate.
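To illustrate why a young population can have a far lower crude death rate, here is a minimal sketch; every number in it is invented for illustration, not Iraqi or Australian data:

```python
# Made-up numbers for illustration only: the same age-specific death rates
# yield very different crude death rates when applied to a young versus an
# old population structure.

age_specific_rate = {"0-17": 1.0, "18-64": 2.0, "65+": 50.0}  # deaths per 1000 per year

structures = {
    "young (Iraq-like)":  {"0-17": 0.52, "18-64": 0.44, "65+": 0.04},
    "old (Western-like)": {"0-17": 0.22, "18-64": 0.62, "65+": 0.16},
}

for name, shares in structures.items():
    crude = sum(age_specific_rate[age] * share for age, share in shares.items())
    print(f"{name}: crude death rate ~ {crude:.1f}/1000/yr")
# -> ~3.4 for the young structure, ~9.5 for the old one
```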

A research team led by physicists Sean Gourley and Neil Johnson of Oxford University and economist Michael Spagat has asserted in an article in Science that the second Lancet study is seriously flawed due to "main street bias". Is this a valid, well-tested concept, and is it likely to have impacted your work significantly?

I have done (that is, designed, led, and gone to the houses with interviewers) at least 55 surveys in 17 countries since 1990 ... most of them retrospective mortality surveys such as this one. I have measured, at different times, self-selection bias, bias from the families with the most deaths leaving an area, absentee bias ... but I have never heard of "main street bias." I have measured the population density of clusters during mortality surveys in Sierra Leone, Rwanda, the Democratic Republic of Congo and the Republic of Congo, and in spite of the conventional wisdom that crowding is associated with more disease and death, I have never been able to detect this during these conflicts, where malaria and diarrhoea dominated the mortality profile.

We worked hard in Iraq to have every street segment have an equal chance of being selected. We worked hard to have each separate house have an equal chance of being selected. I do not believe that this "main street bias" arose because a) about a quarter of the clusters were in rural areas, b) main streets were roughly as likely to be selected as other streets, c) most urban clusters spanned 2-3 blocks as we moved in a chain from house to house, so that the initially selected street usually did not provide the majority of the 40 households in a cluster, and d) people being shot was by far the main mechanism of death, and we believe this usually happened away from home. Realize that there would have to be both a systematic selection of one kind of street by our process and a radically different rate of death on that kind of street in order to skew our results. We see no evidence of either.

In Slate Magazine, Fred Kaplan has alleged that "... if a household wasn't on or near a main road, it had zero chance of being chosen. And 'cluster samples' cannot be seen as representative of the entire population unless they are chosen randomly." Is Kaplan's statement true?

His comment about proximity to main roads is just factually wrong! As far as cluster surveys go, they are never perfect; however, they are the main way to measure death rates in this kind of setting. See the SMART initiative.

A recent Science Magazine article stated that Gilbert Burnham (one of your co-authors) didn't know how the Iraqis on the survey team conducted their work. The article also claimed that raw data was destroyed to protect the safety of interviewees. Is this true?

These statements are simply not true and do not reflect anything said by Gilbert Burnham! He has submitted a letter to the editors of Science in response, which I hope they will print.

A UNDP study carried out a survey 13 months after the war with a much larger sample size than both Lancet studies and found about a third the number of deaths that your team found. Given the much larger sample size, shouldn't we assume the UNDP study was more accurate and that therefore your numbers are way too high?

The UNDP study was much larger, was led by the highly revered Jon Pedersen at Fafo in Norway, but was not focused on mortality. His group conducted interviews about living conditions, which averaged about 82 minutes, and recorded many things. Questions about deaths were asked, and if there were any, there were a couple of follow-up questions.

A) I suspect that Jon's mortality estimate was not complete. I say this because the overall non-violent mortality estimate was, I am told, very low compared to our 5.0 and 5.5/1000/year estimates for the pre-war period, which many critics (above) claim seem too low. Jon sent interviewers back to the same interviewed houses after the survey was over and asked just about deaths of children under five. The same houses reported ~50% more deaths the second time around. In our surveys, we sent medical doctors who asked primarily about deaths. Thus, I think we got more complete reporting.

B) This UNDP survey covered about 13 months after the invasion. Our first survey recorded almost twice as many violent deaths from the 13th to the 18th months after the invasion as it did during the first 12 (see figure 2 in the 2004 Lancet article). The second survey found an excess rate of 2/1000/year over the same period, corresponding to approximately 55,000 deaths by April of 2004 (see table 3 of the 2006 Lancet article). Thus, the rates of violent death recorded in the two survey groups are not so divergent.
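As a rough check of that 55,000 figure, here is a minimal sketch; the excess rate and the 13-month window come from the answer above, while the population of roughly 26 million is my assumption, not a figure from the answer:

```python
# Back-of-envelope check of the "approximately 55,000 deaths by April 2004" figure.

population = 26_000_000       # assumed mid-period population of Iraq
excess_rate = 2.0 / 1000      # excess deaths per person per year (table 3, 2006 article)
years = 13 / 12               # March 2003 to April 2004

print(f"{population * excess_rate * years:,.0f}")   # ~56,000, close to the quoted 55,000
```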


Looking through old threads about the ILCS it seems to me that this point made by Nicolas Davies deserves a bit more attention than it seems to have got: "More than half of the deaths reported were in the southern region of Iraq, suggesting that it captured deaths in the initial invasion rather than in the violence that followed."

Here are the figures, central estimates of numbers of deaths, with my own calculation of deaths per thousand (not annualised) in brackets:

South: 12,044 (1.24)
Baghdad: 7,547 (1.15)
Centre: 3,686 (0.51)
North: 466 (0.13)
Total: 23,743 (0.87)
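The bracketed rates imply regional population denominators. Here is a small sketch back-deriving them from the figures above; these implied populations are reconstructions from the numbers in this comment, not official estimates:

```python
# Back-derive the implied regional populations (deaths / rate * 1000).

ilcs = {            # region: (central estimate of deaths, deaths per thousand)
    "South":   (12_044, 1.24),
    "Baghdad": ( 7_547, 1.15),
    "Centre":  ( 3_686, 0.51),
    "North":   (   466, 0.13),
    "Total":   (23_743, 0.87),
}

for region, (deaths, per_thousand) in ilcs.items():
    implied_pop = deaths / per_thousand * 1000
    print(f"{region:8s} implied population ~ {implied_pop:,.0f}")
# Totals come out around 27 million, which is at least internally plausible.
```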

The Centre region includes Fallujah, so 0.5/1000 is a remarkably low figure. Could it be that the Iraqis understood "war-related" death to refer only to people killed in the initial fighting, up to and including the capture of Baghdad?

The questionnaire asked if anyone in the household had died or gone missing and the causes were listed as disease, traffic accident, war-related, pregnancy-related and Other. Have figures for "Other" been published? Presumably not, given that even Les Roberts seems to be relying on a private communication for the total figure.

By Kevin Donoghue (not verified) on 31 Oct 2006 #permalink

The political problem angle is of little interest--you have to show that the Iraq papers are defective and then you can point to political bias as a possible reason for why bad papers were published. But as an argument it doesn't work in reverse.

By Donald Johnson (not verified) on 31 Oct 2006 #permalink

Donald Johnson --

You fail to understand right-wing argumentation.

Since the facts have been against them for years now, they content themselves with indicting the source of the facts, thus quelling the cognitive dissonance between their "Bush Bubble" and the real world.

I see Roberts endorses the SMART initiative, a set of guidelines for "measuring mortality, nutritional status, and food security in crisis situations" put together by UNICEF and USAID.

Does the 4 1/2 year recall period used in Burnham et al follow the SMART guidelines? Well, no. The experts say that it's "usually advisable" to limit recall to 3 months, and caution against excessively long recall periods. "Recall periods longer than one year should not be used," say the guidelines (pp. 31-32).

Although the Lancet Iraq study is pretty much the only crisis mortality study to violate this SMART guideline, Tim Lambert has assured us that there's nothing wrong with such a long recall period. Since I assume Tim will want to set the SMART folks straight, and explain to these so-called experts how their guidelines need to be revised, here's their contact info.

I'm all for getting the figures every three months. So which mortality studies should we rely on for Iraq, then?

By Kevin Donoghue (not verified) on 31 Oct 2006 #permalink

The ILCS used a recall period of two years. Lancet 1 used a recall period of 2 1/2 years. The SMART guidelines suggest a recall period of one year for baseline mortality and up to one year for crisis mortality. So Lancet 2 doesn't follow the guidelines. But they are not so much rules as guidelines. Maybe Ragout will tell us why he/she thinks this makes a big difference to the results.

Note that if it did make a difference, it would show up in the baseline number, but Lancet 1 got the same number with a shorter recall period.

Except that the proportion of violent to non-violent deaths was completely different.

Tim,

The SMART guidelines discuss a number of reasons not to use a long recall period. I think the most important one is that the imprecise definition of a household member (whose deaths they are asking about) becomes increasingly problematic as the recall period gets longer.

How can you say that (1) you don't know why this would make a difference, but (2) you do know that if it did make a difference it would change the baseline number? You're making some unstated assumption, probably about the monotonic decay of memory. But you're forgetting about "anchoring" effects. Both surveys asked about deaths in the 14 months *before the invasion*, so my guess is that they both have a similar bias in the baseline numbers.

The ILCS used a recall period of 2 years, but when interpreting the results, the researchers said that they assume that almost all the reported violent deaths happened in the past year.

Finally, if you're saying that the SMART guidelines suggest a recall period of up to 2 years (1 year for baseline mortality plus 1 for crisis mortality), you're wrong. They're talking about two different surveys, each with a recall period of 1 year or less.

Doesn't the fact that death certificates were produced mean that memory issues become largely irrelevant?

By James Haughton (not verified) on 01 Nov 2006 #permalink

James, if there is a recall problem it would probably take the form of earlier deaths being more likely to be forgotten, not later ones to be fake.

Deaths, of course, are not easily forgotten, but dates and whether the deceased was a member of the household may be more fuzzy.

Recall error is a serious issue (I see it in other survey contexts), and is likely to show up as a somewhat overstated increase in the death rate over time. I don't think it has any power in explaining away the later death rate, however.

Here are the disadvantages of a long recall period according to the SMART guidelines:

* Mortality rate may be less relevant to current needs than a more recent mortality rate.

* Important or traumatic events may be recalled as having occurred more recently than they actually did (recall bias).

* Additional information, such as cause of death, becomes increasingly unreliable as the recall period lengthens.

The one that Ragout claims is the "most important" does not appear at all. As James H notes, looking at death certificates makes the last two largely irrelevant.

Tim Lambert writes:

"Note that if it did make a difference, it would show up in the base line number, but Lancet 1 got the same number with a shorter recall period."

But that isn't necessarily significant because of the broad confidence intervals.

Consider this hypothetical:

Actual preinvasion mortality is 8/1000/year. Recall error makes the perfectly sampled reported preinvasion mortality 6/1000/year for a 2004 survey and 5/1000/year for the 2006 survey. This is primarily because of respondents misremembering the date and thinking that some of the deaths happened before the study period. Postwar, death dates are also misremembered, but generally corrected when the death certificate is examined. Sampling error results in a sampled reported death rate of 5.5 per thousand in both surveys. The researcher falsely concludes that recall error isn't a problem because he got the same number both times.
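A minimal simulation of this hypothetical; the recall-biased expected baselines (6 and 5) come from the scenario above, while the sampling error of 0.7 is an assumption added for illustration, not survey data:

```python
import random

# Sketch of the scenario: true pre-invasion mortality is 8/1000/yr, recall
# error pushes the expected reported baselines down to 6 (2004 survey) and
# 5 (2006 survey), and cluster sampling adds noise on top.

random.seed(1)
recalled_mean = {"2004 survey": 6.0, "2006 survey": 5.0}
sampling_se = 0.7   # assumed sampling standard error, per 1000 per year

for survey, mean in recalled_mean.items():
    draws = [random.gauss(mean, sampling_se) for _ in range(10_000)]
    share = sum(5.0 < d < 6.0 for d in draws) / len(draws)
    print(f"{survey}: share of estimates landing in 5.0-6.0: {share:.0%}")

# Both shares are sizeable, so two reported baselines of ~5.5 are quite
# possible even with a true rate of 8: matching baselines alone would not
# rule out this kind of recall error.
```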

There might be a problem with remembering whether one's uncle died of the heart attack before or after March 2003, but I suspect people generally manage to keep straight when their brothers were blown up by car bombs or air strikes.

By Donald Johnson (not verified) on 01 Nov 2006 #permalink

Given that the study reports over 300,000 violent deaths in the year to June 2006 it's hard to get excited about recall problems. For the Iraqis' sake I hope there really is something wrong with it, preferably relating to 2005/06 rather than 2002.

By Kevin Donoghue (not verified) on 01 Nov 2006 #permalink

The SMART guidelines discuss the problem I mention as most important a number of times, for example on page 39:

"Survey respondents sometimes misunderstand questions about mortality in theirhouseholds and tell survey interviewers, for example, that persons who left the household are dead. This would lead to an overestimate of the death rate..."

By the way, this directly contradicts a previous claim of Tim's that confusion about who is a household member does not cause bias in any particular direction. Again, Tim needs to set the SMART guys straight.

And since actual deaths are being reported, just not deaths that *should* be reported, it's irrelevant whether a death certificate was produced or not.

I find Will McLean's scenario pretty plausible: 25% of pre-invasion deaths recalled as happening before 2002, but more recent deaths reported more accurately. After all, new year's day is a pretty good "anchoring" point when we're talking about the beginning of the current year, but not so good after 4 or 5 new years have come and gone.

It's worth adding that this amount of error would explain *all* of the 100,000 excess deaths in the first Lancet study, and about 200,000 of the deaths in the second.

In the first Lancet study there were 21 non-Fallujah violent deaths post invasion and 1 before. It's unlikely people were confused on when those deaths occurred.

The same for Lancet 2, where I think the ratio is 300 violent deaths after the invasion and 2 before.

The effect might be worth thinking about as far as nonviolent deaths are concerned.

By Donald Johnson (not verified) on 01 Nov 2006 #permalink

Ragout, lack of an anchoring point means you tend to overcount deaths since deaths from earlier periods get counted (this is called telescoping). I did not say that confusion about who is a household member produces no bias. I said it biases the result down. And your comment about death certificates being irrelevant makes no sense whatsoever.

Tim Lambert writes:

"Ragout, lack of an anchoring point means you tend to overcount deaths since deaths from earlier periods get counted (this is called telescoping)."

But if they are actually looking at death certificates most of the time, wouldn't that screen out most telescoping?

Tim,

*Death certificates*. The Lancet interviewers asked about deaths among a certain group of people. If they're told about deaths among a broader group of people, there's an upward bias, whether the deaths are real or not. It's a subtle point, I know.

By the way, your apparent belief that it can't hurt to count more deaths in a mortality survey, as long as a death certificate is produced, is just the error I think was made by the Lancet interviewers and respondents.

*Telescoping*. I knew that you knew more about recall bias than you pretended to! Anyway, telescoping means that if you ask about deaths in the last 6 months, people often tell you about deaths in the last year. When asking about deaths in the period between, say, 3 years ago and 4 years ago, I think McLean's guess as to likely errors is better than yours, but any evidence on this point would be welcome.

*Confusion about household membership*. As I recall, you made an argument whose logical conclusion was: no bias. If you actually think the bias is towards underestimating the death rates, then I guess I gave you too much credit, but the point remains. The SMART folks say that you're in error: the bias is upwards.

Kevin writes, "Given that the study reports over 300,000 violent deaths in the year to June 2006..." with the implication that this recent figure is especially likely to be right.

Tim, maybe you could explain to Kevin why "telescoping" implies that the recent figure is especially likely to be wrong.

But if they are actually looking at death certificates most of the time, wouldn't that screen out most telescoping?

It should. Even if the certificates don't list actual dates of death--which is unlikely since they do list both circumstances of death and date of birth--they come with an issuing date. (This came up in the notorious case of the rape and murder of an Iraqi teen, for instance.) If survivors are misremembering the death date of the deceased by months or years, the certificate will correct them.

Of course there's the theoretical possibility that recall bias is still distorting the results in the 13% of cases without death certificates, but the authors checked up on that; as they say in the paper, "The pattern of deaths in households without death certificates was no different from those with certificates."

By Anton Mates (not verified) on 01 Nov 2006 #permalink

Ragout,

"Survey respondents sometimes misunderstand questions about mortality in theirhouseholds and tell survey interviewers, for example, that persons who left the household are dead. This would lead to an overestimate of the death rate..."

By the way, this directly contradicts a previous claim of Tim's that confusion about who is a household member does not cause bias in any particular direction.

Why do you think the above quote has anything to do with confusion over household membership? So far as I can see, it has to do with confusion over whether someone actually died, or simply left the household.

Perhaps you're interpreting it as--the respondent tells the surveyer about the death of a former household member, when s/he should have kept quiet? But the SMART guidelines explicitly say that information should be taken on former members. From page 29:

"We need to find out how many people have been at risk during the recall period--not just those in the house at the time of the survey. Therefore household members who have left the household should be counted."

By Anton Mates (not verified) on 01 Nov 2006 #permalink

Ragout,

Actually the main thing that strikes me about the 2005/06 figure is simply that it is (a) alarmingly high and (b) relevant to the current situation. Arguments about what the death rate might have been in 2002 are a bit pointless at this stage. The debate now is about whether to stay the course, cut and run, or cut and leave in an orderly manner.

Convincing arguments that the 2005/06 figure is wrong (meaning too high) are welcome. However I could give a surveyor accurate dates for deaths in my own family going back ten years or more; also where they lived their last months, if that's crucial to the argument. Why should an Iraqi be different? It really isn't the same as trying to remember when my bicycle was stolen. Also, death certificates for recent deaths are less likely to have been lost.

By Kevin Donoghue (not verified) on 01 Nov 2006 #permalink

Why might the 2006 Johns Hopkins study be wrong? To start with, it relies on dividing the country into equal-size clusters and sampling a group of households in each cluster. Accuracy depends on reasonably accurate estimates of population by region, and one challenge is that such estimates are hard to find in Iraq. The last nationwide census was in 1993. Further, there's reason to question the accuracy of the Saddam-era censuses: the ruling party had strong motives for inflating the number of Sunnis at the expense of other groups. The problem is compounded by the study's use of a two-year-old estimate.

If the population is overestimated in the more dangerous parts of Iraq, and underestimated in the less dangerous parts, then the more dangerous parts will be overrepresented in the sample. There's good reason to think that this was the case. The Sunni triangle would have benefited from any Saddam-era selective distortion, and is generally considered to be one of the more violent parts of Iraq; indeed, this is reflected in the Johns Hopkins study. Further, you would expect the more violent regions to lose population share relative to the rest of Iraq since 2004, both because of flight and because of higher death rates.

Another problem might be nonrandom sampling of households. As I understand the protocol, the first household in the cluster was randomly chosen, and then the team would go to the nearest neighboring household, and so on. But in a lot of communities, a household will have two neighbors that are equally close. Ideally, the team should flip a coin, but that might not be what happens. "Asking about people killed in the fighting, are you? Then you should visit Widow Tikriti. She's right next door".
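For concreteness, here is an illustrative sketch of such a nearest-neighbour walk with the coin flip made explicit. This is a toy model, not the actual field protocol; with continuous coordinates exact ties are rare, but on a street grid they would be common, which is where non-random tie-breaking could bias the sample:

```python
import random

# Toy model of a nearest-neighbour household walk. The key line is the tie
# break: equally close unvisited households are chosen at random rather than
# by, say, local advice about which neighbour to visit.

def nearest_neighbour_walk(households, start, n_to_visit, rng):
    """households maps an id to (x, y) coordinates; returns visited ids in order."""
    visited = [start]
    while len(visited) < n_to_visit:
        cx, cy = households[visited[-1]]
        candidates = [h for h in households if h not in visited]
        def dist2(h):
            x, y = households[h]
            return (x - cx) ** 2 + (y - cy) ** 2
        best = min(dist2(h) for h in candidates)
        tied = [h for h in candidates if dist2(h) == best]
        visited.append(rng.choice(tied))   # the coin flip: break ties at random
    return visited

rng = random.Random(0)
homes = {i: (rng.random(), rng.random()) for i in range(100)}
print(nearest_neighbour_walk(homes, start=0, n_to_visit=5, rng=rng))
```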

Anton,

You are misunderstanding this passage:

"We need to find out how many people have been at risk during the recall period--not just those in the house at the time of the survey. Therefore household members who have left the household should be counted."

"At risk" implies alive, so this passage is discussing which living people to count, not which dead ones. The SMART guidelines are saying that household members who have left should be counted as *alive* for the portion of the recall period *before* they left. What happened to them after they left is irrelevant, and should not be counted, even if they died.

See how tricky this is? See how easily even someone highly educated & intelligent can misunderstand this point even when reading a report written in their first language? (At least I assume all these things are true of you). Despite this, it seems that you would have made exactly the error I think the Lancet interviewers made: counting deaths when you shouldn't have.

That's why the Lancet's generally vague and ambiguous discussion of who should be counted makes me highly doubtful that the interviewers and respondents got it right.

Ragout:

it seems that you would have made exactly the error I think the Lancet interviewers made: counting deaths when you shouldn't have.

As far as I can tell, the interviewers' instructions were quite clear:

Deaths were recorded only if the decedent had lived in the household continuously for 3 months before the event

By Meyrick Kirby (not verified) on 01 Nov 2006 #permalink

Ragout:

What happened to them after they left is irrelevant, and should not be counted, even if they died.

For the recall error to inflate the excess deaths of the Lancet study, requires the error to be made more often in the more recent periods compared to the pre-war period, which is unlikely. Incorrectly including deaths (i.e. when the deceased was not resident for 3 months prior) would be more likely to occur in the period furthest from memory (i.e. the pre-war period).

By Meyrick Kirby (not verified) on 01 Nov 2006 #permalink

All of Ragout's presumptions of superiority to the poor saps who actually did the work on the ground in Iraq are explained in Ragout's mind by this: "And since actual deaths are being reported, just not deaths that should be reported, it's irrelevant whether a death certificate was produced or not".

Someone sympathetic to Ragout's prejudices might care to try to explain these words to the rest of the world on Ragout's behalf, because despite it being the only semblance of a relevant argument in any of Ragout's volumes of output on his/her special subject (the whitewashing of the ill effects of our war of choice on Iraq), Ragout is conspicuously, tragically, unable to make any sense or point of it by him or herself. Anybody else care to try?

Ragout is conspicuously, tragically, unable to make any sense or point of it by him or herself. Anybody else care to try?

Good lord! And subject us to more tap-dancing and denial?

Retract thy statement, or puppies will die from the side-effects of the bandwidth-eater!

Best,

D

I might add that the pre-war estimate is in line with surrounding countries, although we do get a few people around arguing that it is too low.

By Meyrick Kirby (not verified) on 01 Nov 2006 #permalink

Ragout,

"At risk" implies alive, so this passage is discussing which living people to count, not which dead ones. The SMART guidelines are saying that household members who have left should be counted as alive for the portion of the recall period before they left. What happened to them after they left is irrelevant, and should not be counted, even if they died.

Yes, of course. The point is that those people should at least be asked about, according to the SMART authors, so they can't possibly be saying that the respondents' mere mention of their (post-departure) deaths is an automatic source of bias. On the contrary, precisely because the surveyor asked about those people, and took down information about the dates they stayed in the households, s/he can remove their deaths from consideration. As Roberts et al. did.

By Anton Mates (not verified) on 01 Nov 2006 #permalink

Meyrick Kirby writes:
*As far as I can tell, the interviewers' instructions were quite clear: Deaths were recorded only if the decedent had lived in the household continuously for 3 months before the event.*

Except that (1) there must have been more to the instructions, since "lived in the household continuously for 3 months" would rule out infant deaths and deaths in hospitals, and (2) Roberts wrote to the BBC it was "most of the nights during the three months," contradicting the paper, which said "continuously."

More importantly, since "household" is vaguely defined, it's also going to be unclear who's lived in the household for 3 months.

Roberts wrote to the BBC it was "most of the nights during the three months," contradicting the paper, which said "continuously."

Good grief, is that really what passes for meaningful criticism?

Apparently hyper-literalism is a problem with more than just religious fundamentalists.

Although, given the death rates of neighboring countries, I tend to believe the study's pre-invasion 5.5 death rate, isn't Roberts being a little disingenuous in his response to the question about comparing it to the UN estimate of 10? He says, "if we are somehow under-detecting deaths, then our death toll would have to be too low, not too high." But that assumes that any error affecting the count of pre-invasion deaths necessarily affects post-invasion deaths the same way, right? And without knowing what that error is (if any, of course), you can't make that assumption, can you?

MartinM,

The fact that Roberts has given differing reports of the exact wording of several questions isn't the most important issue, but it's not trivial either. Good surveyors are hyper-careful about the exact wording (see the ILCS for a better way to word questions about living in a household for 3 months). To take another example, interviewers are supposed to read the questions exactly as written, not give extemporaneous speeches about the importance of the survey as the Lancet interviewers did.

Since Roberts clearly is not that careful about the exact wording of questions, I have a lot less trust in the quality of his survey work. Your mileage may vary.

Ragout:

Any normal person would include those types of infant deaths, as you know only too well.

More to the point, you've missed my other point, so I'll try again. The period most likely to be affected by your hypothesized overcounting of deaths is the pre-war period, yet the estimated pre-war mortality rate is in line with surrounding countries and does not appear to be too high (as demonstrated by the numerous complaints that it is too low).

By Meyrick Kirby (not verified) on 02 Nov 2006 #permalink

Glenn,

But that assumes that any error affecting the count of pre-invasion deaths necessarily affects post-invasion deaths the same way, right? And without knowing what that error is (if any, of course), you can't make that assumption, can you?

The error that Ragout is hypothesizing is due to the difficulties people have in accurately reporting past events. The period of time most likely to be affected by such problems is the period furthest in the past, i.e. the pre-war period.

By Meyrick Kirby (not verified) on 02 Nov 2006 #permalink

More importantly, since "household" is vaguely defined, it's also going to be unclear who's lived in the household for 3 months.

It's just as precisely defined as in the SMART guidelines, while we're on that: "Household definitions are culturally specific and need to be decided in the field. A frequently used definition is "who slept here last night and ate from the same cooking pot"."

To take another example, interviewers are supposed to read the questions exactly as written, not give extemporaneous speeches about the importance of the survey as the Lancet interviewers did.

You're not seriously suggesting that a survey will be better if interviewers say absolutely nothing other than to read each question off the page, regardless of how the respondent reacts?

By Anton Mates (not verified) on 02 Nov 2006 #permalink

Way upthread, Ragout wrote:

I see Roberts endorses the SMART initiative[...]Does the 4 1/2 year recall period used in Burnham et al follow the SMART guidelines? Well, no. The experts say that it's "usually advisable" to limit recall to 3 months, and caution against excessively long recall periods. "Recall periods longer than one year should not be used," say the guidelines (pp. 31-32).

The SMART Initiative is designed to measure short-term crisis mortality. That's why they recommend very short recall periods, and don't recommend long ones. Basically, they're focusing on spike mortality with rates in the ballpark of 50 - 100 per thousand person-years of exposure (or higher), where they already know that rates have increased and they're trying to get an estimate of the size of the spike. That's not the situation in Iraq, where a substantial number of critics appear not to believe that the mortality has increased at all. In those cases, a cohort mortality method is probably the best way to estimate the change.

In addition, the Demographic and Health Surveys use much longer recall periods for measuring mortality than the Roberts or Burnham studies.

Sigh...

"This UNDP survey covered about 13 months after the invasion. Our first survey recorded almost twice as many violent deaths from the 13th to the 18th months after the invasion as it did during the first 12 (see figure 2 in the 2004 Lancet article)."

Note that this is only true if you include Fallujah in L1. Roberts omits this point. Strange. Though perhaps not, because doing so would let everyone know that in order to make this argument he's relying on an estimate of almost 300,000 "excess" deaths with 84% caused by coalition forces by September 2004, findings which are both at odds with L2, not to mention ILCS or anything else you care to name.

Getting back to ILCS/UNDP ... this 300,000 would then have to be compared (minus about 60-70,000 for crime and non-violent deaths) to the 24,000 estimated in UNDP by April 2004, resulting in a huge divergence. And the Falluja-based increase after April 2004 would not bring the two back into any kind of line, as there were significant numbers of Falluja deaths before April 2004 as well.

This ground has been covered before as, a few months ago, Tim had to strain rather mightily to maintain some ray of possibility that L1 _excluding Falluja entirely_ might possibly fall inside the outer edge of the ILCS CI.

However, it follows directly that the inclusion of any of the Falluja data in L1 drives away all hope of any kind of convergence between the two. And including all of it, as Roberts does above (undeclared, of course), would drive the comparable Lancet figure way off from the ILCS point estimate, and way outside its CI.

Roberts, then, after bringing Falluja in (unannounced) in order to make his claim about the period between UNDP and L1, goes on to say:

"Thus, the rates of violent death recorded in the two survey groups are not so divergent."

Of course his preface has assumed something that in fact does make them "so divergent". He's just not declaring which assumptions he's using when. 'Whichever is most expedient to score a point and parry away an inconvenient question at this particular moment' seems to be the rule.

Also, on another point: I recall that the original list of questions by Joe Emersberger (a Media Lens regular), who I should say did a fairly good job of putting together this list and representing a lot of the important questions being asked, included a question about the MoH figures cited (or rather asserted with a vague attribution) in the supplement to the Lancet report, figures which I had previously said on this blog were wrong. I notice this question was not answered. I wonder why that is. Hmm....

I see Josh is waving the UNDP-ILCS around again. Can he, or anyone, tell me what a "war related" death is? When did the war end, or is it still in progress? Were the assaults on Fallujah and the like "war", or were they merely counter-insurgency?

And does anyone have even a clue what figures the ILCS got in relation to deaths by disease, traffic accidents and "Other" causes? Is there any corroboration to the claim by Roberts that they got a remarkably low figure for total mortality?

Is David Kane hounding Jon Pedersen about this?

By Kevin Donoghue (not verified) on 02 Nov 2006 #permalink

The fact that Roberts has given differing reports...

...is not a fact at all in this case unless you insist on a remarkably stupid definition of 'continuously.'

Meyrick, thanks for your comment, but I wasn't referring to anything Ragout had to say. My comment was directed at Roberts' assertion that the fact that his study found a lower pre-invasion mortality rate than the UN was of no moment, because that necessarily meant that the Lancet study undercounted all deaths. That seems like a questionable proposition to me, because -- assuming there was any error in the Lancet study count of pre-invasion mortality (which I doubt) -- then without knowing what that error was, you can't possibly know that it applies equally to the post-invasion count.

For example -- and obviously I don't know if any of this is true -- suppose the Lancet study for some reason oversampled Sunni households, and further suppose that Sunni mortality was less than average pre-invasion, but greater than average post-invasion. (This certainly seems plausible to me.) This error would give you an undercount of pre-invasion mortality, but would give you an overstatement of post-invasion mortality (and also of the number of excess deaths). All I'm pointing out is that for Roberts to dismiss the concern over their pre-invasion mortality numbers by saying that, if anything, it necessarily cut in favor of the study's calculation of excess deaths being too low, was inaccurate and maybe even a bit disingenuous on his part.

Glenn,

We can all come up with alternative explanations of data; that's why in science hypotheses are never proved, only disproved. Roberts' assertion was no doubt made because it will eliminate a good chunk of the alternative explanations, such as Ragout's, but not all (which no statement can ever do).

By Meyrick Kirby (not verified) on 03 Nov 2006 #permalink

Glenn,

My comment was directed at Roberts' assertion that the fact that his study found a lower pre-invasion mortality rate than the UN was of no moment

I haven't read the UN report, and since I have my viva in 10 days, I'm unlikely to do so. But are you sure the UN's pre-war mortality rate was higher than the Lancet study?

I say this because the overall non-violent mortality estimate was, I am told, very low compared to our 5.0 and 5.5/ 1000 /year estimates for the pre-war period which many critics (above) claim seems too low.

By Meyrick Kirby (not verified) on 03 Nov 2006 #permalink

The UNDP/ILCS report is barely compatible with Lancet 2 if you take the low end of the CI of the violent mortality rate for the first 13 months from L2 and assume it is about 50 percent criminal murders. Table 3 says the violent mortality rate CI for the March 2003 - April 2004 period is 1.8-4.9 per thousand per year, so at the low end that's about 50,000 deaths for 13 months. Table 4 says that 25 of the 45 violent deaths in that first period were from "unknown" causes, 4 were from "other" and 16 were from the coalition. They don't give any estimate that I see for the CI for the percentages they calculate from those numbers, but 50,000 total violent deaths is compatible with the ILCS figure of 19-28,000 war-related deaths.
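A rough check of that arithmetic; the CI bounds and the 13-month window are from the comment above, while the population of about 26 million is my assumption:

```python
# Convert the table 3 violent mortality rate CI into death counts.

population = 26_000_000
months = 13 / 12

for rate in (1.8, 4.9):   # 95% CI bounds, violent deaths per 1000 per year
    print(f"{rate}/1000/yr -> {rate / 1000 * population * months:,.0f} deaths over 13 months")

# The low end gives ~50,700; halving that for criminal murders leaves ~25,000,
# inside the ILCS range of 19,000-28,000 war-related deaths quoted above.
```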

Not that I really think the two studies are compatible, but they do overlap at the margins.

Robert (not Roberts) crunched the numbers for Lancet I with and without Fallujah. Someone could probably find the link--I think that with or without Fallujah the spread is wide enough to be compatible with Lancet II. Much or possibly most of the violence in Fallujah in Lancet I (and in actuality) might have occurred after the UNDP survey was finished.

By Donald Johnson (not verified) on 03 Nov 2006 #permalink

Donald,

Aren't you making some pretty strong assumptions about what Iraqis understand by the term "war related" death (or whatever that looks like in Arabic)? Looking at the geographical breakdown of their figures I'm inclined to think they didn't regard what happened after the fall of Baghdad as part of the war at all.

I'm not referring to the fact that they may have done most of their fieldwork before the insurgency took off. Tim Lambert has made that point in other threads. What I'm getting at is that we don't have any clear idea what that 24,000 figure is supposed to represent. Therefore I think the whole argument about whether the ILCS contradicts other surveys is a waste of time.

N = 24,000 but what the hell is N? If anybody knows, please enlighten me.

By Kevin Donoghue (not verified) on 03 Nov 2006 #permalink

Meyrick, I think maybe we're talking past each other. You refer to "Roberts' assertion" as one that will "eliminate a good chunk of the alternative explanations." I'm assuming you must be referring to his assertion that his two studies found similar pre-invasion mortality rates. I'm talking about this statement:

"Note that if we are somehow under-detecting deaths, then our death toll would have to be too low, not too high."

I don't see how that "eliminates a good chunk of the alternative explanations". But in any case, the point I'm making is, I don't think it's true. My Sunni oversampling suggestion was not intended as an explanation of the Lancet data, just an example to demonstrate that the proposition, "If the study underestimated pre-invasion mortality, then it also underestimated post-invasion mortality" is not necessarily true, as Roberts claims it is.

My point -- which I obviously am not making very clearly -- is not that in fact the Lancet numbers are wrong, only that Roberts incorrectly sought to cut off concern over the discrepancy between his numbers and the UN's (and, by extension, between his numbers and the "real" number) by making a false statement about the effect of that discrepancy, if it were shown to be true.

As for whether the UN numbers were in fact 10 vs Roberts' 5.5, I have no idea, but Roberts' answer seems to assume that there is in fact a difference, because he suggests that the UN numbers are "outdated."

On my reading, Roberts was just trying to say something like this: Suppose the non-violent death rate is substantially higher than 5.5. That would imply that the JHU survey is missing an awful lot of non-violent deaths. If that's so, it is surely missing a lot of violent deaths as well.

Put like that, it's not a logically compelling argument but he probably didn't think that particular question deserved much thought.

By Kevin Donoghue (not verified) on 03 Nov 2006 #permalink

Meyrick Kirby wondered:

I haven't read the UN report [...b]ut are you sure the UN's pre-war mortality rate was higher than the Lancet study?

When you write "UN report" do you mean the UNDP/ILCS report? They did not ask date of death so that particular report cannot be used to make any estimate of pre-invasion mortality.

And, could Donald have been thinking of [this](http://anonymous.coward.free.fr/misc/roberts-iraq-bootstrap.png)?

That's probably what I was thinking of, Robert. The plug-in abacus I use at home doesn't let me look at your plots. I'll see it Monday.

You might easily be right, Kevin. "War-related deaths" might be interpreted differently by different people. A person might think it refers to air strikes and invasion forces coming in, or they might think it refers to that plus checkpoint killings, but not insurgent-caused deaths which they might think are criminal murders, etc....

This argument came up before--I used it on Josh at medialens, iirc, several months ago, having seen it someplace or other. He didn't buy it. But it might be right, so it weakens the case for pitting the ILCS study against L2.

By Donald Johnson (not verified) on 03 Nov 2006 #permalink

I have just had an email from Sean Gourley, who reckons the total number of deaths in L2, for the total period since the war and discounting main street bias, is 150,000-200,000.

Roberts seems to display an Irving-like propensity for obfuscation on the casualty disparities.

Violent deaths, March 2003 - April 2004:

UNDP: about 24,000

L1 minus Falluja: 13/21 * 60,000 = 38,000
(13 deaths March 2003 - March 2004 inclusive, 8 deaths April - September 2004; 60,000 total violent deaths March 2003 - September 2004 in L1)

L2 first year: 78,080
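Collecting those three figures in one place; the 13/21 proration and all inputs are from the comment above:

```python
# Pelle's comparison for March 2003 - April 2004. The 13/21 factor prorates
# L1's 60,000 non-Falluja violent deaths (March 2003 - September 2004) down
# to the first 13 months using his 13-of-21 split of recorded violent deaths.

l1_prorated = 13 / 21 * 60_000    # ~37,143; rounded above to 38,000

estimates = {
    "UNDP/ILCS (war-related)":    24_000,
    "L1 minus Falluja, prorated": l1_prorated,
    "L2 first year":              78_080,
}
for source, deaths in estimates.items():
    print(f"{source:30s} {deaths:>9,.0f}")
```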

Has anyone got suggestions for the disparities? Does UNDP not count deaths from criminal activity?

I am off to Poland for a week, irony of ironies, including Cracow (near Auschwitz). It's a press trip with a number of German and Austrian journalists. I wouldn't be surprised if they asked me about the Lancet study.

Donald, is your point that the disparity between the Lancet studies on the one hand and the UNDP study on the other is that Lancet counts all violent deaths while UNDP (clarification from my earlier post) includes violent deaths except crime-related ones?

But what if Iraqis actually attribute criminal deaths to being war-related? (Since they are, in the loose sense of the word: there would have been no rise in sectarian murders if Saddam had been in power.)

Then the disparity is a true one that Roberts needs to explain.

I have scanned the UNDP report and cannot find any entry for deaths.

Where is it and does it have a breakdown of deaths per category?

Why not just add up "Other" deaths and war-related deaths to get a figure for total violent deaths and compare it to the Lancet 1 and 2 figures? Then you'll avoid the problem of individuals putting insurgent deaths in either the war-related or the "Other" category.

Does the 24,000 figure only refer to war-related deaths? If so, why doesn't Roberts mention his estimated total for violent deaths - which he claims puts the UNDP in contention with other reports - in his disingenuous response to the last question?

Pelle,

You seem to be typing in a hurry. It's unclear what you are after. If you want the UNDP/ILCS totals you are out of luck unless you can get Jon Pedersen to release them. They only disclosed "war related" deaths and nobody seems to know what that means. Was Abraham Lincoln's death war-related? How about Steve Biko? Or, since you are going to Auschwitz, what about Adolf Eichmann? To call his execution war-related is insulting to warriors, to put it mildly, but if it hadn't been for the war he would probably have died in bed.

I'd love to see how Gourley got his numbers. Did he do any fieldwork? (I jest, of course.)

By Kevin Donoghue (not verified) on 04 Nov 2006 #permalink

Kevin writes:

"If you want the UNDP/ILCS totals you are out of luck unless you can get Jon Pederson to release them. They only disclosed "war related" deaths and nobody seems to know what that means."

UNDP/ILCS also lists infant mortality.

Even a fairly minimalist interpretation of "war related" makes it difficult to reconcile the two estimates. Assuming 70,000 dead from Burnham et al in the ILCS period, that's at least 31,000 from identified coalition and insurgent killings. But there's also some number in the "unknown" category caught in crossfire, and a rather large number of invasion-phase fatalities among soldiers living in barracks.

While some of the Burnham et al. deaths would probably be classified as "crime" in ILCS, it's hard to imagine someone killed by an airstrike post "mission accomplished" being recorded as anything other than war-related.

...it's hard to imagine someone killed by an airstrike post "mission accomplished" being recorded as anything other than war-related.

You mean a death is war-related even if it happens after the war is over? That's weird.

But let's say it's true. Now I know Americans expend a lot of munitions to kill very few people. But it's hard to believe that out of the entire population of Nineveh, Al-Tameem, Diala, Al-Anbar and Salahuddin they killed less than 4,000 people in that first year. That includes soldiers from that region killed in the initial fighting, guerrillas, civilians caught in the crossfire and so forth.

Remarkable, if true.

By Kevin Donoghue (not verified) on 04 Nov 2006 #permalink

Kevin,

Gourley showed me a map of Baghdad with Sunni and Shiite sectors separated by main highways; the cross streets are guarded by local militias who form roadblocks. He explains the high toll of men killed by reference to the fact that unemployed men loll around these cross streets for much of the day. He doesn't explain the 150k figure.

To solve the problem of "what is a war-related death", why doesn't Pedersen just publish all stats that could conceivably cover the same ground as the Lancet 2 figure of 78,000 violent deaths for the year March 2003 - March 2004? That is, war-related deaths plus "Other" deaths. There is no other category that includes violence in the options UNDP gave; only "Other".

But he keeps quiet. Why do statisticians keep concealing these things and expect us to trust them as a profession?

Even pro-Roberts people - and I counted myself as one - would surely agree that the answer below is disingenuous.

This UNDP survey covered about 13 months after the invasion. Our first survey recorded almost twice as many violent deaths from the 13th to the 18th months after the invasion as it did during the first 12 (see figure 2 in the 2004 Lancet article). The second survey found an excess rate of 2/1000/year over the same period, corresponding to approximately 55,000 deaths by April of 2004 (see table 3 of the 2006 Lancet article). Thus, the rates of violent death recorded in the two survey groups are not so divergent.

Why? UNDP does not

Hello all. Have been following this debate with much interest. Pelle, are you able to share Dr. Gourley's email, and any details of how the main street bias effect was quantified? Thanks in advance.

Okay, I am going to post all of Sean's emails here, earliest first.

Hi Pelle,

I'll answer your email in three parts if that is okay;

Q: In your argument, the big death tolls happened in the residential cross streets, giving a bias, since these were polled. But how do you account for the low number of women and children?

In order to understand the male/female death ratio recorded by the L2 survey you need to look at the daily activities of the sampled population.
________________

We know as a result of the religious influences and violence in Iraq that it is not considered safe for women to travel outside of their homes. See this report from TIME magazine and their reporter based in Baghdad, Bobby Ghosh.

How are the Iraqi women being treated?
Carolyn Altman
Avon Lake, OH

BOBBY GHOSH: .......There's nothing in the new Iraqi constitution that prevents them from enjoying the usual freedoms, but the realities on the ground have changed. Iraqi politics is now dominated by Islamicist parties -- Shi'ite and Sunni. And many neighborhoods are controlled by religious militias or jihadi groups. Some of them openly demand that women confine themselves to their homes. Even where there are no such "rules", many women say they feel safer staying indoors, or wearing the veil and abaya when they step out......

This is consistent with most accounts of daily life for women in Iraq and may well be one of the reasons that there was a 99% response rate amongst those surveyed.

_____________

So women tend to stay at home, what are the men doing?

The UN puts the unemployment rate at 27% for the population, but the Washington Post goes so far as to say 50% of the population is unemployed.

[Link](http://www.washingtonpost.com/wp-dyn/content/article/2005/06/19/AR20050…)

and many more hold only part-time jobs or have irregular work.

This means that the men who are outside of the house are not at work; rather, they are passing time in local cafes, as is typical in this part of the world.

see for example this photo essay from the BBC;

[Link](http://news.bbc.co.uk/1/shared/spl/hi/picture_gallery/06/middle_east_ir…)

and again from TIME magazine

What about everyday things like groceries, libraries, coffee shops, religious services-weddings, funerals? Nancy Gainesville, GA

BOBBY GHOSH: .......I don't know any libraries that are still open, but coffee shops and tea houses continue to do business. Iraqis need someplace to go where they can discuss politics!

-> based on such accounts of daily life in Iraq's main cities, we believe that men are the overwhelming majority of people on the streets in Baghdad.

-> attacks kill more people on the streets than in houses.
(this is probably true for everything except airstrikes, and even then the house will afford the occupants a large degree of protection.)

-> thus the majority of people dying in the sampled regions will be male - as is found to be the case in the survey

Proximity to your house

Because of the nature of the violence, if you are a sunni, you don't go into shi'ite areas, and if you are shi'ite you don't go into sunni areas - generally you don't go far from your neighbourhood unless you have to.

remember the authors need to show that there exists unlimited mixing of the populations in order for no bias to occur. Given the situation in Iraq this seems unlikely.

-> the majority of people on the streets are male

-> these men are more likely to be found near their home than far away

-> thus those living close to or in a danger area (i.e. major avenue or large residential street directly connected to the major avenue) are more likely to spend time in the area than those from outside the danger area.

-> so the most common person to die near a main street is a male who lives near this street.

Urban circulation should show "distance bias", i.e., the closer a person lives to a particular street, the more likely he is to turn up on that street at any point in time. When choosing, for example, a cafe or a food shop, distance is a factor: you will tend to go for one nearby. So distance bias becomes a form of main street bias. The circulating people in the sampled area are at more risk than the circulating people in the unsampled area.

remember, we are not saying that all people in a danger area are from or near that area, just that the population in the danger area is biased towards people who live in or near the danger area.

SEAN 2

I'm not sure if you have asked Roberts about this - but I would be very interested to know three things

(1) how exactly did they determine what was a 'main road'?
-> did they have a list, or did they ask locals, or did they look at a map

(2) how many of these main roads were there per district?
-> 2, 5, or 50+

(3) when they randomly selected a main road - how were the roads weighted?
-> presumably the main roads in large cities were weighted differently from townships with only 100 people

I think that answers to these questions from Roberts would clear up a lot of the confusion - hopefully you get a chance to ask them. I know a lot of people would be interested in his responses.

Sean

SEAN 3

Hi Pelle,

one final email on the L1 vs L2 violent death rates

> Is the disparity between violent death toll estimates attributable to the absence of Fallujah in L1?

Let's look at the two surveys

There were 98k deaths recorded in L1 without Anbar province, of which
60,000 were due to violence. The Falluja data was not included in the
final L1 analysis as it was seen to be an "extreme statistical outlier".

Thus in order to determine if the two surveys (L1 and L2) agree we have to compare the violent death estimates from both with Anbar province removed.

~1.2 million live in Anbar, and three clusters were sampled from this
province in L2

Difference between surveys:

145,000 - 60,000 = 85,000

thus in order for the two surveys to agree more than 60% of the total deaths must occur in this one province.

we are then looking at 60,000 violent deaths across all of Iraq except for Anbar province, and 85,000 deaths in Anbar province alone. If this is indeed the case, surely we are looking at the same problem that the authors had with the L1 data - namely an 'extreme statistical outlier'.

Why then was it not discussed in the paper?

___________

of course this is setting aside the issues of the huge confidence
intervals associated with the L1 paper

-> violent deaths show an increase of between 8.1-fold and 419-fold (95% CI) without even including the Falluja data.

The authors of the paper seem to invoke the Falluja data point when
it suits them, and leave it out when it doesn't.

The basic question then is "How did the authors account for a possible street-bias in L2, quantify its effect and hence correct the L2 data?"

If there is no bias - show this to be the case. Invoking the 'extreme outlier' of Falluja from the L1 survey, with its implicit bias, is not a robust scientific response.

Sean
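An editorial check of the subtraction in SEAN 3 above; both inputs are Sean's figures as quoted:

```python
# Verify the gap between Sean's L2 and L1 (ex-Falluja/Anbar) violent-death figures.

l2_violent = 145_000      # Sean's L2 comparison figure
l1_violent = 60_000       # L1 violent deaths excluding Falluja

gap = l2_violent - l1_violent
print(f"gap: {gap:,}")                      # 85,000, as stated
print(f"share: {gap / l2_violent:.1%}")     # ~58.6%, which the email calls "more than 60%"
```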

SEAN 4

> No reply from Roberts. Ha! Bad move.

that is too bad, I think an exact list of the main roads used would
help everyone know precisely how the sample was collected.

> But the Iraq debate in parliament yesterday several
> times mentioned hundreds of thousands.....
> so the argument seems won

yeah in some ways science has a time scale of months, and current
events have a time scale of days. Thus once a number is put into the
media it can very quickly stick.
> Do you have a more honest ballpark assessment?
> More than 50,000 surely
>

it's difficult to know precisely how many Iraqis have died. Our simulations that we have been conducting show that a bias of between 3 and 4 is likely given the sampling procedure described by the Lancet.

Removing the main street bias would then put the total number of
deaths at around 150,000 to 200,000. There may be other issues with
the survey that reduce this number further, but main street bias
alone can increase the total number of deaths by 300-400%.

The point to make here is that Roberts et al collected data in
difficult conditions, now they must look at the biases brought about
by their sampling methodology. If they spell out openly and clearly,
exactly how they collected their data, what main roads they used etc,
then we can all get a much better estimation of the actual violence
occurring in Iraq.
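Gourley does not show his working here, but the quoted range is consistent with simply dividing L2's headline estimate of roughly 601,000 violent deaths (a figure not quoted in this thread) by the claimed bias factor of 3 to 4. That reading is an inference of mine, not his stated method:

```python
# Inference only: dividing L2's ~601,000 violent-death estimate by the
# claimed bias factor reproduces the quoted 150,000-200,000 range.

l2_violent_deaths = 601_000

for bias_factor in (3, 4):
    print(f"bias factor {bias_factor}: {l2_violent_deaths / bias_factor:,.0f}")
# -> 200,333 and 150,250
```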

Well isn't that just precious? "... 150,000 to 200,000. There may be other issues with the survey that reduce this number further, but main street bias alone can increase the total number of deaths by 300-400%" [my italics]

To put the finishing touches to this triumph of the failed physicists over messy reality on the ground in Iraq - somebody just call for Lubos would you? Meanwhile perhaps the physickiness folks might make public their methods, assumptions and data to better allow for a little bit of scrutiny by their peers (better still, some scrutiny by people who have some kind of a clue about the subject matter)? Jackasses.

Gourley says:

>Our simulations that we have been conducting show that a bias of between 3 and 4 is likely given the sampling procedure described by the Lancet.

I can't see how you could get this from any reasonable set of assumptions.

I, for one, would like to see the details of their simulations.

Given the history of such things, I would take money that the physicists simply disappear and there is no paper. The excuse will be that they could not get information from Roberts, et al. It would be good to hold their feet to the fire in six months or so.

Eli said: "I would take money that the physicists simply disappear and there is no paper....It would be good to hold their feet to the fire in six months or so."

In six months? By then, everyone will have forgotten.

How about holding Gourley's feet to the fire now?

Gourley should be forced to put up or shut up.