IBC takes on the Lancet study

Iraq Body Count has published a defence against some of the criticism they have been receiving. The Lancet study implies that there are about five times as many Iraqi deaths as the IBC number. They do not accept this and so are arguing that the Lancet estimate is too high and is not corroborated by the ILCS:

Comparisons between the Lancet study and ILCS have been attempted in the past, one of the best-known being by British activist Milan Rai. His analysis concludes:

"If we crudely scale up the UNDP [IMIRA] figure to take account of the longer Lancet time period, we reach a figure (33,000) which is exactly the Lancet-derived figure of 33,000 violent deaths due to military action."

This widely cited conclusion is wrong, for at least two reasons.

First, the correct Lancet figure for combat-related violence is nearer 39,000 than 33,000. The incorrect 33,000 figure was published by a blogger named Tim Lambert, and accepted uncritically by Rai. But data from the Lancet study itself shows that only a third of 57,600 violent deaths were due to criminal activity, leaving 38,400 combat-related violent deaths. A later re-analysis of Lancet data by the Small Arms Survey placed this figure at 39,000.

In fact, Rai did not uncritically accept my figure --- he asked me to explain how it was derived and included the explanation immediately before the paragraph quoted by IBC:

Each death recorded in the Lancet study represents 3,000 deaths in Iraq in the period under consideration. Outside Fallujah, there were nine deaths caused by coalition forces and two by the insurgents. By simple multiplication, this results in 33,000 'war-related' deaths according to the UNDP definition of this term, for the period March 2003-September 2004.

The only difference between my analysis and the Small Arms Survey one is that they included two violent deaths of "unknown origin" as well as the nine coalition- and the two insurgent-caused deaths. The Lancet study describes the breakdown like this:

Table 2 includes 12 violent deaths not attributed to coalition forces, including 11 men and one woman. Of these, two were attributed to anti-coalition forces, two were of unknown origin, seven were criminal murders, and one was from the previous regime during the invasion.

The ILCS question asked whether a death was "war-related". I don't think that a death of unknown cause would have been described as "war-related" by a respondent in the ILCS.

Correction: In comments Josh Dougherty points out that on a radio show on 28 Oct 2005 Les Roberts gave some more details: "2 people died in firefights where it was unclear where the bullet came from". Therefore, these two should be included in the war-related deaths, so my revised estimate is 39,000. This doesn't affect my conclusion -- the ILCS still corroborates the Lancet study.

The IBC defence continues:

Second, the correct ILCS figure is probably nearer 28,000 than 33,000. This is because the per-day death rate in the post-invasion period was much lower than during the invasion. Averaging across the whole period, as Rai does, gives an unrealistically high per-day rate for the post-invasion months over which the scaling-up was applied. ILCS does not provide its own time-distribution of deaths, but our own recalculation, which applies the Lancet time-distribution to ILCS, yields a scaled-up total of 28,165.

Unfortunately their recalculation contains multiple errors:

Of the 14 violent conflict-related deaths reported by Lancet, five took place during the 42-day invasion phase of March-April 2003, and nine took place in the remaining 510 days covered by the study.

The Lancet study does not report the time distribution for the 14 violent deaths you get if you exclude murders. It reports the time distribution for the 21 violent deaths outside Falluja. IBC seems to have made the unwarranted assumption that there were no murders during the invasion. They also assume that the death rate was constant in the post-invasion period, even though deaths increased in 2004. Let's apply the time distribution correctly. Of the 21 violent deaths, 11 occurred before the ILCS was conducted, 6 happened in the months when the ILCS was being conducted, and 4 after the ILCS was finished. If we split the 6 evenly into before and after, we get that 14 of 21 violent deaths would have been picked up by the ILCS. Using this to adjust the ILCS gives an estimate of 24,000 × (21/14) = 36,000, which is higher than the 33,000 we used before.
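For concreteness, here is that arithmetic as a minimal Python sketch (the 24,000 figure is the ILCS war-related estimate under discussion; the even split of the 6 overlap-month deaths is the assumption being made):

```python
# Rescaling the ILCS estimate to the full Lancet period, as argued above.
# Assumption: the 6 deaths recorded during the ILCS fieldwork months are
# split evenly before/after the survey.
before_ilcs, during_ilcs, after_ilcs = 11, 6, 4
total_violent = before_ilcs + during_ilcs + after_ilcs   # 21 deaths outside Falluja

captured = before_ilcs + during_ilcs / 2                 # 14 of the 21
scale = total_violent / captured                         # 21/14 = 1.5

ilcs_war_related = 24_000
print(ilcs_war_related * scale)                          # 36000.0
```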

The IBC continues:

When these two corrections are combined, it is revealed that the Lancet estimate remains some 10,000 (35%) above the scaled-up ILCS estimate.

Both of their corrections are erroneous, but even if they weren't, the two numbers are still close to each other and the ILCS number still supports the Lancet estimate. Even if we reduced the 100,000 Lancet estimate by 10,000 it wouldn't make much difference.

[Figure: IBC's graph comparing the scaled-up ILCS confidence interval with the Lancet confidence interval.]

They then present this graph to argue that

the ILCS data only allows for a one in a thousand chance that the true number lies within the upper half of the Lancet range (the area shaded in grey).

However, both confidence intervals are too narrow. All they have done is scale the original intervals for the Lancet and the ILCS estimate. But the Lancet number here is based on a smaller sample, so the confidence interval is wider. And the ILCS number has been scaled by a factor based on 14 deaths in the Lancet study, and will be much, much wider if the uncertainty of that scale factor is taken into account. (And I know that confidence intervals are not probability distributions, but I'll give them a pass on this, since you could use Bayes' theorem to get probability distributions that look like those above.)
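To see how much difference the scale-factor uncertainty makes, here is a toy Monte Carlo sketch (the numbers are illustrative stand-ins, not the actual survey data; the ILCS standard error in particular is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative ILCS sampling error: mean 24,000 with a 95% CI assumed to
# be roughly +/- 5,000.
ilcs = rng.normal(24_000, 5_000 / 1.96, size=n)

# The 21/14 scale factor rests on seeing 14 of 21 Lancet deaths in the
# ILCS window; model that count as binomial to propagate its noise.
captured = rng.binomial(21, 14 / 21, size=n).clip(min=1)
scaled = ilcs * 21 / captured

print(np.percentile(scaled, [2.5, 97.5]))
# Far wider than simply multiplying the original ILCS bounds by 1.5.
```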

This is a disappointing effort by the IBC. I believe that they should have sought advice from some experts in epidemiology.


Roberts, of the Lancet study, has just been on NPR's This American Life - check the podcast once they post it tonight.

"the Lancet number here is based on a smaller sample so the confidence interval is wider"

Is the Lancet number recalculated for violent deaths based on a smaller sample? As I recall the analysis was carried out via a regression on cluster mortality rates. So the sample size is the number of clusters (each of which has a number of person years of life), which has remained the same.

It's also not true that confidence intervals necessarily get wider based upon a smaller number of observed events. Plug some values in here:

http://www.swogstat.org/stat/public/binomial_conf.htm

Keep the sample size the same (say 20), start with 0 events and a 95% CI, then increase the number of events. The range of values covered by the CI of the population estimate keeps on getting bigger - as a higher number of observed events means a broader range of population probability could have generated the observed number.

I realise this isn't exactly applicable to the Lancet's methodology. But as far as I'm aware no-one has yet tracked down the exact linear model they used, and it works fine as a simple analogy. Higher death rates are less likely to have generated a small observed number of deaths than a large one, so the CIs around the death rate could get smaller.

This dispute seems rather pointless. At most the IBC is arguing that the Lancet estimates should be reduced by 35 per cent. As the two graphs show, this is well within the confidence interval from the Lancet study.

Going the other way, if you use the IBC figures as the basis for a point estimate of excess deaths, you get a figure well above 100,000.

By John Quiggin (not verified) on 29 Apr 2006 #permalink

Tim, we kind of expected you to be unable to admit a mistake on this. And that's what has unfortunately come to pass. I need to go to work now, but will post a rebuttal to your claims tomorrow.

Actually, that was a bit unfair. Sorry.

Let me rephrase that: "reluctant", not "unable".

Is there any indication of why IBC estimate their undercount factor to be 2? Why not 3, 5, 10 or 100? Is this only gut-feeling-based, or is there any kind of statistics behind this estimate?

Tim Lambert:

"All they have done is scale the original intervals for the Lancet and the ILCS estimate. But the Lancet number here is based on a smaller sample..."

Let me see if I understand this right. IBC complains about how imprecise the Lancet estimate is, and how unacceptably large their standard deviation is. Then, they use Lancet's own data in their "recalculation," without accounting for the fact that Lancet's data have a higher standard deviation - which is the very fact they've been bitching about. As a result, their "recalculated" estimate looks crisp and sharp as opposed to the "flat" estimate of the Lancet.

If true, that's absolutely ridiculous.

By pierremenard (not verified) on 30 Apr 2006 #permalink

IBC say they have caught over half the ILCS estimate (they also say/believe they have caught half the true number?).

IBC say the ILCS figure is three quarters of Lancet? (Is that what their graph points to?)

IBC's 'estimate' of the true figure therefore, in their own words, matches the Lancet, does it not?

The criticism, I thought, was aimed at the perception of the (admitted by IBC) undercount number being some sort of official authoritative total (I spoke to my prospective LibDem MP).

IBC should be arguing that their numbers, their undercount numbers, in their own estimate, back up the Lancet numbers very well. In fact, they ought to advertise the Lancet on their web site as their idea of a true estimate.

joshd,

Your analysis can only be said to be valid for "war-related" deaths during the ILCS period. There is no basis for assuming that the undercount factor is the same for later periods or for deaths due to crimes. (Of course, your data does not count excess non-violent deaths at all.)

In essence, your analysis provides no insight beyond that given by the ILCS data.

Lambert is therefore correct in saying that even if we accept all your claims it reduces the point estimate of the Lancet study by about 10,000, to about 90,000, which is well within the uncertainty limits given by the Lancet authors. There is no statistical basis for the notion that the entire Lancet estimate should be deflated by 30%.

Tim,

With respect, none of the substantive claims in your piece "IBC takes on the Lancet Study" (http://scienceblogs.com/deltoid/2006/04/ibc_takes_on_the_lancet_study.p…) stands up to scrutiny. The Lancet central estimate remains significantly above the 95% confidence interval for ILCS on any reasonable interpretation of the data. We identify the key points on which we disagree with you, and then show why in more detail. Since there may be limitations on the amount of text we can put in a single post to your blog, we are numbering them so that readers can reference the summary and longer treatments of some of the points in postings to follow.

To begin with, the summaries:

1. You say that Lancet reports five times as many deaths as IBC. This is not true. Roberts' own comparisons (excluding deaths from accidents or disease, which are the comparisons used in our own paper) show that the real difference is closer to three times. More generally, IBC is not "taking on the Lancet study" but various misinterpretations of it.

2. You say that two Lancet deaths of "unknown origin" were not war-related and so should not be included in the calculations. This is not true. These two deaths, explicitly confirmed by Les Roberts for inclusion in the SAS study, were definitely identified by the Lancet authors as "war-related". 39,000 is the "Roberts-approved" war-related Lancet total, not 33,000 as proposed by you. SAS partly funded the Roberts' survey in Iraq, btw.

3. You say that our scaling-up of the ILCS figure to the Lancet period is too small because we wrongly assume that there were no criminal deaths during the invasion phase. There are unlikely to have been many criminal deaths during the invasion, given known death rates supplied by the Baghdad morgue. In any case, other factors, such as ILCS's inclusion of Iraqi military in their study (unlike Lancet), would also lower the post-invasion ILCS estimate. We stand by 28,000 as a more reasonable estimate than 33,000. For a more detailed discussion of the time-distribution of deaths, see point 5.

4. You say that even if our calculations are correct, the ILCS number supports the Lancet estimate. This may be so, because the ILCS central estimate lies well within the Lancet's huge 95% confidence interval. A 10,000 reduction in the Lancet 100,000 estimate would make no difference to this, as you rightly say. But our actual question was whether ILCS supports Lancet's central war-related death estimate (39,000), not Lancet's extremely wide CI which is consistent with almost anything you care to name. On the figures we present it does not, because it is well outside the narrow 95% confidence interval of ILCS.

5. Your time-distribution of deaths in ILCS is hardly credible, and ignores the nature of the data from which you derive it. If ILCS really captured 14 of Lancet's 21 violent deaths, then the remaining 7 must have been murders (there are 7 murders in Lancet's 21 violent deaths). Your analysis requires that Lancet failed to record ANY conflict-related deaths in the summer months between ILCS and Lancet, that the difference between the two studies is therefore entirely comprised of criminal murders, and that therefore ILCS had recorded all the conflict-related deaths in Lancet 3-4 months before Lancet came along. These are far bigger and more insupportable assumptions than anything in our analysis, and - unlike our comparatively small assumption - don't seem to be based on any known data that would support them, and in fact stand in contrast to known data.

1. You open by claiming: "The Lancet study implies that there are about five times as many Iraqi deaths as the IBC number. They do not accept this and so are arguing that the Lancet estimate is too high and is not corroborated by the ILCS"

This is a false description of the issue we are addressing and our position on it. First, the comparisons we were addressing were attempted like for like comparisons initiated and performed by Les Roberts. The only way you can assert that Lancet shows deaths five times as high as IBC is if you smuggle in the roughly 40,000 attributed to 'accidents' or disease that IBC explicitly does not cover, and which Roberts, and subsequently we, were not comparing. We make no claims about how many excess accidents or disease-related deaths might have taken place, or how IBC would relate to them. The Lancet could have gotten those estimates just right, or been too high, or too low. We have made no claims about these matters.

2. "In fact, Rai did not uncritically accept my figure.."

We said uncritically because we assumed a critical analysis would have revealed the two key errors we pointed out. Maybe the wording was too strong, but it was not intended to insult Rai (whom we praise elsewhere in the piece for a fair-minded - albeit flawed - analysis). One of these two errors (assuming even distribution of deaths across all months) you implicitly concede. The second, you try to maintain was not in error, but your defense of it is extremely shaky. You write:

"The only difference between my analysis and the Small Arms Survey one is that they included two violent deaths of "unknown origin" as well as the nine coalition- and the two insurgent-caused deaths."

Who told them to do this, Tim, other than Roberts himself?

You're conflating "unknown cause" with "unknown origin". On a previous radio program featuring Roberts (commended by you to your readers - http://timlambert.org/2005/11/american-life-on-lancet/), to which data was evidently provided and approved by Roberts himself, they list a breakdown as follows:

"The shocker was how people were dying. For the first time in any of his surveys, the leading cause of death wasn't disease. It was bombs and bullets. In the 32 of the 33 clusters sampled, 21 people died of violence, as compared to just one violent death in the period before the war.

There was a second shocker. Of those 21, 2 people died in firefights where it was unclear where the bullet came from. 3 were killed by insurgents or Saddam loyalists. 7 died from criminal violence, car jackings, revenge killings, that sort of thing. And the biggest number, 9, were killed by the American-led Coalition."

(from This American Life documentary on Roberts and Lancet, July 2005)

Note that "2 people died in firefights where it was unclear where the bullet came from", hence are properly included in the SAS analysis of conflict-related deaths, and are precisely "war-related" as recorded by ILCS.

Sorry, the previous posting is about point *2* ("In fact, Rai did not uncritically accept my figure.."). When I begin a post with a number, your blog seems to change it to a "1".

Point 5:

You basically take murders and conflict deaths together, then derive a portion of them which ILCS is supposed to have recorded based solely on temporal overlaps, while completely overlooking that ILCS didn't record any of the murders you've included. There is no situation where "we get that 14 of 21 violent deaths would have been picked up by the ILCS" unless those 14 violent deaths were war-related. The whole point of the exercise is to find comparable numbers, Tim, not incomparable ones.

To elaborate, your method of deriving numbers for ILCS from time distribution data that also includes criminal murders is highly dependent on when those criminal murders occurred. Yes, by your reckoning, two thirds of violent deaths recorded by Lancet had occurred by the close of fieldwork on ILCS. You want us to accept that the deaths in ILCS should follow the same distribution. But why should they? One third of Lancet's deaths (7) were criminal murders which ILCS would not have recorded. How do you judge which proportion of these criminal deaths recorded by Lancet fell within the ILCS time-frame? They constitute between 0 and 7 of the deaths in the period where (by your analysis) ILCS and Lancet overlap - that is, they constitute up to half of the Lancet deaths ILCS could legitimately have "picked up" under your assumptions.

Variations of this method were among several solutions we considered, and rejected on the grounds that they would require too many blind (i.e., arbitrary) decisions about the position of criminal murders within the timeline, decisions which would drastically alter comparative estimates in large steps (thanks to the low-definition Lancet data), depending on whether one decided they fell within or outside the ILCS time-frame.

We've already shown in our summary point the highly dubious assumptions of your implementation of this method (which by the way, includes what you refer to as an "unwarranted" assumption that none of the 5 deaths during the invasion were criminal murders, while making what we would consider a far more unwarranted assumption that the next 9 post-invasion deaths until late spring 2004 contained no such murders).

Other, more reasoned, versions of this don't help your case either. For instance, assume that criminal murders are split pre- and post-ILCS in the same ratio that overall violent deaths are split (for this exercise we won't quibble with your calculation of this split as 14:7). That would mean 4.66:2.33, or 5:2. So by this fairly neutral analysis ILCS would have captured 9 of Lancet's 14 combat-related deaths - or 27,000 deaths by your multiplication factor. This is all before the complexities of the "excess deaths" calculations are taken into account, by the way.
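That variant, spelled out in a short Python sketch (the numbers come from this comment; the 3,000 multiplier is the Lancet per-death weight cited earlier in the thread):

```python
# joshd's "neutral" variant: split the 7 murders pre/post-ILCS in the
# same 14:7 ratio as all violent deaths.
violent_total, captured_total, murders = 21, 14, 7

murders_captured = round(murders * captured_total / violent_total)  # 4.66 -> 5
combat_captured = captured_total - murders_captured                 # 9 of 14

print(combat_captured * 3_000)  # 27,000 war-related deaths in the ILCS window
```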

In the end we settled for a method which made only relatively light use of a not-so-arbitrary assumption that the number of criminal murders during the invasion phase was not so high that it could be equal to 25% of the deaths caused directly by our countries' expending 30,000 bombs, hundreds of thousands of cluster bomblets, and untold quantities of groundfire in Iraq over this period.

This was also based in part on our intimate knowledge of Baghdad morgue data, which began to show a really huge rise in murders from May 2003 but only a relatively small one after the morgue's re-opening in mid-April.

And we only used this (we believe entirely supportable) assumption to determine some weighting for invasion and post-invasion deaths in ILCS to provide an estimated ILCS daily rate in the summer "gap" months between it and Lancet.

Thus at the ratio of 5:9 (invasion:post invasion) the summer daily rate for this period became 35 by this calculation. This is still significantly higher than the Iraqi Ministry of Health's daily rate during this interval. Had we chosen to make the ratio 4:10 instead of 5:9, the daily rate would not have increased enormously.

Furthermore, ILCS included Iraqi military which Lancet claims its own methods would likely have mostly missed, so should have been more weighted toward the war than Lancet was. Similarly, IBC's much more fine-grained distribution shows a heavier weighting toward the period of invasion than Lancet, and we could have used that - but we took the more conservative path.

Two final side notes. One: we did consult relevant experts who agreed with our calculations and that the depiction of the curves of the CI's was appropriate. Two: you have completely ignored the implications of our alternate method of doing the calculation (adding criminal murders to ILCS - for which both Lancet and IBC show the same proportion), which we consigned to a footnote, but which corroborate our analysis and conflict with yours, and which showed an even greater difference between Lancet and ILCS, again on conservative assumptions.

The real story buried in the IBC defense is that their estimate for civilian war-related deaths is only 30 percent lower than the mid-range figure from the Lancet and that the IBC maximum toll is likely to be tens of thousands too low. One would think that a site devoted to revealing the true cost of the war would have emphasized that point even more than the unfairness of some of their critics.

I'm looking at the Lancet paper, and if we want to get into fine-grained time analysis based on how a tiny number of violent deaths are distributed, it looks as though the pace really picked up around March 2004 until August, and if you look at the excluded Fallujah data that is even more obvious. And that makes sense--there was, as I recall, heavy fighting in Iraq in the spring (first Fallujah assault) and the summer, leading up to the November assault which was outside the Lancet study range. The ILCS study only picked up the first portion of this increased war-related violence. I'm too lazy to try and come up with my own version of the "correct" extrapolation of the ILCS numbers beyond their actual range of applicability, and anyway, it's sort of meaningless to do it with the Lancet numbers when Fallujah has to be excluded.

That's the ironic thing about the paper--the single clearest signal of a possibly massive uncounted death toll inflicted by US forces was the Fallujah neighborhood, and it couldn't be included because it was such an outlier. (That single neighborhood suffered one seventh of all the American-inflicted deaths in Iraq in August 2004 if you go by the IBC two year analysis published last summer and believe that the press is able through its professional omniscience to report most civilian deaths inflicted by the US.) Not that I believe 200,000 died in Anbar province or even necessarily that 50-70,000 died in Fallujah (as I think the Lancet team mentioned as a possibility in a letter to a British newspaper), but when the Lancet team noted impressionistically that much of the city looked as devastated as, or more devastated than, the neighborhood they surveyed, it makes me wonder if the true number for Iraq up until Sept 2004 is possibly larger than 39,000. It was the Fallujah data that made the Lancet team believe that the true number was in the upper range of their confidence interval, and also made them suspect that US-inflicted casualties from air strikes were underreported. And yes, I know ILCS picked up the early assault.

Which brings up my last point--we know that the US government (all governments, really) will lie about the number of civilian casualties they inflict and that when they say they've killed x number of insurgents, there's a good chance some of those reported deaths (if the number wasn't totally made up) were actually civilian. It's not the media's fault that they can't check up on every incident as they should with known liars (i.e., government spokesmen in times of war) and one could point this out while simultaneously praising the professionalism of the press and maintaining that symbiotic relationship. Though one could criticize them for not pressing harder. If I sound a teensy bit snarky, I am. IBC has taken some unfair abuse, but Sloboda's BBC interview and the IBC response show that at least some of the criticism was on target.

By Donald Johnson (not verified) on 01 May 2006 #permalink

The point about there being no criminal deaths during the invasion is sheer wishful thinking. It is precisely during chaotic periods that knife work gets done. With bodies piling up there is neither time nor the will to check whether the neighbors, the house invaders or the armies have offed Joe next door. Bodies go straight into the ground if they are lucky, not the morgue. Since the weapon of choice in the finer Iraqi home is an AK-47, you could not use the type of bullet as a tracer, and the cops have gone into hiding.

PS. Tim why not give JoshD some real estate so this argument can be better organized?

JoshD,

I will read your posts in detail. But Media Lens and Tim have done a great job in forcing some kind of accountability out of IBC. What I find most annoying about the IBC response is their hypocrisy through all of the criticism. I have written to Sloboda via Media Lens a few times about this, but the same lame answer always comes back from him and IBC. And the answer leaves a lot about IBC's intent open to serious debate.

It's clear that the state-corporate media, as well as pundits and politicians in the UK and US, have used the IBC figures for their own political reasons: first, to downplay coalition atrocities, and second, to place most of the blame for the carnage on the insurgents and resistance, which are lumped into one amorphous mass in order to demonize them. But the intent of western pundits in support of US-UK aggression is clear: we (the coalition) are benevolent and our intentions are noble, whereas the 'enemy' is ruthless and barbaric, even subhuman.

The IBC claims to have been set up to monitor Iraq civilian deaths in order to criticize the invasion of Iraq. But Sloboda earlier sent me and other Media Lens readers a powerpoint presentation in which it was made clear that IBC was happy that its figures had been used by media sources to illustrate the 'human cost of the war'. There was not even a hint in the IBC slideshow that the IBC estimates had been in fact used by our media to downplay western culpability in the slaughter - far from it. Media Lens has already presented several examples of this, such as Andrew Bolt of the Herald Sun in Australia, who wrote:

"In the three years since the war's start, as many as 37,800 Iraqi civilians are reckoned to have died in fighting, most now killed by Islamists."

Here, Bolt states what many western commentators have alluded to using IBC data: that most of the killing is because of "islamists" and not US-UK forces. In other words, it's the same old story: 50,000-plus coalition bombs were dropped for humanitarian reasons on Iraq in 2003 alone, whereas the insurgents are the real 'killers', 'evildoers' etc. No one denies the horrific nature of the insurgent campaign, but why has IBC failed to rebut garbage like the piece by Bolt? This blatant distortion of IBC data is only one of many, and yet has not been answered by Sloboda as far as I can see. If you are part of the IBC team, JoshD, it's time to stand up and be counted. When are you at IBC going to speak out loudly at the brazen use of your data for political purposes by those who supported this vile war?

By Jeff Harvey (not verified) on 01 May 2006 #permalink

joshd, you make one good point: the later statement by Roberts shows that the two unknown-cause deaths should be classified as war-related. I've corrected my post.

I'm very surprised that any expert would okay the confidence intervals you presented. The ILCS CI is only correct if you exactly know the adjustment factor for the ILCS to make it cover the same period as the Lancet. But you don't. You have to make assumptions about when murders occurred and also assume that the death rate after the invasion was constant. And even after that, the numbers you put in are subject to sampling error. Once you expand the ILCS CI, your argument that it shows that the Lancet point estimate is too high falls apart.

I think that your calculation of the correction factor is incorrect but this is not an important point. Please focus on the point in the preceding paragraph, because that's what is important.

If you want to put your comments together into one piece, I'd be happy to put it up as a guest post.

JoshD,

1) Iraq Body Count co-founder John Sloboda stated -

Our best estimate is that we've got about half the deaths that are out there.

Why isn't this made apparent on IBC's homepage?

2) Iraq Body Count sends traffic to numerous media articles which grossly misrepresent their work.

Why?

3) Have Iraq Body Count rebutted or attempted to correct a single article that misuses their findings?

Thanks.

I'd like to point out again: What is undertaken here is not an IBC vs. Lancet analysis, it is really an ILCS vs. Lancet analysis. The IBC number adds absolutely no information to this analysis.

IBC's work is valuable, but they should make it clear that their number must not be seen as an estimate, or even a proxy for an estimate, for the total number of Iraqis killed by the invasion. It is merely a lower bound.

I feel that after some long, and sometimes complicated, exchanges many people have lost sight of the simplest facts.

1. The ILCS has a vastly larger sample than the Lancet one and is, therefore, a much more reliable work. The studies cover slightly different time periods and death rates vary over time, so there needs to be some rescaling of curves to compare like with like. You can quibble about this rescaling. But any reasonable procedure will give you something like what IBC get in their paper: a very wide spread for the Lancet and a much narrower range for ILCS. If Lancet were the only game in town I could understand why people have fixated on it to the degree they have. But with the ILCS study towering over Lancet such behavior makes no sense.

2. Some people seem to have forgotten the importance of Falluja in this whole discussion. Lancet, as represented in the figure above, is Lancet without its Falluja information. However, many of the statements made in the original Lancet paper and in further papers and comments by Les Roberts place Falluja right back in the mix. The claim that the Lancet study has shown IBC figures to be a gross undercount, by a factor of 5 or 10, can only be sustained if Falluja is included. But the Falluja figures are simply not credible. In fact, when they are included the 95% confidence interval balloons out even into negative territory (as a close reading of the Lancet paper reveals). In other words, if Falluja is in then the Lancet authors cannot even say with 95% confidence that excess deaths due to the war have been positive. So Falluja needs to be out if there is any content whatsoever to the Lancet. And once it is out then we have to admit that IBC does not give a gross undercount.

3. People do not seem to have come to terms with the persistent string of inaccuracies propagated by anti-IBC critics. This begins with the awful table put forward by Les Roberts and his followers at every opportunity. Among other things, this table cuts the IBC number in half (!) and creates estimates of Iraq casualties out of thin air. For example, the table claims that a study in the New England Journal of Medicine supports an extremely high casualty number for Iraq. Yet anyone willing to follow the trail to this article will find that it contains no such estimate. This table is discussed at length in the IBC piece and serious IBC critics must come to terms with this material. It makes for sobering reading.

4. This whole debate has been pervaded with much unseemly credentialism that should send up alarm bells to people with lively minds. Must I believe that everything put out by the Lancet is beyond question? (A few years ago the Lancet put out a ludicrously bad study on the MMR vaccine that convinced many people not to vaccinate their children. The effects still linger even after the Lancet was forced to publicly admit its error.) Should I be against IBC because a world-renowned epidemiologist says I should be? Can't I form my own judgement? This style of debating is a real sign of weakness.

MS

1. If the ILCS contradicted the Lancet you would be justified in disregarding it. But they are in good agreement in the area that they overlap.

2. Even without Falluja, the IBC is lower than the Lancet by a factor of 5.

3. Haven't looked at the table.

4. Sorry, but the IBC people don't have a good grasp of statistics. This is not credentialism. They could have a good grasp without credentials, or a poor grasp despite credentials (see Lott, John). I think they should consult with folks who do have expertise. If they did, I don't understand how they could have put out the stuff about CIs above.

MS, everyone agrees that the Fallujah numbers taken as a point estimate of the death toll in Anbar province are not credible, and it's also true that the claim that air strikes have killed most of the civilians comes from putting Fallujah back in, but you're preaching to the choir here. Everyone who has followed the debate in this blog knew all this from almost the moment the paper was published. I think the Lancet team has been sloppy the way they talked about it--the error in the table that you refer to was also sloppy.

But the Fallujah data does have some legitimate uses and one of them is this--if you take it into account it raises the probability that the death toll is in the upper range of the Lancet estimate. Yes, maybe the CI would widen into negative territory, but as it was explained to me a year ago that's because the orthodox statistical treatment would have us believe there might have been equally large outliers in Iraq in the year before the invasion. But (here I'm arguing like a lay Bayesian), we have no reason to think such a thing. So taking the Fallujah outlier into account should make us think the true death toll is higher than the midrange estimate.

The other point about Fallujah is the one I've harped on--if we really want to know how many civilians were killed there then it would be necessary to do a serious survey of survivors and ask them what happened to their families. AFAIK this has only been done by the Lancet team. The ICLS survey ended after the first incursion, but before the summer bombing that the Lancet survey picked up on.

The criticism of the appeal to authority cuts both ways, which is what makes that line of argument so ironic. The entire IBC approach relies very heavily on the authority of those who supply the press with casualty statistics and IBC is full of flattery of the professionalism of the press, despite the fact that this isn't the issue. (Sloboda is embarrassing on the subject of his relationship with the press in the BBC interview.) From what I've read, some reporters in Iraq are among the first to say they can't be sure what the true death toll is. But if we do want to rely on those authorities, Fisk and one of the Cockburns have both said they find the Lancet numbers plausible.

By Donald Johnson (not verified) on 01 May 2006 #permalink

"But the Fallujah data does have some legitimate uses and one of them is this--if you take it into account it raises the probability that the death toll is in the upper range of the Lancet estimate."

Only if you pretend ILCS does not exist. ILCS' central estimate suggests the central Lancet estimate without Falluja is a bit too high, and suggests *extremely* low probability that the true figure lies in the upper range of the Lancet war-related estimate.

"If the ILCS contradicted the Lancet you would be justified in disregarding it. But they are in good agreement in the area that they overlap."

...if you take "good agreement" very loosely.

"Even without Falluja, the IBC is lower the Lancet by a factor of 5."

No, it is not. See point one.

"Sorry, but the IBC people don't have a good grasp of statistics."

A rather gratuitous ad hominem. Then yours must be really bad because you now concede that we, with our supposedly poor grasp, correctly pinpointed the two statistical errors you had constructed and have been believing in and promulgating for the last year or more.

And, afaik, we are the only people, of all who have read it, to identify them in that time, including yourself.

1. "If the ILCS contradicted the Lancet you would be justified in disregarding it. But they are in good agreement in the area that they overlap."

The Lancet and ILCS curves above are probability density functions. If you fix an interval then the area under the curve within this interval gives the probability that the true value lies within the interval. It is true that the Lancet point estimate and the area to the right of it do overlap the ILCS curve. But the area under the ILCS curve in this region is tiny, about 1/1000 of the total area under the curve. This is the crucial point. You can't look just at overlap. You have to consider the probability of the overlap range.
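The sort of calculation being described can be checked in a couple of lines (the 28,000 centre and 39,000 cut-off are the figures from this thread; the standard deviation is not published, so the value used here is purely illustrative):

```python
from scipy.stats import norm

# Tail area of an assumed normal ILCS curve beyond the Lancet point estimate.
# The sd of ~3,500 is an illustrative assumption, not a published figure.
p = norm.sf(39_000, loc=28_000, scale=3_500)
print(p)  # ~8e-4, i.e. on the order of one in a thousand
```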

It is often said, including in the Lancet article, that the Lancet point estimate is conservative. But the probability of this estimate and all higher estimates combined is negligible.

Some people want to turn this around and say that the Lancet curve has plenty of area in the fat part of the ILCS curve. But such an outcome is inevitable when one estimate is much less precise than another.

In any case, if the point is really that the Lancet estimate is the same as the ILCS, then from where does the factor of 5-10 difference emerge between IBC and Lancet? How can Lancet be at once perfectly consistent with ILCS and vastly higher than IBC? Do we all agree now that there were about 28,000 war deaths in the period covered by the Lancet study?

"Even without Falluja, the IBC is lower the Lancet by a factor of 5."

That's just a simple factual error. No one could honestly think that after reading the discussion of the Roberts table in the IBC paper. Even cutting the IBC number in half, as Roberts does, won't get you to a factor of 5. Alternatively, you could go to the IBC website and download their data for the dates that correspond to the Lancet study. You can't get a factor of five difference unless you smuggle Falluja in.

"Haven't looked at the table."

It's very important for people to look at this table and the IBC discussion of it. There has been an immense amount of discussion, all centering around this table.

"Sorry, but the IBC people don't have a good grasp of statistics. This is not credentialism. They could have a good grasp without credentials, or a poor grasp despite credentials (see Lott, John). I think they should consult with folks who do have expertise. If they did, I don't understand how they could have put out the stuff about CIs above."

The IBC construction of confidence intervals makes perfect sense. No, a 95% confidence interval isn't a probability distribution. It is 95% of a probability distribution. The curves clearly show this. They are normal distributions with percentiles 2.5 and 97.5 marked. Getting from those bounds to the full distribution has nothing to do with Bayes as you suggest; it is just extending the distribution. This is a reasonable, and simple, picture. As with anything, there are assumptions underlying it. The Lancet article says nothing about how its confidence intervals are constructed. It just gives the confidence interval. But it is likely that both they and the ILCS study used a normal distribution, so these look as they should. You could make all sorts of other assumptions on the distributions. You can play around with error terms to reflect that the right scale factor is uncertain. They must look more or less as they do in this picture because of the vastly larger sample of the ILCS study and the fact that the numbers of months covered by the two studies are close, so only minor rescaling is needed. There is simply no substitute for sheer numbers here. Size of the sample is everything.
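For readers who want to try this, a minimal sketch of backing a normal distribution out of a reported 95% interval (the 8,000-194,000 interval is the Lancet's published ex-Falluja CI; note that its midpoint, 101,000, does not quite match the study's 98,000 point estimate, a hint that the underlying distribution is not exactly normal):

```python
from scipy.stats import norm

def normal_from_ci(lo, hi, level=0.95):
    """Recover the normal whose central `level` interval is [lo, hi]."""
    z = norm.ppf(0.5 + level / 2)   # ~1.96 for a 95% interval
    return (lo + hi) / 2, (hi - lo) / (2 * z)

mean, sd = normal_from_ci(8_000, 194_000)
print(mean, sd)   # 101000.0, ~47450
```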

The IBC people do seem to know how to get the simple and important things right.

Even if you exclude Fallujah and use the IBC extrapolation for the ILCS number and apply it to every Lancet calculation, you'd get 70,000 excess deaths by Sept 2004 and perhaps double that now. I'd be more inclined to cut IBC some slack if they'd make it very clear, right next to their counters, how much uncertainty there is about what sort of undercount they've got.

As for Fallujah and what it does to the ILCS figures, we can't say, except that it very likely brings the total up by a significant amount. According to the Lancet, Fallujah was hit very hard by repeated air strikes after the ILCS survey was finished--that neighborhood of 250 people apparently lost about 25 in August and some more in September (I don't have the paper in front of me). IBC says there was an upsurge in US military activity in the summer and fall of 2004, leading up to the final assault on Fallujah, though of course the IBC numbers are amazingly low. We really don't know how many people died in Fallujah--just that much of the city was leveled and that a neighborhood that looked much like much of the town suffered enormous losses.

BTW, if the US military wants to enhance its reputation for humane counterinsurgency warfare in an urban environment they couldn't do better than to cite IBC statistics. I'm not saying that's wrong, though I certainly would want much more proof. But it's amazing how few people our troops have reportedly killed. What's the point in them being there at all? According to IBC, US forces reportedly killed 370 civilians in the last year (I'm not sure if that means 2005 or if it means March 2005-March 2006, when the news release came out). Israel has killed 3300 Palestinians since Sept 2000, over 1800 of which were armed according to B'Tselem, the Israeli human rights group. So either the US isn't killing many insurgents with all the troops over there, or alternatively, if the US has managed to kill thousands of insurgents that year, they've done it with an astonishingly low civilian casualty rate.

That's something I've noticed over the years--some naive antiwar types don't know this about the IBC numbers. They had the impression that IBC only counted the civilian deaths which the US was responsible for, and they interpreted that to mean the number of civilians the US military killed. I've seen people say that. Prowar people eventually caught on to this misapprehension, and part of the anger at IBC comes from this sense that IBC is really providing the prowar side with considerable ammunition.

It's a purely mainstream criticism that what was wrong with the Iraq War was that the US didn't send enough troops, and if Iraqi-on-Iraqi violence has been by far the largest killer, that's not an obviously stupid argument. I could still argue against it, but my argument would partly depend on how many of the Iraqi killings are caused by US-trained death squads, and ultimately I'd just say there are much better ways of helping people overseas than overthrowing murderous dictators, successfully or not. Speaking from how I know prowar people think, I simply wouldn't use the IBC statistics to argue any point at all. The most casual reading or watching of TV news will let you know that Iraq is in a chaotic state, and if IBC says American troops aren't doing much of the killing after the initial invasion, the casual watcher of TV news already knows that.

By Donald Johnson (not verified) on 01 May 2006 #permalink

What an interesting argument!

Not that anyone cares, but here is an update on my quixotic efforts to get the Roberts team to release their data. Short version: total failure. Beyond the data that they have already made public (and which Tim kindly posted), they are refusing to provide anything or to give any insight as to the exact statistical methods that they use. For the last two months, they have stopped returning my e-mails.

I have also tried to pursue this topic via the Lancet itself. The response has been similar. The Lancet will not even tell me if any of the peer reviewers of the article looked at the actual data (as opposed to just reviewing the methodology).

Given the continued public interest in this question, I think that this refusal to share data --- even data scrubbed to prevent the identification of specific individuals --- is scientific misconduct of a quite pernicious sort. A central tenet of science is openness and replication. If we cannot get an accurate account of the data and procedures followed by Roberts et al., why should we believe anything they have to say?

By David Kane (not verified) on 01 May 2006 #permalink

David Kane:

1. Do you have some sort of cred to warrant folks taking time out of their incredibly busy careers to share info with you? That is: are you a public health specialist or have training and experience in a related field? If not, quit whining. Your plumber isn't going to take kindly to your asking him a million questions, either, during his troubleshooting of the toy dump truck down your toilet.

2. If the implication of 1. is true (you have no cred to get a callback), then the premise of your last paragraph is not true, and the wish that your argument has play falls apart.

Best,

D

joshd, you have ignored, again, the key point

>I'm very surprised that any expert would okay the confidence intervals you presented. The ILCS CI is only correct if you exactly know the adjustment factor for the ILCS to make it cover the same period as the Lancet. But you don't. You have to make assumptions about when murders occurred and also assume that the death rate after the invasion was constant. And even after that, the numbers you put in are subject to sampling error. Once you expand the ILCS CI, your argument that it shows that the Lancet point estimate is too high falls apart.

You did identify an error I made in the classification of those two unknown deaths. I've acknowledged and corrected it. How about you do the same?

David Kane: "The Lancet will not even tell me if any of the peer reviewers of the article looked at the actual data (as opposed to just reviewing the methodology)."

Have you looked at the comment by Prof Sheila M. Bird (one of the reviewers) which is on their website? Does it answer your question? I'm not sure whether it does or not; it depends on what you mean by "looking" at the data. Certainly she doesn't claim to have checked every calculation, nor should she be expected to do so. If you are clear about just what you mean by "looking" at the data, you could try asking her. But if you want a sensible answer from a biostatistics professor, you will need to ask a sensible question.

By Kevin Donoghue (not verified) on 01 May 2006 #permalink

David,

I would be very interested to learn more about your experience with the Lancet authors and the Lancet itself. I could think of two reasons why they wouldn't share the data. The first would be that they want to publish more articles based on it before anyone has the chance, i.e., normal scientific competitiveness. Having built big databases myself I can certainly sympathise with this. But this doesn't seem to be the case as the Lancet team haven't been coming out with more stuff based on this data. And frankly I don't see how they can. Their sample is so small that the confidence interval even on their most aggregate figure, total excess deaths, is already enormous. As you move to more specific categories, like the number of women and children killed or the number of people killed by coalition forces, then they lose all semblance of precision. This data has already been squeezed for what it can yield.

The second reason, of course, is that they are afraid of what will happen if people start poking around inside their dataset. (Being too busy isn't an answer; they could just shoot the data right over in an email message.)

Many journals these days have replication policies that force contributors to post their data. If, for example, the Lancet team had published their paper in the Journal of Peace Research, their data would have been posted months ago. The journal's policy tries to reward people who have made great data-gathering efforts with a chance to preside uniquely over their data for some period of time. But this incentive is balanced against the needs of the larger community to check reported results. Unintentional errors are pretty common in empirical work. (It's just too easy for the eyes to glaze over when staring at a computer screen for hours on end.) So the importance of replication extends well beyond the issue of fraud. All researchers make honest mistakes and there needs to be an error correction mechanism in place.

On methodology, one of the weakest aspects of the Lancet paper is that it contains no explanation of how it calculates its confidence intervals. It is by no means obvious how to do this, and any method will contain a variety of questionable assumptions. For instance, the Lancet team paired off Iraqi provinces which they assume to have equal violence and sampled just one province from each pair. This is a sensible cost-saving device but it has big implications for confidence intervals. When building the confidence intervals, is it assumed that this pairing works perfectly? There is no way to know from reading the paper.

A bigger issue is what assumption the Lancet team has made on how deaths are spread across space. If we assume that deaths are spread perfectly evenly across space then quite a small sample will be sufficient to build an accurate nationwide estimate. However, as you make the pattern over space more and more erratic you need a larger and larger sample to maintain precision. (In the extreme, if all deaths were in a single location then almost all samples you could draw would suggest that no one had died. Yet there would be some small chance that you would draw a sample suggesting that everyone had died. You would need a comprehensive census to really get to the bottom of things. Or think of trying to estimate the number of victims of 9/11 by drawing a nationwide sample. It would have to be gigantic to get the fraction of New Yorkers and Washingtonians in the sample in a range where the estimate would make sense.)
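That extreme case is easy to simulate; here is a toy sketch (nothing to do with the actual Lancet design, just showing how spatial concentration inflates sampling variance):

```python
import numpy as np

rng = np.random.default_rng(1)
n_locations, total_deaths, sample_size, trials = 1_000, 10_000, 30, 10_000

def estimate_sd(deaths_per_location):
    """Sd of the estimated national total over many random samples of locations."""
    idx = rng.integers(0, n_locations, size=(trials, sample_size))
    totals = deaths_per_location[idx].mean(axis=1) * n_locations
    return totals.std()

even = np.full(n_locations, total_deaths / n_locations)   # deaths spread evenly
lumpy = np.zeros(n_locations)
lumpy[0] = total_deaths                                   # all deaths in one place

print(estimate_sd(even))    # 0: every sample nails the total
print(estimate_sd(lumpy))   # huge: most samples see nothing at all
```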

So you can't build any confidence interval without making some assumption on this spread. How was this done? We have no way to know unless the Lancet team tells us. However, without such an explanation I would speculate that they may not have taken adequate account of the erratic spread of casualties typical of modern warfare, particularly wars with big aerial bombardments. This is because most epidemiology is about tracking the spread of diseases, which are likely to progress from place to place more smoothly than war casualties. Now that epidemiologists have started studying war, have they adjusted their models to account for this possibility? There is no way to know if they won't tell.

In any case, my main point is not that the Lancet study is based on questionable assumptions. Pretty much any statistical estimate will have questionable underlying assumptions. We need to know what the assumptions were in this particular case so we can assess them. How did the Lancet team resolve the inevitable slew of tricky judgement issues in doing their statistics? Really, the Lancet should have forced the team to do this in writing the paper as a condition for publication. As these studies are becoming increasingly common it is ever more important for the writers of each paper to explain exactly what they have done. Saying "I am an important epidemiologist and I have done as I see fit" is unacceptable, and no sensible person should be taken in by this.

MS, have you considered reading the paper? It explains how they computed the confidence intervals. And Les Roberts *did* email the data to David Kane. See [here](http://timlambert.org/2005/12/lancet-study/). What is at issue is Kane's requests for more data. Roberts doesn't want to release this because it would make it possible to identify the people who participated in the survey.

Tim,

I suppose you are referring to the material on page 1859 just below the map? If so, this is not what I have in mind. I mean actually specifying the equations, discussing the assumptions underlying the equations, showing the manipulations of the equations and stating clearly how the bootstrapping is done. It's not possible to be sure that you know what is in the equations without seeing them. In replication exercises there are always many ambiguities on specifics and usually replication is impossible without ongoing interchange between the original authors and the replicator. In this case, even with data you could hardly get a viable replication off the ground. You would be guessing at the form of the equations right from the beginning. Just saying that one has a loglinear model and have used this or that software isn't enough.

MS asserts:

>You could make all sorts of other assumptions on the distributions. You can play around with error terms to reflect that the right scale factor is uncertain. They must look more or less as they do in this picture because of the vastly larger sample of the ILCS study and the fact that the numbers of months covered by the two studies are close, so only minor rescaling is needed.

This is untrue. The IBC argument turns on the Lancet point estimate being above the upper end of the ILCS CI. If you just scale based on the number of months (13 vs 18) the upper end is 40,000, which is greater than the Lancet point estimate. If you use the distribution of the Lancet violent deaths (14 in the overlap period, 7 after) the upper end is 43,000, even more. If you account for uncertainties in the estimation of the scale factor the upper end becomes even higher.
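A quick check of those figures, taking the ILCS 95% upper bound as roughly 29,000 (the value implied by the numbers quoted in this thread):

```python
ilcs_upper = 29_000          # approximate ILCS 95% upper bound (assumed)

print(ilcs_upper * 18 / 13)  # ~40,150: scaling by study length in months
print(ilcs_upper * 21 / 14)  # 43,500: scaling by the Lancet time distribution
```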

Tim wrote:

The IBC argument turns on the Lancet point estimate being above the upper end of the ILCS CI.

Yeah, but I don't quite understand that argument. Suppose you had a jar of jellybeans and by some sampling scheme you estimate that there are 100 red beans with a confidence interval of [85,115]. Then you count and find that there were exactly 94 red jelly beans. There is zero variance around the 94. In what sense does that invalidate the estimate?

(That's ignoring for the moment that the ILCS measured something different so they weren't counting the same jar of jellybeans).

This is actually a very simple issue and Tim is right. The IBC calculation is (I have now checked) based on the assumption that you can simply multiply the daily death rate estimated in the ILCS by a scaling factor to get a number comparable to the Lancet study, without affecting the confidence interval. This is wrong; you can't do that.

I don't want to get further involved in this debate because it strikes me that to get involved in it, you have to pretend that it's possible to make straightforward comparisons between surveys which ask different questions and it isn't. But on that specific point, Tim is clearly right.

Robert seems to have it exactly right; if I estimate from a jar of 1000 beans that there are 400 red beans, CI 300-500 and then you count every bean in your jar of 500 beans and find that there were exactly 275 (CI 274-276 to take account of possible counting errors), then despite the precision of your estimate, you are not entitled to scale your CI up and say that my jar must have 550 red beans, CI 548-552. Even if our jars had been filled from the same underlying barrel of beans, the correct procedure would be a Bayesian averaging calculation which would have the effect of flattening out the uncertainty in your sample.

I do think it is a shame, though, that Medialens have used their typical charm and personability and put IBC's backs so far up against the wall.

I was wondering whether you all were arguing about this out of a need to minimise your culpability or maximise your indulgence, but then I remembered that you were completely ignoring the average 50,000-60,000 Iraqi citizens killed every year by the Saddamite oppression.
Why do you ignore the fact of these deaths? Why do you ignore the fact of the mass graves? Have you lost track of what is important in the human world?
Why do you ignore the fact that, if you'd had your way, over 100,000 more Iraqis would be dead than is presently the case? Is this the 100,000 dead people that you seem so obsessed by?

By Paul Johnson (not verified) on 02 May 2006 #permalink

Paul, I presume people here ignore your "facts" because they are not facts, but rather highly dubious assertions that rest on an extremely misleading "averaging" in terms of anything this invasion could have accomplished.

Your figures seem way too high, and you have nothing to offer in support of their accuracy. Further, the vast majority took place in the 1980s and early 90s, with far lower tolls in the ten years preceding the invasion. Making an "average" as you do is to conflate these very different periods and pretend it somehow represents "truth". It does not. The arbitrary exercise takes you very far away from the truth. Rational exercises like performing averages are only valuable when they somehow help present some reasonable picture of the truth. You blur and distort it.

The invasion has vastly increased the death toll in Iraq over what had been taking place at the time of the invasion and the years directly prior. And this remains an inarguable fact whether looking at IBC, Lancet, ILCS or any other source.

JoshD,

Finally! Something that the IBC and Lancet teams can agree on. You said, "The invasion has vastly increased the death toll in Iraq over what had been taking place at the time of the invasion and the years directly prior. And this remains an inarguable fact whether looking at IBC, Lancet, ILCS or any other source".

Now it's time that Professor Sloboda and his team stop making ludicrous remarks alluding to how the western state corporate media 'have highlighted IBC figures to illustrate the cost of the war', as he has put it in so many words. For the zillionth time, western pundits and politicians have incessantly used IBC figures to downplay the cost of the war, at least in terms of western complicity in what amounts to mass murder. You see, the civilian death toll doesn't play too well with the well-cultivated myth of the US-UK doctrine of 'noble intentions'.

Whatever the statistical strengths/weaknesses of the IBC/ILCS/Lancet/other studies, your concluding remark is the message that is being buried. Our media are effectively fiddling while Rome burns. So long as this message is obscured by debates over exact details, then the war party knows that, because the public's collective memory is short, this sordid episode will join the countless others that have long been consigned to the 'memory hole'.

By Jeff Harvey (not verified) on 02 May 2006 #permalink

Robert,

Your jellybean example is useful. Here is a short answer to your question. There is a true number of red jellybeans in the jar which you give as 94. There is no variance here: 94 is simply the truth. But then we use some procedure to estimate the number of jellybeans in the jar. This estimate is random. If we apply the procedure repeatedly we will get a variety of answers that will have some distribution. The key here is not to confuse the underlying reality, 94 red jellybeans, with our estimate of the reality.

Now I'll give more detail in case this interests you. Say you know the jar has 1,000 jellybeans and you wish to estimate the number of red ones. You select at random 50 jellybeans (1/20 of the jar) and get 5 red ones. Your estimate for the number of red jellybeans is 100 (5x20).

However, your sample of 50 may or may not be representative of the bigger picture, and the confidence interval is meant to reflect this uncertainty. Following your numbers, suppose we calculate a 95% confidence interval of 85-115. (The correct confidence interval will be much wider, but this discussion is just to illustrate the concept.)

Of course, you are right that there is a true number of red jellybeans in there. So what then could the 85-115 spread mean? The standard answer is a bit convoluted, but here it is. Suppose we repeatedly draw 50 jellybeans (returning the 50 we draw each time back into the jar). Every time we count the number of red beans and multiply by 20 to get our estimate of the true number of red beans in the jar. If 85-115 is the 95% confidence interval on this estimating procedure then 95% of these estimates will turn out to be between 85 and 115. (Again, this interval is far too narrow. The only possible estimates are multiples of 20, so if you get 4 or fewer red ones, or 6 or more, the estimate will fall outside of 85-115. Draws of 4 or 6 red jellybeans will be pretty common.)
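A quick simulation bears out that parenthetical remark; this is a minimal sketch, with the jar composition taken from the example above:

```python
# Simulate the estimating procedure described above: a jar of 1,000
# jellybeans, 100 of them red; draw 50 without replacement and multiply
# the red count by 20.
import numpy as np

rng = np.random.default_rng(0)
reds = rng.hypergeometric(ngood=100, nbad=900, nsample=50, size=100_000)
estimates = 20 * reds

# Only draws of exactly 5 reds (estimate 100) land inside 85-115,
# which happens on roughly 19% of draws: far short of 95% coverage.
print(np.mean((estimates >= 85) & (estimates <= 115)))
print(np.percentile(estimates, [2.5, 97.5]))   # roughly [20, 180]
```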

There is a different interpretation of probability that I prefer which works in terms of how a rational decisionmaker would evaluate choices, the outcomes of which depend on things that are uncertain, e.g., should I carry an umbrella today given that I have been told that there is a 30% chance of rain in the afternoon. In this case you could think of bets on the number of red jellybeans in the jar. Then saying that I have 95% confidence that the true number is between 85 and 115 means that I should be willing to give 19 to 1 odds in a small bet to someone betting that the true number is outside this range.

Tim wrote:

"This is untrue. The IBC argument turns on the Lancet point estimate being above the upper end of the ILCS CI. If you just scale based on the number of months (13 vs 18) the upper end is 40,000, which is greater than the Lancet point estimate. If you use the distribution of the Lancet violent deaths (14 in the overlap period, 7 after) the upper end is 43,000, even more. If you account for uncertainties in the estimation of the scale factor the upper end becomes even higher."

The problem with these suggestions is that they don't account for how deaths are distributed over time. If you just scale ILCS by the number of months then you are saying that the death rate in the nonoverlap period was roughly the same as it was in the overlap period. But the overlap period includes the first 6 weeks of the war when the death rate was massively higher than in any other period and also the first siege of Falluja which was also an exceptionally violent period. So the death rate in the nonoverlap period must be very much smaller than in the overlap period.

Your second suggestion would compound this distortion because it would put the death rate in the nonoverlap period higher even than the death rate in the overlap period. Suppose we accept your suggestion that the overlap period is 13 months and the nonoverlap period is 5 months. (I think that 13.8 and 4 is actually more accurate, but this doesn't matter much here.) Further, let's accept your ratio of 14 overlap deaths to 7 nonoverlap deaths. (This 14-7 split actually mixes criminal deaths with war deaths, but we can ignore this as well.) Then the implied ratio of the death rate in the nonoverlap period to the death rate in the overlap period would be [(7/5)/(14/13)] = 1.3. So the death rate would have to be at least 30% higher in the period just after the first Falluja siege compared to the period that includes the first Falluja siege and the first stage of the war.

Undoubtedly the IBC people felt that they couldn't use their own data in this calculation because some people would then dismiss their efforts on that basis. However, using the Lancet data to understand the time trend in the war is simply not feasible, as the Lancet study covers nearly 18 months and has only 14 war deaths. Even if the IBC data severely undercounts total war deaths, it is indisputably a better guide to the time trend than the Lancet study. Using the monthly IBC data published in their "Dossier" and your 13-5 split, I get a ratio of the killing rate in the nonoverlap period to the killing rate in the overlap period of .55 (which comes from [(3039/5)/(14386/13)]). One could refine this using daily data but the ratio will always come out far lower than the 1 or 1.3 implicit in Tim's suggestions.

Pursuing this line we can calculate an IBC-based adjustment factor as 1 + (0.55)*(5)/13 = 1.21. (Tim's is either 1 + (1)*(5)/13 = 1.38 or 1 + (1.3)*(5)/13 = 1.5.)
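Putting the arithmetic of the last few paragraphs in one place, as a minimal Python sketch (the figures are the ones quoted above):

```python
# Adjustment factor: 1 + r * (nonoverlap months) / (overlap months),
# where r is the ratio of the nonoverlap to the overlap killing rate.
def adj(r, nonoverlap=5, overlap=13):
    return 1 + r * nonoverlap / overlap

# Implied ratio if Lancet's 14/7 split of violent deaths is taken at face value.
r_lancet = (7 / 5) / (14 / 13)                # ~1.3

# Ratio from IBC's monthly "Dossier" counts: 3,039 deaths in the 5
# nonoverlap months vs 14,386 in the 13 overlap months.
r_ibc = (3039 / 5) / (14386 / 13)             # ~0.55

for r in (r_ibc, 1.0, r_lancet):
    print(round(adj(r), 2))                   # 1.21, 1.38, 1.5
```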

IBC's Lancet-based adjustment factor is about 1.17. My IBC-based adjustment factor comes out slightly higher at 1.21. Switching from theirs to mine would bump up the center of the ILCS distribution from 28,165 to 29,040 and the upper limit of the CI to 35,830. This changes very little from the IBC analysis. (I have to say that I'm rather surprised that mine turned out higher, and that they are so close, but there you go.)

To summarize, when you account for the fact that killing rates were much lower in the nonoverlap period than in the overlap period then the appropriate adjustment factors come out much lower than the ones Tim proposes. It turns out, somewhat surprisingly, that one can arrive at essentially the same adjustment using either the Lancet time distribution or the IBC one.

Tim had another point which I think is reasonable. We can calculate adjustment factors on various sets of assumptions (some more reasonable than others, I would insist) but we can't know the "true" adjustment factor. We can capture this uncertainty by saying that my 1.21 is a point estimate and that there is a probability distribution around it with a mean of 1.21. In the IBC picture the ILCS curve would then have a mean slightly to the right of where it is now (reflecting my slightly higher adjustment factor). It would also spread out more. This is because the new distribution would be a weighted average of a bunch of distributions all centered around different means, with the weightings reflecting the probabilities of the various adjustment factors. The highest weightings would be near 1.21, with the probability dropping off as you move in either direction.

The impact of this modification would be modest, however. For the right hand side of a single adjusted ILCS confidence interval to touch the Lancet central point the adjustment factor would have to exceed 1.29. This would imply a killing rate for the nonoverlap period of 75% of the killing rate for the overlap period. Perhaps we cannot completely rule this possibility out, but adjustment factors of this size or larger are very improbable. They would get little weight in a reasonable formulation. And there would be other adjustment factors below 1.21 that would also get weight. So the ILCS curve would spread out, but there would still be very low probability on the Lancet central estimate and above.

MS, I'm very surprised that you think that deaths in Iraq are measured to such great accuracy that with a point estimate of 1.21 a slightly higher estimate of 1.29 is very improbable. The IBC number is affected not just by changes in the underlying death rate, but by changes in the reporting rate as well. Since it's getting, at best, half of the deaths, it could change by 50% with no change in the death rate or it could stay unchanged with a 50% change in the death rate. Any reasonable interval around 1.21 is quite large.

Of course we do have a complete tally of coalition deaths. If I use those as a proxy for violence in Iraq, I get a factor of 1.4.

To summarize: doing it on a straight time basis gives 1.4, using coalition deaths gives 1.4, using Lancet violent deaths gives 1.5 and using IBC numbers gives 1.21 with a wide uncertainty. But you think it is "very improbable" that it's more than 1.29.

Apologies for the delay in responding.

Dano asked:

Do you have some sort of cred to warrant folks taking time out of their incredibly busy careers to share info with you?

1) Why does it matter what "cred" I have? If reading Tim Lambert teaches us anything, it is that you do not need to have "expertise" in a specific field to ask all sorts of interesting questions.

2) I guess it depends on what sort of "cred" you are looking for. I have a Ph.D. in Political Economy and Government from Harvard. I make a living as an applied statistician (more or less). I am active in the R open source community. I am an Institute Fellow at the Institute for Quantitative Social Science at Harvard. Is that cred enough for you?

Kevin asks:

Have you looked at the comment by Prof Sheila M. Bird (one of the reviewers) which is on their website?

No. My Googling skills are not what they should be. Can you provide a link?

Tim writes:

What is at issue is Kane's requests for more data. Roberts doesn't want to release this because it would make it possible to identify the people who participated in the survey.

Tim is correct that Roberts (reasonably enough) does not want to release data that would allow someone nefarious to identify specific individuals. Fair enough. But Tim also knows that Roberts et al have refused to share anything else (including details of the calculations). There is a great deal of material which would endanger no one and which they could release, but they have refused to do so. For example, I sent them this on February 15.

On page 2, you write:

"We obtained January, 2003, population estimates for each of Iraq s 18 Governorates from the Ministry of Health. No attempt was made to adjust these numbers for recent displacement or immigration. We assigned 33 clusters to Governorates via systematic equal-step sampling from a randomly selected start. By this design, every cluster represents about 1/33 of the country, or 739 000 people, and is exchangeable with the others for analysis."

I would like to try and replicate this portion of the analysis. Can you provide the raw population estimates that you used, as well as a description of the algorithm? Also, which software program did you use for doing this and how was the random start point selected?

I apologise for taking your time, but I think that a replication of your results is an important project. In the near future, I hope to create an R package that includes all the supporting data.

This was part of a longer e-mail. Their reply was, more or less, "Go away." (It would be rude of me to quote the actual e-mail without their permission.) I followed up on March 31 with:

Sorry to be a bother, but I asked you in February for some of the data to replicate your Lancet article. You haven't replied since then. To start with, I am just interested in the raw population counts from which you created the sampling scheme. You write on page 2:

"We assigned clusters to individual communities within the Governorates by creating cumulative population lists for the Governorate and picking a random number between one and the Governorate population."

Can you provide me with the cumulative population lists used? This is public information, I think, which could in no way place an interviewee in danger.

Of course, you are free to decline my request. Life is short and it's a free country. But if you could please let me know that you don't plan on helping me, that would be great. I don't want to bother you with an endless series of e-mails.

I have heard nothing back since then.

I conclude that Roberts et al have no interest in allowing outsiders to replicate their results, even results which do not require them to release information which might, theoretically, identify specific survey participants. When authors refuse such requests, I find it hard to believe their results.

Why do you believe them?

By David Kane (not verified) on 03 May 2006 #permalink

David Kane,

My humble PhD is in population ecology. I like to focus on processes and leave the mechanistic/statistical analyses to those more qualified than me. But I like to see the bigger picture, both in my research and other fields. As you are a qualified statistician, answer me these simple (non-statistical) questions.

Do you think the Iraq war violated international law? Do you think that US-UK forces committed atrocities in Iraq? Does the US support real 'bottom-up democracy' or only the top-down variety, that is most in line with its economic, military and strategic agenda? Can you give examples of other interventions in which the US and its client states have been involved that have resulted in mass suffering, death and devastation? Do you think that the current incumbent and his neocon brethren give one iota for human rights, democracy, and freedom?

I'm just askin' because I wonder why you are so vehemently critical of Les Roberts and the Lancet study, which produced results that hardly contradict previous imperial episodes and adventures in which the US and its junior sidekick, Britain, have been involved.

One final question. How many declassified documents have you read written by either US or UK state/government planners outlining the agenda of either government? I've read quite a few. Where do you think such noble intentions as human rights and democracy fit into them? Again, just wondrin'.

By Jeff Harvey (not verified) on 03 May 2006 #permalink

David Kane:

I'm vaguely surprised that articles you're sent for review include data and programs and that you take the time to replicate every finding. What journals are these? I ask because that hasn't been my experience at all. I've only ever been sent the paper itself since, as was explained to me many years ago, my publication recommendation should be based on what the journal reader will see. What got published in the journal is what I got sent to review.

You've already been sent the grouped data (thanks for sharing them; they are very helpful) and since they form a sufficient statistic for the averages it's possible to replicate the main findings from them. However, by continuing to demand individual-level data, census counts, program code, and (I suppose) the GIS coordinates for each cluster, the coding schemes for variables, copies of travel receipts to verify that the interviewers actually visited the right clusters, and the random seed for the bootstrap calculation, you're sending a signal that you suspect the authors of fraud. Frankly, had I been the recipient of your e-mails, that would not have sat well with me. I suspect I would have stopped after sending the initial set of grouped data.

I know nothing of either global equity hedge funds or the Marine Corps but perhaps you operate in a different world, one in which you learned to encourage free and open questioning of your decisions, to share data, programs, and methods whenever anyone asked, allowed them to replicate your analysis and verify your decisions, and promoted wide dissemination of their findings whether critical or not. Dude, that's street cred.

I hesitate to respond to trollish comments like those of Robert and Jeff Harvey, but I will suggest that others interested in the topic of replication start here for background. I also do not think it is fair to describe my writings on this site as "vehemently critical" of the Lancet article. (Although I did point out one error by Tim a while ago.) Indeed, I would say that 99% of the comments made by Tim and dsquared in response to other critics have been spot on. I often think that Roberts et al have benefited from having incredibly incompetent critics.

That said, the more that I deal with the Roberts et al (and the Lancet), the less impressed I am with their devotion to openness and transparency in academic research. There is nothing, obviously, that I or anyone else can do to make them explain precisely how they did their calculations. Some may conclude that their refusal to explain the results tells us nothing about the accuracy of their conclusions. Perhaps.

As for me, I remain suspicious of results --- from anyone, anytime, on any topic --- which cannot be replicated by outside researchers.

MS asks:

I would be very interested to learn more about your experience with the Lancet authors and the Lancet itself.

Roberts is the lead author but, as best I can tell, played no role in the actual statistical work. He kindly replied to a couple of e-mails and did send me some data (which Tim then posted). For statistical questions he sent me to another author, Richard Garfield, who I believe is the main statistician associated with the project. Garfield refused to answer any questions with regard to the data and the methodology. (The e-mails I quoted above were mainly directed to him.) His point of view seemed to be that this was a huge waste of time, both for me and for him. He felt that, to the extent that I care about this topic, I should work on projects designed to gather more data.

Roberts kindly provided me with the names of the Lancet editors responsible for the project. They are Bill Summerskill and Stuart Spencer. I believe that Summerskill was the initial editor but that Spencer was in charge for the main part of the process. Spencer took the time to reply to two of my e-mails, although the second reply made it fairly clear that I should stop asking him questions. His (the Lancet's) position seems to be that, if I want to know more about the data and methodology, I need to contact Roberts et al.

I wanted to know about the peer review process that the article went through. One of the unusual aspects of this process is that the article appeared in print just 6 weeks after data collection had stopped. Assuming that it took some time to write up the paper, there was not a lot of time for peer review. (It is unclear to me if the rush had anything to do with a desire on someone's part for the paper to appear before the US presidential election.)

I wanted to know, ideally, who the peer reviewers were and, if that were not possible, what the reviewers actually did. In particular, did the reviewers examine the raw data and/or the code used to estimate the results.

Spencer claimed that the Lancet had a policy against discussing the specifics of any paper in such detail. I asked for a copy of the policy. (I can find none on the Lancet webpage). Spencer has failed to reply.

By David Kane (not verified) on 03 May 2006 #permalink

David Kane:

My comments, trollish? You misunderstand me. I admire your philosophy of sharing data, methods, code, documentation, and procedures with whoever asks in order to allow them to examine your work. It must be because your experience with paper review is so dissimilar to my own, and because you replicate every finding, that I have such a different perspective on the reliability of authors' findings. What journals have you reviewed for? They have a written policy on reviewers replicating every finding?

Point 1: The ILCS CI and the IBC graphic.

First, let us be absolutely clear about one thing: the shape of these curves, whether there's some way for Lancet's point estimate to fit inside the ILCS CI, and indeed whether these two estimates broadly or exactly agree with one another, are not pivotal to the paper we have just written and published at http://www.iraqbodycount.org/editorial.defended/. This is readily apparent from the fact that we didn't touch upon any of these questions in the Executive Summary to this paper. The main purpose of the article was to demonstrate that talk of IBC's underestimating violent deaths by factors between 5 and 10, or severely underrepresenting deaths caused by coalition forces, among other claims, was based on a series of errors, and no one has raised any serious challenges to any of those arguments.

So talk like Tim's that IBC's "argument turns on" something or other regarding these curves is overblown and misleading, as is his fanciful title for this blog post. IBC has not "taken on" the Lancet study. More to the point here, we have shown that at least one misinterpretation of Lancet and ILCS - namely, that their point estimates coincide - was wrong, and based on a flawed analysis emanating from this blog (the Lancet point estimate was nearer 39,000 than Tim's 33,000, and the scaling up of ILCS that made no effort at all to adjust for war-weighting was more crude than it needed to be).

What we did say about ILCS in the Exec Summary was:

"the ILCS survey, which improves on the study by Roberts et al. in several crucial respects, is strangely under-emphasised by Roberts, Media Lens and their followers, yet it is superior to the Lancet study on sample size, geographical distribution of samples, and number of deaths recorded. As a result its 95% confidence intervals are far smaller, indicating far more precision in its estimate. On this basis, the ILCS estimate should be taken as the most reliable estimate of violent, conflict-related deaths available for the period it covers.

"When appropriately compared to ILCS, the worst one could say of IBC is that its count could be low by a factor of two, a far cry from factors of "five or ten"."

If anyone here wishes to continue arguing that ILCS and Lancet are broadly similar, then know that you are supporting our main thesis, not refuting it.

Now onto the minor issues:

The question we were attempting to answer and illustrate with the graphic Tim has reproduced from our article was, "what would the ILCS CI have looked like next to Lancet if that study had covered a period as long as Lancet's, and arrived at a credibly higher number as a result?".

This is a separate question from whether the higher number we used was indeed the 'true' number that ILCS would have recorded had it covered the longer period.

Combining these two questions into a new CI would only serve to mask and confuse the very thing we were providing an insight into, namely the relative likelihood of any ILCS-based estimate compared to one deriving from Lancet. Such conflation would instead be giving a picture of the uncertainty of the particular scaling correction we had chosen, superimposed upon and obscuring a precise estimate. No such ILCS study would have contained the adjustment in its CI that Lambert claims is an "error" not to introduce. So, doing as he advises would fail to provide any realistic illustration of what ILCS would have looked like had its time-frame been a few months longer, which is the whole point of the exercise.

The two pertinent issues are:

a. Any reasonable assumption about an expanded ILCS point estimate would always place that point significantly below Lancet's. There are many different ways to go about producing a reasonable correction for the time disparity, each of which may result in a slightly different single point along the horizontal axis of the graph, but they would all fall within a relatively narrow range and lie below Lancet's point estimate. What we did was attempt to evaluate the options for doing this and construct what we felt was the most reasonable approximation, making the mildest assumptions, while giving all due (and in some cases probably undue - no adjustment for Lancet missing military deaths from the invasion, for example, and see discussion of "excess deaths" considerations in the following post) benefit to a convergence with Lancet's point estimate.

b. Wherever the best scaled-ILCS estimate lies in this range below Lancet's point estimate, whether it be our specific figure or a slightly different one, ILCS will have always carried the vastly higher precision that we illustrate. In short, whichever of these choices is the singularly correct one, they will all show broadly the same picture we presented. It is of course the case that under some assumptions, ILCS will move closer to Lancet, under others further away (and the most reasonable ones would suggest further away, if anything). But the precision of ILCS will remain far greater than Lancet's and the curves will remain roughly the same. And as MS said above:

"You can play around with error terms to reflect that the right scale factor is uncertain. They must look more or less as they do in this picture because of the vastly larger sample of the ILCS study and the fact that the number of months of the two studies close to only minor rescaling is needed. There is simply no substitute for sheer numbers here. Size of the sample is everything."

We can of course quibble over whether our new point estimate for ILCS is the most reasonable single figure to choose, but that should be kept a separate question, one that can of course be debated on its own merits. Factoring the uncertainty of this exercise into a new conflated CI would only serve to distort and obscure any attempt to present an accurate picture of what the CI of a longer ILCS study would have looked like compared to Lancet's (which is also left unadjusted in our illustration). And this was the whole point of the graphic.

So we do not accept this as an "error", but rather as the appropriate solution for analysing the pertinent issues, one we feel has far more information value than the one Tim proposes, which evidently has other priorities.

We assume that those priorities include arriving at what Tim sees as a more realistic comparison of the central estimates, and we address those issues next. None hold any surprises for the 'IBC amateurs', as this isn't the first time we've had to address them. Points which we made in earlier posts to this blog, and which appear not to have been comprehended, are not going to be repeated below, however. We are here to report what we know, not to force people into submission.

Anyone who wants to continue insisting that months can be treated as being of equal value, as in:

"If you just scale based on the number of months (13 vs 18)..."

or that there is nothing wrong with the method proposed below:

"If you use the distribution of the Lancet violent deaths (14 in the overlap period, 7 after)....

after having read our previous posts here, is not going to benefit from anything more we have to post on those matters.

Point 2: ILCS unscaled.

We can easily set aside the debate about the scaling of the ILCS CI (addressed in our post immediately before this one). We don't need to worry about scaling the ILCS CI at all if we confine Lancet to the ILCS time-frame.

The precise ILCS central estimate is 23,743 with a CI of 18,187 to 29,299 (Analytical Report, p. 55). This covered a maximal 14-month period (fieldwork commenced on 22 March 2004 and ended on 25 May 2004). Tim seems to want this reduced to 13, which we think questionable, but we'll give him the benefit of the doubt, and grant that the effective period ran to the end of April, not May, 2004.

By that time Lancet had recorded 16 deaths. There can be no argument about this - it is clearly given in the timeline. 11 violent deaths occurred before March 2004. Two happened in March, three in April. That's 16. Then there's the one Lancet death in May which we believe ILCS would have picked up (if it was war-related), but we'll again grant that it might not, or rather, that it did not.

There can be no more than 5 criminal murders post-ILCS, because there are only 5 Lancet deaths post-ILCS to begin with. This means there must be at least 2 Lancet criminal deaths inside the ILCS time-frame.

As discussed in our previous posts, there are both criminal murders and war-related deaths in the Lancet survey, meaning that there are different possible Lancet estimates for this period, depending on what proportion of its 14 war-related and 7 criminal killings were among the 16 Lancet deaths recorded within the ILCS time-frame. This information is not in the public domain, as far as we are aware. (Relative to the other discussion going on in this post, one might ask why Mr. Roberts doesn't just release this time-line data so everyone can end these speculations.)

Here, then, are all the possible variants for the breakdown of Lancet's ILCS-time-frame deaths by war- vs criminal-caused, where W=war-related and C=criminal murders:

14W : 2C

13W : 3C

12W : 4C

11W : 5C

10W : 6C

9W : 7C

If we follow Lambert's way of doing things, we now encounter the problem of not knowing from where we are supposed to subtract the 1 violent death that allows his multiplication by 3,000 to incorporate the "excess deaths" principle in Lancet (and in SAS). This brings us to an overlooked but important issue: the validity of comparing an "excess deaths" estimate (Lancet) against one that is just a straight "deaths" estimate (ILCS), which incorporates no such principle.

If ILCS were an "excess deaths" estimate like Lancet, it too would require the subtraction (if Lancet's pre-war adjustment is correct) of about 6% of the violent deaths it recorded. This would place the Lancet-comparable ILCS central estimate at just over 22,000 deaths. But a much more straightforward solution is to use all 14 of Lancet's war-related violent deaths, treating both studies as unadjusted estimates, as is easy enough to do.

The post-invasion deaths from war-related causes in Lancet then become completely independent of any pre-war "excess" adjustment.

Comparing the studies in this manner is highly appropriate, for several reasons. It would be nonsensical to insist, for instance, that the Lancet survey had failed to record any violent deaths directly resulting from war and occupation because it had found a similar number of violent deaths in the pre-war period. If we wanted to find out how many people had actually died in war-related violence, we would of course look at the unadjusted figures for war-related deaths Lancet had recorded, and save any "excess deaths" judgments for other (in IBC's view, morally dubious) purposes.

Further, there were no "war-related" deaths to subtract in the period prior to the war. All that can be subtracted is some generalized "violent deaths", be they by the previous regime or from crime. But what is being measured here is deaths from the war.

Thus the properly comparable like-for-like number is between the unadjusted ILCS estimate and the unadjusted Lancet total of 14 war-related violent deaths, rather than 14 adjusted downwards to 13 to account for one pre-war death. (Remember that the adjustment number is independent of the war's effect and could have been not just 1 but anything at all. If Iraq had really been much more violent before the war, it could have been 2, 3, 4, or higher - which if applied to the post-war numbers, could "disappear" any number of war deaths, even all of them if it was high enough. That the adjustment in this case is small - about 6% - does not change the fact that it alters and distorts the actual number of war deaths.)

This means that we may safely multiply the numbers listed above by 3,000, leading to six possible resulting estimates, with Lancet's ILCS-comparable war-related deaths listed on the left and criminal murders on the right (a short sketch reproducing the arithmetic follows the list):

42,000W : 6,000C

39,000W : 9,000C

36,000W : 12,000C

33,000W : 15,000C

30,000W : 18,000C

27,000W : 21,000C
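A minimal sketch reproducing both tables, assuming only the constraints stated above (16 deaths in the ILCS time-frame, at least 9 of them war-related, 3,000 deaths per recorded death):

```python
# Each of Lancet's 16 ILCS-time-frame deaths is either war-related (W)
# or a criminal murder (C), with W running from 14 down to 9.
for w in range(14, 8, -1):
    c = 16 - w
    print(f"{w}W : {c}C  ->  {w * 3000:,}W : {c * 3000:,}C")
```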

The first thing to be said here is that, as one might expect, the two extreme variants require the acceptance of stronger assumptions than the others.

The upper-end variant (14W : 2C) requires that *all* the deaths recorded by Lancet in the summer months following ILCS were criminal murders and that therefore Lancet recorded *no* war-related deaths over those months. (We discussed this variant in point 5 of our previous posts, because it provides an even more shaky basis for scaling ILCS - in fact, it negates the need to scale ILCS at all, since it assumes that ILCS already recorded all Lancet's war-related deaths four months in advance of Lancet.) It also, of course, turns Lancet into a wild overestimate of war-related violent deaths for the ILCS period (42,000 compared to ILCS's precisely-defined 24,000).

Looking at the lower-end variant, only this one of the 6 variants would allow Lancet's point estimate to fall within the ILCS CI at all. It represents a scenario wherein Lancet recorded just 9 war-related deaths and all 7 of its criminal murders during the ILCS time-frame. And it also requires one to accept some unlikely scenarios.

First, this scenario gives us just 9 war-related deaths to distribute throughout the 13-month period, but 5 are taken up by the invasion of March-April 2003, leaving only 4 to distribute in the following 11 months. It is not exactly credible that there were so many criminal murders during the invasion that they could have amounted to a number as high as 25% of the Iraqis killed by the massive "shock and awe" air and ground campaigns, which some people seem to have a hard time recalling, but some of us have documented quite well. However it is possible that, in Lancet's sample, that's what was recorded - one criminal murder and 4 war-related deaths. So let's again give the benefit of the doubt in such a way as to make this scenario a little more reasonable, and call one of the 5 invasion-phase deaths a criminal murder. (The next assumption on this scale would be 2 criminal murders during the invasion phase, representing a percentage equal to 67% of all the Iraqis killed by the invasion, and take us straight from the unlikely to the absurd. It can be ruled out.)

Having assumed this much, we now have 5 war-related deaths to distribute over the first 11 months of the occupation (15,000 once extrapolated). However over the same period we also have 6 criminal murders, making them the majority of killings in this period (18,000). More surprising is that after having been the major cause of violent deaths over 11 months they "stop" altogether and become 0% of deaths immediately after ILCS, a level sustained for the next 4 months.

This is the particular scenario you must adopt in order to obtain an estimate where Lancet's point estimate falls within the ILCS CI (while of course still remaining some distance above the much higher probability ILCS point estimate). All the other scenarios, including those that spread the criminal murders more evenly, put the Lancet central estimate outside the ILCS CI - in most of them, far outside it. You also have to show why you reject the majority of solutions in favour of this particular one out of six.

You may ask why we didn't go down this route in our paper. The answer is obvious: assumptions and unknown factors loom much larger than in the analysis we chose, even if the most likely results in this analysis make Lancet's estimate of war-related violent deaths seem more of an outlier than in our published paper (we do however include in our appendices a novel, and for champions of Lancet's war-related deaths estimate, devastating approach which continues to be studiously ignored, but no matter). Or as someone we respect put it,

"Not everyone in this debate gears his analysis to the conclusions that he'd like to reach."

JoshD,

I have slightly reworded the questions Ron F asked. I hope you answer them this time - no mathematical calculations are required.

1) Iraq Body Count co-founder John Sloboda stated -

Our best estimate is that we've got about half the deaths that are out there.

Why don't these words appear right under the Min/Max numbers on your homepage, with a link to a detailed explanation?

2) Iraq Body Count sends traffic to numerous media articles which grossly misrepresent their work.

Why? In fact, would you not agree that any media outlet that uses IBC numbers without stating "they believe they capture half the deaths" is misusing your work?

3) Have Iraq Body Count rebutted or attempted to correct a single article that misuses their findings?

By joe emersberger (not verified) on 03 May 2006 #permalink

The Lancet does indicate five times as many deaths as the IBC. I don't see why you deny this.

>The question we were attempting to answer and illustrate with the graphic Tim has reproduced from our article was, "what would the ILCS CI have looked like next to Lancet if that study had covered a period as long as Lancet's, and arrived at a credibly higher number as a result?".

>This is a separate question from whether the higher number we used was indeed the 'true' number that ILCS would have recorded had it covered the longer period.

Oh, good grief. You are now saying that the ILCS graph above is a *hypothetical*? You didn't say this when you presented it and you cannot then state, as you do:

>the ILCS data only allows for a one in a thousand chance that the true number lies within the upper half of the Lancet range (the area shaded in grey).

I suppose you could say

>in this hypothetical case the ILCS data only allows for a one in a thousand chance that the true number lies within the upper half of the Lancet range (the area shaded in grey).

But that's not a reason to believe that the Lancet estimate is too high, except hypothetically.

And hey, even in your hypothetical case the curve is *still* wrong. If the ILCS was conducted over a longer time frame and picked up 20% more deaths, the point estimate would increase 20%, but the width of the confidence interval would only increase by 10%.
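The arithmetic behind that 20%/10% claim, sketched on the usual assumption that the relative width of a survey CI shrinks like one over the square root of the number of deaths recorded:

```python
import math

# A 20% longer ILCS recording 20% more deaths scales the point estimate
# by 1.2, but with relative width going like 1/sqrt(deaths), the absolute
# width of the CI scales only by sqrt(1.2).
scale = 1.2
print(scale)              # point estimate: +20%
print(math.sqrt(scale))   # CI width: ~1.095, i.e. about +10%
```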

You claim:
>Any reasonable assumption about an expanded ILCS point estimate would always place that point significantly below Lancet's.

Not true, as I have already shown. Using time, the Lancet time distribution, or coalition casualties all put the top of the ILCS CI above the Lancet estimate. I'm sorry, but the data does not let you draw the conclusions you are trying to draw.

JoshD:

I've looked over your site but can't find a tally of reported deaths by month. Does such a listing exist?

Only a fool would attempt to draw firm conclusions from what are in reality nothing more than estimates.

"The Lancet does indicate indicate five times as many deaths as the IBC. I don't see why you deny this."

For the same reasons that you went to great pains to deny that Lancet indicates four times as many deaths as ILCS. You're disingenuously moving the goalposts Tim, as we discussed previously.

"Oh, good grief. You are now saying that the ILCS graph above is a hypothetical?."

Anything containing any assumptions is a "hypothetical", such as: "Making conservative assumptions, we think that about 100,000 excess deaths..." If nothing can be said from "hypotheticals", then so much for Lancet.

"But that's not a reason to believe that the Lancet estimate is too high, except hypothetically."

There's no reason to believe Lancet showed anything other than 21 violent deaths, except "hypothetically" Tim.

"Using time, or the Lancet time distribution or coalition casualties all put the top of the ILCS CI above the Lancet estimate."

The point to which you offer this response was about the ILCS point estimate, not its CI. Moving goalposts again.

I haven't checked your coalition-based statistic, but to assume its trends would transfer directly to civilians is unfounded, and a far more shaky "hypothetical" than anything we have made. And the assumptions to which you must cling to make Lancet's time-line give you what you want (while rejecting many far more plausible ones - and while entirely ignoring far more significant factors that would drive the two further apart) are again very shaky at best.

"I'm sorry, but the data does not let you draw the conclusions you are trying to draw."

As said in the previous post, we were here "not to force people into submission". You may continue to believe whatever you choose. IBC tries to stay out of religious matters.

Robert, that can't be right. The invasion has the highest civilian death rate, so there should be a big spike on the left in early-mid 2003. I didn't check the rest.

JoshD, there are two points in the top left-hand corner of Robert's picture. You could make a big spike by joining up the dots.

By Kevin Donoghue (not verified) on 04 May 2006 #permalink

Yeah, the two points for March and April 2003 are way up in upper left corner. I've removed the little dotted line (which was a smooth I had added for a different purpose) and updated the plot; perhaps it was the line that had drawn your eye away.

Okay, I'm confused. Off the top of my head, I think that for the Lancet time frame (up through Sept 2004), IBC said there were about 15,000 violent civilian deaths, including criminal murders. The Lancet said (midrange estimate) about 60,000 violent deaths. But that may include an unknown number of insurgents. I don't have the paper handy--I think one could put an upper limit on the possible number of insurgent deaths out of the 21 based on age of the men killed and assuming some arbitrary fraction were insurgents. So maybe, ballpark estimating, the Lancet gives a number about 3 times bigger for civilians.

I'm sort of a broken record on this, but the fact that the Lancet paper couldn't do anything sensible with its Fallujah outlier doesn't mean Fallujah doesn't exist. And much of the violence at Fallujah that is recorded in the Lancet occurred after the ILCS data. So any attempted correction to the ILCS paper using the Lancet data is likely to be an underestimate if you only use the 21 deaths. Of course, by IBC statistics the number of people killed in Fallujah by US air strikes in the summer and fall was only about 300 anyway, but the Lancet paper gives reason to think Fallujah was hit much harder than the media reports were able to determine, even if you can't take the one neighborhood and do much with it. (I recall Les Roberts or someone saying that in hindsight it would have been good to take just one other sample from Fallujah--if it also showed horrific casualties then the one they got probably wasn't a fluke. Somehow I doubt anyone has done that.)

By Donald Johnson (not verified) on 05 May 2006 #permalink

I downloaded the Iraq Index put out by the Brookings Institution (the NYT carries excerpts from their monthly reports on the opinion pages from time to time). They refer to the Iraq Body Count numbers of 35-37,000 as an "estimate".

More importantly (because everyone refers to the IBC number as though it were an estimate), they also have a table which shows the number of insurgents killed or captured from month to month. Every single month from March 2003 up to the present shows between 750 and 3000 insurgents killed or captured, with a total (if I counted correctly) of over 60,000. There's a footnote which says the sudden surge that seems to have occurred in November 2003 may reflect an improvement in the data rather than an actual change in the killing and capturing rate. As it happens, the jump was from 750 in October to 3000 in November--a factor of four. Fancy that--an admission that we might not know what level of violence the US military is inflicting on its enemies to within a factor of four.

IBC put out a report on the first two years of their data last summer and there's a table in there which shows the number of civilians killed by coalition forces. In most months (with a handful of obvious exceptions in the early months and when Fallujah was being attacked), the number was typically in the low dozens. I compared it to the monthly death rates for coalition soldiers and very often the coalition forces lost more men than there were civilians killed by the coalition. So anyway, putting the Brookings numbers in with the IBC numbers, apparently the US is capable of capturing or killing around 2000 insurgents while only killing dozens of civilians. If that's true it's a stunning tribute to the "purity of arms" of the US military. Israel has, according to B'Tselem, killed more unarmed than armed Palestinians during the current Intifada. But cynical people might wonder if the numbers only reflect what the press is able to find out, not what actually is happening. And as it happens, there was a report in the December 2, 2003 issue of the LA Times which said that the US military was starting to report how many insurgents it was killing, because it was tired of being perceived as the side that took all the casualties and never inflicted any. You'd think that the press couldn't find this out unless the military told them. And apparently you'd be right.

BTW, if the US really did capture or kill 60,000 insurgents, then they clearly have such a large base of support that the distinction between guerilla and civilian supporter is likely to blur as it usually does in guerilla wars. Which ought to make it really, really difficult to sort out the "correct" number from press accounts.

By Donald Johnson (not verified) on 05 May 2006 #permalink

joshd, you refuse to clarify the description of the IBC number on the front page even though I've demonstrated that it is usually misunderstood because all you are doing is giving the number of reported deaths. But you object when I observe that the Lancet study indicates that there were about five times as many deaths. If all you are doing is counting the number of recorded deaths, why do you have anything at all to say about estimates of the total number of deaths?

joshd, I am very disappointed in your conduct. The confidence intervals you present are, without question, incorrect. You then go on to make arguments based on those incorrect intervals:

> the ILCS data only allows for a one in a thousand chance that the true number lies within the upper half of the Lancet range

This is only true if you assume a particular adjustment factor. You don't mention this. Everyone here, including your supporter MS, agrees that the confidence interval should be wider to account for the uncertainty in the adjustment factor. But you refuse to correct the error in your analysis. Are your co-authors aware of this? Should I contact them to see if they will correct it?

Tim, you may contact whoever you like. We will certainly not "correct" something that is not in error; our approach to the CIs is not in error, and is far better and more appropriate than the one you'd like us to take, as we explained above.

Most people I know who clicked on the IBC homepage believed that less than 39,000 civilians have been killed in Iraq because they didn't realise -- until I pointed it out -- that there is a major discrepancy between actual deaths and reported deaths.

People should not have to click on the "Quick-FAQ" section to see that IBC's maximum "can only be a sample of true deaths" because "most civilian casualties will go unreported by the media". This caveat should be displayed +prominently+ above or below the counter. How long would it take for IBC to do this?

Furthermore, does it say +anywhere+ on the IBC website that the IBC team are outraged by the fact that many journalists continue to misrepresent and misuse their work?

Jon Pederson, the head of the ILCS survey, expressed some doubts about the competence of his survey workers. The article where that appeared was by Carl Bialik last summer at the Wall Street Journal online site. [Here's the link](http://online.wsj.com/public/article/SB112309371679604061-EAfVPB24lX6gS…)

To the layperson an estimate of 28,000 seems to be in rough agreement with 39,000 anyway. Plus some of the worst violence inflicted by the US according to the Lancet paper occurred in Fallujah after the ILCS team finished. (The ILCS team presumably caught the violence of the spring invasion, though I have to admit to some puzzlement about how easily they apparently managed to do this, when by Sept 2004 it was apparently extremely dangerous for the Lancet team to even show up there.) I wonder how you should incorporate a lack of confidence in one's survey team into a calculation of a confidence interval. Seems like it would be tough to do.

By Donald Johnson (not verified) on 07 May 2006 #permalink

Notice this comment in the LA Times piece cited by Robert--

"Obtaining accurate numbers from the Health Ministry or the 18 major hospitals serving Baghdad proved difficult, because officials at all tiers of government routinely inflate or deflate numbers to suit political purposes."

Well, who would have guessed? And I suppose one shouldn't expect the US government to be any more honest about the civilians it kills. Of course this means you can't even use the IBC numbers to provide a bare minimum (not that I doubt that the true number is considerably higher.)

By Donald Johnson (not verified) on 08 May 2006 #permalink

Tim wrote:

"This is only true if you assume a particular adjustment factor. You don't mention this. Everyone here, including your supporter MS, agrees that the confidence interval should be wider to account for the uncertainty in the adjustment factor. But you refuse to correct the error in your analysis. Are your co-authors aware of this? Should I contact them to see if they will correct it?"

I had hoped to extricate myself from all of this but I became aware of a lot of distortions, some made in my name and I'd like to correct them.

I did not say that there was an error in the IBC analysis. I think that they did something entirely reasonable and ended up with a reasonable adjustment factor for scaling up ILCS to compare with Lancet. They used the information on the timing of deaths in the Lancet paper. I used the IBC data to make my own adjustment and, remarkably, arrived at virtually the same figure.

I went on to say that there could be other ways to arrive at an adjustment factor. All the ones I've seen from Tim strike me as unreasonable. From every adjustment factor you can back out an implied ratio of the rate of killing during the period of overlap between the Lancet and the ILCS studies to the ratio during the nonoverlap period. All of Tim's suggestions imply a higher killing rate during the nonoverlap period than during the overlap period (details in one of my earlier posts). The overlap period includes the first 6 weeks of the war, when the killing rate was overwhelmingly higher than in any period, and the first siege of Falluja, when the killing rate was very much higher than average. No, I do not think that I know the exact numbers but these facts are clear even without them. Surely the first six weeks and Falluja were very big compared to the other events during this time frame. Tim's adjustment calculations would require us to deny this reality.

Using coalition casualties to estimate the pattern over time of Iraqi casualties, as Tim suggests, will also be very misleading for similar reasons. During the first 6 weeks and during the first siege of Falluja, surely the ratio of Iraqis killed to coalition forces killed was extremely high. In other words, when coalition forces were dropping thousands and thousands of bombs from the air and rarely getting shot down they must have been killing unusually many Iraqis for each death of theirs. Again, one doesn't need to know the precise numbers to understand that this must be true.

I have also seen posts that suggest that when ILCS is scaled up, the width of its confidence interval is either maintained or shrunk. This is wrong. These are normal distributions. When multiplied by a fixed factor, the lower and upper bounds of the confidence interval scale by the same factor. When the factor is more than one, the width of the interval increases by this factor. So the ILCS interval widens under this exercise, and it is easy to verify that the IBC picture is faithful to these properties.

Finally, something that seems to be forgotten in this discussion is that the ILCS data includes Falluja whereas the IBC data to which it is compared does not. If we somehow add Falluja back into the Lancet data then any remaining hope of consistency with ILCS evaporates. There does seem to be a consensus on this site that the Lancet number of 52 violent deaths for Falluja is a gross overestimate. But suppose we say that the overestimate for Falluja is by a factor of 25, leaving about 2 remaining violent deaths. Even these two deaths would shift the Lancet curve substantially to the right, and even Tim's rescaling factors would fall far short of what would be required to make them consistent.

In fact it is unlikely that the ILCS picked up the siege of Falluja. Are we supposed to believe that they had no trouble surveying Falluja while the siege was going on? The IBC calculations wrongly assume that all the surveying was done on the very last day of the field work. Actually it was done over two months, starting on Mar 22, before the siege of Falluja. Since they were able to survey Falluja without difficulties (unlike the dangers faced by the Lancet team), it is likely that Falluja was surveyed before the siege and none of the deaths there were picked up by the ILCS.

Again, the IBC picture is wrong because it assumes that the scaling factor is known exactly. I am perplexed that you claim that the picture is correct while conceding that different scale factors are possible.

MS wrote--

"Finally, something that seems to be forgotten in this discussion is that the ILCS data includes Falluja whereas the IBC data to which it is compared does not. If we somehow add Falluja back into the Lancet data then any remaining hope of consistency with ILCS evaporates. There does seem to be a consensus on this site that the Lancet number of 52 violent deaths for Falluja is a gross overestimate. But suppose we say that the overestimate for Falluja is by a factor of 25, leaving about 2 remaing violent deaths. Even these two deaths would shift the Lancet curve substantially to the right and even Tim's rescaling factors would fall far short of what would be required to make them consistent."

The ILCS includes Fallujah and the IBC numbers do not? I assume that's a typo. At most ILCS includes the April assault--maybe, maybe not.

The Lancet paper's Fallujah data shows the largest number of casualties in August, due to the bombing. Iraq Body Count picked up on the bombing of Fallujah--their total for the months of July through October was 299. The number of Iraqi civilians killed by Americans in August 2004 was 171, and presumably most were in Fallujah. (It's in their two year study). The Lancet data implies this is a huge undercount--though I find it hard to believe that literally one fourth of Fallujah residents were killed, it's unlikely, if the true death toll from the summer bombing was only a few hundred, that the survey team would have stumbled on one of the very few neighborhoods that had lost dozens.

Anyway, yeah, if we "corrected" the Anbar cluster down to two deaths, meaning that 6000 people died (which seems like a reasonable sort of wild guess to me), that would be tacked onto the 39,000 Lancet figure and make it that much harder to reconcile with the extrapolated ICLS number of around 28000 that IBC produced. So what? All that would mean is that there was an extremely violent American military operation and most if not all of it occurred after the ICLS survey teams did their work.

By Donald Johnson (not verified) on 08 May 2006 #permalink

>These are normal distributions. When multiplied by a fixed factor, the lower and upper bounds of the confidence interval scale by the same factor.

MS, this is a howler. This would be true if the number we were interested in was "X times the ILCS estimate", where X was a known scaling factor. But what we are interested in is "the correctly scaled ILCS estimate", of which "X times the ILCS estimate" is our best estimate. So X, our estimated scaling factor, has a sampling distribution of its own. It's not just a scalar that you can multiply by.
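A minimal Monte Carlo sketch of this point; the 0.10 spread on X is an invented number purely for illustration:

```python
# Scaling the ILCS estimate by a fixed factor preserves its relative
# precision; scaling by an uncertain factor X widens the interval.
import numpy as np

rng = np.random.default_rng(1)
se = (29_299 - 23_743) / 1.96                 # from the ILCS 95% CI
ilcs = rng.normal(23_743, se, size=200_000)

fixed = 1.21 * ilcs                           # X treated as exactly known
x = rng.normal(1.21, 0.10, size=200_000)      # X with its own (assumed) distribution
uncertain = x * ilcs

# The second interval comes out noticeably wider than the first.
for draws in (fixed, uncertain):
    lo, hi = np.percentile(draws, [2.5, 97.5])
    print(f"{lo:,.0f} - {hi:,.0f}  (width {hi - lo:,.0f})")
```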