Violent war deaths: Surveys vs passive surveillance


I've been remiss in not commenting on Obermeyer, Murray and Gakidou's paper in the BMJ, "Fifty years of violent war deaths from Vietnam to Bosnia: analysis of data from the world health survey programme". OMG (as I'll abbreviate the authors) derive estimates of violent war deaths in thirteen countries from the World Health Survey and compare them with counts from passive surveillance (like the Iraq Body Count). Here's my graph of their results, showing 95% confidence intervals for the ratio between the survey estimate (WHS) and the passive count (Uppsala/PRIO). The green line is the weighted average of the ratios (2.6).

[Figure: ratio of WHS survey estimate to Uppsala/PRIO passive count for each conflict, with 95% confidence intervals; the green line marks the weighted average ratio of 2.6.]

As you can see, surveys generally give a much higher estimate, but not always.
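
For the curious, a graph like this takes only a few lines of Python. The numbers below are placeholders (apart from the Namibia and Georgia extremes mentioned later in the post), so this is a sketch of the method rather than the actual data:

```python
import numpy as np
import matplotlib.pyplot as plt

countries = ["Namibia", "Bosnia", "Georgia"]   # placeholder subset of the 13
ratios    = np.array([0.3, 2.0, 12.0])         # WHS estimate / Uppsala-PRIO count
ci_low    = np.array([0.1, 1.0, 4.0])          # hypothetical 95% CI bounds
ci_high   = np.array([0.9, 4.0, 36.0])

x = np.arange(len(countries))
plt.errorbar(x, ratios, yerr=[ratios - ci_low, ci_high - ratios], fmt="o")
plt.axhline(2.6, color="green", label="weighted average = 2.6")
plt.yscale("log")   # ratios spanning 0.3 to 12 read better on a log scale
plt.xticks(x, countries)
plt.ylabel("WHS estimate / Uppsala-PRIO count")
plt.legend()
plt.show()
```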
In a commentary on OMG, Richard Garfield argues that OMG's survey estimates are likely too low:

Despite rigorous efforts to correct for under-reporting, Obermeyer and colleagues could not correct for household members who chose not to report deaths. How the relevant questions were asked in face to face interviews can greatly influence the results obtained. Similarly, the total number of deaths in war may be grossly underestimated by multiyear demographic modelling. Half a million deaths can occur unnoticed when demographic models do not count actual deaths but depend on projections from count data that are decades old.

Finally, the study only includes violent deaths. In the poorest countries, where most conflicts now occur, a rise in deaths from infectious diseases often dwarfs the number of violent deaths during a conflict. For all these reasons, Obermeyer and colleagues' study is likely to underestimate the importance of conflict as a cause of death.

But the part of OMG that I want to comment on is this:

As a final point of comparison, we applied our correction method, derived from the comparison of survey estimates with Uppsala/PRIO data, to data from the Iraq Body Count project's most recent report of 86 539 (the midpoint of the 82 772 to 90 305 range reported in April 2008) dead in Iraq since 2003. Our adjusted estimate of 184 000 violent deaths related to war falls between the Iraq Family Health Survey estimate of 151 000 (104 000 to 223 000) and the 601 000 estimate from the second Iraq mortality survey by Burnham and colleagues.

There are a couple of problems here. First, they are comparing an estimate of deaths up to April 2008 with IFHS and Lancet 2, which only covered deaths up to June 2006. The IBC count almost doubled between those two dates. I think a better comparison is to plot the ratios of the five surveys of Iraqi deaths to the IBC count for the matching period on the same graph:

[Figure: ratios of the five Iraq surveys (ILCS, Lancet 1, Lancet 2, IFHS, ORB) to the IBC count for the matching period, with 95% confidence intervals, on the same scale as the graph above.]

Compared with OMG's ratios, ILCS is on the low side, Lancet 2 and ORB are on the high side, and Lancet 1 and IFHS are in the middle. But none of the surveys falls outside the range observed by OMG.

Second, the linear model they use for their correction method is obviously wrong. They just do a linear regression to get the model

s = 27380 + 1.81p

where p is the passive count and s is the survey estimate. But this implies that even if the passive count were 0, a survey would find 27,000 or so deaths. This doesn't make sense -- both should be zero in that case. Further, the intercept parameter is not significantly different from zero. And if you plot the data, you'll see that the slope is completely determined by the value for Vietnam (the point in the upper right corner). The regression line just connects Vietnam to everything else clustered near the origin.

[Figure: survey estimates plotted against passive counts with the fitted regression line; Vietnam sits alone in the upper right and determines the slope.]
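
Both points are easy to check. Here's a sketch with made-up (p, s) pairs shaped like the actual data -- a cluster near the origin plus one dominant Vietnam-like point -- using SciPy's linregress (1.7+ for intercept_stderr):

```python
import numpy as np
from scipy import stats  # SciPy >= 1.7 for intercept_stderr

# Hypothetical (passive count, survey estimate) pairs: five conflicts
# near the origin plus one Vietnam-like point far out in the upper right.
p = np.array([1e3, 3e3, 8e3, 1.5e4, 4e4, 2.1e6])
s = np.array([3e2, 4e4, 1e4, 9e4, 1.1e5, 3.8e6])

fit = stats.linregress(p, s)
print(f"s = {fit.intercept:.0f} + {fit.slope:.2f} p")

# Is the intercept significantly different from zero? (t-test with n-2 df)
t = fit.intercept / fit.intercept_stderr
print(f"intercept p-value: {2 * stats.t.sf(abs(t), len(p) - 2):.2f}")

# Leverage (hat values): the Vietnam-like point comes out near 1,
# i.e. it determines the fitted slope almost by itself.
X = np.column_stack([np.ones_like(p), p])
hat = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)
print("leverage:", np.round(hat, 2))
```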

I think it makes more sense to use the weighted average ratio of 2.6 for the correction, which yields an estimate of 225,000 violent war deaths in Iraq up to April 2008. But this estimate is highly uncertain -- the ratios found ranged from 0.3 in Namibia to 12 in Georgia, so applying these to Iraq gives a range of 25,000 to 1,000,000 violent war deaths.
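
Spelling out the arithmetic:

```python
ibc_midpoint = 86_539  # IBC midpoint, April 2008

print(f"{2.6 * ibc_midpoint:,.0f}")   # weighted average ratio -> ~225,000
print(f"{0.3 * ibc_midpoint:,.0f}")   # Namibia ratio -> ~26,000 (rounded to 25,000 above)
print(f"{12  * ibc_midpoint:,.0f}")   # Georgia ratio -> ~1,038,000 (~1,000,000)
```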

Notes: OMG calculated an unweighted average of the ratios, but since some of the estimates come from smaller samples and are correspondingly less precise, they should not be weighted as heavily. I used the inverse square of the width of the confidence interval as the weight. Lancet 1 did not publish a confidence interval for the number of violent deaths, so the one in the graph above is my guess.
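
To see what that weighting does, here's a toy example (the ratios and interval widths are made up): the imprecise estimate barely moves the weighted mean.

```python
import numpy as np

ratios   = np.array([0.3, 2.0, 12.0])   # hypothetical ratios
ci_width = np.array([0.8, 1.5, 30.0])   # hypothetical 95% CI widths

w = 1.0 / ci_width ** 2                 # inverse square of the CI width
print("unweighted mean:", ratios.mean())                    # ~4.77
print("weighted mean:  ", np.sum(w * ratios) / np.sum(w))   # ~0.68
```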

Obermeyer, Z., Murray, C. J. L., & Gakidou, E. (2008). Fifty years of violent war deaths from Vietnam to Bosnia: analysis of data from the World Health Survey Programme. BMJ, 336(7659), 1482-1486. DOI: 10.1136/bmj.a137


Nice analysis, I'll take a second look later...

Regarding "s = 27380 + 1.81p, where p is the passive count and s is the survey estimate. But this implies that even if the passive count was 0, a survey would find 27,000 or so deaths":

No: unless p = 0 is in your data, it isn't proper to interpret the intercept as giving information about s.

Regarding the comment about Vietnam being an outlier/high-leverage point: have you tried a robust regression on these data? It would be interesting to see the results.

Tim: Can you tell us what estimate you are using for violent deaths in L1 along with a description of how you guessed at the confidence interval? In particular, do you include Anbar? And, if not, why not? (You are certainly correct that L1 did not publish a confidence interval for this.)

I used the 60,000 number for L1, which does not include Anbar, for reasons which I'm sure we've gone over before. For the CI, there were about 20 deaths. I guessed a design effect of 2, so took the standard deviation to be sqrt(20/2).

Tim,

Forgive my slowness, but, as Robert can confirm, statistics are not my forte. The square root of 20/2 is 3.16. So, what's the next step? We are looking for a standard error, of course. Again, I am not saying that you are wrong; I just don't know what you are doing. Can you start by giving us the actual confidence interval in raw numbers? I am having trouble reading your graphic, but is the lower bound at 1,000? That's pretty low.

This is sort of off-topic from your main point, but, once you ignore Anbar, the confidence interval for the rest of Iraq is quite narrow because the design effect is so low. I have never done the math to see if L1 and ILCS overlap, but I am pretty sure (and ignoring conflicting definitions) that they do not overlap nearly as much as they do in your figure.
Also, can you tell us what numbers you are using for ILCS? (I seem to remember that they gave a confidence interval, but can't find the citation right now.)

David Kane admitted:
> statistics are not my forte.

Ya think?

I multiplied 3.16 by 2 to get a 95% CI, then scaled it to the right units by multiplying by 2 (the design effect) and then by 3,000. The interval graphed is 60,000 +/- 38,000, divided by the IBC count for the same time period (which you can read off the horizontal scale).
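
Or, in a few lines of Python:

```python
import math

deaths = 20          # violent deaths in the L1 sample (excluding Anbar)
design_effect = 2

sd = math.sqrt(deaths / design_effect)        # 3.16
half_width = sd * 2 * design_effect * 3_000   # x2 for ~95%, then rescale
print(f"60,000 +/- {half_width:,.0f}")        # 60,000 +/- ~37,900
```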

> I am having trouble reading your graphic, but is the lower bound at 1,000? That's pretty low.

The scale on the horizontal axis is a ratio, not a death toll number... (as is written there...)

Tim is right. As we have daily numbers for the IBC death count, a ratio makes much more sense than comparing projections.

David, here is your problem: you combine limited mathematical understanding with a huge ego, an inability to admit errors, a focus on unimportant points, and a huge desire for publicity.

Quite an explosive (and BAD!) mixture...

Sorry, I meant the vertical axis. The scale on the horizontal axis actually is a death toll number (for example, the IBC one at a specific time...).

What is the difference between a survey and passive surveillance, regarding definition, methodology, and interpretation?