Lenin on the IBC attack on the Lancet study
I had anticipated that the team behind Iraq Body Count would react to the latest survey on Iraqi mortality published in the Lancet by trying to minimise its import and undermine its reliability. I was not wrong. The reason is fairly simple: they're defending their turf. They have been engaged in this operation ever since Media Lens asked them what they thought of the fact that mainstream media outlets were using their figures as reliable maximum estimates of the dead, and why they didn't challenge this evident untruth even though they acknowledged on their own site that it was indeed an untruth. Their place in the media spotlight is threatened, and that is the only occasion on which they have put up any kind of a fight, even going so far during the spat with Media Lens as to compare their opponents to terrorists on BBC 2.
I appreciate the Socratic method involved in IBC's attempt to rebut the findings of Burnham et al: rather than address themselves to matters of data collection, statistical analysis and methodology, about which the research team are often attacked in ignorance, IBC tries to examine a number of implications that follow if one accepts the study's findings. Describing these implications as extremely anomalous, they conclude that the findings cannot be accurate. This assessment, and their offering of it, rest on unsound assumptions.
Lenin takes the critique apart point by point and concludes:
And this is it. The whole thing is an enormous and misleading exercise in circularity, a massive raise of the eyebrow, a titanic exercise in obfuscation. They cannot touch the study for methodology, they cannot find anything in it that is badly done: not a single cluster wrongly placed, not a single false extrapolation, not a particle of evidence of any fraudulence or fecklessness. They hazily refer to possible bias, but on the basis of nothing more solid than that this would explain away the uncomfortable implications that they draw. As Daniel Davies points out, the chances of the Lancet authors obtaining the sample they did, if the facts were much closer to what the IBC records, are so low that it would have to be fraud. The IBC cannot and do not make this accusation, because they are not prepared to test their flimsy insinuations and doubts in a court of law. For a proffered rebuttal entitled 'Reality Checks', the IBC's intervention is breathtakingly short on either rebuttal or reality.
I noted in my response to IBC's criticism of the first Lancet survey that the IBC authors don't understand sampling. There's more of the same in this critique. Note:
Lancet estimates 150 people to have died from car bombs alone, on average, every day during June 2005-June 2006. IBC's database of deadly car bomb incidents shows they kill 7-8 people on average.
They've taken a subgroup (bombings) of a subgroup (the last 13 months). Once the sample gets that small the estimate has a wide margin of uncertainty. This is particularly true of bomb deaths, where a single bombing can produce multiple deaths within one cluster. A 95% confidence interval for bombing deaths per day would probably be something like 20-300.
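The point about small subgroups and clustered deaths can be illustrated with a quick simulation. Everything below is a hypothetical sketch: the cluster count and the bombing-death frequencies are invented for illustration, not taken from either the Lancet or IBC data.

```python
import random

random.seed(0)

# Hypothetical distribution of bombing deaths per surveyed cluster.
# Most clusters see none; a few happen to contain a mass-casualty
# bombing. These frequencies are invented for illustration.
def draw_cluster():
    r = random.random()
    if r < 0.90:
        return 0                       # no bombing deaths
    elif r < 0.98:
        return random.randint(1, 3)    # isolated deaths
    else:
        return random.randint(10, 40)  # cluster caught a large bombing

# Re-run a 47-cluster survey many times and record each estimate.
estimates = sorted(
    sum(draw_cluster() for _ in range(47)) / 47
    for _ in range(10_000)
)
lo = estimates[int(0.025 * len(estimates))]
hi = estimates[int(0.975 * len(estimates))]
print(f"95% of simulated surveys fall between {lo:.2f} and {hi:.2f} deaths/cluster")
```

The simulated interval is wide precisely because a handful of clusters carry most of the bombing deaths, which is exactly the problem with reading a subgroup-of-a-subgroup estimate as a precise daily figure.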
Alon Levy has more on the IBC's ignorance of statistics
Iraq Body Count, the organization that has been publishing gross underestimates of the number of civilian dead in Iraq since 2003, is pissed at the Lancet for giving a more accurate figure. IBC has written a lengthy apologia to that effect, which boils down to pointing to another survey that gives a lower figure [the ILCS]. ...
The ballpark figure of 24,000 [from the ILCS] is a good reason to doubt the relevance of the data to Lancet. The Lancet study estimates a total mortality rate, which in principle is cause-blind. When asked about war deaths, people may not respond affirmatively if the death was not obviously related to the war.
Obviously, if your mother was killed in Shock and Awe, you'll almost certainly consider her a war dead. But what if she was killed by American troops who were frustrated with not finding any real insurgents? What if she died in an insurgent bombing? The phrasing of the question is skewed toward war deaths as opposed to occupation deaths. At most, it refutes Dana's assertion that the Lancet numbers are too high because by a certain metric they're higher than a certain WW2 death toll. ...
In other words, the Lancet study may have a large error margin, but its figure is high enough that the margin doesn't matter. From the above formulation, if some group releases a study that says 400,000 Iraqis died, then the Lancet's figure is within its margin of error, so we can't conclude the group is wrong. But if a group claims that 40,000 Iraqis died, then the Lancet's figure is well outside its margin, so the Lancet contradicts it.
When we have two contradictory studies, we can't ever assume that the one with the larger sample size is wrong. We can only assume that if we have some discrepancy that lies within the margin of error. If the discrepancy is this big, we need to investigate the methodologies and see who's doing a mistake; it's possible neither side is, but the probability of that is vanishingly small. In this case, we have a study of Iraqi death rates that uses a standard epidemiological methodology, versus a compilation of media reports that not only neglects deaths not reported to the authorities but also neglects deaths not mentioned in the media.
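To make the margin-of-error logic concrete, here is a minimal sketch using the approximate published Lancet 2006 interval (roughly 390,000 to 940,000 excess deaths; treat the endpoints as rounded illustrations):

```python
# Approximate Lancet 2006 95% interval for excess deaths (rounded).
LANCET_LOW, LANCET_HIGH = 390_000, 940_000

def consistent_with_lancet(rival_estimate):
    # A rival figure only contradicts the Lancet study if it falls
    # outside the Lancet confidence interval.
    return LANCET_LOW <= rival_estimate <= LANCET_HIGH

print(consistent_with_lancet(400_000))  # True: inside the interval
print(consistent_with_lancet(40_000))   # False: the two estimates conflict
```

This is the asymmetry in the argument above: a 400,000 figure is compatible with the Lancet result, while a 40,000 figure is not.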
IIRC, some human rights group denounced the first Lancet Iraq article. They were working with the Pentagon, and apparently 'went native'.
Actually no. Garlasco was ambushed by the WaPo, spoke before he had seen the study, and later retracted. Not that it mattered; the Conventional Wisdom for America's elite journalists had been established.
I'd say the only thing Iraq Body Count really does is tell us just how bad the media are at covering what is going on in Iraq.
Since the war began, an estimate based on monthly Baghdad morgue numbers alone totals roughly the same as the "max" estimate given by IBC, so it is a stretch (to say the least) to claim that their 48,000 dead applies to the whole country. Baghdad has contributed a significant part of the total dead, but clearly not all.
Actually, the mere fact that IBC labels their estimates "max" and "min" tells you all you need to know about their knowledge of statistics (or at least about their ability to convey that knowledge) -- as if, somehow, they are absolutely certain that the actual number lies between those two numbers (i.e., that the error bar in this case is only 5,000).
Thanks, Tim. I had thought that he had later retracted, after having thought it over.
Well, you did use a subgroup of a subgroup to come up with the notion that the Iraq Living Conditions Survey "validated" the first Lancet study.
http://timlambert.org/2005/05/lancet34/
Actually what it showed me was how hard it is to get reliable numbers from even a massive survey, where random sampling error is virtually a non-issue. I don't think we've got a good estimate of the 2002 crude death rate or infant mortality: anything between 2.5 and 15 per 1,000 seems quite possible for the former, and anything between 20 and 150 per 1,000 live births for the latter.
And likewise we haven't got good recent numbers and I think they are fundamentally unobtainable.
In fact, the whole discussion has made me very skeptical about other claimed death tolls, say Darfur or Saddam's killing campaigns or the claim that 2 million North Koreans starved in the 1990's.
These days I take all such claims with a big grain of salt. I want to know how they were obtained and I want a better measure of uncertainty than statistically obtained confidence intervals.
Confidence intervals are a measure of the uncertainty coming from random errors. They don't include author bias and systematic errors, like recall bias or wrong sampling.
As for the Lancet study specifically, a lot comes down to trust. The actual field work is uncheckable.
The Lancet has a high impact factor and is a prestigious peer reviewed journal. But, in the end, it's the editor that makes the decision to publish, and that editor is very clearly biased, and has a dubious track record. There is no other journal I am aware of that's come in for more criticism than the Lancet.
http://www.timesonline.co.uk/article/0,,2-1658807,00.html
Likewise, the biases of the survey team and of the lead researchers matter too. Author bias is something that can creep in very easily, even when authors are trying hard to be objective.
There's lots I could say about Bush, reasons for or against invasion etc.., I'll keep that to myself though and stick to the issue whether estimates of infant mortality and crude death rates obtained by the Lancet or by other means are reliable in the case of Iraq and the period 1985-2006.
My summary on that is: they are not; we don't know what they are to better than a factor of 3 either way; and barring a detailed and expensive census, which I don't see as a near-term priority in Iraq, I don't see us getting reliable numbers (for infant mortality and crude death rates). "War dead" is a notoriously hard-to-define number, and something I don't want to go into. Crude death rates are comparatively easy to agree on, i.e. something where the definition shouldn't depend on moral judgements, requiring only the counting of deaths and population rather than judgements about culpability and causation.
Now, I am not saying we know nothing. I do believe that the balance of the evidence is very strong that death rates went up between 2002 and 2006 (e.g. look at the officially reported number of deaths by the Iraqi Ministry of Health, up by 50% or so, from 80,000 to nearly 120,000 a year), and it is overwhelming to the point of near certainty that violent death rates have gone up.
Heiko Gerhauser said:
"I'll keep that to myself though and stick to the issue whether estimates of infant mortality and crude death rates obtained by the Lancet or by other means are reliable in the case of Iraq and the period 1985-2006.
My summary on that is, they are not, we don't know what they are to better than a factor 3 either way,"
Really, a factor of 3? The death rates?
Let's see what that means.
The Lancet said the pre-invasion mortality rate was 5.5 per 1,000 people per year, while they estimated the post-invasion rate rose to 13.3 per 1,000 people per year.
A factor of 3 either way would put the pre-invasion mortality rate between 1.8 and 16.5 per 1000 per year and the post war rate at between 4.4 and 39.9 per 1000 per year.
Are those numbers believable?
Encarta provides death rates by country for most of the world's countries.
http://encarta.msn.com/media_701500528/Birth_and_Death_Rates_by_Country…
If Iraq's prewar death rate had been as low as 1.8 per 1000 (5.5/3), that would have put it below that of every country in the list of over 200 countries.
If the pre-war death rate had been as high as 16.5 (5.5x3), that would have put it above all but 18 countries (out of over 200), including many war- and AIDS-ravaged countries in Africa (Congo, South Africa, Niger, etc).
A similar comparison can be made for the post-war estimates divided and multiplied by 3, with similarly absurd results.
I won't venture to say just how accurate Lancet was in their estimates for the death rate in Iraq, but I am willing to bet very good money that they were more accurate than within a factor of 3.
Perhaps you meant to say that Lancet's estimate for excess deaths in the post war period could be off by a factor of 3?
Even if Lancet overestimated excess deaths by a factor of 3, that would still mean there have been over 200,000 excess deaths in the post war period.
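The arithmetic behind this factor-of-3 exercise is simple enough to spell out (rates in deaths per 1,000 people per year; 655,000 is the widely reported Lancet 2006 central excess-death estimate, used here as a round figure):

```python
PRE_WAR_RATE = 5.5    # Lancet pre-invasion rate, per 1,000 per year
POST_WAR_RATE = 13.3  # Lancet post-invasion rate, per 1,000 per year
FACTOR = 3

pre_low, pre_high = PRE_WAR_RATE / FACTOR, PRE_WAR_RATE * FACTOR
post_low, post_high = POST_WAR_RATE / FACTOR, POST_WAR_RATE * FACTOR
print(f"pre-war range:  {pre_low:.1f} to {pre_high:.1f}")    # 1.8 to 16.5
print(f"post-war range: {post_low:.1f} to {post_high:.1f}")  # 4.4 to 39.9

# Even dividing the central excess-death estimate by the same factor
# still leaves a very large toll.
print(f"excess deaths / 3: {655_000 / FACTOR:,.0f}")  # 218,333
```

The point of the comparison is that a factor-of-3 error in either direction pushes Iraq's rates off the edge of the worldwide country tables, while even the most charitable division still leaves a six-figure toll.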
Attacks on the Lancet are off-beam. It is a campaigning journal, that's true - it's one of the reasons it's so popular among doctors and so influential. However, there is no publication bias. This study is clearly very important, and if the Lancet hadn't published it then it would have been published in some other important general medicine journal.
Importantly, it's not as if the Lancet or other journals are withholding other studies: no other studies exist. Nobody else has made a systematic attempt to establish overall mortality rates in Iraq - this is the only one. This is the best estimate we have. The US or Iraqi govts could commission similar research any time they wanted to - it's not expensive. It's disappointing that they don't.
Jordan comes in at a death rate of 2.6, Afghanistan is up at 17.4, or so say the numbers in your link, which are from the United States Census International Programs Center.
The interesting thing here is that I believe that the United States Census International Programs Center is also the source for the CIA factbook, and indeed for 2002
http://www.bondtalk.com/factbook2002/geos/iz.html
It's got Iraq as 6.02 (which rounds to 6.0 as per your link).
But, for 2006, guess what number they come up with?
If you think 15 or 20 or 10 you are wrong, it's
https://www.cia.gov/cia/publications/factbook/geos/iz.html
5.37
(on the face of it this would indicate a decline in the death rate, not that I'd give too much credence to that, but taking the table you quote at face value implies exactly that).
And let me cite another source, Unicef:
http://www.unicef.org/infobycountry/jordan_statistics.html
They have Jordan at 4, rather than the figure in your link of 2.6, and for 2004 they've got Iraq at 10, nearly twice the figure of the CIA factbook.
Tom Rees,
it does matter what the editor's stance is; he decides, in the end, what gets published and what does not. He can override the objections of peer reviewers, and besides, he is able to choose the peer reviewers.
And his editorial judgement has been poor in other cases, like MMR.
It is also not true that there are no other estimates of crude death rates in 2002 and following years. There are many, and I do not see the Lancet estimates as much of an improvement in accuracy, though I do think that they are more likely to have the direction of the death rate movement right than the CIA factbook.
Estimating the "toll of the war" depends on how you define it and that's a minefield compared to just measuring death rates, because it means linking the war with deaths and attributing blame.
Naturally, the US army will be tempted to count only deaths they directly cause, and among those, will be sorely tempted to assume that they've killed only combatants. This leads to body counts and possible inflation of combatant deaths and understatement of civilian deaths.
But how do you provide an independent check? Insurgents do not want to be known as such, and any independent organisation coming in to check on whether a victim is an insurgent or not is liable to be attacked for bias either way, and any anti-insurgent bias or even perception of insufficient pro-insurgent bias may make them liable for retribution by insurgents/terrorists.
And where government statistics and collection of information is concerned, obviously, the Iraqi government is also liable to be biased, and to a degree even incapable of judging even handedly who is a civilian and who is not.
Jordan at 4 rather than 2.6? By my calculations, that's a factor of 1.5.
And 2004 was after the start of the Iraq war, so the UNICEF value of 10 should be compared to the Lancet's 13.3 (not the pre-war 5.5 value). So that's within a factor of 1.3.
Nothing you have provided explains where your factor of 3 comes from.
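For what it's worth, the ratios at issue in this exchange are easy to check:

```python
# UNICEF's Jordan figure vs. the linked table's figure
print(round(4.0 / 2.6, 2))    # 1.54, nowhere near a factor of 3

# Lancet's post-invasion rate vs. UNICEF's 2004 Iraq figure
print(round(13.3 / 10.0, 2))  # 1.33
```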
To me, one of the things that makes the recent Lancet study believable is that the pre-war death rate for Iraq calculated from their study is virtually the same as the U.S. government estimate of the death rate in Iraq for the same period.
Coincidence? Perhaps, but not very likely.
I don't think the US government estimate for 2002 is particularly convincing evidence considering what they've put out for 2006.
I haven't been able to quickly find the Unicef crude death rate estimate for 2002, I expect it'll be pretty close to 10. I know that for infant mortality they have published figures for 2002, 2004 and 1999 that differ very little from each other (between 102 and 107 or so), but differ rather markedly from the recent UNDP survey
http://timlambert.org/2005/05/lancet34/
which comes in at 32 for 2002 and 35 for 2003, or from the CIA factbook, which is around 50.
I should add that I find the differences between the Iraq Living Conditions Survey and previous research disturbingly large. It's not due to a low sample size; they've got a huge number of clusters.
http://www.iq.undp.org/ILCS/PDF/Analytical%20Report%20-%20English.pdf
Even if you, kindly, assume that the Unicef and CIA factbook numbers are ancient and merely updated poorly with new information,
these figures go way back to the 1990's when large surveys were definitely done and the numbers contradict each other.
I hate the fact that Unicef can publish a number for 2002 infant mortality for Iraq that may, or may not, really be for 1994-1999 and for only part of Iraq, and that even for the period in question differs from another, huge survey by a factor of 3-4.
What does that tell me about any figure they produce for another country, like say Jordan? Is that actually for Jordan in 2004, or maybe just for the capital in 1989, and then still wrong by a factor 4?
One possible explanation for the Iraq Living Conditions Survey result would be recall bias, but a look at the graph immediately kills that one: we've got a virtually flat line from 1989 through to 2003, while the Unicef numbers show a post-sanctions jump that's nowhere in the ILCS graph, which actually has 1993 as its lowest value.
Heiko, wouldn't it make a lot of sense to tackle the problem by doing a longitudinal study?
The IBC critique, (see http://www.iraqbodycount.org/press/pr14.php) asks at least 2 extremely tough questions.
Firstly, if the death-to-injury ratio in Iraq is anywhere near normal, there must be maybe 3/4 of a million additional severe injuries that have not been seen. That's three quarters of a million people with blast injuries, missing limbs, et cetera who did not go to the hospital, did not die, and are presumably still in pain. Where are they?
Also, if the Lancet study's verification by death certificate is right, then there should be another half a million death certificates issued by the Iraqi government than they have actually recorded. That's a lot of deaths to lose. Where are they?
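The injury question rests on simple scaling arithmetic, which can be sketched as follows. The injury-to-death ratio used here is an assumed value chosen to reproduce the 3/4 million figure, not one taken from the IBC piece, and the violent-death count is the roughly 601,000 widely reported from Lancet 2006:

```python
VIOLENT_DEATHS = 601_000      # rough Lancet 2006 violent-death estimate
INJURY_TO_DEATH_RATIO = 1.25  # assumed ratio, for illustration only

implied_injuries = VIOLENT_DEATHS * INJURY_TO_DEATH_RATIO
print(f"implied serious injuries: {implied_injuries:,.0f}")  # 751,250
```

Whether that implication is damning depends entirely on the assumed ratio and on whether the injured would in fact show up in hospital records, which is what the replies below dispute.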
Aside from the IBC, the birth rates and infant mortality rates in the Lancet study don't match nationwide estimates, probably because the Lancet study concentrated on urbanized areas. But the death rate could be different there too.
Finally, it must be mentioned that the Lancet has been more political and less reliable in recent years than previously, see this article from the Times.
Warren, those questions aren't tough, let alone extremely tough.
The answer to the first one is: walking around, or staying in their houses. Why the h*ll would you expect that 3/4 million maimed people would automatically trip some alarm?
Death certificates - let's start with the looting and burning of the Health Ministry building in 2003. Then add the occupation government preventing the counting of the dead in late 2003 (http://www.usatoday.com/news/world/iraq/2003-12-10-iraq-civilians_x.htm). Finally, wrap it up with Sadr's militias running the Ministry of Health.
Heiko, the reason I think 5.5 is a good prewar estimate is that it makes sense in light of what I know about other countries in the region. Iraq has a very young population, like most Middle Eastern countries. In fact its prewar death rate was higher than that of any other Arab Middle Eastern country but Yemen, which is by far the least developed country in the region (Haiti narrowly beats it for least developed non-African country), and Lebanon, which has a below-replacement fertility rate. It doesn't make that much sense for Iraq to have had a death rate of, say, 3, because it was ravaged by sanctions, but I'll be surprised if it had a rate higher than something like 6.5, because it had too few old people.
Warren, the death-to-injury ratio makes sense for the war, but not for the occupation. If I'm not mistaken, terrorist attacks have a low rate of serious injuries to deaths; from what I remember about Israeli media reports of terrorism, there were many more deaths than debilitating injuries. I presume the same applies to an occupying force, but I'll have to check it.
Barry:
Yes, 3/4 of a million maimed people would go to the hospital where their injuries would be recorded. They didn't and they weren't.
The deaths in question (among the 600,000 excess deaths claimed) are after 2003 so the burning of the Ministry wouldn't have destroyed them.
Kevin,
I am not sure where the sources of error are, just that I don't trust the results when, say, the Iraq Living Conditions Survey is completely inconsistent with the Unicef numbers for infant mortality. On top of that there is an apparent internal inconsistency, with the South, the supposedly most neglected region under Saddam, having an anomalously low infant mortality (see Table 36: the South has a neonatal mortality of 13 and the Centre of 30).
Because I am not sure what the apparent source of huge systematic error in these types of surveys is, I am not clear how following a group of people through time would address these failings.
Alon,
well, the population is young, so in principle death rates could be as low as Jordan's. The question is how bad were sanctions/Saddam's regime/the damage caused by previous wars?
Maybe not bad at all, and it's reasonable to compare with neighbouring countries. The Iraq Living Conditions Survey seems awfully close to supporting that, as its infant mortality number is in the same range as Jordan's.
On the other hand, maybe very bad indeed and with infant mortality 3-4 times that of Jordan, as suggested by Unicef, why wouldn't the death rate have been above 10 in 2002?
What I see is contradictory data and no satisfactory explanation for the contradictions (by far the worst being the stuff advanced by Tim Lambert about the Unicef numbers for 2002 and Iraq really being for 1994-1999 and for part of Iraq only; I still don't know to what extent that's true, but it does not exactly inspire confidence in Unicef if it were true).
Warren,
For some reasons why people don't rush to Iraqi hospitals, look here and here. It's pretty well understood that the Iraqi Health Ministry's data-gathering function has broken down, along with much else.
There's been quite a lot of discussion of this, here and elsewhere. It's well worth your while to read Tim's posts and links.
Heiko,
It depends what you want to know. If your concern is with the trend in mortality, rather than the level, a longitudinal study is what you need.
The 5.5 number may also be anomalously low for Saddam's regime. In the few years before the war he was on his best behavior and not massacring at his highest rate. And then there's the question of whether you should count deaths from Saddam's military adventures in Iran and Kuwait into his average.
Alon
The Israeli rates may be different because of bus bombings. There is no reason to suppose an explosion of an IED or car bomb in Iraq produces a death to injury ratio different from a car bomb or IED in any other country.
Kevin,
well, what I want is an explanation for the apparent inconsistencies that now make me distrustful of all mortality data published for developing countries by virtually any source.
Why can't the Iraq Living Conditions Survey get it right? It's not that their sample size is too low, and I don't see how it's recall bias, or even author bias.
What I suspect is that it's issues to do with training people to gather data, doing interviews right and the like, and if so, it makes all such surveys in developing countries highly suspect.
Heiko, don't you think that's a strong argument for getting doctors to gather data of this kind?
http://notropis.blogspot.com/
Kevin,
that reminds me of the above interesting comments. Yes, doctors would seem a good choice, but I do have some doubts about the thoroughness of their interviewing and household selection (see Iraqi Death Survey Part I of the above link).
And of course, if you do use doctors for surveys, you aren't using them for real health care work, and if you use few doctors due to resources, then they'll be pressed hard to cut corners in their interviews and/or sample size has to be cut down.
Kevin Donoghue asked:
So you're saying you haven't worked much with physician-collected data?
--Robert, fan of nurse-collected data.
Sorry to interrupt your propaganda campaign again Tim, but I just wanted to respond to some of these rantings by "lenin".
You seem to find the following tantrum worthy of mention:
"And this is it. The whole thing is an enormous and misleading exercise in circularity, a massive raise of the eyebrow, a titanic exercise in obfuscation."
Of course, there is no "obfuscation" (hiding of facts) in the IBC piece. There is exposure of them. The problem is that whenever real world facts are exposed (rather than fabricated like the MoH figures in the Lancet report, distorted like the NEJM study, or again fabricated like the IBC rate referred to in the Lancet paper or in Roberts' previous MIT paper), it doesn't look very good for this study.
"Obfuscation" is writing the ILCS out of history while deeming the "Iraqyun" worthy of mention, as is the case with the Lancet report. "Obfuscation" is also doing as lenin does, and not mentioning the part of the IBC critique referring to ILCS (except to conveniently cherry pick something from ILCS where it might support something in Lancet).
"They cannot touch the study for methodology, they cannot find anything in it that is badly done: not a single cluster wrongly placed, not a single false extrapolation, not a particle of evidence of any fraudulence or fecklessness. They hazily refer to possible bias, but on the basis of nothing more solid than that this would explain away the uncomfortable implications that they draw."
IBC is not involved in a propaganda campaign for or against the Lancet study like Lambert or lenin. So IBC does not need to argue that the study is conclusively wrong or right, rather it argues that there is plentiful cause for scepticism that needs further examination and explanation (and hand-waved assertions that whatever improbable scenario is necessary for the study to be true are hardly sufficient). Lenin's straw man notwithstanding, such a point does not require conclusive proof that the study is absolutely right or wrong. Only his propaganda campaign requires assuming one of these.
If lenin believes these Lancet findings are correct, and that the calls for scepticism such as those from IBC are basically heresy, he must believe that the ILCS study is wrong, and wrong big time. There is no alternative conclusion. Yet the ILCS carries more _statistical_ weight than does the Lancet study. Can lenin find anything in the ILCS that is "badly done"? Can he find "a single cluster wrongly placed" or "a single false extrapolation", or "a particle of evidence of any fraudulence or fecklessness"? Can he do anything other than "hazily refer to possible bias"?
If he can't, then he's a hypocrite and his writings should be summarily dismissed as dishonest and self-serving tantrums. If he can, I have not seen him do so.
So where is Lenin's case that he could argue in a "court of law" for rejecting outright (not just being sceptical about) the ILCS? It seems to be as invisible as those 500,000 death certificates.
"As Daniel Davies points out, the chances of the Lancet authors obtaining the sample they did, if the facts were much closer to what the IBC records, are so low that it would have to be fraud."
As I've noted of most of Davies' writing in the past, he rarely says things that are true. This is no exception. It could be any number of things that could throw off the results other than fraud.
"The IBC cannot and do not make this accusation, because they are not prepared to test their flimsy insinuations and doubts in a court of law."
Of course not. As I said above, IBC are not involved in a propaganda campaign, and so don't have to take the stand of absolute wrongness or rightness that lenin or lambert must take.
lenin should do a reality check on his own hypocrisy, and reconsider whether he wants to approach these issues like a propagandist, or like a rational observer who actually cares about the truth.
Fiery stuff, Josh. For somebody who doesn't like propaganda you don't half dish it out. Stripped of the fancy rhetoric, IBC's "Reality Check" basically just says that if the Lancet is right then:
(a) Iraq's administrative infrastructure has fallen apart so that information supplied by the government is worthless; and
(b) one in 15 adult males has been killed, and that's just the average.
To which Burnham, Roberts and Co. might reasonably reply: exactly so; that's just what happens in a civil war; lots of men get killed and the central authority either collapses or it becomes one of the belligerents and cooks the numbers - either way you trust the official figures at your peril.
In essence IBC's case is just an argument from incredulity. You can't believe things are really that bad. Sorry but it's the results which are supposed to determine the conclusions; and the numbers in this case are pretty conclusive. If you don't think they are fabricated then you really don't have much choice but to accept that things in Iraq are very bad indeed.
I do find the "according to Lancet2 7% of adult males have died violent death since the War" point unsettling. That's roughly 1 in 15 men, almost decimation.
First off, I'm not sure the standard arguments about disaggregating the sample work here, since these deaths are such a large component of the sample.
I'm not really an Iraq-watcher, but either this is true - and the demographic impact of the war is absolutely stunning - or it's false. If it's true, you'd (well, I'd) expect that number of non-natural deaths among such a large population to be easily verified from other sources. But I don't see any, which makes me suspicious; then again, I don't know much about Iraq...
I realise this is basically an argument from incredulity. But I think there's a good case that these are extraordinary figures. Just Googling around for comparisons, I make French military losses in WWI about 1 in 30 of its population - which is comparable to Lancet2's figure. Now, no-one would have been out doing cluster samples to estimate that figure. Deaths on that scale are obvious and have a huge social impact. Surely either the estimate's wrong, or we're completely missing the point by arguing about its correctness or otherwise. When talking about deaths on that scale, doing a study like this is a bit like taking soot samples to see if your house is on fire.
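The scale comparison in this comment can be checked with rough figures (the French WWI numbers below are commonly cited approximations, used only for order of magnitude):

```python
# Lancet2 implication quoted above: roughly 7% of adult males killed.
print(round(1 / 0.07))  # about 1 in 14 adult males

# French WWI military dead: roughly 1.4 million out of a population
# of roughly 40 million (commonly cited round figures).
print(round(40_000_000 / 1_400_000))  # about 1 in 29 of the population
```

Note the two ratios use different denominators (adult males versus total population), so the comparison understates the gap somewhat; but the order of magnitude is similar, which is the commenter's point.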
Nik, are you saying that there haven't been such disruptions in Iraq? This is a place where '50 tortured bodies found by police' is no longer the story of the year, but the story of the week. Where the question is not 'will there be car-bombs this week?', but 'how many car-bombs this week?'. And that's just Baghdad.
The entire last three years of news from Iraq has been about 99% societal disruption, violence, chaos, fear and death. Nobody is ignorant of that.
josh doesn't seem to have read my whole post. I quoted Alon Levy's explanation of why josh's argument that the ILCS has "greater statistical weight" is wrong. The ILCS and Lancet2 disagree. Lancet2 confirmed deaths with death certificates, while in the ILCS deaths were a couple of questions deep into a long survey. An ILCS undercount seems the more likely explanation.
For those of you unfamiliar with previous discussion, josh is the third author of the IBC attack piece.
Tim,
Alon Levy's explanation is rubbish because he didn't even read the right IBC piece. He quoted IBC's earlier editorial, which referred to Lancet 2004 as "the Lancet study", mistakenly thinking it was talking about Lancet 2006.
Anatoly, his comments are still relevant.
Of course, because it doesn't matter if you can't understand what you're reading, it doesn't matter if you're talking nonsense, as long as you come to the right conclusion.
No, his comments are not relevant. He attacks IBC for their "ignorance of statistical testing", because they didn't understand that the confidence interval of Lancet 2006 is incompatible with ILCS; well, the IBC piece he's quoting deals with Lancet 2004, whose very large confidence interval is compatible with ILCS.
Then he wraps up his discussion of ILCS versus Lancet 2006 with this gem: "When we have two contradictory studies, we can't ever assume that the one with the larger sample size is wrong. We can only assume that if we have some discrepancy that lies within the margin of error. If the discrepancy is this big, we need to investigate the methodologies and see who's doing a mistake; it's possible neither side is, but the probability of that is vanishingly small. In this case, we have a study of Iraqi death rates that uses a standard epidemiological methodology, versus a compilation of media reports that not only neglects deaths not reported to the authorities but also neglects deaths not mentioned in the media." But wait! The last sentence compares Lancet 2006 to IBC, not Lancet 2006 to ILCS. It is IBC that is "a compilation of media reports" etc.; ILCS was a cluster-sampling study like Lancet. Levy forgot what he was talking about... just in the previous paragraph!
Levy's is an embarrassingly inadequate critique.
I realise this is basically an argument from incredulity. But I think there's a good case that these are extraordinary figures.
It depends on your reading of the Iraqi situation. If there is a major civil war going on then these are just the sort of figures you would expect. It's beside the point that the male population of Iraq isn't forming up into companies to fight large battles like Gettysburg. Not all civil wars are fought that way. Many of them are diffuse affairs - skirmishes between militias led by local warlords with massacres of those who don't have a militia to protect them. Even before Burnham's study was published we had a lot of indications that the situation was chaotic. Now we have a systematic survey which tells us that the chaos is more widespread than most of us thought.
"There is no reason to suppose an explosion of an IED or car bomb in Iraq produces a death to injury ratio different from a car bomb or IED in any other country."
This is just a trifle, but I suspect that yes, Israeli emergency medical care is much better than Iraqi, and not quite as strained. What's an injury with good emergency care may easily become a death with poor or no emergency care. I don't know how quickly the ambulances appear in rural Iraq, but I suspect it's slightly less quickly than in Tel Aviv or Oslo.
Yet another example of people having trouble imagining what life is like in a poor, corrupt, war-torn country.
Hi. If I may be permitted a wider meditation -
Looking at the disputes over pre-war mortality, it seems to me that some of us are still labouring under a misapprehension, gleaned during the years of sanctions on Iraq, about the effect they were having in the immediate pre-war period. It's kind of understandable, really. For instance, have a look at this article, originally in the Guardian, in November 2001, by Hans von Sponeck and Denis Halliday, who were involved in the UN's relief programmes there:
http://zmag.org/halsponiraq.htm
The death of some 5-6,000 children a month is mostly due to contaminated water, lack of medicines and malnutrition. The US and UK governments' delayed clearance of equipment and materials is responsible for this tragedy, not Baghdad.
Or take this press release, released by the British Campaign against Sanctions on Iraq in March 2003. Amid sensible (and rather prescient) warnings about the risks of military action, it states:
Child (under-five) mortality has surged under sanctions...
Oil for Food bureaucracy, deductions for compensation payments to victims of the 1991 Gulf war, delays in contracting and other impediments have kept the value of goods actually arriving under Oil for Food to a mere $25 billion over six years. This corresponds to 50-60 cents per Iraqi per day for the past six years, and nothing at all for six years before that... Fifty cents a day is a starvation ration.
Now we are confronted with evidence, from the Lancet survey and even from the ILCS (which is cited by some Lancet critics), that far from 'surging', infant mortality was if anything lower than the pre-sanctions level by 2002. As has been pointed out in threads on this site, UN agencies like UNICEF were issuing high figures for Iraqi mortality up to and including 2002 - and if anyone might have been expected to notice that the UN's Oil-for-Food programme was having a transformative effect on such figures, you'd think it would have been they.
I would even go so far as to suggest that some people - myself included - didn't voice as much opposition to the war as we might have, because we were sort of working on the basis that at least those sanctions would be lifted. (We also thought that Saddam and his government bore some responsibility for the effect the sanctions were having, although of course there are debates about that.)
But there you go. To quote David Kay (in another context), 'We were all wrong.'
Lopakhin,
that's a very nice collection of graphs and figures you've put together on your blog.
What bugs me is the inconsistencies between surveys covering the same period in time. The ILCS graph is a flat line, more or less (especially when you plot it with the y axis going above 100).
Compare that to the graph you cite, which gives a steady decline until the mid eighties and then a jump after 1990.
And to explain the Lancet study figures for 2002, Tim Lambert and others have argued that improved nutrition and the like caused a rapid fall towards 30 between the 1990s and 2002.
But that isn't in the ILCS either, no jump after 1990 and no fall just before 2002.
Worse, within the ILCS figures, there's a big inconsistency for values of infant mortality obtained in the South and the Centre of Iraq. The study's results appear to show a neonatal mortality nearly 2/3 lower in the South than the Centre, yet, it is commonly assumed that Saddam neglected the South compared to Baghdad.
Something isn't right there, and the something I think is the reliability of these kinds of surveys for a country like Iraq, either before or after 2003.
Media Lens has just published a new alert on the Lancet study.
Lancet Propaganda Minister Tim Lambert as usual seizes on any thin theory or assertion that can support his (ever shifting) agenda.
After having loudly trumpeted ILCS's death estimate for over a year as "Vindicating" the Lancet 2004 study, now, suddenly, ILCS is supposedly "likely" wrong. Why is this the case all of a sudden?
1) Because Lancet confirmed 92% of 87% of deaths it recorded with death certificates. Perhaps Tim can put away his smoke and mirrors and explain what the relevance of this point is to ILCS.
2) ILCS asked about deaths within a long survey. So?
What is this other than a "hazy" reference to "possible bias" (entirely unsubstantiated) in the ILCS, producing a convenient theory that it is "likely" wrong? Care to argue your case against ILCS in a "court of law" Tim?
Josh dear, shall we call you the Voice of IBC?
The ICLS study is the only legitimate reason (that I know of anyway) for questioning the newest Lancet study and for me it's a coin toss as to which is closer to the truth for the period where they overlap. The ICLS didn't count criminal murders, so we may only be talking about factors of two or maybe three anyway. There's even a chance the bottom of the Lancet CI for the violent mortality rate in March 2003-April 2004 agrees with the ICLS numbers if one assumes half the violent deaths in the Lancet survey for the first 13 months were criminal murders that went uncounted in the ICLS survey. The bottom number in the Lancet CI for that period was 1.8 violent deaths per 1000 per year, which comes to around 50,000 for 13 months.
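The arithmetic behind that last figure can be checked directly. A quick sketch (the ~26 million population figure is my own assumption for illustration, not a number from the thread):

```python
# Back-of-the-envelope check of the comment's figure.
# Assumption (mine, not from the thread): Iraq's population ~26 million.
rate_per_1000_per_year = 1.8   # bottom of the Lancet CI for violent deaths
population = 26_000_000        # assumed mid-2000s Iraqi population
months = 13                    # March 2003 - April 2004

deaths = rate_per_1000_per_year / 1000 * population * months / 12
print(round(deaths))  # ≈ 50,700
```

With that assumed population, the bottom of the interval does come out to roughly 50,000 over 13 months, consistent with the comment.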
Divide the Lancet range by 2 or 3. Would IBC be skeptical if the violent death toll at this point had been found to be 200,000?
Pablo,
Let me guess, Media Lens is whining once again about how "little" coverage the Lancet study got, even though it was on the front page of two of the largest UK newspapers, the New York Times, and several others. Oh how disappointed the Lancet and the JHU must be, they only got partial global front page coverage! Cry me a friggin river why don't you.
josh,
Of course, the claim that the ILCS "vindicated" the Lancet study was complete BS to begin with, so obviously Lambert will change things around when it goes against his beloved Lancet studies.
In fact, the raw internals of the ILCS will prove that the Lancet study was completely wrong. Now if only someone will give them to me, then I can submit it to the Lancet and debunk the 2004 study for good.
The entire 2004 study methodology rests upon the assumption, or as I should quote from the study, the "belief" that they paired up similar governorates. Data gathered by IBC and Coalition casualty figures indicate that those governorates that were paired up were in fact not similar, thus destroying a fundamental building block of the entire methodology used.
Now this isn't the case with the 2006 study as far as I can tell, but I would enjoy getting a list from the JHU team of the sites they visited in Iraq to see how representative of the country those sites are.
Donald Johnson,
The ICLS study is the only legitimate reason (that I know of anyway) for questioning the newest Lancet study and for me it's a coin toss as to which is closer to the truth for the period where they overlap.
Amazing that you managed to say ICLS that many times when it is ILCS. Anyways, you think it's a "coin toss" between the ILCS and the Lancet study for the period they share??
So you think a study with 47 clusters and 1800 households is equally reliable to one using far more clusters and around 20,000 households?
I must say, Lambert really has you folks brainwashed. You can't even see the difference between 1,800 and 20,000.
I've been making that mistake a lot, Seixon. What a silly thing to nitpick.
You don't seem to understand that the fact that the UNDP study (safer to use that set of letters) used many more clusters means its confidence interval will probably be much narrower, but does not mean it is closer to the truth. One of the two studies may have some sort of systematic error, and adding many more clusters wouldn't change that.
A study with far more clusters and households would have narrower confidence limits. However, it would not be more "reliable" in terms of the degree of certainty that the true value is within the stated confidence limits.
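The point can be illustrated with a toy simulation. All the numbers here are invented for illustration (a hypothetical true death rate and a hypothetical 50% systematic undercount); the household counts echo the 1,800 and 20,000 figures mentioned above:

```python
import random
import statistics

random.seed(1)

TRUE_RATE = 0.010   # hypothetical true per-household death rate
MISS = 0.5          # hypothetical systematic undercount: half of deaths never reported

def estimate(n_households):
    """One simulated survey: each real death is independently missed with prob MISS."""
    found = sum(1 for _ in range(n_households)
                if random.random() < TRUE_RATE and random.random() >= MISS)
    return found / n_households

results = {}
for n in (1_800, 20_000):
    ests = [estimate(n) for _ in range(200)]
    results[n] = (statistics.mean(ests), statistics.stdev(ests))
    print(f"n={n:>6}: mean={results[n][0]:.4f}, sd={results[n][1]:.5f}")

# The spread (sd) shrinks as n grows, but both means sit near
# TRUE_RATE * (1 - MISS) = 0.005, not the true 0.010: a larger
# sample tightens the interval around the wrong answer.
```

In other words, a bigger sample buys precision, not accuracy: if a systematic bias is present, more clusters only make the survey more confidently wrong.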