The editors at Slate really don’t like epidemiology. Not content with Christopher Hitchens’ clueless attack on the Lancet study, they’ve published another attack on the study. And this one is by Fred Kaplan, the man who made such a dreadful hash of it when he tried to criticize the first Lancet study. Kaplan writes:
The [first] study’s sample was too small, the data-gathering too slipshod, the range of uncertainty so wide as to render the estimate useless.
So he’s learned nothing about statistics since his botched criticism of the first study. Kaplan concedes that the new study has a smaller confidence interval, but you just know he’s going to find some reason to dismiss it, and sure enough:
But the study has two major flaws — the upshot of which is that it’s impossible to infer anything meaningful from it, except that a lot of Iraqis have died and the number is getting higher.
So what’s the first flaw that Kaplan claims to have found?
Based on the household surveys, the report estimates that, just before the war, Iraq’s mortality rate was 5.5 per 1,000. (That is, for every 1,000 people, 5.5 die each year.) The results also show that, in the three and a half years since the war began, this rate has shot up to 13.3 per 1,000. So, the “excess deaths” amount to 7.8 (13.3 minus 5.5) per 1,000. They extrapolate from this figure to reach their estimate of 655,000 deaths.
However, according to data from the United Nations, based on surveys taken at the time, Iraq’s preinvasion mortality rate was 10 per 1,000. The difference between 13.3 and 10.0 is only 3.3, less than half of 7.8. …
(If the Hopkins researchers want to claim that their estimate is more reliable than the United Nations’, they will have to prove the point. It is also noteworthy that, if Iraq’s preinvasion mortality rate really was 5.5 per 1,000, it was lower than that of almost every country in the Middle East, and many countries in Western Europe.)
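The extrapolation Kaplan describes in that passage is straightforward arithmetic, and it's worth seeing laid out. A minimal sketch follows; note that the population figure and the 40-month survey window are my own assumptions for illustration, and that the study itself computed period-specific rates, so this back-of-the-envelope version will land near, but not exactly on, the published figure:

```python
# Sketch of the excess-death extrapolation described in the quoted passage.
# Population and time window are assumed values for illustration only.

pre_war_rate = 5.5    # deaths per 1,000 per year, pre-invasion (study estimate)
post_war_rate = 13.3  # deaths per 1,000 per year, post-invasion (study estimate)
excess_rate = post_war_rate - pre_war_rate  # 7.8 excess deaths per 1,000 per year

population = 26_000_000  # assumed mid-period population of Iraq
years = 40 / 12          # roughly the March 2003 to mid-2006 window

excess_deaths = excess_rate / 1000 * population * years
print(round(excess_deaths))  # lands in the same ballpark as the 655,000 estimate
```

The point is that once you accept the two rates, the headline number follows mechanically; the argument is entirely about which baseline rate to believe.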
Kaplan gives a broken link for the UN data, but you can see it for yourself if you go here and select Iraq from the list. The UN’s table does indeed give a mortality rate of 10 per 1,000 for Iraq for 1995-2000. We see straight away the mistake that Burnham et al. made. They should have presented their estimate of Iraqi deaths in a big table and not provided any details of the source other than “surveys”. Presumably, Kaplan would then have accepted the number uncritically.
What are the “surveys taken at the time” that Kaplan reckons contradict the Lancet study? The 2004 Lancet study provides the answer:
No surveys or census based estimates of crude mortality have been undertaken in Iraq in more than a decade, and the last estimate of under-five mortality was from a UNICEF sponsored demographic survey
That’s right, there weren’t any. The UN number is just a guess. The Lancet number is more reliable than the UN number because it is based on a survey rather than being just a guess. Kaplan even admitted this in his critique of the first Lancet study.
According to quite comprehensive data collected by the United Nations, Iraq’s mortality rate from 1980-85 was 8.1 per 1,000. From 1985-90, the years leading up to the 1991 Gulf War, the rate declined to 6.8 per 1,000. After ’91, the numbers are murkier, but clearly they went up.
Did he forget writing this or something?
Kaplan also claimed that a death rate of 5.5 “was lower than that of almost every country in the Middle East”. I went to the UN population page and looked up the death rates for every country neighbouring Iraq for 1995-2000. Here are the numbers: Iran 5.5, Jordan 4.6, Kuwait 1.8, Saudi Arabia 4.1, Syria 3.9, Turkey 6.6. All but one are less than or equal to 5.5. Remember, this is the same source that Kaplan used for the death rate for Iraq. Kaplan’s claim seems to have been made with a reckless disregard for the truth.
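You can check the comparison yourself in a couple of lines; the figures below are the 1995-2000 rates just listed, from the same UN tables Kaplan used for Iraq’s 10 per 1,000:

```python
# Death rates (per 1,000, 1995-2000) for Iraq's neighbours,
# from the UN population tables cited in the text.
rates = {
    "Iran": 5.5, "Jordan": 4.6, "Kuwait": 1.8,
    "Saudi Arabia": 4.1, "Syria": 3.9, "Turkey": 6.6,
}

# Which neighbours have a rate at or below the study's 5.5 baseline?
at_or_below = [country for country, rate in rates.items() if rate <= 5.5]
print(at_or_below)  # every neighbour except Turkey
```

Five of the six neighbours sit at or below 5.5, which is hard to square with the claim that such a rate would be “lower than that of almost every country in the Middle East”.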
The second flaw Kaplan claims is the so-called “main-street bias” which I dealt with here.
He does have a comment from Burnham protesting the Science reporter’s misrepresentation:
I did not ever tell the writer from Science that the raw data have been destroyed. Absolutely NOT! It is sitting right here! What I did say is that our Iraqi colleagues are very concerned about security, not just theirs but the neighborhoods they surveyed. They have asked us for the moment not to release the data to others as there might be some identifiers there. I am sure that we can remove any unique identifiers, but I am bound to honor their requests, as they have staked so much in collecting the data. We will be discussing this over time with our Iraqi colleagues, and I would imagine that in due course we can make it available to those interested. …
Under human subjects regulations we could not keep unique identifiers, so we limited the information collected — such as street and house numbers. The team did not write down information on the forms on the specific decision making process for each location.
From this, Kaplan promptly contrives another rationale for rejecting the study:
It sounds as if he’s saying he didn’t destroy the data because they never existed in the first place. If that’s the case, how does Burnham know whether his instructions on methodology were followed at all? How can anyone verify the findings? And this is a peer-reviewed article. Who were these peers? And what did they review?
Why does Slate hate epidemiology? Why?