Last year an AP-Ipsos poll asked Americans to estimate how many Iraqi civilians had died in the war. They grossly underestimated the toll: the median guess was just 9,890. The Atlantic has now published Megan McArdle’s latest anti-Lancet screed, in which she argues that it would have been better if the Lancet studies had never been published at all, because they make people more willing to accept higher estimates of Iraqi deaths. Yes, for war-advocate McArdle, the big problem is that people’s estimates of Iraqi deaths are too high.
McArdle’s piece reminds me of Neil Munro’s hatchet job in the National Journal: both pretend to be objective observers, dispassionately recording the argument between the pro-Lancet and anti-Lancet camps, when in reality they are anti-Lancet partisans who wrote their pieces to knock the Lancet studies down.
McArdle, you might recall, came up with the macaroni and cheese argument against the Lancet study, and her latest piece isn’t any better. She writes:
How many Iraqis have died because of the American invasion? It would be nice to know the local price of Saddam Hussein’s ouster, five years on. Many researchers have produced estimates. Unfortunately, these range from 81,020 to 1 million.
This is wrong. The 81,020 figure is not an estimate of the number of Iraqi deaths. It is the Iraq Body Count number: a tally of deaths reported in the media, which is guaranteed to be significantly less than the total number of deaths.
Research by the World Health Organization, published in January in The New England Journal of Medicine, has cast further doubt. It covered basically the same time period and used similar statistical techniques, but with a much larger sample and more-rigorous interview methods.
While the raw sample size of the NEJM study was larger, its effective sample size was not: the researchers were unable to visit 11% of the clusters and had to extrapolate the estimate to those places. And the Lancet study asked for death certificates while the NEJM study did not, so it is wrong for McArdle to describe the NEJM interview methods as more rigorous.
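To see why raw sample size is not the right comparison under cluster sampling, here is a minimal sketch of the standard Kish design-effect calculation in Python. The numbers are purely illustrative, not the actual parameters of either survey; the point is only that the effective sample size depends on the design, not just the raw household count:

```python
# Minimal sketch of effective sample size under cluster sampling,
# using the standard Kish approximation DEFF = 1 + (m - 1) * ICC,
# where m is the average cluster size and ICC is the intra-cluster
# correlation. All numbers are illustrative, not the real surveys.

def effective_sample_size(n: int, mean_cluster_size: float, icc: float) -> float:
    """Sample size after discounting for within-cluster similarity."""
    deff = 1 + (mean_cluster_size - 1) * icc
    return n / deff

# A hypothetical smaller survey with modest clusters ...
small = effective_sample_size(n=1_800, mean_cluster_size=40, icc=0.05)

# ... and a hypothetical survey with five times the raw sample but
# much larger clusters, so more of its interviews are redundant.
large = effective_sample_size(n=9_000, mean_cluster_size=300, icc=0.05)

print(f"small survey effective n: {small:.0f}")  # ~610
print(f"large survey effective n: {large:.0f}")  # ~564
```

The bigger raw sample ends up with the smaller effective sample. And extrapolating to the 11% of clusters that were never visited shrinks the effective sample further, since those areas contribute no data at all.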
It found that the Lancet study’s violent-death count was roughly four times too high. This has a familiar ring to it. A smaller study, released by the Johns Hopkins team in 2004, had been quickly contradicted by a larger UN survey suggesting that it had overstated excess mortality by, yes, about a factor of four.
This is wrong. The UN survey did not measure excess mortality, just war-related deaths, and it covered a different time period. If you compare like with like, the UN survey gets a similar result to Lancet 1. Furthermore, over the time period covered by Lancet 1, the NEJM study finds a similar number of violent deaths and a larger number of excess deaths.
All casualty studies have problems. But the Johns Hopkins study’s methodology was particularly troublesome.
In other words, McArdle is not going to mention any problems with any other studies.
The number of neighborhoods the team sampled was just above the minimum needed for statistical significance, and the field interviewers rushed through their work.
Neither of these statements is true. The minimum for statistical significance is generally taken to be 30 clusters; Lancet 2 had 47, more than 50% above that rule of thumb, which is hardly "just above the minimum". And McArdle has made the claim about insufficient time for the interviews before. It was wrong then, and it is wrong now.
The interviewers were also given some discretion over which households they surveyed, a practice generally regarded as unwise.
This isn’t true either.
Cluster sampling was developed for studying vaccination; it has never been validated for mortality.
Cluster sampling was not developed for studying vaccination: it is a standard tool of survey statistics that long predates vaccination-coverage surveys. And there is nothing special about mortality that would stop it from working. Does the Atlantic even care whether the stuff it publishes is accurate or not?
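It is also easy to check that nothing about mortality breaks cluster sampling. Below is a toy simulation (not a model of Iraq; every parameter is made up) in which deaths are correlated within clusters, and a sample of 47 clusters, the same cluster count as Lancet 2, still recovers the true death rate on average:

```python
# Toy simulation of cluster sampling applied to mortality. All numbers
# are made up for illustration; the point is only that the estimator
# is centered on the true death rate even when deaths cluster by area.
import random

random.seed(0)

N_CLUSTERS = 1000              # clusters in the population
SAMPLED_CLUSTERS = 47          # same cluster count as Lancet 2
HOUSEHOLDS_PER_CLUSTER = 40    # households interviewed per cluster
HOUSEHOLD_SIZE = 7

# Give each cluster its own underlying death risk, so deaths are
# correlated within clusters -- the hard case for cluster sampling.
cluster_risk = [random.betavariate(2, 200) for _ in range(N_CLUSTERS)]
true_rate = sum(cluster_risk) / N_CLUSTERS

estimates = []
for _ in range(200):  # rerun the whole survey 200 times
    sampled = random.sample(range(N_CLUSTERS), SAMPLED_CLUSTERS)
    deaths = people = 0
    for c in sampled:
        for _ in range(HOUSEHOLDS_PER_CLUSTER):
            deaths += sum(random.random() < cluster_risk[c]
                          for _ in range(HOUSEHOLD_SIZE))
            people += HOUSEHOLD_SIZE
    estimates.append(deaths / people)

print(f"true death risk:          {true_rate:.4f}")
print(f"mean of survey estimates: {sum(estimates) / len(estimates):.4f}")
```

Individual runs bounce around, which is exactly what the design effect quantifies, but the estimates are centered on the truth: clustering inflates the variance, it does not make the method invalid.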
Yet though its compromises made it particularly unreliable, the Lancet study remains the most widely known. Its conclusions were the earliest and most shocking of the scientific estimates and thus generated enormous media attention. The more-careful counts that followed prompted fewer, and less prominent, articles.
In fact, the IBC number is the one most widely known. Even though it's not an estimate of total deaths, it is usually presented as such, effectively downplaying the number of deaths. This is not a problem for McArdle; in fact she does it in her own article. And do you like the way she works her false claim that the other counts were "more careful" in at every opportunity?
All of this calls into question the idea that even a flawed study is better than no study. Like most people, I believe that more information is usually better; when facts or theories conflict, air the differences and let the facts fight it out. But not every number is a fact. And when the data fall below some threshold of quality, it’s better to have no numbers at all.
When articles fall below some threshold of quality, it’s better that they not be published. Like McArdle’s article here.
Witness the Johns Hopkins team’s critics, who triumphantly waved the WHO results at their opponents. But even if “only” 150,000 people have been killed by violence in Iraq, that’s a damn high price. Conversely, few of the study’s supporters expressed much pleasure at the news that an extra 450,000 people might be walking around in Iraq. After a year and a half of bitter argument, all that anyone seemed interested in was proving they had been right.
McArdle does not disclose that she was one of the more strident critics of the Lancet study and that her article was written to prove that she was right.
Daniel Davies has more criticism here and here, while McArdle threatens to write more anti-Lancet stuff on her blog.
Update: McArdle repeats her threat using more words.