The latest issue of the Walkley Magazine has an article I wrote about the media coverage of the Lancet study. They haven’t made it available online, so I’ve put a copy below the fold.
Imagine an alternate Earth. Let’s call it Earth 2. On Earth 2, just like our planet, there was a Boxing Day tsunami that killed about a quarter of a million people. On our planet the tsunami was front page news for days and because of the horrendous death toll people opened their hearts and their wallets. On Earth 2 the reaction to the tsunami’s death toll was different. The story was in the papers for one day and was buried in the inside pages. John Howard said the estimate was not believable. George Bush said that the methodology was discredited. News reporters made it clear that they didn’t believe the number. Opinion pieces were published dismissing the estimate because the Red Cross was “anti-tsunami”. Does Earth 2 sound far-fetched? Well, that’s basically what happened when researchers from Johns Hopkins published an estimate that there had been roughly 650,000 extra Iraqi deaths as a result of the war.
What is even stranger is that the estimate wasn’t produced by some incomprehensible scientific technique, but via a scientific method that journalists are familiar with and know gives reliable results when used properly — a survey.
There are three possible ways that surveys can give wrong answers. First, the sample size could be too small. The Johns Hopkins study surveyed 1800 households, which is more than the 1000 people commonly surveyed in opinion polls.
Second, the sample might not be representative. The Johns Hopkins study used standard techniques to randomly choose households to be surveyed. Because of the very high response rate achieved, the sample is likely more representative than in polls taken in Australia.
And third, respondents might not tell the truth. The Johns Hopkins study verified deaths by checking death certificates. In polls of voting intentions there is no way to know if respondents are telling the truth.
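The sample-size point above can be made concrete with the standard margin-of-error formula for a proportion. The sketch below (a simplification: it assumes simple random sampling, whereas the Hopkins study used cluster sampling, which widens the uncertainty, and all figures other than the 1800 and 1000 sample sizes are illustrative) shows why 1800 households is, if anything, a larger sample than pollsters routinely rely on:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for an estimated proportion p
    from a simple random sample of size n (worst case p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical 1000-person opinion poll vs the 1800-household survey.
print(round(margin_of_error(1000), 3))  # about +/- 3 percentage points
print(round(margin_of_error(1800), 3))  # about +/- 2.3 percentage points
```

The larger sample gives the smaller margin of error, which is the opposite of what the "minuscule sample" criticism would suggest.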
Further, a survey should be compared with the results of other surveys to see if the results were consistent. There has only been one other survey designed to measure Iraqi deaths, and that was conducted by the same team of researchers and found that there had been about 100,000 excess deaths in the first eighteen months after the invasion. This agreed well with the new survey, which gave a similar number over the same time frame. There was another survey on Iraqi living conditions that included a question on war-related deaths that produced an estimate for those deaths in the first year that was about half that of the new survey, but the researchers felt that their survey had missed a significant number of these deaths.
So it is possible that because of some unknown factor the Hopkins study gave a wrong answer, but by any objective measure its results are more reliable than those of the opinion polls journalists routinely accept.
But journalists did not accept this number and made their disbelief quite clear. For example, the AP’s Malcolm Ritter began his story like this:
“A controversial new study contends nearly 655,000 Iraqis have died because of the war, suggesting a far higher death toll than other estimates.
“The timing of the survey’s release, just a few weeks before the U.S. congressional elections, led one expert to call it ‘politics.’”
In his first sentence he calls it “controversial” and gives a reason to disbelieve the result. His second sentence gives another reason to doubt it and brings in the opinion of the only expert he quotes in the story, who is also quoted as saying that the estimate is “almost certainly way too high”. The expert, Anthony Cordesman, is an expert in military affairs, not in epidemiology or the statistics of sampling.
The ABC’s David Mark was even less subtle:
“A more reliable count of the number of civilian deaths in Iraq may be found on the Iraq Body Count website.”
The Iraq Body Count is just a count of the civilian deaths reported by the media. There is no reason to expect more than a small fraction of all deaths to make it into a newspaper. There isn’t any contradiction between the Hopkins study and the IBC — they are measuring different things, but journalists usually misled their readers by writing about the difference between the two numbers as if they were measuring the same thing. For example, Tom Hyland in the Age:
“Controversy surrounds the civilian death toll since the 2003 invasion. A recent study in The Lancet estimated 655,000 people have died. The campaign group Iraq Body Count puts the number of reported civilian dead at between 44,741 and 49,697.”
When journalists talked to experts about the study, they often talked to people who, like Cordesman, were experts in something not relevant to judging the accuracy of the study. Those reporters who talked to experts in sampling learned that the methodology was sound and that the researchers were well respected. The ABC’s Tom Iggulden talked to epidemiologist Mike Toole and was able to report:
“Roberts used the same cluster methodology to count the civilian death toll in the Congo conflict. The same type of survey was also used in Sudan.”
The estimated numbers of deaths in those conflicts were also shocking but journalists did not describe them as “controversial”.
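The cluster methodology mentioned above is straightforward to illustrate: instead of drawing households one by one from across the whole country, the survey team randomly selects neighbourhoods and then interviews a run of adjacent households in each. The simulation below is a toy sketch of that idea; every number in it (100 neighbourhoods, 50 households each, the death rates, 30 sampled clusters) is hypothetical and not drawn from the Hopkins study:

```python
import random

random.seed(0)

# Hypothetical population: 100 neighbourhoods of 50 households each,
# where the chance a household suffered a death varies by neighbourhood.
population = []
for _ in range(100):
    local_rate = random.uniform(0.0, 0.2)
    population.append([1 if random.random() < local_rate else 0
                       for _ in range(50)])

true_rate = sum(map(sum, population)) / (100 * 50)

# Cluster sample: randomly pick 30 neighbourhoods and record every
# household in each, rather than sampling households individually.
clusters = random.sample(population, 30)
sampled = [h for cluster in clusters for h in cluster]
estimate = sum(sampled) / len(sampled)

print(f"true rate {true_rate:.3f}, cluster estimate {estimate:.3f}")
```

Clustering makes fieldwork feasible in a war zone, at the cost of somewhat wider confidence intervals than a simple random sample of the same size — a trade-off the study’s critics rarely engaged with.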
When George Bush was asked about the study he answered: “The methodology is pretty well discredited”. It seems that the reporters believed him because nobody asked him what was wrong with the methodology. John Howard offered this: “It’s not plausible, it’s not based on anything other than a house-to-house survey.” Nobody asked Howard why the Australian Bureau of Statistics uses surveys if they are so implausible.
The reporters, while ill-informed, at least had the excuse that they lacked the time to learn more about the subject. The writers of the opinion pieces attacking the study that followed a few days later had to actively avoid learning that the methodology was sound.
The Australian published a piece by David Burchell, a historian with no background in science or mathematics:
“Yet The Lancet — a respected publication, albeit not one known for its expertise in social statistics analysis — has given the report its full backing.”
Yes, Burchell claimed that one of the leading medical journals in the world had no business publishing a study on mortality.
The Courier Mail published a piece by Ted Lapkin, whose first degree was in history and who worked as a political lobbyist rather than in epidemiology:
“In The Wall Street Journal, political statistician Steven Moore savaged both Lancet studies on account of their minuscule sample groups. And it’s not as if Moore had no experience in Iraq. He spent much of the past three years in Baghdad measuring public opinion and training Iraqi pollsters.”
Actually Moore is not a statistician, but a Republican political consultant. And if Moore really thought that the sample size was too small, why did the polls he ran in Iraq use an even smaller sample size?
Why did journalists disbelieve the results of the survey? The most likely reason is that it was so very different from the two other numbers for deaths that were usually cited, the 100,000 estimate from the previous survey and the Iraq Body Count’s 50,000 deaths reported in the media. Neither one really contradicts the new survey, since each covers a different time period or measures something different, but journalists’ gut reaction told them that the new number was wrong.
It is not reasonable to expect journalists to be experts in epidemiology. But it is reasonable to have expected them to be able to find such experts and write informed stories on the Hopkins study. For the most part, they failed.