Back in November 2001 Neil Munro was an advocate of war with Iraq and predicted:

The painful images of starving Iraqi children will be replaced by alluring Baghdad city lights, smiling wage-earners and Palestinian job seekers.

Iraq war advocates like Munro don’t like the results of the Lancet study, which suggest that about 600,000 Iraqis have died as a result of the war they championed. So Munro has written a piece that throws every piece of mud he can find at the study, and to its discredit, the National Journal has published it. And if you think I’m being unfair by stressing Munro’s obvious bias, consider that Munro devotes 1,400 words to how the Lancet researchers were ideologically biased against the Iraq war and were funded by the Great Satan himself, George Soros. Presumably Munro thinks that studies on cancer are brought into question because the researchers are anti-cancer and funded by anti-cancer foundations.

And of course Munro’s fellow war supporters amongst bloggers, such as Glenn Reynolds, Charles Johnson and Gateway Pundit, seized on his article as proof that the study was a fraud or at least had serious problems. The serious problems, however, are with Munro’s article, which contains fabricated claims, misrepresentation of sources, a misleading graph, and reliance on discredited arguments.

Munro writes:

Even Garfield, a co-author of the first Lancet article, is backing away from his previous defense of his fellow authors. In December, Garfield told National Journal that he guesses that 250,000 Iraqis had died by late 2007. That total requires an underlying casualty rate only one-quarter of that offered by Lancet II.

I contacted Garfield and this is a misrepresentation of his views. He told me:

I seem to have a special ability to make statements that lend themselves to misinterpretation.

I could not believe that 100,000 had died in 2004, but the best evidence made me believe it.

As a guess, out of the blue, I feel confident that at least a quarter million Iraqis have died due to violence since the 2003 invasion. But that is just a guess.

An estimate, based on field data, collected via good methods, is far better than a guess, even if there are some biases along with imprecision in it.

He is not backing away from his statement in 2006 that:

I am shocked that it is so high, it is hard to believe, and I do believe it. There is no reasonable way to not conclude that this study is by far the most accurate information now available.

Munro claims:

The survey teams failed to collect the fraud-preventing demographic data that pollsters routinely gather.

This is an obvious fabrication. Munro even provides a copy of the survey instrument, which tells the surveyor to record demographic data:

Who lives in this household? (Resident means spent most of the past 3 months sleeping in this household.) (only record M/F and the age, if less than 4 years, record age in months)

Correction: I was wrong. While the plan was to collect ages (hence the instructions above), during the survey this was dropped to speed things up. Les Roberts explains here. My apologies to Neil Munro.

Munro presents a highly misleading graph (shown on right) that compares the Lancet estimates of violent deaths with the IBC count of the civilian deaths reported by the media. Not all deaths are reported by the media, but the graph deceptively presents the IBC number as an estimate of violent deaths.

Munro presents a list of what he claims are “potential problems”. Long-time readers will be familiar with most of these alleged problems, but for the sake of completeness I’ll go through the entire list.


Still, the authors have declined to provide the surveyors’ reports and forms that might bolster confidence in their findings. Customary scientific practice holds that an experiment must be transparent — and repeatable — to win credence. Submitting to that scientific method, the authors would make the unvarnished data available for inspection by other researchers.

This is deceitful. Munro is well aware that IRB rules require that the identity of respondents be kept confidential and that breaking such rules is not “customary scientific practice”. We know this because he has a sidebar alleging that the Lancet team broke IRB rules by recording the names of respondents.


Sample size. The design for Lancet II committed eight surveyors to visit 50 regional clusters (the number ended up being 47) with each cluster consisting of 40 households. By contrast, in a 2004 survey, the United Nations Development Program used many more questioners to visit 2,200 clusters of 10 houses each. This gave the U.N. investigators greater geographical variety and 10 times as many interviews, and produced a figure of about 24,000 excess deaths — one-quarter the number in the first Lancet study. The Lancet II sample is so small that each violent death recorded translated to 2,000 dead Iraqis overall.

This is just a rehash of Steven E. Moore’s innumerate criticism. When sampling, you do not need a larger sample when the population is larger. Steve Simon explains:

The best analogy I have heard about sampling goes something like: “Every cook knows that it only takes a single sip from a well-stirred soup to determine the taste.” It’s a nice analogy because you can visualize what happens when the soup is poorly stirred.

And you just need a single sip whether it’s a large vat or a small pot of soup. Munro does not understand the most basic thing about sampling and, despite apparently spending months on his story, failed to learn it. Rebecca Goldin is also unimpressed with Munro’s understanding of statistics.

Munro’s comparison of the 2004 UNDP survey with Lancet I is misleading. He failed to mention that the UN survey covered a different time frame and measured something different. Where the two surveys measured the same thing, the UNDP and Lancet I figures agree.
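The soup analogy can be checked with a quick simulation. In this sketch (the numbers are illustrative, not the actual survey parameters), the same sample size is drawn from two populations differing a hundredfold in size, and the typical estimation error barely changes:

```python
import random

random.seed(1)

def sampling_error(pop_size, sample_size, true_rate=0.1, trials=2000):
    """Average error when estimating a prevalence of `true_rate`
    from a simple random sample of `sample_size` people."""
    n_ones = int(pop_size * true_rate)
    population = [1] * n_ones + [0] * (pop_size - n_ones)
    errors = []
    for _ in range(trials):
        sample = random.sample(population, sample_size)
        estimate = sum(sample) / sample_size
        errors.append(abs(estimate - true_rate))
    return sum(errors) / trials  # mean absolute error

# Same sample size, populations differing 100-fold in size:
small = sampling_error(pop_size=10_000, sample_size=500)
large = sampling_error(pop_size=1_000_000, sample_size=500)
print(small, large)  # nearly identical
```

The error is governed by the sample size (roughly shrinking as the square root of the number of interviews), not by how big the population being sampled is, which is why the “each violent death translated to 2,000 dead Iraqis” framing is a red herring.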


“Main street” bias? According to the Lancet II article, surveyors randomly selected a main street within a randomly picked district; “a residential street was then randomly selected from a list of residential streets crossing the main street.” This method pulled the survey teams away from side streets and toward main streets, where car bombs can kill the most people, thus boosting the apparent death rate, according to a critique of the study by Michael Spagat, an economics professor at the Royal Holloway, University of London, and Sean Gourley and Neil Johnson of the physics department at Oxford University.

Even if this “main street bias” exists (and it’s likely that it’s just a poorly worded sentence in the paper), it makes no significant difference.


Oversight. To undertake the first Lancet study, Roberts went into Iraq concealed on the floor of an SUV with $20,000 in cash stuffed into his money belt and shoes. Daring stuff, to be sure, but just eight days after arriving, Roberts witnessed the police detaining two surveyors who had questioned the governor’s household in a Sadr-dominated town. Roberts subsequently remained in a hotel until the survey was completed. Thus, most of the oversight for Lancet I — and all of it for Lancet II — was done long-distance.

Roberts did not need to go to Iraq for Lancet II, since he was satisfied that Lafta could properly conduct the survey.


To Kane, the study’s reported response rate of more than 98 percent “makes no sense,” if only because many male heads of households would be at work or elsewhere during the day and Iraqi women would likely refuse to participate.

To my knowledge, David Kane has never conducted a door-to-door survey in Iraq or anywhere else. Why is his uninformed opinion presented by Munro? Oh right, he said something that suited Munro’s agenda.

Lack of supporting data. The survey teams failed to collect the fraud-preventing demographic data that pollsters routinely gather.

As noted earlier, this is an outright fabrication by Munro.


Death certificates. The survey teams said they confirmed most deaths by examining government-issued death certificates, but they took no photographs of those certificates.

I can’t help but be impressed with the way Munro feigns ignorance of the IRB rules he put in the sidebar.

Under pressure from critics, the authors did release a disk of the surveyors’ collated data, including tables showing how often the survey teams said they requested to see, and saw, the death certificates. But those tables are suspicious, in part, because they show data-heaping, critics said. For example, the database reveals that 22 death certificates for victims of violence and 23 certificates for other deaths were declared by surveyors and households to be missing or lost. That similarity looks reasonable, but Spagat noticed that the 23 missing certificates for nonviolent deaths were distributed throughout eight of the 16 surveyed provinces, while all 22 missing certificates for violent deaths were inexplicably heaped in the single province of Nineveh. That means the surveyors reported zero missing or lost certificates for 180 violent deaths in 15 provinces outside Nineveh. The odds against such perfection are at least 10,000 to 1, Spagat told NJ.

Well, he may have told NJ that, but the only way he could have done such a calculation is by making some unwarranted assumptions, such as that missing certificates would have the same distribution for violent and non-violent deaths. Furthermore, you can always find low-probability patterns in any data. For example, I just rolled a die four times and got the sequence 6426. Notice how the number goes down by two each time (wrapping around when it reaches 0). The odds of this happening are 215 to 1 against. But while this particular pattern is unlikely, there are many, many patterns, so it is almost certain that I can find some pattern to fit any sequence.
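The pattern-hunting point can be made concrete with a small sketch (the list of candidate patterns is my own illustrative choice). Any one pattern, like the “steps down by two” sequence above, is individually rare, but the chance that a random four-roll sequence matches *some* pattern from even a short list is far higher:

```python
import itertools

def steps_down_by_two(seq):
    """The pattern in the post: each roll is the previous one minus 2,
    wrapping around, e.g. 6 -> 4 -> 2 -> 6 (i.e. arithmetic mod 6)."""
    return all((a - b) % 6 == 2 for a, b in zip(seq, seq[1:]))

# Enumerate all 6**4 = 1296 possible four-roll sequences.
all_rolls = list(itertools.product(range(1, 7), repeat=4))

# The specific pattern: the first roll is free, the remaining three
# are forced, so it matches 6 of 1296 sequences = 1/216 (215 to 1).
hits = sum(steps_down_by_two(seq) for seq in all_rolls)
print(hits, len(all_rolls))  # 6 out of 1296

# But if we would count ANY of a handful of patterns as "suspicious":
patterns = [
    steps_down_by_two,
    lambda s: len(set(s)) == 1,                           # all four the same
    lambda s: list(s) == sorted(s) and len(set(s)) == 4,  # strictly ascending
    lambda s: s == s[::-1],                               # palindrome
    lambda s: all((b - a) % 6 == 2 for a, b in zip(s, s[1:])),  # steps up by two
]
any_hit = sum(any(p(seq) for p in patterns) for seq in all_rolls)
print(any_hit / len(all_rolls))  # much more likely than 1/216
```

This is the multiple-comparisons problem: quoting the odds of one pattern after you have gone looking for patterns wildly overstates how surprising the data really are.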


Suspicious cluster. Lafta’s team reported 24 car bomb deaths in early July, as well as one nonviolent death, in “Cluster 33” in Baghdad. The authors do not say where the cluster was, but the only major car bomb in the city during that period, according to Iraq Body Count’s database, was in Sadr City. It was detonated in a marketplace on July 1, likely by Al Qaeda, and killed at least 60 people, according to press reports.

The authors should not have included the July data in their report because the survey was scheduled to end on June 30, according to Debarati Guha-Sapir, director of the World Health Organization’s Collaborating Center for Research on the Epidemiology of Disasters at the University of Louvain in Belgium.

This is ridiculous. The survey was designed for 50 clusters. Even if the survey was scheduled to end by June 30, it makes no sense to stop then if some provinces were unsampled.

The Cluster 33 data is curious for other reasons as well. The 24 Iraqis who died violently were neatly divided among 18 houses — 12 houses reported one death, and six houses reported two deaths, according to the authors’ data. This means, Spagat said, that the survey team found a line of 40 households that neatly shared almost half of the deaths suffered when a marketplace bomb exploded among a crowd of people drawn from throughout the broader neighborhood.

Is it possible that a group of people from the same street travelled to the marketplace together for protection? Iraq is a dangerous place, you know.

After this, Munro goes on at length about the “ideological bias” against the Iraq war of the people connected with the survey. He states:

Whether this affected the authors’ scientific judgments and led them to turn a blind eye to flaws is up for debate.

Do you think that perhaps Munro’s spirited advocacy of the war affected his journalistic judgments and led him to turn a blind eye to flaws in the criticisms he printed?

I asked Munro to explain how someone such as himself, with an evident bias towards presenting the war as a success was chosen to write the piece, and why his war advocacy was not disclosed. He evaded the question, writing:

We’re a journal of fact and politics, not opinion, and that’s why we printed the facts about Soros’ money, Lafta’s Saddam-era articles, the claim of 15,000 dead from U.S. vehicles, Roberts’ description of Lafta’s views, Spagat & Kane’s claims about data-heaping, etc.

It is rather telling that, to Munro, the most damning “fact” is “Soros’ money”. And if you were wondering about “the claim of 15,000 dead from U.S. vehicles”, neither Lancet study makes any such claim. Perhaps Munro’s eyes were dazzled by those alluring Baghdad city lights.

Update: Burnham and Roberts reply to Munro:

The overwhelming confirmatory evidence of the Lancet study findings, the conventional nature of our survey procedures, and the abundance of internal consistencies in the data of which Mr. Munro was informed and chose not to report, suggests that National Journal’s critique of our work should itself be examined for political motivations.


  1. #1 Tim Lambert
    January 15, 2008

    BruceR, nice job of removing Garfield’s statement from its context. Full para:

    >As a guess, out of the blue, I feel confident that at least a quarter million Iraqis have died due to violence since the 2003 invasion. But that is just a guess.

    If L2 did not exist he would have guessed 250,000. But he says that L2 trumps his guess. Munro misrepresented Garfield’s position.

  2. #2 BruceR
    January 16, 2008

    An analogy: if a business associate of yours told you today, “I just finished my tax return, and although it’s hard to believe, our firm booked $600,000 in income this year,” and then a few days later she told you, “It’s just a guess, but I’m confident our firm made over $250,000,” you would certainly be justified in pointing out the discrepancy, and asking her what might have changed between the two statements. Her then saying “I stand by both statements” might not be logically refutable, but is hardly evidence of either good faith or sincerity.

    The additional context you provide does not change the analogous situation with regard to Dr. Garfield’s recent public comments on L2.

  3. #3 BruceR
    January 16, 2008

    Not to harp, but the lower bound at 95% confidence for L2’s estimate of violent deaths was ~420,000. If the fellow’s personal estimate now only amounts to 60% of that (not counting the significant differences in reporting period between a study that ended in mid-2006 and an estimate made in early 2008) then quite clearly L2 is not currently convincing to him in any non-abstract way, at least based on the correspondence you have shared here.
