In a very prestigious journal called the New England Journal of Medicine there was an article published on 1 July 2004. Military doctors interviewed soldiers returning from Iraq.
They interviewed them because they were interested in post-traumatic stress disorder, so they asked the soldiers about stressful things that might have happened to them.
Among other things they found that 14 percent of the ground forces in the army had killed a non-combatant and 28 percent of returning Marines had killed a non-combatant.
If you work through the numbers you come up with a figure pretty darn close to our estimate in the Lancet.
Daniel Davies worked through the numbers as well:
Quite extraordinary. He refers to this article in the New England Journal of Medicine, which finds (Table 2) that in a survey of 894 US Army soldiers, 116 of them (out of 861 who responded to the survey question) regarded themselves as having been personally responsible for the death of a noncombatant. That’s 13.47% (I don’t know why the NEJM rounds it to 14% and suspect someone has made a transcription error).
I think that the most sensible way to extrapolate from this (which is not to say that this is a legitimate calculation; call it the least bad way to create a number) is to say that, given that it was an eight month tour of duty, we got 116 noncombatant deaths in about 215000 troop-days. There were 250,000 US and 45,000 British troops (plus other coalition forces) in the initial assault on Iraq and about 130k US and 20K coalition troops by December. I’m guessing that this gives us 3 months of 300k troops and 5 months of 150k troops. That would be roughly 50m troop-days in the eight months of the tour of duty of the troops surveyed.
50,000,000 x (116/215,000) = about 27,000 civilian deaths. Note that UK troops would have seen fewer noncombatant deaths per troop-day, but units like the 815 US Marines surveyed saw twice the rate of the regular Army units — also, I am not allowing for the fact that some soldiers might have been responsible for multiple noncombatant deaths.
This is really quite consistent with the Lancet study; if you crudely scale it up from eight months to eighteen you get 60,000 deaths, which is significantly more than the Lancet team would have attributed to coalition troops.
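Davies' arithmetic is easy to verify; here is a quick sketch in Python, using only the round numbers from the quote above (months are assumed to be 30 days):

```python
# Back-of-envelope check of Daniel Davies' extrapolation from the NEJM survey.
# All inputs are the figures quoted above.

soldiers_surveyed = 894            # US Army soldiers in the NEJM survey
reported_kills = 116               # respondents who felt responsible for a noncombatant death
tour_days = 8 * 30                 # eight-month tour of duty

troop_days_surveyed = soldiers_surveyed * tour_days         # ~215,000 troop-days
deaths_per_troop_day = reported_kills / troop_days_surveyed

# Coalition presence over the same eight months: roughly 3 months of
# 300k troops, then 5 months of 150k troops -- about 50 million troop-days.
total_troop_days = 3 * 30 * 300_000 + 5 * 30 * 150_000

estimate_8_months = total_troop_days * deaths_per_troop_day    # ~27,000 deaths
estimate_18_months = estimate_8_months * 18 / 8                # ~60,000 deaths

print(round(estimate_8_months), round(estimate_18_months))
```

The script reproduces Davies' figures: roughly 27,000 deaths over eight months, and roughly 60,000 when crudely scaled to eighteen.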
That’s about 110 deaths per day. In a later article Roberts came up with a similar number:
A report in the New England Journal of Medicine in July 2004, based on interviews with returning U.S. soldiers, suggests an unintentional non-combatant death toll of 133 deaths per day.
We recently contacted the first author of the NEJM paper, Dr Charles Hoge, who replied as follows:
“In no way can our data be used to estimate civilian deaths. We ask two questions related to killing of enemy combatants and civilians on our anonymous questionnaire for U.S. soldiers, but neither can be used to estimate casualty rates. We ask if at any time in the deployment the soldier perceived that he was responsible for the death of an enemy combatant, and another similar question pertaining to the death of civilians. Since all members of a team may in some way be responsible during a combat operation, these questions can in no way be used to estimate actual civilian casualty numbers.” (email to John Sloboda, dated 8th May 2006)
In summary, you have published a claim, on the basis of the Hoge et al paper, which the lead author of that paper says is unsustainable (just as we had independently argued).
There are two matters of serious concern:
1. You have misused the authority of the New England Journal of Medicine and the authors of this July 2004 paper to promote a claim which has no basis in that study and which is explicitly rejected by the authors of that paper.
2. The supposed 133-per-day rate of civilian deaths is one of several “estimates” used by you and many of your readers to make unwarranted claims about the relative value of different studies of Iraqi mortality, and the likely overall death toll. Your use of this figure, and the use made of it by others, has thus helped to spread confusion and misinformation on a matter which is of the utmost gravity, and where therefore the highest standards of rigour and professionalism are needed from those claiming academic expertise and authority.
Now Sloboda does have one point here. Roberts cites the NEJM as the source for the 133 figure when it’s not contained in that paper but calculated from it. The cite should be to a document where the details of the calculation are given. But Sloboda’s other points are wrong. Clearly the NEJM paper can be used to construct an estimate, regardless of what Hoge thinks. And before the IBC accuses Roberts of spreading confusion, they might want to put their own house in order.
Roberts explains how the 133 number was calculated in his reply, but this bit is the most interesting:
Finally, when we measured the sensitivity of your surveillance system during the first 18 months, we found it was <5%. This is what we generously referred to in the HPN paper as “cannot be more than 20% complete.” The Falluja deaths in our data set were recorded by month, and in IBC the Falluja deaths were not as distinct as elsewhere, so we could not match them. However, among the other 21 violent deaths we encountered in our random sample of 988 households, one was in the IBC data set. Thus, unless you have evaluated the sensitivity of your system from some independent data source, I hope you will temper the statements you make about the complete nature of the IBC dataset.
5% completeness is the norm for newspaper reporting in times of war. (See Patrick Ball’s work in Guatemala, online with the AAAS.) I suspect and hope that the sensitivity has increased over time as systems have improved and the role of major battles with airpower has diminished. But the speculation in the press that the real number might only be twice the IBC tally is preposterous.
Last October 11th, I was invited at 2 hours’ notice, before a flight left, to appear on the BBC program Newsnight with Jack Straw. I called you at that time, hoping to hear that IBC had calibrated the system, and to give you the chance to defend the IBC sensitivity before I said on-air that I had found it to be ~5% complete. Because we did not speak, I did not report our evaluation of your sensitivity in public then, and have not until now. I thought I was doing you a favor by calling.
5) As for your “Speculation is no substitute” paper, I discussed it with some of my coauthors when it arrived. We decided that it was so devoid of credibility, and so laden with self-interest rather than the interest of the Iraqis, that it did not merit a response.
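Roberts’ <5% sensitivity figure follows directly from the overlap he describes; a minimal sketch of the calculation:

```python
# Roberts' sensitivity check on the IBC surveillance system: of the 21
# non-Falluja violent deaths in the Lancet random sample of 988 households,
# only 1 also appeared in the IBC database.
violent_deaths_in_sample = 21
matched_in_ibc = 1

sensitivity = matched_in_ibc / violent_deaths_in_sample
print(f"{sensitivity:.1%}")   # ~4.8%, i.e. <5% of deaths captured
```

With a single match out of 21, the point estimate is 1/21 ≈ 4.8%, which is where the "<5%" claim comes from (the small sample means the true figure carries a wide confidence interval).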
The “Speculation is no substitute” paper contains serious errors which the IBC refuse to correct.