Walkley Magazine article on Lancet study

The latest issue of the Walkley Magazine has an article I wrote about the media coverage of the Lancet study. They haven't made it available online, so I've put a copy below the fold.

Imagine an alternate Earth. Let's call it Earth 2. On Earth 2, just like our planet, there was a Boxing Day tsunami that killed about a quarter of a million people. On our planet the tsunami was front page news for days and because of the horrendous death toll people opened their hearts and their wallets. On Earth 2 the reaction to the tsunami's death toll was different. The story was in the papers for one day and was buried in the inside pages. John Howard said the estimate was not believable. George Bush said that the methodology was discredited. News reporters made it clear that they didn't believe the number. Opinion pieces were published dismissing the estimate because the Red Cross was "anti-tsunami". Does Earth 2 sound far-fetched? Well, that's basically what happened when researchers from Johns Hopkins published an estimate that there had been roughly 650,000 extra Iraqi deaths as a result of the war.

What is even stranger is that the estimate wasn't produced by some incomprehensible scientific technique, but via a scientific method that journalists are familiar with and know gives reliable results when used properly -- a survey.

There are three possible ways that surveys can give wrong answers. First, the sample size could be too small. The Johns Hopkins study surveyed 1800 households, which is more than the 1000 people commonly surveyed in opinion polls.
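As a rough illustration of why 1,800 is a healthy sample size (my own back-of-envelope sketch, not a calculation from the study, and it ignores the design effect that cluster sampling adds), the margin of error of a simple random sample shrinks roughly as one over the square root of the sample size:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of
    size n estimating a proportion p (p = 0.5 is the worst case)."""
    return z * math.sqrt(p * (1 - p) / n)

# 1,800 households (the Johns Hopkins survey) vs 1,000 respondents
# (a typical opinion poll): roughly 2.3 vs 3.1 percentage points.
print(round(margin_of_error(1800) * 100, 1))
print(round(margin_of_error(1000) * 100, 1))
```

So on this crude measure the study's sample was, if anything, better than the polls journalists report without complaint.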

Second, the sample might not be representative. The Johns Hopkins study used standard techniques to randomly choose households to be surveyed. Because of the very high response rate achieved, the sample is likely more representative than in polls taken in Australia.

And third, respondents might not tell the truth. The Johns Hopkins study verified deaths by checking death certificates. In polls of voting intentions there is no way to know if respondents are telling the truth.

Further, a survey should be compared with the results of other surveys to see if the results were consistent. There has only been one other survey designed to measure Iraqi deaths, and that was conducted by the same team of researchers and found that there had been about 100,000 excess deaths in the first eighteen months after the invasion. This agreed well with the new survey which gave a similar number over the same time frame. There was another survey on Iraqi living conditions that included a question on war-related deaths that produced an estimate for those deaths in the first year that was about half that of the new survey, but the researchers felt that their survey had missed a significant number of these deaths.

So it is possible that because of some unknown factor the Hopkins study gave a wrong answer, but by any objective measure its results are more reliable than those of the opinion polls journalists routinely accept.

But journalists did not accept this number and made their disbelief quite clear. For example, the AP's Malcolm Ritter began his story like this:

"A controversial new study contends nearly 655,000 Iraqis have died because of the war, suggesting a far higher death toll than other estimates.

"The timing of the survey's release, just a few weeks before the U.S. congressional elections, led one expert to call it 'politics.'"

In his first sentence he calls it "controversial" and gives a reason to disbelieve the result. His second sentence gives another reason to doubt it and brings in the opinion of the only expert he quotes in the story, who is also quoted as saying that the estimate is "almost certainly way too high". That expert, Anthony Cordesman, is an authority on military affairs, not on epidemiology or the statistics of sampling.

The ABC's David Mark was even less subtle:

"A more reliable count of the number of civilian deaths in Iraq may be found on the Iraq Body Count website."

The Iraq Body Count is just a count of the civilian deaths reported by the media. There is no reason to expect more than a small fraction of all deaths to make it into a newspaper. There isn't any contradiction between the Hopkins study and the IBC -- they are measuring different things -- but journalists usually misled their readers by writing about the difference between the two numbers as if they were measuring the same thing. For example, Tom Hyland in the Age:

"Controversy surrounds the civilian death toll since the 2003 invasion. A recent study in The Lancet estimated 655,000 people have died. The campaign group Iraq Body Count puts the number of reported civilian dead at between 44,741 and 49,697."

When journalists talked to experts about the study, they often talked to people who, like Cordesman, were experts in something not relevant to judging the accuracy of the study. Those reporters who talked to experts in sampling learned that the methodology was sound and that the researchers were well respected. The ABC's Tom Iggulden talked to epidemiologist Mike Toole and was able to report:

"Roberts used the same cluster methodology to count the civilian death toll in the Congo conflict. The same type of survey was also used in Sudan."

The estimated numbers of deaths in those conflicts were also shocking but journalists did not describe them as "controversial".

When George Bush was asked about the study he answered: "The methodology is pretty well discredited". It seems that the reporters believed him because nobody asked him what was wrong with the methodology. John Howard offered this: "It's not plausible, it's not based on anything other than a house-to-house survey." Nobody asked Howard why the Australian Bureau of Statistics uses surveys if they are so implausible.

The reporters, while ill-informed, at least had the excuse that they lacked the time to learn more about the subject. The writers of the opinion pieces that attacked the study a few days later had to actively avoid learning that the methodology was sound.

The Australian published a piece by David Burchell, a historian with no background in science or mathematics:

"Yet The Lancet -- a respected publication, albeit not one known for its expertise in social statistics analysis -- has given the report its full backing."

Yes, Burchell claimed that one of the leading medical journals in the world had no business publishing a study on mortality.

The Courier Mail published a piece by Ted Lapkin, whose first degree was in history and who worked as a political lobbyist rather than in epidemiology:

"In The Wall Street Journal, political statistician Steven Moore savaged both Lancet studies on account of their minuscule sample groups. And it's not as if Moore had no experience in Iraq. He spent much of the past three years in Baghdad measuring public opinion and training Iraqi pollsters."

Actually Moore is not a statistician, but a Republican political consultant. And if Moore really thought that the sample size was too small, why did the polls he ran in Iraq use an even smaller sample size?

Why did journalists disbelieve the results of the survey? The most likely reason is that it was so very different from the two other numbers for deaths that were usually cited, the 100,000 estimate from the previous survey and the Iraq Body Count's 50,000 deaths reported in the media. Neither one really contradicts the new survey since they cover a different time period or measure something different, but journalists' gut reaction told them that the new number was wrong.

It is not reasonable to expect journalists to be experts in epidemiology. But it is reasonable to have expected them to be able to find such experts and write informed stories on the Hopkins study. For the most part, they failed.


But it is reasonable to have expected them to be able to find such experts and write informed stories on the Hopkins study. For the most part, they failed.

Well, journos are just another segment of human society and have the same foibles as the rest of us, and they are responding to market forces.

And many markets don't want to bother their beautiful minds with messy truths.

Best,

D

The Johns Hopkins study used standard techniques to randomly choose households to be surveyed.

Really? Why then, can't anyone say what those techniques were?

Further, a survey should be compared with the results of other surveys to see if the results were consistent. There has only been one other survey designed to measure Iraqi deaths, and that was conducted by the same team of researchers and found that there had been about 100,000 excess deaths in the first eighteen months after the invasion. This agreed well with the new survey which gave a similar number over the same time frame. There was another survey on Iraqi living conditions that included a question on war-related deaths that produced an estimate for those deaths in the first year that was about half that of the new survey, but the researchers felt that their survey had missed a significant number of these deaths.

This entire passage is equivocation. There have been two other surveys designed to measure Iraqi deaths. The "agreement" between the two Lancet surveys is illusory statistical sleight of hand. The ILCS survey was far less than half the new survey, and the researchers did and do not "feel" what you assert.

"Why did journalists disbelieve the results of the survey?"

Maybe it's because those who are promoting it are so transparently deceptive and misleading?

"Really? Why then, can't anyone say what those techniques were?" said joshd

Joshd, here is a link to the paper. I think it covers your question...

http://tinyurl.com/ymvpds

By ERIC JUVE (not verified) on 05 Dec 2006 #permalink

Eric, it doesn't. You haven't been keeping up. The methodology that was published and peer-reviewed is apparently not the one they used, and nobody has so far given any explanation of what they did use that isn't hopelessly vague or contradictory.

joshd, you need to stop reading denialist writings; that's why you're reading "hopelessly vague or contradictory" accounts.

QrazyQat implies that a non-vague or contradictory account of the methodology actually used exists, and issues a loaded argument to the effect that I've just not seen it. Great. I'm eager to read it. Please post it for me to read.

"but via a scientific method that journalists are familiar with and know gives reliable results when used properly -- a survey"

How do we know that surveys ever give reliable results? The only real way to know is to do a large number of experiments along the following lines: you have two sets of researchers, A and B. Researchers A set up an experiment with some large number of people where they already know the answer, e.g. they hold a fake vote and know which person voted for which option because they have been secretly observing everything. Research team B then uses standard survey methodology to obtain an estimate of the results of the vote. You then compare to see how accurate statistical models and survey techniques are. You would also have to do a very large number of experiments along these lines, in many different situations, in order to confirm the validity of survey techniques. The only thing I know of that has been done in this area is opinion polling for elections. But opinion polls have been wrong, and they also tend to use information from actual voting results to 'adjust' their results, which implies that something is fundamentally wrong with survey methodology.

Also I should mention that survey consistency does not imply correctness. Surveys could be wrong but wrong in a consistent way.
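The calibration experiment the commenter describes is easy to simulate. Here is a minimal sketch (my own illustration, not anything from the study): team A secretly knows the true result of a vote, and team B estimates it by drawing random samples.

```python
import random

random.seed(1)

# Team A secretly knows the truth: 42% of 100,000 voters said yes.
population = [1] * 42_000 + [0] * 58_000
true_share = sum(population) / len(population)

# Team B repeatedly estimates that share from random samples of 1,000.
estimates = [
    sum(random.sample(population, 1000)) / 1000
    for _ in range(500)
]

mean_estimate = sum(estimates) / len(estimates)
print(true_share, round(mean_estimate, 3))
```

The individual estimates scatter by a couple of percentage points, but they cluster tightly around the true 42% -- which is exactly the property that opinion polls, and mortality surveys, rely on.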

"The Johns Hopkins study used standard techniques to randomly choose households to be surveyed." It's hard to see how any standard technique can apply to an anarchic war zone. (Indeed, it sounds like the authors had to invent some rules of thumb to deal with the horrible situation in Iraq.)

There's also a fourth way a survey could be botched: individual survey teams could screw things up. As I understand it, the survey teams were composed of local Iraqis. If even one team cut corners and decided to save time by reporting some bogus data, that could botch the results. If a team were ideologically opposed to the American occupation and decided to inflate their reported death tolls, that would botch things even more. (The high response rate to the survey makes me a little suspicious -- given the state of affairs in Iraq, I would think people would be far LESS likely to cooperate with survey teams than in free countries.)

In addition to the previous Lancet study and the Iraq Body Count, there was a UN report claiming 3000 violent deaths across the country in the month of August. That's about one fifth of the death rate in the Lancet study. The UN got their figure from hospitals and morgues. Is it possible that the UN only got one fifth of the deaths in Iraq? Maybe, but it seems awfully unlikely.
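A quick check of that "one fifth" arithmetic (my own sums, using the 655,000 headline figure spread over roughly 40 months of occupation; the study's violent-death figure is somewhat lower, so this is only approximate):

```python
lancet_total = 655_000        # headline excess-death estimate
months = 40                   # roughly March 2003 to mid-2006
lancet_per_month = lancet_total / months   # about 16,400 per month
un_august = 3_000             # UN figure for violent deaths in August

# Ratio of the UN's monthly figure to the Lancet study's implied rate.
print(round(un_august / lancet_per_month, 2))
```

That comes out at about 0.18, so "about one fifth" is fair as a rough comparison.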

I'll also repost a previous comment of mine:

"Car bombs account for 13% of the deaths in the survey. If the authors extrapolated that proportion into the total number of excess deaths -- and I think that's what they did -- then even at their low-end figure of 400k excess deaths you end up with 52000 deaths from car bombs. That averages to slightly over 40 deaths per day from car bombs since the occupation began, an implausibly high death rate.

"How did they get such a high figure for car bomb deaths? The simplest explanation is that through some sort of error they dramatically oversampled especially violent areas of Iraq. And if they oversampled violent ares, then their entire death rate from violence will be considerably inflated."
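The commenter's numbers are easy to replicate (this checks the arithmetic only, not the underlying extrapolation, and the 11 October 2006 release date is my assumption):

```python
from datetime import date

car_bomb_share = 0.13        # fraction of survey deaths attributed to car bombs
low_end_excess = 400_000     # study's low-end excess-death figure
car_bomb_deaths = car_bomb_share * low_end_excess   # 52,000

# Days from the March 2003 invasion to the study's October 2006 release.
days = (date(2006, 10, 11) - date(2003, 3, 20)).days

print(int(car_bomb_deaths), days, round(car_bomb_deaths / days, 1))
```

That gives 52,000 deaths over 1,301 days, i.e. about 40 per day, matching the comment's figure.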

But it's hard to see why people obsess so much over the death toll. If it turned out that only 60000 people had been killed in the Boxing Day tsunami, we would still call it a huge catastrophe. And it's pretty obvious that Iraq is a humanitarian disaster, even if the death toll is far lower than the Lancet figure.

By Peter Caress (not verified) on 05 Dec 2006 #permalink

"But it's hard to see why people obsess so much over the death toll. If it turned out that only 60000 people had been killed in the Boxing Day tsunami, we would still call it a huge catastrophe. And it's pretty obvious that Iraq is a humanitarian disaster, even if the death toll is far lower than the Lancet figure."

Hard for you to see why people obsess over the death toll, indeed, especially when the article in question began with a comparison of news coverage of humanitarian disasters. That comparison shows that only one kind of death toll was questioned in the media and by the media.

If even one team cut corners and decided to save time by reporting some bogus data, that could botch the results. If a team were ideologically opposed to the American occupation and decided to inflate their reported death tolls, that would botch things even more.

The fact that death certificates were produced in a large majority of cases should put such a possibility to bed.

"Why did journalists disbelieve the results of the survey?"
Maybe it's because those who are promoting it are so transparently deceptive and misleading?

Does this mean that the "truth" is that Iraq is going great? Even the new US secretary of state said yesterday that the US is losing the war. It's time to wake up and live in the real world.

josh josh josh. You spend so much time attacking the two 'Lancet' reports into Iraqi mortality that it's become patently obvious why you simply don't have the time to take the mainstream media to task over repeated misrepresentation of the Iraq Body Count data.

If you're truly anti-war and in favour of finding out the real cost of the Iraq war then perhaps you might better spend your time in the pursuit of this end, and +not+ sniping at epidemiologists whose stated aim appears to be just that.

That you never question people who claim that 'Saddam killed X thousand of his people', at least asking them to verify the methodology used, is telling.

Whether the methodology of the Johns Hopkins group is sound or not, the results are easily believable - and why is that? Perhaps because we've heard numerous independent testimonies of US soldiers detailing their killing and abuse? One interview with a veteran I watched was revealing. This single individual estimated that he personally had been responsible for the deaths of over 200 Iraqi civilians through collective punishment techniques they were encouraged to employ in order to coerce information from suspects or people thought to be connected with suspects - execute the family to make the father talk!!!

So come on josh - less time trying to rubbish other people's surveys and more time spent trying to extend the remit of your own beyond recording what the mainly propagandist mainstream media are reporting as the civilian death toll.

IBC is, after all, an inaccurate picture of anything resembling reality. The media report numbers of dead, so you put them on your list which then allows the media to report YOUR figures as a believable de facto total reported by an 'anti-war' site, so hey, they must be correct. It's totally self-serving and skewed.

M-RES, that is the most empty, uninformed, rhetoric-filled pile of crap post I've read in some time. You should go argue on FreeRepublic, which is about your level of argument. This is supposed to be a science blog.

I don't run around "taking the media to task" as per your, Roberts', Lambert's and this crowd's self-serving and hypocritical demands because the misuse is largely imagined (see Bob Shone's excellent debunking of Tim Lambert's error-filled "analysis" of "How IBC is reported" which he spins all to serve his own agenda), or when it isn't imagined, causes little if any of this supposed "damage" that the various IBC-haters conjecture (but can't demonstrate or quantify). In most cases, the only "damage" appears to be to some epidemiologist's ego, or to his fans' desire to see some other estimate (that I don't believe) being cited. And I'm supposed to run around being all concerned by this. Please. The only purpose of these requests is to turn IBC into a tool for the promotion of the Lancet study.

You further write that it's "telling" that I never take on people who quote Saddam figures, but your head appears to be buried somewhere directly behind you. And this is far from the only time:
http://scienceblogs.com/deltoid/2006/04/ibc_takes_on_the_lancet_study.p…

Your post is empty, uninformed blather, but it has managed to divert the issue from the way Tim disinformed the readers of his article in the passage I took issue with.

What do you know? Ask Josh about IBC's inability to correct the distorted way the media presents IBC's incomplete mortality data and you get 'empty' (x2), 'uninformed' (x2), 'blather', 'rhetoric-filled pile of crap', 'self-serving', 'hypocritical', 'hater', all in one small paragraph.
What else is new?

Palo, it's a laugh that you describe that venomous and inaccurate posting as "Ask Josh about...". If someone had actually "asked josh" honestly, and the "asking" wasn't just some excuse to spew bile, I might have answered it more carefully, as I have done many times in the past. What I did answer is far more than it deserved. The post was, like your posting, an uninformed lecture with loaded and fallacious assumptions given by someone who's probably done absolutely nothing on this issue, but who's happy to spit on all the journalists who have actually done something (even risking their lives) and pompously lecture someone like me who's spent years of his life painstakingly compiling and researching data on this issue, until recently almost completely anonymously and certainly with no reward to speak of. What work have you Palo or this M-Res done on this issue may I ask?

Tim, himself, is presenting _distorted mortality data_ above. If IBC has any obligations to correct *other people's* real or imagined distortions about IBC, certainly Tim has far more obligation to correct distortions that he himself is putting forward. The Lancet report presents distortions almost every time it mentions other sources, which are almost always distorted and spun to flatter Lancet conclusions, just like Tim's distortions above. Lambert and Roberts have done far more to distort the facts and disinform people on the issue of Iraqi mortality figures than have any inadequate sound bites about IBC, and they do so directly and shamelessly.

And since you seem to like my wordings, I'll say them again. These demands of me (and the vitriol that accompanies them) are empty, uninformed, self-serving and hypocritical blather from a bunch of haters. Their intent has absolutely zero to do with the truth and everything to do with promoting Lancet and their own prejudices (which I don't share) about what the "real" toll is. Remove these prejudices and the desire to see Lancet paraded around as the "truth", and the purported "damage" from misleading or "potentially misleading" sound bites about IBC reduces to almost nil, if it was ever anything substantial to begin with. My time is much better spent correcting the truly misleading direct distortions that are continually emanating directly from those who are supposed to be the good guys.

There is a certain morbid humor in watching joshd flail about. His claims, while perhaps convincing to the ignorant or the deceived, are humorous to the informed. Yet the designers of America's delusional foreign policy have used his kind relentlessly, and there is no likelihood they will ever show remorse for the endless humiliation of joshd and his ilk. Yet what will joshd gain from their use of him? G.W. Bush & Co can proudly point to their clumsy, yet successful, enablement of the greatest ever looting of the American taxpayer. Their goals of Imperium, control of oil, and a puppet in Iraq may no longer be achievable, but the raiding of America they have achieved. But will joshd and his ilk receive a share of the ill-gotten goods in return for their unswerving devotion to the grand delusion? I suspect not.

Peter Caress wrote--

"The high response rate to the survey makes me a little suspicious -- given the state of affairs in Iraq, I would think people would be far LESS likely to cooperate with survey teams than in free countries."

I thought that too, at first, but it's wrong. Response rates in Iraqi surveys are extraordinarily high by Western standards. I don't have a cite, but there was at least one large Iraqi survey where the response was literally 100 percent. This was discussed here and elsewhere some weeks ago.

By Donald Johnson (not verified) on 06 Dec 2006 #permalink

"The Johns Hopkins study surveyed 1800 households, which is more than the 1000 people commonly surveyed in opinion polls."

Does anyone know if that's true of Iraqi opinion polls? And would 1000 people mean 1000 households? If so, that should be plenty big enough to either confirm or deny that the violent mortality rate is several hundred thousand, if someone were interested enough to ask.

Yes, I'm beating this into the ground, but it's frustrating. If Iraqi mortality matters, why is there only one team doing surveys on the subject? And yes, I think I know the answer.

By Donald Johnson (not verified) on 06 Dec 2006 #permalink

What work have you Palo or this M-Res done on this issue may I ask?

I'll answer for myself. As a foreigner, there was really not much I could do to stop the killing machine. As an editor of what at the time was the largest Spanish-language literature website, I placed an IBC banner when few did. I thought the IBC approach was brilliant at a time when few would believe innocents would die. I also pasted the walls of my place of work with flyers detailing the mendacious and false reasons for the Iraq invasion, something very few did, and, as a foreigner, I did it at considerable risk.

I'll tell you also what I did not do and never will: be a tool of the mendacious criminals that unleashed or supported this war and the killing of hundreds of thousands in Iraq. Something you cannot say.

Actually, Josh's objections to the Burnham et al. study do not appear to be much different in substance from those raised by a number of other critics: the study's authors said in interviews that side streets not intersecting main streets were sampled in some towns, and that sleeping (as well as eating) arrangements were used to determine household membership, neither of which were mentioned in the published description of methodology.

To sympathetic observers, these seem to be honest errors of omission, and unlikely to affect the study's conclusions. To Josh, the given explanations are "vague or contradictory" and the authors are "transparently deceptive and misleading."

Honest people may differ, but the depth and intensity of Josh's bitterness against Burnham and Roberts seems to lack any obvious explanation, especially since IBC and B&R are, at least nominally, working toward the same goal. On its web site, IBC states quite openly and honestly that

Our maximum therefore refers to reported deaths - which can only be a sample of true deaths unless one assumes that every civilian death has been reported. It is likely that many if not most civilian casualties will go unreported by the media. That is the sad nature of war.

There is no unavoidable conflict between IBC's numbers and Johns Hopkins', so long as we recognize that (as IBC acknowledges) media reports cannot capture all the deaths. So how did we get to the point where IBC is quoted approvingly by the Wall Street Journal in an editorial calling the Johns Hopkins study a "fraud"? Why is Josh so incandescently angry at Burnham and Roberts that he throws a fit[1] like the one above when it is suggested that he is missing the main point?

This ugly and pointless turf war between IBC and the Johns Hopkins team is doing no good whatever for the cause of getting at the truth. Perhaps Josh could start improving matters by stating for the record that there is in principle some description of methodology that would satisfy him, and that if Burnham and Roberts can supply it, he will at least credit them with good faith.

[1] I calls 'em like I sees 'em. Phrases such as "empty, uninformed, rhetoric-filled pile of crap" are hardly unusual for Internet dialogue, but those who use them too frequently tend to acquire the consensus label of "troll."

Here's an article on the US bombing of Cambodia that just came out that has some indirect relevance to Iraq--

http://www.walrusmagazine.com/articles/history-bombs-over-cambodia/

The interesting development is that Bill Clinton released classified data on air strikes in Cambodia and it turns out that the bomb tonnage dropped on that country was five times greater than previously known.

So maybe one can't necessarily trust US government statistics on air strikes in Iraq.

By Donald Johnson (not verified) on 06 Dec 2006 #permalink

If you'd been around at the time josh would you have believed then that the individual casualties which, integrated, now comprise the Vietnam War death toll were being fully and accurately reported in the media somewhere? If not, do you have some sense of the degree of underreporting that occurred during the Vietnam war?

"Why is Josh so incandescently angry at Burnham and Roberts that he throws a fit"

First I should point out that I'm responding above to a couple of venomous posts directed at me and IBC.

Second, I am not angry at Burnham. I know little about him and I've rarely heard him say much, and I believe his placement as lead author of L2 was due to the baggage of Roberts as a former Democratic candidate for congress and to place a bit of distance between this and L1, among possible other reasons. It's obviously a Roberts paper, it repeats almost all of his spurious arguments I've seen in the past, almost verbatim, and he does almost all the appearances and speaking for it.

My extremely low opinion of, and anger toward, Roberts comes from several things. First and foremost is his instigation and participation (anonymously - even worse and more underhanded) in the vicious smear campaign against IBC at Media Lens and elsewhere. Then he makes up these bogus claims that he and other experts had asked IBC to perform various analyses while the amateurs at IBC supposedly disregarded their urgent and reasonable requests, while no such contacts or requests had ever been put to IBC. Next, was his continual equivocation and deception wrt admitting to errors in his bogus "sensitivity analysis" which was used to give the smear campaign a completely groundless air of scientific support. The one error he did admit (cutting IBC in half in order to erroneously call it "the lowest of 8 serious studies") he admitted only dishonestly, while again distorting IBC downward and trying to pretend his error wouldn't change any of the conclusions (even though it rendered them all false based on the evidence he himself provided). Next was his arrogant and self-serving dismissal, without argument, of our response to the smear campaign he instigated as "devoid of credibility", even while he knew damn well that we corrected him on several obvious mistakes and oversights in his "analysis" which actually was "devoid of credibility", until we amateurs fixed it for him that is.

Next, since Roberts took no action to correct any of the errors we identified, John wrote to him to ask him to admit and correct just one of the most obvious, his gross distortion of the NEJM study he pretended had a death estimate and put into his "sensitivity analysis" to provide a fraudulent illusion of support for his own estimate, cited to the "prestigious" NEJM. Roberts then made this letter a public matter and sent it around to everyone with a response questioning IBC's motives and writing a ridiculous defense of his distortion of this source, so he wouldn't have to admit or correct his transparent error.

Then comes L2 and it just gets worse. He absurdly falsifies IBC downward again in his L2 report with equivocation and a misleading reference, distorts a number of other sources always to the effect of supporting his thesis. In an interview he calls IBC "most disingenuous" for considering ILCS a credible estimate (rather than a "gross underestimate" as he's now taken to calling it) and pointing out that it disagrees with his estimate (as ILCS' author also does). Then he decides to omit IBC's coverage relative to ILCS (along with many other relevant examples) when making his transparently dishonest L2 claims that they can't "find" any examples of "passive surveillance" covering more than 20% of "population-based methods". Then he makes up these bogus MoH statistics in the L2 companion, asserted with no particular source or citation, and circulates them in interviews all to create an illusion of support for his thesis. He even continues doing this after he knows damn well that IBC has shown these figures to be incorrect. Then he concocts this ludicrous distortion that L2 is only off from ILCS by a factor of 2, which Tim regurgitates above. On and on and on...

Almost every word the guy says on this matter is a self-serving distortion or flat out lie. IBC has worked very hard (many at IBC have worked far harder than me, again anonymously and with no reward) to find and expose the facts as they are known. And all this guy does is go around self-servingly distorting and falsifying facts to suit his own purposes. And with all this he has the sheer gall to instigate this smear campaign against us, all while looking down his nose condescendingly at the "IBC amateurs", who have only ever corrected his countless "expert" mistakes and distortions of fact.

It's a disgrace, and that is why I'm angry about it.

Josh, Les Roberts has been sloppy on several occasions. The NEJM estimate was just silly. It's a shame he used it that way, because it is genuinely interesting that a high percentage of US forces say they've been responsible for killing civilians, even if one can't get a solid number out of that.

But I haven't noticed IBC being forthcoming about the possible shortcomings of its own methodology other than the token concession about many or most deaths going unreported (which you clearly don't believe) and the Lancet papers are way ahead of you guys on that. It seems clear you guys look for data that support your thesis and downplay stories that don't (including those that report the general level of chaos or accusations that government data on deaths is suppressed or falsified). You latched onto the ILCS estimate based on one vague question at the end of a long survey and tried to use this number to discredit Lancet 1, when in fact the numbers (to the extent one could compare them) weren't that far off and if there was any sort of undercount in ILCS's number at all, there might not be any discrepancy. The point, I think, was to "win" the debate with the Lancet supporters at all costs, but it boggles my mind that your group actually thinks it could be sure that your count was over 50 percent of the total, rather than, for instance, 33 percent. You could only be "sure" of this by insisting that the ILCS estimate was correct and couldn't be an undercount.

And the two year study wasn't much better--I downloaded that expecting to see both a summary of the data and a long discussion about the possible shortcomings of the methodology and instead there was a lot of self-praise, disguised as praise for the Iraqis "who let the world know what was going on". Fine, but that's not the question. The question is whether the press is in a position to know whether the Iraqi and US governments are covering up some of the deaths. You don't have to believe that they are covering up over 90 percent of them to wonder if they might be covering up some, but the issue hardly seems to exist for IBC, yet there are reporters who know Iraq who do think that the true death toll might be in the hundreds of thousands--Nir Rosen for one. Not to mention some Iraqi bloggers.

It seems to me that both sides cherrypick in this debate.

By Donald Johnson (not verified) on 06 Dec 2006 #permalink

Donald, your versions of events are, as is often the case, slanted beyond recognition. IBC "latched onto" ILCS because it is a far larger, better distributed and far more precise survey estimate than L1. You try to poison the well with this "one question" business, while giving no argument other than implying whatever conjecture you want to believe about it.

You can disagree with our view of ILCS, but ours is a fair and reasonable position (held also by its author). And we did not "discredit" L1 with our comparison; we provided the best and most appropriate comparison of the two sources that had ever been done (far better, I should add, than the crude and erroneous one that expert Lambert did on this blog, which most here - including yourself - seemed happy to believe for a year, and also far better than the ridiculous one he's doing now with ILCS and L2). It's the practice of Roberts to change or omit the facts when they might be taken as "discrediting" his conclusions. It's not the practice of IBC. The relationship between the two is what it is. If that "discredits" one, so be it. If it doesn't, so be it. And no, it isn't the case that any sort of undercount in ILCS means there's no disparity. All kinds of undue benefit were given to convergence with L1 in our comparison:

1. We compared an "excess" L1 with a non-"excess" ILCS - meaning we should have subtracted something from ILCS to account for Lancet's "excess" subtraction of 3,000 violent deaths from its estimate.

2. We compared the L1 estimate for 97% of Iraq (minus Anbar) with the ILCS estimate for 100% of Iraq. We should have removed all the Anbar deaths from ILCS.

3. We made no correction for the fact that L1 claims it would have missed most military deaths during the invasion, while ILCS had no such limitation.

We let L1 slide all over the place in our comparison(s), giving every advantage, even undue advantage, to convergence, and there's still a disparity.
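The direction of the three adjustments listed above can be sketched numerically. This is an editorial illustration only: the 3,000 "excess" figure is from point 1, and the 24,000 ILCS war-related figure is quoted later in this thread, but the Anbar and invasion-phase military shares below are hypothetical placeholders, not IBC's numbers.

```python
# Illustrative sketch of the three adjustments described above.
# The 3,000 "excess" subtraction is from the comment; the Anbar share
# and military-death share are HYPOTHETICAL placeholders for illustration.
ilcs_war_related = 24_000   # ILCS war-related estimate quoted in this thread

excess_adjustment = 3_000   # point 1: align ILCS with L1's "excess" accounting
anbar_share = 0.10          # point 2: hypothetical fraction of ILCS deaths in Anbar
military_share = 0.15       # point 3: hypothetical invasion-phase military deaths

adjusted = ilcs_war_related - excess_adjustment
adjusted -= ilcs_war_related * anbar_share
adjusted -= ilcs_war_related * military_share

# Each adjustment pushes the ILCS figure down relative to L1's scope,
# i.e. each one widens, not narrows, the gap the comparison found.
print(round(adjusted))  # 15000 under these placeholder shares
```

Whatever the true Anbar and military shares are, the sign of every adjustment is the same, which is the commenter's point: the listed corrections all favor convergence.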

You point to Pedersen thinking there is likely some small amount of underestimation in ILCS, but Pedersen saw our comparisons before we published them. And if we're supposed to factor that into the comparison, what then about Roberts thinking that L1's non-Falluja estimate was a big underestimate? If we start factoring this kind of thing in it only drives them further apart again. The closer you look the further apart they get.

And you're inventing straw men to boggle your own mind (and to allow you to find yet more fault with IBC as you are so, so eager to do). IBC doesn't say it's "sure" (who are you "quoting" exactly Donald?) about any percentage.

And the IBC dossier did contain comments about limitations. I've heard you rant about that dossier over and over again. I reject your interpretation. The intro of the Dossier states:

"Our accounting is not complete: only an in-depth, on-the-ground census could come close to achieving that."

Everyone's entitled to their opinion, but if that's not good enough for you, then so be it. You didn't like the dossier. Fine. I get it. Your whole complaint is that IBC doesn't hold all the same views as you about limitations (mostly meaning it didn't share your faith in L1) and doesn't give enough credence to the various conjectures you choose to believe in. How is that the same as constantly distorting the basic facts and figures, as Roberts has done, or instigating a smear campaign? It isn't.

To claim as you do that "both sides" have been doing the kind of things I discussed above is patently absurd, and it's telling that you have to invent straw men to find some way to keep this view afloat.

It seems to me that both sides cherrypick in this debate.

Huh:

On page 94 of its report, the Iraq Study Group found that there had been "significant under-reporting of the violence in Iraq." The reason, the group said, was because the tracking system was designed in a way that minimized the deaths of Iraqis.

"The standard for recording attacks acts as a filter to keep events out of reports and databases," the report said. "A murder of an Iraqi is not necessarily counted as an attack. If we cannot determine the source of a sectarian attack, that assault does not make it into the database. A roadside bomb or a rocket or mortar attack that doesn't hurt U.S. personnel doesn't count."

A Lancet mendacicization bingo chip, surely.

Best,

D

Tim, you're confused. It was your expert comparison that contained the errors which we amateurs had to correct for you.

The "error" you assert is that we explain and use an assumption to apply a correction to ILCS. This is not an error; it's applying an assumption. It is far more legitimate than your original comparison, which applied spurious and crude assumptions to reach your desired conclusion, and far more legitimate than your "corrected" version of your (still erroneous) comparison, in which you now use entirely unexplained new assumptions to cling to your claim that ILCS and L1 line up, "vindicating" L1.

If our explaining and using an assumption (L1's timeline to expand ILCS) is itself an "error", what is your using some new series of assumptions in your "vindication" comparison (the old ones don't work anymore) while not explaining what they are?

I know what it is, but I'll refrain from calling you names.

I will give you though Tim, that I do still get a laugh at how badly you screw up in the opening posting of that thread.

You claim:
"Of the 21 violent deaths, 11 occurred before the ILCS was conducted, 6 happened in the months when the ILCS was being conducted, and 4 after the ILCS was finished. If we split the 6 evenly into before and after we get that 14 of 21 violent deaths would have been picked up by the ILCS. Using this to adjust the ILCS gives an estimate of 24,000x(21/14) = 36,000, which is higher than the 33,000 we used before."

Yet we show here that this bunch of assumptions you reached for to keep your boat afloat is the worst case you could have picked:
http://scienceblogs.com/deltoid/2006/04/ibc_takes_on_the_lancet_study.p…
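For what it's worth, the multiplication in the quoted passage is internally consistent; the disagreement here is over the "split the 6 evenly" assumption, not the arithmetic. A quick check using only the figures quoted above:

```python
# Reproduce the quoted timeline adjustment: of 21 violent deaths,
# 11 fell before ILCS fieldwork, 6 during, and 4 after. The quote
# splits the 6 evenly, so 11 + 3 = 14 of 21 are assumed covered.
before, during, after = 11, 6, 4
covered = before + during / 2                 # 14.0 deaths assumed in ILCS window
total = before + during + after               # 21

ilcs_estimate = 24_000
adjusted = ilcs_estimate * total / covered    # scale ILCS up to the full period
print(round(adjusted))                        # 36000, matching the quoted figure
```

Different choices for how the 6 mid-survey deaths split (the point the commenter disputes) move the 21/14 ratio, and with it the adjusted estimate.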

But then, you've always evaded addressing my posting there so you wouldn't have to explain the nebulous assumptions you must be using to continue pretending your "vindicated" analysis was correct. Maybe you just didn't read it (yeah right).

I understand too why you still want to pretend that IBC made an "error" with regard to the CI shapes. This is the one thing in your posting that you felt you could still cling to, but I fail to see how the one thing you felt hadn't entirely fallen apart amounts to "several serious errors".

Perhaps this has something to do with one of the names I called you a short time ago.

MRES, great posts. JoshD is not at all interested in attacking the senseless butchery of the illegal and unprovoked invasion of Iraq, but in attacking those who suggest that coalition forces have unleashed a conflict that has effectively destroyed the country and killed hundreds of thousands in the process. Every time he writes one of his invective-filled posts, lashing out at anyone or anything that suggests there is carnage in Iraq well beyond IBC estimates, while making hardly a whimper against the aggressors, it tells me more and more on which side of the debate he lies.

By Jeff Harvey (not verified) on 06 Dec 2006 #permalink

joshd is purveying FUD, not trying to do anything else.

It reminds me of an Ursula K. Le Guin quotation (I can't remember where I got it from):

"To oppose something is to maintain it."

I think this works with ideological spamming of comment threads too - Atrios calls it 'feeding the trolls'.

Best,

D

Josh, saying that Pedersen saw your piece before it was published doesn't tell me that he endorsed it. If he did, then it'd be interesting to hear him explain why he thinks it is so easy to get a mortality rate in Iraq, just asking one question on a survey when apparently others find that approach problematic.

You mention that I accepted Tim's lower number--in fact, I wouldn't have bothered quibbling over 33,000 vs. 39,000. For that matter, I've never been clear on where the 57,000 figure comes from. (Except roughly, with one death in L1 being about equal to 3000.)

As for my ranting about the two year dossier, yep, that's exactly what I've been doing. We medialens-reading critics went way overboard earlier this year and I regret that, but I don't think every criticism we levelled at IBC was unfounded. It was clear in the dossier and even more clear in IBC's response and in Sloboda's slideshow and in the BBC interview that you guys had become wedded to the notion that your methodology catches most deaths, based, so far as I can tell, solely on your reading of the ILCS number. What would satisfy me (not that you care, but I'll say it anyway) would be the following statements--

1. IBC has no way of knowing to what extent the US might have covered up civilian deaths inflicted by its forces.
IBC is skeptical that it could be as high as 180,000 as suggested in L2, for reasons that we have explained, but it could well be much much higher than press reports indicate. There are scattered reports in the press that hint at collateral damage being common, but not much beyond that.

2. IBC also doesn't know to what extent we are missing the total number of deaths. We are skeptical that it could be as high as 600,000, but it might be much higher than what we can determine with our methods. Further mortality studies are urgently needed and it is a disgrace that so far, only the Lancet authors have done surveys designed specifically to answer the question. It is customary in science for a controversial claim to be resolved by other scientists jumping into the fray and conducting their own studies--the US and the UN have a moral obligation to fund independent research into this question.

Feel free to copy the above and use it in future IBC press releases.

By Donald Johnson (not verified) on 08 Dec 2006 #permalink

I forgot to reply to some of your specific comments, Josh.

Your statement about ILCS's superiority over L1 because it was a larger survey is an example of cherrypicking. Yes, ILCS was much larger, and therefore its CI would be smaller. But the CI measures only random sampling error; it says nothing about other sorts of error. You call it "poisoning the well" when I point out that one vaguely worded question about "war-related" deaths in the later portion of a very long survey isn't the ideal way to measure mortality. Well, I'm not an expert, but apparently it isn't, and if you weren't cherrypicking I think you'd admit that. If the team had to go back and ask further questions to get a correct number for infant mortality, it seems quite possible that further questioning might have led to a larger war-related death count.
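The point about CIs measuring only sampling error can be made concrete: the standard error of a survey proportion shrinks with the square root of the sample size and captures nothing about question wording, recall, or coverage bias. A minimal sketch (the sample sizes are round illustrative stand-ins, not the exact ILCS or L1 designs):

```python
import math

def se_proportion(p: float, n: int) -> float:
    """Standard error of a sample proportion: sqrt(p*(1-p)/n).
    This reflects random sampling error only, not non-sampling biases
    such as question wording or under-reporting."""
    return math.sqrt(p * (1 - p) / n)

p = 0.01                       # illustrative event proportion
small, large = 1_000, 22_000   # round stand-ins for a small vs a large survey

ratio = se_proportion(p, small) / se_proportion(p, large)
print(round(ratio, 2))         # 4.69: the larger survey's CI is ~sqrt(22)x narrower
```

A 22-fold larger sample narrows the CI less than 5-fold, and a systematic bias of a few percentage points would swamp either CI, which is exactly the commenter's objection.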

And yes, IBC does throw in these comments every now and then about how its data is incomplete and "many or most" deaths go unreported, and that's good, because casual readers see this and think that IBC is admitting that it doesn't really know what the true death toll is and doesn't pretend to estimate it. But there was no point in the attack on Lancet 1 unless you really think IBC gets over 50 percent of the death toll--you yourself said you thought admitting something as small as 50 percent was a mistake. If you guys think it was possible that in Sept 2004 the true civilian death toll could have been 30,000 rather than 19,000, then that means your methodology missed 11,000 deaths. But if you could miss 11,000, then why not 20,000 or more? There's nothing in the methodology that would let you know. Only your belief that ILCS is the gold standard would let you conclude that you could roughly quantify how many deaths you're missing, to the point where you go out of your way to attack the midrange figure from Lancet 1, something that was totally unnecessary if the only point was to defend IBC from the medialens hordes.

Finally, in my own reading of the mainstream press, I find almost zero interest in the question of coalition-inflicted casualties. There are exceptions here and there, but it doesn't come up, for the most part. Even when the NYT reported the findings of Lancet 2, the shockingly high number attributed to US forces is only found in the pie charts, which also show the percentage dropping to about 1/4 for the final year. Neither the total nor the percentage on a year-by-year basis is remotely similar to IBC's numbers (except for percentages for the first year), but to the NYT this isn't worth mentioning. The issue just doesn't exist for them.

By Donald Johnson (not verified) on 08 Dec 2006 #permalink

Donald, as I've said elsewhere, I did not speak with Pedersen directly, but before we published the comparison one of my co-authors told me about his phone conversations with Pedersen about these comparisons, and that Pedersen approved.

"Your statement about ILCS's superiority over L1 because it was a larger survey is an example of cherrypicking."

It's an example of accepting many large and indisputable statistical advantages between two survey studies and placing these over your conjectured non-sampling biases. For you, IBC not giving these conjectures equal weight to the large and quantifiable advantages is "cherry picking". I'd suggest the exact opposite: your giving equal weight to these conjectures is the only "cherry picking" going on here.

There could be biases in either study, not just ILCS. Pedersen believes that ILCS is likely to be somewhat low, but believes that your conjectures about it (repeated from Les Roberts) are mountains being made from molehills. We could conjecture about them all day, but there is no debate about the advantages ILCS has over Lancet.

Also, IBC's view is not just based on ILCS alone, but also on extremely in-depth evaluations of all the circumstantial evidence, such as the kind we discuss in the response to L2 (which itself is only the tip of the iceberg with this kind of analysis). Reasonable assumptions about missed deaths from evaluations of all the evidence coming out of Iraq (which it seems obvious that IBC has analysed far more thoroughly than have Lancet authors, or anyone of IBC's related critics) tend to square pretty well with ILCS, while requiring countless dubious assumptions for the L2 type scale. The closer you look at all this the more plausible ILCS looks and the less plausible L2 looks.

You can discard all this type of analysis as conjecture or "arguments from incredulity" if you like, but then that's all your argument against ILCS amounts to.

And this is incorrect: "only the Lancet authors have done surveys designed specifically to answer the question." It erases ILCS from history ("cherry picking" again), which is why we would not write it.

joshd wrote:

I did not speak with Pedersen directly, but before we published the comparison one of my co-authors told me about his conversations with Pedersen on the phone about these

What level of training in demography or epidemiology does your co-author have?

For you, IBC not giving these conjectures equal weight to the large and quantifiable advantages is "cherry picking". I'd suggest the exact opposite.

Your suggestion would probably carry a bit more weight if you hadn't already demonstrated that you're not trained in the technical issues.

and earlier, joshd wrote:

[unhinged comments about Roberts snipped]

Dude, remember when I wrote that you're a tad too invested in this topic? I'm thinking that "tad" no longer applies.

Robert, my colleague is not trained in either field, though I fail to see why such training is a prerequisite for discussing the ILCS comparison (training in epidemiology seems to have little if any relevance here anyway, since there is no knowledge exclusive to that field that bears on the comparison in question). Pedersen is trained, and the background of my colleague isn't going to change his opinion of the comparisons. It only changes yours, because you seem to decide factual questions not on the facts but on what letters are behind someone's name (and then only as it suits you).

Second, I've never claimed to have any formal training in these fields. You repeatedly bring this up to evade the factual questions and engage in credentialist dick-waving, just as here. Everything I say above is correct, and your reply is to say I'm not trained.

You further claim that I've "demonstrated" something, but this is again so much empty blather. You haven't shown me wrong about anything here (no doubt you'll assert you have but I just can't see it for my lack of formal training - your all-purpose excuse for your lack of argument).

Next you write that I'm too "invested" because of what I said about Roberts above, but again you place these ad hominems in place of any argument on the facts. What I said was factually correct and more than the deserving response for what I correctly described. It is not my "investment" level that is the problem. It is those things I correctly described in my "unhinged comments".

joshd wrote:

my colleague is not trained in either field. Though I fail to see why such training is a prerequisite for discussion of the ILCS comparison (training in epidemiology seems to have little if any relevance here anyway, since is no knowledge exclusive to that field that is relevant to the comparison in question).

Well, of course you think "it seems to have little if any relevance." That's cuz you don't know anything about it.

joshd, it's not ad hominem to point out that your anger toward Roberts has colored your judgement. No one has said that one needs formal training in order to discuss or ask questions -- but you've gone far beyond that. Your need to discredit Roberts is so great that you're claiming four decades of research on demographic surveys is irrelevant and diversionary. That's cuz it's saying something you don't want to hear: that the mortality estimates from the ILCS survey aren't reliable enough to be used either to support or reject the Roberts study. Note that I didn't say that this body of research says that the ILCS survey was poorly done, or that the Roberts study was done well; it merely says (and I merely say) that the two aren't comparable enough to be used either for definitive support or definitive refutation of each other. However, that kind of equivocation is something that doesn't support your attack on the hated and despicable Roberts, so must be attacked, too. Yikes, dude. You need to drink a little chamomile tea or something; maybe go for a walk on the beach.

Josh, I find your deployment of the ILCS as an irrefutable coup de grace puzzling. As you know, it was not primarily a mortality study but was designed to study the entire gamut of Iraqi living standards. Hence there was but one question referring to "war-related" deaths in an 82-minute interview.

Firstly, according to Roberts, the overall non-violent mortality estimate found by ILCS was very low compared to the Lancet's 5.0 and 5.5/1000/year estimates for the pre-war period, which many critics claim seem too low. Jon also sent interviewers back to the same interviewed houses after the survey was over and asked just about deaths of children under 5. The same houses reported ~50% more deaths the second time around.

Thus we have clear evidence that ILCS is a marked undercount in two areas of mortality.

Secondly, when you consider that the question asked for "war-related" deaths, not violent deaths, you have a further discrepancy between the Lancet studies and ILCS. What is a "war-related" death? One caused by the coalition? Death during the war, but not the occupation? Its definition is vague and problematic, and obviously it will not pick up all violent deaths as the Lancet report did, resulting in another undercount.

Please explain, Josh, why these quite obvious and easily determinable points escape you in your drive to rubbish the Lancet studies.

Josh writes:

IBC's view is not just based on ILCS alone, but also on extremely in-depth evaluations of all the circumstantial evidence, such as the kind we discuss in the response to L2

I'm not entirely sure what Josh means here, but I do hope it doesn't refer to the absolute guff that Iraq Body Count wrote about the Pentagon, namely:

The Pentagon, which has every reason to highlight the lethality of car bombs to Iraqis, records, on average, two to three car-bombings per day throughout Iraq, including those hitting only its own forces or causing no casualties, for the period in question. (emphasis added)

That is so wrong it's embarrassing. Everyone - except Iraq Body Count - knows that the Pentagon has no reason to highlight such incidents, because to do so would highlight its failure to control the security situation. I pointed this out weeks ago on the ML board and have seen it confirmed in this week's Iraq Study Group report, which noted in its discussion of intelligence reports that:

there is significant underreporting of the violence in Iraq. The standard for recording attacks acts as a filter to keep events out of reports and databases. A murder of an Iraqi is not necessarily counted as an attack. If we cannot determine the source of a sectarian attack, that assault does not make it into the database. A roadside bomb or a rocket or mortar attack that doesn't hurt U.S. personnel doesn't count. For example, on one day in July 2006 there were 93 attacks or significant acts of violence reported. Yet a careful review of the [intelligence] reports for that single day brought to light 1,100 acts of violence. Good policy is difficult to make when information is systematically collected in a way that minimizes its discrepancy with policy goals.

So it's not just that the Pentagon under-reports violence in its publicly available data; it also conceals the true figures from its own policy makers. This is the precise opposite of what IBC claims.

Here is the recommendation made by the ISG report in light of the current massive under-reporting of violence:

RECOMMENDATION 78:

The Director of National Intelligence and the Secretary of Defense should also institute immediate changes in the collection of data about violence and the sources of violence in Iraq to provide a more accurate picture of events on the ground.

As observed somewhere else (it slips my mind just now; I mentioned it when I first posted about this a month or so ago), 650,000 dead in a civil war over 3 years, in a country the size of Iraq, is about the same percentage as were killed in Bosnia over 3 years. I don't think we can maintain at this point that the Iraq situation is so much more stable and peaceful than Bosnia that a tenfold lower death rate would be expected. In fact, 650,000 would represent a death rate on the low side for civil wars.
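The per-capita comparison can be sketched with round, commonly cited population and death figures. These are the editor's illustrative numbers, not figures from the comment itself:

```python
# Rough per-capita comparison, using round illustrative figures:
#   Iraq:   ~650,000 excess deaths (the L2 estimate) in a population
#           of roughly 26 million, over ~3 years.
#   Bosnia: ~100,000 war deaths in a population of roughly 4 million,
#           over ~3.5 years of war.
iraq_rate = 650_000 / 26_000_000    # fraction of population
bosnia_rate = 100_000 / 4_000_000

print(round(iraq_rate * 100, 1), round(bosnia_rate * 100, 1))  # 2.5 2.5
```

Under these round figures both come out near 2.5% of the population, which is the order-of-magnitude point the commenter is making.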

Nash writes: "Firstly, according to Roberts, the overall non-violent mortality estimate found by ILCS was very low compared to the Lancet's 5.0 and 5.5/1000/year estimates for the pre-war period, which many critics claim seem too low. Jon also sent interviewers back to the same interviewed houses after the survey was over and asked just about deaths of children under 5. The same houses reported ~50% more deaths the second time around."

On your first point, I'm not sure where Roberts is getting this from, so I can't comment. On your second point, as I've pointed out elsewhere, initially finding underreporting of infant mortality (which they were careful to correct) does not mean there was underreporting of adult mortality. And Jon Pedersen doesn't accept this argument either (see below).

Thus I have yet to see any evidence that ILCS was a "marked underestimate" of anything. I'm not sure about your first reference; your second was corrected, so there was no underestimate; and the notion that initial issues with infant mortality translate to the other estimates is something you're simply assuming, not substantiating.

Third, you question "war related" deaths. This was one of several categories on the questionnaire, which respondents were supposed to assign for the dead or missing. My interpretation of this category is consistent with Pedersen's: these are violent deaths (and missing) over the period specified by the questionnaire, while probably excepting many or most criminal murders.

Robert writes: (more self-serving assertions)

Since the entirety of Robert's "argument" consists of saying I don't have "training" and then issuing circular ad hominems about my supposed over-"investment", we can look at what Pedersen says, whose training is hardly in doubt.

He believes L2 is way too high and believes his estimate was about right, on the "low side" due mainly to standard limitations with household surveys, but in the right "ball park". Thus he, like IBC, takes ILCS as more reliable. He also does not concur with either Robert's or Nash's (asserted and conjectured, respectively) exceptions to the study.

Soldz posted this from Pedersen about similar points:

>Les Roberts quotes you as saying:
>"A survey led by a group in Norway (see report at www.fafo.no) estimated 56 violent deaths per day over the first year of occupation, but the authors speculate that the estimate is low.8 " AND
>"Lead Researcher Jon Pederson told Lancet author Richard Garfield that he knows his estimate is low. When revisiting a small sub-sample of household with children, and additional 50% of reported child deaths could be identified. "
>Is this fair as regards the war-related mortality figures? If so, why?, If not, why not?

No it is not, I am frankly rather irritated by Les' contention that because of the fact that I admit that carrying out surveys in Iraq is difficult, then their work must be much better. In any case - reporting problems on infant mortality and adult mortality are generally quite different. But I do think that we are on the low side.

It's commonly believed that single items in surveys are less accurate than surveys with at least a section on a topic. You hinted in the Washington Post yesterday that the ILCS mortality could be low for this reason. Could you elaborate? Do you have any evidence one way or another regarding this issue?

yes in general this is true. But what one usually finds is that it is not that bad. It is generally accepted in the demographic community that household based mortality estimates are on the low side, particularly with respect to mortality where particular (easily known) causes are not specified.

His view here, as also conveyed to my colleague, is that these arguments are mountains being made from molehills and they do not in any way render ILCS "too unreliable" for anything.

Robert, great posts. I learn a lot from reading them. As for the Dunning-Kruger effect, in my opinion it fits a number of people I know, including Josh here and Bjorn Lomborg, who is clearly out of his depth in a myriad of fields but speaks and writes as if he is a sage of wisdom.

As someone who is a population ecologist but has no expertise in epidemiology, I would say that four decades' worth of research in a field does qualify someone (in this example, Les Roberts) with credibility in this debate. Josh can throw as much invective as he likes, but it does not change the fact that he has no relevant expertise in this area. That doesn't mean he cannot comment on it, but it does mean that many of us should take those comments with an immense helping of salt.

Most importantly, reading JoshD's outbursts and the debate on estimating the level of carnage in Iraq as a result of an illegal and unprovoked invasion by US-UK forces reminds me of the ongoing debate on the importance of biodiversity in sustaining civilization as we know it through the provision of direct and indirect ecosystem services. The question has been formulated around two hypotheses, 'triage' and 'rivet popper': in the former, most species are superfluous to the needs of the system, whereas in the latter all species reinforce ecosystem resilience to some extent. Amongst ecologists who support each of these hypotheses, the debate has generated quite some enmity and bitter argument. A few years ago, ecologist David Ehrenfeld made the point that neoclassical economists (who view the environment as a small subset of the economy) like nothing more than to see ecologists arguing over valuation where valuation ought to be evident. As long as the two sides are locked in a bitter debate, nothing changes; it is like fiddling while Rome burns.

I see the IBC-Lancet debate running along similar lines to the debate on the importance of biodiversity. The two sides are apparently in agreement that the war was vile and illegal (although JoshD's comments give me considerable doubt), yet the argument appears to be over just how many bodies have piled up. However, the facts, which ought to be evident, are being obscured by this bitter debate as well. The facts are that the result of the US-UK invasion of Iraq has been utter carnage, a senseless butchery that should result in trials for war crimes and crimes against humanity for the perpetrators (meaning the civilian leaderships in Washington and London). AT LEAST tens of thousands of civilians, AND PERHAPS MANY MORE, have died as a result of a war that had nothing to do with a perceived threat (there was none) or democracy promotion (bearing in mind the current incumbents in Washington loathe real 'bottom-up' democracy) but had everything to do with economic expansion and control for the benefit of a narrow constituency of powerful, wealthy groups and individuals. This would hardly be controversial, were it not for a corporate media that is unremittingly hostile to progressive movements non-aligned with state-corporate power, and continually subservient to those commanding wealth and power.

JoshD rarely addresses these points. He's made a few noises indicating that he thinks the invasion was a criminal act, but with only a very small fraction of the anger he reserves for those who estimate that the butchery is on a far vaster scale than IBC estimates. I would like to ask him this (and I expect, of course, no response).

1. How many civilians does he think died under US bombs in Viet Nam and Cambodia? In other foreign adventures, e.g. the Philippines campaign, 1901-02, the firebombing of Japanese cities in 1945 (just prior to the end of WW II), the Korean War, and in other proxy wars fought to suppress indigenous nationalism? Does he think the US media really cared very much about making accurate body counts of the victims of Reagan's policies in Latin America in the 1980s?

2. Does he really feel that the US media, which, as I said above, habitually supports naked US power, is interested in counting the victims of US aggression, especially in light of the fact that a large tally will shed a bad light on the constantly propounded myth of US benevolence: the 'we are the good guys' propaganda? I make this point because the New York Times and other so-called 'liberal' papers have described US foreign policy as being 'noble' or as having a 'saintly glow' (e.g. Haiti) or as an example of 'US fair play' (Nicaragua).

By Jeff Harvey (not verified) on 10 Dec 2006 #permalink

Jeff Harvey wrote:

[triage vs. rivet popper controversy snipped]

Yeah, that's an interesting analogy. AFAICT, no one either in demography or epi thinks that the ILCS and the JHU/AMU studies are directly comparable -- in the sense that they are so comparable that one can be used to prove the other wrong. I emphasize that no one is saying that the ILCS is a bad survey; just that it wasn't designed to measure mortality. In fact, the single mortality question asked in the ILCS doesn't provide enough information to calculate a mortality rate, let alone to analyze changes in mortality over time. It's only guys like [JoshD](http://www.apa.org/journals/features/psp7761121.pdf) who, in desperation, grasp for anything that can be used to attack the Dread Pirate Roberts.

neoclassical economists (who view the environment as a small subset of the economy)

My elder brother, who was trained as an engineer, once explained to me the difference between physics and engineering: "Engineers," declaimed he, "think an equation is the approximation of reality while physicists think reality is the approximation to an equation." Now I'm a demography professor but originally I started off as a mathematician so he paused before continuing, "Mathematicians haven't yet made the connection."

Yeah, the Lancet/IBC debate is, politically speaking, a tempest in a teapot, in that nobody outside medialens and here and maybe one or two other places pays much attention. It's clear even by IBC numbers that the Iraq war has been a disaster. Also, it seems to me that the news media itself does a decent job adding up the numbers that they know about--if IBC and the Lancet papers had both never existed, we'd still know that at least tens of thousands of Iraqi civilians had died and that the situation is getting worse. The one issue between the Lancet papers on the one hand and the IBC numbers on the other that might have serious policy implications would be the number of deaths attributable to coalition forces. If the ratio is in fact in the 20-30 percent range right up to the present, then that completely removes the rationale for keeping coalition forces in Iraq. If you go by IBC numbers one could argue for removing coalition forces, but not because they are killing large numbers of civilians themselves.

But anyway, back to the scuffling. The fact that Pedersen thinks ILCS is in the ballpark doesn't tell me much--in particular, if we are talking about L1 then it seems to me that a relatively modest undercount could pull ILCS numbers up to L1 midrange numbers. There is a problem with L2, but allowing for the possibility of an undercount it might be roughly a factor of 2 difference, as Tim says.

By Donald Johnson (not verified) on 11 Dec 2006 #permalink

Josh, here's that thread from Stephen Soldz's blog that you cited just above--

http://psychoanalystsopposewar.org/blog/2006/11/26/conversation-with-jo…

I noticed that just above the part you cited, Pedersen says ILCS did not include the civilian casualties inflicted in Fallujah in the spring of 2004, and he also suspects (off the top of his head) that they didn't include the casualties from intense fighting in Shia areas that also happened in the spring.

So if you don't pick up the places where some of the heaviest fighting since the invasion actually occurred, what you probably have is an undercount, and not just in Fallujah either (though this is trusting the top of Pedersen's head).

This is cherrypicking, Josh. Not that we don't all do it--it's human, but that's what it is.

By Donald Johnson (not verified) on 11 Dec 2006 #permalink

By the way, if you click on the link in the post above, go down to comment 9 to find Pedersen's remarks, both the ones Josh quoted and the rather interesting ones that he didn't.

To clarify my previous remark, I think Josh should have mentioned that Pedersen says ILCS didn't cover the spring 2004 fighting in Fallujah and (he suspects off the top of his head) didn't cover the intense fighting with the Mahdi army that occurred around then.

IBC used ILCS in an attempt to demonstrate that the Lancet 1 midrange estimate was too high, and it now appears that ILCS might not have counted the casualties in the intense fighting that occurred while the survey was being conducted, not only in Fallujah, but possibly elsewhere. Pedersen says the thing to do is to get the overall mortality and then add in the best estimate one can for the areas of intense fighting that were missed.

So there could be an undercount because ILCS asked one vaguely worded question about deaths, and then on top of that there are the uncounted deaths that occurred in areas of intense fighting in the spring of 2004. So this is the study that was supposed to have discredited Lancet 1's midrange estimate.

By Donald Johnson (not verified) on 12 Dec 2006 #permalink