Long article on Lancet studies in Johns Hopkins Alumni Magazine

Eli Rabett has some extracts from a 5,000-word article by Dale Keiger on the Lancet studies that appeared in the Johns Hopkins Alumni Magazine. Keiger says that it will be available online in a few days.

Update: Here it is.


"They (B&L) reminded people that even without the war, Iraqis should have been dying at the rate of at least 120,000 per year from natural causes, yet in 2002, before the invasion, the Iraqi government had reported only 40,000."

Good luck finding a source for that (false) claim. See: http://www.iraqbodycount.org/press/pr14/4.php

Just so we know what joshd is talking about, Iraq has a population of ~20 million. 120,000 deaths per year is a death rate of 6 per 1000, a bit higher than what the surveys before and after the war found. Therefore he is questioning the reported number of deaths, 40,000.
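To make that arithmetic explicit, here is the back-of-envelope version of the comparison; the population and rate figures are the rough numbers used in the comment above, not official statistics.

```python
# Back-of-envelope check of the crude death rate implied above.
# All figures are the commenter's rough numbers, not official statistics.
population = 20_000_000             # rough figure for Iraq's population used above
expected_deaths_per_year = 120_000  # the "at least 120,000 per year" claim
reported_deaths_2002 = 40_000       # the figure attributed to the Iraqi government

expected_rate = expected_deaths_per_year / population * 1000
reported_rate = reported_deaths_2002 / population * 1000
print(f"expected: {expected_rate:.0f} per 1,000; reported: {reported_rate:.0f} per 1,000")
# -> expected: 6 per 1,000; reported: 2 per 1,000
```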

If I understand Josh correctly, his gripe is with the claim that in 2002, before the invasion, the Iraqi government had reported only 40,000 deaths. Somebody at the LA Times told IBC that they had an official figure of 84,025 excluding Kurdistan.

Maybe Burnham et al will document their 40,000 figure someday. AFAICT it was Riyadh Lafta who provided that figure. And maybe someday Josh will tell us why IBC's official figure can't be traced to a more official source than the LA Times, which hasn't even published it AFAIK.

By Kevin Donoghue (not verified) on 07 Feb 2007 #permalink

BTW the Lancet published some correspondence about Burnham et al. in January, including a letter from Josh. Regulars will have seen most of the points before.

By Kevin Donoghue (not verified) on 07 Feb 2007 #permalink

Carping about methodological biases seems pointless, unless someone wants to argue for a factor of ten that way.

The only other recourse for skeptics is to suspect fraud at some level--Iraqi respondents, the actual survey team, or the authors. I (along with a few others) have wondered whether the respondents were being untruthful about who was doing the killing, since the air strike number is so high. But those could also be honest mistakes--as Kevin or someone pointed out to me somewhere, a mortar shell coming through the roof might seem like an air strike. Or maybe there really are huge numbers of unreported air strike deaths.

By Donald Johnson (not verified) on 07 Feb 2007 #permalink

Kevin, we requested from LAT the raw data which they used for a major report on Iraqi casualties in July 2006. The figure we give for 2002 is the one in the detailed raw data provided to the LAT authors directly from MoH.

"Maybe Burnham et al will document their 40,000 figure someday."

Yeah right. Let's all hold our breath.

Donald writes: "Carping about methodological biases seems pointless, unless someone wants to argue for a factor of ten that way."

Donald, there is no need to argue factors of 10. You're assuming their point estimate, among other things, which is already uncertain by about 300,000 even if you assume they collected a truly random sample (which they didn't) and had no reporting error (who knows). Unaccounted biases (in sampling or reporting) would invalidate the CI as any kind of reliable measure of the uncertainty of the estimate (not that anyone knows how they calculated the CI's to begin with). This leaves the point estimate a speculation derived by arbitrarily applying 200,000% extrapolations to the data, with no means to quantify the degree of uncertainty that doing this introduces.
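For readers who want the bare arithmetic behind the "200,000% extrapolation" phrase, here is the crude version of the scaling. The population figure is an assumption, and the study itself worked with person-years of exposure and a cluster design, so this is only an order-of-magnitude illustration, not the study's method.

```python
# Crude version of the sample-to-population scaling at issue. The study's
# actual estimation used person-years and cluster weighting; this only
# shows the order of magnitude. The population figure is assumed.
sample_deaths = 300          # violent deaths reported in the L2 sample
sample_people = 12_801       # individuals covered by the sample
population = 27_000_000      # assumed mid-2006 population of Iraq

death_fraction = sample_deaths / sample_people         # ~2.3% of the sample
national_estimate = death_fraction * population
scale_factor = population / sample_people              # ~2,100x, i.e. ~200,000%
print(f"{death_fraction:.1%} of the sample -> ~{national_estimate:,.0f} nationally "
      f"(scale factor ~{scale_factor:,.0f}x)")
```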

The burden of proof here is not on "skeptics". Those asserting these estimates have not met any burden of proof to begin with.

"The only other recourse for skeptics is to suspect fraud at some level--Iraqi respondents, the actual survey team, or the authors."

There are plenty of reasons one might suspect this too. But since the data is all secret, and the interviewers and respondents who controlled the secret data are mostly anonymous, there is no way any of this could be checked or disproven. You can either buy the data on faith, or not, like in all good science.

Donald also writes: "A piece on the air war. It's interesting, but inconclusive.
http://www.commondreams.org/views07/0207-32.htm"

The entire premise of the article is that the L2 point estimates for the air strikes subset are reality, and that "we know" everything else is wrong to whatever degree it diverges from these.

Unmentioned of course is why L2 (13% airstrikes) is so different from the "robust" and "most certain" findings on airstrikes claimed for L1 (about 80% airstrikes), which Roberts lectured us all to take very seriously, and which IBC was attacked for not reproducing in its data.

"So you're saying the crude death rate in 2002 in non-Kurdistan Iraq was around 4 per thousand?"

No. Your straw man might be saying it though. He seems to say a lot.

I genuinely didn't understand your response, Josh. The Lancet team found 400-900,000 violent deaths (cited from memory) for their CI. I'm one of the laypeople around here, but it seems to me they'd have to have made some spectacularly large mistakes to get that result if the truth for their time period was really in the 50-75,000 range.

The article on the air war doesn't rely solely on the L2 paper's claims--Turse thinks the government might be concealing a very large expenditure of some categories of weapons--rockets or small rounds fired from aircraft, I think. But without the figures we can't tell if the air war might be killing on a larger scale than one would know from press accounts, something one can say even if the 80,000 figure from L2 is hard to swallow.

By Donald Johnson (not verified) on 08 Feb 2007 #permalink

"I genuinely didn't understand your response, Josh. The Lancet team found 400-900,000 violent deaths (cited from memory) for their CI. I'm one of the laypeople around here, but it seems to me they'd have to have made some spectacularly large mistakes to get that result if the truth for their time period was really in the 50-75,000 range."

I was referring to the "excess deaths" figure. At least one person here (Donoghue) has been insisting that the violent deaths figure is not an "excess" figure. But the way it's described by the authors strongly suggests it is. So who knows what that figure is. And as I said above, who knows how they calculated these CI's. It's another secret which nobody can check.

Almost every fact I can check turns out to be false or distorted, always to favor their thesis. That doesn't give me much confidence in the claims which are based on secret stuff I can't check.

Your 'range' seems to be another straw man version of others too, but "mistakes" can easily become very large when piling layers of abstraction and huge extrapolations on top of a tiny sampling of unrepresentative data.

Josh wrote: "Kevin, we requested from LAT the raw data which they used for a major report on Iraqi casualties in July 2006. The figure we give for 2002 is the one in the detailed raw data provided to the LAT authors directly from MoH."

I haven't been able to find the major report you refer to, but thanks for the clarification. Where does that leave us? Presumably the LAT got figures from the MoH some time in the first half of 2006. The MoH doesn't seem to have given these figures to anyone else. This isn't good evidence for your claim that the 40,000 figure is simply false. They may have revised their figures for 2002 some time after Riyadh Lafta (or whoever it was) did the sums. However I do agree that Burnham and Roberts should not bandy their number around without saying where they got it.

"I was referring to the 'excess deaths' figure. At least one person here (Donohue) has been insisting that the violent deaths figure is not an 'excess' figure."

In the context of Donald's comment this is a red herring, since the excess violent deaths are a large number minus a small number. In case anyone cares: our discussion of this in an earlier thread related mainly to the width of the CI, not the 601,027 figure which is still horrific even if I'm right in saying it's post-war (not excess) deaths. The summary says: "Of post-invasion deaths, 601,027 (426,369-793,663) were due to violence...." But as you pointed out the wording elsewhere seems to contradict my reading. My feeling is that the CI would be a bit wider if it took into account the uncertainty in pre-war violent deaths.

This doesn't affect Donald's point. If less than 300,000 people have died violently since the invasion, it took either a spectacular fluke or some kind of dirty work to get 300 violent deaths in a survey of 1,849 households containing 12,801 individuals, which is not a "tiny sampling" by any means. Please note that we can safely say that without any "extrapolations" whatsoever, huge or otherwise.
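As a rough check on the "spectacular fluke" point, here is a sketch of how many sample deaths various hypothetical true tolls would lead one to expect. It assumes a representative sample and ignores the cluster design (which widens the uncertainty somewhat); the population figure and the candidate tolls are illustrative assumptions, not anything from the study.

```python
import math

# Rough plausibility check: how many violent deaths would a representative
# sample of 12,801 people be expected to contain under various true tolls?
# Ignores the cluster design (which widens the uncertainty); the population
# figure and candidate tolls are illustrative assumptions.
population = 27_000_000
sample_people = 12_801
observed = 300

for true_toll in (60_000, 150_000, 300_000, 600_000):
    expected = true_toll * sample_people / population
    # Normal approximation to the Poisson: distance of the observed count
    # from the expectation, in standard deviations.
    z = (observed - expected) / math.sqrt(expected)
    print(f"true toll {true_toll:>7,}: expect ~{expected:5.1f} in the sample, "
          f"observed 300 is {z:+5.1f} sigma away")
```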

By Kevin Donoghue (not verified) on 08 Feb 2007 #permalink

Kevin, 300 deaths becoming 600,000 means a tiny sample. Nor do I think the "fluke" would need to be so spectacular. Much of what's already been said could explain the "fluke", and nor is there any reason to rule out "some kind of dirty work". But if we're to apply Occam's Razor and seriously consider your and Donald's 'argument from incredulity', the implications of the estimate being wrong are a lot more plausible than all the implications of the estimate being right.

I used to understand your points pretty well, Josh, but I've either gotten slow or your writing style has become harder to follow. Anyway, what Kevin said. The Lancet team tried to pick out 12,000 people in a random fashion and out of those 12,000 people they found 300 violent deaths, suggesting a 2.5 percent mortality from violence. I'm no expert, but that's not a tiny survey. If the true violent death toll is ten times lower, then the survey design did a truly terrible job of picking out a random sample.

This will irritate you because it's anecdotal, but the Wednesday NYT has an article about the journal of Saad Eskander, the director of the Baghdad National Library. According to the NYT, just in the month of December there were "4 assassinations of employees and 2 kidnappings, 66 murders of staff member relatives, 58 death threats, and 51 displacements." I think there have been about 2,000-3,000 recorded violent deaths per month in all of Iraq in recent months, so I thought it'd be interesting to know how many people are on the staff of the Library. The diary is at this website--

http://portico.bl.uk/iraqdiary01.html

There are 464 people on staff. They lost 1 percent of their staff in one month. That's presumably a statistical fluke. Still, a statistical fluke of that magnitude is more likely if the death rates are much higher than the official statistics show. And their families lost 66 people in one month. I assume that's extended family, not immediate family. Still, it seems a lot higher than what you'd expect from the official statistics.
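The arithmetic behind that "fluke" can be sketched as follows. The recorded-deaths figure and population are the rough numbers from the comment, and of course the staff of a central-Baghdad government building are not a random sample of Iraqis.

```python
import math

# Sketch of the library 'fluke' arithmetic. Inputs are the rough figures
# from the comment above; staff of a central-Baghdad government building
# are obviously not a random sample of Iraqis.
recorded_violent_deaths_per_month = 2_500   # midpoint of the 2,000-3,000 figure
population = 26_000_000                     # assumed population of Iraq
staff = 464
staff_killed = 4                            # the December assassinations

expected = recorded_violent_deaths_per_month / population * staff
# Poisson probability of seeing 4 or more deaths if staff died at the
# recorded national rate:
p_tail = 1 - sum(math.exp(-expected) * expected**k / math.factorial(k)
                 for k in range(staff_killed))
print(f"expected ~{expected:.3f} staff deaths per month; P(>=4) ~ {p_tail:.1e}")
```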

By Donald Johnson (not verified) on 08 Feb 2007 #permalink

[joshd](http://www.apa.org/journals/features/psp7761121.pdf) wrote:

>And as I said above, who knows how they calculated these CI's. It's another secret which nobody can check.

I checked the 2004 study's overall CI, and the description of their 2006 approach sure sounds the same as the 2004 study.

I don't know what you think is so hard to follow Donald.

"If the true violent death toll is ten times less then the survey design did a truly terrible job picking out a random sample."

Yeah. And?

[You apparently didn't catch my meaning either when I wrote: "Your 'range' seems to be another straw man version of others too", as you do the same thing again here. You know perfectly well that nobody here has argued that such a figure would have to be "ten times lower" [60,000]. And it seems to me you should already know this quite well, so I wonder why you keep doing it.]

On your library story, it should be noted that it's a major government building next to the Ministry of Defense and near Haifa Street, in the middle of Baghdad, a city in which around 2,000 deaths have been recorded per month recently, and which (contrary to some claims) I don't think is remotely like most of Iraq, and seems to be just getting worse and worse. I'd be surprised by little about Baghdad, although I think you're mistaken that those figures are just for one month.

On the "tiny sample", you can use whatever adjective you like. Just note that there are more people on the staff of your one library than there were violent deaths recorded to produce "601,027 violent deaths".

"I checked the 2004 study's overall CI, and the description of their 2006 approach sure sounds the same as the 2004 study."

You write good equivocations Robert, but they lack substance, like always.

"Kevin, 300 deaths becoming 600,000 means a tiny sample."

No Josh, it doesn't. The scale of what you (and others) wrongly describe as extrapolation is completely irrelevant to my point. This has nothing to do with argument from incredulity. It's just basic probability theory.

Forget the 600,000. It's the 300 that needs to be explained, if the true figure is as low as you appear to think.

By Kevin Donoghue (not verified) on 08 Feb 2007 #permalink

"The scale of what you (and others) wrongly describe as extrapolation..."

Sorry. I forgot to translate to IngSoc first.

"It's just basic probability theory."

That only works with random samples Kevin.

"That only works with random samples Kevin."

That's why it is just wrong to say that "300 deaths becoming 600,000 means a tiny sample". You knew that, of course, as you now make clear. You probably also know that making inferences about Iraq's mortality rate from a random sample of Iraqis is not extrapolation. (But misuse of that term is so common it probably isn't worth bothering about it.)

So why do you say things that you know are false? (It takes a brass neck to invoke Orwell even as you confirm that you do know.) Your case boils down to the claim that the sample isn't random - it's the same old "bad sample" mantra that we've been hearing since 2004, when the likes of Shannon Love got on the case. Now, maybe one of the Lancet samples really is bad. Maybe both of them are. That's what is in dispute. All you are doing now is begging the question.

By Kevin Donoghue (not verified) on 09 Feb 2007 #permalink

"That's why it is just wrong to say that "300 deaths becoming 600,000 means a tiny sample". You knew that, of course, as you now make clear."

These sentences make no sense.

"You probably also know that making inferences about Iraq's mortality rate from a random sample of Iraqis is not extrapolation."

I know that a few people like to claim that.

"So why do you say things that you know are false?"

I don't.

"Your case boils down to the claim that the sample isn't random"

You and Donald are trying to make a case, and I'm pointing out that you're failing. One part of that failure is the underlying assumption that the L2 sample was random, which you've chosen to speculate is the case, and which is built in as an unstated premise in everything you've started putting to me.

As I've said earlier, stuff that's already been said can provide explanations for the things you raise. If you want me to focus on the 300 reported deaths for some reason, then I can try to look at potential error in the 300, looking only at critiques that are already out there.

MSB (the 'main street bias' critique) argues that the bias toward more violent streets could inflate the results by a factor of 3. So we could make an illustrative CI for it. This would give us a CI for the 300 of about (100-300). Nothing would be added to the upper bound since MSB is a one-directional bias. And I'll put the MSB authors' factor of 3 at the outer edge of the low-end CI.

Then there are issues of reporting error, which are assumed to be zero in the published estimates. Madelyn Hicks has written on this, and Beth Daponte. Richard Garfield, co-author of L1, also questioned this recently and guessed that about a 30% error would be reasonable for this in L2. I suspect Hicks would believe there could be more, but let's just stick with Garfield. So let's say the CI for the 300 should then have another +/-90. So we have 300 (CI 10-390).

Though I can think of other sources of error that would widen this further, I'll just stick with those, leaving out the issue of "dirty work" and assuming that to be zero.

"if the true figure is as low as I appear to think", the number of reported deaths should fall well within this range already.

Josh, you're complaining about the factor of ten I keep referencing? That's what the whole debate is about, the factor of ten difference between the reported deaths and the L2 estimate. Besides, after complaining about it you then give us a scenario where mistakes all working in the same direction bring the L2 CI down low enough to include the reported numbers. Get your own strawman Josh--that one's mine.

I used the 50-75,000 range for contrast because for the L2 period it was my understanding that this was what you believed. Sloboda conceded a possible factor of 2 undercount for IBC's methodology and once even conceded the outside possibility of a factor of 4 (I think), but in last year's debate my recollection is that you thought Sloboda's factor of 2 was too generous a concession and that you said you and other IBC people thought IBC was counting more than 50 percent of the deaths. The factor 2/3 sticks in my mind. But if IBC or anyone else concedes that the true death toll might be in the low hundreds of thousands, then, yeah, it's a lot easier to see how a combination of main street bias and other errors might work in the same direction to give the L2 CI.

Still waiting for someone, anyone, to repeat the survey. I think there are still surveys done in Iraq, though my understanding of statistics, such as it is, is that you don't need as big a sample to accurately measure how many people want the US to leave, since that's a much higher percentage than the percentage of households which have lost family members. Failing a new survey, enough anecdotal evidence of the sort I cited above would probably convince most people that the official death toll is much too low.

By Donald Johnson (not verified) on 09 Feb 2007 #permalink

Donald writes: "Josh, you're complaining about the factor of ten I keep referencing? That's what the whole debate is about, the factor of ten difference between the reported deaths and the L2 estimate."

You badly misunderstand the debate Donald. It's not that L2 is wrong unless it produces the same number as reported deaths.

On your anecdotal evidence, again I think you've misinterpreted what you cited.

Robert writes:
"of the two of us I'm the only one who has checked the 2004 study's CI"

If anyone had been talking about the 2004 CI maybe your comments would be something more than equivocal dissembling and windbaggery.

Donald also writes: "Besides, after complaining about it you then give us a scenario where mistakes all working in the same direction bring the L2 CI down low enough to include the reported numbers. Get your own strawman Josh--that one's mine."

I think you misunderstood my comment: "'if the true figure is as low as I appear to think', the number of reported deaths should fall well within this range already."

"Reported deaths" there does not mean IBC or MoH or that kind of reporting. It means the number reported *by the sample population* in L2 should be 300 (CI 10 to 390), which fits easily "if the true figure is as low as I think".

You guys asked how I could explain 300 reported deaths in the L2 sample "if the true figure is as low as I think". And that does it.

But your comment reminded me that I left out the standard sampling error acknowledged in the study already, about 30%.

So I actually should have put it at:

300 (-80 to 480)
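Taking the steps described above at face value, the arithmetic that produces those bounds looks like this. It is only a reconstruction of the calculation as stated, not an endorsement of stacking the errors linearly.

```python
# Reconstruction of the interval described above: start from the 300
# reported deaths, apply the main-street-bias factor of 3 to the low end
# only, then widen by 30% of 300 for reporting error and by another 30%
# for sampling error, each added linearly.
reported = 300
low, high = reported / 3, float(reported)                # MSB: one-directional factor of 3
low, high = low - 0.3 * reported, high + 0.3 * reported  # +/- reporting error
low, high = low - 0.3 * reported, high + 0.3 * reported  # +/- sampling error
print(f"{reported} ({low:.0f} to {high:.0f})")           # -> 300 (-80 to 480)
```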

[joshd](http://www.apa.org/journals/features/psp7761121.pdf) huffed:

>If anyone had been talking about the 2004 CI maybe your comments would be something more than equivocal dissembling and windbaggery.

Ah, so you're agreeing that they did the 2004 study CI properly. Cool.

Robert, you seem to very consistently change what I say into other things I didn't say. The words I do say seem to be little more than tangentially-related launching pads for the various positions you seem to invent through some kind of free-association exercise.

You should be able to do this just as well from picking words or sentences out of magazines or something. It doesn't seem like I'm needed.

I wrote: That's why it is just wrong to say that "300 deaths becoming 600,000 means a tiny sample". You knew that, of course, as you now make clear.

Josh responded: These sentences make no sense.

Let's try again. The ratio of sample deaths to estimated total mostly reflects the size of the sample relative to the population. Is that what you mean by a tiny sample? If so, in what sense is it a problem? If the sample is unrepresentative then there is no point in discussing its size. A large biased sample is no better than a small one.

My remark about what you knew was perhaps unwarranted; apologies for implying that you were being a bit disingenuous.

By Kevin Donoghue (not verified) on 09 Feb 2007 #permalink

On the library issue, it's the NYT that says those deaths occurred in one month. It sounded high to me, but I don't know. I'd like more such anecdotal accounts, since it doesn't seem like there's anyone except the Lancet team interested in doing surveys.

Your CI calculation seems odd to me--if the sampled population had a violent death rate known somehow to be 3 times too high, and then there are two 30 percent errors, I don't think you'd divide the 300 by 3 and then subtract 30 percent of the 300 twice to get the lower bound of -80. But maybe I'm wrong. I'm guessing that if you really knew that the sample was biased, you'd just calculate the CI for that sample--how you'd extend it to the population as a whole I don't know, but it seems very unlikely to me that any reasonable analysis would leave you with a negative lower bound unless you knew the rest of the population actually had a very sharp drop in violent deaths. In this case I think you'd probably have to throw in some resurrections to get a negative lower bound. This is sorta like the Fallujah outlier problem again--people were saying that a cluster with 52 deaths made the uncertainty so large the overall mortality rate might have actually dropped in the postwar period if you included it. That made no sense and as I understand it, turns out to be wrong.
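For what it's worth, one conventional way to combine error sources like these is to treat them as independent multiplicative factors and simulate, rather than adding percentages of the point estimate linearly. The distributions below are purely illustrative assumptions (a uniform main-street-bias factor between 1 and 3, uniform +/-30% factors for the other two); nothing here comes from the study or the critiques themselves.

```python
import random

# Monte Carlo sketch: combine the postulated error sources as independent
# multiplicative factors. All distributions are illustrative assumptions.
random.seed(0)
reported = 300
draws = []
for _ in range(100_000):
    msb = random.uniform(1.0, 3.0)          # main-street-bias inflation factor
    reporting = random.uniform(0.7, 1.3)    # +/-30% reporting error
    sampling = random.uniform(0.7, 1.3)     # +/-30% sampling error
    draws.append(reported / msb * reporting * sampling)
draws.sort()
lo, hi = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]
print(f"central 95% of simulated values: {lo:.0f} to {hi:.0f}")
# A multiplicative combination cannot produce a negative lower bound.
```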

What's this other 30 percent that Garfield and others are talking about? I've missed hearing about that. I assume the other 30 percent you're talking about is the size of the CI in the L2 paper.

What all this boils down to though, is that the L2 team tried to take an unbiased sample and found 300 deaths when the official figures suggest that on average, out of 12,000 people they should have found around 25 (plus or minus error bars). So if that's the case they did one heck of a lousy job designing their survey.

By Donald Johnson (not verified) on 09 Feb 2007 #permalink

Kevin, my point where I said "tiny sampling" was here:

"Your 'range' seems to be another straw man version of others too, but "mistakes" can easily become very large when piling layers of abstraction and huge extrapolations on top of a tiny sampling of unrepresentative data."

The point here is that when you pile the inherent error of a small sample (here I was referring to the almost 50% claimed for the excess deaths estimate) on top of other problems of bias and such (maybe 50% again, or maybe more), you effectively have nothing conclusive. It couldn't really disprove any hypothesis about what the "true number" is, and part of this inability arises from the sample size being small.

>Your CI calculation seems odd to me--if the sampled population had a violent death rate known somehow to be 3 times too high, and then there are two 30 percent errors, I don't think you'd divide the 300 by 3 and then subtract 30 percent of the 300 twice to get the lower bound of -80. But maybe I'm wrong.

Donald, the CI is supposed to quantify uncertainty. If it was "known" that MSB produced a 3X upward bias we'd just move the 300 down to 100. You wouldn't need any CI for that. I said that MSB authors think 3X is plausible, so I put that as the *outer edge* of a CI, which means I'm actually being generous to L2 and stingy toward MSB because, as you should know from all the L1 lectures, I'm then giving their 3X claim only the most outside chance of being right.

Then, for the other biases (sampling and reporting) I applied +/-30% each. The Garfield 30% is from a letter recently published in the Lancet, which Kevin linked to near the beginning of this thread.

>...but it seems very unlikely to me that any reasonable analysis would leave you with a negative lower bound unless you knew the rest of the population actually had a very sharp drop in violent deaths.

Again, you don't put the ends of the CI at what you "know" to be reality somehow. If we somehow "knew" the population had a drop in violent deaths, and someone was trying to do an "excess deaths" study to determine how big of a drop there was, the upper bound of their CI could still go into positive numbers because the CI is based on the study design and the data they wind up getting. This is no different from a case where we "know" they went up, because it's independent of what we might think we "know" about the situation on other grounds.

>the L2 team tried to take an unbiased sample and found 300 deaths when the official figures suggest that on average, out of 12,000 people they should have found around 25

No, that's wrong. It would suggest that if official figures were claiming to be a record of every death that has taken place in Iraq. If the sample produced 25, official figures would suggest that's too low.

On your -80 to 480 CI, I'm saying that I doubt this is the proper mathematical way to combine the various errors you're postulating. I might be wrong.

I can't think of anything else to say at the moment, except to repeat my demand for another survey. If I demand that often enough maybe it'll happen. One thing about the Johns Hopkins article that depressed me slightly was that there was initially talk of some other group doing this survey. Too bad that wasn't done, or better yet, too bad two independent surveys couldn't have been done. The criticisms of L2 boil down to this--either their survey design was awful or they were incompetent in some other way or someone in the process committed fraud. Sometimes people are polite about it, and sometimes they say it straight out. So it'd have been better if a different group had done the survey and either found a number in the high hundreds of thousands or alternatively, a number more like what IBC would accept. (Or something in-between.)

By Donald Johnson (not verified) on 09 Feb 2007 #permalink

Joshd, as evidenced not only by your posts above but also by your posts in other threads, you don't have the technical background to be participating in this debate at this level. If you think I've been making fun of you that'd be pritnear the only thing you'd've been right about. I'm making fun because your posts are so off I think about saving them as teaching examples for my students: you're the poster boy for how bile can back up far enough to affect the brain. On the other hand, you're tenacious and it's evident that you care deeply about this issue and don't think I belittle those qualities; I wish a couple of my students had a bit more of your feistiness. But the more you post on statistical issues the more you hurt yourself. Worse, every time you post on statistical issues an angel dies. Please stop.

"On your -80 to 480 CI, I'm saying that I doubt this is the proper mathematical way to combine the various errors you're postulating. I might be wrong."

Ok. I wrote this up off the cuff, and I'm hardly attached to these exact numbers for anything. If someone would like to explain how to do it better they can do so, but the resident 'experts' here seem more interested in posing and preening for the crowd than in actually saying anything.

Which takes us to Robert. Your posts as usual are entirely empty of any substance. You love to assert dismissive opinions and ad hominems. What you never ever do is say anything, _anything_, that has any substance at all, either to support your claims or to make any argument on matters at hand. If you ever said anything that wasn't an ad hominem or a straw man, I'm starting to think you'd implode.

I can not recall any post from you that does not follow this formula:

1. Assert various ad-hominems about the adversary being uninformed and wrong in various ways you think are 'clever'
2. Avoid making any argument or case about any of the substantive points or issues at hand
3. When the absurdity of '2' is pointed out, use '1' as an excuse to continue doing '2'.
4. Rinse and repeat this circular, empty preening ad nauseam.

The more this empty blather tumbles out of your mouth, the more hollow your persistent posturing becomes, and the more I wonder how much money your students (or their parents) waste each semester for the privilege of listening to a pompous windbag say a whole lot of nothing.

In case anyone here needs a refresher on Josh's professional background:

>JOSHUA DOUGHERTY (Associate researcher) is a guitarist and private instructor. He received his Masters Degree in Jazz Studies from the University of the Arts in Philadelphia, PA, USA in 2004. His website can be found here.

http://www.iraqbodycount.org/contacts.php

[joshd](http://www.apa.org/journals/features/psp7761121.pdf) wrote:

>Ok. I wrote this up off the cuff, and I'm hardly attached to these exact numbers for anything.

This is exactly the point. You write everything off the cuff -- but there's still no evidence that if you didn't write off the cuff your statistical arguments would be right. Worse, it's clear that you're not competent to recognize good statistical procedures when you see them: that's why I no longer bother to correct you. Off the cuff improvisation may be essential in your area but in this one? Not so much.

So the real question is why you persist in this delusion that you can slug it out toe-to-toe with epidemiologists, demographers, and statisticians on technical issues. I certainly don't think you're a paid shill so the only explanation I can come up with is [this](http://www.apa.org/journals/features/psp7761121.pdf).

Well done poptrot. It seems like you've gotten started on Step 1 of Robert's all-purpose debating formula:

1. Assert various ad-hominems about the adversary being uninformed and wrong in various ways you think are 'clever'

Posting a short bio about my studying music can perhaps help push the ad hominem, and helps take up the space that a substantive argument might have wasted, but I don't think it's very clever. It's important that the audience sees how witty you are in conducting the well poisoning. Perhaps if you could have embedded a pdf under "JOSHUA" to slyly imply another ad-hominem on top of this one, that would have impressed the target audience more.

You seem to have Step 2 down perfectly, but I'd suggest more work on Step 1 before leaping to Step 3.

I will say that one thing I've learned in the jazz circles is that there are the amateurs, the posers and the real players. The posers usually have some skills, know quite a bit of the theory and the jargon etc. They generally congregate amongst themselves, tend to be the most arrogant of pricks and like to sneer down their noses at the amateurs, whispering clever little comments to each other when they see some failing that they can poke fun at. They seem to get off on impressing each other and pointing out their superior knowledge.

I find the real players tend to be quite different. They tend to know quite a bit more than the posers and use it better when it counts. They often share their expertise and know how to explain things in ways the people they are speaking to can understand, no matter their level.

I've perhaps been all three categories at one time or another in music (though probably still just aspiring to the third). I'll happily admit to being in the category of 'amateur' on the subject of 'statistical issues', and expect that I'd make mistakes on some things. Though the issues I've actually addressed don't seem to be quite the dense enigmas that some people want to make them out to be.

I've had discussions with people of all three categories on the types of issues I've discussed here. Some of these discussions have been very enlightening and useful. It seems though that in this case I'm dealing with a bunch of posers. So perhaps, since I know what to expect here, I'll send the thread along to some of the real players and see if or what I might have gotten wrong, or how this might affect any of the basic points I've been making.

"I no longer bother to correct you"

You never have Robert. That would require you to address some actual point I've made rather than a distorted version of it which you've invented, which you've never done, and to give arguments, which you don't do. All I've ever seen from you is the formula.

Josh, your problem isn't that you are an amateur, but that you are unwilling to learn anything. You get the statistics wrong and you refuse to make corrections; instead we get name calling and abuse from you.

Tim, I try to avoid seeking out teachers who are dishonest and hypocritical and who've never demonstrated they have anything useful to teach on the topic.

I don't recall getting any statistics wrong in our exchanges, as your cryptic message suggests. I can however recall an expert making the most shoddy and amateurish of 'off the cuff' calculations to fabricate a 'vindication' for a particular study, then pretending to make a correction for this, but then "standing by" the same conclusion even though the assumptions on which it had originally rested were now invalidated, leaving a conclusion "stood by", but no longer with any reasoning or explanation of the assumptions needed to support it.

Then I recall this same person accusing the people who corrected him of making what he called an 'error' because they explained and used an assumption.

Tim, when I want to learn charlatanry, I'll look you up.

Josh,

If I had said "Josh is a musician, so he doesn't know what he's talking about", that would be an ad hominem.

My post said "Josh is a musician", which is not.

Your inability to tell the difference doesn't help you.

poptrot, I don't know who you think you're going to fool with that line.

In any case, in an attempt to show Tim and others that I'm quite willing to learn from the Experts, I've taken up Tim's advice and begun seeing what I can learn from him about how he and his selectively approved circle of 'experts', which I'm supposed to bow down before, handle issues of error correction:

http://scienceblogs.com/deltoid/2006/05/how_the_ibc_number_is_reported… - Here I learn that if you write up an analysis consisting almost entirely of errors, the proper procedure is to pretend to have corrected it by fixing a small fraction of the errors, dissembling about the rest, and declaring that your conclusions still apply.

http://timlambert.org/2005/05/lancet34/
- I learn here that if you make crude and erroneous calculations in an analysis, you should pretend to correct them by correcting one, and then declare that the same conclusions apply even though the foundation on which those conclusions had rested is no longer valid and no new foundation is explained, and then never discuss it again.

http://scienceblogs.com/deltoid/2006/06/ibc_vs_les_roberts.php
Here I learn that manufacturing an estimate of nationwide deaths from data which doesn't contain a sample of deaths is proper statistical procedure, and should not be corrected. I also learn that citing this as corroboration for another estimate is credible and appropriate.

From his approved experts:

http://scienceblogs.com/deltoid/2007/02/long_article_on_lancet_studies…
Here I learn that when it's shown that you've been spreading an erroneous unsourced statistic about another source, the proper procedure is to not correct it and to continue spreading it.

http://web.mit.edu/cis/pdf/Audit_6_05_Roberts.pdf
http://www.alternet.org/story/31508/
http://www.iraqbodycount.org/editorial/defended/3.1.php
-Here I learn that when you publish a whole host of false claims and statistics about other sources, the proper procedure is to leave them all uncorrected and circulating. Later I learn that it's proper to admit one of these many errors, but only in an obscure email exchange that few will have any chance of seeing, and only while deceptively downplaying even that one error as merely a "date error". I then learn it's proper to assert that these errors don't change any of the conclusions, even as the corrected statistics would flatly contradict those conclusions.

http://www.thelancet.com/journals/lancet/article/PIIS0140673607600634/f…
- Here I learn that when you publish a false claim about a source, and publish a graphic containing a series of errors in a peer-reviewed report and other publications, that the proper procedure is to leave these uncorrected in the reports, but to politely concede, in a more obscure correspondence, to the errors while downplaying one as merely a "labelling error" and downplaying the other by accusing those who produced the corrected graphic of using a scale that "masks" things, even though they used the same exact scale as in the original, simply with the error fixed.

I've learned quite a bit more than this, but you get the idea.

Reading that thread over again, it seems to demonstrate some of your own amateurish approach to statistics Tim, such as this nugget of wisdom:

"Of the 21 violent deaths, 11 occurred before the ILCS was conducted, 6 happened in the months when the ILCS was being conducted, and 4 after the ILCS was finished. If we split the 6 evenly into before and after we get that 14 of 21 violent deaths would have been picked up by the ILCS. Using this to adjust the ILCS gives an estimate of 24,000x(21/14) = 36,000, which is higher than the 33,000 we used before."

You were using these calculations in an attempt to show how silly we amateurs are in claiming that ILCS did not "corroborate" L1, as you had been claiming for a year (I wonder if you still claim this, now that ILCS needs to be a "gross underestimate"), but instead suggested lower estimates. Yet as I demonstrated in detail here: http://scienceblogs.com/deltoid/2006/04/ibc_takes_on_the_lancet_study.p…
the breakdown you chose is the worst you could have used, and would actually contradict the case you're making.
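To make the quoted adjustment explicit, and to show how sensitive it is to where the six mid-period deaths are placed, here is the bare arithmetic. Only the allocation of those six deaths is varied; the splits other than the even one are illustrative, not anyone's stated position.

```python
# The arithmetic in the quoted passage, with the six mid-period deaths
# allocated in different ways. Only the split is varied; splits other than
# the even one are illustrative, not anyone's stated position.
ilcs_estimate = 24_000
total_violent_deaths = 21
before, during, after = 11, 6, 4   # the breakdown quoted above

for label, during_in_window in (("all 6 before ILCS ended", 6),
                                ("even split (as quoted)", 3),
                                ("all 6 after ILCS ended", 0)):
    in_window = before + during_in_window
    adjusted = ilcs_estimate * total_violent_deaths / in_window
    print(f"{label}: {in_window} of 21 in window -> ~{adjusted:,.0f}")
```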

This, though, is left uncorrected in your article, as are a few other errors, which you simply don't address. What's left by the end of the thread is one argument about the CI shapes. Your argument is not a complicated one to understand; we simply disagreed over whether doing an illustration using clearly explained assumptions, instead of factoring every assumption into the illustration, is actually an "error" as such, rather than a different, and arguably more useful, way to do the illustration than what you'd apparently prefer.

One could do a comparison your way too, but I'm not sure how useful it would be. If that manner of illustration could get the end of the ILCS CI to pop over the L1 point estimate it would only be doing this by the extra layer of uncertainty you've just added to ILCS to do the comparison. So you could only get your "corroboration" from ILCS by weakening ILCS beyond what it actually was.

I think using some assumptions - in this case, the assumption that the Lancet estimates were right - clearly explained in that section of the piece (as opposed to whatever entirely unexplained assumptions you're using for your "corroboration"), is the more useful way to go if you hope to learn anything at all from a comparison, unless all you hope to learn is whether or not you can get the ILCS CI to pop over L1's point estimate by weakening ILCS.

I brought up all the above links because I was curious how you reconcile your persistent focus on this one "error" by IBC, while showing no interest at all in the correction of a laundry list of errors by yourself and by a fellow Expert.

Josh, I brought it up because it is an obvious error that you will not admit to. It demonstrates once again that you don't understand sampling and you still haven't learned.

Adding an extra layer of uncertainty to the ILCS is exactly what you have to do if you try to use it to estimate deaths that occurred after the survey ended. You can't wish this uncertainty away. You don't get it and I don't think you ever will.

Tim you seem to misunderstand. I don't dispute your claim that expanding ILCS would introduce more uncertainty. My post above in fact makes this perfectly clear.

The premise was, if these assumptions were true, here's what the CI's would look like and what they imply. And the illustration does this correctly. Your claim is that it's simply disallowed to even ask or answer such a question. And this is what I disagree with.

The point on which we disagree doesn't seem to have anything to do with my understanding, or lack thereof, of "sampling".

So I already understand your underlying point about uncertainty (as has everyone, amateur and expert alike, that I've discussed your argument with). I just don't agree with your other claim that you build on it.

What I do not understand (or maybe I do) is why you evaded all the other points in my post, such as the last paragraph for example. You don't seem to realize how very shaky is the ground on which you stand when presenting yourself as a champion of error correction.

One other point Tim, again which you evaded. Your partially corrected "Lancet Study Vindicated" piece now reads:

"The resulting estimate for war-related deaths was 24,000 (95% CI 18,000-29,000, see page 54 of report). since the field work was carried in April 2004, this only counts deaths in the invasion and the following year. The corresponding number from the Lancet study is 39,000 (the rest of the excess deaths are from increases in disease, accidents and murders). When you allow for the fact that the Lancet study covered eighteen months rather than one year, the ILCS gives a slightly lower death rate. So an independent study has confirmed that part of the Lancet study."

What's entirely unclear here is what "allowances" you're actually making which purportedly lead to Lancet being "confirmed", or what "slightly" lower means. How much lower?
[I won't ask how an estimate insisted by its authors to be for 39,000 or more ("100,000 or more"), is "confirmed" or "vindicated" by an estimate that comes out lower than 39,000]

When you discuss the changes you made from the first version you write:
"This doesn't affect my conclusion -- the ILCS still corroborates the Lancet study."

This is just asserted again. What exactly constitutes what you call "corroboration" in this analysis?

Josh, here, yet again, is what you claimed:

>the ILCS data only allows for a one in a thousand chance that the true number lies within the upper half of the Lancet range (the area shaded in grey).

This is not true, and it's been explained to you over and over and over again.

Tim, that *is* true if the assumptions we explain as the premise are true. That's the context of the comparison, and we acknowledge that different assumptions could be true in the paragraph above the illustration. You're taking that sentence out of its context, a tactic by which almost anything can be made "not true".

And again, you evade all the other issues and questions. Perhaps you could explain why "the ILCS still corroborates the Lancet study" is true. The difference between our comparison and yours is that the assumptions we're using aren't entirely hidden. One can't, for example, look for "errors" in your comparison because nobody knows how you're doing it. You just make some phantom "allowances" and then presto, "corroboration".

Pedersen admitted in his talks with Soldz that the ILCS survey didn't count the Fallujah deaths that occurred in the spring of 2004, and he thought, off the top of his head, that they probably didn't count the deaths in the areas of heavy fighting with the Shia either. Of course the 21 deaths in L1 exclude Fallujah, but if the ILCS survey also excluded the heaviest fighting in the spring of 2004 then IBC's attempt at disproving L1 doesn't hold up.

Not that the difference amounted to that much anyway. One would think that two studies, one of which comes in around the mid-20,000s (excluding Fallujah) and the other in the mid-to-high 30,000s, are in rough agreement. Yeah, I know the error bars on ILCS are supposed to make the L1 figure really unlikely, but I suspect that if people devoted some attention to possible ILCS flaws the way the Lancet papers are criticized, one could argue for widening those error bars, even without getting into the issue of which violent neighborhoods were avoided during the survey. The disagreement is between ILCS and L2. (Not so much L1, which presumably has large error bars for the violent death subcomponent.)

By Donald Johnson (not verified) on 11 Feb 2007 #permalink

Donald Johnson wrote:

>The disagreement is between ILCS and L2.

As I've said before, the ILCS study is so different that it can't be used in any conclusive way either to support or refute the Roberts or Burnham studies. It's a red herring.

"Pederson admitted in his talks with Soldz that the ILCS survey didn't count the Fallujah deaths that occurred in the spring 2004 and he thought, off the top of his head, that they probably didn't count the deaths in the areas of heavy fighting with the Shia either."

Well, ILCS was not a "count". It was a random survey. So it's trying to get a sample that represents the nation. If the survey didn't exclude some areas of heavy fighting it would probably produce a wild overestimate. It seems like it should include some and exclude some. So the above doesn't seem to prove anything wrong with it. You see what trying to "count the Fallujah deaths" did to the L1 survey anyway.

And the point of the piece was not "IBC attempts to disprove Lancet". It was tackling a series of falsehoods and shaky speculations drawn from, and subsequent to, Lancet and other supposed evidence, which were wrapping themselves in the garb of 'scientific' proof, statistical probabilities, etc. And this incessantly focused on Lancet, Lancet, Lancet, while misrepresenting it as showing/proving at least five times more violent deaths than IBC.

But actually the highest probability estimate out there, by far, was ILCS, not Lancet, and this suggested a lower estimate. And on top of this, Lancet was only three times more than IBC to begin with. So the point is that the probabilities suggest the true figure was probably on the lower side of Lancet and not remotely close to those misrepresentations that were circulating.

As you say there are plenty of other unquantifiable things which may have an effect on any of these figures, so that would all be open to debate. It doesn't seem like any single study like this is entirely conclusive. They offer suggestions, some stronger than others.

"The disagreement is between ILCS and L2. (Not so much L1, which presumably has large error bars for the violent death subcomponent.)"

Well, there's disagreement between L1 and L2 also, unless you accept the "dartboard" argument for L1 that all it really shows is 8,000-194,000. If the only way to disagree with L1 is to go outside that, then pretty much anything agrees with it.

If we accept your premise that ILCS/L1 are in rough agreement, they would suggest IBC being low by only up to factors of 2 or 3. But L2 comes along and now has IBC being low by factors of more than 10. It should be obvious from this that we're suddenly in a whole new "ballpark" and there's little agreement with anything that came before.

Robert, is the problem with ILCS the fact that it asked just one question about war-related deaths in a very long survey? That was the problem as I recall anyway.

Josh, if Pedersen's teams didn't do interviews in the most violent portions of Iraq in the spring of 2004 because of the danger, then they biased their survey in a downward direction and missed an unquantifiable number of violent deaths and there's no way to show ILCS is incompatible with L1's midrange number. Pedersen said the best way to get an estimate for the deaths would be to take their estimate and just add the number killed in the areas not covered. Good luck with that. Maybe I'm answering the question I asked Robert.

As for L1's numbers, the debate between you and Tim was over whether the midrange L1 number was compatible with ILCS, but whatever one says about that (or whether it's even worth arguing about), it's still true that L1 probably has huge error bars for the violent death component. That wouldn't be the 8,000-194,000 range, btw--that was the CI for the excess death toll, violent and nonviolent. I don't think L1 gave a CI for the violent deaths alone, though my memory might be faulty. I assume it would be large.

I read the letter with Garfield and others pointing to a possible reporting error. I didn't fully understand it. Part of the problem was that the death rate for children seemed too small to them. And they also thought it was a problem that the percentage of reported deaths backed up by death certificates was higher in L2 as compared to L1. I don't quite see how this would lead to a decrease in the violent death component in L2, unless they are implying fraudulent death certificates. It seems to me that if you exclude that possibility (maybe one shouldn't), the reporting error would lead to underreporting of deaths. This might screw up the excess mortality calculations, if people are more likely to forget deaths with the passage of time. But it shouldn't lead to an overreporting of violent deaths. Maybe they do mean fraud or some sort of sloppiness in the data collection--it's not clear to me. I also don't get where the specific figure of 30 percent comes from. Anyway, the Garfield letter (his name was third on the list) still concluded that the violent death toll would be 5 times the IBC figure, so in their view it brought the lower end of the violent death CI down from 426,000 to maybe 300,000.

By Donald Johnson (not verified) on 12 Feb 2007 #permalink

As for different ballparks, well, yeah. I was shocked by the 600,000 number--I wouldn't have dreamed it could be that high. Apparently some of the L2 team had the same reaction. And maybe it is too high--I don't know.

By Donald Johnson (not verified) on 12 Feb 2007 #permalink

Donald asked:

>Robert, is the problem with ILCS the fact that it asked just one question about war-related deaths in a very long survey?

Not exactly. You can do a lot with one question if it's carefully asked and carefully executed, and even lots of questions can give you junk if they're not. However, the ILCS was focused on living conditions and infant and child mortality, and the kinds of questions and protocols you follow in order to get good info about adult mortality and causes of death are quite different. Plus, there are a handful of other things that make the comparison more difficult: the time periods don't line up, the ILCS didn't ask date of death, and there is some question about clusters (on both sides of the ledger).

That means that at most the ILCS findings can be suggestive (as in, "consistent with" or "inconsistent with") but no serious researcher would consider them conclusive. That's why no one but JoshD talks about them.

"That's why no one but JoshD talks about them."

Putting aside yet another vacuous ad hominem insult from Robert (a "serious researcher" whose only contribution to the world on these matters seems to be that he once ran some L1 data through a piece of software), I've seen the ILCS findings discussed recently in a letter published in the Lancet journal, in articles in Science magazine, by its author and elsewhere. In each case these seemed to take them as more conclusive than Lancet findings.

So it's fairly easy to determine that Robert's comments are uninformed, unreliable and hopelessly biased, much like the study he wants to peddle as "conclusive". Less fanciful and agenda-driven assessments of what is or is not "conclusive" are available elsewhere, and from more serious researchers: http://www.taipeitimes.com/News/editorials/archives/2007/01/28/20033467…

[joshd](http://www.apa.org/journals/features/psp7761121.pdf) wrote:

[Link to Taipei Times article by Beth Daponte snipped]

Um, [joshd](http://www.apa.org/journals/features/psp7761121.pdf)? Beth doesn't mention the ILCS study at all in her article. AFAIK, you guys are the only ones who're making claims about how the ILCS findings "invalidate" the Roberts and Burnham studies. Even though Beth isn't an epidemiologist or a statistical demographer (and I'm sure that's one of the first things she'd admit to) she *is* a serious researcher -- and that's why she doesn't make that claim. OTOH, you *have* to make that claim, because without it you got pritnear nothin'.

As an aside, I agree with Beth that the UN Pop Division mortality estimates are pretty high quality but the reason why those estimates are so useful is because they're consistent across more than 200 countries, not particularly because they're timely. I'm sure Beth knows that, and it just slipped her mind.

Oh that's a brilliant piece of work published in the Taipei Times Josh. Masterful.

But in real time, do the numbers really add to the debate? Do they really provide us with more information than the Iraq Body Count figures provide?

I think - "yes".

>Um, joshd? Beth doesn't mention the ILCS study at all in her article.

Didn't say she did Robert. That wasn't the point of the reference.

>AFAIK, you guys are the only ones who're making claims about how the ILCS findings "invalidate" the Roberts and Burnham studies.

When moving goalposts, it would be best to at least explain where your "quotes" are coming from.

>Even though Beth isn't an epidemiologist or a statistical demographer...

Your level of credentialism is very unhealthy. She has more relevant experience and insight on this topic than do most of either.

And I've studied a small sample of epidemiologists. My findings show that epidemiologists generally seem to suffer from an acute inability to say almost anything that isn't false or equivocal, and regularly make claims which are sheer quackery. To be fair, my sample wasn't random. But I hear that does not matter.

[joshd](http://www.apa.org/journals/features/psp7761121.pdf) wrote:

>>Um, joshd? Beth doesn't mention the ILCS study at all in her article.

>Didn't say she did Robert. That wasn't the point of the reference.

Wait a second. I wrote: "no serious researcher would consider [the ILCS] conclusive. That's why no one but JoshD talks about them." And then you linked to her post, and call her a serious researcher. And you're saying that I'm the one moving the goalposts? What, you were just dropping in random and unrelated links?

>>AFAIK, you guys are the only ones who're making claims about how the ILCS findings "invalidate" the Roberts and Burnham studies.

>it would be best to at least explain where your "quotes" are coming from.

OK. I stand corrected: you're not saying that the ILCS findings invalidate the Roberts and Burnham studies. So the ["one in a thousand chance"](http://www.iraqbodycount.org/editorial/defended/3.6.2.php) statement isn't relevant. But in that case, if you know it *not* to be relevant or conclusive, then including it as part of the argument *is* disingenuous.

[Beth] has more relevant experience and insight on this topic than do most of either.

I'm going to presume you meant, "than do most of us." That's absolutely true, and indisputably applies to you. In spades. However, while Beth absolutely does good and admirable work, it is no dishonor to her to say that her area of expertise isn't statistical demography. As I've said, I'm pretty sure she'd be the first to admit that.

BTW, [joshd](http://www.apa.org/journals/features/psp7761121.pdf), up to this point you've mostly been arguing that credentialing isn't that important and that in these matters the opinions of physicists, economists, and jazz musicians should be of equal value to those of epidemiologists, biostatisticians, demographers, and survey specialists. I try to be reasonably sensitive to this point of view since I do think arguments should attempt to stand on their own, that no one field has a monopoly on knowledge, and (frankly) cuz I've always been sorta insecure about my own qualifications. Accordingly, I've tried never to dismiss your comments because of your background (I dismiss them because you're a nut). Have you switched your position and are you now saying that experience and training *are* relevant?

Donald, your link is interesting and is, as you say, anecdotal, but what it suggests is very much out of line with L2.

The picture would appear to contain about 40 houses or more, with members of 11 of these having suffered directly from violence or from direct threats of violence. Of these 11, only one (House 2) reports 1 death. So in about 40 houses, there's 1 violent death.

L2 claims to have recorded 302 violent deaths in 47 clusters of 40 houses each. That's about 6 or 7 violent deaths for every 40 houses, which is way out of line with your anecdote.
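To make the arithmetic explicit, here is a quick back-of-envelope sketch. The inputs are just the round numbers quoted above -- 302 violent deaths, 47 clusters, roughly 40 households per cluster -- not exact figures taken from the published study:

```python
# Back-of-envelope comparison of the Atlantic anecdote with L2's reported violent deaths.
# Figures are the round numbers quoted in the comment above, not exact values
# from the published study.

l2_violent_deaths = 302          # violent deaths recorded by L2 (as quoted above)
l2_clusters = 47                 # number of clusters
households_per_cluster = 40      # rough cluster size used in the comment

l2_per_40_households = l2_violent_deaths / l2_clusters
print(f"L2: ~{l2_per_40_households:.1f} violent deaths per {households_per_cluster} households")

# The anecdote: roughly 40 houses, one reported violent death.
anecdote_deaths = 1
anecdote_houses = 40
print(f"Anecdote: {anecdote_deaths} violent death per {anecdote_houses} houses")

# The ratio the comment is pointing at (roughly six to seven).
print(f"Ratio: ~{l2_per_40_households / anecdote_deaths:.1f}x")
```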

Your anecdote would also suggest that about 7 in 40 houses should be empty. Yet in L2 almost none are empty: "Households where all members were dead or had gone away were reported in only one cluster in Ninewa and these deaths are not included in this report." (As an aside, note that "were dead or had gone away" by the end is spun into "these deaths are not included").

The anecdote would suggest that there's a lot of population migration, which can wreak havoc with the assumptions and outcomes of a survey estimate; that fatalities are really only one small part of all the violence (kidnappings, death threats, injuries, etc.) the population is actually suffering from; and that L2 is overestimating violent death by a factor of six or seven. Of course, this is inconclusive and anecdotal, but I wasn't sure if you'd thought it through.

What I think is important about the anecdote is that the level of violence it describes would itself be astronomical in terms of its impact on the population. If I lived on that street and were seeing all this stuff they describe happening to my neighbors, I would be living every waking moment in terror. And I, too, would be trying to flee if I had the means. This is some insight into why I also fear that the type of 'narratives' being put forth by The Experts (or at least a very small sampling of them) really downplays what "only" the lower levels of violence can do to a society like this.

One of the participants in this interview makes this point well: http://www.wamu.org/programs/dr/06/10/12.php#11450

To bring the thread full circle to my first comment, note that LR authoritatively cites variations of his "40,000 by all sources in 2002" rubbish about three times to rebut the others and prop up his 'narrative'. He even cites rubbish from that 'sensitivity analysis' of the '8 serious studies' again. Each of the participants has somewhat differing opinions, but the only one in the interview peddling self-serving falsehoods and disinforming the audience is the Expert, the Epidemiologist, whose views we're supposed to elevate in authority over the others purely on the basis of that title.

[joshd](http://www.apa.org/journals/features/psp7761121.pdf) wrote:

but from what it suggests it's very much out of line with L2 [...] So of about 40 houses, there's 1 violent death.

Yeah, I agree this neighborhood doesn't seem typical of the experience reported in the Burnham study, but that's probably a good thing for the study. As long as you're counting things, you should probably mention that the 11 houses for which details were given included those inhabited (and subsequently abandoned) by a Yemeni diplomat, a contractor for the US government, a former minister, two successful businessmen, and three high-ranking officers in the Iraqi military.

It's also very near a main street. It fits in very nicely with my intuition that off-main-street neighbourhoods are apt to be inhabited by people who are wealthier than average and therefore likely to be targets, but also able to buy their way out of trouble. Which way a main street bias would actually go (assuming that there is one) is not at all clear.

By Kevin Donoghue (not verified) on 14 Feb 2007 #permalink

What Kevin said, regarding main street bias. And also what I said--if there's something seriously wrong with the L2 numbers, one's best bet is to say there was fraud in there somewhere. MSB didn't produce 6 deaths in this cluster. As for this link, I threw it out there because I'm in favor of anecdotal evidence, until someone else bothers to do a mortality survey. As for which side it supports, in this case it supports IBC.

On the subject of why Iraq is falling apart: if we go by the IBC statistics, the reaction of Iraqis to the occupation has always been a bit of a mystery to me.

Saddam was worse (if IBC is right about the occupation and if Saddam killed 300,000). Much, much worse. The sanctions were probably also worse, perhaps much worse. The Iran war killed hundreds of thousands. And it's not enough to say that Saddam's killing was mostly in the '80s and in '91. (Why didn't Iraq crumble then?) John Burns, the transparently prowar NYT reporter, wrote stories in the lead-up to the war which painted a pretty convincing picture of a Stalinist dungeon. One of his stories was about the mass prison release Saddam allowed in the months preceding the war, and the picture was of a country groaning under a sadistic tyrant. And numerous stories (including a book by leftwing Pacifica radio reporter Aaron Glantz) portrayed Iraqis as actually welcoming the US invasion and being willing to accept the several thousand deaths in the opening two months (7,000 or so by IBC's count, and L1's numbers are consistent with that order of magnitude). IBC, as an antiwar organization, was horrified by these deaths. Iraqis, it seems, not so much. They had suffered so much under Saddam that they were willing to pay the price of several thousand civilian dead without much resentment. This isn't a country that is new to massive suffering, yet the occupation seems to be sending it over the edge, even though, looking at the cold statistics as summarized by IBC, things have been much worse in the recent past.

By Donald Johnson (not verified) on 14 Feb 2007 #permalink

Getting back on topic (I have a bad tendency to go off on tangents), and following up on Kevin's point, several of those families fled after someone survived a murder attempt or received a death threat. If they hadn't fled (and most people in Iraq haven't just yet), then there's a fair chance that they'd be dead. So it's actually hard to tell what death toll this cluster would support if one tried to make it represent Iraq (which we all agree is silly). Maybe most people who receive death threats aren't able to flee to other countries and are subsequently killed. Or maybe there are many more death threats than follow-up killings. Or maybe people in this neighborhood receive more death threats and kidnappings than average--one could argue the number up or down.

But anyway, more anecdotal evidence would be good. A few dozen stories like this one, taken from all sorts of neighborhoods, might give us a better sense than either IBC or L2 of what is happening in Iraq.

By Donald Johnson (not verified) on 14 Feb 2007 #permalink

Isn't anecdotal evidence considered generally unreliable? If that's the case, is there a point at which piling up unreliable anecdotes causes them to become reliable? On the other hand, anecdotes might be more reliable than someone's gut instincts! Seems to me a possible resolution would be for those who disagree with L1 and L2 to fund their own survey, using their 'improved' methods.

Cute, Richard, but are you saying that the Atlantic Monthly article is a lie? That the reporter didn't go to these 40 or so houses and accumulate all that information about death threats, kidnappings, one death, and the professions and nationalities of the people in those homes that had suffered violence? And that it would therefore tell us nothing at all about Iraq if, say, reporters went to 47 different neighborhoods scattered around Iraq and collected similar anecdotal data? It wouldn't give us a true sense of what it's actually like in Iraq? I get the impression that you are reacting in knee-jerk fashion to the use of words like "anecdote" or phrases like "gut feeling". So we have this story in the Atlantic that does a superb job detailing the impact of the war on one neighborhood, and to you, because I used the term "anecdotal", it's unreliable. Therefore, if it's possible for other reporters to do the same in other locations, they needn't bother, because L2 or IBC (take your pick) has already told us the truth.

Les Roberts is the one who suggested going to several local graveyards and who said that if L2 is correct on the general order of magnitude of deaths, then it shouldn't be hard for reporters to verify. I think he was forgetting the personal danger element for the reporters, but he obviously meant that a 2.5 percent violent mortality rate ought to be pretty obvious. But I guess that was just Les being anecdotal.

But another survey would be even better.

By Donald Johnson (not verified) on 16 Feb 2007 #permalink

An anecdote is an anecdote; a survey performed using scientific methods is just that. Anecdotes cannot disprove the Lancet studies, so asking for more of them is pointless.

That's just rubbish, Richard. If enough suicidally brave reporters went around to random neighborhoods and did what was done in this Atlantic article, and if they consistently found that the death toll was many times lower than L2's, then any sensible person would come to the conclusion that something was wrong with L2. That hasn't happened, but in theory it could.

I tend to think the death toll is in the hundreds of thousands, and because of that I think that if reporters spent a few months trying to do what these Atlantic reporters did, they'd quickly confirm it. Or else turn up evidence that it was inflated. Les Roberts obviously thinks so too, and you know why? Because he sincerely believes his results--he really thinks that 2.5 percent of the Iraqi population has died from violence, and that this number is so large it doesn't require a peer-reviewed statistical study to turn up the evidence, as though we're trying to determine whether some tiny amount of food additive causes a measurable increase in some cancer. He says so over and over again. And he's right--if, for instance, 8 million Americans had been murdered in the past 4 years, I don't think I'd need a statistical survey to find out that the number was in the several million range. Some intrepid reporters willing to go around to different parts of the country and interview people would probably do the trick.
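For what it's worth, the rough arithmetic behind that analogy is easy to check. A minimal sketch, using round population figures of my own choosing (about 26 million Iraqis and about 300 million Americans; neither number comes from the comment or the studies):

```python
# Rough scaling of a 2.5% violent mortality rate to the US population,
# to show where the "8 million Americans" analogy comes from.
# Population figures are round assumptions (~26 million Iraqis, ~300 million
# Americans circa 2006-07), not values taken from the studies themselves.

violent_mortality_rate = 0.025      # 2.5% of the population, per the comment
iraq_population = 26_000_000        # assumed round figure
us_population = 300_000_000         # assumed round figure

iraq_violent_deaths = violent_mortality_rate * iraq_population
us_equivalent = violent_mortality_rate * us_population

print(f"2.5% of Iraq's population: ~{iraq_violent_deaths:,.0f} violent deaths")
print(f"Same rate applied to the US: ~{us_equivalent:,.0f} deaths")
```

That gives roughly 650,000 for Iraq and roughly 7.5 million for the US, which is where the "8 million Americans" comparison comes from.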

You're acting as though the Lancet result is fragile, only discernible by trained statisticians, when it's a claim that a society is being destroyed by massive levels of violence.

By Donald Johnson (not verified) on 16 Feb 2007 #permalink