Following Up on the Lancet Study

As expected, the Lancet study on civilian deaths in Iraq has created a firestorm on the net. What frankly astounds me is how utterly *dreadful* most of the critiques of the study have been.

My own favorite for sheer chutzpah is [Omar Fadil](http://politicscentral.com/2006/10/11/jaccuse_iraq_the_model_respond.php):

>I wonder if that research team was willing to go to North Korea or Libya and I
>think they wouldn't have the guts to dare ask Saddam to let them in and investigate
>deaths under his regime.
>No, they would've shit their pants the moment they set foot in Iraq and they would
>find themselves surrounded by the Mukhabarat men counting their breaths. However,
>maybe they would have the chance to receive a gift from the tyrant in exchange for
>painting a rosy picture about his rule.
>
>They shamelessly made an auction of our blood, and it didn't make a difference if
>the blood was shed by a bomb or a bullet or a heart attack because the bigger the
>count the more useful it becomes to attack this or that policy in a political race
>and the more useful it becomes in cheerleading for murderous tyrannical regimes.
>
>When the statistics announced by hospitals and military here, or even by the UN,
>did not satisfy their lust for more deaths, they resorted to mathematics to get a
>fake number that satisfies their sadistic urges.

You see, going door to door in the middle of a war zone where people are being murdered at a horrifying rate - that's just the *peak* of cowardice! And wanting to know how many people have died in a war - that's clearly nothing but pure bloodthirst - those horrible anti-war people just *love* the blood.

And the math is all just a lie. Never mind that it's valid statistical mathematics. Never mind that it's a valid and well-proven methodology. Don't even waste time actually *looking* at the data, or the methodology, or the math. Because people like Omar *know* the truth. They don't need to do any analysis. They *know*. And anyone who actually risks their neck on the ground gathering real data - they're just a bunch of sadistic liars who resort to math as a means of lying.

That's typical of the responses to the study. People who don't like the result are simply asserting that it *can't* be right, they *know* it can't be right. No argument, no analysis, just blind assertions, ranging from emotional beliefs that
[the conclusions *must* be wrong](http://www.abc.net.au/worldtoday/content/2006/s1763454.htm) to
[accusations that the study is fake](http://rightwingnuthouse.com/archives/2006/10/11/a-most-ghoulish-debate/), to [claims that the entire concept of statistical analysis is clearly garbage](http://timblair.net/ee/index.php/weblog/please_consider/).

The Lancet study is far from perfect. And there *are* people who have come forward with [legitimate questions and criticisms](http://scienceblogs.com/authority/2006/10/the_iraq_study_-_how_good_is_…) about it. But that's not the response that we've seen from the right-wing media and blogosphere today. All we've seen is blind, pig-ignorant bullshit - a bunch of innumerate jackasses screaming at the top of their lungs: "**IT'S WRONG BECAUSE WE SAY IT'S WRONG AND IF YOU DISAGREE YOU'RE A TRAITOR!**"

The conclusion that I draw from all of this? The study *is* correct. No one, no matter how they try, has been able to show any *real* problem with the methodology, the data, or the analysis that indicates that the estimates are invalid. When they start throwing out all of statistical mathematics because they don't like the conclusion of a single study, you know that they can't find a problem with the study.


I've been trying not to get drawn into reading all of the controversy over this study, because darn it, I've got actual work to do...

But this just convinces me again, as if I needed any more convincing, that statistical literacy is the most important tool we in the science/mathematics teaching world need to be imparting to our students and the larger society. Its lack underlies so many communication difficulties and so many irrationalities in public discourse.

All right (pant, pant), when I calm down a bit I realize that some irrationality may be beyond the reach of "more correct information" to solve. Nevertheless.

I found this to be the most cogent and concise summary of many of the innumerate arguments against the study.

By justawriter (not verified) on 12 Oct 2006 #permalink

This caught my eye in the comments of one of the linked articles:

And the study doesn't take into consideration how many Iraqis Saddam Hussein would have killed had he remained in power. By most Bush Administration estimates, if the rate of killing under his regime had continued, there would be not one Iraqi left alive by now.

The Iraqis just don't know how lucky they are to have Americans there bringing them peace, security and the blessings of democracy.

It's odd. My sarcasmometer is going off the charts, but I suspect that the speaker was serious.

Can we say, "drivel"?

By Xanthir, FCD (not verified) on 12 Oct 2006 #permalink

The two Lancet studies estimated 100,000 deaths in 2004 and 650,000 deaths in 2006. The difference, 550,000, works out to about 750 per day, or over 22,000 per month. These daily and monthly estimates are not consistent with the numbers from any other source: the morgues, the press, the IBC, or the governments of Iraq or the US.
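
(A quick back-of-the-envelope check of that arithmetic; the ~730-day window between the two studies' endpoints is an assumption for illustration, not a figure from either paper:)

```python
# Sanity-check of the per-day and per-month rates quoted above.
excess = 650_000 - 100_000   # difference between the two published estimates
days = 730                   # assumed ~2 years between the survey endpoints

per_day = excess / days      # ~753 per day
per_month = per_day * 30     # ~22,600 per month
print(f"{per_day:.0f} per day, {per_month:.0f} per month")
```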

Therefore there is ample reason to question the two Lancet studies, to investigate further.

In addition, one has to wonder how an interviewer gathers unbiased information in a war zone with local warlords and insurgents in some areas and the US and Iraqi armies in others. Avoiding one area or the other would seem to lead to geographical bias, entering both would seem to subject the interviewers to coercion or manipulation.

These are reasonable questions. All the more so because the editor of the Lancet is an ardent anti-war activist.

Instead of making fun of a critique you find utterly dreadful, wouldn't it have been more of a contribution to analyze the best of the critiques? Really, isn't that the scientific approach?

Warren:

Those questions are answered in the paper.

The other estimates are based on passively acquired data (i.e., gathering figures reported by morgues); as the paper points out, past experience has shown that passive methods under-report by a huge degree.

The interview method is documented in the paper - with answers to your questions about how the interviews were conducted, how interview locations were selected, how they verified death reports when possible, and what portion of the reported deaths were verifiable.

In other words - your objections are all answered *by the paper*, if you would bother to actually *read it*, rather than just blindly assume that it *must* be wrong because you don't like the results.

The data acquisition method is a valid, well-known, well-proven technique, and the paper documents how they did it. The math is good and well done, and honest about its limitations, and the paper documents how the data was analyzed to compute the results. I've yet to see anyone actually address *those* points, which are the important ones. After *reading the paper*: *exactly* what about the data acquisition methods is wrong, based on the detailed description in the paper? Or *exactly* what is wrong with how they applied the well-known population cluster sampling method, based on the detailed description in the paper?
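
To make it concrete for readers who haven't seen the technique, here is a minimal sketch of the cluster-sampling idea. Every number in it is invented for illustration; this is not the study's actual design, sampling frame, or data:

```python
import random

random.seed(0)

# Hypothetical population: 1,000 communities, each with a household count
# and an underlying deaths-per-household rate (all values made up).
communities = [(random.randint(500, 1500),     # households in the community
                random.uniform(0.01, 0.05))    # deaths per household
               for _ in range(1000)]
total_households = sum(h for h, _ in communities)

# Survey a random sample of 50 clusters, recording what is observed there.
surveyed = random.sample(communities, 50)
deaths_seen = sum(h * rate for h, rate in surveyed)
households_seen = sum(h for h, _ in surveyed)

# Extrapolate the sampled deaths-per-household rate to the whole population.
estimate = deaths_seen / households_seen * total_households
actual = sum(h * rate for h, rate in communities)
print(f"estimate: {estimate:.0f}, actual: {actual:.0f}")
```

A modest number of randomly chosen clusters recovers the population-wide rate to within sampling error; the serious methodological questions are about how clusters are selected and whether they are representative, which is exactly what the paper documents.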

What I said was "there is ample reason to question the two Lancet studies, to investigate further".

I did not say "it *must* be wrong".

The dissing that the skeptics have received is excessive.

Astonishing results require very strong evidence. If I claim the sky is blue I need less evidence than if I claim it is paisley.

A claim like "the sky is paisley today" is only shocking because it is unfamiliar and unconnected with common experience. We assign the statement "the sky is blue" a higher likelihood of being true (and a correspondingly lower need for supporting evidence) simply because our personal histories have given us many encounters with blue sky. In either case, the visual appearance of the sky is such a simple observation to make that we can comfortably consider unfamiliar statements about it to be the result of error. For example, I may have misheard, or the speaker might be making a metaphorical assertion, or the speaker might not be fluent in English — paes-lee means "cloudy" in their native tongue — or, perhaps, they are actually seeing purple haze.

Most of the people disputing the Lancet results have no direct, intuitive experience with massive death tolls. Thus, the pair of statements "the death toll is high" and "the death toll is low" cannot be compared with the pair "the sky is blue" and "the sky is paisley". Besides which, too many of them are not even offering evidence to support their claim — they are completely divorced from notions of empiricism and method and instead float freely in the paisley clouds of Truth-by-Incantation Land. Noting the absurdity of such claims is an appropriate topic for a quick blog post in the morning; discussing level-headed questions of methodology (the questions pundits do not ask) is a topic for teatime.
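
(That point about priors can be put in toy Bayesian terms; the probabilities below are invented purely for illustration:)

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """P(claim | evidence), by Bayes' rule."""
    num = prior * p_evidence_if_true
    return num / (num + (1 - prior) * p_evidence_if_false)

# A familiar claim ("the sky is blue") starts with a high prior, so even
# weak evidence leaves it near certainty; an unfamiliar claim ("the sky
# is paisley") starts low and needs much stronger evidence to move.
print(posterior(0.99, 0.9, 0.5))    # ~0.994
print(posterior(0.01, 0.9, 0.5))    # ~0.018 - weak evidence barely moves it
print(posterior(0.01, 0.9, 0.01))   # ~0.476 - strong evidence moves it a lot
```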

MarkCC wrote:

The conclusion that I draw from all of this? The study is correct. No one, no matter how they try, has been able to show any real problem with the methodology, the data, or the analysis that indicates that the estimates are invalid. When they start throwing out all of statistical mathematics because they don't like the conclusion of a single study, you know that they can't find a problem with the study.

This presumes, of course, that at least one such person is competent to do statistical math. :-/

(last note of the morning)

@a little night musing:

I doubt a widespread education in mathematics and statistics would stop moronic pundits from chanting hate-spells at every study they dislike in an attempt to dispel science by superstition (for this is what such people are trying to do: Truth by Incantation). However, if their audience can be taught statistics, then more people might recognize ignorant bile for its true nature, and that would be something good.

As we've seen many times before, it is practically impossible to shake a True Believer's faith. Real results happen when you wake up the people in the middle.

Warren:

Instead of making fun of a critique you find utterly dreadful, wouldn't it have been more of a contribution to analyze the best of the critiques? Really, isn't that the scientific approach?

My point is that I can't find any substantive critiques.

Out of all of the people who have been talking about how impossible the casualty figure from the paper is, out of all of the critiques/attacks that I've seen on the paper, its authors, the journal, etc., I have not seen *one, single, solitary* critique that actually addressed the fundamental issues of the paper. It's all just shouting about how it doesn't make sense - but never addressing *what's wrong with it* in any serious, substantive way.

What I said was "there is ample reason to question the two Lancet studies, to investigate further".

I did not say "it *must* be wrong".

The dissing that the skeptics have received is excessive.

Astonishing results require very strong evidence. If I claim the sky is blue I need less evidence than if I claim it is paisley.

What are those allegedly ample reasons? The only ones I've seen are "It can't be right because I don't believe it."

The dissing that the so-called skeptics have received is quite justified. A real skeptic looks at information carefully, evaluates, and comes to a reasoned, justified decision about how much validity should be assigned to that information. The so-called skeptics of this report haven't done that - they've screamed and shouted, attacked the ethics of the researchers who wrote the paper, made blind and foolish attacks on the entire science of statistical analysis, and insisted that the results must be wrong, without offering *any* actual justification.

A serious critic of the Lancet paper would identify specific shortcomings in either the methodology or the analysis in the paper. I haven't seen anyone do that.

I consider myself a skeptic, and I sat down and read the Lancet paper. From my own experience with math and statistics, I can't find anything seriously wrong with the work. As I said, based on the techniques they used, the figures *are* probably high; and as the authors admit, the margin of error is quite large for a statistical study. But the fundamentals of the paper are *quite* strong. So my reasoned analysis is that the numbers in the paper are probably in the right ballpark.

As the authors of the paper point out, there is a huge gap in *real* situations between the numbers that you'd get from passive data acquisition (gathering casualty figures reported to the press by police, morgues, etc.) and active data acquisition (population cluster sampling, as done in the Lancet paper). The figures they quote show that in military conflicts, active methods generated estimates more than 5x the size of the passive ones in every conflict where both were used, with the exception of Bosnia. And after the conflict was over, when it was possible to go in and try to validate the numbers, the active methods were much closer to the facts than the passive ones.

Just by reading the abstract, I can find one minor flaw in the study (potentially major, depending on who's doing the asking; I had engineering profs back in the day who would have failed a student on a test for such an error). The authors either do not correctly apply or perhaps do not understand significant digits. E.g., they report 654,965 excess deaths, but they only contacted 12,801 individuals; therefore they cannot possibly have 6 significant digits to work with (they have 5 at best). Considering that their rates are (mostly) only quoted to 2 sig figs, I can't imagine why they chose to report to 6 (in the introduction they describe their prior results using variously 1, 2, and 3 sig figs)!

I'll let you know what I think of the rest of the study later on this weekend after I've had time to read the whole thing with my engineer's eye.

Most of us have not had time to read the Lancet paper. Since both papers came out in October before a US election, we are pretty safe in assuming that the authors have one foot in the scientific camp and another in the political, electioneering camp. Important elections occur every 2 years in the US, and the Lancet has issued a death count before each of the last 2.

The Lancet paper is not yet being treated as a scientific issue, or not just as a pure scientific issue. The Lancet paper is being properly treated as a political act. If GW Bush said the death toll was very low, people would discount it as a politically motivated statistic. When an ardent anti-war activist comes out with surprisingly high numbers in October people will discount it as a politically motivated statistic.

It isn't true that the only critique is "I don't like it". The morgue statistics can be fairly viewed as active data gathering, since people benefit financially from making the death report. The morgue statistics raise questions about the Lancet paper.

To believe the Lancet data, the study would have to be verified by another study from a different team using different, but valid, methods. That is only good science.

I hope to get time to read the paper this week. It certainly would be best.

On blue-versus-paisley: We are all familiar with the other sources of death statistics (IBC, press, governments...). When the Lancet study comes along as an outlier, it is sensible to raise one's eyebrows and look a lot more closely than if it had come out with the same estimate as everyone else. That's all I was trying to say. I wasn't saying I had direct experience with mass death.

For many of us, the claim that active methods always result in higher numbers than passive measures (including death certificates!) is a new idea and needs to be verified. If this is old hat to you then you are in a different situation.

With everything I've said, of course, I haven't given my opinion on what the true number of deaths is. That is because I do not know. I hope that was clear. I think we would all like the number to be a great deal smaller than 600,000.

A couple of short comments. If you think that the numbers reported in the Lancet study are too high, you are skeptical, but you aren't a skeptic. A skeptic would attempt to provide some analysis that would give some reason for the data reported, and for the most part "skeptics" simply have not done this. They have given us absolutely no reason to believe that there is anything significantly flawed in the methodology of the study. There is substantial agreement in the scientific community which deals with statistics of this sort that their methodology is sound and reasonable.

We'd all like to believe that a quest to make America safe from, well, whatever it is that the Iraqis have that we think is threatening us, would not require the killing of six hundred thousand Iraqi civilians. But our hopes don't actually figure into the death tolls.

I agree with Chu-Carroll: it seems to me that the figures in this paper are credible precisely because all criticism against them is emotional, rather than scientific. I did precisely what he did: I downloaded the paper and read it. Virtually all of the people claiming that the numbers simply must be too high either have not done that, or are simply ignoring the paper itself, which addresses a great many of the knee-jerk objections I'm seeing in the media.

An additional thought: if you assume that these numbers are too high, why are you believing the government and military estimates? If you think the Lancet and their system of review would allow a paper whose methodology is significantly flawed by political bias to be published, what check and balance prevents bias from entering the military's estimate? Clearly the military and the current administration have an interest in portraying the current policy as causing the lowest possible number of civilian deaths. What's to keep their methodology (currently undocumented and unreviewed by anyone) from resulting in significantly flawed estimates?

billb writes: "The authors clearly do not correctly apply or perhaps they do not understand significant digits. E.g., they report 654,965 excess deaths, but they only contacted 12,801 individuals, ...." Sigh. The number of deaths is an integer. The abstract gives the mean value with a confidence interval: 654,965 (392,979 - 942,636). So the statistical uncertainty here has an order of magnitude of 10^5. The value 654,965 is a valid statistical result. The authors published a more readable version of the paper in pdf format entitled "The Human Cost of the War in Iraq." Perhaps you could take a look at it?

The Lancet is a prestigious scientific journal, one of the oldest. Your thinking they would publish a 10th-grade mistake in the abstract of a paper that they know will be subjected to intense scrutiny is difficult for me to understand. One thing this storm of commentary in the "blogosphere" shows is that a lot of people simply don't grasp that publication in a scientific journal represents a high standard of truth and accuracy. The highest, IMHO.

Blake:

I doubt a widespread education in mathematics and statistics would stop moronic pundits from chanting hate-spells at every study they dislike in an attempt to dispel science by superstition (for this is what such people are trying to do: Truth by Incantation). However, if their audience can be taught statistics, then more people might recognize ignorant bile for its true nature, and that would be something good.

As we've seen many times before, it is practically impossible to shake a True Believer's faith. Real results happen when you wake up the people in the middle.

That's what I was really saying.

My statistics students, for example, when queried at the start of the course, will mostly say that a sample of 500 voters out of 2 million is just too small to draw any conclusions (regardless of how the sample is drawn). I hope that by the end of the course they will have absorbed enough about variation and sampling to have changed their minds on that matter, so that when someone (was it Tony Blair? I'm too bummed to go back and look it up) complains about the small size of the sample in the Lancet study, they will instantly know that he is as ignorant as they used to be.
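
(For the curious, here's the back-of-the-envelope version of that classroom calculation, using the standard worst-case assumptions of a 50/50 split and a 95% confidence level:)

```python
import math

# 95% margin of error for a sample proportion, worst case p = 0.5.
n, p, z = 500, 0.5, 1.96
moe = z * math.sqrt(p * (1 - p) / n)
print(f"+/- {100 * moe:.1f} points")   # ~ +/- 4.4 points, essentially
                                       # independent of the population size
```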

I actually received word from one of my former students that after my statistics course he could no longer read the news or watch television without yelling "that's wrong!" So maybe there is some hope that the audience can be educated.

Mark's doing a great job here, as are some others, and I think I shall try to expand their readership to the best of my ability.

Mark claims that "No one, no matter how they try, has been able to show any real problem with the methodology, the data ..."

Well, one reason for this is that the authors have refused to allow access to the underlying data. (I asked repeatedly with the 2004 paper and have tried as well with this one.) How could anyone show a "real problem" with the data if he can't examine it?

When academic researchers refuse to allow such access, what inference do you draw?

By David Kane (not verified) on 13 Oct 2006 #permalink

Thanks for the exposition. I came here to ask the very same questions you have already answered for others.
Lindsay has collected almost a dozen wingnut "refutations" and they all reek of the exasperation of the dimwit who cannot find what he is nevertheless certain of. [or she, in Ms Malkin's case]

The points made by several commenters, to the effect that peer-reviewed "free speech" by educated parties is more valuable than most other things you could find to read, sit well with me. Scientists are NOT afraid to call BS on each other when they think it is warranted, and in that climate, they INVITE each other to find fault before the whole community is offered the reading. Wouldn't you love to see that standard in your government's proceedings?

Even with their very conservative margins for error [as few as 450,000 souls eradicated by our bumbling], the numbers are not easily grasped. I had read numbers on the order of 50,000 prior to this report. The possible range of error is now 4 times that old estimate. Perhaps acceptance of the results is not possible for many war supporters less because they can't do the math and more because the number is so disgusting that they would rather engage in various ways of ignoring it, doing or saying anything except becoming familiar with the facts and calculations that might indict their pet beliefs. They have to get beyond the psychology before they can do the math.

David Kane: "How could anyone show a 'real problem' with the data if he can't examine it?"

'S one of the things peer review is for. Unless of course one believes that "They're all in on it."

Beyond peer review, there is plenty of information in the article itself on which to base valid questions and criticisms, as previously cited by Mark:

http://scienceblogs.com/authority/2006/10/the_iraq_study_-_how_good_is_…

Warren: you say that the report comes out right before the election with a "surprisingly" high number. Why are you "surprised"? Is there something about Iraq that you know that we don't? Do you have data that contradicts the number? Or is it just your gut feeling?

I was not surprised either way because I simply had "no idea" how many deaths there were. The saddest bit is that nobody in the government seems to counter this with their own well-documented data and analysis. Regardless of the existence and conclusions of the Lancet report, the absence of a rigorous study on the subject by the people who run this war makes you wonder, doesn't it?

Warren:
"Since both papers came out in October before a US election, we are pretty safe in assuming that the authors have one foot in the scientific camp and another in the political, electioneering camp."

justawriter anticipated you. His link goes to Majikthise: "Here are today's talking points. Or should we say talking flails? There aren't many actual points here: ... 3. The study was published before the election. (Instapundit) (Political Pitbull)"

"The Lancet paper is not yet being treated as a scientific issue, or not just as a pure scientific issue. The Lancet paper is being properly treated as a political act."

Majikthise: "8. Sure the study's methodology is standard for public health research. But don't forget that public health is a leftwing plot. (Medpundit) ... Cowards, all of them. They own this war, but they won't face up to the fact that their little adventure helped kill over half a million people."

"The morgue statistics can be fairly viewed as active data gathering, as the people are financially benefited for making the death report. The morgue statistics raise questions about the Lancet paper."

Active data acquisition means planning and making specific observations to get good data; passive data acquisition means relying on data already gathered by others, perhaps for other reasons and without control. Morgue statistics count as passive data acquisition.

If you read the report, or Mark's comment, you will see the explanation for the difference between the active and passive data, and why it doesn't raise any concerns that we know of.

"To believe the Lancet data, the study would have be verified with another study by a different team using different, but valid, methods. That is only good science. ... When the Lancet study comes along as an outlier"

That would be good, but no such study will be forthcoming any time soon. Meanwhile we have to use this, the best data available. And the reported difference between passive and active data *supports* their claim, since it is in the usual difference range. It is not an outlier.

By Torbjörn Larsson (not verified) on 13 Oct 2006 #permalink

Why extrapolate - bodies are, I dare say, real and substantial. Go count them.

By Walter E. Wallis (not verified) on 13 Oct 2006 #permalink

I think the reaction to the paper is not primarily about the number of excess deaths. I think the reaction is about the discussion of the number of Iraqi deaths.

The administration made an explicit decision not to track civilian deaths. The cynic in me says that they did this so that they would not have to talk about it. And, for the most part, they didn't.

The first Lancet study changed that, in that it got widespread media attention, and eventually, the president estimated "30,000, more or less".

Now the new Lancet study brings the subject up again, with better methodology and a far higher number, once again pointing out "the man behind the curtain".

650,000 is a *huge* number - it's getting up close to the "million" range.

But even if the death toll were only 30,000, is that okay? They're still dead.

Walter:

While it sounds good in the abstract to just count the bodies, in practice, it's just not possible.

How do you find all the bodies before they're buried? (Remember that we're not talking about tracking all deaths in one neighborhood, but in a huge country. We don't even have an accurate count of the number of people who died in New York City last year!)

How do you know about bodies that never went to the city morgue?

How do you count bodies after a bomb is exploded, where there's just a collection of pieces left?

How do you count bodies at a site that was hit by an air raid, where anyone who tries to approach it will likely be shot?

How do you count bodies in an area where the people in control are hostile to the counters?

Getting an accurate count of the dead by counting bodies isn't particularly feasible under the best of conditions; doing it in the middle of an insurgency is pretty much impossible.

I don't understand why people dismiss this study as politically biased because of the timing. First, political bias doesn't invalidate verifiable science. Second, isn't it good practice in the US to base your political choices on solid information (as much as possible)? Should we suppress economic data like unemployment rates, budget deficits, trade deficits, etc. as well?

By Willem van Oranje (not verified) on 13 Oct 2006 #permalink

John P: Sigh. There's a well understood difference between significant figures/digits and uncertainty. Writing that some quantity is probably 523456 and in the range 123456-923456 with, say, 95% confidence implies that you were able to estimate the data point and the confidence region bounds to six digits of accuracy (even if the uncertainty does span nearly an order of magnitude around the data point). In this case, that seems absurd. What makes you think that the authors of this study were able to do anything like that? It would have been much better if the authors had done their estimates respecting significant figures and reported a result along the lines of "the number of deaths was probably 523000, and was in the range 123000-923000 with 95% confidence." To do more than this conveys a false sense of accuracy to the reader.
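
(Stated precisely, that reporting convention looks like this; the choice of three significant figures is just for illustration:)

```python
from math import floor, log10

def round_sig(x, sig=3):
    """Round x to `sig` significant figures."""
    return round(x, -int(floor(log10(abs(x)))) + (sig - 1))

estimate, low, high = 654_965, 392_979, 942_636
print(round_sig(estimate), round_sig(low), round_sig(high))
# -> 655000 393000 943000
```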

Having published academically myself, and reviewed numerous articles for publication as a peer reviewer, I think I understand quite well what will and will not pass muster in various journals. Hell, MarkCC's efforts to highlight the bad math in some journal articles shows that there are plenty of things that can be gotten past the anonymous reviewers that might not get past the blogosphere. I've seen plenty of bad mistakes (worse than this one) in articles submitted to prestigious engineering journals, and occasionally I've seen bad mistakes get published. Had I been a reviewer on this article, I certainly would have suggested that they clean this bit up.

I thank you for pointing me to the layman's version of the article, but I think I'll take a crack at reading the full one. I mean, I might as well put this aerospace engineering PhD of mine to some sort of use ;)

Koray:
There were lots of pre-existing estimates out there. IBC, the previous Lancet article, the various governments, Bush's offhand remarks, and so on. I won't list them all.

The absence of any rigorous studies should make us wonder. But not much. Everybody who cares enough to make the study has a pre-existing opinion on the war. And it's dangerous work.

Torbjörn:
On the active-passive question: the fact that people get financial benefits from reporting deaths to the morgue should make it accurate, if not inflated. More detailed study is required than our pontificating, though.

The "Best data" come out just before a US election two election years in a row from the same source. To any American, this raises political eyebrows. The words "October surprise" is a standard American political phrase to describe this phenomenon. Of course it doesn't prove anything scientifically.

I cannot disprove the Lancet article, but it would be naive of me to simply believe it. We do need to know the truth.

The only way the study can be false is by outright fraud. And one good piece of evidence that there isn't any is that in two years, nobody has detected fraud in the other study. The other study had different problems, but fraud wasn't one of them, which suggests that the people who did the study are honest.

The only way the study can be false is by outright fraud...
That's clearly not true; there are many epidemiology studies which turn out to be non-replicable, and for a whole variety of reasons. Cluster sampling is not the best methodology ever, and I think it is wrong to paint it as a perfect method. You do get systematic sources of bias, even with the best will in the world.

There are a couple of things that strike me as interesting, and I wouldn't mind help with. The first is that the new Lancet paper cites the pre-invasion Iraq death rate as 5.5 per 1000, which is roughly half of the UK death rate. I don't understand why this rate is so low; clearly, it makes a difference if you start with a very low baseline mortality rate.

Secondly, I see that the UK/US and the Iraq body counts are disparaged. But I am a little confused; if the vast majority of the participants in this cluster study have death certificates, won't they all be registered with the actual death records in Iraq? It is a little bit confusing to hear the claim that there are many more deaths than appear on the official record, but that these deaths have an "official" death certificate.

any help ?
per

'S one of the things peer review is for.
you may have an over-optimistic view of what peer-review can do; I would guess it is unusual for peer-reviewers to examine raw data, particularly for data like this, unless there is a suspicion of fraud. You will no doubt be aware that there are numerous papers that passed peer-review and transpired to be fraudulent when the original data were examined.

Active data acquisition means ..., passive data acquisition...
I am not sure that I accept the distinction you are making, nor do I accept that either would inherently be more reliable. In the UK, there is a "passive" system for recording deaths, and I would rely on that much more than I would any survey of the type used in the Lancet paper.

One of the other issues poorly considered is that the lancet study is a small-scale sampling, covering 13k people out of a population of 27 million. As I understand, the US/UK and Iraqi government counts are based on the total number of dead. So the smaller sample has greater uncertainty because of its limited sample size. The argument being deployed is that there is a systematic bias in the actual body counts which renders it useless, whereas the questionnaire counts have less systematic bias.

It is an interesting perspective.

per

One of the other issues poorly considered is that the lancet study is a small-scale sampling, covering 13k people out of a population of 27 million. As I understand, the US/UK and Iraqi government counts are based on the total number of dead. So the smaller sample has greater uncertainty because of its limited sample size.

Um, no, it's quite the opposite. A 13k sample size out of 27 million is pretty darned big. What's that, 1/20th of a percent? That's *large* for a statistical test.

Given that, cluster sampling *does* have recognized shortcomings, which Mark addresses in this post and the previous one. It still gives fairly large confidence intervals, and it tends to overestimate. But, controlling for that, one would feel safe assuming that the actual number is around the lower end of the confidence interval, or 450-500k.

And what do you mean that the US/UK/Iraqi counts are based on total number of dead? They tend to be based on actual confirmed deaths, which isn't statistical at all. They're not comparable in method at all. And they tend to be wrong in different directions - counting will underestimate, while cluster sampling will overestimate.
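
(A quick illustration of why the absolute sample size matters far more than the sampled fraction - the finite population correction for 13k out of 27 million is essentially 1; the numbers are just for illustration:)

```python
import math

def stderr(n, N, p=0.5):
    """Standard error of a proportion, with finite population correction."""
    return math.sqrt(p * (1 - p) / n) * math.sqrt((N - n) / (N - 1))

print(stderr(13_000, 27_000_000))   # ~0.00438
print(stderr(13_000, 130_000))      # ~0.00416 - even sampling 10% of the
                                    # population barely shrinks the error
```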

By Xanthir, FCD (not verified) on 14 Oct 2006 #permalink

well, xanthir, we appear to be using strong words to say the same thing!

the survey samples 1/20th of a percent of the population, and the body counts record whole population confirmed counts. That is a small number in the sample compared to the whole population.

The two different methods have different systematic bias; as you state.

yours

per

Warren:
"On the active-passive question; The fact that people get financial benefits from reporting deaths to the morgue should make it accurate, if not inflated."

That claim doesn't support your earlier claim that it is not a passive technique.

The problems of getting valid passive statistics in a conflict-filled country have been explained elsewhere. The difference between passive and active has been explained here. You need to address these questions and support your claim of higher accuracy with observations or new theory.

"The "Best data" come out just before a US election two election years in a row from the same source. To any American, this raises political eyebrows."

Correlation isn't causation. See Alon's response on your claim of falsehood.

per:
There is a problem with the baseline death rate being based on a small number of deaths. See Jud's link; the estimate is that it could mean about a 20-25% downward adjustment to the numbers. Still pretty high numbers, unfortunately.

"You will no doubt be aware that there are numerous papers that passed peer-review, and which transpired to be fraudulent when the original data were examined."

That doesn't seem likely, since as noted the reported numbers follow the expected difference between passive and active methods. (It could of course still be fraud, but the real numbers would have to be unusual to change the conclusion much.)

By Torbjörn Larsson (not verified) on 14 Oct 2006 #permalink

The first is that the new Lancet paper cites the pre-invasion Iraq death rate as 5.5 per 1000, which is roughly half of the UK death rate.
***
But I am a little confused; if the vast majority of the participants in this cluster study have death certificates, won't they all be registered with the actual death records in Iraq?

Both of those issues are discussed over here:
http://crookedtimber.org/2006/10/12/death-rates-and-death-certificates/

Short answer on death certificates: The certificate is given out locally to the next-of-kin, but doesn't affect governmental death counts until a copy makes its way to the appropriate records office. Thus almost everyone has a death certificate--which, as you point out, the next-of-kin have good reason to demand--but the government often doesn't know they do. The paper describes how this was the case even before the war, leading to gross mortality undercounts.

The argument being deployed is that there is a systematic bias in the actual body counts which renders it useless, whereas the questionnaire counts have less systematic bias.

I'm not sure "systematic bias" is the appropriate term for the shortcoming of body-count methods. It's not just that they undercount, but that we don't know by how much. If they consistently counted 1/10 as many bodies as actual deaths, that would be systematic bias and we'd be able to correct for it.

By Anton Mates (not verified) on 14 Oct 2006 #permalink

Torbjörn
There is a problem with the baseline death rate...
indeed! I followed Jud's link, but found nothing there that addresses that specific issue.
That [fraud] doesn't seem likely ...
I made no suggestion that fraud was likely here. What I did say is that peer-review is not a sufficient methodology for checking the integrity of the original data used in a study, because I believe most peer-reviewers never see the original data!

Anton Mates
I did (subsequently) read the thread at Crooked Timber, which I thought contained some good posts. Although it starts off with the assertion that the government machinery is broken only after the death certificate is issued, this quasi-omniscient statement is not supported in any way except by assertion.

I don't get your point about undercounts. If the 5.5 per 1000 is wrong, the whole paper evaporates.

per

The first is that the new Lancet paper cites the pre-invasion Iraq death rate as 5.5 per 1000, which is roughly half of the UK death rate. I don't understand why this rate is so low; clearly, it makes a difference if you start with a very low baseline mortality rate.

I need to start keeping track of how many times I've had to deal with this argument. Iraq's prewar death rate was high by regional standards; the Middle East sports some of the lowest death rates in the world (Saudi Arabia is at 2.62).

Although it starts off with the assertion that the government machinery is broken only after the death certificate is issued, this quasi-omniscient statement is not supported in any way but by assertion.

Not so--according to the study authors, "Even with the death certificate system, only about one-third of deaths were captured by the government's surveillance system in the years before the current war, according to informed sources in Iraq. At a death rate of 5/1,000/year, in a population of 24 million, the government should have reported 120,000 deaths annually. In 2002, the government documented less than 40,000 from all sources. The ministry's numbers are not likely to be more complete or accurate today."

I suppose that could all just be their assertion, though....

I don't get your point about undercounts. If the 5.5 per 1000 is wrong, the whole paper evaporates.

My undercount comment doesn't have anything to do with the 5.5 per 1000; I was merely pointing out why the error in body-count methods doesn't generally fall under "systematic bias."

As for the crude death rate...as the CT post pointed out, there's no particular reason why Iraq shouldn't have a much lower one than, say, the UK. A low death rate doesn't automatically equate to, say, high average longevity.

Check the CIA World Factbook: https://www.cia.gov/cia/publications/factbook/rankorder/2066rank.html

and you'll see that, for instance, Belgium's crude death rate is almost twice that of the Dominican Republic. Doesn't mean the latter is a nicer place to live...

By Anton Mates (not verified) on 15 Oct 2006 #permalink

Billb: Writing that some quantity is probably 523456 and in the range 123456-923456 with, say, 95% confidence implies that you were able to estimate the data point and the confidence region bounds to six digits of accuracy ...

Example: 300,001 (200,001 - 400,001). Then consider 300,000 (200,000 - 400,000). The published six-digit value is the most probable value. It is an integer. We're counting dead people. 1, 2, 3, ... N. When counting, the value 654,965 is just as significant as the integer value 1, or any other positive integer. It's stated with a five-digit spread! Round it to the nearest hundred or thousand if zeroes make you feel better. It makes absolutely no difference in the interpretation or the significance of the result. Well, perhaps to a pedant it might. If you still take issue with the stated values, write a letter to the editor of the Lancet.

Let's not lose sight of the fact that we're talking about a horrible human cost. What has my country done?

billb:

I understand (and share) your irritation with the issue of significant digits, but in the social sciences the standard practice *is* to use exact integers if the figure being measured is integral. For those of us whose training is in the physical sciences, it's jarring. But among the social science types, if they used the "round" number that you'd get by keeping the correct number of significant digits, they'd be criticized for it.

John P: Thanks for the ad hominem. I think significant figures are important, and if you think that makes me a pedant, that's fine. I still haven't had time to read the whole study (I had a bookshelving project this weekend), so I don't know if there are any more significant problems with the study. I plan to send MarkCC a critique when I do finish reading/analysing it. Perhaps he'll post it, as I'm sure this comment thread will be long since dead.

I agree that the number of deaths is actually an integer, but you shouldn't believe that just because they've reported integers, those numbers are the exact numbers that came from their analysis. I guarantee that the most likely number and the numbers involved in the confidence bound have all been rounded to the nearest integer (the odds of three statistical calculations, which at the least involve division, all ending up with integer results seem very low). So if they're already willing to round, why not report the results using a number of digits appropriate to the number of significant figures in their data and assumptions?

Let's not lose sight of the fact that we're talking about a horrible human cost. What has my country done?

I suppose that any loss of life might be considered horrible, but haven't you just begged the question? The whole point of this thread is the quality of this study. Perhaps you're considering the death toll in Iraq to be horrible independent of whether it's the US Government's number or this study's number that is correct. That may be a fair assessment, but then why argue with my nitpick of the study if you believe that already?

Please note, I haven't said anything about what my opinion on the reported number is. I do think that someone else should go over the study with a fine-toothed comb when someone like MarkCC, who usually does this so well, can't find a single bad thing to say about it.

per:
"I followed Jud's link, but found nothing there that addresses that specific issue."

??? "Baseline population numbers. In a prior study, the same authors obtained estimates for the populations of the various Governorates as of January 2004 from the Iraqi Ministry of Health. In the current study, they use estimates as of mid-2004 taken from a joint Iraqi/UN population study. The population numbers in the two studies are quite different ... Why is that important? There are a couple of reasons. First, these population figures were used to determine how many clusters to examine in each Governorate. ... The authors do not explain why they used the highest of the three total population estimates, and they do not mention that the other two exist. ... Given the total lack of statistical significance, I don't think using the difference in non-violent death rates to calculate "excess" was justified. Excluding those deaths lowers the number of excess deaths by about 54,000. Excluding those deaths and using the lower population levels lowers the number by well over 100,000."

"I made no suggestion that fraud was likely here."

!!! "which transpired to be fraudulent"

"What I did say is that peer-review is not a sufficient methodology for checking the integrity of the original data used in a study, because I believe most peer-reviewers never see the original data !"

I don't see your point. If the methodology used isn't good enough, the paper should be refused, or criticised as incomplete later. This is what the original comment addressed, BTW.

Of course, the original data should always be available on request if fraud is suspected. If the dataset is unique and/or expensive to produce, as it is here, the authors/owners may want to make more studies based on it, so they are likely guarding it against premature requests made on other grounds.

By Torbjörn Larsson (not verified) on 16 Oct 2006 #permalink

As to the discussion above, the number of ways in which such a study can go wrong, and by orders of magnitude, may be an integer, or may not be, but it is significant. For instance:

1) The highly non-random distribution of deaths in the kind of war being fought in Iraq can magnify any sampling errors.

2) The study acknowledges difficulties - practical and cultural - in acquiring a valid sample, but the authors may tend to underestimate the impact, at least in part due to political bias (see below).

3) As has been pointed out by others, assumptions about the baseline (how many deaths per 1,000 was normal for pre-war Iraq) are highly dubious, and are critical to all conclusions on numbers of excess deaths.

Iraq Body Count - which has anything but a pro-US/pro-war bias - has published an examination of the study's conclusions which, at the least, strongly suggests that something is very wrong with how the researchers produced their results. See http://www.iraqbodycount.org/press/pr14.php

The strong, fiercely proclaimed bias of the study's sponsors (the Lancet editor can be fairly described as an extreme opponent of the war) further tends to raise suspicions that the results were procured, and at least that the study may have been partly if not wholly contaminated.

In short, studies based on similar methodologies and larger samples have produced radically dissimilar results. Studies based on alternative methodologies and larger samples have produced radically dissimilar results. There is no good reason at this point to accept the Lancet study as definitive, or even as very useful.

"studies based on similar methodologies and larger samples have produced radically dissimilar results"

Yes, they refer to a UNDP report that was made on 2004 statistics by the Iraq Ministry of Planning and Development Cooperation, with much larger samples and lower numbers. The Lancet report discusses such studies: "An Iraqi non-governmental organisation, Iraqiyun, estimated 128 000 deaths from the time of the invasion until July, 2005, by use of various sources, including household interviews. ... There also have been several surveys that assessed the burden of conflict on the population. These surveys have predictably produced substantially higher estimates than the passive surveillance reports."

They have some arguments one should consider.

BTW, I like their conclusion:
"In December 2005 President George Bush acknowledged 30,000 known Iraqi violent deaths in a country one tenth the size of the USA. That is already a death toll 100 times greater in its impact on the Iraqi nation than 9/11 was on the USA. That there are more deaths that have not yet come to light is certain, but if a change in policy is needed, the catastrophic roll-call of the already known dead is more than ample justification for that change."

By Torbjörn Larsson (not verified) on 16 Oct 2006 #permalink

As has been pointed out by others, assumptions about the baseline (how many deaths per 1,000 was normal for pre-war Iraq) are highly dubious, and are critical to all conclusions on numbers of excess deaths.

As has been pointed out by me so many times that I might just scream, Iraq's prewar death rate was completely normal for a Middle Eastern country.

In short, studies based on similar methodologies and larger samples have produced radically dissimilar results.

If you're talking about the UNDP study, it didn't measure death rates. It asked people if any household member died from a war-related cause. It's too vague to get a definitive death rate.

"It asked people if any household member died from a war-related cause."

The questionaire asks under Mortality a set of questions which ends with:
"What was the cause of ... 's death/... going missing?
1 Disease
2 Traffic accident
3 War related death
4 During pregnancy, childbirth or within 40 days after
5 Other (specify)
98 DK
99 NA"
( http://www.fafo.no/ais/middeast/iraq/imira/IMIRA%20Household%20English… )

It is not explicitly cause-blind, but it isn't exclusively war-related either. The UNDP report however only gives a figure for the war-related death subset. It was also not adjusted for households where everyone died.

By Torbjörn Larsson (not verified) on 16 Oct 2006 #permalink

This gives an action plan to those who have problems with the Lancet report - they should ask *UNDP* for the source material to make a more reasoned statistic for excess (or total) deaths to compare with.

By Torbjörn Larsson (not verified) on 16 Oct 2006 #permalink

Umm. I just remembered that it is bad practice to synthesize data outside a questionnaire as I suggested, and it should not be accepted. (Perhaps in a meta-analysis of these studies, though?)

By Torbjörn Larsson (not verified) on 16 Oct 2006 #permalink

The strong, fiercely proclaimed bias of the study's sponsors (the Lancet editor can be fairly described as an extreme opponent of the war) further tends to raise suspicions that the results were procured, and at least that the study may have been partly if not wholly contaminated.

'tends to raise'? Is it weasel season already? At least have the balls to accuse the researchers, peer reviewers and publishers of fiddling the numbers outright. I believe the libel laws in England & Wales have just been relaxed to make things harder for plaintiffs, but it's about time that the 'bordering on' and 'tends to raise suspicions' types had the courage of their convictions.

Sheesh - libel laws? I suppose any criticism of the study or its authors that doesn't hew to the party line is streng verboten? Get a grip.

The reason I use the phrase "tends to raise suspicions" is that I'm trying to give the authors of the study the benefit of whatever doubt. There is absolutely no doubt about the stance of Dr. Richard Horton, LANCET's Editor-in-Chief. Or does the following stand as moderate and neutral in today's UK?

http://www.youtube.com/watch?v=v7BzM5mxN5U&eurl=

Honestly, how would you expect the man in that video to react to a study that did anything other than strongly support his position?

Much more needs to be known about how the study was actually conducted - on the ground, in real life, not in theory. From the outside, it's very hard to see how the interview process and the collection of data could be insulated from bias and error. There has already been much discussion on this question, but I for one have not yet seen a satisfactory explanation of how it was even possible to get answers and results that weren't highly susceptible to manipulation or systematic error at each stage of the process. The authors are not releasing their raw data and logs, and those who find the study's extraordinary claims difficult to swallow are left merely to speculate about how, for instance, interviewers working on such a sensitive subject in such difficult circumstances could have achieved an incredibly low refusal rate among interviewees.

There are many other aspects of the study that give one pause, even without reference to the conclusions and the manner in which they have been presented. But even assuming that such a study could be executed soundly given current conditions in Iraq (and not just in Iraq), all we have to depend on are implicit assurances from the authors that their methods were sound and were properly applied. Choosing to trust those assurances, to opine that the study "is correct," and to accept that its conclusions are trustworthy (or even that they are meaningful except as an expression of pre-existing biases) is at this point in essence a political decision - something which the general reaction to the study makes clear.

colin:
You seem to imply that the study was sponsored by Lancet. I can't find any evidence for that.

The study says:
"We declare that we have no conflict of interest. ... Funding was provided by the Massachusetts Institute of Technology and the Center for Refugee and Disaster Response of the Johns Hopkins Bloomberg School of Public Health."

You seem to imply that Lancet has special interests. I can't find any evidence for that either:
"The journal was, and remains, independent, without affiliation to a medical or scientific organisation. More than 180 years later, The Lancet is an independent and authoritative voice in global medicine."

You seem to imply that discussing a study's methods and finding them correct is a political decision. This is of course not so. This argument seems to say more about your own interests in this question.

By Torbjörn Larsson (not verified) on 17 Oct 2006 #permalink

You were expecting, Mr. Larsson, to find statements to the effect of "Whatever our past history, nowadays we are subject to control by anti-US zealots"?

It's obvious that LANCET's Editor-in-Chief has very strong political biases. He's hardly been shy about them. It's also true that it is neither LANCET's nor Johns Hopkins' first go-around on this issue. If LANCET has been taken over, in whole or in part, by its politically committed E-in-C and his presumable supporters, it would hardly be the first time that a formerly respectable "gatekeeper" organization was hijacked by individuals or groups whose agendas, or what they themselves may see as higher callings, superseded any sense of responsibility to those organizations.

To argue that scientific or supposedly scientific studies can be distorted by political bias is simply to state the obvious. Radical critics, more typically from the left, going back to the inventor of the term "bourgeois science," will argue that virtually all scientific discourse - as well, inevitably, as scientific practice - is distorted by ideology. That is not my stance here, however. Nor do I rest on the argument that LANCET has been in effect captured by ideologues - though I believe it's quite within the realm of possibility. I am stating (and not "implying") that in my opinion, in this narrow instance, given the type of study and the context in which it was performed and then presented, there is much reason to question it on every level - its concept, its methods, its conclusions, its presentation; that it is far too early to accept its findings; and that to do so is indeed, at this point, a political choice contrary to decent scientific skepticism, not to mention any reasonable experience of the world and the human beings who populate it.

"You were expecting, Mr. Larsson, to find statements to the effect of "Whatever our past history, nowadays we are subject to control by anti-US zealots"?"

I was expecting to find support for your claims. I can't, and it seems you can't either.

By Torbjörn Larsson (not verified) on 17 Oct 2006 #permalink

"I was expecting to find support for your claims."

I'm sorry - but that's just silly. You might as well go to the Moon to look for an elephant.

I find it more likely that the study's authors - and perhaps as or more significantly its actual on-the-ground interviewers - in some way tailored their work, consciously or not, to anticipated reception and expected political uses, than that all other estimates of "excess deaths" in Iraq and particularly of violent deaths have been off by a factor of 10 or more. When I say I find it more likely, that doesn't mean that I'm unwilling to look at the evidence and methodology, but at this point I find Iraq Body Count's arguments persuasive on this score, and I don't believe that defenders of the study have yet come close to addressing the main questions that have been posed about how it was conducted and whether the statistical model really could have been properly applied.

If an old scholarly or professional journal in the U.S. happened to be run by a committed rightwing ideologue - a man who had been videotaped ranting and raving to adoring crowds of American conservatives - and published a study whose shocking results just so happened to coincide with the American conservative agenda, and also happened to be contradicted by all other studies of the same subject, are you telling me you'd be satisfied with references to the journal's proud tradition? I don't know you, Mr. Larsson, from Adam, but I'm confident that many of LANCET's most ardent defenders in the current controversy would be the first to attack the dishonest scheming rightwing neocons for having perverted science and academia for the sake of their nefarious purposes.

"I'm sorry - but that's just silly. You might as well go to the Moon to look for an elephant."

In such a case your claims have no value.

By Torbjörn Larsson (not verified) on 18 Oct 2006 #permalink

"In such a case your claims has no value."

I haven't the slightest idea what you're talking about or think you're talking about. You seem to believe that an organization or sponsoring organization's mission statement will tell you everything you need to know about the organization, and in the meantime you're willing to ignore everything and anything else. Good luck in the world with such a trusting attitude.

Here's a much more direct critique of the study just out today in the WALL STREET JOURNAL:

http://www.opinionjournal.com/editorial/feature.html?id=110009108

For example:

"[In]'Mortality after the 2003 invasion of Iraq: a cross-sectional sample survey,' the Johns Hopkins team says it used 47 cluster points for their sample of 1,849 interviews. This is astonishing: I wouldn't survey a junior high school, no less an entire country, using only 47 cluster points.

Neither would anyone else. For its 2004 survey of Iraq, the United Nations Development Program (UNDP) used 2,200 cluster points of 10 interviews each for a total sample of 21,688. True, interviews are expensive and not everyone has the U.N.'s bank account. However, even for a similarly sized sample, that is an extraordinarily small number of cluster points. A 2005 survey conducted by ABC News, Time magazine, the BBC, NHK and Der Spiegel used 135 cluster points with a sample size of 1,711--almost three times that of the Johns Hopkins team for 93% of the sample size.

What happens when you don't use enough cluster points in a survey? You get crazy results..."

and

"Dr. Roberts said that his team's surveyors did not ask demographic questions. I was so surprised to hear this that I emailed him later in the day to ask a second time if his team asked demographic questions and compared the results to the 1997 Iraqi census. Dr. Roberts replied that he had not even looked at the Iraqi census.

"And so, while the gender and the age of the deceased were recorded in the 2006 Johns Hopkins study, nobody, according to Dr. Roberts, recorded demographic information for the living survey respondents. This would be the first survey I have looked at in my 15 years of looking that did not ask demographic questions of its respondents. But don't take my word for it--try using Google to find a survey that does not ask demographic questions.

"Without demographic information to assure a representative sample, there is no way anyone can prove--or disprove--that the Johns Hopkins estimate of Iraqi civilian deaths is accurate."

If the study is as extraordinarily slipshod as these and other criticisms, along with common sense, suggest, then the real question is why LANCET would lend its imprimatur to such work - and for a second time. The politicization of LANCET's editorial policy, beginning with the Editor-in-Chief, is the only reasonable explanation so far advanced.

"In such a case your claims has no value."

I haven't the slightest idea what you're talking about or think you're talking about. You seem to believe that an organization or sponsoring organization's mission statement will tell you everything you need to know about the organization, and in the meantime you're willing to ignore everything and anything else. Good luck in the world with such a trusting attitude.

Here's a much more direct critique of the study just out today in the WALL STREET JOURNAL:

http://www.opinionjournal.com/editorial/feature.html?id=110009108

For example:

"[In]'Mortality after the 2003 invasion of Iraq: a cross-sectional sample survey,' the Johns Hopkins team says it used 47 cluster points for their sample of 1,849 interviews. This is astonishing: I wouldn't survey a junior high school, no less an entire country, using only 47 cluster points.

Neither would anyone else. For its 2004 survey of Iraq, the United Nations Development Program (UNDP) used 2,200 cluster points of 10 interviews each for a total sample of 21,688. True, interviews are expensive and not everyone has the U.N.'s bank account. However, even for a similarly sized sample, that is an extraordinarily small number of cluster points. A 2005 survey conducted by ABC News, Time magazine, the BBC, NHK and Der Spiegel used 135 cluster points with a sample size of 1,711--almost three times that of the Johns Hopkins team for 93% of the sample size.

What happens when you don't use enough cluster points in a survey? You get crazy results..."

and

"Dr. Roberts said that his team's surveyors did not ask demographic questions. I was so surprised to hear this that I emailed him later in the day to ask a second time if his team asked demographic questions and compared the results to the 1997 Iraqi census. Dr. Roberts replied that he had not even looked at the Iraqi census.

"And so, while the gender and the age of the deceased were recorded in the 2006 Johns Hopkins study, nobody, according to Dr. Roberts, recorded demographic information for the living survey respondents. This would be the first survey I have looked at in my 15 years of looking that did not ask demographic questions of its respondents. But don't take my word for it--try using Google to find a survey that does not ask demographic questions.

"Without demographic information to assure a representative sample, there is no way anyone can prove--or disprove--that the Johns Hopkins estimate of Iraqi civilian deaths is accurate."

If the study is as extraordinarily slipshod as these and other criticisms, along with common sense, suggest, then the real question is why would LANCET lends its imprimatur to such work - and for a second time. So far, the politicization of LANCET's editorial policy, beginning with the Editor-in-Chief, is the only reasonable explanation so far advanced.

"Here's a much more direct critique of the study just out today in the WALL STREET JOURNAL:"

And here are critiques of that critique.

http://www.stats.org/stories/did_wsj_flaw_iraq_oct18_06.htm
http://scienceblogs.com/deltoid/2006/10/flypaper_for_innumerates_wsj_e…

Among other things, Tim Lambert points out in the second link that

a) the WSJ author, Steven Moore, is simply wrong about Roberts et al. not recording the gender of living subjects. Which he is--it's right there in the original paper. And

b) Moore himself has previously recommended running surveys in Iraq with only 75 cluster points, so it's a bit rich for him to snark about 47 points being insufficient even for a "junior high school".

And while we're accusing people of political bias, shall we mention that Moore's employed by a Republican consulting agency, and spent the better part of a year as Ambassador Paul Bremer's PR consultant in Iraq? Yes, obviously he'd be far more objective and reliable on this issue than those anti-war fanatics at the Lancet.

http://www.prweb.com/releases/2004/10/prweb164029.htm

By Anton Mates (not verified) on 18 Oct 2006 #permalink

"I haven't the slightest idea what you're talking about or think you're talking about."

Very likely, since after I said that I could not find any evidence for your claims you have merely repeated them. That you openly state your confusion is the only reason I comment one more time.

BTW, the editor of a magazine is not only free to express his personal opinions in an editorial; he is supposed to say interesting stuff.

Now you bring in some new critique, from another Iraq surveyor.

The point about few cluster points is noteworthy. I don't know how to evaluate that, except to note that people above with experience of such surveys have vouched for the method, that the inaccuracy is stated, and that even the study's lowest death count is larger than the highest count from earlier surveys.
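One rough way to get a feel for it is the textbook design-effect arithmetic. Here is a minimal sketch in Python, where the sample size and cluster count are the figures quoted from the WSJ piece but the intra-cluster correlations are purely illustrative guesses, not anything from the study:

```python
# Design-effect back-of-envelope: DEFF = 1 + (m - 1) * rho, where m is
# the average cluster size and rho the intra-cluster correlation.
# The rho values are illustrative guesses, NOT the study's numbers.
n, clusters = 1849, 47      # figures quoted from the WSJ piece
m = n / clusters            # roughly 39 households per cluster

for rho in (0.01, 0.05, 0.10):
    deff = 1 + (m - 1) * rho
    print(f"rho={rho:.2f}: DEFF={deff:.2f}, effective n={n / deff:.0f}")
```

Even with quite strong clustering the effective sample size stays in the hundreds, which is presumably part of why the study's confidence interval is as wide as it is.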

The comparison with other surveys' cluster numbers doesn't say anything. (And the UNDP, at least, was asked for and did another type of survey, BTW.) The same goes for the demographic information; it didn't figure into the Lancet study's methods, IIRC.
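For what demographic questions would normally buy a surveyor, here is post-stratification reweighting in miniature - every number below is invented purely for illustration, and nothing here comes from the Lancet study itself:

```python
# Post-stratification in miniature, with invented numbers: reweight the
# sample's strata to match known census shares. This is the standard use
# of demographic questions; none of these figures come from the study.
sample = {                   # stratum: (share of sample, observed rate)
    "male":   (0.60, 0.08),
    "female": (0.40, 0.03),
}
census = {"male": 0.49, "female": 0.51}  # hypothetical population shares

raw = sum(share * rate for share, rate in sample.values())
adjusted = sum(census[s] * rate for s, (_, rate) in sample.items())
print(f"raw estimate: {raw:.3f}, post-stratified: {adjusted:.3f}")
```

The adjustment matters when the sample's composition drifts from the population's; whether that was a real risk here is exactly what's being argued about.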

I'm sure the debate about the study and its results will continue for a long time.

By Torbjörn Larsson (not verified) on 19 Oct 2006 #permalink

On the cluster size--the author of the WSJ piece was Steven Moore, and Tim Lambert on Deltoid points out that Moore himself recommended 75 clusters as adequate for a poll of Iraq, which makes his snark about 47 not being enough for "junior high school" a bit rich.

Deltoid also has links to two of the authors' remarks on the study--they deal further with the question of cluster number. Oh, and there was a Nature piece supporting the methodology as well.

http://scienceblogs.com/deltoid/2006/10/nature_iraqi_death_toll_withst…
(Assorted other recent Deltoid posts cover the study, but I can't link 'em all for fear of auto-moderation.)
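For intuition on what the cluster number actually does, here's a toy simulation - parameters entirely invented, not the study's data - whose three designs roughly mirror the cluster and household counts quoted in the WSJ piece:

```python
# Toy cluster-sampling simulation with invented parameters (NOT the
# study's data): each cluster gets its own local death rate, and we
# watch how the number of clusters changes the spread of the estimate.
import random

def one_survey(n_clusters, households, base=0.05, spread=0.04):
    hits = total = 0
    for _ in range(n_clusters):
        local = max(0.0, random.gauss(base, spread))  # cluster-level rate
        for _ in range(households):
            hits += random.random() < local
            total += 1
    return hits / total

random.seed(1)
for k, m in ((47, 40), (135, 13), (2200, 10)):  # designs quoted above
    runs = [one_survey(k, m) for _ in range(200)]
    avg = sum(runs) / len(runs)
    sd = (sum((r - avg) ** 2 for r in runs) / len(runs)) ** 0.5
    print(f"{k:>4} clusters x {m:>2} households: mean={avg:.3f}, sd={sd:.4f}")
```

The point estimates come out about the same across designs; only the scatter differs - and that scatter is exactly what the reported confidence intervals are supposed to capture. Fewer clusters means less precision, not a biased answer.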

And while we're complaining about the political bias of the Lancet editor-in-chief, maybe we should mention that Steven Moore himself works for a Republican consulting firm, and that he spent most of 2004 as Ambassador Paul Bremer's PR advisor in Iraq? That seems relevant somehow.

By Anton Mates (not verified) on 19 Oct 2006 #permalink

For those who provided the links to the Deltoid and STATS.org discussions, thanks. If you read through the comments at http://scienceblogs.com/deltoid/2006/10/flypaper_for_innumerates_wsj_e… you'll see a reply from Steven Moore, the WSJ op-ed writer, in which he politely and patiently responds directly to misunderstandings about his points on demographic controls and numbers of clusters, and about his own background and expertise.

As for the other commentaries and posts at those links, I don't accept for a second that they provide significant support for the Johns Hopkins/LANCET study. Indeed, on the whole they raise additional questions whose import is sometimes masked by the reticence of those asking them. The material also doesn't begin to answer the points raised by Iraq Body Count or by other critics of the study. Most laughable to me is the non-defense defense that, because a given set of controls (like a trustworthy census or a valid mortality baseline) or necessary conditions (like sufficiently safe and bias-free researchers) may not exist, we should just barge ahead anyway and hope that the results are still useful - perhaps while indignantly dismissing any and all critics as pathetic rightwing "innumerates."

And, to Mr. Larsson, I still don't get what your point is supposed to be. If you're asking for further evidence of possible political bias on the part of researchers, you can also look to the affiliations of the Johns Hopkins study authors, who apparently have admitted their political motivations, and have been "outed" as contributors to Democratic Party candidates. Les Roberts, author of the 2004 JH/LANCET report which this report is supposed to have improved upon, was himself a recipient of some of those contributions when he subsequently ran for the US Congress as a Democrat.

By itself, such information wouldn't disqualify the individuals or their work at all, in my opinion, but it does become relevant when we are asked simply to trust the conclusions of a study whose procedures and assumptions remain in crucial parts a mystery or admittedly guesswork, and whose extraordinary conclusions are released near the peak of a hard-fought political campaign, in a magazine edited by a full-throated ideologue and repeat offender.

"we should just barge ahead anyway and hope that the results are still useful"

Mark's reasoning is that the result is typical for cluster studies vs. body counts. If there is no problem with the study, it should be accepted - unless you want to make politics out of it.

"If you're asking for further evidence of possible political bias on the part of researchers"

I have pointed to three specific claims of yours which I can't find support for, and asked you where it is.

None of those earlier claims of yours was that the researchers themselves have a political bias; I believe you introduce this claim here.

Unfortunately, I can't follow your reasoning. "the Johns Hopkins study authors, who apparently have admitted their political motivations, and have been "outed" as contributors to Democratic Party candidates."

"Outed" is unfamiliar to me, likewise "contributors" to a party. According to Wikipedia "outing" is "taking someone "out of the closet"". Does this mean that they have admitted to financial or practical contribution to a party?

Nevertheless, I don't see why researchers' affiliation with a party is a problem for a scientific study and its conclusions; if they are good citizens, they should at least vote. Are you accusing them of scientific falsehood? Perhaps you should prepare a paper that disproves theirs, then.

It would be another matter if they had got party money to make the study. That would indeed cast suspicion on the why and what of it.

(OT: The idea that any US party politics makes much of a difference seems a little weak to me, since both parties are well to the right compared to the broad spectra I'm used to.)

"repeat offender"
Not that it has any relevance to the study in question, but whom is the editor offending, and why? Is it because he has a political view you don't like? If so, why should we care?

By Torbjörn Larsson (not verified) on 21 Oct 2006 #permalink

Mr. Larsson, I still can't understand what makes you believe you've substantially addressed any claim of mine. I give evidence of the political position and affiliation of LANCET's Editor-in-Chief and of the authors of the 2004 and 2006 studies, and you point to a declaration on the part of the researchers that they have no "conflict of interest" or a statement from somewhere else that affirms LANCET's history of "independence." Conflict of interest is usually employed much more narrowly and specifically, and - I guess this may shock you - individual human beings have in the past been known to lie to other human beings and quite often to themselves, or, to be charitable, they may simply see things differently than other people do! Imagine that! Someone might say he's impartial, or operate under a statement of impartiality, but not actually be impartial! I hope you don't find these revelations too disturbing...

Likewise, if the Editor-in-Chief has in my opinion been misusing his position of influence, I wouldn't expect him to have edited the LANCET's mission statement or masthead to reflect the fact. Indeed, I would expect him, short of a conversion to the other side, to deny it until his dying day.

The Democratic Party in the US includes many people who would probably be very comfortable in your "spectra." Since you apparently don't understand much about American politics, I will further explain to you that American campaigns are financed largely by direct monetary contributions within limits. They are an indicator, obviously, of political affinity, though certainly not, as I conceded, disqualifying.

However, you don't need to take my word for it or go on suspicion or guilt by association. The authors themselves have admitted that they timed the release of the report politically - something which, obviously, they could not have accomplished without LANCET's "independent" cooperation. You'll have to go to the end of the video at the following link to hear Dr. Burnham admit that he and his colleagues wanted to get the report out before the current elections in the United States "if at all possible." http://www.youtube.com/watch?v=pMlAcHKFc7w

Dr. Burnham no doubt believes that the purpose he's serving is very important - important enough to allow his integrity to be drawn into question.

As for LANCET's Dr. Horton, perhaps it is your lack of familiarity with American politics or the English language, but he is a "repeat offender" because his journal pulled the same trick just prior to the 2004 Presidential election.

In short, the whole approach of LANCET and the Johns Hopkins team totally stinks, and if you can't smell it, it's because you have no political nose, or because you've been snorting something else. I suppose it's possible that this is all just coincidence, and that these individuals happened to do good science and, in the case of LANCET as publisher, good scientific journalism, but public pronouncements and personal histories do not suggest that these men are political innocents - far from it.

In many cases the defenders of the study - as with those Deltoid and STATS.org links above - seem unable to confront what their own arguments seem to be saying - as for instance regarding the baseline uncertainties and the lack of sampling controls. Maybe they're too comfortable in their "spectra" to confront the obvious: The study is full of holes, was politically timed, and reaches incredible conclusions based on slipshod and highly questionable (where not conveniently mysterious) procedures. If the idea was to have the study taken seriously, the authors and publisher would have bent over backwards to insulate themselves from the appearance of bias. Instead, as previously, they've rushed their work out in hope of maximizing its political effect - and then their defenders cry foul when the authors and publisher are accused of political bias.

The whole thing reflects badly on everyone involved, from Iraq to London to Washington DC. If you care about the reputation and influence of science and its leading institutions, you should be outraged. I suspect instead that you're falling back even now on intellectually bankrupt equations like the one from the IBC reply that you liked so much - the meaningless comparison of the number of people killed on 9/11 with the number of casualties attributable to the Iraq War.

"Mr. Larsson, I still can't understand what makes you believe you've substantially addressed any claim of mine."

Good, since now I can stop this discussion, which goes nowhere fast.

I asked you to support your three earlier claims, which you have not done.

I have addressed your new claim sufficiently for my purposes: political affiliations of the editor and researchers aren't a problem elsewhere, so they aren't a problem here either.

You differ in your opinion, which is your prerogative - but it is only an opinion, and you would have to show that it is a real problem before it could support any conclusion other than that the report is fine so far. You haven't done that either.

By Torbjörn Larsson (not verified) on 22 Oct 2006 #permalink

So colin, are you seriously implying that scientific researchers can't contribute money to a political party without calling their work into question? That journal editors can't express political opinions without calling every article into question (or at least those with political ramifications)?

If so, this argument doesn't merit a serious response -- its premises are garbage. Science is successful because it is a process that roots out bias, error, and deception via careful analysis and attempted repetition of published work. In the world of science, noting that a researcher has a particular affiliation is not a refutation of a result.

"So colin, are you seriously implying that scientific researchers can't contribute money to a political party without calling their work into question? That journal editors can't express political opinions without calling every article into question (or at least those with political ramifications)?"

Well, not exactly. Apparently their political activities only call their work into question if they're anti-war. Someone like Steven Moore, who's pro-war...and a partner in the Republican political consulting firm Gorton Moore...and did PR work for the current administration in Iraq...and ran the now-defunct pro-war site thetruthaboutiraq.org...and so far as I can tell has no peer-reviewed studies in epidemiology or any other area...well, he's obviously a credible and reliable critic of the study.

Even when his objections boil down to "Well, I wouldn't do it that way and trust me, I'm an expert. Also the authors secretly told me they didn't take various kinds of data even though the paper says they did and trust me, would I lie to you?"

By Anton Mates (not verified) on 23 Oct 2006 #permalink

"However, you don't need to take my word for it or go on suspicion or guilt by association. The authors themselves have admitted that they timed the release of the report politically - something which, obviously, they could not have accomplished without LANCET's "independent" cooperation. You'll have to go to the end of the video at the following link to hear Dr. Burnham admit that he and his colleagues wanted to get the report out before the current elections in the United States "if at all possible.""

If somebody has factual information that might be relevant to people's decision-making in an election, isn't he ethically and morally obligated to make that information public before the election if at all possible? On the other hand, a reasonable objection could be raised if it could be shown that somebody withheld information to increase its political impact by releasing it closer to election day. However, that doesn't seem to be the case here. If anything, it looks like the publication received expedited review.