Lancet-debunking cargo cult

Daniel Davies summarizes what is wrong with David Kane's criticism:

The mathematical guts of the paper is that under certain assumptions, the addition of the very violent cluster in Fallujah can add so much uncertainty to the estimate of the post-invasion death rate that it stretches the bottom end of the 95% confidence interval for the risk ratio below 1. From this, David Kane concludes that the paper was wrong to reject the hypothesis that the Iraq War had not made things worse.

Let's back up and look at that again. Under David Kane's assumptions, the discovery of the Fallujah cluster was a reason to believe that things might have gone better in Iraq. This clearly means that these were the wrong assumptions.

The statistical problem here is basically that people can't come back from the dead. The Fallujah datapoint increases the uncertainty of the estimate, but it doesn't increase it in both directions, because there is no way that you could find an "anti-Fallujah" (a datapoint which brought the overall average down by as much as real Fallujah brought it up), because such a place would need to have a negative death rate.

And looking at the charts in David's paper, it's clear to see that the reason why the left edge of his estimate of the risk ratio has been dragged below 1 is that a substantial part of the distribution of his Bayesian estimate of the post-war death rate is below zero (and an even more substantial part is in regions of positive but wildly improbable death rates like one or two per 100K). That's all there is to it, CT readers; the majority of the rest of the Deltoid thread consists of three or four people trying to explain that the Roberts et al. paper doesn't make the same mistake.
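
Davies's point is easy to check with a toy calculation. The numbers below are invented purely for illustration (they are not the Lancet cluster data), but they show how a single Fallujah-sized outlier in a small sample drags the textbook normal-approximation confidence interval below zero, an impossible value for a death rate:

    import numpy as np

    # 32 made-up cluster death rates (per 1,000 per year) plus one
    # Fallujah-like outlier. Illustrative numbers only, not real data.
    ordinary = np.array([4, 5, 6, 7, 8, 9, 10, 11, 12, 13] * 3 + [6, 8],
                        dtype=float)
    with_outlier = np.append(ordinary, 500.0)

    def normal_ci(x):
        """Mean +/- 1.96 standard errors: the textbook normal recipe."""
        se = x.std(ddof=1) / np.sqrt(len(x))
        return x.mean() - 1.96 * se, x.mean() + 1.96 * se

    print(normal_ci(ordinary))      # roughly (7.4, 9.4): sensible
    print(normal_ci(with_outlier))  # lower bound goes negative: nonsense

And no "anti-Fallujah" cluster could ever pull the interval the other way, because in this toy example it would need a death rate of about minus 480 per 1,000.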

Gavin M comments:

Look for new claims by the usual right-wing foundation hacks that the Lancet study has been 'definitively debunked,' and chalk up another petty, rigged 'victory' to the distributed intelligence of the WingNet.

And sure enough, look at all these posts. I think my favourite is this one from John Ray:

They obviously did not do a proper peer review on the paper before they accepted it and that may well be because they were so out of their field that they did not know how to do a proper review of a survey research paper. They should stick to medicine.

It may be noted that the authors of the paper refuse to release their raw data -- a most unusual thing for scientists to do. It suggests that the "research" was a fraud from beginning to end -- rather like the Mann "hockeystick" finding in climate science -- a finding that the IPCC no longer mentions!

Something Michelle seems to have overlooked, however, is that there seems to have been an attempt by Lancet to redact what they originally published. The paper Michelle cites is a version that says only 100,000 Iraqis died. Whereas the original paper said that 654,965 Iraqis died. Is this an admission? Does even Lancet now concede that they goofed? The plot thickens!

Let's see, epidemiology is part of medicine, the hockeystick is not a fraud, the IPCC still mentions it, and there have been two different surveys on Iraqi deaths covering different time frames published in the Lancet. So much wrong in such a small space...

I did wonder if Kane's argument could be summarised as saying that Fallujah shows that, in Iraq, from time to time cities undergo a massive death toll. Since we have no idea what causes that, we have to assume that this sometimes happened under Saddam, but Roberts et al failed to catch any examples. The uncertainty associated with these random mass deaths outweighs the effects of any relatively low-level increase in general mortality.

But, if I understand the quote from Davies above aright, even that preposterous fig-leaf is denied him by his own figures.

The thought that actual Iraqi people may read these threads makes me squirm.

Initially, I was not unhappy to see David Kane's efforts as long as they provided an honest critique of the Lancet papers. Also, there did not seem to be much in the content to worry the original survey team - Daniel Davies is right, if dismissive.

In fact, David Kane has toned down the triumphalist "Gotcha" note he sounded in his original paper. Now I can see where that was coming from. From his point of view, it will make him a nine-day hero in the right-wing media. But I don't think he has converted a single sceptic.

What David Kane is pointing out to you ignorant liberals is that right now as we speak a Zombie Army is being raised in Iraq.

The huge point that Davies makes is that if you assume the distribution is symmetric, some of it predicts negative absolute death rates per 100K. Therefore Kane's paper needs to be a) substantially modified, b) ridiculed.

Toby, would that this were true:
"From his point of view, it will make him a nine-day hero in the right-wing media."

In fact, it has already made him a nine-year hero, at least.
Does anyone expect Michelle Malkin & Co. to print corrections, let alone a retraction? Instead she writes:
"We saw this with typography experts during the Rathergate scandal; Photoshop experts during the Reutersgate debacle; and military experts during the Jesse Macbeth unmasking... He'll be presenting the paper at the Joint Statistical Meetings in Salt Lake City on Monday -- the largest conference of statisticians in North America."
And one of the only places, even in the United States, where he could get a favorable audience for this nonsense.

In the comments, worse:
http://michellemalkin.com/2007/07/25/document-drop-a-new-critique-of-th…
Read, weep.

By Janus Daniels (not verified) on 27 Jul 2007 #permalink

I still don't grasp the argument. Sure, the math may push the lower confidence limit down if you include Falluja, but so what? That only means that the basic model to which the math adheres is fallacious when you include Falluja. One of those "Your data may be true in reality, but it is wrong in theory" things. Just dumping Falluja in and turning the crank rests on the assumption that the numbers for Falluja and for the rest of Iraq are all products of the same process, just different by "luck of the draw"; when in fact, as pointed out explicitly in the last thread, here in reality we know many constraints that the mathematical function cannot know: for instance, that Falluja is in fact the product of a very different process. It's like taking the average of the weights of everyone in town, and the Moon. You'll get some nice numbers and all, but they don't mean diddly.
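
The town-and-Moon average is worth actually computing, for anyone who doubts how badly one point from a different process can poison a mean. Rough illustrative figures only (75 kg per person, a standard Moon mass):

    import numpy as np

    town = np.full(10_000, 75.0)         # everyone in town, in kg
    with_moon = np.append(town, 7.3e22)  # add the Moon as one more data point

    print(town.mean())       # 75 kg: sensible
    print(with_moon.mean())  # ~7.3e18 kg: a nice number that means diddly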

If there is a simple way to explain this:

What does our knowing there cannot be a negative death rate have to do with statistics (which presumably don't know that)?

And isn't the negative death rate he obtains part of his criticism, not a criticism of his criticism?

Since a lot of inexpert folks are trying to understand this, it seems worth explaining, if you have the time and inclination.

By Honest question (not verified) on 27 Jul 2007 #permalink

Tim - "Let's see, epidemiology is part of medicine, the hockeystick is not a fraud, the IPCC still mentions it, and there have been two different surveys on Iraqi deaths covering different time frames published in the Lancet. So much wrong in such a small space..."

When has scientific fact EVER made a difference to what a member of the Wingnet says?

I have no expertise and if I am ignored I won't take it personally.

When I read the Lancet study as a lay person, the conclusion of importance to me was NOT the confidence interval. It was not that Roberts et al could say with 95% confidence that there had been at least 8,000 excess deaths. The important fact was that they could say, with a confidence of greater than 50%, that there had been excess deaths of 98,000 or more. And the lay press all focused on the "100,000" figure. That was the figure of importance for the public debate.

In my own profession - law - decision-makers often act on probabilities that are far less than 95%. One standard is "preponderance of the evidence" - if quantified, that would be anything greater than 50% confidence.

It seems to me that Kane is saying that the proper 50% figure is 264,000, not 98,000. That is, Kane believes that there is a greater than 50% likelihood that there were 264,000 or more excess deaths.

Obviously Kane's curve is much flatter than Roberts'. I am not competent to derive the figures, but I would like to know: what does Kane's recalculation have to say about the possibility that there were fewer than 10,000 excess deaths? Fewer than 50,000? Fewer than 100,000?

Perhaps statisticians just don't do calculations like that. Maybe it's 95% or bust.

But if the profession doesn't absolutely prohibit calculations with confidence intervals of less than 95%, I think it would be a great contribution to the debate to see what Kane's figures show. It may be that the public policy implications of both Kane and Roberts are roughly the same. For example, if Kane's analysis were to show that the possibility of excess deaths under 8,000 is 10%, or even 20%, instead of Roberts' 5%, it should make no difference from the point of view of public policy or political debate.

And I also think that Kane should personally advise Malkin that his calculations show, on the preponderance of the evidence, that the number of excess deaths was 264,000 or more.
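
Since Bloix asks for actual numbers: if you take each estimate at face value as a normal curve, backing the standard deviation out of its published 95% interval (the ex-Falluja figures of 98,000 with CI 8,000 to 194,000, and the with-Falluja recalculation of 264,000 with CI -130,000 to 659,000, both spelled out further down the thread), the cumulative probabilities take a few lines of code. This is a sketch on those assumptions only; the with-Falluja normal assumption is precisely what is in dispute:

    from scipy.stats import norm

    # sd backed out of each reported 95% interval (width = 2 * 1.96 * sd)
    curves = {
        "ex-Falluja":   (98_000, (194_000 - 8_000) / (2 * 1.96)),
        "with Falluja": (264_000, (659_000 + 130_000) / (2 * 1.96)),
    }

    for name, (mean, sd) in curves.items():
        print(name)
        for t in (0, 10_000, 50_000, 100_000):
            print(f"  P(excess deaths < {t:>7,}) = {norm.cdf(t, mean, sd):.0%}")

On these toy assumptions the with-Falluja curve puts the chance of zero or fewer excess deaths at roughly one in ten, and the chance of more than 98,000 excess deaths at roughly four in five.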

Bloix, this is right about the 50% chance of deaths greater than 264,000. However, the way I see it, confidence intervals should be treated more as a decision-making device than as a way of assigning particular probabilities to ranges of values. This is because at their extreme limits they often represent things which can't be true in reality - for example, the confidence interval in the Lancet paper suggests that after the war there is a non-zero probability that the death rate dropped to 1.4 per 1000, which is impossible (though David Kane doesn't rule it out!). So it's probably better to take the interval as a whole, especially since the interval in this case is being constructed to test a hypothesis - no change in deaths vs. a change in deaths.

(This is, I suppose, why we're arguing about it with David. He thinks the method for testing the hypothesis of an increase in deaths is inconsistent with that non-zero probability that Iraq has achieved an impossibly low mortality rate.)

SG - thanks for responding. Let me try again, and if you tell me I'm wrong I'll shut up.

As I understand what you're saying, a statistician should not draw conclusions other than that the probability of an event falls within or without the confidence interval. But this isn't how Roberts treated his own first study. His critics attacked him on this ground - saying that the CI was so wide that his estimates were no better than "a dartboard" - and his defenders pointed out that not every point within the CI had the same probability. There was a lot of heated discussion defending the proposition that the 100,000 figure should be accepted as a reasonable estimate.

If Roberts had merely been able to say that the number of excess deaths exceeded 8,000, then his study wouldn't have had any impact. Instead, he did say, and his defenders accepted, that, more likely than not, there had been 100,000 excess deaths or more.

Now, let us accept arguendo - as we lawyers like to say - that Kane is correct. Then he is saying that there is some probability that there were zero or negative excess deaths. That probability can be quantified from the calculations in his paper, and if someone will do it for us we'll see that it's small.

But he is also saying that there is a much larger probability, well over 50%, that the number of excess deaths was in excess of 100,000.

Therefore, from the point of view of the important public policy conclusion, Kane confirms Roberts.

What would be helpful for a layperson would be if someone could lay the two curves - Kane and Roberts - over each other, and then create a table of cumulative probabilities. What do each say is the probability of fewer than 0 excess deaths? fewer than 10,000? 20,000? etc.

Now, Malkin clearly thinks that Kane has demolished Roberts. She thinks he's shown that no conclusions at all can be drawn from Roberts' data.

I don't think Kane has shown that. I think Kane - assuming he is right - has shown that Roberts' data is not as definitive as he thought it was, but that the important conclusion is not disturbed. And if that is what he has shown, he has an obligation to tell Malkin that she has misinterpreted his conclusions.

Obviously, statisticians need to know how to do statistics, and therefore this is an important disagreement, but all we lay people need to know is whether there is consensus on the important point from the policy point of view. It would be nice if the two sides would stop fighting long enough to tell us that there is, and then you can go back to hacking at each other.

Honest question asked this honest question:

And isn't the negative death rate he obtains part of his criticism, not a criticism of his criticism?

Since a lot of inexpert folks are trying to understand this, it seems worth explaining, if you have the time and inclination.

Here's what I hope is a simple explanation. Suppose you are counting the average occupancy of cars on a freeway. You look at many cars as they pass and count the occupants and divide by the number of cars. You happen to do this on a weekday during rush hour. You can get the average number of occupants and the variance around that average, from which you can make an estimate of the average occupancy of all cars. No problemo. You repeat that experiment on a weekend day, and there are more families in cars so the average occupancy happens to go up. However, just by chance, a bus comes by filled with weekend tourists.

The bus is Falluja. The Roberts team made two estimates, one including the bus and one excluding the bus, and concentrated on the one excluding the bus; they then concluded that even if you exclude the bus, the average occupancy on weekends went up. David Kane argues that, according to a model that treats the bus as if it were a car, you find two things: 1) the average occupancy on weekends goes up, but 2) the variance goes up so fast that you can no longer exclude the possibility that the average occupancy during weekends went down, even though all of your observations went up. In fact, David Kane's model is so weird that it does not exclude the possibility that the average occupancy of all weekend cars is negative.

Most of us are saying that a model that allows for negative average occupancy is not a good model and should not be used to estimate the difference between the average occupancy on weekdays and weekends. But here's the kicker: David Kane isn't just saying that the Roberts team should have included the bus. He's charging that the Roberts team excluded the bus (i.e., Falluja) because they wanted to hide the fact that, using his weird model that they didn't use, you couldn't exclude the possibility that the average car occupancy on weekends dropped.

I'll take a stab at a layperson's explanation of the statistics.

The Lancet paper reported that there were 98,000 excess deaths in the 18 months after the war with a 95% confidence interval of 8,000 to 194,000. This calculation excludes the data from Falluja, where there was a very high death rate after the invasion.

David Kane points out that if the same calculation were repeated including the Falluja data, the excess death estimate would be 264,000 with a 95% confidence interval of -130,000 to 659,000. Note that the lower bound is now negative: maybe the war lowered the death rate! The Lancet authors didn't do this calculation, but it's trivial to compute given the data presented in the paper.

These calculations rely on a normal approximation. The normal distribution has two properties that would seem to make it unappealing here: it's symmetric, and it assigns a positive probability to every value from negative infinity to positive infinity. But death rates can't be negative, and their distribution is skewed. So the distribution of death rates looks more like the right side of a bell curve than an entire bell curve.

Surprisingly, it often doesn't matter whether the data are normally distributed or not: if the sample is big enough, the central limit theorem says that a normal approximation will still provide a very good estimate of the confidence interval.

But here the sample isn't that big, and when Falluja is included in the data, it is very far from being normally distributed, so the normal approximation isn't very good. And that's why the confidence interval around the estimated mortality with Falluja (95% CI = -130,000 to 659,000) is wrong. Note that the Lancet authors didn't report this estimate, but did report other (correctly calculated) estimates that include Falluja.

Now, David Kane seems to agree with all this, including that the estimate of -130,000 to 659,000 is incorrect. But for some reason he thinks the Lancet authors should have reported the incorrect estimate.
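
Ragout's point about the normal approximation failing is easy to demonstrate. With a small, skewed, made-up sample (32 modest values plus one huge outlier standing in for the 33 clusters; not the real data), the textbook interval dips below zero, while a bootstrap percentile interval, one standard way of handling skewed samples, stays in possible territory:

    import numpy as np

    rng = np.random.default_rng(0)
    # 32 modest made-up values plus one huge outlier
    x = np.append(8.0 + rng.normal(0, 2, 32), 500.0)

    # Textbook normal interval: symmetric by construction, dips below zero.
    se = x.std(ddof=1) / np.sqrt(len(x))
    print(x.mean() - 1.96 * se, x.mean() + 1.96 * se)

    # Bootstrap percentile interval: resample the data with replacement
    # many times and take the middle 95% of the resampled means.
    boot = [rng.choice(x, size=len(x), replace=True).mean()
            for _ in range(10_000)]
    print(np.percentile(boot, [2.5, 97.5]))  # lower bound stays positive

The bootstrap interval comes out wide and lopsided, as it should be, but it never pretends the death rate might be negative.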

Bloix, one of the first things you learn in introductory science classes is that mathematics sometimes produces impossible (but mathematically correct) solutions (negative mass, etc.). You lose big points on tests if you write these down as answers, and your instructor rips you a new one.

Eli Rabett - I understand that. But one possibility when you get an impossible result is that your solution is no good- you've made a mistake somewhere, or your data is corrupt. Kane doesn't seem to be saying that. He seems to be saying that he's calculated a correct confidence interval using Roberts' data, and that it's theoretically possible that there were negative excess deaths. Okay, let's say for the sake of argument that he's right. What are the implications of that?

Bloix,

Nobody denies that negative excess deaths are possible: certainly the Lancet paper allows for the possibility. Shannon Love asserts that the Lancet paper doesn't allow for the possibility that the death rate went down, but he's just babbling nonsense.

Bloix asked:

He seems to be saying that he's calculated a correct confidence interval using Roberts' data, and that it's theoretically possible that there were negative excess deaths. Okay, let's say for the sake of argument that he's right. What are the implications of that?

Well, there are two parts to your query and they're separable: 1) that David calculated a correct CI; and 2) that it's theoretically possible that there were negative excess deaths.

Answering the second part first, it's always possible that there could have been negative excess deaths: in fact, when the original article was published in 2004, it was generally thought that the invasion would have led to fewer deaths than if we had not intervened (i.e., negative excess deaths). That's why the article was so shocking: the study was able to reject the null hypothesis that there was no change in mortality (and the study design was two-sided: it could have rejected high, not been able to reject at all, or rejected low).

As for the first part, the implication of David having calculated a correct CI for the estimate including Falluja is that pigs would be able to fly.

Robert- thank you for taking the time to respond to me. As I recall, the shocking aspect of the 2004 study was not that there were excess deaths. It was that there were so many - 98,000. The attacks on the study were aimed not at the idea that there were excess deaths- even Bush agreed to that - but that there were more than a few thousand.

What I am asking my betters here to do is to assume that pigs can fly. If we assume this, what do we find? We find that Kane agrees with Roberts that (1) it is likely that there were excess deaths, and, (2) it is likely that there were more than 98,000 excess deaths.

I can see why a statistician would be reluctant to go down this route. Why assume the truth of work that is false, even for the sake of argument? But I think from the point of view of the public debate it's important. Kane and Roberts agree that there were likely more than 98,000 excess deaths. Kane is actually more confident than Roberts on that point. If Malkin and Co. want to trumpet Kane's results, they should be told in clear terms what Kane's work implies.

Bloix wrote:

What I am asking [...] is to assume that pigs can fly. If we assume this, what do we find? We find that Kane agrees with Roberts that (1) it is likely that there were excess deaths, and, (2) it is likely that there were more than 98,000 excess deaths.

Yes, but you can make the same statements without resorting to arguments about the CI. No one disagrees that the central estimate of excess mortality is 264,000 when one includes Falluja. Why, then, is there so much effort focused on a CI estimate that has no bearing on the excess mortality estimate? It's because the consequence of promoting (or accepting) an improper CI is rather more subtle, and rather more severe: it allows those who either do not or will not understand statistical inference to dismiss the central estimate even though it is higher. Why this is so is a long story from the history of statistics, but it has become routinized in our public discourse that if some assertion attains a 6% level of statistical significance it is dismissible, but not if it attains a 5% level. You can see it happening in this very thread.

Oh, and by the way, if you go back and examine the newspaper reports back when the Roberts article was first published, you'll see that it was still very much assumed that Saddam Hussein was such a bad man that removing him resulted in the saving of many Iraqi lives.

It makes no difference what David Kane thinks. The guy made it clear long ago where he was coming from.

Unfortunately for the Iraqi people, Les Roberts has been vindicated by the facts on the ground. He said almost 3 years ago now that Iraq was experiencing a humanitarian crisis and it was -- and still is. No one has done a damned thing about it except argue about stupid stats on blogs.

Only someone who is totally callous or a total idiot would try to argue at this point that the war has been good for the people of Iraq.

Iraq: One in Seven Joins Human Tide Spilling into Neighbouring Countries
Patrick Cockburn in Sulaymaniyah
Published: 30 July 2007
SULAYMANIYAH - Two thousand Iraqis are fleeing their homes every day. It is the greatest mass exodus of people ever in the Middle East and dwarfs anything seen in Europe since the Second World War. Four million people, one in seven Iraqis, have run away, because if they do not they will be killed. Two million have left Iraq, mainly for Syria and Jordan, and the same number have fled within the country.

[end quote]

It seems that the number of hours of electricity per day is not the only thing headed toward zero in Iraq.

Robert: Thanks for taking the time to answer. That was really helpful!

By honest question (not verified) on 30 Jul 2007 #permalink

Thanks to Ragout as well!

By honest question (not verified) on 30 Jul 2007 #permalink

David Kane exposed once again as mathematically ignorant with an axe to grind. Shocking.