# Journal of Peace Research publishes badly flawed paper

Unfortunately, the Journal of Peace Research has published the badly flawed “Main Street Bias” paper. My earlier criticisms still apply, so I’m reposting them. Consider this the first draft of a reply to their paper.

The authors argue that main street bias could reasonably produce a factor of 3 difference.

How did they get such a big number? Well, they made a simple model in which the bias depends on four numbers:

• q, how much more deadly the areas near main streets that were sampled are than the other areas that allegedly were not sampled. They speculate that this number might be 5 (i.e. those areas are five times as dangerous). This is plausible — terrorist attacks are going to be made where the people are in order to cause the most damage.

• n, the size of the unsampled population over the size of the sampled population. The Lancet authors say that this number is 0, but Johnson et al speculate that it might be 10. This is utterly ridiculous. They expect us to believe that Riyadh Lafta, while trying to make sure that all households could be sampled, came up with a scheme that excluded 91% of households and was so incompetent that he didn’t notice how completely hopeless the scheme was. To support their n=10 speculation they show that if you pick a very small number of main streets you can get n=10, but no-one who was trying to sample from all households would pick such a small set. If you use n=0.5 (saying that they missed a huge chunk of Iraq) and use their other three numbers, you get a bias of just 30%.

• fi, the probability that someone who lived in the sampled area is in the sampled area and fo the probability that someone who lived outside the sampled area is outside the sampled area. They guess that both of these numbers are 15/16. This too is ridiculous. The great majority of the deaths were of males, so it’s clear that the great majority were outside the home. So the relevant probabilities for f are for the times when folks are outside the home. And when they are outside the home, people from both the unsampled area and the sampled area will be on the main streets because that is where the shops, markets, cafes and restaurants are. Hence a reasonable estimate for fo is not 15/16 but 2/16. If you use this number along with their other three numbers (including their ridiculous estimate for n) you get a bias of just 5%.

In summary, the only way Johnson et al were able to make “main street bias” a significant source of bias was by making several absurd assumptions about the sampling and the behaviour of Iraqis.
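The arithmetic behind these figures is easy to check. The function below is my reconstruction of the bias ratio from the parameter definitions above (the published paper’s exact expression may differ in form); it reproduces the factor-of-3 headline number as well as the 30% and 5% figures:

```python
# Reconstruction of the main-street-bias ratio R: survey death rate
# divided by true population death rate. Assumptions (mine, from the
# parameter definitions above): exposure in the sampled zone is q times
# as deadly as exposure elsewhere, and each person's death rate mixes
# the two zones in proportion to the time spent in each.

def bias(q, n, fi, fo):
    rate_sampled = fi * q + (1 - fi)    # death rate of sampled-zone residents
    rate_unsampled = fo + (1 - fo) * q  # death rate of unsampled-zone residents
    true_rate = (rate_sampled + n * rate_unsampled) / (1 + n)
    return rate_sampled / true_rate     # the survey sees only sampled-zone residents

print(round(bias(5, 10, 15/16, 15/16), 2))  # Johnson et al's numbers: 3.03
print(round(bias(5, 0.5, 15/16, 15/16), 2)) # n = 0.5 instead: 1.33, a ~30% bias
print(round(bias(5, 10, 15/16, 2/16), 2))   # fo = 2/16 instead: 1.05, a ~5% bias
```

Note that with n = 0 (the Lancet authors’ position) the ratio is exactly 1 whatever the other parameters are, i.e. no bias at all.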

1. #1 Kevin Donoghue
February 6, 2009

They guess that both of these numbers [fi and fo] are 15/16.

Not that it matters much, but in the last draft I saw they were both 13/14. Did they go with 15/16 finally or are you just pasting your early criticism exactly as you wrote it?

Since Robert Shone hasn’t shown up yet, I will make his point for him: you are just making up your parameter values. The obvious response is of course that that’s all the authors themselves are doing.

2. #2 Robert Shone
February 6, 2009

First, I think it’s important to note that Tim Lambert’s original post opened with a misrepresentation of Jon Pedersen’s views (taken from Stephen Soldz) on main street bias. I pointed this out at the time, but Lambert still hasn’t corrected it. I emailed Pedersen about it back in December 2006, and he responded as follows:

“Yes, probably Stephen Soldz confused the issue somewhat here. There are actual several issues:
1) I very much agree with the MSB-team that there is some main stream bias, and that this is certainly an important problem for many surveys – not only the Iraq Lancet one.
2) I am unsure about how large that problem is in the Iraq case – I find it difficult to separate that problem from a number of other problems in the study. A main street bias of the scale that we are talking about here, is very, very large, and I do not think that it can be the sole culprit.
3) The MSB people have come up with some intriguing analysis of these issues.”
(Jon Pedersen, email to me, 4/12/06)

3. #3 Robert Shone
February 6, 2009

Second, the criticism from Tim Lambert is directed at one set of parameter values which was presented only as an illustrative example by the msb authors (who later added an exploration of the parameter space).

In other words, the criticism misses the point that the actual bias could be determined only through disclosure by the Lancet authors of basics such as the sampling procedures and the main streets selected as starting points.

So this brings us back, in a way, to the AAPOR thing. The Lancet authors still haven’t disclosed the basic level of information which is obviously necessary to assess how their claim of giving all households an equal chance of selection holds up.

If you’re extrapolating from 300 actual violent deaths to 601,000 estimated violent deaths, based on this claimed sample-randomness, then it would seem pretty important that the sampling scheme could be assessed in some way. Currently it can’t be, because nobody outside the Lancet team knows what that sampling scheme entailed.

4. #4 Robert
February 6, 2009

I haven’t read the published version so perhaps my criticism of it has been handled. However, on the chance it hasn’t been dealt with, I’ll repeat it.

The main methodological difference between the 2004 Roberts paper and the 2006 Burnham paper was the way the starting point was selected. Johnson et al. proposed “main street bias” as the explanation for the difference between the two papers in the number of estimated violent deaths. But one of the differences in the findings (as opposed to the method) was that there was a corresponding decrease in non-violent deaths. MSB doesn’t explain that.

A simpler explanation that addresses both the increase in violent deaths and the balancing decrease in non-violent deaths without resorting to a putative MSB is that there was a problem with attribution of cause of death. This explanation is consistent with my observations in other surveys and registries: there is often much more ambiguity about the cause of death than that a death occurred at all.

5. #5 Robert Shone
February 6, 2009

We can suggest various parameter values to plug into the msb formula. Tim claims that the value originally suggested as an example for f (15/16) was “ridiculous”. I’d argue that Tim’s own suggested value (2/16) was actually the really ridiculous suggestion, since it implied that the average Iraqi (including women, children and the elderly) spends only 3 hours out of each 24-hr day in their own home/zone (presumably sleeping), and spends the other 21 hours outside their zone.

(Since this is clearly ludicrous, I asked Tim if he was redefining “f” in an unspecified way, thus changing the whole equation, in a manner unknown to us. Tim replied that he was indeed redefining f, but he hasn’t explained how anyone could take his redefinition and his assumptions and arrive at the value of 2/16.)

And does anyone take seriously the claim of the Lancet authors that the value for n is zero?

6. #6 Robert Shone
February 6, 2009

One other thing. Given the amount of effort that Tim Lambert has put into attempts to discredit the msb authors, it really is sad that he failed to mention, above, that the Journal of Peace Research didn’t just publish the msb paper, but awarded it the best article of the year.

So, the research is not only peer-reviewed, like that other peer-reviewed study (the one we’re supposed to elevate to Holy Writ status on account of its being reviewed by peers) – it’s also prize-winning.

(Yes, I’d already mentioned that it received the award, but way, way down in some other thread, where nobody except hardcore “science” and epidemiology geeks read).

7. #7 sod
February 6, 2009

i don’t have access to the paper or the older version at the moment. (anyone got a link that is still working?)

but Robert Shone gave this explanation of the number in another topic:

Moving on, what “wild assumptions” underlie “f=15/16”? The MSB team make the assumption that women, children and the elderly stay close to home, whilst allowing for two working-age males per average household of eight, with each spending six hours per 24-hour day outside their own zone. This yields f=6/8+(2/8×18/24)=15/16. Any “wild assumptions” here?

http://tinyurl.com/cczz4k

let me see: the number is huge, because they use the assumption that women stay at home all the time?

this isn’t just wild, it is outright moronic!

we know already that the majority of violent deaths in Iraq are young males. but those spend a significant part of the day OUTSIDE their “mainstreet bias homezone”

8. #8 David Kane
February 6, 2009

Kudos to Tim for starting a new thread devoted to this topic. I think that this will lead to productive discussion. Actions like this demonstrate why Tim/Deltoid are a great host/location for all things Lancet/Iraq.

9. #9 Robert
February 6, 2009

Robert Shone wrote:

it really is sad that [Tim] failed to mention, above, that the Journal of Peace Research didn’t just publish the msb paper, but awarded it the best article of the year.

Really? Actually, I think it was a kindness. Serious journals don’t usually award “best article of the year” (except sometimes for student papers). If the Johnson et al. paper was mostly unaltered from the version we’ve seen before, the fact that they judged it “best” makes me wonder about their other articles.

10. #10 David Kane
February 6, 2009

Robert claims that “Serious journals don’t usually award “best article of the year” (except sometimes for student papers).”

False. The Journal of Financial Economics is one of the premier journals in economics. It is currently soliciting votes for the 2008 Paper of the Year.

The reason the award is important is that Tim (and other critics) can’t just claim that a flawed paper got by one or two incompetent reviewers. This is clearly a paper that the editors of the journal are ready to stand behind. Which of those editors would you like to accuse of incompetence first?

11. #11 David Kane
February 6, 2009

Just to be clear on what the disagreement is about, here is the abstract for the paper.

Cluster sampling has recently been used to estimate the mortality in various conflicts around the world. The Burnham et al. study on Iraq employs a new variant of this cluster sampling methodology. The stated methodology of Burnham et al. is to (1) select a random main street, (2) choose a random cross street to this main street, and (3) select a random household on the cross street to start the process. The authors show that this new variant of the cluster sampling methodology can introduce an unexpected, yet substantial, bias into the resulting estimates, as such streets are a natural habitat for patrols, convoys, police stations, road-blocks, cafes, and street-markets. This bias comes about because the residents of households on cross-streets to the main streets are more likely to be exposed to violence than those living further away. Here, the authors develop a mathematical model to gauge the size of the bias and use the existing evidence to propose values for the parameters that underlie the model. The research suggests that the Burnham et al. study of conflict mortality in Iraq may represent a substantial overestimate of mortality. Indeed, the recently published Iraq Family Health Survey covered virtually the same time period as the Burnham et al. study, used census-based sampling techniques, and produced a central estimate for violent deaths that was one fourth of the Burnham et al. estimate. The authors provide a sensitivity analysis to help readers to tune their own judgements on the extent of this bias by varying the parameter values. Future progress on this subject would benefit from the release of high-resolution data by the authors of the Burnham et al. study.

Would Tim or Robert or anyone else take issue with these claims? (You are still free to maintain your other criticisms as well. But it is hard to describe a paper whose abstract you agree with as “badly flawed,” whatever issues you might have with the details.)

And, by the way, this paper (working draft published well in advance of IFHS) does a great job of predicting that a better survey without main street bias (i.e., IFHS) would estimate only a small percentage of violent deaths. Too bad we didn’t bet on the IFHS outcome before we saw their estimate. Spagat et al (and I) would have won that bet.

12. #12 sod
February 6, 2009

like all papers that produce an underestimate will agree on low numbers, David?

the IFHS paper does not show an increase in violence after the samarra bombing. yes, that is the incident that caused the INCREASE in violence, that caused the “surge” of US troops.
the paper doesn’t show a change. you don’t think that is a problem?

you also simply ignore the massive increase in excess non-violent deaths. no interest in looking more deeply into that?

13. #13 Kevin Donoghue
February 6, 2009

David Kane quotes the abstract: “…the authors develop a mathematical model to gauge the size of the bias and use the existing evidence to propose values for the parameters that underlie the model.”

As Tim points out they didn’t use the existing evidence; he’s too polite to say it, but the fact is they plucked the parameters out of their arses to get the result they were aiming for.

14. #14 Jody Aberdein
February 6, 2009

Intriguingly, this has been spun for the physics audience as well, in Europhysics Letters EPL:

‘Sampling Bias in Systems with Structural Heterogeneity and Limited Internal Diffusion’, J-P Onnela et al, EPL (85) 2009, 28001

15. #15 Kevin Donoghue
February 6, 2009

David Kane: The Journal of Financial Economics is one of the premier journals in economics.

Not that it matters but that’s untrue, unless by “one of the premier journals” you mean it’s in the top 50. Keele ranks it 45th in fact – which is highly respectable but certainly not stellar. And as dsquared remarked in an earlier thread, the fact that you consider the AER more reliable than The Lancet is in itself enough to cast doubt on your sanity.

But frankly David, you would be better off forgetting all this pecking-order shit. It clouds your thinking. If the MSB paper was published on Red State you would do a better job of judging it on its merits.

It’s odd, too, that you would think of financial economics as a field which Robert might be expected to regard as serious. I would think financial economists currently enjoy about as much esteem in the scientific community as astrologers. Clicking on your link I was amused to see that one of the JFE prizes is named in honour of Eugene Fama of all people.

16. #16 Jody Aberdein
February 6, 2009

Kevin, in the original draft of some 3 years ago they plump for 15/16, whereas in the published version this is 13/14. The reason is that they decide there are 2 working age males per 8 person household originally, but 2 per 7 in the published paper.

Why? I don’t know.
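For what it’s worth, both values do follow mechanically from those household assumptions. A quick check (the 6 hours per day outside the zone for each working-age male is the assumption Robert Shone quoted elsewhere in the thread):

```python
from fractions import Fraction as F

# f = share of household person-time spent in the home zone, given that
# everyone except the working-age males stays in the zone all day.
def f(household_size, working_males, hours_outside_per_day):
    at_home = F(24 - hours_outside_per_day, 24)
    stay_put = F(household_size - working_males, household_size)
    return stay_put + F(working_males, household_size) * at_home

print(f(8, 2, 6))  # original draft: 2 males in a household of 8 -> 15/16
print(f(7, 2, 6))  # published version: 2 males in a household of 7 -> 13/14
```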

17. #17 Robert
February 6, 2009

David Kane wrote:

The Journal of Financial Economics […] is currently soliciting votes for the 2008 Paper of the Year.

Thanks for the heads-up! I’ll move it into the category of “not a serious journal.”

18. #18 David Kane
February 6, 2009

Kevin,

I don’t want to hassle you too much. You are a smart guy and highly knowledgeable about the Lancet. But, please! Stuff like this — “Keele ranks it 45th in fact” — does you little good. Did you notice how that spreadsheet you linked to has things listed alphabetically (sort of) within the 4 Keele ranks? The 45th row does not mean what you think it means.

And, if you demand a journal ranked in the top category by Keele, check out the prizes awarded by the Journal of Finance.

Anyone who doesn’t agree that the Journal of Financial Economics (and the Journal of Finance) are premier journals in economics/finance does not know what they are talking about.

You write: “the fact is they plucked the parameters out of their arses to get the result they were aiming for.” Well, it doesn’t really matter where the estimates come from, does it? They argue that the estimates are reasonable. If you disagree, be specific as to why. And then they show the answer for a wide range of parameter estimates. Do you disagree with that calculation?

Again, I think it would be great to focus on what the paper actually says. Quote a portion that you disagree with and explain why you disagree. That is how we are going to make progress.

If Robert does not think that there are any serious journals in finance, then he should make that case. He strikes me as more serious than that.

19. #19 David Kane
February 6, 2009

Tim writes:

They expect us to believe that Riyadh Lafta, while trying to make sure that all households could be sampled, came up with a scheme that excluded 91% of households and was so incompetent that he didn’t notice how completely hopeless the scheme was.

Tim: I have asked you this question before and you have refused to answer it. I will try again.

Were all the houses in Iraq (main streets, side streets, back alleys) included in the sampling frame?

Once you explain to us what you think Lafta did, then we can explain why you are wrong. 10 is a perfectly reasonable estimate for n.

20. #20 David Kane
February 6, 2009

Tim,

Perhaps I am misreading things, but your critique of f demonstrates a fundamental confusion. You write:

fi, the probability that someone who lived in the sampled area is in the sampled area and fo the probability that someone who lived outside the sampled area is outside the sampled area

Correct. The key here is that “sampled area” means, not just the home, but the “survey space”, i.e., all those areas that are within the area that the interviewers might have visited. So, for example, the park or market across the street from your house is a part of the “sampled area.” f is then the proportion of time that a random Iraqi spends in that sampled area. Women, children and the elderly (during these times of extreme violence) obviously spend the vast majority of their time in the sampled area. Many men did as well. But, of course, many men spent hours outside of the sampled area, mainly at work. What is f? Well, reasonable people might differ and the authors make their pitch. You then write:

They guess that both of these numbers are 15/16. This too is ridiculous. The great majority of the deaths were of males, so it’s clear that the great majority were outside the home.

This is wrong on several dimensions. First, you have no idea how many deaths (whether of men or otherwise) were “outside the home.” The location of the death was never released (or collected?) by the Lancet team. Second, even if you magically knew what percentage of deaths were outside the home, that tells you little about f. If someone is killed at the neighborhood park, they were killed outside the home but still within the “sampled area”. Third, even if you knew how many deaths were inside and outside the sampled area, that knowledge has no direct bearing on f, which is just the amount of time spent in the sampled area.

And when they are outside the home, people from both the unsampled area and the sampled area will be on the main streets because that is where the shops, markets, cafes and restaurants are. Hence a reasonable estimate for fo is not 15/16 but 2/16.

This is just gibberish. Again, some markets/shops/cafes/etc are in the sampled region. Also, you don’t think that people sleep in their houses? Assuming that people sleep for 8 hours (or sleep for 6 hours and spend two hours dressing/cleaning/eating/whatever), then the minimum value for f (both fi and fo) is more than 5/16. I suspect that you are just confused about the definition of fo.

21. #21 Jody Aberdein
February 6, 2009

Assuming of course that women, children and men of above working age never stray into the surveyable area.

22. #22 Jody Aberdein
February 6, 2009

Oops no. Apologies David. I’m confusing the original assumption with your minimum, which assumes everyone leaves the area in which they reside. Of course this minimum is comfortably below the 8/16 level to give no effect of place of residence on violence exposure. Bed time me thinks.

23. #23 frankis
February 6, 2009

It’s probably safe to assume that almost all the Iraq denialists around here are also climate change denialists and some are also evolution denialists. Some of them would probably deny that Dick Cheney throws like a girl and Rush Limbaugh is a fat fool. “What are you in denial about Johnny?” – “What have you got?!”

Any of them (it may be all of them) who think that the attribution of cause of death as reported to interviewers is the thing, rather than the fact of a family member being now undeniably dead, is delusional as well.

Is delusion worse than denialism? Denialism is more morally bankrupt and shameful but at least it’s curable.

24. #24 Robert
February 6, 2009

David Kane wrote:

If Robert does not think that there are any serious journals in finance, then he should make that case. He strikes me as more serious than that.

1. Is David Kane claiming that every journal in finance names a “best article” each year? Excellent point! Not every journal does so!

2. I try to be only as serious as I need to be. With you I haven’t needed to be that serious.

25. #25 sod
February 7, 2009

let me repeat what Tim said:

fi, the probability that someone who lived in the sampled area is in the sampled area and fo the probability that someone who lived outside the sampled area is outside the sampled area. They guess that both of these numbers are 15/16. This too is ridiculous. The great majority of the deaths were of males, so it’s clear that the great majority were outside the home. So the relevant probabilities for f are for the times when folks are outside the home. And when they are outside the home, people from both the unsampled area and the sampled area will be on the main streets because that is where the shops, markets, cafes and restaurants are. Hence a reasonable estimate for fo is not 15/16 but 2/16. If you use this number along with their other three numbers (including their ridiculous estimate for n) you get a bias of just 5%.

and here again, the easy way to test main street bias:

walk down a “main street” (busy shopping roads will do..) and ask every person you meet, whether they live in a street intersecting this one or not.

26. #26 Kevin Donoghue
February 7, 2009

Upthread, Jody Aberdein asks me why Johnson et al went from assuming 2 working age males per 8 person household originally, to 2 per 7 in the published paper. I suspect they just picked up a scrap of information on Iraqi demographics from somewhere and decided to use it (7 is more realistic than 8 if memory serves). Maybe they find that extracting too many model parameters from the rectum causes haemorrhoids.

27. #27 sod
February 7, 2009

fi, the probability that someone who lived in the sampled area is in the sampled area and fo the probability that someone who lived outside the sampled area is outside the sampled area. They guess that both of these numbers are 15/16. This too is ridiculous.

this is the biggest problem with the assumptions, and the silence of the lancet attackers on it is deafening.

they really believe that:

1. the majority of people killed by violence in Iraq are NOT young males.

2. that stupid iraqis keep their families in the deadly houses along the mainstreet, while their males spend the days in a safer zone

3. on the other hand, people living in safer zones send their males into the dangerous zone for work..

according to this paper, your average market bombing would kill over 90% of people who live in an adjacent street!

28. #28 Kevin Donoghue
February 7, 2009

David Kane: Did you notice how that spreadsheet you linked to has things listed alphabetically (sort of) within the 4 Keele ranks? The 45th row does not mean what you think it means.

Ye gods. David, what is it with you and spreadsheets? First you can’t calculate a crude mortality rate even when Les Roberts helpfully inserts the formula into a spreadsheet for you. Now you can’t figure out how the rows in a spreadsheet are sorted.

If by “listed alphabetically (sort of)” you mean, listed alphabetically within groups made up of journals with equal scores, you are correct. But it isn’t only the four Keele ranks which are taken into account. For example, the reason why the Scandinavian Journal of Economics is listed above the American Journal of Agricultural Economics (despite their having the same Keele rank) is not because the guys at Keele suffer from intermittent dyslexia. The former has a higher KMS score than the latter. So while the Journal of Financial Economics could climb a few places by changing its name to Aardvark Studies, it needs to do a bit more than that to break into the top 30.

But as I said above, all this talk of reputation is beside the point. I will try to avoid letting you drag me into it again. Papers should be judged on their merits. For the reasons Tim and others have pointed to, the MSB paper would deserve harsh criticism no matter who got suckered into publishing it.

29. #29 David Kane
February 7, 2009

Kevin,

By all means, let us get back to judging the merits of the paper. Here is a summary of where we are. Tim claims that the paper is badly flawed and cites exactly three parameters. That is the extent of his critique. On one of those parameters, q, he agrees with Johnson et al. (I have been guilty of referring to the authors as Spagat et al, but, of course, Johnson is the lead author.)

So, the entire criticism (from Tim at least) boils down to two parameter values: n and f (where f includes both fo and fi). I have explained above why Tim misunderstands what f is. Do you have a reply? What do you think f should be? I could imagine making a case for some smaller numbers, depending on how big you think the sampled areas are. But it seems obvious that the vast majority of Iraqis spent the vast majority of their time in the sampled areas (both their houses and the local neighborhood) during this period.

Finally, we have n. Now, I agree with any critic who claims that it is very hard to know what n is. But that is the fault of the Lancet authors. We do not even know if every house was included in the sample frame! Once you tell me what you think the actual procedure was (see our discussions in other threads), then I am ready to debate whether or not n is 0.5 or 1 or 5 or 10 or 20.

Again, thanks to Tim for creating this thread. I think we are making progress!

30. #30 Jody Aberdein
February 7, 2009

Well,

As has previously been pointed out you could go a long way to demonstrating why or not n was important by doing some kind of actual Monte Carlo analysis using the study protocol as described and some real maps, or even just grids. That would be preferable to ‘here are some coloured in google maps for you to look at’. No?
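A toy version of that Monte Carlo is not hard to sketch. The grid below is entirely made up (the street counts, households per street, and how many main streets each residential street crosses are illustrative assumptions, not taken from the paper or the survey); it just shows how simulating the stated protocol lets you estimate what fraction of households can never be selected:

```python
import random
from collections import Counter

random.seed(1)  # reproducible toy run

# Toy city: 5 main streets, 45 residential streets, 20 households each.
# Each residential street crosses 0-2 main streets (illustrative numbers).
MAIN = range(5)
RES = range(5, 50)
crosses = {m: [] for m in MAIN}          # main street -> residential cross streets
for r in RES:
    for m in random.sample(list(MAIN), random.randint(0, 2)):
        crosses[m].append(r)

def one_survey():
    """Simulate the stated protocol: main street, cross street, start house."""
    m = random.choice(list(MAIN))        # 1. random main street
    if not crosses[m]:
        return None                      # dead end: no residential cross streets
    r = random.choice(crosses[m])        # 2. random residential cross street
    start = random.randrange(20)         # 3. random start household, then walk
    return [(r, (start + i) % 20) for i in range(20)]

hits = Counter()
for _ in range(10_000):
    cluster = one_survey()
    if cluster:
        hits.update(cluster)

total = {(r, i) for r in RES for i in range(20)}
reachable = set(hits)
never = len(total) - len(reachable)
print("households with zero selection probability:", never)
if reachable:
    print("crude analogue of n:", round(never / len(reachable), 2))
```

On a real map (or a realistic grid), the interesting output is that last ratio of never-sampleable to sampleable households, which is the empirical counterpart of the paper’s n.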

Likewise you could prior to publication attempt some kind of empirical justification for your values of f.

Likewise you could attempt some empirical justification of the heightened danger of ‘main streets’ by actually looking at distribution of explosions/shootings.

You could also publish a sensitivity analysis that included values of n that include the possibility that your hypothesis is incorrect i.e. there is no excess danger distributed in the fashion you describe, or in the bias introduced by the sampling schema.

Certainly in the UK the Office of National Statistics can provide population data down to blocks of 1000 individuals, and maps of these regions for street-level correlation. Presumably this is beyond the ken of social science research?

Otherwise it seems to me that the haemorrhoid argument stands.

31. #31 David Kane
February 7, 2009

Jody suggests: “using the study protocol as described.” The problem, of course, is that the Lancet authors have said, in various venues, that the protocol was mis-described in the published version of the paper. The Lancet has not published a correction. The authors have made various claims about the actual protocol used (both in public and in private correspondence), much of which is inconsistent with the published statement as well as being inconsistent with each other’s statements. So, there is no way to do the Monte Carlo you recommend.

And here is where Tim could help!

Tim: Could you ask Les Roberts for a precise description of the study protocol? What, exactly, did the interviewers do?

Knowing that would allow us to flesh out Jody’s (reasonable) suggestion in much more detail.

32. #32 Tim Lambert
February 7, 2009

DK:

>One of those parameters, q, he agrees with Johnson et al.

On reflection, I have changed my mind about this. q = 5 is reasonable for main streets, but not for streets that merely intersect main streets.

33. #33 Tim Lambert
February 7, 2009

DK: I don’t agree with the abstract. Under any reasonable assumptions MSB does not make much difference.

DK: Yes, I redefined fo to something more reasonable. The only times it includes are those spent outside the home. But I’m thinking that their formula is so badly flawed (because it assumes that the distribution of deaths with respect to time of day is uniform) that it might be better to dump it and use an accurate one.

34. #34 Kevin Donoghue
February 7, 2009

Hell, I typed up this long comment saying I disagree with Tim about q and now I see he’s changed his mind. Never mind, I’m posting it anyway.

It’s true that Tim gives the MSB paper a pass so far as q is concerned. I think that’s because of the way he is framing his argument. He wants to highlight the completely unreasonable aspects of the paper, so he passes over the bits which are not too obviously wrong. So he can throw a bone to the authors just to show how impoverished they are. But if we are to vet the paper properly we can’t do that.

That being so I don’t think q=5 should be accepted without question. The unsampled region will include places which are simply too dangerous for interviewers to visit as well as places which happen to be too far from the nearest cross-street. So q<1 is entirely possible. A careful reader of the paper will see that of course, but the discussion (on pages 8 and 9 of the draft I found on Mike Spagat’s web-page) certainly doesn’t do much to draw attention to it. The sensitivity analysis is still worse in that respect – q is restricted to positive integers in the tables. Figure 3 is misleading, being scaled to confine the region q<1 to a narrow vertical strip. (It’s a classic example of how to fool a reader who is in a hurry.) The fact that dead interviewers report no results is considered when the authors defend cluster sampling as being a relatively safe method. But even there I see no reference to the fact that interviewer prudence may result in q<1.

Also, if the unsampled region is very large, as the MSB theory claims, then the vast majority of those who die in bed simply cannot be counted. But in any population, even in a war-torn country like Iraq, most people die at home in bed. Of course most such deaths are non-violent but it’s quite possible that if hospital resources are overstretched even victims of violence perpetrated in the survey space end up dying at home (most likely outside the survey space if n=10). Remember, it’s where they actually die that is relevant for calculating q and not where they were when the violence took place.

Hence I conclude that Tim is too indulgent on this score. But of course he is right to focus mainly on the other parameters – that’s where the MSB case really falls apart.

35. #35 Kevin Donoghue
February 7, 2009

David, since you asked Tim to open this thread on the MSB paper, wouldn’t it be nice if we could stick to discussing the MSB paper and not your suggestions for pestering the Lancet authors? Why should Burnham et al help Johnson et al to determine the parameters of their wretched model? Can’t they do some research of their own, or are they merely parasites?

As I’ve pointed out before, one can obtain precise information about the victims of other conflicts, e.g. Northern Ireland. Using that data it would be quite possible to study the biases inherent in a variety of different sampling methods.

36. #36 Jody Aberdein
February 7, 2009

Yeah you’re right.

‘The third stage consisted of random selection of a main street within the administrative unit from a list of all main streets. A residential street was then randomly selected from a list of residential streets crossing the main street. On the residential street, houses were numbered and a start household was randomly selected. From this start household, the team proceeded to the adjacent residence until 40 households were surveyed.’

I cannot see any way in which this description could be simulated. Absolutely none.

Certainly, even if an analysis of the bias this method generates were made, how could it possibly add weight to scientific arguments such as ‘Analysis of Iraqi maps suggests n=10 is plausible’ followed by a link to some coloured-in maps inviting the reader to consider them? Pretty watertight, I should say.

37. #37 Oscar
February 7, 2009

RE: Comment #9 – Best Paper of the Year

“Serious journals don’t usually award “best article of the year” (except sometimes for student papers).”

The Lancet publishes the best paper of the year:

http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(09)60081-7/fulltext

38. #38 Robert Shone
February 7, 2009

That’s hilarious.

39. #39 Robert Shone
February 7, 2009

… Deltoid should run a “best debunk of the year” award. Lambert gets my vote every time!

40. #40 sod
February 7, 2009

.. Deltoid should run a “best debunk of the year” award. Lambert gets my vote every time!

the Lancet deniers look pretty helpless in this topic. basically you are avoiding all points being made and focusing on magazine awards…

41. #41 Robert
February 7, 2009

Oscar wrote:

The Lancet publishes the best paper of the year:

Yeah, and that’s regrettable. It’s a bad trend and a reaction, I suspect, to the burgeoning weight given to citation indexes as an indicator of journal quality — “articles of the year” are intended to raise the number of citation links. As an aside, the Lancet (and JFE) awards are popularity contests: readers vote on the article, while the Journal of Peace Research award was decided by a panel. I’m not sure which is worse — I suppose it depends on whether you prefer the judgments of Simon, Paula, Kara, and Randy over the votes of the American public; or maybe JPR just doesn’t have enough readers to do a popularity contest. BTW, the JPR panel don’t appear to have had much training in biostat, epidemiology, or demography. If they had, they might have understood the problem that Johnson and Spagat overlooked.

42. #42 Kevin Donoghue
February 7, 2009

Bleg: page 3 of the draft I’m looking at refers to “Little, 1982” but there’s no Little, 1982 in the references (see page 23). Does the published version shed any light on this? (I refuse to shell out 20 dollars on the published article unless I receive credible assurances that it’s a huge improvement on the draft.)

The relevance is that they imply they are following Little’s “modelling approach” and it would be nice to know if somebody named Little really advocated the approach they are following, and if so, what justification s/he offered for it. They offer none themselves obviously, other than the usual “Gilbert wouldn’t give Mike his data! Waaah!” – the pathetic refrain running through the entire paper.

Tim? Robert? David? anyone?

43. #43 Kevin Donoghue
February 7, 2009

Robert: BTW, the JPR panel don’t appear to have had much training in biostat, epidemiology, or demography. If they had, they might have understood the problem that Johnson and Spagat overlooked.

Well I’ve no training in those areas either but let me guess. One big problem which jumps out at me is that to implement their approach to bias adjustment, you need to know the risk of death in the unsampled region. If you knew that, why in the name of all that’s holy would you bother doing a mortality study in the first place? Just unsample the whole country and there’s your answer!

I mean come on, don’t tell me that’s not a major conceptual problem with the MSB approach? I’m not for a moment suggesting it’s the only one, but even ignoring the other turds in this crock of shit, that one has to offend any thinking person’s nostrils.

But Ireland beat France so not even David Kane can piss me off tonight.

44. #44 David Kane
February 7, 2009

Just a side note on exactly how this paper won the article of the year award.

A jury consisting of Lars-Erik Cederman (ETH Zürich), Jon Hovi (University of Oslo) and Sara McLaughlin Mitchell (University of Iowa) has awarded the third Journal of Peace Research Article of the Year Award to Neil F. Johnson (University of Miami), Michael Spagat (University of London), Sean Gourley (University of Oxford), Jukka-Pekka Onnela (University of Oxford and Helsinki University of Technology) and Gesine Reinert (Oxford University). In its assessment of all research articles published in volume 45 of JPR, the jury paid attention to theoretical rigour, methodological sophistication and substantive relevance. According to the jury, the prize-winning article, ‘Bias in Epidemiological Studies of Conflict Mortality’, Journal of Peace Research 45(5): 653–663, provides an important advance in the methodology for estimating the number of casualties in civil wars. The authors show convincingly that previous studies which are based on a cross-street cluster-sampling algorithm (CSSA) have significantly overestimated the number of casualties in Iraq.

The problem this raises for Tim is that he needs to explain not only why Johnson et al are idiots, and why the editors of the journal are morons (both hard to do), but also how Cederman, Hovi and Mitchell could have screwed up so badly. What’s the theory? That they are paid-up members of the neo-con conspiracy?

Now, obviously, just because a paper is published in a peer reviewed journal, and just because it wins an award judged by three academics unconnected to the journal, does not mean that the paper is perfect, that there are not reasonable grounds for criticism and so on. But the burden of proof is clearly on Tim (and other critics) to demonstrate exactly why this paper is so “badly flawed.”

45. #45 Kevin Donoghue
February 7, 2009

By the unsampled region I mean the population outside the survey space. Sorry to be incoherent but this isn’t just your average Saturday night. Don’t know why I’m bothering with this crap to be honest.

46. #46 Kevin Donoghue
February 7, 2009

David, this thread, which you requested, is about the paper. If you want to talk about the sociology of science and suchlike, that’s a whole different topic. Can we have your response to the criticisms upthread, if you have one? Do you know where this “Little, 1982” paper may be found?

47. #47 David Kane
February 7, 2009

Robert claims (with no evidence): “BTW, the JPR panel don’t appear to have had much training in biostat, epidemiology, or demography.” Charming! How were you able to access their graduate school transcripts?

Kevin: I bet that Little 82 refers to: Little, R.J.A. (1982). Models for nonresponse in sample surveys. Journal of the American Statistical Association, 77, 237-250.

Tim writes: “DK: I don’t agree with the abstract.” Well, that is nice but, if we are going to make progress, then we need to go into more detail. Do you disagree with the first sentence? The second sentence? And so on. An academic paper is made up of a collection of discrete claims. Which claims made in the abstract do you dispute?

Tim writes:

DK: Yes, I redefined fo to something more reasonable. The only times included in it are those spent outside the home. But I’m thinking that their formula is so badly flawed (because it assumes that the distribution of deaths wrt time of day is uniform) that it might be better to dump it and use an accurate one.

Well, by all means, feel free to come up with a better model and write a paper. But, for today, we are focusing on whether or not this paper is “badly flawed.” If you continue to change the definitions of various parameters without telling us, it will be hard for us to follow your argument.

So, using their terminology, what do you think a fair value for f would be?

48. #48 sod
February 7, 2009

But the burden of proof is clearly on Tim (and other critics) to demonstrate exactly why this paper is so “badly flawed.”

Tim and others gave the reasons. the numbers they use are a joke.

but neither you nor Robert Shone has even tried to defend them so far…

i like IPRI and the work they do. they just messed this one up. could we now get back to a discussion of substance?

49. #49 Kevin Donoghue
February 7, 2009

…not even David Kane can piss me off tonight.

I underestimated you David. Sorry about that.

50. #50 sod
February 7, 2009

So, using their terminology, what do you think a fair value for f would be?

their “terminology” is utterly useless. obviously it doesn’t matter how much time children and women spend in the main street zone. they don’t get killed in high numbers anyway.

david, why don’t you give us an explanation for the high number of male Iraqis getting killed, when it is WOMEN who spend all their time in the dangerous zone…

51. #51 Sortition
February 7, 2009

> A simpler explanation that addresses both the increase in violent deaths and the balancing decrease in non-violent deaths without resorting to a putative MSB is that there was a problem with attribution of cause of death.

Comparison of the “violent deaths” counts in Lancet2 and IFHS is complicated (if not rendered impossible) because of several factors that have to do with cause-of-death attribution and classification of “violent” vs. “non-violent”.

One simple example of such a factor is the existence of a large category of “deaths of unknown reason” in IFHS, which are all classified as “non-violent”.

http://probonostats.wordpress.com/2008/01/27/ifhs-violent-deaths

52. #52 Sortition
February 7, 2009

Of course, the cause-of-death issues are just a subset of the many problems, some of them severe, with the IFHS. See: http://probonostats.wordpress.com/2008/01/17/5-problems-with-the-science-of-the-ifhs-study

BTW, is there any indication of what the sampling methodology of the IFHS was? Without an enumeration of all Iraqi households, some sort of geographical sampling seems like the only way to go. Did the IFHS possess an enumeration of households?

53. #53 Tim Lambert
February 8, 2009

I wrote:
>I don’t agree with the abstract. Under any reasonable assumptions MSB does not make much difference.

David quotes the first sentence and then asks me what I disagree with. The answer is in the second sentence.

54. #54 sod
February 8, 2009

One simple example of such a factor is the existence of a large category of “deaths of unknown reason” in IFHS, which are all classified as “non-violent”. http://probonostats.wordpress.com/2008/01/27/ifhs-violent-deaths

very good article.
i wonder why the “sceptics” have so far done little analysis of the interests of the iraqi ministry of health. they have ALWAYS tried to downplay the number of violent deaths, and have often been forced to admit that.
why do you think they didn’t do the same in this study?

55. #55 Tim Lambert
February 8, 2009

To see that their formula is wrong, it is sufficient to provide an example where it gives the wrong answer.

So let’s consider a case where there are just two people, person A, who lives in the sampled area, and person B who lives in the unsampled area. The only risk of death comes from terrorist attacks on the local market. Both A and B spend one hour a day at the market, and the rest of the time at home.

By construction, A and B have the same risk of death, so no bias is introduced by just sampling A.

What does the MSB formula say? Well, in this case n = 1 (A and B are equal in population), fi = 1 (A never leaves the sampled area), fo = 23/24 (B only leaves the unsampled area for one hour per day), and q = infinity (all deaths occur in the sampled area). Plug those numbers into the formula and it tells you that R = 1.92, i.e. just sampling A gives an estimate almost twice as high. If instead B is 12 people, n = 12 and R ≈ 8.7, i.e. the formula says the bias is a factor of almost 9, even though there is no bias.

It is easy to construct examples where the formula is wrong by an arbitrary amount. It is hard to construct plausible examples where the formula gives the right answer.

(For the purists out there: I didn’t actually put q = infinity into the formula, but took the limit as q approached infinity.)
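[The bias formula itself is not quoted anywhere in this thread, but it can be reconstructed from the parameter definitions in the post above. The sketch below is that reconstruction, not a verbatim transcription from the paper; it does, however, reproduce the paper’s headline factor of 3 for q=5, n=10, fi=fo=15/16, as well as the R = 1.92 case in comment #55.]

```python
# Reconstruction (an assumption, not a quote from the paper) of the
# main-street-bias ratio R, from the parameter definitions:
#   q  - relative death rate of the sampled zone vs the rest
#   n  - unsampled/sampled population ratio
#   fi - fraction of time in-dwellers spend inside the sampled zone
#   fo - fraction of time out-dwellers spend outside it
def msb_bias(q, n, fi, fo):
    rate_in = q * fi + (1 - fi)    # death rate of an in-dweller
    rate_out = q * (1 - fo) + fo   # death rate of an out-dweller
    # The survey measures rate_in; the true rate is the population average.
    return (1 + n) * rate_in / (rate_in + n * rate_out)

print(round(msb_bias(5, 10, 15/16, 15/16), 2))  # 3.03, the paper's factor of 3
print(round(msb_bias(1e9, 1, 1, 23/24), 2))     # 1.92, Tim's two-person case
```

Using a huge finite q stands in for the q → ∞ limit in Tim’s example.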

56. #56 Robert Shone
February 8, 2009

Tim Lambert writes:

By construction, A and B have the same risk of death

Of course, the problem with Tim’s “construction” is that A and B artificially, by definition, have an equal chance of being killed, regardless of where they live.

It’s a bit like saying that if you “construct” a case in which force doesn’t equal mass times acceleration, then you can show that Newton’s second law of motion is wrong.

I knew Tim misunderstood MSB in some ways, but I didn’t realise the depths to which his ignorance on the matter plummets.

57. #57 Kevin Donoghue
February 8, 2009

Robert Shone,

You don’t have to work with Tim’s construction. Try one of your own. Here’s what I did: assume 9.5m people live inside the survey space where the probability of death is 0.0055 and they spend 45 percent of their time there. 10.5m live outside the survey space where the probability of death is 0.005 and they spend 60 percent of their time there. Their movements offset the risks of location to some extent but not completely, so the survey produces a slight bias which you can easily calculate. The MSB formula gives the wrong answer here too. It overstates the bias by quite a bit.

58. #58 Robert Shone
February 8, 2009

Kevin writes:

You don’t have to work with Tim’s construction.

Nobody will be “working” with Tim’s embarrassingly inept “construction”. But I will be quoting it whenever he tries to pass himself off as some kind of expert on the subject.

59. #59 Tim Lambert
February 8, 2009

Umm, Robert, F = ma is a physical law, so an example where it doesn’t hold violates the laws of physics. In my example, two people have the same risk of death. What physical law do you think this violates?

60. #60 Robert Shone
February 8, 2009

Not good enough, Tim. Your inability to see the problem in your “construction” shows more, perhaps, than simple ignorance over MSB.

61. #61 Jody Aberdein
February 8, 2009

Regarding ‘not good enough’:

On one hand:

We have some inductive reasoning, some good empirical evidence that give a range for the number of deaths attributable to the invasion of Iraq.

On the other hand:

We have some deductive reasoning with absolutely no empirical evidence, made by a group of people whose other arguments include accusations of scientific fraud, argument by incredulity, and cherry-picking data to try to smear the empiricists above.

Which dear reader would you deem to be ‘not good enough’?

62. #62 sod
February 8, 2009

Not good enough, Tim. Your inability to see the problem in your “construction” shows more, perhaps, than simple ignorance over MSB.

sorry Robert S., but your silence on the SUBSTANCE of this subject tells me a lot about you.

here is what their formula says: (some simplification made by me):

an 8-person household INSIDE the mainstreet bias zone spends its time like this: 6 female/kids/elderly (i’ll sum these up as female from now on) spend all their time in the zone, while 2 males spend about half (my simplification, doesn’t really change the outcome) their time outside. this gives a 6 to 1 ratio of time-in-zone between the two groups.

an 8-person household OUTSIDE the mainstreet bias zone has 6 females (etc) always OUTSIDE, and 2 males who spend half their time inside the mainstreet zone. (a moronic assumption, but we go along with the paper. slight simplification.)

this gives a female-to-male ratio of 6:2 both INSIDE the zone and OUTSIDE it. (funny, their weird construct just allows the men to swap places..)

this would give a ratio of rest-of-family to adult men among those killed of 3:1.
but all available data show that males have a much higher risk of being killed! (about 2 times that of women of their own age, MORE if looking at violent deaths..)

[IFHS report table 27](http://www.emro.who.int/iraq/pdf/ifhs_report_en.pdf)

it is obvious that where you live is NOT the most important factor deciding whether you get killed or not!
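[sod’s arithmetic is easy to verify in a few lines. The household composition (6 “female” to 2 male) and the 50/50 time split for males are his stated simplifications, not figures from the paper; deaths are taken as proportional to time spent in the dangerous zone, i.e. the limit where the main-street zone dominates the risk.]

```python
# person-hours per day spent in the dangerous main-street zone,
# under sod's simplified reading of the MSB assumptions
HOURS = 24

# 8-person household living INSIDE the zone:
inside_female = 6 * HOURS           # 6 stay in the zone all day -> 144
inside_male = 2 * (HOURS // 2)      # 2 spend half their time outside -> 24

# 8-person household living OUTSIDE the zone:
outside_female = 0                  # never enter the zone
outside_male = 2 * (HOURS // 2)     # 2 spend half their time inside -> 24

# If risk is overwhelmingly concentrated in the zone, expected deaths
# are proportional to time spent there:
female_exposure = inside_female + outside_female  # 144
male_exposure = inside_male + outside_male        # 48

print(female_exposure / male_exposure)  # 3.0
```

So under these assumptions the model predicts three female/child/elderly deaths per adult-male death, which is the 3:1 ratio sod contrasts with the observed male-dominated death toll.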

63. #63 Robert Shone
February 8, 2009

sod writes:

sorry Rober S., but your silence on the SUBSTANCE of this subject, tells me a lot about you.

Well, I’ve already written quite a bit about what you call the “substance” (ie parameter value examples and the assumptions behind them).

In fact you quoted, above, one of my previous long, detailed posts on it (and then you misrepresented both myself and the MSB authors by saying: “let me see: the number is huge, because they use the assumption that women stay at home all the time?”). Not “at home”, but “close to home”, ie in the district in which they live – an important distinction.

Look, you can suggest different parameter values based on various assumptions. The assumptions you have about who spends how much time where, etc, may be plausible or not. You can always change them with better data. I think the MSB team kept it fairly simple – ie didn’t make too many assumptions (although you may disagree with the ones they made). I think they made plausible assumptions (certainly much more plausible than Tim Lambert’s), but you obviously disagree. That doesn’t make the MSB research wrong – it makes the assumptions behind example parameter values debatable.

The bottom line for me is what various researchers (eg Jon Pedersen, etc) have said. Which is that the MSB work represents important progress in the field of estimating violent deaths in conflicts. Others frame it as being yet another hostile force against Lancet. So be it.

64. #64 David Kane
February 8, 2009

Tim writes: “Under any reasonable assumptions MSB does not make much difference.” Well, it seems a big step from this to “badly flawed.” I think it is helpful to break up Johnson et al into two parts: the model that they use and the parameter values they estimate for Iraq. If there is something wrong with the model (the math is wrong, the code behind their sensitivity analysis mistaken), then that would be a real problem. But, as far as I can see (I am still trying to process Tim’s #55), no objections have been made to the math or the computer code.

The second part is the parameter values. Here, we are making some progress, but not much. (Tim’s redefining of certain terms does not help matters.) So, let’s revisit this. What specific parameter values do you find to be reasonable and what is your evidence for them?

Now, at this point, you might try to switch the burden of proof and say, “Wait! It is the job of Johnson et al to come up with, and defend, their choices.” Let’s go back to: “n, the size of the unsampled population over the size of the sampled population.” I think (I have not checked this with any of the authors) that Johnson et al would say:

You’re right that we don’t have great evidence for n = 10, but no one else has any better evidence for any value in a wide range because no one (including critics like Tim) knows what the sampling plan even was.

If the Lancet authors would tell us what the sampling plan was, then we might be able to pin down the value of n, but until they do, I think anything from 0 to 20 is reasonable.

In other words, it is not enough to say that n = 10 is wrong. You need to present evidence for what n is. Since you don’t know what the sampling plan was, you have no evidence.

Now, if Johnson et al claimed that 10 was the only reasonable number, that numbers like 5 or 20 were impossible, then you would have a point. But they don’t. They admit that n could be in a broad range and they provide a useful sensitivity analysis.
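[Whatever one thinks of n = 10, the model’s sensitivity to n is easy to explore numerically. The formula below is a reconstruction from the parameter definitions in the post, not a quote from the paper, though it reproduces the paper’s factor of 3 at n = 10. Holding the paper’s other choices fixed and sweeping n shows the bias saturating long before n gets large:]

```python
def msb_bias(q, n, fi, fo):
    # Reconstructed MSB bias ratio (an assumption, not the paper's text):
    # in-dwellers die at a rate proportional to q*fi + (1 - fi),
    # out-dwellers at q*(1 - fo) + fo; the survey only sees the former.
    rate_in = q * fi + (1 - fi)
    rate_out = q * (1 - fo) + fo
    return (1 + n) * rate_in / (rate_in + n * rate_out)

# Hold the paper's other choices fixed (q = 5, fi = fo = 15/16) and sweep n.
# The ratio climbs from 1.0 at n = 0 toward a hard cap as n grows.
for n in (0, 0.5, 1, 5, 10, 20, 100):
    print(n, round(msb_bias(5, n, 15/16, 15/16), 2))
```

With these f and q values the ratio can never exceed rate_in/rate_out = 3.8 no matter how large n is made, so the headline bias hinges on q and the f’s at least as much as on the contested n = 10.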

65. #65 Kevin Donoghue
February 8, 2009

My comment No. 57 is hereby retracted. I shouldn’t post on the morning after such a night before. I think Tim makes a very telling point in No. 55 and I will be interested in whether the MSB fans can do anything better than sniff.

David Kane: I bet that Little 82 refers to: Little, R.J.A. (1982). Models for nonresponse in sample surveys. Journal of the American Statistical Association, 77, 237-250.

Thanks David. It’s here in case anyone wants to buy it. Nothing in it suggests to me that one can obtain useful estimates of bias by the Gourley method (colouring in street maps). So it looks as if Johnson et al referenced it merely to give a scientific veneer to their paper. Alan Sugar had a good name for that particular marketing trick: the mug’s eyeful.

66. #66 dhogaza
February 8, 2009

It’s a bit like saying that if you “construct” a case in which force doesn’t equal mass times acceleration, then you can show that Newton’s second law of motion is wrong.

If you could actually demonstrate this physically, then yes, you would have shown that Newton’s second law of motion is wrong. Actually, we know it’s wrong, but it’s a very precise approximation at relatively low velocity, therefore it’s usefully wrong.

How does this disprove Tim’s example showing that the formula in question is wrong?

67. #67 Le
February 8, 2009

Shone, Kane:

We’re getting bogged down in details, and losing sight of the issue.

Bottom line – addressing the now multiple sources of estimates of Iraqi mortality, what central estimate and range do YOU accept for Iraqi excess deaths due to the war?

68. #68 Robert Shone
February 8, 2009

dhogaza writes:

If you could actually demonstrate this physically, then yes, you would have shown that Newton’s second law of motion is wrong.

Well, my point was not that you can’t demonstrate that a given “law” (Newton’s or any other) is wrong, but that you wouldn’t do so credibly by means of a hypothetical construct in which you define in advance that it’s wrong.

69. #69 Robert Shone
February 8, 2009

Kevin Donoghue

I think Tim makes a very telling point in No. 55

Do you think his “construction” demonstrates (as he claims) that the “[MSB] formula is wrong”? Or is it some other telling point that he’s making?

70. #70 Kevin Donoghue
February 8, 2009

The research programme of the MSB team reminds me of the Underpants Gnomes’ business plan:

Phase 1: Get Burnham’s data
Phase 2: ?
Phase 3: Publish estimates!

Can any of their defenders tell me what is supposed to happen in Phase 2? I don’t want vague generalities. I want to know, for example, how it is proposed to estimate the risk of death in the area outside the survey space. Without that the MSB equation cannot generate a number. Assume Burnham’s data includes anything within reason – maps, dates, family composition – even things he has never pretended to know. But you cannot assume things which are untrue by definition; for example, that he has information from households outside the survey space.

David Kane? Robert Shone? Prize-winning authors? (Tim hasn’t banned you, so you are entitled to join the conversation.)

71. #71 Eli Rabett
February 8, 2009

RS: You can demonstrate that they are inconsistent with their assumptions by a constructed example, which is what Tim has done

Kevin: Hospitals were killing zones if you were of the wrong sect, so there were excellent reasons why some casualties never were taken there, or left as soon as possible, if possible

TimL: Does this sort of stupidity remind you of the hockey stick issue, where unrealistic parameterization was used to try and falsify a basically robust result?

72. #72 David Kane
February 8, 2009

Le: The last estimate I offered on this question (at least for violent deaths) was 100,000 (0 — 300,000) as of July 2007. Glad you asked!

Kevin asks “Can any of their defenders tell me what is supposed to happen in Phase 2?” Well, I could offer some guesses, but that is not the purpose of this thread. Our purpose is to discuss the quality of Johnson et al 2008. You are curious about how Johnson et al 2008 would have been different had the Lancet authors shared the data with them. An interesting topic! But not for today. If you have specific parameter values that you think more appropriate (and evidence to back them up), then make your case.

Eli: I agree that the Lancet debate and the hockey stick controversy are similar, but perhaps for different reasons that you do! 😉 Also, do you have any substantive comments to make on the paper?

73. #73 sod
February 8, 2009

David, Robert S., why do you both simply ignore the problem with gender distribution among the killed?

this does completely contradict their version of “mainstreet bias”!!!

74. #74 Sortition
February 8, 2009

People can be skeptical, people can argue, people can disagree, people can make honest mistakes – that is not what is happening here.

Kane has shown clearly, over the years in which he has been trying to promote himself using Iraq mortality studies as a vehicle, that he is unable to produce any substantive arguments.

Shone’s inability to engage substantively with the issues regarding the bias model shows that he as well is not writing with the objective of finding out the truth, but with other objectives – personal gain, political commitment, etc.

There is therefore really no way to engage these people in an honest debate. Responses to their comments can only result in eliciting more manipulative, dishonest comments.

75. #75 dhogaza
February 8, 2009

Well, my point was not that you can’t demonstrate that a given “law” (Newton’s or any other) is wrong, but that you wouldn’t do so credibly by means of a hypothetical construct in which you define in advance that it’s wrong.

By your logic, if I state that “n + m = 5”, and Tim states “this is easily proven false by defining n = 1 and m = 1”, then Tim hasn’t falsified my equation because he’s given a construct in which he’s defined in advance that my equation is wrong.

Odd logic.

There’s nothing physically implausible about Tim’s hypothetical case. If the given equation fails under certain cases, then the constraints which must be met for it to be accurate must be stated along with the equation. And when applied to a particular problem, such as Lancet 2, one must show that the data being analyzed meets the stated constraints before one can state that the equation shows that something’s rotten in Denmark.

No exceptions.

76. #76 Kevin Donoghue
February 8, 2009

David Kane: You are curious about how Johnson et al 2008 would have been different had the Lancet authors shared the data with them.

No, that wasn’t my question, though of course you keep going back to that in order to avoid addressing criticisms of the paper. Now, before you ask what criticisms I refer to, try reading the thread. Let’s recall that you requested this thread. How many times now have we seen you return to your complaints about Burnham and how he won’t feed these poor pitiful wretches with data?

If you have specific parameter values that you think more appropriate (and evidence to back them up), then make your case.

You want me to pluck parameter estimates out of my arse like Johnson et al? Sorry, David, I don’t go in for that sort of thing. To me, statistical inference isn’t about colouring maps downloaded from Google Earth. It’s mostly about estimating model parameters, so first we need a model which is constructed in such a way that it can be estimated.

The MSB model alas, cannot be estimated, even in a world where Gilbert Burnham posts his data on the web for all to download. I take it you agree, which is why you keep ducking the issue.

77. #77 Kevin Donoghue
February 8, 2009

Robert Shone: Do you think [Tim’s] “construction” demonstrates (as he claims) that the “[MSB] formula is wrong”.

AFAICT the algebra deriving the formula itself is correct so in that sense it’s right. I presume Tim accepts that, though if he says there’s a howler in there I’ll certainly look again; he’s no slouch when it comes to maths. I take it Tim’s claim is that the model is logically incoherent; not just useless for any practical purpose as I’ve been explaining to David. Tim’s comment is not (yet) a proof of that, where “proof” is defined as convincing to me. But don’t read too much into that. I’ve often seen proofs containing the word “obviously” where the author meant: this will be obvious when you’ve thought about it long enough. And indeed it was, eventually. For now though I’m only prepared to say that the MSB paper is worthless, not that it is also nonsense in the strictest sense of the word.

Eli,

Those stories about Iraqi hospitals were indeed on my mind when I wrote my earlier comment about the possibility that q<1. The MSB paper has many worse flaws, but the fact that the authors didn’t see the need to consider that possibility, even if only to argue against it, reflects badly on them. Good papers are more judicious than that – the Lancet papers contain so many caveats about possible biases that they provide the critics with the best ammunition they have.

78. #78 David Kane
February 8, 2009

Kevin: Your tone is hardly helping this conversation. Anyway, you write:

The MSB model alas, cannot be estimated, even in a world where Gilbert Burnham posts his data on the web for all to download. I take it you agree, which is why you keep ducking the issue.

No, I just view this as (almost) too obvious to discuss, but I am all about education, so here goes:

1) Seppo Laaksonen wanted to know the average number of main streets in each cluster. Burnham refused to tell him. (This isn’t just about the L2 authors’ refusal to share data with Johnson et al.) If Laaksonen had that data, he might be able to come up with a rough sense of the actual coverage of the survey. If there are lots of “main streets” in the typical town, then n is probably fairly small. Almost every house will be near one of those main streets. If there are very few main streets, then n might be quite large since lots of houses will be nowhere near a main street (or any of its cross streets).

The more details that the Lancet authors release, not just about the actual data but about the procedures that they used, the more we are able to understand what they did. That knowledge allows us to come up with better (albeit still very rough) estimates for the Johnson et al (2008) model, or a different model.

By the way, I don’t think the word “estimated” means what you think it means.

Sortition: If you don’t feel like engaging in a conversation about the quality of Johnson et al (2008), then you should go away. The rest of us are busy here.

sod: With regard to #73, I confess to being confused. Gender plays no part in the model here, so it is hard to see how any gender breakdown in deaths can “completely contradict” the paper. Perhaps your point is that their use of gender in estimating things like f is inconsistent with higher male mortality? I don’t really see that, but please make your case in more detail.

79. #79 Jody Aberdein
February 8, 2009

How could Laaksonen estimate n from the average number of main streets?

80. #80 Dano
February 8, 2009

I think I’ll stay away for a few days until yapping purse dog Kane finally wears out and people get tired of responding to him.

Someone plz ring me up when this happens.

Best,

D

81. #81 sod
February 9, 2009

sod: With regard to #73, I confess to being confused. Gender plays no part in the model here, so it is hard to see how any gender breakdown in deaths can “completely contradict” the paper. Perhaps your point is that their use of gender in estimating things like f is inconsistent with higher male mortality? I don’t really see that, but please make your case in more detail.

Tim made the point in the original post. i wrote about it, in basically every post i wrote on this subject.

if female, kids and elderly (who live in the mainstreet zone) spend all their time in the risky zone, while the (outnumbered) young males leave it for several hours per day, then we would expect the vast majority of death cases to be female, kids or elderly. but reality shows exactly the opposite!

82. #82 sod
February 9, 2009

Gender plays no part in the model here

it implicitly does. they assume, that the place where you live determines risk. they also assume that males leave their homezone a lot, while females don t.

when you choose a more reasonable base assumption (one supported by FACTS) like “males have a much higher risk of dying from violence”, then by using their other assumptions (males move around a lot) you automatically come to a conclusion that contradicts (their version of) the mainstreet bias theory!

83. #83 Robert
February 9, 2009

In #43, Kevin offered:

let me guess

Sorry to have taken so long to respond. I wanted to check out the actual published version of the Johnson paper to see whether the fundamental problem I mentioned had been addressed. It hasn’t so, like Tim, [my earlier comments from when the paper was first discussed](http://scienceblogs.com/deltoid/2006/12/main_street_bias_paper.php) still stand. I see from that earlier discussion that no one really picked up on my point then (not that I blame anyone; my comments were necessarily brief and thus overly cryptic), so I’m pleased that Sortition has noticed it now and understands its ramifications.

The fundamental problem with the Johnson paper is that it is a “plausibility argument.” There is no actual data analyzed so the authors have constructed a model with model parameters that they believe to be “plausible.” Using these parameters, they conclude that Burnham’s estimate of violent deaths was inflated. Tim, and others, have countered by pointing out that the parameters weren’t plausible, and its defenders have been responding with “yes they are.” And that’s the back-and-forth that’s been going on for a while now.

But I think the problem is that the Johnson plausibility argument is itself flawed, and the key is very much like the Holmesian “dog that didn’t bark.” Here’s the key: the only major methodological difference between the Roberts study and the Burnham study was in the way that the starting point for each cluster was chosen. Johnson et al. have seized on this and use it to explain the difference in violent deaths by appealing to a Main Street Bias. But since the only major difference was the starting point, any plausibility argument must take into account all of the relevant differences between the Roberts and Burnham studies. The thing that everyone has been ignoring is the dog that didn’t bark: it’s the number of non-violent deaths.

If you go back and look you’ll see that, compared to the Roberts study, the Burnham study showed many more violent deaths over the same period but a consistent estimate for the number of total deaths. Any plausibility argument must therefore explain both the increase in violent deaths, which MSB may or may not do depending on model parameters, and the decrease in non-violent deaths in exactly the right amount so the totals remain comparable, which MSB fails to address at all. MSB is a theory about only half of a problem. For any argument to be truly plausible it must fit all of the available facts, not just the ones it cherry picks.

So is there an explanation that does fit the available facts? Demographers, boring drones that they are, have spent way too much time examining “typical” patterns of error in survey data. For example, we often see that there are common patterns in how age is misreported so that there is “heaping” on certain numbers. For another example, we often see evidence of recall bias where events that happened long ago get misreported: it appears that people forget long-ago events far more than they make up non-existent events. There are lots of these examples of typical error patterns. So here’s one more typical error: cause of death is often much less reliably reported than that a death occurred at all.

So the main problem with MSB isn’t that the parameters are implausible (though they may be). It’s that a much simpler explanation exists that covers both 1) the increase in violent deaths and the decrease in non-violent deaths and 2) is consistent with behavior we see in other mortality surveys: the attribution of cause of death as violent or non-violent between the two studies is off.

Note that this doesn’t mean that it was the 2006 Burnham study that was off: it could just as easily have been that the 2004 Roberts study was off (or even that both were off but in different directions).

So, do I have any data? Nope. I’m offering a plausibility argument, just like MSB. However, unlike MSB, it’s a plausibility argument that addresses both violent and non-violent deaths, not just one of them.

84. #84 Robert Shone
February 9, 2009

dhogaza writes:

By your logic, if I state that “n + m = 5”, and Tim states “this is easily proven false by defining n = 1 and m = 1”, then Tim hasn’t falsified my equation because he’s given a construct in which he’s defined in advance that my equation is wrong.

That’s not quite what I meant. Look, you can easily “demonstrate” that MSB is “wrong” without going through Tim’s whole rigmarole (comment #55). You just have to state that by definition “people” (one or many) have the same risk of being killed regardless of whether they live in a dangerous area or a peaceful area.

Of course, you’re not really “demonstrating” anything. But it might look that way to the gullible if you dress up the “no bias by definition” case in a plausible-sounding hypothetical situation.

85. #85 David Kane
February 9, 2009

Robert writes:

The fundamental problem with the Johnson paper is that it is a “plausibility argument.” There is no actual data analyzed so the authors have constructed a model with model parameters that they believe to be “plausible.” Using these parameters, they conclude that Burnham’s estimate of violent deaths was inflated. Tim, and others, have countered by pointing out that the parameters weren’t plausible, and its defenders have been responding with “yes they are.” And that’s the back-and-forth that’s been going on for a while now.

I think that is a fair summary. But don’t forget that the reason that “no actual data [is] analyzed” in Johnson et al is that the Lancet authors refuse to share the data with them.

If you go back and look you’ll see that, compared to the Roberts study, the Burnham study showed many more violent deaths over the same period but a consistent estimate for the number of total deaths. Any plausibility argument must therefore explain both the increase in violent deaths, which MSB may or may not do depending on model parameters, and the decrease in non-violent deaths in exactly the right amount so the totals remain comparable, which MSB fails to address at all. MSB is a theory about only half of a problem. For any argument to be truly plausible it must fit all of the available facts, not just the ones it cherry picks.

MSB is not attempting to address the “problem” of discrepancies between Roberts et al (2004) and Burnham et al (2006). I agree that this is an interesting topic and goodness knows that I have spent a lot of time on it myself. But this is not what MSB is about. You need to critique the paper they wrote, not the paper that you think they should have written.

Again, and sorry to be repetitive, but MSB has two parts: the actual model and the parameter estimates. If you think the model is wrong, prove it. Math is fun! If you think the parameter estimates are wrong, suggest some others. But the fact that the model (or the estimates) do not address the topic of the discrepancies between Roberts and Burnham is hardly relevant to Johnson et al. That is not their topic.

[T]he attribution of cause of death as violent or non-violent between the two studies is off.

Perhaps. Needless to say, you had best not mention this criticism to Les Roberts. He thinks that the two studies are perfectly consistent with each other.

86. #86 Tim Lambert
February 9, 2009

Robert Shone, in my example, there was no bias, but their formula said there was. This proves that the formula is wrong. I am sorry that this is too complicated for you.

87. #87 David Kane
February 9, 2009

Tim: On #55, is the market in the sampled area? I think it is, but just wanted to clarify.

Also, I am somewhat leery of a counter-example which requires taking limits (which doesn’t invalidate your point, of course), so could you provide an example where the formula gives the wrong answer without setting q = infinity? If it is “easy to construct examples where the formula is wrong by an arbitrary amount,” then this should not be a problem for you.

88. #88 sod
February 9, 2009

Tim: On #55, is the market in the sampled area? I think it is, but just wanted to clarify.

yes. (this is pretty obvious from the values he chose for fi and fo..)

Also, I am somewhat leery of a counter-example which requires taking limits (which doesn’t invalidate your point, of course), so could you provide an example where the formula gives the wrong answer without setting q = infinity? If it is “easy to construct examples where the formula is wrong by an arbitrary amount,” then this should not be a problem for you.

any q large enough will do. even q=5 will produce a bias of 1.62, q=100 already gives 1.9. i need to do some more thinking about the formula, but Tim s example seems to show, that there is a serious problem.

89. #89 sod
February 9, 2009

MSB is not attempting to address the “problem” of discrepancies between Roberts et al (2004) and Burnham et al (2006). I agree that this is an interesting topic and goodness knows that I have spent a lot of time on it myself. But this is not what MSB is about. You need to critique the paper they wrote, not the paper that you think they should have written.

David, you didn t understand what Robert said.

it is a similar problem with the male/female distribution of deaths: mainstreet bias would require a ratio of about 3 to 1 between female/kids/elderly and male population from the polled zones. but instead the study finds exactly the opposite, (young) male outnumber female (et al) by a HUGE margin!

90. #90 Robert Shone
February 9, 2009

Tim Lambert writes:

Robert Shone, in my example, there was no bias.

Well, that’s because by definition, your hypothetical people have the same risk of being killed regardless of where they live. In other words, it’s nonsense.

Why don’t you bring your example into the messy real-world. A bomb hitting the market might start a fire or launch a piece of shrapnel which kills people sleeping in a nearby house – or whatever.

I think MSB gave the right answer. Your definition of “equal risk” was wrong.

91. #91 sod
February 9, 2009

ah i see, fi=1 (from Tim s example) doesn t make a difference between the “mainstreet zone” and the market.
the formula seems to be ok.

92. #92 David Kane
February 9, 2009

Having now taken the time to study Tim’s example in #55, I think I see what the issue is. I don’t know an easy way to put the math into a comment, but we are looking at formula 1 from page 5 of this pdf.

In Tim’s example, “A and B have the same risk of death.” Johnson et al write:

Probabilities of death for anyone present in Si or So are, respectively, qi and qo , regardless of the location of the households of these individuals.

The tricky part (and perhaps the source of confusion) is whether Tim and the authors are using “risk of death” in the same sense. If they are, then Tim is wrong. If qi and qo are the same, then q (their ratio) is one and, as the formula makes clear, if q = 1 then R equals 1, and there is no bias, just as we would expect.
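This much can at least be sanity-checked numerically. Reading formula (1) as R = (1+n)(q·fi + 1−fi) / (q·fi + 1−fi + n(fo + q(1−fo))), setting q = 1 collapses R to 1 for any values of the other parameters (the parameter triples below are chosen purely for illustration):

```python
def msb_bias(q, n, fi, fo):
    # Formula (1) of Johnson et al (2008), as I read it.
    inside = q * fi + (1 - fi)
    outside = fo + q * (1 - fo)
    return (1 + n) * inside / (inside + n * outside)

# With q = 1 the inside and outside risks are equal by definition, so both
# "relative risk" terms collapse to 1 and R = (1+n)/(1+n) = 1 exactly,
# whatever n, fi and fo are:
for n, fi, fo in [(10, 15/16, 15/16), (0.5, 1.0, 2/16), (3, 0.5, 0.7)]:
    assert abs(msb_bias(1, n, fi, fo) - 1) < 1e-12
```

So if Tim and the authors mean the same thing by q, equal risk (q = 1) does give no bias; the dispute is whether Tim’s scenario really corresponds to q = 1 in their sense.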

But perhaps the issue is more subtle. Tim is claiming that even though A and B have the same risk of death, then qi and qo are not the same because he is defining these terms in a different way than the authors are.

At this point, I am not sure who is right and who is wrong. But, for us to make progress, we need Tim to tell us what values for each of the terms in formula 1 he is plugging in to come up with his counter-example. He may very well be correct that there is a major flaw. But I, at least, can’t follow the argument without a more explicit mapping between Tim’s example and the formula in the paper.

sod: Or perhaps you could do this? You cite an example of q = 5, which means that the probabilities of death are 5 times higher for people in one area than another. But, in Tim’s example, probabilities of death are equal.

Hmmm. I am obviously just commenting out loud here.

The distinction seems to be between: What are the odds of person A dying in a particular region (regardless of where he lives)? Versus: What are the odds of person A dying in the region in which he lives?

I seek clarification from anyone on this point.

93. #93 sod
February 9, 2009

A bomb hitting the market might start a fire or launch a piece of shrapnel which kills people sleeping in a nearby house – or whatever.

your shrapnel is a nice one. it will have to travel quite a bit, as the Lancet way of “mainstreet bias” actually has a very small chance of polling people on a “mainstreet”.

so the shrapnel travels quite far and hits a house in a road intersecting with the “mainstreet”. now it will kill male and female/elderly/kids at a 1 to 3 ratio. and when the lancet is polling them, this is the ratio of dead they would get. but they didn t!

94. #94 Kevin Donoghue
February 9, 2009

Tim, if I understand correctly, you are getting your result by assuming rational behavior. Your two agents limit their risk by staying away from the marketplace. But the MSB assumptions rule out rational behavior. As the economists would say, q is taken to be a technological constant. If you are on Risky Street you cannot diminish your risk except by leaving.

That’s a serious flaw in the model of course. It’s “wrong” in the sense that it violates the principles which most social scientists (not just economists) apply to model building. But Johnson isn’t a social scientist and Spagat is evidently an eccentric one. If they want to assume the agents in their model are as stupid as billiard balls, that’s their prerogative I think.

Or am I missing something?

95. #95 David Kane
February 9, 2009

Kevin: I think you are missing something. Tim is not assuming rationality or anything else. He is just creating an example in which we know R is 1. He then plugs in the values of the various elements of the formula. He knows what these are by construction. Since R is not equal to 1 using these values, we have a contradiction. So, the formula is wrong.

Again, I am not sure that Tim is right because I am not sure if he and the authors are defining q in the same way. But Tim’s approach is certainly sound and requires no assumptions of any kind.

96. #96 Robert Shone
February 9, 2009

sod writes:

your shrapnel is a nice one. it will have to travel quite a bit, as the Lancet way of “mainstreet bias” actually has a very small chance of polling people on a “mainstreet”.

Well, you’re missing the point that Tim’s “equal risk” exists only in some Platonic world which looks nothing like Iraq. If it’s not shrapnel or fire it’s something else negating his hypothetically assumed “equal risk”.

97. #97 Kevin Donoghue
February 9, 2009

Robert,

Thanks for the reply (#83). I think your theory is as good as we are going to get.

98. #98 Kevin Donoghue
February 9, 2009

David Kane: [Tim] is just creating an example in which we know R is 1.

Yes, but he is doing it by allowing his agents to exercise a choice which the agents in the MSB model do not have. In the MSB model there is one way, and only one way, to limit your risk: leave the survey space. Tim’s agents equalise their risks in a different way, which the MSB model rules out by design. Tim’s agents spend the same amount of time in the riskiest part of the survey space – the marketplace. But in the MSB model the survey space does not have variations of risk within it.

I agree with you that Tim’s approach is sound, or at any rate superior to the MSB approach. But he hasn’t convinced me that there is any logical flaw in the MSB model. Sure, it’s a crappy model but I think the logic is okay.

99. #99 Tim Lambert
February 9, 2009

I am using exactly their definition of q. Where their model fails is the assumption that every location in the sampled area is equally dangerous. Since the sampled area includes main streets and streets intersecting main streets, their model assumes that main streets and streets intersecting main streets are equally dangerous, i.e. their model assumes that there is no main street bias.
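One way to make this concrete (with my own illustrative parameter choices, not necessarily Tim’s exact ones in #55): put all the deaths at a market inside the sampled zone, so that q as Johnson et al define it becomes enormous, and give residents of both zones the same actual risk by having them visit the market equally often. The true bias is then 1 by construction, but the formula still reports a substantial bias:

```python
def msb_bias(q, n, fi, fo):
    # Formula (1) of Johnson et al (2008), as read earlier in this thread.
    inside = q * fi + (1 - fi)
    outside = fo + q * (1 - fo)
    return (1 + n) * inside / (inside + n * outside)

# Hypothetical setup: every death happens at a market inside the sampled
# zone, so qo -> 0 and q -> infinity, while residents of both zones visit
# the market equally often -- everyone has the same risk, so the true bias
# is 1. With fi = 1 (as in Tim's #55) and, say, n = 1 and fo = 15/16, the
# formula instead converges to (1+n)/(1+n*(1-fo)) = 32/17, about 1.88:
for q in [5, 100, 10_000, 1_000_000]:
    print(f"q={q}: R={msb_bias(q, n=1, fi=1.0, fo=15/16):.3f}")
```

The bias it reports is already material at modest q, which also bears on Kevin’s request in #87 for a counter-example that does not require taking the limit.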

100. #100 Kevin Donoghue
February 9, 2009

Where their model fails is the assumption that every location in the sampled area is equally dangerous.

That’s not a bug, that’s a feature! That’s how they force the result they want – the moment you step outside the survey space your life-expectancy improves. Of course as an approach to modelling it deserves all the ridicule you can heap on it.

It reminds me of a critique of capitalism I once read where, when you looked at the small print of the “model”, you found fixed coefficients everywhere: only one recipe for producing any of the goods, and consumers who all wanted a particular basket of commodities with no scope for substitution. When you got down to it, the guy was saying that markets don’t work if we assume nobody can ever make any choices. Which is logical of course, but it doesn’t stand up very well empirically.

Similarly, the MSB model is rejected by evidence that Iraqis are pretty adept at finding ways to reduce their risks. Relocation is not the only option, though judging by the number of refugees it’s often the best available.