New Scientist on AAPOR censure

Debora MacKenzie, in the New Scientist, reports on the AAPOR censure:

AAPOR charges that by refusing “to answer even basic questions” about data and methods, Burnham is preventing other researchers from evaluating his conclusions.

According to New Scientist’s investigation, however, Burnham has sent his data and methods to other researchers, who found it sufficient. A spokesman for the Bloomberg School of Public Health at Johns Hopkins, where Burnham works, says the school advised him not to send his data to AAPOR, as the group has no authority to judge the research. The “correct forum”, it says, is the scientific literature. …

“I know that they have shared the data for reanalysis with others, including some vociferous critics,” says Richard Garfield of Columbia University in New York, who has analysed mortality studies in Iraq. “So I am not sure what the censure really is about.”

In fact, in March 2008, AAPOR’s own journal, Public Opinion Quarterly, published an analysis of Burnham’s Iraq survey by David Marker of Westat, a consultancy in Maryland that designs surveys. “I received the dataset they distributed. I also saw presentations they made about their methodology and they responded to a number of inquiries I made,” he says.

The AAPOR press release does not mention this and misleads by implying that Burnham would not share his data.

Comments

  1. #1 David Kane
    February 6, 2009

    The “New Scientist’s investigation” seems to have avoided all contact with any Lancet critics. Is that the best way to get at the truth? I love the way that Garfield is not identified as a co-author (with Burnham and Roberts) on the 2004 Lancet study. Wouldn’t want to burden your readers with that information!

    And, for the record, I can back up Marker’s quote:

    I received the dataset they distributed. I also saw presentations they made about their methodology and they responded to a number of inquiries I made.

    Me too! But that doesn’t mean that what they have done is anywhere near enough.

    And, for the 100th time, Burnham refused to share his data with Spagat et al.

    I have never heard of a scientist sharing data behind a published study with one group of critics but not with others. Have you, Tim? Burnham’s behavior (driven, I think, more by Roberts) is unique.

  2. #2 Robert Shone
    February 6, 2009

    The apparently misinformed New Scientist piece states:

    “Yet Burnham’s complete data, including details of households, is available to bona fide researchers on request.”

    Great damage limitation though.

  3. #3 sod
    February 6, 2009

    I have never heard of a scientist sharing data behind a published study with one group of critics but not with others. Have you, Tim? Burnham’s behavior (driven, I think, more by Roberts) is unique.

    David, this claim is bogus.
    you really don’t think that scientists share data with “friendly critics” but will deny it to scientists whom they perceive as being hostile?
    actually i am pretty sure that i experienced this kind of behaviour personally!

    there are social science studies that research such stuff, and i remember that i linked you to them, when we had this discussion the last time.

  4. #4 David Kane
    February 6, 2009

    sod: To clarify. I know of scientists that share their data with no one. I know of scientists that share their data only with their friends.

    I think that both these groups are behaving disgracefully.

    I know scientists who share their data with everyone. (These are the only scientists whose work I trust and respect.)

    But Burnham/Roberts are in a fourth category. They share their data with some critics (me) but not with others (Spagat et al). I have never heard of a similar case. If you know of one, please cite it.

    And, much more egregiously, I have never, ever, ever seen a data sharing restriction couched as:

    The data will be provided to organizations or groups without publicly stated views that would cause doubt about their objectivity in analyzing the data.

    You can not cite a single academic anywhere who has ever put a similar restriction on data sharing.

  5. #5 Marion Delgado
    February 6, 2009

    When I said “they have no standing” I guess I wasn’t clear. The commenter on the other thread on this who brought up epidemiology WAS clear. They are the AMERICAN Association FOR Public Opinion Research. Something a Scott Rasmussen or John Zogby would join (or not bother joining). They don’t even do epidemiological surveys. Burnham apparently doesn’t do public opinion surveys working out of the United States.

    It’s a closed issue. The only people discredited here are AAPOR and Megan McArdle.

  6. #6 Robert Shone
    February 6, 2009

    I find the New Scientist piece dubious in several ways. In addition to her errors, Debora MacKenzie writes that AAPOR’s “stated purpose, to ensure survey-based research meets high standards, has been questioned by experts”. But she doesn’t say which experts, and she doesn’t specify how the “purpose” has been questioned.

    She makes vague, unsupported statements suggestive of “conspiracy”. Even the title is framed that way: “What is behind criticism of Iraq deaths estimate?” Then there’s the statement, right at the beginning: “There is no direct evidence that the latest attack on Burnham is politically motivated…”. So why mention political motivations at all?

    There’s no evidence that MacKenzie has close associations with Richard Horton, the editor of the Lancet. And there’s no evidence that John Hopkins made a special effort at damage limitation PR with the New Scientist (after giving up on Science and Nature journals as lost causes). So, since there’s no evidence of these things, let’s not mention them.

    MacKenzie writes: “According to New Scientist’s investigation, however, Burnham has sent his data and methods to other researchers, who found it sufficient.”

    Which researchers found it sufficient? She doesn’t mention any, apart from David Marker (not a particularly good example given that Marker has pointed out inadequacies in the Lancet study in terms of AAPOR standards).

    In fact she doesn’t really mention anyone apart from the John Hopkins crowd and colleague Richard Garfield. What kind of “investigation” did she conduct? Not a thorough one by the look of it. Did she approach any of the Lancet study’s many critics – people like Debarati Guha-Sapir, Mark van der Laan, Seppo Laaksonen, Beth Osborne Daponte, etc?

    Does she understand why many researchers are unhappy (to say the least) with the lack of disclosure of information necessary to properly assess the study? Is she even aware of this criticism?

    She writes: “Yet Burnham’s complete data, including details of households, is available to bona fide researchers on request.”

    Is she really unaware of all the ways in which this statement is incorrect or misleading, and all the implications which follow from it being incorrect and misleading?

  7. #7 bi -- IJI
    February 6, 2009

    > Debora MacKenzie writes that AAPOR’s “stated purpose, to ensure survey-based research meets high standards, has been questioned by experts”. But she doesn’t say which experts, and she doesn’t specify how the “purpose” has been questioned. […]

    > What kind of “investigation” did she conduct?

    Shone, did you apply the same level of skepticism to the AAPOR press release? Did you even try asking the same questions?

  8. #8 Robert Shone
    February 6, 2009

    You’re right, of course, that such press releases should be met with scepticism. There’s not enough detail there. However, if the AAPOR charges are unwarranted, you should be able to show me the questionnaire and household selection protocols used by the Lancet study. Can you? Can MacKenzie?

  9. #9 Eli Rabett
    February 6, 2009

    Johns Hopkins not John Hopkins. The old boy left a passel of railroad stock to found a university with his name on it and you go ruin the spelling. OTOH, the stock went to zero a couple of years after he gave it.

  10. #10 bi -- IJI
    February 6, 2009

    > You’re right, of course, that such press releases should be met with scepticism. There’s not enough detail there. However, if the AAPOR charges are unwarranted,

    Blatant, hypocritical attempt at shifting the burden of proof.

  11. #11 sod
    February 7, 2009

    You’re right, of course, that such press releases should be met with scepticism. There’s not enough detail there. However, if the AAPOR charges are unwarranted, you should be able to show me the questionnaire and household selection protocols used by the Lancet study. Can you? Can MacKenzie?

    a questionnaire is available. why don’t you explain to us why you don’t simply accept it?

    look, the lancet survey was done at a time when doing surveys in iraq was dangerous and NOT popular. the poll was done on a pretty tight budget.
    all these hidden accusations of fraud, or requests for perfect documentation, from people unwilling to do any fieldwork of their own, are slightly strange.

  12. #12 Robert Shone
    February 7, 2009

    sod writes: a questionnaire is available

    But not one which Burnham or Roberts will confirm using. The questionnaire that Tim Lambert linked to was a template on the National Journal site which Burnham/Roberts have declined to confirm or deny using. Researchers such as Fritz Scheuren have requested a copy of the questionnaire, but have been refused. Please see previous threads which have covered this already.

  13. #13 bi -- IJI
    February 7, 2009

    Shone’s double standards keep showing again.

    He keeps asking questions about the Burnham et al. study and the New Scientist essay, yet won’t even think of asking the same questions about the supposed AAPOR “investigation”.

    He keeps defending the AAPOR news release in ways which he won’t think of when it comes to the Lancet study and the New Scientist essay.

    If there was in fact an “investigation” by the AAPOR, where’s the investigation report? What was the exact wording of the questions which the AAPOR put forth to Burnham? What was the exact wording of Burnham’s replies to the AAPOR? Why can’t the AAPOR state clearly what exactly it found missing from Burnham’s public descriptions? Why do they have to leave it to the Lancet attackers to make up what the AAPOR ‘might’ have ‘meant’?

    Why aren’t the Lancet ‘skeptics’ asking these questions?

  14. #14 sean
    February 7, 2009

    These political debates are played out on the internet, and bloggers expect full disclosure. If it is not on the internet, it does not exist.

    The access you would give your boss: turnkey, with a working makefile / project file / or enough instructions to run it, including any macros or libraries. Normally using open or at least readily available tools where practical.

    People understand if you explain you may need to hold back details to protect people surveyed, or because you do not own the data/code, or there are legal / commercial reasons. Anything based on who people are is just asking for trouble… So while I have no problem with the survey, if you hold back, this is what you get.

  15. #15 David Kane
    February 7, 2009

    I agree with bi that the AAPOR should release an actual report, at the very least consisting of the questions that they asked Burnham. (I don’t think that they should publish his replies without his permission, although, in that case, they should summarize his replies and explain why they are unacceptable.)

    That said, it is fairly obvious what they asked for: the actual questionnaire. Burnham et al won’t release that (or confirm/deny that the National Journal has an accurate copy). If you don’t release your questionnaire then, whatever your other merits, you don’t meet AAPOR standards.

  16. #16 Michael
    February 7, 2009

    I find it pretty curious that AAPOR doesn’t demand of itself the same level of disclosure it expects of others.

    AAPOR doesn’t meet AAPOR standards.

  17. #17 Robert Shone
    February 7, 2009

    Bi writes:

    Shone’s double standards keep showing again.

    He keeps asking questions about the Burnham et al. study and the New Scientist essay, yet won’t even think of asking the same questions about the supposed AAPOR “investigation”.

    Bi, in fact I agree with you that we should ask questions about AAPOR’s investigation. But I think they are making a very simple claim: of non-disclosure. Either Burnham has released basic, necessary information such as household selection protocols, questionnaire, etc. Or he hasn’t.

    It seems that he hasn’t.

  18. #18 dhogaza
    February 7, 2009

    If you don’t release your questionnaire then, whatever your other merits, you don’t meet AAPOR standards.

    Why should people who aren’t doing public opinion surveys care about AAPOR standards?

    How many epidemiology surveys meet AAPOR standards? How many “investigations” into such standards has AAPOR done? Why did they choose this particular survey if they feel they should be the arbiter of good survey work done by epidemiologists? Are they vetting the social sciences as well?

  19. #19 douglas clark
    February 7, 2009

    It seems to me that AAPOR are not perhaps as neutral in this debate, as they would like to paint thmselves.

    They have taken this issue up on the basis of a ‘complaint’ from one of their members. Who is this complainant? And what are their criteria for accepting it as a basis for action?

    I’d have thought that AAPOR might like to comment here on their neutrality, or otherwise. And I’d especially like them to state who the complainant was.

  20. #20 douglas clark
    February 7, 2009

    sticky key

    themselves

  21. #21 douglas clark
    February 7, 2009

    It seems to me that the critics of Burnham et al.’s methodology ought to be asked exactly what their own methodology would have been, given the reality on the ground.

    Shone and Kane should, perhaps, provide an answer to that.

  22. #22 bi -- IJI
    February 8, 2009

    > But I think they are making a very simple claim: of non-disclosure.

    If it’s so simple, then why did the “investigation” have to take 8 months? Did the AAPOR spend 8 months doing nothing but asking for methods and data? Or was something else happening?

    > A spokesman for the Bloomberg School of Public Health at Johns Hopkins, where Burnham works, says the school advised him not to send his data to AAPOR, as the group has no authority to judge the research. The “correct forum”, it says, is the scientific literature.

    Sounds reasonable to me. Burnham et al. have every right to see to it that, when researchers use their data, these researchers will publish their findings through the proper peer review channels (whether the researchers agree with them or not). Because, at the end of the day, that’s the best channel for sorting out the solid from the bogus. No reason why they should allow their data to be hijacked by people who just want to write press releases.

  23. #23 Robert Shone
    February 8, 2009

    I think the Johns Hopkins line about the scientific literature being the “correct forum” is disingenuous. The problems with non-disclosure became apparent as a result of the scientific literature scrutinising the study. (Researchers complained that they weren’t getting the basic information required).

    As for the length of AAPOR’s investigation, perhaps a clue is offered by the last occasion on which they rebuked someone for ethical breach. That was 12 years ago, against the rightwing pollster Frank Luntz. That investigation started in January 1996 and concluded (with a formal rebuke) in April 1997.

    Why so long? AAPOR’s president says the following: “AAPOR tried on several occasions to get Luntz to provide some basic information about his survey, for example, the wording of the questions he used. For about a year, he ignored these requests. Subsequently, he provided partial information, but still refused o let us make any of the information public, arguing that the results were roprietary, even though he had been discussing the conclusions of the survey n public for nearly two years.” (consonants missing in original)

  24. #24 dhogaza
    February 8, 2009

    As for the length of AAPOR’s investigation, perhaps a clue is offered by the last occasion on which they rebuked someone for ethical breach. That was 12 years ago, against the rightwing pollster Frank Luntz.

    They have the expertise to investigate pollsters, since their ethical standards relate to polling.

    I see no evidence that their standards relate in any way to epidemiological or any other scientific survey technique, nor any evidence that folks doing such work bother with membership, nor any evidence that most researchers (as opposed to pollsters) have even heard of the organization.

    This is a bit like a businessman applying business auditing standards to research science.

  25. #25 Robert Shone
    February 8, 2009

    Well, ironically, David Marker, whose study the New Scientist piece cites, apparently disagrees with you about the relevance of AAPOR’s standards. Here’s what Marker says:

    “A few years ago, 35 leading survey researchers issued a consensus statement on how to minimize interviewer falsification of data (AAPOR 2003). This statement has been endorsed by the American Association for Public Opinion Research and the Survey Research Methods Section of the American Statistical Association. They listed eight factors that could affect falsification rates. Inadequate supervision, poor quality control and off-site isolation of interviewers were three of those factors that are present in this [Lancet] study. The remaining five factors (training on falsification, interviewer motivation, inadequate compensation, piece-rate compensation, and excessive workload) are harder to assess in this situation due to the limited information available on these topics. When collecting data on controversial topics, it is very important that steps be taken (and documented) to avoid falsification so that those who disagree with the findings cannot use this to try to discredit them.” http://poq.oxfordjournals.org/cgi/content/full/72/2/345

  26. #26 David Kane
    February 8, 2009

    Douglas Clark writes:

    It seems to me that the critics of Burnham et al.’s methodology ought to be asked exactly what their own methodology would have been, given the reality on the ground.

    Given the constraints of time and cost, I think the originally stated methodology was reasonable. It wasn’t perfect and it does generate some main street type biases, but it is better than nothing. (The IFHS methodology is infinitely better, but they had a much larger budget.)

    The issue is not: the L2 sampling methodology stinks. The issue is: No one knows what the L2 authors actually did. No one! Any academic author who conducts a survey and then refuses to release the questions and/or answer questions about the sampling methodology is suspect.

  27. #27 Crust
    February 10, 2009

    David Kane:

    The IFHS methodology is infinitely better, but they had a much larger budget

    I thought your view was that the IFHS methodology was inferior in some ways, at least as relates to estimating mortality and causes. In particular, most of the IFHS questions did not relate to mortality (that’s not a criticism of IFHS since it was a general health study, it’s only a defect if you’re interested specifically in mortality) and they did not ask for death certificates despite knowing that a previous study (Lancet) had found a very high response rate to that request. (Personally, I would add that the fact that the IFHS questioners were affiliated with the Iraqi government would also be an issue, especially with Sunni households, but IIRC you don’t view that as an issue.)

  28. #28 David Kane
    February 10, 2009

    Crust writes:

    I thought your view was that the IFHS methodology was inferior in some ways, at least as relates to estimating mortality and causes.

    Well, no study is perfect and I have some minor issues with some of the IFHS adjustments, but, big picture, IFHS is far superior to L1 or L2.

    [T]hey did not ask for death certificates despite knowing that a previous study (Lancet) had found a very high response rate to that request.

    Correct. The IFHS did not ask for death certificates for the same reason that no one would ask for death certificates if doing a similar survey in the US.

    I would add that the fact that the IFHS questioners were affiliated with the Iraqi government would also be an issue, especially with Sunni households, but IIRC you don’t view that as an issue

    It could be an issue. Tough to know. But the L2 interviewers were every bit as “affiliated” with the Iraqi government. They were/are government employees.

  29. #29 Crust
    February 11, 2009

    David Kane:

    The IFHS did not ask for death certificates for the same reason that no one would ask for death certificates if doing a similar survey in the US.

    I don’t understand. There no doubt are better ways to estimate violent and total death rates in the US than doing a household survey. But if I was doing such a survey and I knew a previous such survey had a very high response rate after requesting death certificates, I too would request death certificates. Wouldn’t you, David? If you thought the response rate in the prior survey was too high, wouldn’t that be all the more reason to request death certificates in order to help confirm or correct that unexpected observation?

  30. #30 David Kane
    February 11, 2009

    But if I was doing such a survey and I knew a previous such survey had a very high response rate after requesting death certificates, I too would request death certificates. Wouldn’t you, David?

    No, not in the US. I would know that many people (in the US) don’t keep death certificates in their house for years. If I were planning a survey, I would not plan to ask for them because I would not expect to find them. Nor would I ask for the family’s coat of arms, as interesting as that might be. Most US families don’t keep a copy of the family’s coat of arms around the household.

    Now, you might point out that some other surveyors recently did a survey and asked to see death certificates in the US, and found them. Interesting, but also not really believable. I would think of that survey in the same way that you thought about a survey that claimed 80% responses on the coat-of-arms question. You weren’t there, so you can’t be sure that the result is BS. But you would hardly change your own plans going forward.

    There are always more questions to ask in a survey than you have time to ask. So, you prioritize. Asking for death certificates was not an IFHS priority because they thought doing so absurd.

  31. #31 sod
    February 11, 2009

    Now, you might point out that some other surveyors recently did a survey and asked to see death certificates in the US, and found them. Interesting, but also not really believable. I would think of that survey in the same way that you thought about a survey that claimed 80% responses on the coat-of-arms question. You weren’t there, so you can’t be sure that the result is BS. But you would hardly change your own plans going forward.

    ouch David.

    this is about as close to a claim of fraud as it gets.

    No, not in the US. I would know that many people (in the US) don’t keep death certificates in their house for years. If I were planning a survey, I would not plan to ask for them because I would not expect to find them.

    this is false.

    a nice look at death certificates can be found [here](http://scienceblogs.com/effectmeasure/2008/08/basics_the_death_certificate.php)

    let me cite:

    In the US you can’t legally dispose of a body without a properly recorded death certificate and it’s a document survivors use for all manner of other purposes, from claiming a “compassionate fare” discount from an airlines to insurance money.

    the reason why you wouldn’t find a lot of death certificates in the USA is a simple one: the majority of people who die are elderly and die in their own household.

    the situation is completely different in Iraq.

    the death certificate is [important](http://picasaweb.google.com/nmyours/IRAQ1#5098503516598138946) in Iraq.

    An Iraqi official reviews the death certificate of a victim of a July 26 car bombing as his relatives apply for an emergency payment in the Karradah neighborhood of central Baghdad, Iraq on Sunday, Aug. 12, 2007. Displaced and bereaved families received about US$4,000 in relief for the losses they suffered on July 26 by a blast that killed more than 28 people, wounded at least 95 and left apartments and homes decimated. (AP Photo/ Hadi Mizban)

    you are wrong David, and you make pretty strong accusations.

  32. #32 Crust
    February 11, 2009

    David, I already addressed your point in the comment to which you were replying:

    If you thought the response rate in the prior survey was too high, wouldn’t that be all the more reason to request death certificates in order to help confirm or correct that unexpected observation?

    If the IFHS authors thought for some reason that — pace Lancet — death certificates were roughly as common as coats-of-arms, then that’s all the more reason to ask for death certificates in an effort to correct the record. It’s not like the Lancet is some obscure journal.

    There are always more questions to ask in a survey than you have time to ask. So, you prioritize.

    True that. Which is what I thought was one of your criticisms of IFHS’ methodology from the point of view of understanding mortality, that the treatment suffers because it is a small part of a lengthy study of health.

  33. #33 Crust
    February 11, 2009

    sod replying to David Kane:

    [T]his is about as close to a claim of fraud as it gets.

    Or, more precisely, it’s close to a claim that the IFHS authors thought that Lancet was a fraud even before they began their own study.

    As an aside, David, would you mind substantively replying on the gender issue re MSB? Just because sod uses boldface a lot, doesn’t mean s/he’s wrong.

  34. #34 Eli Rabett
    February 11, 2009

    Eli, being a practical bunny, imagines that a household would surely have the death certificate within half a year of the passing, probably within a year, and less likely the longer the time after the death. You absolutely need it to bury someone, take care of winding up their affairs and so on. Afterwards you might keep it around buried in the sock drawer or wherever.

  35. #35 sod
    February 11, 2009

    Or, more precisely, it’s close to a claim that the IFHS authors thought that Lancet was a fraud even before they began their own study.

    don’t allow David to spread false rumours again. if you publish false claims and spread them, you are responsible. david should be able to do some research on the subject.

    the claim that people don’t keep the death certificates of their sons, killed by violence, is simply idiotic.

    the meaning of a certificate is very different in a bureaucratic Arab country like Iraq.

    there is an interesting aspect of this on google books [Genocide in Iraq the Anfal campaign against the Kurds](http://books.google.de/books?id=qidfVsS-z8YC&printsec=frontcover&hl=en&source=gbs_summary_r&cad=0#PPR9,M1)

    if you search the book (page 65), you will learn that Saddam’s hospitals issued hundreds of death certificates even for murdered Kurds!

  36. #36 Kevin Donoghue
    February 11, 2009

    Crust: As an aside, David, would you mind substantively replying on the gender issue re MSB?

    Crust, speaking as one who has been reading David Kane’s comments for as long as he has been posting, I urge you to pay close attention to what he wrote in the MSB thread:

    David Kane: I have not studied the paper and associated materials closely.

    Tim Lambert created that thread after David repeatedly urged him to, on the grounds that the publication of the MSB paper is an important development. Then David wrote:

    I have not studied the paper and associated materials closely.

    That, Crust, is by far the most honest statement David Kane has made since he first started posting on Deltoid.

    I have not studied the paper and associated materials closely.

    I think it’s the Kane family motto. Homework? We don’t do homework. We have people for that.

  37. #37 David Kane
    February 11, 2009

    Kevin: I guess it depends on what you mean by closely. I (obviously) read it more closely than you. That’s how I caught Tim redefining terms without telling us. But, at the same time, I did not check every line of the proof in the Appendix.

    All: If anyone has a better explanation for why the IFHS authors did not ask for death certificates, please supply it. (L2 had not been published at the time they did the survey, but L1, with its similarly implausible death certificate claims, had been out for more than a year.) Perhaps the IFHS authors (and WHO!) are part of the evil neocon conspiracy.

    crust writes:

    As an aside, David, would you mind substantively replying on the gender issue re MSB? Just because sod uses boldface a lot, doesn’t mean s/he’s wrong.

    Indeed! I view the gender issue as irrelevant. The MSB model applies to any situation with a correlation between sampling areas and violent areas. Gender does not enter directly. If there is a correlation (and you believe the assumptions), then you can use the model. If not, you can’t. The MSB model itself is perfectly consistent, depending on what else is going on with any particular gender distribution of deaths.

    Consider some extreme examples. What if every death in Iraq were female? Would that invalidate MSB? No! MSB makes no predictions about the gender distribution of deaths. None. So, no result can be a contradiction. What if every death in Iraq were male? Would that invalidate MSB? Again, no.

    It’s as if sod were shouting that the fact that 4,000+ US soldiers have died invalidates MSB. It’s true that 4,000+ US soldiers have died, but that fact has no connection to MSB because MSB makes no predictions about US deaths.

    To be honest, this critique seems so incoherent (and shouted) that responding is a waste of time. But, if sod (or anyone else) wanted to write it up more thoroughly, perhaps I (or Robert Shone) could understand it and respond.

  38. #38 Tim Lambert
    February 12, 2009

    David, I did not redefine terms without telling you. I told you what I was doing. The model assumes that there is no main street bias; altering the model by redefining fi and fo was a simple fix for this.

    The IFHS didn’t ask for death certificates because it wasn’t primarily a mortality study. They used the design of similar health surveys and probably didn’t even think of asking for death certificates.

    sod is addressing the parameter choices that the MSB folks claimed were reasonable. The biggest claim of the MSB paper is that main street bias makes a big difference to the results. But the parameter choices they made imply that the gender distribution of deaths would be completely different to what was observed. You can’t hand-wave this away with complaints about shouting.

  39. #39 sod
    February 12, 2009

    Consider some extreme examples. What if every death in Iraq were female? Would that invalidate MSB? No! MSB makes no predictions about the gender distribution of deaths. None. So, no result can be a contradiction. What if every death in Iraq were male? Would that invalidate MSB? Again, no.

    look david, here is what the paper says (page 7 of the draft pdf)

    (3) Intuitively, the probability fi is roughly the average fraction of time spent by residents of Si in Si. Similarly, fo is roughly the average fraction of time spent by residents of So in So. Given the nature of the violence, travel is limited; women, children and the elderly tend to stay close to home. Consequently, mixing of populations between the zones is minimal. Using the time people spend in their homes as a lower bound on the time they must spend within their zones, we can obtain rough estimates for fi and fo. Assuming that there are two working-age males per average household of seven (Burnham et al., 2006), with each spending six hours per 24-hour day outside their own zone, yields fi = fo = 5/7 + 2/7 × 18/24 = 13/14.

    now females (from the polled area) spend more time in the dangerous zone than (polled) men do.

    and while spending time inside the zone, you have a higher risk of death.

    i guess you can calculate the expected distribution of death among females and working age males (from the polled zone) all for yourself David?

    gender does matter directly, if you follow the assumptions of the model!
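
    For what it’s worth, the quoted arithmetic can be checked directly. The sketch below uses only the figures as quoted above (an average household of seven with two working-age males, each spending six of 24 hours outside their own zone); the variable names are my own.

    ```python
    from fractions import Fraction

    # Check the fi = fo estimate quoted from the MSB draft.
    # The assumptions are the paper's as quoted; the names are illustrative.
    household = 7          # average household size (Burnham et al., 2006)
    working_age_males = 2  # per household
    hours_out_of_zone = 6  # per 24-hour day, for each working-age male

    stay_home = household - working_age_males       # women, children, elderly
    in_zone = Fraction(24 - hours_out_of_zone, 24)  # 18/24 of the day, for the men

    f = Fraction(stay_home, household) + Fraction(working_age_males, household) * in_zone
    print(f)  # 13/14, matching the quoted value
    ```

    Note that under these same assumptions everyone except the working-age males spends all of their time in their own zone, which is exactly the gender skew sod is pointing at.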

  40. #40 Robert Shone
    February 12, 2009

    New Scientist has published a completely different piece on the AAPOR thing in their magazine. Much toned down. Perhaps they thought MacKenzie’s online report (which Tim Lambert quotes above) was misinformed, hysterical and read too much like conspiracy theory?

    http://www.newscientist.com/article/mg20126953.800-fresh-controversy-about-iraqi-death-estimate.html

  41. #41 Robert Shone
    February 12, 2009

    In an email to me, MacKenzie wrote the following interesting line:

    The similarity between this procedure [AAPOR’s requests] and the anonymous, unspecified charges and secret deliberations characteristic of jurisprudence in totalitarian states should be painfully obvious, and not really worthy of further comment.

    It almost sounds like the conspiracy buffs at Medialens.

  42. #42 Crust
    February 12, 2009

    David Kane:

    I view the gender issue as irrelevant…
    To be honest, this critique seems so incoherent (and shouted) that responding is a waste of time.

    I would have thought the relevance was pretty hard to miss. The last go-round, I joined the chorus trying to explain to you the relevance of the much higher violent death rate for non-elderly men than for women, kids and the elderly:

    As the authors themselves point out, men from the sampled zone likely spend more time on average in the unsampled zone than others living in the sampled zone. Similarly, males from the unsampled zone likely spend more time in the sampled zone. In other words, you would expect to see a greater sex/age differential in the unsampled zone than in the sampled zone. But to get their numbers to work out, they would need a dramatically smaller differential there. It just doesn’t add up.

    Do you consider that shouted or incoherent? (In fairness, of course you may just have missed the comment. But there were several other attempts in the thread to explain it to you, even if some did use boldface.)
