Stephen Soldz has posted his discussion with Jon Pedersen about the new Lancet study:
[Pedersen thinks that the] prewar mortality is too low. This would be due to recall issues. ...
Pedersen thought that people were likely reporting nonviolent deaths as violent ones.
These two have to go together. If prewar mortality was too low because people forgot to mention prewar deaths, the comparison would have shown a significant increase in non-violent deaths after the invasion. That didn't happen, so Pedersen must also believe that a significant number of deaths were misclassified. I don't see how this is possible, since death certificates were checked.
Pedersen believes the confidence intervals (CIs) for the Lancet studies are too small. This involves two issues. First, the calculated CIs only include sampling error, but the study also includes non-sampling error, as in when a different household was chosen for security reasons or the protocol wasn't followed for other reasons.
This is true for any survey. There isn't a good way to calculate CIs for non-sampling error so CIs are always reported as those for sampling error. The very high response rate for the Lancet study means that non-sampling error is likely to be less than most other surveys.
A second reason the CIs are too small is a technical one. Death is a relatively rare event. The statistical methods used to calculate CIs [linearized variance estimators] are based on an assumption that the events aren't too rare. Thus, the CIs are likely too small.
This one is straightforward to check. I hope Pedersen and Roberts can get together and calculate the CIs the way Pedersen prefers and see if it matters.
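For concreteness, here is a minimal sketch of the kind of comparison I have in mind: the standard linearized (ratio-estimator) CI for a death rate from cluster data, next to a CI from resampling whole clusters. The cluster counts below are invented for illustration; they are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative cluster-level data: invented numbers, not the study's data.
deaths = rng.poisson(3, size=47)            # deaths observed in each cluster
exposure = rng.normal(3400.0, 300.0, 47)    # person-months of exposure per cluster

rate = deaths.sum() / exposure.sum()        # overall deaths per person-month

# Linearized (ratio-estimator) variance across clusters: the usual design-based CI.
n = len(deaths)
resid = deaths - rate * exposure
se_lin = np.sqrt(n / (n - 1) * (resid ** 2).sum()) / exposure.sum()
print("linearized 95% CI:", rate - 1.96 * se_lin, rate + 1.96 * se_lin)

# One alternative: resample whole clusters with replacement and take percentiles.
boot = []
for _ in range(10_000):
    idx = rng.integers(0, n, n)
    boot.append(deaths[idx].sum() / exposure[idx].sum())
print("bootstrap 95% CI: ", np.percentile(boot, [2.5, 97.5]))
```

If the two intervals diverge badly when run on the real data, Pedersen has a point; if they agree, the technical objection matters little.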
Pedersen acknowledges that the ILCS cannot be the gold standard for mortality as asking one brief set of questions at the end of a long survey would lead to underreporting of mortality. Thus, he assumes that the ILCS figures are low. But he thinks they are in the ballpark.
Pedersen did NOT think that there was anything to the "Main Street Bias" issue. He agreed, I thought, that, if there was a bias, it might be away from main streets [by picking streets which intersect with main streets]. In any case, he thought such a "bias", if it had existed, would affect results only 10% or so.
Joe Emersberger has posted some more discussion with Pedersen at Media Lens.
Les Roberts has emailed a response:
I appreciated the thoughtful discussion of Jon Pedersen's comments. Of all of the critics of our study, his comments cause me the most introspection.
Specifically, we measured a prewar mortality and found what we found. It is a rate consistent with the region and slightly above the nearest neighbors. Some of the lowest age adjusted mortalities ever measured (Germany after WWII, the Goma Camps in 2005) occurred after periods of extreme stress. The ILCS survey found about 1/3rd of women were overweight which suggests to me that the food shortage experience of the 1990's was well behind them by 2004. The only comparable war situations I have ever worked in, (Bosnia in 1992-4 and Armenia in 1992) also experienced low non-violent mortality in spite of violence and economic collapse. We may have experienced under-reporting as Jon suggests. The fact that the Jan. 2002 - Mar. 2003 estimates were virtually identical when surveyed 18 months later vs. 40 months later makes me suspect that this effect must be small.
Death certificates have a cause of death written on them. As I understand it, all of them! In 2004, I asked the interviewers again and again on the days (and often within minutes after) they interviewed the households and it never occurred that a household reported one cause of death and a certificate said something else. In the first 8 clusters I went to I always noted when an interview ended with a family going to get the death certificate. It never occurred that, as I watched from the car in the distance, an interviewer reported a death certificate without my seeing it be brought. The mortality rate in the clusters I accompanied was slightly higher than the others where I did not (excluding Falluja). Thus, I have absolute faith in the honesty of the interviewing doctors. If Jon is suggesting that the doctors who fill out the certificates (and our colleagues insist it is usually doctors) are calling deaths from heart-attacks and respiratory failures "deaths from violence" and then the households are fabricating narratives of how the violent death occurred ... well I suppose that this is a theoretical possibility. But, given the death certificates, I cannot imagine our violent mortality estimate is very wrong even if we are for example, consistently missing infant deaths.
I am not sure that anyone has to repeat our survey to refute or validate the findings. If we measured the correct baseline rate, there must have been over 500,000 deaths from natural causes since the invasion. If Jon is correct that our baseline is too low, that number would be more like a million. The Baghdad morgue data, and MD graduate students I have met from Abu Ghraib and Basra, all suggest that over the past 3 years there are far more bodies from violent deaths going into the graveyards than there are bodies from natural causes. If IBC is correct, only one in 10 or one in 20 deaths is from violence.
I think that Jon is only half right on the confidence interval issue. The bootstrapped and non-bootstrapped estimates were similar in both surveys, and a non-modeled comparison of the post-invasion minus pre-invasion rates yields a slightly higher estimate of the death toll. On only one occasion was a site skipped for security reasons, and that cluster was excluded from the analysis. The analysis was overseen by the chairman of Biostatistics at Hopkins. Thus, I think methodologically we are on solid ground. I do agree with Jon that this confidence interval does not include non-sampling error (missing the homeless and soldiers, interviewees not trusting the interviewers and hiding deaths...). That problem always exists and could be better controlled in a safer environment ... but we did the best we could and encourage others to do better.
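Roberts's back-of-the-envelope figures above are easy to reproduce. Here is the arithmetic with round assumed inputs: a population of about 27 million and roughly 40 months from the invasion to the 2006 survey, with 5.5 per 1,000 as the study's published baseline and 10 per 1,000 standing in for a Pedersen-style baseline.

```python
# Rough reproduction of Roberts's back-of-the-envelope figures. All inputs
# are round assumptions, not numbers taken from the study's tables.
population = 27e6        # approximate population of Iraq
years = 40 / 12          # roughly March 2003 to mid-2006

for label, rate_per_1000 in [("study baseline (5.5/1,000/yr)", 5.5),
                             ("higher baseline (10/1,000/yr)", 10.0)]:
    natural = rate_per_1000 / 1000 * population * years
    print(f"{label}: ~{natural:,.0f} non-violent deaths since the invasion")
```

The two outputs land near 500,000 and 900,000, which is the neighborhood of Roberts's "over 500,000" and "more like a million".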
Oops. A proper debunking of the lunatic Lancet's findings.
http://www.rhul.ac.uk/Economics/Research/conflict-analysis/iraq-mortali…
Jack Lacton links to the latest from the Main Street gang. The crucial insight:
"Exactly how much territory cannot be sampled depends on precisely which streets qualify as main streets."
No shit, Sherlock. True to form, the authors provide a map drawn on the assumption that only the likes of Broadway qualifies as a main street. Well, it certainly would be a problem if the Iraqi survey team adopted that approach.
I would like to hear more (preferably from Riyadh Lafta) about the details of street selection. Maybe this fooling around with maps will prompt such a response, in which case it will have served a useful purpose.
If you have a slow connection, please note that the PDF file is 7MB for just 9 pages. It's a very nice example of the "spherical cow" mode of reasoning, though.
Oops, Jack forgot to mention that his link points to a paper - released before journal submission or peer review - which is authored by people whose work relies on Iraq Body Undercount's news clippings of English media reports. The authors themselves have forgotten to mention this on several occasions, as have other 'critics' with an Iraq Body Undercount connection, like psychiatrist Dr Madelyn Hicks. Oops, indeed.
The Lancet, if accurate, constitutes multiple-egg omelette all over their faces, the academic equivalent of Exile on Main Street. Hence the sudden PR offensive.
http://www.medialens.org/forum/viewtopic.php?t=1997&highlight=spagat
Tim, thanks for posting this. A couple brief comments.
1. The potentially low prewar mortality could be due to general memory (recall) issues. One of the main findings of 20th-century psychology is that memory is extremely fallible, even for very major events. Those of us engaged in longitudinal research are intimately aware of this, which is why we are skeptical of studies based on retrospective data; in psychology/psychiatry such data give us serious problems.
In one example among many, of those reporting (past-year) major depression at age 18, half did not report any lifetime major depression when interviewed at age 21. The same holds for consistency between the age-21 and age-26 interviews. We don't really understand the mechanisms. Perhaps it's too painful. Perhaps there is sometimes fleeting recall followed by a decision that "what's the point of mentioning that and suffering again." But it does happen.
This was driven home to me when I read of a study in which people who had been in hospital for an operation 6 months prior were approached at home for a survey. One question was "have you been in a hospital overnight in the past year?" As I recall, about 50% said no on the survey. That is with only 6 months of recall.
When I read that, I said to myself: "Nonsense! If I had been in the hospital 6 months ago, no way would I forget it." Two weeks later I recalled that I had been in hospital for an elbow operation only 4 months prior, and had forgotten it while reading about the study.
When we do longitudinal research, we often make extensive efforts to prompt people with things like: "Do you remember Christmas last year? Where were you? Who was there? Do you remember how you felt?" I wonder if the mortality folks do such things.
Thus, I don't rule out underreporting. It could also occur post-war for the first two years. Thus, it is possible that the postwar decline in nonviolent mortality (which is pretty strange given what we know about postwar Iraq) COULD also be due to such an effect.
Note, I am not saying these memory effects occurred here. I don't know. I just think that survey research is inherently subject to these biases. When we do substance abuse surveys, many students who reported substance use last year report no lifetime substance use this year. It's just a given artifactual problem.
This is NOT a particular criticism of Les and gang, who I deeply respect, just an issue that many outside of psychology give inadequate attention to.
As to the CI issue, the linearization estimators used in most survey analysis software (including Stata, which I believe they use) make implicit assumptions that are potentially not quite valid with rare events. We don't have, as far as I know, better computational alternatives at this point. But the result can be smaller CIs than the "true" ones.
Political scientist Gary King has developed techniques for analyzing rare events. I raised this with Pedersen. But he said they had not been extended to complex sample survey data. This is a task for some clever statistician.
The argument is not that Burnham, Roberts, et al. did anything wrong. They followed usual practice. It's just that, as in many areas of data analysis, usual practice may not be quite correct. If the result is that the true CIs are larger than the reported ones, there is less precision to the result. Same with non-sampling error. Of course there is no way to include it in the calculations. But there are always various kinds, such as errors in selecting the nearest house, reporting errors, etc. These also increase the true CIs. We always report CIs based on the study design, but know in our hearts that they are not quite right.
Again, there is no claim that anything was done "wrong," simply that we shouldn't take these numbers too precisely, as they undoubtedly have larger CIs than those calculated.
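To make the rare-events worry concrete, here is a toy simulation, entirely my own invented setup rather than anything from their analysis. It asks how often a naive linearized 95% CI covers the true rate when deaths are rare and clustered; how far coverage falls below the nominal 95% depends entirely on the assumptions baked into the toy.

```python
import numpy as np

rng = np.random.default_rng(1)

def linearized_ci_covers(true_rate, n_clusters=47, exposure=3400.0, z=1.96):
    """One simulated survey: does the naive linearized 95% CI cover true_rate?"""
    # Cluster-level heterogeneity makes deaths 'clumpy', as in real surveys.
    cluster_rates = true_rate * rng.gamma(2.0, 0.5, n_clusters)  # mean-1 multiplier
    deaths = rng.poisson(cluster_rates * exposure)
    x = np.full(n_clusters, exposure)
    r = deaths.sum() / x.sum()
    resid = deaths - r * x
    se = np.sqrt(n_clusters / (n_clusters - 1) * (resid ** 2).sum()) / x.sum()
    return r - z * se <= true_rate <= r + z * se

for true_rate in [1e-4, 1e-3, 1e-2]:   # rare to not-so-rare events per person-month
    cover = np.mean([linearized_ci_covers(true_rate) for _ in range(2000)])
    print(f"true rate {true_rate:g}: coverage ~ {cover:.2f} (nominal 0.95)")
```

With very rare events, many clusters record zero deaths and the normal approximation behind the linearized interval gets shaky, which is exactly Pedersen's point.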
As one who defended the L2004 study, I feel uncomfortable raising these issues. I have great respect for this study and its authors. But I also feel that, as scientists, we need to carefully examine studies with unexpected findings, especially when the results support our political position.
I think it's an important study. It suggests that the deaths in Iraq are enormous. But we should not get fixated on 655,000 just yet.
Stephen, are you arguing that people forget the deaths of those in their households? The examples that you gave are for things much less, shall we say, memorable. One might think that there has been prior research on using surveys to study death rates. For example, the Lancet article in question references such studies.
Stephen, that is an interesting result about people forgetting hospital visits like that, but do you really believe the same effect happens for things like deaths? I mean, surgery is a temporal thing: it happens and then it's done. But you'll remember that you used to have an Uncle Bob and that he died last year, won't you? I can remember the year of death for everyone in my family who has died, up to my grandparents, and if I can't remember exactly, I can look it up where it was written in the family bible. I don't see this as being at all comparable to a trip to the hospital.
Barry,
Yes, I assume that, for whatever reason, people sometimes don't remember or report deaths. I'm not sure I agree that major depression is that much less memorable. And hospitalization is a pretty big deal. Notice, people forgot (or didn't report) within a mere 6 months. Other studies have found that parents' reporting on children's psychopathology is poor after 6 months. To a parent, a major problem in a child is pretty, as you say, memorable. Other studies suggest such effects for childhood sexual abuse when asked about in adulthood. So, yes, mortality "forgetting" (or non-reporting) is certainly possible.
After all, most of us, including Jon Pedersen, are willing to accept that the ILCS is an undercount. If that is so, why? It must be because people are not reporting some, memorable, deaths.
Studies using surveys to study mortality are not the issue. The issue is surveys studying mortality in the past. It is hard to validate such studies, because accurate data rarely exist in the circumstances where one uses long-term recall. Lancet 2006 correctly tries to validate against L2004, and gets the same figure. That strengthens their case. But, in both cases, the estimates were for events over a year in the past, so the validation, while heartening, is not totally convincing.
Why does the SMART methodology say never to go back past 1 year? Because of telescoping of dates (hopefully controlled by death certificates) AND non-reporting of deaths.
Note: I am NOT saying I know such forgetting (non-reporting) to have occurred to a large degree. Only that it is not unreasonable and should be considered. Such consideration can only strengthen conflict mortality research in future work.
BTW, I don't see this as that major an issue. Even if the prewar mortality was 9.03 per 1,000, as the WHO estimates (and around Jon Pedersen's figure), the excess mortality would still be around 350,000, far beyond the ballpark of the IBC et al. estimates, and morally completely unjustifiable on any "humanitarian" grounds.
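The arithmetic behind that 350,000, with round assumed inputs (the study's post-invasion crude rate of about 13.3 per 1,000 per year, a population of about 27 million, and roughly 40 months):

```python
# Rough check of the ~350,000 figure. Inputs are round assumptions, not
# numbers lifted from the study's tables.
post_rate, baseline = 13.3, 9.03     # deaths per 1,000 per year
population, years = 27e6, 40 / 12    # ~27M people, ~40 months post-invasion

excess = (post_rate - baseline) / 1000 * population * years
print(f"excess deaths ~ {excess:,.0f}")   # on the order of 350,000-400,000
```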
I do wish Jon would clarify the basis for his thinking that nonviolent deaths may have been reported as violent deaths.
Stephen Soldz seems to misunderstand main street bias - or at least his interpretation of Pedersen's remarks about it seems far from clear. He says Pedersen agreed there might be a bias "by picking streets which intersect with main streets". But main street bias addresses precisely the fact that the Lancet methodology selected streets which intersected main streets. If Pedersen agrees there's a potential bias here, then he seems by definition to be in agreement with the main street bias authors, contrary to Soldz's interpretation.
Stephen,
One point: the Iraq war was not fought on any 'humanitarian grounds', irrespective of the civilian death toll. In the most exhaustive study of the reasons for the invasion, by John Prados, 'democracy promotion' and 'humanitarian intervention' were not even indexed. These only became useful propaganda for the Pentagon and the corporate media once the original justifications (WMD, links between Saddam's regime and terrorist groups) were shown to be lies. The war was about naked imperialism and control over a region of great strategic importance. To the 'crazies' who formulate policy in the Bush White House, and to the Pentagon planners, the civilian death toll, as high as it is, is a regrettable but accepted part of a more blatant political and economic strategy.
Ron F makes some guilt-by-association insinuations about Dr Madelyn Hicks, Sean Gourley and professors Johnson and Spagat. Ron states that these people have "forgotten to mention" that their work "relies" on IBC data (as if this constitutes some crime or conflict of interest).
Where does Dr Hicks' work "rely" on IBC's data, Ron? Where does the main street bias paper "rely" on IBC data? I'm aware that a previous study by Johnson/Spagat uses IBC data, but this study isn't going to be affected in any way by perceptions about the accuracy of the Lancet study. And, conversely, if every epidemiological survey on conflict mortality (including the Lancet study) were demonstrated to be affected by main street bias, this wouldn't falsify the IBC data used by Johnson/Spagat in earlier studies. So there's no conflict of interest.
I expect guilt-by-association insinuations on media website messageboards, etc, but not on serious forums discussing scientific issues.
Robert Shone, the main street bias people claim that the methodology was biased towards main streets, but what is written in the paper implies a bias AWAY from main streets, since it says that streets intersecting main streets were chosen, not main streets themselves.
Jeff,
I agree totally that the war wasn't fought on "humanitarian" grounds. I am simply stating that those who want to make that claim, honestly or disingenuously, need to confront massive human deaths as a result, regardless of whether one accepts precisely the 655,000.
Tim Lambert wrote:
> the main street bias people claim that the methodology
> was biased towards main streets, but what is written in
> the paper implies a bias AWAY from main streets, since
> it says that streets intersecting main streets were
> chosen, not main streets themselves.
You seem to have misunderstood main street bias, Tim. Your terms "towards" and "away from" are misleading. Main street bias isn't about physical proximity (per se) to main streets; it's about "network" distance. For example, the cross streets are one link away from a main street, the side streets connected to the cross streets are two links, back alleys connected to these are three links, etc.
The Lancet methodology states that cross streets were selected, but that the 40 neighboring houses might take the interviewers around the block and some way into a side street or two. But following this methodology, there are many neighborhoods you'd never reach. This is demonstrated pictorially here:
http://www.rhul.ac.uk/economics/Research/conflict-analysis/iraq-mortali…
The bottom line is that if the Lancet study followed its own published methodology in terms of a selection scheme (which Sheldon Rampton recently labelled the "randomly selected main street" technique), there is no way it could avoid excluding many areas from the sampling process.
Robert, the published scheme EXCLUDES main streets.
Tim Lambert wrote:
> Robert, the published scheme EXCLUDES main streets.
Tim, this has zero relevance to the main street bias criticism (which you seem to fundamentally misunderstand). "Main street bias" does not mean "the bias from disproportionately sampling on main streets".
I'm not sure that it's relevant whether Stephen Soldz understands what main street bias is--what's relevant is whether he correctly understood Pedersen to have said it is likely to be a minor 10 percent effect. I suspect Pedersen understands what is being claimed.
And Les Roberts once again makes the obvious point--if the Lancet2 numbers are correct, a very large fraction, perhaps the majority, of all deaths in Iraq during the past 3 years are from violence. So people who work at graveyards or morgues ought to be aware of this.
Thank you Robert, but I do understand the main street bias criticism. If you look at their maps, they assume that sampling included main streets, when the study implied that they were excluded.
Donald's last point is an important one. It does seem from press reports that the vast majority of those in morgues, for example, are from violence. This is a strong piece in support of L2006.
I am confused, though, as to exactly who among the dead ends up in morgues, in hospitals, or elsewhere. I've read many accounts, and they conflict with one another, as with so much about Iraq.
Graveyards would be better, as Les Roberts has said for a while, and now Donald says. I wish someone would do the "study". As American reporters (I'm in the US) don't seem that interested, I wonder if one of the British or Australian reporters, e.g. Patrick Cockburn, could be induced to do this. The downside might be that Cockburn is less likely to be cited by other press outlets. But at least we would know.
Tim Lambert wrote:
>> I do understand the main street bias criticism. If
>> you look at their maps, they assume that sampling
>> included main streets
I've looked at their maps (and read their paper). I don't see how you arrive at the conclusion that "they assume sampling included main streets". Please explain.
Stephen Soldz writes:
It does seem from press reports that the vast majority of those in morgues, for example, are from violence. This is a strong piece in support of L2006.
I suggest you read IBC's response to Lancet where it discusses MoH figures (while debunking some fabricated ones put forward by Lancet authors). For example, it says the MoH recorded - in 2005 alone - "115,785 deaths, an average of 320 per day". Only a fairly small fraction of these were from violence. Namely, something like "only one in 10 or one in 20 deaths is from violence".
You may be confused by reports from the Baghdad morgue (MLI) saying most of their deaths are violent, but the MLI is specifically where the bodies of persons who were murdered, are suspected of having been murdered, or whose deaths are otherwise 'suspicious' are taken. Not every kind of death is supposed to be taken there. These will almost all be violent deaths, but then the MLI, over the whole war, has not even recorded as many violent deaths as the MoH recorded non-violent deaths in 2005 alone.
The record from morgues is quite the opposite of what you suggest. So, if you were to apply your same judgment, but this time using the correct facts, you should be saying the record from morgues provides a "strong piece in support" of arguments against L2.
Robert writes:
Where does Dr Hicks' work "rely" on IBC's data, Ron? Where does the main street bias paper "rely" on IBC data? I'm aware that a previous study by Johnson/Spagat uses IBC data, but this study isn't going to be affected in any way by perceptions about the accuracy of the Lancet study. ... So there's no conflict of interest.
Another interesting point here is that if arguing against the accuracy of findings that might conflict with your own findings elsewhere should be accepted as a "conflict of interest", then we should, for example, all be disregarding anything Les Roberts has to say about the ILCS, because if the ILCS is accurate, it "constitutes multiple-egg omelette all over their faces, the academic equivalent of Exile on Main Street." We should all therefore dismiss any views Roberts may have on the matter because of "conflict of interest" and conclude: "Hence the sudden PR offensive", with Roberts going on radio shows to declare that the ILCS is a "gross underestimate of deaths" and even accusing its authors of "knowing" this to be the case (while having never told anyone of this).
One thing about recall bias is that it doesn't necessarily change the excess death toll, since presumably people could also be forgetting deaths that occurred in 2003 and 2004, though I suppose it's less and less likely the closer one comes to the present.
Also, if the mortality rate in 2002 was 9 per 1000, as I think some critics have claimed, doesn't that show that Iraqi government statistics are even more incomplete than we thought? If I recall correctly, Josh says that the Lancet team got it wrong--the Iraqis counted 80,000 deaths in 2002. That would be a mortality rate of 3 per 1000.
Finally, IBC was in contact with Pedersen. Did they ask him if it was possible that his survey might be an undercount? One rather vague question about "war-related" deaths near the end of a very long series of questions doesn't seem like the best approach to uncovering the violent death toll. Maybe IBC was a bit too eager to assume that this study was the gold standard for Iraq mortality surveys, because they could use it to argue that their own method was capturing more than half of the civilian deaths and that Lancet 1, on the other hand, gave a midrange violent death toll that was probably outside the ILCS CI. That's a lot of weight for this one question to carry.
That's why it'd be nice if someone somewhere in the world with the relevant expertise would do another Iraq mortality study. The issue isn't going to be settled by having people line up on two sides and present cherry-picked versions of the evidence available.
Donald, if the baseline mortality were the 120,000 deaths per year that Roberts says, then the MoH's 84,000 recorded deaths in 2002, which exclude Kurdistan (about 12% of the population), would suggest somewhere over 80% coverage by the MoH alone. Then there were about 200-250 deaths per month recorded by the MLI, which may or may not also have been recorded by the MoH, so possibly another 3,000-4,000 should be added on, making the coverage even greater.
If Roberts is wrong and 2002 was 9 per 1,000, then the coverage would have been much lower, like 50% or less.
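Spelling that arithmetic out, with the population (about 26 million) as my assumption and the other numbers as quoted above:

```python
# Spelling out the coverage arithmetic. The ~26 million population figure
# is an assumption; the rest are the numbers quoted in this thread.
population = 26e6
non_kurdistan = 0.88            # MoH figures exclude Kurdistan (~12% of people)
moh_recorded = 84_000           # MoH-recorded deaths in 2002

for label, annual_deaths in [
    ("Roberts-style baseline (120,000 deaths/yr)", 120_000.0),
    ("9 per 1,000 baseline", 9 / 1000 * population),
]:
    coverage = moh_recorded / (annual_deaths * non_kurdistan)
    print(f"{label}: MoH coverage ~ {coverage:.0%}")
```

The first case comes out around 80%, the second around 40%, which is the contrast I'm pointing to.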
Yes, IBC contacted Pedersen on all the issues having to do with ILCS and the areas of the earlier paper having to do with it. His view, like IBC's view, is that the ILCS estimate is the most reliable one so far produced. It's always "possible" for any survey to be wrong in one way or another, so that doesn't tell you much. And it's a bit hard to take seriously claims that too much has been made of this one set of questions (it was "one question" only if the house had 0 deaths), given how much weight had been placed, for over a year, on one tiny study having found 21 violent deaths.
And I do not agree with you at all that IBC has presented "cherry-picked versions of the evidence", I think IBC has been the one "side" that has been setting straight cherry-picked versions of the evidence.
joshd wrote about the ILCS/IMIRA survey:
If you're not taking it seriously then it means you're not familiar with the techniques of demographic surveys. From p. 51 of Vol. 2 of the ILCS report, based on that "one set of questions":
So it appears that a birth history had to be done in order to improve the collection of infant and child mortality, but no corresponding household history was done to improve the collection of adult mortality. This is the point I had tried to make to Stephen Soldz: the type of mortality question asked on the ILCS/IMIRA form is very much more sensitive to recall bias than directed history questions. The SMART guidelines focus on questions like the IMIRA question, which is why they recommend short recall periods. The JHU/AMU studies collected household histories which are designed to minimize recall bias so the recall period can be longer. In order for the Roberts study to have calculated person-months of exposure they could not have used a question like the one on the IMIRA survey. On the other hand, a household history is not a birth history, so I would not be surprised if the ILCS study did a better job with infant and child mortality. The bottom line to a demographer is that specialized survey questions tend to do a better job than generic survey questions.
And that's why JoshD should be taking those claims a bit more seriously.
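To illustrate what person-months of exposure buy you, here is a minimal sketch with invented records for a single household. It is not the instrument either survey used, just the bookkeeping idea behind a household history:

```python
from datetime import date

# Invented one-household history: toy numbers, not either survey's instrument.
# Each record: (joined household, left/died/interview date, died during period?)
members = [
    (date(2002, 1, 1), date(2006, 6, 30), False),   # present the whole period
    (date(2002, 1, 1), date(2004, 8, 15), True),    # died mid-period
    (date(2003, 5, 1), date(2006, 6, 30), False),   # joined the household later
]

def months_between(start, end):
    return (end.year - start.year) * 12 + (end.month - start.month)

# Each person contributes exposure only while actually at risk in the household,
# so in- and out-migration don't distort the denominator.
exposure = sum(months_between(start, end) for start, end, _ in members)
deaths = sum(died for _, _, died in members)

rate = deaths / exposure                 # deaths per person-month (toy sample)
print(f"{deaths} death / {exposure} person-months "
      f"= {12000 * rate:.1f} per 1,000 per year")
```

A generic "did anyone die?" question gives you the numerator at best; the history gives you the denominator too, and that is what a rate requires.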
Robert (and Roberts) speculate that underreporting of infant mortality (which Pedersen identified and corrected) would translate to all the other data on deaths, such as war-related deaths, and that I should take this speculation "seriously". Robert writes:
"If you're not taking it seriously then it means you're not familiar with the techniques of demographic surveys."
Pedersen doesn't take these speculations seriously, so your explanation of why I'm not taking them seriously can't be right, since it doesn't apply to him.
Infant mortality - in particular - appears to have issues with underreporting pretty generally, which are not taken to all other kinds of deaths. For example, a study by Les Roberts on the Congo has a whole section on this:
"Under-Reporting of Child Deaths
Past experience has indicated that the under-reporting of infant deaths is a consistent problem in surveys in rural Africa. The Centers for Disease Control and Prevention (CDC) in Atlanta undertook a series of studies in the former Zaire about the ability of "cold call" interviews to estimate infant and child mortality [4]. They reported that women 50 to 54 years of age failed to report 72% of children born to them if the child had died, while women aged 15 to 19 years failed to report 50% of children born to them who had died. The CDC findings raise the concern that this survey may have underestimated mortality. Becker et al. have shown a similar level of infant death under-reporting in Liberia [5]. To overcome this potential for mothers to under-recall children who died, the recall period during this survey was limited to 2002, and all households were asked about pregnancies in the past year before they were asked about deaths"
According to your speculation, these statements and statistics should be assumed to apply to all deaths, and to war-related deaths, rather than just to "Child Deaths", yet it seems nobody assumes this, except when it is useful for casting doubt on the ILCS survey.
JoshD writes:
I've never met or talked with Pedersen; still, by his own actions and admissions I think you're overstating his position. He has admitted that the ILCS counts are underestimates, and he re-interviewed to collect birth histories exactly because that's the most reliable way to collect those types of data. For the same reason, if one is interested in a good estimate of adult death rates, one wouldn't do it the way that it was done in the ILCS/IMIRA survey. It just doesn't provide enough detail, and it's (more) sensitive to omission of events. If you want a good estimate of a rate, you need person-months of exposure. The ILCS study collected person-months of exposure for infants and children from birth histories, but apparently not for adults.
joshd continued:
Dude, you are so wrong. This isn't just my speculation, and any demographer will tell you recall omission applies not only to kids but also to adults (though less so). I'm serious. If you don't believe me, go look up any of the research done by the WFS or the DHS from about the mid-1960's onward on why their survey questions were designed the way they were. It's to minimize recall bias. There are a million papers on how characteristics of both the decedent and the respondent affect recall. That research isn't in the least controversial, though the fact that you're not aware of it doesn't surprise me: it's pretty arcane except to people in the field. No one who seriously wants to estimate adult mortality, particularly if it has been changing over time, would ask the question that appears on page 48 of the IMIRA questionnaire. If you're serious about estimating adult mortality, you ask specific questions. Specialized questions give you better results than generic questions.
And I've never tried to cast doubt on the ILCS study; that's just more of your paranoia. I just point out what is obvious to any demographer: the ILCS/IMIRA study didn't ask a specialized question except for infants and children simply because they were more focused on them. There's no evil intent in pointing that out. Kids are the canary in the coal mine so when you're trying to evaluate living conditions you focus on them rather than on adults.
Robert writes:
"I've never met or talked with Pedersen; still, by his own actions and admissions I think you're overstating his position. He has admitted that the ILCS counts are underestimates,"
That's false.
"and he re-interviewed to collect birth histories exactly because that's the most reliable way to collect those types of data."
As I said elsewhere, IBC has talked to him about these issues (though not me personally). He is not, from what I am told, at all impressed with your (and Roberts') speculations about the infant mortality problems rendering the rest of the data underestimates, or "gross underestimates" as Roberts has recently begun calling it. Further, they identified and corrected this issue specifically for infant mortality. Why would he not do the same for the rest if he had any doubts about that data? I believe IBC has asked and received satisfactory answers to these questions from him. Have you asked him? If you have a problem with this you should talk to him about it, Dude.
The rest of your post is sophistry. I didn't say that "recall bias" can only apply to child deaths. I said it seems that there is a particular and fairly general problem with it as it applies to child deaths, as indicated even in one of Roberts studies, but which is not the same with all deaths or violent deaths. This is not to say there can be no recall bias with violent deaths of adults, but your speculation that observing it with infant mortality in ILCS proves that all the rest of the data are underreported at all, or by any significant degree, is unfounded conjecture.
I wrote:
Josh replied:
Then you should correct Stephen Soldz. The quotation from the top of this post is:
Josh, perhaps you think my comments are sophistry but, to put it bluntly, that's because you don't have the training to evaluate them. I have never criticized the work that Pedersen has done at FAFO -- go back and look, here at Deltoid or anywhere else. If you think I have then you're mistaking me for someone else, because I hold him in high regard. I'm only pointing out that a birth history is not a household history; that there is no evidence in Volume 2 of the ILCS report that suggests they did a household history in addition to the birth history; that the ILCS was designed to focus on living conditions and not on mortality estimates; and that any well-trained demographer knows that the question asked on page 48 of the IMIRA questionnaire is not designed to elicit the most reliable answers about mortality. That you are arguing over this point and call it "unfounded conjecture" shows that you aren't aware of the literature on this topic and are now just operating in knee-jerk mode. Frankly, dude, I'm thinking you're a tad too invested in this topic to be rational and dispassionate, if you know what I mean and I'm sure that you do.
In support of Robert's last comment, I'll say that I'd earlier considered remarking to joshd that one thing a trained musician may be less likely to appreciate about the nature of scientific thought, especially as compared with musical expression, is its aversion to emotion. It seems that the more a person's thought processes depend on how they feel, the less scientific the results of their reasoning will (perhaps after some time) turn out to have been. How many scientists would admit, I wonder, that their worst errors were made under the influence of strong emotion (or investment in the subject, as Robert has put it)?
Robert, unlike Tim and you, I do not take Soldz's version of what Pedersen has supposedly said as exactly what Pedersen said. I defer to what he has actually said. My guess, and what would be consistent with what he's actually said elsewhere, is that "could lead to..." became "would lead to..." in Soldz's version, and that whatever degree of "underreporting" Pedersen was actually speaking about was minor, not the "gross underestimate" that Roberts has begun declaring he's "sure" about in the ILCS, and which Pedersen supposedly "knows" about (but isn't telling anyone).
And you evaded my questions in favor of writing more sophistry:
"they identified and corrected this issue specifically for infant mortality. Why would he not do the same to the rest if he had any doubts about that data? I believe IBC has asked and received satisfactory answers to these questions from him. Have you asked him?"
Perhaps you could answer these.
The rest of your posting indicates to me that you have nothing of any value to say, and only want to attack straw men with ad hominems, a waste of time.
But of course, we all already know anyway that the most reliable answers about mortality come from sampling 27 million people with 47 clusters using a nebulous methodology that nobody can explain or describe the same way twice.
Don't be silly josh, the most reliable answers are obviously found in newspaper clippings.
Tell me, do IBC's figures take a dip every time there's a major celebrity story competing for column-inches?
joshd huffed and puffed:
I already responded to both of these but evidently they went over your head. First, I said I've never talked with Pedersen. Second, the Iraq Living Conditions Survey properly focused on infant and child mortality because kid mortality is a better indicator of changes in (surprise!) living conditions than adult mortality. There was no reason for them to dwell on adult mortality but there was a huge reason for them to double-check and triple-check kid mortality. That you don't understand this is yet more evidence you don't understand the issues involved.
Now that I've answered your questions (for a second time), please answer mine: who on the IBC staff interviewed Pedersen about the technical issues surrounding potential omission bias, and was this person either a trained demographer or epidemiologist?
Dude, are you saying you're a straw man? Hmmm. Well, that explains a lot.
Robert, the straw man is your repeated changing of my point to something else that you want to argue against. First it was changing my point to a generalized claim that recall omission can apply only to child deaths and not to adult deaths. After that bit of sophistry had failed, you then changed my point to another general claim, that the ILCS questionnaire is the "most accurate" way one could ask about mortality.
Neither of these was the point I was making, but this sophistry allowed you to evade the actual point I was making followed by the issuance of repeated ad hominem assertions based on the straw men you've built.
You then claim to have already answered my questions. You're half right and half wrong. You did say earlier that you've never talked with Pedersen about any of this, but at that point you had not yet begun putting words into his mouth, to the effect that even though Pedersen supposedly shares your views and had the same doubts about adult deaths that he did about infant deaths, he didn't bother to do anything about it because there's "no reason" to "dwell" on adult deaths, since they don't much matter to "living conditions". So at least now you've answered my second question (once) with this circular speculation.
Frankly, Dude, your argumentation is so poor, fallacious and evasive that it is undeserving of any more of my time.
joshd wrote:
[snip]
Josh, I notice that you evaded my question.
I'm uninterested in answering irrelevant and diversionary questions for you, dude.
joshd evaded:
It's entirely reasonable to refuse to answer irrelevant and diversionary questions -- but that's not the case here. In this case, you're evading relevant and direct questions. You say that you are satisfied with the answers that Pedersen gave you, and you claim that Soldz misrepresented Pedersen's position. But it is evident that you yourself lack the knowledge needed either to ask Pedersen the right questions or to have understood his answers. So, you must be relying on someone else. Who on the IBC staff interviewed Pedersen about the technical issues surrounding potential omission bias, and was this person either a trained demographer or epidemiologist?
Your questions are irrelevant and diversionary, as are the repeated ad hominems you're hoping will cover over the fallacious sophistry that has been your case. What is clear is that you have absolutely nothing worthwhile to say on this matter, and that neither you nor your empty, preening postings deserve any more of my time than they've already wasted.
joshd, Robert's question is reasonable. If you don't know the answer, it's better to say so, rather than carry on the way you do.
joshd wrote:
[yadda yadda snipped]
You think that the qualifications of the person who interviewed Pedersen are irrelevant? Dude, not only are you acting as if you think demographic or epidemiological training is irrelevant -- worse, you're acting as if actual expertise in these areas is grounds for disqualification. This is not a healthy sign. Just sayin', is all.
[I've changed my user name to "Bob" (from "Robert") so as not to be confused with the other "Robert", or indeed with Les Roberts (often referred to as "Roberts")!]
Here's an email I received from Jon Pedersen which is relevant to some of the above discussion. Contrary to what Stephen Soldz wrote, Pedersen doesn't dismiss MSB:
"Yes, probably Stephen Soldz confused the issue somewhat here. There are actual several issues:
1) I very much agree with the MSB-team that there is some main stream bias, and that this is certainly an important problem for many surveys - not only the Iraq Lancet one.
2) I am unsure about how large that problem is in the Iraq case - I find it difficult to separate that problem from a number of other problems in the study. A main street bias of the scale that we are talking about here, is very, very large, and I do not think that it can be the sole culprit.
3) The MSB people have come up with some intriguing analysis of these issues."
(Jon Pedersen, email to me, 4/12/06)
The whole argument about "main street bias" is not anywhere near as clear cut as some might claim.
How far removed from a main street does one have to be before there is no more "main street bias"?
One street? 2? 3?
Presumably there is a point at which bias in the other direction occurs. In other words, if one sampled strictly on streets that were far removed from most of daily life in Iraq, one would get a result that under-reported the mortality rate.
Is having "zero main street bias" really a desirable goal when one is trying to get a sample that is representative on average?
Most of the violence in Iraq may occur on main streets, but as Tim has pointed out, most other activity does as well (shopping, eating out, watching movies at the theatre, just hanging out).
The fact is, one cannot remove main streets from the picture entirely without biasing things in the opposite direction.
I think the Johns Hopkins people understood this very well and their decision to sample on cross streets is a clear effort to strike a balance.
Claims that one has to be some number of streets away from a main street in order to get representative results for the Iraqi population as a whole are just so much gibberish in the absence of any definitive data demonstrating as much.
With regard to Robert ("Bob") Shone's comment, in which Pedersen says I may have gotten his position wrong on Main Street Bias: this is entirely possible, as I was relying on memory and we had only limited time to discuss it. (Robert, did you ask his opinion on my entire paragraph on the issue, or only on the phrase you extracted, which is taken out of context?) I actually think my summary of his position is generally consistent with what he said in the email to Shone.
I had not intended to imply that Pedersen didn't think the Main Street Bias issue was a possible issue for surveys, as we didn't discuss that. We only discussed whether it could explain the discrepancy between L2006 and the mortality figure that Pedersen believes is correct. The 10% figure came from him spontaneously. He may since have increased his estimate.
In any case, here was his November 28 response to reading my account of his thinking:
He doesn't seem to feel he was grossly misrepresented, as Josh and, perhaps, Robert S. are suggesting.