Pandemic model paper is not model behavior

I'm a supporter of mathematical modeling as another way to get a handle on what might happen in an influenza pandemic. But a recent paper by the group at London's Imperial College, published in Nature, shows what can happen when modelers allow their work to bear more weight than it can sustain. When a prestigious scientific journal, Nature, publishes such a paper, it also gets attention it wouldn't get if published in a more appropriate place -- meaning a place where its scientific contribution could be judged in the usual way, not under the glare of global publicity. I'm not blaming the wire services. The reporting on this paper is pretty high quality. I am blaming the scientists and the journal. But first, a sketch of the findings:

Closing schools during a flu pandemic might slow spread of the disease but probably won't have as big an impact on overall cases as some pandemic planners hope, a new study suggests.

British and French researchers reported that one in seven cases of pandemic flu might be averted if schools are closed and parents ensure that dismissed children don't simply congregate elsewhere, such as in formal or informal daycares or at the mall.

[snip]

Earlier modelling studies have predicted closing schools could dramatically lower the number of cases in a pandemic. This new work suggests those studies may have been overly optimistic.

"I think our predicted impact is quite limited, but not so limited that school closures should not be considered as an option," said senior author Neil Ferguson of the department of infectious diseases epidemiology of London's Imperial College.

"I mean, we're in the regime of 'It may be worthwhile, but the costs need to be borne in mind."' (Helen Branswell, Canadian Press)

One of the problems in modeling is trying to estimate the effects of particular parameters, like transmission in school settings. These researchers used data on school breaks in France, where effects on influenza-like illnesses could be estimated from concurrent medical data. That's interesting and this kind of science is worth doing. Imperial College's Neil Ferguson and his group are well-published and adept mathematical modelers. They know what they are doing. But Ferguson is not always careful about what he says about it (and this is not the first time). Branswell quotes him about practical consequences he believes are implied by his model results, consequences he should have little faith in as these kinds of models are complex, depend on many assumptions and are often run under conditions intentionally meant to isolate certain features, like school closing, independent of all other factors. The Canadian Press reporter, Helen Branswell, is the best in the business. I feel quite confident she quoted Ferguson accurately and in context.

These models are genuinely useful when used as one would use a laboratory experiment, a highly stylized and somewhat artificial setting that can still tell us important things if properly interpreted. Models allow us to get a feeling for the relative effects of different parts of a very complicated, interacting system, a system where the impact of one intervention, like closing schools, may be strongly affected by another intervention, like reducing adult attendance at work. They may also allow us to see the range of possible behaviors, behaviors that are often counter-intuitive. But the effects need to be seen as qualitative, not quantitative. The equations are highly non-linear, meaning in practical terms that you cannot take the system apart, analyze the parts separately, then try to put them back together again. They are coupled, so each part affects the others. The estimates of effect given here are not only ridiculously over precise, but almost certainly biased in magnitude and direction in ways we don't know and can't predict.
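To make the coupling point concrete, here is a toy SIR sketch (this has nothing to do with the Imperial College model; every parameter value is invented purely for illustration). In a nonlinear epidemic model, the cases averted by two interventions applied together need not equal the sum of the cases each averts alone, which is exactly why you can't analyze the parts separately and add them up.

```python
# Toy SIR model illustrating why coupled, nonlinear epidemic models
# cannot be decomposed intervention-by-intervention.
# All parameter values are invented for illustration only.

def attack_rate(beta, gamma=0.25, days=400, i0=1e-4):
    """Final fraction ever infected in a simple SIR model (Euler steps)."""
    s, i, r = 1.0 - i0, i0, 0.0
    dt = 0.1
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # new infections this step
        new_rec = gamma * i * dt      # new recoveries this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return r

base    = attack_rate(beta=0.5)               # no intervention (R0 = 2)
schools = attack_rate(beta=0.5 * 0.80)        # hypothetical school closure: -20% transmission
work    = attack_rate(beta=0.5 * 0.85)        # hypothetical workplace absence: -15% transmission
both    = attack_rate(beta=0.5 * 0.80 * 0.85) # both interventions at once

saved_schools = base - schools
saved_work    = base - work
saved_both    = base - both

# Nonlinearity: the combined effect is not the sum of the separate effects.
print(f"separate effects summed: {saved_schools + saved_work:.3f}")
print(f"combined effect:         {saved_both:.3f}")
```

In this toy run the combined intervention averts more cases than the two separate effects added together, because pushing the reproductive number closer to 1 has a disproportionate payoff. The direction and size of such interactions depend on the assumptions, which is the post's point: the numbers are qualitative illustrations, not predictions.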

This kind of paper and the attendant publicity doesn't help anyone. Michael Osterholm, Director of the Center for Infectious Disease Research and Policy (CIDRAP) is right to say it is unhelpful for planners, or worse, confusing. There is a sense in which this kind of modeling could even be said to be irresponsible, although I won't go quite that far. It isn't the modeling effort itself. It is the promotion of the work that I object to and Ferguson, like many scientists, is promoting his work. Nature also deserves blame here. Nature is probably the most prestigious scientific journal in the world and I hold it in high regard. But they shouldn't have given any of their precious print real-estate to this paper. They did it because they know it makes news. Nature, like the scientists, was promoting itself. This is a paper that would have been better published in a specialty journal, not in a high profile journal. It does not make a significant and urgent contribution to the scientific literature. It's just another modeling effort, another data point in that literature.

A year ago I wrote seventeen posts here describing a mathematical modeling paper in great detail. I thought then, and I think today, that the paper I chose was extremely informative, very well done and a genuine contribution to how we think about what might or might not be going on in the complicated dynamics of a pandemic. I wanted to show the value of this kind of work. That paper was also measured and clear in what it did and did not mean and its authors, Marc Lipsitch and his group at Harvard, did not promote its significance inappropriately.

I can't say the same for this latest paper by Ferguson and his colleagues or the editorial decision by Nature to publish it.

Just our opinion, of course. But then it's our blog, too.


So, what is your problem with the paper?

anon: The way it was promoted, overprecise predictions, over-interpretation. Less information for planners than implied.

Revere,

While on the topic of questionable research and the positioning of same by researchers, have you seen the latest from Peter Doshi? I don't have access to the American Journal of Public Health, but according to an MIT press release, in the May issue Doshi published results of a study contending that "the widespread assumption that pandemic influenza is an exceptionally deadly form of seasonal, or nonpandemic, flu is hard to support." (Let me guess. Your jaw just dropped.)

The study challenges common beliefs about the flu--in particular the Centers for Disease Control and Prevention (CDC) claim that "the hallmark of pandemic influenza is excess mortality."

Analyzing more than a century of influenza mortality data, the press release says, Doshi "found that the peak monthly death rates in the 1957-1958 and 1968-1969 pandemic seasons were no higher than--and were sometimes exceeded by--those for severe nonpandemic seasons."

Doshi says the pandemic-equals-extreme-mortality concept appears to be a generalization of a single data point: the 1918 season, a period in which "doctors lacked intensive care units, respirators, antiviral agents and antibiotics."

He also cites improved living conditions, nutrition and other public health measures as reasons why influenza death rates substantially declined across the 20th century.

Doshi calculates an 18-fold decrease in influenza deaths between the 1940s and 1990s, a trend that began far before the introduction of widespread vaccination.

Noting the gap between evidence and fear, Doshi identifies possible reasons that pandemic flu might be so misunderstood, including the possibility that commercial interests may be playing a role in inflating the perceived impact of pandemics. (MIT press release)

It makes you want to pull your hair out, doesn't it?

I'm constantly amazed at how some people brush away all concerns because "today we have respirators, antibiotics, and better living conditions." (I addressed these underpinnings of false security in a long-winded series of posts in December 2006, beginning with "You're right, it's not 1918. Is that good or bad?")

Anyway, Doshi the MIT Grad Student is at it again. And a journal has published his findings.

Thanks for your above post.

Chirp

Chirp: Yes, I did see it. I'll probably have some comment as soon as I get a chance to look at it more carefully. I know Peter slightly and he has been singing this song for a few years, now. It certainly deserves comment. The MIT news folks are doing the usual self-promotion. That's their job, of course, but it sometimes is tiresome.

I loved the paper by Ferguson in Nature and I think he made his point very well. Closing schools during a pandemic is not going to make much of a difference in death rates. It will shatter an economy however, as more parents will have to stay home to take care of their kids and not work. It will also make them miss their rent as they will have no income and they will not be able to buy food after awhile. It will have a ripple effect through the economy and collapse our way of life. This is the genius of HHS and the CDC folks that want to close all of the schools during a pandemic. Even though all of the kids will still find a way to play together. Teenagers will never sit at home. They will sneak out to be with their friends. What a stupid idea to close schools.

Ferguson is right, 1 in 7 is the best it will save. You can get the same odds if you encourage parents to stock up on anti-virals now and stress hand washing with good hygiene.

The whole report that HHS depends on that was written by the professor in Michigan about all the lives saved in the 1918 pandemic because schools were closed is a farce. It was co-written by two doctors from the CDC and they don't mention that when they tell you to rely on that paper.

Everything that the Harvard Health people do is funded by the CDC, so I would take all of their data and reports with a grain of salt when they report on pandemic-related items, because they toe the line for the CDC.

HHS and the CDC have an agenda and they are pretty screwed up. They have only their interest in mind, not the public's. They don't care their plan will wreck the economy, they want their way or the highway.

Model that Revere.

By bigdudeisme (not verified) on 14 Apr 2008 #permalink

bigdude: It's OK to disagree but it would be nice if you had a reason other than you like the outcome of one paper and haven't read the other. The 1 in 7 estimate is nothing much to lean on. How do you know he is right? Even he wouldn't say the number 1 in 7 is accurate. The value of a model is qualitative, not quantitative and this one relies on data from French schools and a French health system that is a lot better than ours. Regarding the Harvard modeling, you might like to either read the paper or read my explanation of it and then make up your mind. I have been extremely critical of CDC here, but you have to evaluate each piece of evidence on its own terms. At least we do that in science. Your mileage might differ. You can find the modeling posts at the link I gave. You might learn something.

BTW, the Harvard study is not CDC funded. It is NIH funded, as is most of the Harvard work. CDC funds relatively little research. I'm sure you knew that, since you are apparently a modeling expert.

Don't the implications of such pandemic modelling all come down to the actual pandemic CFR, and the publicly perceived CFR? Don't the behavioural effects also directly depend on the public's perception of the risk of death?

Trust me on this, if the mums and dads see that the CFR is severe (whatever that figure is, let's say 5% for argument's sake) then you won't have to worry about whether to close schools or not. You might as well close the school because there will be no kids there anyway. And I'll bet the absenteeism among teachers will be significant as well.

And yes, the impact of the public seeing that their chances of survival are slim if they "catch the killer flu" will be that life as we know it changes until the pandemic wave has subsided.
Do you really think people will go to work and risk death and bringing home the virus to their family so that they can pay the rent? Get real.
Hence the importance of thinking this through rigorously NOW, and preparing accordingly.

It would help if legislators had already drafted emergency legislation that considers the impact of rent/loan/employment defaulting due to something like extended pandemic. But of course it will be left to the last moment, and cause untold grief...

> Nature is probably the most prestigious
> scientific journal in the world

I have had a bad experience. They refused to correct the Declan Butler article about the Karo mutations after the flaws in it were pointed out. That paper was then cited later in the Ferguson paper to "prove" h2h in Karo (although it was not directly used for the proof).

why is precision bad ?

what IMO is "ridiculously over precise" here is your
"certainly" in "almost certainly biased" ;-)

I very much prefer
"one in seven cases averted"
over
"quite limited, but not so limited"
or
"I mean, we're in the regime of It may be worthwhile"

and I very much prefer
"not always careful about what he says"
over
"we don't know and can't predict"

But the level of uncertainty should be made clear.

If we only read about results which are 100% confirmed, then
most books and papers wouldn't be written.

Ahh, yes, and I very much prefer Neil Ferguson over Michael Osterholm.

> The way it was promoted, overprecise predictions, over-interpretation. Less information for
> planners than implied.

seems to me that they just criticized earlier estimates on school closures this way,
trying to correct and discuss them.
"Over-interpretation" is commonly considered good strategy in almost every paper.
That's what the introductions and abstracts are for: making the reader believe that the
paper is more important than it is.

Anon:
I think Revere got it exactly right. When you are modeling, it is easy to forget the frame of reference (no, not that "framing"). What may look like testing an hypothesis in the modeling context is usually generation of an hypothesis that is yet to be tested. Confusing hypothesis generation with hypothesis testing is one way for researchers to "allow their work to bear more weight than it can sustain." Another is failure to respect the uncertainty inherent in modeling results, which results in pseudo-precision. Both appear to be at issue in the Nature paper.

By dubiquiabs (not verified) on 15 Apr 2008 #permalink

Hey Revere,

Don't get me wrong. You may be the expert here (are the expert? lol). I am no modeling expert, but I have known a few and I think they are all full of Cr-P. You can model anything and get it to say what you want. I am sure they modeled the Iraq war to say they would find weapons of mass destruction for sure, 100% for sure. Modeling is imprecise. In my opinion, closing schools during a pandemic is not going to work (and I have read several, perhaps ten, studies and models now). Most studies follow Markel's lead in pointing out the difference between what happened in St. Louis and Philadelphia and the importance of non-pharmaceutical interventions, pointing mostly at the closing of schools as being effective. Times were different then and that is my point.

You can't apply what worked then to our society now and that is what modelers want to do now. Kids will not stay at home and play, they will sneak out and play with each other because they have no respect for their parents and do not follow the rules anymore. Additionally, the parents don't follow the rules. They will find someone to take care of their kids, i.e. underground day care will be big business and the kids will be together that way, because the parents need to work to get money. So even if the government closes the schools, the kids will be together and this will defeat the purpose of closing the schools is my point. So model all you want, it is pointless in a pandemic. People are going to get sick and people are going to die because the rules have changed. You can't model death and disease until the pandemic is over, you silly people. Best practices, best minds, can't stop a freight train. You had better model how to stay alive and start getting some supplies now. Remember, you can get a disease from a corpse also.

By bigdudeisme (not verified) on 16 Apr 2008 #permalink