Swine flu: fast track publishing and marketing

It is clear that if you want to get a so-so paper published in a top tier journal, the best way to do it is to write about a breaking medical news event and get there first. We saw this with avian influenza and SARS and now it's being repeated with swine flu. The Scientist had a story yesterday about how The New England Journal of Medicine (NEJM) and Science, two of the highest profile science journals in the world, pushed through some swine flu papers at record speed last week:

An international research team led by Neil Ferguson of Imperial College London published a report online today (May 11) in Science showing that the current outbreak is on par with or less hazardous than previous influenza pandemics. The researchers analyzed data from late April and found that the virus' transmission rate and clinical severity are not as bad as those seen during the 1918 Spanish flu but are similar to other 20th century pandemics. Although the study was received and published in less than a week, "the paper was subjected to usual standards during the rigorous review process," Natasha Pinol, a Science spokesperson, said in an email.

Last Thursday (May 7), a team of epidemiologists from the US Centers for Disease Control and Prevention published a study in the New England Journal of Medicine (NEJM) chronicling all 642 reported cases of human infection with the virus dating from April 15 to May 5. This analysis detailed the most common symptoms of the disease and showed that young people might be particularly susceptible to infection. "We knew this was important and we wanted to get it out," Edward Campion, NEJM's senior deputy editor and online editor, told The Scientist. The paper underwent a full peer review process, but each review "was compressed into a day or two." (Elie Dolgin, The Scientist)

Like many other scientists interested in the topic, I read the papers as soon as they appeared. The NEJM articles were highly pertinent and informative, and their expedited publication was certainly warranted. The modeling paper by Neil Ferguson and his colleagues I'm not so sure about. It was certainly competent work; this group are experienced modelers. Science knew this paper would make a splash, as it had no scientific competition and interest was high. Those are both reasons to publish and reasons to take some care in publishing. It's not that the paper isn't worthwhile or makes no contribution at all. It's that expediting its publication gave it more prominence, importance and weight than it deserved, considering its content and the uncertainties.

Science was publishing a scientific paper, which is its mission, but it was also promoting itself and so was Ferguson. Science isn't alone. Nature did the same thing with a Ferguson paper last year and I noted it then, too:

When a prestigious scientific journal, Nature, publishes such a paper, it also gets attention it wouldn't get if published in a more appropriate place -- meaning a place where its scientific contribution could be judged in the usual way, not under the glare of global publicity. (Effect Measure, April 14, 2008)

In the case of the recent paper by Fraser et al. (which I am calling the Ferguson paper and which I discussed here the day it came out), the data are just too preliminary and tentative to be really useful. Just because a computer model was used doesn't make it any more reliable. Of course if it took longer to publish it probably wouldn't be publishable, either, because we would know more about what this virus is doing and the actual data would supersede the computer-aided tea-leaf reading. Science can say it went through the same rigorous peer-review process, but it clearly didn't. The usual peer review process would have said, "yes, this is competently done but it is premature. Let's wait to see how this evolves." The paper was rushed into print because getting there first was the idea, for the authors and the journal. It's hard to identify any real contribution it made to our understanding.

The NEJM papers, by contrast, were chock full of actual data and important information. Yes, NEJM also wants to promote itself. In this case it found a more appropriate way to do it.


Not every journal can secure an early report of a large case series on an emerging infectious disease epidemic when they want one.

NEJM won.

Farmer: NEJM got the article on the basis of their high profile. That served the interests of the authors, the journal, the scientific readership and the media. Science (and Nature before them) decided to publish articles on the basis of the high profile of the subject matter. That served the interests of the authors, the journals and the media, but not the scientific readership.

you don't want us to get predictions and estimates.
you want to keep the world in the dark.
see bird flu.
It's the common thread that has run through your blog for years.

Annon:

People who present proposals are being unselfish, and people who provide comments are being responsible. We affirm both kinds of input and are open to further research without attachment.

There will be several schools of thought in the same era on any one topic, especially an issue as complicated as flu and pandemic prevention. We now perhaps value the new learning of forward prediction and estimation, so-called learning from mistakes. At the same time we paradoxically allow thorough scrutiny of those forward projections.

anon: You have been asking for the same thing for years and I have declined to provide what you ask for. The idea that we know something beyond what I say here but are not telling the world is simply not the case. We truly don't know what you want to know. If we did, I'd tell everyone.

We do our best here to write about what we know and try to be as clear-eyed and honest about it as we can. Not everyone will agree with us. There are sites you can go to that are not reluctant to speculate or predict or say things are happening on slim or uncertain evidence. Feel free to go to them and, as always, you are welcome here. But this blog will remain what we think is appropriate, not what you wish it were.

As long as writers acknowledge that their estimate is early and based on incomplete information, I don't have a problem with it being published. (The general public has likely never even heard about this study.)

As a local public health professional, I now realize that the uncertainty of measures like the CFR and R0 makes it very difficult to decide when to implement the community mitigation interventions recommended by the CDC in its 2007 guidance, and even in its guidance from late April 2009.

The greatest mitigation benefit in a severe pandemic may come from early implementation of social distancing, but early on, the data that would trigger these interventions are unavailable. If these measures can't be reliably estimated until 6-12 months after a pandemic outbreak begins, they aren't useful as triggers.

We need to think creatively to come up with some more readily measured trigger points, if possible. An increase in influenza/pneumonia deaths over baseline for a given time of year? (This measure would still run a couple of weeks or more behind real-time events.) Local surveillance of practices with electronic medical records to track visits for influenza-like illness? (Still tracking illness that's already widespread, lessening the effectiveness of interventions.)
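To make the first idea concrete, here is a minimal sketch of what a deaths-over-baseline trigger might look like. Everything in it is illustrative: the function name, the threshold and the counts are invented, not an actual CDC algorithm.

# Hypothetical sketch, not an actual CDC algorithm: flag weeks where
# observed pneumonia & influenza (P&I) deaths exceed a seasonal baseline.

def excess_death_trigger(weekly_deaths, baseline, threshold=1.645):
    """Return indices of weeks where observed P&I deaths exceed the
    expected seasonal value by more than `threshold` standard deviations.
    `baseline` is a list of (expected, std_dev) pairs, e.g. fitted from
    several prior seasons."""
    flagged = []
    for week, (observed, (expected, sd)) in enumerate(zip(weekly_deaths, baseline)):
        if observed > expected + threshold * sd:
            flagged.append(week)
    return flagged

# Invented numbers: week 3 is flagged because 41 > 25 + 1.645 * 5.
print(excess_death_trigger([20, 22, 27, 41],
                           [(21, 4), (22, 4), (24, 5), (25, 5)]))  # -> [3]

Even a sketch like this shows the limitation noted above: it can only flag excess deaths after they have been reported, so it lags real-time events.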

I'm assuming that Revere doesn't believe in psychics or crystal balls, any more than in deities. Any thoughts about what triggers to think about?

"We need to think creatively to come up with some more readily measured trigger points, if possible... Any thoughts about what triggers to think about?"

BC -- Seems I read or heard something about a study using Internet search engine queries for key words and phrases associated with influenza to track community/state/regional outbreaks. The study's premise was that folks search for information about the things they are most interested in, thinking about, and/or experiencing, and that flu outbreaks might be trackable via Internet searches because people are only likely to search for information on the flu (ILI) if they or someone they know is sick. I don't know what has become of the study, but it is one idea for a more "real time" trigger solution.
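The core of that idea can be sketched in a few lines: check whether a flu-related search-query series leads the clinical ILI series. This is purely illustrative; the weekly numbers below are invented.

from statistics import correlation  # Python 3.10+

searches = [10, 12, 30, 55, 80, 70, 40]   # weekly flu-query volume (invented)
ili_visits = [5, 6, 8, 20, 45, 60, 55]    # weekly ILI clinic visits (invented)

# Correlate searches against ILI visits shifted back by 0-2 weeks; a stronger
# correlation at a positive lag would suggest searches lead clinical reports.
for lag in range(3):
    r = correlation(searches[:len(searches) - lag], ili_visits[lag:])
    print(f"lag {lag} week(s): r = {r:.2f}")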

By River the writer (not verified) on 17 May 2009 #permalink

I think you have illustrated a fundamental problem with the peer review process itself rather than a problem with Science or Nature cutting corners. Why should it actually take such a long time for a paper to go through the peer review process?
It is not unusual for a paper to take over a year from submission before it is actually in print - and that is in a situation where very little extra work is required following the reviewers' comments. In an environment where grant funding, prestige and ultimately career prospects are completely entangled in the publication record of individuals, it's no wonder that so many have resorted to ethical 'shortcuts' to expedite their papers.

Sigmund: As a journal editor-in-chief, I can tell you it can take months to get two or more reviews back from reviewers. Reviewing is onerous and purely voluntary. Finding a reviewer willing to do it takes time, and getting the review back takes more time. Then there are requested revisions, re-reviews and finally formatting. Authors are often very slow and tardy about all this. If it is a print journal, the paper then goes into a queue, since there is limited space (and galleys to check). For an online journal this part can go faster, if the publisher has adequate staffing, which many don't. Also, some fields, like math and physics, traditionally take a long time, so they have preprint servers to make papers available immediately.

we should make an exception for panflu
and enter a process of faster publishing, faster
availability of new drugs, vaccines.

how about letting flu cases fill out a questionnaire
and publishing the responses anonymized online in real time?
directly available to everyone, no delay

revere, I believe in truth and the power of argument. Sooner or later the true (accepted) experts
will write about it ...
> We truly don't know
And I don't believe you. You (one of you) did it at Gleeason's: http://bit.ly/jPW5o
And then you enjoy jumping on others who have other (lower) estimates
(Siegel, Fumento, Tenpenny, ...), somehow claiming you know better!?
The main "motive" here is: "we don't know" ... but then you claim expertise nevertheless.
That's OK, but it proves that "we don't know" is exaggerated. You are not 100% certain
about something, but may still have useful theories, expectations, estimates.
If you were no better than average people at estimating the pandemic risk (which I don't think),
then I'd say (which I don't) that ~70% of your blog was useless.
But I think you're "only" being dishonest, illogical here.
> not what you wish it were
you continue with your mission, I with mine. No censoring, OK?
http://bit.ly/SLzIU
those who say something can't be done should not stop those who are actually
doing it http://bit.ly/uUJz5

anon: Your first suggestion is basically what is done: expedited publishing. But it has more than one motive, and when a high profile journal does it, it may get unwarranted emphasis in the press. As for your second suggestion, I have no problem with it, but it isn't science if done this way, since you can't tell what it means. This is the same idea as using Google searches as a surveillance system. But it's not scientific publishing.

anon: Fumento is a paid troll and doesn't make estimates. Neither did Siegel. But both made arguments, which I addressed. You make estimates (or ask for them) but you don't make arguments. You just want numbers. You don't seem to care where they come from.

Revere -- "I have no problem with it but it isn't science if done this way since you can't tell what it means. This is the same idea as using Google searches as a surveillance system."

I didn't mean to imply I thought using Internet search engine queries about flu would be a scientific method for tracking spread of ILIs, just that it might be a usable alert system for scientists to zero in on communities experiencing outbreaks and potential transmission among folks who don't seek medical care. I don't know if it would be worthwhile or not, but I thought it interesting.

"Of course if it took longer to publish it probably wouldn't be publishable, either, because we would know more about what this virus is doing and the actual data would supersede the computer-aided tea-leaf reading."

I'd read the Ferguson paper more as a product to inform policy makers than as a rigorous scientific work. Given that Ferguson is one of the foremost influenza epi modelers, folks would be interested in what he has to say about the outbreak. Yes, by the time solid data arrived Ferguson's paper would be obsolete. It'd also be too late to act if Ferguson's modeling indicated something radical.

If Ferguson's modeling indicated something radical, then wouldn't you rather read it in a Science article than a garbled report from a press conference at Imperial College?

By Sock Puppet of… (not verified) on 18 May 2009 #permalink

Sock: The question for Science and for policy makers is whether such preliminary data told us something one could base policy on. It didn't.

The "prestige" of the journals has to be questioned if publication standards have weakened to the level described. First isn't best ... or maybe it is nowadays?

With respect to the main criticisms presented, one could only conclude that the so-called peer-review process is occasionally laughable. Rapid publication of tangible data is valid, provided that inferences and interpretations are appropriately limited. Nothing more.

As far as the data that arise from investigations such as these, along with who has access to them and who may conduct and publish analyses, keep in mind that many hundreds of persons were involved. There should only be corporate authorship in such instances, no specific names.

statistics is science, or not?
All science is basically estimating, only that above some
probability threshold (95%?) you say "we know" while below that you say "we don't know".
Nothing is 100% certain, not even when revere says "we know".
Fumento offered 1:10 odds for panflu in 2008 --> an estimate of <10%.
Where is your bet offer?
All the forums are full of my arguments and analysis.
Summary, excerpts here: http://www.setbb.com/fluwiki2/
You may have missed them because
you ignore the forums and concentrate on the blogs.

I thought the NEJM papers were crap. Almost none of their claims were supported by any evidence provided. I think peer review failed. I agree that CDC has some interesting data to share, but the papers were terrible. An impact factor of 45! They need higher standards of quality control.

Revere: Weigh in here.

Can't just let go of the rope ...

Peer: A colleague of mine used to say, "Real peer review starts after publication." So what is your peer review of the NEJM articles? I thought the Miller et al. Perspective was valuable and timely. I thought the CDC data were valuable and timely. What exactly did you think was "crap" about these papers?

For the record, I didn't think the Ferguson group paper in Science was crap. I thought its publication was premature, and that its placement and timing gave premature results a prominence, and a possible influence on policy, of inappropriate weight.

Ok, 'crap' was a little harsh. The first paper was fine; most of the data are publicly available, and the results had already been known/verified long before the publication.

But where was the evidence for the second paper? They claimed the virus was from a triple reassortant source, possibly a precursor of this outbreak. They further concluded that the viruses were not different from those circulating in swine. But they didn't present any of the trees other than an H1 tree in the supplementary material. Maybe I don't understand flu well enough, but they said each gene's source was identified and that the genes came from triple reassortant swine viruses. How is that possible with only one gene tree? Where is the evidence? The analysis could have been done to support the first paper, but it probably didn't warrant a second paper.

As for the Ferguson paper: the authors are part of a WHO rapid pandemic assessment collaboration. These results are already getting high prominence and influencing policy making. At least their work is in the public domain, and now others can try to assess it.

Peer: well, we disagree on value. The CDC data were the first compilation, organized for the medical community. The paper on the triple reassortant (this was well known and goes back to 1998 or so) was clarifying the garbled reports that the virus had avian and human segments. It is all swine. The avian and human segments have been there since the triple reassortment event sometime in the 1990s or earlier. So it really was a swine virus. The unusual feature was the North American and Eurasian reassortment (all swine), which was new. So this was information and background, at a time when people who weren't following this closely could get it all in one place. The Miller et al. paper was important for perspective, which is how it was labeled. These are the best in the business at that kind of archaeo-epidemiology.

Regarding Ferguson, I still think there was little value in, and an artificially heightened prominence to, this otherwise competent paper. And Ferguson and company were not shy about overinterpreting it to the press. He's done this before, and it's not good.

The Miller paper was very informative and interesting. My complaint is with the Shinde et al. paper. In their figure they identify the genetic source of the gene segments from swine viruses that have infected humans since 2005. Therefore they've done a phylogenetic analysis. As a scientist I'll believe what they've done if they present the evidence. But, as this entire post suggests, some papers were rushed to press.

An NCBI query for influenza from swine in North America since 1998 shows 7 different subtypes circulating. That implies that the diversity of influenza in swine is high enough that the source cannot simply be assumed to be triple reassortant viruses. That alone raises sufficient doubt in my mind that if I were the reviewer I would ask to see the analysis.
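For anyone who wants to reproduce that kind of query, here is a rough sketch using Biopython's Entrez interface. The search term is my own approximation of "swine influenza, North America, 1998 onward"; NCBI's Influenza Virus Resource offers finer filters, and the real query behind the 7-subtype count may differ.

from Bio import Entrez  # Biopython

Entrez.email = "you@example.org"  # NCBI asks for a contact address

# Approximate search term; adjust fields and dates as needed.
handle = Entrez.esearch(
    db="nucleotide",
    term='Influenza A virus[Organism] AND swine[All Fields] '
         'AND "North America"[All Fields] AND 1998:2009[PDAT]',
    retmax=0,  # we only want the total count here
)
record = Entrez.read(handle)
handle.close()
print(record["Count"], "matching sequence records")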

As for Ferguson and others, I have to maintain my earlier statement. While they may be using the current outbreak to raise the value of their stock, they were asked by the WHO to collaborate and do their magic. They already have a lot of influence. If they hadn't published, then, Revere, how could people criticize the models being developed or the results being used to influence policy and pandemic planning? Like you said, it's a competent paper, which Science decided it wanted. Nature, Science and NEJM don't even review the majority of papers they receive. Many papers don't receive fair treatment whether a pandemic is imminent or not. The fact that the opposite happens for a topical paper should not be a surprise.

The Ferguson paper may not be pertinent in the long run. Time will tell. And the conclusions from the Shinde et al. paper are probably right. But the evidence wasn't presented. So, as the first sentence of this post states:

"It is clear that if you want to get a so-so paper published in a top tier journal, the best way to do it is to write about a breaking medical news event and get there first."

Peer: I don't think we are that far apart, but here is where I think we differ. In my view, the information presented in the NEJM articles was useful to readers and policy makers. In the case of the Ferguson paper it wasn't. The issue with the Ferguson paper wasn't the model; it was the inputs to the model, which were such that no model would have been very informative at that stage. And indeed this one wasn't. The model wasn't much of a model, either. It was essentially a back calculation that could have been done on the back of an envelope (in principle).
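To give the flavor of that back-of-envelope calculation: a crude reproductive-number estimate needs only the epidemic's growth rate and the generation interval, using the standard approximation R0 ≈ exp(r · Tg). The inputs below are illustrative, not the values Fraser et al. actually used.

import math

def r0_from_growth(doubling_time_days, generation_interval_days):
    """Crude R0 estimate assuming exponential growth and a fixed
    generation interval: R0 ~= exp(r * Tg), with r = ln(2) / doubling time."""
    r = math.log(2) / doubling_time_days
    return math.exp(r * generation_interval_days)

# Illustrative inputs: cases doubling every 10 days, 3-day generation interval.
print(round(r0_from_growth(10, 3), 2))  # -> 1.23

The point is that when the doubling time and generation interval are themselves highly uncertain, as they were in late April, no amount of modeling machinery around this calculation makes the output much more informative.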