Vaccines are a topic I don’t much like writing about, for many reasons. Vaccination programs are important to public health, but we (all the Reveres, including this one) have always been interested either in basic science or in programs applied to the whole population at once, such as clean water, air, or food, or safe products in the marketplace. But vaccines keep coming up, so we talk about them. Since this blog has spent a lot of time on flu, most of it has related to influenza vaccine, but not always. This is a “not always” post, and it is partly about the latest news that US Army and Thai researchers are reporting a modest success with an AIDS vaccine. We mentioned it in passing a week or so ago, and while the effect, as reported, was indeed modest, it was the first time any AIDS vaccine looked like it worked at all. The trial was controversial at the outset, but the results seemed to vindicate it as a proof of principle for a vaccine, an approach some thought might never be a workable way to deal with AIDS. Like everyone else, we were filled with hope by the idea, even though we hadn’t seen the paper it was based on. Now a controversy has erupted about whether the data really support the hopefulness of the Army’s press release. It turns out we weren’t the only ones who hadn’t seen the data. Now that some people have, there are new questions. So first a little about the specifics, and then a comment about the press release.
Jon Cohen over at ScienceInsider seems to have broken the story:
The press conference and press releases discussed an analysis that included all 16,000 people who participated in the trial, except for seven who were infected before receiving any doses of the two vaccines that were used in combination. Seventy-four people in the placebo arm of the study became infected with HIV, while the similarly sized vaccinated group only had 51 infections, a 31.2% efficacy. The analysis indicated that there was about a 96% level of confidence that the effect was real and not due to chance, just above the 95% cutoff that is widely used as a measure of statistical significance.
In the private briefings, researchers learned that a second analysis, which is usually performed in vaccine studies and was part of the Thai study’s design, also found that vaccine recipients had fewer infections, but the reduction was not statistically significant and the level of efficacy was slightly lower. This analysis eliminated people in both groups who did not rigorously follow the protocols. “Anything that really works, you’ll have enough robustness in results to be significant with both analyses,” says Douglas Richman, an AIDS researcher at the University of California, San Diego, a longtime critic of the study. Richman did not discuss the specific results with Science. (Jon Cohen, ScienceInsider)
It’s not exactly correct to frame the 95% cutoff as dividing effects that are “due to chance” from those that are not, as this implies, but the more important issue is that the data were not completely reported, and it sounds like conventional statistical significance was sensitive to moving just a case or two from one group to the other. That sensitivity is what is at stake in the second (unreported) comparison.
Failing to achieve statistical significance doesn’t mean there was no effect, but it leaves open the alternative possibility that the lower infection rate in the vaccinated group was a chance fluctuation rather than the vaccine. That possibility was still not very likely, but it was more likely than a conservative but conventional criterion would permit, and certainly not the stuff of a press release and press conference. And that, it seems to us, is the real story here. The details about the analysis will get sorted out one way or another in the peer review and publication process (a paper with both analyses is supposedly under review at the New England Journal of Medicine).
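To see how fragile a result sitting just under the 0.05 line can be, here is a back-of-the-envelope sketch in Python. It is not the trial’s actual analysis (which used person-time and the full protocol-specified comparisons); it simply treats the two arms as exactly the same size, which the article says is only approximately true (“similarly sized”), and asks how surprising the 74-versus-51 split of infections would be if the vaccine did nothing:

```python
from math import comb

def binom_two_sided_p(k, n):
    """Exact two-sided binomial test against p = 0.5: the probability
    of a split between two equal-sized arms at least as lopsided as
    k infections vs n - k infections."""
    tail = min(k, n - k)
    one_sided = sum(comb(n, i) for i in range(tail + 1)) / 2 ** n
    return min(1.0, 2 * one_sided)

# Counts from the press materials: 51 infections among vaccinees,
# 74 among placebo recipients (arms treated here as equal-sized).
vacc, plac = 51, 74
efficacy = 1 - vacc / plac                       # roughly the reported 31%
p_value = binom_two_sided_p(vacc, vacc + plac)   # just under 0.05

# Now reclassify only two infections from the placebo arm to the
# vaccine arm, as a second analysis with different exclusions might:
p_shifted = binom_two_sided_p(vacc + 2, vacc + plac)  # no longer under 0.05
```

Under this simplified model the p-value hovers right at the conventional cutoff, and shifting a mere two cases between arms pushes it well past 0.05. That is exactly the kind of sensitivity that makes the unreported second analysis matter.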
More from Cohen’s piece:
Several researchers wonder why the data were even released publicly before the Paris meeting. People running the trial learned the results on 10 September, and [study researcher Colonel Nelson Michael] said there was concern that the information would leak before 20 October [the paper was to be discussed at an open AIDS vaccine meeting in Paris]. Thai collaborators asked for the 24 September date, Mahidol Day, which commemorates the passing of the current king’s father, a clinician who studied public health at Harvard University.
We’ve complained here many times about science via press release prior to publication. Many journals (including one I edit) issue press releases when a paper is published. It’s a way of promoting the journal and the research. I also dislike the common practice of giving reporters copies of articles under embargo prior to publication, and even worse, issuing a press release about research that hasn’t yet been reviewed or published and can’t be examined, as in this case. If you are going to talk about it in the press, put the data out there for everyone to see. That’s what happens when papers, even in preliminary form, are presented at conferences. If there is a finding of major importance to public health, then you put it out there without peer review so everyone can see it as quickly as possible and form their own opinions.
We have a bad example currently with the alleged “Canadian problem” suggesting that getting last year’s seasonal flu vaccine increased your chances of getting this year’s swine flu. That news leaked out somehow but most of the scientific community can’t see the data because it’s under review at an unidentified scientific journal. That’s bad behavior at any time by both the researchers and the journal, but egregious behavior in the midst of a flu pandemic and the roll-out of numerous national vaccination campaigns. Once the genie is out of the bottle there is no excuse for keeping the data from the scientific community. Both the researchers and the journal editors deserve censure.
Meanwhile we’ll have to wait longer to see what the deal is with the AIDS vaccine. It’s undergoing peer review at The New England Journal and there was no pressing reason to have a press release before that process was over, unless it was a naked attempt to put pressure on the peer reviewers not to nickel and dime the analysis.
Journal peer review isn’t the end of the process, though. It’s the beginning. As one of my epidemiologist colleagues is fond of saying, “Real peer review starts after publication.” So far we’ve had neither kind of peer review. Just a press release.