The other day we did something we don’t like to do when talking about flu: we made a prediction. We predicted a bad swine flu season this fall in the northern hemisphere. The history of flu epidemiology shows that making predictions is dangerous; flu can make a fool of anyone, regardless of expertise. One commenter in particular disdained the risk we took as being no risk at all. It was perfectly obvious to him (or to “anyone paying attention”) that next flu season would be a swine flu horror show. It may well be, and then this commenter will certainly gloat, perhaps with justification. But since this is a site where we try to add some value to what you can read anywhere (whether it is Fool’s Gold or not, you and history will decide), we thought it might be useful to explain why what looks obvious may not be so obvious.
First, if you make a prediction it’s nice to give some reasoning. The commenter’s reasoning was that “anyone paying attention to what was going on in the southern hemisphere” would see that things were going to hell in a handbasket there, with health care workers “dying in droves.” The evidence for this last claim was a newspaper article saying that 10% of the dead in Argentina were health care workers. When we suggested such reports had yet to be verified, his response was to point to a news report of a nurse in California who had died of swine flu and to ask if we likewise rejected that report. We give these details not to argue against a single commenter but to make a point about data. Not all newspaper reports are equal. It is perfectly possible for a newspaper to give an accurate report of the death of a single person. That is something reporters know how to get accurate information about, and we all know such information is often available. But when newspapers start reporting on things that are much more difficult for anyone to get accurate information about, it is different.

As an epidemiologist I know firsthand that it is rarely as simple as counting or accepting reports. If you’ve never investigated an outbreak, you probably wouldn’t be aware that the first step, verifying the diagnosis, often has surprising results. I have personally looked into numerous cancer clusters, based on firsthand reports from individuals in neighborhoods (“I have brain cancer, my next door neighbor has brain cancer, the lady across the street died of brain cancer,” etc.), only to find that some of the people had other kinds of cancer or didn’t have cancer at all. Even more surprising, you find that some people have been diagnosed with cancer but don’t know it, or don’t know what kind of cancer they have. And it’s not just true for cancer.
If you do a random sample of people and ask how many have been diagnosed with lupus, for example, you will get a substantial overestimate of the true figure, because many people take a doctor saying “You had a positive lupus test” to mean they have lupus. On the other hand, if a newspaper gives the number of cancer cases in a town based on the report of the state’s cancer registry, we are more inclined to take it as reliable data. Not all reports are equal, not even all newspaper reports.
As we saw originally in Mexico, and as is being repeated in many other countries, some kinds of information on infectious diseases can be very hard to ascertain. In the midst of stress, fear and the necessity to make estimates and guesses, it can be difficult even to count the dead from flu, much less ascertain their occupations or whether there is any relation between their occupation and getting the flu (many people identified as nurses and doctors don’t work in health care settings, or work in settings where they are no more likely to see flu patients than they would be in a grocery store). When I hear a figure in a newspaper that 10% of the dead are health care workers (who by implication got flu in the course of their work), I first ask myself if there is any way that the newspaper — or anyone — is likely to have an accurate fix on that number. It isn’t like the nurse in California. Even in the US, CDC had to do a special study in a very restricted time frame and geographic area to get some information on health care worker cases. Is the number in Argentina truly 10% of the dead? It could be, but I doubt anyone knows at this point. That’s why my reply to the commenter was, “We’ll see,” and his misinterpretation was that I was rejecting any newspaper report as false. I wasn’t. As an epidemiologist I know not all data are created equal. Some are more certain than others.
In a similar vein, the commenter pointed to news reports of the collapse of the health care delivery system in the southern hemisphere as it tried to cope with overwhelming demand. When other commenters who actually lived in some of those countries responded that this was a gross exaggeration, his answer was that it was inconsistent with what he had been reading in the newspapers. This isn’t exactly the same as the numbers issue, but it’s worth commenting on. For one thing, in a very bad flu season (and Australia and New Zealand appear to be seeing what looks like a bad season), some health services will indeed be overwhelmed. It happens in the US even without a bad flu season, because our health care system is so brittle and inefficient, with no reserve capacity. But it’s also true that flu is notoriously patchy in time and space, and what you see in one place is not necessarily what you will see in another. I think a newspaper can report conditions in emergency rooms with some accuracy, although there will always be an added element of spin to sell newspapers. But disagreements over how to describe what is happening can be honest and accurate, and all parties can be seeing some of the truth while describing it differently. One thing is sure, however: the disparate ways the situation is being seen by observers, whether reading about it in the newspapers up north or experiencing it on the ground down south, hardly constitute a situation that could be called obvious to “anyone paying attention.” Again, it’s a question of having some experience in understanding and evaluating data, and knowing when to put weight on it and why.
Presumably the reason the commenter brought these things up was the inference that if the southern hemisphere was having a really bad flu season, it was a given that we would have one in the fall as well. We were not willing to make that leap without some reasoning behind it. Again, this is a difference between how epidemiologists think and what might seem obvious on its face to a non-epidemiologist. Maybe epidemiologists are too cautious about such things, but we are scientists, and it is our experience that things are often not what they appear to be on the surface. That goes double for influenza. Consider some examples.
Let’s return to the patchiness of influenza. The toll it takes on the population varies from year to year. There are serious technical difficulties in how this impact is estimated (see our post here), but it is measured as excess mortality, not actual mortality from flu (which is not determinable with current data). There is an excess only when the number of people dying during flu season exceeds a certain threshold (one standard deviation above an estimated average mortality that varies throughout the year). In some years there is no excess at all (as in three of the last four years). Last year there was quite a large excess. Sometimes the excess is extremely large, 70,000 or more. Sometimes it is zero. Over many flu seasons it averages 30,000 to 40,000 (or sometimes higher, depending on how the estimate is done), with some years essentially zero and others very high. When we talk about “a severe flu season” we mean one in which the excess is large. In interpandemic years the bulk of the excess falls in the oldest age group, 65 years and older. Some of the variation from year to year seems related to the seasonal subtype, with H3N2 years worse than H1N1 years, but H3N2 years can themselves vary substantially. In the last couple of decades there has been a fourfold difference in excess mortality between bad and not-so-bad H3N2 years (the source for this is the Viboud et al. article, discussed next). We don’t know why.
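The threshold logic described above can be sketched in a few lines of code. This is a hedged illustration with invented numbers, not the actual method used by CDC or the researchers cited: deaths count as “excess” only in periods where observed mortality rises above a seasonal baseline plus one standard deviation.

```python
# Illustrative sketch of the excess-mortality idea: deaths count as "excess"
# only where observed mortality exceeds the epidemic threshold, defined here
# as the seasonal baseline plus one standard deviation. All numbers invented.

def excess_mortality(observed, baseline, sd):
    """Sum of observed deaths above the threshold (baseline + 1 SD), per period."""
    total = 0.0
    for obs, base in zip(observed, baseline):
        threshold = base + sd
        if obs > threshold:
            total += obs - threshold
    return total

# Expected weekly deaths from the seasonal curve (hypothetical values).
baseline = [100, 110, 120, 110, 100]
# A mild season: observed deaths never cross the threshold, so excess is zero.
mild     = [105, 112, 118, 108, 102]
# A severe season: a large mid-season spike well above the threshold.
severe   = [105, 140, 200, 160, 110]

print(excess_mortality(mild, baseline, sd=15))    # a "zero excess" year
print(excess_mortality(severe, baseline, sd=15))  # a large-excess year
```

This toy version makes clear why a year with plenty of flu deaths can still register “zero excess”: everything under the threshold is absorbed into the expected seasonal variation.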
In an interesting article in the journal Vaccine in 2006, Viboud et al. (this team is among the most experienced analysts of historical influenza statistics in the world) drew attention to the unusual influenza season of 1951 (we posted on another paper on this subject in 2006). But they did more than discuss 1951. They compared a measure of virus transmissibility (the effective reproductive number R, analogous to R0), which they estimated for all pandemics and severe interpandemic flu seasons from 1918 to 1970 (they did not use data from “mild” flu seasons, because without excess mortality there are no reliable data). They used the method of Mills, Robins and Lipsitch, first published in 2004 to estimate R0 for the 1918 pandemic.
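To give a flavor of how such transmissibility estimates work, here is a deliberately simplified sketch, much cruder than the Mills-Robins-Lipsitch method itself: fit an exponential growth rate to the rising phase of an epidemic curve, then convert it to a reproductive number assuming a fixed serial interval. All numbers are invented for illustration.

```python
import math

# Simplified sketch of estimating a reproductive number from epidemic growth.
# This is NOT the Mills-Robins-Lipsitch method, just the underlying intuition:
# faster exponential growth, for a given serial interval, implies a higher R.

def growth_rate(cases):
    """Per-interval exponential growth rate r, fit from first and last counts."""
    n = len(cases) - 1
    return math.log(cases[-1] / cases[0]) / n

def reproductive_number(r, serial_interval):
    """R under the simplest assumption of a fixed serial interval: R = e^(r*Tg)."""
    return math.exp(r * serial_interval)

# Hypothetical weekly case counts doubling every week during the growth phase.
weekly_cases = [10, 20, 40, 80]
r = growth_rate(weekly_cases)                    # ln(2) per week, ~0.69
R = reproductive_number(r, serial_interval=0.5)  # serial interval ~3.5 days
print(round(R, 2))
```

The point of the sketch is only that R is inferred indirectly, from how fast cases (or deaths) grow, which is why estimates of transmissibility for historical seasons like 1951 depend so heavily on the quality of the underlying mortality data.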
One of the big surprises was that in terms of mortality, 1951 was as severe as, or even more severe than, the 1918 pandemic in England and Wales and in Canada. The epicenter of the 1951 season was Liverpool, where the transmissibility and mortality of the virus were higher than in all three waves of 1918 and in the 1957 pandemic, and much higher than in the 1968 pandemic. But 1951 had none of the other hallmarks of a pandemic year. The mortality pattern remained weighted heavily toward the elderly, and there was no evidence of genetic shift or drift. The two previous seasons had been mild or moderate. There was no evidence of antigenic differences to account for this single-season outlier. No genetic sequences are available for the 1951 virus, so we don’t know what made it so transmissible. One explanation is that some effect on viral fitness may have been responsible. An influenza virus has eight genetic segments that work together as a team, and it is surmised that some kind of reassortment involving the internal genes might have made a difference in transmissibility and/or virulence. We don’t know. Viboud et al. point to more recent seasons where the virus seems genetically similar but behaves differently epidemiologically in some locations (but not others). Examples are two severe H3N2 seasons: 1989-90 in the UK (A/England/89) and 1999-2000 in the US (A/Sydney/97). As the authors state, “These observations remain unexplained.” Even more striking, the same season (1951) was a mild flu season in the US and northern Europe. It appears two different H1N1 strains may have been circulating simultaneously, with different geographic distributions.
Given this history, perhaps our reluctance to predict, even when it might seem “obvious” to some, is more understandable. Not much about flu is obvious. But we did make a prediction anyway, based more on a hunch than any set of reasons we thought “obvious.” At least we were frank in saying it was a hunch and we gave some detailed reasoning that led up to it. That’s about all we feel comfortable doing at this point.
Of course your mileage may vary.