Effect Measure

The end of flu season

Flu season is over. Before you heave a sigh of relief, I’m talking about the (official) 2008 – 2009 flu season, which ended August 30 in week 34 of the calendar year. Welcome to the new flu season, the one called 2009 – 2010. It promises to be, well, “interesting.”

Not that the one just concluded wasn’t interesting. Indeed, before we are done analyzing the data already collected it’s likely to prove the most informative in flu science history, partly because we have tools we never had before, partly because we have information gathering and handling capacity we never had before. But mostly because this was the season of the long-awaited periodic influenza pandemic that history told us was bound to come. We just didn’t expect that particular strain, subtype or starting location. Which is just what we did expect with flu. It always confounds us.

We started with a more or less conventional, if somewhat less severe, flu season with the usual seasonal H1N1, H3N2 and influenza B. One reason it seemed less severe was that this was mainly an H1N1 seasonal flu. Experience shows that H1N1 seasons often produce less excess mortality, so when the season wound down in April it looked as though the influenza impact might not have been as bad as it sometimes is. Then the pandemic hammer fell, and it fell with surprising swiftness, considering influenza virus doesn’t usually like the spring and summer. From April to the end of August a new form of H1N1, this one of swine origin, added 40,000 confirmed or suspected cases to the official rolls, but everyone acknowledges these are just the tip of the iceberg. How much of the tip we don’t know. CDC reckoned at one point over a million cases, too many to confirm or even count. It’s just a guess. There were about 9000 hospitalizations and 600 deaths. Those are the officially recorded numbers, but once again, they are an underestimate, although not as much of an underestimate as the case count. It’s obviously much easier to miss or ignore community cases than those severe enough to require hospitalization or resulting in a fatal outcome. But we have certainly missed hospitalized and fatal cases resulting from flu, so those numbers are the bare minimum.

If we look at the surveillance system for the last flu season we clearly see we had two flu seasons packed into one. Here is a bar graph of positive specimens submitted to the 150 CDC/WHO labs involved in virologic surveillance in the US. You can clearly see the two components, the one on the left being the usual seasonal flu season, the one to the right (starting in week 17) the pandemic flu component. When interpreting this graph it is important to understand that it does not represent what epidemiologists would call the “epidemic curve,” the evolution of the number of cases over time. That’s because what is being counted here is the number of positive specimens for influenza of various types and subtypes submitted to laboratories, not the number of cases. If the fraction of all cases that were swabbed and submitted to these labs remained the same throughout the year, this could be interpreted as an epidemic curve, but that is not the case. The fraction changes continually as participating hospitals and practitioners change the way they take specimens, triage cases and submit samples from cases they suspect might be flu. But by looking at the colors of the bars one sees the two components very clearly. During the pandemic phase, the mix of seasonal flu types (A or B) and subtypes and strains (seasonal H1N1 and H3N2 versus swine flu H1N1) changes dramatically and becomes essentially all swine flu. So that tells us that many new cases appeared that were different (all the charts are from the weekly CDC surveillance report, FluView).
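To see why a changing sampling fraction breaks the epidemic-curve interpretation, here is a toy sketch. All the numbers are invented for illustration, not taken from the surveillance data above:

```python
# Toy illustration (invented numbers): why positive-specimen counts are not
# an epidemic curve when the fraction of cases being tested changes.

true_cases = [100, 200, 400, 800, 1600, 3200]              # hypothetical weekly cases, doubling each week
sampling_fraction = [0.02, 0.02, 0.02, 0.10, 0.10, 0.10]   # testing interest jumps fivefold mid-series

# What the labs would actually see each week:
observed_positives = [round(c * f) for c, f in zip(true_cases, sampling_fraction)]
print(observed_positives)  # [2, 4, 8, 80, 160, 320]
```

The observed series jumps tenfold between the third and fourth weeks even though the underlying epidemic merely doubled, which is why the bar heights alone can’t be read as case counts; only the changing color mix (type and subtype) is directly interpretable.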

[Figure: virological.jpg — weekly influenza-positive specimens by type/subtype, CDC/WHO collaborating labs]

We can see the same thing with pediatric mortality. This is (thankfully) much rarer and more accurately counted. Each year somewhere between 50 and 150 children die of flu, the higher figures being for really bad flu years. This year we will end up with something like 111, but over a third (43) were swine flu deaths. Without them this season would have had fewer pediatric deaths than last season, which was a comparatively bad flu season. The double hump in pediatric deaths in 2008 – 2009 is plainly different from the pattern of the last three years.

[Figure: peds.jpg — weekly pediatric influenza deaths by season]

The results of the pandemic are harder to see but still visible in another major component of the surveillance system, the 122 city count of pneumonia and influenza (P&I) deaths. You need to know how to read this graph to see the evidence of the pandemic in it. Deaths from P&I vary seasonally. Not all of them, or perhaps even most of them, are from influenza. There are a lot of other respiratory viruses, and things other than viruses that cause pneumonia. You can see the seasonal average as the bottom wave-like solid line. Since that’s an average (obtained from data over many seasons but taking year to year variation into account), the actual number of P&I deaths each week varies around that bottom solid line, sometimes above it, sometimes below it. When influenza goes through its seasonal cycle it adds to the P&I death total. The upper solid line is also a statistical construct, this time a threshold marking the upper 5% or so of seasonal variation. When there is a lot of flu around it pushes P&I deaths into that upper zone (above the upper solid line). Those are the excess deaths associated with flu season. For seasonal flu, those are mainly people over the age of 65. You can see that this occurred for a few weeks around New Year in the 2005 – 2006 season, but that 2007 – 2008 was a fairly bad year, contributing a lot of excess mortality (we covered all this in detail in a previous post, “How do we know how many people die of flu every year”). It is the excess deaths, the ones poking above the upper line, that represent the oft misquoted figure of 36,000 deaths from seasonal flu each year. As you can see, for four of the last five years this number was essentially zero (essentially nothing poking above the line). But many years the numbers are more like 2007 – 2008, sometimes seventy or eighty thousand. Averaged over 20 or so flu seasons, they amount to about 36,000 a year.
This year, except for a week or two in May or June (it’s hard to tell from the graph), it was again very small. So how do we see the evidence of the pandemic? This year looks much like the other years except for the bad one of 2007 – 2008:
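The excess-death bookkeeping just described can be sketched in a few lines. In practice the baseline and epidemic threshold come from a regression over many past seasons; the weekly death counts and threshold values below are invented purely to show the arithmetic:

```python
# Hedged sketch (invented numbers) of how excess P&I mortality is tallied:
# only deaths above the epidemic threshold count as flu-associated excess.

weekly_pi_deaths = [1400, 1450, 1700, 1900, 1750, 1500]      # hypothetical weekly P&I deaths
epidemic_threshold = [1500, 1510, 1520, 1520, 1510, 1500]    # upper solid line, from historical seasons

# Sum only the part "poking above the line"; weeks at or below it contribute zero.
excess = sum(max(d - t, 0) for d, t in zip(weekly_pi_deaths, epidemic_threshold))
print(excess)  # 180 + 380 + 240 = 800
```

Note that a season can have substantial flu mortality spread among age groups with high baseline mortality and still register near-zero excess by this measure, which is why the pandemic’s signal in the under-60s is easy to miss in the aggregate graph.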

[Figure: seasonalwave.jpg — 122 cities weekly P&I mortality with seasonal baseline and epidemic threshold]

But if you look more closely you will see something else. Starting at week 20, P&I deaths are above the seasonal average every week. The only other time this happened over this spring – summer stretch was in the bad year of 2007 – 2008. In that case these were mostly the elderly. This year, they are mostly in the under 60 age group. That’s one of the reasons the effects aren’t so evident in the P&I graph. Most of the seasonal average P&I mortality is in the elderly and the noise from that age group is masking the pandemic effect.

You can see the age effect most clearly in another series of charts. In each of the panels, from the top (the youngest) to the bottom (the elderly), you will see a dashed line. That’s the seasonal average rate of flu per 100,000 population. These numbers are cumulative, so as the weeks go on the curve has to rise (each week adds to the week before). When the solid line reaches the dashed line it means we have reached the point where the risk to an individual of getting flu equals the average risk of the last three years during the months considered “flu season” (October to April). Flu season peaks in January to March, depending on the year. Your risk of getting flu “off season” (April to October) is very low (as far as we know, although we haven’t done much surveillance in this time period, so that is a statement that may be revised in coming years). The way the graph should work is that the line starts at zero at the beginning of April or May (here it is week 16, sometime in April I think) and, if it is an “average year,” it should reach the dashed line by the following April. But you can see that for some age groups things are already ramping up.
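The cumulative curves in those panels are built by running-summing the weekly rates, and the interesting question is which week the running total crosses the dashed line. A minimal sketch, with invented weekly rates and an invented three-season average threshold:

```python
# Sketch (invented numbers) of one age panel: cumulative flu rate per 100,000
# versus the dashed line (average cumulative rate of the past three seasons).

weekly_rate_per_100k = [0.5, 1.0, 2.0, 4.0, 6.0, 8.0]  # hypothetical weekly rates for one age group
seasonal_average_cumulative = 15.0                      # hypothetical dashed-line value

cumulative, crossing_week = 0.0, None
for week, rate in enumerate(weekly_rate_per_100k, start=1):
    cumulative += rate  # curve can only rise: each week adds to the last
    if crossing_week is None and cumulative >= seasonal_average_cumulative:
        crossing_week = week

print(cumulative, crossing_week)  # 21.5 6
```

In an average year the crossing would come around the following April; a crossing before October, as in the 5 – 17 panel, is what makes the early ramp-up so striking.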

[Figure: agepanels.jpg — cumulative influenza rates per 100,000 by age group, with seasonal-average dashed lines]

But if you compare the 65+ panel (bottom) with the 5 – 17 year old age group you’ll see a dramatic difference. For the oldest age group the flu season is just starting (note that the vertical scale differs for this age group). It is just barely inching up from the bottom. But in the age group hardest hit by swine flu, 5 – 17 year olds, the risk of flu has already exceeded the seasonal average for the end of the flu season, and we’re not even to October. We’re already there for adults (18 – 49 year olds), too. And the risk for the under 4 year olds is on the way. Getting old stinks (I know), but in this case I’m doing better than a 4 year old. It’s something, and I’ll take it.

Finally, if you have any doubt that the new flu season is now underway, take a look at the percentage of outpatient visits for influenza-like illness (ILI) to the CDC network of providers that are part of ILINet. Not only has that percentage been higher than in previous years, but there is now a marked uptick.

[Figure: outpt.jpg — percentage of outpatient visits for influenza-like illness (ILI), ILINet]

The 2008 – 2009 flu season is over. The 2009 – 2010 flu season is here. I’m not going to say, “Welcome.”

Comments

  1. #1 ABradford
    September 10, 2009

    Wow, blog post from the future!

  2. #2 revere
    September 10, 2009

    Abradford: LOL. Thanks. Brought us back to the present. You really don’t want to know what the future holds.

  3. #3 melbren
    September 10, 2009

    Revere:

    I wanted to ask about the possibility of extrapolating from the data above some sort of indication that we had experienced a “herald wave” in spring 2009–and the possibility that some immunity may have been conferred to some of the population at that time. But, that got me thinking about the following.

    To test in real time the “herald wave” theory and its possible conferring of immunity, would it be of value to compare illness rates of college freshman to illness rates in that of their “returning” peers–sophomores, juniors, and seniors–among university populations that seemed to have experienced an uptick in (ahem) “bad colds” last spring?

    For example, I suspect my daughter’s college campus might have experienced a wave of “bad colds” that spread through its community last spring. If that were the case, the population of returning students at that university may exhibit greater immunity in this second wave than their (incoming) freshman counterparts. In other words, it might be of interest over the next few weeks to see if the freshmen on particular campuses are more likely to become ill than their sophomore, junior, and senior counterparts; at least, on campuses that likely experienced a first wave of H1N1 last spring.

    I realize that there could be confounding factors that might influence such statistics (freshmen are more likely to live in dorms, freshmen may come from nearby communities that also experienced a herald wave, etc.). But, maybe it’s worth looking at?

  4. #4 revere
    September 10, 2009

    melbren: Those kinds of studies are what epidemiologists do, but they are difficult and require complex logistics and often sophisticated statistical analysis (because of clustering with correlations). They are also quite expensive and require tricky ethical review procedures. I wouldn’t be surprised if someone is undertaking something like this but I don’t know for sure.

  5. #5 Grahame Grieve
    September 10, 2009

    Does that mean that the season is over for us downunder?

    There was some data presented a few weeks ago at the Australian Healthcare informatics conference showing that we have been having significantly (in both senses of the word) higher rates of flu since Dec last year, though the new epidemic was painfully obvious in the data starting in March. There seems to be some support for this notion in your data here too?

  6. #6 JeffreyY
    September 10, 2009

    An interesting thing about your first graph is that _all_ positive tests spiked around week 17, not just swine flu. I’d guess that’s because many more people got interested in testing their flu.

  7. #7 Rob
    September 10, 2009

    Revere:

    A question. You indicate that the northern hemisphere temperate season is defined as going from week 35 of one year to week 34 of the next. So what’s the corollary for southern hemisphere temperate zones? Is it exactly six months out of phase with the north? Or does it just follow the calendar year (not quite a mirror image)?

  8. #8 revere
    September 10, 2009

    Rob: The definitions are just the ones the US CDC uses for flu seasons. (Sept. 1 – Aug. 30). They are not related specifically to any astronomical or earth science definitions. They are administrative.

  9. #9 Curious
    September 10, 2009

    I believe it’s started. Little girl I’m babysitting today is sniffling and coughing with that same noisy cough (SO mad at her mother for sending her to school and thus, my house). Friends and family on Facebook who live in the south and west all have status updates: “I’m sick! My kids are sick!” There seem to be equal complaints of “bad colds” and “stomach flu.” Can H1N1 present *just* as stomach flu, do you think?

    All we can do is hold on and hope for the best.

  10. #10 cpg
    September 10, 2009

    Curious

    Yes. Also no fever.

  11. #11 Rob
    September 11, 2009

    Thanks Revere. But do you know if CDC also has a convention for the southern hemisphere? Now that we can track viruses from one hemisphere to the next as seasons change, does CDC have a convention for defining what goes on down under?
