Influenza models examined by IOM

by Revere, cross-posted on Effect Measure

On December 11, the Institute of Medicine, one of the four constituent parts of the National Academies, released a “letter report” reviewing the scant information on the effects of non-drug measures to slow or contain the spread of an influenza pandemic (available as a free download here). The report was produced after a special workshop on October 25 at which the panel participants heard from a variety of experts, with subsequent deliberations that produced the summary letter report and its recommendations.

“Letter Reports” are mini-versions of the full IOM treatment, in which a specially selected panel of experts deliberates on a particularly difficult question and issues a consensus view. Full reports from NAS panels can be influential or they can pass into history relatively unnoticed. Usually, however, they are taken with some seriousness. I have been involved with several NAS panels as both member and chair, so I have helped produce both letter and book-length reports. Take that as a disclosure of my possible biases. In my experience the NAS process works surprisingly well, much better than anyone would have a right to expect. It shouldn’t work, but it does.

Letter Reports go through the same process as full reports, but on an expedited basis. They are usually much shorter than full NAS reports (which are issued in book form and sold to the public). But don’t let the designation “Letter” mislead you. This is a substantial piece of work running more than 30 single-spaced pages of text, full of useful information and observations. In this post we will discuss only the portion that looks at the wide range of modeling efforts that attempt to improve decision-making for the use of various non-drug non-vaccine interventions.

The Report opens with a brief discussion of the outstanding uncertainties about influenza epidemiology. None of this will be new to readers of this site, but most people remain surprised and dismayed at the extent of our ignorance of important facts, for example:

  • What would the clinical attack rate be for H5N1 or another strain to which the population is immunologically naive? In ordinary influenza, attack rates are highest in young children while case fatality is highest in the elderly. In pandemics this is likely to change, but how and to what extent we don’t know
  • What is the incubation period for a pandemic strain? For seasonal influenza it is usually taken to be 2 – 3 days, but for H5N1 more than 4 days seems common
  • What is the mode of transmission (small aerosol droplets, large droplets, contact with contaminated inanimate objects)? We still don’t know this key piece of information
  • What will the average transmission rate be, i.e., how infectious will the disease be from person to person? How long before symptoms appear will a person be infective (if at all)?
  • What will the average time be between one case and the cases it produces?

These uncertainties make modeling an outbreak difficult, but not impossible. It is possible to use models to give a range of outcomes and use them to reveal what kinds of information are most important. Possible, but difficult, as the Report shows. For this and other reasons the Panel emphasizes that models should be used as aids to decision making, not as substitutes for decision making. Used wisely they provide additional information and insight. Used badly, they provide an opportunity for a policy disaster. Since this administration rarely passes on an opportunity to commit a disaster, this isn’t very heartening.
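To see why these unknowns matter so much, here is a minimal sketch (my own illustration, not anything in the report) using the standard SIR final-size relation. The basic reproduction number R0 is exactly the kind of quantity we do not know for a pandemic strain, and small shifts in it produce large shifts in the predicted attack rate:

```python
import math

def final_attack_rate(r0: float) -> float:
    """Solve the classic SIR final-size relation z = 1 - exp(-r0 * z)
    by fixed-point iteration; z is the fraction of the population
    ever infected once the epidemic has run its course."""
    z = 0.5  # starting guess
    for _ in range(200):
        z = 1.0 - math.exp(-r0 * z)
    return z

# Plausible but uncertain R0 values give very different attack rates
for r0 in (1.2, 1.5, 2.0, 3.0):
    print(f"R0 = {r0}: final attack rate ~ {final_attack_rate(r0):.0%}")
```

Running this shows attack rates climbing from roughly a third of the population at R0 = 1.2 to over 90 percent at R0 = 3, which is why a range of outcomes, rather than a single prediction, is the honest product of such models.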

More than nine pages are given over to a review of six models, including a household model by Larry Wein that used historical data to model within-household spread, assuming aerosol transmission was the main route. His work was recently the subject of a New York Times Op-Ed on the use of masks. Because of his prior assumption that aerosols were the primary means of spread, he concludes that face masks are the preferred intervention, followed by ventilation, humidifiers and social distancing. The panel was noncommittal on his analysis, observing only that

Wein’s analysis sheds important light on blind spots in current thinking and raises questions about assumptions that were implicit in the other models presented. Specifically, his model highlights the significant uncertainty that surrounds the modes and mechanisms of influenza transmission, suggesting this as an important area for future study. In addition, Wein’s analysis forces us to ask whether minimizing influenza transmissions is too narrow an objective, emphasizing the critical importance of further research to address these issues. (p. 6)

The panel goes on to examine three models from the MIDAS network (Models of Infectious Disease Agent Study), an NIH-supported group of modelers of infectious disease. All examined variations of “targeted layered containment” (TLC): combinations of targeted antiviral use, isolation of cases, targeted prophylaxis and quarantine of household contacts of index cases, school closures with children kept at home (not at an alternate site), and social distancing (e.g., telecommuting, event closures). These measures were suggested by policy makers, and while of great interest, they constrain the results, as the panel points out. The models were built by three different groups using different assumptions about the degree of transmission in schools, different social network models and different versions of the natural history of the disease. There were marked differences in the estimates of the effects of interventions but also broad commonalities dictated by the similarity of the policy questions being asked. Each indicated that TLC, even with modest compliance, could be effective in reducing transmission, with early isolation of the sick and school closures the key elements.

These were simulations of spread in large cities. Sandia National Laboratories has also done a simulation of a small town of 10,000 people, likewise using a social contact network. In these methods the computer simulates the average rates of interaction and transmission among different age groups and community segments. Again, assumptions about the rate of contact between various groups and the likelihood of transmission are important to model behavior. In this case, the assumed high rate of contact between children and teens was an important factor. If the pandemic were mild (say, at the 1957 level), closing elementary and high schools would be effective. As infectivity and severity become more serious, social distancing measures involving adults would be needed. In their analysis, these measures come first, only later followed by targeted antiviral use. The use of a partially effective vaccine on a portion of the population did not seem to be useful. But as the panel observes, as in other models there are some strong assumptions, not all of which are realistic:
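The agent-based idea can be caricatured in a few lines of code. The toy simulation below is entirely my own construction, not Sandia's: the town size, contact numbers and transmission probabilities are invented for illustration. It shows the mechanism, though: agents mix at random, child-to-child contacts transmit more readily, and "closing schools" damps exactly those contacts.

```python
import random

random.seed(1)
N = 1000                                  # toy town, not Sandia's 10,000
is_child = [i < 300 for i in range(N)]    # assume 30% of the town are children

def simulate(schools_open: bool, p_transmit: float = 0.08) -> int:
    """One stochastic generation-based run; returns the number ever infected."""
    infected = {0}   # currently infectious agents (seed one child case)
    active = [0]
    recovered = set()
    while active:
        nxt = []
        for i in active:
            # each infectious agent meets a random sample of the town
            for j in random.sample(range(N), 20):
                if j in infected or j in recovered:
                    continue
                p = p_transmit
                if is_child[i] and is_child[j]:
                    # invented multipliers: child-child contact is more intense
                    # while schools are open, much reduced when they are closed
                    p = 3 * p_transmit if schools_open else 0.5 * p_transmit
                if random.random() < p:
                    infected.add(j)
                    nxt.append(j)
            recovered.add(i)
            infected.discard(i)
        active = nxt
    return len(recovered)
```

Comparing `simulate(True)` with `simulate(False)` over many runs shows the qualitative effect the Sandia work quantifies far more carefully: removing intense child-child mixing shrinks outbreaks, but once adult-to-adult transmission alone sustains spread, school closure is not enough on its own.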

Beyond assumptions about the virus strain and social network structure, the results depend on a number of key assumptions, some of which may not be realistic in all communities. For example, the model assumes that all mitigation strategies begin after 10 individuals are diagnosed within the community, that adults are able to stay home to care for the sick or watch children following school closure, and that there is high compliance with interventions (90 percent). Several sensitivity analyses were conducted to examine the impact of changes in individual parameters on attack rates, including compliance with interventions, implementation threshold (number of cases diagnosed before intervention measures are implemented), disease manifestations (e.g., period of infectivity; asymptomatic infected vs. symptomatic infected), and infectious contact network. The model results were found to be highly sensitive to a reduction in compliance and changes in the contact network. (p. 9)

Finally, the panel examined an unpublished model by the RAND Corporation. This appears to be a more conventional compartmental model (as opposed to the MIDAS and Sandia agent-based models), whose aim was to see which interventions continued to work over a very wide range of plausible assumptions. They used 17 different interventions and also an Expert Choice package: hand hygiene, respiratory etiquette, surveillance, rapid diagnosis, social support, voluntary self-isolation, domestic travel restrictions, and surgical masks or N95 respirators (for health care settings only). After making some guesses about base-case effectiveness of these interventions in a pandemic, they assumed a large range of uncertainty and ran their model simulation over and over again (1,000 times) with randomly chosen values for these assumptions. The most effective combination was the Expert Choice package, which gave acceptable results (criterion not given in the IOM report) in 97.4% of the repetitions. But while these measures worked in all instances, they did not work equally well in all:
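The Monte Carlo step described above is easy to sketch in outline. The following schematic is my own reconstruction of the general technique, not RAND's actual model: the intervention list, effectiveness numbers and acceptability criterion are all invented, since the report gives none of them. Each repetition draws every intervention's effectiveness from a wide uncertainty band, reduces transmission accordingly, and the script counts how often the package keeps the attack rate below a threshold.

```python
import math
import random

random.seed(0)

# Hypothetical base-case transmission reductions for a bundled NPI package
BASE_EFFECT = {"hand hygiene": 0.10, "voluntary self-isolation": 0.25,
               "travel restrictions": 0.05, "masks in health care": 0.10}

def final_attack_rate(r0: float) -> float:
    """Standard SIR final-size relation, solved by fixed-point iteration."""
    z = 0.5
    for _ in range(200):
        z = 1.0 - math.exp(-r0 * z)
    return z

def one_run(r0: float = 2.0, threshold: float = 0.25) -> bool:
    """One repetition: perturb each effectiveness within a wide band,
    apply the reductions multiplicatively to R0, and call the run
    'acceptable' if the resulting attack rate stays below the threshold."""
    for base in BASE_EFFECT.values():
        drawn = random.uniform(0.2 * base, 1.8 * base)  # wide uncertainty
        r0 *= 1.0 - drawn
    if r0 <= 1.0:
        return True  # transmission suppressed; no sustained epidemic
    return final_attack_rate(r0) < threshold

acceptable = sum(one_run() for _ in range(1000))
print(f"package acceptable in {acceptable / 10:.1f}% of 1000 repetitions")
```

The point of the exercise is not the number it prints but the shape of the question: an intervention package is judged by how often it succeeds across the whole cloud of plausible assumptions, not by its performance in a single best-guess scenario.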

They also found that the choice of NPIs is most important in a moderately severe epidemic, because in mild epidemics many NPIs are viewed as being effective, and in very aggressive epidemics, most are not. However, they found that the relative ranking of the NPIs varies little with changing epidemic scenarios. (p. 10)

So what to make of these models?

In terms of strengths, the committee found that the models were useful in organizing the current state of knowledge about potential responses to influenza pandemic. The models helped articulate alternative strategies, available information, and gaps in knowledge so that policymakers could have a more informed discussion, and also so that improved questions and data could be developed for the next iteration of pandemic planning and modeling efforts. In addition, the models highlighted important areas of uncertainty and topics for future research, as discussed below. Similarly, the models examined a wide range of interventions. Furthermore, the discussions at the workshop served as an important forum for open dialogue among policymakers at various levels, modelers, researchers, and other stakeholders. (p. 10)

But:

As noted, however, it would be a mistake for policymakers to assume that any of these models can provide an exact roadmap of actions to take during the next influenza pandemic. Comments at the workshop suggested that some policymakers might be seeking guidance about which model(s) are ‘best’ and can be relied upon in forming their strategy. While there are ways to improve the predictive ability of the models and their utility for decision making, the models should serve primarily as a tool to aid in open discussion for making explicit alternative strategies, assumptions, data, and gaps. (p. 11)

This is an exceedingly interesting and useful report by serious people.