The first cases of swine flu were diagnosed in the US in San Diego in mid-April. The discovery was serendipitous, the result of out-of-season US-Mexican border surveillance and the use of a new diagnostic test at the Naval Health Research Center. When the new test protocol showed infection with influenza A of undeterminable subtype, follow-up testing revealed a previously unknown swine flu virus. Detection of a second, apparently unlinked swine flu infection in San Diego got the outbreak (now pandemic) investigation rolling.
That was just over 2 months ago, but it established the initial diagnostic pattern. April was the tail end of the flu season, but seasonal influenza was still present in the community, and for the first weeks of the outbreak CDC's lab in Atlanta was the only place that had the reagents to confirm that an infection was from swine flu and not seasonal flu or another virus altogether. So a makeshift case definition was set up to take this into account. If a person with an influenza-like illness (which required sudden onset, fever and respiratory symptoms) had a rapid flu test positive for influenza (or influenza A if the test could differentiate), a nose or throat swab was sent to the state lab. As a result of preparedness activities envisioning a possible pandemic with bird flu, CDC had been training state labs to differentiate between the two seasonal flu subtypes, H1N1 and H3N2, and bird flu, H5N1, so the capability to do seasonal subtyping already existed outside of CDC. But neither the reagents nor the proficiency for the new swine virus did. Therefore all specimens that were positive with a rapid test at the point of visit, and so were putative influenza A, were first subtyped at the state lab level. If they could not be subtyped, they were sent on to CDC for confirmation as swine flu. CDC later determined that virtually all unsubtypable influenza A specimens turned out to be swine flu.
The initial diagnostic filter was a positive rapid test in the hospital emergency room, clinic or doctor's office. How accurate was this? A letter just published in the New England Journal of Medicine confirms what we already knew. If you don't have influenza, the test is pretty good (99% accurate) at confirming that. Unfortunately, if you do have influenza A, the rapid test, typically taking about 30 minutes while you wait freezing in the ER cubby wearing your hospital johnny, isn't very accurate. It only picks up about half of the flu cases it is given:
From April 20 through May 30, 2009, the center processed 3066 specimens with the use of a real-time reverse-transcriptase-PCR (RT-PCR) assay, which revealed 273 confirmed cases of S-OIV (8.9%), 18 cases of H1N1 seasonal influenza (0.6%), and 31 cases of H3N2 influenza (1.0%). All suspected cases of S-OIV [swine flu virus] were confirmed with the use of the CDC's S-OIV assay.2 All specimens were collected from patients with influenza-like illness who met the CDC's guidelines for screening. Rapid-test results for 767 patients during this influenza season were available for comparison and were positive for 20 of 39 patients who had positive results for S-OIV on RT-PCR assay (sensitivity, 51%; 95% confidence interval [CI], 35 to 67), for 12 of 19 patients who had positive results for H1N1 seasonal influenza on RT-PCR (sensitivity, 63%; 95% CI, 39 to 82), and for 6 of 19 of patients who had positive results for H3N2 influenza on RT-PCR (sensitivity, 31%; 95% CI, 14 to 57). The specificity of the test, as compared with that of RT-PCR, was 99% in all cases. (Faix et al., Letter, New England Journal of Medicine; cites and figure references omitted)
There are two measures of test accuracy here. The proportion of people without flu the rapid test correctly identifies as uninfected is 99%; this is termed the specificity of the test. The proportion of those with swine flu correctly identified as having an influenza infection in this series was 51%, termed the sensitivity of the test. So if you have an influenza-like illness and test negative with one of these rapid tests, does this mean you don't have the flu? No, because the test misses about half of all flu cases it sees. On the other hand, you still don't know what your chances are of having -- or not having -- the flu, because that calculation depends on something besides the accuracy of the test. It depends on how much flu there actually is among everyone with ILI. Here's how that works.
Suppose there are a number of other viruses circulating in the community that cause flu-like symptoms. Typical examples might be metapneumovirus or respiratory syncytial virus. Many influenza-like illnesses (ILIs) have unknown or undiagnosable causes. Let's look at three seasons: one where there is very little flu in the community, say 1% of all ILIs being flu; one more like flu season, with 10% of ILIs being flu; and one where flu is very prevalent, say 40% of ILIs.
If 1% of 1000 ILI clinic visits are flu, that's 10 cases. The test will correctly identify 5 of them (50% of 10). Of the 990 people without flu, the test will incorrectly identify only 1% or 10 of them as having flu (99% specificity). Thus of the 15 people identified as having the flu, only 5 or 33% actually have it. For those with negative tests, 99% of 990 = 980 will be correctly identified while 5 of the ten with flu will be missed. So of the 980 + 5 = 985 negative tests, 980 or 99.5% will be right. So during a normal summer when there is little flu about, a positive rapid test will only be correct about a third of a time but a negative rapid test will be right almost all of the time.
Now let's say that out of 1000 ILI patients seen in the clinic, 10% have influenza, a figure that might be encountered during flu season. That means 100 people with flu. With a sensitivity of ~50%, 50 will be picked up by the rapid test. Of the 900 ILIs that don't have flu, 1% will be wrongly diagnosed as having flu because the test is 99% specific (meaning 1% false positives). So there will be 59 positive tests, of which 50 of 59, or 85%, will be true positives. Out of the 100 flu cases among the 1000, 50 will test negative, as will 891 of the 900 true negatives. Thus for a negative test, your chances of being a true negative are also pretty good (891/941): 95%.
Now let's see what happens during a period, like now, when the number of circulating viruses that might cause ILI is quite different and a much greater proportion is caused by the swine flu virus. What if the test is positive? If 40% of the 1000 ILI visits are swine flu, the rapid test picks up half of them (200). 6 more test positive when they don't have the flu (1% of 600), so the total of positive tests is 206, of which 200 are correct, or 97%. Now let's look at a negative test. Of the 1000 ILI cases, 594 (99% of 600) are true negatives, but 200 of the 400 flu cases also test negative (50% sensitivity of the rapid test). Thus the chance that a negative test is accurate under these circumstances is 594/794 = 75%.
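The counting argument in the three scenarios above can be packaged into a few lines of code. This is just an illustrative sketch (the function name `predictive_values` is mine, not from the NEJM letter or CDC), applying the 50% sensitivity and 99% specificity figures to 1000 hypothetical ILI visits:

```python
def predictive_values(sensitivity, specificity, prevalence, n=1000):
    """Work through the same 2x2 counting argument as the post:
    split n ILI visits into flu / no-flu, then positives / negatives."""
    flu = n * prevalence                     # true flu cases among the ILIs
    no_flu = n - flu
    true_pos = flu * sensitivity             # flu cases the rapid test catches
    false_neg = flu - true_pos               # flu cases the test misses
    true_neg = no_flu * specificity          # non-flu correctly called negative
    false_pos = no_flu - true_neg            # non-flu wrongly called positive
    ppv = true_pos / (true_pos + false_pos)  # chance a positive test is right
    npv = true_neg / (true_neg + false_neg)  # chance a negative test is right
    return ppv, npv

# The three scenarios from the post: summer lull, flu season, pandemic wave.
for prevalence in (0.01, 0.10, 0.40):
    ppv, npv = predictive_values(0.50, 0.99, prevalence)
    print(f"prevalence {prevalence:.0%}: PPV {ppv:.1%}, NPV {npv:.1%}")
```

Running this gives PPVs of roughly 34%, 85% and 97% and NPVs of roughly 99.5%, 95% and 75%; the small difference from the 33% quoted in the first scenario is only because the post rounds 9.9 false positives up to 10.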
What do we learn from this? One thing is that the data presented on sensitivity (50%) and specificity (99%) in this letter are important measures of the accuracy of the test, but how it appears to the patient and the doctor might be quite different. From the user's point of view, the questions are about the chances of a positive or negative test being correct (the positive and negative predictive values of the test). That depends not only on the accuracy of the test itself but on the prevalence of flu in the community. It will be different at different points in time as the prevalence of flu waxes and wanes. When there is an outbreak or a pandemic, positive tests pretty much tell the story (correct 97% of the time); a negative test is not quite as good -- 75%, but not horrible. As a normal flu season gets underway, a positive test is also pretty good, an 85% chance of being correct, and a negative test even better: 95%. But at a time when there isn't much flu around, a positive test is pretty poor, although a negative test is rock solid.
Prior to this, these rapid tests were mainly used as a signal for when flu was in the community. When there is little flu about, these tests are rarely positive and when they are they aren't very accurate. When there's an outbreak, a positive test is a pretty accurate indicator, but you don't need a test to tell you there's a lot of flu around. You can just take a peek at the waiting room in the emergency department. Use of the test to diagnose flu in individual patients, however, is problematic. If you depended on a rapid test to tell if someone had swine flu, even when swine flu is the only flu around, you'd be wrong about 25% of the time when the person tested negative. On the other hand, it's probably a waste of time, energy and money to do PCR on every ILI that walks in the door. When there's a lot of flu around, you can make a clinical diagnosis. It's not likely to change treatment.
It is unsettling to a lot of people that not everyone is being tested for flu any more. If there were unlimited resources, we might like to have a day by day count. But if resources are scarce, you adapt to conditions. And when it comes to available flu tests, having a lot of flu around constitutes a change in conditions.
So Revere, how do you think doctors should dole out antiviral prescriptions, given the uncertain office diagnostic tests, in the midst of a pandemic?
Given the "40% of patients with ILI have flu" scenario you describe above, one out of four patients with flu will test negative. So should a doctor give an antiviral prescription to everyone presenting with ILI symptoms? Or only those who test positive on the rapid test?
The first risks using up the Tamiflu stockpile--which is limited--treating people who don't have flu. The second risks denying the drug to people who really could benefit.
Mr. Nobody: It is important to get the direction of these conditional probabilities straight. If a test is 50% sensitive it means that if you have flu, the test will correctly say so 50% of the time, regardless of how much flu is in the community. That means that if everyone were tested and there was 40% flu in the community, the test would say there was 20% flu plus it would also say that 1% of the 60% without flu also had flu (assuming 99% specificity). That aside, the question becomes what the best way to diagnose flu is. The two available methods are clinical judgment and a rapid test. Each has its own sensitivity and specificity and the chance that a positive or negative diagnosis will be correct will depend on how much flu is in the community. That's the quantitative analysis. The clinical one is to play the odds and use judgment. When you do so you have a chance of being wrong and you need to weigh the gains and losses of making a mistake. If there's a lot of flu around, you usually go with flu. If there isn't you don't. That's why CDC's method of telling us if flu is widespread in a region or not is more informative than it appears on the surface.
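To make the direction of the conditional probability concrete, here is a small sketch (my own illustrative function, not anything from CDC) of what fraction of a universally tested population the rapid test would flag as positive -- the 20% of true positives plus the 0.6% of false positives described above:

```python
def apparent_prevalence(sensitivity, specificity, true_prevalence):
    """Fraction of an entirely-tested population the rapid test would
    flag as flu-positive: true positives plus false positives."""
    true_positives = sensitivity * true_prevalence               # e.g. 50% of 40% = 20%
    false_positives = (1 - specificity) * (1 - true_prevalence)  # e.g. 1% of 60% = 0.6%
    return true_positives + false_positives

# A 50% sensitive, 99% specific test with 40% true prevalence:
print(apparent_prevalence(0.50, 0.99, 0.40))  # about 0.206, i.e. ~20.6% test positive
```

So a test this insensitive would undercount flu by nearly half even if everyone were tested, which is a separate problem from the predictive value of any single patient's result.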
But flu or not flu is only part of the process. Good clinicians don't make judgments based purely on formulas or algorithms. They may treat or not treat a patient given the same test results based on how they look clinically.
Thanks, Revere, for the detailed discussion of sensitivity, specificity and accuracy. I think this is one of the best lessons I have read on this complex topic, and one that is important for medical and epidemiological types to clearly understand, not just for flu. It is the key to understanding the results of virtually all diagnostic tests done day in and day out for a whole host of common and rare disorders that affect people.
For clinicians in the office or ER then if I read you correctly, in addition to knowing the limitations of the test in terms of sensitivity and specificity, it is also very important for us to have a clear idea of what the pre-test probability for the condition is before obtaining the test and certainly when interpreting the results.
Despite the focus on decision analysis and evidence based medicine over the past 30 years in medicine, I doubt that many physicians apply these rules routinely when deciding what test to order and when interpreting them.
It is also of note that in the Quest and LabCorp test manuals, where these two leading US biochemical testing laboratories list all the tests doctors can order for patients, they do not provide the test's sensitivity or specificity or even the coefficient of variation for the test. Some of these tests are commercial kits they buy from others, while some are in-house proprietary tests. What is certain, though, is that all this data is available for every test they offer, and since all three measures of a test's performance are necessary for its proper use, this information should be included in the test description and interpretation section of the lab manual.
Grattan Woodson, MD
Sorry, but "a negative result is almost always right" is nonsense. If I state that there is no flu without testing, I'm also "almost always right". In the 1% flu case 10 people would get unnecessary treatment and 5 people wouldn't get it even though they need it. That's wrong in 15 cases vs right in 5. A test like this is completely useless for treating patients.
(I wonder though: what if you repeat the test? Do you get the same result, or do you now get false positives only 0.01% of the time?)
What the CDC and their friends around the world should do is RT-PCR test a random sample of the population, or possibly a random sample of the population that is positive for the rapid test. Testing an unknown fraction of suspected cases (after several levels of (self-) selection) provides statistics that are meaningless. This way it's going to take a very long time until we have a good overview of how infectious and lethal this virus is.
Gratt: You have it exactly right. Thanks.
iljitsch: Remember that in the case of a negative test with 1% prevalence, you get 985 negatives, of which only 5 are wrong. That's what it means to say "a negative test is almost always right." CDC is sampling the population through the NREVSS surveillance system. That's how the weekly virologic surveillance is reported.
But if I don't test and just say "none of you have the flu" then I'm also only wrong in 10 cases. So why go through the trouble and expense of using the test?
iljitsch: That was exactly the point.
To test or not to test - the issue hit close to home for me Tuesday.
My son, who is 6 and has asthma, developed a cough Sunday and a fever Sunday night. The fever was pretty high Monday, but the cough was not terrible. However, he complained of headache and Monday night his breathing was rapid and the cough worsened, so Tuesday I decided to take him into the pediatrician, even though he still wasn't acting super sick despite the fever (which ranged between 101-103).
The doctor checked his throat, his ears, listened to his chest. Nothing looked particularly bad and my son's lungs sounded good, so the doctor was about to send us home when I sheepishly asked for my son to have a flu test "just in case." The doctor agreed, but I had some embarrassed moments feeling like an idiotic, overanxious boob until -- to my surprise and the doctor's astonishment -- my son's rapid flu test came back positive and typed A.
The doctor then told me that there really wasn't any need to treat it, but I persevered and asked him to check the latest CDC guidance. He was good about it and did; turned out my son was the first kid he'd seen with a positive test. He eventually came back and offered my son Tamiflu since the asthma made him at higher risk for complications. He also prescribed a prophylactic dose for my daughter who also has asthma.
Unfortunately, the fever got worse and my son started vomiting that night (7 episodes in 2 hours), so he threw up the dose of Tamiflu I'd given him and didn't keep any significant Tamiflu in him until the following day. Good news is that he started getting better after the first dose and is now fever-free, and my daughter seems to be staying healthy.
Now it's my turn to start getting sick, but I'm grateful for that flu test and glad that I was able to overcome my reticence and embarrassment to ask for it. I have this uneasy feeling that my son wouldn't be doing nearly as well at this moment if he hadn't gotten the Tamiflu, and he wouldn't have gotten that without the test.
PS - A thank you to the Reveres for your flu coverage. If it hadn't been for the information gleaned here and on other sites that have done in-depth coverage about H1N1, I don't think I would have had the information to be a good advocate for my son. And now my son's doctor is also armed with more information when the next kid with similar symptoms walks in the door.
katerina: Glad you found flublogia useful. Glad also you seem to have weathered the storm with your kids. I went through this with my daughter, who also had intense nausea and vomiting and has asthma. She's done fine. But flu virus is dangerous, so being forceful with your doc was a good thing to do, and it sounds like you educated him as well. Good job.
I was under the impression that tests for novel H1N1 are only being taken when patients are hospitalized, because of limited numbers of technicians/reagents. Does this mean the cases we see on the CDC's weekly influenza report are only cases serious enough to require hospitalization? Or are they taking a random sample of 'influenza-like illness'? I'm not clear on what the criteria for the reported data are.
Also, as long as I'm in ?? mode...what are the normal procedures for the yearly flu shots? Are they tested for efficacy and safety or is it assumed that injecting the chicken eggs with the chosen viruses and killing them results in a safe flu shot? Are the procedures being changed for adding novel H1N1 into the mix? And if so, to what?
ps there's a NASTY virus out there. I hope it was swine flu. Then I'll be immune, right? Please say yes!
You are correct, ipmat: testing is limited by the CDC and state PH labs to the most seriously ill only, because there are just too many cases of ILI out there to test them all.
So, since mid-May, when they stopped testing all submitted samples and began limiting testing to the very ill only, all the positives reported by the CDC are just the "tip of the iceberg," as the CDC doctors stated in their last two news conferences, the most recent being on 26 Jun 2009. It is also why they have publicly estimated at these same conferences that their guess is that the actual number of cases in the US is now over 1 million and maybe a lot higher than that.
They are using computer modeling to work these numbers and say that it will take some time to give us more information but when they get something firm, they will share it.
Thanks for the clarification, Revere. "Sensitivity and Specificity" is a hard horse to ride. But if, say, 50% of all ILI in the midst of an outbreak is flu, it seems to me that a doctor shouldn't be making treatment decisions on the basis of a test with only 50% sensitivity.
Katerina: The vomiting may have been a side effect of the Tamiflu. I've never taken it, but I understand it hits a lot of people that way.
Mr. Nobody: Sensitivity, specificity and positive/negative predictive value are very tricky. From the doctor's and patient's point of view, the question is the reverse of the one answered by sensitivity and specificity. Sens./spec. ask the probability of being right given that a person really does (does not) have the flu. The patient/doctor wants to know the reverse conditional probability: given they have a positive (negative) test, what is the probability they actually have (don't have) the flu. That is the question that is influenced by prevalence in the community and will have different answers at different times of year given the exact same test with the exact same sensitivity and specificity.
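The reverse conditional probability described here is just Bayes' theorem. As a sketch under the same assumed test characteristics (the function name is mine, not from the post), the positive predictive value can be computed directly:

```python
def ppv_bayes(sensitivity, specificity, prevalence):
    """P(flu | positive test) by Bayes' theorem:
    P(flu|+) = P(+|flu)P(flu) / [P(+|flu)P(flu) + P(+|no flu)P(no flu)]."""
    p_pos_and_flu = sensitivity * prevalence
    p_pos_and_no_flu = (1 - specificity) * (1 - prevalence)
    return p_pos_and_flu / (p_pos_and_flu + p_pos_and_no_flu)

# The identical test at two times of year gives very different answers:
print(ppv_bayes(0.50, 0.99, 0.01))  # summer lull: about a 1-in-3 chance a positive is real
print(ppv_bayes(0.50, 0.99, 0.40))  # pandemic wave: about 0.97
```

The negative predictive value is the analogous formula with the roles flipped: specificity times (1 - prevalence) in the numerator, plus the missed flu cases in the denominator.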
Does this mean the cases we see on the CDC's weekly influenza report are only cases serious enough to require hospitalization? Or are they taking a random sample of 'influenza like illness'.
I can't speak for everywhere, but in my state, our lab is only testing submissions from perceived outbreaks and from our recruited (seasonal) influenza surveillance sites, which we have asked to be more aggressive in collecting and submitting specimens during this "off season" (which is turning out to be much more "on" than we'd like). They are using their professional judgment to determine when to test, and we have asked for a minimum of 3 tests during the 'off season'. Our positives, like most states', get reported to CDC and counted. The bottom line answer is that all those requiring hospitalization aren't necessarily being tested/reported, and we're not exactly getting a 'random' sample of ILIs because of the cadre of people (our normal sentinel flu sites) who are testing the non-outbreak cases.
And BTW, I echo the other comments about the quality and clarity of your discussion of specificity and sensitivity. Should be required reading by all 1st year med & nursing students.
It is important to remember that the FDA-waived qualitative (yes/no) POC tests for flu (or for drugs of abuse, etc.) that are designated as IVD *rule out* tests have their strength in saying no accurately, allowing the clinician to follow another diagnostic path based on patient presentation. This is usually the same for send-outs to LabCorp or other labs, unless the methodology is specified as being PCR or cell culture. In those cases, positive and negative quality control (QC) material is required for each assay run.
Qualitative rule-out testing should not be confused with routine quantitative chemistry (for example), where the FDA has pretty strict rules regarding sensitivity and specificity. Non-waived testing is held to a CV of <= 5%, and those machines are challenged with multivalent QC material up to three times daily, if not more. The waived qualitative POC testing may or may not have QC material included in the testing kit, and may only have a process control to indicate that the chemistry actually worked.
A friend of mine was admitted to the hospital with flu-like symptoms: high fever, sore throat, etc. They have tested for H1N1. How long does it take to get results? Also, one test did come back positive for gram-positive cocci. What is that, and is it related to the H1N1 stuff? Thanks
Kate: They should be able to do a rapid test for flu immediately. If it is negative it is not conclusive, although it makes flu less likely. If it is positive, it is probably swine flu. Gram-positive cocci in the lung (if that's where it's from; it might have just been in the nose, where it doesn't mean much) means bacterial pneumonia, which must be treated aggressively and can be quite dangerous. It can be secondary to flu. Otherwise, it's pretty difficult to diagnose with so little info like this, and I am reluctant to do so for obvious reasons.
thank you so much! I so appreciate your very prompt answer
Repeated false positives
I work in a lab, and we have had a person test positive for influenza A for over a month. This person has had mild to moderate symptoms. We use QuickVue A+B. Has anyone experienced repeated positives?
My son (9) has asthma and started running fever, body aches and headache last night. I took him to the pediatrician yesterday (before the fever started, but due to all of this and his body aches) and he tested negative for flu. The pediatrician said if he runs fever to bring him back for another test. I did today, since he ran a fever of 102 last night, and yet again today the rapid flu test was negative. She said that it must be another virus and sent us home. I questioned the false negative possibility on the result and she said it was only 5%. I also could not understand why even with the negative tests she could not go ahead with the Tamiflu. Please help. This is not my son's primary pediatrician but another one in the same group. Should I take him back tomorrow and talk with his primary pediatrician? I am worried with his asthma (and he has had pneumonia in the past) that they could be missing this and he needs to be on the antiviral meds.
Karen: False negatives with rapid flu tests are about 50%, not 5%. So it could be flu or another virus (there is a lot of respiratory syncytial virus around, as well as others), and you really don't know which virus he has. The big issue is how sick he is and the fact that he has asthma. The concern now is secondary bacterial infection, and since this has gone on for a few days I'd have him seen again and have a chest x-ray. I don't like to give out medical advice here; you have to see a patient. So someone should see him if you believe he is still sick. But if he seems on the mend, feeling better, you might wait so as not to jam up medical services when it's not needed. But if he gets better and then worse, bring him in immediately. That's as far as I'd go, and it's just general advice, not really specific to your son. I hope all goes smoothly and he recovers well.
My daughter is fourteen. She went to the doctor today and they said she had H1N1 after a throat swab. Can they detect that from symptoms and that swab? Thanks
Bruce: It's now a clinical diagnosis on the basis of a positive quick test that says she has flu A, the fact that there aren't any other flu variants out there except swine flu at the moment (more or less), plus symptoms (usually fever and a respiratory one like cough or sore throat). So the short answer is, "Yes, that's how they make the diagnosis, and it's reasonably and plausibly accurate."
This is a great discussion of how prevalence and test characteristics should both be taken into account when interpreting results. I always use the same argument when testing for Lyme disease in endemic and non-endemic areas. When the prevalence is above 80% or below 20%, using the test to help you make a clinical decision is usually not helpful.
I also think it is great that you posted this 2 months in advance of what has actually happened: everyone tested like crazy up until three weeks ago. Now no one tests (or the docs who still test and do not understand this concept refer all their "flu negative" patients to our ER). Great summary.