Misrepresenting science: Time to look in the mirror

If there's one thing I've been railing about for the last few years, it's how scientific and medical studies are reported in the lay press. It seems that hardly a week passes without my having to apply a little Insolence, be it Respectful or not-so-Respectful, to some story or another, usually as a result of the story having caught my interest, leading me to look up the actual study in the peer-reviewed literature. This is something I've learned the hard way that I have to do, having been burned a couple of times in my early blogging career. Sometimes, even now, I forget. For example, recently, I sent an article about a study to a few of my colleagues because it seemed to contradict the hypothesis behind a project we were working on. A colleague responded by sending me the actual study, which didn't really say what the story said. D'oh! You'd think I would have learned by now, but apparently sometimes even the mighty Orac needs a humiliating reminder that you can't trust press reports of scientific studies.

So why are medical and scientific studies reported so badly in the lay press? Some would argue that it has something to do with the decline of old-fashioned dead tree media. With content all moving online, newspapers, magazines, and other media are struggling to find a way to provide content (which Internet users have come to expect to be free online) and still make a profit. The result has been the decline of specialized journalists, such as science and medical writers. That's too easy an answer, though. As is usually the case, things are a bit more complicated. More importantly, we in academia need to take our share of the blame. A few months ago, Lisa Schwartz and colleagues (the same Lisa Schwartz who with Steven Woloshin at Dartmouth University co-authored an editorial criticizing the Susan G. Komen Foundation for having used an inappropriate measure in one of its ads) actually attempted to look at how much we as an academic community might be responsible for bad reporting of new scientific findings by examining the relationship between the quality of press releases issued by medical journals to describe research findings by their physicians and scientists and the subsequent media reports of those very same findings. The CliffsNotes version of their findings is that we have a problem in academia, and our hands are not entirely clean of the taint of misleading and exaggerated reporting. The full version was reported by Schwartz et al in their article published in BMJ entitled Influence of medical journal press releases on the quality of associated newspaper coverage: retrospective cohort study. It's an article I can't believe I missed when it came out earlier this year.

Until fairly recently (and I first started blogging nearly eight years ago), I didn't pay much attention to press releases from medical journals and universities promoting published research to the media. If anything, the very existence of medical journal press releases about such studies puzzled me, striking me as unnecessary and unseemly self-promotion. (Yes, I was that naive, as amazing as it sounds to me now.) In fact, although I could sort of understand why universities would issue press releases when one of their investigators published a paper that might be of high interest or that appeared in a high impact journal, I wondered why on earth journals would even bother. After all, the quality and impact of science shouldn't depend upon how many news stories are published about it in the lay press, should it? However, back in the late 1990s, research from investigators in Spain found that journal press releases work: they are associated with subsequent publication of news stories about the journal article(s) featured in the press release. Other studies suggest the same effect, although a subsequent study suggested that it's not entirely clear whether this is correlation or causation. The reason is that journal press releases tend to be associated with "medical information that is topical, stratifies risk based on demographic and lifestyle variables, and has lifestyle rather than medical implications." The authors concluded that medical journals "issue press releases for articles that possess the characteristics journalists are looking for, thereby further highlighting their importance." Although I can't prove it, intuitively this rings true, and it's a finding that is reinforced by a survey of science and health journalists, who reported the "potential for public impact" (which studies that stratify risk based on demographic and lifestyle variables almost always have) and "new information or development" as their major criteria for newsworthiness, followed by the "ability to provide a human angle" and "ability to provide a local angle."

It's hard not to note that these characteristics are pretty much completely unrelated to scientific rigor or importance.

Whatever criteria journals use to choose which articles merit press releases, and however large an effect journal and university press releases have on whether the media pick up on a study and report it, there's a paucity of research out there that looks at how journal press releases influence the actual quality of the reporting on the articles touted by the journals. Therein lies the value of Schwartz et al, because it suggests that we as an academic community are at least as much to blame as reporters and the media companies for which they work. A surgery attending under whom I trained back in the early 1990s used to have a most apt saying whenever we had a patient transferred in: what they tell you the patient has and what the patient actually has are related only by coincidence. In the case of press releases, it sometimes seems that what the press release says and what the study says are related only by coincidence. And since many news articles are based largely on the press release, that means that what the news reports of a study say and what the study itself says are often related only by coincidence.

What Schwartz et al did was to review consecutive issues (going backwards from January 2009) of five major medical journals (Annals of Internal Medicine, BMJ, Journal of the National Cancer Institute, JAMA, and New England Journal of Medicine) to identify the first 100 original research articles with quantifiable outcomes that had generated any newspaper coverage. These journals were chosen because their articles frequently receive news coverage but they have widely differing editorial practices. For instance, Annals and JNCI routinely include editor notes and editorials highlighting the significance and limitations of the studies featured, while the other journals do not. The NEJM doesn't issue press releases at all. Their methodology is described further:

To rate newspaper stories on how they quantified results, we included only journal articles with straightforward quantifiable outcomes (that is, we excluded qualitative studies, case reports, biological mechanism studies, and studies not using an individual unit of analysis). To identify associated newspaper stories, we searched Lexis Nexis and Factiva (news article databases) for stories that included the medical journal name (the time frame for the search extended from two months before the journal article’s print date, to two months after).

The authors then did content analysis on the press releases and the news stories using the following criteria:

  • Basic study facts. These include factors such as study size, funding source(s), identification of randomized trials, longitudinal study time frames, survey response rates, and accurate description (compared with the abstract) of study exposure and outcome.
  • Main result. This included asking several questions, including: "Was the main result quantified? If so, was it quantified with any absolute risks (including proportions, means, or medians)? Were numbers used correctly (for example, did the numbers reported correspond to those in the abstract in terms of magnitude and time frame, or were odds or odds ratios misinterpreted as risks or risk ratios?) The odds ratio issue was included under the category of numbers used correctly rather than as a separate measure." (The odds ratio pitfall is illustrated in the sketch just after this list.)
  • Harms. These included questions such as (for studies of interventions claiming to be beneficial): Were harms mentioned (or a statement asserting there were no harms)? Were harms quantified? If so, were they quantified with any absolute risks?
  • Study limitations. The authors asked if major limitations of the study were correctly described (or described at all) and considered "limitations noted in either the journal abstract or editor note, or on a design specific list that we created (web appendix) that includes limitations inherent in various study designs: small study size, inferences about causation, selection bias, representativeness of sample, confounding, clinical relevance of surrogate outcomes, clinical meaning of scores or surrogate outcomes, hypothetical nature of decision models, applicability of animal studies to humans, clinical relevance of gene association studies, uncontrolled studies, studies stopped early because of benefit, and multifactorial dietary or behavioural interventions."
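
The odds ratio pitfall mentioned under "Main result" deserves a concrete illustration, because it's one of the commonest ways an effect gets inflated in print. Here's a minimal sketch in Python; the 2x2 counts are invented for illustration and don't come from the study:

```python
# Hypothetical 2x2 table; these counts are invented purely for illustration.
exposed_events, exposed_total = 40, 100        # 40% risk among the exposed
unexposed_events, unexposed_total = 20, 100    # 20% risk among the unexposed

# Risk ratio: the ratio of the two risks (simple proportions).
risk_ratio = (exposed_events / exposed_total) / (unexposed_events / unexposed_total)

# Odds ratio: the ratio of the two odds (events / non-events).
odds_exposed = exposed_events / (exposed_total - exposed_events)          # 40/60
odds_unexposed = unexposed_events / (unexposed_total - unexposed_events)  # 20/80
odds_ratio = odds_exposed / odds_unexposed

print(f"risk ratio: {risk_ratio:.2f}")  # 2.00
print(f"odds ratio: {odds_ratio:.2f}")  # 2.67
```

When the outcome is common, as here, the odds ratio always lies farther from 1 than the risk ratio, so a story that reports an odds ratio as if it were a relative risk overstates the effect.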

So what were the findings? To those of us in the science/medicine blogging biz, the findings are depressing but not surprising. Although nearly all stories reported the exposure and outcome accurately, most stories were missing critically important information. For example, only 23% quantified the main result using absolute risks (which actually surprised me; I would have guessed the number to be lower, but perhaps there was a sampling difference between what I read and what Schwartz et al picked to study). Meanwhile, only 41% mentioned harms associated with interventions reported as beneficial (which, I admit, is lower than I would have guessed), and only 29% mentioned any study limitation (which is about what I would have guessed).

So now that we know that the medical reporting in the sample studied in this article was not so hot, what was the correlation between the press releases and the news reporting? The authors report:

For all 13 quality measures, newspaper stories were more likely to report the measure if the relevant information appeared in a press release (that is, a high quality press release) than if the information was missing (P=0.0002, sign test; fig 2). This association was significant in separate comparisons for nine quality measures (fig 2). For example, when the press release did not quantify the main result with absolute risks, 9% of the 168 newspaper stories provided these numbers. By contrast, when the press release did provide absolute risks, 53% of the 77 newspaper stories provided these numbers (relative risk 6.0, 95% confidence interval 2.3 to 15.4). Because the accessibility of absolute risks varies with study design, we repeated this analysis for randomised trials only, in which absolute risks are always easily accessible. For the 140 newspaper stories reporting on randomised trials, the influence of press releases reporting absolute risks was the same as the main analysis (relative risk 5.8, 95% confidence interval 2.2 to 15.5).
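
As a quick sanity check on the arithmetic in that passage, here's a minimal back-of-the-envelope sketch; the confidence intervals are the authors' own calculations and aren't reproduced here:

```python
# Sign test: as I read the passage, all 13 quality measures went the same way
# (favoring high quality press releases). Under the null hypothesis each measure
# is equally likely to go either way, like 13 coin flips all landing heads.
p_two_sided = 2 * 0.5 ** 13
print(f"sign test P = {p_two_sided:.4f}")  # 0.0002, matching the reported value

# Relative risk: simply the ratio of the two proportions from the passage.
relative_risk = 0.53 / 0.09  # 53% of 77 stories vs. 9% of 168 stories
print(f"relative risk = {relative_risk:.1f}")  # 5.9; the paper reports 6.0,
# the small difference presumably being rounding of the percentages
```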

It also turns out that the presence or absence of a quality measure in the press release had more influence on whether that measure ended up being included in subsequently reported news stories than did its presence or absence in the journal article abstract:

Presence of a quality measure in the press release had a stronger influence on the quality of associated newspaper stories than its presence in the abstract (table 3). The association between information in the press release and in the story was significant for seven of the 12 quality measures that could be compared (table 3). The corresponding association between information in the abstract and in the story was significant for two of the 12 measures. In absolute terms, the independent effect of information in the press release was larger than the effect of information in the abstract for eight of 12 measures. The only measure for which the absolute effect of the abstract was greater than that of the press release was for quantifying the main result (although press releases were substantially more influential for quantifying the main result with absolute risks).

Interestingly, for the basic study facts, a high quality press release had little effect on reporting compared with articles for which no press release had been issued at all. High quality press releases did, however, positively influence reporting of quantification using absolute risks, mentioning of harms, and discussion of study limitations. More disturbingly, Schwartz et al suggest that poor quality press releases are worse than no press release at all, in that important caveats and fundamental information were less likely to be reported in news stories if they weren't included in the press release than if there was no press release at all. This, however, was only a trend; Schwartz et al point out that these latter findings were not statistically significant. The wag in me can't resist asking why Schwartz et al even mentioned this last finding, even though it seems to make intuitive sense.

It's not just medical journals, of course. Universities and academic medical centers have press offices, and these press offices frequently issue press releases touting the research of their investigators, sometimes in parallel with journals when a new article is published, sometimes independently, and, I can't help but note, sometimes before a researcher's work has even passed muster in the peer review process. It turns out that Woloshin et al (including Dr. Schwartz) looked at this issue three years ago in a study published in the Annals of Internal Medicine entitled Press Releases by Academic Medical Centers: Not So Academic? They didn't try to determine the influence of university press releases on subsequent media coverage (although I'd be shocked if such a study weren't in the works as a followup to this one), but they did look at the quality of university press releases.

Because the investigators used similar methodology in their 2009 paper to that in their 2012 paper, I'm not going to slog through this paper's details as much as I did for the previous one. Its key findings are probably enough. Specifically, the results support the hypothesis that university press offices are prone to exaggeration, particularly with respect to animal studies and their relevance to human health and disease: press releases about human studies exaggerated 18% of the time, compared to 41% of the time for animal studies. Again, this seems to make intuitive sense, because in order to "sell" animal research results it is necessary to sell their relevance to human disease. Most lay people aren't that interested in novel and fascinating biological findings in basic science that can't be readily translated into humans, so it's not surprising that university press offices might stretch a bit to draw relevance where there is little or none. In addition, Woloshin et al observed a tendency for universities to hype preliminary research and to fail to provide necessary context:

Press releases issued by 20 academic medical centers frequently promoted preliminary research or inherently limited human studies without providing basic details or cautions needed to judge the meaning, relevance, or validity of the science. Our findings are consistent with those of other analyses of pharmaceutical industry (12) and medical journal (13) press releases, which also revealed a tendency to overstate the importance and downplay (or ignore) the limitations of research.

It's well-known that industry press releases tend to do all of these things, which is the main reason I'm not harping on them so much in this post. It's just expected that big pharma exaggerates because the evidence that it does is so overwhelming. In contrast, however, such exaggeration is usually not expected from universities. At least, it wasn't in the past. Apparently today is a new day.

In human studies, the problem appears to be different. There's another saying in medicine: statistical significance doesn't necessarily mean that a finding will be clinically significant. In other words, we find small differences in treatment effect, or associations between various biomarkers and various diseases, that are statistically significant all the time. However, they are often too small to be clinically significant. Is, for example, an allele that increases the risk of a certain condition by 5% clinically significant? It might be if the risk in the population is less than 5%, but if the risk in the population is 50%, much less so. We ask this question all the time in oncology when considering whether or not a "positive" finding in a clinical trial of adjuvant chemotherapy is clinically relevant. For example, is chemotherapy that increases the five year survival by 2% clinically relevant in a tumor that already has a high likelihood of survival after surgery alone? Or is an elevated lab value that is associated with a 5% increase in the risk of a condition clinically relevant? Yes, it's a bit of a value judgment, but small benefits that are statistically significant aren't always clinically relevant.
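
To make the arithmetic concrete, here's a minimal sketch using the hypothetical numbers from the paragraph above (reading the allele's 5% as an absolute, percentage point increase in risk, and taking the 2% survival gain as an absolute benefit):

```python
# The same absolute increase in risk looks very different at different baselines.
increase = 0.05  # hypothetical 5 percentage point increase in risk
for baseline in (0.05, 0.50):
    new_risk = baseline + increase
    print(f"baseline {baseline:.0%}: risk {baseline:.0%} -> {new_risk:.0%}, "
          f"relative risk {new_risk / baseline:.2f}")
# baseline 5%:  5% -> 10%, relative risk 2.00 (a doubling; hard to dismiss)
# baseline 50%: 50% -> 55%, relative risk 1.10 (same absolute bump; far less striking)

# For the chemotherapy example: number needed to treat (NNT) = 1 / absolute benefit.
absolute_benefit = 0.02  # hypothetical 2% gain in five year survival
print(f"NNT = {1 / absolute_benefit:.0f}")  # 50 patients treated per additional survivor
```

Framing an absolute benefit as a number needed to treat, here treating 50 patients so that one more survives five years, is exactly the kind of context that helps a reader judge whether a "statistically significant" result is clinically relevant.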

Now here's the kicker that suggests that investigators are part of the problem. Nearly all of the press releases included investigator quotes. This is not in itself surprising; investigator quotes would seem to be a prerequisite for even a halfway decent press release. What is surprising is that 26% of these investigator quotes were deemed to be overstating the importance of the research, a number that is strikingly similar to the percentage of press releases judged to be exaggerating the importance of the research (29%). Indeed, at one point Woloshin et al characterize the tone of many of these investigator quotes as "overly enthusiastic." This suggests that it is the investigators, at least as much as their universities, who drive the exaggeration of the importance of their research findings. Our culpability as scientists in distorting our own science is even more glaring in light of the additional finding that all 20 major universities studied said that investigators routinely request press releases and are regularly involved in editing and approving them. In addition:

Only 2 centers routinely involve independent reviewers. On average, centers employed 5 press release writers (the highest-ranked centers had more writers than lower-ranked centers [mean, 6.6 vs. 3.7]). Three centers said that they trained writers in research methods and results presentation, but most expected writers to already have these skills and hone them on the job. All 20 centers said that media coverage is an important measure of their success, and most report the number of "media hits" garnered to the administration.

Five press release writers can churn out a heck of a lot of press releases in a year! Also, lack of outside evaluation can very easily lead to a hive mentality in which nothing is questioned.

There were several potential actions listed in the papers I've just discussed that universities and medical journals can take to lessen the problem, although it's improbable that the problem of bad press releases will ever go away. Perhaps the most important (and most unlikely to be implemented) is for universities and medical journals to stop hyping preliminary research so heavily. It's noted that 40% of meeting abstracts and 25% of abstracts that garner media attention are never subsequently published as full articles in the peer-reviewed literature. Unfortunately, universities are highly unlikely to show such restraint to any great degree, because the rewards of not showing restraint are too great. After all, it is the preliminary research that is often the most exciting to the public and—let's face it—to scientists. It's the sort of research that gets an institution (or journal) noticed. There's a reason why journals like Nature, Cell, Science, and the NEJM like to publish preliminary, provocative studies. Unfortunately, it is just such research that is likely to be subsequently shown to be incorrect. Another thing universities could do is to be sure to include as many of the quality indicators for each study as is appropriate, including absolute effects, study limitations, warnings that surrogate outcomes don't always translate into clinical outcomes, and the like.

Perhaps most importantly, we scientists have to do two things. First, we have to avoid the temptation to oversell and overhype our results in press releases or when talking to the press. It's a very strong temptation, as I've become aware, because we want to look good and we want to please our employers by making them look good too. We also want to justify the press release. Too many caveats and cautions make our work sound less important (or, more accurately, less certain) to a lay audience, and if there's one thing that's hard to explain to a lay audience it's the inherent uncertainty in science. Second, we can't avoid that uncertainty; we have to embrace it and work to explain it to the public.

Finally, I don't want to leave you with the impression that reporters are off the hook. They bear their share of the blame as well. While some reporters definitely "get it" (Trine Tsouderos of the Chicago Tribune and Marilyn Marchione of the Associated Press come to mind), many do not. However, we as an academic community—investigators, universities, and medical journals—can greatly decrease the chances that reporters will produce grossly exaggerated or incorrect reporting if we make sure that their sources are scientifically accurate and don't fall prey to the sins enumerated in the studies discussed here. That's part of our job as scientists, every bit as much as making sure our science is rigorous and obtaining funding for our labs.

And part of my other job (blogger) is to point out when my colleagues fail at this task.

It's not just the decline of specialised journalists; it's likely also an increase in 'user submitted' content. Far easier to report on the incoming press releases or videos sent by your readers, than invest time and energy in ferreting out news.

And this is where good press release writing comes in handy.

Consider that the more outstanding the claims in the PR, the more likely an editor will assign someone to write on it.

The lifestyle issues would explain a lot about why the media like touting things like "chocolate is good for you", "drink red wine", etc. And this is why I mentioned in a comment on a recent thread that I consider a lot of mainstream media to be 'warmed over press releases'.

I wonder actually if it's also partial naivete: a trust from the reporters that what is in the press release is based on truth.

Exaggeration might also be to do with the increasing noise and the difficulty of attracting anyone's attention at all.

Oh, that's the first time I ever got to the comment thread before anyone else... How strange... and empty.

I wonder actually if it’s also partial naivete: a trust from the reporters that what is in the press release is based on truth.

This is a big part of it, along with laziness. All too frequently, and not just in science and medicine, the story reported in the papers is based almost exclusively on the press release.

This phenomenon predates blogs, so we can't blame it on the internet. The internet arguably makes the situation worse, but I think we would still be seeing this problem if our only news sources were of the dead tree variety.

By Eric Lund (not verified) on 27 Aug 2012 #permalink

I know Orac's lights blink an angry red on trivial corrections, but Schwartz's home institution is Dartmouth College (not Dartmouth University).

It is, in reality, a small university with a med school, a business school, and small grad programs in the sciences, but it still calls itself a College.

By palindrom (not verified) on 27 Aug 2012 #permalink

A scientific article studying the effect of the publication of scientific articles and their related press releases.
With statistics.

If you want to know about something, collect data and analyze its significance.
I think I'm falling back in love with science.

Re: "time to look in the mirror"

What is surprising is that 26% of these investigator quotes were deemed to be overstating the importance of the research

Maybe not so surprising if you take a hard look at grant applications, or at academic politics: overstating one's work's importance is not a rarity.
Humans are actually quite prone to this, especially in a social setting where the pecking order is established by a popularity contest based on your ability to advertise your work.

By Heliantus (not verified) on 27 Aug 2012 #permalink

OT: but are Survivalist and Anti-vax lunacies ever TRULY OT @ RI, I ask you?:

Mike Adams has out-done himself- and that's quite a tall order with-
The Silver Lining Behind the Coming Collapse ( @ Natural News)

And:
Mamacita ( @ TMR) thinks that it's:
Time To School The School Nurse

I'm sure lilady and MI Dawn will like that. ( sarcasm)

By Denice Walter (not verified) on 27 Aug 2012 #permalink

Speaking from experience, I think part of the problem is that the press office of a university, journal, or other org is often evaluated on how often they get the org mentioned in the media, and on how often it's mentioned by the "biggies". To do that, you have to have a "mediagenic" hook--a hot topic or a major finding. Three guesses as to how often you have the latter. So relatively minor (or sometimes poor) studies that have "mediagenic" potential get spun in order to generate as much pickup as possible, and as you point out, there are lots of reporters who don't have the will or the ability to separate the spin from the scientific reality.

My pet peeve with online news stories about studies is that they rarely even link to the actual abstract.

I know it is off the subject, but could we get an article about the recent AAP decision to endorse circumcision?

I'm sure I remember reading that a few decades ago reputable newspapers would usually have a scientifically trained science editor, but often no longer do. This might explain the common failure to dig a bit deeper into the meat of a paper instead of just reporting a version of the press release.

By Krebiozen (not verified) on 27 Aug 2012 #permalink

Oh no, someone mentioned the "C" word [ducks in anticipation of an infestation of fanatics].

By Krebiozen (not verified) on 27 Aug 2012 #permalink

'Science by press release', whether committed by researchers, their employers, journals, or reporting agencies, seems to be a case of edutainment rather than a comprehensive, careful reporting of the science.

But hey, edutainment gets the advertisers & the alumni donations! (At least, that is my presumption.)

By Composer99 (not verified) on 27 Aug 2012 #permalink

Squillo says:

My pet peeve with online news stories about studies is that they rarely even link to the actual abstract.

This!

By Composer99 (not verified) on 27 Aug 2012 #permalink

Oooh! Oooh!

Off-topic, but the HuffPo Science section is headlining a video segment by Cara SantaMaria that is ANTI-HOMEOPATHY! It features Ben Goldacre and might as well have been written by Orac. It even goes after quackademic medicine!

A HUGE comment flame war is sure to follow. Get out your flamethrowers!

By palindrom (not verified) on 27 Aug 2012 #permalink

@ Denice Walter: Hmmm, I read mamacita's post on the TMR blog.

Mamacita fired off an email to one of her kids' school principals (how many kids does she have?). She was indignant that, from what she "has read", her State's Department of Health promulgates laws and regulations about vaccine requirements for school entry and continued participation (Tdap booster for adolescents). Mamacita also opined that the Department of Health staff is "on the take" from *Big Vaccine* and the principals are being duped.

Actually the "on the take" public health officials get their "marching orders" from the CDC. And, mamacita does not realize that every State's education law and regulations also get their "marching orders" from the *corrupt* CDC/International Cartel.

The school nurse at mamacita's child's school asked "what is the nature of your child's *vaccine injury*?" and mamacita did not reply.

School nurses, in my opinion, are the gatekeepers to keep kids safe. They have lists of every child who has valid medical contraindications against each vaccine. They also have lists of kids whose parents have "opted out" which come into play whenever there is any case of any vaccine-preventable disease. Someday mamacita will get a phone call to come and take her child(ren) out of school, immediately and she will be instructed to keep her child out of school...for the duration of an outbreak of a vaccine-preventable disease in the school, or in a neighboring school or within the County where she resides.

Cripes, I hate these know-nothing anti-vaccine parents.

Off Topics:

Krebiozen: I did not first mention the "C" word...I only provided the links to the AAP Policy Statement :-)

@ palindrom: I've got two comments in moderation, including one reply to your comment.

It’s an article I can’t believe I missed when it came out earlier this year.

Obviously Schwartz or the BMJ should have accompanied the article's publication with a better, more sensational press release.

By herr doktor bimler (not verified) on 27 Aug 2012 #permalink

@lilady
August 27, 4:11 pm

It is most likely that "mamacita"'s kids (does her husband know she's advertising as a mamacita?) will be the source of the outbreak.

How is the school going to prevent that?

By Spectator (not verified) on 27 Aug 2012 #permalink

@Eric

I'm not sure it's all laziness. If you're paid less or have less time to investigate, then less energy and time will be expended on research. It just makes the situation easier for those who are lazy.

Consider also that many writers will not work on one story at a time or are expected to submit X number of articles per day. This means you may also expend more time and energy on something (dependent on editor's wish to focus on something particular; prior knowledge of the journalist; etc etc) compared to writing something else on the same day/week.

I'll also note that the phenomenon of writing PR with an "angle" is not unique to medicos. Everyone has to do that otherwise the media release gets trashed. The best way to come up with an angle is to: fit it into a current affairs issue (whatever is the hot topic of the day); make it a human story or a local issue; etc.

The issue is partly to do with the fact that too many people are clamoring for attention and the louder you are, the more likely you'll be noticed.

It's a bit of a chicken/egg thing.

@ Spectator: I'm sure Mr. Mamacita is *supportive* of her decision to opt out of vaccines for the little Mamacitas.

The schools cannot bar a kid from school...if the State permits philosophical exemptions.

lilady 12:14 am

The schools cannot bar a kid from school…if the State permits philosophical exemptions.

I suspect that it's dependent on the state's laws. You may well be correct for New York's laws, but ... In Utah there was a measles outbreak (last year? the year before?) — all the schools in and around the Salt Lake valley banned from school, for the duration, any student or adult (! staff and faculty) who did not have documented measles immunization. 'Twas all over the news, here. IIRC, San Diego city or County did the same for the Sears measles outbreak.

By Bill Price (not verified) on 27 Aug 2012 #permalink

@ Bill Price...Sorry I was somewhat ambiguous when replying to Spectator just upthread. Mamacita has philosophical exemptions for her kids, because under her State law, *opting out* is a *right* and allows the kids to enter school without immunizations.

See also my reply to Denice Walter above, at 4:11 PM.

"School nurses, in my opinion, are the gatekeepers to keep kids safe. They have lists of every child who has valid medical contraindications against each vaccine. They also have lists of kids whose parents have “opted out” which come into play whenever there is any case of any vaccine-preventable disease. Someday mamacita will get a phone call to come and take her child(ren) out of school, immediately and she will be instructed to keep her child out of school…for the duration of an outbreak of a vaccine-preventable disease in the school, or in a neighboring school or within the County where she resides."

Here's the CDC Case Surveillance Manual-Measles Chapter about barring unvaccinated kids from school, in the event of a single case or outbreak:

http://www.cdc.gov/vaccines/pubs/surv-manual/chpt07-measles.pdf

"XIII. Outbreak Control

The primary strategy for control of measles outbreaks is achieving a high level of immunity (2 doses) in the population affected by the outbreak. In practice, the population affected is usually rather narrowly defined such as one or more schools. Persons who cannot readily document measles immunity should be vaccinated or excluded from the setting (school, hospital, day-care etc.). Only doses of vaccine with written documentation of the date of receipt should be accepted as valid. Verbal reports of vaccination without written documentation should not be accepted. Persons who have been exempted from measles vaccination for medical, religious, or other reasons should be excluded from affected institutions in the outbreak area until 21 days after the onset of rash in the last case of measles."

As you can see, these are the guidelines followed by public health doctors and nurses to contain an outbreak of a vaccine-preventable disease.

lilady 4:25 am

@ Bill Price…Sorry I was somewhat ambiguous when replying to Spectator just upthread.

I just reviewed the exchange with Spectator, and saw that I had read into it something that wasn't there. No, you weren't all that ambiguous – it was my mistake.
Thanks for the CDC info.
By Bill Price (not verified) on 27 Aug 2012 #permalink

That's why I'm still looking for the preview button, that feature of any competent blogging software. Let's try it again:
@lilady 4:25 am

@ Bill Price…Sorry I was somewhat ambiguous when replying to Spectator just upthread.

I just reviewed the exchange with Spectator, and saw that I had read into it something that wasn’t there. No, you weren’t at all ambiguous – it was my mistake. Sorry.
But at least it prompted you to dig out the CDC info, again. Thanks for that.

By Bill Price (not verified) on 27 Aug 2012 #permalink

I'm a reporter for a mainstream daily newspaper and science stories are some of the worst things we cover.

Every scientist seems to have been burned by a bad reporter in the past, so few want to talk to us. Those who work in academia are too busy with classes or, when classes are not in session, can't be reached.

Cutbacks have given us more work spread among skeleton crews, and we tend to get paid for 40 hours and have no chance of getting paid for overtime.

I really resent the "lazy reporter" explanation. We do not have the time to do our job right, and scientists rarely help us make the story right. I love science, but it really stings that lawyers are easier people to work with.

Orac is right that press releases can steer us in the wrong direction. If someone is a scientific illiterate and receives one, they don't have time to read the actual study and will mentally be prepped for that narrative.

"such exaggeration is usually not expected from universities. At least, it wasn’t in the past. Apparently today is a new day."

Universities are in competition with each other – why would you expect them to refrain from blatantly selfish, exaggerated marketing?

And mind you, the competition between universities isn't a new thing. We have lived for quite some time now in a world where the ruling paradigm is that everybody and everything must compete with each other – just in case you hadn't noticed that too.

By Tony Mach (not verified) on 06 Sep 2012 #permalink

"Perhaps most importantly, we scientists have to do two things. First, we have to avoid the temptation to oversell and overhype our results in press releases or when talking to the press. It's a very strong temptation, as I've become aware, because we want to look good and we want to please our employers by making them look good too."

Orac, have you been hiding in a little plastic box? That is not a temptation, that is the rule. Everybody has to self-market, no exceptions tolerated.

By Tony Mach (not verified) on 06 Sep 2012 #permalink

Mohammad Shafiq Khan:

Open challenge is on

And then presents two websites that tell us these profound words:

404 Error Page Not Found

I am not convinced.