There’s a new feature article by Liza Gross up at PLoS Biology. Titled “A Broken Trust: Lessons from the Vaccine-Autism Wars,” the article does a nice job illuminating how the themes of trust and accountability play out in interactions between researchers, physicians, patients, parents, journalists, and others in the public discourse about autism and vaccines. Ultimately, the events Gross examines — and the ways the various participants react to those events — underline the questions: Who can we trust for good information? and To whom are we accountable for our actions and our decisions? In many ways, it strikes me that the latter question needs more consideration than people typically give it.
The question of trust, on the other hand, is one with which people seem more ready to grapple. The challenge, however, is that such grappling seems more often than not to result in mistrust.
Consider, for example, the outcome of efforts to be transparent about the contents of foods and medicines, and to err on the side of caution in instances where definitive information was lacking. You would think such efforts would inspire trust.
Sometimes, they don’t.
In 1997, a US congressman from New Jersey inserted into a funding bill a provision that gave the Food and Drug Administration (FDA) two years to measure levels of mercury in all products under its jurisdiction, and release its findings to Congress and the public. The FDA’s analysis revealed that because several new vaccines were added to the immunization schedule after 1988, some infants could be exposed to as much as 187.5 micrograms of ethylmercury by the time they were 6 months old–if every dose of Hib, hepatitis B, and DTaP contained thimerosal.
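For readers wondering where a figure like 187.5 micrograms comes from, it’s a simple tally over the immunization schedule. A minimal sketch, assuming the commonly cited per-dose ethylmercury content of thimerosal-containing vaccines of that era (roughly 25 µg per dose of DTaP and Hib, 12.5 µg per dose of hepatitis B — these per-dose figures are my assumption for illustration, not taken from the article):

```python
# Rough tally of worst-case cumulative ethylmercury exposure by six months,
# assuming every dose came from a thimerosal-containing formulation.
# Per-dose amounts below are commonly cited figures, assumed for illustration.
ethylmercury_ug_per_dose = {
    "DTaP": 25.0,
    "Hib": 25.0,
    "hepatitis B": 12.5,
}
doses_by_six_months = {"DTaP": 3, "Hib": 3, "hepatitis B": 3}

total = sum(ethylmercury_ug_per_dose[v] * doses_by_six_months[v]
            for v in doses_by_six_months)
print(f"Cumulative exposure: {total} micrograms")  # 187.5
```

The point of the arithmetic isn’t precision; it’s that the worst-case number was reached by assuming thimerosal in every dose of every vaccine, which is exactly the conservative framing the FDA used.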
Based on this new finding, says [UC-San Francisco medical anthropologist Sharon] Kaufman, leading vaccine experts began to investigate the possibility that mercury in vaccines was putting kids at risk. While the ethylmercury levels exceeded the federal safety guidelines for methylmercury, which gains toxicity as it accumulates through the food chain, no guidelines existed for ethylmercury at the time. Its toxicity was largely unknown; however, there was evidence that very high doses of ethylmercury could cause neurological damage. It was also known that methylmercury can cause subtle neurological effects in infants born to mothers who eat large amounts of fish and whale meat. Studies have since shown that ethylmercury is eliminated much faster than methylmercury and is unlikely to accumulate. But in 1999, no one knew what dose to consider safe for the developing brain.
Given the uncertainty about ethylmercury’s toxicity, Neal Halsey, director of the Institute for Vaccine Safety at Johns Hopkins University, urged vaccine policymakers at the CDC and American Academy of Pediatrics (AAP) to remove thimerosal from vaccines as a precautionary measure and to maintain public confidence in their safety. The agencies agreed, and vaccine manufacturers responded quickly; by March 2001, no children’s vaccines contained thimerosal.
Anticipating the FDA’s release of its findings, the AAP issued a statement explaining its decision as an effort to minimize children’s exposure to mercury, asserting that “current levels of thimerosal will not hurt children, but reducing those levels will make safe vaccines even safer”. Unfortunately, Kaufman says, “rather than reassuring parents, the statement fueled public fears and prompted all sorts of questions.”
To Halsey, one of the most respected figures in the vaccine world, simply ignoring the FDA’s findings was not an option. He hoped the rapid response would demonstrate the government’s “commitment to provide the safest vaccines possible”. But it was too late for reassurances. Several months later, Medical Hypotheses–an unconventional journal that welcomes “even probably untrue papers”–received and later published a purely speculative article called “Autism: a novel form of mercury poisoning”. Two of the authors, Sallie Bernard, a marketing consultant, and Lyn Redwood, a nurse, had just launched the parents’ advocacy group SafeMinds to promote their thimerosal hypothesis. Although their now debunked theory appeared in a journal that openly eschews peer review and evidence-based observations, several parent advocacy groups still cite it as evidence that mercury in vaccines causes autism.
Imagine you’re someone charged with keeping a product regulated by the FDA safe. Substance X is in that product (and is in it for a good reason, although possibly its function could be accomplished with a different ingredient or a different mode of using the product). The possibility is raised that X might cause harm. Of course, it might not. Before the matter has been studied (either exhaustively, or preliminarily), how should you regard the potential for harm?
My impulse would be, if the product did not need X, to go without it. But this doesn’t consider the potential impact the removal of X might have on the attitudes of people using the product. In the case of the removal of thimerosal from vaccines, members of the public responded to a move that was intended to err on the side of caution by asking:
- Why are you removing it?
- Why didn’t we know it was in there in the first place?
- What else is in there that we didn’t know about (and what harm could it cause)?
- Could that stuff you’re taking out be what harmed my kid?
- Could something else that’s in there that we don’t know about (because you haven’t told us) be what harmed my kid?
- Can you prove to me that the stuff you took out didn’t harm my kid (or other kids)?
- If it wasn’t that stuff you’re taking out, what did hurt my kid?
- What do you mean you don’t know?
There may well be more underlying this mistrust. The FDA is a government agency, and suspicion about the government (whether of secret plots or mere incompetence) is a national pastime. As well, non-scientists seem to harbor some lingering grievances against the medical and research communities.
By 2004, the IOM [Institute of Medicine] panel had reviewed over 200 epidemiological and biological studies for any link between vaccines and autism. In its eighth and final report, the panel unanimously determined that there was no evidence of a causal relationship between either MMR or thimerosal and autism, no evidence of vaccine-induced autism in “some small subset” of children, and no demonstration of potential biological mechanisms. Considering the matter resolved, the panel recommended that “available funding for autism research be channeled to the most promising areas”.
The report should have delivered the final blow to the vaccine-autism theories. Instead, it gave anti-vaccine activists a new target. An online group called Parents Requesting Open Vaccine Education–or PROVE, a not-so-subtle challenge to scientists to “prove” that vaccines don’t cause autism–posted a roundup of parents’ groups denouncing the IOM panel as “riddled with conflicts of interest” and urged parents to spread the word that panelists conspired “to sweep a generation of children under the rug and maintain current vaccine policy at any and all cost”.
While it might be tempting to pin this reaction entirely upon non-scientists’ poor understanding of science and its methods for building a body of reliable knowledge, the situation is probably more complicated. Conflict of interest is a serious issue in the conduct of science. Pharmaceutical company influence over how research results are reported and how physicians make treatment decisions has been documented and covered in the mass media.
This is one of the reasons that bad behavior by one scientist or physician is a problem for the whole professional community. The public has a hard time separating a professional community from the misdeeds of some of its members. Maybe the professional community takes serious action to respond to such misdeeds, but if the public doesn’t see it, the public may start to think that the main point of the professional community is to help its members profit financially.
To whom or what are medical researchers and physicians accountable? Surely they are accountable to the patients counting on them for proper care and accurate information. They are accountable to their professional communities, whose standards and reputation are entwined with their own. Arguably, they are also accountable to the facts as they have been established using recognized methodology (and to updated facts delivered by new research).
In the discussions about vaccines and autism, though, there are some other relationships that make non-scientists more wary.
Are medical researchers and physicians accountable to the dominant paradigm — even though (as the casual reader of Kuhn will point out) scientific paradigms are often discarded? They are, but not slavishly so. Breaking successfully with the paradigm that guides current research requires coming up with an alternative and persuasive reasons to prefer it over the one it’s replacing.
Are medical researchers and physicians accountable to the corporations funding their research? The insurance companies making decisions about what treatments are covered? Their own financial interests? What precisely is the give and take in each of these relationships?
And do these relationships swamp their accountability to their patients, their professional communities, or the truth? Medical researchers and physicians would say that they don’t — but wouldn’t they say this even if they did?
What with human frailty and the reality of bills that need paying, it’s not hard to be suspicious.
The suspicions are amplified when things happen that are not easily explained. When a child’s developmental progress seems suddenly to stall or to reverse for no obvious reason, the people who already inspire suspicion may seem like attractive candidates for the villain who is somehow responsible.
In April of 2008, CNN’s Larry King hosted a show on the vaccine-autism “debate” featuring Jenny McCarthy, a celebrity “autism mom” promoting a book about her son Evan’s “recovery” from autism. McCarthy told King that she speaks to thousands of moms every weekend who relay the same experience: “I came home, he had a fever, he stopped speaking, and then he became autistic.” “It’s time to start listening to parents who watched their children descend into autism after vaccination,” she urged, because “parents’ anecdotal information is science-based information.” McCarthy said the Poling decision proved that “vaccines can trigger autism.” No scientists were on hand to challenge her.
“There’s a lot of good autism research out there,” says Paul Offit, chief of infectious diseases at Children’s Hospital of Philadelphia and head of the hospital’s Vaccine Education Center. “But you never hear about it because the anti-vaccine movement has taken this issue hostage.” Offit has turned down requests to appear on any show with McCarthy. “Every story has a hero, victim, and villain,” he explains. “McCarthy is the hero, her child is the victim–and that leaves one role for you.”
Assuredly, there is a significant difference between parents’ anecdotes and scientific information. Parents are generally working with a very small sample size (as far as the number of kids they are observing), and they are not always on their guard against confirmation bias or other factors that may skew what they notice and how they interpret it.
However, it’s possible that part of why Jenny McCarthy has an audience is that there is a nugget of truth in what she says. Many patients, and parents of patients (and other caretakers who are the primary people interacting with health care providers on a patient’s behalf), know all too well the experience of not being listened to — of having a doctor treat you like you don’t know anything about what’s going on with yourself or your child or the patient you are caring for. This can be incredibly frustrating, since the information you have from extensive day-to-day interactions with the patient may well be relevant and useful. As well, even folks who are trained as scientists (and who, arguably, may understand the scientific method better than the physician with whom they’re dealing) can be on the receiving end of this kind of dismissiveness.
Being accountable to your profession’s existing body of knowledge does not mean you’re not accountable to deal with each actual patient in front of you as an individual — one whose case may vary in interesting ways from the norm, and one whose actual experiences and concerns ought to be taken seriously. Brushing these off tends to erode trust. (It’s worth noting, of course, that you can listen to concerns and experiential information, and indicate that you are taking it seriously, while drawing different conclusions than the person presenting them.)
Kaufman sees the persistence of the vaccine-autism theory as a consequence of how individuals manage risk in modern society. People must trust experts to protect them from risk, whether they’re getting on an airplane or vaccinating their kids, she explains. When faith in experts erodes, personal responsibility prevails. “People think if you blindly follow experts, you’re not taking personal responsibility,” she adds.
Offit blames the media for keeping the myth alive by following the “journalistic mantra of ‘balance,’ ” perpetually presenting two sides of an issue even when only one side is supported by the science. And shows like “Larry King Live” have been “just awful on this issue,” he adds, placing ratings and controversy above public health by repeatedly giving McCarthy and other “true believers” a platform to peddle fear and misinformation. But Offit also wishes scientists would do a better job of communicating theoretical risk and the difference between coincidence and causation. Once you raise the notion of a possibility of harm, he says, “it’s hard for people to get that notion out of their head.”
Kaufman thinks the problem is more immediate than bridging the gap between lay and expert understanding of risk. Parents treated theoretical risk as fact even as scientists tested, and ultimately rejected, the possibility that thimerosal might harm children. Thinking the institutions that were supposed to protect them from risk failed, Kaufman says, people now do their own research. But instead of leading to more certainty, she explains, “collecting more information actually increases doubt.”
There are a lot of factors coming together here.
First off, there’s the question of whether to accept expert advice at face value. If you’re already mistrustful of the expert offering the advice (perhaps because he or she seems unwilling to listen to the experiential information you have or to explain how it fits with the advice on offer), just accepting that advice probably isn’t going to happen. But we shouldn’t forget that there are competing voices offering what they claim is expert advice. How can a non-expert distinguish the real experts from the charlatans?
Don’t we want people to be critical consumers of information?
We do. The problem is how exactly people who are not experts are supposed to evaluate the expertise of others. If they knew enough about the subject matter on which the putative experts are holding forth, they could just evaluate the advice itself. But, since they need the experts to deliver the expert information, they can’t necessarily tell the real experts from the fake ones by evaluating their claims, and they need to find other ways to assess expertise. These might include educational and work credentials, or publications — but then you need to be able to distinguish the good schools from the flaky ones, the rigorous journals from the non-rigorous ones (not to mention the publications that look like journals but are industry-sponsored fake journals).
That’s a lot of work. If the local TV news team or Oprah’s bookers aren’t screening out the fake experts, how can a parent, armed only with the internet and a few free hours after work, be expected to do it?
Another issue here is that people have a hard time wrapping their heads around probabilities. It’s too simple to say that people don’t get probabilities, though. I think, rather, that we have some outcomes that we’re very serious about avoiding, even if the probabilities of those outcomes are fairly low. If an outcome would result in permanent damage and we perceive avoiding it as within our control (even if the chances of it happening are pretty small to begin with), sometimes we’ll act to avoid it. Perhaps this is a way of feeling like we have some measure of control in a world where lots of the bad outcomes that are possible seem beyond our control to prevent.
But there is necessarily some selective attention here. Efforts to avoid one bad outcome may overlook our prospects for avoiding another bad outcome — one that could turn out to be much, much worse. If autism is the bad outcome in the center of a parent’s visual field, death or permanent disability from infectious diseases like measles is the bad outcome lurking on the periphery.
Though overall vaccination rates in the US are high, vaccine-resistant communities like Ashland have emerged in several states, including Colorado, Washington, and California, as more parents adopt alternative schedules or seek exemptions to avoid vaccination. Recent studies have shown that exempt children in Colorado were 22 times more likely to contract measles and about 6 times more likely than vaccinated children to contract pertussis, while exempt children nationwide were 35 times more likely than vaccinated children to contract measles.
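For what it’s worth, “X times more likely” in those studies is a relative risk: the attack rate among exempt children divided by the attack rate among vaccinated children. A minimal sketch of the calculation — the counts below are hypothetical, chosen only to illustrate how such a ratio is computed, not drawn from the studies Gross cites:

```python
def relative_risk(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """Ratio of attack rates between an exposed and an unexposed group."""
    return (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)

# Hypothetical counts for illustration only:
# 11 measles cases among 1,000 exempt children vs.
# 5 cases among 10,000 vaccinated children.
rr = relative_risk(11, 1_000, 5, 10_000)
print(f"Exempt children were about {rr:.0f} times more likely to contract measles")
```

Note that a large relative risk is compatible with a small absolute risk in both groups — which is part of why communicating these numbers to parents is so hard.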
Sadly, studies suggest that the burden of lowered immunization rates will likely fall disproportionately on poor people living in crowded conditions, hotbeds of disease transmission, and exacerbate existing health disparities among minority populations–where kids go unvaccinated not by choice but because of limited access to health services. Exemptions also pose a threat to children who can’t be vaccinated because of a medical condition or who didn’t mount an immune response to the vaccine, as well as to hundreds of thousands of people on chemotherapy, recovering from organ transplants, or struggling with compromised immunity.
Here, it’s appropriate to ask the question: To whom are parents accountable?
Of course, parents are accountable to the kids they are raising. They have a duty to do what is best for them, as well as they can determine what that is. They probably also have a duty to put some effort into making a sensible determination of what’s best for their kids (which may involve seeking out expert advice, and evaluating who has the expertise to be offering trustworthy advice).
But parents and kids are also part of a community, and arguably they are accountable to other members of that community. I’d argue that members of a community may have an obligation to share relevant information with each other — and, to avoid spreading misinformation, not to represent themselves as experts when they are not. Moreover, when parents make choices with the potential to impact not only themselves and their kids but also other members of the community, they have a duty to do what is necessary to minimize bad impacts on others. Among other things, this might mean keeping your unvaccinated-by-choice kids isolated from kids who haven’t been vaccinated because of their age, because of compromised immune function, or because they are allergic to a vaccine ingredient. If you’re not willing to do your part for herd immunity, you need to take responsibility for staying out of the herd.
Otherwise, you are a free-rider on the sacrifices of the other members of the community, and you are breaking trust with them.
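The “herd” language isn’t just a metaphor. In the standard epidemiological (SIR) model, sustained transmission stops once the immune fraction of the population exceeds 1 − 1/R₀, where R₀ is the basic reproduction number of the disease. A quick sketch — the R₀ values below are commonly cited rough ranges, assumed here for illustration rather than taken from the article:

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to halt sustained
    transmission, per the standard SIR-model result: 1 - 1/R0."""
    return 1.0 - 1.0 / r0

# Commonly cited rough R0 values (assumed for illustration).
for disease, r0 in [("measles", 15.0), ("pertussis", 14.0), ("polio", 6.0)]:
    print(f"{disease}: ~{herd_immunity_threshold(r0):.0%} immune needed")
```

The takeaway is that highly contagious diseases like measles leave very little slack: with a threshold above 90 percent, even a modest cluster of exemptions can open the door to outbreaks.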
Our ability to make informed decisions, and to make decisions in a way that does not cause undue harm to others — indeed, our very ability to live with others in a community — comes back to trust and accountability. None of us can have complete information; our decisions are made with the best partial information we can marshal at the time. To get good information, we (including scientists) depend on the efforts of others as well as on our own efforts. And the decisions we make, more than we realize, have consequences for people we may not have considered or consulted when we made those decisions. Even focusing on a small task that seems relatively isolated — taking the best care of our children we know how — can create conditions that make it really hard for other people in our community to succeed when they focus on the same task in their own immediate environment.
I think this is a reasonable initial analysis of the problem. How to address this problem — how to get people to be accountable to each other — is a much harder question to answer.
I should mention here that in the examination of accountability and failures of trust around vaccines and autism, I was struck that people with ASD seemed absent from the conversation. Casting their lives in terms of “vaccine damage” and other harms to be avoided does little to address their needs or our accountability to them.
Orac also weighs in on the PLoS Biology article.
Gross L (2009) A Broken Trust: Lessons from the Vaccine-Autism Wars. PLoS Biol 7(5): e1000114. doi:10.1371/journal.pbio.1000114 (Published: May 26, 2009)