Orac takes issue with a pair of posts I wrote yesterday about the National Center on Complementary and Alternative Medicine (NCCAM). I gather he thinks I’ve been far too trusting of the information provided on the NCCAM website, and that I’m misrepresenting the issues the critics of NCCAM have with the center. If my posts communicated that they were giving the straight dope on NCCAM and the objections to it, then I blew it; that wasn’t at all what was intended. Rather, I wanted to have a look at the ethical issues that arise from such an official effort to examine medical treatments that are not part of the mainstream, and to start to tease out how these might be connected to broader issues around the interactions between scientifically grounded health care providers and patients who are not adherents to scientific ways of thinking.
Here, let me reiterate that I am not an expert on NCCAM or the movement to win broad acceptance for alternative medical treatments. Rather, I’m trying to understand the political battles in terms of the divergent ways of understanding the world that drive the participants in these battles.
With that in mind, some specific responses to Orac’s post:
I didn’t actually offer a clear definition of plausibility in my post because it seems to be a bit like the demarcation problem — drawing a clear line in advance ends up putting things on the wrong side of the line when we try to apply that definition. My questions:
Does “X could plausibly treat condition A” mean that there must be some clear mechanism by which X might act to cure or improve condition A? There have been compounds whose efficacy was widely accepted before we had anything like a detailed understanding of the mechanisms by which they worked (think aspirin).

Does “X could plausibly treat condition A” mean that there already exists a body of empirical data demonstrating its efficacy? If this were the standard, no compound that wasn’t already being used by a significant number of people could ever make it to clinical trials.
were meant to rule out the too-simple definitions of plausibility (some of which, by the way, I have heard deployed in critiques of CAM — usually as shorthand, no doubt).
Of course, as Orac points out, what is scientifically plausible is judged through the lens of our existing body of scientific knowledge. At the frontiers of our knowledge, scientists may disagree about what is plausible and what is not — and accordingly, in these cases, the evidentiary bar is set much higher for the implausible result.
And we need to remember that what is plausible to scientists is quite a different thing from what is plausible to non-scientists. This is a point we’ll return to.
Credulity with regard to the contents of the NCCAM site.
I think I observed that the writing is clear. Also, I appreciate that there is a way to find out what exactly NCCAM research funds are spent on. I agree with Orac that one would not want to take any organization’s website as the whole unbiased story about what that organization is up to.
Obviously what the NCCAM website omits — especially the studies like TACT that scientists and IRBs have seen as especially problematic — is important. Those omissions leave visitors to the NCCAM website with a skewed picture of the NCCAM research portfolio, and they may give us clues to what NCCAM is really trying to accomplish (building a body of objective scientific knowledge vs. pushing to make CAM more respectable and more widely accepted by the medical establishment).
There are big political issues here. I get that.
Still, I think even a quick perusal of what is posted on the NCCAM site (especially the array of projects actually funded and carried out) is enough to raise some pretty meaty ethical questions. That was my goal in perusing those projects, not to give an authoritative evaluation of NCCAM as a government agency.
The clash between two worlds (at least).
It is, quite literally, a clash between two opposite world views over the very rules that define the scientific method. Science-based medicine emphasizes repeatable, observable phenomena and testing new therapies yielded by scientific investigation on patients in carefully controlled, blinded, randomized trials. There may be weaknesses in this approach, the main one of which is applying population-level data to individuals (which can sometimes be tricky indeed), but these shortcomings pale in comparison to the difficulties posed by the methodology favored by CAM advocates, namely anecdote-based studies, where personal experience trumps science.
Unfortunately, the two world views are completely incompatible, and NCCAM is failing because of this incompatibility. I’ll give Janet an example that is relevant. There are numerous — and I do mean numerous — studies that have failed to find even a whiff of a correlation between vaccination and autism. Yet Jenny McCarthy is still out there pushing the myth. Generation Rescue is still out there raising money for “biomedical research” (i.e., quackery) to treat “vaccine injury.” The DAN! doctors are still out there selling quackery in the form of chelation therapy, supplements, diets, and many other dubious therapies. Thousands upon thousands of parents are still terrified of vaccinating, and Age of Autism continues to defend Andrew Wakefield, author of the MMR scare in the U.K. a decade ago, against well-documented charges of undisclosed conflicts of interest and falsified data. Nothing has changed.
Science has one set of procedures for tackling questions about the world, collecting data that bear on those questions, drawing conclusions from those data, and evaluating the credibility of the data and the conclusions. In addition to those procedures, the science camp has a philosophical commitment to go where the data point, to be ready to kiss goodbye even the most appealing hypotheses should the data provide good reason to believe they’re wrong.
The fans of CAM — at least the most vocal ones — seem committed to their hypotheses even in the face of substantial evidence against them. The scientific approach to building reliable knowledge is, for them, only persuasive if it yields the answers they want to hear about their hypotheses.
Orac is right that these two approaches cannot be reconciled.
A reasonable question to ask is where the agenda of NCCAM falls between these two poles. If NCCAM were run by scientists and administrators committed to bringing the same standards of scientific rigor to research on CAM (specifically, the potential treatments deemed most plausible in the light of our current body of scientific knowledge and with the least risk of harm to human subjects in the studies), then we might hope that it could add to our body of knowledge. Good scientific information about offerings from CAM, whether this information supports their safety and efficacy or shows them to be unsafe and/or ineffective, would arguably be of use to a population of physicians trying to offer their patients evidence-based medical care (including advice), and to a population of patients who recognize that science gives useful guidance as to which treatments are reality-based and which are not.
The smallness of these populations, understandably, will be a source of frustration for scientifically minded physicians like Orac. There seems to be a sizable population of physicians, and an even more sizable population of patients, who are much more loyal to their alternative treatments than to the scientific method. To the extent that the people in these populations have any interest in the scientific method, it is as a source of external validation for their alternative treatments — like a Good Housekeeping Seal of Approval. And if science can’t recognize how awesome their alternative treatments are, well, clearly there must be something wrong with the rules of science.
I share Orac’s frustration with the multitudes who seem to take this attitude. To the extent that politicians and administrators steering NCCAM might be trying to change the rules of science as they apply to NCCAM sponsored research, scientists and friends of scientific methodology need to call shenanigans. You don’t get to wear the science label if you’ve drifted away from the methods of inquiry and critical analysis that make it science.
But, as committed as biomedical researchers are to the methodology of science, and as useful a guide to reality as they regard the knowledge built using this methodology to be, that doesn’t mean that non-scientists will be convinced. Physicians who are trying to deliver evidence-based care can do their best to persuade their patients that science is a good screen to separate valuable treatments from snake oil, but they can’t make their patients believe it.
What is plausible to the science-minded person is judged against the scientific knowledge we’ve built so far and in terms of the methodological approach to uncovering (and testing the credibility of) new knowledge. What is plausible to lay people is often judged differently — in terms of what seems convincing in a commercial or a segment on Oprah, or in terms of the testimony of family, friends, or even celebrities. The scientific view of plausibility includes conditions for letting go of what seemed plausible if the evidence just doesn’t support it.
On the basis of the non-scientific view of plausibility, it’s less clear how people can come unstuck from their hunches. Certainly it happens, but a lot of people seem to stay stuck, even as the contrary evidence mounts.
We have a situation in which people with very different world views are trying to share a world — indeed, in which they bump into each other in the examination room. The scientists and physicians shouldn’t have to abandon their commitment to scientific methodology just because so many non-scientists have no such commitment. And patients probably shouldn’t be paternalistically strong-armed into accepting that science gives us the last word on reality.
I’m committed to the scientific world view, so it’s hard for me to put myself in the head of a patient who is not. But I’m guessing that there are some strategies that might help physicians reach these patients a little better, to bring them a little closer to understanding why biomedical research makes the evaluations it does. I doubt that stern warnings, or even reasoned arguments, will get all the people with non-scientific standards of plausibility to abandon those in favor of scientific standards. I don’t think you can make someone accept your standards of plausibility. However, I think you can explain scientific standards of plausibility, to at least make it easier for folks to see that the scientists and physicians mean something different when they use this word than everyday folks might. And I wonder if research on the alternatives such folks find appealing — research whose methodology is laid out clearly, whose evidential standards are specified in advance — might help some lay people “get” the scientific approach to answering questions and bring them a bit closer to understanding where their physicians are coming from.
Some people, I’m sure, are unpersuadable. But given the possibility of educating and persuading those who can be educated and persuaded, it seems like we ought to try. If there were a way that rigorous scientific research on CAM could help with that project, that might be a good thing.