One of the favorite fallacious arguments of pseudoscientists and science denialists is the ever-infamous "science was wrong before" gambit, wherein it is argued that, because science is not perfect or because scientists are not perfect, science is not to be trusted. We've seen it many times before. Indeed, we saw it just yesterday, when promoters of quackery and anti-vaccine cranks leapt all over the revelation that American scientists had intentionally infected Guatemalan prisoners with syphilis without their consent as part of an experiment in the 1940s. They didn't attack the story because it described an inexcusable and horrific violation of human rights; rather, they attacked it because they thought that they could use it to discredit science-based medicine (SBM) and that, if they could discredit SBM, it would somehow constitute an argument that their pseudoscience and quackery are valid. Another example is Robert F. Kennedy, Jr.'s salivating over the Poul Thorsen scandal, even though Thorsen was a relatively minor figure in the Danish studies that demonstrated that thimerosal in vaccines is not associated with autism.
Over the years, I've vacillated between two views of this sort of behavior: either it is outright deceptive and the pseudoscientists using it know exactly what they're doing, or they genuinely believe what they're saying. In the latter case, the pseudoscience supporter or denialist seems to suffer from a serious case of black-and-white thinking, to the point where, if something is not perfect, then, as the sketch goes, it's crap. There's even a name for this fallacy: the Nirvana fallacy, or the fallacy of the perfect solution. Of course, there is inevitably a huge amount of selectivity in the application of the Nirvana fallacy. If it's science-based medicine and imperfect, it's crap, but if it's "alternative medicine," seemingly any flaw is excusable.
A perfect example of this is a post on that wretched hive of pseudoscience and quackery, The Huffington Post, by the father of that other wretched hive of pseudoscience and quackery, so-called "functional medicine." Yes, I'm referring to Dr. Mark Hyman of Ultrawellness, who has graced both HuffPo and his own blog with Science for Sale: Protect Yourself From Medical Research Deception (the version on Dr. Hyman's own blog here). You'll see the Nirvana fallacy combined with a heapin' helpin' of pseudoscience and logical fallacies on display. Like a typical Mike Adams screed, it begins with a study that finds fault with evidence-based medicine (EBM):
A recent study in the Journal of the American Medical Association found over 40 percent of the best designed, peer-reviewed scientific papers published in the world's top medical journals misrepresented the actual findings of the research.(i) The "spin doctors" writing the papers found a way to show treatments worked, when in fact, they didn't.
Doctors and health care consumers rely on published scientific studies to guide their decisions about which treatments work and which don't. We expect academic medical researchers to determine what needs to be studied, and to objectively report their data. We rely on government regulators to prevent harmful medications from being approved, or to quickly remove harmful medications or treatments from the market.
The study to which Hyman refers did appear in JAMA in May. Written by Isabelle Boutron, MD, PhD of the Centre d'Épidémiologie Clinique, Hôpital Hôtel-Dieu in Paris and her team, the article, entitled Reporting and Interpretation of Randomized Controlled Trials With Statistically Nonsignificant Results for Primary Outcomes, analyzed randomized clinical trials published in December 2006 that failed to find a statistically significant difference in their primary outcome, looking for what the authors defined as "spin." The primary outcome is the main outcome for which the treatment is being tested for an effect, such as death from cancer, lowering of blood pressure, etc.
They looked at different strategies of spinning results in which no significant difference was found in the primary outcome of a study, which they divided into the following three main categories: "(1) a focus on statistically significant results (within-group comparison, secondary outcomes, subgroup analyses, modified population of analyses); (2) interpreting statistically nonsignificant results for the primary outcomes as showing treatment equivalence or comparable effectiveness; and (3) claiming or emphasizing the beneficial effect of the treatment despite statistically nonsignificant results." They also attempted to quantify the extent of spin. As Hyman points out, Boutron et al did indeed find that approximately 40% of the 72 articles examined contained spin of one of these types in two or more sections of their text.
Let me make one thing clear: I am not condoning or defending spin. However, spin is human nature. Think about it this way. You've just spent years doing a study (and most studies do take at least a couple of years; some as many as ten). The results didn't turn out resoundingly positive--or maybe they didn't turn out positive at all. It's human nature to want to salvage something out of all that work. Personally, I view "spin strategy #1" as in essence an attempt by scientists to salvage something useful out of a trial. Within-group comparisons, secondary outcome analyses, and various other "data-mining" techniques are not per se bad science, although I will agree that if secondary results are presented as meaning that the trial was positive (or at least more positive than it was), they can constitute deceptive spin; what I won't necessarily accept is that they are always intentional or deceptive. When I see this sort of thing, though, I do wonder where the reviewers were. These are exactly the sorts of techniques that, whether intentional or not, reviewers are supposed to slap down.
You'll see where I'm going with this after you read what Hyman writes next:
What most physicians and consumers don't recognize is that science is now for sale; published data often misrepresents the truth, academic medical research has become corrupted by pharmaceutical money and special interests, and government regulators more often protect industry than the public. Increasingly, academic medical researchers are for hire, and research, once a pure activity of inquiry, is now a tool for promoting products.
While it's hard to deny that there is undue pharmaceutical company influence in the medical literature (I've written about the issue on numerous occasions on this very blog), there's just one problem.
Boutron et al is not evidence of undue pharmaceutical company influence. Her article doesn't even look at the issue. In fact, I find it rather ironic that Boutron et al write:
Our results are consistent with those of other related studies showing a positive relation between financial ties and favorable conclusions stated in trial reports. Other studies assessed discrepancies between results and their interpretation in the Conclusions sections.10, 26 Yank and colleagues10 found that for-profit funding of meta-analyses was associated with favorable conclusions but not favorable results. Other studies have shown that the Discussion sections of articles often lacked a discussion of limitations.27
Clearly Boutron et al want you to believe that their results indicated undue pharma influence resulting in more spinning of negative studies. They don't come right out and actually say that directly, though. They're too clever for that. In fact, they didn't even do an analysis to show a correlation between increasing spin or claiming results that the data do not support and funding source. Indeed, in a letter to the editor, David B. Allison and Mark Cope call Boutron et al out for making this statement thusly:
Although this implies an analysis of the association between source of funding and reporting, in particular on the use of spin, such an analysis was not included in the article. The authors noted in the "Methods" section that they assessed source of funding. It would therefore be helpful if the authors could examine this relationship.
Not surprisingly, Boutron et al's response indicates that there was no statistically significant relationship between the funding source reported and the amount of "spin" in the article, which would tend to support my argument above that simple human nature is a major contributor to attempts to "spin" data. I would also argue that the pressure to publish "positive" results also plays a role. Certainly industry influence could in addition be a factor in how scientific results are reported or misrepresented, but the paper presented by Mark Hyman as slam dunk evidence of the perfidy of big pharma is nothing of the sort. Indeed, it is not evidence to support his contention at all. In fact, I find it rather odd that Boutron et al conveniently left out their analysis that found no relationship between industry funding and level of spin in the articles they examined. I daresay they were very disappointed by that result. I might even speculate that their disappointment might have led them to leave that result out of their manuscript.
What a very human thing to do.
This wouldn't have been a big deal to me except for one thing. Boutron et al tried to have their cake and eat it too by not reporting their analysis showing a lack of correlation between industry funding and level of spin in the articles they studied, while still rather cleverly implying that their results supported such a relationship. In fact, one might say that Boutron et al are guilty of the very sort of spin they claim to have found in other articles. Is that evidence that Boutron et al have been influenced by their funding source? In any case, Boutron et al were forced to admit, "The statement in our 'Comment' section that was noted by Allison and Cope was too strong. Because of small numbers and missing data, we cannot draw any clear conclusion on the relation between funding source and the presence of spin."
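Allison and Cope's point is worth making concrete. With only 72 trials, a 2×2 test of funding source against presence of spin has very little power, which is exactly why "small numbers" preclude any clear conclusion. Here's a minimal sketch using Fisher's exact test, the standard choice for small 2×2 tables, implemented from scratch with `math.comb`. The counts are entirely hypothetical for illustration, not the paper's data:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]],
    summing all hypergeometric probabilities no larger than the observed one."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2
    denom = comb(n, col1)

    def prob(x):  # P(top-left cell == x) with the table's margins fixed
        return comb(row1, x) * comb(row2, col1 - x) / denom

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))  # tolerance for float ties

# Hypothetical counts (NOT the paper's actual data), 72 trials total as in
# Boutron et al: 12 of 26 industry-funded trials with spin vs. 17 of 46 others.
p = fisher_exact_2x2(12, 14, 17, 29)
print(f"p = {p:.3f}")
```

Even with a seemingly notable gap in spin rates between the two funding categories (46% vs. 37% here), the test comes back nowhere near significance at this sample size, which is all Boutron et al could honestly have said in the first place.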
Ouch. That's going to leave a mark. It's also going to leave a mark on Mark Hyman's reliance on this study to support his argument. He even misrepresented the study by claiming that "the authors of this report did not just read the abstracts and conclusions of the studies they reviewed, but had independently analyzed the raw data." I don't know if we read the same paper, but I couldn't find anywhere in the methods a description of Boutron et al obtaining and analyzing the raw data. Then Hyman goes on to claim that Boutron et al supports his contention that this "spinning" of negative research results is a direct result of malign pharma influence when this particular study clearly doesn't support such an assertion.
Of course, Hyman doesn't just rely on this study. He trots out the same old tropes, basically napalming burning-man sized straw men into ash by claiming that science is an "objective endeavor that removes bias and is inherently true and reliable." No, science is not "inherently true and reliable," nor is it necessarily always objective. (Look for quacks to quote mine that sentence.) Rather, science is a method that seeks to minimize bias and the effects of normal human cognitive oddities that lead to incorrect conclusions. Indeed, much of what is published in the scientific literature is incorrect; that is not a flaw in science, but rather scientists publishing their observations. Those observations don't always end up standing up to scrutiny. Ultimately, science is messy as hell, with conflicting results that may take years or even decades to resolve. However, resolve they do ultimately. As messy as it is, science works, although its very messiness can be confusing to lay people and allows an opening for ideologues like Hyman to take advantage of how confusing scientific results can look.
It's particularly amusing to me to see Hyman harping on authors of scientific papers misrepresenting their results. The reason is that Hyman's history of misrepresenting and twisting science to support his own pseudoscience is truly prodigious. Amusingly, Hyman's article that immediately preceded his attack on evidence-based medicine was an article that completely misrepresented research on gut flora and disease in an outrageously pseudoscience-laden article entitled 5 Steps to Kill Hidden Bugs in Your Gut That Make You Sick. In it, Hyman buys into the idea that "toxic byproducts" of gut flora can make you sick, and he marshals and tortures a variety of studies to try to prove his point. Indeed, my only regret is that I didn't devote an entire heapin' helpin' of not-so-Respectful Insolence to this article.
It's on HuffPo, of course.
Perhaps the most amusing part of Hyman's article is how he concludes with recommendations. Coming from him, they are truly howlers. Of the seven, a couple stand out as particularly amusing, so much so that they fried yet another one of my irony meters. For example:
2. Do your homework: Be suspicious of media reports of scientific findings. Does the finding make sense in the context of other studies and is it the best possible approach. Educate yourself by learning to use PUBMED (the National Library of Medicine) and reviewing different perspectives.
The wag in me can't resist pointing out that Dr. Hyman should take his own advice. His pathetically inept analysis of Boutron et al is evidence that he has no clue how to analyze the scientific literature. Rather, he tortures it until it supports his pseudoscience.
Does it pass the "sniff test": Is the treatment suggested just a "me too" drug that has not been proven to be any better than existing treatments? Does it make sense to you or does something smell rotten? Trust your intuition.
This is particularly hilarious because "intuition" matters little in science. The "intuition" that scientists develop to detect studies that don't seem convincing comes not from any sort of "common sense" but from having a deep knowledge of the scientific literature. This is where the weakness in "Google University" knowledge is most frequently laid bare.
The bottom line is that Dr. Hyman is taking advantage of known shortcomings in how science is conducted and the messiness of its process in order to sow fear and doubt in a classic denialist fashion. He's building huge straw men about science and then blasting them with flamethrowers of burning stupid in the form of the Nirvana fallacy. (Nirvana flames of burning stupid? I like it.) Hyman may start out with a legitimate criticism of how medical science is done in 2010, but, like Mike Adams, he can't resist going far beyond that into the stratosphere of crankery, all in the name of supporting the quackery he happens to like.
Hah. A study investigating spin in reporting non-significant results spins their non-significant results...
Which is sad, cos in many ways what Boutron et al really shows is the problems with the bias towards publishing positive results. At the end of a two year study, researchers need a publication to continue their careers. If there was more acceptance that negative results were still results, as long as the science is good, then there wouldn't be this need to wangle a positive result, any positive result, out of the data.
Very astute point. Negative results can be just as instructive, though not often as profitable, as positive ones although negative results can often prevent you from wasting money. I work at a company that uses black belt training - a way to enlist all employees in finding profit. The problem with the implementation here is they do not accept negative results as having any value. Silly and frustrating.
"If there was more acceptance that negative results were still results, as long as the science is good, then there wouldn't be this need to wangle a positive result, any positive result out of the data."
Absolutely. The first time I was told to write a paper about a negative result I got while doing research, I thought I was just lucky to have a professor who'd let me get away with something like that. Now, I finally realize that he was really teaching me that in science, any result is something to be considered, analyzed, and scrutinized so we can avoid mistakes and help others by cataloguing our own bad ideas and shortcomings so they don't waste their time going down the same dead ends we did.
Like a wise person once said, "learn from the mistakes of others because you don't have enough time to learn only from your own." It would be great if researchers aimed for success and tried their hardest to make something work, but at the same time, were willing to admit their failures and let others know where they went wrong. All too often I see mistakes and failures slightly spun, then buried with no further comment.
So... you're using multiple examples of scientists determining that science reporting is inaccurate to prove... what exactly was that point again?
Trust your intuition...
...and you too can "know" that bloodletting, Perkins tractors, mesmerism, and homoeopathy all work.
I possess a copy of Hyman's *Meisterwerk*, "Ultra-something-or-other", which is currently eluding me (probably hiding in shame), thus I perused his blog: seems that he sells books and DVDs *and*, if I remember correctly from said *magnum opus*, he works for the Canyon Ranch--a high-priced spa/weight-loss resort. Although Hyman shows more sophistication in sales technique than, say, Mike Adams (as he is probably more intelligent, person-savvy, and better educated in general), his writing is ultimately sales technique. The standard formula appears to be: cultivate mistrust of SBM by "revealing" its dastardly deeds and greed, then offer up a "friendlier" substitute. I've discovered a simple, time-tested method to immediately evaluate blogs like his (or any of the more *flamboyant* creations bashed here regularly): does it have a section labelled "Store"?
when promoters of quackery and anti-vaccine cranks leapt all over the revelation that American scientists Science-based medicine scientist (SBMers) had intentionally infected Guatemalan prisoners with syphilis without their consent as part of an experiment in the 1940s.
You seem to have posted only a partial thought there. Did you have a point?
FWIW - I think it's pretty clear that the syphilis experiments you mention were deplorable. This has been discussed in some length in a different thread.
@#6 Denise Walker
That's a good point there!
Still, I have no doubt the excuse on the part of the CAM apologists will be "But they don't need a store since the pharmaceutical companies pay them out directly!" or a very similar variant.
@ muteKi : Exactly! Little do they realize that most of us "avoid the middleman" and are paid *directly* by the Source (You-know-who).
OT But I don't see how you missed this:
Isn't that how Kim S. claims her daughter(s)became autistic?
Antivaxers will be all over the funding for that study.
And, of course, the banner ads for rotavirus.
Oops! I should have taken another second to read that. The ads are for Rotarix, the vaccine.
Nice article! Quoting:
"The first time I was told to write a paper about a negative result I got while doing research, I thought I was just lucky to have a professor who'd let me get away with something like that. Now, I finally realize that he was really teaching me that in science, any result is something to be considered, analyzed, and scruitinized so we can avoid mistakes and help others by cataloguing our own bad ideas and shortcomings so they don't waste their time going down the same dead ends we did. "
Completely agree with you! There is a new journal, called The All Results Journals, focused on publishing negative results. Their total open access policy makes them a very convenient venue to publish your negative results and combat the current publication bias.
I have proposed a series of journals called The Journal of Null Results for any number of disciplines. The journals would publish studies based upon the rigor of the design and execution, with no necessity of statistically significant results.
"The file drawer effect" always struck me as a waste of valuable information. At the very least, investigators would benefit from knowing what studies have been attempted in the past rather than repeatedly wasting time and resources reinventing a wheel that others have failed to make work. But it can also put into context studies that report false positives. So for instance, a positive study of acupuncture could be put into the context of 20 other similarly conducted studies that found no significant results.
Having a journal only solves part of the problem. So long as it's career suicide (only slightly overstated) to spend time on writing up negative results, instead of producing "valuable" papers, such journals won't get many articles. It really does call for a widespread cultural change among scientists.
It's a start, though, and I find that encouraging. After all, culture change doesn't happen by magic -- it happens by people gradually deciding that this is an okay thing after all.
One of the points lost on Hyman is that the biased articles still contained enough information to recognize the bias! Hyman may base his conduct on the last sentence of the abstract and expect others to obey his personal spin. However, in science based medicine, we actually read the whole article, assess the weaknesses and factor those in to our decisions.
Richard Feynman's observations about pseudoscience as cargo cultism seem apposite.
…there is one feature I notice that is generally missing in cargo cult science. That is the idea that we all hope you have learned in studying science in school--we never explicitly say what this is, but just hope that you catch on by all the examples of scientific investigation. It is interesting, therefore, to bring it out now and speak of it explicitly. It's a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty--a kind of leaning over backwards. For example, if you're doing an experiment, you should report everything that you think might make it invalid--not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you've eliminated by some other experiment, and how they worked--to make sure the other fellow can tell they have been eliminated.
Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can--if you know anything at all wrong, or possibly wrong--to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; ...
"You seem to have posted only a partial thought there. Did you have a point?"
Why do you seem surprised?
BKsea: "One of the points lost on Hyman is that the biased articles still contained enough information to recognize the bias! Hyman may base his conduct on the last sentence of the abstract and expect others to obey his personal spin. However, in science based medicine, we actually read the whole article, assess the weaknesses and factor those in to our decisions."
In an ideal world, yes. In real world medicine there is limited time to read journal articles, and many will skim them, read abstracts and/or depend on summaries in review articles. There are also limits on knowledge of optimal study design and statistical rigor. Practicing physicians depend to some extent on knowledgeable reviewers picking up on defects before or after publication, and editorial commentary that puts study results in perspective.
My take-home message is that there remains room for improvement both in training docs to evaluate research, and in journal oversight of submitted articles (and vast room for improvement in how alt med research is conducted and the way altie practitioners and promoters misrepresent its findings).
@Dangerous Bacon - While he might have accidentally pushed the "post" button before he was ready, I was concerned he might have succumbed to foul play in mid post. But the post didn't end with a yelp or cry for help, so it's unclear.
I was hoping he might have succumbed to foul play in mid post.
Fixed that for you :)
@23 - I wish Augustine well, in that I don't actively wish him harm. Usually he at least expresses a complete thought. I often disagree with him and sometimes believe his comments are not well thought out, but I do not wish him ill.
http://amzn.to/bHaL03 Medicine is an art -not- a science. Medical doctors have a Hippocratic oath. The '40s do not seem so far back to me that their dubious morals don't cast a bad light today. Missing the emotional part of the brain the patient just sits. Why do anything unless there is caring? Intuition is a global type idea of human cognition. If you don't believe it, ask a pediatrician if he doesn't find it of use in kids' Moms.
Orac, one of the paras in the middle of your post seems to stop in mid-sentence, leading to a loss of impact in that para. Otherwise of course, this is as usual a penetrating critique of the doubleplusungood duckspeakers (ie quacks).
Usually he at least expresses a complete thought
I feel you're being too generous here. Augustine's "thoughts" usually represent something less than the quanta of perception normally qualifying for the term.
I'm not even sure that he exists, in the sense that we might properly interpret Descartes' famous conclusion as "there is thinking going on."
Mephistopheles and Anthony,
I wonder if comment 7 was our random SPAM quote copy bot again.
As I recall, it used a variety of names including other commenters on other threads.
amanda more... between your opinion @ 25 ("medicine is an art -not- a science") and the opinion of the editors, contributors, etc. of this book (http://www.amazon.ca/Scientifica-David-Ellyard/dp/1921209224/ref=sr_1_1…), who include medicine in the category of 'science', I wonder which one to choose from...
@ Mephistopheles, Anthony, etc.
Sorry to disappoint, but I think that Augustine has nothing to do with post #7. I'm actually surprised he/she is not around; I believe that bad ethics in science is one of the axes he/she likes to grind.
If you search in Orac's main post, you will find out the exact text posted under @7. Obviously some random copy-and-paste.
I was tempted to post this yesterday, but I finally decided it was not worth the wear and tear of electrons. But since you are still talking about it, maybe pointing this was needed.
There are bots copying-and-pasting old posts on old threads of Respectful Insolence, occasionally stealing another poster's name in the process. This has been on-going for 2 or 3 weeks, now. Apparently this one managed to find the most recent thread. Eventually.
But I grant you that the result, on a first reading, is close to that we come to expect from Augustine on a bad day. Is this bot on its way to satisfy the Turing postulate?
Not surprisingly, Boutron et al's response indicates that there was no statistically significant relationship between the funding source reported and the amount of "spin" in the article,
Wait, so in a paper examining the tendency for authors to "spin" their results in the absence of statistically significant data, the authors chose to "spin" some of their results in the absence of statistically significant data? hahahahaha, that's awesome. "The irony, it burns!"
(To be clear: I don't mean to unduly criticize Boutron et al. As Orac says, this is just human nature. Actually, I'm a little surprised that only 40% of papers that failed to find statistically significant primary results tried to play things up. What next, are you going to tell me that 40% of men exaggerate the number of sexual partners they've had when conversing with other men? Nooooo, say it ain't so!!!)
That was pretty much my thought, too. But, I think augustine showed up for real on one of today's blog threads.
about halfway through this post you have a paragraph that just ends abruptly with "Then Hyman goes on to claim that Boutron et al supports his contention that" It seems like you left out the last part of this sentence.
The observation that "... Boutron et al conveniently left out their analysis that found no relationship between industry funding and level of spin in the articles they examined" leads me to believe that the Boutron et al study can therefore be recursively and ironically included in the "40 percent of studies misrepresenting the findings."
I do think that Hyman wants to sell his products...but to be fair, there is a big problem in the relationship between doctors and pharma companies. I've seen it first hand and it can influence how a doctor views a patient's condition. To think that money can't seep into an industry is ludicrous.
What I think functional medicine provides is a step back in time, one in which doctors cared more about the patients themselves, versus how many patients they can fit into their daily schedule. In addition, I think looking for the underlying cause of a problem rather than treating symptoms is how doctors should approach each patient.
My fiancée, from her teen years to her early 30s, went to many (20+) traditional doctors to help diagnose why she was getting migraines. Each one gave her the latest medication to mask the headache upon its onset......we decided a few months ago to go see a functional medicine doctor and he's the only one to take the time to figure out the root cause......she's recently been diagnosed with celiac disease. All the others wanted to prescribe meds....not one took the time to dive into her condition and figure it out.
Are there good traditional docs? yes! Are there good functional medicine docs? yes! Are there sh!tty docs out there in general...yes there are!
My point is that I don't want people to think that because Mark Hyman is a functional medicine doctor.....all functional medicine docs are quacks. There are good ones out there doing great things........if you're like my fiancée and me, and you have an issue that your doctor can't seem to figure out, and they just prescribe meds....think about seeing a functional medicine doctor; it may be worth your time.
Just FYI, celiac disease has only recently become more well-known. Not a whole lot was known about it even 15 years ago. It's only within the last few years that lab tests have been developed to help aid diagnosis. So, it's not surprising that your fiancee went undiagnosed for so long, especially if her case was relatively mild.
Especially since migraines weren't linked to celiac until this decade. And then the link had to be confirmed by follow-up studies. And then that has to trickle down to the physicians. So I wouldn't expect a physician to screen for celiac based on migraines until sometime within the last couple of years. It's not obvious what the other docs tried to do to determine what the cause was. Migraines can be caused by many different triggers, only a small fraction of which have diagnostic tests. It may not be fair to blame them for not making a connection that wasn't known until very recently. Note that I'm not pooh-poohing your fiancée's problem, just that the docs may have been more vigilant than you or she realized at the time.
I hear what all of you are saying, and I partly agree. My point is that medicine has become, at least from what I've observed, a business....and that's it. I don't mind physicians making money....but they should be looking out for their patients' best interests, and not how many meds they can prescribe.
I do agree that the finding that migraine sufferers were 10 times more likely to have celiac disease is fairly recent. But that study came out in 2003, and she has probably seen 10+ doctors since then.
What I think is a great thing about medicine in general is that scientists are constantly testing and retesting. Doctors (scientists) should keep an open mind when it comes to different treatment options; if they don't, then no progress will be made in medicine. Is there anyone out there who would want to be treated by a physician who was trained in the 1800s?
Modern medicine is constantly evolving, and a doctor who has the mindset that there's nothing more to learn is a doctor I have no interest in seeing. People are always threatened by something new or by something they don't understand, and I think a lot of you, and the editors at sciencebasedmedicine.org, fall into that category. By no means am I trying to be rude, but these are similar to discussions I have had with my fiancée's family. They all said I was a quack for looking for someone who took a different approach. But guess what they're doing now that my fiancée has reduced her migraines from 5 a week to about 1 a month: they're all making appointments with a functional medicine doctor, because their "traditional" doctors couldn't find anything, and their only solution was to prescribe more medication for the migraines.
I hope everyone here stays healthy and happy seeing their doctor, but try to keep an open mind. We still have a lot to learn, and change and progress are OK, especially in medicine!
Is there anyone out there who would want to be treated by a physician who was trained in the 1800s?
Yes, anyone who thinks homeopathy is effective.
People are always threatened by something new or by something they don't understand, and I think a lot of you, and the editors at sciencebasedmedicine.org, fall into that category. By no means am I trying to be rude, but these are similar to discussions I have had with my fiancée's family.
The problem is not that something is "new" or "not understood." It has to do with scientific plausibility.
Using migraines is a difficult example, because that is a complex symptom whose cause is often not known. If real medicine does not understand migraines fully, then those who call themselves naturopaths or "functional doctors" know even less.
A simple example of something Orac and the doctors are criticized for is not "understanding" homeopathy. Except that anyone who did not sleep through basic high school chemistry should understand that diluting something to the point that none of the original substance is left in the solution is not actually going to work.
Yes, it is a good idea to have an open mind and be able to take in new information. But not so open your brain falls out.
I do agree that the finding that migraine sufferers were 10 times more likely to have celiac disease is fairly recent. But that study came out in 2003, and she has probably seen 10+ doctors since then.
First of all, that's a misrepresentation of the findings. In the study you mention, 4.4% of migraine sufferers had celiac disease, whereas 0.4% of healthy controls did (N = 90 and 236, respectively). This was a preliminary study with a small sample size, and it takes years to confirm such studies; 5-7 years is not a long interval between a preliminary study and diagnosis by practitioners. The reason is that most preliminary findings don't pan out when scaled up.
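To make the comparison concrete, here is a quick back-of-the-envelope sketch using only the percentages and sample sizes quoted above (the study's exact case counts may differ slightly from these rounded implied values). It shows both where the "roughly 10 times" figure comes from and why it rests on very few actual cases:

```python
# Back-of-the-envelope check of the celiac/migraine figures quoted above.
# Assumed inputs: 4.4% of 90 migraine patients vs. 0.4% of 236 controls.

migraine_rate = 0.044   # proportion of migraine sufferers with celiac disease
control_rate = 0.004    # proportion of healthy controls with celiac disease
n_migraine, n_control = 90, 236

# Approximate case counts implied by the percentages
cases_migraine = round(migraine_rate * n_migraine)   # about 4 of 90
cases_control = round(control_rate * n_control)      # about 1 of 236

# The "10 times more likely" claim is a prevalence ratio
prevalence_ratio = migraine_rate / control_rate

print(f"Implied cases: {cases_migraine}/{n_migraine} vs {cases_control}/{n_control}")
print(f"Prevalence ratio: {prevalence_ratio:.1f}x")  # about 11x
```

The striking-sounding ratio is driven by roughly 4 cases versus 1, which is exactly why such a small preliminary result needs confirmation before it should change practice.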
Non-traditional doctors tend to follow fads. They'll jump on preliminary studies and hype them up, especially if the study implicates diet or nutrition. In your fiancee's case, it appears that she got lucky.
Then again, did the doc actually get her tested for celiac disease? Or did he just prescribe her a gluten-free diet? That would be no different than a traditional doc prescribing a migraine medication. Or perhaps the traditional docs asked her questions that would help determine whether she had celiac disease that she didn't answer honestly. These questions are very uncomfortable, after all.
The point is that there is a balance between accepting new ideas and practicing established medicine. When we have limited knowledge, we have to make a decision on how much information is enough to come to a conclusion.
I would like to review below, for the lay public, the JAMA paper entitled "Reporting and Interpretation of Randomized Controlled Trials With Statistically Nonsignificant Results for Primary Outcomes," which Mark Hyman misrepresented in his Huffington Post article. I also point out Hyman's distortions along the way.
1. What was the paper about?
The paper was an attempt to examine the validity of the prior tacit understanding that subjective bias enters into the reporting of negative results of drug treatment trials. This understanding was supported to some extent by earlier studies of others.
2. What did the authors do?
They selected 72 out of 616 papers dealing with randomized controlled trials (RCTs) published in December 2006. Only 72 papers were selected because these were the ones containing non-significant or negative primary results, which was what the authors were focusing on. Two researchers independently read these 72 papers and subjectively assessed whether the authors had spun the interpretation of the negative results. Reading the description of the methods, it is clear that the following excerpts from Mark Hyman's article were misrepresentations:
"....analyzed in detail 72 of those they considered to be of the highest quality."
"The authors of this report did not just read the abstracts and conclusions of the studies they reviewed, but independently analyzed the raw data."
3. What did the authors find?
They found that what they defined as spin in the interpretation of negative results was present in at least one section of as many as 42 of these papers, and in at least two sections of the main text in more than 40% of them. One caveat/limitation of their findings, which they mention, is that the two researchers did not always agree on the presence of spin in the different sections; accordingly, they state that their reproducibility was moderate. Reading these findings, it is clear that the following statement of Mark Hyman is highly misleading:
"They found that 40 percent of the articles misrepresented the data in the abstract or in the main text of the study. Furthermore they uncovered that in cases where studies had negative outcomes--in other words, the treatment studied DID NOT work--the scientists authoring the studies created a "spin" on the data that showed the treatments DID work."
4. What did the authors infer from their findings?
They inferred that, in reporting negative or non-significant treatment outcomes, the authors of many such studies consciously or subconsciously introduce distortion, or spin, to make the most of those outcomes. They also made the following observation:
"Our results are consistent with those of other related studies showing a positive relation between financial ties and favorable conclusions stated in trial reports."
....Boutron et al.
However, when they were challenged by two other authors, in a subsequent comment on their paper, to substantiate the above statement with actual data, they had to retract it. Here is what they wrote in their retraction:
"The statement in our "Comment" section that was noted by Allison and Cope was too strong. Because of small numbers and missing data, we cannot draw any clear conclusion on the relation between funding source and the presence of spin."
....Boutron et al.
5. What are the limitations of their approach, findings and conclusions?
In addition to the above retraction and the already-mentioned caveat, the authors themselves stated the following limitations of their paper:
i) That their assessment is subjective, and there may be disagreements between different researchers/authors on their conclusions.
ii) That they cannot say whether the spin was deliberate or because of lack of knowledge or both.
iii) That they cannot tell whether the spin had any effect on readers and peer reviewers.
Reading this and the above, there should be no doubt that the following statement of Mark Hyman is a gross misrepresentation:
"In plain language, 40 percent of the studies we count on to make medical decisions are authored by scientists who act as 'spin doctors' distorting medical research to suit personal needs or corporate economic interests. 'Spin' can be defined as specific reporting that could distort the interpretation of results and mislead readers. If the conclusions in 40 percent of the papers published in medical journals are being spun toward independent interests, how can we consider the medicine we are practicing 'evidence based?'"
So, if you count the number of words Hyman spent describing this paper in the Huffington Post (please see http://www.huffingtonpost.com/dr-mark-hyman/dangerous-spin-doctors-7-_b…), you will find that, out of 262 words (not counting his direct quotes from the paper), 179 make up blatantly false or misleading statements.
[DELETED AT THE REQUEST OF THE COMMENTER DUE TO HIS APPARENT REALIZATION OF THE INAPPROPRIATENESS OF HIS BEHAVIOR.]
Mr. Burgess, Orac's identity is the worst-kept secret on the internet. If you look carefully, you will find that the article on Weil was posted under his own name not too long ago elsewhere. Why are you posting on an old article, and not on the one you posted on this morning?
@ Jonathan Burgess,
Orac's true identity is one of the worst-kept secrets on the internet. If you aren't able to figure it out, you're even dumber than your writings have shown to date.
Dr. Mark Hyman is ruffling a lot of feathers. His book The UltraMind Solution was life-changing for me. Sorry, big business: I don't eat corn syrup any more, and I'm not interested in it now that I'm educated, thanks to Mark Hyman. Sorry, pill pushers: I don't need antidepressant drugs any more; I just needed a lifestyle change. Thanks to Dr. Hyman, I found what was working against me and beat it. This attack article is a joke. When the writer can help someone come out of autism, depression, so-called ADD, and more, then I'll listen. As for me, all I can say is: I listened, tried, and now believe what works for me.