I admire Brian Deer. I really do. He’s put up with incredible amounts of abuse and gone to amazing lengths to unmask the vaccine quack Andrew Wakefield, the man whose fraudulent case series published in The Lancet thirteen years ago launched a thousand quack autism remedies and, worst of all, contributed to a scare over the MMR vaccine that is only now beginning to abate. Yes, Andrew Wakefield produced a paper that implied (although Wakefield was very careful not to say explicitly) that the MMR vaccine caused an entity that later became known as “autistic enterocolitis” and later implied that the MMR vaccine causes autism itself. Aided and abetted by the credulous and sensationalistic British press, Wakefield then played the myth he helped create for all it was worth. Were it not for Brian Deer and his dogged investigation into Wakefield’s perfidy, the fraud at the heart of the myth that the MMR vaccine causes autism might never have been discovered and Wakefield might still have his medical license, rather than having been struck off the U.K. medical register.

All of which is why it pains me to have to disagree with Deer now.

Of course, this is not the first time I’ve had a problem with something that Deer’s written, and I’m guessing that it won’t be the last. No one expects that we’ll agree on everything, nor, I hope, would anyone expect that I’d hold my fire when even a usual ally like Deer makes a misstep. My admiration for his having exposed Wakefield is enormous and buys Deer a lot of credit in my estimation when it comes to giving him the benefit of the doubt about what he writes, but that credit and benefit of the doubt only go so far. Unfortunately, in the wake of his vindication with respect to Wakefield’s fraud, Deer seems to have developed a bug up his posterior over science and scientists. Now, I can sort of understand why that might be true, given the seven or eight years of relentless abuse he’s suffered at the hands of the anti-vaccine movement and the chilly reception he’s received from some scientists. I can even kind of understand why Deer has lashed out at Paul Offit, Ben Goldacre, and Michael Fitzpatrick, although I think he made a big mistake in doing so. I also think he’s gone a bit overboard in his latest article, Scientific fraud in the UK: The time has come for regulation. He begins in a very derogatory, ad hominem fashion:

Fellows of the Royal Society aren’t supposed to shriek. But that’s what one did at a public meeting recently when I leapt onto my hobbyhorse: fraud in science. The establishment don’t want to know. An FRS in the audience – a professor of structural biology – practically vaulted across the room in full cry. What got this guy’s goat was my suggestion that scientists are no more trustworthy than restaurant managers or athletes.

Now, obviously I wasn’t at this public meeting, its being in the U.K. and all, but I highly doubt that this particular Fellow of the Royal Society literally “shrieked” at Deer. I really do. Even granting a bit of artistic license, his characterizing it that way is clearly meant to paint a picture of someone who disagrees with him as being shrill and unreasonable, “shrieking” and “vaulting across the room in full cry” at him. Not “strenuously disagreed” with the proposal Deer is arguing for. Not “criticized it harshly.” “Shrieked” and “vaulted across the room in full cry.” Now, maybe the FRS described really did shriek, or maybe scientists did raise their voices to be heard. Who knows? Even if they did, this is not an auspicious start to Deer’s argument. Deer is usually much, much better than that; he doesn’t usually take cheap shots like this. Sadly, he takes an even cheaper shot at scientists later, as you will see. But first he has to dump on scientists a bit more:

Restaurant kitchens are checked because some of them are dirty. Athletes are drug-tested because some of them cheat. Old people’s homes, hospitals and centres for the disabled are subjected to random inspections. But oh-so-lofty scientists plough on unperturbed by the darker suspicions of our time.

Is Deer actually proposing surprise inspections of labs? Probably not, but if he’s suggesting through his analogy that this would be likely to catch fraud, he’s going to be sorely disappointed. Such inspections would be even less likely to detect overt fraud than the peer review system. So what is Deer proposing? I’m really not sure, and, rereading Deer’s article, I’m not entirely sure he knows what he’s proposing, either. In any case, the analogy is really, really bad. Unsanitary conditions and practices in kitchens can usually be easily detected by surprise inspections. Ditto hospitals. Random drug tests don’t work quite as well, but they do certainly weed out those who aren’t clever enough to evade them. Often such inspections distort the very thing being regulated. For instance, we just went through our JCAHO inspection a few weeks ago, and I’m not sure that the months of preparation that went into getting ready for the inspection actually made us better as a hospital. All that preparation did get us ready to pass the test. True, inspections of the laboratory used by Wakefield to do PCR looking for vaccine strain measles virus might have turned up the rank incompetence there, but in the majority of cases I highly doubt that such inspections would find evidence of data falsification.

In any case, Deer airily dismisses scientists pointing out that there already exist checks and balances in science, referring to appeals to the scientific method as a method “which separates true from false, like a sheep gate minded by angels.” Damn. I’ll give Deer credit for being a really good writer, but the contempt dripping from his prose is palpable as he dismisses arguments he doesn’t agree with even while admitting he has little evidence to support his assertions:

They heard, of course, that there’s no evidence of a problem: no proof of much fraud in science. Publishing behemoth Reed Elsevier, for example, observed that of 260,000 articles it pumps out in a year, it will typically retract just 70. And for nearly all of these the reason was that the stuff was “plain wrong”, not because it was shown to be dishonest.

This sounds like the old Vatican line about priests and child abuse. Or Scotland Yard and tabloid phone-hacking. And, although I know that the plural of “anecdote” isn’t “data”, the anecdotes of science fraud are stacking up.

Comparing scientists to pedophile priests is the cheapest of cheap shots, and, quite frankly, I resent it. If he were to use that sort of simile at a hearing that I attended, I might be tempted to “shriek” at him too.

As for whether anecdotes of science fraud are “stacking up,” maybe I’m just blinded by being an–you know–actual scientist, but quite frankly, I just don’t see it. To me, data talks, and, quite frankly, Deer doesn’t have much in the way of data. Actually, he doesn’t have any (or at least he doesn’t present any), and he’s actually right about one thing: The plural of “anecdote” isn’t “data.” Yet all he presents are two anecdotes. He has Wakefield, and he has “Woo-Suk Hwang’s fabricated claims in Science about cloning embryonic stem cells.” But he has no real hard data on how common scientific fraud and misconduct are. If anecdotes are what Deer is dealing with, then what does he make of my anecdote? In my 20+ years in science I have never witnessed or had personal knowledge of anyone where I’ve worked falsifying data. In fact, when The Lancet editor Richard Horton is quoted as saying that “flagrant” scientific fraud is “not uncommon,” I have to wonder what, exactly, he is referring to. If it were that common, presumably my colleagues and I would have seen some. Don’t get me wrong. I’m not holding up my experience as necessarily being representative. Maybe I am blind. Who knows? What I am doing is trying to point out how relying on anecdotes can easily lead to a distorted picture. The anti-vaccine movement taught us that.

Also, let me just repeat yet another time that I detest scientific fraud, and have written about it on multiple occasions. But fraud is a continuum. Far more common than outright data falsification à la Wakefield is the reporting of half-baked research, the use of inappropriate analyses or selective data reporting to squeeze positive-appearing results out of what are really negative results, or the exaggeration of the strength and/or importance of one’s work.

As an example, let’s do a little thought experiment and imagine a situation, for the moment, in which Andrew Wakefield’s Lancet paper was not fabricated in the least, as Deer showed that it was. Pretend, for the moment, that it was a perfectly well-executed case study, with clinical histories accurately reported. Even if that were the case, his paper was merely the result of a measly twelve-patient case study. At best, it could generate hypotheses. Drawing any sort of firm conclusions from such results is profoundly irresponsible, as was promoting such results so publicly. In any case, science actually did eventually sort it out; other studies were done and failed to find any association between the MMR vaccine and autism or enterocolitis in autistic children. The problem was that no one seemed to be listening. Science was self-correcting in Wakefield’s case. The damage caused by Wakefield’s fraud was not so much to science, but rather to the public opinion of the safety of the U.K. vaccination program. It was thus the public perception of that science that became the problem, not the science itself. Look at it this way. Even if there were no fraud and Wakefield’s initial paper had been meticulously carried out, the damage would still have been done, because it was how Wakefield reported his results to the press, how the British press credulously lapped up his line of BS, and the irresponsibility (well documented by, yes, Brian Deer) of The Lancet editors and the leadership at the Royal Free Hospital that caused the MMR scare.

Deer continues to report on the House of Commons science and technology committee proceedings, including the report it recently issued, quoting and heartily endorsing its conclusion:

Finally, we found that the integrity of the peer-review process can only ever be as robust as the integrity of the people involved. Ethical and scientific misconduct–such as in the Wakefield case–damages peer review and science as a whole. Although it is not the role of peer review to police research integrity and identify fraud or misconduct, it does, on occasion, identify suspicious cases. While there is guidance in place for journal editors when ethical misconduct is suspected, we found the general oversight of research integrity in the UK to be unsatisfactory. We note that the UK Research Integrity Futures Working Group report recently made sensible recommendations about the way forward for research integrity in the UK, which have not been adopted. We recommend that the Government revisit the recommendation that the UK should have an oversight body for research integrity that provides “advice and support to research employers and assurance to research funders”, across all disciplines. Furthermore, while employers must take responsibility for the integrity of their employees’ research, we recommend that there be an external regulator overseeing research integrity. We also recommend that all UK research institutions have a specific member of staff leading on research integrity.

Deer concludes with a tweak:

The fellows of the Royal Society, I’m sure, won’t like it.

Of course, the question is: Why won’t they like it? It might not be, as Deer seems to think, because they are reflexively resistant to any oversight (although that might be true). Might it not also equally be because this proposal is ill thought out and in general a bad idea? It might.

For example, consider these questions:

  1. What, exactly, would each specific member of each staff of UK research institutions charged with “leading on research integrity” actually do? Seriously. Think about it. What would such a person do? Would he pop into investigators’ labs? Would he inspect lab notebooks? Would he peruse computer hard drives? Watch students, postdocs, and technicians do experiments? Reanalyze random data? And if he doesn’t do those things, then what, exactly, would he do to stop fraud in his own institution that would have any hope of actually making a difference?
  2. What, exactly, would a regulatory body do? David Colquhoun is spot on in the comments when he asks, “Would it reanalyse each of my single ion channel records to make sure I’d done it right? Would it then check all my algebra to make sure there was no misteke? Even if you could find anyone to do it, that would take as long as doing the work in the first place. I fear that Deer’s suggestion, though made from the best of motives, shows that he hasn’t much idea of how either experiments work or how regulatory bodies (don’t) work.” And if it were just a passive surveillance system that only acts after a complaint is filed, how is that any better than the situation in the US?

The bottom line is that we really don’t know how common fraudulent research, specifically examples of blatant fraud like Wakefield’s, really is. There is evidence that perhaps 2% of researchers admit to having manipulated data, while perhaps 33% admit to having at least once engaged in “questionable” research practices. Of course, such results depend upon how you define “questionable.” Whatever the true incidence of scientific misconduct is, we do know that peer review isn’t very good at catching outright fraud and never has been. So what to do? Another consideration is that any regulatory body could be hijacked by ideologues. I don’t know how much of a problem this is in the U.K., but imagine, for example, someone like global warming denialist Senator James Inhofe taking over a panel on science integrity and what he could do with climate scientists.

Even though peer review isn’t particularly good at finding fraud, it is still one major tool to be used to do so. What, however, should be added to it? The problem with many ideas, such as oversight panels or institutional science integrity officers, is in the details. They sound like good ideas on the surface, but when you start to think a bit about the details and what, exactly, such mechanisms would mean and how they would be set up, suddenly it’s not so easy at all. Let’s go back to my thought experiment in which Wakefield’s work was not fraudulent. Now switch back to reality, where Wakefield did commit fraud, but imagine that the ideas advocated by Deer were policy at the time he was signing up patients for his case series. Would Deer’s and the committee’s proposals have stopped Wakefield or exposed him earlier? Would a science integrity officer at the Royal Free Hospital or a science oversight board have made a difference? Maybe, but I highly doubt it. At what point in Wakefield’s fraud would either such mechanism have been able to intervene? Afterward, what, specifically, would have triggered an investigation by either the institution or the regulatory body proposed? After all, it took even Brian Deer himself years to dig up the evidence that started to suggest fraud.

Still, it’s not all bad. Deer is right about one thing, and that’s that the “R” word (responsibility) has to be injected into the system. Here in the U.S., government granting agencies, such as the NIH, investigate allegations of scientific fraud in research funded by the U.S. government, the penalty for fraud being banned from receiving federal grants for a period of time and possibly even criminal charges for defrauding the government. The problem is that the NIH Office of Research Integrity is underfunded and overwhelmed.

More important than any external regulatory force is changing the culture of science so that it is considered acceptable to report suspected scientific misconduct. Science is a means to an end: to find out how nature works, to plumb its mysteries and discover rules by which it works and that we can use to make predictions. Science, to its credit, also tends to work more or less on the honor system, which is why Deer is probably correct that scientists tend not to consider, perhaps as much as we should, the possibility that an investigator is lying or falsifying data when anomalous results such as Wakefield’s are reported. On the other hand, he appears not to understand that the vast majority of anomalous results are not due to fraud but rather to differences in experimental design, analysis, bias (often subtle, but sometimes not), publication bias, and many other factors that can lead scientists astray. Some results are just wrong through sheer random chance. Yet Deer seems to assume, based on his experiences with Wakefield but without strong empirical evidence, that many of these problems are due to fraud, rather than just bad science, biased science, or random flukes.

Even if Deer were correct that scientific fraud is a massive problem, it wouldn’t mean that his blanket condemnation of science in the U.K. and his likening of its culture to the Roman Catholic Church shielding pedophile priests weren’t way over the top. Unfortunately, burned by his experience pursuing Wakefield, I fear he’s become a bit cynical. It’s also clear that, for all his amazing skill as an investigative journalist, he hasn’t really developed a firm grasp of the nitty-gritty of how science actually works and how research is actually done. In the end, science usually does correct itself regardless of the source of error, be it fraud or, well, error. Results in areas that matter will eventually be corrected, because other investigators interested enough to expand on a line of investigation need to start by replicating the results that interested them. In the case of fraud, they won’t be able to. Meanwhile, results that other investigators don’t bother to try to replicate as a prelude to moving beyond them usually don’t matter much and have little or no effect on science. It’s a slow and messy process, sometimes maddeningly so, but over time it does work. Science does correct itself.

The problem is, many of the cures proposed to accelerate the process of discovering fraud are potentially worse than the disease. Then there’s another question to consider: How does one determine whether highly novel or potentially groundbreaking work is correct or a mistake or a dead end when there’s nothing to measure it against? One can’t, not alone; other scientists can, by examining the results and trying to reproduce them so that they can move on.


  1. #1 lilady
    August 5, 2011

    Such an interesting article and thank you Orac for your usual superb job of updating us about Brian Deer’s latest column.

    When I first saw the remarks from Dr. Offit I questioned why he emphasized (regarding Wakefield) the fact that the science behind the study was wrong. IMO, if Dr. Offit was at a medical conference, that remark would certainly be sufficient…but his remarks were solicited by a newspaper and he knew (or perhaps should have known), that the remarks would be picked up by the media worldwide and go “viral” on the internet.

    Dr. Offit is considered to be the “go to expert” on vaccines; he richly deserves the title. I would have preferred that he prefaced his remark with “As a researcher I think the most important thing is that Wakefield was wrong”. To clarify he could have stated that the reason why Wakefield was wrong was because his wrongness had an agenda of profit and financial gain.

    So Orac, on this argument we disagree.

    I first became aware of Mr. Deer’s journalism, when I viewed the Public Broadcasting documentary about the Wakefield study and how poorly it had been conducted. Yes, I certainly did know about the Lancet publishing the small Wakefield study and its ramifications for public health…I saw it myself while working as a public health nurse. (No need to comment here about the respected Lancet Journal editorial board…they dropped the ball completely.) That very night after viewing the documentary I went straight to Brian Deer’s website and into the wee hours of the morning I read all the articles he had ever published regarding medical issues. Now Mr. Deer does not write for NY Times or the Wall Street Journal, where his more colorful commentary would be edited out to meet “the standards” of the aforementioned newspapers. Mr. Deer writes for a newspaper where his journalistic style is in demand by the publishers and his readership…and very consistent with the many articles he has written for the readership.

    Insofar as comparing Wakefield’s fraud and what Mr. Deer labels as a cover-up, the remark may have been a bit over-the-top with a comparison to the Church covering up abuse of altar boys, but not totally out of sync with Mr. Deer’s style of reporting.

    Must dash now for an appointment. I’m sure this debate will be hot and heavy and ongoing and I’ll probably be back to re-post.

  2. #2 jaranath
    August 5, 2011

    I forgot to add: Just to mitigate the apparent pile-on a bit here, I may agree with at least some of what Brian’s arguing, especially if there’s no mechanism in the UK at all. I just don’t know the ins and outs of practicing research well enough to suggest what is reasonable to implement. I know that the auditing I deal with at work is generally positive and helpful, except for a couple cases where I’ve seen it go horribly wrong.

  3. #3 Orac
    August 5, 2011

    So Orac, on this argument we disagree.

    “Partially disagree.” You see, I can understand where Dr. Offit is coming from, but I could also understand where Brian Deer was coming from on the issue of science being wrong due to fraud or just being wrong. I tackled the issue in my characteristic logorrheic fashion here:



  4. #4 stripey_cat
    August 5, 2011

    Well that IS true. I’ll agree to that. It DOES happen. It IS a problem.

    But the problem being so large? That’s a conspiracy theory.

    Wow, I’m not even thinking about retaliation or prejudice. I’m thinking that if you call out someone you’ve been working with for two years, that’s two years of *your* data and publications that have to be thrown down the crapper along with theirs. That’s a real setback to someone early in their career. Of course we hope people will be sympathetic (although not all will), but even if they do, there’s a gap in your publication record you could drive a bus through.

  5. #5 Chris
    August 5, 2011


    (No need to comment here about the respected Lancet Journal editorial board…they dropped the ball completely.)

    (have more detailed comment in moderation)

    Except they included an article by Chen and DeStefano on why Wakefield’s study was flawed. Also there were several other scientists who questioned Wakefield’s claims, and that included several published papers.

    There was no question that Wakefield was wrong. The question is why did so many people believe he was right, and why did it take Mr. Deer’s articles to prove it to the general audience, and not scientific world?

  6. #6 Denice Walter
    August 5, 2011

    @ Chris: “why did so many believe he was right”

    Because it appealed to deep-seated irrational beliefs that they held (“vaccines are dangerous tampering with Nature”/ “autism is not inherited”) just like Burt’s “data” about IQ being largely inherited ( thus not affected by environmental causes) fit in fine and dandy with stereotypical thinking about race and class differences in intelligence being somehow *intrinsic* and unalterable.

    On a somewhat *lighter* note: Much as I’d like to see more regulation contra fraud, I question whether there will be any money left, given our ( both UK and US) current economic situation, for either regulators or research. But that’s just me.

  7. #7 Jarred C
    August 5, 2011

    Einstein is. Hawkin was at least 8 years ago, not so sure now. Sagan hasn’t been seen for ages, so he’s fading fast from stardom (much like, say, Johhny Rotten has).

    Hawking is hitting the limelight again, especially with him on the new show entitled, “Curiosity: Did God Create The Universe?” by the Discovery Channel.

    Sagan’s shows are still aired, so I wouldn’t count him out just yet.

    Also, what about Bill Nye?

  8. #8 Beamup
    August 5, 2011

    The question is why did so many people believe he was right

    A significant part of the answer is the way parts of the media swallowed the whole thing hook, line, and sinker without any consideration of whether his results actually justified his claims.

  9. #9 Jeremy Shaffer
    August 5, 2011

    Brian Deer said:

    I find it difficult to believe that you can’t see an entirely reasonable parallel with sexual abuse in the Catholic church, which is perhaps the seminal example of institutional denial in the face of decades of relentless anecdote.

    Outside of the inflammatory nature of this statement making you appear to be the one shrieking, the reason why it is not an entirely reasonable parallel is that it ultimately works against you. The RCC’s institutional denial of sex abuse didn’t get to international levels of disgust until real evidence was produced that showed it and the cover up of same to be institutional. Before then it was largely and easily dismissed as little more than attention-seeking accusations while there were only anecdotes.

  10. #10 Adam C.
    August 5, 2011

    One has to remember: the editor of the Lancet conspired with Wakefield against Deer, and Goldacre wrote articles saying that Deer’s reports shouldn’t matter, because the science was already against vaccines causing autism.

    Brian Deer’s reactions are fully understandable, even if you disagree with them.

  11. #11 Orac
    August 5, 2011

    Didn’t I say on more than one occasion that I could understand how Deer might react the way he did? The problem is that he shoots his own case in the foot and risks alienating some of his biggest fans (like me) by using such comparisons. He’s a big boy; he can decide to do that if he wishes, but I will call him on it, regardless of how much gratitude and admiration I have for his work unmasking Wakefield.

  12. #12 rw23
    August 5, 2011

    Finally, we found that the integrity of the peer-review process can only ever be as robust as the integrity of the people involved.

    True, but of course one could say the same about Parliamentary committees…

  13. #13 lsm
    August 5, 2011

    Isn’t there a rather large elephant in the room here? I’m talking about Mr. Deer going after the trickle of fraud in conventional research when compared to the veritable flood of deception in CAM, which seems to enjoy carte blanche when it comes to accountability. Why not go after the greater medical fraud, Mr. Deer, and then revisit real science, worthy as that may seem to you. My humble opinion.

  14. #14 hibob
    August 5, 2011

    @beamup #43, Orac #44: granted, the volume of data for some physics papers (and NMR papers come to mind as well) wouldn’t work well for this model. Raw patient records from clinical trials would be difficult or unethical to copy and store, and the scored/anonymized data would probably be substituted. Data in an accessible database wouldn’t need to be copied. Different types of journals (and granting bodies) would certainly need to work out their own standards for what gets filed and under what circumstances it should be unsealed. But the data for the vast majority of papers in chemistry, biology, psychology and most other fields would fit on an individual DVD per paper. Blots before the contrast was adjusted, TLCs, chromatograms, surveys, interviews – most raw data just doesn’t take up that much space. For the next rung, say NMR papers: put it on a hard drive, sign and seal it into an envelope, give it to the dean or mail it to the editor.

    (a) Proprietary data formats: so what? Store it in whatever format was used to generate the paper, proprietary or not. If there are serious questions about your work, enough so that your editors have a formal beef with you, you will presumably be able to replicate your results from your own data.

    (b) Looking at the data in any meaningful way would require a complete replication effort: Yes, exactly. That’s often how fraud/error is identified: people try to replicate an experiment or technique and can’t get it to work. The vast majority of experiments are replicable by someone else in the field, given the proper materials, strains, etc. Having all the information might obviate many needless attempts at replication (i.e., the error gets found in the data/methods), or at least speed those efforts up via details that had been left out of the methods section.

    (c) exabytes of data pared down from zettabytes of data: all right, got me there. Y’all get a pass.

  15. #15 lilady
    August 5, 2011

    I mis-posted (is that a word?) about disagreeing with Orac…I partially disagree insofar as his taking Deer to task about Dr. Offit. I do agree that (on the face of it) Deer’s comparison equating the cover-up to the bishops and cardinals of the Church covering for pedophile priests by transferring them to another church with new altar boys is a bit over-the-top.

    It is difficult for me and I suspect for other pro-vaccine advocates, to ignore the ramifications of the Wakefield study on the individual lives of kids with autism, their families as well as the undermining of public health initiatives to prevent the spread of vaccine-preventable diseases.

    I have some other (good) “baggage” as well as the parent of a profoundly/multiply handicapped child and as a strong very vocal advocate for all disabled children and adults…since 1976. I suspect that is why I so appreciated Deer’s efforts to reveal the extent of Wakefield’s treachery and Orac’s ongoing efforts to bring the story to his wide readership. Orac, no one could ever accuse you of treading lightly when it comes to Wakefield, nor could that accusation ever be leveled at Brian Deer.

    There is a problem with certain physicians who do not practice good medicine with a very small percentage of doctors who blatantly malpractice on multiple patients. It seems to me that way too much time elapses between the time that serious complaints are lodged with licensing boards and the process is started to really investigate the alleged malpractice. And, there is the perception “out there” that in many instances, other patients are put in peril due to the slowness of the process to fully investigate and take action, if deemed necessary.

    But, there were no complaints from the parents of the children that were subjected to invasive and needless testing. Some of the parents had the (false) belief that their children were damaged by the MMR vaccine. Others were looking for the big payoff…but they all participated with the anticipation of suing Big Pharma and being compensated. Of course, the background of kids being sent to Wakefield by solicitors and Wakefield’s highly paid fees as the “hired gun” professional plaintiff’s witness as well as the big score of developing a single antigen measles vaccine, were the real reasons for the study and the fraudulent results of the study.

    The take-away lesson from today’s debate is that we are all on the same side…honesty in research, protecting kids, providing for the public health of our citizens and taking down the researchers and their bogus studies who imperil preventive health initiatives.

    It should be quite interesting to see how the “journalists” at Age of Autism “spin” Deer’s latest article…as well as Orac’s blog.

    (Now, reverting true to form) When is that SOB Wakefield going back to the U.K.?

  16. #16 rob
    August 5, 2011

    i have to agree with lsm, in that CAM modalities can be broadly characterized as fraudulent. how many billions are wasted on that crap when it could be used for legitimate scientific endeavors?

  17. #17 Orac
    August 5, 2011

    It should be quite interesting to see how the “journalists” at Age of Autism “spin” Deer’s latest article…as well as Orac’s blog.

    I rather suspect they’ll ignore Deer’s latest article. Although it must be painfully tempting to them to latch on to its attack on the scientific establishment, they can’t do so without mentioning Deer’s previous brilliant takedown of Wakefield’s fraud.

    It is rather amusing that today AoA published part 5 of its most recent attack on Brian Deer, with no end in sight. Dan Olmsted himself wrote it! Talk about hilarity ensuing.

    The key thing to remember is that Brian Deer and I agree about far more things than we disagree about.

  18. #18 herr doktor bimler
    August 5, 2011

    The UK House of Commons science and technology commitee proposed that fraud be tackled with a national body to co-ordinate action, and that designated research integrity officers be appointed in academic institutions.

    Politicians want more control over other people’s activities! Who could have predicted it?
    I can’t see any possible downside to a system that inflicts paperwork and a structure of state commissars upon academic researchers, while leaving private-sector researchers (e.g. pharmaceutical-company employees) to do what they like.

    Then of course there are free-lance, self-employed scientists. Good luck imposing “research integrity officers” upon them!

  19. #19 herr doktor bimler
    August 5, 2011

    Essentially the UK House of Commons science and technology committee seems to regard all scientists as servants of the State, who can be saddled with whatever scrutiny the committee might dream up.
    Go to hell. Go directly to hell, do not pass go, do not collect $200.

  20. #20 JakeS
    August 5, 2011

    and Goldacre wrote articles saying that Deer’s reports shouldn’t matter, because the science was already against vaccines causing autism.

    But this is actually true, even if it is, perhaps, impolitic to point it out in so many words. Fakefield was always a marginal character in terms of actual science, as opposed to column inches. His study had been conclusively debunked years before it was proven to be fraudulent. So Fakefield was not a problem for science. For public health? Certainly. For science as a social institution? I would argue so. But not for science as a way of improving the state of human understanding, which is what Goldacre was talking about.

    Denise Walter brings up a very important similarity in #14: Routine regulatory audits of banks and other financial firms – like peer review in science – are not there to prevent fraud. Routine regulatory audits are there to prevent gross stupidity. Anybody in a position to suborn the internal audits that are conducted to catch embezzlement can hide financial fraud from routine inspection if he’s smart enough to commit it in the first place.

    Financial fraud is caught when Stuff Blows Up in a sufficiently spectacular fashion to make the police and the firm’s creditors send in the forensic accountants. Scientific fraud is discovered when fraudulent conclusions blow up in someone’s face hard enough to prompt him to spend time tracking down what he usually presumes is a simple error. Or, as in Fakefield’s case, when the fraudster makes such a big public production that he attracts attention from people who track down fraud for the sake of tracking down fraud.

    Now, in finance you have a good case for more intrusive and heavy-handed regulation, because financial fraud that never blows up in imaginative and novel ways can still do a lot of damage to innocent bystanders. Also, if Stuff Does Blow Up in finance, you have a pretty clear prima facie case for fraud being committed somewhere in the process, since Stuff Should Not Blow Up in a legally run financial institution of any importance if your financial regulation is properly stringent.

    Basic science research, by contrast, has very few innocent bystanders (aside, perhaps, from a few grad students who get caught up in fraud committed by their advisor), and lots of stuff blows up for the perfectly innocent reason of attempting to traverse uncharted territory of human understanding. So in basic science, the trade-off is between higher signal-to-noise ratio from cracking down on fraud vs. lower absolute volume of sound research from the overhead imposed by the crackdown. I am not convinced that this tradeoff provides a compelling argument for a crackdown at the moment.

    If the purpose is to protect innocent bystanders, I’d look at the ways in which it is possible to suborn IRBs and other ethics bodies dealing with human or primate research. But it’s not scientific fraud you’re looking for in that case – you can get perfectly fine data from the most appalling atrocities (granted, the sort of mentality that makes atrocities seem like a good idea might also be predisposed to scientific fraud – but that’s tangential to the point).

    – Jake

  21. #22 lilady
    August 5, 2011

    Damn Orac, you beat me to it…I’ve been slumming again at Age of Autism. I find this “diversion” to be compelling…sort of like mind crack to this addict of voodoo science “practitioners”.

    Olmsted’s latest invective in the Age of Autism’s Six (or Sixty) Degrees of Separation amongst Big Pharma and the Rupert Murdoch investigation is that Brian Deer purportedly stated (about Wakefield), “He’s a charlatan and as slippery as condom lube”.

    That quote might be another of Olmsted’s “misquotes”…if it isn’t, Bravo, Brian Deer!

    Orac, you and Brian Deer both are my heroes.

  22. #23 herr doktor bimler
    August 5, 2011

    one could say the same about Parliamentary committees…

    rw23 “has struck the wooden skewer upon its thicker end”. It is a marvel that UK politicians can turn their attention to fraud and non-integrity amongst non-politicians, without their heads exploding.

  23. #24 Antaeus Feldspar
    August 5, 2011
    “It doesn’t matter that [Wakefield] was fraudulent,” Dr Paul Offit, a vaccine inventor and author in Pennsylvania, was quoted in the Philadelphia Inquirer the next day as saying. “It only matters that he was wrong.”

    suggests to me that we may have a problem. Offit seems to be saying, “Hell yes. Invent the data, lie about the findings, just hope you’re lucky”.

    I am entirely confident that this is not, at all, what Offit was trying to say.  I have my doubts whether it is even a fair representation of what Offit actually said to the reporter from the Philadelphia Inquirer; reporters have been known to leave out important context in order to get a more provocative-sounding quote.

    I would suggest that what Offit meant is probably far more like the following:

    “It might seem that the lesson of the Wakefield affair is that we need to guard against science that is wrong because of fraud.  But ‘science that is wrong because of fraud’ is only a subset of ‘science that is wrong.’  If we focus our efforts on discovering and correcting science that is wrong, that will include science that is wrong because of fraud.  If we focus our efforts on discovering and correcting scientific fraud, however, we may overlook science that is wrong but not fraudulent.  It makes more sense to concentrate on ‘is it right or wrong?’ than ‘is it fraudulent?'”

  24. #25 Laura
    August 5, 2011

    How about fraud in the sense of someone stealing your work? That happened to me. I was using a computer program that did a calculation that was based on a published paper. Part of the calculation was finding values for derivatives, and that part didn’t work.
    So I read the paper. It had tons of “typo” mathematical errors – it seemed the author wanted to protect his calculation by hiding his actual formulas with “typos”.
    But there was one real mathematical error, in the calculation of derivatives. It was a subtle error, based on a change of coordinates with nonzero second derivative. I fixed that, and rewrote the computer program based on the fix – and after that, a finite differences test showed the derivatives were right.
    So, we called the author of the paper. He flatly dismissed the idea that I might actually have fixed his program, even though I told him I’d tested the derivatives with finite differences.
    A few years later, a paper of his appeared, with my fix in it. We *were* mentioned, but in very small print and in an ambiguous way, so someone reading it would probably not realize that he hadn’t come up with the fix.
    So his ego led him to intellectual theft.
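    The finite-difference check Laura describes is simple to demonstrate. Below is a minimal sketch (invented functions, not Laura’s actual program, and just one plausible reading of the bug she describes): under a change of coordinates u = u(x), the chain rule gives f″(x) = g″(u)·u′(x)² + g′(u)·u″(x), and dropping the g′(u)·u″(x) term is exactly the kind of error that only matters when the coordinate change has a nonzero second derivative — and that a central-difference test catches immediately.

    ```python
    import math

    # Toy setup: f(x) = g(u(x)) with a coordinate change u(x) = x**2 (u'' = 2, nonzero).
    def g(u):
        return math.sin(u)

    def u(x):
        return x * x

    def f(x):
        return g(u(x))

    def f2_wrong(x):
        # Analytic second derivative that FORGETS the g'(u) * u''(x) term.
        return -math.sin(u(x)) * (2 * x) ** 2

    def f2_right(x):
        # Full chain rule: g''(u) * u'(x)**2 + g'(u) * u''(x).
        return -math.sin(u(x)) * (2 * x) ** 2 + math.cos(u(x)) * 2

    def f2_fd(x, h=1e-4):
        # Central finite-difference estimate of f''(x) -- the independent check.
        return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

    x = 0.7
    err_right = abs(f2_right(x) - f2_fd(x))  # tiny: formula agrees with the test
    err_wrong = abs(f2_wrong(x) - f2_fd(x))  # order-one: missing term exposed
    ```

    The point of the test is that the finite-difference estimate depends only on values of f itself, so it cannot share a mistake made in the analytic derivative formula.
    
    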

  25. #26 Denice Walter
    August 5, 2011

    @ Jake S: Thank you. While I focused on cognitive and developmental psych (and stat), I found myself managing a great deal of money c. 2000. Basically, by trying to figure out how to protect said money, I became fascinated with fraud and misrepresentation in pseudo-science and finance. Fraud, if it is to be successful, rests upon higher mental functioning/social cognition, i.e. person perception, recursive thought, taking the role of the other, self-evaluation, assessing demeanor: skills usually developed around the time of adolescence. A good fraudster needs to anticipate how the other will react, what they want, and what will make them suspicious.

    @ Orac: I read those articles plus Jake’s most recent one. Funny how he doesn’t mention his little adventure @ RI and his tango *avec moi*. I wonder why? (Hint: his audience might read what commenters wrote in response to him.)
    -btw- I am basking in the glory of being despised by an Age of Autism contributor.

  26. #27 Narad
    August 5, 2011

    This was a quite literal LOL moment for me. Clearly you’ve never dealt with interesting data volumes. In some fields it might possibly be vaguely credible. In others, this would require journals to accept and store hundreds of petabytes (even exabytes) of data for a single paper

    Well, yeah, but they’d store it in a box. I had a similar reaction, and I’m only from the publishing end. The delivery of “supplemental materials” by journals remains quite haphazard. Even to the extent that somebody is canonicalizing the stuff in the real editorial office, sanity-checking should not be expected.

  27. #28 Antaeus Feldspar
    August 6, 2011

    Isn’t there a rather large elephant in the room here? I’m talking about Mr. Deer going after the trickle of fraud in conventional research when compared to the veritable flood of deception in CAM, which seems to enjoy carte blanche when it comes to accountability. Why not go after the greater medical fraud, Mr. Deer, and then revisit real science, worthy as that may seem to you. My humble opinion.

    I agree completely about CAM representing a flood of deception, but disagree that the reasonable response to this would be for all investigation of scientific fraud to leave conventional research alone and pursue fraud only in the CAM domain. If this isn’t what you meant to suggest should be done, then I apologize for misunderstanding you, but you’ve left important parts of your argument out: why should investigation of conventional science be done, but not by Deer, who is already doing it and has a track record of doing it well?

  28. #29 VM
    August 6, 2011

    The British media’s penchant for sensationalism and scare mongering is what caused the damage to the public during the Wakefield affair. They ran with Wakefield’s “research”, and continued to do so even when subsequent studies questioning Wakefield were published. It was a rather comprehensive admission that the British media is both scientifically ignorant and prefers sensationalism to sell stories.

    It’s disappointing, then, to see Deer engage in the very same exaggeration and emotive sensationalism (Shrieking scientists! Science is like the secretive pedophile-loving Vatican!) that wrought this mess in the first place. Even Deer’s defensiveness when he gets called out on this mirrors the British media’s frequent aversion to admitting culpability.

    If Deer has serious suggestions on how to deal with scientific fraud, or has data relating to the problem, he is welcome to contribute to the discussion. Anything else, like the sensationalism in Deer’s article, is simply more noise.

  29. #30 StuartG
    August 6, 2011

    Wow @46

    “Elvis is still going strong.”

    The immediate response by two teenagers:

    “Elvis Costello?”

    “Elvis who?”

  30. #31 Svlad Cjelli
    August 6, 2011

    “We also recommend that all UK research institutions have a specific member of staff leading on research integrity.”

    Sounds like a bad thing. Not only is it a singular member of staff, but a specific one. I’m no expert, but that looks like relying on the opposite of public, independent criticism and replication.

    Would there be value in that?

  31. #32 JakeS
    August 6, 2011

    @Denise#75: There’s only one rule for protecting money under your care: Do your own due diligence.

    The same rule applies to science, but scientists tend to be a lot better at it than banksters.

    – Jake

  32. #33 Denice Walter
    August 6, 2011

    @ Antaeus Feldspar:

    We use a series of filters to separate the wheat from the chaff:

    our scammers often have spurious educational backgrounds: degrees from matchbook-cover schools (OK, so I’m slightly prejudiced), never completing standard degrees or training.

    However, many of our charlatans *are* doctors or have graduate degrees from reasonably respectable, accredited universities.

    A finer filter might be questioning how their work fits in with what we already understand (which is what first alerted me to Mr Wakefield re neuro-development early on).

    What’s Step 3? Wish I knew but somehow I suspect that institutional self-monitoring will not do the trick.

    @ JakeS- You got it, Mister!

  33. #34 Jarred C
    August 6, 2011

    This entire subject reminds me of the John Darsee, MD case. Back in the early 80s he was a Harvard medical researcher. He was accused of fraud, and later found to have fabricated much of his research, not only while at Harvard, but also while at Emory (residency) and Notre Dame (undergraduate). While at Emory, he published 10 papers and submitted 45 abstracts, of which only 2 of each are still considered valid.

    For those of you who have access to Science, try these reports:

    1) Science. 1983 Apr 1;220(4592):31-5. Coping with fraud: the Darsee Case. Culliton BJ. PMID: 6828878

    2) Science. 1983 May 27;220(4600):936. Emory reports on Darsee’s fraud. Culliton BJ. PMID: 6844919

    They’re good reads.

    From the second article:

    In one instance of reputed fraud, Darsee coauthored a paper with cardiologist S.B. Heymsfield in The New England Journal of Medicine. The Moran committee report states that “there are no original records of this project at Emory University.” Heymsfield does not remember the patients. The published paper does not identify any hospital or clinics as the place the patients came from. Furthermore, the acknowledgement at the end of the paper expresses “indebtedness” to three scientists who apparently do not exist.

  34. #35 Chris
    August 6, 2011

    Yesterday on NPR’s Science Friday was a discussion on Retraction Watch. They mentioned of the over 200 retractions they have covered in one year over 80 were from one researcher.

    I’ll probably peruse it later today after doing some errands. Hoping I can get through them after spending the night listening to the alarm from my neighbor’s car going off once an hour from 2am to 6am.

  35. #36 Jarred C
    August 6, 2011

    Correction: Darsee didn’t publish 10 papers and 45 abstracts while at Emory. 10 papers and 45 abstracts were published, of which all of them were based in some part on Darsee’s work.

    I think this is more important than the way I mis-reported it in my earlier post at #84. It shows how one person’s fabrication of data can impact many other people.

  36. #37 Chris
    August 6, 2011

    Oh, wow, there is a Turkish version of me in an alternate universe!

    (yes, I know it is a spam-bot)

  37. #38 Venture
    August 7, 2011

    @ Brian Deer 29:

    “With regard to UK institutions, very many have no policies whatsoever for such concerns. In the Wakefield case, for example, University College London – an elite academic institution – introduced procedures I think last September.”

    UCL has had a policy on academic/ research misconduct for many years – for example, I have an old version with a 2001 date on it.

    The current policy (http://www.ucl.ac.uk/ras/acs/resgov/research-misconduct-procedure.pdf ) does have a September 2010 date on it but that is the date of the latest edition.

  38. #39 Orac
    August 7, 2011

    In the unlikely event that Mr. Deer is still reading this thread, perhaps he would do us the favor of listing a few of these UK institutions that have “no policies whatsoever” covering research misconduct. I’m serious. I’d really like to know.

    If he can’t produce such a list (or even an example or two of such a UK institution), then I’ll have no choice but to go beyond simply disagreeing with him over tactics and question his entire thesis, given that he has made the claim that “very many” UK institutions don’t have a policy on research misconduct/fraud.

  39. #40 Eleanor
    August 8, 2011

    my 2p worth:

    I worked in a lab in the UK where, because of the funding we were receiving (from the government Agriculture dept (DEFRA), maybe?), we had to keep meticulous lab books and document every procedure we carried out and every sample we took. In theory, we could have had a random inspection on all of this. I still have no idea if this was a story cooked up by the lab head to make us more careful, although we were allowed to be less document-heavy on other projects. Not being in the habit of fudging or lying, I can’t say it made any difference to the way I worked, bar having to do lots more paper-work. So some of the funders in UK academe do have systems in place, but my feeling is that it would have prevented bad book-keeping rather than fraud.

    I currently scrape funds together to get my research done and have to see 40% of it cut by the Uni for overheads. If we had to fund ‘fraud squads’ too, how much would we lose then? 50%? To put it in context, my research money this year is £10,000, of which I see £6,000. To put it in more context, my research is in an esoteric field which few give a crap about.

    I’m with Orac, in that I think fraud is rare and hard to detect. A quick look at the Retraction Watch blog does show that peer review catches some of it, and that it is generally (and eventually) dealt with pretty harshly. Aside from each institution having a system in place to deal with fraud as it arises, I don’t know how we deal with the rest.

    As an aside, when do we get an official regulation of journalists in the UK? Or is that off limits? Given there may be as many ‘paedophile priest’ journos as there are scientists (though my money is on there being far more), can we see heavy-handed and expensive spot checks there, too?

  41. #42 Wow
    August 8, 2011

    hibob: “That’s often how fraud/error is identified: people try to replicate an experiment or technique and can’t get it to work. The vast majority of experiments are replicable by someone else in the field, given the proper materials, strains, etc. ”

    But that’s how science works now. It just doesn’t bother with the “detect fraud” bit and just has people try to re-create the test of the theory.

    So what is this fraudbusting office doing now?

  42. #43 Wow
    August 8, 2011

    “Also, what about Bill Nye?”

    I would count that as a regional specialty. Still popular with hundreds of millions, but regional. I.e. I’ve heard of him and can google/youtube for him, but I’ve never seen him.

    Professor Brian Cox is somewhat of a regional one too. At least up until Secrets of the Solar system.

  43. #44 Wow
    August 8, 2011

    Stripey: “that’s two years of *your* data and publications that have to be thrown down the crapper along with theirs. That’s a real setback to someone early in their career”

    But by producing a paper that shows their senior’s errors, you’ve not thrown out your work and just shown how your work is BETTER than theirs.

    Insta-fame. In science circles.

    Most fraud is caught. In the case of Wakefield, it had already been caught and was, as far as the science was concerned, of absolutely NO IMPORTANCE WHATSOEVER.

    It was the MEDIA who pushed it and made the fraud.

    Not science.

    A credulous media who were looking for a good story and have carte blanche to make any old shit up “in the interests of balance”.

    It isn’t the science that needs a fraudbusting unit, it’s the media needs one to make them culpable for crap journalism.

  44. #45 Gray Falcon
    August 8, 2011

    I’m seconding Wow@92 on fraud detection. The fact that Wakefield had not been reproduced was enough to question the results, although without further evidence, one could not be sure whether Wakefield had allowed his biases to color his procedures, or if he had been engaged in deliberate fraud. Regardless of Wakefield’s intentions, the end results were the same.

  45. #46 Jud
    August 8, 2011

    decades of relentless anecdote

    Decades, maybe. (Back to Piltdown Man and before.) But relentless? I’m just a layperson, and bow to the superior knowledge of those who’re better informed, but I’m certainly not aware of anecdotes of scientific fraud to nearly the extent that, e.g., I’d been aware of anecdotes of priestly pedophilia before more actual data began coming out.

  46. #47 rork
    August 9, 2011

    @70: “Basic science research, by contrast, has very few innocent bystanders”.
    Those who get away with more cheating are rewarded with money and high-visibility papers, while their more honest counterparts aren’t. Review the Potti/Nevins/Duke story. Also: they actually had clinical trials running, though perhaps not leading to direct harm of the patients. Their inquisitors at NCI and MD Anderson and other places got to waste lots of time looking into the matter too. The institutional response at Duke was and is very interesting (and complicated), and leads to the question of what role or accountability the larger institution has. All of that might have been avoided by reviewers or editors saying “I can’t follow your methods, data or results”. We often don’t demand that there are enough data and methods to be able to check the results even just in principle. When mathematicians review math, I don’t think they return opinions like “well, it might be correct”, but in biomedical sciences that’s just what we often do. We could do a better job, but I admit I can’t answer the question of exactly how much effort and money should be spent to achieve that (and who pays).

  47. #48 JakeS
    August 12, 2011


    Re: Sloppy peer review: True, but tangential to the present discussion. What you describe would IMO be inexcusable even if everything about the paper was completely kosher and the conclusion happened to be true.

    Re: Funding: Dedicated fraud detection costs money. If you spend more money on fraud detection than the frauds would have otherwise flushed down the drain, then fraud detection is in red ink.

    Re: Prestige: I can live with getting upstaged by a crook, if the price of taking him down is that less real work gets done. (Of course, if fraudbusting saves more than it costs, go for it. I just don’t think it will.)

    Re: Institutional response: “Interesting” and “complicated” are not terms that I like to see in the same sentence as “institutional response to fraud” 😉

    Re: Clinical trials: That is why I’d focus on improving IRBs and similar review structures. Of course it is unethical to conduct clinical trials and then render them pointless by fiddling with the data.

    But from a practical perspective, I believe you will catch more unethical clinical trials by strengthening the safeguards against trial designs that, e.g., are too weak to actually test the hypothesis advanced. Those are almost as useless as ones where the data is made up out of whole cloth, and likely to be more numerous (particularly if the oversight of clinical trials is adequately rigorous).

    – Jake
