Science isn't perfect, but it's better than the alternative.


It often comes as a surprise to proponents of alternative medicine and critics of big pharma that I'm a big fan of John Ioannidis. Evidence of this can easily be found right here on this very blog just by entering Ioannidis' name into the search box. Indeed, my first post about an Ioannidis paper is nearly a decade old now. That post was about one of Ioannidis' most famous papers, published in JAMA and entitled Contradicted and Initially Stronger Effects in Highly Cited Clinical Research. It was a study suggesting that at least one-third of highly cited clinical trials may either be incorrect or show a much larger effect than subsequent studies, which settle down to a "real" effect. Of course, this was nothing more than the quantification of what clinical researchers instinctively suspected or knew: that the first published trials of a treatment not infrequently report better results than subsequent trials.

In other contexts, this phenomenon has also been called the "decline effect." Possible reasons for discrepancies between initial results and later trials include publication bias (positive studies are more likely to see publication in high-impact journals than negative studies) and time-lag bias (which favors the rapid publication of interesting or important "positive" results). Also, high-impact journals like JAMA and NEJM are always on the lookout for "sexy" findings, findings likely to have a strong impact on medical practice or to challenge present paradigms, which may sometimes lead them to overlook flaws in some studies or to publish pilot studies with small numbers. In any case, I was not kind to a certain blogger who misinterpreted Ioannidis' study as meaning that doctors are "lousy scientists." Of course, lots of people misinterpret Ioannidis' work, particularly alt med cranks, as they did with his most famous (thus far) paper, entitled Why Most Published Research Findings Are False.

Why do alt med advocates find it surprising that I'm a huge fan of John Ioannidis? The reason, of course, is that Ioannidis has dedicated his life to quantifying where science goes wrong, particularly in sciences related to medicine, such as clinical trials, biomarkers, nutrition, and epidemiology. He pulls no punches. Alt med aficionados often labor under the misconception that proponents of science-based medicine are somehow afraid to examine the flaws in how science operates, that we circle the wagons whenever "brave mavericks" like them criticize science. Of course, the reason they criticize science is that it doesn't show what they think it shows. They also assume that proponents of SBM will react to criticism of their discipline the same way that, for instance, homeopaths react to criticism of homeopathy. In general, we don't. That's because science embraces questioning. Such questioning is baked into the very DNA of science. Oh, sure, scientists are human too and sometimes react badly to criticism, but we usually manage to shake it off and then seriously consider critiques such as those Ioannidis provides.

That's why I was immediately drawn to a recent interview with John Ioannidis by Julia Belluz, John Ioannidis has dedicated his life to quantifying how science is broken, a title that has the great virtue of being true. It's a fascinating read and provides insight into the mind of perhaps the greatest currently publishing analyst of the scientific method as it is actually practiced. Part of it also shows how prescient Ioannidis was a decade ago when he published the article describing how most published research findings are false:

Julia Belluz: The paper was a theoretical model. How does it now match with the empirical evidence we have on how science is broken?

John Ioannidis: There are now tons of empirical studies on this. One field that probably attracted a lot of attention is preclinical research on drug targets, for example, research done in academic labs on cell cultures, trying to propose a mechanism of action for drugs that can be developed. There are papers showing that, if you look at a large number of these studies, only about 10 to 25 percent of them could be reproduced by other investigators. Animal research has also attracted a lot of attention and has had a number of empirical evaluations, many of them showing that almost everything that gets published is claimed to be "significant". Nevertheless, there are big problems in the designs of these studies, and there’s very little reproducibility of results. Most of these studies don’t pan out when you try to move forward to human experimentation.

Even for randomized controlled trials [considered the gold standard of evidence in medicine and beyond] we have empirical evidence about their modest replication. We have data suggesting only about half of the trials registered [on public databases so people know they were done] are published in journals. Among those published, only about half of the outcomes the researchers set out to study are actually reported. Then half — or more — of the results that are published are interpreted inappropriately, with spin favoring preconceptions of sponsors’ agendas. If you multiply these levels of loss or distortion, even for randomized trials, it’s only a modest fraction of the evidence that is going to be credible.
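To make the multiplication Ioannidis describes concrete, here's a minimal back-of-the-envelope sketch in Python. The roughly-one-half figures come straight from the quote above; treating the three stages as independent filters is my own simplifying assumption, purely for illustration:

```python
# Rough sketch of Ioannidis' "multiply the levels of loss" argument.
# The ~50% figures are from the interview quoted above; treating the
# stages as independent filters is a simplification for illustration.

published = 0.5          # fraction of registered trials that get published
outcomes_reported = 0.5  # fraction of prespecified outcomes actually reported
interpreted_fairly = 0.5 # fraction of published results interpreted without spin

credible = published * outcomes_reported * interpreted_fairly
print(f"Credible fraction of the registered evidence: {credible:.1%}")  # -> 12.5%
```

Even with generous rounding, only about an eighth of the registered evidence survives all three filters, which is what "only a modest fraction" means in practice.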

This sort of problem is what initiatives like the Food and Drug Administration Amendments Act of 2007 (FDAAA, in the US) and Alltrials.net (primarily in Europe) are intended to correct. The FDAAA mandated that enrollment and outcomes data from trials of drugs, biologics, and devices must appear in an open repository associated with the trial's registration, generally within a year of the trial's completion, whether or not the results have been published. Of course, the FDAAA hasn't been enforced as it should be, with a lot of results not being reported in a timely fashion, but it's a start. In any case, since Ioannidis first made his big splash, there's been a lot of research validating his model and showing that a lot of what is published has flaws. The question is: How do we do better?

One thing Ioannidis suggests is what's already happening: More post-publication review. He notes that there are two main places where review can happen: Pre-publication (peer review) and post-publication. How to improve both is a big question in science, as Ioannidis emphasizes in response to the question of how to guard against bad science:

We need scientists to very specifically be able to filter [bad] studies. We need better peer review at multiple levels. Currently we have peer review done by a couple of people who get the paper and maybe they spend a couple of hours on it. Usually they cannot analyze the data because the data are not available – well, even if they were, they would not have time to do that. We need to find ways to improve the peer review process and think about new ways of peer review.

Recently there’s increasing emphasis on trying to have post-publication review. Once a paper is published, you can comment on it, raise questions or concerns. But most of these efforts don’t have an incentive structure in place that would help them take off. There’s also no incentive for scientists or other stakeholders to make a very thorough and critical review of a study, to try to reproduce it, or to probe systematically and spend real effort on re-analysis. We need to find ways people would be rewarded for this type of reproducibility or bias checks.

I've often wondered how we can improve peer review. Most people who aren't scientists don't understand how it's done. Usually an editor tries to get scientists with expertise in the relevant science to agree to review a manuscript, generally two or three reviewers per manuscript, and those reviewers have to find time to squeeze in the work. Sometimes senior scientists who agree to review a paper will ask a postdoc to review it for them. It's not an ideal system, and the scientists who do peer review are all unpaid. They volunteer, or are asked and agree to review papers, without recompense. Often the academic editors (as opposed to the permanent editorial staff responsible for getting post-review manuscripts ready for publication) are also unpaid. Basically, we scientists tend to do peer review out of a sense of obligation, because it's considered part of our duty to science, part of our job. All in all, it's a very ad hoc system. Given these constraints, it actually works pretty well most of the time.

However, as Ioannidis notes, there are flaws, in particular our inability as reviewers to analyze the primary data. Frankly, I wouldn't want to have to analyze the primary data of all the manuscripts I review over the course of a year. It would be far too much work, and I'd have to stop reviewing manuscripts. Indeed, I was quite annoyed at the last manuscript I reviewed, which had 13 (!) dense figures' worth of data in its supplemental data section in addition to the 6 figures in the paper itself. Going through supplemental data sections, where journals now encourage scientists to dump all the data they don't want to include in the manuscript, has become downright onerous. As much as I agree with Ioannidis that it would be very good for peer review if we could find a way to reward scientists for such activities, I have a hard time envisioning how, under the current financial model of science, this could ever happen.

I do agree with this:

We need empirical data. We need research on research. Such empirical data has started accruing. We have a large number of scientists who want to perform research on research, and they are generating very important insights on how research is applied or misapplied. Then we need more meta-research on interventions, how to change things. If something is not working very well, it doesn’t mean that if we adopt something different that will certainly make things better.

Thus far, Ioannidis has been very good at identifying problems in science. What hasn't (yet) followed are evidence- and science-based strategies for what to do about them and how to improve. What I do have a bit of a problem with is Ioannidis' other suggestion:

So if you think about where should we intervene, maybe it should be in designing and choosing study questions and designs, and the ways that these research questions should be addressed, maybe even guiding research — promoting team science, large collaborative studies rather than single investigators with independent studies — all the way to the post-publication peer review.

"Team science" has become a mantra repeated over and over and over again as though it will ward off all the evil humors that have infected the methods and practice of science, and there's no doubt that for certain large projects team science is essential. Team science is, however, quite difficult. Whether it adds value to scholarship, training, and public health remains to be seen, and it tends to foster an environment in which the rich get richer, leading to a few leaders with a lot of followers. It's labor-intensive and often conflict-prone. Team science also poses a significant challenge (and risk) to young investigators trying to establish a distinct professional identity. Moreover, it is becoming clear that investments in team science are not as cost-effective as advertised and, indeed, might even be the opposite. Hopefully, Ioannidis's group is working on methods of evaluating when team science is useful and when it is not.

I do like the idea of post-publication peer review as well, but that's an even more difficult sell than pre-publication peer review. After all, I get credit when I fill out my yearly activity report for my peer review activity, both for scientific manuscripts and for grant review. I don't get credit for post-publication review, where I read an already published paper and comment on it. Heck, blogging for me often represents an exercise in post-publication peer review, wherein I eviscerate bad papers and comment on good papers that catch my interest. Fortunately, there are now more journals permitting comments on their websites, and there are now websites like PubPeer, which exist to facilitate post-publication peer review, although this is not without peril.

Science is not perfect. It has problems. The limitations and problems inherent in how it is currently practiced point to strategies to improve it. Ironically, it will be the rigorous application of the scientific method to science itself that will likely lead to such improvements. In the meantime, if you want me to illustrate the difference between science-based medicine and "complementary and alternative medicine" (CAM), I can do it. Just ask yourself this: Who is the John Ioannidis of CAM? Arguably, it was Edzard Ernst. Now ask yourself this: How did the CAM community react to him compared to how the community of medical scientists reacts to John Ioannidis? I'll spell it out. Ernst was attacked, rejected, and castigated. Ioannidis is, for the most part, embraced.

That should tell you all you need to know about the difference between CAM and science-based medicine.


I'm going to peer review Orac's article about Julia Belluz's article on John Ioannidis's article about peer review.

Oh wait, I'm not Orac's peer.

One other issue with peer review is that it can be rigged. Many journals ask you to list potential peer reviewers when you submit your paper. In this way you can direct your submitted paper to friends who will be more lenient.

Another issue at some journals is a long delay in review. In grad school, my major professor had me write a paper that wasn't quite worthy of publication. However, he had been a diligent peer reviewer. The peer reviews came back "reject"; however, the editor went ahead and published the manuscript with some revisions.

Not precisely on topic, but FYI Barbara Loe Fisher is going to be on Crosstalk, a radio call-in show produced by VCY America (a Christian station originating in WI). It will be airing at 2 PM CST. The show itself is in limited locations, but it can also be accessed on the internet through VCY's website. Supposedly the majority of the show will be devoted to taking calls, so have fun! The calls are screened, so you may want to tone it down when you first call, before you let her have it.

By General Factotum (not verified) on 18 Feb 2015 #permalink

Not precisely on topic? More like not on topic at all. You could have posted this on any recent vaccine post, but you posted it here, on one post that has nothing to do with vaccines per se interspersed among several that are all about vaccines. Come on, man.

My apologies. I did not know how many people would be checking the older posts, and I wanted as many people as possible to see this so that she would be called to account. Please feel free to delete it, and I will repost to a more pertinent entry.

By General Factotum (not verified) on 18 Feb 2015 #permalink

To bring this back on topic...the challenge with improving the review process was alluded to throughout this post. It's really a lack of incentives to do so.

That isn't to suggest that scientists are awful people focused only on their own work at the expense of everyone else. Rather, it's just the human reality of having only so many hours in a day to do your own job. I can't imagine trying to manage my own research, peer review papers pre-publication, and then spend time dissecting and analyzing papers post-publication.

When you have only a limited segment of time for the latter two activities, you're going to pick the things that would probably benefit you career-wise, or perhaps do them as favors for colleagues or friends. I'm not a published scientist, but I can't imagine that dynamic helps matters either.

I don't mind if comment threads drift off topic after they've gotten a bit longer. It's what comment threads almost inevitably do anyway, and trying to prevent it is more work than I'm willing to put in. I do get a little testy when someone inserts something way off topic in the first ten or so comments (or even worse, in the first three) because that's how comment threads get derailed or hijacked right from the beginning.

At Uni we had one session a week where we had to review a paper purely to point out any flaws. To start with, we all thought the papers were fine, but as our tutor pointed out mistake after mistake, we began to see things differently. Probably one of the most useful sessions you could have.

Academic science is a mentor/trainee pyramid scheme, with only a tiny fraction of graduate students ever achieving independence as an investigator. I'm only one small step up from the bottom of this dismal pyramid myself on the basic science side of medical research. Despite a decade of collective hand-wringing, time-to-first-R01 continues to increase, as does young investigator frustration (reviewed last month in PNAS). I pessimistically expect that any adverse effects of the changes proposed – more post-pub review, data replication, team science – will disproportionately fall on postdoctoral fellows and younger faculty. Any burden on investigators' time that is not data and manuscript generation just makes the hill steeper, provided the requirements for independent funding remain as they are. This is not to say these ideas are bad, but I think they do run the risk of further increasing the career infant mortality rate.

By CTGeneGuy (not verified) on 18 Feb 2015 #permalink

@CTGeneGuy #9 - and that's kinda my point above. Adding more review seems like a great idea in theory, but what would likely happen is that you'd only do the review that you're either forced to do and/or might benefit you career-wise, or tasks you may do to help out a friend. I don't know that "more" is the answer. Perhaps "better and more efficient", although I do not presume to have any suggestions in that regard.

Many of the comments made here about SBM are also applicable to other scientific fields. I'm a physicist by training. When Orac says, "high-impact journals like JAMA and NEJM are always on the lookout for 'sexy' findings," he's saying something equally true of the journals considered most desirable for publishing physics results: Nature, Science, and Physical Review Letters. The first two are notorious as "glamour mags," publishing many results that are "sexy" but wrong, and PRL has also been known to publish "sexy" but wrong articles.

I share the opinion that the peer review system is fragile. ISTM that there is a classic example of what our economics colleagues call a perverse incentive: there is no explicit reward for doing the job well, and in fact your "reward" for doing it well is to be asked to do more of it, while editors learn to avoid reviewers who consistently are late and shoddy/superficial with their reviews. And of course reviewers don't catch everything. I have read several papers where my reaction was, "How the #%*^ did that get past the referees?"

And that's not even getting into the issue of fraudulent research. There are strong incentives for doing it if you can get away with it, even though getting caught faking research is generally a career-ending move. Anti-plagiarism software and search engines have made it easier to detect certain kinds of research fraud, but others are harder to detect. ISTM that biomedical research is particularly prone to fraud, because the stakes are particularly high compared to other fields, but it may be that there is so much more biomedical research than in other fields that the percentage of fraudsters is roughly the same. I know that physics is not immune. And of course a really ambitious fraudster will try to publish in high-impact journals--most of Jan-Hendrik Schön's retracted papers were published in Nature or Science.

By Eric Lund (not verified) on 18 Feb 2015 #permalink

"Science is not perfect. It has problems."

Orac, this is a f--- euphemism!
I'm a scientist, I love science, it's the only thing that matters to me, but today science is sick. You just have to look at Retraction Watch every day to see established professors with major papers retracted for reproducibility problems or outright cheating (not in some predatory journal, but in Science or Cell, the very highest tier of science). Ioannidis is right that the only thing we can do is post-publication peer review; it can staunch this bleeding for a while and help a lot.

But as always, when an awful lot of money is in the balance, along with careers and university reputations, every retraction or correction becomes a pain (lawsuit threats, etc.), so it will never be enough. Worse, the damage is done: how many citations does a fraudulent or sloppy paper accumulate? How many more papers does it affect, guiding other scientists in the wrong direction and wasting huge amounts of time and money?

Indeed, you can do meta-research; I'm all for that, really, I could do it all day, but which journal is going to publish my work? No groundbreaking finding, no new drug target, no biomarker or fancy neuroimaging correlated with cognition... PLOS ONE? Yeah, sure, I don't give a crap about impact factor, but university directors look only at that.

The very foundation of the scientific system is crumbling. I'm not Ioannidis, I'm no expert, so I don't see what to do. But I think we have to rebuild the system into something better adapted to the explosion in the number of scientists (emerging countries are launching real research systems of their own). Think about it: more scientists, the same money (maybe less), more selection, more impact-factor craving, more cheating and battling for grants. Our old system was not built for this. We can try to patch it with post-publication peer review, but it won't stop, because the benefit of getting a paper into a glamour journal far outweighs the first aim of science: trying to understand the universe.

Well, I'm in a sad mood, so maybe things are not that bad... But still.

On a lighter note, the hero of CAM research, Edzard Ernst, is releasing a book: A Scientist in Wonderland. It looks very interesting, and even more interesting are the awful reviews (using lies and defamation) and ad hominems that pro-CAM people have sent his way... I wonder if Ioannidis gets the same? I guess not. =)

+ I just saw Eric Lund's post: I agree with you completely.

@Mike

On This Week in Virology, they recently touched on some of the ways that peer review can be gamed, for instance submitting bogus names/emails as suggested reviewers. Those email addresses then go to accounts set up by the paper's author. A way to fight this is for editors to check out the names to see what papers those "reviewers" have authored, as well as to ensure that the email addresses provided go to the actual person recommended, and not to some dummy account set up by the paper's author.

A recent example of lazy editors approving inappropriate reviewers can be seen right from the anti-vaccine movement with Brian Hooker's retracted paper.

Fascinating stuff: scientific research on scientific research would seem so obvious, yet John Ioannidis is considered a bit of a maverick. It's a loaded word, isn't it? With a philosophical spin rooted in a kind of contradiction: it's good if our processes allow for mavericks, as that shows the process is not fascist, yet there must be some issues with those processes if mavericks prove to have value, as that means something set the herd off on the wrong path from which the maverick diverged.

Part of the problem of talking about 'science' is that the noun embraces too many things: the work products of science (knowledge); the methods of science; the philosophy embedded in the methods themselves; the communities of scientific researchers; and, last but not least, the institutions within and through which contemporary science is conducted. Thinking about Ioannidis should lead us to think about all these things, and the thoughts will be mixes of positive and negative varying across the board.

If we read "Who is the John Ioannidis of CAM?" as a rhetorical-question rebuke to alties of 'we have an Ioannidis and you don't', I'd broaden the point to address all critics of science more simply with, "we have an Ioannidis." And then I'd ask:
1) What does the existence of Ioannidis reveal as positive values present in each of those different meanings of "science"?
2) What does the existence of Ioannidis reveal as shortcomings present in each of those aspects of "science"?
3) How should we understand/interpret his research: which component parts or 'sub-assemblies' are broken, which aren't, and why?
4) (always) What is to be done – not just in terms of policy, but in setting an agenda for further intellectual inquiry?

And I'd suggest that Ioannidis' work deploys the methods of science in ways that push the communities of scientists to talk about things individual scientists don't like to talk about. Which imho is very positive. But even more positive, his work provides an avenue for scientists to talk about this stuff with scholars OF science on some kind of common ground that might get around the pissy obstacles that have short-circuited dialogue in the past.

Which is to say, I don't see how scientists can address the stuff Ioannidis raises without doing 'science studies': history of science, philosophy of science, sociology of science, politics of science, economics of science, etc. And one of the reasons scientists don't like to do this stuff is that they're not very good at it, whether by skill or temperament – even if we set aside their hostility to being the looked-at rather than the lookers.

So _I_ would say that in order to grapple with Ioannidis, scientists are going to need folks who not only like doing that stuff but are good at it — oh, no! the humanities! gaaack! But if scientists would balk at "need", I'd hope they'd at least consider engagements and dialogs potentially useful.

(Well, that was all rather 'meta-'. The topic calls for some 'brass tacks' methinks. But that should be a separate comment because... well, you know.)

Of course, lots of people misinterpret Ioannidis’ work, particularly alt med cranks

This is my favorite recent example. It's just plagiarism from Sayer Ji, but with an even dumber hed that reveals they didn't bother to read the PLoS One paper* – the negative predictive value would apply.

there are now websites like PubPeer, which exist to facilitate post-publication peer review, although this is not without peril

That docket, FWIW, lives here. Unfortunately, the site is so broken that all the event links go to the January 15 scheduling order.

* Not that Ji did, either, as he failed to note that the Atlantic quote isn't referring to the PLOS entry.

Just ask yourself this: Who is the John Ioannidis of CAM?

Another good question is this one: What would John Ioannidis have to say about the research and review done in CAM?

I once had an altie throw Ioannidis at me as if he were on their side and a skeptic of science itself. But he wants a stricter, more focused, more rigorous application of scientific standards. That's not exactly advocating that we resort to anecdotes and personal trial and error. My guess is that if he thinks mainstream medical science is bad, he'd think alternative medical 'science' is much, much worse. Assuming otherwise is childish, like thinking you can now get away with knocking over a gas station because your goody-two-shoes brother knocked over a vase.

It's as if they visualize Ioannidis throwing up his hands and declaring that anything goes. Science is broken: let's try magic!

Going through supplemental data sections, where journals now encourage scientists to dump all the data they don't want to include in the manuscript, has become downright onerous.

The Journal of Neuroscience bid farewell to supplemental material back in 2010 both for this reason and because it "encourages excessive demands from reviewers."

Underlying data sets and code are one thing, but there's a reason some journals have long had supplement series (in this case, the supplement has a higher Impact Factor than the main journal).

A way to fight this is for editors to check out the names to see what papers those “reviewers” have authored

Don't forget the latest pay-to-play offering from the same sort of publishing brain trust that thinks DOIs are a remarkable technological achievement, ORCID.

Damn, this is scary: I actually agree with Orac on something!

By Chris Shaw (not verified) on 18 Feb 2015 #permalink

I kind of think of it more as - science is fine, science works. The way it's implemented is flawed, because we're flawed. Ioannidis himself is using science in order to show this!

To me, it's all about continually improving the implementation of science, to look at our own flaws and find more and better ways to control for them.

By Roadstergal (not verified) on 18 Feb 2015 #permalink

Were it not for some crazy fool using the scientific method to try and figure out the age of the Earth, we might still be pumping tons of Pb into the atmosphere.

@CTGeneGuy - yeah, there is a reason I did not pursue my PhD in Epi. I got my fill of academia doing my graduate work. Post-graduation, you are essentially entering indentured servitude, slaving away under a senior researcher who is generally dumping an inordinate amount of their own work onto their grad students, who are being paid minimum wage, usually for 20 hours a week, while working 60. I know many people personally who put in years and never got their PhDs. Last year I ran into a lady I went to grad school with; she had spent almost 10 years working on her PhD and never got it. I don't even want to know how much debt she racked up doing that. I have even peer reviewed papers as a grad student, so yeah, that job usually falls to someone other than the senior researcher who was asked to do it.

@General Factotum

Thank you! I appreciate knowing about this and would likely not be looking at other blog entries on the day this airs.

@Orac, how does a simple notification derail the conversation? I would think you would be delighted to give commenters a chance to phone in on such a show. I know, it's YOUR blog, not mine, but sometimes I think you need a tune-up. I've coughed up a lot of money, for a retiree, to come and hear you blink and beep in NY in April (hotels!), so it's not that I don't appreciate your work.

By Cheesehead (not verified) on 18 Feb 2015 #permalink

A way to fight this is for editors to check out the names to see what papers those “reviewers” have authored, as well as to ensure that the email addresses provided go to the actual person recommended, and not to some dummy account set up by the paper’s author.

A conscientious editor does that already, and often will choose only one reviewer from the author's list of suggestions, plus one other from elsewhere. This is somewhat easier at specialty journals, where the editor is likely to know (or at least know of) most of the significant researchers in the field. But even editors of more general journals can do it. An obvious place to look is the reference list: papers written by people who are not co-authors of the present manuscript or their close collaborators.

The National Science Foundation requires proposers to supply, with their biographical information, a list of people with whom they have collaborated in the last 48 months, as well as the names of their advisors (Ph.D. and postdoc) and students. This list has the obvious function of telling the program manager who should not be asked to review the proposal. However, NASA has no similar requirement, nor does the Air Force Office of Scientific Research. I don't know about other funding agencies, because those are the only three I have proposed to thus far in my career.

By Eric Lund (not verified) on 18 Feb 2015 #permalink

Dear Dr. Shaw,

Even a broken clock is right twice a day.
Congratulations on using up half of your share of accuracy for this 24-hour period.
On everything else you publish and pontificate on, you are clearly and completely wrong. Use your one remaining "right" credit for the day wisely.

By J.W.Chaplin (not verified) on 18 Feb 2015 #permalink

Wouldn't you know that Ioannidis is often cited by the loons I survey as evidence that SBM is intractably compromised, a den of thieves and liars lying furiously, of no value whatsoever...
as if he would approve of the bs research they promote.

OT/ In other news:
the great hacking scandal @ PRN continues hilariously-
it seems that following a 5 hr vaccine special, nefarious black ops hacked all of PRN's crapola articles and thrice-encrypted show archives or suchlike, disabling any broadcasting or public access for nearly a week.
After much frantic scurrying about and histrionics, it was announced that the precious archives were miraculously salvaged. And broadcasts commenced.

Interestingly, the new archives include play figures per broadcast and totals. They appear to be in the thousands (1,000-10,000) per show, despite the hoary host's brag that he has millions of followers.
So it may be possible to estimate his following:
a small number listen via land-based stations (low 1000s? cited as the smallest source), others listen by phone (judging by the number of minutes discussed, low 1000s?), and some may listen live via computer.

I wonder if the hacker is one of Orac's minions?

By Denice Walter (not verified) on 18 Feb 2015 #permalink

I was thinking about peer-review problems last June and sent an email to this effect:

"Peer review" is the practice in the science community of requiring professional independent review and approval of an author's paper before it may be published in one of their prestigious journals.

Evolutionists frequently attack the work of their critics/challengers by pointing out that the anti-evolutionists' work has not passed muster with the community, has not been published in those prestigious, peer-reviewed journals.

What the evolutionists fail to mention is that all of those “prestigious” science journals are controlled by evolutionists, and those peers aren’t going to OK anything that challenges their Darwinian dogma. The critics will never get past the group-think gestapo and see their work published in those “prestigious” journals. [Also noteworthy is the fact that some peer-reviewed, published papers in various scientific fields have later been revealed to be wrong or even fraudulent.]

Anyway, maybe the critics of evolution should take heart. Maybe not passing (or even having) peer-review could become a badge of honor:
http://theconversation.com/hate-the-peer-review-process-einstein-did-to…

By See Noevo (not verified) on 18 Feb 2015 #permalink

the group-think gestapo

I was expecting the Spanish Inquisition.

By herr doktor bimler (not verified) on 18 Feb 2015 #permalink

Anyway, maybe the critics of evolution should take heart. Maybe not passing (or even having) peer-review could become a badge of honor: [Einstein!]

I'm going with this one (Disqustink).

Dr. Shaw: “Damn, this is scary: I actually agree with Orac on something!”

Then you would not mind correcting your scaremongering on certain vaccines?

I, for one, would be interested in whether Tomljenovic has continuously been some sort of terminal postdoc, or whether there was a break.

Science, or actually the way doctors can interpret it, can often be wrong, sometimes with pretty bad consequences for society.

E.g., advice that fat is bad for you – turns out wrong. Statins don't have many side effects – turns out wrong. (NICE want 40% of the British public on statins!!) This is truly scary.

NICE want 40% of the British public on statins!!

I am certain that Philip Hills can corroborate this construction of reality.

Or 40% of the Brits have hyperlipidemia from crappy diet choices.

Did you guys see that? I think it was an elephant, just walked in the room.

Narad

NICE have changed their recommendations. Previously, NICE's advice to doctors was to prescribe statins if a patient had a 20% risk of cardiovascular disease. This has been halved to a 10% risk. About 17 million people in the UK have a 10% risk of CVD.

That's about 40% of the eligible population.

Fergus,

E.g., advice that fat is bad for you – turns out wrong.

If you are referring to the Cambridge University study that came out last year, many scientists would disagree; the association between saturated fats and cardiovascular disease is well-established. Also, too many calories is bad, and fat is a very efficient way of consuming too many calories, especially when combined with sugar, which we are hard-wired to find pleasurable.

Statins don’t have many side effects – turns out wrong. (NICE want 40 % of the British public on statins!!). This is truly scary.

Where did you get that idea from? The latest evidence I have seen is that most reported side effects are not due to the drug at all (according to a paper co-authored by Ben Goldacre, who is no friend of 'Big Pharma'):

Only a small minority of symptoms reported on statins are genuinely due to the statins: almost all would occur just as frequently on placebo. Only development of new-onset diabetes mellitus was significantly higher on statins than placebo; nevertheless only 1 in 5 of new cases were actually caused by statins.

Statins reduce deaths in those without existing CVD by 0.5% and in those with existing CVD by 1.5%, which I think is pretty good for an increased risk of diabetes of only 0.5%.

By Krebiozen (not verified) on 19 Feb 2015 #permalink
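To put those absolute risk reductions into more intuitive terms, here is a quick number-needed-to-treat (NNT) sketch; the 0.5% and 1.5% figures are taken from the comment above, and NNT is simply the reciprocal of the absolute risk reduction:

```python
# Number needed to treat (NNT) = 1 / absolute risk reduction (ARR).
# The ARR figures come from the comment above; this is only the
# standard textbook conversion, not new data.

def nnt(arr: float) -> float:
    """Number of patients treated for one of them to benefit."""
    return 1 / arr

print(f"Primary prevention (ARR 0.5%): NNT ~ {nnt(0.005):.0f}")    # ~200
print(f"Secondary prevention (ARR 1.5%): NNT ~ {nnt(0.015):.0f}")  # ~67
```

In other words, roughly 200 people without existing CVD, or about 67 with it, would need to take statins for one death to be prevented over the period studied.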

Dammit - brain not currently functioning in optimum state. Having finally mastered the art of not screwing up blockquotes I now can't seem to get a link right. I also probably meant "too many calories are bad", or is "too many calories" a thing I can refer to in the singular?

By Krebiozen (not verified) on 19 Feb 2015 #permalink

Fergus,

NICE have changed their recommendations. Previously NICE advice to doctors was, prescribe statins if you have a 20% risk of cardiovascular disease. This has been halved to a 10% risk.

This is true.

About 17 million people in the UK have a 10% risk of CVD.
That’s about 40% of the eligible population.

This is not true. Firstly, according to NICE (op cit), "Up to 4.5 million people could be eligible for statins under the lower threshold". Secondly, current advice is that "preventative lifestyle measures are adopted" before starting medication. This is what I did when my total cholesterol crept up to put my CVD risk over the new threshold - I lost a bit of weight and it came down. No statins required.

By Krebiozen (not verified) on 19 Feb 2015 #permalink

I also probably meant “too many calories are bad”, or is “too many calories” a thing I can refer to in the singular?

The singular intuitively seems correct to me, actually. I suppose "eating too many calories is bad" would technically be more correct.

Krebiozen

"Reduce death by 1,5% is pretty good?"

Yes, what you say is true after 5 years' use. However, after a further 6 months, all of those 1.5% are dead. So what you should say is that 1.5% of people, after 5 years on this drug, will have 6 months more life. And 92% find no effect. Not really so good now.

Fergus,

Yes, what you say is true after 5 years' use. However, after a further 6 months, all of those 1.5% are dead. So what you should say is that 1.5% of people, after 5 years on this drug, will have 6 months more life. And 92% find no effect. Not really so good now.

Citation needed. I don't see how that could possibly be true. Those are absolute risks - are you seriously suggesting that 1.5% of people with CVD on statins die after between 60 and 66 months on the drugs? Also, that 1.5% of people would not all have suddenly died at the five year mark, so even if what you claim is true (which I don't believe it is), they would have had more than 6 months of extra life, more like 2.5 years extra.

This study looked at post-PCI patients on statins for a median of five years and found a 50% reduction in all-cause mortality.

By Krebiozen (not verified) on 19 Feb 2015 #permalink
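The "more like 2.5 years extra" point can be checked with a toy calculation. Assuming, purely for illustration, that the deaths a drug prevents would otherwise have occurred uniformly across the five-year trial window:

```python
# Toy check of the "more like 2.5 years extra" reasoning above.
# Assumption (for illustration only): deaths averted by treatment would
# otherwise have occurred uniformly over the 5-year trial.

trial_years = 5.0

# A death averted at time t yields at least (trial_years - t) extra years
# of life within the trial alone. With t uniform on [0, 5], its mean is 2.5:
mean_time_of_averted_death = trial_years / 2
extra_life_within_trial = trial_years - mean_time_of_averted_death
print(f"Average extra life within the trial alone: {extra_life_within_trial} years")  # 2.5
```

So even before counting any survival beyond the end of the trial, the average gain for those whose deaths were averted is about 2.5 years, not 6 months.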

Statins are looking pretty good for primary CVD prevention too. The authors of this paper estimated the effects of everyone in Germany starting to take statins at age 40:

It revealed that average life expectancy would increase from 76.9 to 79.6 years (2.7 y) and from 82.3 to 84.3 (2.0 y) in women and men, respectively. The annual costs would be about 180 Euros, corresponding to total treatment costs of slightly more than 7,000 Euros per individual or costs per life year saved between 3,000 and 4,000 Euro.

With the amount of evidence now available about lipids, statins and CVD I doubt very much that we will find that it is all wrong.

By Krebiozen (not verified) on 19 Feb 2015 #permalink
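The cost arithmetic in that quote is easy to reproduce. A rough sketch, with one assumption of my own: treatment running from age 40 for about 40 years, consistent with the life expectancies quoted:

```python
# Rough reproduction of the cost-per-life-year figures quoted above.
# Assumption (mine, for illustration): ~40 years of treatment starting
# at age 40, consistent with the life expectancies in the quote.

annual_cost_eur = 180
treatment_years = 40
total_cost = annual_cost_eur * treatment_years  # 7,200 EUR ("slightly more than 7,000")

for group, life_years_gained in [("women", 2.7), ("men", 2.0)]:
    cost_per_life_year = total_cost / life_years_gained
    print(f"{group}: ~{cost_per_life_year:,.0f} EUR per life year saved")
# -> women: ~2,667 EUR; men: ~3,600 EUR, roughly the 3,000-4,000 EUR quoted
```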

Krebiozen, I understand your incredulity, I really do; I was the same. However, these figures are from the Heart Protection Study (HPS), placebo vs. statin, published in The Lancet. Big, big study.

" are you seriously suggesting that 1.5% of people with CVD on statins die after between 60 and 66 months on the drugs?"
Nearly. Actually, 1.5% die after one year. The 6 months is the average increase in life expectancy for that 1.5%. Bear in mind that 8.4% on statins have already died in the first 5 years (as opposed to 9.2% on placebo).

These are the figures that are used to promote statins, which is amazing in my opinion. I will need to read Ben Goldacre's piece on side effects. All I will say is that here in the UK the Cholesterol Treatment Trialists' (CTT) Collaboration did not look at all side effects, like muscle issues, only CVD and cancer rates. Raised blood glucose is one admitted effect, but I think more will become apparent as more and more people are put on statins.

Fergus,

Krebiozen, I understand your incredulity, I really do; I was the same.

I've been following this subject for almost 30 years now, and was fascinated by the so-called cholesterol skeptics for a while, so I have read a lot on both sides of the issue.

However, these figures are from the Heart Protection Study (HPS), placebo vs. statin, published in The Lancet. Big, big study.

I'm familiar with that study. It looked at secondary prevention of cardiovascular events and found an 18% reduction in coronary mortality, a 24% reduction in major vascular events and a 13% reduction in all-cause mortality. A further 4-year follow-up of the same patients found that "previous benefits persist during at least 4 years of posttrial follow-up".

” are you seriously suggesting that 1.5% of people with CVD on statins die after between 60 and 66 months on the drugs?”
Nearly. Actually, 1.5% die after one year. The 6 months is the average increase in life expectancy for that 1.5%. Bear in mind that 8.4% on statins have already died in the first 5 years (as opposed to 9.2% on placebo).

Where are you getting these figures from? In the HPS study 12.9% of statin patients died and 14.7% of placebo patients died during the 5-year study. No excess mortality was seen in the 4-year follow-up.

By Krebiozen (not verified) on 19 Feb 2015 #permalink

Fergus,
There is also this metaanalysis of statin use in 47,296 high risk patients over a period of 6.7 to 14.7 years that was published last year. They found:

Over the entire 6.7-14.7 years of follow-up, a significant reduction in the rates of all-cause mortality (relative risk 0.90, 95% confidence interval 0.85-0.96; P=0.0009), cardiovascular mortality (0.87, 0.81-0.93; P<0.0001) and major coronary events (0.79, 0.72-0.86; P<0.00001) was observed in favour of the original statin group. During 2-year post-trial period, further reduction in all-cause mortality (0.83, 0.74-0.93; P=0.001), cardiovascular mortality (0.81, 0.69-0.95; P=0.01) and major coronary events (0.77, 0.63-0.95; P=0.01) was observed among initially statin-treated patients.

That shows the reduction in mortality and cardiac events persists for much longer than 5 years. Incidentally, there was also no increase in cancer in statin patients, as has been claimed by some.

By Krebiozen (not verified) on 19 Feb 2015 #permalink

My figures were 2nd hand, so your figures trump mine. I don't disagree that statins have a small effect on CVD. I've still to be convinced that the small benefit is worth medicating so many people who won't gain any benefit, given the possibility that side effects have been minimised by statin supporters.

Recent changes in the assessment of dietary cholesterol risk actually support earlier use of statins. The effect of reducing LDL on morbidity and mortality has been repeatedly proven in studies of tens of thousands of patients. Since lowering dietary cholesterol doesn't significantly affect plasma LDL, yet lowering plasma cholesterol with statins has been shown to have clinical significance, statins should be initiated earlier, not later.

Thanks, Chris and JW Chaplin: every time I think there may be some hope for civil discourse on science (any science) on this and related sites, as opposed to rabid opinion, folks like you step up and make sure it can never happen. Ah, what would the blogosphere be without ad hominem shite that passes for argument?
Be well all. I'll check back in next time I need a laugh.
PS: You really do a discredit to Dr. Gorski: at least he puts his positions into a scientific framework. If you want to take on stuff you disagree with, at least try to do it with a modicum of actual discourse, not just slurs. The latter, as I hope you know, constitute a logical fallacy and are therefore meaningless. Not even funny, unlike the good Dr. Gorski, on occasion.

By Chris Shaw (not verified) on 19 Feb 2015 #permalink

Be well all. I’ll check back in next time I need a laugh.

I guess Tomljenovic's career trajectory under Professor Shaw's training will remain shrouded in mystery.

Chris Shaw,

JW Chaplin was making a statement of fact, and Chris was asking you questions. Why did you read them otherwise?

Alain

@Chris Shaw:

Geez, what am I, chopped liver?

Did you ever find out how to diagnose cerebral vasculitis?
Oh forget it. How about looking up the definition of ad hominem?

Over the entire 6.7-14.7 years of follow-up, a significant reduction in the rates of all-cause mortality (relative risk 0.90, 95% confidence interval 0.85-0.96; P=0.0009), cardiovascular mortality (0.87, 0.81-0.93; P<0.0001) and major coronary events (0.79, 0.72-0.86; P<0.00001) was observed in favour of the original statin group.

Them thar's some decent-lookin' values.

Fergus,

My figures were 2nd hand, so your figures trump mine.

Beware, there's some very plausible 'cholesterol skepticism' BS out there. Some of it has some basis in fact, often echoes of doubts from the past, before the evidence was clearer.

You can find Steinberg's excellent interpretive history of the cholesterol controversy here, in five parts; the link is to the first part, and there are links to the subsequent parts at the bottom. If you are pushed for time, Part V, 'The discovery of the statins and the end of the controversy', addresses the development of statins and "traces the early studies that led to the discovery of the statins and briefly reviews the now familiar large-scale clinical trials demonstrating their safety and their remarkable effectiveness in reducing coronary heart disease morbidity and mortality". It's a fascinating tale that I highly recommend.

To be honest, I'm not entirely happy with the idea of medicating millions of healthy people, and would prefer to see them achieve a lower cholesterol through lifestyle changes. However, many people can't or won't do that. I worked with a biochemistry consultant for several years; she ran a weekly lipid clinic and had reached a state of deep cynicism about the number of her patients who were seemingly incapable of change, despite encouragement and warnings, and she would resort to prescribing statins. It's not ideal, but given the apparent safety of statins, it does seem a reasonable solution.

By Krebiozen (not verified) on 20 Feb 2015 #permalink

Narad,

Them thar’s some decent-lookin’ values.

They are indeed. Even in the very first clinical trials of statins the results were impressive: the Scandinavian Simvastatin Survival Study found a reduction of coronary heart disease deaths of 42% (P < 0.00001). These initial results seem to have stood the test of time, unlike other drugs as discussed in the OP. I don't think statins are a very good example of the 'Ioannidis effect' (to coin a phrase).

By Krebiozen (not verified) on 20 Feb 2015 #permalink

I should perhaps clarify that some people are unable to reduce their cholesterol despite making major lifestyle changes. I remember when I first worked in a clinical biochemistry lab 30-some years ago we did cholesterol and triglycerides once a week. The reagents were so corrosive we had to change all the tubing on the autoanalyser every time. Today my GP can get a fingerprick LDL/HDL in five minutes, but I digress.

There was a local family with familial hypercholesterolemia whose blood samples were instantly recognizable because they looked like strawberry milkshake, due to the excess lipids in them. I don't think any sort of lifestyle change would have helped them much.

Incidentally, back then we used flame photometry to measure plasma electrolytes, and serious lipemia like this was enough to make results inaccurate. Sodium and potassium are only dissolved in the aqueous fraction, so if you measured the sodium in the total volume you would significantly underestimate the real (clinically relevant) concentration. We would do an ether extraction to remove lipids from the sample in cases like this. These days we use ion selective electrodes that accurately measure the concentration in the aqueous fraction, so it's no longer a problem.

By Krebiozen (not verified) on 20 Feb 2015 #permalink
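For anyone curious, the measurement artifact Krebiozen describes (the "electrolyte exclusion effect," which produces pseudohyponatremia) is just a dilution problem. A toy illustration follows; the water-fraction numbers below are rough illustrative values of my own, not figures from the comment:

```python
# Toy illustration of pseudohyponatremia from lipemia, as described above.
# Sodium is dissolved only in the aqueous fraction of plasma. Methods that
# measure Na per unit of *total* plasma volume (as flame photometry did)
# underestimate it when lipid displaces plasma water.
# All numbers are illustrative, not clinical reference values.

na_in_plasma_water = 150.0  # mmol/L in the aqueous phase (clinically relevant)

for label, water_fraction in [("normal plasma (~93% water)", 0.93),
                              ("severely lipemic plasma (toy value)", 0.80)]:
    apparent_na = na_in_plasma_water * water_fraction  # whole-volume result
    print(f"{label}: apparent Na ~ {apparent_na:.0f} mmol/L")
# -> ~140 mmol/L for normal plasma, but ~120 mmol/L when lipid displaces
#    water: spurious "hyponatremia" even though the aqueous sodium is
#    unchanged. Ion-selective electrodes measure the water phase directly,
#    which is why they avoid the artifact.
```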

Dr. Shaw: " Ah, what would the blogosphere be without ad hominem shite that passes for argument?"

How is asking about the source of your funding an "ad hominem"? Do you not have any conflict of interest by getting funding and a trip to Jamaica from those who have beliefs that are counter to the scientific consensus?

Dear Dr. Shaw,
Hmm... Let me see. You used an IHC stain that was semi-selective for aluminum, then claimed that it was aluminum without proper controls. You injected into the scruff of the neck, just over the spine, as a surrogate for immunization in the deltoid muscle. You injected a tiny, tiny amount of aluminum hydroxide, under one one-hundredth of what we regularly use for priming experiments without adverse effect, and yet you report large behavioral defects that no one else sees. You apparently cannot do accurate statistical evaluations, and your attempts at retrospective analysis have botched controls or none at all.
No, I feel pretty solid in that your work product is excrement. No ad hominem is needed.

By J.W.Chaplin (not verified) on 20 Feb 2015 #permalink

Krebiozen

I've been looking at your links and I'm afraid I still need reassuring that statins are a good thing – all the papers quote relative risks, which I think are misleading. Indeed, most cancer help sites warn patients about relative risks and say to look at the absolute risk.

eg "Where are you getting these figures from? In the HPS study 12.9% of statin patients died and 14.7% of placebo patients died during the 5-year study. No excess mortality was seen in the 4-year follow-up."

I found the actual references for this (rather than 2nd-hand figures!) and your figures are correct. However, my claim still holds. There is an absolute decrease in risk of CVD of 1.8% after 5 years on the statin. If you extrapolate the data to year 6, you see that in the statin arm that 1.8% are now dead. So rather than saying 1.8% have had their lives saved, we should say 1.8% extend their lives by, on average, 6 months. Bearing in mind that 98.2% of people on statins have had no benefit, with possible adverse effects, I don't see how this is great medical management.

I enjoyed your passion, Orac, and I wonder if you have any evidence on how many drug trials outdo the placebo, ya know, the sugar pill? I'm also curious how the scientific world, who are obviously very bright, believe that putting toxic material like pharmaceutical drugs in our bodies... can ever make us healthy...?

I really appreciate this article.

Anyone can be a quack, but it takes integrity and humility to be a scientist who seeks the truth to the best of their ability instead of prestige, a paycheck, or a legacy.