"Science-Based Medicine 101": FAIL

Science-Based Medicine is a site we highly recommend, with experienced scientists and practitioners in charge. In other words, it's run by adults. But scientists often disagree about things. This is apparently a secret to non-scientists and to many reporters, who assume that when two scientists disagree, one must be lying or wrong. It's true nonetheless. Whatever the subdiscipline, there are disagreements. Pick up almost any issue of Science or Nature and you will find plenty of them, usually (but not always) couched in polite language in the Introduction or Discussion section of a paper or in the Letters. So it's not surprising that I disagree with Val Jones's piece from last week: Science-Based Medicine 101: How To Establish A Source's Credibility. Actually, I don't just disagree. I think it is quite wrongheaded.

Jones's piece is meant as a guide to lay readers about how to establish a source's credibility. Right off the bat we are in trouble, because there is an immediate confusion between credibility and being right. Most credible scientists are wrong at times, so if credible doesn't mean "right," what does it mean? Presumably it means something like: how seriously should you take this person? Let's look at the indicia of credibility given in this guide:

In medical research, I like to think of credibility in three categories:

1. The credibility of the researcher: does the researcher have a track record of excellence in research methodology? Is he or she well-trained and/or have access to mentors who can shepherd along the research project and avoid the pitfalls of false positive “artifacts?” Has the researcher published previously in highly respected, peer reviewed journals?

2. The credibility of the research: does the study design reflect a clear understanding of potential result confounders and does it control for false positive influences, especially the placebo effect?

3. The credibility of the journal that publishes the research: top tier journals have demonstrated a track record of careful peer review. They have editorial boards of experts who are trained in research methodology and are screened for potential conflicts of interest that could inhibit an objective analysis of the research that they review. The importance of careful peer review must not be underestimated. Some say that the quality of a product is only as good as its quality control system. Top tier journals have the best quality control systems, and the articles they publish must undergo very careful scrutiny before they are published.

Oh, my. Where to begin? Let's start with the track record of the researcher. More and more, the real work in science is being done by post docs and graduate students, who often get their names listed first (which is fair). But somewhere in the list (usually last, the second-best place to be in biomedical articles) is the head of the lab or the principal investigator of the grant, often listed as "corresponding author" (because they'll still be there after the post doc moves on). They are also often the ones who get the credit ("Dr. Smith's lab"). How much guidance and input they had in the work varies. Sometimes it's a lot. Sometimes they barely know what's in the paper. One thing is for sure: looking at the name and record of the first author or the "senior author" is not a sure way to gauge credibility. Ask the numerous lab heads who have had to retract papers after fraud or misconduct by one of their students or post docs was uncovered.

More importantly, some of the best work is done by those fresh out of training or still in it. There are Assistant Professors out there who have already lost touch with the latest techniques because they spend their time writing grants and teaching while the real work is being done by their grad students and post docs. That goes triple for full professors like me. It's great to think about what to do and to have people who will try to do it, but if I had to do much of it myself, I'd be helpless. I don't have the time to learn these new techniques and spend the tens or hundreds of hours needed to master them. And I don't have to. I have students and post docs to do it for me.

Indeed, if track record were enough, grant reviewing would be much easier. We wouldn't have to look carefully at the Methods. But we do look carefully at them, because science is always renewing itself and the best research often uses approaches that haven't been tried before. And if lay people are having trouble reading a paper, how are they going to judge whether an author's previous papers are any good? There are scientists, some quite eminent, who specialize in the LPU, the "least publishable unit": you take your work, chop it up into the smallest pieces, and publish each piece as a separate paper. Very good for your resume, but very bad practice. Another way to bulk up your resume is to find some technique and keep turning the crank on it with different model systems, data sets, or populations. It's really just one big (and often very uninteresting) paper.

As for advising the lay reader to check whether the study design shows a clear understanding of potential confounders and "controls for false positives" (I'm not even sure what that means, but if it means correcting for multiple comparisons, this is very, very deep methodological water that biostatisticians themselves can't agree on, so how is a layperson expected to?): if lay people could do this, they wouldn't be lay people. I spend years teaching grad students how to do it. If you are able to do this, you don't need any other criteria; it's just another way of asking whether this is good science or not. And that is not easy, and there will be many disagreements. As for the emphasis on the "placebo effect," it betrays a very parochial view of science (as if science consisted of randomized clinical trials), and I don't think many scientists really understand what is involved; they are quite likely to make serious errors in interpreting results (e.g., concluding that because an effect is not statistically significant, there is no effect). One of my colleagues calls the worship of the randomized clinical trial "methodolatry." I like it.
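To make those two interpretive traps concrete (chance "hits" from multiple comparisons, and "not significant" misread as "no effect"), here is a minimal simulation sketch. It is mine, not anything from Jones's piece; the sample sizes and the effect size are invented purely for illustration, and it assumes Python with the numpy and scipy libraries available.

```python
# Minimal sketch (illustrative numbers only) of two common interpretive traps.
# Assumes numpy and scipy; nothing here comes from the post under discussion.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Trap 1: multiple comparisons. Twenty outcomes are tested where NO true
# effect exists; at alpha = 0.05, about one "significant" hit is expected
# by chance alone.
false_hits = 0
for _ in range(20):
    treated = rng.normal(0.0, 1.0, size=50)   # true mean 0
    control = rng.normal(0.0, 1.0, size=50)   # true mean 0
    _, p = stats.ttest_ind(treated, control)
    false_hits += int(p < 0.05)
print(f"'Significant' results among 20 comparisons with no real effect: {false_hits}")

# Trap 2: a small study of a REAL effect usually fails to reach p < 0.05.
# "Not statistically significant" is not the same as "no effect".
treated = rng.normal(0.3, 1.0, size=20)       # true effect of 0.3 SD
control = rng.normal(0.0, 1.0, size=20)
_, p = stats.ttest_ind(treated, control)
print(f"Small study of a real effect: p = {p:.2f}")
```

Run it and the first loop will often turn up a spurious "finding" or two, while the second usually misses a genuine effect; a naive reading of "statistically significant" gets both wrong.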

So I think this is all either bad or useless advice, but the worst of it is the advice to check whether a paper appeared in a "top tier journal." OK, now I need to declare a conflict of interest. I am co-Editor-in-Chief of a peer reviewed scientific journal. Our journal is in the top quarter by impact factor in a specialty field of over a hundred journals, but it isn't one of the three or four journals listed as "top tier" in this piece. It cannot possibly be true that only papers published in those journals are reliable, or that papers in those three or four journals are on average more reliable than papers in many other top specialty journals, and I doubt that Dr. Jones would claim this. But then how is a lay reader to know the quality of a journal that doesn't have a press office getting its name in the news all the time?

Moreover, there is very little about these journals that ensures reliability. It's just false that they have better quality control than other journals. They often have much shorter review deadlines, and that doesn't make for careful reviewing. I have done peer review for most of the journals listed as "top tier," I do peer review for many other journals, I assign papers to others for peer review, and I make editorial decisions about whether to publish based partly on the peer review process. I have also published over a hundred papers and book chapters myself. So it galls me to hear someone say that you can judge credibility by where something is published (and I'm not even touching the reliability of peer review itself, which is another subject). The "top tier journals" are very good, but they are top tier mainly in the sense of top visibility or top profile, not because they publish the top science. Most good science is published in journals like ours: high quality journals catering to a scientific specialty and read by specialists. We don't depend on advertising and don't have big staffs of editorial assistants, but we run the same (or better) peer review processes and attract the same quality of papers as the top tier journals. Those journals are interested in high visibility and attract papers that use novel methods, have surprising findings, or are the first papers on a subject. The swine flu papers are a good example. They were given expedited handling by the big journals (Nature, Science, the New England Journal), but the papers were very preliminary and often not very informative scientifically. They were first, though, and everyone was hungry for every bit of information. Those journals were the right place to publish them because of their visibility and reach. But the papers weren't any more reliable for that. On strictly scientific grounds, they were mostly of very routine quality, or worse.

Some of these journals also give preference to big drug trials. These trials make news and they sell drugs. The journals make money from advertising (guess who the advertisers are), and they make a lot of money selling reprints of the papers to drug companies, who then send them for free, as a form of marketing, to tens of thousands of doctors. These top tier journals have also been grossly manipulated in the past and are now trying to come up with safeguards to clean up their act. The top tier journals have been purveyors of fraud more often than other journals, because they were the right place for it to have an impact. The idea that being "top tier" makes them more reliable just doesn't hold, either as a thought experiment or as an empirical fact.

So if these aren't the right indicia of reliability, what are? There is no answer to this question (and certainly not the answer given in the post in question). Science is a process of sifting and winnowing and often whether work is reliable or not isn't known for some time. It has to be tested, cross-checked and fit into an existing body of information. As one of my colleagues is fond of saying, "Real peer review happens after publication." Most science reporting these days is quite terrible, little more than regurgitating the press release from a university's media relations outfit. If you are a lay reader interested enough to look at the actual paper, then you are very far ahead of the game. Most lay readers are at the mercy of a reporter or a press release and there is no good way to tell which of these are credible.

That means most lay readers have to depend on others who look at the literature with a critical and informed eye. There are some extraordinary science journalists out there who are able to do this by providing the reactions of others in the field. The Perspectives, Commentaries and News sections of the top tier journals are very good at that, as well. Then, there are the science blogs, of which Science Based Medicine is one of the best. We try to do the same kind of critical appraisal here at Effect Measure on certain subjects like influenza, and there are many, many more science blogs (such as those produced by our publisher, Seed Media Group at scienceblogs.com).

If you are a layperson interested in a particular subject, these days the internet can be a helpful resource. Of course there are the Jerry Springers and National Enquirers in the blog world, too. Maybe I should write a post on how to judge the credibility of a blog. But that's another story. Don't get me started.

Revere, I love this post. You lay down things that I've been mostly aware of but unable to articulate so succinctly and convincingly, or unify so coherently. It is widely applicable, and reading it literally organized a small, chaotic section of my brain. Thank you!

By Suzanne Bunton (not verified) on 13 Aug 2009 #permalink

Excellent points, and well thought out.

I think it is sometimes hard to separate out the science from the scientists. A good research project can be conducted by anyone with a good understanding of the scientific process and perhaps technical proficiency. A good publication should say enough so that any team with the needed basic technical knowledge and skills (and similar resources, if necessary) can repeat the results.

I guess in the absence of the ability to objectively evaluate the methods, the credibility of the researcher can serve as an imperfect surrogate until the aforementioned "real peer review" takes place. In the absence of the credibility of the researcher, perhaps the institutional affiliation or the journal can serve as an imperfect surrogate to an imperfect surrogate.

But we have to remember that even Wakefield's piece of crap on the MMR/autism connection was published in the _Lancet_ (though later retracted), after some further "peer review" took place. Even the worst, most socially dangerous publications can make it through the most visible journals, "emerging science" effect notwithstanding.

I think one of the things the lay public understands least about science is that it is NOT carved in stone. Unlike religion, science is obligated to change if new evidence calls previously established facts, assumptions, principles or theories into question. This lack of timeless solidity makes a lot of people seasick and they fall back on religion because it claims to be unchanging and faith is more important than evidence. Not to criticize religion or start up a vicious argument, but I'm just saying...

I agree 100% with V.B.'s 1st sentence and will add that this contributes substantially to the poor reporting we see from the MSM. I have a friend who is a senior news producer who says, "We HATE talking to you guys - we never get a straight answer".
(Super post, Reveres)

By august_rain (not verified) on 13 Aug 2009 #permalink

As a layperson, I have tried to become better informed about certain scientific issues; but I do not subscribe to (and therefore do not have access to) any of the journals that publish much of the research pertinent to my particular interests or intellectual curiosities. And even when I am able to access a raw scientific article, I frequently find that I do not have the general scientific knowledge necessary to completely grasp and apply my new little nugget of information. With a few exceptions, when I rely upon scientific journalists to deliver the information, I find that I am often skeptical of it (perhaps overly so) because I am unsure of any possible bias, or unclear about the broader context of the reported information. This is one of the many reasons for my deep appreciation of the scienceblogs format and, specifically, the Effect Measure community.
A scientific blog with a trusted, experienced, and wise editor--such as Revere--means more to someone like me than I can even express. Effect Measure and its Seed family of blogs act as an aggregator of current, credible scientific research and progress, and at the same time offer constant commentary, explanation, and even interdisciplinary dialog on the state of the art. Because I am a layperson, this is always my best option for credible science information.
The richest and most meaningful learning experiences that I had in college usually took place during my professors' office hours, when I felt free to ask questions that I worried were too stupid to ask in front of a class, or to openly ponder ideas and note connections that were not necessarily pertinent to a specific lecture or course. Having access to Revere and Effect Measure is like having office hours with a college professor. (Without the sensory distractions of a claustrophobia-inducingly cramped, frightfully cluttered, strong-black-coffee-wafting office space!)
For example, the following comment (which I will post separately, due to its lengthiness) is a set of questions and ideas that I would never have the opportunity (or nerve) to bounce off of anyone else, anywhere else. I so appreciate the discourse that this format nurtures between scientist and non-scientist.

Melbren: The only stupid question is the one you don't ask. If you don't understand the answer, rephrase the question. Admit ignorance; there's nothing wrong with ignorance so long as you are seeking the cure -- which begins by asking questions.

Good reporters don't rely on a press release. There are fewer and fewer good reporters, in part because we're being whacked upside the keyboard by beancounters who do not understand that quality and quantity are equally important to the bottom line.

Valerie is correct: the general public doesn't understand that scientists disagree, often loudly and at length, and that such disagreements are part of scientific discourse. They have been conditioned to expect easy, simple answers -- which, we know, are usually wrong.

Good job, Reveres. Please continue.

By mediajackal (not verified) on 13 Aug 2009 #permalink

When I knock at the door of the Effect Measure community, this is the sort of question that I feel I can pose and have addressed in a concise, yet thoughtful manner--without being made to feel embarrassed for my underlying lack of scientific knowledge. Thanks in advance for not laughing, eye-rolling, or cringing. (I'm posting late because I feel VERY badly about its length!)

I am interested in knowing how many serious cases of swine flu (i.e., cases which involved hospitalization and/or death) have reported the finding of leukocytosis (an abnormally high white blood cell count). Statistics for the original cases in California were released to the public in May, and quite a few of them noted leukocytosis. Since then, I have been unable to locate further data on the presentation of swine flu symptoms in conjunction with elevated white blood cell counts.

The few doctors with whom I have spoken about this matter insist that, if anything, one would notice a low white blood cell count (leukopenia) accompanying flu--but not a high white blood cell count (leukocytosis). I would be grateful for direct clarification, or directions to public statistics that might provide empirical data on the spectrum of white blood cell counts upon initial presentation in patients who were either hospitalized with or succumbed to H1N1 flu.

I have become increasingly concerned, partly due to the inherent unreliability of rapid flu tests, that patients who have swine flu as a primary illness are potentially not being recognized as such, or are even being dismissed specifically because they present with a high white blood cell count. That, in combination with the recent (more conservative) protocol for antibiotic intervention, leads me to worry that such a scenario may result in a potentially fatal delay of antibiotic therapy (oral antibiotics early on, I.V. antibiotics once the patient presents at the hospital) as primary care physicians and E.R. doctors wait for something specific "to culture out" before prescribing antibiotics.

I would like to relate a personal experience to demonstrate my anxiety over the matter.

During her elementary school years, my daughter (now a 21-year-old university student) was typically healthy. However, 2-3 times per year, we had to rush her to the E.R. due to the sudden onset of a high fever, a sore throat, and what always turned out to be a white blood cell count of 15,000-35,000. (Looking back on it now, her collection of symptoms seems somewhat akin to what is sometimes referred to as SIRS--Systemic Inflammatory Response Syndrome.) Seemingly without regard to the particulars of the invasion, her primary immune response was always the same: sudden-onset fever, sore throat, high white blood cell count. It was, if you will, the modus operandi of her immune response.

(Much like the famous Gary Larson cartoon of Modern Equine Medicine--sore leg: shoot, cough: shoot, earache: shoot (etc.)--until she was about 10 years old, sudden fever, sore throat, high white blood cell count was my daughter's immunologic answer to everything.)

And nothing (not even once) ever "cultured out." (Even though everything from meningitis to strep to tonsillitis to mono was initially suspected every time.)

But back in the 90s, the E.R. doctors seemed to shoot first and ask questions later. Initially, they were never as concerned about what was causing the high white blood cell count as about the fact that she had a very high white blood cell count. I.V. antibiotics and fluids were administered immediately because she was so suddenly and so very sick when she presented. Each time, within an hour of receiving the "bag of antibiotic du jour," she was herself again. In all of those trips to the E.R., not once was she admitted.

Anyway--fast forward 10 years. This past March, my daughter became ill and was sent to the E.R. with what I will always suspect (mommy instinct/paranoia) was an early bout of the swine flu. (BTW, 8 confirmed swine flu deaths have since been reported in that same county, and 6 in the neighboring county.) It was two weeks prior to the first identification of H1N1, and seasonal flu season was over, so it is understandable that the E.R. doctor didn't give a lot of thought to the flu. And my daughter probably would have been even less likely to be given antibiotics if she had tested positive for a flu. But by the time she was sent to the E.R. by her university's student health center, her white blood cell count was about 22,000 and rising quickly.

2,500 miles away and on the phone with the E.R. doctor that evening, I pleaded with him to give her a bag of I.V. antibiotics--or at least a bag of I.V. fluids. But he was insistent upon waiting for something "to culture out," even though the student health center had taken several cultures that had turned up nothing. I was told that they could not administer antibiotics until and unless they cultured something definitively.

Instead, they gave her two shots of morphine (?) over the course of twelve hours and released her. I flew out to California that night, and she gradually recuperated over the following weeks. (She was only prescribed oral antibiotics 3 weeks later--when an "acute bronchitis" showed up on a chest x-ray.) This ended up being--by far--the most protracted illness she has ever had.

But back to the context of everyone else now. When I read reports of very healthy young people and pregnant women becoming seriously ill very quickly and dying, or needing to be placed on ventilators soon after onset, I always wonder if their immune responses were similar to my daughter's. I always wonder what their white blood cell counts looked like before things went south--and whether a bag of antibiotics, early on, would have made a difference. Or whether a high white blood cell count would at least have acted as a red flag that the person's immune response had been tripped--perhaps whether it was a viral or a bacterial alarm being tripped was less significant than the plain fact that an immunologic alarm had been tripped during a novel pandemic flu outbreak.

Is it possible that, whereas some people's immune responses are so weak (for a variety of reasons) that, depending upon viral load and other factors, practically any virus or any significant taxation of their immune system is potentially lethal, other people's immune responses, like my daughter's (although also dependent upon factors such as viral load), are, under certain circumstances, too robust? Is it possible--particularly in pediatric and young adult swine flu-related fatalities--that there are two relatively distinct populations of victims: those on whom the H1N1 virus is too hard, and a separate and distinct population that is (at least initially) too hard on the H1N1 virus, so to speak?

And, in the latter case, would it not be plausible that such a group--a group that is distinctively more likely to mount an "overly-robust response"--might present with sudden-onset fever, sore throat, and, most notably, leukocytosis?

I worry that, because of a high white blood cell count, flu might initially be ruled out, and antiviral medication would not be prescribed. As the window of opportunity for Tamiflu/Relenza closes, doctors would be focused on a bacterial basis for the illness--suspecting only meningitis, tonsillitis, strep, mono, scarlet fever, etc. By the time the doctor realizes that nothing definitive is going to culture out anyway, the white blood cell count could have increased, and the patient could be at higher risk for septic shock and other nasty complications.

I want to know if a high white blood cell count may, indeed, be one possible marker for a patient who is on the verge of mounting an overly robust, and potentially lethal, immune response to the swine flu. And, if so, might rapid intervention with a broad spectrum antibiotic for this population be an acceptable protocol during this highly exceptional flu season?

If we could look at the serious illness/mortality data, I suppose that a spectrum of immune response would not pop out at us empirically. Viral load, pathogenicity, virulence, etc. would probably muck up the actual data. But what if we went looking for a spectrum of immune response (like on an arc, rather than on a continuum)? Perhaps we could plot, on one side of the spectrum, a cluster that distinctively mounted too weak a response. Such a population might stand out and be marked by (truly serious) underlying medical conditions such as malnutrition, anemia, dependence on immunosuppressive medication, etc. But perhaps the other side of the spectrum would emerge as a cluster (albeit smaller) that mounted an overly robust response. And perhaps one possible marker for those in that population who were basically healthy, but then surprisingly and rapidly succumbed to the swine flu, would be the presence of leukocytosis. If there is such evidence, perhaps early antibiotic treatment--treatment that begins before anything definitive "cultures out"--could make a significant difference in outcome for some in that population.

I also wonder if some of the many pregnant women who have succumbed to this flu actually mounted a more robust immune response to it--counter to the explanation that their immune systems were tapped out due to pregnancy. Is it plausible that nature would have selected for genes that trigger a no-holds-barred sort of immune response in a pregnant woman who comes into contact with a novel pandemic flu? Perhaps genes housed in a body that puts up a robust fight when faced with a once-in-a-generation novel virus would be more likely to make it into the lottery for the next generation. (If only I had Dawkins on speed dial right now.) But I would certainly like to know if any of them had leukocytosis at a point early enough along the way to have been a red flag for earlier, different, or more aggressive treatment.

My bottom line is basically twofold: particularly during the next several months, as we await swine flu vaccine, should we be checking white blood cell counts in more of the I.L.I. patients in our at-risk populations who present with serious illness? And, armed with such information, is it possible that we might want to revisit and possibly modify our current protocol for such delayed and/or conservative use of antibiotics, specifically in patients who do present with leukocytosis?

If it turns out that my daughter did not have the swine flu this past March, I can almost guarantee you that if/when she does get it, she will present with a fever, sore throat, and high white blood cell count. Particularly in the next couple of "pre-vaccine" months--when any illness that she contracts would more likely be swine flu--I hope her doctors are more aggressive with antibiotic intervention than were her doctors in March.

I feel I need to reiterate that the doctors with whom I have spoken about this issue have said, quite authoritatively, that it would be very rare to see a high white blood cell count initially accompanying a case of flu.

But isn't "rare" a relative term in the midst of a novel pandemic flu virus?

melbren: Yes, what the doctors are saying is what we believe to be true. Viral infections like flu are most likely to present with leukopenia rather than leukocytosis, the latter if there is secondary bacterial infection complicating matters. That's what we think we know and I don't know of data to contradict it in this case. But there is much work left to be done on flu in general and this flu in particular and nothing that the flu virus does would surprise me other than that I am continually surprised. I don't have any more wisdom on this except the usual response, we'll have to wait and see how things develop. Wish I and everyone else knew more.

Excellent post. While reading the method described by the SBM post it occurred to me that this type of credibility assessment of research is also done by a very select group of laypeople -- judges performing the "gatekeeper" function for scientific evidence under the Daubert standard. I would imagine they would be subject to the same pitfalls you highlight in this post.

"That means most lay readers have to depend on others who look at the literature with a critical and informed eye."

But how do they know who to depend on? I think that was the point of Val's post.

Kimbo: If that's the point, then it is unhelpful. And how do we know to depend on Val's post? We don't really know, even if we are scientists. That's why results are confirmed, why scientists disagree, and why science changes. I earn my living partly by trying to teach grad students how to do this. Believe me, if there were an algorithm like the one implied here, my job would be easy, or at least easier.

But let's grant your point about the point of Val's post. My point is that it makes suggestions that aren't helpful, don't work, and are wrongheaded in implying that such a "101" method exists.

As I said in response to Orac's response to this response:

If we presume that a "101" method does exist, we still have to figure out what the right simplifications are. Of all the possible "lies to children" we could tell, which are useful in their own right as approximations? Which best prepare the way for the next level of accuracy?

In the present case, one might ask about the "published in a top-tier journal" heuristic. Is that the most effective cut, the best first-order approximation? What if instead we advised the reader to check that the research was not published in a known crank journal, like Medical Hypotheses or the Journal of American Physicians and Surgeons? It's a matter of trading off false negatives for false positives. If your smoke alarm goes off every time you make toast, you can pull out the batteries; this will significantly lower the rate of false positives (alarms without fire) but ups the risk of false negatives (fires without alarm). Telling people, "If it wasn't in Science or Nature or the NEJM, it's probably worthless" will cut out a great deal of bullshit, but in all the material that rule excludes, there's gonna be a great deal of legitimate work.
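To put toy numbers on that tradeoff (every figure below is invented purely to show the shape of the problem, not real data about journals), here is a quick sketch:

```python
# Toy comparison (invented numbers) of two filtering heuristics for lay readers:
# "trust only top-tier journals" vs. "trust anything not in a known crank journal".
solid, junk = 3000, 7000                     # imagined pool of papers

# Heuristic A: top-tier only. Suppose it keeps 10% of solid work and 2% of junk.
a_solid, a_junk = 0.10 * solid, 0.02 * junk

# Heuristic B: exclude crank journals. Suppose it keeps 99% of solid work and 90% of junk.
b_solid, b_junk = 0.99 * solid, 0.90 * junk

for name, kept_solid, kept_junk in [("top-tier only", a_solid, a_junk),
                                    ("exclude crank journals", b_solid, b_junk)]:
    precision = kept_solid / (kept_solid + kept_junk)  # how clean the kept pile is
    recall = kept_solid / solid                        # how much solid work survives
    print(f"{name}: precision {precision:.2f}, recall {recall:.2f}")
```

With those made-up rates, the strict filter yields a much cleaner pile but discards nine-tenths of the legitimate work, while the loose filter keeps nearly all the legitimate work and lets most of the junk through: exactly the batteries-in-the-smoke-alarm trade.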

Indeed, "Was the 'research' even published at all?" is a worthwhile question in its own right. We know that the media are not above taking an unfinished master's thesis and spinning it beyond all recognition, until the story in the newspaper couldn't see the land of truth with a telescope.

Thanks, Blake. I hadn't noticed that Orac had responded on his blog. My response to him is now in the comments there.

The problem with the top tier journal thing is too well known to even take seriously, but I will note that Lynn Margulis's seminal paper on the endosymbiotic origin of the mitochondrion appeared in the Journal of Theoretical Biology, a very good but highly specialized journal not on most radar screens, because it had been rejected by 15 other journals first. J. Theor. Biol. editor Danielli recognized what he had, which none of the "top tier" journals did. That's not so uncommon.

"And how do we know to depend on Val's post?"

We don't. But I feel like we should at least try to come up with a way to communicate science such that lay people can evaluate the veracity of research on their own.

Maybe I've misunderstood your point, but it just seemed like you were suggesting that it was only the business of scientists to decide what is and isn't good research and then inform the masses. I would argue that if we can inform and educate the public with critical thinking skills, they could make those decisions for themselves.

For the record, I read and enjoy your blog regularly. I was just a bit confused when I read this. But again, perhaps I misunderstood.

Kimbo: I just saw that Orac responded to me at Respectful Insolence so I have explained further in the comments there. For the record, I thought Val's advice was bad advice and that there are many ways to get help for understanding science other than looking at the prestige of the scientists or the journal it was published in, two criteria that are both unscientific and more likely to mislead than enlighten. As for examining the study design, that is useless advice. If a layperson could do that, they wouldn't need any other advice.

Oh I agree that prestige is a largely useless argument from authority and I used to get a little riled when it was so emphasized in my program (a science program, where people especially *should* be able to evaluate the quality of the design). So, fair enough. It was mostly the last paragraph that I wasn't fond of, but I will read your post over at Orac's blog.

P.S. Thank you for taking the time to respond to me.

Excellent. :) I'd just like to put up a few more things to think about:

1. Fraud: there are some hucksters out there who have gone for many years without being caught; as an outsider (well, I'm not in the medical field), that would indicate to me that the reviewers aren't doing a terribly good job. When I review a paper I don't care if it's written by a student or by the profession's current Golden Child; the paper gets read, claims are checked, and the editor may very well get comments like "this is a load of crap." So we need to identify and cut down hucksters quicker, but that requires spending time reviewing claims and data, and in some cases it requires duplicating an experiment (which is far from trivial these days).

2. To add to your statement about preferences for big trials, Reynold Spector recently wrote an article printed in CSICOP's Skeptical Inquirer ("Science and Pseudoscience in Adult Nutrition Research and Practice") in which he criticizes journals for being uncritical of trials, especially epidemiological studies. Some journal editors essentially say they'll publish anything and leave it up to the reader to sort out what's believable or not. So what the hell is the use of an editorial board, then? Personally, I don't want to waste time reading articles which are really not worth reading; I scream and hurl my journals across the room and swear I'll cancel my subscription whenever I find useless articles; unfortunately, that seems to happen a lot in all the journals I read these days.

By MadScientist (not verified) on 14 Aug 2009 #permalink

Melbren: White counts are essentially useless clinically, and they should never be the major factor in a decision as to whether to empirically cover someone with antibiotics or take a careful wait-and-see approach. As I say to the residents and medical students... it's the overall clinical picture, stupid... not the WBC. Where is it written that SIRS requires antibiotics? Docs are pressured every day by the lay public, who think they know it all and demand antibiotics or antivirals. Your comment about hoping docs give out antibiotics like gumdrops is outrageous. In case you don't know, we have folks running around with multi-drug resistant microorganisms, and misuse or overprescribing of antibiotics is contributory. It takes a minimum of 15 years to develop a new antibiotic, and there are probably only 2 or 3 new antibiotics currently in phase 2 clinical trials, so we are a little behind the 8 ball on this. Relax. Your daughter is still alive, right? So the docs were right in holding.

I agree the big name journals acted badly in the spring--in regard to new H1N1--publishing what many considered rubbish by any other name. In fact, the NEJM published a perspective twice in the spring--were they that desperate?

By BostonERDoc (not verified) on 14 Aug 2009 #permalink

Wow.

I suddenly feel so much more lost.

This article just kicked my epistemological senses right in the nads.

This is actually a subject I've been pondering a lot as of late and what meager answers I have worked up seem meaningless to me now.

No wonder it's so easy to be a credulous fool. It is SO much simpler.