Cranks against peer review

One of the favorite targets of pseudoscientists is the peer review system. After all, it's the system through which scientists submit their manuscripts describing their scientific findings or their grant proposals to their peers for an evaluation to determine whether they are scientifically meritorious enough to be published or to be funded. Creationists hate it. HIV/AIDS denialists hate it. Indeed, pseudoscientists and cranks of all stripes hate it. There's a reason for that, of course, namely that vigorous peer review is a major part of science that keeps pseudoscientists from attaining the respectability that science possesses and that they crave so. Lately, though, the attacks on peer review coming from the crank contingent seem to be more vociferous than usual.

Now, far be it from me to go all Panglossian on you and claim that the peer review system is the "best of all possible worlds," or anything like that. Having participated in the system as an applicant on the receiving end, as a peer reviewer for journals, and as a reviewer on a study section, I know there's no doubt that the system has problems and could do with considerable improvement. However, when I hear rabid bashers of peer review characterize it as "crony review," I tend to echo Winston Churchill's famous statement about democracy: peer review is "the worst way of determining what science should be published and funded except for all those others that have been tried." Certainly, the bashers can't seem to propose anything better. Of course, that's because the real purpose behind the numerous criticisms made by cranks about peer review is not to reform or improve the system, but rather to weaken or alter it so that they can get their favorite pseudoscience published and/or funded, thus allowing them to attain the respectability of real science that they so crave.

Lately, there's been an article going around the blogosphere by (gack! time to hide my face in shame again) a surgeon at the University of Washington named Donald W. Miller, Jr., who launches broadsides at the peer review system used to determine who receives U.S. government grants; his article has gained a lot of traction among HIV/AIDS denialists. The article, which didn't appear in a peer-reviewed journal itself as far as I can tell, is entitled The Government Grant System: Inhibitor of Truth and Innovation? I knew there were going to be problems right away. First, Miller uses the term "truth" in the title. Science is not about "truth"; it is about understanding how the world around us works to as good an approximation as we can get. Worse, very early on in the article, Dr. Miller shows that he can't seem to get his facts straight, mangling the concept of "triage" and making me seriously wonder whether he has ever served on an NIH study section. Certainly, searching the CRISP database, I could find no evidence that he has ever been the principal investigator or co-investigator on an NIH grant, which, if true, would more or less disqualify him from sitting on an NIH study section. I also can't help but note briefly that Dr. Miller, like so many physicians and scientists who turn to the dark side of pseudoscience, seems to have had a respectable publication record as an academic cardiac surgeon until 1991, after which he appears to have stopped publishing in peer-reviewed journals. (Note: Despite its claims otherwise, the Journal of American Physicians and Surgeons does not count as a peer-reviewed journal, for reasons I have discussed extensively before.) In any case, here he describes the peer review system as he thinks it stands:

The Center for Scientific Review "triages" applications it receives. A cursory appraisal eliminates one-third of the applications from any further consideration, and it selects the remaining two-thirds for competitive peer review. CSR sends each application to a Study Section it deems best suited to evaluate it. Peers in Molecular Oncogenesis, Cognitive Neuroscience, Cell Structure and Function, Hematopoiesis, HIV/AIDS Vaccine, and 167 other Study Sections review grant applications. Each Study Section has 12-24 members who are recognized experts in that particular field. Members meet three times a year to review 25-100 grants at each meeting. Two members read an application and then discuss it with the other section members who collectively give it a priority score and percentile ranking (relative to the priority scores they assign to other applications). An advisory council then makes funding decisions on the basis of the Study Section's findings, "taking into consideration the [specific NIH] institute or center's scientific goals and public health needs" (Scarpa, 2006).

Not quite. This is what the NIH CSR says about peer review:

One or more CSR Referral Officers examine your application and determine the most appropriate Integrated Review Group (IRG) to assess its scientific and technical merit. Your application is then assigned to one of the IRG's study sections. A study section typically includes 20 or more scientists from the community of productive researchers. Your application also will be assigned to the NIH Institute or Center (IC) best suited to fund your application should it have sufficient merit. (More than one IC may be assigned if appropriate.)

In reality, in the first pass through the CSR, Referral Officers do little more than (1) make sure the grant is formatted correctly (yes, they do check whether you used a 10-point font instead of 11, shrank the margins beyond what the rules allow, went over the page limit, or tried to get by without all the necessary institutional signatures, and if they find that you did any of those things, the grant will not be forwarded to a study section); (2) verify that it fits the criteria for the grant mechanism being applied for; and (3) figure out the most appropriate Integrated Review Group to send it to. I suppose it's possible that 30% of scientists are too stupid or careless to follow the formatting requirements properly and to include all the needed information, but I doubt it. Even if that were the case, the scientists would have no one to blame but themselves; the instructions, although voluminous, are quite clear, at least about formatting requirements and page limits. In any case, pretty much every grant that's formatted correctly and contains all the required elements is assigned to a study section for review.

During a study section, the appraisal of which grants are "triaged" is not "cursory." Every grant application is assigned to approximately three reviewers (the number may vary, depending on the grant mechanism and study section). In the study section on which I serve, for example, every application is assigned to two primary reviewers (Reviewer 1 and Reviewer 2) and a secondary reviewer (the Discussant). Both primary reviewers are expected to read the grant application in detail, write up a 2-4 page review of it, and assign it a proposed priority score. The Discussant is also expected to read it in detail but only to write up a 1-2 page review and assign a proposed score. Everyone else on the study section tends to look over grants to which they are not assigned as Reviewer 1, Reviewer 2, or the Discussant in a much less detailed fashion, but that's understandable, given that most reviewers are assigned around 10 grants to read in six weeks and that it can take several hours per grant. At the very beginning of the study section meeting, the chair will list the grants whose initial proposed priority scores are in the bottom half. Because these grants clearly have no chance of being funded this cycle, they are then "streamlined" (or, colloquially, "triaged"), meaning that they will not be discussed in detail at the full study section. Potential streamlining candidates whose reviewers assigned them widely divergent scores, indicating a disagreement on their merit, are often specifically pulled aside for discussion before voting on streamlining. Indeed, if either reviewer is insistent about it, such grants will usually be discussed before the whole study section, and if any study section member strongly objects to the streamlining of any grant application, it will be discussed.
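To make the streamlining step concrete, here's a minimal, purely illustrative sketch in Python (not the NIH's actual software or procedure; the grant IDs, scores, and threshold are all invented) of the logic just described: the assigned reviewers' proposed scores set an initial ranking, grants in the bottom half become streamlining candidates, and widely divergent reviewer scores or a member's objection pulls a grant back for full discussion.

```python
# Illustrative sketch only -- not actual NIH code or procedure. Assumes the old
# NIH convention that lower priority scores are better; all numbers are invented.

def streamline(applications, divergence_threshold=1.0, objections=()):
    """applications: dict mapping grant ID -> list of proposed reviewer scores.
    Returns (discussed, streamlined) lists of grant IDs."""
    # Rank applications by average proposed score (lower is better).
    averages = {gid: sum(scores) / len(scores) for gid, scores in applications.items()}
    ranked = sorted(averages, key=averages.get)
    cutoff = len(ranked) // 2  # bottom half are candidates for streamlining

    discussed, streamlined = [], []
    for rank, gid in enumerate(ranked):
        scores = applications[gid]
        divergent = max(scores) - min(scores) >= divergence_threshold
        if rank < cutoff or divergent or gid in objections:
            discussed.append(gid)    # discussed in detail at the full meeting
        else:
            streamlined.append(gid)  # reviewed in detail, but not discussed
    return discussed, streamlined

# Invented example: R01-B falls in the bottom half but has divergent scores, so
# it is pulled back for full discussion; R01-C is streamlined (no objections).
proposed = {
    "R01-A": [1.5, 1.7, 1.6],
    "R01-B": [2.0, 3.8, 2.5],
    "R01-C": [3.9, 4.0, 4.1],
    "R01-D": [2.6, 2.8, 2.7],
}
print(streamline(proposed))
# -> (['R01-A', 'R01-D', 'R01-B'], ['R01-C'])
```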

After streamlining, once the discussion of the remaining grants starts, Reviewer 1 usually leads it for his assigned grants, and then each member of the study section assigns a score. Contrary to Dr. Miller's distorted description, the only differences in treatment between "triaged" grant applications and those discussed at the full study section are that triaged grants are not discussed in detail (although they are reviewed in detail) and that discussed grants receive a "summary statement," which boils down the written reviews and group discussion into a summary that is (usually) highly useful for applicants in guiding revisions of the application for resubmission. For triaged grant applications, on the other hand, the three written reviews are returned to the applicant, who is free to revise and resubmit based on the comments of the three reviewers. It's not quite as useful, but still helpful. Moreover, study sections do not assign percentile scores, only priority scores. Percentile scores are generated from the distribution of all the priority scores, and it is the percentile score that determines which grants are funded. (Indeed, that's what I mean when I say that the NCI is funding at the 12th percentile this year.)
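As for the priority-score-to-percentile conversion just mentioned, here is a rough, hypothetical sketch (not the CSR's actual formula) of the basic idea: a grant's percentile is essentially its rank within the pool of priority scores, i.e., the percentage of applications that scored as well as or better than it, assuming lower priority scores are better.

```python
# Hypothetical sketch of percentile ranking -- not the CSR's actual formula.
# Assumes lower priority score = better; the score pool is invented.

def percentile_rank(score, all_scores):
    """Percentage of applications in the pool scoring as well as or better."""
    better_or_equal = sum(1 for s in all_scores if s <= score)
    return 100.0 * better_or_equal / len(all_scores)

pool = [1.4, 1.6, 1.9, 2.1, 2.3, 2.6, 2.8, 3.0, 3.3, 3.7]
print(percentile_rank(1.6, pool))  # 20.0 -- outside a 12th-percentile payline
```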

Bottom line, whatever major faults the NIH grant approval process has, doing only a "cursory" evaluation of the applications at any stage is not one of them.

It's at this point that you find out where Dr. Miller is really coming from, and it isn't from the perspective of someone who wants to reform the system. He comes from the perspective of a friend of pseudoscience who feels that the system doesn't give his HIV denialist buddies a fair shake, which is evident in how he starts out semi-reasonable and then goes right off the deep end. Here's the semi-reasonable part:

The grant system fosters an Apollonian approach to research. The investigator does not question the foundation concepts of biomedical and physical scientific knowledge. He sticks to the widely held belief that the trunks and limbs of the trees of knowledge, in, for example, cell physiology and on AIDS, are solid. The Apollonian researcher focuses on the peripheral branches and twigs and develops established lines of knowledge to perfection. He sees clearly what course his research should take and writes grants that his peers are willing to fund.

There is some truth to this, again depending on the specific grant mechanism. For example, the flagship grant of the NIH, the largest grant awarded to an individual investigator (the R01), tends to emphasize research that is well supported by preliminary data. This tends to become more of a problem when funding gets tight, as it is now. In such a funding environment, reviewers are more reluctant to fund risky research because they do not want to throw money at projects with a low chance of success. (The "whiner" in me can't resist pointing out that one way to get more "risky" science funded is to increase science funding overall, making reviewers more willing to take risks, but that would just reveal me to be a tool of the system.) However, there are other grant mechanisms, such as the R21, which provide smaller grants for shorter periods of time for riskier projects. If a tendency towards conservative science were Miller's main critique of the system, I'd probably agree for the most part. Unfortunately, Miller can't resist tipping his hand that real reform of the system is not what his polemic is about. After listing what he characterizes as "state-sanctioned unassailable paradigms" that will never be funded, Miller devolves completely into ranting crankery:

The human-caused global warming paradigm is most likely false (Soon et al., 2001; Editorial, 2006). Two climate astrophysicists, Willie Soon and Sallie Baliunas, present evidence that shows the climate of the 20th century fell within the range experienced during the past 1,000 years. Compared with other centuries, it was not unusual (Soon and Baliunas, 2003). Unable to obtain grants from NASA (National Aeronautics and Space Administration), Soon (personal communication, August 31, 2006) observes that NASA funds programs mainly on social-political reasoning rather than science.

Duesberg (1996), Hodgkinson (2003), Lang (1993-2005), Liversidge (2001/2002), Maggiore (2000), and Miller (2006), among others, have questioned the germ theory of AIDS. All 30 diseases (which include an asymptomatic low T-cell count) in the syndrome called AIDS existed before HIV was discovered and still occur without antibodies to this virus being present. At a press conference in 1984 government officials announced that a newly discovered retrovirus, HIV, is the probable cause of AIDS, which at that time numbered 12 diseases (Duesberg, 1995, p. 5). Soon thereafter "HIV causes AIDS" achieved paradigm status. But, beginning with Peter Duesberg, Professor of Molecular and Cell Biology at the University of California, Berkeley, a growing number of scientists, physicians, investigative journalists, and HIV positive people have concluded that HIV/AIDS is a false paradigm. The NIH awarded Duesberg a long-term Outstanding Investigator Grant and a Fogarty fellowship to spend a year on the NIH campus studying cancer genes, and he was nominated for a Nobel Prize. When Duesberg publicly rejected the HIV/AIDS paradigm the NIH and other funding agencies ceased awarding him grants. Government-appointed peer reviewers have rejected his last 24 grant applications. Peter Duesberg (personal communication, September 20, 2006) writes: "When I was the blue-eyed boy finding oncogenes and 'deadly' viruses, I was 100% fundable. Since I questioned the HIV-AIDS hypothesis of the NIH's Dr. Gallo, and then the cancer-oncogene hypothesis of Bishop-Varmus- Weinberg-Vogelstein etc. I became 100% unfundable. I was transformed from a virus- and cancer-chasing Angel to 'Lucifer.'"

Yes, global warming denialism and HIV/AIDS denialism (coupled with the oft-repeated cry of "martyrdom!" from Peter Duesberg, yet!) are what Dr. Miller is about. (Hint to Dr. Miller: Citing Christine Maggiore and Peter Duesberg is not a particularly good way to bolster the credibility of your arguments.) Other "unassailable paradigms" that Miller lists are not quite as ridiculous as his examples of AIDS and global warming, but they're mostly strawmen; for example, the claim that "cholesterol and saturated fats cause coronary artery disease" is not quite what medical science states; rather, it is that cholesterol and saturated fats are major factors, among others, that contribute to the pathogenesis of coronary artery disease. Using these examples does not exactly bolster Miller's credibility or case, either. Miller then goes on a tear about how science is in service of the state, pulling out more HIV/AIDS denialist idiocy coupled with some rather blatant conspiracy-mongering:

AIDS research serves the interest of the state by focusing on HIV as an equal opportunity cause of AIDS. This infectious, egalitarian cause exempts the two primary AIDS risk groups, gay men and intravenous drug users, from any blame in acquiring the disease(s) owing to their behavioral choices. Duesberg, Koehnlein, and Rasnick (2003) hypothesize that AIDS is caused by three other things, singly or in combination, rather than HIV: 1) long-term, heavy-duty recreational drug use--cocaine, amphetamines, heroin, and nitrite inhalants; 2) antiretroviral drugs doctors prescribe to people who are HIV positive-- DNA chain terminators, like AZT, and protease inhibitors; and 3) malnutrition and bad water, which is the cause of "AIDS" in Africa. HIV/AIDS has become a multibillion dollar enterprise on an international level. Government, industry, and medical vested interests protect the HIV/AIDS paradigm. The government-controlled peer review grant system is a key tool for protecting paradigms like this.

Does this remind you of anything? Perhaps of the conspiracy theorist who recently argued with a straight face that the reason there would never be any cure for cancer is that the "vested interests" of the medical industry and government would not allow it, as such a cure would "devastate" the medical economy? It sure sounds like the same fallacious argument applied to AIDS, which is why I ask you to repeat after me: This is all a load of crap. The evidence that HIV causes AIDS is exceedingly strong and has not been seriously challenged, not by Duesberg, and certainly not by any of Dr. Miller's HIV "dissident" tracts published at the execrable LewRockwell.com, where Miller routinely spews HIV/AIDS denialism (playing the Galileo Gambit, but with Copernicus, yet!), global warming denialism, anti-fluoridation rants, and antivaccination posturings worthy of the mercury militia. These views alone show that Dr. Miller's critical thinking skills leave much to be desired, and this lack of critical thinking is very apparent in his article.

No wonder Dr. Miller is so unhappy about how peer review works! Personally, my view is that, whatever the problems are in the peer review system, one thing it does reasonably well is keep pseudoscience (such as what Dr. Miller apparently subscribes to) from being funded. To me, its ability to keep pseudoscience from being funded and, for the most part, published is one of the great strengths of our peer review system. Any reform that is undertaken must be done carefully, in such a way as to minimize any weakening of this firewall against ideas that are clearly without scientific merit and overwhelmingly believed to be so by scientists. After all, one of the risks of funding "riskier" science is that pseudoscience will sneak in along with the legitimate science. Finally, one thing that I have to wonder about is this: If the "unassailable state-sanctioned paradigms" that Dr. Miller detests so much are, as he seems to believe, due primarily to the NIH grant peer review system, why is it that scientists around the world also consider Duesberg's ideas about HIV to be profoundly incorrect and have come, after much wrangling, to conclude that human-caused global warming is occurring?

Sadly, the ideas for reform seen in Miller's article and elsewhere among the HIV/AIDS "dissidents" seem to boil down to either "let's find a way to fund potential cranks like us" (a.k.a. "mandatory funding of contrarian research") or "let's get rid of peer review." Dr. Miller opines:

One alternative to the competitive peer review grant system that the NIH and NSF might consider for funding specific research projects is DARPA, the Defense Advance Research Projects Agency. This agency manages and directs selected research for the Department of Defense. At least up until now it has been "an entrepreneurial technical organization unfettered by tradition or conventional thinking" within one of the world's most entrenched bureaucracies (Van Atta et al., 2003). Eighty project managers, who each handle $10-50 million, are given free reign to foster advanced technologies and systems that create "revolutionary" advantages for the U.S. military. Managers, not subject to peer review or top-down management, provide grants to investigators whom they think can challenge existing approaches to fighting wars. As long as the state controls funding for research, managers like this might help break the logjam of innovation in the biomedical and physical sciences. Science under the government grant system has failed and new kinds of funding, with less government control, are sorely needed.

I fail to see how giving appointed managers this power would mean "less" government control over research. After all, who hires these managers? The government! What's to stop the government "orthodoxy" from simply hiring managers who do what the government orthodoxy wants? Nothing! After all, it would be even easier to enforce an orthodoxy if the managers, rather than largely volunteer peer reviewers drawn from diverse academic settings, controlled funding, because, as Dean Esmay informs us, quoting Al Gore in the process, "It is difficult to get a man to understand something if his salary depends upon his not understanding it." Besides, military technology, although a broad area, is applied, not basic, science. Evaluating a technology proposal probably does not require as deep an understanding of the nitty-gritty of the underlying basic science as judging whether a basic science or translational research proposal is reasonable, innovative, and feasible. Moreover, remember that the entire yearly budget of the NIH is only $28 billion, and the entire budget of the NCI is less than $5 billion, both of which are utterly dwarfed by the Defense budget. In other words, the military is much more lavishly funded and can afford to throw money at risky scientific projects in a way that the NIH and NSF cannot. Moreover, contrary to this example of DARPA, the system that the U.S. Department of Defense uses to evaluate most submitted research proposals is actually peer review. Using peer review, in fact, the Army (believe it or not!) does quite a good job of emphasizing and fostering innovative proposals, as I discussed before; in some respects it probably does a better job of supporting scientific innovation than the NIH. If the NIH is going to emulate the military, it would do far better to examine how the Army conducts its scientific peer review sessions than to listen to the posturings of people like Miller.

The rest of the peer review bashers don't do much better than Dr. Miller. For example, taking the most vociferous of the critics that I've seen lately, Dean Esmay's ideas range from the sort-of-reasonable to the ignorant to the unworkable. For instance, Dean proposes the seemingly not entirely unreasonable idea of completely eliminating the anonymity of peer reviewers, an idea that nonetheless betrays his ignorance of the process. For one thing, he doesn't seem to have noticed that study section rosters are already published on the web, allowing reasonable guesses as to who the specific reviewers of an application were. (In fact, the NIH helpfully sends applicants the complete roster of the study section that reviewed their grant, along with the summary statements and reviews.) He also seems not to understand that it is not an infrequent occurrence for more junior faculty to be reviewing applications by senior, well-entrenched faculty, the veritable "gods" of the field, if you will. How willing would these early- to mid-career scientists be to be brutally honest about a bad proposal if the applicant knew who gave him the bad score? Indeed, completely eliminating anonymity might actually worsen the very problem Dean and Miller decry by leading to grants from highly established and respected scientists getting even more of a pass than they do already.

In addition, Dean proposes another idea that reveals his ignorance of the NIH, namely to make peer review funding boards "truly multidisciplinary" (whatever that means) and to force every application to be looked at by a mathematician or someone with a "background in mathematics." I'm not sure whether he's referring to the study sections, which do the initial peer review, or to the advisory councils of each institute, which do the second tier of peer review, taking into account the specific scientific and/or programmatic priorities of their Institutes, but Dean apparently has never actually looked at the roster of a few typical NIH study sections. If he means study sections, I point out that they already are multidisciplinary, and virtually all of them include biostatisticians! (I hope that's "mathematical" enough for Dean.) For example, the study section on which I presently sit includes internists, physiologists, surgeons, computer experts, radiologists, medical imaging experts, molecular biologists, a medical physicist, and biostatisticians. I suppose that we could make things even more "multidisciplinary" by adding members with "no direct interest" in the field (we could bring in an archaeologist, I suppose, to look at cancer biology proposals), but we would do so at the risk of decreasing the familiarity of reviewers and study section members with the detailed science behind the grant applications assigned to them. (On second thought, maybe that's just what Dean would like. On third thought, there's no "maybe" about it.) On the other hand, if Dean means Institute advisory councils, it is hard to see what added benefit making these second-tier reviewers even more "multidisciplinary" would provide, given that the primary driver of what gets funded is the review provided by the study section, not the post-review committees, which largely rely on the study section's priority score and the priorities of their respective Institutes to dole out funds. They tend to be mostly rubber-stamp sorts of committees and tend not to make a big difference except for close calls or proposals that are highly congruent with Institute priorities. Making the advisory councils more "multidisciplinary" would be unlikely to affect these priorities, because it is not the advisory councils that determine NIH funding priorities; they only implement them. It is the NIH Director and the Directors of the various Institutes, who are appointed by the President, who determine NIH funding priorities, heavily influenced, of course, by Congress and the President. How else do you think NCCAM, for instance, came into existence? Certainly the scientific community at the NIH didn't lobby for it; woo-loving Congressmen did.

Finally, Dean also thinks that there should be an appeals process for grants reviews. Indeed there should! Unfortunately, Dean seems utterly oblivious to the fact that there already is a formal appeals process for applicants who think their grant applications were subjected to biased reviews or assigned to reviewers who clearly did not know the science. It can certainly be argued whether the appeals process is adequate or fair, but to imply that there is no appeals process for an applicant whose "risky" grant receives what he deems to be unfair or factually incorrect reviews reveals that Dean just doesn't have a clue about how NIH peer review is done.

No one denies that there are problems with the NIH peer review system for grant evaluation; like all human endeavors, there's room for improvement. (Indeed, the complaints bubbling up against it now are nothing new; I heard the same complaints when I was in graduate school.) Despite those problems, the system has largely served us well for the last several decades, and, despite their flaws, the NIH and NSF peer review systems are remarkably immune to political influence and corruption, at least as much as any government entity can be. Certainly the system has much to recommend it. For example, junior scientists compete for funds with more senior scientists on a more equal footing than in perhaps any other nation in the world. In fact, new investigators are even given a significant (although, some would argue, not significant enough) break on funding lines to give them a better chance of being funded; for example, the NCI set its general payline at the 12th percentile this year but set the payline for new investigators at the 18th. Also, applicants can propose virtually any sort of health science-related research project, and it will be seriously considered for funding by a study section composed of experts qualified to evaluate it. Moreover, scientists are actively working to address the system's shortcomings. Indeed, Antonio Scarpa himself, the Director of the Center for Scientific Review of the NIH, recently published an article in Science reporting on these efforts and soliciting suggestions. Meanwhile, contrary to the impression given by Dr. Miller's article of a system that scientists accept and never challenge, scientists themselves regularly publish articles about the problems in the peer review system, and there has been much discussion of these problems at meetings that I have attended.

Of course, substantive and real reform of the peer review system in order to make it function better and allow the funding of meritorious but risky projects is not the true goal of "critics" like Miller and Esmay. Neutering it is, the better to allow pseudoscience like HIV/AIDS denialism an opening. Mark was right to warn us to beware the bashers of peer review.

Donald Miller and those like him apparently don't (or can't) distinguish between novel ideas backed by well-formulated arguments (and preferably preliminary data) and off-the-wall ideas backed by unsound arguments. Not all novel ideas are equally meritorious.

Great article.

Apparently, nobody told Dr. Miller that the National Center for Complementary and Alternative Medicine was instituted to bypass the scientific peer-review process. They dole out money based on the popularity of the notions proposed. He should apply there.

Also, there are a lot of quack "peer-reviewed journals" that will accept anything he can type. Many of these are indexed by PubMed.

What is Dr. Miller's real problem?

Duesberg. Some time ago I had a long, tired, pointless argument with some HIV denialists, and among all the crap they wrote, they accused me of "worshipping high priest" Robert Gallo. The curious thing is that I mentioned Gallo at most once, while these folks were fond of citing Duesberg and others as if they were spawned by Zeus himself and their words were The Absolute Truth. I doubt they saw the irony.

Off-topic: did you check this out? Or am I outdated?

By Martín Pereyra (not verified) on 16 May 2007

Anti-peer-review crankery is getting increasingly common within the ranks of the non-crank scientists as well.

Orac, you assert one of the most puzzling problems as I see it:
reviewers are more reluctant to fund risky research because they do not want to throw money at projects with a low chance of success.
I agree that this is a common explicit or implicit criterion. What I find completely unstated and unexamined is what constitutes "success" and what actual data are used to justify biases regarding "risky" or "safe" proposals.
One might propose scientific output in terms of numbers of papers. Okay, so who has "failed" to justify our biases? Where are the data on which type of grantee (new investigator? non-tenured? small institute? 65-year-old near-retiree?) "fails" to "succeed" with a grant?

Drugmonkey,

I'm not sure why you characterized your link as "crankery," when it's quite different from what Miller and Esmay are about. As for how to judge "success," that is indeed a difficult question. One common measure of "success" is the submission of a competitive renewal that is funded. I'm sure it could be argued that this is a meaningless measure because grants are "renewed" if the investigator shows results and progress that "confirm the existing paradigm," but in actuality the most interesting competitive renewals are ones that find something different and new and propose a scientifically sound way to pursue the implications of this new finding.

Martin,

Yes, I just became aware of that study early this morning while checking my e-mail. ;-)

The NIH awarded Duesberg a long-term Outstanding Investigator Grant and a Fogarty fellowship to spend a year on the NIH campus studying cancer genes, and he was nominated for a Nobel Prize.

How to spot a crank.

The post at the link was discussing such issues, the common refrain that there is something "wrong" with peer review that we are hearing from scientists. Perhaps these individuals are not, precisely, cranks. The complaint boils down to "I'm not having as easy of a time getting my money as I used to have so there must be some problem with review. Sure, I know the budgets suck but surely that affects all those other drones, not a genius like me". When pushed, these typically senior scientists assert all kinds of data-free and very poorly outlined arguments to support the contention that there is something "wrong" with the review. All the while not considering the fact that, perhaps, they wrote a grant that was not as good as the other PI did and that initial peer review is working just as well or poorly as it ever has. The arguing from personal anecdote, from study section experience 20 yrs out of date, in frank violation of the evidence around them (say from their jr colleagues) combined with the circular and catch-22 logic respecting why they "deserve" their grant money and others more jr or with less robust funding histories do not is the reason I use the term crank. Feel free to use another; semantics isn't the issue.

You hit on the inherent circularity of the "success" issue based on grant renewals, but still it is good to identify these frequently used but generally unspecified criteria. My question, however, remains. How are we to evaluate and verify our hypotheses (biases, as I'd have it) with respect to which type of grant leads to "success" and which does not?

To bring this back to the larger point, there is a valid critique respecting the inherent conservatism and old-boys-ism of grant peer review. Not as bad as the pseudoscientists might paint it, but still palpable. I think you can see clearly that the activities of "Institutional priority", "initiatives" (roadmap, new investigator, etc) and other Program behaviors are in place to combat this emergent feature of the study section. One of the things that I'm on about is why we cannot also address some of these problems at the source by fixing the study section...

I generally agree with your comments, although I think there are a few areas where you overstated the case.

Do you know of anybody who has ever gotten any kind of satisfaction through the NIH appeals process? I've never tried it myself, but I have colleagues who have done so, who had received grant critiques with what seemed to me to be fairly outrageous errors and biased statements. The closest I've seen to a favorable outcome was a grudging admission that errors were made, but an insistence that the proposal would not have been funded anyway. The general perception among scientists I know is that NIH wants to discourage appeals, which make more work for already-overworked NIH personnel and study sections--and considering that appeals never succeed anyway, why risk pissing somebody at NIH off by filing an appeal?

I agree that in the current funding environment the peer review system is substantially broken, but I still can't think of anybody I'd rather have making decisions on my proposals. Perhaps there should be more money devoted to funding R21 and R03 grants to support pilot projects on innovative topics. In the past, I became discouraged about the R21 mechanism after a bad experience in which I submitted a proposal for an R21, had it fail with a criticism that there was too little preliminary data (even though I did have some, and R21's supposedly don't require preliminary data at all). So I did more preliminary experiments, resubmitted, and it came back with a critique that with the preliminary data it would be more appropriate as an R01. So I reformulated it as an R01, submitted it, and it came back with a critique saying that it was speculative and would be more appropriate as an R21. At which point I gave up and submitted something else.

But I'm giving R21's another shot. It used to be that to get an R01, you just needed Preliminary Data demonstrating that you had the methodology working, as well as any key results that a large part of the proposal was contingent upon. These days you are going up against 2nd and 3rd revision proposals that have substantial preliminary results for every single Aim. With funding tight, it's hard to scrape up the money to do that much preliminary work on a new, unfunded project, so maybe R21-type grants are the answer. We'll see.

There seems to be a large libertarian bent to these critiques: the folks at Liberty&Power have been on an anti-Global Warming tear recently which includes an awful lot of attention to the peer review process.

The solution is obvious. Who are the people with multidisciplinary scientific expertise capable of evaluating research proposals in all branches of science? I can only think of two: Donald Miller and Dean Esmay. I say let these peers review all scientific research proposals! What could possibly go wrong?

By Chris Noble (not verified) on 16 May 2007

It seems to me the point of reviewing peer review is missed. I believe in quality of intellectual thought, in funding effective science, and, most important, in ensuring ongoing respect among the population at large for the ethical standards carried and ensured by the terms 'science' and 'scientist'. Peer review depends on the judgement of one person or group about the work and quality of another person or group. This immediately raises the question of how we operate as a species, whether any pre-existing concepts or paradigms will play a part in that judgement, and, if so, whether that has the potential to distort it. This is intrinsic to peer review, regardless of any issue of intellectual politics that could arise.

At issue is the management of the full business process of maintaining quality. Is there in fact any way of maintaining quality standards that is intrinsic to the intellectual work itself, so that the assessment of quality and excellence steps beyond the confines of peer review? I know there are, but it would seem that popular opinion is predisposed otherwise; is that not a case in point?