American Idol and Grant Review

As a fledgling scientist, I am not privy to the process of grant review. It may as well occur behind a green curtain, and all I get to see is the hologram of the intimidating wizard in the form of an email announcing that I did or did not get the tiny morsel of cash I so politely requested (sorry for the bad Wizard of Oz metaphor). But real people review those grants, and those real people have personalities. Some are surly and dominant. Others are polite and passive. In fact, we can draw parallels between reviewers and American Idol panelists, as this Correspondence in Cell cleverly does:

A typical day at one of the many NIH study sections goes something like this. Of approximately 50-70 investigator-initiated/R01 applications reviewed, about half are triaged and the rest are subjected to lengthy discussion, despite the fact that in most of the cases the initial scores are close. Like the amateur singers on the television talent show American Idol, each grant application is evaluated by three reviewers. And, when opinions are conflicting, the three reviewers may display a peculiar resemblance to the American Idol judges, Paula Abdul (sympathetic), Randy Jackson (neutral), and Simon Cowell (hostile). Due to the specialization of science, the discussion is often limited to the three reviewers, with the other study section panelists rarely participating. Indeed, sometimes, while the three reviewers wrangle over a particular application, others are busy on their laptop computers. It is difficult to determine whether these panelists are reading the application under discussion, preparing for the next discussion, or answering their emails. The necessarily inexpert or distracted panelist often sides more easily with the Cowellesque reviewer, who is trashing the application, especially when there is not enough money to go around. This leads to the perception that "the nasty reviewer always wins." Remember, everyone on the study section votes to determine the final score -- even those who are busy with their emails.

The author, Michele Pagano, goes on to point out the great costs of the grant review process, from the time spent preparing the grant proposal to assembling the study sections. And this doesn't include the actual funding requested by the applicant. Pagano suggests ways to streamline grant review, including prescreening applications using letters of intent, shifting the focus of the evaluation from nitpicky criticisms to big-picture issues like the impact of the proposed research, eliminating detailed timelines from proposals, and using electronic forms of communication to save money on travel to a common meeting place. He argues that since the peer review process for publication proceeds without face-to-face meetings, the grant review process should be able to as well.

As a neophyte outsider to the process, I'm hardly in a position to offer much of an opinion. Do any of my more experienced readers have something to add?


I sit on a lot of NIH panels, and your comments are not too far off for the worst cases. However, the panels I am on generally have a lot of good, productive conversations. More than just the three primary reviewers participate, and those who didn't write reviews often ask good questions or help focus the discussion on important topics. And people looking at their laptops are often trying to find relevant passages from the application (or the literature) to make their points. (Having search is a real improvement over paper applications for this!)

At least for NIH reviews, you can tell which parts came from the written reviews (the primary reviewers rarely stray far from what they write) and which from the oral discussion ("summary and resume of discussion"). I don't get to see those summaries for any proposals save my own, but for those, I often think the summaries of the discussions do a better job of getting to the heart of the matter than the written comments.

To get beyond the American Idol metaphor, though, there are people on these panels who know a lot and can be quite persuasive (and it's rare to see folks who are consistently negative or positive; usually it depends a lot on the argument). It's true that it's easier to raise doubts than it is to assuage them, but I've seen it go both ways.

I just participated in my first completely electronic peer review for NIH, and it was unsatisfying in many respects, mainly because I couldn't interact well with the people I disagreed with. I've also done phone reviews, which are better. It's clear that NIH could improve the process, and it has been changing slowly, but it's hard to think of things that would reliably make it better. And, as you point out, a lot is riding on it.

Finally, I want to point out that the people who sit on review panels generally survive by getting grants themselves and are pretty sympathetic to the plight of the submitters. We all tend to think that we are better reviewers than the people who review our own grants :-). In more than 15 years of reviewing, though, I've only seen a few folks I thought were really irresponsible. Mostly, reviewers try hard to be fair and do what's best for the science. And the NIH program staff and the CSR folks are certainly receptive to ideas about how to make the process work better.

Having sat on NSF panels and the odd NIH one, I think he gets it wrong. Playing Paula Abdul doesn't cut it; someone has to be genuinely enthusiastic, because there are too many good proposals. The nasty reviewers can't knock you out unless they find something really wrong with your proposal that can be hammered on. Finally, everyone has to agree on a panel review statement.

The aim in a lot of NSF panels is first to separate out the proposals that will actually compete for what funding is available. At the end of the panel, those are revisited and strictly ranked. Many members of the panel will look at the competitive proposals overnight.

Finally, I cannot overemphasize the importance of responding to reviewers' comments on NIH resubmissions. Respond seriously and completely.