Over in Twitter-land, somebody linked to this piece promoting open-access publishing, excerpting this bit:
One suggestion: Ban the CV from the grant review process. Rank the projects based on the ideas and ability to carry out the research rather than whether someone has published in Nature, Cell or Science. This could in turn remove the pressure to publish in big journals. I’ve often wondered how much of this could actually be drilled down to sheer laziness on the part of scientists perusing the literature and reviewing grants – “Which journals should I scan for recent papers? Just the big ones surely…” or “This candidate has published in Nature already, they’ll probably do it again, no need to read the proposal too closely.”
And, you know, I sympathize, at least to a point. Paper-counting is dumb, and impact-factor-weighting is even sillier. But then, there are a lot of problems with this idea, most of them tracing back to the fundamental fact that there isn't enough money to go around.
That is, yes, in an ideal world, you would give out grants on the basis of "the ideas" in some abstract sense. But there are lots of people with cool ideas out there, and a pretty large fraction of them even have "the ability to carry out the research" (we'll assume for the moment that there's some sensible way to establish that ability without a CV). But we're in a world where grant approval rates dip toward single-digit percentages, so a bunch of those people aren't going to get funded. So we end up accreting stupid criteria for approval, just because you need to do something to cut the pool down.
And this happens in all sorts of places in academia. Any number of factors that get used in academic hiring are problematic to various degrees, the classic example being the nebulous catch-all of "fit," but that happens because there isn't enough money to hire everyone who deserves a job. When you've got 200 people applying for a single tenure-track job, good people are going to get left out through no real fault of their own. And the sloooow progress on faculty diversity has similar roots-- I'm sure that if you gave the administration of the University of Missouri the money to hire 400 new faculty and staff, they would be thrilled to make their racial diversity problem go away. But nobody in academia has the money to do that.
Absent a sudden influx of astronomical amounts of cash, I don't know what realistic options there are to do a better job with allocating the limited resources we do have. At some level, it would probably be just as fair and effective to distribute grant funding by filtering out the small number of totally unqualified people and then rolling dice to determine the lucky folks who actually get funding. I doubt that'd make people any happier, though. For faculty positions you'd probably need to combine random number generation with massively illegal collusion, to make sure that the same handful of superstars don't get offered all the jobs.
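The "filter out the unqualified, then roll dice" scheme is concrete enough to sketch. A minimal illustration in Python, with hypothetical proposal names and a stand-in qualification flag for whatever minimal screen reviewers would actually apply:

```python
import random

def funding_lottery(proposals, n_awards, seed=None):
    """Filter-then-lottery allocation: screen out the clearly
    unqualified, then draw winners at random from everyone left."""
    rng = random.Random(seed)
    # Keep only proposals that clear the minimal quality bar.
    pool = [name for name, qualified in proposals if qualified]
    # Everyone in the pool gets an equal shot at the limited awards.
    return rng.sample(pool, min(n_awards, len(pool)))

# Hypothetical applicant pool: C fails the minimal screen.
proposals = [("A", True), ("B", True), ("C", False), ("D", True)]
winners = funding_lottery(proposals, n_awards=2, seed=42)
```

The point of the sketch is how little the procedure asks of reviewers: a yes/no quality judgment, with the scarce-resource rationing handled by chance rather than by accreted proxy criteria.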
I'm sympathetic to the concerns of the open science community, and more generally to concerns about the absurd pressures placed on junior faculty. But most of the things people propose as solutions would need the sudden appearance of shitloads of money to work out as intended, and that's just not happening.
And on that depressing note, I'm going to go edit some photo-of-the-day pictures.
Thanks for saying it so clearly. I've been preaching this message for years - measures for academic merit aren't going away because they are necessary. Every other month or so, there is a commentary in Nature complaining about these, but no practical solution whatsoever.
I think the major problem with these measures is that they're supposed to be a one-size-fits-all approach. They're infinitely cheap and unintelligent. They use data that we already know doesn't correlate with success. Many of these so-called measures are actually driven by publishers who want to score high on some scale. They're not made for scientists, they are, in a sense, deliberately made against them.
We can do much better than that. The best (and, I think, the only) way to do it is to allow scientists to build their own measure from available data (say, bibliometrics, keywords, ratings by colleagues, coauthor networks, whatever you can find!). An individual, customizable, transparent measure. I'm thinking of some shareware that comes with templates for measures, where people can exchange their templates and adjust them as desired. The administrative measure can then be an aggregate of these - one that has the great benefit of also being adapted to the local environment.
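The template idea above amounts to a transparent weighted aggregate over whatever data a group chooses to use. A minimal sketch, where the field names, weights, and normalized values are all hypothetical illustrations rather than a recommended metric:

```python
def score(record, template):
    """Combine a researcher's data into one transparent number
    using a user-chosen template of weights.  Missing fields
    simply contribute nothing."""
    return sum(weight * record.get(field, 0.0)
               for field, weight in template.items())

# One group's template: weight colleague ratings heavily.
template_a = {"citations": 0.2, "colleague_rating": 0.7, "coauthor_reach": 0.1}

# Hypothetical record with values already normalized to [0, 1].
record = {"citations": 0.5, "colleague_rating": 0.9, "coauthor_reach": 0.4}

value = score(record, template_a)
```

Because the template is just a dictionary, it is easy to inspect, exchange, and adjust locally, which is exactly the transparency the comment is asking for.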
"Absent a sudden influx of astronomical amounts of cash..."
Planetary Resources is working on that.
Your headline reminded me of my main take-away from a visit to our university last year by the dean of engineering at Harvard: You can do some pretty amazing educational things if you have essentially unlimited financial resources.