I enjoy reviewing papers even knowing it sucks up too much of my time. I mean, what better way is there to get out any inner angst than to take it out on the writers of a subpar paper? (That's a joke, people.) Reading Michael Nielsen's post taking on the h-index (Michael's posting more these days!) reminded me of a problem I've always wondered about for reviewing.
Suppose that in the far far future, there are services where you get to keep control of your academic identity (like which papers you authored, etc.) and this method is integrated with the reviewing systems of scientific journals. (I call this Dream 2.0.) One benefit of such a setup is that it might be possible for you to get some sort of "credit" for your reviewing (altruists need read no further, since they will find any such system useless). But the question is how one should measure "credit" for reviews. Certainly there is the raw number of reviews performed. But is there a better way to measure the "quality" of a reviewer than simply counting reviews? Or the number of different journals for which the person reviews? Ideally I would think such a measure would punish a reviewer for letting papers through which never get cited. Or maybe punish a reviewer for not letting a paper through which gets accepted somewhere else and is successful. I think Cosma's observation
...passing peer review is better understood as saying a paper is not obviously wrong, not obviously redundant and not obviously boring, rather than as saying it's correct, innovative and important.
is probably a nice place to start.
Oh, and I certainly won't claim that it is a problem of any real significance (this is to stop the Nattering Nabobs of Negativity from yelling at me). It's just a fun metric to try to define. Sort of like the quarterback rating.
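Just to show what I mean, here's one toy version of such a rating: a minimal sketch in Python with completely made-up weights, which rewards sheer reviewing volume but docks points for accepted papers that were never cited and for rejected papers that went on to succeed elsewhere. The function and the weights are purely hypothetical.

```python
# A toy, quarterback-rating-style score for a reviewer. The weights (10 and 5)
# are arbitrary placeholders; the point is only the shape of the metric:
# volume counts for you, bad calls count against you.

def reviewer_rating(n_reviews, n_accepted_uncited, n_rejected_successful):
    """n_reviews: total reviews performed
    n_accepted_uncited: accepted papers that were never cited
    n_rejected_successful: rejected papers that succeeded elsewhere"""
    if n_reviews == 0:
        return 0.0
    base = 10.0 * n_reviews
    penalty = 5.0 * (n_accepted_uncited + n_rejected_successful)
    return max(base - penalty, 0.0) / n_reviews  # normalized per review

# Example: 20 reviews, 3 accepted-but-uncited papers, 1 wrongly rejected hit.
print(reviewer_rating(20, 3, 1))  # -> 9.0
```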
Well, one of the most annoying things from an author's standpoint is a reviewer who has clearly only lightly skimmed the paper (this can work in the author's favor as well as against him or her). While the practicality of such an idea is limited, having reviewers answer basic questions (multiple choice, perhaps?) about the paper would certainly be an option. But, like I said, it's not exactly efficient or easy to implement (at least at first glance).
Ideally I would think such a measure would punish a reviewer for letting papers through which never get cited.
That depends a lot on the journal's standards. For top journals, every paper is pretty much guaranteed to be cited at least a little. (You could make more demanding standards, but that would be giving too much importance to citation counts.) For minor journals, it may be fine to publish something that is never cited.
Or maybe punish a reviewer for not letting a paper through which gets accepted somewhere else and is successful.
That's another tricky one. If you're reviewing a paper that you think will be popular but is seriously misguided, then you know you'll be punished if you reject it.
In general, it's tough to formulate a reward system that doesn't create bad incentives for reviewers. One thing you can do is to issue rewards only in unambiguous cases. For example, you could award many bonus points to reviewers who uncover plagiarism or who manage to convince an author that his/her paper is seriously flawed.
In mathematics, you could also penalize reviewers who sign off on a paper that is eventually withdrawn after publication. (In other fields, a paper may be fatally flawed for reasons invisible to the reviewers, so this would be less fair. For example, nobody expects reviewers to be able to detect fraudulent experiments.)
These rewards aren't quite good enough, because they wouldn't occur frequently enough. They do a good job of handling extreme cases, but they don't reward ordinary judgement calls...
A true statement, and one which applies equally well to the present system. For example, if you're sufficiently bad at reviewing, the editors learn not to assign reviews to you, and there is no penalty for placing yourself in this position--indeed, there is the reward of not having to spend time refereeing papers. It's not a position that I personally take--I find that refereeing a good paper is a net plus since it forces me to think about a problem in my field in a different way--but the temptation is there, especially when there are so many mediocre papers out there.
I've seen that happen too, but it works both ways. One of the most annoying things from a referee's standpoint is to read the revised version of a manuscript and find that the authors have pretty much blown off a significant referee comment. It's one thing for the authors to argue that the referee's comment is either wrong or irrelevant, as long as their response letter or revised manuscript actually makes the case (with references and/or calculations, if applicable). Authors who attempt a proof by assertion against one of my points, or who ignore a significant point entirely, are inviting my inner curmudgeon to recommend further revision if not outright rejection.
In addition to Anonymous's comment about that, it doesn't play very fair with a referee who rejects a paper on the grounds that it's miscategorized for that particular journal. Then again, if a paper is going to be successful, it's highly likely that the author will submit it to the correct publication in the first place.
Give credit just for reviewing, not for the results of the review--as noted, any kind of result-oriented weighting is going to distort the process. But do _not_ give credit for the original review at the time of review; give credit for that review when you accept the _next_ review for the publication.
In other words, you get credit for your review only if the editor of the publication thinks you did a good enough job that he wants to assign you another review (and you accept).
For conferences it's more difficult; perhaps the system could be more general, so that you get credit for a conference review whenever you accept another conference review, and for a journal review whenever you accept another journal review. Yes, a bad reviewer could get credit for quite a few reviews before the word is out that they're not good, but then, people are rarely _really_ disastrous, so they do get some credit for the work they did put in.
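If it helps to see the bookkeeping, here is a minimal sketch of the deferred-credit idea, in Python with entirely hypothetical names and structures: a completed review only becomes credit once the same publication invites the reviewer back and they accept.

```python
# Sketch of deferred review credit: the editor's willingness to come back to a
# reviewer is treated as the quality signal. All names here are invented.

from collections import defaultdict

class ReviewLedger:
    def __init__(self):
        # (reviewer, publication) -> completed reviews still awaiting credit
        self.pending = defaultdict(int)
        # reviewer -> number of credited reviews
        self.credited = defaultdict(int)

    def accept_review(self, reviewer, publication):
        """Call when a reviewer accepts a new assignment from a publication:
        any earlier review for that publication is credited now, and the new
        one joins the pending pool."""
        key = (reviewer, publication)
        self.credited[reviewer] += self.pending[key]
        self.pending[key] = 1

ledger = ReviewLedger()
ledger.accept_review("alice", "J. Imaginary Phys.")  # no credit yet
ledger.accept_review("alice", "J. Imaginary Phys.")  # first review credited
print(ledger.credited["alice"])                      # -> 1
```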
I'd like to echo Anonymous. Some of the best --- and hardest --- work I've done as a referee has been keeping out papers which would surely have been popular, had they been published, but which were just bad. (In one case my report was as long as the original paper, and involved re-implementing one of their models to show that a crucial result was a numerical artifact.) I do this kind of thing because I am a viciously negative and critical man, but also because it makes the journals I care about better to read. If they decided that they'd rather just have the popular papers, well, I guess I could just carp on my blog instead.
I like Janne's suggestion.
Here's a wild idea, probably not good but perhaps interesting.
What if each field had a central web server that logged how many times each person had served as a referee? Journal editors would update the underlying database each time they received a report, but the data would only be displayed on a yearly basis (so that authors could not just check whose refereeing count had increased that day).
It would be fascinating to get statistics on how many papers a typical researcher actually referees (and even a typical researcher with a given seniority, subfield, etc.). Plus it would shame the people who don't referee enough.
Of course, there would be no guarantee that the referees' reports were actually worth anything (other than the continued reliance of editors on the referee). You could collect a little more information instead: ask the editors to rate how useful the reports were to them, and also include a count of the number of reports promised but never delivered.
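Concretely, the bookkeeping itself would be trivial; here is a hypothetical sketch (table and field names invented, just to fix ideas) of what editors might submit and what the yearly display would aggregate.

```python
# Hypothetical schema for the central refereeing log: editors record each
# report (or broken promise) plus an optional usefulness rating, and only
# yearly aggregates are displayed. Names are made up for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE referee_reports (
    referee     TEXT NOT NULL,     -- persistent academic identity
    journal     TEXT NOT NULL,
    year        INTEGER NOT NULL,
    delivered   INTEGER NOT NULL,  -- 1 if the promised report arrived, 0 if not
    usefulness  INTEGER            -- optional editor rating, say 1-5
)""")

# An editor logs one delivered report and one that never arrived.
conn.execute("INSERT INTO referee_reports VALUES ('alice', 'J. Imaginary Phys.', 2008, 1, 4)")
conn.execute("INSERT INTO referee_reports VALUES ('alice', 'J. Imaginary Phys.', 2008, 0, NULL)")

# The yearly public display: totals only, nothing finer-grained.
for row in conn.execute("""
    SELECT referee, year, SUM(delivered) AS reports,
           COUNT(*) - SUM(delivered) AS no_shows
    FROM referee_reports GROUP BY referee, year"""):
    print(row)  # -> ('alice', 2008, 1, 1)
```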
I don't think this has any chance of getting off the ground (although I wish it would). Submitting the data would be more trouble for editors than it was worth, and there'd be no chance of getting anywhere near universal coverage.