In 2009, I've done ~9 reviews of journal articles, including two in the past week, and not counting the 1-2 more looming in the next two weeks. During the same period, I've submitted one 1st author manuscript, still in review, but probably only going to cost 3 reviewers some time.
Anyone see a mass balance problem there? Or do y'all just see a case of a junior faculty member correctly working to build her international reputation in time for tenure? Or something else? 'Cause I'm no longer quite sure what to make of the situation. I'm dancing around the question of "How many reviews are enough?"
As far as I can tell, there are three philosophies I could use to approach the question.
1. Accept all review requests, make the associate editors happy by turning the reviews in on time, get the early view of what's coming in the field, build an international reputation through your diligent service. Let's call this the "more is better" philosophy. This is a philosophy that is common to people with unlimited time and energy (no life outside of science or a stay-at-home spouse to manage all the "distractions") and to the bean-counters who believe that a scientist doing 9 reviews in a year is fundamentally stronger and better-positioned than a scientist doing 7 reviews in a year.
2. Figure out roughly how many reviews your 1st author manuscripts have received, and give that many back to the scientific community before you reject any assignments. After you've achieved mass balance, accept review assignments that seem interesting to you and as you feel so inclined. Let's call this one the "parity" philosophy, and it's one I was introduced to by FemaleScienceProfessor (though now I can't find the link). With the parity philosophy, a junior faculty member could still end up doing a lot more reviews than she was receiving in a year, if she felt the need to make up for the ones she received as a grad student. Also, the people who are *lucky* enough to go through multiple rounds of reviews before being accepted/rejected would end up with more reviews to do per paper published, but hopefully the process of doing those reviews would be useful to them in such a way that the quality of their submitted papers would improve or they would pick more appropriate journals.
3. Accept all reviews under the guise of the "more is better" philosophy, but do the reviews in a time-limited world. Budget only 4 hours per review, and sometimes have to write lower quality reviews as a result. Panic when you suddenly realize you have four reviews to do in a 2 week period, on top of getting ready for a conference and teaching a new prep. Let's call this the "bad compromise" philosophy. I've never heard anyone advocate for it, but I know I'm not the only one who's fallen into it because of influence from the "more is better" advocates.
In my experience, people who espouse the "more is better" philosophy do not seem to recognize that a 1/2 day spent doing a review is a 1/2 day not writing your own manuscript, collecting and analyzing data, or playing with your kid. Or maybe they just know how to stretch time. When the "more is better" approach is attempted by a time-limited mortal, I think there is a real danger that a scientist could put her service to the professional community over the higher-payoff time spent on her own research, graduate student advising, and publication.
And I think that's what I might be at risk of doing this year. I've achieved mass balance in my review life, even accounting for multiple rounds of reviews and the backlog from my time as a graduate student. I know that reviews are a vital part of professional service, but I also know that I need to get another paper out this fall to keep me on pace for tenure. If reviews keep coming in at the same pace as they have this spring and fall, I know I won't get that paper finished up. If the reviews dried up of their own accord, or if I grew a spine and stopped accepting all but the most intriguing ones, I might still not finish that paper, but I'd be closer. So I think I need to move from the "bad compromise" philosophy to the "parity" philosophy. But, how do I do that, dear readers?
If I decline a review, do I risk never being asked to review for that journal again?
If I decline a review and I know the editor, do I owe them some sort of explanation?
If I move towards the parity philosophy and that results in a marginal, but positive, increase in my publication productivity, does that cancel out any dents in my international reputation caused by declining reviews?
Are the people who agree to write the tenure evaluations all firmly in the "more is better" camp and blind to the trade-off between research and service productivity?
I have never (35 years and counting) seen the number of reviews playing any role whatsoever in an evaluation for tenure, promotion, or annual salary considerations. Some reviews vs. no reviews might conceivably have a small impact, as evidence of one's reputation beyond the campus. But small.
I agree with ecologist. Editors are used to hearing people decline their requests, so it's unlikely that most will hold a grudge against you when evaluating your research as an external letter writer for tenure. They're going to talk about your research, and writing reviews only does so much to support that you are a good researcher.
Also, this is probably a bit field dependent, but most manuscripts take me less than 4 hours to read and review. And you can tell by reading the reviews of my own manuscripts that most of the others in my field are spending that much time or less.
"If I decline a review, do I risk never being asked to review for that journal again?
If I decline a review and I know the editor, do I owe them some sort of explanation?"
I'm no expert on this, but my suggestion would be to reply quickly and say that you wouldn't be able to get a review done in a timely manner right now, but that you'd be happy to be a reviewer in the future.
I don't think you owe any more of an explanation than that you don't currently have the time, and I don't think the editors expect any more of an explanation than that. I'm guessing they would rather get an immediate reply declining to review than a review that takes a long time to get back to the journal.
I've been an associate editor for one of the top journals in my field for a few years now. I'd say at least half the people I've asked to review manuscripts have turned me down, so no, it's not a big deal. No one's going to blacklist you from ever reviewing for the journal again if you turn down one or two requests. If you want to stay on the editor's good side, suggest three or four other people who might be good reviewers; consider postdocs and advanced grad students you know who may have more time, and more familiarity with the current literature, but who may not be on the editor's radar.
I think there are two issues here:
First (and I say this with all love and respect, Alice!), 4 hours for a review is insane. It takes me about an hour to 90 minutes to write a solid enough review to help the authors know what the next steps are if it's a case of revise and resubmit. Based on feedback I've gotten from a couple of editors and in resubmissions, I seem to be hitting the target, so I don't think more time on my part would be more helpful to the authors.
Second, while I like the mass balance idea, and I think it is important to judge your own time and say "no," I also recognize that engineering education is reforming itself in terms of its nature, scope, quality, and methodology, and there just aren't that many reviewers to be had. So to me, conducting reviews at this point in the field's development is part of helping to build the kind of scholarly community I want to be part of.
In this case size (of the reviewer pool) does matter. It is different in other disciplines, where I'm asked to do far fewer reviews per year because the reviewer pool is so much bigger. I keep that in mind as I try to balance my time and sanity with my sense of helping to enhance a research community.
Not that any of that solves your problem. But really - spend way less than 4 hours per review.
I've been in a similar situation with too many review requests and my compromise is the following: I only accept those reviews that are really and truly in my area of expertise. The others I decline, but I always suggest an alternative reviewer to the editor - usually someone whom I know and whose expertise fits better to the paper topic.
How does reviewing affect your international reputation? It seems to me any benefit it might bring in that area is counterbalanced by the time it takes away from other activities that could have a much bigger impact on reputation.
I hate doing journal reviews; they take me way too long (I still have not learned to effectively manage my time on them the way I have on other things) and I'm always late getting them in. Yet I keep getting asked to review more. My advisor gave me the same advice pika gives: only accept reviews that are right in your wheelhouse and never accept a review as a way to learn about an area. I've recently started following it with great success.
Interesting. I spend much more than four hours per review, but I am in mathematics. Reviews are not really very important for you. There are better, more time-efficient ways of getting its benefits. I try to achieve balance, but definitely no more. Grad students can take up the slack. They benefit more than I do from reviewing.
I'm running at about 2-3 times the review requests per paper for the year.
I view reviews as one of the solid ways in which I can wield influence on my field. So I accept most reviews, putting myself between #1 and #2, but I see them as a way to indirectly influence the field in ways I think are important. This means that for me, it's more than just a "service" task.
I spend about 2 hours per paper, though I've found that I'm able to write an effective review in less time now than it took even 2-3 years ago. I'm currently 2 years from tenure.
I advocate for a slightly-above-parity model, meaning that you agree to 1 or 2 more than you consume. Perhaps the approach is to accept all until you reach parity, then accept only the interesting ones. My reasoning is that if everyone operated only at parity, there would be no slack in the system. Based on the other comments, it seems like the overwhelming majority of responders are saying that reviews themselves don't count significantly towards tenure, which means that the only issue is the "mass-balance" factor associated with contributing at least as much as you consume.
I also agree that 4 hours seems like a lot. I'd argue about 10-15 minutes per page is about right.
I would make a couple points.
1. Studies have shown that when you are rushing through reviews, you are more likely to rely on your inherent biases in making your decisions. This tends to disadvantage authors who are minorities, female, or both. So it's much, much better to turn down more manuscripts and to spend the time you need on the manuscripts you do take.
2. Relatedly, as an editor, let me agree that editors appreciate rapid responses, even if it's a decline, and really appreciate suggestions of other potential reviewers. This can be a really good way of helping qualified post-docs get on the radar of editors in their community.
3. Only accept reviews for stuff directly in your field. You will be much more interested in those manuscripts, will probably learn something important that benefits your research, and it will take you less time per review.
4. Many journals are now thanking reviewers by name in end of year (or two year) posts on their websites. This allows you to put it on your tenure document with a link, which you should do. If the journals you are reviewing for don't do this, tell them they should.
5. On a broader note, we do need to get something like ResearcherID going so that we can quantify these types of contributions to the literature.
Thanks for the advice, all. From the sampling of comments above, it seems like the "more is better" crowd that I've been under the influence of is the minority viewpoint.
A bit of context. I'm not Alice (engineering ed), I'm SciWo (geosciences). The average manuscript I review has 20 double-spaced pages of text including equations, 7-12 figures*, and 2-5 tables*. (* usually too many in my opinion)
In my 4 hour review process, it takes me ~1.5 hours to do the initial critical read, making marginal comments to myself on the hard copy, another 1.5 hours to delve into the technical details of the methods and results and write the comments to the author (this often includes time for a quick scan of related abstracts), and an hour to polish things off, write the summary to the editor, figure out how to log back into the publisher's website, etc. If I have to dig deeper into the related literature, because the authors appear to be making some mistakes or need some other scientific guidance, then my process stretches longer. From speaking with others in my field and related ones, I think that this timeframe is easily within 1 std. dev. of the mean review time, and may even be slightly below average. At Matthew's 10-15 minutes per page, I'm running about right on pace.
If you do not think that you have time to do the review to your satisfaction in a reasonable amount of time, then you should decline. If the topic is sufficiently far from your area of expertise that you do not think you can do an adequate job of reviewing, you should decline. If there is a situation of which the editor is not aware that could reasonably be construed as a conflict of interest, you should decline. It is generally polite, but not required, to state your reasons for declining and to offer suggestions for colleagues who in your opinion would be suitable referees.
As for how much reviewing you should do, three reviews for each paper with you or one of your students as first author is roughly the right balance, at least in my field. Most papers have two referees, and you must allow some additional refereeing because some papers will be rejected (I have been on both sides of that equation).
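The mass-balance arithmetic described in the post and in the comment above reduces to a simple calculation. A minimal sketch (the `review_debt` helper is hypothetical, and the default of three reviewers per paper is the figure from the comment above; adjust it for your field):

```python
def review_debt(papers_submitted, reviews_done, reviewers_per_paper=3):
    """Rough peer-review 'mass balance': the number of reviews you still
    owe the community. Positive means you are in debt; negative means
    you are past parity and can decline with a clear conscience."""
    return papers_submitted * reviewers_per_paper - reviews_done

# One first-author submission this year, nine reviews completed:
print(review_debt(papers_submitted=1, reviews_done=9))  # -> -6, well past parity
```

Of course, parity over a single year is a crude measure; as later comments note, the accounting arguably ought to run over a whole career.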
If you are trying to be thorough, four hours per review is *not* a lot. You have to read (not skim!) the paper, mark it up with your notes, and then translate those notes into a coherent review. Assuming Matthew #10 is counting manuscript pages, his 10-15 minutes per page (which sounds about right to me) works out to around two hours for a letter paper and three to five hours for a typical length longer paper, at least in my field.
(1) No one gives a shit--other than editors--how *many* reviews you have performed. The only thing that is relevant to your CV is *what* journals you have reviewed for. No editor will ever hold it against you if you respond to their request for review by telling them you would like to in the future, but cannot devote the time right now.
(2) Four hours on average is way too long for a paper review.
(3) You should be enlisting the assistance of your grad students and post-docs in review. This is an important part of their training, and all journal editors are happy to allow this if you inform them and make it clear that you are ensuring that your trainees who participate are being instructed on confidentiality.
Agree with other commenters - declining to review will not be remotely detrimental. In fact, if you wait a day before replying, the editors may have already found somebody else! The parity issues ought to be over the career span, not a year by year basis - everyone knows pre-tenure folks don't have the time for lots of reviews. Later in your career, you'll get faster at it, and probably review far more papers than you submit, plus maybe take on editorship duties. For now, ONLY review those papers you are truly expert in, and don't take more than one assignment at a time. Do review within your sub-subfield to help keep up on the latest, but don't let it take away from your productivity!
A caution on the idea of using your students or postdocs to help with reviews (different from suggesting names to an editor for alternate reviewers). It is definitely a breach of ethics to share or discuss with anyone a manuscript that you receive for review, without prior permission of the editor. Editors are usually happy to give permission (as CPP notes), but do ask for it.
Many journals, at least in the biomedical sciences, either explicitly give blanket permission in their instructions for reviews, or do so implicitly by asking as part of the review form to list the names of any trainees who have assisted.
John MacDonald's comment above - to suggest other reviewers - is excellent. Good for journal editors, and a way of enhancing your reputation. (You're no longer the new scientist who is suggested by your former advisor - you're a seasoned professional who is helping to launch the careers of newer scientists!)
Interesting article - thanks for posting this. I generally receive 2-3 review requests per year and so far have only declined reviews when I am really too pushed for time or the topic is way outside my field. The time a review will take me depends - if there is data in the paper then I will want to plot it up to check that I can get the same results as they are suggesting. If I can, then the review doesn't take very long....
I genuinely enjoy reviewing and hence am an easy mark. To keep things under control, my new rule of thumb is to only review for journals that I publish in or aspire to publish in.
(Hi SciWo - Sorry for the too fast reply mixing up you and Alice - all I can say is that it was before my coffee).
But I'm curious: do the journals in your field expect reviews that operate at that level of detail? Are these reviews of articles that are within your area of expertise already, and still require you to look up related abstracts and go through the method with what sounds like a fair amount of detail? Are you checking data analysis in detail? I review in at least two different fields, but the level of detail expected in the review is nowhere near the level you describe, but I'm not sure if that's because I do only review articles where I'm up on the literature or because of something else about the nature of the research and data analysis.
In answer to the comment above, the level of detail I give in my reviews is consistent with that of the better reviews I get on papers I submit. I don't aspire to be the annoying reviewer with nothing useful to say, but maybe I have been (un)lucky and gotten more-than-usually detailed reviews on each of the papers I've submitted so far. I am looking up abstracts because, oftentimes, I want to verify that I am right about a particular detail or who was the first to publish on a particular method before I write that in my review. This part will probably get faster as I either care less or become so familiar with the literature that I know with confidence that I am referring to the right fact or citation.
I've heard people say it takes 3-8 hours to review a paper (depending on both the reviewer and the paper), so your 1/2 day sounds normal to me.
I'm not real clear on how reviews contribute substantially to one's international reputation. Wouldn't publishing your own papers develop that reputation much, much more efficiently? It seems like turning down excess reviews in order to get your own papers out is a much better career/tenure strategy.
I've reviewed several papers as a grad student. I was asked to do most of these because my advisor suggested me as a substitute when she was too busy. Seems like you should be able to do that too if/when you have an advanced student.
Recently, I've been working on stuff which has a very small pool of qualified reviewers. I don't really count myself as qualified, but have been accepting most review requests because there just aren't many people who can do it at all. Which brought me to the idea of...
Maybe there should be two classes of reviews. It seems that two or three 'quick' reviews from people in the general field, but not necessarily experts on the precise topic, would be useful for structure, language, and clarity problems. A couple of intensive technical reviews are called for as well.
These should be separate IMO for a couple of reasons...
The first is simply to ensure that both types of reviews actually get done for every paper. I've had papers accepted where no reviewer actually bothered to check the technical details and obvious minor errors (typo-based) got through. I've also seen (from the final review/editor side) technically sound papers that made basically no sense, explanation-wise, due to serious language and structure problems, get approved by three reviewers.
Another reason to separate these types of reviews is to make reviewing easier. If a first pass of 'quick' reviews are done, then critical clarification and structure issues can be dealt with at relatively low cost (to the reviewers). Once the paper makes sense overall, then reviewing the technical aspects is a lot easier. It is quite difficult to assess if a method actually supports a claim if you're not clear on what the claim is supposed to be.
Finally, the pool of really technically qualified reviewers is often quite small and somewhat incestuous. Wider peer review is really needed to assess general interest and also mitigate the inevitable problems (friendships, rivalries, pet theories of the "big names", ect) of not-quite-so-anonymous reviews from colleagues in a narrow specialty.
Anyway, my $0.02
I'm probably in the more is better camp but with the proviso that I won't review for shite journals that I don't submit to. I can't be bothered wading through utter crap.
If you review well and consistently for a journal you could always ask the editor to write you a letter of recommendation? Would that be any help in American tenure track world?
In medicine you can also be asked to write the editorial if you have put in a thorough effort at review-time. This is starting to happen for me and I've now got a major medical journal asking for an editorial. Lots of peer recognition points there and all from writing a good review.
I'm surprised about the estimate of four hours for a review but I guess this must be related to the length of an average paper for your field. I guess my first reaction was: if it only takes four hours what's the problem?
In my field the papers themselves can be between 30-60 pages so 4 hours would just be enough to have a quick read through. I think I spent more than 16 hours on the last paper I reviewed, and I thought that was rather short but since it was a rejection I didn't think it was necessary to comment in too much detail. But most of my colleagues seem to spend a lot of time on reviews too. I see that when I've gotten reviews back on my own papers as well. On the other hand, I've never heard of anyone being asked to review as often as you seem to have been asked.
If the only thing you can put on your CV is which journals you reviewed for, this would suggest only reviewing at most once a year for each journal (then you've done your duty). Of course, you could also add in parentheses how many papers you reviewed.
In favor of "more is better"...
You want to try to develop a good relationship with the editor at journals you care about. Then you might be invited to write a review for that journal. (This happened to me). Such a review would help your tenure chances.
You want to demonstrate that you are recognized in your field, and recognized internationally. Thus, to a point, reviewing more is better, particularly if the reviews are for big-name journals. I would think you would want to review as much as others in your dept that have been promoted or as much as your chair thinks is appropriate. It would be a good question for your chair, or your mentoring committee if you have one. "Am I reviewing enough or too much? How many reviews per year are best?" If you meet that number and the reviewing is for good journals, then when the committee or the chair writes the letter to go with your tenure package, it can say that you achieved recognition in your field as evidenced by your invitations to review for these superior journals.
I'm intrigued by this idea that you have a backlog from being a graduate student. You didn't do paper reviews as a graduate student? I have been published exactly once and I am just beginning my second year as a master's student in statistics, and I have just finished my first paper review. Mine was by far the most comprehensive of the reviews, so I expect there will be more in my future....
But seriously, is this just my field? Is it the fact that I was first author on my one and only publication? I'm surprised that you have a backlog.
Around here it would be extremely uncommon for graduate students to be asked to review or be involved in reviews. Not sure how general that is across other universities though.
To do a good review in my field quite often also takes more than 4 hours - I suspect that old folks in the field might knock the time down a bit, but I have done a few mammoth papers this year which have taken more than a day each.
On the other hand, I try to vary the journals I review for, simply for CV purposes.
It frightens me to think that many people feel that less than four hours is more than sufficient to adequately review a submitted manuscript, not least because of the reasons someone above points out about implicit biases being more likely to creep in when one rushes through a manuscript review. I realize the industrial-throughput model has thoroughly colonized just about every aspect of our lives, but really, journal article reviewing, too? How can you possibly expect to adequately conduct a reasonable peer review of a manuscript in, say, two hours - reading and thinking and commenting and submitting remarks?
No wonder ghostwritten articles sail right through the "peer review" process. The peers aren't looking all that closely, it would seem. I guess this is the price we pay for all of us needing to publish ever more every year.
Heck, I've spent half a day on other people's reviews - in the form of,
"Hey LL, I'm reviewing a paper using method X. Didn't you do that ten years ago? Could I just borrow a minute of your time?"
Seriously, though, paper-chaining your way back through a decade of incremental methods papers to figure out whether a novel result is a screwup or a discovery is non-trivial, even if you're up to date in a field. Re-reducing the data to prove the point is even more time consuming. But if you are going to recommend rejection due to an analytical screw-up, you owe it to everyone to be absolutely sure you can identify exactly what went wrong...
Interesting post and follow-up discussion.
I've been told that one should accept all requests for reviews when you are at the beginning of a scientific career, for CV purposes.
On the other hand, I think there are occasions in which it is better to turn a request down, for example, when you are asked to review something that is not your field of expertise or when there is a conflict of interest (e.g., the paper you are asked to review was written by your supervisor). In both instances, I think it is best to explain why you would prefer not to review the paper, suggest names of people who could do the reviewing instead, and to offer to still do the review if the editor fails to find reviewers.