A few days later, my story today bears some similarities to Alice's tale of reviewer-requested revisions.
I too just got back reviews on a paper derived from my dissertation. The reviews ranged from minor revisions to reject with the editors landing in the middle. Mostly it looks like major rewriting and some rethinking of our arguments, but they'd also like to see some more data collection and analysis.
Only one problem. My field sites are >2500 miles away and about to be covered by snow for the next 7 months. So I suddenly find myself scrambling to make arrangements with co-authors to do some last-ditch field work. The field work itself shouldn't be too onerous (or so says the woman who'll be sitting in her office on the other side of the continent), but figuring out exactly where and how to do it will take some quick but heavy-duty thinking and planning.
So I foresee four possible fates of this paper and its attendant reviewer requests.
- We get the field work done, do the rewrite, and get the paper accepted at its current journal.
- We don't get the field work done, do the rewrite, and make a compelling case as to why the field work wasn't necessary and get the paper accepted at its current journal.
- We don't get the field work done, do the rewrite, and make a compelling case as to why the field work wasn't necessary but the editors don't buy it and we decide to submit elsewhere.
- One or more of my co-authors slaps me upside the head for even thinking about trying to gather more data and we take the paper elsewhere after a rewrite.
In any case, I think I can cross a 2008 publication date right off my mental planner. And I can add yet another substantial piece of work to my nearly-toppling to-do list.
Stay tuned to learn the fate of our heroine in peril.
#2 sounds reasonable. Of course, you had your first alternative journal in mind when you sent the paper off.
Read Isaac Asimov's two-volume autobiography. He eventually published everything, without rewrite, with the exception of one manuscript which was lost in a move.
"Read Isaac Asimov's two-volume autobiography. He eventually published everything, without rewrite, with the exception of one manuscript which was lost in a move."
Does that include his thesis?
Yikes. I am overwhelmed for you. I like that you see what options you have, although they all seem quite daunting... more data??? Yeeesh!
What a pain!
I would go for #2 first (leaving the field-scrambling possibility in my collaborators' laps - if they are happy to do it without much from you, OK; otherwise... do you need the hassle? No, you do not). If it turns into #3, curse, reformat, and get it back out ASAP. It might not make it into a 2008 "in press," but the key thing is to keep it OFF your to-do pile and on someone else's - getting A paper without too much more work is probably the most useful outcome for you, especially with all the hassles you've had this year, right?
I just want to encourage you and Alice. I got a revise-and-resubmit review for one of my major rechurning-of-the-thesis papers. I was really down in the dumps about the whole thing, and one of my colleagues kindly slapped me up the side of my head and told me I wasn't in as bad a shape as I thought. I was able to talk to the editor about my inability to run some of the experiments that were requested. It took a while for things to percolate through the whole publication process, but it got published.
Hang in there.
You might want to look at the brilliant post by Comrade PhysioProf:
http://scienceblogs.com/drugmonkey/2008/09/no_fucking_way_am_i_doing_th…
This is becoming a more and more common abuse of the peer review system (I speak with several decades of experience). I am confronting it right now on reviews of two separate papers. It comes, I believe, from a serious distortion of the role of peer review, arising in turn from a distortion of the idea of what constitutes scientific criticism.
Peer review should examine your methods, results, and interpretation to see if there is anything wrong. If you used the wrong reagents, miscalculated your statistics, or said something really incorrect, you should get rejected. But not having "enough data" is not cause for criticism. The whole purpose of statistical inference is to evaluate what the data (in their current quantity and quality) say about the scientific hypotheses you are investigating. If you truly don't have enough data, your statistics will limit the conclusions you can draw.
It is not, however, the role of peer review to prevent you from being wrong. Maybe if someone collects additional data, it will be discovered that you are wrong. But that discovery could also come from a different analysis of your present data, a different interpretation of your present results, or from an investigation of different questions with completely different kinds of data. (The latter is the most likely.) That is how science progresses, and peer reviewers should not try to interfere with it.
One source of this distortion of peer review is in the way we train (or don't train) students to think critically about science. I can't tell you how often, in graduate seminars, I have found students claiming to critique a paper by saying the equivalent of "I think they need more data." When I ask why, the answer usually boils down to "they might have found something different". True enough, but that is not scientific criticism. True criticism must make an argument, and show that something other than the author's conclusions follows from that argument. It is not sufficient to claim that something different MIGHT follow, any more than it is sufficient for the author to make a claim on the basis of what their data MIGHT show. What is sauce for the author is sauce for the reviewer.
So... my suggestion is to fire back, defend your data, as analyzed by your statistics, and as interpreted by you, and refuse the demand for more data. Review the reviews. It might not work, but then again it might.
Maybe not. But it certainly can be an appropriate role of peer review--depending on the editorial policies of a particular journal--to assist editors in identifying papers whose conclusions have a higher likelihood than others of ending up being wrong. This is because there is a limited amount of space available to a journal, and they sensibly wish to allocate it to papers that have a lower likelihood of ending up drawing incorrect conclusions.
Where do students formally get trained on how to do a peer review? If all they have to go on is "do more work" reviews which they themselves have received, then it seems only reasonable that they will review in kind.
Wow!
I'm so glad I'm not in a field where I have to do field work! That just sounds awful. But on the other hand, I envy the brevity of the review and publication procedure in your field. I had a paper accepted in February 2008, I saw the proofs in May, and they will publish it in 2009, no specifics yet about when. We submitted it in the spring of 2007, so it looks like a two-year period from submission to publication even for something where there were almost no revisions. Good luck!
Comrade PhysioProf said "it certainly can be an appropriate role of peer review--depending on the editorial policies of a particular journal--to assist editors in identifying papers whose conclusions have a higher likelihood than others of ending up being wrong."
True, but my point is that the assessment of this likelihood is a conclusion that must be argued on the basis of evidence. Point to an error in methodology, point to a mistake in analysis, point to a conclusion drawn from faulty logic, and you have an argument. But to ask for more data without more than a hunch that it would change conclusions is not justified (and "hunch" is all that many of the reviews I have seen can muster as support for the demand).