Post-publication Peer-review in PLoS ONE, pars premiere

Why is the letter P the most useful for alliterative titles?

But back to the substance. One thing that has bugged me for a long time is that I often see on blogs, or hear in person, the sentiment that "there are no comments on PLoS ONE". Yet I spend quite some time every week opening and reading all the new comments, so I KNOW they are there and that there are already quite a few of them. Why the difference in perception? Is it due to the predictable distribution (a few papers get lots of comments, most get one or none, just like blog posts)?

So, when we saw this nice analysis of commenting on BMC journals by Euan Adie, we decided to take a look at our own numbers and see how they compare. We put the data together and sent it to a few people, mostly bloggers, for a closer look. As they write their blog posts, I will post the links here, especially when they get into the nitty-gritty statistical analysis.

But for now, right out of the gate, two of them have already posted the first impressions they took away from scanning the data.

Deepak Singh writes:

There, people post a link and a whole discussion erupts (not always, but often enough). I would throw out this challenge (see the DOI suggestion above). Why should discussion be localized to PLoS ONE itself? If a paper published in PLoS ONE is discussed in 20 other places, it would be considered a success. In other words, we shouldn't limit our thinking to just on-site commenting. Perhaps within the site, we should be focused on ratings and perhaps tagging and notes.

--------------------

So what have we learned from this exercise? Quite frankly, I am not sure. Is the commenting on PLoS ONE at the level we hoped it would be? Not quite. Is it as bad as some might like to believe? Not quite. What we have is a very, very nascent (no pun intended) effort on the part of the scientific community to use web publishing platforms as a communication medium. I'd like to ask those same scientists to think about newsgroups. Most scientists are fairly comfortable participating in newsgroups, and here you essentially have one, with very clearly defined thread titles.

Cameron Neylon says:

In that context, I think getting the numbers to around the 10-20% level for either comments or ratings has to be seen as an immense success. I think it shows how difficult it is to get scientists to change their workflows and adopt new services. I also think there will be a lot to learn about how to improve these tools and get more community involvement. I believe strongly that we need to develop better mechanisms for handling peer review and that it will be a very difficult process getting there. But the results will be seen in more efficient dissemination of information and more effective communication of the details of the scientific process. For this, PLoS, the PLoS ONE team, as well as other publishers, including BioMed Central, Nature Publishing Group, and others, who are working on developing new means of communication and improving the ones we have, deserve applause. They may not hit on the right answer first off, but the current process of exploring the options is an important one, and not without its risks for any organisation.

There are also two quick responses on Nascent and Genome-Technology.

What is most interesting is that there are no comments on Nascent and Genome-Technology, only a couple on Deepak's and Cameron's posts, yet a very nice long discussion on FriendFeed (very much worth reading, and not just for compliments to me)! What can we learn from that fact alone?

Second, the comment-to-article ratio on PLoS ONE (by 'comment' I mean Ratings+Notes+Comments) has risen from 1.1 a year ago to about 1.5 today. For comparison, my own blog has 1.6. And I get lots of comments on a few posts, and zero or one on most. And I am prolific with quick/short posts like Quotes, so it is a decent comparison: not every paper is exciting or controversial enough (just as most of my posts are not) to motivate people to say anything.
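To make the arithmetic behind that comparison concrete, here is a minimal sketch of how a ratio of about 1.5 can coexist with most articles getting zero or one comment. The per-article counts below are made-up numbers, purely for illustration, not actual PLoS ONE data.

```python
# Illustration: a heavily skewed distribution of comments can still
# average out to roughly 1.5 per article. All counts are invented.
comments_per_article = (
    [0] * 60 +            # most articles get nothing
    [1] * 25 +            # some get a single comment
    [2] * 5 +             # a handful get two
    [4, 5, 5, 6, 8, 10, 12, 15, 20, 30]   # a few get a lot
)

ratio = sum(comments_per_article) / len(comments_per_article)
silent = sum(1 for c in comments_per_article if c == 0) / len(comments_per_article)

print(f"comment-to-article ratio: {ratio:.2f}")      # ~1.50
print(f"articles with zero comments: {silent:.0%}")  # ~60%
```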

Also, a blog post usually gets its comments immediately; after 24 hours it is deemed stale and people move on to the new stuff. A scientific paper, on the other hand, is not expected to get any for the first few months, and only gradually accumulates them over the years as new information comes in that sheds new light on it. And I see this every week: older papers get a nice share of comments, while papers published this week do not.

See what I wrote last year about this:

One thing to keep in mind is that a PLoS ONE article is not a blog post - the discussion is not over once the post goes off the front page. There is no such thing as going off the front page! The article is always there and the discussion can go on and on for years, reflecting the changes in understanding of the topic over longer periods of time.

Imagine if, half a century ago, there had been an Internet and Open Access journals with commenting capability like PLoS ONE. Now imagine if Watson and Crick had published their paper on the structure of DNA in such a journal. Now imagine logging in today and reading five decades of comments, ratings and annotations accumulated on that paper! What a treasure-trove of information! You hire a new graduate student in molecular biology - or in history of science! - and the first assignment is to read all the commentary on that paper. There it is, all laid out: the complete history of molecular biology in one spot, all the big names voicing their opinions and changing them over time, new papers getting published and trackbacking to the Watson-Crick paper with new information, debates flaring up and getting resolved, and gossip that would otherwise be lost to history - because it was spoken at meetings, behind closed doors or in hallways - preserved forever for future students, historians and sociologists of science. What a fantastic resource to have!

Now imagine that every paper in history was like that (the first Darwin and Wallace papers read before the Linnean Society?!). Now realize that this is what you are doing by annotating PLoS ONE papers. It is not so much a matter of the here-and-now as it is a contribution to the long-term assessment of the article, providing for future readers the kind of information you wished someone had left for you when you were reading other people's papers in grad school and beyond. Which paper is good and which is erroneous (and thus not to be, embarrassingly, cited approvingly) will no longer be secret lab lore transmitted from advisor to student in the privacy of the office or lab, but out there for everyone to know. Every time you check out a paper that is new to you, you also get all the information on what others think about it. Isn't that helpful, especially for students?

I hope more of you join in the discussion, and I will post links to other posts as they appear over the next week or two.


I have to agree with Cameron in that it is VERY hard to get scientists to revise the way they do things. There is an article out in PLoS ONE that someone in my department desperately needs to comment on. So far, all they will talk about is submitting a letter, and they don't accept that PLoS ONE doesn't accept letters. They automatically assume that if they send one to start a debate it will get published, and have yet to even look at the PLoS ONE site to see how it works. Do you have any ideas on how we can change people's viewpoints on this, to encourage them to comment on an article?

Bora-

Very interesting stuff. I don't have time to write a detailed comment now (I have a meeting at 10:00), but I'll try to later.

I'm working up a dissertation on the topic, and I'd be interested in speaking with others with an interest in this area.

Why should scientists be all that different from the rest of society? Remember Jakob Nielsen's 90-9-1 rule:
http://www.useit.com/alertbox/participation_inequality.html
User participation often more or less follows a 90-9-1 rule:
* 90% of users are lurkers (i.e., read or observe, but don't contribute).
* 9% of users contribute from time to time, but other priorities dominate their time.
* 1% of users participate a lot and account for most contributions: it can seem as if they don't have lives because they often post just minutes after whatever event they're commenting on occurs.

So think about the size of the audience for a given paper, then cut that down to 1% as those likely to leave a comment. It's not surprising, really, to see low numbers. Eliminate comments from the author of the paper, address updates, etc., and I'm sure the numbers are even lower. Given these factors, PLoS actually seems to be doing better with comments than I would have expected.
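As a back-of-envelope sketch of that argument (the audience size below is an invented assumption, not a measured readership for any real paper):

```python
# Rough 90-9-1 arithmetic: assume a hypothetical audience of 500 readers
# for a given paper (an invented number, purely for illustration).
audience = 500

lurkers = audience * 0.90      # read or observe, but don't contribute
occasional = audience * 0.09   # contribute from time to time
regulars = audience * 0.01     # account for most contributions

print(f"lurkers: {lurkers:.0f}, occasional: {occasional:.0f}, regulars: {regulars:.0f}")
# Even before excluding author replies and address updates, only a handful
# of readers per paper are likely to leave a comment at all.
```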

One of the other big issues, which you touch on, is timeliness. It's hard to have a conversation over several years. One often finds an interesting paper years after it has been published. Are you going to leave a comment at that point? Is it likely that the author, or the people who read the paper when it came out, will keep swinging by to see if anyone has added anything new? If you have a question about a method used, are you better off hoping someone will see your comment, or writing directly to the author via e-mail (or making a phone call)?

The other big issue is that the currency of science publishing is the PDF file format. Nearly everyone downloads papers as PDFs, then reads them at their leisure. This eliminates the social nature you're trying to promote, as was pointed out in some of the comments on the posts you linked. Given that this is how the vast majority reads papers, that 1% figure above is more likely 0.01%.

"by 'comment' I mean Ratings+Notes+Comments". I don't think Ratings really count as a comment.

It is an interesting problem - how to encourage researchers to comment on articles, and then how to pool all the existing comments on the web about a paper so readers and authors are aware of them.

Some Ratings contain brief comments. I was using that criterion because it captures user activity on the site, while I excluded trackbacks, which point to user activity off the site.