Scifoo - Day 3 (well that was yesterday, but I just didn't have the time ...)

Our session on Scientific Communication and Young Scientists, the Culture of Fear, was great. Many bigwigs in the scientific publishing industry were present and a lot of ideas were pitched around. I would also like to thank Andrew Walkinshaw who co-hosted the session, Eric Lander for encouraging us to pursue this discussion, Pam Silver for giving a nice perspective on the whole issue, and all the other participants for giving their views.

Now someone had asked that we vlog the session; we actually tried to set it up but didn't have the time. In retrospect I'm glad we didn't. It became apparent at the last session of scifoo, where attendees voiced their comments on the logistics of the conference, that many preferred to keep video and audio recording devices away from the sessions, as they impede open discussion. Conversations off the record can be more honest and more productive.

So about the session ...

The main point that we wanted to make was that there are problems with the current way we communicate science, and developments in Web 2.0 applications have created a big push to change how this is done. But we must keep in mind the anxieties and fears of scientists. How we communicate not only determines how information is disseminated, it also affects the careers of the scientists who produce content. Until there is general consensus from the scientific publishing industry, the major funding institutions, and the higher echelons of academia (for example, junior faculty search committees), junior scientists are unlikely to participate in novel and innovative modes of scientific communication. The bottom line is that it is just too risky to do so.

There are two main areas that remain to be clarified by the scientific establishment.

1) Credit. How do we ascertain who deserves credit for an original idea, model, or piece of data?

2) Peer-review. Although most scientists and futurists who promote the open-access model of scientific publishing support some type of peer review, in which the science or consistency of a particular body of work is evaluated, there remains some confusion as to whether peer review should continue to assess the "value" of a particular manuscript. Right now, manuscripts submitted to any scientific publication must attain some level of importance at least equal to the standards of that particular journal. When evaluating the scientific contribution of any given scientist, close attention is paid to their publication record, and particularly to where their manuscripts are published. Now whether we should continue to follow this model where editors and the senior scientists determine the scientific validity of any given manuscript is being questioned. In an alternative model, many technologists are pushing post-publication evaluation processes, which assess the importance of a manuscript after it has been released following minimal peer review. These include not only citation indices but also newer metrics currently being developed by many information scientists. There are many problems with these systems; the most critical is that most of the value cannot be assessed until many years after the publication date. An important piece of work may take years to have an impact in a given field. Until the scientific establishment reaches a consensus as to whether these post-publication metrics are indeed useful for determining the credentials of a scientist in the shorter term, junior scientists will remain reluctant to stake their careers on them.

There was a strong feeling that the top journals do provide a valuable filtering service. They go through all the crap in order to publish the best work. OK, they don't always succeed, but competition between the big journals promotes a high standard, and many scientists are reluctant to give up this model. Journals also help to improve the quality of published manuscripts; this function would be lost if all we had was PLoS One and Nature Precedings. To all those who think that journals must be eliminated in favour of an ArXiv.org model: consider yourselves warned.

That's where I'll end this for now.

Of course since then there has been some chatter on the web. Andrew has a nice post on the session. Jean-Claude Bradley discusses how discussions at scifoo helped clarify certain issues. Duncan Hill has some thoughts on the open science discussions. Corie and Anna also have comments from that first contentious session on "open access".

To see recent blog posts from all scifoo participants, click here.

More points to come shortly ...


Thanks for the summary, Alex!

It is impossible for a working editor to see sentences like this and say nothing:
"Now whether we should continue to follow this model where editors and the senior scientists determine the scientific validity of any given manuscript is being questioned."
So I have to chime in. We use junior scientists as referees ALL THE TIME - on average once per paper in my case, I'd guess. Post-docs and beginning faculty members can and do make great referees, and we certainly appreciate them (and all referees). Younger scientists are just as capable of judging scientific validity, and they recommend rejection of papers just as often as more established scientists.

The notion that there is a vast conspiracy of "editors and senior scientists" determined to keep out everyone else is ludicrous. How could journals survive and thrive if that were the case? We WANT to publish good papers! We WANT to help younger (and all) scientists advance great ideas!

Could we therefore base these discussions in fact? Perhaps you should moderate a session of editors or invite their comments. For example, many editors would also tell you they think impact factors are statistically flawed measurements that should NEVER be applied to individuals for things like tenure decisions.

I am guessing that you heard similar comments from the editors who were at SciFoo, but I just had to add some more data to your post!

I agree that junior scientists are often included in the peer-review process; however (and you can correct me if I'm wrong), most of the decisions are placed in the hands of well-established senior scientists. Now I'm not arguing whether this is bad or good, but it seems that there are many in the scientific community (many of whom belong to the establishment) who wish to change the way we evaluate scientific work. Their ideas involve shifting the evaluation process to a post-publication review system. In some ways I think that current technology will inevitably lead to some shift, but unlike many commentators out in the blog-o-sphere (who are quite naive), I think that a full shift is impossible. But this, I guess, is not the main point of our argument - we just want to have this discussion out in the open so that everyone is on the same page. By coming to some sort of consensus, everyone (the authors, the science publishers, and the consumers) wins.

There is a difference between sending a manuscript out to a junior scientist for peer review (in which he/she gets to voice an opinion on whether the paper should be published, in addition to suggestions for improvement) and an editor (whether a professional editor or a senior scientist) initially handling the manuscript and deciding whether it even deserves to go out for peer review. Additionally, the editor also gets to make the final decision on whether the paper gets published. While input is made by non-senior scientists, the preliminary gatekeepers and ultimate deciders (to use the parlance of our times) are still either editors or senior scientists.

Now, that's not to say that this is a bad thing. Getting an editorship at a journal (where the editors are regular academics, not professional editors) is an honor that is bestowed upon researchers in recognition of their expertise in their field, which requires that these positions be filled by people of at least some seniority. There isn't some "conspiracy of 'editors and senior scientists' determined to keep out everyone else", but let's at least acknowledge how the peer-review process works.

To invoke some shadowy cabal of 'senior scientists' as gatekeepers for what gets published is utterly bizarre. I would go so far as to challenge you to come up with some facts, rather than promote what looks like an uninformed conspiracy.

By A Nother Natur… (not verified) on 06 Aug 2007 #permalink

There's no conspiracy, certainly.

People who go into academic publishing - certainly in the case of Nature, Science, and the other top journals (probably PRL and Nature Materials in my field; can't speak for Alex) - are definitely not representatives of the worst excesses of raw capitalism. It'd be pretty absurd to accuse them of doing anything other than trying to do the best job they can of putting the best science they receive out in print - and they do it well. As a slight aside, though: I wouldn't even consider sending most of my work to Nature/Science. I do a fair amount of tools/scientific software work - which, in a sense, is the newest branch of instrument science/methods/experimental technique, but isn't typically regarded as very prestigious, alas.

Anyway, as a result, I'd be *really* loath to point the finger at (the vast majority of) publishers. There are always going to be some dodgy practices in any field, people being people, but the problems are more likely to lie in the political aspects of peer review than in anything that goes on in the publishers' offices.

The key problem is this, though:

  • Nature, Science, Phys Rev Letters, Cell, JACS, et al. - the high-impact journals - are doing the best job they can of taking the cream of research. But...
  • ... they're trying to divine the future, and however good their hit rate is - and it's good - it's not 100%. It takes a few years to see how influential a paper really will be, which is time grad students and postdocs might not have. Citations are a blunt instrument, too; a "final-word" paper might, temporarily, close off an avenue of research, and it'll be systematically undercited regardless of how good it is;
  • ... and the volume of research is increasing...
  • ... and, regardless of the opinions of publishing professionals (who were pretty unambiguous in the session talked about above), hiring committees (who are the real worry here), at least anecdotally, use h-index and other bibliometric measures when considering who to employ, which are, in the short term, influenced heavily by where work's been published;
  • ... which is OK if you've got enough research for the older papers to have been out long enough for the metrics to be valid, but much worse (statistically noisier) the earlier in your career you are;
  • ... and there's a vicious cycle here; people do the most fashionable research, rather than what they're most interested in, best at, or what's most important (there's a correlation between all of these things, sure, but it's not perfect), people wind up blaming the journals for not publishing them because of the disproportionate career-value of absolute-top-tier publications, and in general we wind up paranoid and defensive. Thus it kind of runs counter to most other fields - the younger you are, perversely, the higher the stakes: if you're tenured, you can afford to take more risks, because you've got more to fall back on.
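As an aside on just how coarse these bibliometric measures are: the h-index, for instance, collapses an entire citation record into one integer. A minimal sketch (with made-up citation counts, purely for illustration) shows how two records with identical total citations can score very differently, which is part of why the metric is so noisy for short early-career records:

```python
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break  # sorted descending, so no later paper can
    return h

# Two hypothetical records, each with 53 total citations:
print(h_index([50, 1, 1, 1]))     # one highly cited paper  -> 1
print(h_index([14, 14, 13, 12]))  # evenly cited papers     -> 4
```

The point isn't the arithmetic, of course; it's that any single number applied to a three-paper publication record is dominated by exactly this kind of distributional accident.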

So, in a sense, the top journals are caught in the crossfire here. I sometimes wonder if the problem isn't an absence of summarising services, as a layer on top of the existing journals; pulling together all the 'best' papers in a subject area from all the journals. This is a pretty ropy analogy, but something like the Smart Playlists in iTunes; virtual journals, in a way. At the moment, a paper in Nature is substantially more visible than one in Phys Rev Lett, which is substantially more visible than one in Phys Rev B, which is... etc etc etc. If that could be flattened out somehow, without compromising the integrity or quality of the top journals (and without drowning scientists in information), it'd be an enormous win. Copyright/subscription issues might make that very hard, though.

Anyway, just a few thoughts. I'm on vacation; probably shouldn't write so much!

Andrew,

Well said.

Another Nature Editor,

The anxiety that younger researchers are feeling (besides the need to develop a good story) is the uncertainty of what is going on with "open access" and all these alternative models of scientific publishing. I don't think that anyone here is invoking some conspiracy theory of journal editors or senior scientists (if that were the case, my boss would probably qualify ...), but there is a push for change. In fact, by starting Precedings, NPG is responding to this reality, and we applaud you for trying to adapt. But what we young (and older) scientists need now is to get everyone talking, so that we can reach some sort of shared understanding about what the playing field is. That is to say: how are data/theories/models going to be credited and evaluated by funding agencies, by search committees, and within journal articles?