Science21: The Journal of Stuff I Like

Another thing I thought was intriguing that came up at the Science in the 21st Century meeting wasn't from a formal talk, but rather a conversation over dinner with Garrett Lisi and Sabine Hossenfelder about the future of publishing. Garrett was suggesting a new model of publishing, based on pulling things from the arxiv (or something like it).

The idea here is that anybody who cared to would set up a "journal," consisting of a collection of links to papers they found worthwhile. If you wanted to know what Garrett Lisi finds interesting and useful from recent research, you would look at his "journal," but if you prefer, say, Jacques Distler's take on theoretical physics, you would look at his. You could imagine putting together higher-profile collections as well, say by making a "Journal of Articles Picked by More Than N Journals," where N is some integer greater than 1.
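(To make the aggregation idea concrete: here's a minimal sketch of how a "Journal of Articles Picked by More Than N Journals" might be computed, assuming each personal "journal" is nothing fancier than a list of arXiv identifiers. The names and most of the IDs below are placeholders I made up for illustration.)

```python
from collections import Counter

# Each hypothetical "journal" is just a curated list of arXiv IDs;
# the names and most of the IDs are placeholders for illustration.
journals = {
    "lisi": ["0711.0770", "0809.0001", "0808.0002"],
    "distler": ["0809.0001", "0807.0003"],
    "hossenfelder": ["0809.0001", "0808.0002", "0806.0004"],
}

def meta_journal(journals, n):
    """Return the papers picked by more than n of the individual journals."""
    counts = Counter(paper for picks in journals.values() for paper in set(picks))
    return sorted(paper for paper, count in counts.items() if count > n)

print(meta_journal(journals, 1))  # papers appearing in at least two collections
```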

This isn't all that original an idea-- as Kate noted when I mentioned it to her, it's basically the same scheme some people have proposed for getting rid of traditional publishers in fiction. I suspect it would avoid the obvious pitfall of the fiction version, though, and elsewhere in the meeting, we heard about some software that might allow something very much like this idea.

The obvious pitfall in the fiction version of this idea is that nobody in their right mind wants to read slush. Sturgeon's Law applies only to what's been published, which is a tiny fraction of what gets sent to publishers by people whose enthusiasm is greater than their skill at writing. The majority of this material is just dull, but some of it is soul-crushingly awful, and it's hard to imagine a business model that would get people to agree to spend time wading through oceans of this stuff.

I think this is less of an issue for science, though, because reading papers is an essential part of what scientists have to do anyway. If you want to do research for a living, you need to have some acquaintance with what other researchers are up to, which means keeping an eye on the journals and the arxiv, and flagging interesting stuff as it comes up. If you're doing that anyway, it's not too much of a step to share the list of what you found worthwhile with the rest of the world, at which point you're effectively publishing the Journal of Stuff I Like.

(You could almost imagine using the tools from the aforementioned Public Knowledge Project to set this up in the manner of a real journal, with the aid-to-reading tools and everything. That's probably more effort than anyone would be willing to put in, though, and there might be IP issues with that sort of thing...)

The problem, of course, is getting this effort recognized, but one of the things I like about the idea is that it's the sort of thing that anyone interested in changing the culture could just do, right now. Getting professional credit for making the effort to sift through papers might take some doing, but if people started to do this sort of thing on a regular basis, that could easily evolve out of the system.

In another session of the meeting, we had a demonstration of some software that, if you tilt your head and squint, is almost doing the sort of thing Garrett was talking about. Victor Henning showed a preview of a program called Mendeley, which is a tool for managing and sharing research articles. The demo was unfortunately cut short by some issues with the wireless network, but the bits that we saw showed some nice features-- you can give it a bunch of PDF's, for example, and it will sift through them to automatically find metadata like the title, author names, and so on, and build a library of articles, making it easier to keep track of what you read.
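(The demo didn't show how Mendeley's extraction actually works, and presumably it parses the article text itself, since plenty of PDFs carry no embedded metadata at all. As a rough sketch of the simplest version of the idea, here's what merely reading the embedded metadata might look like, using the third-party pypdf library; the function name is mine, not theirs.)

```python
from pathlib import Path

from pypdf import PdfReader  # third-party library; not what Mendeley actually uses

def build_library(folder):
    """Collect whatever title/author metadata is embedded in each PDF."""
    library = []
    for pdf in Path(folder).glob("*.pdf"):
        info = PdfReader(pdf).metadata or {}
        library.append({
            "file": pdf.name,
            "title": info.get("/Title"),
            "author": info.get("/Author"),
        })
    return library
```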

It will also sync that library with a web-based system, allowing you to access your papers online. They're getting backing from the people behind Last.fm, so it will also involve a social component, including the ability to share papers between users, and to get recommendations of other things that might be of interest.

That last bit is the part that resembles the "Journal of Stuff I Like" idea, but I'm not sure how fully developed this is. They're in beta now, and still have some issues to work out, both technical (how to handle copyright issues involved with sharing) and social (will people be too paranoid to let their rivals see what they're reading?), but it's a neat idea.

I haven't tried it out yet-- ITS has threatened (er, promised) to upgrade my work computer Real Soon Now, and I'm not going to install new software just to have it broken immediately. When that gets sorted out, though, I'll definitely give it a look. If nothing else, the desktop portion would do wonders for the folder full of cryptically named PDF's I have on my office computer...


CiteULike and, to some extent, Connotea already do something like this. Mendeley is impressive, but CiteULike has the advantage of already having a large number of users.

Thanks Chad! As you correctly pointed out, we're still in beta and have a lot of work to do - but we're getting there. Besides improving the speed and stability of Mendeley Desktop, we're expecting to have internal prototypes of the research workflow features (the Microsoft Word/LaTeX integration and the Metadata-Scraping Browser Bookmarklet that I mentioned in Waterloo) by the end of this week. Obviously some testing is needed before we can roll out these features - so I don't know whether we'll make it before your machine is upgraded :-)

As for the *Journal of Stuff I Like* - what you and Garrett proposed is definitely one of the things we have in mind for Mendeley, but admittedly it's a little further down the road. Once we implement this feature, the privacy issue you mentioned shouldn't be a problem: It will be an "opt-in" model with differing levels of privacy, so you don't run the risk of accidentally revealing your reading list to potential competitors. Instead, for example, you could elect to only share your reading list with people in your contacts list, or not to share it at all.

It was nice meeting you in Waterloo,
cheers from London,
Victor

P.S. D'oh - I forgot to say: One of the next versions of Mendeley Desktop will also include an automatic file renamer, so you can rename all those cryptically titled PDFs according to a certain schema, e.g. "Author - Year - Title - Journal.pdf".
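(To make that schema concrete, here's a rough sketch of what such a renamer might look like in Python. This is purely illustrative, not Mendeley's actual code, and it assumes the bibliographic metadata has already been extracted somehow.)

```python
import re
from pathlib import Path

def safe(text):
    """Drop characters that are awkward or illegal in filenames."""
    return re.sub(r'[\\/:*?"<>|]', "", text).strip()

def rename_pdf(path, meta, schema="{author} - {year} - {title} - {journal}.pdf"):
    """Rename one PDF according to an 'Author - Year - Title - Journal' schema."""
    path = Path(path)
    return path.rename(path.with_name(safe(schema.format(**meta))))

# Hypothetical usage:
# rename_pdf("0711.0770.pdf", {"author": "Lisi", "year": 2007,
#            "title": "An Exceptionally Simple Theory of Everything",
#            "journal": "arXiv preprint"})
```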

That seems like a good idea, but not as a replacement for journals (on paper, online, or coded in any medium whatsoever). What I am missing in a lot of those brave new ideas is a mechanism for quality control; given the source of this proposal, maybe that is not too surprising... I personally have used 20-year-old papers, whose authors I have never met, with complete confidence that every factor of 2 and minus sign is correct. I am not sure I want to give that up.

Moshe, the quality control comes from only paying attention to people you trust. I personally feel that I will get better value from the opinions of people I know and understand (even if I don't agree with them) than from an anonymous (and overworked) editor and anonymous (and overworked) referees who have taken some unspecified amount of care with the process.

Obviously it depends on the journal, but I wonder how you feel about those factors of 2 and minus signs in more recent papers?

Actually one of the strange things I find about this whole debate is the assumption that we as scientists are too busy to take care of quality control and that 'someone' has to do it for us. I agree we're swamped and that 90% of everything is rubbish - but surely that doesn't mean we can simply assume that a journal (any journal) will do the work for us?

Cameron, this is not an either-or situation: quality control is absolutely essential, and we need many layers of it for different situations. An insider in some field can rely on their own judgment or that of people they trust. As an outsider (say, a student), imagine trying to get into a new subject without knowing which claims out there are right and which are wrong, where all you have is a bunch of neatly packaged opinions.

So, if anything, the standards for refereed journals need tightening, not relaxing. The proposal here, like many others like it, goes in precisely the opposite direction. Again, it is not an empty exercise to put the proposal in context in order to see precisely what it is aimed at achieving.

Quoth Moshe: What I am missing in a lot of those brave new ideas is a mechanism for quality control

As Cameron says, there are issues with quality control in existing refereed journals. Some obvious mistakes get flagged during the refereeing process, but other mistakes slip through, either because the referee wasn't paying attention or because he wasn't an expert on that particular aspect. More than once I have read a paper in the leading journal of my subfield only to ask myself, "How the #$%@ did that get past the referees?" I've seen it in PRL, too, so top journals are not immune. And this is just honest error, to say nothing of deliberate fraud à la Jan-Hendrik Schön.

It does help, as Cameron says, if you know the people involved. There are some authors in my field who I can trust to have done the algebra right. There are others where I know not to believe anything they've derived until I have gone through the derivation myself and am satisfied with the approximations used (but their papers are otherwise mainstream enough that I cannot ignore them). I suspect it has always been thus (look at "The Seven Percent Solution" from Surely You're Joking, Mr. Feynman! for an example that dates from the late 1930s or early 1940s). You always have to exercise your skepticism: if something looks to you like it might be fishy, check it out before quoting the result, because there is a good chance that it is fishy.

By Eric Lund on 17 Sep 2008

Quality control in journals surely could work better, and finding ways to improve it is an interesting conversation to have. For example, it can be improved by changing the reward system, and there are already some early attempts (e.g., paying referees and editors for their effort, recognition of good referees through prizes, etc.).

Also, supplementing the journals with another layer of quality control, in the form of a network of opinions, is a good idea. That is probably most valuable to the experts in the field, who are exactly the set of people who don't need much help in forming a judgment, but I don't see how this can hurt. It is probably formalizing something that already exists anyhow.

What I object to is the idea that such a network could completely replace traditional peer review, even if that peer review does not work perfectly. Somehow every part of the scientific community, not just the people in the center of the field today, needs to know which papers have passed peer review and which have not. Even if that knowledge is not a complete guarantee that everything is correct, it is better than no information at all.

Moshe, I think we're actually at cross purposes here. I would never suggest this as a way of replacing traditional peer review, but as a way of supplementing it, even of improving it in the way you suggest. Arguably it is a way of doing peer review in a (potentially) more efficient way.

But really - what is the difference between 'three people have read and recommended this paper' and 'three people have refereed this paper', putting aside the fact that in many fields the 'standard' number of referees is very different (anywhere from one to five)? The difference lies in whether you prefer to trust the judgement of an anonymous editor in picking three anonymous referees (i.e. you trust the journal) or whether you believe the recommendations of three named people. For someone from outside the field, both of these have potential potholes to fall into.

Erich Segal at Yale (joined 1964) showed his mettle by writing Love Story (1970). Segal was not granted tenure - but his critics got it.

A discipline is degenerate when its practitioners engage in dialog rather than discovery and creation. US finance requires analysis and salvation rather than operation. It is deceased.

Why did Hester Prynne get the Scarlet A? "It was the highest grade they gave." Anything and its opposite are true in the Liberal Arts, the Fine Arts (Bach and John Cage), the social sciences, economics, religion... Science is poised at the brink of the easy path.

When science is a matter of opinion, somebody is lying and everybody else is complicit.

Cameron, we are in agreement then. Sorry for being unclear, I am supplementing what Chad wrote with my own knowledge of the proposal.

As for your second paragraph, the difference is the set of qualifications of these three anonymous people. When the system works as it should, I know that an editor at a reputable journal is likely to choose a qualified referee, even in cases where I cannot evaluate that choice myself. This is not unlike my trust in airplane mechanics: I do not know the specific ones that worked on the plane I am about to board, but I know that their training has resulted in a good track record. I'd be really nervous to board an airplane serviced by three random people who are convinced they have really good mechanical skills...

So where we differ is in our trust of the Civil Aviation Authority (or equivalent) :-) Which actually may be a function of field, and of the quality of journals and refereeing in that field, rather than anything else.

The question perhaps might be better framed as 'would you get on a plane where the three mechanics were recommended by many people you trusted?' The idea of attributing review makes new kinds of analysis possible. The experience of the web is that this is quite a good way of finding useful stuff and, more importantly, that it is quite fine-grained - different people will provide different types of content in different media depending on their preferences - and you can use this to help your processing.

But until this actually applies well to the peer-reviewed literature, it is so much pie in the sky anyway. Probably worth mentioning, though, that I do use Friendfeed quite a lot in this way - certain people I follow, I will check the papers they bookmark, because I know they are likely to be interesting.

My model for a completely democratic system, where equal space is given to the profound and to the absurd, is actually the Physics blogosphere itself. In my area of expertise it is fairly easy for me to distinguish the correct statements from the incorrect ones, not to mention the patently absurd ones. I observe though that this is not quite that easy even for my colleagues and students, let alone complete outsiders. I would not trust this model alone in producing good science, though it could be useful in supplementing existing models of science evaluation and distribution.

Like Coturnix, I already use CiteULike for organising and sharing papers that I'm interested in. It can automatically pull bibliographic information about the paper from websites, and automatically generates BibTeX entries for each paper. I've found many interesting papers through their tagging system and through seeing what my 'neighbors' there read.

This sounds rather similar to the idea of "overlay" journals, where a journal exists primarily as an institutional version of what you're talking about: providing links to "interesting" papers, which reside in a public repository like the arXiv. The difference is that rather than person X scanning through the archive and saying, "Hey, that looks interesting", you have an editor (and/or editorial board) + submission + peer review system, and possibly copy-editing as well.

See, e.g., the RIOJA project; there was even a recent one-day meeting devoted to the idea.

In principle, journals provide an advantage over individuals in that there are mechanisms for institutional continuity: new articles are published (or approved or at least pointed to) more or less continuously, whereas relying on a single person's efforts means the continuity is contingent on their not going on vacation, getting distracted by other things, drifting into crackpottery, leaving the field, or dying suddenly.

The idea here is that anybody who cared to would set up a "journal," consisting of a collection of links to papers they found worthwhile.

This "idea" dates back 14 years and was the "Commentary Layer" at its most primitive.

Of course, a list of "interesting" or "useful" papers is not terribly helpful (think of it as one bit of information/paper). Hence the "commentary" aspect of the "Commentary Layer."

To Moshe, I would point out that achieving a more reliable literature is not without its costs (the obvious ones -- that the task of refereeing becomes a much more arduous one -- and the less obvious ones -- the "penumbra" of results that don't, or haven't yet, made it through the process). And you need to weigh those costs against the benefits.

The Mathematicians have struck a very different balance from that prevalent in theoretical physics. But I'm not sure you'd prefer their situation to ours.

Thanks Jacques, that is an interesting conversation to have some time. I suspect that you are right and I would not like the situation in math; an over-developed oral tradition favors the insiders as much as an unreliable literature does. In both cases the accessible information has to be supplemented with a layer of interpretation not available to just anyone, including, I suspect, the next generation of scientists. Maybe one element of a solution is a hierarchy of publications: it may be that only a small number of publications needs to be perfected, so one high-prestige, very rigorous journal may improve the situation quite a bit.

BTW, it is interesting how this discussion correlates with one's political philosophy. To me, a lot of the arguments for dispensing with the journal system, along with its set of standards, are basically the libertarian agenda applied to this particular topic.