Could Open Science Resolve the Researcher-Data Producer Conflict?

Last week, I wrote about the problems facing genomics and the concept of ownership of data. While I am sympathetic to researchers' career needs under the current system, I don't think we can, in good conscience, let that get in the way of rapid data release, especially in applied areas.

I, and others, cast this as a conflict between individual researchers and the larger community, but there was a third party missing: universities. To the extent that universities care about--and desperately need--grants, altering how funders determine what counts as a successful outcome is critical. That's why I found this interview with Cameron Neylon about 'open science' relevant (italics mine):

I think there are two different main classes of reason why funders support science. One is to build up knowledge, and the other is to support the generation of economic and social outcomes from research and innovation...

Our current metrics and policies largely optimise for prestige rather than knowledge building or social outcomes. On the assumption that most funders would choose one of these two outcomes as their mission I would say that the simple things to do are to actively measure and ask fundees to report on these things.

For knowledge building: Ask about, and measure the use and re-use of research outputs. Has data been re-used, is software being incorporated into other projects, are papers being cited and woven tightly into the networks of influence that we can now start to measure with more sophisticated analysis tools?

The re-use of data is the key point. If researchers can be rewarded--and acknowledged successful completion of grants seems to be a pretty good standard--based on the broad usefulness of the data they generated, this would go a long way toward reducing the potential for conflicts between genome centers and individual researchers.
