Continuing stream-of-consciousness notes from this workshop held in DC, Wednesday, December 16, 2009.
Alexis-Michel Mugabushaka (European Research Council) - intertwined research funding structures at the national and European levels.
At the national level there are two main funding modes - institutional (block research funding of higher ed institutions) and competitive.
Orgs structured at the European level - like CERN or EMBL. Joint research funding via ESF or bi-/multilateral agreements. Research Framework Programmes. ERC.
ERC Scientific Council - 22 eminent scientists. Executive Agency (where he works). Two programs: Starting Grants and Advanced Grants.
Evaluation activities of national funding agencies. ESF - 80 orgs, 20 countries. A venue to support members in sharing info on evaluation methods. It's not whether or not to fund basic science - that's not on the table for them - they're interested in how best to fund it: what schemes, and what types of research in which areas.
challenge 1 - high-quality data: get data without overburdening researchers - allow long-term (post-project) monitoring of effects
challenge 2 - ?
recent trends - metrics, broader contextual questions, other evaluation questions (societal impact)
wish list - things mentioned earlier; awards and recognition (best paper prizes); better metrics for patents (measuring patents pushes universities to patent, but that costs a lot of money, pulling funds from other areas, and license income doesn't always cover it); how to reward communicating to a non-academic audience (we overvalue publications relative to other contributions to the community - tools, software, teaching, public communication, etc.).
audience q: also need to add "micro-attribution" - how much data is being put in public repositories, and how is that data reused?
Jan Velterop - Concept Web Alliance - semantic web - forging quality out of quantity. Why publish: record, transfer, credit.
"scientific egosystem" "acknowledgement economy" - all of the standard metrics are popularity contests - not really about quality at all. What does quality look like? An edifice with an ivory tower and a foundation of hard packed rubble (no citations, but support the entire edifice). What if your article has no citations but is heavily used in teaching? It's better quality if it's got more formulas? It's better if you can't understand it?
What about brownie points for commenters on PLoS ONE or F1000?
Nano-publications - triples with attributes like author. What must a publication be? Citable, valuable, conveys knowledge; full length - no. One number for each concept (in any language). Link to a concept wiki for attributes/definitions - need a browser. So maybe we can have metrics for first use and times reused. Brownie points for annotation. Whole datasets can be triplized. Can be cited. Benefits authors and publishers. See http://bit.ly/6B1XqK (reminds me of a project from Catherine Blake where she's doing text mining for knowledge claims to map them from article to article - but hers seems more sophisticated. Also, Jan doesn't mention if they are linking these concept wiki pages to community accepted naming schemes for genes, chemicals, stars, etc)
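To make the nano-publication idea concrete, here's a minimal sketch of one assertion expressed as a subject-predicate-object triple, with author and source attributes attached, using Python's rdflib. This is my own illustration, not the Concept Web Alliance's actual vocabulary: the example.org namespaces, the concept numbers, and the attr: properties are all hypothetical stand-ins, and real nanopublication proposals separate the assertion and its provenance into named graphs rather than one flat graph.

```python
# Minimal sketch of a nano-publication: one assertion triple plus
# attribute triples (author, source article). All URIs, concept numbers,
# and property names are hypothetical stand-ins for illustration only.
from rdflib import Graph, Literal, Namespace, URIRef

CONCEPT = Namespace("http://example.org/concept/")  # numbered, language-neutral concepts
NP = Namespace("http://example.org/nanopub/")       # identifiers for nano-publications
ATTR = Namespace("http://example.org/attr/")        # attribute properties

g = Graph()
g.bind("concept", CONCEPT)
g.bind("np", NP)
g.bind("attr", ATTR)

# The assertion itself: each term is a concept number, resolvable via a
# concept wiki for its definition and attributes in any language.
g.add((CONCEPT["4512"], CONCEPT["17"], CONCEPT["889"]))

# Attributes attached to the nano-publication as a whole - this is what
# makes the triple citable and lets first use / reuse be tracked.
pub = NP["1"]
g.add((pub, ATTR["author"], Literal("J. Researcher")))
g.add((pub, ATTR["extractedFrom"], URIRef("http://example.org/article/42")))

print(g.serialize(format="turtle"))
```

Given a store of such triples, the "metrics for first use and times reused" Jan mentions would reduce to counting how many nano-publications reference a given concept number - a simple SPARQL COUNT over the store.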
"Also, Jan doesn't mention if they are linking these concept wiki pages to community accepted naming schemes for genes, chemicals, stars, etc)"
Yep. We do. Extensively, and we try to be comprehensive in that regard. Anyway, you can add accepted names should you miss any.
Jan