NSF Workshop on Scholarly Evaluation Metrics – Afternoon 3

Stream-of-consciousness notes from this meeting I attended in DC on Wednesday, December 16, 2009.

Final panel

Oren Beit-Arie (Ex Libris Group), Todd Carpenter (NISO), Lorcan Dempsey (OCLC), Tony Hey (Microsoft Research), Clifford Lynch (CNI), Don Waters (Andrew W. Mellon Foundation)

Introduction from Cliff Lynch - he gets requests for tenure reviews and takes these very seriously. One came with a whole bibliometric survey of the candidate's work - all of the citing papers, etc., about 40 pages. Things that were intended to provide insight are now used for evaluation.

Could we get access streams/patterns from higher-level routers on the net?

He's also intrigued by accesses of ADS via Google - a way to start characterizing impact beyond the scientific community.

We can't linearly rank articles or people - we don't have the resolving power.


Tony Hey (Microsoft) - expanding the toolkit for scientific assessment. Looking at webometrics using Google data: some schools rank higher than in other systems - Southampton comes out top in Europe - and it turns out these are the institutions with repositories holding lots of crawlable data. People use these metrics, even when they have big problems.

Do not underestimate the PR value of a simple and imperfect metric - even an imperfect metric can yield interesting results. But metrics do not capture everything (the humanities; EU funding as a tool for social engineering).

Todd Carpenter (NISO) - apples to apples: comparing measures. Identification - what are you identifying, and why? The citation structures we have were developed over decades; we're just at the beginning of understanding the impact of new informal communication on the web (blogs, Twitter).

Don Waters (Mellon) - practical vs. theoretical or philosophical issues. Some of this leads to a sort of cynicism - that you can find a measure to say whatever you want. Are the stories a metric tells significant? Is it predictive? Julia's theoretical questions are a good starting place - we need more theory.

Lorcan Dempsey (OCLC) - something he didn't hear mentioned: reputation management. This is clearly important whether you do it accidentally, do it purposefully, or ignore it. Academic search engine optimization is becoming important to research organizations and individuals. Also: gray literature. It was not findable in the past; now it's more findable than some of the formally published stuff. Web-scale, web-level, network-level activities. We talk about web 2.0 and distribution, but there has been a great concentrating effect - the rich get richer, etc. We did talk about interoperability (high recombinant potential), but it's easier if things are concentrated. More calls for unique author identifiers.

Oren Beit-Arie (Ex Libris) - we need more standards, and our architectures have to take the highly distributed world into account. How, and by whom? Libraries are important and do have a role - they are used to dealing with final outputs but have trouble dealing with non-final work like data sets. BX is their usage-based recommender system, an SFX add-on. They are trying to keep data now so that when they need it in the future they'll have it. You can't capture everything, but you need to know what's interesting.
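(An aside for context: a usage-based recommender like BX generally mines co-access patterns from link-resolver session logs - "people who used this article also used these." Here's a minimal sketch of that general technique in Python; the session format and function names are my own illustration, not BX's actual algorithm.)

```python
# Minimal sketch of a usage-based ("people who used X also used Y")
# recommender - an illustration of the general technique only.
from collections import Counter, defaultdict

def build_cooccurrence(sessions):
    """sessions: an iterable of sets of article IDs accessed together."""
    co = defaultdict(Counter)
    for session in sessions:
        for a in session:
            for b in session:
                if a != b:
                    co[a][b] += 1  # count each co-access pair
    return co

def recommend(co, article_id, k=5):
    """Return the top-k articles most often co-accessed with article_id."""
    return [other for other, _ in co[article_id].most_common(k)]

# Toy usage with made-up session data (hypothetical article IDs):
sessions = [{"A", "B", "C"}, {"A", "B"}, {"B", "C", "D"}]
co = build_cooccurrence(sessions)
print(recommend(co, "B"))  # e.g. ['A', 'C', 'D']
```

The point Oren was making follows from this shape: you have to log the usage data as it happens, because the co-access counts can't be reconstructed later.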
