Very quick note on things that are used but not cited

In most discussions of usage as a metric of scholarly impact, the example of the clinician comes up. The idea is that medical articles might be heavily used, and indeed have a huge impact on practice (saving lives), yet go uncited. Other fields also have practitioners who pull from the literature but do not contribute to it.


So it was with interest that I read this new article by the MacRoberts:

MacRoberts, M., & MacRoberts, B. (2009). Problems of citation analysis: A study of uncited and seldom-cited influences Journal of the American Society for Information Science and Technology, 61, 1-12 DOI: 10.1002/asi.21228

The article provides great examples from the field of biogeography (the study of the distribution of plants and animals over an area, they tell me). It is typical for researchers in this field, when writing articles in peer-reviewed journals, not to cite their data sources. Some of those data sources are floras - "a list of plant species known to occur within a region of interest." A flora might be a book, a government report, notes in a journal, or some other sort of gray literature.

The authors give a couple of examples - one is their own article - and show how these articles are uncited according to Web of Science, yet heavily used and well incorporated into databases, books, and pamphlets. As they say, the purpose of the article has been achieved.

Not only are these things used directly, but once their contents are incorporated into databases, the databases then go on to serve perhaps thousands of people. The sources are often listed in notes or in an appendix, but with no formal citation. This content that is sucked up into books or databases provides no traceable usage link - as far as I can tell. If we can't even determine the impact of the article - a container for an idea - how can we understand or evaluate the impact of the author and his or her knowledge contributions?

It's been noted elsewhere (see the article for citations and discussion) that the largest influence on a scientist often comes from informal communication partners - colleagues and co-workers. That influence is not cited, either. So if we are truly interested in evaluating scientists on their influence, we have to come up with new methods that look at how their ideas have been used - it is not enough to look at article citations or downloads.

(as an aside: the authors quote a website that bemoans the difficulty of locating floras. Certainly if they were cited, that would help!)


Hi Christina

I am working on a semantic approach to identifying what I call "lineages of works," which is not exactly like influence, but clearly has some strong affinities with it. There are serious but not insurmountable conceptual problems that need to be solved, or at least rendered harmless. Implementation of the idea will require expert human readers, but I think it can eventually be taken up by machine readers. My purpose is to aid researchers in resource discovery. I thought you and your readers might be interested in this. Check back now and again at the web site listed above (the front page is a blog) for updates on my progress. If only the blogosphere weren't so interesting and I could get back to writing, I would have finished the essay about this that I am now writing.... sigh.

I guess the key question is whether uncited influence, within a field, is correlated with cited influence. It seems likely that the most influential papers in a given field will be used in both cited and uncited ways. So even if there's a fair bit of uncited use, the metric is still somewhat useful, so long as enough papers get cited at all.

Of course, if you try to do comparisons of papers or researchers from different fields, that breaks down. Perhaps that could be corrected for somehow?

Their point - which I didn't convey that clearly - is that in this particular field, there is no correlation. That is, uncited documents are heavily used. There are lots of ways to correct citation metrics from field to field, with varying success.
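To make that last point concrete, here is a toy sketch (not from the MacRoberts article, and with made-up numbers and paper names) of field normalization, one common way citation counts are adjusted so they can be compared across fields. It also shows exactly where that kind of correction runs out of road for the cases in the post.

from statistics import mean

# Toy illustration only - the fields, papers, and counts are invented.
# Field normalization: divide a paper's citation count by the average
# count in its field, so scores are roughly comparable across fields.
papers = {
    "biogeography": {"flora_A": 0, "flora_B": 2, "synthesis_paper": 15},
    "cell_biology": {"paper_X": 40, "paper_Y": 120, "paper_Z": 80},
}

for field, counts in papers.items():
    field_avg = mean(counts.values())  # average citations in this field
    for paper, cites in counts.items():
        # A score of 1.0 means "cited about as much as the field average."
        normalized = cites / field_avg if field_avg else 0.0
        print(f"{field:>13}  {paper:>15}: raw={cites:>3}  normalized={normalized:.2f}")

Note that the heavily used but uncited flora scores zero no matter what: field normalization can rescale citation counts, but it cannot recover the uncited influence the MacRoberts describe.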