Which is a better metric of faculty research performance, H or G?
I already pontificated about the Hirsch index: you rank your published papers by citation count, and the H-index is the largest number k such that you have k papers each with at least k citations.
It is an interesting measure: it grows monotonically with time, and it rewards having a number of "pretty good" papers rather than a small number of very highly cited ones.
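For concreteness, here is a minimal sketch in Python of how one might compute it from a list of citation counts (my own illustration, not anything from Hirsch's paper):

    def h_index(citations):
        """Largest k such that k papers each have at least k citations."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for k, c in enumerate(ranked, start=1):
            if c >= k:
                h = k
            else:
                break
        return h

    # e.g. five papers with 10, 8, 5, 4 and 3 cites give h = 4
    print(h_index([10, 8, 5, 4, 3]))  # 4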
I now learn there is also a "G index", apparently proposed by Leo Egghe to fix the problems with the H-index. See here, though it appears the paper has only seven cites to date...
It is the largest number k such that your top k papers, ranked by citation count, have at least k² citations between them.
G is, by construction, greater than or equal to H. In fact, G can be greater than the actual number of papers a person has written, in which case you pad the list with fictitious zero-citation papers.
G also gives more weight to a small number of highly cited papers.
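The same exercise for G, again just an illustrative sketch (assuming I have Egghe's definition right), with the fictitious zero-citation padding included:

    import math

    def g_index(citations):
        """Largest k such that the top k papers together have at least
        k**2 citations; pads with fictitious zero-citation papers,
        since g can exceed the actual number of papers."""
        ranked = sorted(citations, reverse=True)
        total = sum(ranked)
        # g can never exceed sqrt(total citations), so pad up to that length
        max_g = math.isqrt(total)
        ranked += [0] * max(0, max_g - len(ranked))
        g, cumulative = 0, 0
        for k, c in enumerate(ranked, start=1):
            cumulative += c
            if cumulative >= k * k:
                g = k
        return g

    # a single paper with 30 cites: h = 1, but g = 5 (since 25 <= 30 < 36),
    # courtesy of four fictitious zero-citation papers
    print(g_index([30]))  # 5

That single-paper example is the "weighs highly cited papers more heavily" effect in action: h = 1 but g = 5.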
I don't know, I think we have to get logarithms in here somehow... this algebraic stuff is just not enough for physicists.
I am reliably informed that there is also an "H-b" index, which applies the same construction to topics and sub-fields rather than to individuals.
Clearly, to maximise your index value, you want to use the H-b index to find the sub-field where H' is large...
As always, one truth is universal: you cannot get a lot of citations unless you're working in a field in which a lot of papers are written!
"you can not get a lot of citations unless you're working in a field in which a lot of papers are written!"
I'd assume this kind of ranking would be used only for discriminating between people working in the same subfield. If you were, say, evaluating applicants for a position, they'd all be from the same subfield, so the relative value would be relevant no matter what the absolute one is.
I'm not really convinced by the H-index; the dynamics don't feel "right". For instance, when evaluating two applicants with the same H-index, it doesn't matter if one applicant's first paper is just another nice but obscure journal paper with perhaps ten citations while the other's first-ranked paper made the cover of Nature. Or more generally, take two applicants with high H-indexes - H=40 and H=44, say - and try to compare them. Most likely, the actual citation distribution among the top ten papers or so is a much better indication of the quality of their body of work than the difference in H-index.
I was hoping for an erudite discussion of the relative merits of Enthalpy and the Gibbs Free Energy.
The cynical side of me wonders if, whenever a new system gets proposed, the proposer is likely to have a higher score under the new system than under the old.
Which is a better metric of faculty research performance, H or G?
No.
And we have a winner! Or two.
Actually, I think the Free Energy is a much better indicator. Anyway, "Enthalpy" just sounds silly.
Free the Faculty Energy Now!
PS: you'd think "H" would only be used to compare people in the same subfield at the same career point (since it increases monotonically with age and is bounded by one's total number of publications), but sadly no. It is a number; once it enters the system, it takes on a life of its own.
Hence the current fad for H' of course. It is a new improved number.
'Course, H' has units; I think we need time measured in months. I mean, if the field is really hot, that should be the relevant time scale.
Improved? As before, publishing the same things as everyone else in a 'hot' field (made 'hot' by others 5 years ago) ranks one higher than doing groundbreaking work in a new field, which takes longer to build up citations.
I'm surprised you would waste a post on these measures. They don't measure anything but ego and the size of your clique. That's not science.
It says right in the paper that it is "improved"!
The reason I waste a post on this is a memo that I received recently.
The point about these measures is that they are out there, being actively utilised by administrators and senior faculty for "objective" evaluations; since they provide a single numeric indicator, they are attractive. Yet most of the people affected by this do not know it, and have not explored the indices or what they say about themselves or others.
It is something that is out there and actively affecting a big part of my "audience", and I was prompted to say something now by an explicit piece of administrivia which emphasised the need to take note of it.
Plus if you know it is there, you can at least try to game it.
Steinn, sorry to appear overly grumpy about this, but I think that if faculty start to use their research direction to "game" a statistic rather than to maximize the benefit to the students of the future, they would be doing a disservice to the field. I think it's an important point to make. Administrators, if they are not aware (and we know that most of them are enlightened at some level, although busy schedules prevent them from acting on it and indeed tend to push them toward simple statistics), should be made aware. Things like this can end up being an impediment to choosing new but promising research directions, and that isn't good for science, IMO.
"I was hoping for an erudite discussion of the relative merits of Enthalpy and the Gibbs Free Energy."
Right here.
As for Janne's point:
Which is a better accomplishment? A paper of X citational value in Nature, or a paper with the same scientific impact in the Journal of South-Western New Jersey Quaternary Geology*?
I would argue that a paper that is so important that it forces people to read a journal that they've never heard of must be more important than one that was shoved down their throats in a high-profile, heavily press-released rag.
This is because the citations that it gets are more likely to be crucial, rather than gratuitous.
* Fictitious podunk journal.