A repost of a November 28, 2008 post:
The other night, at the meeting of the Science Communicators of North Carolina, the highlight of the event was a Skype conversation with Chris Brodie, who is currently in Norway on a Fulbright fellowship, trying to help the scientists and science journalists there become more effective at communicating Norwegian science both at home and internationally.
Some of the things Chris said were surprising, others not so much. In my mind, I was comparing what he said to what I learned in April, when I went back to Serbia and talked to some scientists there. It is interesting how cultural differences and historical contingencies shape both the science and the science communication in a country (apparently, science is much better in Norway, science journalism much better in Serbia).
But one thing that struck me most and got my gears working was when Chris mentioned the population size of Norway. It is 4,644,457 (2008 estimate). Serbia is a little bigger, but not substantially so, at 7,365,507 (2008 estimate - the first one without Kosovo; earlier estimates of around 10 million included Kosovo). Compare this to the state of North Carolina, with 9,061,032 (2007 estimate).
Now think - what proportion of the population of any country are active research scientists? While North Carolina, being part of a larger entity, the USA, can afford not to do some stuff and do more of other stuff, small countries like Norway and Serbia have to do everything they can themselves, if nothing else then for security reasons: agriculture, various kinds of industry, tourism, defense, etc. Thus, North Carolina probably has a much larger percentage of its population working as scientists (due to RTP, several large research universities, and a lot of biotech, electronics and pharmaceutical industry) than an independent small country can afford (neither Norway nor Serbia is a member of the EU).
So, let's say that each of these smaller countries has a few thousand active research scientists. They can potentially all know each other. Those in the same field certainly all know each other. Furthermore, they more than just know each other - they are all each other's mentors, students, lab-buddies, classmates, etc., as such a country is likely to have only one major university in the capital and a few small universities in other large cities. It is all very... incestuous.
With such a small number of scientists, they are going to be a weak lobby. Thus, the chances of founding new universities and institutes, expanding existing departments, and opening up new research/teaching positions in academia are close to zero. This means that the only way to become a professor is to wait for your mentor to retire and try to take his/her place. This may take decades! In the meantime, you are forever a postdoc, or some kind of associate researcher, etc.
If each professor, over the course of a career, produces about 18 PhDs (more or less, depending on the discipline) who are indoctrinated in the idea that the academic path is the only True Path for a scientist, the competition for those few, rare positions will be immense. And they all know each other - they are all either best friends or bitterest enemies.
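As a rough back-of-the-envelope sketch of those odds (the 18-PhDs-per-career figure is just the illustrative number above, and the assumption that each retirement opens exactly one replacement slot is mine):

```python
# Back-of-the-envelope sketch of the competition described above.
# Assumptions (illustrative only): each retiring professor is replaced by
# exactly one new hire, and each professor trains ~18 PhDs over a career.
phds_per_professor = 18
openings_per_retirement = 1

odds_of_a_faculty_job = openings_per_retirement / phds_per_professor
print(f"Roughly {odds_of_a_faculty_job:.0%} of PhDs can inherit a position")
# -> Roughly 6% of PhDs can inherit a position; the other ~17 wait, or leave.
```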
This means that, once there is a job opening, no matter who gets the job in the end, the others are going to complain about nepotism - after all, the person who got the job (as well as each candidate who did not) personally knows all the committee members who made the final decision.
In such an environment, there is absolutely no way that the decision-making can be even the tiniest bit subjective. If there is a little loophole that allows the committee to evaluate the candidates on subjective measures (a kick-ass recommendation letter, a kick-ass research proposal, a kick-ass teaching philosophy statement, kick-ass student evaluations, prizes, contributions to the popularization of science, stardom of some kind, being a member of a minority group, etc.), all hell will break loose in the end!
So, in small scientific communities, it is imperative that job and promotion decisions be made using "objective" (or seemingly objective) measures - some set of numbers which can be used to rank the candidates, so that the candidate ranked #1 automatically gets the job and nobody can complain. The decision can be (and probably sometimes is) done by a computer. This is why small countries have stiflingly formalized criteria for advancement through the ranks. And all of those are based on the numbers of papers published in journals with - you guessed it - particular ranges of Impact Factors!
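A minimal sketch of what such a computer-executable, purely formal ranking might look like; the Impact Factor bands, point values, and candidate data are all made up for illustration, not taken from any real country's rules:

```python
# Hypothetical "objective" ranking: count each candidate's papers by
# Impact Factor band and add up points. Bands and weights are invented.
def formal_score(impact_factors):
    """impact_factors: list of journal Impact Factors, one per paper."""
    score = 0.0
    for impact_factor in impact_factors:
        if impact_factor >= 10:
            score += 5      # "top" journals
        elif impact_factor >= 5:
            score += 3
        elif impact_factor >= 2:
            score += 1
    return score

candidates = {
    "Candidate A": [12.3, 3.1, 2.4, 1.0],
    "Candidate B": [6.2, 6.0, 5.5, 2.1],
}

# Rank #1 automatically "gets the job" - no committee judgement involved.
ranking = sorted(candidates, key=lambda name: formal_score(candidates[name]),
                 reverse=True)
for rank, name in enumerate(ranking, start=1):
    print(rank, name, formal_score(candidates[name]))
```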
Now, of course, there are still many universities and departments in the USA in which, due to the bureaucratic leanings of the administration, the Impact Factor is still used as a relevant piece of information in hiring and promotion practices. But in general, it is so much easier in the USA, with its enormous number of scientists who do not know each other, to switch to subjective measures, or to combine them with experimental new kinds of objective (or seemingly objective) measures. From what I have seen, most job committees are much more interested in getting a person who will, both temperamentally and due to research interests, fit well with the department than in their IFs. The publication record is just a small first hurdle to pass - something that probably 200 candidates for the position would easily pass anyway.
So, I expect that the situation will change pretty quickly in the USA. Once Harvard made Open Access their rule, everyone followed. Once Harvard prohibits the use of IF in hiring decisions, the others will follow suit as well.
But this puts small countries in a difficult situation. They need to use the hard numbers in order to prevent bloody tribal feuds within the small and incestuous scientific communities. A number of new formulae have been proposed and others are in development. I doubt that there will be one winning measure that will replace the horrendously flawed Impact Factor, but I expect that a set of numbers will be used - some numbers derived from citations of one's individual papers, others from numbers of online visits or downloads, others from media/blog coverage, others from quantified student teaching evaluations, etc. The hiring committees will have to look at a series of numbers instead of just one. And experiments with these new numbers will probably first be done in the USA and, only once shown to be better than IF, transported to smaller countries.
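One example of such a citation-derived, per-person number is the h-index (see the links below): the largest h such that the author has h papers with at least h citations each. A minimal sketch, with made-up citation counts:

```python
# Minimal sketch of the h-index: the largest h such that the author has
# h papers with at least h citations each.
def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

# Made-up citation counts, for illustration only.
print(h_index([10, 8, 5, 4, 3, 1]))  # -> 4
```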
Related:
h-index
Nature article on the H-index
Achievement index climbs the ranks
The 'h-index': An Objective Mismeasure?
Why the h-index is little use
Is g-index better than h-index? An exploratory study at the individual level
Calculating the H-Index - Quadsearch & H-View Visual
The use and misuse of bibliometric indices in evaluating scholarly performance
Citation counting, citation ranking, and h-index of human-computer interaction researchers: A comparison of Scopus and Web of Science
H-Index Analysis
The Eigenfactor™ Metrics
Pubmed, impact factors, sorting and FriendFeed
Publish or Perish
Articles by Latin American Authors in Prestigious Journals Have Fewer Citations
Promise and Pitfalls of Extending Google's PageRank Algorithm to Citation Networks
Neutralizing the Impact Factor Culture
The Misused Impact Factor
Paper -- How Do We Measure Use of Scientific Journals? A Note on Research Methodologies
Escape from the impact factor
Comparison of SCImago journal rank indicator with journal impact factor
Emerging Alternatives to the Impact Factor
Why are open access supporters defending the impact factor?
Differences in impact factor across fields and over time
A possible way out of the impact-factor game
Comparison of Journal Citation Reports and Scopus Impact Factors for Ecology and Environmental Sciences Journals
Watching the Wrong Things?
Impact factor receives yet another blow
Having an impact (factor)
In(s) and Out(s) of Academia
The Impact Factor Folly
The Impact Factor Revolution: A Manifesto
Is Impact Factor An Accurate Measure Of Research Quality?
Another Impact Factor Metric - W-index
Bibliometrics as a research assessment tool: impact beyond the impact factor
Turning web traffic into citations
Effectiveness of Journal Ranking Schemes as a Tool for Locating Information
Characteristics Associated with Citation Rate of the Medical Literature
Relationship between Quality and Editorial Leadership of Biomedical Research Journals: A Comparative Study of Italian and UK Journals
Inflated Impact Factors? The True Impact of Evolutionary Papers in Non-Evolutionary Journals
Sharing Detailed Research Data Is Associated with Increased Citation Rate
Measures of Impact