Why does Impact Factor persist most strongly in smaller countries?

The other night, at the meeting of the Science Communicators of North Carolina, the highlight was a Skype conversation with Chris Brodie, who is currently in Norway on a Fulbright, trying to help the scientists and science journalists there become more effective at communicating Norwegian science to their constituents and internationally.

Some of the things Chris said were surprising, others not as much. In my mind, I was comparing what he said to what I learned back in April when I went back to Serbia and talked to some scientists there. It is interesting how cultural differences and historical contingencies shape both the science and the science communication in a country (apparently, science is much better in Norway, science journalism much better in Serbia).

But one thing that struck me most and got my gears working was when Chris mentioned the population size of Norway. It is 4,644,457 (2008 estimate). Serbia is a little bigger, but not by much, with 7,365,507 (2008 estimate - the first one without Kosovo - earlier estimates of around 10 million included Kosovo). Compare this to the state of North Carolina, with 9,061,032 (2007 estimate).

Now think - what proportion of the population of any country are active research scientists? While North Carolina, being part of a larger entity, the USA, can afford to skip some activities and concentrate on others, small countries like Norway and Serbia have to do everything themselves, if nothing else for security reasons: agriculture, various kinds of industry, tourism, defense, etc. Thus, North Carolina probably has a much larger percentage of its population working as scientists (due to RTP, several large research universities, and a lot of biotech, electronics and pharmaceutical industry) than an independent small country can afford (neither Norway nor Serbia is a member of the EU).

So, let's say that each of these smaller countries has a few thousand active research scientists. They can potentially all know each other. Those in the same field certainly all know each other. Furthermore, they more than just know each other - they are all each other's mentors, students, lab-buddies, classmates, etc., since such a country is likely to have only one major university in the capital and a few small universities in other large cities. It is all very... incestuous.

With such a small number of scientists, they are going to be a weak lobby. Thus, the chances of founding new universities and institutes, expanding existing departments and opening up new research/teaching positions in academia are close to zero. This means that the only way to become a professor is to wait for your mentor to retire and try to take his/her place. This may take decades! In the meantime, you are forever a postdoc or some kind of associate researcher.

If each professor, over the course of a career, produces about 18 PhDs (more or less, depending on the discipline) who are indoctrinated in the idea that the academic path is the only True Path for a scientist, the competition for those few, rare positions will be immense. And they all know each other - they are all either best friends or bitterest enemies.

This means that, once there is a job opening, no matter who gets the job in the end, the others are going to complain about nepotism - after all, the person who got the job (as well as each candidate who did not) personally knows all the committee members who made the final decision.

In such an environment, there is absolutely no way that the decision-making can be allowed to be even the tiniest bit subjective. If there is a little loophole that allows the committee to evaluate a candidate on subjective measures (a kick-ass recommendation letter, a kick-ass research proposal, a kick-ass teaching philosophy statement, kick-ass student evaluations, prizes, contributions to the popularization of science, stardom of some kind, being a member of a minority group, etc.), all hell will break loose in the end!

So, in small scientific communities, it is imperative that job and promotion decisions be made using "objective" (or seemingly objective) measures - some set of numbers which can be used to rank the candidates so that the candidate with Rank #1 automatically gets the job and nobody can complain. The decision can be (and probably sometimes is) done by a computer. This is why small countries have stiflingly formalized criteria for advancement through the ranks. And all of those are based on the number of papers published in journals with - you guessed it - particular ranges of Impact Factors!
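To make that concrete, here is a minimal, purely hypothetical sketch of how such a ranking works: each paper earns points according to the IF bracket of the journal it appeared in, the points are summed, and the top-ranked candidate gets the position. The brackets, point values and candidates are invented for illustration only; they do not come from any actual country's rulebook.

```python
def paper_points(impact_factor):
    """Points for a single paper, based on its journal's IF (hypothetical brackets)."""
    if impact_factor >= 10:
        return 8
    if impact_factor >= 5:
        return 5
    if impact_factor >= 2:
        return 3
    if impact_factor > 0:
        return 1
    return 0

def rank_candidates(candidates):
    """candidates: name -> list of journal IFs for that person's papers."""
    totals = {name: sum(paper_points(jif) for jif in ifs)
              for name, ifs in candidates.items()}
    # Highest total first; a real rulebook would also have to spell out tie-breaking.
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

applicants = {
    "Candidate A": [12.1, 3.4, 3.4, 1.2],
    "Candidate B": [6.0, 5.5, 2.1, 2.0, 1.8],
    "Candidate C": [2.5, 2.4, 2.2, 0.9],
}

for name, points in rank_candidates(applicants):
    print(name, points)
```

Note how, in this toy example, Candidate B with a larger pile of middling papers outranks Candidate A, who has the single highest-impact paper - exactly the kind of outcome a committee that actually read the work might not endorse, but which nobody can appeal.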

Now, of course, there are still many universities and departments in the USA in which, due to the bureaucratic leanings of the administration, Impact Factor is still used as a relevant piece of information in hiring and promotion practices. But in general, it is so much easier in the USA, with its enormous number of scientists who do not know each other, to switch to subjective measures, or to combine them with experimental new objective (or seemingly objective) measures. From what I have seen, most job committees are much more interested in getting a person who will, both temperamentally and due to research interests, fit well with the department than in their IFs. The publication record is just a small first hurdle to pass - something that probably 200 candidates for the position would easily pass anyway.

So, I expect that the situation will change pretty quickly in the USA. Once Harvard made Open Access its rule, everyone followed. Once Harvard prohibits the use of IF in hiring decisions, the others will follow suit as well.

But this puts small countries in a difficult situation. They need to use the hard numbers in order to prevent bloody tribal feuds within their small and incestuous scientific communities. A number of new formulae have been proposed and others are in development. I doubt that there will be one winning measure that will replace the horrendously flawed Impact Factor, but I expect that a set of numbers will be used - some numbers derived from citations of one's individual papers, others from numbers of online visits or downloads, others from media/blog coverage, others from quantified student teaching evaluations, etc. The hiring committees will have to look at a series of numbers instead of just one. And experiments with these new numbers will probably first be done in the USA and, only once shown to be better than IF, exported to smaller countries.
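As a rough illustration of that "series of numbers" idea, here is a toy sketch in which each candidate carries several article- and person-level indicators that a committee (or a rulebook) weights and combines. The indicator names, weights and values are all made up for illustration; the only point is that the single journal-level IF gets replaced by a small dashboard of numbers.

```python
# Hypothetical weights; a real policy would have to argue for (and fight over) these.
WEIGHTS = {
    "citations_per_paper": 0.4,   # derived from citations to one's individual papers
    "downloads_per_paper": 0.2,   # online visits/downloads
    "media_mentions":      0.1,   # media/blog coverage
    "teaching_score":      0.3,   # quantified student teaching evaluations
}

def composite_score(indicators):
    """Weighted sum of indicators, each assumed to be pre-scaled to a 0-100 range."""
    return sum(weight * indicators.get(name, 0) for name, weight in WEIGHTS.items())

candidates = {
    "Candidate A": {"citations_per_paper": 80, "downloads_per_paper": 60,
                    "media_mentions": 20, "teaching_score": 70},
    "Candidate B": {"citations_per_paper": 65, "downloads_per_paper": 90,
                    "media_mentions": 55, "teaching_score": 85},
}

for name, indicators in sorted(candidates.items(),
                               key=lambda item: composite_score(item[1]),
                               reverse=True):
    print(name, round(composite_score(indicators), 1))
```

Of course, the moment the weights themselves become negotiable, some of the "objectivity" that small communities need is already gone - which is exactly the tension described above.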

Related:

h-index
Nature article on the H-index
Achievement index climbs the ranks
The 'h-index': An Objective Mismeasure?
Why the h-index is little use
Is g-index better than h-index? An exploratory study at the individual level
Calculating the H-Index - Quadsearch & H-View Visual
The use and misuse of bibliometric indices in evaluating scholarly performance
Citation counting, citation ranking, and h-index of human-computer interaction researchers: A comparison of Scopus and Web of Science
H-Index Analysis
The Eigenfactor™ Metrics
Pubmed, impact factors, sorting and FriendFeed
Publish or Perish
Articles by Latin American Authors in Prestigious Journals Have Fewer Citations
Promise and Pitfalls of Extending Google's PageRank Algorithm to Citation Networks
Neutralizing the Impact Factor Culture
The Misused Impact Factor
Paper -- How Do We Measure Use of Scientific Journals? A Note on Research Methodologies
Escape from the impact factor
Comparison of SCImago journal rank indicator with journal impact factor
Emerging Alternatives to the Impact Factor
Why are open access supporters defending the impact factor?
Differences in impact factor across fields and over time
A possible way out of the impact-factor game
Comparison of Journal Citation Reports and Scopus Impact Factors for Ecology and Environmental Sciences Journals
Watching the Wrong Things?
Impact factor receives yet another blow
Having an impact (factor)
In(s) and Out(s) of Academia
The Impact Factor Folly
The Impact Factor Revolution: A Manifesto
Is Impact Factor An Accurate Measure Of Research Quality?
Another Impact Factor Metric - W-index
Bibliometrics as a research assessment tool: impact beyond the impact factor
Turning web traffic into citations
Effectiveness of Journal Ranking Schemes as a Tool for Locating Information
Characteristics Associated with Citation Rate of the Medical Literature
Relationship between Quality and Editorial Leadership of Biomedical Research Journals: A Comparative Study of Italian and UK Journals
Inflated Impact Factors? The True Impact of Evolutionary Papers in Non-Evolutionary Journals
Sharing Detailed Research Data Is Associated with Increased Citation Rate
Measures of Impact

Why do you assume that graduate students and post-docs in Norway will all be trained in Norway? That's not true in any other small country. The best students will be getting their training in the same places as the best students in America and Great Britain.

Good question. This will differ between countries, but most will get their training at home. Perhaps a brief postdoc abroad, but that is reserved for the few who can afford it or who are really star potential. It would be interesting if someone did a comparative analysis of different countries - where people go to grad school, where they do postdocs, where they look for jobs. I am assuming that there would be a lot of cross-pollination between EU-member countries, or between Canada and the USA, but other countries would be much more closed and self-centered/self-sufficient.

I also omitted, on purpose, countries like Germany, UK, France, Canada, China, Russia, Japan, Australia, Italy... countries which have at least 2 out of 3: large population, large investment in science, long tradition of science. Some of those may be more like the USA and quick to move forward, while others will be more like a small country, moving more slowly. I do not have sufficient insight into those communities to say anything intelligent.

The bottom line for impact factor - or whatever other quantitative metric might replace or supplement it - is that people are lazy and time is highly rate-limiting in the professional lives of academics. People are always gonna rely more on something fast and easy to obtain - like a journal impact factor - than on something difficult and time-consuming to obtain - like a developed opinion about the solidity and importance of a particular published paper.

Anything that is gonna replace the impact factor of the journals in which scientists publish as a metric for comparative assessment of scientific productivity is gonna have to be as braindead easy to deploy as impact factor.

True that, Comrade PhysioProf. But if we have to, I've always been in favor of a metric that measures how much scientists get cited. That's more important to me personally than where a scientist gets published--which is a considerably more subjective process than citation (although of course politics are involved in both processes).
And, without endorsing anything in particular - I recently discovered the "h-index" that comes up with an author search on scopus.com, which is an interesting concept (I don't know if it was in use before Scopus).

I added a bunch of related articles to the bottom of my post, several of which are critical of the H-index and demonstrate why it is not as good as initially thought.
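For readers meeting it here for the first time: an author's h-index is the largest number h such that h of their papers have each been cited at least h times. A minimal sketch of the calculation, using made-up citation counts rather than real Scopus data:

```python
def h_index(citation_counts):
    """Return the h-index for a list of per-paper citation counts."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

# Four papers have at least 4 citations each, but there are not five with at least 5.
print(h_index([25, 8, 5, 4, 3, 1, 0]))  # -> 4
```

The appeal is obvious - one number per person, computable from any citation database - which is also, as several of the articles listed above argue, its main weakness.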

(1) "People are always gonna rely on something fast and easy to obtain--like a journal impact factor" -- I find that slightly pessimistic, on the pessism to pragmatism/realism to optimism spectrum.

(2) There is a hierarchy of publication venues. That is part of the point of having "impact parameters" for journals. There are obvious differences in the value to your career of:

(a) handwritten or cheaply xeroxed rants handed out in supermarket parking lots;

(b) random blog or chat room comments;

(c) notes handed out in your classroom (if you are a professor or TA);

(d) an article in the local newspaper;

(e) a presentation or poster at a typical conference;

(f) a paper in the proceedings of a major international conference;

(g) a letter to the editor or short note in a professional journal;

(h) a research review in a professional journal;

(i) original research presented in a professional journal;

(j) monograph or book from a major academic publisher;

(k) one's Fields Medal or Nobel Prize acceptance speech, with citations to the work that enabled it.

The problem is in mapping that to the world of PLOS, arXiv, important edited math web sites (such as MathWorld or OEIS = The Online Encyclopedia of Integer Sequences), and superior science/math blogs (such as Shtetl-Optimized, Terry Tao's, the n-Category Cafe, Cosmic Variance, the Seed-umbrella'd Science Blogs) on the one hand; and, on the other hand, Wikipedia (which seems to range over most of the quality spectrum from wonderful to abysmal). For that matter, OEIS has a distinguished major CS/Math editor-in-chief (Dr. Neil J. A. Sloane) and a stellar set of associate editors, yet also acts blogular in that within its roughly 140,000 searchable web pages there are numerous sequences of no evident value to me, which appear unedited (sometimes with keywords such as "uned" for unedited, or "less" for less interesting to editors, or "probation" for will-be-deleted-unless-an-editor-wants-to-edit-it).

Do these two hierarchies eventually collapse into one? The system of publishing, which includes academic gems and teenagers' solipsistic social network pages (my mood is as shown in the cartoon image, I'm listening to the non-broadcast hard-core new Eminem song, 7th grade sucks, and my boyfriend won't talk to me), is so far from equilibrium that it is essentially impossible to forecast the Web 3.0 chaotic attractor.

Dr. Vector says:

"So we have what you might call the Impact Factor Paradox: getting a real handle on something as slippery as the value of someone's scientific contributions is inevitably going to be time-consuming and hard; metrics that allegedly measure that value are going to fall along a spectrum from "time-consuming but accurate" to "quick, easy, and horribly flawed"; in a system where time is the limiting resource, there will always be a sort of grim undertow toward the quick-'n-greasy metrics."

This is a very interesting post. Do you have any evidence that the decisions made in "smaller" countries are motivated more by IF than by "other factors"? Is there any reason you think this effect outweighs the needs of high-in-demand universities/research groups in larger countries that use IF in more public contests?

I was floored when I went to Serbia and saw the Table of detailed criteria (how many papers in journals of how high an IF) required for promotion at each step of the academic ladder. I have been (as a grad student rep) on some job search committees here in the USA and IFs were never mentioned or taken into consideration - we were evaluating people in their entirety, carefully reading their statements, letters of recommendation, publications, etc. I have since heard that a number of other countries and institutions have similar writ-in-stone Tables.

Excuse my French - but this post has a large component of BS. You take anecdotal evidence from two small countries that you happen to have some superficial knowledge of, and leverage those anecdotes with some unfounded generalizations. The end result has no connection to any reality I know. I live and work in Israel (a small country, population ~6 million, 7 top-tier universities). In our system, people who want to compete for an academic position must travel abroad for a postdoc, and are usually competitive only if said postdoc resulted in a significant contribution published in high-profile journals. People usually also switch fields for their postdoc. This requirement for people to go abroad and succeed elsewhere is reasonably effective at preventing inbreeding (although not 100%), with the caveat that some will never come back. I spent four years abroad for my postdoc, and found a job in Israel in a department and institute where nobody knew me, or had even heard of me, before I started publishing from my postdoc. All the young faculty we have hired in the past 10 years had 4-6 year postdocs in the USA or Europe, and most of them did their PhDs at other universities. NONE of them were trained at any stage in our department. Nobody ever asked me for impact factor lists when I was hired, and we have never looked at impact factors for new faculty hirings since I joined the department. We look for scientific excellence, and the main criterion for judging that, apart from the candidate's CV, publications and interview, is soliciting letters of reference from leaders in the candidate's field.
The European countries I know where impact factor reigns supreme are both large and small. If I had to hazard a guess, I would venture that obsessive reliance on IFs is more a matter of history and culture than of size, but that would also be a dangerous generalization.

I think it's a valid hypothesis, but culture and the group dynamics among the individuals making the rules at an institution probably play a much larger role than the size of a country. I know one of the largest university hospitals in Germany, the Charité, asks its applicants to count their "impact points" not only for all publications, but for the best five overall, the best five as first author, the best five as last author, and other ridiculous crap. I refuse to apply for a position if the people at the institution are so blatantly brainless.
Germany is not small in Europe and still the IF idiocy is rampant.

Maybe it still persists because of mental idleness (an unwillingness to read and take into consideration other documents or data); they still count - at least in Serbia - IF in 99% of cases. Of course, don't expect the IF databases to be accurate: the people who work on IF data (indexers, librarians, IT staff) have odd criteria (or no criteria) for including your paper. For example, I have several published articles and papers that are not in SciIndex for unknown reasons (I don't know how often they update the database); the last such paper dates from 2004. But when I apply for something at foreign institutions, they accept my list of publications, along with my CV, proposal, statement, bibliography, etc.
The Serbian IF-based scientific community is not reliable, in my view.

Journal Impact Factor = a measure of the citations to science and social science journals

JIF is regarded as "the science of rating scientists and their research"

What does JIF have to do with "the science of rating scientists and their research"?

This is another glaring, sad example of the prostituting, by the science establishment guild of the 20th century Technology Culture, of the terms science, scientist and research.

I am asked if I have a better suggestion on how to rate scientists and research.

I do not pretend to have any suggestion on how to scientifically rate scientists and research right now.

The present science establishment is, IMO, widely-deeply cancered with the malignant 20th century Technology Culture, of which public rating is one symptom. Tackling only this one single symptom would be a very difficult task.

My most probably hopeless approach is to stir the stagnant water and initiate evolutionary changes that would eventually re-place science, scientists and research where Western culture departed from Enlightenment circa 100 years ago, when it dealt with the essence of nature and life evolutions, and elected to become a pierced-ear slave (Ex. 21:6) to the Technology Culture.

IMO it is vitally important, for charting the course of our society now, to learn and understand, to analyse and assess, with a scientism perspective, the evolution and collapse of the Technology Culture and the implications, within it, of the bare survival of basic classical science, and of the further comprehension of our place and fate in the universe.

Respectfully suggesting,

Dov Henis
(Comments From The 22nd Century)
Updated Life's Manifest May 2009
http://www.physforum.com/index.php?showtopic=14988&st=495&#entry412704
