So, what do we make of the NRC Rankings?
What drives the different rankings, and what are the issues and surprises?
First, the R-rankings really are reputational - they are a bit more elaborate than just asking straight up, but what they reduce to is direct evaluation by respondents, without reference to quantitative indicators.
Doug at nanoscale puts it well - the S-rankings are generally the better indicators...
A new index, W = R - S, has been named the "hard work" index.
BTW - you can't take (R+S)/2 and call it a rank - you need to rank the resulting scores and count off the ordinal positions, because the averaging compresses the scores.
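A minimal sketch of the point, with made-up rank positions for four hypothetical programs: the average of two rankings is a compressed score, and you have to sort and count off ordinal positions to get a rank back.

```python
# Toy illustration with invented rank positions: averaging two
# rankings gives compressed scores, not ranks; re-rank the averaged
# scores to recover ordinal positions.
r_rank = {"A": 1, "B": 2, "C": 3, "D": 4}   # hypothetical R positions
s_rank = {"A": 1, "B": 4, "C": 2, "D": 3}   # hypothetical S positions

avg = {p: (r_rank[p] + s_rank[p]) / 2 for p in r_rank}
# avg = {'A': 1.0, 'B': 3.0, 'C': 2.5, 'D': 3.5} -- non-integer,
# compressed values; these are scores, not ranks.

# Count off ordinal positions by sorting on the averaged score:
combined = {p: i + 1 for i, p in enumerate(sorted(avg, key=avg.get))}
print(combined)  # {'A': 1, 'C': 2, 'B': 3, 'D': 4}
```

Note that C overtakes B once the scores are re-ranked, which the raw averages alone don't show as a rank.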
Either way, senior academic administrators should really not bemoan in public that the methodology is "complicated" and that they don't understand it.
It is anti-intellectual to do so, and the methodology really isn't that complicated...
The Chronicle app is great for head-to-head comparisons between programs at different universities; it gives at-a-glance positioning within the spread of the field, and shows which metrics drive the rankings in the field.
PhDs.org has a nice interface tool - it lets you either view the straight NRC rankings or set your own weights, with a default set of popular weights on the front page.
If, for example, you decide to discard the NRC weightings, you can pick from the proffered options, or pick the obscure metrics from the "more options" lists.
So, pick "Research Productivity", "Student Outcomes", and "Student Resources", and set all three to "5" - important, and equally important.
Rank range: Institution, Program
1-7: Pennsylvania State University-Main Campus, Astronomy and Astrophysics
1-11: University of California-Berkeley, Astrophysics
1-9: University of Chicago, Astronomy and Astrophysics
1-11: Princeton University, Astrophysical Science
1-10: University of Washington-Seattle Campus, Astronomy
2-17: Ohio State University-Main Campus, Astronomy
3-19: University of Arizona, Astronomy
3-19: Cornell University, Astronomy and Space Sciences
5-20: California Institute of Technology, Astrophysics
4-23: Columbia University in the City of New York, Astronomy
4-22: University of Wisconsin-Madison, Astronomy
5-23: Harvard University, Astronomy
6-27: Massachusetts Institute of Technology, Astrophysics and Astronomy
7-26: University of Michigan-Ann Arbor, Astronomy and Astrophysics
7-26: Michigan State University, Astrophysics and Astronomy
7-27: The University of Texas at Austin, Astronomy
8-29: Indiana University-Bloomington, Astronomy
8-26: University of California-Santa Cruz, Astronomy and Astrophysics
8-28: University of Arizona, Planetary Sciences
9-29: University of Virginia-Main Campus, Astronomy
10-29: Johns Hopkins University, Astronomy and Astrophysics
11-29: University of Maryland-College Park, Astronomy
11-30: University of California-Los Angeles, Astronomy
11-30: Yale University, Astronomy
15-29: Georgia State University, Astronomy
15-30: Boston University, Astronomy
14-31: New Mexico State University-Main Campus, Astronomy
17-31: University of Colorado at Boulder, Astrophysical and Planetary Sciences
15-31: University of Minnesota-Twin Cities, Astrophysics
20-32: University of Hawaii at Manoa, Astronomy
26-32: University of Florida, Astronomy
28-33: University of Illinois at Urbana-Champaign, Astronomy
32-33: University of California-Los Angeles, Space Physics
Can't argue with that.
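For the curious, the kind of computation such a weighting tool performs can be sketched in a few lines. This is a hypothetical illustration only - the metric names, the 0-1 normalization, and the program values are all invented, not the actual PhDs.org internals.

```python
# Hypothetical sketch of weighted ranking: each program has normalized
# metric values (invented here), and the user assigns integer weights.
programs = {
    "Program X": {"research": 0.9, "outcomes": 0.7, "resources": 0.8},
    "Program Y": {"research": 0.6, "outcomes": 0.9, "resources": 0.5},
    "Program Z": {"research": 0.4, "outcomes": 0.5, "resources": 0.9},
}
weights = {"research": 5, "outcomes": 5, "resources": 5}  # all "5": equally important

def weighted_score(metrics, weights):
    """Weighted mean of normalized metric values."""
    total = sum(weights.values())
    return sum(metrics[m] * w for m, w in weights.items()) / total

scores = {p: weighted_score(m, weights) for p, m in programs.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # ['Program X', 'Program Y', 'Program Z']
```

Zeroing some weights and maxing others, as described below, just changes the `weights` dict - which is why the same data can produce wildly different orderings.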
Actually, by zeroing all the weights and then putting the two worst metrics at maximum and equal weight, I managed to get our (Penn State) rank down to 25 - and, no, I won't tell you which those were.
We do however see where our weaknesses are, and can discuss them.
How did we get top rank, going from 21/33 to 3/33 (or 6/33, or 9/33, depending on how you weight)?
Ok, we were a bit lucky, but we had also earned it.
The primary driver of PSU's ranking was a high publication rate per faculty member.
This is in part because we had two major projects at peak productivity within the time window the NRC used, with major roles in Chandra (ACIS) and Sloan (QSO/AGN).
We also had essentially no idle faculty - which happens; people have lives, and bad things happen - mostly people come back, but sometimes they are out for a long time, and that is part of the way things are.
Being large helps, even when you correct for per capita.
I looked at some bibliometric studies - A&G has a convenient one from 2007 of UK astronomy, which I can't access right now -
but the distribution of per capita publications per year is clearly a truncated power law: it is of course left-truncated at zero, since it is hard to have negative publications, but the right side looks like a flat power law - maybe N(p) ∝ p^-1 or so, certainly flatter than p^-2
- so, yes, the mean is divergent and ill-defined in the limit.
So, having many faculty helps in two ways: it builds out the curve to the right, pushing the mean out on average, and it makes you more likely to have instantaneously productive faculty members in any given time frame. Faculty productivity ebbs and flows; few are consistently high-productivity over an entire career, and then mostly if they become administrative heads of large research groups.
A small department can do very well if they hit a time frame when they have some productive members, but they are much more vulnerable to Poisson-noise fluctuations to the low end than a large department.
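The size argument can be sketched with a quick simulation. This is illustrative only - not the A&G data - and assumes, per the power-law discussion above, a heavy-tailed Pareto distribution of per-capita rates with a divergent true mean; the department sizes and threshold are arbitrary.

```python
# Illustrative simulation: draw per-capita publication rates from a
# Pareto distribution with alpha = 0.9 (divergent mean, matching a
# power law flatter than -2) and compare how often small vs large
# departments land at the low end in a given time window.
import random

random.seed(42)  # reproducible draws

def dept_mean(n_faculty, alpha=0.9):
    """Per-capita mean publication rate of one simulated department."""
    rates = [random.paretovariate(alpha) for _ in range(n_faculty)]
    return sum(rates) / n_faculty

trials = 2000
small = [dept_mean(5) for _ in range(trials)]    # 5-faculty departments
large = [dept_mean(50) for _ in range(trials)]   # 50-faculty departments

# Fraction of simulated departments stuck at a low per-capita rate:
frac_low_small = sum(m < 2.0 for m in small) / trials
frac_low_large = sum(m < 2.0 for m in large) / trials
print(frac_low_small, frac_low_large)
```

The small departments dip below the threshold far more often: a large department almost always contains at least one faculty member out on the tail, which props up the per-capita mean.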
The other thing that helps is being in a large field - having high mean citation rate requires lots of other people publishing in that sub-field to cite you.
BTW - I was very surprised at the low per capita publication rate in the other natural sciences. Physics is higher, but that is clearly due to the growth of fields with very large author lists (a trend also present in astro, but not gone as far). Astro also has higher mean citation rates than physics; I don't know if this is because we are so data-driven, or just because we actually read the literature - driven to do so by pedantic older professors, of course.
I suspect the "super-publishers" in large fields like biology are further evidence of the power-law distribution of publication rates - with more total faculty, they are pushing the curve further out, and so more very-high-rate individuals are randomly present.
Having large facilities helps, but not as much as you'd think - and it can hurt, if a lot of your faculty are tied up building a facility and haven't hit peak (or any) output phase yet.
That clearly hurt some institutions.
Demographic cycling is also a factor, and some institutions can fairly point out that they have had substantial turnover since 2005 - most can't; the time scale for turnover is 8-20 years.
I had expected the Big State Research Universities to do proportionately better - there really has been a push in the last few years, and some did jump - but the elite private universities held their ground surprisingly well.
Some of the privates do have enormous resources, and arguably are not doing as well per dollar. Big State does tend to do "more with less" - we are efficient, with disproportionately good outcomes per input - but that is not quite enough to overtake completely.
So, what now?
Reinforce strengths (and make us all over indulged and lazy)?
Put resources into currently weaker programs to revive them?
Well, there will be some of each - mostly dependent on local economics and federal trends.
Some places will get resources poured in to revive programs - based on personal relationships with administrators, a history, an unwillingness to discard sunk costs, or because there is a perception of opportunity to improve by a large fraction for modest cost.
There will be some cuts - program closures - and the NRC rankings will be used to rationalise these.
In some cases this will be because deans and provosts don't want the responsibility, or can't push the academic political issue of closure without the external lever.
In some cases the deans/provosts simply hadn't had the data thrust in their faces before, and will now take the rankings seriously.
In some cases it will be arbitrary decisions using arbitrary quantitative metrics, because there are no good choices to be made.
Does this all make a difference, other than in the internal academic world?
Well, I know prospective graduate students who looked carefully at NRC rankings, I also know faculty who guide their students and postdocs with an eye on the NRC rankings.
You also ought to look not just at the rankings but at their derivatives, and likely future derivatives, especially the sign - it is better to be in a program on an upswing than one in decline - but on a 5-7 year timescale there are factors at work that grad students don't know about and have no control over.
Still, some data is usually better than none, and for all its flaws, the NRC is measuring something, which is correlating with something else, that somewhere may measure something we do care about.
Fair enough. I don't really know how accurate these rankings are with respect to various fields, but anyone in the humanities, at least, should read this one: http://leiterreports.typepad.com/
What about using normalized citations, to balance the contamination from gigantic projects in which most people contribute next to nothing except the first few authors?
Hi Steinn - I'm glad you found the phds.org site useful. One other thing you might find useful is that we have histograms of the rankings the NRC generated in their simulations, so you can get a better sense of the distributions. Here, for example, is the PSU astronomy department: http://graduate-school.phds.org/university/psu/program/ranking/astronom…
We're continuing to add new functionality to the site, so stay tuned.
The barcharts are very good at illustrating the ranking spread.