There are many ways to rank a program, including its reputation, its performance, and more subtle quantitative indicators, some of which are mutually inconsistent. Rankings are also generally lagging and imperfect indicators of future performance: they are vulnerable to demographics and to individual star performers who may not be there in the future, or who may fail to live up to their reputation.
Yet, considering the stakes, rankings are endlessly mulled over, weighed, and hashed out: disputed as storied programs slip, and celebrated as outsiders surge up the ranks.
No, not some silly game rankings that nobody cares about, now.
Serious stuff: Astronomy Departmental Rankings
The granddaddy of departmental rankings is, of course, the National Research Council decadal ranking exercise, which was last published in 1995.
The ~~2005~~ ~~2007~~ 2009 rankings will be published Real Soon Now, and are eagerly anticipated as people rush to see who went up and who went down.
The NRC ranking is a big, serious exercise, and the weighting assigned to different factors is critical in determining the final rankings. Particularly important is the "reputation score": the rank assigned to departments by senior research scientists and university administrators elsewhere. The reputation rank is quite stable and somewhat self-reinforcing, but it is also a severely lagging indicator, more so than most, as it relies on perceptions formed over many years, often strongly biased toward the early years of senior scientists' careers.
Another strong correlate of rank is size - bigger departments tend to be more highly ranked, and there is some convincing analysis showing that, at least for some rankings, the score per faculty member is approximately constant.
However, particularly in astronomy, access to facilities and private resources is critical. If an astronomy department does not have direct access to a major observing facility, it is hard for it to break into the top ranks. Other in-house resources, like endowments, also matter. Money buys a lot.
The astronomy departmental rankings in 1995 are:
- 1 Caltech 4.91
- 2 Princeton 4.79
- 3 Cal Berkeley 4.65
- 4 Harvard 4.49
- 5 Chicago 4.36
- 6 Cal Santa Cruz 4.31
- 7 Arizona 4.10
- 8 MIT 4.00
- 9 Cornell 3.98
- 10 Texas 3.65
- 11 Hawaii Manoa 3.60
- 12 Colorado 3.54
- 13 Illinois 3.53
- 14 Wisconsin 3.46
- 15 Yale 3.31
- 16 UCLA 3.27
- 17 Virginia 3.23
- 18 Columbia 3.20
- 19 Maryland 3.07
- 20 Massachusetts 3.04
- 21 Penn State 3.00
- 22 Stanford 2.96
- 23 Ohio State 2.91
- 24 Minnesota 2.89
- 25 Michigan 2.65
- 26 SUNY Stony Brook 2.58
- 27 Boston University 2.40
- 28 Indiana 2.16
- 29 LSU 2.06
- 30 Iowa State 2.03
- 31 Florida 1.98
- 32 New Mexico State 1.85
- 33 Georgia State 1.81
Not bad: my alma mater is #1, where it should be, and the rest of the rankings look broadly reasonable, though you could quibble about some individual placements.
Now Dr Anne Kinney of NASA (GSFC) has published a ranking based on the normalised h-index, h(m), due to Molinari and Molinari.
The Science Impact of Astronomy PhD Granting Departments in the United States
The Hirsch index, h, is a popular measure right now of the impact of individual faculty research. Dr Kinney uses the Molinari index,

h(m) = h / N^0.4

where N is the number of publications, computed over all publications by tenure-line faculty at 36 astronomy departments.
The publications considered are those in the astronomy and astrophysics journals, not including general science journals like Nature and Science. Publications between 1993 and 2002 are counted, so this is a lagging indicator covering the decade after the data for the 1995 NRC rankings were gathered.
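For concreteness, here is a minimal sketch of the computation in Python; the function names and the citation counts are mine, invented for illustration, and the only thing taken from the paper is the h(m) = h/N^0.4 definition.

```python
def h_index(citations):
    """Hirsch index h: the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

def molinari_index(citations):
    """Molinari index h(m) = h / N^0.4, where N is the total number of papers."""
    n = len(citations)
    return h_index(citations) / n ** 0.4 if n else 0.0

# Made-up citation counts for a hypothetical department's 1993-2002 papers.
dept = [120, 85, 60, 44, 30, 22, 15, 9, 7, 3, 2, 1, 0, 0]
print(h_index(dept))                   # 8
print(round(molinari_index(dept), 2))  # 8 / 14**0.4 ≈ 2.78
```

The division by N^0.4 is the whole point: raw h tends to grow with the size of the group, and the Molinari normalisation is designed to take that bulk advantage back out.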
Dr Kinney further considers the rankings when researchers at associated institutions are included, including non-tenure-line research staff; a lot of universities have associated research centers.
Somewhat surprisingly, the research centers are dilutive: they increase N and h, but decrease h(m), so "pure" departments tend to move up in relative terms when associated institutions are included.
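To see how that works with made-up numbers (purely illustrative, not taken from the paper): a department whose tenure-line faculty have h = 40 from N = 500 papers gets h(m) = 40/500^0.4 ≈ 3.3; fold in a research center that bumps h to 50 but N to 1500, and h(m) = 50/1500^0.4 ≈ 2.7, lower despite the higher h, because N grows much faster than h does.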
Considering only tenure-line faculty and ranking on h(m), the top quartile is:
- 1. Caltech
- 2. UC Santa Cruz
- 3. Princeton
- 4. Harvard
- 5. U Colorado, Boulder
- 6. SUNY Stony Brook
- 7. Johns Hopkins
- 8. Penn State
- 9. U Michigan, Ann Arbor
So far so good.
Including associated research institutions, the rankings become:
- 1. UC Santa Cruz
- 2. Princeton
- 3. Johns Hopkins
- 4. Penn State
- 5. SUNY Stony Brook
- 6. U Michigan, Ann Arbor
- 7. New Mexico State
- 8. UMass, Amherst
- 9. U Virginia
Now, that is change we can believe in...
This is of course only a single indicator of rank, but we like it, and the folks at UCSC should like it also.
Princeton probably doesn't care either way.
Heh, so I just calculated the "impact index" for the faculty and senior scientists in the Astro dept at Case Western, and I get h(m)=7.64 over the same time period. That would put us #1 on that list, baby!
So, prospective students, those schools listed above might make good safety schools, but if you really want to study in a high powered research group, Case Astronomy is the place to be!
Chris
(who has tongue planted firmly in cheek here, b/c he doesn't actually believe that these kinds of ranking systems are good measures of quality output, but will nonetheless use the number above to needle all his friends at those universities on Anne's list...)
Phew.
I thought for a minute you were going to crow about that Other Ranking system for games which are not all that important after all...
Yup, small departments with active faculty game well on this ranking scheme.
Anne used the NRC + 3 group of universities; I'm not sure if the NRC had a size cutoff for considering astro departments, or if they just went with "does it have a separate name".
"I thought for a minute you were going to crow about that Other Ranking system for games which are not all that important after all..."
football is dead to me this season.
"...not sure if the NRC had a size cutoff for considering astro departments, or if they just went with "does it have a separate name"."
We are a separate department, so I think it's a size cutoff -- we're small.
The original NRC survey did have a size cutoff. UW missed being in that particular survey because of a one-year downward fluctuation in the number of graduating students -- we enroll an average of 4 students a year, and that particular year only 1 student graduated.
I cannot tell you the endless fall-out that has resulted. People go to that list for all sorts of things, and we're not on it, and so get overlooked.
As an example, Anne did not include us in her recent paper, in spite of the fact that we're much bigger than many of the programs in her analysis. Our department comes out in the top 5 by her metrics, but as head of graduate admissions I live in fear of undergrads perusing her paper and choosing not to apply to UW due to our absence. So yes, while the rankings are a bit silly, and probably are only useful at the quartile level, students and their advisors will use them anyways, and I'd rather be ranked than not.
Damn 1995 NRC report!
The danger of giving physicists (or astronomers) a formula is that they will use it. There was a professor in my grad department who argued that the only objective way to do grad admissions was to admit in rank order of score on the physics GRE.
Yes, it's the only objective way - and it's also possibly the stupidest way you could choose to do admissions.
Astronomy is a small field with a small number of elite departments with a lot of resources. A problem in such a system is that the elite may become self-perpetuating simply because they are the name departments. Like the Ivy universities before the 1960s expansion of US higher education. Chris and Julianne's points show an example of the bad tendencies that can result.
More on my ranking as inherited from Caltech, where I started as a Physics major, switched to Astronomy, and ended up with a double B.S. in Math and English, with letters of recommendation from the Executive Officer of Math (Gary Lorden) and the VP/Provost Steve Koonin (who certified that I knew more than essentially anyone in the world who had only an M.S. in Astronomy or Physics).
My contract as Adjunct Prof of Astronomy was not renewed at Cypress College, where the Dean dismissed Steve Koonin's letters as "only one man's opinion."
Similarly, the fact that my Math Adjunct contract was not renewed at Woodbury University, while the a**hole with an Ed.D., Rao Chekuri Nageswar, is still Chairman and was promoted twice, is MOSTLY (to be charitable) because Caltech, U. Edinburgh (where my Physics Professor wife got her first 2 degrees), and U. New South Wales (where my wife got her Physics Ph.D. and did her postdoc) are so many light years academically beyond Woodbury.
I was not just angry that they insulted me, but that they insulted all my colleagues, Nobel laureates and teenagers alike...
My wife has (at my urging) read up on the "Sokal Hoax" and agrees that Woodbury faculty don't know an Atom from Adam, or a Proton from a Pronoun. Earth to College: Physics and Mathematics are NOT mere social constructs that you can deconstruct with Critical Theory.
As I posted at n-Category Cafe:
Some universities give cash bonus per publication; Re: The Case of M. S. El Naschie
In my experience, some colleges and corporations give a cash bonus per publication or conference presentation, on top of travel and per diem for approved conferences. If the organization has neither someone qualified to evaluate these publications nor an outside evaluator, then it is incentivizing an assembly line of crackpottery.
At one unnamed university, where I was for 5 semesters a highly-rated Adjunct Professor of Mathematics and my wife is still a Professor of Physics, the Dean of the College of Arts and Sciences had a total of one (1) refereed publication, on which she was the junior grad student, the other authors being a Psych prof and a more senior grad student.
The Dean, before she hired her lover as Assistant Dean and later her husband as Adjunct Prof. of Statistics for Social Sciences, twice promoted a nearly illiterate Chair of Physics and Math, who (again) had only 1 refereed publication, on which he was the junior author (the 2 senior authors have since had dozens of legitimate publications that Google Scholar finds).
I'm still steamed that this Chair, whose Ed.D. thesis I've read (and which is disproved each time he teaches a class badly), and who followed the Dean's urging to cancel my contract, had a lavish trip to Morocco for a nonsense conference where he was the token North American (albeit he came from India). His abstract at that conference is all made-up terminology and anecdotal evidence with no valid quantitative analysis:
Rao Chekuri Nageswar
http://www.icpe2007.org/
OP18-B
Student-reasoning on the temperature dependence of the buoyant force: pre- and post experiment
Nageswar Rao Chekuri
W******* University, 7500 Glenoaks Blvd, B******, CA 91510-7846, USA
According to the resources model of thinking, how and when an individual activates the elements of knowledge is important. Reasoning for a phenomenon is generated from these activated knowledge elements and contains abstract reasoning statements called r-prims. An individual uses these statements in explaining the physical phenomenon. This paper explores reasoning structures of the architecture and their choice of r-prims on the temperature dependence of the buoyant force pre- and post experiment.
Analysis of the data of thirty students shows eight categories of reasoning structures. The reasoning structures of the students who possess locally coherent knowledge change from pre- to post experiment, indicating that they may be in the process of making or strengthening connections between the knowledge elements. Furthermore, these students possess the required knowledge elements to perform the tasks but they do not seem to activate the elements in the right context, which confirms the resources model of thinking. The students who demonstrate local coherence do not provide valid reasoning for the temperature dependence of the buoyant force. These students activate out of context elements, do not activate all the required elements or choose inappropriate r-prims. The r-prims these students (with local coherence) choose to explain relations between various physical quantities after the experiment are mostly different from those they choose before the experiment. One student from this group prefers normal reasoning to physics reasoning even though the latter is correct. The students who demonstrate global coherence provide valid reasoning for the temperature dependence, activate right resources in right time and their choice of r-prims are the same before and after the experiment. A skills based instructional strategy is also discussed.
One tenured professor whom I ran this past commented: "Unbelievable, unsavory, ungainly, garbage!"
But this cannot be detected by the faculty and administrators. I like the Senior VP of Academic Affairs, who has a solid PhD in English Lit from Columbia. But he and the professors with degrees in American Studies and Critical Theory and the like are all falling for a less-clever version of the Sokal Hoax.
I do enjoy teaching Math and Science in high schools, where I am a big fish in a small pond. But I still resent people who write bullpuckey and convert that, through lack of standards, into being big fish in medium ponds. And last week, the Chair covered up the mercury spill in the lab that my wife also uses, and she had to evacuate for the clean-up, while he denies all knowledge of what happened in his lab while he taught.
I contend that lowering academic standards in this way is an actual physical danger to students. And why should an Ed.D. who writes papers like that be supervising my wife, who has actual refereed science publications and degrees from the University of Edinburgh and UNSW, Sydney, including a Ph.D. in Physics and post-doc work afterwards?
Posted by: Jonathan Vos Post on November 9, 2008 11:50 PM