Admissions and Hiring: Faculty Are Students in a Funhouse Mirror

In one of those Information Supercollider moments, two very different articles crossed in my social media feeds, and suddenly seemed to be related. The first was this New York Post piece by a college essay consultant:

Finally, after 15 or so years of parents managing every variable, there comes the time when a student is expected to do something all by herself: fill out the actual application. Write an essay in her own voice.

By this point, our coddled child has no faith in her own words at all. Her own ideas and feelings, like a language she has not practiced, have fallen away.

Her parents wanted more than anything to protect her, to give her the world. Instead they’ve taken away her capacity to know it.

Faced with that blank page, the students panic. They freeze. Their entire lives have been pointed toward this one test of their worth: Who wouldn’t suffer writer’s block? The parents yell. Everyone sobs.

That’s when they called me.

The second was from Bee at Backreaction, with a thorough examination of the metrics used to measure scientific success and the rationale for them:

The reason for the widespread use of oversimplified measures is that they’ve become necessary. They stink, all right, but they’re the smallest evil among the options we presently have. They’re the least stinky option.

The world has changed and the scientific community with it. Two decades ago you'd apply for jobs by carrying letters to the post office, grateful for the sponge so you wouldn't have to lick all those stamps. Today you apply by uploading application documents within seconds all over the globe, and I'm not sure they still sell lickable stamps. This, together with increasing mobility and connectivity, has greatly inflated the number of places researchers apply to. And with that, the number of applications every place gets has skyrocketed.

Simplified measures are being used because it has become impossible to actually do the careful, individual assessment that everybody agrees would be optimal. And that has led me to think that instead of outright rejecting the idea of scientific measures, we have to accept them and improve them and make them useful to our needs, not to those of bean counters.

On one level, these are completely different sorts of stories, but in thinking about the two in close succession, it occurred to me that in some way, they're distorted reflections of each other. In both cases, a big part of the problem is, as Bee identifies, the vast oversupply of applicants relative to the number of open spots. The "Common Application" for colleges has had much the same effect as the ability to email job applications-- application numbers are way up at every college and university. Relatively crude measures like test scores thus end up playing a larger role in admissions than might be ideal, just in order to get through the huge pool of applicants.
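To make that filtering dynamic concrete, here's a minimal sketch (in Python, with entirely invented names, scores, and thresholds) of the two-stage screening a crude metric enables: a cheap numeric cutoff prunes the pool down to something a finite set of human readers can actually handle.

```python
# A minimal sketch (hypothetical names, scores, and thresholds) of the
# two-stage screening a crude metric makes possible: a cheap numeric
# cutoff prunes the pool to something human readers can actually handle.

def crude_screen(applicants, min_score=1300):
    """First pass: keep only applicants above a test-score cutoff."""
    return [a for a in applicants if a["test_score"] >= min_score]

def holistic_review(candidates, readers=5, files_per_reader=150):
    """Second pass: careful reading, capped by human capacity."""
    capacity = readers * files_per_reader
    # Only the first `capacity` files get a careful read; in practice,
    # the score cutoff just gets raised until the pile fits.
    return candidates[:capacity]

pool = [{"name": f"applicant{i}", "test_score": 1000 + (i % 600)}
        for i in range(8000)]
shortlist = holistic_review(crude_screen(pool))
print(len(pool), "applications ->", len(shortlist), "files actually read")
```

The point of the sketch is the mismatch between the two stages: the careful review has a hard capacity limit, so everything above that limit has to be absorbed by the crude filter.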

But it's even worse than that. An admissions officer here once remarked that we get dozens of applications from students who probably have no real interest in Union, but who throw an application our way anyway because it's just another click of a checkbox on the Common Application form. The problem the folks in admissions face is not just identifying the best students in the applicant pool, but the best students who are likely to actually come here.

That, in turn, is a problem driven by the use of oversimplified metrics-- the infamous US News rankings include the fraction of students admitted as part of their calculations, and the lower you can make that, the better. Offering admission to outstanding students who are overwhelmingly likely to turn you down actually hurts your ranking compared to offering admission to less outstanding students who are overwhelmingly likely to accept. (You occasionally hear stories of outstanding students who get rejected by their "safety schools" for this reason-- they were too good to plausibly want to attend, and thus denied admission to keep the selectivity numbers up. These may be apocryphal, though...) Which is part of why there are so many people reading files and letters so closely, trying to parse the essays crafted under the tutelage of professional consultants to find the students who are actually both good and interested.
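The incentive here is simple arithmetic: to fill a class of a given size, you have to admit class-size-divided-by-yield students, so admitting students who are unlikely to enroll drives the admit rate (the "selectivity" number) up. A back-of-the-envelope sketch, with made-up numbers:

```python
# Hedged back-of-the-envelope: how yield drives the admit-rate metric.
# All numbers are invented for illustration.

applications = 6000
class_size = 550

def admit_rate(yield_fraction):
    """Offers needed to fill the class, as a fraction of applications."""
    offers = class_size / yield_fraction
    return offers / applications

# Admit mostly students likely to enroll (high yield):
print(f"yield 50%: admit rate {admit_rate(0.50):.1%}")  # ~18.3%
# Admit stronger students who mostly go elsewhere (low yield):
print(f"yield 25%: admit rate {admit_rate(0.25):.1%}")  # ~36.7%
```

Same applicant pool, same class size, but the low-yield strategy roughly doubles the admit rate-- and it's the admit rate, not the quality of the admitted students, that feeds into the ranking.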

On the faculty side, meanwhile, Bee points out that there are ways to game the system of crude metrics that we have, and that having faculty deliberately target these metrics further distorts the whole process. And it occurred to me that a lot of this system-gaming, and even the advice given to faculty candidates, is really the moral equivalent of hiring professional application essay consultants. Padding out a CV with "minimum publishable units," or specifically targeting certain types of publications, is, like paying for advice on how to craft a compelling college essay, a way of buffing up an otherwise lackluster application in an attempt to get past relatively crude filters.
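As an illustration of how cheaply these metrics can be moved, here's a sketch (all citation counts invented) comparing one substantial paper to the same work sliced into "minimum publishable units":

```python
# Sketch of how cheaply crude metrics can be moved; all citation
# counts are invented. h-index: the largest h such that h papers
# have at least h citations each.

def h_index(citations):
    cites = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cites) if c >= i + 1)

# One substantial paper vs. the same work sliced into three
# "minimum publishable units" that split its citations:
consolidated = [90, 40, 12, 5]      # 4 papers
sliced = [30, 30, 30, 40, 12, 5]    # 6 papers, same total citations

print("papers: ", len(consolidated), "vs.", len(sliced))
print("h-index:", h_index(consolidated), "vs.", h_index(sliced))
```

Splitting the same citations across more papers raises both the raw publication count (4 to 6) and the h-index (4 to 5) without a single new result-- exactly the kind of gaming Bee is worried about.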

(I will admit a little bafflement at the idea of using single-author publications as a measure of success-- I come out of a field where single-author publications just do not happen. I can think of one remotely significant single-author experimental paper in the last twenty-odd years of AMO physics. But then, this measure is attributed to Lee Smolin, who's pretty bad about mistaking the tiny subfield of theoretical high-energy physics for the totality of physics, so...)

So, when it comes to crude metrics and trying to game them, I suspect we have more in common with our students than we might think. And, like Bee, I agree that we need new and better measures at all levels of academia-- standards that are simple enough to apply very broadly and efficiently, but that nevertheless are accurate enough to capture what we're really looking for. I especially want somebody else to develop these, because I haven't the foggiest idea how to go about it...

(While I'm mentioning the NYP thing, though, I'll note one aspect of the story that was a little disappointing to me-- I would've liked to know more about the student who framed the story. I have some sympathy for her plight as a general matter-- my college essay writing went through a bunch of iterations, and the final version came only after I trashed my parents' basement in frustration-- but the final essay sounds pretty interesting. I'm curious, though, whether the author thinks it actually reflected the student's real potential, justifying her admission, or if her acceptance was just the inevitable result of having her surname already on a campus building, and the essay they came up with merely passed the minimum bar needed to justify that.)


I've mentioned before that I do field interviews for my undergraduate alma mater, so I've seen parts of the undergraduate admissions sausage factory. But I don't live in the world of the author of that NY Post piece. I must resist the urge to slap the parents who feel compelled to hire a consultant to get their two-year-old into the "right" preschool, so they can go on to get the kid the Ivy League admission they feel is his birthright. People from around here do sometimes get into top-tier universities, if they are good enough, and, being middle to (occasionally) upper-middle class, they (and their parents) generally can't afford the $7,500 fee that author charges just for consulting on the essay.

But at least at my undergraduate alma mater, all applicants can choose to get the interview, and that has been known to make the difference. In the academic world, only the short list of 2-5 candidates gets interviewed, and I've heard of multiple cases where the department and the person hired proved to be a poor fit. We'll never know if somebody who didn't make the short list would have been a better fit.

You occasionally hear stories of outstanding students who get rejected by their “safety schools” for this reason– they were too good to plausibly want to attend, and thus denied admission to keep the selectivity numbers up. These may be apocryphal, though…

Most of the applicants I see apply to State U. as their safety school, and for most of them, it's a perfectly reasonable choice. Those students hardly ever get turned down (if they complete the application), because State U. isn't trying to play the US News rankings game. But I can see how this might be a problem if the safety school is a second- or third-tier SLAC.

By Eric Lund on 26 Aug 2013

Chad, I can't think of any particularly influential single-author experimental AMO papers. Which one do you have in mind?

By Thad Walker on 26 Aug 2013

I'm not sure it's really influential, but Shimizu had a single-author PRL about ten years ago on bouncing metastable neon atoms off a surface. It used purely attractive long-range van der Waals potentials, and the reflection was entirely a quantum effect. I noticed it because one of the xenon papers for my Ph.D. used the same long-range reflection off an attractive potential to get an estimate of the Penning ionization rate, and, of course, because I spent three months working in that lab.

It really was pretty much a single-author experiment, too. He had a grad student to provide an extra pair of hands, but nobody touched that apparatus if Shimizu-san wasn't in the room.
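(For anyone who wants the physics behind that exchange, here's a rough sketch using generic textbook forms for atom-surface quantum reflection-- standard results, not the specific numbers from Shimizu's paper. The attraction is van der Waals close to the surface and retarded farther out, and the reflection probability from that purely attractive tail approaches unity as the normal incident velocity goes to zero:)

```latex
% Generic textbook forms for atom-surface quantum reflection;
% standard results, not the specific fit from Shimizu's paper.
V(z) \simeq -\frac{C_3}{z^3} \quad \text{(short range: van der Waals)},
\qquad
V(z) \simeq -\frac{C_4}{z^4} \quad \text{(long range: retarded/Casimir-Polder)}

1 - |R| \;\propto\; k_\perp = \frac{M v_\perp}{\hbar}
\quad \text{as } v_\perp \to 0
```

The standard explanation is that the reflection happens in the region where the WKB approximation breaks down on the attractive tail, which is why it needs no repulsive barrier-- and why it only shows up for very slow atoms, like Shimizu's metastable neon.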