The Inescapable Vagueness of Academic Hiring

Inside Higher Ed ran a piece yesterday from a Ph.D. student pleading for more useful data about job searching:

What we need are professional studies, not just anecdotal advice columns, about how hiring committees separate the frogs from the tadpoles. What was the average publication count of tenure-track hires by discipline? How did two Ph.D. graduates with the same references (a controlled variable) fare on the job market, and why? What percentage of tenure-track hires began with national conference interviews? These are testable unknowns, not divine mysteries.

From the age-old Jobtracks project (ended in 2001, archived here) to those 21st-century methods such as the American Historical Association’s jobs-to-Ph.D.s ratio report, many studies have examined the employment trajectory of Ph.D. students. Few, however, have cross-referenced the arc of tenure-track success with the qualifications of the students on the market. Instead, only two types of applicant data are typically deemed significant enough to gather in these and other job reports: (1) the prestige and affluence of their alma mater, and (2) their age, race or gender.

Thank you so much for the detailed information about all the things in my application that I can’t improve.

I have a lot of sympathy for this. As a profession, academia is awful about even collecting data on critical topics, let alone sharing it with people who might make use of it. This can be maddening when it comes time to apply for jobs, or to seek tenure or promotion. The criteria for hiring and promotion seem to be shrouded in mystery, and you can never seem to get a straight answer to simple questions.

Some of this is, like every irritating thing in modern life, the fault of lawyers. Criteria are vague because giving more specific values opens the door to lawsuits from people who meet one stated criterion, but are manifestly unsuitable for some other reason-- someone who has more publications than the stated threshold, say, but 75% of those are in vanity journals. If you try to get a little bit specific, you quickly find yourself having to plug stupid loopholes, with wildly multiplying criteria, until you need a degree in contract law to read the job ads.

(And if you don't believe there would be lawsuits over that kind of nonsense, well, I'd like to request political asylum in your world. I could tell you stories about idiotic things that people have threatened to sue over. That is, I could, if it wouldn't be wildly inappropriate for me to do so, so instead I'll just darkly mutter that there are stories to be told...)

The bigger problem, though, is that most of the stats that could readily be reported are noisy garbage, subject to wild misinterpretation. This is nicely demonstrated by the paragraphs where the author tries to highlight real data of the type he'd like to see:

However, secondary data analysis of other studies that have been generously made public can reveal clues that the job reports don’t care to. For example, a 2016 study that measured the publications and impact of STEM Ph.D. students happened to simultaneously measure the average number of their publications while in grad school, cross-referenced to their later hireability. The average number of publications for each future tenure-track hire was 0.5, a surprisingly low number that would likely be higher in humanities disciplines.

Another study from 2014 measured similar survey data with similar results, but it added that publishing rates among graduate students have been steadily increasing over time, while hiring rates have been steadily decreasing. That study placed the average number of publications around 0.8. It is clear that standards are shifting, but how much? And how do those standards vary by field?

Those numbers both registered as wildly implausible to me, so I looked up the papers, and they're both misrepresented here. The first study, showing half a publication per tenure-track hire, is restricted to Portugal, which the authors describe as "a developing higher education system" that is "still characterized by poorly qualified academics-- only 68% of academics in public universities hold a Ph.D." The number from the second study is a flat-out misquote-- the 0.8 publications per student figure is for publications by students who did not publish with their advisor as a co-author. Those who did co-author papers with their advisor, as is common practice in STEM fields, had 3.99 articles pre-Ph.D.; averaging them all together, you get 1.88 publications before the doctorate for all the students considered in the paper, more than twice the figure quoted.
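(If you're curious how 0.8 and 3.99 average out to 1.88, you can back out the implied split between the two groups from the reported numbers alone. Here's a minimal back-of-envelope sketch in Python; the fraction it computes is inferred from the averages, not a figure reported in the paper:

    # Reported averages: students who did NOT co-author with their advisor
    # had 0.8 pre-Ph.D. publications on average, those who did had 3.99,
    # and the overall average across all students was 1.88.
    solo_avg = 0.8
    coauthor_avg = 3.99
    overall_avg = 1.88

    # If p is the fraction of students who did not co-author with their
    # advisor, the overall average is a weighted mean:
    #   overall_avg = p * solo_avg + (1 - p) * coauthor_avg
    # Solving for p:
    p = (coauthor_avg - overall_avg) / (coauthor_avg - solo_avg)
    print(f"implied fraction not co-authoring with advisor: {p:.2f}")
    # prints about 0.66, i.e. roughly two-thirds of the sample

So the "0.8 publications" group is the larger one, which is exactly why quoting it alone understates the overall figure so badly.)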

Even if you crunch the numbers correctly, though, these statistics are not all that meaningful, because acceptable publication rates vary wildly between subfields, even within a single discipline. I had a total of 5 publications before I finished my Ph.D. (though one of those was a theory paper to which my only contribution was delivering a floppy disk full of experimental data to the theorists upstairs), which is pretty respectable for an experimentalist in my field. I came into an established lab, though, and just picked up on an experiment that was already working. I know of grad students who were involved in lab construction projects who ended up with only one or two publications, but who were well-regarded scientists in the field because of the scale of the work they did building up an experiment that would go on to churn out lots of papers. In a very different field, on the other hand, a former student who went into theory had eight papers pre-Ph.D., which is more in line with expectations in his field.

And the noisiness of the data only gets worse. People working in fields like nuclear and particle physics, where papers are published by large collaborations, can end up with their names on dozens of papers, because everybody in the collaboration gets listed on every paper from that collaboration. (I don't think grad students are generally included as full members in this sense, though junior faculty often are.) You can, of course, attempt to divide things up by subfield, but as always, the more finely you subdivide the pool, the less reliable the data gets, from simple statistics.
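To put a rough number on that last point: the statistical uncertainty on an average grows as the sample shrinks, so subdividing by subfield quickly eats whatever precision you had. A toy illustration in Python, using a made-up spread of 2 papers rather than anything from a real survey:

    import math

    # Hypothetical standard deviation of publication counts in the pool;
    # the value 2.0 is invented purely for illustration.
    sigma = 2.0

    # The standard error of the mean falls off as 1/sqrt(n), so a finely
    # sliced subfield sample gives a much mushier average.
    for n in [400, 100, 25, 5]:
        se = sigma / math.sqrt(n)
        print(f"n = {n:4d}: standard error of the mean = {se:.2f} papers")
    # n = 400 -> 0.10 papers; n = 5 -> 0.89 papers, nearly the size of
    # some of the "average publication count" figures being compared.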

I'm not going to say there aren't hiring committees doing simple paper-counting as an early weed-out step, but it's not remotely a good metric for making final decisions.

In the end, the question comes back to a point raised earlier in the piece, namely what factors the candidate can control. Hireability isn't a matter of hitting some threshold number of publications; it's about making the most of the opportunities available to you, and being able to make a persuasive case that you have done so, and will be able to do so in the future, as a faculty member. What counts as "enough" publications to get a degree or go on the job market isn't a bright-line rule; it's a judgement call, and making a reasonable decision about when you have "enough" is part of the transition from student to scholar.

The factors a job applicant can control are making that central decision-- what constitutes "enough," in consultation with the advisor(s)-- and making the case that the decision was a good one. When we're doing a search and trying to separate "frogs from tadpoles," we're not just counting papers, we're reading statements, and (speaking for myself) a clear and convincing research statement that puts past work in context and shows how it carries through to future work counts for a lot more than an extra paper or two on the CV.

That is, I realize, maddeningly vague. I've been through the process, and been maddened by it. But as comforting as it might be to have more stats about the process, that comfort would be an illusion, because most of the readily available measures are junk.

(And, of course, everything comes back to the central problem, namely the vast gulf between the number of qualified candidates and the number of available jobs...)


I heartily second all of your comments, but there are a few things you missed.

In the case of hiring processes I've been involved in, it is essentially illegal (meaning it is a firing offense) to retain privately the kind of information this person seeks. It is kept by the lawyers (HR) should there ever be a need to defend the college (not its staff) in court.

The biggest thing that author missed was that the things he can't control are EXACTLY what he needs to pay attention to. You are most likely to get hired at a place that is one step below where you did your PhD research and/or post-doc work. You might get hired at the same level or even higher (if that is possible), but the upward moves I have seen were always senior hires who proved themselves on the job.

Second, the percentages are much more real than anything about publication rates. For example, most jobs in physics are in industry rather than academia, and drawn from the fields with ties to industry. (That means particle theory has more than its share of disappointed job seekers.) If the odds favor teaching jobs for anyone who wants to stay in academia, or if teaching is simply what you want to do, then build up your resume in that area and be sure you get appropriate letters.

Finally, an anecdote. I know of an experimentalist who got tenure without publishing anything more than a report during the pre-tenure period. It was a tough sell, establishing an international reputation as a group leader on a project that still had years to go before collecting data. But it was justified. When data started to roll in, papers were published and students graduated at an astounding rate.

By CCPhysicist on 08 May 2017

Who, exactly, is supposed to do this systematic research about how many articles, etc., successful candidates had? Is someone supposed to do it for free, just as a service to the profession? Or if not, what budget line is supposed to pay for it?

This is an excellent post. I would like to add another important point regarding information and the job market.

As a regular dissertation advisor in a social science field, I try to impress upon my students that my job is to help them develop their judgment regarding our field. This is true with respect to research that will end up in their dissertation, but is also true regarding other aspects of the job, such as refereeing, and yes, hiring. Many graduate students find it difficult to put themselves in the shoes of a faculty member on a hiring committee when evaluating their own applications. In my experience, the students who do the best are those who not only meet all the necessary conditions for success (degree in hand, article, good teaching) but know roughly where their work fits into the higher education hierarchy. Knowing why your work and skills are uniquely suited for a LAC, for example, is a huge plus.

The data this individual wants doesn't help develop that judgment. At best it can articulate the necessary conditions to get certain types of jobs, not the sufficient conditions.

By Joshua Hall on 15 May 2017