New Teaching Evaluation Study

Inside Higher Ed, in their "Quick Takes," points to a new study of teaching evaluations, which they summarize thusly:

Students care more about teaching quality than professorial rank when evaluating professors, and professors who receive good evaluations from one group of students typically continue to do so in the future, and to have students who earn better grades than those in other courses, according to new research from the National Bureau of Economic Research.

None of that sounds all that shocking, and the abstract of the paper itself doesn't add much more detail. The key sentences would appear to be:

The findings suggest that subjective teacher evaluations perform well in reflecting an instructor's influence on students while objective characteristics such as rank and salary do not. Whether an instructor teaches full-time or part-time, does research, has tenure, or is highly paid has no influence on a college student's grade, likelihood of dropping a course or taking more subsequent courses in the same subject. However, replacing one instructor with another ranked one standard deviation higher in perceived effectiveness increases average grades by 0.5 percentage points, decreases the likelihood of dropping a class by 1.3 percentage points and increases the number of same-subject courses taken in the second and third year by about 4 percent.
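To put the abstract's numbers in concrete terms, here's a back-of-the-envelope sketch; the baseline values are made up for illustration, and only the per-standard-deviation effects come from the abstract:

```python
# Hypothetical baselines, chosen for illustration only;
# the per-SD effect sizes are the ones quoted in the abstract.
baseline_grade = 80.0        # class average, in percentage points (assumed)
baseline_drop_rate = 10.0    # percent of students dropping (assumed)
baseline_followups = 2.0     # later same-subject courses per student (assumed)

def with_better_instructor(sds):
    """Apply the abstract's effects for an instructor `sds` standard
    deviations higher in perceived effectiveness."""
    return {
        "grade": baseline_grade + 0.5 * sds,
        "drop_rate": baseline_drop_rate - 1.3 * sds,
        "followup_courses": baseline_followups * (1 + 0.04 * sds),
    }

print(with_better_instructor(1.0))
# A one-SD-better instructor nudges the average grade from 80.0 to 80.5,
# the drop rate from 10.0% to 8.7%, and follow-on courses up by 4%.
```

On these (invented) baselines, the effects really are modest in absolute terms, which is the point taken up below.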

Again, nothing really surprising at first glance. It's well known that tenure and promotion at the large university level have little to do with teaching quality, and much more to do with research productivity, so it's no shock to find that rank and salary don't correlate with teaching ability. I'm a little surprised that the grade correlation is so small, but it's probably harder to really buy good evaluations than conventional wisdom would have it.

What would be really interesting to me is to see this sort of thing replicated at a small college, where teaching quality is supposed to be more important for tenure and promotion decisions. It'd be harder to get good statistics (because there are fewer students), but it would be interesting to see how well we actually do at rewarding teaching.
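The small-numbers problem is just the usual square-root scaling of statistical error: the standard error of a mean evaluation score shrinks like 1/√n, so a small college's sections give much noisier estimates. A quick sketch, with class sizes invented for illustration:

```python
import math

def se_of_mean(sd, n):
    """Standard error of a mean score from n independent ratings."""
    return sd / math.sqrt(n)

eval_sd = 1.0  # assume evaluation scores have unit standard deviation

# Invented class sizes: a big-university lecture vs. a small-college section.
print(f"n=300: SE = {se_of_mean(eval_sd, 300):.3f}")  # ~0.058
print(f"n=25:  SE = {se_of_mean(eval_sd, 25):.3f}")   # 0.200
```

With sections a tenth the size, the error bars on any one instructor's average rating are more than three times as wide, so distinguishing instructors takes many more semesters of data.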


However, replacing one instructor with another ranked one standard deviation higher in perceived effectiveness increases average grades by 0.5 percentage points, decreases the likelihood of dropping a class by 1.3 percentage points and increases the number of same-subject courses taken in the second and third year by about 4 percent.

How significant were those results? They sound like small changes, but if the expected error bars are a lot smaller....

-Rob

How significant were those results? They sound like small changes, but if the expected error bars are a lot smaller....

I'm not sure. It looks like I'd need to pay to download the paper, and I just don't care that much.

Maybe the higher-ranked teachers just grade more easily so their students feel better, as opposed to the students actually performing better?

By david1947 (not verified) on 21 Oct 2006 #permalink

Ditto what David said. Students give better evaluations for instructors who give better grades. I have yet to see any evidence that student evaluations provide any information on the quality of instruction. If you want to know how good someone is at teaching, don't ask the students; ask an unbiased third party.

Children do not matter. Surviving children matter. Education is manufacture. It has minimum standards for raw material input, quality assurance for process, engineering qualifications for equipment, and performance specs for personnel. Discovered defective units get tossed off the assembly line. Invest in what you want, not in what you loathe.

Civilization should invest in winners not losers. Feminized milquetoast America celebrates losers with infinite budgets: genetic, developmental, and behavioral trash; reproductive warriors, hind gut fermenters, drug addicts, slum bunnies, Enviro-whiner Luddites; the stupid, the pathetic, and the Officially Sad.

Look what compassion dumps on us. Support evolution: shoot back.

The "hardass" instructors I knew of over the years who actually got good evaluations, typically got them from students who actually wanted to learn something "hard" and were receptive to being "challenged".

Otherwise most other students didn't really want to be there, and were only interested in getting a high grade (ie. premeds, many engineering students, etc ...). These students almost always gave poor evaluations to the "hardass" instructors, for obvious reasons. This was especially common in those courses everybody seems to love to hate, such as freshman physics, freshman calculus and/or linear algebra, organic chemistry, etc ...

My experience is that when students are being forced to take a class, their good evaluations can be easily bought. Too many semesters with pre-med students in physics classes has taught me that. However, I find that when students are taking the class because they want to learn, they differentiate between their grade and whether or not the professor taught well.

Students give better evaluations for instructors who give better grades. I have yet to see any evidence that student evaluations provide any information on the quality of instruction. If you want to know how good someone is at teaching, don't ask the students; ask an unbiased third party.

The closest thing to a surprising result (at least from the abstract-- a reader sent me a copy of the paper, but I haven't looked closely at it yet), though, is that the correlation between grades and evaluations doesn't seem to be that big-- one standard deviation up in evaluations is only half a percent in grade. That's not terribly big, and suggests that getting good evaluations is not just a matter of giving good grades.
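A rough way to see how small that is: converting the quoted slope into a correlation requires knowing the spread of grades, which I don't have from the paper, but assuming section-average grades vary with a standard deviation of roughly 10 percentage points:

```python
# From the abstract: a one-SD-better instructor raises grades 0.5 points.
effect_per_eval_sd = 0.5

# Assumed, not from the paper: spread of section-average grades.
grade_sd = 10.0  # percentage points

# With the predictor already in SD units, the implied correlation is
# just the slope divided by the outcome's standard deviation.
implied_r = effect_per_eval_sd / grade_sd
print(f"implied grade-evaluation correlation = {implied_r:.2f}")  # 0.05
```

On that (admittedly guessed) grade spread, the grade-evaluation link is a correlation of only about 0.05, which is consistent with evaluations measuring something beyond easy grading.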

In general, I agree that there are problems with the whole idea of relying heavily on student evaluations, particularly in classes like the oft-cited pre-med physics, where the students are taking the class because they have to, and may resent it. But there's also a significant problem with finding someone (or something) to act as an "unbiased third party."

It's a Hard Problem.

At some universities many years ago, some of the engineering majors were just as nasty as the premeds. Now that the word is out that there aren't as many good engineering jobs anymore, this doesn't seem to be as big of a problem.

I suppose with college majors where there's a "perception" of graduates having a nice highly paid job after graduation, this seems to attract the young "type A" personality types who want to major in it for "the money". Computer science/engineering was like this during the late 1990's.

Today it seems like all those young "type A" personalities are back in the premed trough.

What I found odd about a lot of my old friends who went the premed and medical school route is that they now strongly discourage anyone from going to medical school. They're constantly complaining about all kinds of things, like:

- paying off their huge debts (from medical school)
- fighting with the HMO's or medicare/medicaid
- dealing with deadbeat patients
- malpractice insurance and/or lawsuits
- lack of respect
- screaming patients
- endless stress
- etc ...

A few even left the medical profession altogether, for all kinds of reasons:

- they can't get any malpractice insurance
- they don't attract enough patients for their private practice
- the pay is crap with very long hours if they work on staff at a hospital
- emergency room patients physically assaulting them
- etc ...

A few of them are very angry with themselves for pissing away that much time and money, just to enter a profession they grew to hate. I've also noticed the same thing with some old friends who went to law school, where they found out they actually hate being a lawyer.

Maybe the word isn't out there yet that the medical and legal professions are not as "glamorous" as they're perceived to be (such as how they're presented on TV shows like "Law And Order", "ER", "Grey's Anatomy", etc ...).