links for 2009-01-13

More like this

I would like to comment on the link to the story about bonuses tied to student evaluations. This seems like a bad idea. Students (especially at a public institution like mine) already think evaluations are a trade-off for grades; tying bonuses to them would make that doubly clear.

I think there needs to be feedback to the administration and to the instructor, but the evaluation method here (similar to most places) tells you basically nothing. When looking at student evaluations in tenure cases, I take the stance that extremely low or high evaluation scores are a sign that something is going on and that I should observe and examine that faculty member more closely.

Here is my favorite evaluation finding (I can't find the original link): there is a correlation between good looks and evaluation scores (http://chronicle.com/jobs/news/2003/10/2003101501c.htm). But remember, correlation does not mean causation.

In the handful of discussions that I've seen about student evaluations, one thing has always been missing: What about weighting the evaluations according to the student's overall performance in all courses? I'd be much more interested in the opinions of a senior with a three-year record of earning As and Bs in tough courses than in the opinions of a sophomore about to flunk out and looking for somebody other than himself to blame. So, why not use a (heavily?) GPA-weighted average of the evaluation scores instead of just an unweighted average?

I don't claim this suggestion is perfect. In particular, I'd _really_ like to hear from an honestly struggling student who did not get proper help. Alas, GPA alone doesn't distinguish well between the lazy-and-immature and the struggling-and-unsupported, so that's a problem with my suggestion. How do you fix this? How about attaching extra weight to evaluations from students who did worse in a particular class than their GPA in other classes would predict? If an A-B student suddenly gets an odd C-, something may be up with the instructor or curriculum design. Weight it.
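The weighting scheme sketched above can be made concrete. Here is a minimal illustration in Python; all the numbers, the GPA-based weight, the anomaly threshold, and the boost factor are made-up assumptions, not anything from the story being discussed.

```python
# Hypothetical data: each tuple is (evaluation score on a 1-5 scale,
# student's overall GPA, grade earned in this class, both on a 4-point
# scale). All values are invented for illustration.
evals = [
    (4.5, 3.8, 3.7),   # strong student, grade matches GPA
    (2.0, 1.9, 1.0),   # weak student, low score counts for less
    (3.0, 3.5, 1.7),   # A/B student with an odd C-: weight this one up
]

def weight(gpa, class_grade, anomaly_boost=0.5):
    """GPA-based weight, with extra weight when the grade in this class
    falls well below what the student's overall GPA would predict.
    The 1.0-point threshold and 0.5 boost are arbitrary choices."""
    w = gpa  # heavier weight for stronger overall records
    shortfall = gpa - class_grade
    if shortfall > 1.0:
        w += anomaly_boost * shortfall
    return w

total = sum(score * weight(gpa, grade) for score, gpa, grade in evals)
norm = sum(weight(gpa, grade) for _, gpa, grade in evals)
weighted_avg = total / norm
unweighted_avg = sum(score for score, _, _ in evals) / len(evals)
```

With these particular numbers the weighted average comes out above the unweighted one, because the underperforming A/B student's middling score gets boosted; with a different roster the correction could just as easily push the other way.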

Here's another neglected way to evaluate: Instead of polling students at the end of a class term, ask graduating seniors which instructors they learned the most from, or received the most help from, over their four years. Might lead to more mature and reflective evaluations. "When I was a freshman, I thought he was too demanding about esoteric stuff nobody really needs to know, but I later came to appreciate what this prepared me to do..." Comparing graduating-senior evaluations against immediate-feedback evaluations should be quite revealing, and might wash out some of the popularity-contest and weed-out-course problems. But do any schools do this systematically?

Also, I'd like to echo the first commenter: the standard deviation of an instructor's evaluations might say more about teaching quality than the evaluation average itself.

By Emory Kimbrough (not verified) on 13 Jan 2009 #permalink

In the handful of discussions that I've seen about student evaluations, one thing has always been missing: What about weighting the evaluations according to the student's overall performance in all courses? I'd be much more interested in the opinions of a senior with a three-year record of earning As and Bs in tough courses than in the opinions of a sophomore about to flunk out and looking for somebody other than himself to blame. So, why not use a (heavily?) GPA-weighted average of the evaluation scores instead of just an unweighted average?

Confidentiality.
In order to calculate the average for the weighting, somebody would need to know which students made which comments. Even if only the department secretary knows, that knowledge might very well have a chilling effect on students and discourage them from answering honestly.

There are things you can do to get at some of that information -- our course comment forms ask students to self-report their expected grades, for example, and whether they're taking the class as part of their major or as an elective. I know people who have calculated averages for the different categories and used those numbers to try to make some point or another.

But in the end, if you want students to give their real opinions, you probably need to let them comment anonymously, so weighting comments by GPA is probably a non-starter.