Yesterday in class my students filled out one of those stereotypical bubble-sheet evaluations that are supposed to tell me (and the administration) something meaningful about my teaching abilities. I won't see those results until after grades are turned in, but that's OK with me because I didn't find the questions asked on the standard form particularly useful.
Plus, I gave my own evaluation a few weeks ago and those results are already in.
So how's my teaching going?
- 31% of students said that my general performance was excellent, 47% of students said it was good, 19% said it was acceptable, and only 3% said it was poor.
For a first-semester faculty member teaching an introductory course, I'm not sure I could ask for any better statistics than that.
- 47% of students said my exams and quizzes were fair but extremely difficult, 25% said they were fair but difficult, and only 22% said they were reasonable.
I tried to make subsequent exams easier on them and average scores went up. The biggest change is that I gave my first midterm online and made it open book. The second midterm was in class, closed book. I think the students underestimated how difficult open-book exams can be, and I overestimated what they could handle. I think we're getting to a happier medium, but in the long run I'm shooting for something approaching difficult (from the reasonable side).
- 53% of my students said the assignments were helpful and reasonable, 16% said they were helpful and easy, and 19% said they were helpful but difficult. 15% said they were useless and easy/reasonable.
I view the assignments as a chance for students to work with the material in a more active way (by writing, analyzing, etc.). In contrast to my philosophy on exams, I've tried to write the assignments for this class so that anyone who can follow explicit directions (and write grammatically correct sentences) can get perfect or near-perfect scores. For some people, that is apparently difficult, and for others it is useless. What can I do? (shrugs shoulders) That said, the quality of the work has improved substantially over the course of the term.
- 56% of students said that my lectures have about the right amount of detail, relative to the amount of time we have in class and the material on the exam. 19% said they had too little detail, 16% said they had too much detail, and 9% said they had no relation to material on the exams.
Since I write my exams straight from the lecture notes, I wish I had more satisfied customers in this department. I think part of the problem may have been that hard first exam.
- 69% of students said that lectures are well organized but move too fast. 25% said that lectures are well organized, easy to follow, and move at the right speed.
I'm not quite sure how to square this with the previous result. I suppose I could break down the statistics and see whether the students were consistent in their assessments of my lectures. Some of the feeling that things are "moving too fast" may be the harsh reality of an introductory college science class that strives to be broad rather than deep. As the semester has gone on, I've started cutting material from the book more liberally so that I can focus on the stuff I think is really important, and if/when I teach this class again, I'll probably do more of that. But anytime the powers that be prescribe 17 chapters in 15 weeks, there's going to be a lot of material.
- 91% of students said that when they contacted me I am responsive and reasonable, and 91% said that I am accessible through the combination of office hours, email, and Blackboard.
Finally, some numbers I can get behind without having to parse them further. They are pretty uniformly happy with their access to me.
It's interesting to look at these numbers now, a few weeks after I collected the data. I wonder how different things would look if I had the chance to do the evaluation again. I'd try, but I don't think I'd get much response, given that they've just done the bubble sheets and mine was an online, volunteer thing. I certainly found the data useful in the middle of the semester. They helped me see where the students were most dissatisfied (the exams) and where I was doing well (lecture organization). I think I would definitely do a mid-semester evaluation again, particularly if I can figure out a way to also collect an end-of-semester sample of the same questions. Sort of a pre-test/post-test of my teaching.
If you are an instructor, do you do a mid-term evaluation? What sorts of questions do you ask? How do you use the results?
Nice job with your class(es)! I use a midterm evaluation in my big freshman class, but it's more of an in-class discussion administered by someone else. How did you do yours? I sometimes think I would be better off doing one myself online or something. Keep up the great work!
An online midterm? I am intrigued by your ideas and would like to subscribe to your newsletter - is the exam timed? Multiple choice? Do you have cheating/security concerns?
Sounds like you are doing great! Wonderful feedback! You said your evaluation was volunteer online; how many students took the evaluation?
Seems that the midterm eval was really useful for you.
Great first semester, Professor!
I *always* use midterm evaluations in all of my courses. I've found them to be so valuable. I usually ask more general questions, like "name one thing that's going really well, name one thing that's going not so well, what can both you and I do to make things better the second half", and then I might ask more specific questions like "are the assignments long enough, too long, or too short" (particularly if I'm trying something new). They usually take no more than 10 minutes to administer, and I always read them over and come back with statistical breakdowns and such in the next class and talk about what I'll change (and what I want them to change) in the second half. Students seem to really appreciate this---I think they are really grateful to feel like someone is really listening to their concerns.
I've used midterm evaluations when I was a new professor, or when I was trying to work out the bugs in a new course. I don't use them regularly, because I haven't found a way to look at the results efficiently in the middle of the semester.
I think the evaluations look great. It's better to come across as "too hard" than "too easy," and if you're hard at the beginning of the semester and then students do better towards the end, they tend to feel like they've learned more. And if over half the class thinks assignments are just right, and the rest of the students are split between "hard" and "easy," then that's usually the best you can do, especially in an intro class where you've got a wide range of backgrounds and abilities and interest.
I think I moved too fast in the first classes that I taught, in part because I constantly heard the voices of my grad committee in my head, ranting about students not knowing this or not knowing that. At some point, I decided for myself what I wanted to talk about in class, and what I wanted them to get from the books or from homework or from labs, and I started slowing down. (Now, the challenge is to incorporate new ideas from workshops and conferences into the classes that have started to fit like an old and comfortable pair of shoes. But that's a different problem.)
I do almost exactly what Jane does! I do a simple 5-min eval once a month asking what's going well, what's going not so well and what they would like to see go differently. The only issue I've noticed is that students still use evaluations as a chance to vent feelings, rather than think constructively. That is, they're more constructive in my evals than the ones the university gives them, but they don't put any thought behind *why* I do the things I do in class, even when I tell them or make it explicit at the time. So they might complain about pre-draft assignments without putting thought into why it was necessary to the progression of the draft (even if I explain this in the assignment and in person).
I like the questions you asked in your evals and might incorporate the less open-ended approach it seems you had, where you gave them a limited set of possible answers. I still like the open-ended stuff, but think there needs to be a way to measure some of what I'm trying to understand quantitatively.
Very thought-provoking post!
I am not an instructor/lecturer yet, but my supervisor is. She gives a test, then a mid-term exam, and another test. In between the tests and the mid-term exam, she gives quizzes during lab sessions too, which all contribute towards the "continuous assessment marks" of about 60% of the grade. Finally, the final exam carries the remaining 40%.
Other lecturers in my university basically work around these arrangements :)
My mother, who was my mathematics professor in college, always made tests unbelievably tough. The "A" students would score perhaps 35% at best. She explained that a test is like a ruler, and you can always use a ruler that's lots longer than the longest thing you're ever gonna measure.
Our university uses Blackboard software, which I have been taking advantage of for this class. One type of assessment I can build is an anonymous survey. I can see who's taken it but I can't link answers to specific people. I encouraged them to take it and made it available for a week and I got responses from about half of the participating students (~30 people).
Re: the online midterm. This was also administered on Blackboard. There was a 15-minute window (during class time) in which they could log in to the exam, and the exam itself was 75 minutes long. It was open book/open resource, so the questions were largely conceptual multiple choice (hence, the difficulty). In order to keep a group of students from sitting in a lab and doing the exam together, I wrote question pools, say 10 questions per chapter, and each student would randomly get 5 of them. It's not perfect but I was experimenting. Unfortunately there were so many software glitches it turned into a nightmare for me to administer.
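(For anyone curious, here's a rough sketch in Python of the kind of per-student random draw I mean. The names and numbers are purely illustrative; Blackboard handles the actual randomization, so this isn't its real mechanism.)

```python
# Illustrative sketch only: draw a random subset of each chapter's question
# pool for each student. Names (question_pools, draw_exam) are hypothetical.
import random

def draw_exam(question_pools, student_id, per_chapter=5):
    """Return one student's exam: a random sample from each chapter's pool."""
    rng = random.Random(student_id)  # seed on the student so the draw is repeatable
    exam = []
    for chapter, pool in sorted(question_pools.items()):
        exam.extend(rng.sample(pool, k=min(per_chapter, len(pool))))
    return exam

# Example: two chapters with 10 placeholder questions each
pools = {ch: [f"ch{ch}-q{i}" for i in range(1, 11)] for ch in (1, 2)}
print(draw_exam(pools, student_id="student_42"))
```

Seeding on the student ID just makes the draw reproducible if an exam ever has to be regenerated for someone.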
Jane - I like your ideas about "one thing to improve" etc. I did give them an open space for comments at the end, but was afraid (with this group) that if I left things too open I wouldn't get useful feedback.
OreneryPest: I like the ruler analogy, but it's not quite what I've been striving for. I want the A students to get nearly every question right and the B/C students not to. As a student I always hated taking exams (usually math ones) that were impossible. It was so discouraging.
Ooooh, so brave to ask them questions mid-semester. And such a good idea!
I may work up the nerve to do this next semester -- I don't give quizzes so I'm not familiar with that part of my course management software. But that's clearly a good use of it, especially if you can keep responses anonymous.
Thanks for sharing the idea and the questions. Better teaching is a great goal.