The results from yesterday's poll on reporting exam scores were pretty strongly divided. 47% favored giving histograms, or some very detailed breakdown, while 33% were in favor of statistical measures only (mean, standard deviation, extrema, that sort of thing). 19% were in favor of giving no collective information at all.

My own usual practice is to give the high score, low score, and class mean, and that's it. This has as much to do with student psychology as anything else.

I provide the high score to prevent students with low scores from thinking "Oh, this was just impossible, so nobody did much better than me." In fact, it's rare for me to give an exam without somebody scoring in the 95-100% range. The high score prevents people from expecting some sort of massive curve.

I give the mean, so students have some idea where they stand relative to the rest of the class. I sometimes specify the typical average grade (somewhere around B-, usually), but most students know what that is without being told. I give the low score to prevent a certain group of students from melting down over getting a score that's just below average, because they assume that "below average" means they must be failing. If it provides a bit of a kick in the ass for the student who got the low score, that's all for the best.

I don't usually give standard deviations, because I don't teach classes that are large enough for that to have much meaning-- the largest single class I've taught here had about 21 students in it. Intro classes are capped at 18, and while I've occasionally taught two sections, that's about it.

It's also rare to have anything close to a normal distribution in exam scores. Most of the time, the distribution is at least somewhat bimodal-- there will be a clump of students who score really well, and a clump who score really badly, and those clumps are usually big enough to keep the total distribution from looking like a bell curve.

I don't do histograms both because I'm lazy and because the smallness of the classes tends to push them a little too close to violating student privacy. If there are only ten students in the class, it's too easy for students to figure out where everybody fits in the histogram, and that's more information than I'm comfortable giving out.

As for the question of class time brought up by several commenters, giving out the mean and extremes takes almost no time. If there's some very common failure mode, I will occasionally talk about that a little, but I never spend more than maybe five minutes talking about exam scores. I don't like to go over the problems in detail, because our trimester calendar means we have too few class meetings to give one over to exam review, and I don't hand out solutions because I like to re-use questions.


I think I've told this story here before; if so I apologize. I had a math teacher in high school who used to return test papers in descending order. The longer you waited, the worse you did. He then wrote all the scores on the board, starting with the top score at the top of the board and on down. Multiples were clustered together. Once that was done he'd glance at the board and curve on the fly. So if the total points available was 200, and you got 89 (which was pretty good; no one ever got all the points), you didn't know your grade until he was done. He'd draw a line under some score, and everything above was an A, another line, and everything above that and below the first line was a B, and so on.

Once he ran out of space on the board after a mid-term, and wrote two scores on the wooden paneling below the chalk tray. Then he turned to one of the students and said, "Eric, your score is on the board in the teachers' lounge downstairs." Eric got a 6 out of 200 if I remember right. I think he wrote his name and then drew a triangle in the space for problem 1 and labeled the sides a/b/c. And then handed in his paper.

The point is that if you were paying attention you knew exactly who got what, since they were returned in the same order they were then recorded on the board. And if you'd been paying attention during the year, you could figure out the mean and s.d. all by yourself!

There was a lot of pressure not to be "that guy" sitting and sitting while everyone else looked over their results. Of course we all shared our papers with each other anyway, so privacy wasn't really an issue.

If you have bi-modal results, does the arithmetic mean actually represent anything?

Let's say you have 15 students. 10 of them have scores around 90, while 5 have scores around 60. The arithmetic mean is 80. What does 80 signify? (The center of mass for the class?)
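A quick sketch of that arithmetic, with made-up scores chosen to match the two clusters (the exact values are assumptions for illustration):

```python
# Hypothetical bimodal class: 10 scores clustered near 90, 5 near 60.
scores = [92, 91, 90, 90, 89, 88, 91, 90, 89, 90,
          61, 60, 59, 60, 60]

mean = sum(scores) / len(scores)
print(mean)  # 80.0

# How close does any actual student come to the mean?
closest = min(scores, key=lambda s: abs(s - mean))
print(closest)  # 88 -- nobody scored anywhere near 80
```

The mean lands in the empty valley between the two clusters, so no individual student's performance is well described by it.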

My point here is that when reporting class performance to the students, certain statistics can be totally uninformative, if not misleading.

From my own personal experience, I have found the bi-modal phenomenon to be an accurate description of my students' grades, with 2/3 "doing well" and 1/3 "doing poorly." Unlike your situation, where you only have a small sample size, my classes typically range from 100 to 200 students, so histograms are much more feasible.