No, this isn't a mistake-- I'm doing two quasi-polls on academic issues today, because I care what you think...
I'm handing back last Thursday's exams today. The scores on the test were about what I expect, given the material. As I'm looking at the scores, trying to assess the class as a whole, I'm curious about the issue of score reporting, so I'll throw this out to the general readership:
What information do you think students should be given about exam scores?
You can answer this from either a faculty or a student perspective (please indicate which).
My general practice is to hand back the tests, and put the high score, low score, and mean on the board. I've seen professors do everything from just handing tests back with no comment, to writing all the individual scores (without names, of course) on the board. I'm curious to know what other people think on this issue. How much information should students be given about the scores on an exam?
If I were faculty, I'd be making a histogram by the finest gradation my university allowed (e.g., probably at the A- vs B+ level of resolution) for my own purposes.
When I'm a student, I assume faculty are doing that anyway, and so I'd kinda like to see it. It's not a big enough deal to ever complain about, or even ask for if it's not freely given, but it's what I like to see.
As a student: the standard in my education has been mean and standard deviation (which is perhaps better than high and low scores).
I've been shown a histogram on some occasions, and that was always the most helpful.
I always show a histogram (with 1-point bins), with mean, +/- 1 SD, and "straight-scale" (90%, 80%, etc.) clearly marked.
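That kind of report can be sketched in a few lines of Python. This is a minimal text-mode version under my own assumptions: the scores, the `score_report` helper, and the straight-scale cutoffs are all hypothetical, not the commenter's actual tooling.

```python
import statistics

def score_report(scores):
    """Print a 1-point-bin text histogram of exam scores, marking the
    mean and the straight-scale letter-grade cutoffs (90, 80, ...)."""
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    cutoffs = {90: "A", 80: "B", 70: "C", 60: "D"}
    for b in range(min(scores), max(scores) + 1):
        count = sum(1 for s in scores if s == b)
        label = ""
        if b == round(mean):
            label += "  <- mean"
        if b in cutoffs:
            label += f"  <- {cutoffs[b]} cutoff"
        print(f"{b:3d} | {'#' * count}{label}")
    return mean, sd

score_report([70, 80, 80, 90])
```

For a real class you'd presumably plot this rather than print it, but the bookkeeping is the same.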
I show them a histogram with the basic stats (mean, median, sd).
I typically don't give individual scores - my classes are small enough (a small, private, business-oriented school) that the students hash them out themselves anyway. The fact that our administration is a little overly harsh about information sharing with students is a factor in that as well.
But whenever a majority of the class makes the same mistake, I also hand back an extra sheet stating the problem, the common error, and how to avoid it in the future. Perhaps not what you were aiming at, but I find that it helps me in wording future questions, and the students do appreciate it.
When I was a student, I rarely got more than the class average.
I agree with Miss Outlier #2 that standard deviation is more relevant than extrema, but a histogram is even better. Sometimes the grade distribution comes out bimodal--and one could argue that such a result is desirable because you have a clear distinction between two groups. But in that case the standard deviation isn't all that relevant either.
Nothing at all; I'm not sure what they would do with the information, and class time is limited. I do, though, discuss the exam questions, and work through the ones that proved difficult for some large fraction of the students.
Mean, median, and standard deviation. The stat geek in me can't help but toss in a reference to the central limit theorem.
As a professor - I just hand back the exams and don't discuss the class grades. My husband who teaches stats gives the mean, sometimes the median, and the upper and lower quartiles.
I'm with moshe on this one... I always hated sitting through class when all we did was discuss grade distributions, course grading, etc. It made me wish I had stayed in the dorm and slept. I would prefer that the prof spend time going over commonly missed problems or conceptual mistakes people made, or, in rare cases, acknowledging the concepts that everyone successfully learned.

If you really want to present grading feedback, I'd go with a histogram, because it carries the most information for students, and they might even learn something while obsessing over it. (I'm pretty bitter about students' obsessions with grades!) I'm always disturbed by the message it sends when a professor spends as much time going over the exam grades as the material. Is the take-home message that I should be pleased with being top of my class even if I missed the important concepts?

(Disclaimer - I am a student. A rather bitter student!)
As a student, I liked to know as much information as possible, but I was a pretty obsessive grade-grubber. As an instructor, I find a lot of students don't care about grades as much as I did, and I think that's a better attitude. So I try to give less information just so as to spend less time on the topic.
The only thing I think one has a duty to do is warn people who are in danger of failing, even if they're not failing yet, because most students' performance only gets worse as the semester goes on.
Honestly, as a student, I never really paid much attention. In undergrad, at least for my physics classes, it was generally a competition for the high score between me and two other students, and we'd hash that out between ourselves. In my first year grad classes, it was nice to know the mean; I figured if I was at or slightly above the mean, I was doing ok, and if not I needed to study up on that material more.
If and when I get to the point of thinking about this from a faculty perspective, I'll probably go with a histogram, with some stats. Extrema don't seem to me to hold any useful information, except for those few students who are wondering if they got the top score. A mean and standard deviation are fine if the distribution is reasonably close to normal, but if it's bimodal, or skewed one way or the other, it can be hard to gauge one's performance without seeing the actual shape of it. Of course, the percentage of students who would actually do anything with the information is probably a fair bit lower than I like to imagine, but I'd prefer to err on the side of more information.
As a student, I was never one to care much about how other students did on a test. If I did poorly, I assumed that it was my fault for not studying enough; if I did well, I assumed that I had a firm grip on the topics.
That being said, I could see the benefit of knowing whether the test seemed inordinately difficult; but I never took a test where I sensed that this was the case.
I guess it marks me as old that when I was an undergrad, professors would post a list of names and grades.
SS and Moshe:
Just a thought, but while I agree that class time is a scarce resource, small scale publishing is not. This is what class mailing lists or websites are for. I wouldn't want excessive time in class taken up by it, either.
As a student, my preference is for enough information so that I can tell where my score sits relative to others, and what letter grade my numerical score represents. (Which I suppose are kind of equivalent.) Especially if you use non-standard number-letter equivalents, the latter can be VERY helpful.
However, having taken several (largeish--intro chem, etc) classes in which I generally got the highest score on the test (by a significant margin), I would also say that I tend not to like histograms. They're useful for the students in the middle, but can make the ones at the very high and very low ends quite uncomfortable. (Especially if the class is graded on a curve and there's a large gap between the top score and the next lowest score. It's no fun to be the "curve-breaker.")
You can save class time and office hour time by preparing an exam key with grade stats on it, either to actually print and hand out or post on the class website. The answers you give generally stop students from asking why they lost points on an answer, and you can give as much or as little information on the score distribution as you wish. (I side with the histogram folks - it's the most likely to tell the story, with no assumptions about normal distributions, etc.)
John: good point, but there are also issues of privacy, especially in small classes. Mainly, I am not sure why any student would need to know the grades of their classmates. Maybe basic statistics would be fine, but again I am not sure what for.
I was a student, now I work as an engineer and I'm getting a masters degree part-time.
The best score reporting I've ever experienced included a score histogram, the mean, and the median for the class. This gave me the information to answer any questions I might have about how I was doing in the class. It also gives some indication of the quality of teaching and testing. I considered this the most responsible and useful action on the part of the professor, whom I also consider one of the best teachers I ever had.
The known meta-theory using eigenvalues of the correlation matrix of scores on exam questions is nicely surveyed in:
An Evaluation of Mathematics Competitions Using Item Response Theory
Jim Gleason
Notices of the American Mathematical Society
Vol.55, No.1, Jan 2008
Item Response Theory (IRT) is a psychometric technique mathematically modeling the probability of the responses of individuals of varying ability to questions (items) on multiple-choice examinations. The author gives an introduction to IRT and explains how it can be used to assess mathematics tests from competitions to placement exams.
It seems many people are missing (except Dr. Kate) that how much information about scores you post depends on how the course is going to be marked. If it is direct percentages, or percentages converted to letter grades (e.g., 95-100% = A+), then no information about scores needs to be (and, for the sake of privacy, should be) disclosed beyond what is on the syllabus. However, if the grading scheme is dynamic and depends on how the class as a whole does, then the mean, standard deviation, median, and probably a histogram should be given. As a TA, when I hand back lab reports I usually only tell them the mean and the standard deviation, but I only have a dozen students, so it's easy to know where they stand.
As a Math. Prof.
I just list the scores on the board (without names). It is much quicker and the students can do with the numbers whatever they wish.
Now if all but one of the students got together and compared test scores, they could determine what score the other student received. But they could also do this if I gave them only the average, so I don't see much advantage, in terms of confidentiality, of only releasing part of the information.
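The arithmetic behind that point is straightforward: given the class mean and all but one of the scores, the remaining score is fully determined. A tiny worked example (the scores are hypothetical, just to illustrate):

```python
# Hypothetical class: everyone except the first student compares notes.
scores = [72, 85, 91, 64, 78]
class_mean = sum(scores) / len(scores)   # announced by the professor: 78.0
shared = scores[1:]                      # the n-1 scores the others share

# The total must be n * mean, so the missing score is the difference.
missing = len(scores) * class_mean - sum(shared)
print(missing)  # 72.0 -- the first student's score, recovered exactly
```

So releasing only the mean offers no real confidentiality advantage over releasing the full anonymous list, as the commenter says.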
As a physics prof, I histogram the scores for classes large enough for that to make sense. For small classes, I'll either give extreme scores and median, or an unnamed list of scores.
While I agree with Dr. Kate that high and low outliers may be uncomfortable, as a prof, I want the low outliers to be a bit uncomfortable, and since I don't grade on a curve, the upper outliers don't need to be nervous about peer response if their identity does get out.
I'm not fond of the mean, I'd prefer the median instead. That plus the maximum and minimum, and the upper and lower quartiles, would be great. Maybe some higher percentiles, like the 90th and 95th, would be worthwhile, but maybe your class sizes aren't big enough for that to be meaningful.
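For what it's worth, every summary in that wish list is one call away in Python's standard library. A quick sketch with made-up scores (the numbers below are hypothetical):

```python
import statistics

scores = [55, 62, 68, 71, 74, 77, 80, 83, 86, 90, 95]

median = statistics.median(scores)             # 77
q1, _, q3 = statistics.quantiles(scores, n=4)  # quartiles: 68 and 86
lo, hi = min(scores), max(scores)              # 55 and 95

print(f"min {lo}, Q1 {q1}, median {median}, Q3 {q3}, max {hi}")
```

Higher percentiles (90th, 95th) come from the same `quantiles` call with `n=10` or `n=20`, though as the commenter notes, they're only meaningful in fairly large classes.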
Showing every score would be cool, but only as a graphic, not as a table of numbers, and that implies you'd have a laptop and projector, or had handouts made using faculty printers: we're not trying to make you do lots of work scribbling on a board.
I'm a graduate student. This year I had a class where the instructor gave us a histogram with the mean and standard deviation, plus an explanation of the grading scale (mean = B+, mean + 1 standard deviation = A-...). He also included a note saying that if your score was below a certain threshold, he'd like you to come to office hours to discuss your grade in the class, because it indicated that you weren't performing up to minimal standards.
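A grading scale like that can be sketched as a z-score lookup. The cutoffs below are hypothetical, chosen only to mirror the "mean = B+, mean + 1 SD = A-" idea from the comment; the `curved_grade` helper is my own invention:

```python
import statistics

def curved_grade(score, all_scores):
    """Hypothetical curve: the mean maps to B+, and each full
    standard deviation above or below shifts the letter grade."""
    mean = statistics.mean(all_scores)
    sd = statistics.stdev(all_scores)
    z = (score - mean) / sd
    if z >= 1.0:
        return "A-"
    elif z >= 0.0:
        return "B+"
    elif z >= -1.0:
        return "B-"
    else:
        return "C+"
```

The "come see me" threshold the instructor used would just be one more cutoff at the bottom of the lookup.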
I appreciated being given that much information, but for most of my classes we're given, if anything, just the mean and the curved grading scale (as in, "above 85% is an A," or whatever).
The more info the better. Histograms, averages, standard deviations, you name it. Another key piece of information students should be given is the professor's opinion of how well the class did. If the average score was 60%, was the test difficult, or is the professor disappointed that the scores were that low?
Most exam statistics are utterly useless to the student, and I think the obsession is about as pointless as hoovering up baseball stats. What is the student supposed to do with information like, "you're at the 73rd percentile for the class"? I don't want my students to be fussing over where they are relative to the other students, but instead focus on where they are relative to the material.
I tell them what the mean and high scores are after the exam. That's it. As we get into the last half of the term, I'll give them general, individual assessments -- "you're doing A work so far", "you're failing" -- with specific recommendations on where they are weak. I used to give out all kinds of specific numbers, but that only encouraged students to grasp at points, nitpick endlessly for partial credit, and worry about whether so-and-so was getting a better grade than they are, and I hate that crap.
Now, I don't hide their personal information from them, but I don't let them have a clue about the rest of the class. They can talk to me anytime, I pull up my spreadsheet, and I can tell them exactly what grade they have in the class so far this term, but nothing more. It's not a horserace.
For a small class, standard deviation and mean only, plus maybe a statement like "I'm not promising any specific curve, but the historical average in this course has been around a B-" and some kind of note about how pleased I am with the results. I don't like to give top/bottom scores, or histograms for small classes, as I think it violates students' privacy. I tell them to ask around if they want to share scores. For big classes, I'll post a histogram though. The whole mess doesn't take more than five minutes or so.
As a student, the only time I cared about what other people got was if I got an exceptionally low grade(B or C). I wanted to know if I was an idiot alone or if we were all having problems.
When grades are given as pure letter grades, I never cared about the distribution. I know that a C is bad and an A is good and I don't care how many people got what.
But when the grade is a number and the relationship between that number and a letter grade is not obvious, the most helpful thing in those situations is when the instructor says something vague like, "If you got below 60 points on this one, you need to start working harder." It doesn't tell us how everyone did, but it lets you know where you are relative to what the instructor thinks. Obviously, if there's a strict percentage-to-letter-grade formula (90-92% is A-, 80-90 is B+/B/B-, etc.), then it doesn't matter, because I can just do the math and get my letter grade.
And the whole thing goes out the window when there is a mandatory curve. In those situations, you want to know your rank. Mandatory curves are evil anyway; if everyone knows all of the material, then they all get A's. Isn't learning the material the goal?
Notwithstanding the eigenexams of #20, I handed my students histograms after each exam. The problem was explaining "grading on the curve" to them and/or their parents if they were not familiar with Gaussians. These were 9th, 10th, 11th graders. I won't pretend that ALL of my college and university students were much clearer on the concept, even in Math and Astronomy classes.
I'm going to go with mean, max, and standard deviation as being enough to know where you are in the class. The obsession with points (gotta catch 'em all!) is annoying though.
It's also hard to deal with the culture shock of college freshmen, as a TA with no control over how the course is graded except for marking homework and exams. They are stuck in the strict 90/80/70/60 (or 93, or 95) breakdowns from high school, and all the professors I've worked with write tests expecting a ~60% average to keep the grades from piling up at the top, so they can see what's going on with the top of the class. It's like the students can't handle the idea that an 80% is a good score.
Mean, unless you are. =)
Standard deviation is pretty standard, and college students OUGHT to be conditioned to interpret that via spinal reflex.
Min and max are pretty conventional, but ought not be needed if you have SDEV. On the other hand, they're faster to find by hand if you're not using a spreadsheet.
I'll add a tentative vote for histograms binned by letter grade. However, as a student, what I really want to know is what scores indicate minimal competence and (reasonable progress toward) mastery, respectively (e.g. "If you scored above 80 or so, you're doing very well. If you're one of the three who scored below 50, please come by my office hours later to discuss the test.")
I'm unclear. Why are statistics even relevant here (assuming a STEM class)? Some are useful pedagogically in terms of knowing where people are falling down or where clearly the group got something, but most of the uses here are for relative rankings.
Which, once you've set standards for what constitutes a sufficient level of mastery (i.e., the syllabus overall, and the grading rubric for homework/exams/whatever, written before any grading is done so it is applied consistently), are irrelevant.
Relative performance just isn't useful. What is relevant is absolute performance -- did this person's solution handle Newton's laws correctly, did that person apply Bernoulli's law ok, did that person over there violate energy conservation, etc.
When I was a student there were several professors who would always show histograms, which was fun, because the distributions bore no resemblance whatsoever to Gaussians.
As a student, highest, lowest, median and std. deviation (I am an engineering student after all). That way you get the bottom, the middle cluster and the top. I can pretty much figure out where I fall in the class with just that information. Not including the std. deviation means I have no way of knowing if my 10 points off the average is within the norm or not.
Our stat faculty regularly report lots of statistics about their exams to their students, treating it as a teachable moment with some real data.
My classes are small enough that I bin them manually as I record grades, just to get a feel for whether the B grades are all 88 or 89 versus 81 or 82. When I show the distribution to the class, I show it binned by grade and always have a reason for showing it to them.
I don't bother with the mean (or median or mode) unless they are interesting. "Interesting" might be a mode of 98 or a median of 92 on a test about DC circuits to a well-prepared class. Or it could be very bimodal. I've never gotten the case where an average of 75 results from half the class at 95 and the other half at 55, but I've seen some that suggest it is possible in an intro class.
Like one commenter, I will not state the low score on the test but will tell someone if their score was the lowest when asked. Similarly, I will show my own scatter plot to anyone who asks, but can't recall anyone asking.