Value Added Testing (or "Merry Christmas, Novak")

One of the more contentious recurring topics around here over the years has been education policy, mostly centering around the question of teacher evaluation and teacher's unions. It's probably the subject for which there's the biggest gap between my opinions and those of some of my regular readers.

As this is a good time of year for peace and reconciliation, let me point to this guest post at Calpundit Monthly, in which Paul Glastris talks about the problems of "gifted" kids under the "No Child Left Behind" system, and pushes a Washington Monthly article on Value-Added Testing. The idea here is to evaluate teachers and schools based on how much the students improve over the course of a year, rather than how well they meet a single absolute standard. This avoids penalizing schools and teachers who are stuck with student populations that come in poorly prepared, and also provides some incentive to push the more advanced students (who tend to be relatively neglected in a single-absolute-standard system).

It's not a perfect evaluation method-- it increases the number of required tests, and opens the possibility of rewarding schools that turn out lots of students who are below grade level, but not as far below as when they came in, which isn't really a satisfying outcome (not to mention perverse statistical dodges like telling kids to bomb the pre-tests, to inflate the "value added"). It's a step in the right direction, though, and some combination of absolute standards and value-added testing might lead to an acceptable assessment method.

The basic idea is pretty standard among people who do physics education research (see, for example, the well-known groups at Harvard, Washington, and NC State). I've done this with my intro classes the past few years-- you give a pre-test in the first week of class, and a post-test at the end, and measure student improvement using the "normalized gain," which is the post-test score minus the pre-test score, divided by the maximum score minus the pre-test score:

                       (Post - Pre)
                Gain = ------------
                       (Max - Pre)

This is designed to account for the fact that a one-point gain for a student who starts with a 25/30 is a bigger achievement than a one-point gain for a student who starts with a 5/30.
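For the curious, here's a minimal sketch of that calculation (the function name and the 30-point maximum are just illustrative, chosen to match the example scores above):

```python
def normalized_gain(pre, post, max_score=30):
    """Normalized gain: fraction of the possible improvement actually achieved."""
    return (post - pre) / (max_score - pre)

# A one-point gain means more for a student who starts near the ceiling:
print(normalized_gain(25, 26))  # 0.2 -- 1 point out of 5 possible
print(normalized_gain(5, 6))    # 0.04 -- 1 point out of 25 possible
```

The same one-point improvement scores five times higher for the student who had only five points left to gain.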

(The test we use is designed so that the scores are generally pretty lousy-- I think the pre-test average nationally is somewhere around 10/30-- and the gain from a "traditional" lecture-format course is usually around 25%. Classes based on "active learning" are found to do a better job, with gains of 40-60%.)

It's a reasonably effective technique, though you can question how much it actually proves about anything (and people do, at length). It's also fiendishly difficult to come up with good tests for this purpose, and it's not hard to find people who complain about even the well-established tests that are out there.

(Let me note, by the way, that I'm not prepared to whole-heartedly endorse everything that Glastris says in his post. In particular, I'm not especially disturbed by a drop in the fraction of students scoring as "advanced" in math as they move up in grade level-- it should be harder to score as "advanced" at the higher levels. I'm also a little skeptical of the oft-cited claim that some sizable fraction of high school drop-outs are really "gifted" kids who are too bored to continue-- in a lot of the anecdotes people use to illustrate this, the problem seems to be more a matter of psychological issues than boredom.

(I do agree, though, that the focus on bringing the bottom kids up to speed-- "triage, rather than teaching," in the phrase of one of the Washington Monthly commenters-- tends to lead to the better students being slighted, particularly in districts with limited resources. The "gifted" program at my old school was an on-again-off-again thing while I was there (existing only when enough parents made enough noise to get it funded), and was shut down for good several years back. Programs for the "special needs" kids at the bottom end of the class are legally mandated, and have actually expanded over the same span.)

(Originally posted at my steelypips blog.)
