Kevin Drum has done a couple of education-related posts recently, first noting a story claiming that college kids study less than they used to, and following that up with an anecdotal report on kids these days, from an email correspondent who teaches physics. Kevin’s emailer writes of his recent experiences with two different groups of students:
Since the early 1990s, I have pre- and post-tested all of my introductory mechanics classes using a research-based diagnostic instrument, the Force and Motion Conceptual Evaluation. This instrument is based on research by Ron Thornton at Tufts that identified a reproducible sequence of intermediate states that all people seem to pass through in the process of gaining a Newtonian understanding. So it can give me not only a do-they-get-it/do-they-not measure, but also, along several conceptual dimensions, a measure of how close they are to getting it.
My first job out of graduate school was at an unranked tier 4 institution in Myrtle Beach, South Carolina. Coastal Carolina “University” to be specific. It was the 13th grade. [...] I pretty reliably got 50-60% normalized gains on the FMCE.
Normalized gain is the ratio of how much their scores increased to how much they could have increased: g = (post - pre)/(100 - pre). A normalized gain of 50-60% is actually pretty stupendous on this particular measure; it means his classes were typically closing more than half of the gap between their pre-test scores and a perfect score. (There’s a short worked sketch of the arithmetic after the quoted passage below.)
[His current employer] Spelman [College, in Georgia] is a top 75 liberal arts college, according to US News, and top 10 according to the Washington Monthly. My personal impression of the students is that the average is generally much higher than it was at Coastal. These are students who can think around a few corners. [...]
I think I’m at least as good an instructor as I used to be, and probably a lot better. I know quite a bit more about developmental psychology and cognitive science as a result of my job at Georgia Tech and I think that improves my instruction considerably.
And yet, in a good year I get about 20-30% normalized gains.
I don’t really know what is different but something clearly is.
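To make the normalized-gain arithmetic concrete, here’s a minimal sketch of the calculation (my own illustration, not anything from the emailer or the papers; the scores are made up):

```python
def normalized_gain(pre, post):
    """Normalized gain g = (post - pre) / (100 - pre), scores in percent."""
    return (post - pre) / (100.0 - pre)

# Hypothetical class averages: a class that pre-tests at 30% and
# post-tests at 65% has closed half of the gap to a perfect score.
print(normalized_gain(30, 65))  # 0.5
# The same g can come from a much higher starting point:
print(normalized_gain(60, 80))  # 0.5
```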
I have seen a few comments on these posts questioning the validity of “normalized gain.” The argument is, basically, that if you start with students who know nothing, it’s easy to teach them quite a bit, but if you start with students who already know quite a bit, it’s difficult to raise their scores significantly.
This is true if you’re talking about absolute gain, but normalized gain is supposed to take that into account. That’s why it’s a fairly standard measure used by the physics education research community to compare instructional methods across courses and institutions.
The concept of “normalized gain” as a general pre/post test measure goes way back; I’ve seen references to papers from the 1940s. Its application to physics really starts in the 1990s, with the key reference being this 1998 paper by Richard Hake looking at test scores from 6000-odd students in introductory physics courses at a variety of institutions (using a slightly different test than the one cited in Drum’s post, but the results are pretty robust). The class-average pre-test scores range from around 20% to around 80%.
Hake plots the data in a slightly funny way, shown in this figure:
This is a graph of the absolute gain (that is, the increase in the percentage score from pre-test to post-test) as a function of the pre-test score. As you would expect, this shows a clear downward trend as you move to higher pre-test scores: a class that pre-tests at 80% can gain at most 20 points, while a class that pre-tests at 20% has 80 points of headroom.
The diagonal lines on the graph are lines of constant normalized gain. That is, all points on the lowest solid line have a normalized gain of g=0.23. As you can see from the data, the points associated with “Traditional” courses (standard professor-lecturing-from-the-front-of-the-room courses, represented by shaded points) tend to cluster along that line, whether they were taught in a high school, college, or university setting. Points associated with “Interactive Engagement” courses (any of a variety of reform instruction methods in which students do more group work than note-taking) show more spread, but if you draw a line through the middle of the group, you get a decent fit with a normalized gain of g=0.48 (the second solid line).
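To see why those lines are straight, rearrange the definition: a fixed normalized gain g predicts an absolute gain of g × (100 - pre), which falls off linearly with the pre-test score and hits zero at a pre-test of 100%. A quick sketch (my own illustration, using only the two g values quoted from the figure):

```python
# Absolute gain implied by a constant normalized gain g:
#   gain = g * (100 - pre)
# For fixed g this is a straight line sloping down to zero at pre = 100%.
for g in (0.23, 0.48):            # the "Traditional" and "IE" lines
    for pre in (20, 40, 60, 80):  # pre-test scores spanning the data
        print(f"g = {g:.2f}, pre = {pre}%: gain = {g * (100 - pre):.1f} points")
```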
This suggests that normalized gain scores are correlated with instructional method, and not so much with incoming student knowledge. Hake did the obvious statistical test: the correlation coefficient between the normalized gain and the pre-test score is only +0.02, which is pretty negligible compared to the correlations of +0.55 with the post-test score and -0.49 with the absolute gain.
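The correlations Hake quotes are ordinary Pearson coefficients computed over his course averages. Here’s a sketch of how that computation goes, with placeholder arrays standing in for his courses (these numbers are invented, not his data):

```python
import numpy as np

# Placeholder course-average scores (NOT Hake's data), one entry per
# course, as percentages.
pre  = np.array([25.0, 40.0, 55.0, 70.0])
post = np.array([45.0, 65.0, 75.0, 85.0])

gain = post - pre              # absolute gain
g    = gain / (100.0 - pre)    # normalized gain

# The three correlations Hake reports are Pearson coefficients like
# these, computed over his full set of courses rather than a toy list:
print(np.corrcoef(g, pre)[0, 1])   # g vs. pre-test score
print(np.corrcoef(g, post)[0, 1])  # g vs. post-test score
print(np.corrcoef(g, gain)[0, 1])  # g vs. absolute gain
```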
That’s just one article, though, and a somewhat self-selected sample by a guy with an agenda. Maybe there’s some correlation to be found in a different study. And, indeed, there is, in this 2005 paper, which looked at correlations between normalized gain and pre-test scores at a range of schools: Loyola Marymount, Southeastern Louisiana University (hey to Rhett), the University of Minnesota, and Harvard University. For all of these schools but Harvard, they found a correlation between pre-test score and normalized gain, as shown in graphs like this one for Loyola Marymount:
The top graph shows all of the individual student normalized gain scores plotted versus pre-test score, while the bottom graph shows the same scores averaged within 17 bins. There’s a clear correlation to be seen, but it’s a positive correlation: students with higher pre-test scores are likely to see higher normalized gains than students with low pre-test scores. This is the exact opposite of what the obvious argument against the anecdata from Kevin’s emailer would predict.
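The binned plot in the bottom panel is easy to reproduce in outline: sort students by pre-test score, split them into equal-size bins, and average the normalized gain within each bin. A sketch on made-up student scores (the bin count of 17 comes from the figure; everything else here is invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up individual student scores (percentages), purely illustrative.
pre  = rng.uniform(10, 80, size=170)
post = pre + rng.uniform(0, 1, size=170) * (100 - pre)
g    = (post - pre) / (100 - pre)

# Sort students by pre-test score, split into 17 equal bins as in the
# Coletta & Phillips figure, and average g within each bin.
order = np.argsort(pre)
for pre_bin, g_bin in zip(np.array_split(pre[order], 17),
                          np.array_split(g[order], 17)):
    print(f"pre ~ {pre_bin.mean():5.1f}%  ->  mean g = {g_bin.mean():.2f}")
```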
So, does this mean that kids these days are dramatically worse than they used to be? Not necessarily. The ellipses in the long quote above stand in for a lot of material that would call into question any assumption that the classes at Coastal Carolina and Spelman were truly comparable in the ways they would need to be for a direct comparison of these scores to be meaningful. It does strongly suggest, though, that the change cannot be explained away as an obvious effect of starting with smarter students.
(Both of these papers can be found as free PDFs with a little Googling, if you would like to read the source material directly.)
Hake, R. (1998). Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses. American Journal of Physics, 66(1). DOI: 10.1119/1.18809
Coletta, V., & Phillips, J. (2005). Interpreting FCI scores: Normalized gain, preinstruction scores, and scientific reasoning ability. American Journal of Physics, 73(12). DOI: 10.1119/1.2117109