Dropping SAT scores: A plausible explanation

The local newspaper here in Charlotte was aghast that scores on the SAT (a test used to help determine college admissions in the US) fell in North Carolina this year, even though the article goes on to point out that scores dropped even more nationwide.

So what's up? Are schools letting the kids down? Is the new test harder? (This year a writing section was added, though the format of the rest of the test remains the same, and the writing section isn't included in the year-to-year comparisons.) The College Board, which administers the test, claims the drop can't be attributed to the longer test -- students performed equally well (or badly, depending on your perspective) at the beginning and the end of the test sessions.

One explanation may be that students didn't retake the test as frequently as they had during previous years. The Observer article claims this may be due to increased testing fees. But with kids spending hundreds of dollars on iPods, PSPs, and the rest of the latest electronic gizmos (not to mention SAT-prep courses), is a lousy 13 bucks going to prevent them from retaking the test? I think the answer may be elsewhere.

I also doubt this year's drop can be attributed to the decline in general intelligence suggested by some recent research. That sort of research requires decade-long intervals to find significant effects; the SAT decline showed up in the space of a single year.

What's more, scores on the SAT's rival test, the ACT, held steady this year. The simplest explanation for the discrepancy must lie in the difference between the SAT and the ACT. The primary change implemented in the SAT this year was the addition of the new writing section, which extended the length of the test by nearly an hour. Though the ACT has a writing section (and did last year as well), this section is optional, while the SAT's is mandatory.

So if the College Board's claim that the extended length of the test didn't cause the decrease is true, and if writing scores aren't included in the comparison, what's left? I'd say it's how students prepared for the SAT this year. Now that the writing section is required, students had to divide SAT prep time between the traditional verbal and math sections and the new writing section. Further, since the new section was so -- well, new -- they may have spent disproportionately more time trying to master the art of writing a 5-paragraph essay in 25 minutes. So, comparatively less time would have been spent going over algebra, reading comprehension, and the other skills the SAT purports to test.

Uncertainty about the new writing section may also have convinced plenty of students not to retake the test. You might be convinced that you can make fewer arithmetic errors the second time around, but can you write better? That's a dicier question. I'd say fewer students retook the test because they thought it'd be safer just to let the original score ride.

So that's my guess as to why the scores went down. What'll happen next year? I suspect scores may come up a bit, since next year's students will have had more of a chance to wrap their heads around the writing section. You heard it here first!

Comments

Actually, the SAT costs $41.50. (Why they didn't just round up the fifty cents is anybody's guess.) That's still a bargain compared to the GRE ($130), but it also costs $9.50 to send your score to each college beyond the four score reports provided free of charge.
That's not to say I disagree with your analysis. I agree with it.

How about this one? Jobs these days require a college degree, so more kids are applying to college. Hence, more kids are taking the SAT. Specifically, more kids who probably wouldn't have had to go to college to get a good job a few years ago, and probably aren't going to do as well on it. These kids' scores drag the mean down.

Let's find out if the number of students taking the test has increased over the past few years.
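For what it's worth, here's a tiny back-of-the-envelope sketch of that composition effect. All of the numbers are hypothetical (they don't come from the article or the College Board); the point is just that adding a block of lower-scoring test takers pulls the mean down even if nobody's individual performance changes.

    # Hypothetical composition effect: a new group of lower-scoring test
    # takers joins the pool and drags the overall mean down, even though
    # no individual student scores any worse than before.
    returning_n, returning_mean = 1_400_000, 1025   # made-up figures
    new_n, new_mean = 100_000, 960                  # made-up figures

    mean_before = returning_mean
    mean_after = (returning_n * returning_mean + new_n * new_mean) / (returning_n + new_n)

    print(f"Mean with only returning-style test takers: {mean_before}")
    print(f"Mean after adding the new group:            {mean_after:.1f}")
    print(f"Apparent 'decline': {mean_before - mean_after:.1f} points")   # a few points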

OK. Wait a minute. This was a drop over the course of only ONE year?!? And it was a statewide drop of 2 points!?! And a nationwide drop of 7 points? This sounds like random variability to me. Let's freak out if it turns into a trend over a few years.

Jenny:

I'd buy the one-year variability argument, *but* we're talking about a statistically significant result -- over 1.5 million people took the test. Plus, ACT scores were stable. My explanation might not be right, but it does fit the data.

Just because a difference is statistically significant doesn't mean it is significant in any way we actually care about (you need some measure of effect size to get at that). Average SAT performance is influenced by a host of factors (a subset of which have already been mentioned here: the cost of the exam, the number of students taking it, and how prep time gets divided), and any change in performance probably can't be explained by just one of them. You'd need something like a regression analysis to sort it all out, and with such a small change it hardly seems worth the effort. Furthermore, I've never really understood the validity of these cross-year comparisons, because the test itself changes from year to year. How do we know this isn't just slight variation in the construction of the test itself? In short, I think these kinds of media stories miss these subtleties and lead people to interpret the change as something meaningful and important.
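To put rough numbers on that significance-versus-effect-size point: with about 1.5 million test takers (the figure mentioned above), even a 7-point drop comes out wildly "significant" while being a tiny effect. The standard deviation of roughly 200 for the combined score is an assumed, illustrative value, not something reported in the article.

    # Sketch of the significance vs. effect size distinction.
    # n and the 7-point drop come from the thread; the SD of ~200 for the
    # combined score is an assumption for illustration only.
    from math import sqrt, erf

    n1 = n2 = 1_500_000      # approximate number of test takers each year
    mean_diff = 7.0          # nationwide drop reported in the article
    sd = 200.0               # assumed SD of combined scores

    # Two-sample z statistic and two-tailed p-value for the mean difference
    z = mean_diff / (sd * sqrt(1/n1 + 1/n2))
    p = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

    # Cohen's d as a crude effect-size measure
    d = mean_diff / sd

    print(f"z = {z:.1f}, two-tailed p ~ {p:.2g}")   # hugely "significant"
    print(f"Cohen's d = {d:.3f}")                   # ~0.035 SD: a trivial effect

With real data you'd want the actual year-to-year standard deviations, but plausible values wouldn't change the overall picture: enormous samples make even trivial differences "significant."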

>(a test used to help determine college admissions in the US)

Thank you, thank you, thank you!
I knew what the SAT was (as do most of your readers), but the mere fact that you considered that some people might live outside of "America" (the USA, really) is so rare among your fellow citizens that it is worth noting.

On the statistical significance: regression, graphs, the collection method, context, more data, and so on would all be interesting, and needed. But this looks like an appealing problem for a statistician to dig into, similar to the '90s drop in crime rates (in certain US states). And remember that a test run at the 95% confidence level will, by construction, flag a "significant" difference about 5% of the time even when nothing has really changed.
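A quick simulation makes that false-positive point concrete: run a 5%-level test many times on data where nothing has actually changed, and it flags a "significant" difference about 5% of the time. Everything below is simulated; no real SAT data is involved.

    # Simulated false positives: two samples drawn from the SAME
    # distribution, tested at the 5% level. About 5% of runs come out
    # "significant" even though there is no real difference.
    import random

    random.seed(0)
    trials, n, sd, z_crit = 2_000, 500, 100.0, 1.96
    false_positives = 0
    for _ in range(trials):
        a = [random.gauss(500, sd) for _ in range(n)]
        b = [random.gauss(500, sd) for _ in range(n)]
        diff = sum(a) / n - sum(b) / n
        se = sd * (2 / n) ** 0.5          # known-SD standard error of the difference
        if abs(diff / se) > z_crit:
            false_positives += 1

    print(f"False-positive rate: {false_positives / trials:.3f}")   # close to 0.05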

May I offer one small hypothesis? Steven Levitt? "Freakonomics", anyone? OK, the author smells of sulphur, but he rightly pointed out that if there is significant cheating, results may drop once the cheating is cracked down on: an interesting theory, though obviously difficult to assess. I remember reading that there used to be widespread cheating at the teacher level (around 5%): was any new system put in place this year to prevent teachers from giving away the questions?

danah (boyd) will kill me for saying this, but could time spent on MySpace play any role, whether through a direct effect (less homework) or an indirect one (less sleep)? I came across an old French study estimating that half the time spent online was taken from TV viewing and the other half from sleep.

To test the first hypothesis, I would advocate looking at the correlation between the drop and proven local cases of cheating; for the second, at internet activity at the smallest possible scale.

Perhaps there is an anxiety factor associated with the new writing component? Some studies have shown that African American and female test takers tend to "perform" worse, on average, when tests are preceded by a question that primes their social status. Similarly, if students are nervous or anxious about the writing component, that anxiety may influence their overall performance. In short, anxiety about one part may spread to other parts. (Not that this is inconsistent with your explanation, Dave.) It will be interesting to see whether or not the aggregate results change much in the next few years.

By Ken Smith on 04 Sep 2006