Apples : Oranges :: New SAT : Old SAT

OK, they dumped the analogy questions ages ago, but for oldsters like myself, those are still the signature SAT questions...

Inside Higher Ed has a piece today on the new SAT results, which expresses concern over some declines:

Mean scores on the SAT fell this year by more than they have in decades. A five-point drop in critical reading, to 503, was the largest decline since 1975 and the two-point drop in mathematics, to 518, was the largest dip since 1978.

Gaps among racial and ethnic groups continued to be significant on the SAT, including the new writing test, for which the first mean scores were released at the College Board's annual SAT briefing on Tuesday. The board also reported a small decline in the total number of people who took the test, and while board officials insisted at a news conference that the decline was across the board, they acknowledged later Tuesday that the board's own data suggest that the decline appears to be among students from the lowest income families.

The percentage of SAT test takers with family incomes up to $30,000 was 19 percent for the high school class of 2006, down from 22 percent a year ago. The share of SAT test takers from families with incomes greater than $100,000 was 24 percent, up from 21 percent a year ago.

Sounds pretty bad, right?

Here's the thing: Last year was the first year of the newest revision of the SAT, including the addition of a writing test. Frankly, I'm amazed the scores didn't fall further.

Is this a real decline in student performance, or a blip due to the revisions? There's really no way to say without more data, which will take a year or so to obtain; realistically, you'd want two or three more years of tests before you could hope to say anything sensible. My guess is that scores will come back up slightly as students and teachers get a better idea of what to expect from the new format, but I wouldn't want to stake a large amount of money on it.

Either way, it would be really silly to declare the imminent fall of the sky at this point.

Apples : Oranges

Since you bring it up, it is quite simple to compare apples and oranges:
Apples and Oranges -- A Comparison
by Scott A. Sandford, NASA Ames Research Center, Mountain View, California

By somnilista, FCD (not verified) on 30 Aug 2006 #permalink

The sky fell years ago, when we became more and more dependent on the SAT for college admissions.... The result is that college students are chosen partly on the basis of their ability to do the SAT. To the degree that that correlates with what we're really looking for in college students, that's not necessarily a bad thing, but I'm dubious that the correlation is as high as we'd like to think it is. What's more, students who are really good at memorizing things get used to thinking that they should get high grades in everything....

From the very brief descriptions I've heard of the new SAT, I'm hoping that it will actually be a bit of a better test.

But no matter what, we need to avoid overinterpreting the statistics of test scores and so forth.

-Rob

People take their comfort (or ring their alarm bells) wherever they can. The governor of Georgia was quite excited because the state's SAT scores had moved the state up four places - from dead last. Of course, Georgia students' SAT scores also declined.

By Mark Paris (not verified) on 30 Aug 2006 #permalink

From the very brief descriptions I've heard of the new SAT, I'm hoping that it will actually be a bit of a better test.

I've been tutoring students in the new SAT for a while now, and I assure you it's still junk.

One of the real problems with the SAT is that students from well-off families will always be able to afford tutoring services to raise their scores, while students from low-income families will not. Often teaching students basic SAT strategy (which is completely unlike any other test strategy) is sufficient to raise scores by as much as 100 points.

More specifically, as a math professor I find that the math problems on the new SAT are pretty much the opposite of what I would want to test. A significant number of the problems turn on unwinding convoluted wording (questions that are really asking for something simple), trick questions, and clever gimmicks.

The new essay component of the test is absolutely horrible. The students write a position essay in 20-25 minutes (no outlines, no revisions!), and the arguments supporting the position taken can be 100 percent factually incorrect with no ill effect on the score. Is that the kind of writing we should be asking of students?

The SAT tests one thing only -- a student's ability to take the SAT.

I'll confess to being dense here, but is 5 points really outside the error bar for an SAT score? Especially since the minimum decrease for an individual score is 10 points, right? Does the "decrease" have to be a function of anything real?

By Brian Ledford (not verified) on 30 Aug 2006 #permalink

My rule of thumb for evaluating annual changes in SAT scores is simple: When the scores of students entering college are lower than mine were, it is a clear sign that kids these days are getting stupider. When the scores of students entering college are higher than mine were, it is a clear sign that these tests are highly flawed and don't really measure anything of significance.

This rule of thumb is useful for many things other than SAT scores.

Brian, my understanding is that the drop is indeed statistically significant. When you've got an n of 1.5 million, your error is pretty small.
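
To make that concrete, here's a quick back-of-envelope sketch in Python. The standard deviation of roughly 113 points and the figure of 1.5 million test takers are ballpark assumptions on my part, not an official College Board error analysis:

    # Rough significance check on a 5-point drop in the mean score.
    # Assumed (ballpark) inputs: SD of individual scores ~113 points,
    # ~1.5 million test takers per year.
    import math

    sd = 113.0     # assumed SD of individual scores
    n = 1_500_000  # assumed number of test takers in one year
    drop = 5.0     # observed drop in the mean

    sem = sd / math.sqrt(n)        # standard error of one year's mean
    sem_diff = math.sqrt(2) * sem  # SE of the difference of two means

    print(f"SEM of one year's mean: {sem:.2f} points")
    print(f"SEM of the year-to-year change: {sem_diff:.2f} points")
    print(f"Drop in sigma units: {drop / sem_diff:.0f}")

So even though individual scores only move in 10-point steps, the mean of 1.5 million of them is pinned down to about a tenth of a point, and a 5-point shift is dozens of standard errors away from zero. Whether it's educationally significant is, of course, a separate question.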

Isn't the entire purpose of standardized exams like the SAT, GRE, MCAT, LSAT, etc., just to provide an easy way to weed out college applications?

I can't imagine the people on the admissions committee wanting to read through thousands (if not tens of thousands or more) of applications. Some combination of test scores, GPA, and other quantitative factors seems to be the easiest way to eliminate tons of applications and produce a smaller pile, without having to read every single one of them.

I can't imagine the people on the admissions committee wanting to read through thousands (if not tens of thousands or more) of applications. Some combination of test scores, GPA, and other quantitative factors seems to be the easiest way to eliminate tons of applications and produce a smaller pile, without having to read every single one of them.

Well, that's the idea. But, of course, they can be overinterpreted. Some schools have cutoffs that probably aren't really serving them best. All schools love to trumpet the average SAT score of their students, which leads to a drive to use the SAT score as a major weighting factor in admissions. U.S. News & World Report uses students' average SAT scores as part of its formula for deciding how good a school is.

The fact is that what you say is true: there are too many college applications to think about each one truly and thoughtfully. Heck, it's hard enough for the GPC in my department to give real thought to the <100 applications to the Physics graduate program we get. So shortcuts are needed. I just fear that the shortcut we have isn't really measuring what we like to pretend it's measuring.

Interesting read. So the College Board is saying that the drop in retakers from 56 percent to 53 percent is a major cause? The rationale makes sense, but does it hold up numerically against previous years' data? I don't know, but it's something to look at. Did test-taking strategies change (e.g., studying more for the new writing section than for the others, or students conserving mental effort on the older sections to save focus for the new part)? Or, unknowably, were students so worried about the writing section that it affected how they did on the older sections?
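
One way to sanity-check the retaker explanation is to treat the overall mean as a weighted average of retakers and single-takers. Here's a minimal sketch along those lines; the 40-point retaker advantage and the 500-point single-taker mean are hypothetical placeholders, not College Board figures:

    # Back-of-envelope: how much can a drop in the retaker share
    # (56% -> 53%) move the overall mean by itself? The retaker
    # advantage and the single-taker mean are hypothetical
    # placeholders, NOT College Board figures.
    def overall_mean(p_retake, mean_retake, mean_single):
        """Overall mean as a mixture of retakers and single-takers."""
        return p_retake * mean_retake + (1 - p_retake) * mean_single

    gap = 40.0           # hypothetical retaker advantage, in points
    mean_single = 500.0  # hypothetical single-taker mean
    mean_retake = mean_single + gap

    before = overall_mean(0.56, mean_retake, mean_single)
    after = overall_mean(0.53, mean_retake, mean_single)
    print(f"Shift from composition alone: {after - before:.1f} points")

With those placeholder numbers the composition effect is only about -1.2 points (the change in share times the gap, -0.03 x 40); to produce the whole 5-point drop this way, retakers would have to outscore single-takers by roughly 167 points, which seems implausible. So the retaker share could be a contributing cause, but probably not the major one.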

What is the error on the SAT?

Another thing from the report the College Board put out: in 2004 the mean math score was 518, in 2005 it was 520, and this year it is back to 518. The last drop before this one was a single point in 1998 (512 in 1997 to 511 in 1998).

Look at the scores by income bracket towards the end of the article; that is what we should be discussing. The difference between the lowest and highest income brackets is almost one standard deviation for the math section (SD 115, difference 107). For critical reading it is larger (SD 113, difference 120), as it is for the writing section (SD 109, difference 116).
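
To put those gaps in effect-size terms, here's a trivial calculation using exactly the SDs and differences quoted above (dividing the gap by the SD as a rough Cohen's-d-style effect size is my framing, not the College Board's):

    # Express the lowest-vs-highest income score gaps in standard
    # deviation units, using the SDs and differences quoted above.
    sections = {
        # section: (SD of scores, lowest-vs-highest income gap)
        "math": (115, 107),
        "critical reading": (113, 120),
        "writing": (109, 116),
    }

    for name, (sd, diff) in sections.items():
        print(f"{name}: {diff} points = {diff / sd:.2f} SD")

That works out to roughly 0.93 SD for math and about 1.06 SD for both critical reading and writing. By the usual rules of thumb, anything near a full standard deviation is an enormous effect, far larger than the 5-point year-to-year wobble the headlines are about.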