Big study finds educational software of little help

Is this working?

Jeffrey Mervis brings the word from Science ($):

U.S. students using educational software do no better learning primary school math and first-year algebra than their counterparts who follow a traditional curriculum. That's the conclusion of a new federally funded study that is loaded with caveats about what it means for students, educators, and the companies that make the software.

The $14.5 million study, funded by the U.S. Department of Education and conducted by Mathematica Policy Research Inc. in Princeton, New Jersey, was designed to find out whether students are benefiting from the growing use of educational software. Preliminary results released in April 2007 suggested that the answer, for first- and fourth-grade students in reading and for sixth- and ninth-grade students in math, was no. But those results were widely criticized by educators and software makers for lumping together the outcomes from many different products and for testing their impact on student achievement too early, in the first year the teacher had used the material. The second-year results, posted 17 February, address those concerns by reporting results for individual software packages and by testing a new cohort of students whose teachers already had a year of using the software under their belts.

Not flattering. But as Mervis explains, this is one of those questions that no single study seems able to answer.

Education research is rarely definitive, however, and this study is no exception. For one thing, the students and teachers were not randomly assigned to the products, explains Mathematica's Mark Dynarski. That rules out comparing one software package with another. It also means that the results shouldn't be generalized to different student populations. "Everyone began at different starting points. So we can only say how this product works at a particular school," he says.

However, this essentially replicates the no-help results of a larger study published in 2007. That study looked at 16 different software products used in 33 school districts, 132 schools, and 439 classrooms, and is described at the National Center for Education Evaluation and Regional Assistance.

Though we use a lot of software at a lot of schools, we still have little data on how effective it is. This is a situation much like that in medicine, where we lack standardized, reliable data on the effectiveness of a lot of expensive procedures. Our health and educational systems are much alike: We put a lot of money into them, throw a lot of technology at them, and try a lot of different things, but because they're so decentralized and there's no standardized data collection, we don't know what works and what doesn't. All we know is that it's not working very well, as on both health and educational measures we lag many other countries that spend far less than we do per patient and per pupil.

Of course it doesn't work. What works is working through problems with the guidance--when necessary--of an outstanding teacher. This idea that "technology" is going to solve the fundamental problem with education--that teachers are paid so poorly that on average the most talented people pick other professions--is a joke.

This is surprising to me. In 7th grade math, we split the period and had half with a traditional teacher and half doing computer modules that were based on a test we took at the beginning of the year. Since I was always way ahead of the rest of the class, I loved that I could go at my own pace, as it was much less boring. I probably wouldn't have gone through half as many lessons if I had to go at the rate of an average student in my class. It seems like this would be great for students who learn more slowly, too, because they would have enough time to really understand the lesson and no pressure to rush through it.