Accountability for Educators, If Not Op-Ed Writers

Inside Higher Ed has a puzzling opinion piece about science and math education by W. Robert Connor of the Teagle Foundation. It's not his argument that's puzzling, though-- that part is perfectly clear, and hard to disagree with:

Public and private funders have spent billions of dollars -- sometimes wastefully -- on education initiatives like those in the STEM (science, technology, engineering and math) disciplines without rigorous assessment and evaluation. Not asking for documented results when so much money is on the line misses a golden opportunity to determine whether such programs are indeed helping to improve the way students learn and enhancing their engagement in their studies.

Before more money is spent, we need to listen to those well-attested success stories -- what I like to call "islands of success" -- learn from them, and put assessment to work so that student learning can be improved in all fields, not least the STEM disciplines.

If you want to improve education, you should look at what works. Absolutely. Nothing particularly controversial there.

What's puzzling to me is that the essay doesn't really have anything to offer beyond that statement. Despite the fact that he thanks Jill Jeffery of New York University "for research assistance," there are very few examples of successful programs (and the few examples that do appear are presented in such a disjointed fashion that I had to read the essay several times to convince myself that IHE hadn't accidentally deleted a paragraph), and he closes with this unintentionally hilarious paragraph:

Do we really know what accounts for such successes? Maybe not, but I'll wager that the programs that succeed have often evaluated their results and used those evaluations to guide planning and make successive improvements in program design. Whether that hypothesis proves correct or not, careful assessment of results, as we have repeatedly seen in our grants at the Teagle Foundation, helps the funder improve its grant programs over time. We learn from our successes, and, yes, our failures too.

So... a willingness to wager on a conclusion is evidence? And whether you're right or not, your pet idea is really important?

There's nothing terribly novel, in late 2007, about calling for assessment and outcome-based evaluation of educational programs. We've been there, we've done that, we've got a long archive of blog posts about similar calls for reform.

If you want to add something new to the discussion, you need to provide some actual, you know, data. Or at least some more specific and detailed anecdotes. Because, really, writing an op-ed piece calling for basing education funding on documented successes without including any documentation is just silly.
