Inside Higher Ed has a puzzling opinion piece about science and math education by W. Robert Connor of the Teagle Foundation. It's not his argument that's puzzling, though -- that part is perfectly clear, and hard to disagree with:
Public and private funders have spent billions of dollars -- sometimes wastefully -- on education initiatives like those in the STEM (science, technology, engineering and math) disciplines without rigorous assessment and evaluation. Not asking for documented results when so much money is on the line misses a golden opportunity to determine whether such programs are indeed helping to improve the way students learn and enhancing their engagement in their studies.
Before more money is spent, we need to listen to those well-attested success stories -- what I like to call "islands of success" -- learn from them, and put assessment to work so that student learning can be improved in all fields, not least the STEM disciplines.
If you want to improve education, you should look at what works. Absolutely. Nothing particularly controversial there.
What's puzzling to me is that the essay doesn't really have anything to offer beyond that statement. Although he thanks Jill Jeffery of New York University "for research assistance," there are very few examples of successful programs (and what examples there are are presented in such a disjointed fashion that I had to read it several times to convince myself that IHE hadn't accidentally deleted a paragraph), and he closes with this unintentionally hilarious paragraph:
Do we really know what accounts for such successes? Maybe not, but I'll wager that the programs that succeed have often evaluated their results and used those evaluations to guide planning and make successive improvements in program design. Whether that hypothesis proves correct or not, careful assessment of results, as we have repeatedly seen in our grants at the Teagle Foundation, helps the funder improve its grant programs over time. We learn from our successes, and, yes, our failures too.
So... a willingness to wager on a conclusion is evidence? And whether you're right or not, your pet idea is really important?
There's nothing terribly novel, in late 2007, about calling for assessment and outcome-based evaluation of educational programs. We've been there, we've done that, we've got a long archive of blog posts about similar calls for reform.
If you want to add something new to the discussion, you need to provide some actual, you know, data. Or at least some more specific and detailed anecdotes. Because, really, writing an op-ed piece calling for basing education funding on documented successes without including any documentation is just silly.