Marie-Claire Shanahan has a couple of great posts up about the science of science education, and about research on what it takes to actually change someone’s mind. They hold the promise of many more insightful looks at the skills and approaches best suited to increasing science literacy.
That’s a topic of no small interest to me. In my recent posts about the National Science Board, in my testimony and backstage efforts in Texas, and in my workshop on Defending Evolution in the Classroom and Beyond at The Amazing Meeting! (to name this month’s major projects), the focus is always on what we can do to increase appreciation for science, understanding of science, and interest in learning science. And I love seeing new research on the topic, because most of my education in pedagogy was ad hoc; I suspect the same is true of many academic scientists, and of too many secondary educators.
For instance, a 2010 survey by Nature Education (gated; here’s a report in USA Today) found that while college-level science educators generally think science education is mediocre or poor, 85% of them think they personally have a positive or strongly positive effect. It is, first of all, essentially impossible for 85% of professors to have a positive effect and for the college education system to be crappy. This misperception of competence (the Dunning-Kruger effect) is an obstacle to getting help to the people whose teaching most needs it. The survey also found that good or bad teaching had no realistic impact on hiring and tenure decisions at major universities, which means there’s no incentive for self-improvement among the 15% who know they’re not having a positive effect, let alone among the unaware.
This problem is even greater in the realm of skeptical outreach, where the outreach is informal by nature (so there’s no end-of-course exam), and largely conducted by people who are not trained teachers or trained scientists or trained science communicators (and who can be expected to have even greater Dunning-Kruger effects). At a TAM! panel on communicating skepticism, Phil Plait noted that there are no well-established metrics for skeptical outreach, so it’s hard to really know what works and what doesn’t work, let alone to promulgate those effective techniques to the folks in the field.
Phil’s point struck me as reasonable, and a good jumping-off point for a discussion of what those metrics might look like. Genie Scott had just finished saying that it’s important to set clear goals (inspiring Phil’s comment), and in her workshop the previous day (and in her talk immediately afterward), Desiree Schell had emphasized the importance of setting clear, concrete, and measurable goals for skeptical activism. What those might be in general, and how to implement them broadly, seemed like the topic of a useful discussion.
I was primed for that conversation because, at Netroots Nation a few weeks earlier, I’d attended several workshops in which political activists talked about using controlled experiments to see what tactics and strategies work best. Whether it was testing language on doorhangers to see how it changed voter turnout, or comparing details of the wording of mass emails to see what maximized clickthroughs and donations, there was a lot of focus on testability and use of scientific protocols. If political hacks can use scientific methods to evaluate and improve their outreach efforts, surely scientists and other skeptics dedicated to promoting science in society could do the same.
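The kind of experiment those activists described can be evaluated with very simple statistics. As a minimal sketch (the counts below are hypothetical, and the function name is my own), here is how one might compare the clickthrough rates of two email wordings with a standard two-proportion z-test, using only the Python standard library:

```python
# A minimal sketch of the A/B testing described above: did email wording B
# produce a different clickthrough rate than wording A? All numbers are
# hypothetical, chosen only to illustrate the calculation.
from math import erf, sqrt

def two_proportion_z_test(clicks_a, sent_a, clicks_b, sent_b):
    """Two-sided z-test for a difference between two clickthrough rates."""
    p_a = clicks_a / sent_a
    p_b = clicks_b / sent_b
    # Pooled proportion under the null hypothesis of no difference.
    pooled = (clicks_a + clicks_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical campaign: wording A sent to 5,000 people (150 clicks),
# wording B sent to 5,000 people (210 clicks).
z, p = two_proportion_z_test(150, 5000, 210, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

The point is not this particular test, but that once you commit to a measurable outcome (clicks, donations, turnout), deciding "what works" becomes an ordinary empirical question.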
Immediately after Phil spoke, PZ Myers jumped in (the panel was: Plait, Scott, Myers, Jamy Ian Swiss, and Carol Tavris). PZ disagreed with Plait, saying he is glad we don’t have metrics for skeptical outreach. He said that he doesn’t like it when people come to him waving scientific research showing that their technique works and that techniques like PZ’s don’t, because he doesn’t think those papers address the specific situations he’s dealing with, and it’s wrong to say he should change his behavior in response to those studies. Jamy Ian Swiss then reinforced the point: “I don’t want what works best,” he said, only what works best for him.
Myers, of course, is a biology professor at a small liberal arts state college (thus with a focus on education), and a prominent science communicator. Swiss is a magician who, the previous day, had been given an award in honor of his service to the skeptical community. These are people who take skepticism seriously, and who take promoting skepticism and science seriously. But the attitude they expressed is as unskeptical as could be.
I’ll grant that there are times when one does simply go with what works for oneself. There’s no absolute metric one can use to pick a favorite baseball team, or a favorite novelist, or indeed a religion. So we can’t go with what’s best; we just go with what works for us. And yet, even while recognizing that there’s no absolute metric, we still rely on literary critics, sportswriters, and the like to steer us away from objectively bad decisions about those topics, even if there’s no objectively best choice.
But on empirical matters, I think a skeptic is defined by insisting on clearly delimited metrics of success and failure, by refusing to accept “this works for me” as an answer, and by a reluctance to countenance special pleading or other logical fallacies. And special pleading is what PZ’s comments were: he was saying that the peer-reviewed research on communication in general, or science communication in particular, wasn’t germane to his specific situation, and therefore had nothing to tell him at all.
Which is absurd. If someone came to me hawking a homeopathic treatment for male pattern baldness, and dismissed my citations of homeopathy refutations by saying those didn’t test his specific concoction or this specific application of it, I wouldn’t celebrate his sophistry. I’d say that homeopathy’s uniform ineffectiveness and lack of theoretical foundation mean that any claim that homeopathy works must be backed by substantial evidence presented in equally prestigious venues, using clear, well-established, and objective metrics. And if a friend tells me he wants to keep using Chinese herbs even after I show him a paper demonstrating that they work no better than a placebo, I ought not to shrug and accept his claim “I don’t need what works best, just what works for me.” I think Tim Farley is right that skepticism is about using science to keep people from spending money on stuff that doesn’t work. And what does or doesn’t work has to be based on some clear metric. Skeptics – by definition, I’d say – do not dismiss the utility of metrics!
Evidence matters, and the truth matters, and that’s why skepticism matters. I was more than a bit shocked that people applauded PZ’s and Jamy’s comments, and I even wrote to PZ asking him to clarify. The email exchange generated more smoke than fire, alas, so I throw it open to you, dear readers. Maybe there’s some meaningful distinction I’m missing, or some failure in the analogies above. But if I’m right, if the analogy is legitimate, and science really can tell us which skeptical outreach techniques work, then I urge you to suggest some clear, objectively measurable metrics that skeptics can use in their campaigns. While we’re at it, how can we get research on science education into college and high school classrooms? How can we overcome the Dunning-Kruger effects surrounding educational approaches in science classrooms and informal skeptical efforts?