This summer I am working with a student on putting together some reasonable assessment of one of our ongoing oddball-ish outreach projects. Working with a local videographer, we've been making two-minute mini-profiles of astrobiologists who work here at LSU (Louisiana State University) - two-minute summaries of who they are, what they work on, and why they work on it. Rather than do the seemingly most typical "talking head" style of video, we've tried to do something that more resembles a mash-up between Bill Nye and independent film (yes, it is difficult to picture that in your head, but there it is). Three are in editing right now and three more are scheduled to be filmed soon - but the focus of this blog entry is: how do we assess these?
Here is my understanding of how we are "supposed" to do it: We walk into the astronomy summer camp group that we will be showing the videos to and immediately give them a quiz on the content in the videos, THEN we show them the videos, then we give them the exact same quiz again. For the control group, we are apparently supposed to swap out the videos for a standard talk covering the same content. You then compare the gain from Test 1 to Test 2 between the two groups. I first heard about this "Test Them, Teach Them, then Test Them Again" strategy for education assessment only about a year and a half ago - but I've been told that IT IS THE ABSOLUTE NORM.
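To make the comparison concrete, here is a minimal sketch of what that gain calculation looks like. Everything in it is a made-up placeholder - the scores, the group sizes, and the assumption that the quiz is scored out of 10 - it just illustrates "gain from Test 1 to Test 2 for each group."

```python
# Minimal sketch of the pre-test/post-test gain comparison described above.
# All scores are hypothetical placeholders, not real camp data.
import numpy as np

def average_gain(pre_scores, post_scores):
    """Mean improvement from the pre-quiz to the post-quiz."""
    return np.mean(np.asarray(post_scores) - np.asarray(pre_scores))

# Hypothetical quiz scores (out of 10) for a handful of campers.
video_pre,  video_post = [3, 4, 2, 5], [7, 8, 6, 9]
talk_pre,   talk_post  = [4, 3, 5, 2], [6, 5, 7, 4]

print("Video group gain:", average_gain(video_pre, video_post))
print("Talk group gain: ", average_gain(talk_pre, talk_post))
```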
When I first heard about this assessment strategy I went: huh? When I describe the strategy to students who are working with me, they go: huh? When I describe it to other biochemists, they go: huh? But any education person I talk to says that this is the way it is done. Clearly, however, the pre-test primes the audience for the information. It just seems so odd that no one has come up with a widely acceptable alternative to this pre-test/post-test approach.
I've been wondering if maybe this strategy is a way to avoid having to have large test pools. It does seem that if you only gave tests after the videos, and your audience numbers were small, variations in "prior knowledge" among audience members could totally skew the results. But what if you test, say, 150 kids after the videos and 150 kids after the "standard lecture" (no pre-test for either group)? If you had randomly sorted the 300 into their two pools, it seems like any "prior-knowledge bias" of some audience members would be overwhelmed by the numbers. If the difference between groups were small, you might have to worry - but if the difference is large and statistically significant, it seems it would be difficult to argue that too many people in one group had too much prior knowledge.
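Here is a rough sketch of what that post-test-only comparison might look like. The numbers are simulated - the group means, spreads, and sizes are assumptions for illustration only - and the two-sample t-test is just one standard way of asking whether the difference between the group averages is bigger than random variation (including prior knowledge scattered across both groups) would plausibly produce.

```python
# Rough sketch of the post-test-only design: two randomly assigned groups
# of ~150 kids, quizzed only after the video or the standard lecture.
# All scores below are simulated for illustration, not real data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated post-quiz scores (out of 10); the means and spreads are
# hypothetical assumptions, chosen just to show the comparison.
video_scores = rng.normal(loc=7.0, scale=1.5, size=150)
talk_scores  = rng.normal(loc=6.0, scale=1.5, size=150)

# Two-sample t-test comparing the group means.
t_stat, p_value = stats.ttest_ind(video_scores, talk_scores)
print(f"mean difference = {video_scores.mean() - talk_scores.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
```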
It just seems like either strategy has drawbacks, but for some reason educational assessment researchers have decided that priming people with which answers to look for (by giving them the pre-test) is okay, while accepting the risk that some people will have prior knowledge is not. Every education researcher I have talked to about this has acknowledged that this is, in fact, one of the trade-offs being made in designing assessment studies this way, but that everybody does it and everybody expects it.
Anyway, next week we are going to start doing some assessment, and I feel confident that we will be learning a lot more about assessment itself as we proceed.
It sounds like a lovely project. I was just curious how you decided to assess it in the end, and whether it worked.