Peer Instruction

The Paper of Record today features an interview with Eric Mazur of Harvard, a physicist who is probably best known for his pedagogical work. He talks about how typical science teaching sucks, and why we need to change it:

From what I've seen, students in science classrooms throughout the country depend on the rote memorization of facts. I want to change this. The students who score high do so because they've learned how to regurgitate information on tests. On the whole, they haven't understood the basic concepts behind the facts, which means they can't apply them in the laboratory. Or in life.

On a physics exam, the student will see a diagram and they'll classify it. Then, it's simply a matter of putting the right numbers in the right slots and, sort of, turning a crank. But this is algebra. It is not physics. When you test the students later on the concept, they can't explain what they've just done.

I think the same problem occurs in most sciences, but is felt more acutely in physics because so few students continue on in the subject. Then again, my knowledge of the pedagogy of other sciences is basically nonexistent, so who knows?

Mazur's big innovation is the idea of "Peer Instruction," in which students are asked to read the textbook before class, and then during class are asked a bunch of questions about the text, which they discuss in small groups. The answers from the various groups are submitted electronically, and depending on the results, they either move on to the next topic, or go over that topic in more detail.

The consensus among most physics faculty I've talked to seems to be that this works extremely well if you're starting with Harvard students, but doesn't translate all that well to other institutions. Still, he's got some interesting ideas about how to approach the problem of teaching basic physics concepts, and I picked up some good tips from his book. If you're interested in the teaching of science in general, it's worth taking a look.


I use that technique in my own class sometimes. It works well when you design good questions. And instead of having the answers submitted electronically I have each group present their answers to the class, so we can have further discussion and learning opportunities.

I think it's more than just the mathematical handcranking that we'd have to worry about. Laboratory work (in physics, biology and chemistry) also suffers from this problem, especially now that data loggers and other benchtop analytical tools are becoming cheaper and within reach of a school's budget. It's too easy to treat these as black-boxes that spit out data; I've noticed that a lot of students know what sequence of buttons to push in order to get data, but fail to explain exactly what's going on.

When their data doesn't come out as expected, there are usually two extreme reactions - either a fervent belief that the equipment can't be wrong and hence the data show something new, or a conviction that they've taken the "correct" data but it got mangled by an equipment failure.

Although admittedly, the former happens quite often in the grown-up research world as well. :)

1) Success in math and the sciences requires raw intelligence. Beagles cannot race salukis. The universe doesn't give crap one if that is unfair to beagles.

2) Success in math and science requires supportive culture. A: a mother carving her initials into the flesh of her young lest she be embarrassed within her peer group doing the same; and B: a self-supportive peer group bearing those stigmata.

3) Success in math and science requires objective evaluation and culling. Medical diversity obtains King-Drew Medical Center in Los Angeles, a perpetual inundation of malpractice deaths. Educative diversity obtains atrocities like Bush the Lesser and John Kerry leaving Yale with degrees.

4) Success in math and science requires competent, motivated teachers given adequate funding and equipment. Less than 0.1% of Department of Education funding is aimed at the Gifted. US Gifted funding is below $(US)0.30/kid-day. Head Start, underclass babysitting, has an annual budget 20% greater than that of the NSF.

5) Success in math and science requires admittance. NO university outside Federal harassment admits the Gifted based upon their objectively demonstrated abilities. Scholars are specifically ineligible for scholarships. Scholarships are engines of diversity and compassion awarding equity to the stupid, crippled, Officially Sad, and athletically amusing.

Rather than foster brilliance we allocate for its suppression. Jackbooted State compassion program over its 60 years of imposition has succeeded below all measure. FEMA recently melted untold tonnes of Hurricane Katrina ice. Said cumulative lost value - purchase, refrigerated shipping, refrigerated storage, disposal... and above all administrative costs - exceeded the whole of national Gifted funding for a year. For ice. For ice that was tossed.

Richard Feynman was an unlovable pain in the butt who watched semi-naked women dance, produced almost no graduate students, published nearly not at all, and had a Brooklyn longshoreman's accent. What university would retain a lout like that?

Actually, it's not quite true that "this works extremely well if you're starting with Harvard students, but doesn't translate all that well to other institutions." This 2002 paper from The Physics Teacher compiles the experiences of nearly 400 Peer Instruction users from around the world, showing that Peer Instruction does work in a huge range of settings.

Other helpful references include this and this.

By Adam Fagen (not verified) on 17 Jul 2007 #permalink

Having just recently concluded my undergraduate education, I will admit that my reading practices were spotty. I had a very difficult time absorbing information that way, especially preceding the lecture. My classmates generally read about as much or less than I did.

The system in which I got the most out of reading was when we had online reading quizzes that we had to complete the night before each lecture, accounting for ten percent of our grades. This is a rather hand-holding technique, but it highlighted the most important concepts of each section.

The only other time I really got much out of reading was in Quantum I, where the readings were so short that I could go over each section the 2 or 3 times that is necessary for me to fully comprehend the text. I was perceived as a bright student, but I suspect that I was just good at compensating for my slowness.

So there's an unsolicited but candid student perspective...

So I rambled about reading and forgot to say why it's relevant.

The peer instruction system hinges on the idea that students are reading (and understanding). I would find it easy to believe that more students at Harvard read their textbooks than at Michigan State University (don't get me wrong, I love my alma mater as much as the next Spartan).

However, as in the reading quizzes, conceptual questions to guide student understanding of the big topics are extremely useful. For this reason, I think peer instruction could be a valuable tool so long as it wasn't the only instruction students received.

Again, take it as you will. I have very little physics teaching experience, but a lot of recent physics learning experience.

In my experience (three terms teaching large lecture sections of General Chemistry using a somewhat watered-down version of Peer Instruction, after many years of teaching the same course via traditional lectures), the benefits of PI fall into two broad categories. The first, which shows up right away with little effort on the part of the instructor, is feedback to the instructor. You find out *immediately* that 80 percent of your students have no idea what you meant during your last 10 minutes of talking, instead of waiting for the exam results.

The second aspect, improving students' conceptual understanding by making them work through questions and discuss them with each other, takes more work, since it requires careful question design - you have to avoid giving away clues that enable students to select the right answer without actually solving the problem. Some tricks that I have found useful include multiple-choice questions with more than one correct answer, and questions where the "correct" answer depends upon assumptions that are not stated in the problem. (I tell students to expect that such questions will come up once in a while, but don't tell them when.) Another trick that's really easy if your questions are in Powerpoint (or your preferred presentation software) is *not* to put up the choices at the same time as the question. Give them a couple of minutes to work through and discuss, then show them the choices.

By Robert P. (not verified) on 17 Jul 2007 #permalink

The consensus among most physics faculty I've talked to seems to be that this works extremely well if you're starting with Harvard students, but doesn't translate all that well to other institutions.

Here's a second endorsement of the results Fagen points to. It's well known anecdotally among science pedagogy researchers that there's a lot of pushback from faculty who think that being good at something means you can teach it well, whereas in reality being reasonably good at something is necessary but far from sufficient for being a good educator. And it's not as if it's surprising that faculty wouldn't necessarily know the details of something outside their field.

When it was introduced by Halloun and Hestenes, there was a big hubbub over the Force Concept Inventory and whether it really measured what its authors claimed it measured. This eventually resulted in a widely agreed-upon protocol (test at the beginning and at the end of a semester of mechanics) and a common interpretation of scores. The test has been administered regularly at everything from the Air Force Academy to Podunk State U, and hands down the pedagogical techniques Mazur and others advocate (there's a slew of "interactive engagement" techniques) make a measurable improvement in students' understanding relative to the traditional lecture-lab-and/or-recitation format. The reason for this is pretty well established by educational psych research (not to mention common sense) -- if people are thinking about something instead of asleep in the back row, they internalize more of it; if they are on the line for in-class participation but you create an atmosphere where embarrassment is not an issue, they tend to think more about the material, at least while they're in class.

He talks about how typical science teaching sucks

Incidentally, Gary Gladding from UIUC has pointed out that one shouldn't describe it as "the old school stuff sucks", but rather that "we can better serve those we are teaching by using these techniques".

The former Nuffield physics course, succeeded by the IOP physics course, does a great job, I think, in training physicists in the process of teaching them physics.

Any successful approach is going to involve a bunch of different techniques. The "peer instruction" idea should be only one of a basket of different things to try.

The problem with planning a course, particularly when it's dependent on one or two big innovations, is that you end up trying to make a bunch of physicists just like you. That pretty much blows for the students in the class who don't think the same way that you do.