As I said last week, I recently wrapped up a term experimenting with "active learning" techniques in the two intro courses I was teaching. The diagnostic test results were a mixed bag-- one section showed really good improvement in their scores, while the other did no better than the same class taught with traditional methods-- and the exam scores weren't really any different. There was probably some slight improvement on the multiple choice-- fewer students than usual got trapped by the Newton's First Law questions-- but on the free-response problems, they did about the same as usual.
So, having turned in my grades, what did the students have to say about the course on their end-of-term surveys? Again, this was very much a mixed bag.
Really, this experiment was a great demonstration of the problem with student course evaluations as a method of evaluating anything: the signal-to-noise ratio sucks. I got a whole bunch of comments complaining that I was unavailable for help, some of them nastily personal. It's hard not to resent those, because my unavailability was directly related to the whole new-baby thing.
I also got a bunch of complaints about the fact that I didn't spend enough class time lecturing about the basic concepts. Which, you know, was the whole point of the exercise. As I explained in detail on the first day of class. So, again, it's hard to give this much weight, other than as support for the prevailing conventional wisdom that asking our students to read the textbook will be regarded as a gross imposition.
The main substantive complaint was that I didn't do examples in class that were like the homework problems. This, at least, is a fair point-- I didn't, in large part because I was expecting them to ask more questions about the homework. This has been a problem with traditional-style classes in recent years as well-- I used to be able to rely on students asking questions about the homework every day, but the last few years, they've just sat there mutely whenever I ask if there are questions. I probably need to start designating one homework problem per assignment to go over in class whether they ask for it or not, because silence clearly does not indicate understanding.
This is somewhat problematic, though, because doing example problems takes up class time, and it's not clear that it does any good. That is, there is evidence from the Physics Education Research community that going over example problems in class or in a recitation session does not substantially improve problem-solving ability. Which makes sense-- watching someone else do a problem is great for producing the illusion that you understand what they're doing, but doesn't necessarily help you solve a similar problem on your own.
A better tactic is to get them to actively solve problems in class, but that's been really difficult to do. Again, several years ago, I could get decent results from just putting a problem up on the board, and asking the class to work on it while I circulated to give advice and answer questions, but in the last few years, this has been a really dismal failure-- at least half of the students just sit there staring at a blank sheet of paper, saying they have no idea how to start.
A possible solution is to try to integrate this with the in-class polling stuff, but the attempts I made at that had only mixed success. Putting up a whole problem hits the same "I don't know how to start" issue, while trying to break it into manageable chunks that can be asked as multiple choice questions only sort of works-- the individual steps are often rendered trivial by the transition to multiple choice format.
There was also a significant technology issue here, in that the text-message-based polling system I was using for the clicker questions was often really, really slow to record responses. The students would have reasonable discussions about the questions, text in their answers, and then have a minute or more of down time to talk about miscellaneous gossip before enough results had come in for me to be comfortable moving on. On a couple of early occasions, I went ahead with partial results that mostly had the right answer, only to find that a big group of late responders had talked themselves into wrong answers-- so after that, I had to wait for all the responses to come in. In the future, I think I'll have to go with an in-class clicker system (ITS has them, but I thought the text-based system had some advantages; those were mostly cancelled out by the slow responses) to avoid those delays and keep things moving.
This experiment has made me less happy with Matter and Interactions, as well. I was drawing "clicker questions" from the really excellent collections at the University of Colorado, but those are based on a traditional text (Halliday, Resnick, and Walker, I believe), so the ordering and the presentation were somewhat different. There are also huge swathes of material that the Matter and Interactions book just wipes out. They're so intent on keeping everything in terms of momentum (rather than acceleration) that there are hardly any problems involving multiple forces acting on systems that are moving. They have no block-on-an-incline problems at all, which removes a huge chunk of traditional intro physics; the only tension-force problems involve suspended static masses (i.e., no pulleys, no falling masses on ropes); the way they talk about energy dissipation and friction doesn't really allow stopping-distance problems; and there are no quantitative problems involving one-dimensional elastic collisions.
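For anyone who hasn't seen the book, the contrast is easy to sketch (my notation here, not a quote from either text). A traditional book organizes the mechanics material around Newton's second law and solves for the acceleration; Matter and Interactions organizes it around the momentum principle and updates the momentum directly:

\[ \vec{F}_{net} = m \vec{a} \quad \text{(traditional)} \qquad\qquad \frac{d\vec{p}}{dt} = \vec{F}_{net} \quad \text{(M\&I)} \]

The two are equivalent for constant mass, but the choice shapes which problems the book bothers to set up. Likewise, the quantitative one-dimensional elastic collision results that go missing are just the standard ones-- for a mass $m_1$ moving at $v_{1i}$ striking a stationary mass $m_2$ head-on,

\[ v_{1f} = \frac{m_1 - m_2}{m_1 + m_2}\, v_{1i}, \qquad v_{2f} = \frac{2 m_1}{m_1 + m_2}\, v_{1i}. \]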
The publisher does provide a collection of "clicker questions" to go with the book, but they mostly suck. Far too many of them require numerical calculations, which is just deadly-- you would not believe how long it takes a class full of students to multiply three numbers, and there's absolutely no discussion involved-- and a good chunk of the remaining questions are too simple to be good for anything beyond checking to see that everyone's awake. The Colorado questions are way better, but require a good deal of adaptation to work.
The handful of really pissy comments aside, I still think this method has promise, and will probably try it again next year in one of the off-sequence classes (so I don't have to worry about the "we had to do way more work than the other sections..." problem). We have a visiting faculty member this year who was experimenting with whiteboarding, which he seemed to think was working well, so I may also look at that as a way to deal with the doing-problems-in-class issue. First, I'll have to find out how that went over with his classes.
" I used to be able to rely on students asking questions about the homework every day, but the last few years, they've just sat there mutely whenever I ask if there are questions.
....
Again, several years ago, I could get decent results from just putting a problem up on the board, and asking the class to work on it while I circulated to give advice and answer questions, but in the last few years, this has been a really dismal failure--"
Why the change? What might be the reason the students are less responsive?
And, thanks for all the updates on your classroom experiment.
I'm becoming a fan of the 'attendance quiz' idea: you spend the first five minutes of class giving them a quiz on something covered in the previous lecture, both to get an idea of whether or not they understand it and to prod them into asking questions. They can turn in a 'reasonable effort' for a tiny bit of an attendance grade. At five minutes, they must turn in whatever they've got... and then you ask them for questions. Once they realize they haven't a clue, they're probably more likely to ask. And that's the point of the quiz, really...
I'm not sure clickers are really great for stuff that involves significant higher-level reasoning.
It's hard to figure out exactly what will get them involved, though. And I can't imagine trying to deal with a newborn while teaching a class. Forgive my local vernacular, but Uff Da. :-)
Sometimes it helps to require questions-- i.e., each student must come to class with at least 2 questions from the reading (classwork grade). Then you have options: collect them (I use 3x5 cards a lot), then skim them & answer ones that appear common; or split students into groups of 4 or 5 and have the groups try to answer one question from each person; or assign rotating question-gatherers to round up 5 questions from (for example) their row of desks... If you keep a running list of the questions, they can serve as a great study-session list (for them), and a good guide (for you) about common areas needing more explanation/exposition.
Of course, nothing is foolproof. I spent 45 minutes of class time reviewing a writing assignment while repeatedly asking for questions, assigned them peer review groups to use as a resource for comments and questions, and then got a 2-week interval of complete e-mail silence (so they must understand the assignment, right?)-- after all of which, several students stopped me after class and informed me that they had not used any of the 3 required sources for their work, and would this affect their grade?
Sigh.
I've been following your posts on your active-learning experiment with interest. Our department is in the process of turning one of our traditional classrooms into a SCALE-UP room, several years after I started pushing for such a renovation. (Be careful what you wish for!) With some luck, both the room and I will be ready for this radically new format in January, when I spring it on our physics majors in the introductory sequence.
I've been using Matter and Interactions for our majors course for a couple of years now. It's definitely a big adjustment for all of us who grew up with the canonical (e.g. Halliday and Resnick) treatment of introductory physics, and I have some of the same quibbles as you do with the material. I pull clicker-type questions from M&I, Mazur's book, Colorado, and Ohio State. And I've deviated from the book when I felt that the students needed exposure to more traditional and/or more complex examples of systems.
But the authors' broad themes of (a) reasoning from conservation principles and (b) micro/macro connections really resonate with me. These seem to me to relate strongly to how one should want physicists-in-training to think. I wish there were more textbook authors out there willing to challenge the introductory canon like Ruth and Bruce have, and more departments serious about rethinking the goals of the introductory curriculum, whether it be for physics majors, engineers, pre-meds, or others.
(Of course, you already know about the results suggesting that M&I students don't fare as well on the FCI...)
In small enough classes, I find that waiting in silence, staring at the class, will eventually trigger questions; for the first week or two, the wait feels frighteningly long (though it turns out to be tens of seconds), but after that the class usually adapts and starts spontaneously asking questions.
Usually.
so excited to hear about people doing this! i have been doing concept tests, group work, and no examples in class for a year now. they complain for a little while, but they get used to it and some even buy into the idea that it will make them better problem solvers. all i know is, by the end of the term, most of them sound more like physicists when they solve problems-- so much less of "first you find the formula, then you plug the numbers in"! i've also tried out peer grading of some hw problems this term. have any of you tried this? not sure what i think yet; we'll see. can't wait to hear more about your adventures with no examples and active learning!
First, to echo something just posted in an earlier thread related to this topic, you need to find a way to get evaluations and feedback from your students NEXT year, after a semester of engineering classes. You might learn a lot of valuable things from them. You can start now with the "traditional" ones who are already there.
Second, I was struck by the same sentence that JM noted @1. I've seen the same change. Is it a coincidence that these students are a product of NCLB teaching-to-the-test for their entire K-12 career? I think you have it harder because it can take a month to get them to realize they have to start working the problem before they get any feedback on their efforts in class. That is almost half of your entire course. However, you might (as a college) start working on that from the first day.
Putting up a whole problem hits the same "I don't know how to start" issue, while trying to break it into manageable chunks that can be asked as multiple choice questions only sort of works-- the individual steps are often rendered trivial by the transition to multiple choice format.
Yes.
So rather than asking them to solve the problem, ask them to break it up into smaller parts. You might be surprised at how little experience they have with turning a simple sentence into simple math like "x(0) = 15 m and v(3) = 0."
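To make up a concrete example: "A car moving at 15 m/s brakes to a stop over 3 seconds" should become v(0) = 15 m/s and v(3) = 0 before anyone touches an equation, and "how far does it travel?" becomes "find x(3) - x(0)." Having them do just that translation step, with no solving allowed, is a worthwhile exercise all by itself.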
When did textbooks begin to change?
How much of students' hesitancy to grapple with problems relates to the format of the textbook? I've noticed that there are a lot of illustrations and sidebars, which I find distracting compared to textbooks from the mid-seventies or earlier.
Relatedly, the more advanced, upper-division math and physics courses seem to use texts that are not recently published, or that contain only text, symbols, and formulae.
Also, I recall that courses for math or physics majors differed from the versions for engineering or life science majors, with the latter taking a more cookbook approach.
To put my comments in context, I'm responding to comments CCPhysicist made here and on the thread about lectures. I'm a psychologist, who minored in math and took lower-division calculus-based physics and chemistry courses, as an undergraduate in the seventies. My exposure to recent textbooks comes from my child who's now taking upper division physics and math courses.