MA State Commissioner of Education Confuses Evaluation with Personnel Management

Given the fundamental problems that New York City's 'proficiency growth' evaluations of teachers have, it's absolutely unclear why Massachusetts, which leads the nation according to the gold-standard NAEP, would want to adopt them (we'll return to this point later). Yet the contagion of stupidity that is educational 'reform' knows no bounds:

The proposed regulations would reward teachers and administrators whose students show more than a year's worth of growth in proficiency under the Massachusetts Comprehensive Assessment System and on other exams, while educators whose students underperform would be placed on one-year "improvement plans." Under the proposal, teachers could face termination if they do not demonstrate progress.

The goal is to fix a long-broken evaluation system that too often fails to provide constructive feedback to educators on how they need to improve and on what they are doing right, Mitchell Chester, the state's commissioner of elementary and secondary education, said in an interview.

Evaluating teachers and principals has become a focal point for the state as it tries to reduce high school dropout rates and turn around dozens of failing schools. If done correctly, state officials, business leaders, and educators believe, the job reviews could help advance the academic fortunes of a whole classroom of students or even an entire school.

"Currently, the evaluation system in the Commonwealth is inconsistent and underdeveloped,'' Chester said. "Unless we have a robust evaluation system, we don't have a strong understanding of who is excelling and who is lagging.''

One hopes Commissioner Chester was never an English teacher, since he has a very peculiar misunderstanding of the word evaluation. Notice how Chester slides from student evaluation--how students are doing--to personnel management. Evaluating is good! I like the MCAS and the NAEP. But one of the reasons these tests work, and haven't been corrupted by fraud or test prep, is that they are used to evaluate students, not teachers.

Worse, Chester seems to be losing sight of the larger picture. As I mentioned at the outset, Massachusetts has the highest achievement in the country across the board. In terms of international competitiveness, once you remove those schools with high concentrations of poverty, the U.S. either leads or is very near the top. The problem in Massachusetts is in those schools that suffer from high concentrations of poverty (and research in other states has shown that poor kids in non-poor schools do perform better than those in poor schools). Why economic segregation requires fundamentally overhauling the incentives for teachers--who seem to be performing extremely well under the current incentives--escapes me. Might want to do something about the dense concentrations of poor students (just saying).

Educational reform rivals creationism in its stupidity.


So, the basic message is "Hey, teachers, if you work with kids who are anything but spoiled prep-school brats, you'll be fired."
Nobody will apply for those teaching spots anymore, because of the certainty of doom, and nobody will be around to replace the fired teachers, leaving empty schools.

As terrible as this would be, I'd almost welcome it if it would jar people out of their complacent "testing solves everything" mindset.

Well, your argument works the other way too.

If MA schools are so great, most MA teachers probably think (based on the data of their individual classes) that they are above average. And, indeed, they *are* in the sense of 'my students are achieving more than other students their age in the country'.
However, as you point out, it is fairer to ask the question, 'are my students gaining more than other students like my students'?
If we introduced a reasonable metric for that, do you know what's gonna happen? Up to half of those 'above average' teachers are now going to be 'below average'. Which, while painful for those teachers, would be useful information.
~94% of college professors think they are 'above average' teachers. I'm sure profs are more deluded than K-12 teachers, but how much more?

I still think it's asinine in the extreme to reward/punish based on this kind of test performance--the risk of fraud increases, and I think standardized tests are a lousy metric of learning. But unless there is a system in place that puts student performance into context, at least taking into account student demographics (used as a philosophically poor, albeit in reality depressingly predictive, proxy for 'student potential'), you probably are going to have a lot of teachers who don't realize where they could improve.
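The point above, that a norm-referenced growth metric pushes roughly half of all teachers below the median by construction even when every one of their classes beats the national average in absolute terms, can be sketched with a toy simulation. All numbers here are made up for illustration, not drawn from MCAS or any real evaluation system:

```python
import random

random.seed(0)

# Toy simulation: every teacher's class scores above the national
# average in absolute level, yet a relative growth metric still labels
# half of them "below average" -- that's just what a median does.
n_teachers = 10_000

# Absolute achievement: all classes above a notional national mean of 500.
levels = [random.uniform(510, 600) for _ in range(n_teachers)]

# Growth relative to similar students elsewhere: centered on zero,
# because it compares each teacher's students against their peers.
growth = [random.gauss(0, 1) for _ in range(n_teachers)]

above_avg_level = sum(lv > 500 for lv in levels)           # all of them
median_growth = sorted(growth)[n_teachers // 2]
below_median_growth = sum(g < median_growth for g in growth)  # exactly half

print(above_avg_level / n_teachers)      # 1.0
print(below_median_growth / n_teachers)  # 0.5
```

The split is exact here because the growth scores are continuous (no ties), so sorting and counting below the middle element always yields half the teachers, regardless of how well their students score in absolute terms.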

becca, you say the argument works the other way, but there is not a teacher who sits and thinks they are "above" anyone (maybe excepting Ms. Rhee). Teachers work with whatever their community sends them. Teachers know if they traded places with the teachers from the poorest districts they would be hard-pressed to just keep the status quo.

Twenty years ago I was a "famous" teacher. I got to go to DC and hang out with the bigwigs, I got invited to conferences and professional meetings. All these things happened because I had a bunch of outstanding students and people tried to blame me for their success. I knew they would be great students with anyone. I didn't get a big head.

(I'm in Mass, so Mike's posts always reel me in)
Just before I retired I was present at a meeting where a metric for projected student achievement was proposed to be used to rank teachers. The term they used was "trajectory". As a math/science guy I was intrigued, but as a teacher I was happy I was leaving after 35 years.

joemac53- I think in the absence of good feedback systems, people tend to overestimate their capacities. I think that's what has happened with professors, at least. What percentage of people think they are 'above average drivers', again? It's not about teachers being arrogant or unwilling to take what their community sends them. It's just that they may not be getting the information they need to be the best teachers they can be.
For example, you obviously got recognition for doing an excellent job as a teacher... but did you or anyone actually try to model how much of your students' performance *could* be attributed to you as opposed to their starting trajectory? Did the performance evals you got provide useful information in helping you improve?