By way of Bob Somerby, we come across this Brookings Institution report by Tom Loveless, “How Well Are American Students Learning?” There’s a lot in the report, especially since it’s really three studies rolled into one, but part of section I, which debunks the notion that Finland has the best educational system in the world, highlights the intersection of educational goals, curriculum, and testing. Loveless writes (p. 10):
But by 1999, Finland slipped to only a little above average in TIMSS (z-score of 0.06), ranking fifth of the original twelve countries and fourteenth of all countries taking the test. One complicating factor is age. Finland’s students were younger than the rest of the eighth graders in TIMSS 1999, averaging 13.8 years compared with an international mean of 14.4 years (and 14.2 for the participating FIMS nations)….
Finland stopped participating in TIMSS after the 1999 test. Since 2000, math scores from Finland come from only one test–PISA. On PISA 2009, Finland ranked first among the FIMS countries, a lofty ranking it has held throughout the decade. When the OECD launched PISA in 2000, many small countries believed participating in two international assessments would be repetitive and a potential burden in both money and time.
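For readers unfamiliar with the z-score Loveless cites: it expresses a country’s mean score in standard-deviation units relative to the international mean, so a value near zero means a country sits close to the average. A minimal sketch of the calculation, using invented scores (not actual TIMSS data):

```python
# Illustration of how a country z-score is computed.
# The scores below are made-up numbers for demonstration only.
from statistics import mean, pstdev

country_scores = {
    "Country A": 520,
    "Country B": 540,
    "Finland":   504,
    "Country C": 480,
    "Country D": 470,
}

mu = mean(country_scores.values())       # international mean
sigma = pstdev(country_scores.values())  # population standard deviation

# z-score: how many standard deviations each country is from the mean
z_scores = {c: (s - mu) / sigma for c, s in country_scores.items()}
```

With these invented numbers, Finland’s z-score comes out near zero, which is the sense in which a z-score of 0.06 on the real test means “only a little above average.”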
So Finland, often touted as a paragon of education, didn’t look so good depending on the test used. Why might this be? Loveless (p. 10-11; italics mine):
A plausible hypothesis stems from differences in the content of the two tests. The content of PISA is a better match with Finland’s curriculum than is the TIMSS content. The objective of TIMSS is to assess what students have learned in school. Thus, the content of the test reflects topics in mathematics that are commonly taught in the world’s school systems. Traditional domains of mathematics–algebra, geometry, operations with numbers–are well represented on TIMSS.
The objective of PISA, in contrast, is not to assess achievement “in relation to the teaching and learning of a body of knowledge.” As noted above, that same objective motivates attaching the term “literacy” to otherwise universally recognized school subjects. Jan de Lange, the head of the mathematics expert group for PISA, explains, “Mathematics curricula have focused on school-based knowledge whereas mathematical literacy involves mathematics as it is used in the real world.” PISA’s Schleicher often draws a distinction between achievement tests (presumably including TIMSS) that “look back at what students were expected to have learned” and PISA, which “looks ahead to how well they can extrapolate from what they have learned and apply their knowledge and skills in novel settings.”
The emphasis on learner-centered, collaborative instruction and a future oriented, relevant curriculum that focuses on creativity and problem solving has made PISA the international test for reformers promoting constructivist learning and 21st-century skills. Finland implemented reforms in the 1990s and early 2000s that embraced the tenets of these movements. Several education researchers from Finland have attributed their nation’s strong showing to the compatibility of recent reforms with the content of PISA.
In other words, Finland does well on the PISA test because PISA reflects Finland’s educational goals (interestingly, many Finnish mathematics university professors think those goals leave Finnish students woefully underprepared for college math, but that’s a whole separate discussion).
Keep in mind, this is not ‘teaching to the test.’ If you want to use an exam to determine how well your students are doing, the test should reflect the curriculum and educational goals. If you think X and Y are important, and Z much less so, a test that focuses on Z will yield poorer scores. That doesn’t mean your system is failing, but that the test doesn’t measure what you’re trying to teach.
While this isn’t the main thrust of Loveless’ report, it does emphasize something most educational reformist ideologues completely ignore: the importance of curriculum. Not only do you need a good curriculum; you also need a good measurement system that can’t be gamed*. In other words, your tests should reflect the curriculum, and the curriculum should reflect your educational goals.
One of the major flaws of the ‘reformers’ is that they don’t even discuss what should be taught (their complete silence on the topic of creationist biology teachers is the most obvious example)–the concept of curriculum is completely absent, even though it is critical in education.
Poor outcomes can be a result of a poor curriculum. Since many of my readers teach for a living (whether students or colleagues), ask yourself how much time you put into thinking about what you’re going to teach and how you’re going to teach it, compared with other professional obligations surrounding the lecture. Was the quality of your teaching largely determined by what you covered and how you covered it? In most cases, yes.
Yet, when it comes to K-12 education, for some reason, reformists believe–and ‘believe,’ with its faith-based connotation, is the apt word–that poor teaching and poor outcomes are largely due to managerial issues, such as teachers unions (and we’ll leave to the side the massive influence of poverty).
Finland’s different testing experiences–and the concerns about mathematics college preparation–demonstrate just how important curricula and educational goals are. It’s too bad reformers don’t spend time discussing those topics. Instead, by denigrating teachers, they provide cover for the Scott Walkers to defund public education and to lower teacher salaries.
Well done, ‘progressives.’
*Most good systems, and Massachusetts’ MCAS is one of them, actually test small portions of the total curriculum in depth, and rotate those portions annually. This makes it difficult for teachers and administrators to prep students specifically for the test.