Does being a movie expert make you a better predictor of the Oscar winners? Comedy Central pundit parodist Stephen Colbert claims that he made his Oscar predictions without having seen any of the movies, but then went 5 for 5, even predicting the upset of the year, Crash, to win best picture. If you take Colbert's case as an example, it appears that no expertise is necessary to predict the Oscar winners. Of course, since Colbert's TV character is a parody of conservative political commentators, we can never be quite sure if he's being on the level about his movie expertise.
At Cognitive Daily, by contrast, we've now got some hard (though nonscientific) numbers on the question of whether it pays to be a film expert before making your Oscar picks. Ninety Cognitive Daily readers responded to our survey asking them to give predictions in four categories and then report their level of expertise, measured by how many of the nominated films they had actually seen. Here's how the level of expertise was distributed among our respondents:
By far most of our respondents had seen fewer than three movies, with the largest number answering with the Colbertian "0" movies seen. But did the viewers who had seen more movies do better at predicting the winners?
We divided responses into two groups: those who had seen two or fewer films, and those who had seen three or more (two respondents had seen all 15 nominated films!). Next we compared their predictions to the actual award winners in four categories: best actor, actress, movie, and special effects (this is ScienceBlogs, after all!). For fun, we also charted people who had seen five or more movies, even though this amounted to only 13 responses and wouldn't give us statistically significant results. Here are the results:
Amazingly, seeing just three or more movies gave you a significant advantage over those who had seen fewer films in two of the four categories: best actor and best special effects. There was also a trend toward an expertise advantage in the best actress category. The only category where expertise didn't give our respondents an advantage was the upset best picture award. How big an upset was the Crash victory? Just seven of our respondents picked it to win, compared to the 67 (74.7 percent) who predicted Brokeback Mountain.
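For readers curious how a "significant advantage" between two groups like these could be checked, here's a minimal sketch using a one-sided Fisher's exact test on a 2x2 table of correct vs. incorrect picks. The counts below are hypothetical, invented for illustration: the post doesn't report the per-group tallies, so the numbers are assumptions, not our actual data.

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher's exact test for the 2x2 table
    [[a, b], [c, d]] (rows = groups, columns = correct/incorrect).
    Returns the probability, with all margins fixed, of seeing
    a or more correct picks in the first group (the upper tail
    of the hypergeometric distribution)."""
    n = a + b + c + d          # total respondents
    row1 = a + b               # size of the first group
    col1 = a + c               # total correct picks
    p = 0.0
    for x in range(a, min(row1, col1) + 1):
        p += comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    return p

# Hypothetical counts (NOT from the survey): suppose 30 of the 90
# respondents saw 3+ films and 24 of them picked the best-actor
# winner, vs. 28 correct among the 60 who saw 2 or fewer.
p = fisher_one_sided(24, 6, 28, 32)
print(f"one-sided p = {p:.4f}")
```

With counts that lopsided, the test would come out well below the conventional 0.05 cutoff; with nearly equal accuracy in the two groups, it wouldn't. A chi-square test would be the usual alternative for larger tables.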
Did any of our respondents, like Stephen Colbert, make a perfect prediction? Just three. And how many films had they seen? 7, 2, and 0.
Isn't "seeing just three or more movies gave you a significant advantage" assuming causation where only correlation has been identified? It doesn't seem unlikely to me that people who read more about movies are more likely both to see more movies and to make better movie award predictions. I know I made my own predictions based more on what I'd read than what I'd seen.
You're right, Scott -- it's not necessarily seeing the movies that gave people the advantage. My language may have been a little imprecise there (but it is casual Friday!). Perhaps a better way of putting it is that seeing just three movies is a significant marker of expertise, and that expertise helps predict Oscar winners.
It would be really neat if you could truly control this experiment by isolating the cohorts from entertainment media, critic reviews, and all those nasty "Ebert gives it 5 stars" media ads. It seems to me that if someone took the time to investigate the movies in order to select one or two to attend from the 15 that are out there, they have probably formed a significant knowledge base from which to judge popular opinion and make similarly informed predictions as someone who went and sat through all 15... So, it doesn't really surprise me (regardless of the lack of power in N=13 for the many-movie cohort) that the bars aren't very different.
Looks like a power function to me. Is that right? Explanation?