Last week we asked our readers to predict the result of the election. How did they do?
Out of the 474 people who guessed the results of this year's presidential election, only six got the electoral vote right - 365 votes for Obama (assuming Missouri goes for McCain and Omaha goes for Obama). None of these respondents was accurate on the popular vote, but one anonymous respondent got close, guessing that McCain would get 47 percent (the actual figure was 46.3 percent). Only one person who guessed 365 left his name, so let's give Wayland credit as the unofficial "winner" of our prediction contest.
What we were really interested in is how information sources relate to predictions. Do people rely on these sources, or do they just take a guess? Several websites attempted to analyze poll results this election season, and some of them became extremely popular.
First off, let's consider how accurate the polling sites turned out to be. The site that came closest to predicting the electoral vote was electoral-vote.com, which predicted 365.5 votes for Obama -- just 1/2 vote off the actual total. The worst of the major sites was the New York Times, which predicted 330.5 votes - 34.5 votes off the actual total. FiveThirtyEight.com, favored by more readers than any other site, did the worst of all the dedicated poll aggregation sites, predicting 346.5 votes for Obama. But all the sites did better than our readers, whose average prediction was 318.3 electoral votes.
But did readers' preferred polling site have an impact on their predictions? This graph shows the results:
Readers of FiveThirtyEight.com did significantly better than readers of other sites. They also made guesses closer to what their preferred website predicted. But nearly all readers would have been better served by guessing closer to the polling sites' predictions. Another way of looking at the results is to consider the average absolute error made by readers and their favorite websites:
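For readers who want to run this kind of comparison themselves, here is a minimal sketch of the average-absolute-error calculation. The guesses in the example are hypothetical placeholders, not the actual survey data:

```python
# Average absolute error of electoral-vote guesses against the actual total (365).
ACTUAL_EV = 365

def mean_abs_error(guesses, actual=ACTUAL_EV):
    """Average absolute distance between each guess and the actual total."""
    return sum(abs(g - actual) for g in guesses) / len(guesses)

reader_guesses = [338, 291, 365, 310, 350]  # hypothetical reader guesses
print(mean_abs_error(reader_guesses))
```

The same function works for comparing the sites themselves: a site that predicts 365.5 has an error of 0.5, one that predicts 346.5 has an error of 18.5.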
Even though FiveThirtyEight.com had the worst prediction of the dedicated polling sites, its readers were significantly more accurate than readers of any other site.
Does reading more polling sites help you predict the results? Take a look at this graph:
Reading more different aggregators was associated with better prediction accuracy. But even reading more than four different poll aggregators was no better than just reading FiveThirtyEight.com.
Did political partisanship affect prediction accuracy? Check this out:
McCain supporters were significantly less accurate than readers who supported other candidates. This is probably to be expected -- people are more likely to predict their own candidate will win. Yet Obama supporters were still significantly less accurate than readers of FiveThirtyEight.com.
Why were FiveThirtyEight.com readers so much more accurate than everyone else despite the fact that FiveThirtyEight was the worst of the poll aggregation sites in terms of predicting the electoral vote? I suspect the reason is that FiveThirtyEight is the only aggregator that offers analysis of the polling on the front page of its site. The other aggregators only provide links to other sites or web pages where analysis is provided. So readers of FiveThirtyEight might have been more interested in the analysis that leads to predicting the vote.
Some other interesting tidbits from the data:
Whether you donated, volunteered, or commented on political blogs or forums didn't have a significant correlation to prediction accuracy. However, there was a small significant positive correlation between watching the Daily Show and accuracy. Watching the Colbert Report was not associated with accurate predictions. But watching both programs was an even better predictor of accurate predictions.
I was following along on fivethirtyeight.com on election night and started to wonder myself if it was the most accurate out there, so thanks!
One small point. The electoral college is such a funky system that it can exaggerate the effects of error right around 50%. Another way to look at the system would be to assign a correlation score for the vote % in each of the 50 states.
I haven't actually sat down and looked at the numbers, but my guess is that fivethirtyeight.com would come out on top in that type of analysis, as his total popular vote for Obama was off by a shockingly small 0.1%! Electoral-vote.com had an error of 0.6%... If my theory holds up and 538.com was more accurate when asked to peg a percentage in each individual state, then maybe he built too much error into his modeling. Excessive error would probably drive the electoral vote count toward the mean in repeated simulations.
It's also interesting that 538 and electoral-vote called exactly the same states when it came down to it, yet 538 called fewer electoral votes than a straight sum of their individual predictions would have warranted. So again it sounds like 538 had too much hedging or too high a predicted error rate built into his models.
I do not think that comparing the expectation value of the total number of electoral votes in a model to the empirical total is a particularly good method of evaluating the quality of the model. It's obvious at a glance that the probability distribution has numerous spikes, so the expectation value over such a distribution doesn't have much meaning. The 538 model looks like it predicts 365 about 4% of the time; the adjacent 364 was third-most likely at around 8%. So the model seems to be saying that the Omaha point was mildly unexpected, but the rest of the result was expected.
A better comparison IMHO would be to just take the chi-squared-per-dof for the predictions of the percent of votes for Obama in each state. That will tell you right away if your estimation of errors is accurate in your model. Since the probability distributions for vote-percentage within each state are nice and smooth, you don't have the problem of the spike-filled distribution to mess things up.
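The chi-squared-per-dof idea above can be sketched in a few lines. The per-state predictions, stated errors, and observed results below are hypothetical illustrations, not 538's actual numbers:

```python
# Chi-squared per degree of freedom for per-state vote-share predictions.
# pred: predicted Obama % per state; sigma: the model's stated error for
# each state; obs: the observed result. All numbers here are made up.
def chi2_per_dof(pred, sigma, obs):
    chi2 = sum(((o - p) / s) ** 2 for p, s, o in zip(pred, sigma, obs))
    return chi2 / len(pred)

# A value near 1 means the model's error estimates are well calibrated;
# well above 1 means errors were underestimated; well below 1 suggests
# too much hedging built into the model.
print(chi2_per_dof([52.0, 48.5], [2.0, 1.5], [53.0, 47.0]))
```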
Just out of curiosity how did all the sites compare when it came to the popular vote?
Here's the theory that occurred to me: readers trusted 538 more than they trusted other sites -- due to Nate Silver's reputation, his successes in predicting the Democratic primaries and baseball stuff, and/or simply being impressed/bamboozled by the complexity of his statistical model. (I know this was true for me -- I followed electoral-vote obsessively in 2004, but basically abandoned it for 538 this year.) Overall, there was a baseline tendency to predict a significantly closer race than the polls showed -- probably some combination of McCain-supporter optimism and Obama-supporter shyness after having their hopes dashed in 2000 and 2004. So the greater trust in 538 drew its readers' predictions upward, closer to the range in which the poll aggregators lived (even though within that range 538 was the lowest).
It's difficult to say which site did a better job predicting the popular vote since most sites didn't actually predict the vote -- they simply reported the poll results. Since polls give respondents an "undecided" option, the polls are by necessity going to be off. Only 538 actually predicted the popular vote, and it came very close:
Actual: 52.4% Obama, 46.3% McCain
538's prediction: 52% Obama, 46.4% McCain
Even if you assume the undecideds would split on the same ratio as the decideds (for those other poll aggregators), 538 has the best prediction.
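The "split the undecideds at the same ratio as the decideds" adjustment is just a rescaling so the two shares sum to 100%. A minimal sketch, with hypothetical poll numbers:

```python
# Allocate undecided respondents in proportion to the decided vote,
# so a raw poll can be compared to the actual two-way result.
def allocate_undecided(obama, mccain):
    """Rescale two poll shares (in %) so they sum to 100%."""
    decided = obama + mccain
    return obama * 100 / decided, mccain * 100 / decided

# Hypothetical poll: 50% Obama, 44% McCain, 6% undecided.
o, m = allocate_undecided(50.0, 44.0)
print(round(o, 1), round(m, 1))
```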
That sounds pretty reasonable to me. I'd add that one possible reason RealClearPolitics readers were so far off is that it was favored by FoxNews.com. Since Fox viewers/readers tend to be conservative, they're biased in favor of McCain, even though RCP did a very good job in its projections.
It seems that there is an interesting issue about comparing the results of a stochastic prediction (like 538's) with a single realization of that process (like the true outcome of the election). The mean of the simulations is the best prediction of the outcome, in the sense that it minimizes mean square error, but it does not eliminate it. The stochastic simulations show you a distribution of outcomes, and you expect the true result to deviate from the mean of that distribution by an amount related to the variance of the distribution.
It seems to me that if you want to ask how a stochastic prediction like 538's did, you do not want to compare the mean with the single realization, but rather look at the entire distribution of outcomes.
I was going to say more-or-less what ecologist said. To really evaluate the quality of 538's predictions, we'll have to track it over multiple election cycles.
I trust 538 more than the other sites, partly because it gives the entire distribution of predictions, and partly because I know that Nate Silver explicitly models correlations between states whereas other forecasters don't (Sam Wang tries to justify modeling states as statistically independent, but I don't find it very convincing).
Sorry this is so late in coming, but I have a problem with this analysis. First let me be yet another to chime in for 538.com. It really did do the best because of the built-in uncertainty. Obama just barely won NC and IN, and Nate predicted that these would be very close, same as MO. He was only off by a lot in a few states where the polls were also off by a lot, like Nevada. Also, no one else was ever really talking about the NE-2 vote being in play, but Nate did (even though it was never polled separately in NE polls). No one did as well as Nate on the popular vote.
Secondly, where do you get that electoral-vote did the best? It looks to me like his prediction was 353 for Obama, not 365.5. I was tipped off by this because "The Votemaster" never does fractional EVs. Sounds like that puts him more in line with Pollster and RCP.
Finally, I'd say that the reason 538 readers did the best is twofold. First, it is the best site out there -- the most clear and thorough math as well as in-depth explanations of mitigating factors such as the alleged Bradley Effect and cell phone effect, not to mention the extensive ground-game research they did. So I think the biggest junkies came to this site. Secondly, Nate was very clear with his predictions and came down hard (with high confidence) that Obama was going to win. In the last week he gave McCain about a 3% chance of winning each day. These readers were therefore more confident in their site's prediction, which is why they could actually put down a bold prediction in the 360s of EVs. The owners of other sites seemed to play into the mainstream media way of talking, where they'd be all, "even though I'm predicting 360 EVs for Obama, this could very well be an extremely close race." So readers of those sites -- perhaps not used to Democrats winning, or convinced by the Bradley Effect and voter suppression efforts that their candidate (McCain) actually had a chance -- thought it was going to be closer. 538 readers knew better, because Nate spoke with confidence and empirical evidence.