I recently heard a pollster remark, “When you give conservatives bad polling data, they want to kill you. When you give liberals bad polling data, they want to kill themselves.”
That attitude has been well on display recently in the right-wing freak-out over Nate Silver's website. Silver currently gives Obama a 72.9 percent chance of winning the Electoral College on election day. There is nothing mysterious about how he arrives at that conclusion. He has simply noticed that Obama has a lead in enough states to put him over 270. It's not complicated.
But that's too complicated for right-wingers like MSNBC's Joe Scarborough. On a recent edition of his morning show he said:
On MSNBC's “Morning Joe” today, Joe Scarborough took a more direct shot, effectively calling Silver an ideologue and “a joke.”
“Nate Silver says this is a 73.6 percent chance that the president is going to win? Nobody in that campaign thinks they have a 73 percent chance — they think they have a 50.1 percent chance of winning. And you talk to the Romney people, it's the same thing,” Scarborough said. “Both sides understand that it is close, and it could go either way. And anybody that thinks that this race is anything but a tossup right now is such an ideologue, they should be kept away from typewriters, computers, laptops and microphones for the next 10 days, because they're jokes.”
I've been staring at this for a few minutes trying to figure out what to say, but nothing comes to mind. It's so shockingly, willfully stupid that I am unable to come up with any clever commentary. Does Scarborough really not understand how statistics work? Does he not understand that the national popular vote is irrelevant, and that only the state results matter?
But he is not the only one pretending to be confused. Here's David Brooks, quoted in the same article:
“If you tell me you think you can quantify an event that is about to happen that you don't expect, like the 47 percent comment or a debate performance, I think you think you are a wizard. That's not possible,” Times columnist David Brooks, a moderate conservative, said on PBS earlier this month. “The pollsters tell us what's happening now. When they start projecting, they're getting into silly land.”
Brooks doubled down on this charge in a column last week: “I should treat polls as a fuzzy snapshot of a moment in time. I should not read them, and think I understand the future,” he wrote. “If there’s one thing we know, it’s that even experts with fancy computer models are terrible at predicting human behavior.”
Again, the mind reels. I think it will come as news to both Presidential campaigns that they can't use polls as devices for projecting what will happen. And I'm pretty sure everyone has figured out that unexpected events can cause sudden changes in the polls.
The problem with Brooks' argument is that the more reputable political polling firms actually have a good track record of getting things right. Silver's reputation is based largely on the fact that he nailed the elections in 2008 and 2010. He was writing about the looming Republican landslide in 2010 long before it became obvious to everyone, but I don't recall Brooks or Scarborough thinking he was a hack then.
Silver himself has the appropriate reply to this sort of pseudointellectual nonsense:
Silver cautions against confusing prediction with prophecy. “If the Giants lead the Redskins 24-21 in the fourth quarter, it's a close game that either team could win. But it's also not a ‘toss-up’: The Giants are favored. It's the same principle here: Obama is ahead in the polling averages in states like Ohio that would suffice for him to win the Electoral College. Hence, he's the favorite,” Silver said.
“We can debate how much of a favorite Obama is; Romney, clearly, could still win. But this is not wizardry or rocket science,” Silver told POLITICO. “All you have to do is take an average, and count to 270. It's a pretty simple set of facts. I'm sorry that Joe is math-challenged.”
Indeed. Brooks and Scarborough are not crazed tea partiers, but basic statistics is apparently too complicated for them. Truly there is nothing left of an intellectually serious political right in this country.
Update: Things move quickly in the blogosphere! Upon posting this, I went to the published version to make sure that the links went where they were supposed to go. It turns out that in the brief time it took me to write this post, Nate Silver updated his projection from a 72.9 percent chance of an Obama win in the electoral college, to a 77.4 percent chance. Wow! Browsing his little map of state-by-state predictions, I notice that Obama's chances in Colorado and Virginia are now over 60 percent, while Romney's chances in Florida are now under 60 percent. So I can only conclude that Silver got hold of some new state polls. I just hope the polls are right!
Also, I just came across Ezra Klein's characteristically clear-headed approach to this question.
Worth remembering that it's in the interests of the media to make it into a close-run race. Close race = more ad money.
I feel the need to highlight why the arguments against Nate Silver are so silly, only because you didn't spell it out directly. It should be pointed out, for anyone unfamiliar with Nate Silver's work (or with any statistician working in a field over a period of time), that his predictions are based on formulas he put together years ago; he is merely inputting the recent data and publishing the results. The claims of partisanship seem to rest on the impression that his formula requires some "interpretation" of the data. Well, it doesn't.
At least some of the bashing on Mr. Silver is covert antisemitism and gay bashing.
I think it's that the people making these claims don't understand statistics at all. Nate Silver giving Obama a 72% chance to win essentially means that given four repetitions of the election, one time Romney would win it.
Or a better way of looking at it: if Romney needs BOTH Ohio and Virginia to win, and is at 50% in both races, that gives Obama a 75% chance of winning. It sounds like a big difference putting it that way, but that's statistics. Of course, since (as we have heard in oh so many op-ed pieces) math isn't something everyone should learn because it's hard and useless, I don't think we can blame nationally broadcast or published figures for their ignorance.
Oh wait... yes we can.
Oops. Adding an Obama slogan at the end of my comment was entirely unintentional.
Obama 2012. (that one was intentional)
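To make the arithmetic in that hypothetical concrete, here is a quick Python sketch. The two-state setup and the 50/50 probabilities come from the comment above, not from Silver's actual model:

```python
import random

# Hypothetical two-state scenario from the comment above, not Silver's model:
# if Romney must carry BOTH Ohio and Virginia, and each is an independent
# 50/50 race, then P(Romney) = 0.5 * 0.5 = 0.25, leaving Obama at 75%.
p_ohio = 0.5
p_virginia = 0.5
p_romney = p_ohio * p_virginia  # 0.25
p_obama = 1 - p_romney          # 0.75

# A Monte Carlo check of the same arithmetic.
random.seed(42)
trials = 100_000
romney_wins = sum(
    random.random() < p_ohio and random.random() < p_virginia
    for _ in range(trials)
)
print(p_obama)                          # 0.75
print(round(romney_wins / trials, 2))   # ~0.25
```

The point of the comment survives the arithmetic: two coin-flip states compound into a decidedly non-coin-flip race.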
@andre: It's not quite that simple, because there is some correlation between the various events. Some event might move popular opinion in one direction or the other nationwide (although this is becoming less likely as Election Day approaches). Alternatively, the pollsters may have various flaws in their likely voter models, and I would expect that if a given pollster's LV model for Ohio is flawed, there are probably similar issues with their North Carolina LV model.
Silver has a histogram of his latest simulation on his site. For the past week or so (when I have been looking there regularly), there have been three bars sticking out on the right-hand side of the distribution. The tallest (around 330 EVs) presumably has Obama winning every state he carried in 2008 except IN and NC. The other two differ from this by either adding NC or subtracting FL in Obama's column. As of this morning, the chance of getting one of these three outcomes is about 36%.
When Silver says that Obama has X% chance of winning, he means that X% of the simulated runs give Obama 270 or more EVs.
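The kind of simulation described above can be sketched in a few lines of Python. Everything here is invented for illustration: the state win probabilities, the safe-state EV count, and the single shared "national swing" term that stands in for correlation between states; this is not Silver's model.

```python
import random

# Toy electoral-college simulation. All numbers below are made up for
# illustration; this is not Silver's model or his actual state probabilities.
STATES = {  # state: (electoral votes, Obama's baseline win probability)
    "OH": (18, 0.75), "VA": (13, 0.65), "CO": (9, 0.65),
    "FL": (29, 0.45), "NC": (15, 0.30),
}
SAFE_OBAMA_EV = 237  # EVs assumed safe for Obama outside these toy swing states

def simulate(runs=20_000, swing_sd=0.05):
    """Fraction of simulated elections in which Obama reaches 270+ EVs."""
    random.seed(0)
    obama_wins = 0
    for _ in range(runs):
        # One nationwide shift applied to every state; this shared term is
        # what makes state outcomes correlated rather than independent flips.
        swing = random.gauss(0, swing_sd)
        ev = SAFE_OBAMA_EV
        for evs, p in STATES.values():
            if random.random() < p + swing:
                ev += evs
        if ev >= 270:
            obama_wins += 1
    return obama_wins / runs

print(simulate())  # fraction of runs Obama wins; roughly 0.7 with these numbers
```

The "X% chance" headline number is exactly this: the share of simulated runs that clear 270. A histogram of `ev` across runs would show the multi-peaked shape described above, since whole blocks of EVs flip together.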
It's like these people don't understand sports betting...
@Andre: I'm no mathematician, but I think you're misinterpreting the 72% chance of an Obama win. It does not mean that if you held the election four times Romney would win at least once (though the chance of at least one Romney win would go up). Statistically speaking, he could lose (or win) all the hypothetical extra elections.
We have a type of statistics that handles things like how to calculate if Nate Silver is wrong. It's the same math you'd use to tell if a die or coin was weighted. Basically: "what is the probability that the probability estimate of this event is wrong?"
Now, say you have a coin that you suspect is weighted to come up heads 60% of the time. Of course, a normal coin could also come up heads 60% of the time, but it's less likely than with a weighted coin, and increasingly so with more and more coin flips.
With Silver's predictions, you can apply the same kind of math to find out if his predictions fail "too often", or not.
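The weighted-coin check described above is a simple binomial tail computation. This is a generic illustration of the idea, not anything taken from Silver's site:

```python
from math import comb

# How likely is a FAIR coin to come up heads at least `heads` times in
# `flips` flips? If that probability is tiny, the "weighted" hypothesis
# starts to look better than the "fair" one.
def p_at_least(heads, flips, p=0.5):
    """P(X >= heads) for X ~ Binomial(flips, p)."""
    return sum(comb(flips, k) * p**k * (1 - p)**(flips - k)
               for k in range(heads, flips + 1))

print(round(p_at_least(60, 100), 4))  # about 0.028: unlikely, not impossible
```

With only 100 flips a fair coin hits 60+ heads a few percent of the time, which is why the comment stresses "more and more coin flips": the tail probability shrinks as the sample grows.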
I think Brooks has a point. Elections cannot be described by deterministic equations that allow you to realistically project forward in time from the present condition. Thus elections are fundamentally different from the weather, which does have deterministic (i.e. physical) predictability. If we could reduce the actions of humans w.r.t. elections to a set of time-dependent equations, then it would be different. But our actions are fundamentally unpredictable, as Brooks notes. Take the change in polling results after the first debate. Even Silver's projection for Obama's chances changed dramatically the day after the first debate. But he was not able to predict that change.
"our actions are fundamentally unpredictable"
The fields of psychology and sociology would disagree.
Tulse: Understanding of human behavior through valid scientific methods does not necessarily mean that behavior is predictable. I am neither a psychologist nor a sociologist, so I would be happy to be corrected on this point.
Eric, the whole notion of personality is that behaviour is predictable. Some people are more likely to be uncomfortable in social situations -- we call such people "shy". Some people are more likely to exploit others for personal benefit and be unmoved by empathy -- we call such people "psychopaths". These are statements of likelihood, but that doesn't mean they are any less "predictions" about behaviour.
Heck, if you tell me you are a Republican, your behaviour in the next election is pretty darned predictable, no?
He's got a method. He turns the crank, and gets the result. There are multiple methods of estimating electoral votes and probably most of them will give some insight into potential outcomes. Intellectually, there is no real point in getting frothing mad about it.
Politically, I expect GOP partisans are getting frothing mad over it because of the folk wisdom that nobody likes to vote for a loser. Whether that's true or just an old tale is probably immaterial: if GOP partisans think it's true, they will fight to make sure no credible source predicts a Romney loss, because they will think that prediction might influence some on-the-fence voters to vote for the person they perceive will win, i.e., Obama.
Tulse: What you describe, and indeed what Silver is engaged in, is statistical modeling, i.e. a method of using past behavior and some data about the present (polls, economic data, etc., in the case of Silver's work, or my supposed status as a Republican, in the case of your prediction of my future voting) to predict the outcome of a future election. This process assumes that certain statistics are stationary, i.e. do not change. Examples in the case of Silver's methods include past relationships between the data (e.g. polls, economic indicators, and the other bits of data that Silver draws on) and past outcomes, as well as an estimate of the relative weight to be placed on each individual piece of data in the final prediction. Silver's track record lends credence to the notion that this works well for elections. But it is not obvious to me that the statistical relationships in his model must be stationary, nor do I think his model comes close to including many elements that could meaningfully change the eventual outcome of the election. A good case in point is the nation's reaction to the first debate, a single event that significantly changed the polling data, and hence Silver's projection for Obama's chances of reelection.
Thus, I am essentially in agreement with Brooks that many important elements that could shape the final outcome of the election are unpredictable.
For the record, I do very much like Silver's explanation that his analysis shows that Obama is the "favorite" to win. That seems like a sensible way to interpret the kind of analysis that he is doing.
Sincere thanks for the comment. I'm familiar with this facet of statistics, but by no means am I a mathematician. How do I evaluate Nate Silver's predictions with so few to choose from?
Nate Silver is about 12 years old and has been making these kinds of predictions far fewer years than that. From what I can tell, we have 2008 (whose outcome a goldfish could predict), 2010 (which he was wrong about up until about a week before the election) and now this election.
@Eric Lund: yes, the events are correlated, but not directly in what he is predicting. Silver is not predicting what events are going to change polling in the future. He's saying that if the election were held today (or if the polling today accurately reflects the country on election day), Mitt Romney has an approximately 25% chance of winning. Except in the most convoluted way, the voting in Ohio should have no bearing on the voting in Virginia on election day (they're even in the same time zone, so there shouldn't be any significant change in Ohio voting once polls in Virginia close and report).
Silver does not pretend to predict any further election developments. If Obama comes forward tomorrow and says he's planning on executing people who love puppies, he'll lose the election, but I doubt Silver's algorithm considers the chance that Obama will go crazy by the election. It's a snapshot of the election right now.
I can only conclude that Silver got hold of some new state polls
Until you check, an alternative hypothesis is that he may be making it up to boost his (admitted) team's morale for Tuesday, and that Scarborough has the same goal in mind. If you are going to claim this is math, blog about the math.
Just checked Nate (and it's still the 31st) and he now has Obama's chances up to 78.4%. Wow those polls just keep rolling in apparently.
I don't know what Jason's science background is but here is a question for anybody with an evolutionary or biology background out there: how long can a species refuse to see the reality surrounding them and not go extinct?
Unless the American conservative movement actually is out of touch "on purpose" (that is, they know they are lying to others but are not yet lying to themselves about so many subjects around them), how long before said species goes extinct from failure to notice it is about to get eaten?
Duane - "American conservative" and "said species" are not synonymous. It's somewhat narcissistic to think US election politics matter that much.
Kevin - you might consider reading the title and byline of the blog before opining on what its author ought to cover. If you're only interested in Jason's math posts, you could also try not replying to the other ones.
Eric: "What you describe, and indeed what Silver is engaged in, is statistical modeling"
That is correct, but I don't see how this is a huge insight, since all of polling is statistical modelling. If you are knocking Silver for doing this, you are throwing out the utility of polling in general (and all of the social sciences, but that's another story).
Tulse: My main point here was to agree with Brooks, not to knock Silver. Silver's blog refers to his product as a "forecast". Many people are referring to it as a "prediction". I think what he is doing is valid and interesting, but I also agree with Brooks that election outcomes can depend on a lot of things that are fundamentally unpredictable.
"election outcomes can depend on a lot of things that are fundamentally unpredictable"
Of course they CAN depend on such things, but the question is still what is our best model of what is LIKELY to happen. Your line of argument doesn't just undercut Silver, but means that any polling at all is useless, which clearly isn't true. And to the extent that polling IS useful, it is arguable that Silver makes the most accurate use of it.
If you want to make the argument that elections are fundamentally unpredictable then you should demonstrate that statistical models predicting election outcomes are not good at predicting outcomes.
Unfortunately for this argument, so far it looks like Silver's model is actually very accurate.
If you're right that elections are fundamentally unpredictable then Silver's predictions shouldn't give a confidence interval above 50%. Feel free to review the results from 2008 and 2012 to determine the actual predictive utility of Silver's model. I await your analysis with bated breath.
You're reminding me of a guy I used to know who would always say "there's always a 50/50 chance of everything -- it either happens or it doesn't." Needless to say, statistics was not this guy's strong suit.
Should be 2008 and 2010 obviously.
In order to test the probabilities Silver assigns, wouldn't we need to actually repeat the same election over and over and over? How do you test the accuracy of a statistical probability of an outcome without the ability to duplicate, unchanged, the event producing the outcome being predicted?
Why do I sense we would need, at the very least, a bunch of clone universes to test Silver's predictions?
In order to test the probabilities Silver assigns, wouldn't we need to actually repeat the same election over and over and over? How do you test the accuracy of a statistical probability of an outcome without the ability to duplicate, unchanged, the event producing the outcome being predicted?
We've covered this a few times now.
1. Silver makes a prediction.
2. That prediction is right or wrong.
3. The number of right predictions is compared to the number of wrong predictions to determine how good Silver's model is at predicting.
What you're suggesting would imply that we can't determine the probability of a coin coming up heads or tails because each flip of a coin is a unique, unrepeatable event. But if we flip a coin a hundred times we'll get roughly equal incidences of heads and tails and we can quantify the deviations from equal incidences to determine the likelihood that the coin is "fair." JUST LIKE THAT we can determine whether Silver's model is a better predictor of elections than a coin flip (it is).
No. The election IS the coin, so each election represents a different coin, not a different flip. You'd have to hold the same election over and over in order to make the coin analogy work as you intend it to.
My point is that each individual probability he assigns to a given election cannot be tested, because elections cannot be recreated. He'd be better off just predicting a winner outright. Then, when the data set is large enough, we can test his accuracy. But of course, predicting the results of elections isn't all that hard when only the last prediction you make counts. I'd have a good record too if I could change my mind as many times as I chose, right up to the election.
No. The election IS the coin, so each election represents a different coin, not a different flip. You'd have to hold the same election over and over in order to make the coin analogy work as you intend it to.
Since it's an analogy and I'm making it, I get to decide the terms of the analogy. In my analogy, the flip is the election. Every election is a different event, sure. But every election has something in common with every other election as well: the process. So in my analogy the coin is the process, while the specific forces on the coin and atmospheric conditions are like the election itself: the factors which change from election to election and coin flip to coin flip. It's a valid analogy, it's just not the same analogy you're talking about.
But we can use your analogy too. Let's say we get 100 coins and flip each of them. We can't make a determination about whether any individual coin is fair, but we can certainly make a determination of the likelihood that the collection of coins gives fair results. Let's say the coins are weighted but in a symmetrical way: the same number of coins give 10 heads for every tails as give 10 tails for every heads. The overall result we'd expect to be relatively fair. We can't determine the fact that individual coins aren't fair, but we can determine whether the entire process is fair or not.
Silver's predictions are made on the basis of poll results. If poll results change (as they frequently do) then a model based on poll results will also give a different result. The question is whether these predictions are any good. Empirical evidence seems to suggest they are. You'll need a much better argument to overcome all the empirical evidence that Silver's model is a good predictor of elections.
Note you're actually making two different and disconnected arguments: 1) that statistics can't be used to predict real-world events (since all real-world events are one-offs) and 2) Silver's predictions don't count because he changes his mind all the time (which isn't actually what's happening, but whatever)
My point is that each individual probability he assigns to a given election cannot be tested because elections cannot be recreated. He’d be better to just predict a winner outright. Then when the data set is large enough we can test his accuracy.
His data set is huge, because it's based on polled voters. It's not like Ohio is the coin or the election is the coin. People are the coins, and once you (correctly) sample a large enough subset of them, you can get a fairly accurate prediction of how the entire set will behave.
Sheesh, you're basically claiming that nobody can do what politicians do every time they gerrymander - accurately predict the way collections of voters will vote. If you were right, gerrymandering wouldn't work. But it does. What does that tell you?
Nope. Your analogy is no good. According to Silver, Obama has an 80% chance of winning. So if there were 100 Obama v Romney 2012 elections held in succession, we should expect Obama to win 80 times and Romney 20. And we somehow test this by comparing how many times Silver's other predictions hold up, even though he doesn't pick outright winners. Because, hey, elections are elections and they're similar enough in the grand scheme. The infinite variables involved don't add up to enough to matter. Sorry, but atmospheric conditions are not the same kind of variables as different races with different candidates.
I don't get what you mean about my analogy. I just don't get your point. The test relies on selecting outcomes, like the outcome of a coin toss. Silver isn't predicting an outcome, just the chances of it. Obama winning doesn't validate his prediction. And Obama losing doesn't validate it either. There are no outcomes to test. Silver was right and Silver was wrong about election X can't ever be said.
Too bad I didn't say elections aren't real-world events. Coin tosses are real-world events. But elections aren't coin tosses, not even close.
Didn't say his predictions don't count. Said they are untestable. They are not even predictions; they are statements of probability. A straightforward prediction would be better: 100% or 0%, no hedging. But if such a prediction is made the day of the election, then I really don't have much use for it.
Maybe you should say "I don't understand" instead of "you're completely wrong." Because that's actually what's going on here.
According to Silver, Obama has an 80% chance of winning. So if there were 100 Obama v Romney 2012 elections held in succession, we should expect Obama to win 80 times and Romney 20. And we somehow test this by comparing how many times Silver’s other predictions hold up, even though he doesn’t pick outright winners.
No. No, no, no. The likelihood figures that Silver's model provides are absolutely part of testing the model. Let's take an example where Silver makes four predictions: two at 60% likelihood and two at 90% likelihood.
Let's say that Silver was correct about the two 60% ones and wrong about the two 90% ones. That outcome would discredit Silver's model much more than the opposite. Because Silver's model claimed more certainty on the trials it got wrong than on the ones it got right.
Does that help you see why the likelihood estimates are relevant? Or are you going to keep telling me I'm stupid because you can't understand basic statistics?
F*$&ing Dunning-Kruger effect.
Obama winning doesn’t validate his prediction. And Obama losing doesn’t validate it either. There are no outcomes to test. Silver was right and Silver was wrong about election X can’t ever be said.
Wrong, wrong, wrong, wrong, wrong. If Silver predicts Obama 55% and Obama loses then Silver was wrong about election X. That's fine. No one is disputing this rather simple way of assessing how Silver's model did in any particular election.
What's being disputed is whether this is a good way of assessing the model's overall predictive success. That requires looking at all the predicted elections to see how well Silver's model does weighted by the likelihoods it assigns to various outcomes. Getting a prediction with 55% likelihood wrong isn't as big a deal as getting a prediction with 90% likelihood wrong.
When you disagree with a math professor on the subject of statistics you may want to take a step back and ask yourself whether you might be wrong in this particular case.
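There is, for what it's worth, a standard scoring rule for exactly the point being made here: a confident miss should cost more than a tentative one. The Brier score does this; it's offered below as a generic tool, not something the post says Silver uses. The two forecasters are invented for illustration:

```python
# The Brier score grades probabilistic forecasts against yes/no outcomes.
# Lower is better; always guessing 50% scores exactly 0.25. This is a
# generic scoring rule, not a claim about Silver's own methodology.
def brier(forecasts):
    """forecasts: list of (predicted_probability, outcome) with outcome 0 or 1."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Hypothetical forecaster A is confident and right; B hedges everything at 50%.
a = [(0.9, 1), (0.8, 1), (0.2, 0)]
b = [(0.5, 1), (0.5, 1), (0.5, 0)]
print(round(brier(a), 2))  # 0.03: confident and correct scores well
print(round(brier(b), 2))  # 0.25: pure hedging scores worse
```

Note that a confidently wrong forecast, say (0.9, 0), contributes 0.81 on its own, which is precisely the "got a 90% call wrong" penalty being argued for above.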
Called you stupid? Well let me do it first if you're going to accuse me of it.
If the odds that I win powerball are 1 in 175M, and yet I play tomorrow and win, were the odds "wrong" ?
You're strongly implying I'm stupid by acting as if I don't know what I'm talking about.
If the odds that I win powerball are 1 in 175M, and yet I play tomorrow and win, were the odds “wrong” ?
No, of course not. But you're trying to take advantage of the gambler's fallacy to discredit statistics in general. Try addressing my arguments instead of introducing new, fallacious ones.
You're a nasty little guy aren't you?
So the odds aren't wrong. Because I still had a chance to win. So if Romney wins, how is Nate Silver "wrong" so long as he gives Romney a chance of winning? We'd need to run the same election over and over to see if the odds are correct. Giving odds is not the same as making a prediction. He is saying Romney has a 20% chance of winning, not Romney will win and my confidence level is 20%. Big difference.
I rather like statistics. I just like it when it's used correctly.
You’re a nasty little guy aren’t you?
Only when people are being willfully thick about a subject they clearly don't know very much about.
I've already explained rather clearly why the likelihood estimates are relevant. Your rebuttal is to cite the gambler's fallacy. Weak. Yes, if you ignore the millions of people who didn't win the lottery, it would seem strange that you won against such long odds. But once you count all the losers, it doesn't seem strange at all. That's why it's a fallacy. Imagine the case where millions of people won the lottery at once, say 50% of everyone who plays. Should we still believe the odds are 1 in 175 million?

Unless you agree that the odds should still be assessed at 1 in 175 million, your argument fails.
I rather like statistics. I just like it when it’s used correctly.
Your arguments suggest otherwise.
So we agree.
If Romney wins the election more than 20 times on Nov 6, 2012 we can reject the odds Silver has assigned.
One last try to help you understand.
Let's suppose Silver's model makes 10 predictions with 90% certainty, 10 predictions with 80% certainty, 10 with 70%, and 10 with 60%.
As long as Silver's model gets 9 out of 10 on the 90%, 8 of 10 on the 80%, 7 of 10 on the 70%, and 6 of 10 on the 60% -- or better or reasonably close -- Silver's model looks really good. It not only makes predictions far above the level of chance but provides reasonable estimates of the likelihoods.
Let's say Silver got 3 out of 10 on 90%. Then it looks really bad because Silver basically said: "I'm 90% sure on each of these 10 elections" and then he was wrong about 70% of them.
Again, when you disagree with a math professor on the subject of statistics you may want to take a step back and ask yourself whether you might be the one who is mistaken.
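That bucket test is easy to express in code. The sketch below groups predictions by stated confidence and compares each bucket's hit rate to its stated probability; the records are invented to mirror the 9-of-10-at-90% example, not drawn from Silver's actual record:

```python
# Calibration check: group predictions by stated confidence, then compare
# each bucket's observed hit rate to the stated probability. The records
# are made up to mirror the example above.
def calibration(records):
    """records: list of (stated_probability, was_correct) pairs.
    Returns {stated_probability: observed hit rate}."""
    buckets = {}
    for p, correct in records:
        buckets.setdefault(p, []).append(correct)
    return {p: sum(hits) / len(hits) for p, hits in buckets.items()}

records = ([(0.9, True)] * 9 + [(0.9, False)]
           + [(0.8, True)] * 8 + [(0.8, False)] * 2)
print(calibration(records))  # {0.9: 0.9, 0.8: 0.8} -- well calibrated
```

A forecaster whose 90% bucket hits only 30% of the time would show up immediately in this table, which is the whole point: the probabilities themselves are testable in aggregate, even though no single election repeats.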
So you follow up the fallacious argument with a smartass comment? You're not doing a very good job of defending your argument.
Can you even begin to address the arguments I'm making? Whatever, I'm not going to spend all day trying to explain something to someone who is purposefully trying not to understand it.
Oh but you will spend all day because you just can't stand it.
What you're describing is a method by which we gauge how well his predictions line up with reality over the long run. It does nothing to establish that Obama 80 and Romney 20 is accurate.
He's not wrong about any of the elections unless we can repeat them.
"La la la la la, I can't rebut your arguments but I'm going to keep repeating the same stupid crap because I don't want to admit I was wrong."
Told you you'd be back.
Figured there was a 97.895% chance that you would.
I gave you a methodology by which you can discredit Silver's model. If you want to discredit it, apply that methodology. As with Eric, I await the results of your analysis with bated breath.
I'm not going to spend all day explaining this to you but I'll happily spend the rest of the day mocking you for being willfully ignorant.
Knew you'd say something like that too. 87.6% chance of it.
Pssst, it's not his model we're after, it's this one "prediction." Prove Obama wins 80% of one time.
Repeat after me...CI is not the same as probability.
The tenor of this conversation is ridiculous. It is against my better judgment to jump into such snark, but I'll ignore my judgment to ask a question about the utility of such predictions:
Silver has had the President favored to win since the summer, so let's set that forecast aside and look at the forecast for the number of seats in the Senate instead. In August he was forecasting the Republicans taking the Senate. Now his forecast strongly favors the Democrats to hold their majority. At what lead time in advance of Nov. 6 do we assess the forecast to determine its level of predictability?
Thank goodness, Eric.
Tim, you appear to be acting willfully obtuse. Dan L is trouncing you in this argument, sorry.
"At what lead time in advance of Nov. 6 do we assess the forecast to determine its level of predictability?"
Are you serious? You judge it on the last forecast made, which is the one with the most accurate data for actually predicting the election. The forecast depends on data, so when new data comes out we update the forecast. How is this so hard for you to understand? Why on earth would you judge a data-based model based on predictions it made with incomplete/inaccurate data?
Obviously reality is a stochastic beast, and we cannot predict what events are going to happen that change the probabilities. Like all scientific predictions, Silver's are provisional, and should be interpreted as such. Don't attack the guy or his methods because you or other people don't understand logic or statistics.
"Knew you’d say something like that too. 87.6% chance of it.
Pssst, it’s not his model we’re after, it’s this one “prediction.” Prove Obama wins 80% of one time.
Repeat after me…CI is not the same as probability."
Why are you demanding things you know to be impossible and irrelevant? "Prove" the accuracy of a single probabilistic prediction?
The accuracy of the model determines the accuracy of the predictions. Derp.
If you watched the weather forecast on TV and it said that there was a 90% chance of rain tomorrow, you could make all the same arguments. The weather is complex and therefore not perfectly predictable. Each day's weather is unique, and therefore to test that forecast you would have to run that same day 100 times. You could make all the arguments you make about the election prediction and proclaim that the forecast is useless.
OTOH I would take an umbrella with me the next day.
Doug: Indeed I am quite serious. In fact, the predictability of weather is a reasonably well determined function of lead-time. The 6-hour forecast is of course much more skillful than the 6-day forecast. Everybody was saying that Silver's forecast has worked so well in the past and those "hating on it" should use the comparison of the forecast to the outcome to make their case. I'm just wondering what lead-time they are interested in evaluating? 1 day before the election? 1 week before the election? 1 month?
I note with interest that none of the people claiming these things can't be predicted are talking about Texas or New York. Care to guess why? My observation over the years is that most people only understand two probabilities: 0% (or 100% if you like), and 50%. So to them, everything is either a sure thing, or completely unknown. Scarborough is such a person, as are, dare I say it, most of Silver's critics, including those on this thread.
Silver's October 31st projection had Romney winning Colorado, Florida, and Virginia. He lost all three. What's most interesting is that while the majority of Silver's critics appear to be conservatives complaining that he overestimated Obama's support, in hindsight any bias in his data is likely to be the other way.
I'm not saying there was bias. Being wrong in result does not mean you necessarily have a bias. But if there was, it is extremely hard to compare Silver's Oct 31 projection to the actual results and claim he added a liberal bias.
Silver's Nov. 5 prediction was 50/50 though....
Interestingly, the bets on Intrade (Internet prediction trading market) were giving Obama a 67% chance of winning a few days before the elections, quite close to Silver's 70%.