Now that his coronation as the King of Polls is finished (though honestly, several other poll aggregation analysts did just as well), Nate Silver compares all of the polling companies' final numbers to the outcome of the election to see how accurate they were. And perhaps the biggest name in polling, Gallup, came out looking pretty bad.
Several polling firms got notably poor results, on the other hand. For the second consecutive election — the same was true in 2010 — Rasmussen Reports polls had a statistical bias toward Republicans, overestimating Mr. Romney’s performance by about four percentage points, on average. Polls by American Research Group and Mason-Dixon also largely missed the mark. Mason-Dixon might be given a pass since it has a decent track record over the longer term, while American Research Group has long been unreliable…
It was one of the best-known polling firms, however, that had among the worst results. In late October, Gallup consistently showed Mr. Romney ahead by about six percentage points among likely voters, far different from the average of other surveys. Gallup’s final poll of the election, which had Mr. Romney up by one point, was slightly better, but still identified the wrong winner in the election. Gallup has now had three poor elections in a row. In 2008, their polls overestimated Mr. Obama’s performance, while in 2010, they overestimated how well Republicans would do in the race for the United States House.
The difference between the performance of live telephone polls and the automated polls may partly reflect the fact that many of the live telephone polls call cellphones along with landlines, while few of the automated surveys do. (Legal restrictions prohibit automated calls to cellphones under many circumstances.)
Research by polling firms and academic groups suggests that polls that fail to call cellphones may underestimate the performance of Democratic candidates.
The roughly one-third of Americans who rely exclusively on cellphones tend to be younger, more urban, worse off financially and more likely to be black or Hispanic than the broader group of voters, all characteristics that correlate with Democratic voting. Weighting polling results by demographic characteristics may make the sample more representative, but there is increasing evidence that these weighting techniques will not remove all the bias that is introduced by missing so many voters.
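The weighting idea above can be sketched with a toy example. Everything below is invented for illustration (the group shares, the support numbers, even the choice of a single demographic variable); real pollsters weight across many variables at once, often with raking rather than simple cell weighting:

```python
# Toy sketch of post-stratification weighting on one demographic (age).
# All numbers are made up, not drawn from any real poll or census data.

# Assumed share of each age group in the raw poll sample (young voters
# underrepresented, as in a landline-heavy sample).
sample_share = {"18-29": 0.10, "30-49": 0.30, "50-64": 0.32, "65+": 0.28}

# Assumed share of each age group in the target electorate.
population_share = {"18-29": 0.18, "30-49": 0.33, "50-64": 0.28, "65+": 0.21}

# Each respondent's weight is population share / sample share for their
# group, so underrepresented groups count for more in the weighted tally.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}

# Assumed Democratic support observed within each age group.
dem_support = {"18-29": 0.60, "30-49": 0.52, "50-64": 0.47, "65+": 0.44}

unweighted = sum(sample_share[g] * dem_support[g] for g in sample_share)
weighted = sum(population_share[g] * dem_support[g] for g in sample_share)

print(f"unweighted: {unweighted:.3f}, weighted: {weighted:.3f}")
```

With these made-up numbers, weighting pulls the Democratic share up by about a point and a half, which shows the mechanism. But note what the sketch cannot fix: if the young cellphone-only voters a pollster does reach differ from the ones it misses entirely, upweighting the reachable ones still leaves a bias, which is the concern raised above.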
None of this is static, of course. Reputable polling companies are always adjusting their sampling models and weighting assumptions to be as accurate as possible. I would expect that in 2014 and 2016 we'll see more companies doing away with their automated polls and moving to live calls (those that can afford it, at least).