Poll aggregation and election forecasting

by Andrew Gelman on November 8, 2012 · 4 comments

in Campaigns and elections, Public opinion

Yesterday Henry wrote about poll averaging and election forecasts. I had some comments that I’d like to place here as a separate post because I think we are addressing important issues at the intersection of public opinion and voting.

Henry writes that “These models need to crunch lots of polls, at the state and national level, if they’re going to provide good predictions.” Actually, you can get reasonable predictions from national-level forecasting models plus previous state-level election results; then, as the election comes closer, you can bring in national and state polls as needed. See my paper with Kari Lock, Bayesian combination of state polls and election forecasts. (That said, the method in that paper is fairly complicated, much more so than simply taking weighted averages of state polls, if such abundant data happen to be available. And I’m sure our approach would need to be altered if it were used for real-time forecasts.)
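To give a concrete picture of the combination step, here is a minimal sketch using the standard normal-normal precision weighting. This is much simpler than what Kari and I actually do in the paper; the function name and all the numbers below are made up for illustration.

```python
import numpy as np

def combine_forecast_and_poll(prior_mean, prior_sd, poll_mean, poll_sd):
    """Precision-weighted (normal-normal conjugate) combination of a
    forecast-based prior with a poll-based estimate of a candidate's
    two-party vote share in one state."""
    prior_prec = 1.0 / prior_sd**2
    poll_prec = 1.0 / poll_sd**2
    post_prec = prior_prec + poll_prec
    post_mean = (prior_prec * prior_mean + poll_prec * poll_mean) / post_prec
    post_sd = np.sqrt(1.0 / post_prec)
    return post_mean, post_sd

# Hypothetical example: a fundamentals-based forecast says 52% +/- 3 points
# in a state; a recent poll average says 50.5% +/- 1.5 points.
mean, sd = combine_forecast_and_poll(0.52, 0.03, 0.505, 0.015)
print(f"posterior: {mean:.3f} +/- {sd:.3f}")
```

The point of the weighting is just that the poll average, being less noisy here, pulls the estimate most of the way toward itself while the forecast keeps it from chasing any single poll.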

Having a steady supply of polls of varying quality from various sources allows poll aggregators to produce news every day (in the sense of pushing their estimates around) but it doesn’t help much with a forecast of the actual election outcome. (See my P.S. here.)

Since 1992 (when Gary and I did our research indicating that poll movements are mostly noise), I’ve thought that the repeated-polling business model of news reporting was unsustainable, but it’s only been getting worse and worse. Maybe Henry is right that recent developments will push it over the edge.

One reason that political scientists have not generally been doing poll aggregation is that, at least for the general election for president, there’s little point in doing so–or, to put it another way, just about any averaging would do fine, no technology needed. Recall that Nate made his reputation during the 2008 primary elections. Primaries are much harder to predict for many reasons (less lead time, candidates have similar positions, no party labels, unequal resources, more than two serious candidates running, etc.), and being sophisticated about the polls makes much more of a difference there.


Simon Jackman November 10, 2012 at 4:40 pm

I don’t think it’s the case that “just about any averaging would do fine, no technology needed.” Model-based averaging outperformed naive, simple averaging this cycle, at least according to the comparisons I’ve seen.


Andrew Gelman November 10, 2012 at 6:12 pm

Simon:

Unsurprisingly (given what I do when I do statistics), I agree that model-based averaging should be better. Still, I think just about any averaging would do fine. After some point, I don’t see practical benefit to increased precision of these aggregate numbers. I am much more interested in increased precision of otherwise noisy measures such as subset estimation, and there I’m a big fan of modeling (or, as we call it, Mister P).
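To make “Mister P” concrete: here is a toy sketch of its two ingredients, partial pooling of noisy subgroup estimates followed by poststratification to population counts. The real thing fits a full multilevel regression; the function names and all the numbers below are invented for illustration.

```python
import numpy as np

def partial_pool(group_means, group_ns, within_var, between_var):
    """Shrink noisy per-group sample means toward the grand mean.
    Groups with few respondents get pulled strongly toward the overall
    mean; large groups mostly keep their own estimate."""
    grand_mean = np.average(group_means, weights=group_ns)
    shrink = group_ns / (group_ns + within_var / between_var)
    return shrink * group_means + (1 - shrink) * grand_mean

def poststratify(cell_estimates, cell_pop_counts):
    """Reweight cell-level estimates by each cell's population share."""
    return np.average(cell_estimates, weights=cell_pop_counts)

# Toy example: support for a candidate in four demographic cells of one state.
sample_means = np.array([0.40, 0.55, 0.65, 0.30])     # raw poll means per cell
sample_ns    = np.array([12, 80, 35, 5])               # respondents per cell
pop_counts   = np.array([10000, 40000, 30000, 20000])  # census counts per cell

smoothed = partial_pool(sample_means, sample_ns, within_var=0.25, between_var=0.02)
print("state-level estimate:", poststratify(smoothed, pop_counts))
```

The payoff is in the small cells: the five-person subgroup gets smoothed heavily rather than being taken at face value, which is exactly where raw subset estimates are noisiest.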


Martin November 10, 2012 at 10:43 pm

Regarding your P.S., I think there’s a fundamental mistake you’re making in assessing how Nate makes predictions, and it is causing you to confuse accuracy and precision. Essentially (as I understand it), Nate produces a distribution by projecting fundamentals + poll aggregates forward for each state, then convolves that quantity with both the measurement error in the polls (which accounts for precision) and the predictive uncertainty of his projection (which accounts for accuracy). This process in aggregate will significantly desensitize his prediction to noise movements in the polls. He then draws against those distributions and measures the frequency with which his draws result in an Obama win vs. a Romney win, and that distribution of results is known at a higher precision than what he reports in his daily numbers. Without insight into how much the probabilities move per point of poll motion, though, it is impossible to say what his true accuracy uncertainty is.
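That description amounts to a Monte Carlo simulation over state outcomes with a shared national error term. Here is a generic sketch of the idea, not Nate’s actual model: the states, vote shares, uncertainties, and “safe” electoral-vote counts below are invented placeholders.

```python
import numpy as np

rng = np.random.default_rng(2012)

# Hypothetical inputs per swing state: projected Obama two-party share,
# total uncertainty (poll measurement error plus forecast error), and EV.
swing_states = {
    "Ohio":     (0.515, 0.02, 18),
    "Florida":  (0.500, 0.02, 29),
    "Virginia": (0.510, 0.02, 13),
}
SAFE_OBAMA_EV = 237  # EV treated as safe for Obama in this toy setup
# (the remaining 241 EV are treated as safe for Romney, 538 total)

def simulate(n_sims=10_000, national_sd=0.015):
    """Draw election outcomes: one shared national error plus independent
    state-level errors, then tally electoral votes for each draw."""
    ev_totals = []
    for _ in range(n_sims):
        national_shift = rng.normal(0.0, national_sd)  # correlates the states
        ev = SAFE_OBAMA_EV
        for share, sd, votes in swing_states.values():
            if rng.normal(share + national_shift, sd) > 0.5:
                ev += votes
        ev_totals.append(ev)
    return ev_totals

ev_totals = simulate()
p_win = np.mean([ev >= 270 for ev in ev_totals])
modal_ev = max(set(ev_totals), key=ev_totals.count)
print(f"P(Obama win): {p_win:.3f}, modal EV outcome: {modal_ev}")
```

In a setup like this the win probability summarizes the whole simulated distribution, so day-to-day poll noise moves it much less than it moves any single poll; and the modal EV count and the per-state win probabilities are different summaries of the same draws, which is why they can look inconsistent at a glance.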

As an addendum, I believe Nate hides more information from the public than we are aware of. His modal outcome of the election was 332 EV for weeks before the election, even though his chart showed Florida as <50% Pwin for Obama for much of that time. That can't possibly be right, so his internal model is accounting for something his state-level probabilities fail to report.


Andrew Gelman November 11, 2012 at 1:43 am

Martin:

People give Nate a hard time about not making his method public, but he operates under a different “business model” than do academics. I have a well-paid job for life, so I might as well be as open as possible. Even if I can’t keep up with the steadily moving frontiers of research, my job isn’t going anywhere. Nate, by contrast, has no tenure, hence much less motivation to share.

