Bill McInturff, who did polling for Mitt Romney, and whose firm also splits polling duties with Peter Hart for NBC/WSJ, did a deep dive to try to figure out what went wrong. A lot went wrong with the composition of the "likely voter" (LV) model.
Elizabeth Wilner, writing at the Cook Political Report, sums it up:
The upshot of McInturff’s findings (you can read his study here): A likely voter model based solely on self-described interest in the election failed to capture the true interest level and the strength of Democratic turnout efforts among voters age 18-29 and non-whites, especially Latinos. These groups are core Democratic groups, heavily dependent on cellphones and thus tougher to poll.

McInturff's realization that his model didn't accurately account for the actual electorate is not new (the flaws in the LV model were becoming apparent to Steve Singiser and others before the election, and after it), but his study is well done (as is Wilner's write-up) and well worth a read.
Wilner pairs it with a second piece in which she speaks to prominent public pollsters to get a sense of which directions polling is headed.
McInturff's PDF presentation is here (and highlighted at the top).
We'll continue the discussion after the fold, particularly why defining the electorate properly is so important to the prediction business.
Wilner, looking at the future:
Elections have consequences for parties—and now, for polling.

Wilner talks to some of our best-known pollsters (names like McInturff, Hart, Mark Blumenthal and Charles Franklin) and looks at the future of cellphone polling, online polling (which was quite accurate this presidential election cycle), and other tweaks and innovations likely to affect future polling.
An industry accustomed to unquestioned respect, one that had struggled quietly against its mounting demons for the previous few election cycles, is facing an intervention post-2012. A decades-old method of gauging a person’s likeliness to cast a vote for president failed. The resulting gap between some pre-election ballot tests and the actual outcome shook pollsters, including the oldest brand in the business, Gallup.
A robopoll—an automated survey involving no person-to-person contact—mirrored the final results as closely as any set of live interviews.
And by offering a shortcut through the glut, Nate Silver and other poll aggregators became what pollsters once were, our national tea-leaf readers, while diminishing the value of accurate individual surveys.
Pollsters, meet Jesus.
Bill McInturff dissects where he (and Gallup) failed, and how to fix the LV model.
Here are McInturff’s proposed steps for how to go about it:

- Survey samples must keep pace with the percentage of US households (34 percent in 2012) that are cellphone-only.
- The base of voters who qualify for a likely voter model in a presidential year should be roughly 80 percent of registered voters; this 80 percent should not be further refined through additional filters.
- In addition to self-described interest, other polling indicators such as past voting behavior, recall of contact by a campaign, and intensity of feeling toward a candidate should be factored into the model.
- Turnout models need to be more generous in their assumptions for certain target populations. Even then, additional weighting probably will be needed to help compensate for “missing” likely voters.
- In presidential years, the model should use a default gender breakdown of 47 percent men, 53 percent women.

All good ideas; now let's see how the independent pollsters react to them.
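To make two of those steps concrete, here is a minimal sketch in Python of what an "80 percent of registered voters" screen and a 47/53 gender weighting could look like. The field names, the composite scoring, and the sample data are purely illustrative assumptions for this sketch, not McInturff's actual model.

```python
def screen_likely(voters, keep=0.80):
    """Keep roughly the top `keep` share of registered voters, ranked by a
    composite engagement score (illustrative: self-described interest,
    past voting behavior, and recalled campaign contact)."""
    def score(v):
        return v["interest"] + v["past_votes"] + (1 if v["contacted"] else 0)
    ranked = sorted(voters, key=score, reverse=True)
    return ranked[: max(1, int(round(keep * len(voters))))]

def gender_weights(sample, target=None):
    """Per-respondent weights that rake the sample's gender mix to a
    target breakdown (default: 47 percent men, 53 percent women)."""
    target = target or {"M": 0.47, "F": 0.53}
    n = len(sample)
    counts = {}
    for r in sample:
        counts[r["gender"]] = counts.get(r["gender"], 0) + 1
    # weight = (target share) / (observed share) for each respondent's group
    return [target[r["gender"]] * n / counts[r["gender"]] for r in sample]

# Example: a raw sample that skews 60/40 male is raked back toward 47/53.
sample = [{"gender": "M"}] * 6 + [{"gender": "F"}] * 4
weights = gender_weights(sample)
```

The point of the sketch is the shape of the fix: the LV screen stops at a fixed, generous cut of registered voters instead of stacking filters, and the demographic correction happens through weighting rather than through tighter screens.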
But note this: if the GOP had polled 2012 correctly, they'd have been a bit nicer to non-whites. And from that, policy follows.