Three pollsters (Gallup, Ipsos/Reuters and Monmouth) were kind enough to provide two topline results for their polling. For one topline, they took the temperature of the race among all of the registered voters who responded to their poll. For the other, they applied (as virtually all pollsters do at this point in the cycle) a likely voter screen.
The amazing stat? President Barack Obama led in none of the three polls where the likely voter screen was employed. But Mitt Romney led in none of the three polls when the universe was registered voters.
There are, of course, two fairly logical explanations for this. For one thing, the national polling of the presidential race has grown extraordinarily close: Romney's margin in the three polls among LVs was a mere 1.7 percent, while Obama's margin with RVs was only 2.0 percent. For another, it has long been assumed that a likely voter screen is going to benefit the GOP, because of the (likely correct) assumption that more habitual voters are going to lean conservative.
With the end of the marathon in sight, however, and everyone eyeing the polls with ever-increasing intensity, today's dose of Daily Kos poll analysis is going to look at some assumptions about polling invoking the likely voter screen. Some of the things we think we know about utilizing this common polling technique are absolutely true, but there is also no shortage of mythology.
Below the fold, you will find the common rationale for tightening to a likely voter screen at election time, some of the potential pitfalls of doing so, and a study I conducted of 2004-2008 presidential election polling which will reinforce some assumptions about likely voter screens, and contradict others.
THE CASE FOR POLLING LIKELY VOTERS
The rationale for invoking a "likely voter" screen in polling is inherently logical. It is an absolute fact that not every person who is a registered voter will actually participate on election day.
Even in that, however, there is some legitimate dispute. What percentage of registered voters don't actually participate on election day? Hard as it might be to believe, there is no indisputable answer to that question, even for a single election. Turnout figures are usually fairly reliable (they may vary slightly, depending on whether one counts everyone who signed in to vote or only ballots cast in the presidential race, accounts for undervotes, etc.). But there is substantial disagreement over how to count "registered voters." A late 2008 study pegged the number at 184 million voters, while the Census Bureau put it at 146 million. So, expressed as a percentage, somewhere between 11 and 29 percent of registered voters did not vote in 2008.
Nevertheless, that is a substantial reason to winnow the field down from all registered voters, since we know that somewhere between 1-in-10 and 1-in-3 of those responses are going to be invalid.
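The arithmetic behind that 11-29 percent range is easy to reproduce. A quick sketch, assuming a 2008 turnout of roughly 130 million ballots (a placeholder figure consistent with the range quoted above, not a number from the article) against the two registration estimates:

```python
# Sketch of the arithmetic behind the 11-29 percent range.
# The turnout figure is an assumption for illustration; the two
# registration estimates come from the article.
turnout = 130_000_000          # approx. 2008 presidential ballots cast (assumed)
estimates = {
    "Census Bureau":   146_000_000,
    "late-2008 study": 184_000_000,
}

for label, registered in estimates.items():
    nonvoting = (registered - turnout) / registered
    print(f"{label}: {nonvoting:.0%} of registered voters did not vote")
# -> Census Bureau: 11% ... late-2008 study: 29%
```

The spread between the two estimates, not the turnout number itself, is what produces the strikingly wide 11-29 percent window.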
THE PERIL OF SCREENING FOR "LIKELY VOTERS"
Of course, the fundamental problem lies on two fronts. For one thing, overall turnout is going to shift from election to election. As much as the left would like to assume that 2012 turnout will mirror 2008 turnout, and as much as the right would like to believe that 2012 turnout will mirror 2010 turnout, there is simply no way to know for sure what the composition of the 2012 electorate will be (which is why that whole GOP unskewing phenomenon was such incredible nonsense).
For another, the constantly shifting demographics of the nation at large will also alter the electorate as time goes on, making it difficult to get a legitimate bead on who will eventually vote, and who will not. Assuming a 78-percent white electorate may have been a safe bet in 1992, for example, but today it would be an invitation to ridicule.
So therein lies the challenge for pollsters. While, in theory, it would be to the benefit of accuracy to weed out non-participatory registered voters, to do so invites a lot of assumptions and speculation by the pollsters themselves. And that speculation could be errant.
Even when trying to winnow the field by simple means (like asking about voter enthusiasm or likelihood of participation), there are pitfalls, as Republican pollster David Hill noted earlier this week:
The most common question simply asks: Are you almost certain to vote, will you probably vote, are the chances 50-50 or don’t you think you’ll vote? Seems straightforward. If you want to know whether someone will vote, just ask. But this doesn’t work very well. A recent Kennedy School of Government study, looking at more than 10,000 pre-election interviews and actual turnout, determined independently from election records, demonstrates that many who say they’ll vote don’t. And even more surprising is many who say they won’t vote eventually do. In this study, 13 percent of those “almost certain to vote” didn’t. But more disturbing is that of voters who self-reported only a 50-50 chance of voting, a category most pollsters dismiss, 67 percent voted. Even more disconcerting is that 55 percent of those who said they probably wouldn’t vote eventually did. Almost no pollsters using likely-voter methodology would have kept these respondents in their samples. But they voted.

So, while there is a clear and compelling reason why you would want an election poll to reflect those who will actually show up, it should be evident that narrowing the field does come with legitimate peril, as pollsters make their own assumptions about who will show up and who won't. PPP might have the best method for doing so (they simply say at the beginning of their poll that if you aren't voting, hang up).
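The Kennedy School numbers Hill cites make it possible to sketch why a strict screen misfires. Here is a minimal illustration, using the turnout rates from the study quoted above; the share of respondents in each self-report category is a made-up assumption, since the article does not report those shares:

```python
# How many actual voters does a strict likely-voter screen throw away?
# Turnout rates per category come from the Kennedy School study quoted
# above; the respondent shares are invented for illustration and do not
# sum to 1 (the remainder said they would not vote or gave no answer).
categories = {
    # label: (share of respondents [assumed], observed turnout rate)
    "almost certain": (0.60, 0.87),   # 13% of this group did NOT vote
    "50-50 chance":   (0.15, 0.67),
    "probably won't": (0.10, 0.55),
}

# A screen that keeps only the "almost certain" group:
kept = categories["almost certain"][0] * categories["almost certain"][1]
discarded = sum(share * rate for label, (share, rate)
                in categories.items() if label != "almost certain")

print(f"actual voters kept:      {kept:.3f} per respondent")
print(f"actual voters discarded: {discarded:.3f} per respondent")
```

Under these (assumed) shares, the screen discards a meaningful slice of people who end up voting, which is exactly the peril Hill describes.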
What we will examine now, however, is whether the likely voter screen leads to ... pardon this expression ... a skewed view of elections. To do so, I examined polling from the 2004 and 2008 presidential elections, using the polling compendium at D.C.'s Political Report. If there is an obvious flaw in the study, it is in the smallish data set. Because most pollsters use a likely voter screen, and do not release RV data, there were only 50 late polls (defined as Oct. 1 through election day) from those two election cycles.
Nevertheless, there is at least some data to look at as we consider the LV/RV divide. For those interested in playing with the data, you can access the database here.
ASSUMPTIONS versus EVIDENCE: THE RV/LV DIVIDE
ASSUMPTION #1: Likely voter polls are more accurate than ones of registered voters
Of the 50 state presidential polls conducted during the final month of the 2004 and 2008 presidential campaigns, the RV result was closer to the final outcome than the LV result in fully half of them. In just 38 percent was the LV result closer to the final outcome than the RV result. In six of the polls (12 percent), incidentally, there was no difference between the RV and LV results.
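The tally above rests on a simple per-poll comparison: which topline margin, LV or RV, landed closer to the certified result? A minimal sketch of that comparison, using invented placeholder polls rather than entries from the actual 50-poll data set:

```python
# Which screen was closer? For each poll, compare the LV and RV margins
# (Dem minus GOP, in points) against the final certified margin.
# The three polls below are invented placeholders for illustration.
polls = [
    # (LV margin, RV margin, final margin)
    ( 2.0,  5.0,  4.0),
    (-1.0,  1.0, -2.0),
    ( 3.0,  3.0,  1.0),
]

for lv, rv, final in polls:
    lv_err, rv_err = abs(lv - final), abs(rv - final)
    if lv_err < rv_err:
        verdict = "LV closer"
    elif rv_err < lv_err:
        verdict = "RV closer"
    else:
        verdict = "no difference"
    print(f"LV {lv:+.1f}, RV {rv:+.1f}, result {final:+.1f}: {verdict}")
```

Running this classification over all 50 real polls is what produces the 50 / 38 / 12 percent split reported above.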
For what it is worth, those numbers track closely to a study I did of downballot polling in advance of the 2010 midterm elections.
ASSUMPTION #2: A likely voter screen always favors Republicans
The polling toplines among LVs in this study were, hard as it might be to believe, just as likely to err on the side of the Democrats as on the side of the GOP. In fact, the split was perfectly down the middle: 50 percent of the polls gave margins that were more favorable to the Republican candidate, and 50 percent gave margins that were more favorable to the Democrat.
ASSUMPTION #3: A registered voter poll always favors Democrats
VERDICT: Partially true
With only a registered voter screen in place, the polls did err on the side of the Democrats slightly more often than not. The ratio, as it happens, was 60-40, with the majority of RV polls missing to the benefit of the Democratic candidate.
ASSUMPTION #4: There is always a broad enthusiasm gap between RVs/LVs
One of the unique things about the 2012 election cycle is that there is often a fairly wide gap between the two toplines, with Obama's results among LVs substantially worse than those among RVs.
The odd thing is that this defies the results of this study in two ways. For one thing, the gaps were not particularly wide: in 72 percent of the polls in the study, the LV/RV gap was three points or less. For another, there was a sizable minority of polls where the LV/RV gap either didn't exist or the Democrat actually performed better with likely voters. The GOP candidate did overperform with LVs compared to RVs most of the time (64 percent), but in almost a quarter of the polls, the Democratic candidate saw his margins enhanced by the LV screen. That phenomenon has been essentially absent in this election cycle.
* * * * * * * * * * * * * * * * * *
So what, in the final analysis, can we take from this data set in relation to the current election standing in front of us? There seems little doubt, from the available evidence, that this year's likely voter screens seem more hostile to the Democratic president than past screens have been for Democrats, given the preternaturally wide gaps seen in many of these polls. But, a note of caution: That does not mean that they are wrong.
One of the true travesties of the "unskewed" movement, and one that regrettably is also occasionally parroted by supporters of the president, is to automatically presume that a poll with something anomalous to it is automatically, and simply, wrong. To use a far-too-often quoted film cliché, "It's a trap."
In 2010, I swore up and down, including on this very site, that the enormous gains for the GOP could never happen, because the Republican brand name was crap. Which, of course, it was: The GOP had a lower fav/unfav ratio in the 2010 exit polls than did the Democrats. All of which, as it happened, mattered very little in what was one of the most glaring examples of "clothespin voting" in recent American history.
It is entirely possible that pollsters in America are seeing a broad enthusiasm gap where none exists, and that these chasms between likely voters and registered voters are wildly understating popular support for the president.
But Obama supporters have to account for the fact that if a number of pollsters are seeing it, it might (at present) exist.
And, in fact, my first maxim of poll analysis comes into play here, and not in a way that will make Obama supporters happy. That rule is: If everyone has the race in one place, and you have it in another, chances are that everyone else isn't the one that's wrong. The corollary here: If pollsters across the board are seeing this gap, the chances of it existing are actually pretty decent.
Of course, the primary goal of Team Obama, between now and Nov. 6, is to alter that. Much has been made about the oft-ballyhooed turnout machine of the Obama campaign. And if early voting and voter registration numbers are any indicator, that could well be more than hype. Clearly, votes in the bank, coupled with a different partisan balance in states than previously thought, could go a long way towards diminishing (or even reversing) any "likely voter" gap that might exist. A sterling Obama debate performance on Tuesday could also go a very long way toward closing any "gap" that might truly exist in the electorate.