When CNN launched its 2010 statewide election polling effort in early September (conducted by Opinion Research Corporation), the numbers were a rare bright spot (PDF file) for Democrats in a summer that had been mostly cloudy. The polls showed Jack Conway locked in a dead heat with Rand Paul in Kentucky, and Alex Sink leading Republican Rick Scott handily in Florida.
Almost immediately, pundits on the right (along with a number of voices on the left and in the center) dismissed the data, pointing out that the polls were of registered voters, and not likely voters.
One week later, almost on cue, CNN and ORC released new numbers (PDF file) in Nevada, Ohio, and Washington. The results broadcast to the world were of likely voters, and with the exception of Patty Murray's lead in Washington, the news was far grimmer for Democrats. Blowouts in Ohio, coupled with a Sharron Angle lead in Nevada, were the story of the day.
But...not so fast. One of the more attractive things about the CNN/ORC polls is their tendency to release both likely voter and registered voter toplines when they release their polling data. And a cursory glance at the numbers showed quite a discrepancy between the two universes in their election day voting intent:
CNN/Opinion Research Poll, 9/10-9/14, MoE 3% (Registered Voters), 3.5% (Likely Voters)
NV-SEN (LV): Sharron Angle (R) 42%, Sen. Harry Reid (D) 41%
NV-SEN (RV): Sen. Harry Reid (D) 42%, Sharron Angle (R) 34%
NV-GOV (LV): Brian Sandoval (R) 58%, Rory Reid (D) 31%
NV-GOV (RV): Brian Sandoval (R) 52%, Rory Reid (D) 32%
OH-SEN (LV): Rob Portman (R) 52%, Lee Fisher (D) 41%
OH-SEN (RV): Rob Portman (R) 49%, Lee Fisher (D) 42%
OH-GOV (LV): John Kasich (R) 51%, Gov. Ted Strickland (D) 44%
OH-GOV (RV): John Kasich (R) 49%, Gov. Ted Strickland (D) 46%
WA-SEN (LV): Sen. Patty Murray (D) 53%, Dino Rossi (R) 44%
WA-SEN (RV): Sen. Patty Murray (D) 50%, Dino Rossi (R) 44%
As you can see, with the sole exception of Senator Murray's lead in Washington, the other four races tilt far more Republican under the likely voter screen than they do when the results are derived more broadly from registered voters. In the case of the Nevada Senate race, in fact, the likely voter screen painted an entirely different picture of the race.
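The shifts described above can be computed directly from the toplines. The sketch below is just that, a sketch: the dictionary encodes nothing beyond the CNN/ORC numbers quoted above, and the sign convention (positive means the LV screen tilted the race toward the GOP) is my own.

```python
# Margin gaps between likely-voter (LV) and registered-voter (RV) toplines
# from the 9/10-9/14 CNN/Opinion Research release quoted above.
# Each race maps to ((R_lv, D_lv), (R_rv, D_rv)).
polls = {
    "NV-SEN": ((42, 41), (34, 42)),
    "NV-GOV": ((58, 31), (52, 32)),
    "OH-SEN": ((52, 41), (49, 42)),
    "OH-GOV": ((51, 44), (49, 46)),
    "WA-SEN": ((44, 53), (44, 50)),
}

def lv_rv_gap(lv, rv):
    """GOP margin under the LV screen minus GOP margin among RVs.

    Positive values mean the likely voter screen moved the race
    toward the Republican; negative values, toward the Democrat.
    """
    (r_lv, d_lv), (r_rv, d_rv) = lv, rv
    return (r_lv - d_lv) - (r_rv - d_rv)

for race, (lv, rv) in polls.items():
    print(race, lv_rv_gap(lv, rv))
```

Run this and the Nevada Senate race stands out: a nine-point swing toward Angle, versus a three-point swing toward Murray (the lone pro-Democratic shift) in Washington.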
And that isn't even the most audacious example of that phenomenon. An early August Reuters/Ipsos poll in the Silver State found Harry Reid leading Sharron Angle by a mere four points among likely voters (48-44). Among registered voters, however, Reid pulled out to what could fairly be described as a substantial lead (52-36) over Angle.
It is, of course, conventional wisdom that likely voter samples will tend to provide results that are far more favorable for Republicans than a mere sample of registered voters. The reasons for that tendency are also pretty universally known. As Professor Alan Abramowitz wrote over at Pollster just last week:
It is not surprising that Republicans would be doing better among likely voters than among all registered voters, especially in a low turnout midterm election. Republicans generally turn out in larger numbers than Democrats because of their social characteristics and this year Republicans appear to be especially motivated to get to the polls to punish President Obama and congressional Democrats. But a double-digit gap between the preferences of registered and likely voters is unusually large.
Thus, the existence of this gap is not the subject of the inquiry here. Nor is the inquiry going to be about whether limiting a voting sample to likely voters is a good idea in the first place. Folks like Mark Blumenthal have already plowed that ground, and done it well.
The question here, given that so many conclusions about the state of play in political elections are based on polls, is whether screening for likely voters produces a more accurate barometer of election outcomes than polls that merely screen for registered voters.
To examine this, I culled polling data from the past two election cycles (2006/2008), and looked for pollsters who were kind enough to split their data between RVs and LVs (the compendium of polls at DC's Political Report is excellent, and was the source for this exercise). To keep the comparison of predictive value fair, I only took polls that were completed within a month of election day. If a pollster did more than one poll in that time period, I took their poll closest to the election.
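The selection rule described above is simple enough to sketch in code. The poll records here are invented purely for illustration (the pollster names and dates are not from the actual dataset); the rule itself, a one-month window plus one poll per pollster, is the one the text lays out.

```python
from datetime import date

ELECTION_DAY = date(2008, 11, 4)

# Hypothetical (pollster, field-end date) records, for illustration only.
polls = [
    ("Pollster A", date(2008, 10, 10)),
    ("Pollster A", date(2008, 10, 28)),
    ("Pollster B", date(2008, 9, 20)),   # more than a month out: dropped
    ("Pollster C", date(2008, 10, 15)),
]

def select_polls(polls, election_day, window_days=31):
    """Keep polls completed within a month of election day; if a pollster
    has several qualifying polls, keep only the one closest to the election."""
    in_window = [p for p in polls
                 if 0 <= (election_day - p[1]).days <= window_days]
    latest = {}
    for pollster, end in in_window:
        if pollster not in latest or end > latest[pollster]:
            latest[pollster] = end
    return sorted(latest.items())

print(select_polls(polls, ELECTION_DAY))
```

With the sample records above, Pollster B falls outside the window and Pollster A's earlier poll is superseded by its later one, leaving one poll each from A and C.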
It was not a large pool of data, when all is said and done. Apparently, the overwhelming preponderance of pollsters either don't delineate between the two pools of voters, or simply don't report the results for both.
Happily, Opinion Research (CNN's pollster) makes a habit of it, and therefore a pretty large share (42%) of the polls that fit the parameters came from them. Admittedly, that makes the findings a bit imperfect, because the universe here is not only small (fifty-six polls total) but also not terribly diverse.
Nonetheless, there were some interesting findings to share.
1. THE FINAL SCORE: RV 32, LV 21, Ties 3
Let's assume, for the moment, that the goal of polling is to provide some predictive value about an electoral outcome. And let's face it: while pollsters will strain a larynx saying that their work should only be interpreted as a snapshot in time, the media utility of political polls is almost always about predictive value.
By that standard, it is quite difficult to make the case for the superiority of screening for likely voters, at least from this small universe of data. Nor would I say, however, that culling only for registered voters is inherently superior. Fairness would dictate a verdict that the results are...well...mixed.
If you limit it only to the 15 days before the election, the gap tightens a bit (RV 23, LV 17, Ties 3).
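The scoring behind the "final score" above is straightforward: for each poll, whichever universe's margin lands closer to the actual election margin wins the point, and equal misses are a tie. The sketch below implements that rule; the sample margins at the bottom are invented for illustration, not real poll results.

```python
def score(polls):
    """Tally which universe (RV or LV) came closer to the actual margin.

    Each record is (rv_margin, lv_margin, actual_margin), with margins
    on a consistent sign convention (e.g., Democrat minus Republican).
    """
    tally = {"RV": 0, "LV": 0, "Tie": 0}
    for rv_margin, lv_margin, actual in polls:
        rv_err = abs(rv_margin - actual)
        lv_err = abs(lv_margin - actual)
        if rv_err < lv_err:
            tally["RV"] += 1
        elif lv_err < rv_err:
            tally["LV"] += 1
        else:
            tally["Tie"] += 1
    return tally

# Hypothetical examples: RV closer, LV closer, and a dead tie.
sample = [(8, 1, 6), (-3, -7, -6), (5, 2, 3.5)]
print(score(sample))
```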
2. The Gaps in 2010 are extraordinarily wide, it would seem
One of the reasons why it is difficult to conclude that one method of defining the sample is better than the other is that, at least in the case of 2006 and 2008, there wasn't a huge gulf of difference between the two. Look at the following chart:
Gaps between margins, Registered Voters vs. Likely Voters, 2006/2008
0-2 points: 31 polls
3-5 points: 17 polls
6+ points: 8 polls
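The chart above buckets absolute RV/LV margin gaps into three bands, which a few lines of code can reproduce. The gap values fed in at the bottom (9, 7, 4, 4, 3) are the ones implied by the CNN/ORC toplines quoted earlier in this piece.

```python
def bucket(gaps):
    """Count absolute RV/LV margin gaps into the 0-2 / 3-5 / 6+ bands."""
    counts = {"0-2": 0, "3-5": 0, "6+": 0}
    for g in gaps:
        g = abs(g)
        if g <= 2:
            counts["0-2"] += 1
        elif g <= 5:
            counts["3-5"] += 1
        else:
            counts["6+"] += 1
    return counts

# The five gaps from the recent CNN/ORC release: 9, 7, 4, 4, 3.
print(bucket([9, 7, 4, 4, 3]))
```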
Of course, all five of the polls in the recent CNN/ORC release had gaps of three or more points between RVs and LVs. Interestingly, though, there might be a link between the spread between RVs and LVs and the type of election: while over a quarter of the 2006 polls had a gap of six points or wider (6/23), less than ten percent of the 2008 polls had gaps that wide (2/33).
Therefore, while these wide RV/LV gaps in recent polling might be a bit larger than we would expect, you have to consider that this might be par for the course when trying to forecast an electorate that will, in all probability, be 35-45 million voters smaller than the hordes that showed up at the polls in 2008.
3. Likely voter samples don't always skew GOP. Rather, they may merely skew in the direction of the frontrunners
One of the most fascinating things in looking at the roughly two dozen polls from the 2006 cycle was learning how poorly they followed the traditional paradigm of likely voter screens and their skew toward the GOP. Indeed, in a slight majority of the twenty-three polls from that cycle that reported both LV and RV results, screening for likely voters actually produced better numbers for Democrats. More important, in terms of predictive value, those likely voter screens that were more bullish on the Democrats overstated Democratic performance three-fourths of the time (9 out of 12 polls).
What does this mean? It could mean that by measuring voter intensity, the pollsters miss the disinterested voter that winds up showing up anyway. As our polling partner Tom Jensen noted earlier this year in an excellent piece of analysis, merely being more excited by an election is not an automatic harbinger of voting intent, nor is being unexcited a guarantee that one will stay home and do laundry on Election Day. There is, therefore, always the chance that some of these audacious likely voter numbers in the polls we consume might be underestimating Democratic performance by presuming that unexcited and nonparticipatory are one and the same.
* * * * * * * * * * * * * * * * * * * * * * * * * * * *
Does this mean, in the words of Kevin Bacon in Animal House, that Democrats should remain calm...indeed, that "all is well"? Well, sadly for the Blue team, no. Democratic performance in polls of registered voters during this cycle hasn't been splendid, either, and Abramowitz noted in the piece that I linked to earlier that the Gallup generic ballot, screened for likely voters, has actually had decent predictive value over the years. If that is the case, the House of Representatives could be a disaster area for Democrats, given that the ping-pongy numbers there have, on balance, favored the Republicans.
Furthermore, there is no way to know whether pollsters are giving Republicans far too much credit in their likely voter screens until after the election has been completed. Until then, both unwarranted fear and unwarranted complacency are horrifically misguided.
After all, any screen is a bit of a hypothesis at work. And the more unpredictable the cycle, the more difficult it will be for pollsters to find the fairway. Underestimating turnout in the early 2008 primaries was certainly one of the contributing causes of the widely disparate polling of that time. That is not meant as a shot at the polling community. They can, and should, be acquitted somewhat for that performance. We had never seen a cycle like that one.
That said, we've never had a cycle like this one, either--with a party in power that is disliked and an alternative that, if anything, is more disliked than the party in power.
The pollsters could be entirely correct, given that the tendency of voters is to throw out the party in power when they are in a bad mood about the state of the nation.
However, when the alternative is not terribly palatable, either, how will voters behave?
It is another reason why the postmortem of this particular election ought to be fascinating.
One thing we can say for sure is that the pollsters in this cycle (particularly when utilizing likely voter screens) have been incredibly bullish on the GOP. We will know in about six weeks whether their optimism was prescient, or errant.