In NH, registered voters excluded by CNN's LV screen favor Shaheen 66% to 24%. 51% of these registered voters say their "mind is made up", yet they're excluded.
CNN/ORC's likely voter screen runs about 7 points worse for Democrats than their numbers for registered voters. They've published four polls in US Senate races (that I've seen), from Arkansas, Iowa, Kentucky and New Hampshire (PDFs).
Registered voters excluded by the LV screen favored the Democrat by wide margins: an estimated 27 points in Arkansas, 22 points in Iowa, 8 points in Kentucky, and 42 points in New Hampshire. That's how an LV screen moves the topline 7 points.
Credit to CNN for publishing numbers for both registered and likely voters, unlike most pollsters. I'm not here to argue against LV screens in general, but greater disclosure by pollsters would be nice.
The numbers in black and bold were published by CNN/ORC. The numbers in red and bold estimate the levels of support among respondents excluded by the likely voter screen (my original analysis).
The other non-bold numbers reveal the basic math behind the estimates in red. Take Shaheen's numbers in NH, for example: 51% of 883 is 450.3; 48% of 735 is 352.8; subtracting 352.8 from 450.3 gives 97.5; and 97.5 divided by 148 (the 883 − 735 respondents the screen excluded) is 66% (in red). Understandable? And the final column (i.e. "sum w's") should equal the sample size (n) in bold. The variability between these two columns gives an idea of how precise the numbers in red are.
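The arithmetic above generalizes to any of the four polls. Here's a minimal sketch of the calculation; the function name and parameters are mine, and the inputs are just the published NH toplines:

```python
def excluded_support(rv_pct, rv_n, lv_pct, lv_n):
    """Estimate a candidate's support among respondents dropped by the LV screen.

    rv_pct, rv_n: candidate's % and sample size among registered voters
    lv_pct, lv_n: candidate's % and sample size among likely voters
    """
    rv_count = rv_pct / 100 * rv_n   # estimated supporters among all registered voters
    lv_count = lv_pct / 100 * lv_n   # estimated supporters among likely voters
    excluded_n = rv_n - lv_n         # respondents the screen excluded
    return 100 * (rv_count - lv_count) / excluded_n

# Shaheen in NH: 51% of 883 RVs, 48% of 735 LVs
print(round(excluded_support(51, 883, 48, 735)))  # 66
```

Note this treats the published (weighted) percentages as if they applied to raw counts, which is why the "sum w's" column matters for judging precision.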
Note: The "etc." column includes responses of "no opinion" and "other". How can someone without an opinion actually make it through an LV screen? Well, some do.
In summary, registered voters are 7 points more favorable to the Democrats than likely voters, according to CNN/ORC's most recent polls:
Does anybody know how this likely voter disadvantage compares to other midterms?
CNN/ORC's methodology is described in identical language across polls:
All respondents were asked questions concerning basic demographics, and the entire sample was weighted to reflect statewide Census figures for gender, race, age, education and region of the state. Registered voters were asked questions about their likelihood of voting, past voting behavior, and interest in the campaign; based on the answers to those questions, [X] respondents were classified as likely voters.
It would be interesting to know how responses are weighted in the screening process and where CNN/ORC draws the line and why.
Pollsters (not just CNN's, but all of them) provide little if any guidance about how they define likely voters. We just get the results: voilà!
What's clear is that CNN's likely voter screen excludes many people who say their "mind is made up" about the election:
In 2010, CNN/Time's LV screen was described by their polling director. It's entirely possible they're still using the same type of screen, where the likely voter cutoff differs for every state, drawn at the point that "comes closest to my estimate of the actual turnout".
Okay then. Voila!
From the 2010 interview by Blumenthal:
Response from CNN polling director Keating Holland:
1) Do you use screen questions to select likely voters, a Gallup-style index/cut off or something else?
2) If an index/cut-off model, what's cutoff- percentage, i.e. what percent of the adults do you qualify as likely voters?
I run a 50-point scale and use a cutoff that comes closest to my estimate of the actual turnout. So if we're in a state with an estimated 40% turnout, I cut off the likely voters at the point where 40% of my weighted sample is included in the likely voter group.
3) Regardless of the type of model what questions do you ask to define or model likely voters?
10-point scale on likelihood of voting; 10-point scale on interest in campaign; past vote asked in a way to create a 10-point scale from past behavior.
4) Does your likely voter model rely at all on voter lists and individual level vote history?
No.
5) Do you weight by party?
Not normally, but I monitor both party ID and party registration (in states that have it).
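Holland's cutoff method can be sketched in a few lines: rank the weighted respondents by their likelihood score, then keep adding from the top until the included weight comes closest to the estimated turnout. This is my reading of his description, not CNN's actual code; the function name and data layout are hypothetical:

```python
def likely_voters(respondents, turnout_estimate):
    """Select likely voters by Holland's cutoff approach (as I understand it).

    respondents: list of (score, weight) pairs, score on a 0-50 likelihood scale
    turnout_estimate: estimated turnout as a fraction of the weighted sample
    """
    total_weight = sum(w for _, w in respondents)
    target = turnout_estimate * total_weight
    # Walk down from the highest likelihood scores, accumulating weight
    # until the likely-voter group covers the turnout target.
    ranked = sorted(respondents, key=lambda r: r[0], reverse=True)
    cum, selected = 0.0, []
    for score, weight in ranked:
        if cum >= target:
            break
        selected.append((score, weight))
        cum += weight
    return selected

# Toy sample of five equally weighted respondents, 40% estimated turnout:
sample = [(50, 1.0), (40, 1.0), (30, 1.0), (20, 1.0), (10, 1.0)]
print(len(likely_voters(sample, 0.40)))  # 2
```

One consequence of this design is visible in the tables above: whether a respondent counts as "likely" depends on where the state's turnout line falls, not on any fixed threshold of commitment, which is how decided voters end up excluded.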