
View Diary: 'Likely voter' polling screens were skewed toward Romney (111 comments)

  •  Going binary (13+ / 0-)

    The problem with every likely voter screen I've heard of except Rand's (and that is a panel rather than a poll) is that they go from reality:
    "These people are likelier to vote than those people are."
    to delusion:
    "These people are going to vote and those people are not."

    And they quite consciously and deliberately do so. Look, I understand talking heads on TV getting it wrong. Most talking heads are innumerate. (I think that was why they rejected Nate Silver so bitterly. He didn't have the first requirement for being a pundit -- absolute ignorance of arithmetic.) But the guys who run polls are supposed to know a little statistics.

    •  Agreed that the binary nature is problematic (2+ / 0-)
      Recommended by:
      Cat Servant, sethtriggs

      In an ideal polling world, the best approach would probably be one that weighted the likelihood of different respondents voting. (The Rand panel seemed to do this somewhat...)

      For example (and only hypothetically), a poll might assume that a voter who answered all 7 Gallup questions affirmatively was 98% likely to vote, 6 questions might equal a 93% probability, 5 questions 85%, 4 would be 68%, 3 could be 48%, 2 might mean 31% and 1 might be 16%... (make up and fill in your own numbers here...) They could then factor candidate preferences into a weighted calculation.
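
      To make the arithmetic concrete, here is a minimal sketch of that weighted calculation, using the hypothetical turnout probabilities from the comment above (the probabilities, the toy sample, and the function name are all made up for illustration; real values would need empirical calibration):

      ```python
      # Hypothetical turnout probabilities keyed by the number of Gallup-style
      # screen questions a respondent answered affirmatively (values taken
      # from the comment above; they are illustrative, not researched).
      TURNOUT_PROB = {7: 0.98, 6: 0.93, 5: 0.85, 4: 0.68, 3: 0.48, 2: 0.31, 1: 0.16}

      def weighted_preference(respondents):
          """Estimate candidate shares by weighting each respondent's stated
          preference by his or her estimated probability of voting.

          `respondents` is a list of (screen_score, candidate) tuples.
          """
          totals = {}
          weight_sum = 0.0
          for score, candidate in respondents:
              w = TURNOUT_PROB.get(score, 0.0)
              totals[candidate] = totals.get(candidate, 0.0) + w
              weight_sum += w
          return {c: t / weight_sum for c, t in totals.items()}

      # Toy sample: two enthusiastic Obama voters, one lukewarm Romney voter.
      sample = [(7, "Obama"), (6, "Obama"), (3, "Romney")]
      shares = weighted_preference(sample)
      ```

      The point of the weighting is that the lukewarm respondent still counts, just less — instead of being thrown out entirely, as a binary screen would do.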

      There would be two problems in using such a system. The first would be doing the incredible amount of research to assign accurate values to likelihood of voting based on various responses - and figuring out how it plays out in different election cycles. The second would be that such a system would require a huge sample size to get subsamples large enough to weight accurately.

      I suspect that this sort of thing will ultimately allow the creative on-line polling approaches demonstrated this cycle (by Rand, YouGov, and Ipsos) to adopt such innovative measurements and begin to be treated as more credible and serious alternatives to conventional telephone polling.

      My friends, love is better than anger. Hope is better than fear. Optimism is better than despair. So let us be loving, hopeful and optimistic. And we’ll change the world - Jack Layton

      by terjeanderson on Fri Nov 23, 2012 at 12:14:14 PM PST

      [ Parent ]

      •  This is exactly what I thought when I saw (2+ / 0-)
        Recommended by:
        sethtriggs, PrahaPartizan

        a pre-election diary (I thought it was posted by Markos, but I can't find it now, sorry) that showed a likely-voter screen question from one of the "major" players. The interviewee was asked whether s/he was "sure to vote," "probably will vote," "might vote and might not," "probably won't vote," or "sure not to vote." The pollster's "likely voter screen" was to throw out everyone except the sure-to-vote group. Which was clearly nuts--how do you justify ignoring everyone who says they're more likely than not to vote?

        My thought (as a lifelong applied statistician who composed & ran a couple of pick-up polls over a generation ago) was that the only honest way to deal with that question is to weight the responses--say by 100% of the first group, 75% of the second, 50% of the third, and 25% of the fourth--& compute the results.

        Then you adjust the weights (I'd keep the 100/50/0 points constant & adjust the weights for the second and fourth groups) & do some sort of sensitivity analysis, if only to tabulate the results & say Wayull dang, thass innerestin'!

        Say jerk the weights for the second and fourth groups by 5 and then 10 points in either direction; that gives you 5 x 5 = 25 cases, centered on:

        100/75/50/25 (original settings)

        Once you have the data it's not so terribly difficult to crank the results for any of those cases & then put them into a 5 x 5 3-dimensional bar graph (or for that matter a response surface with multicolor topography). Which will give you a pretty fair idea how far your respondents would have to be under- or overestimating their likelihood of voting in order to make a significant difference. Any campaign that pays for (or media outlet that gives air time to) the results of a survey ought to demand that sort of analysis up front.
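
        A minimal sketch of that 25-case sensitivity grid (the raw counts per likelihood bin are invented for illustration, and `weighted_margin` is a hypothetical helper, not any pollster's actual code):

        ```python
        from itertools import product

        def weighted_margin(counts, weights):
            """Candidate A's share of the weighted two-way vote.

            `counts` lists, for each of the five self-reported likelihood
            bins, a (A_supporters, B_supporters) pair; `weights` gives the
            turnout weight (0..1) applied to each bin.
            """
            a = sum(w * counts[i][0] for i, w in enumerate(weights))
            b = sum(w * counts[i][1] for i, w in enumerate(weights))
            return a / (a + b)

        # Hypothetical raw counts per bin, "sure to vote" ... "sure not to vote".
        counts = [(400, 420), (150, 120), (80, 60), (40, 20), (10, 5)]

        # Hold the 100/50/0 anchor points fixed; jerk the 75 and 25 weights
        # by 5 and 10 points in either direction -> 5 x 5 = 25 cases.
        cases = {}
        for w2, w4 in product((0.65, 0.70, 0.75, 0.80, 0.85),
                              (0.15, 0.20, 0.25, 0.30, 0.35)):
            cases[(w2, w4)] = weighted_margin(counts, (1.00, w2, 0.50, w4, 0.00))
        ```

        Plotting `cases` as the 5 x 5 surface described above shows at a glance how far the adjustable weights have to move before the topline margin changes materially.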

        This sounds like what Rand did. Of course it's easier in a panel-back poll when you can look at the longitudinal evolution of a respondent's self-evaluated likeliness to vote...but still, the idea that a responsible pollster would fail to perform this sort of sensitivity analysis, & hedge its bets thereby, is to me borderline opinion-research malpractice.

        It's not a "fiscal cliff," it's a Fiscal Bluff--so why don't we call them on it?

        by Uncle Cosmo on Fri Nov 23, 2012 at 01:37:24 PM PST

        [ Parent ]

        •  Monte Carlo With Transparency (1+ / 0-)
          Recommended by:
          Uncle Cosmo

          With unlimited computational capacity, the pollster could Monte Carlo the results using the suggested weightings and then lay out for all to see the probabilities being used in the model.  Given the freak-out that many pundits and voters had over Nate Silver's analysis, I doubt that enough of the electorate would have understood the final analysis.  Too many observers wanted a determinate number rather than the fuzzy figures which are going to be inherent in any forecast of this kind.
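
          A sketch of what that Monte Carlo could look like (the panel composition and turnout probabilities are hypothetical; the idea is just to turn per-respondent turnout odds into a distribution of outcomes rather than one determinate number):

          ```python
          import random

          def monte_carlo_share(respondents, n_sims=10_000, seed=42):
              """In each simulation, each respondent turns out with his or
              her estimated probability; return the distribution of
              candidate A's share of the simulated actual electorate.

              `respondents` is a list of (turnout_prob, votes_for_A) tuples.
              """
              rng = random.Random(seed)
              shares = []
              for _ in range(n_sims):
                  a = total = 0
                  for p, for_a in respondents:
                      if rng.random() < p:
                          total += 1
                          a += for_a
                  if total:
                      shares.append(a / total)
              return shares

          # Hypothetical panel: high-enthusiasm voters split roughly evenly,
          # low-enthusiasm voters leaning toward candidate B.
          panel = ([(0.95, 1)] * 40 + [(0.95, 0)] * 38
                   + [(0.40, 1)] * 15 + [(0.40, 0)] * 25)
          dist = monte_carlo_share(panel)
          mean_share = sum(dist) / len(dist)
          ```

          The spread of `dist`, not just its mean, is the honest answer — which is exactly the fuzziness that made so many observers uncomfortable with Silver's work.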

          "Love the Truth, defend the Truth, speak the Truth, and hear the Truth" - Jan Hus, d.1415 CE

          by PrahaPartizan on Fri Nov 23, 2012 at 03:26:03 PM PST

          [ Parent ]

      •  You don't really need a large sample size. (0+ / 0-)

        Sure, the results for each group would have a high standard deviation. But any poll can be broken down into groups whose standard deviations are larger than useful.

        The point is that in aggregating them you get more reliable results.

        As for assigning probabilities of voting, that's what they do now. Only now they assign a probability of 0 to groups where we know that's bullshit.
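
        The aggregation point can be shown with a quick back-of-the-envelope calculation (the bin weights, subsample sizes, and assumed 50/50 split are all invented for illustration): each subgroup estimate is noisy on its own, but the weighted combination is tighter than any of them.

        ```python
        import math

        def subgroup_se(p, n):
            """Standard error of a candidate-share estimate from n respondents."""
            return math.sqrt(p * (1 - p) / n)

        # Five likelihood bins as (turnout weight, subsample size);
        # candidate share assumed ~0.5 in every bin for simplicity.
        bins = [(1.00, 300), (0.75, 250), (0.50, 200), (0.25, 150), (0.00, 100)]

        ses = [subgroup_se(0.5, n) for _, n in bins]

        # Standard error of the weight-combined estimate: the variance of a
        # weighted mean is the weight-squared sum of the per-bin variances.
        wsum = sum(w * n for w, n in bins)
        var = sum((w * n / wsum) ** 2 * subgroup_se(0.5, n) ** 2
                  for w, n in bins)
        combined_se = math.sqrt(var)
        ```

        With these made-up numbers, `combined_se` comes out smaller than even the largest subgroup's standard error — which is the whole argument: you don't need each bin to be precise, only the aggregate.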
