
In the last few posts of this series we've seen how some of the numbers for minorities in the PPP polls are unreliable. But their toplines are right in line with other polls this year and, on average, very close to actual results in prior years. So how could this be? Simple: if the toplines are right while the minority crosstabs are skewed towards Republicans, the majority crosstabs must be skewed towards Democrats to compensate. And that is indeed the case, or at least it was in 2010, when comparing to exit polls:

PPP state poll crosstabs were too Democratic for whites, but way too Republican for minorities, on average.

We see something similar with the 'minority' age groups - those under 45 - which have the worst response rates:

PPP state poll crosstabs were too Democratic for those 65+, but too Republican for younger generations, on average.
PPP leans towards Democrats among whites and older people, and it happens to lean by just the right amount to (almost) counteract the Republican skew among minorities and younger generations. Coincidence? No, not entirely.

A few days ago, I explained the deviation in geographic distribution of Asians and Hispanics with a simulation of about 9% of respondents pushing the wrong button for race. That simulation also suggests that around 2% of 'white' respondents are actually minorities who pushed the wrong button for whatever reason. That would lead to about a 2 point Democratic bias for the margin in the white numbers, assuming these minority respondents answered the ballot test questions correctly and in the same proportion as other minority respondents. What we actually see in 2010 is an average 3.6 point Democratic bias in the margin among respondents saying they are white, so a good portion of this bias could be explained by incorrect responses.
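The mechanism is easy to sketch in a small simulation. Note that every number below (the group shares, the Democratic support rates, and the assumption that a wrong button press lands uniformly on the other race options) is an illustrative assumption of mine, not the actual model from the earlier post - only the ~9% error rate comes from the text:

```python
import random

random.seed(42)

N = 100_000
ERROR_RATE = 0.09  # ~9% of respondents press a wrong button on the race question

# (race, share of electorate, P(votes Democratic)) -- illustrative numbers only
groups = [
    ("white", 0.72, 0.45),
    ("black", 0.12, 0.90),
    ("hispanic", 0.10, 0.65),
    ("asian/other", 0.06, 0.60),
]
names = [g[0] for g in groups]
shares = [g[1] for g in groups]
p_dem = {g[0]: g[2] for g in groups}

reported_dem = {r: 0 for r in names}  # Democratic votes per *reported* race
reported_n = {r: 0 for r in names}    # respondents per *reported* race

for _ in range(N):
    true_race = random.choices(names, shares)[0]
    votes_dem = random.random() < p_dem[true_race]
    if random.random() < ERROR_RATE:
        # wrong button: reported race is uniform over the other options
        reported = random.choice([x for x in names if x != true_race])
    else:
        reported = true_race
    reported_n[reported] += 1
    reported_dem[reported] += votes_dem

for r in names:
    obs = reported_dem[r] / reported_n[r]
    print(f"{r:12s} true Dem share {p_dem[r]:.2f} -> reported {obs:.3f}")
```

Running this, the reported 'white' crosstab drifts Democratic of its true value while every minority crosstab drifts Republican, because misclassified whites - the largest group - dominate the error flow into the small groups, while the small groups' errors nudge the large one only slightly. That is exactly the asymmetry in the tables above.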

It does not even out completely, as PPP's 2010 polls had a slight skew to the right (by about a point in the margin). That is likely where the lack of cell phones comes directly into play.

So, despite everything, the toplines can still be trusted (at least they could in 2010). In other words, if you try to 'adjust' a PPP poll by 'correcting' the low ballot test numbers for minorities or youth, you will likely be sorely disappointed come November. This is very likely to be true for other automated pollsters as well, and possibly to some extent polls that use live interviewers too.

Remember, though, that the ballot test numbers for youth and minorities are, indeed, incorrect. And although in theory we can still watch for trends, we must use extreme caution, at least in the Daily Kos polls: the number of respondents (50-80 per week for ages 18-29, for example) is too small to spot a typically slight trend amid all the noise. Massive changes can still be detected, however, such as the huge swing in opinion among African-Americans regarding marriage equality.
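The sample-size problem is easy to quantify with the standard margin-of-error formula for a proportion - a back-of-the-envelope sketch, assuming simple random sampling (real polls have design effects that make it somewhat worse):

```python
import math

def moe95(p: float, n: int) -> float:
    """Approximate 95% margin of error for a proportion p with n respondents."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

for n in (50, 80):
    m = moe95(0.5, n)  # p = 0.5 is the worst case
    # the MOE on the *margin* (Dem minus Rep) is roughly twice the MOE on a share
    print(f"n={n}: about ±{100*m:.0f} pts on a share, ±{100*2*m:.0f} pts on the margin")
```

With 50-80 respondents the weekly margin bounces around by 20-plus points from sampling noise alone, so a real trend of a point or two per week is invisible - but a swing of tens of points, like the one on marriage equality, still shows through.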
Beyond the Margin of Error is a series exploring problems in polling other than random error, which is the only type of error the margin of error deals with.

The Curious Incident of the Young Republican Minorities. Only a little over half of respondents in the African-American 18-29 category said they approved of Obama - but only because many of those respondents weren't actually African-American or aged 18-29. The numbers for the 18-29 age group overall are inaccurate as well.
This Is Why We Can't Have Nice Things. A small number of respondents press the wrong button when answering the DailyKos poll question on race, leading to inaccurate numbers for racial minorities in the crosstabs.
Why Don't People Know Where They Live in the DKos Poll? A small number of respondents - around 5-9% - press the wrong button when answering the geography question on the Daily Kos poll. This is far greater than can be explained by observed rates of misunderstandings or data entry errors.
Why State Polls Look More Favorable For Obama than National Polls. In the spring and summer, lack of support in Blue States was bringing down Obama's performance in national polls, while Swing States and Red States were polling about the same as 2008.
Presidential Polls Are Almost Always Right, Even When They're Wrong.  How the presidential polls in red and blue states are off, sometimes way off, and how to predict how far off they'll be.
When Polls Fail, or Why Elizabeth Warren Will Dash GOP Hopes. Why polls for close races for Governor and Senate are sometimes way off, and how to predict how far off they will be.

Comments

  •  Nice.

    So all these trends seem consistent with there just being a bit of randomness thrown into the responses, so that all subgroups regress towards 50%. Is that what's going on? And is the same thing going on with non-PPP polls?

    •  We would assume so.

      It's probably safe to say that the same thing happens to the same degree in all automated polling - if the methods are the same, the results should be the same. And if you look at SUSA crosstabs, yup, you can see something similar going on. This may also be a stumbling block for internet panels.

      I would hypothesize that live calls reduce the number of people reporting incorrect answers via social pressure. And you can reduce the ratio of incorrect/correct answers within a minority demographic by increasing the number of minority interviews - by calling cell phones or using multilingual interviewers. And indeed, we do see better crosstabs for a firm like Pew that does all these things.

  •  It is very interesting

    I think many people should take that into account.

    •  This is not coincidental, nor unique to one pollster

      This result comes from the statistical weighting procedure, which aims to balance the sample as a whole but does not balance each sub-sample to the same degree.

      Balancing the sample so that the sub-samples are also representative, with a good level of confidence, would mean larger samples and a more expensive poll.
