In my last three posts, we shook out three different pollsters' dirty linen, shining a light on the bedbugs under their sheets. How did the final results compare with their predictions and our corrections?
Pretty good, huh? The intent here is not to break my arm trying to pat myself on the back. My effort has been to perform a service - to elucidate, and perhaps educate - by deconstructing anomalous polls and showing up some of the bad pollsters. Of the three above, Susquehanna and UMass fall suspiciously into the "paid for" category (their polls were published in the right-wing Pittsburgh Trib and Boston Herald, respectively); YouGov has simply been highly variable all campaign season, probably because of their apparent lack of control over their Internet sampling. For a fuller list of polling hacks, see today's Kos front page, "2012 Polling Hall of Shame".
How we do it:
Whenever we see an "outlier" poll, we try to get behind the headlines to look at the underlying questionnaire and, more importantly, the demographics of the respondents. These are not always easy to get to (see Note 1 below), but where available, we check the demographics against two gold standards: (a) the 2008 CNN exit polls and (b) the 2010 U.S. Census Bureau data. If there are significant discrepancies between the poll and the standards, we use various methods to normalize the demographics and produce a more accurate projection.
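For the mechanically inclined, here is a minimal sketch of that comparison step in Python. Every number in it is invented for illustration - the real check plugs in the poll's published demographics and the actual exit-poll or Census figures.

```python
# Compare a poll's reported demographics against a benchmark
# (e.g., exit polls or Census data) and flag big discrepancies.
# All shares below are invented for illustration only.

poll_demo = {"white": 0.82, "black": 0.08, "hispanic": 0.06, "other": 0.04}
benchmark = {"white": 0.74, "black": 0.13, "hispanic": 0.09, "other": 0.04}

THRESHOLD = 0.03  # flag any group off by more than 3 points

for group in benchmark:
    gap = poll_demo[group] - benchmark[group]
    if abs(gap) > THRESHOLD:
        print(f"{group}: poll {poll_demo[group]:.0%} vs. benchmark "
              f"{benchmark[group]:.0%} ({gap:+.0%}) -- re-weight before trusting")
```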
I did not realize that the term "unskewing" had gained such a bad odor through self-serving Republican misuse. Perhaps "re-weighting" would be a more neutral term. The statistical principles behind it are long-standing and standard in market research. To provide an example:
Would you trust a survey on attitudes toward abortion if you knew that the respondent population comprised 40% Catholics and 40% Evangelical Protestants (roughly twice their weight in the general population)? Of course not. And yet, such undetected, and often undisclosed, "thumb on the scale" polling is published every day. That's what we try to ferret out and expose to the light of day.
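To make the arithmetic concrete, here is a minimal Python sketch of the correction itself - standard post-stratification re-weighting - using hypothetical shares and support figures in the spirit of the abortion example above:

```python
# Post-stratification: weight each group by its population share
# divided by its sample share, then recompute the topline.
# All figures below are hypothetical, for illustration only.

sample_share = {"catholic": 0.40, "evangelical": 0.40, "other": 0.20}
pop_share    = {"catholic": 0.20, "evangelical": 0.20, "other": 0.60}
pct_yes      = {"catholic": 0.35, "evangelical": 0.25, "other": 0.60}

weights = {g: pop_share[g] / sample_share[g] for g in pop_share}

raw        = sum(sample_share[g] * pct_yes[g] for g in pct_yes)
reweighted = sum(sample_share[g] * weights[g] * pct_yes[g] for g in pct_yes)

print(f"raw topline:         {raw:.1%}")         # 36.0% - skewed by the oversample
print(f"re-weighted topline: {reweighted:.1%}")  # 48.0% - population mix restored
```

In this made-up example, scaling the two oversampled groups back to their population weight moves the topline by a full twelve points - the size of distortion a headline number alone would never reveal.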
Importance of the issue:
Public polls are meant to provide a public service. They should be honest, transparent, statistically auditable and verifiable. If they are to be believed, they should provide as backup, at the very least, the questionnaire, the demographics and the cross-tabs that inform the results. The American Association for Public Opinion Research (AAPOR) Transparency Initiative lumbers along while commercial members such as egg-faced Gallup (schadenfreude here re their Romney-up-by-7% call) routinely ignore it. It is time to give them all a jolt.
(Private polls are a different matter. You gets what you pays for - as Karl Rove found out with his secret Ohio numbers - fat lot of good it did that sweathog. Did you catch his meltdown at Fox over the Ohio results, gleefully replayed on MSNBC?)
What you can do:
If public polls are not open and transparent in providing backup information:
- Write a letter to the newspaper or website carrying the survey, pointing out the omission and asking them not to publicize the pollster's results. Pollsters rely on the publicity - if that is not forthcoming, they'll change or die.
- Write to the survey organization asking for the backup. Hounding them may help change their ways.
- Send complaints to AAPOR, whether the pollster is a member or not.
- Write to AAPOR lobbying for auditing of surveys. Think of it as random drug tests -- the PED-abusing pollsters must be stopped.
Note 1: Most reputable polling organizations will provide the backup questionnaires, demographics and cross-tabs for published polls. Most university affiliates (UMass-Lowell, Suffolk, Marist, Franklin & Marshall, etc.) are good in this regard; the commercial pollsters less so. Most are signatories to the AAPOR Transparency Initiative, but notably NOT YouGov, Susquehanna, or Rasmussen and its parent Pulse Opinion, notorious for their Republican tilt. Be warned.