Armchair political analysts (including yours truly) love to play this political parlor game:
As the avalanche of data pours in during the weeks preceding the election, every poll is dutifully parsed. In particular, campaign junkies of all stripes are fond of dismissing certain pollsters as "partisan" or "biased". This, of course, usually accompanies the release of a poll favoring the party opposite the accuser's.
With the election results (almost entirely) in the bank at this point, however, decent conclusions can be drawn about the partisan proclivities of the leading pollsters in the game.
Some might note that this has already been done, predictably by politics-and-numbers guy Nate Silver.
The concept is the same (determining whether pollsters had their thumbs on the scale for one party or the other), but the methodologies of Nate's study and this one are quite different. Silver took the average number of percentage points by which each pollster's results from the last 21 days of the cycle deviated from the actual results.
There's nothing wrong with this particular method. What follows is simply a slightly different way of looking at it. Here, a pollster was given a score of "accurate" if their poll came within 3 points of the final total. If they were greater than 3 points off in the direction of the GOP, that poll was rated as biased towards the Republicans. If they were greater than 3 points off in the direction of the Democrats, that poll was rated as...well, you get the idea.
The rationale for deviating from Nate on methodology was based on a simple concern. Nate's method exacts a steep price from a pollster for a single, but very aberrant, result. For example, a pollster could err on the side of the GOP four out of five times, but if the pollster was way off on the fifth poll (but in the Democratic direction), it could more than offset those GOP-leaning polls.
Here were the parameters for this study:
- Every poll conducted (by the pollsters in question) from October 1st until Election Day was included for review. This is a big deviation from what Nate did, and it is bound to have some critics. Using multiple polls of the same race is certainly a controversial provision: a pollster gets dinged extra if they were consistently wrong on a race for which they were prolific. However, by using just the most recent poll in a race, some pollsters (Quinnipiac certainly comes to mind) would be rewarded, because early aberrant results would be left uncounted as long as they snapped back into line with their final polls. The window of time is a bit longer, as well.
- Only pollsters who offered at least a dozen polls in the period stretching from October 1st to Election Day were included in this analysis. This narrowed the participating group down to fifteen pollsters.
- Because it was hard to gauge which party benefited from errors in a few races with three legitimate leading candidates, those races were omitted from the study. As a result, polls of the gubernatorial races in Maine and Rhode Island, as well as the Senate races in Florida and Alaska, were not included. The Colorado gubernatorial race was left in, however. The justification? By October 1st, Tom Tancredo had already established himself as the de facto choice of GOP voters, despite technically being an Independent.
- The pollsters' "bias ratings" were computed, quite simply, as the difference between the percentage of their polls that erred toward one party and the percentage that erred toward the other. So, if a pollster had 43% of their polls favoring Republicans and 29% favoring Democrats, their "bias rating" would be 14. Percentages were used instead of raw poll counts so that the more prolific pollsters would not be unfairly penalized.
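The scoring rules above can be sketched in a few lines of Python. This is only an illustration of the arithmetic, not the study's actual code; the function names and the sample polls are hypothetical, and the ±3-point threshold comes from the text.

```python
# Sketch of the poll-scoring rules described above.
# A poll's error is (predicted margin) - (actual margin), where margins
# are expressed as (R% - D%), so a positive error leans Republican.

def classify(predicted_margin: float, actual_margin: float,
             threshold: float = 3.0) -> str:
    """Label a single poll as accurate, R-leaning, or D-leaning."""
    error = predicted_margin - actual_margin
    if abs(error) <= threshold:
        return "accurate"
    return "R" if error > 0 else "D"

def bias_rating(labels: list) -> float:
    """Gap between the percentage of R-leaning and D-leaning polls."""
    pct_r = 100 * labels.count("R") / len(labels)
    pct_d = 100 * labels.count("D") / len(labels)
    return abs(pct_r - pct_d)

# Hypothetical pollster: 7 polls as (predicted_margin, actual_margin) pairs.
polls = [(5, 1), (4, 0), (-2, -1), (8, 3), (0, -4), (-6, -2), (2, 2)]
labels = [classify(p, a) for p, a in polls]
print(labels)               # ['R', 'R', 'accurate', 'R', 'R', 'D', 'accurate']
print(bias_rating(labels))  # |57.1% R - 14.3% D| ≈ 42.9
```

Note that the rating uses percentages of a pollster's own output, so a firm that released 161 polls and one that released 14 are judged on the same scale.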
So, how did the pollsters of America do? The results might surprise you. But first, a brief caveat: a pollster leaning toward one party or another should not necessarily be construed as an accusation of intentional behavior by said pollster. It could be (and, in fact, is more likely than not) simply the result of a likely voter screen that was either too restrictive or not restrictive enough. There are also a couple of key factors that help to explain the top two names on this particular list. Look for those in the commentary that follows the list.
With that caveat in mind, here is the list, in order from the most biased pollster to the least biased pollster:
POLLSTER BIAS RATINGS: 2010 ELECTION CYCLE--10/1/10 to 11/1/10 (Number of polls conducted in parentheses)
- Merriman River (18)--Bias rating of 83 (83% R 0% D)
- Penn Schoen Berland (42)--Bias rating of 57 (67% D 10% R)
- Ipsos/Reuters (12)--Bias rating of 50 (58% R 8% D)
- Susquehanna Research* (15)--Bias rating of 46 (53% D 7% R)
- Siena College (14)--Bias rating of 43 (50% D 7% R)
- Rasmussen* (161)--Bias rating of 33 (49% R 16% D)
- CNN/Op Research (28)--Bias rating of 32 (46% R 14% D)
- Suffolk University (13)--Bias rating of 30 (38% D 8% R)
- YouGov (33)--Bias rating of 24 (39% R 15% D)
- SurveyUSA (69)--Bias rating of 24 (46% R 22% D)
- Quinnipiac (26)--Bias rating of 20 (35% R 15% D)
- PPP (59)--Bias rating of 19 (39% R 20% D)
- Public Opinion Strategies (22)--Bias rating of 13 (36% D 23% R)
- Mason Dixon (28)--Bias rating of 11 (43% D 32% R)
- Monmouth University (14)--Bias rating of 8 (29% R 21% D)
(*)--"Pulse Opinion Research" polls conducted for Fox News were included here, because that firm is affiliated with Rasmussen. The same type of affiliate relationship existed for the work that Susquehanna did for the Sunshine State News in Florida.
A few observations/takeaways from the list:
- Merriman River shouldn't necessarily be wholly acquitted for pacing the field, but there is a legitimate explanation for their gaudy numbers here. The pollster only did work in two states: Connecticut and Hawaii. Those happened to be two of the few states in the Union where the GOP badly underperformed. That doesn't totally excuse their numbers, however: they did have Republican Sam Caligiuri leading Democratic Congressman Chris Murphy by eight points on Election Eve. Murphy wound up winning re-election by over eight points.
- House races are notoriously difficult to poll, and so that partially acquits another pollster on the medal stand. Penn Schoen Berland only polled House races during this cycle, which might explain their performance a bit. Other pollsters near the top (Ipsos, Rasmussen) polled nothing but statewide races, so they have less of an excuse.
- The most amazing performance here may well be from Public Opinion Strategies. Kossacks who regularly digested the nightly Polling Wraps routinely read comments dismissing the GOP pollster by their initials (P.O.S.). But not only were they in the bottom third on the bias question, they actually erred MORE in the direction of the Democrats than they did in the direction of the GOP. An amazing outcome, given that they are a Republican pollster. Now, P.O.S. did have some whoppers (Charlie Baker up 7 in MA-Gov, Matt Doheny up 14 in NY-23), but they also were the canary in the coal mine on a couple of races (MN-08, AL-02).
- Our own polling partner, PPP, also landed in the least-biased third of the polling crew for this cycle, and actually were more likely to err in the GOP's direction than in the Democratic direction. Perhaps, given the performances of both PPP and Rasmussen in this polling cycle, the press will finally cease referring to PPP in every report as a "liberal" or "Democratic" pollster. Meanwhile, they might also see fit to, in a long overdue gesture, classify Rasmussen as a "Republican" pollster. They certainly earned it this cycle.
One final set of data to peruse. Let's give a well-earned shout-out to the pollsters that managed to get closest to the pin. What follows is the top five list of pollsters who were rated "accurate" (within 3 points of the final result in either direction) most often, as a percentage of their total polls:
- Suffolk University--54%
- Monmouth University--50%
- Siena College--43%
For those wondering, PPP joined Public Opinion Strategies just outside of the top five, at 41%. Rasmussen was several points back at 35%. Here, as with the bias question, Merriman River came in with the worst performance (17%).
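The accuracy figure here is just the share of a pollster's polls that landed within the ±3-point window. A minimal sketch of that arithmetic, using hypothetical margin errors rather than any real pollster's data:

```python
# Accuracy share: percentage of polls whose margin error fell within
# the ±3-point "accurate" window described above.
def accuracy_pct(errors: list, threshold: float = 3.0) -> int:
    """Percent of polls within the threshold, rounded to whole points."""
    hits = sum(1 for e in errors if abs(e) <= threshold)
    return round(100 * hits / len(errors))

# Hypothetical pollster: margin errors (predicted - actual) for 13 polls.
errors = [1, -2, 4, 0, 3, -5, 2, -1, 6, -3, 2, 8, -2]
print(accuracy_pct(errors))  # 9 of 13 polls within ±3 points → 69
```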