
Scott Rasmussen on Fox News
Scott Rasmussen: "Independent" pollster who errs on side of the GOP 81 percent of the time.

Now that the results are official pretty much everywhere (New York is a fairly important holdout, though with an obvious rationale for the tardy count), we can finally do a more thorough examination of how America's pollsters fared in the 2012 electoral sweepstakes.

Yes...yes, I realize that this has already been done in a variety of ways elsewhere, but I decided to add my own spin to it. Given my background (I am a polls guy, but from the political angle, not necessarily the math angle), I decided to do a very Algebra I approach to grading the pollsters.

Here's how it worked:

1. I made two lists of pollsters. The first list was every pollster that released polling in at least five separate races (not counting national polls). That wound up being a grand total of 34 different pollsters. Then I did a secondary list, which was the "major pollsters" list. Here, I excluded two groups: pollsters who primarily worked for campaigns, and pollsters that only worked in 1-2 states. This left us with a list of 17 "major" pollsters.

2. I then excluded duplicate polls. Therefore, pollsters were only assessed by their most recent poll in each race. Only polls released after October 1st were considered in the assessment process.

3. I graded each of the pollsters on three criteria:

  • The first criterion was a simple one: in how many contests did the pollster pick the correct winner? If the pollster forecasted a tie, that counted as one-half of a correct pick. I then rounded to the nearest whole percent, for a score between 0 and 100.
  • The second criterion was a simple assessment of error. I rounded each result to the nearest whole number, did the same with the polling results, and then calculated the difference. For example, if the November 5th PPP poll out of North Carolina was 49-49, and Romney eventually won 50-48, the "simple error" would be two points.

    I then gave each pollster an overall "error score" based on how little average error there was in their polling. The math here is painfully simple. No error at all would yield 100 points, while an average error of ten points would get you zip, zero, nada. By the way, if you think 10 points was too generous, bear this in mind: two GOP pollsters had an average error in 2012 of over ten points.

    To restate: for every tenth of a point of average error, I deducted one point from the 100-point perfect score. Therefore, the clubhouse leaders on this measurement (a tie between Democratic pollsters Lake Research and the DCCC's own in-house IVR polling outfit), with an average error of just 2.0 percent, earned a score of 80.

  • The third measurement sought to reward those who did not show a strong partisan lean. This was called the "partisan error" score. Here, we took the error number from criterion two and added an element. The question: did the pollster overestimate the Democratic performance, or the Republican one? The total number of points on the margin for each party was added up, and then the difference was taken. That was then divided by the number of polls. This led to a number that (usually) was lower than the "error" score, because a good pollster won't miss in favor of just one party every single time.

    Interestingly, virtually every pollster had an average error that overestimated the performance of the GOP. This echoes the national polls we saw, which tended to lowball the lead that President Obama held over Mitt Romney.

    For this criterion, the 0-100 score was calculated the same way. For example, Rasmussen, on average, erred in favor of the GOP by 3.5 percent (you'd have thought it'd be higher, but they had a couple of big misses in blowouts like the North Dakota gubernatorial election, which muted their GOP lean). Therefore, their "partisan error" score would be 65.
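For readers who want to check my arithmetic, the whole scheme fits in a few lines. Here is a minimal sketch, in Python, of the three scores as I've described them; the function and its inputs are hypothetical, and each poll is entered as whole-number rounded results for both the poll and the final outcome:

```python
# A sketch of the three-part grading scheme described above.
# Each poll is (poll_dem, poll_rep, actual_dem, actual_rep),
# all rounded to whole points, as in the write-up.

def grade(polls):
    n = len(polls)
    picks = 0.0
    abs_error = 0.0     # criterion two: unsigned miss on the margin
    signed_error = 0.0  # criterion three: positive = Dems overestimated
    for poll_d, poll_r, actual_d, actual_r in polls:
        poll_margin = poll_d - poll_r
        actual_margin = actual_d - actual_r
        if poll_margin == 0:
            picks += 0.5                          # a forecast tie is half credit
        elif (poll_margin > 0) == (actual_margin > 0):
            picks += 1.0                          # called the right winner
        miss = poll_margin - actual_margin
        abs_error += abs(miss)
        signed_error += miss
    winner_score = round(100 * picks / n)
    # One point off per tenth of a point of average error, floored at zero.
    error_score = max(0, round(100 - 10 * (abs_error / n)))
    partisan_score = max(0, round(100 - 10 * abs(signed_error / n)))
    return winner_score, error_score, partisan_score

# The PPP North Carolina example from above: a 49-49 poll, Romney won 50-48.
print(grade([(49, 49, 48, 50)]))  # -> (50, 80, 80)
```

Note how a single two-point miss yields identical "error" and "partisan error" scores, but a pollster whose misses fall on both sides of the ledger sees the partisan score rise toward 100 while the error score does not.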

So, how did the pollsters fare in 2012? The best, and worst, performances among the major pollsters might surprise you.

(UPDATE: The link to the GoogleDoc with the data and the "grades" for the pollsters should be fixed now. Apologies to those who tried to view it in the first hour.)


First, among the seventeen "major pollsters", here is who made the top five:

1. Pharos Research: 267 points
100 points on picking winners (Overall record: 18-0-0)
73 points on "error" score (Average error: 2.7 percent)
94 points on "partisan error" score (Average error: Democrats +0.6)

2. Ipsos/Reuters: 255 points
86 points on picking winners (Overall record: 6-1-0)
76 points on "error" score (Average error: 2.4 percent)
93 points on "partisan error" score (Average error: Republicans +0.7)

3. NBC News/Marist: 243 points
100 points on picking winners (Overall record: 14-0-0)
69 points on "error" score (Average error: 3.1 percent)
74 points on "partisan error" score (Average error: Republicans +2.6)

4. Angus Reid: 242 points
95 points on picking winners (Overall record: 9-0-1)
72 points on "error" score (Average error: 2.8 percent)
75 points on "partisan error" score (Average error: Republicans +2.5)

5. Public Policy Polling: 239 points
96 points on picking winners (Overall record: 48-1-2)
64 points on "error" score (Average error: 3.6 percent)
79 points on "partisan error" score (Average error: Republicans +2.1)

Everyone except the most devoted Polling Wrap devotees might be asking the same question: who the hell is Pharos Research? The pollster caught some eyes in October, when they began a weekly series of polls in about a half dozen states. Their late start to the game aroused some skepticism. Their head honcho, Steve Leuchtman, even had some correspondence with our own David Nir, who sought to figure out who this new firm was. Nate Silver was even more skeptical, leaving Leuchtman to argue that the results would "speak for themselves."

To his credit, they did. They hit all 18 of the races that they polled, including the razor-thin Florida presidential race and the North Dakota Senate race. Their average error was relatively small, and their "partisan error" was among the smallest of any polling outfit. Of course, part of the reason why this was the case is that they were easily the most Democratic leaning of the "major pollsters". In a Democratic year, that paid off for them.

Their biggest miss, as it happened, was one of their most high-profile polls: the Nebraska Senate race. They weren't alone on that ledge, however, as the Omaha-World Herald poll also badly overstated Democrat Bob Kerrey's chances.

Ipsos/Reuters and Angus Reid, by the way, used internet-based samples. The early failed experiment with internet sampling (the notorious "Zogby Interactive" polls) besmirched the entire genre. But their numbers this year were solid, and a third 'net based pollster (YouGov) finished just outside of the top five.

A word about PPP. Their performance this year was, as always, awesome. What dinged their numbers a bit here was one simple fact: unlike everyone who finished ahead of them, they polled individual House districts. These races are often much more perilous to poll. Their only miss (out of 51 races!) was in a House race, where a private poll they conducted one week out from Election Day gave incumbent Republican Frank Guinta a one-point lead in New Hampshire (he wound up losing to Democrat Carol Shea-Porter by a 50-46 margin). The firm's average error in House races was 4.5 percent, considerably higher than their average error in the statewide races (3.47 percent).

They did have a couple of big misses: they gave Claire McCaskill a slight lead in a race that she eventually won by a wipeout, and they underestimated Obama's blowout in Massachusetts by nearly a dozen points. All in all, though, another amazing effort by the crew out of North Carolina. Here is a stat to consider: PPP got within four points or less of the final result in a whopping 73 percent of the races they polled.

Now, for the bottom five:

1. American Research Group: 121 points
39 points on picking winners (Overall record: 3-5-1)
41 points on "error" score (Average error: 5.9 percent)
41 points on "partisan error" score (Average error: Republicans +5.9)

2. University of New Hampshire: 168 points
71 points on picking winners (Overall record: 4-1-2)
46 points on "error" score (Average error: 5.4 percent)
51 points on "partisan error" score (Average error: Republicans +4.9)

3. Mason Dixon: 173 points
75 points on picking winners (Overall record: 15-6-1)
43 points on "error" score (Average error: 5.7 percent)
55 points on "partisan error" score (Average error: Republicans +4.5)

4. Gravis Marketing: 187 points
90 points on picking winners (Overall record: 16-1-2)
46 points on "error" score (Average error: 5.4 percent)
51 points on "partisan error" score (Average error: Republicans +4.9)

5. Rasmussen Reports: 199 points
85 points on picking winners (Overall record: 34-5-3)
49 points on "error" score (Average error: 5.1 percent)
65 points on "partisan error" score (Average error: Republicans +3.5)

Oh, mama. Man, did the "pirate pollster" have a shitty year. Remember that this was a year in which only two presidential states were decided by less than five points. Ergo, picking "winners" in this cycle should've been a cakewalk. ARG couldn't even bat .500, for crying out loud. They missed the presidential winner in Colorado, Florida, Iowa and Virginia. Add a big miss in the New Hampshire gubernatorial race (where Democrat Maggie Hassan eventually won by double digits), and you have an ugly cycle for the Pirate.

Mason Dixon might've only earned the bronze medal of dishonor, but they definitely earn special recognition for their crappiness in 2012. They had more "losses" than anyone in terms of picking winners with their polls: they missed on six races, out of just 22 races polled. Remember, PPP had just one miss, despite polling more than twice as many races!

For the House of Ras, meanwhile, it could have easily been worse. They had a pair of misses where they overestimated the Democratic performance by a solid margin (the North Dakota gubernatorial election and the New Mexico Senate race). Take those two out of the mix, and their average error in favor of the GOP would have been considerably higher.

Looking at the larger list, the outcome is incredibly predictable. The Democratic private pollsters, by and large, did quite well. The Republican private pollsters took a bath. The worst two were a pair of GOP pollsters that did so badly, they actually earned goose eggs on the error and partisan error scores. In other words, their results favored the GOP by more than ten percentage points.

The "winner" for the worst performance was the GOP outfit OnMessage, which polled six races and correctly forecast just one winner. Their average error was an eye-popping 11.5 percent, and all six of their polls overestimated the performance of their GOP candidate.

Of course, the explanation for this is likely a simple case of selection bias. It is entirely possible that OnMessage and other GOP pollsters conducted tons of surveys that were more accurate. And it is equally possible that said surveys never saw the light of day, because campaigns aren't in the business of releasing numbers showing them getting smooshed.

Therefore, the presence of Democratic pollsters in the top ten, and the presence of GOP pollsters in the bottom ten, likely has less to do with the inherent quality of the pollster in question and more to do with the fact that this cycle, generally speaking, sucked for Republicans.

The exception, to pile on yet again, is Rasmussen. Lest we forget, their performance sucked in 2010, which was as good a year as you are likely to see for the Republican Party. So whether their preferred party is in or out of favor, the firm has laid eggs over the past two cycles.

For the entire list, and the polling data used to arrive at those figures, click here. A word of warning: while I took great pains to collect every poll I could get my hands on, day in and day out, I freely concede that a poll or two might've slipped through the cracks. That said, the overwhelming majority of polling data for the 2012 cycle did make it into our database, which was the basis for the numbers used in this rating system.

Furthermore, this is just one way to measure pollsters. I chose three criteria, but there are certainly tons of other ways to assess the quality of the numbers guys. So, to beat a cliché into the ground, your mileage may vary. Enjoy, and argue at will in the comments.

Originally posted to Daily Kos Elections on Sun Dec 30, 2012 at 07:59 AM PST.

Also republished by Daily Kos.


  •  An interesting overview Steve, but . . . (13+ / 0-)

    You say:

    [T]he presence of Democratic pollsters in the top ten, and the presence of GOP pollsters in the bottom ten, likely has less to do with the inherent quality of the pollster in question and more to do with the fact that this cycle, generally speaking, sucked for Republicans.
    It seems you are implying that Democratic pollsters should be expected to show better results for Dems and GOP pollsters the same for Republicans. If that's true, it discredits polling. A good Democratic pollster and a good Republican pollster should not be expected to always come up with the same results (polling doesn't work that way), but they should not consistently lean in one partisan direction or another. If they do, they are both doing something wrong.

    Ok, so I read the polls.

    by andgarden on Sun Dec 30, 2012 at 08:06:52 AM PST

    •  They probably are doing something wrong, and (3+ / 0-)
      Recommended by:
      Notreadytobenice, RUNDOWN, Jeff Y

      you can figure that out as soon as you don't raise an objection to the existence of "Democratic" and "Republican" pollsters.  Otherwise, they'd  just be pollsters.

      LG: You know what? You got spunk. MR: Well, Yes... LG: I hate spunk!

      by dinotrac on Sun Dec 30, 2012 at 08:10:51 AM PST

      [ Parent ]

      •  What happened to Gallup? (3+ / 0-)
        Recommended by:
        Loge, RUNDOWN, Jeff Y

        I watched the polls closely this cycle and I noticed that Rasmussen Reports and Gallup seemed to always be the Republican outlier polls but Gallup didn't appear in the list.

        I was wondering about their overall performance?

      •  On the contrary (9+ / 0-)

        If I do polling mostly for Democrats, I can be thought of as a "Democratic" pollster. But it doesn't follow that my results should lean in any particular way.

        Ok, so I read the polls.

        by andgarden on Sun Dec 30, 2012 at 08:19:51 AM PST

        [ Parent ]

        •  Shouldn't it? (1+ / 0-)
          Recommended by:

          Two things come to mind:

          If you poll mostly for Democrats,

          1. Democrats hire you for a reason, and/or
          2. You work for Democrats for a reason

          When polls are constructed by machines with no human input, I will believe that your clientele has no effect.

          Until then, even the best and most conscientious human beings are, for better or worse, human beings.

          LG: You know what? You got spunk. MR: Well, Yes... LG: I hate spunk!

          by dinotrac on Sun Dec 30, 2012 at 08:23:33 AM PST

          [ Parent ]

          •  Unlike, for example, redistricting (4+ / 0-)
            Recommended by:
            dinotrac, MadEye, coigue, James Allen

            there should not be any reason for Republicans and Democrats to make different assumptions or choose different methodology. That they apparently do with some frequency brings an undeserved ill-repute to the science of polling.

            Ok, so I read the polls.

            by andgarden on Sun Dec 30, 2012 at 08:54:10 AM PST

            [ Parent ]

            •  The science of polling has the same problem as (1+ / 0-)
              Recommended by:

              the science of economics: Those darned people!

              Lots of wonderful statistical rigor in science (and, for that matter, in economics).

              A completely objective person could do wonderful things.

              So long as we have to rely on polling questions instead of reading minds, there will always be a little room for error beyond what the statistics say.

              LG: You know what? You got spunk. MR: Well, Yes... LG: I hate spunk!

              by dinotrac on Sun Dec 30, 2012 at 09:38:55 AM PST

              [ Parent ]

            •  There are many reasons to make assumptions (2+ / 0-)
              Recommended by:
              RUNDOWN, dinotrac

              For one thing, unless you are polling the entire population, i.e., all eligible voters, all registered voters, etc., you have to build a sample of just a portion of the population. You try to make that sample reflect the essential makeup of the population you're polling, but you are limited to a pretty small percentage of the overall population (again for many reasons) and so it is going to tend to be a more impressionistic reflection rather than a photo-realistic reflection. Therefore, you make assumptions about the ways in which your sample either varies from the population, or accurately reflects it. The accuracy of those assumptions is often what makes the poll more or less accurate.

              •  meh, a pollster should predict who will turn out, (0+ / 0-)

                but there's not a lot of reason to do a baseline number that represents particularly favorable turnout, they can just adjust the numbers for that to show their clients, along with less favorable turnout numbers.

                ...better the occasional faults of a government that lives in a spirit of charity, than the consistent omissions of a government frozen in the ice of its own indifference. -FDR, 1936

                by James Allen on Sun Dec 30, 2012 at 06:22:27 PM PST

                [ Parent ]

          •  Another thing comes to mind (5+ / 0-)

            If I'm a politician and honestly trying to win an election, I want the most accurate, unbiased polling results I can get, so that I can shade my campaign to the best effect. A poll that is biased in my favor does me no good whatever. It may make the base feel warm and fuzzy and give the media things to debate about, but it doesn't help me win the campaign.

            I may poll mostly for Democrats either because I am connected into the party network, I prefer their policy foundation or I detest the Republican policy foundation, but none of that means that I am going to automatically bias my poll. If I am good at polling and interested in an accurate result, I design a poll that provides accurate answers, not rosy results. And knowing my personal proclivities, I build in checks to the sampling and questions that mitigate any bias.

            Perhaps you don't think that a conscientious pollster exists. But in fact people can, with intent, overcome bias and achieve honest and accurate results, and people can desire and demand honest and accurate results because they actually want to gain a grasp of reality rather than a rosy fortune-teller forecast.

            •  You are not going to bias your poll, but your poll (1+ / 0-)
              Recommended by:
              DSPS owl

              may be biased because of you.

              Bias requires no conscious effort. It is the end result of our beliefs and our experiences -- it infuses the way we think and the way we look at things. We can consciously try to correct for our bias, but we are incapable of escaping it.  Likely as not, we don't even see it.

              To consciously manipulate your polls for a certain result is not an act of bias -- it is an act of dishonesty or propaganda, though bias may be a motivator.

              LG: You know what? You got spunk. MR: Well, Yes... LG: I hate spunk!

              by dinotrac on Sun Dec 30, 2012 at 09:57:49 AM PST

              [ Parent ]

              •  You seem to be missing the point (0+ / 0-)

                willfully or not, the point being that you must manipulate your polls in order to achieve accuracy - if you don't correct for error in sampling and question design, you will not be accurate and your results will be skewed, perhaps by bias, perhaps by ineptness.

                The other point being that you can, in fact, correct for pollster bias and honest pollsters do. The dishonesty is in skewing polls for a particular result, not correcting to achieve accuracy.

                An opinion or a bias does not automatically make people incapable of honesty and it does not render all polls skewed away from accuracy. The simple fact of a poll getting consistently accurate results sort of underlines that fact.

                •  I understand that very well. (0+ / 0-)

                  I also understand about correcting for bias.
                  And, maybe -- maybe -- you'll even get it right a decent part of the time.

                  This is not about honesty. That is something different.  This is about people.  Unless, of course, you believe pollsters are somehow exempt from the frailties of humanity.  That would be dishonest.

                  LG: You know what? You got spunk. MR: Well, Yes... LG: I hate spunk!

                  by dinotrac on Sun Dec 30, 2012 at 06:48:40 PM PST

                  [ Parent ]

              •  Not all eyes are closed. (0+ / 0-)

                Avoiding unconscious bias is something professional pollsters (and others after the truth, like scientists and traditional journalists) learn as a specific skill. It takes a particular set of practices and a certain personality to do well, but many people know how to get the correct answer regardless of their own preference.

                The attitude you express is common in news reporting today, and among conservative and evangelical propagandists. It's not surprising to see it repeated here. It would depress me horribly to think that there is no escape from such mendacity.

                •  Yes, I understand that. (0+ / 0-)

                  I also believe that the attitude you express -- an unwavering faith that the calculations will fix it all -- is more than a little bit naive.

                  It's all worth doing.
                  Good people do the best they can.
                  That, however, is all they can do.

                  LG: You know what? You got spunk. MR: Well, Yes... LG: I hate spunk!

                  by dinotrac on Sun Dec 30, 2012 at 06:51:25 PM PST

                  [ Parent ]

        •  It's about the results you release (1+ / 0-)
          Recommended by:
          De Re Rustica

          If you are hired to do internal polls, you can definitely be accurate. The problem is, your most accurate polls may show bad news for the candidate and the results never leave their campaign office. Now, if they go and brag and release your results, they're obviously favorable. This could harm your perceived accuracy heavily, as only polls favorable to the candidates that hire you ever see the light of day (despite being appreciated by the candidate either way, as all results show information they need to know).

          If you were typically contracted by Republican candidates this year, most of the results of yours that ended up published were probably your least accurate ones, because the ones that were accurate were left in the campaign HQ, never to be seen by the public.

    •  Was explained above. (10+ / 0-)

      With partisan pollsters, there are sampling errors not in the samples of voters within the polls, but in the sample of the polls that are released publicly.  When these pollsters are hired by a campaign to conduct a poll, they release the results to the campaign, and then the campaign releases the results to the public, but only if they choose to.  Therefore, a partisan pollster may have a "lean" not because their underlying methodology is biased, but simply because only the polls that are favorable to the party by whom they are hired are released publicly and thereby get into this sort of analysis.  That sort of "bias" is not necessarily a sign of a bad pollster.

      PPP is kind of an exception here since they conduct both independent polls and polls for which they are hired by campaigns or outside Democratic interest groups (and the partisan "interest groups" like the Sierra Club or whoever else PPP polled for are less likely to withhold results when they are poor for their side than actual campaigns).

    •  The issue with partisan pollsters... (6+ / 0-)
      Recommended by:
      IM, MKinTN, 1BQ, MrLiberal, SaoMagnifico, DSPS owl

      ...is discussed in the diary.

      And it's pretty straightforward -- if you're a Democratic or Republican polling outfit, you're doing your polls for specific political campaigns, who can decide whether or not the numbers from a specific poll should be released as public information or held inside the campaign.

      Since no campaign is going to release internal data that makes them look bad, they're going to be very selective about which results they release.  So the consequence is that even if a partisan polling outfit does a very good job, their numbers may appear to be skewed based on the selective releasing of their data by their customers.

      It seems you are implying that Democratic pollsters should be expected to show better results for Dems and GOP pollsters the same for Republicans. If that's true, it discredits polling.

      Not at all.  Remember that the issue is that we're only seeing a small amount of the work done by these pollsters, and what is released may not be representative of their overall work.

      Political Compass: -6.75, -3.08

      by TexasTom on Sun Dec 30, 2012 at 08:45:30 AM PST

      [ Parent ]

      •  That is a very good argument (3+ / 0-)
        Recommended by:
        TexasTom, winsock, coigue

        for ignoring selective releases from campaigns and committees in this kind of roundup.

        Ok, so I read the polls.

        by andgarden on Sun Dec 30, 2012 at 08:52:09 AM PST

        [ Parent ]

        •  If not ignoring (0+ / 0-)

          at least assuming a selective bias towards the poll's sponsor.

          Things work out best for those who make the best of the way things work out.

          by winsock on Sun Dec 30, 2012 at 09:09:19 AM PST

          [ Parent ]

        •  That's Why I Did, By And Large... (5+ / 0-)
          Recommended by:
          MKinTN, 1BQ, MrLiberal, coigue, andgarden

          I focused on the 17 pollsters who essentially did not poll for campaigns (PPP did a little campaign work, but the bulk of the work was released publicly by the firm).

          In the Google document (which I just opened up, thought I had shared it earlier...sorry!), you can see the other pollsters, as well. But I intentionally based most of my piece on the public pollsters, for that very reason.

          "Every one is king when there's no one left to pawn" (BRMC)
          Contributing Editor, Daily Kos/Daily Kos Elections

          by Steve Singiser on Sun Dec 30, 2012 at 09:16:28 AM PST

          [ Parent ]

        •  I think they're useful... (0+ / 0-)

          In parsing a campaign's messaging. When Richard Carmona's campaign started putting out internals showing him close on Rep. Flake's heels, I knew he was for real and that he thought he could win if the race got enough attention. Similar thing with now-Sen.-elect Heidi Heitkamp when her campaign released internals showing her tied and leading Rep. Rick Berg.

          Keeper of the DKE glossary. Priceless: worth a lot; not for sale.

          by SaoMagnifico on Sun Dec 30, 2012 at 12:03:59 PM PST

          [ Parent ]

    •  No, no.... (8+ / 0-)

      What I am saying is that Democratic CAMPAIGNS were more willing to release their pollster's work in a good year. We did see more Dem releases than GOP ones this year, on balance.

      I am also saying, as I did in the piece, that selection bias cuts both ways. Why did the GOP pollsters look so far off the fairway in some cases? Because their clients only released the one poll in 5 (or 10...or 15...or 20) that looked good for their campaign.

      "Every one is king when there's no one left to pawn" (BRMC)
      Contributing Editor, Daily Kos/Daily Kos Elections

      by Steve Singiser on Sun Dec 30, 2012 at 09:14:18 AM PST

      [ Parent ]

    •  No, it's quite simple. (0+ / 0-)

      Some outfits that call themselves "pollsters" are actually PR flacks.  Their business isn't accurately assessing the electorate's mood, it's generating buzz and momentum in a particular direction.

      Take a careful look at the GOP primaries.  Remember how it was like Whack-A-Mole?  Every month a new superstar emerged at the top, starting with Michele Bachmann.  (Yes, they were that crazy.)  If you then dig through the polling data you'll find that each one of these bust-out candidates first showed unusual strength in a Rasmussen poll, which quickly snowballed because Ras floods the field with data, so any reputable poll-averager (like Nate Silver) would show exactly the momentum Rasmussen had engineered.

      Was Rasmussen a shill?  If you believe that all pollsters are honest and trustworthy, this question will have no meaning for you.  I for one have seen enough evidence to convince me that the field of polling is littered with fake outfits whose purpose has nothing to do with polling.  And 99% of them shill for the GOP.

      I'm not sixty-two—I'm fifty-twelve!

      by Pragmatus on Sun Dec 30, 2012 at 09:37:08 AM PST

      [ Parent ]

    •  Their overall polling could be good. (1+ / 0-)
      Recommended by:

      There could be a bias in which polls are released, though.

      "Michael Moore, who was filming a movie about corporate welfare called 'Capitalism: A Love Story,' sought and received incentives."

      by Bush Bites on Sun Dec 30, 2012 at 09:54:33 AM PST

      [ Parent ]

    •  I think main explanation for this is at end (0+ / 0-)

      of his post.  If you don't release all of the polls you conduct, you may have kept ones in the hopper that were pretty accurate.  But since you're hired by the Dem or GOP candidate they don't want you to release them.

      The only thing we have to fear is fear itself - FDR. Obama Nation. -6.13 -6.15

      by ecostar on Sun Dec 30, 2012 at 10:01:36 AM PST

      [ Parent ]

    •  We may not necessarily see the entire scope of it. (0+ / 0-)

      Campaigns that hire partisan pollsters tend to leak the good polls and sit on the bad ones.  So it does not seem THAT surprising that campaigns that have actual good news to tell (as opposed to ones desperately grasping at straws to spin a story about a comeback that may never arrive) will have their pollsters naturally do better.

      Of course, that is not the entire story.  There are partisan groups (of which we are part right now) that will try to do objective polling, or that will at least allow bad news to leak on other matters as long as the matter they are polling on looks fine.

      So there is a chance some of these partisan firms were doing their job well, and in a lot of cases we just never got to know it.

      The lady was enchanted and said they ought to see. So they charged her with subversion and made her watch TV -Spirogyra

      by Taget on Sun Dec 30, 2012 at 02:46:15 PM PST

      [ Parent ]

  •  I laughed at Rasmussen being described (10+ / 0-)

    as an independent pollster.

  •  To err is Human (6+ / 0-)

    To Really foul up requires a GOP Pollster with a Computer

    As the Elites Come Together to Rise Above to Find a Third Way to do Rude things to the 99%

    by JML9999 on Sun Dec 30, 2012 at 08:11:19 AM PST

  •  Now that I'm done cringing at your description (1+ / 0-)
    Recommended by:

    of your methodology, let me just say:

    Ooh! Ooh! Ooh!

    Never mind that all polls are not created equal, for a variety of reasons (different populations with different attitudes toward pollsters, different sample sizes, etc., etc., etc.); the thought that a single North Dakota poll could significantly affect a pollster's rating just seems wrong on its face.

    In the world of real hard serious statistical studies (an area in which my expertise is sufficient to calculate the change when I buy a cup of coffee) there are notions like normalization and percentiles and other goodies to keep any one datapoint from having an outsized effect.

    Averages are actually very good and very useful numbers -- I'm just not sure this is an "average" kind of analysis.

    LG: You know what? You got spunk. MR: Well, Yes... LG: I hate spunk!

    by dinotrac on Sun Dec 30, 2012 at 08:17:08 AM PST

    •  An Excellent Point... (5+ / 0-)

      And when I did this last time around, I calculated it a little bit differently. I gave the pollsters credit if they got within three points, either way. Then I calculated how many times they were off by over three points in the direction of the Dems, and how many times they were off by over three points in the direction of the GOP. That kept one bad result (like the ND Gov) from mucking up the curve.

      The problem, in this cycle? Almost everyone erred in the direction of the GOP. Had I used this method, almost every pollster would have looked the "same", since virtually all of them were likely to overestimate in the direction of the Republicans.

      So, with that club out of my bag, I just went with the simple average, even though I knew it would come with some liabilities.

      "Every one is king when there's no one left to pawn" (BRMC)
      Contributing Editor, Daily Kos/Daily Kos Elections

      by Steve Singiser on Sun Dec 30, 2012 at 09:20:24 AM PST

      [ Parent ]
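A minimal sketch of the bucketed scoring described in the comment above — credit for any poll within three points, with larger misses counted by direction. The function name, the threshold default, and the sample data are illustrative, not drawn from the diary's actual spreadsheet; errors are assumed to be signed as poll margin minus actual margin, Dem minus GOP:

```python
def bucket_errors(polls, threshold=3):
    """Classify each poll's signed error (poll margin minus actual
    margin, both Dem minus GOP) as within the threshold, a miss
    toward the Dems, or a miss toward the GOP."""
    counts = {"within": 0, "dem_miss": 0, "gop_miss": 0}
    for poll_margin, actual_margin in polls:
        error = poll_margin - actual_margin
        if abs(error) <= threshold:
            counts["within"] += 1
        elif error > 0:
            counts["dem_miss"] += 1   # overestimated the Democrat
        else:
            counts["gop_miss"] += 1   # overestimated the Republican
    return counts

# Three hypothetical polls: one near miss, two GOP-direction misses.
print(bucket_errors([(1, 2), (-5, 1), (-2, 6)]))
# -> {'within': 1, 'dem_miss': 0, 'gop_miss': 2}
```

As the comment notes, in a cycle where nearly every miss lands in the "gop_miss" bucket, this scheme stops discriminating between pollsters, which is why the diary fell back on the simple average.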

      •  It's a lot easier to criticize than it is to do! (2+ / 0-)
        Recommended by:
        jiffykeen, Steve Singiser

        I wouldn't have the nerve to even try ranking them.
        Much safer my way.
        Not as much fun, but -- hey! I get to carp at a distance and not worry about the shape of my nose!!!!!

        LG: You know what? You got spunk. MR: Well, Yes... LG: I hate spunk!

        by dinotrac on Sun Dec 30, 2012 at 09:41:44 AM PST

        [ Parent ]

  •  "criteria" is a plural word (4+ / 0-)
    Recommended by:
    Mother Mags, TFinSF, jncca, James Allen


    -The first criterion was a simple one...
    -The second criterion was a simple assessment of error.
    -For this criterion, the 0-100 score was calculated the same way.

    (Sorry, but my inner ear was ringing.)

    •  Oh, Hell.... (1+ / 0-)
      Recommended by:
      James Allen

      You are correct, of course. Thank God my high school English teachers are largely apolitical. Mr. Cook, I apologize in advance, good sir!!!!


      "Every one is king when there's no one left to pawn" (BRMC)
      Contributing Editor, Daily Kos/Daily Kos Elections

      by Steve Singiser on Sun Dec 30, 2012 at 09:21:11 AM PST

      [ Parent ]

  •  Lern (0+ / 0-)

    Engrish you lousy wetback.

    3.1.1 Criterion in the singular.

    snark . . .

  •  Ah good memories watching abc "this week" (4+ / 0-)
    Recommended by:
    JML9999, Loge, Mother Mags, MKinTN

    Video of Karl Rove saying all the networks predicting President Obama's victory are WRONG :-)

  •  And what is the point of grading? (4+ / 0-)
    Recommended by:
    exatc, Notreadytobenice, MKinTN, stevemb

    In school, the counselor takes a look at a student's grades and tries to make career suggestions:

    "Billy, it looks like you're having difficulty in math and science, have you ever considered being a janitor?  You did an excellent job helping to clean up the auditorium after the last school dance."

    Unfortunately, after high school graduation, not enough people get help with their career choices.  Someone needs to get the following message across:

    "Scotty, it looks like you're having difficulty with your work.  Have you ever considered shoveling shit?  You did such an excellent job shoveling conservative opinion, so holding your nose wouldn't be a problem. "

    •  I love your choice of example name… (1+ / 0-)
      Recommended by:
      Works for Scotty Walker, Scotty Brown, or Ricky Scott. And soon, I hope.

      I also hope for a "Johnny" (×2), a "Ricky" (also ×2), a "Bobbie" (also ×2), a "Paulie", and a "Jannie", all seeking other career fields.

      Nickname key (so you don't have to think too hard): Kasich, Boehner, Snyder, Perry, McDonnell, Jindal, LePage, Brewer.

  •  Rasmussen (2+ / 0-)
    Recommended by:
    smokey545, pimutant

    " Rasmussen, on average, erred in favor of the GOP by 3.5 percent "  As you wrote, I  guessed by more -- 3.8% as in 2010.  They are biased but predictable.  When I personally averaged polls I unskewed Rasmussen by adding 4% Dem - Rep.  

    •  That's The Thing.... (0+ / 0-)

      Some other pollsters could argue that...heck...EVERYONE missed in the direction of the GOP this year. It was a good Dem year, and that "caught us by surprise".

      But Rasmussen was off just as badly in 2010. Maybe even a little bit worse!

      "Every one is king when there's no one left to pawn" (BRMC)
      Contributing Editor, Daily Kos/Daily Kos Elections

      by Steve Singiser on Sun Dec 30, 2012 at 09:22:26 AM PST

      [ Parent ]

    •  I always unskewed Ras too (0+ / 0-)

      Only I used 3. So if Ras had the Rethug ahead by three, I figured it was a dead heat. Looks like the proper unskew was halfway between our horseback guesses!

      If your internal map of reality doesn't match external conditions, bad things happen.--Cambias

      by pimutant on Sun Dec 30, 2012 at 01:05:43 PM PST

      [ Parent ]
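The "unskew" adjustment the two comments above describe is just a fixed shift of the poll's margin. A minimal sketch, where the function name, the 4-point default, and the example numbers are purely illustrative:

```python
def unskew_rasmussen(dem, rep, correction=4.0):
    """Shift a Rasmussen topline toward the Dems by `correction`
    points; returns the adjusted Dem-minus-Rep margin."""
    return (dem - rep) + correction

# A Rasmussen poll showing the Republican up 48-45 reads as roughly
# a one-point Dem lead after the 4-point correction.
print(unskew_rasmussen(45, 48))  # -> 1.0
```

With a 3-point correction instead (the other commenter's rule of thumb), the same poll would read as a dead heat.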

      •  Oh, And PS (0+ / 0-)

        With Gallup I started to simply disregard their likely voter screen result, and always just used the registered voter number.

        If your internal map of reality doesn't match external conditions, bad things happen.--Cambias

        by pimutant on Sun Dec 30, 2012 at 01:09:04 PM PST

        [ Parent ]

  •  Interesting but... (0+ / 0-)

    Interesting, but with a lot of apples-to-oranges comparisons. If you are going to end up comparing the pollsters to each other directly (generating an ordered list), then it seems like you ought to do one of the following:

    1) only compare pollsters' performance on the same race.

    2) include something like a 'difficulty factor' that awards points for making predictions in hard-to-call races.

    If firm A predicts Obama will win CA, MA, and VT and goes 3/3, but firm B predicts Obama will win NC, FL, and OH and goes 2/3, it is hard to say firm A is really better.

    •  In Essence, They Did This To Themselves... (0+ / 0-)

      I didn't want to restrict it to just races where everyone polled the same race, because the data set (and the number of pollsters left) would've been pretty small.

      But pretty much everyone focused on Ohio, Colorado, Florida, etc. So it kind of happened, anyway.

      I thought about a difficulty factor, but with over 400 polls in the mix, adding that variable would've been tough. Plus, since two of the three criteria were about how close they came to the final result, I'd argue that California and Massachusetts actually get harder than the swing states there. It is always tough to get a blowout margin right.

      "Every one is king when there's no one left to pawn" (BRMC)
      Contributing Editor, Daily Kos/Daily Kos Elections

      by Steve Singiser on Sun Dec 30, 2012 at 09:25:28 AM PST

      [ Parent ]

  •  Pharos Research has me mystified (2+ / 0-)
    Recommended by:
    JML9999, Steve Singiser

    Their website is beyond amateur.

    Their principals are very difficult to pin down.

    Next time I'm down in San Diego, I may knock on their door.

    Daily Kos an oasis of truth. Truth that leads to action.

    by Shockwave on Sun Dec 30, 2012 at 08:31:04 AM PST

  •  Ras and Gallup should be out of business... (10+ / 0-)

    But they'll be back and held in high esteem by the same media outlets that gave them coverage in 2012.  Ras will be back flooding the zone with his bullshit polls, which anybody paying even remote attention knows are nothing except his attempt to control the narrative.

    Yeah, "independent" pollsters go on right-wing-sponsored cruises to hobnob with all the other propaganda tools in the GOP toolbelt.

    The NRA is the Gun Manufacturer Lobby. Nothing more. Their pontification about the second amendment is nothing more than their ad jingle. They're the domestic version of the Military Industrial Complex.

    by Jacoby Jonze on Sun Dec 30, 2012 at 08:36:33 AM PST

  •  Good diary! (1+ / 0-)
    Recommended by:

    I like your analysis.

  •  This is a wonderful diary! Thanks! nt (2+ / 0-)
    Recommended by:
    smokey545, equern

    Might and Right are always fighting, in our youth it seems exciting. Right is always nearly winning, Might can hardly keep from grinning. -- Clarence Day

    by hestal on Sun Dec 30, 2012 at 09:14:32 AM PST

  •  Scott Rasmussen - independent pollster (4+ / 0-)
    Recommended by:
    Voodoo king, jomsc, pimutant, johnxbrown

    independent of facts
    independent of reality
    independent of honesty

    "Do what you can with what you have where you are." - Teddy Roosevelt

    by Andrew C White on Sun Dec 30, 2012 at 09:40:06 AM PST

  •  Sidebar: Which is worse for democracy... (0+ / 0-)

    A world with muddy polling or one with superlative predictions of electoral outcomes?

    The pros of having good polling are well-known and often-touted: a check on procedural breakdown and outright fraud, a vetting of candidates, their behavior and stated policy positions, etc.

    Are there cons, as well?

    Does participation of trailing parties fall off as the polling margin of error narrows, narrows more, then converges on zero?

    At what point do we end up with, well, a patchwork of single-partisan locales disguised as a competitive election system?

    Are we 'there yet', in many areas of the USA?

    What steps can we take (or once took, and no longer take) to maximize net positive benefit of superior information?

  •  What about early polls? (0+ / 0-)

    Great analysis here, and your spreadsheet is a fantastic resource, thanks for opening it up.

    But like all the polling retrospectives I've seen so far, we're only looking at post-Oct. 1 polls here.  I understand the reasoning, but the earlier polls are in some ways more important -- they help determine more investment decisions than the final rounds of polls do, and, even earlier, who runs and who doesn't.

    Except for primaries (esp. top-2 "jungle primaries"), we don't get the ultimate reality check until the end of the race.  And most races aren't stable over those time spans (with exceptions, like VA-Sen this year, partly because of near-universal name recognition, with both candidates having held top-tier statewide office before).

    But there must be some other methodologies to see who does better when we're far from election day.  Lower volatility?  Consistently close to the average of the other polls of the same race, but without the error those other polls had on election day?  And also who does worse -- as bad as they look in this analysis, Rasmussen had even worse errors that they smoothed out just in time to look vaguely respectable here.

    Ideas, anyone?

  •  PPP was amazing and toward the end, they were the (1+ / 0-)
    Recommended by:
    Voodoo king

    pollsters that I trusted. PPP's gutsy calls in Florida and Virginia for Obama surprised those who had given up on those states and were focusing on Ohio.

    “You must be the change you wish to see in the world.” --Gandhi:

    by smokey545 on Sun Dec 30, 2012 at 09:51:59 AM PST

  •  Unskewed Polls Guy (2+ / 0-)
    Recommended by:
    tommypaine, jomsc

    Dean Chambers gets a dishonorable mention and special award for being "special"

  •  Beyond Elections (0+ / 0-)

    And into opinion polls: the idea that certain polling outfits "identify" with a certain party, and by that measure identify with a certain political philosophy.

    This translates into the framing of their polling questions, which often verge on "push polling" public opinion.

    Evident now in the gun debates.

    Another thought on Gallup: a Gallup poll some time ago asked about the most popular president of all time, and Reagan came up first, edging out Lincoln ... which says something about their polling sample.

    If not us ... who? If not here ... where? If not now ... when?

    by RUNDOWN on Sun Dec 30, 2012 at 10:00:18 AM PST

  •  I would grade Ramussen on a different scale (0+ / 0-)

    Ras makes his money by targeting a specific audience: Idiot conservative Republicans. To that end, he gave his audience the numbers they wanted to see, so much so that they were willing to tune out all other polling numbers. And, I have no doubt that Ras made a ton of money by keeping his audience happy and tuned in through election day.

    Ras accomplished what he sought to accomplish, and for that reason he gets an A+.

  •  Independent of the truth, you mean. (0+ / 0-)

    48forEastAfrica - Donate to Oxfam> "It is better to light a candle than to curse the darkness." Edna St.V. Millay

    by slouching on Sun Dec 30, 2012 at 10:14:08 AM PST

  •  Since Climate Change is a liberal-socialist "scam" (1+ / 0-)
    Recommended by:

    apparently so are arithmetic and mathematics.  Money should be able to buy everything, even the numbers.

    What really happened is that the GOP's money-fairy failed them.  Such a shame.

  •  Can we get the media to ignore these clowns? (1+ / 0-)
    Recommended by:

      Now that we have the data on how hugely these RW pollsters failed, can we get them kicked out of media mentions in the MSM? I got sick of seeing the Rasmussens of the world being given the same credence as a real pollster.

       We have to pound the fact that the GOP uses these cooked polls as yet another propaganda outlet, and hammer home to the media that their horse-race coverage shouldn't be influenced by these lies, or at least use the "point spread" to reflect reality (in other words, if a Ras poll says it's even, the Dem's up 4).

      I also note the Angus Reid performance, as they pretty much nailed the official results of both the Wisconsin recall and the Wisconsin presidential elections.

  •  Very interesting, Steve (0+ / 0-)

    I think this is an interesting approach.

    About Rasmussen, I always say that he is a smart guy who knows how to do polls right, but he doesn't want to. He wants to sell a pro-GOP narrative.

    I was pretty sure that he was using the safe races to reduce his average bias in the close races, and your comments just show it. He does it all for his narrative.

  •  Thanks for the post Steve; plus one more emphasis (0+ / 0-)

    Choosing only post-Oct. 1 polls is being generous to the pollsters.  However bad they were after Oct. 1, they were generally worse before it, especially in terms of GOP bias.

    While we can never be sure that zillions of people didn't change their minds right around Oct. 1, we can see that pollsters' samples all year long featured a fantasy electorate that undercounted young people and self-identified Democrats.  A lot of PPP's Republican house effect can be chalked up to them consistently having polling samples that were 12% under 30 years old, while national turnout was 19% under-30s.

    Pollsters like Ras and Gallup were comical all year long due to truly absurd assumptions about the electorate -- in Ras' case, he even said his raw numbers were showing a Democratic-plurality electorate, but he still adjusted his numbers to release results that showed a Republican-plurality electorate.  In other words, it wasn't as if Ras was getting wrong data from his phone calls.  Instead, he simply ignored his phone calls and adjusted his numbers based on his own turnout model.  Basically, a stupid person got legit data and then created his own stupid data.

    Mr. Gorbachev, establish an Electoral College!

    by tommypaine on Sun Dec 30, 2012 at 12:11:17 PM PST
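The under-30 point above can be made concrete with a toy reweighting calculation. This is illustrative only: the function and all the split numbers are hypothetical, loosely keyed to the 12% sampled vs. 19% actual under-30 shares mentioned in the comment:

```python
def reweighted_margin(groups):
    """groups: list of (weight, dem_share, gop_share) tuples, with
    weights summing to 1.  Returns the topline Dem-minus-GOP margin."""
    dem = sum(w * d for w, d, _ in groups)
    gop = sum(w * g for w, _, g in groups)
    return dem - gop

# Suppose under-30s split 60-37 Dem and everyone else splits 47-50.
sampled = [(0.12, 60, 37), (0.88, 47, 50)]  # fantasy electorate
actual  = [(0.19, 60, 37), (0.81, 47, 50)]  # real turnout share

print(round(reweighted_margin(sampled), 2))  # -> 0.12
print(round(reweighted_margin(actual), 2))   # -> 1.94
```

Under these made-up splits, correcting only the under-30 weight moves the topline roughly 1.8 points toward the Democrats, which is the order of magnitude of the house effects discussed in this thread.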

    •  Agreed, Kos' "Bend" Rule, Oct 1 (0+ / 0-)

      One of Kos' discussion points during the cycle, when discussing the release of polls (he was mostly referencing Gallup at the time, I believe), was that as we get closer to the election, pollsters bend their polls toward the norm to be less of an outlier.

      Considering that, like you mentioned, there were some really BS crazy polls in September. But in defense of the Oct. 1 line in the sand, the further back you go, the less likely you are to take into account the nature of the campaign itself.

      What separates us, divides us, and diminishes the human spirit.

      by equern on Sun Dec 30, 2012 at 02:16:14 PM PST

      [ Parent ]

      •  Answers to the calls are affected by the campaigns (0+ / 0-)

        but the setting of an arbitrary assumption about what the electorate will be mostly should not be.

        In other words, if Ras' calls were showing a D+3 electorate in July or whenever, it was bad methodology for him to change that to a R+3 electorate just because he thought that is what it would be.

        Mr. Gorbachev, establish an Electoral College!

        by tommypaine on Sun Dec 30, 2012 at 03:48:32 PM PST

        [ Parent ]

  •  What? (1+ / 0-)
    Recommended by:
    Remember that this was a year in which only two presidential states were decided by less than five points.
    Am I reading this wrong? Ohio (1.9%), Virginia (3.0%), North Carolina (2.2%) and Florida (0.9%) were all decided by under five. Obviously this isn't a big deal, and it doesn't change the point or how bad Republican "pollsters" were.

    (-6.12,-3.18), Dude, 24, MI-07 soon to be MI-12, went to college in DC-at large K-Pop Song du Jour: J-Min's Stand Up

    by kman23 on Sun Dec 30, 2012 at 01:32:07 PM PST

  •  One thought on the date universe (0+ / 0-)

    My impression over the last few cycles is that Ras tends to be way off the mark in the GOP direction and then move gradually closer to the norm as election day approaches -- apparently a strategy to make their fans happy and set the narrative they want, yet still make themselves look better than they are.  Do you think that by only showing results after October 1 you actually allow them to appear better than their reality?  I understand the difficulty of grading earlier results, of course, but it's something to think about.

    "Wouldn't you rather vote for what you want and not get it than vote for what you don't want - and get it?" Eugene Debs. "Le courage, c'est de chercher la verite et de la dire" Jean Jaures

    by Chico David RN on Sun Dec 30, 2012 at 02:37:05 PM PST

  •  It wasn't that Democratic a year (1+ / 0-)
    Recommended by:

    Aside from Obama, yeah, Democrats netted gains in both houses, but I think we only netted 10 in both combined, so not much, and I think we actually lost a governorship.

    ...better the occasional faults of a government that lives in a spirit of charity, than the consistent omissions of a government frozen in the ice of its own indifference. -FDR, 1936

    by James Allen on Sun Dec 30, 2012 at 06:27:11 PM PST

  •  Steve, one minor correction...... (0+ / 0-)

    Four states, not two, were decided by less than 5 points.  Florida was a point, North Carolina was 2, Ohio was 3, and Virginia was 4.  Colorado rounded off to 5 but was more than that taken out to a couple of decimal places.

    44, male, Indian-American, married and proud father of a girl and 2 boys, Democrat, VA-10

    by DCCyclone on Sun Dec 30, 2012 at 07:34:46 PM PST
