Diary: Do Polls Drive the Narrative, or Does the Narrative Drive the Narrative

  •  You're a little off-base on several points: (2+ / 0-)
    Recommended by:
    quibblingpotatoes, ybruti

    First,

    There are also liberal poll Unskewers who believe that the polling firms themselves, led by arch-conservative pollster Scott Rasmussen, are forging poll numbers in order to drive the numbers-based narrative pictured above.
    You pretty much lost a lot of analytic credibility here, at least with me, by making a false-equivalence comparison. There is a significant difference between "liberals" questioning the poor, unscientific methodologies of conservative polling outfits (for instance, robo-calling or questionable party-weighting schemes), whether those outfits are old with a record to judge or new with no record at all but a partisan association, and Unskewers literally making unfounded assertions about methodologies not actually used by polling firms that have long track records and are transparent about the scientific statistical methods they use for sampling (typically random samples weighted by regional population density and state demographics). Since the first debate, the number of polls, both state and national, from conservative firms has increased significantly, and those polls typically produce "findings" at odds with both non-partisan and Democratic firms. So making that comparison really made it hard to take the rest of your analysis seriously, but I read it anyway and noticed a few things.

    Second, you say this:

    This is a chart of the average margin in polls in the nine closest battleground states, as determined by my ongoing research into charting the median active credible poll in each state.
    Well, we don't know what your "ongoing research" is or what methods you use, so you may want to be a little clearer about how you constructed this chart. For instance, to know what your "median active credible poll" is, and whether the measure is reliable and valid, you should let us know a few things:

    1. What is your list of reliable/credible polls? Do you assemble this list by methodology or pollster credibility? Based on the approach, who is on the list?

    2. What time frame do you use for identifying the "median poll"? I assume, from the chart, that this is "median per state per day," but it's not clear. And...

    3. If I am correct about "2," your chart is very misleading. Since state-level polls are not released daily, several days are missing data points, but by using a line chart you paper over this with a filled-in trajectory. That compounds the problem created by the lack of transparency about your methods.

    Third, when averaging the margin, you are producing a measurement that makes no sense given electoral realities. For instance, Ohio and Wisconsin contain more electoral votes than Iowa and Colorado, and Obama's margins have been higher in those key states for most of the race than they have in Iowa and Colorado. Taking the average of these margins disguises significant leads in states that matter more electorally, leading to very bad conclusions about the "true" state of the race.

    Beyond the silly measure, though: if you are averaging these medians daily, what do you do about days on which states are missing data? Such omissions can artificially drive down the margins even further than the strange method of analysis does on its own.

    Of course, there are more questions/concerns, but the point is that without being transparent about your methods, we have no means by which to gauge the validity of any of what you've put forth.

    Finally, you've provided no rebuttal to the hypothesis that polls and trends can be sensitive to a narrative that cherry-picks polls to make the race seem closer than it is. Your argument is "polls dropped after the debate, so that set the narrative," but the reality is that a full day and a half of news-cycle spin on the debate occurred before even the FIRST post-debate polls were conducted, and several days of spin had passed before a full suite of polls was available. So the narrative about the debate had already been baked into the samples of those polls. Then, even though the polls were mixed on how much the race actually changed, enough showed movement toward Romney to cherry-pick for an "OMG, IT'S TIED!" narrative. This makes it difficult, if not impossible, to prove or disprove causality for either side of this debate, but the fact that most firms with much more credibility on this (and much more transparent and valid analytic methods and measures) agree that the drop in the post-first-debate polls was, in totality, negligible makes your analysis above even less credible.

    Blogs: http://mediadeconstruction.com/ Twitter: realsteveholt

    by steveholt on Thu Oct 25, 2012 at 10:43:48 AM PDT

    •  I'm just a blogger with an Excel spreadsheet (0+ / 0-)

      1. The list of reliable/credible polls comes from the admittedly partisan RealClearPolitics, with the addition of PPP polls conducted for liberal groups that don't appear on RCP's site.

      2. I calculate the median based on numbers from active polls, meaning the latest guesses from individual polling firms. In Ohio, for instance, I have the latest polls from 14 different pollsters (Time, Rasmussen, SurveyUSA, Suffolk, Quinnipiac, PPP, Gravis Marketing, ARG, Fox News, Marist, WeAskAmerica, Columbus Dispatch, Washington Post, U. of Cincinnati). The median spread is the median Obama number minus the median Romney number. The 14 polls give Obama's support at 46, 46, 47, 47, 47, 47, 48, 49, 49, 50, 51, 51, 51, 52, so his median is 48.5. Romney is at 42, 43, 44, 44, 44, 45, 45, 46, 47, 47, 47, 48, 48, 48, so his median is 45.5. The spread is therefore 3 points toward Obama. (There's a rough sketch of this calculation after point 3 below.)

      3. There are no days missing data points (well, besides all the days before September). The daily average is not an average of that day's polls; it is an average across the nine states of each state's median spread as of that day. If a state has no new polls that day, its median spread simply doesn't change.
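
      In rough Python terms (just a sketch; the function name is mine, and the real thing lives in an Excel spreadsheet), points 2 and 3 work out to something like this:

          from statistics import median

          # The 14 latest Ohio numbers quoted above (one per active pollster).
          obama  = [46, 46, 47, 47, 47, 47, 48, 49, 49, 50, 51, 51, 51, 52]
          romney = [42, 43, 44, 44, 44, 45, 45, 46, 47, 47, 47, 48, 48, 48]

          print(median(obama) - median(romney))   # 48.5 - 45.5 = 3.0, i.e. Obama +3

          # Point 3: keep only the latest poll per pollster, keyed by firm.  A new
          # poll replaces that firm's old entry; on a day with no new polls nothing
          # changes, so the state's median spread carries forward unchanged.
          def update_day(latest, new_polls_today):
              """Both arguments map pollster name -> (obama_pct, romney_pct)."""
              latest.update(new_polls_today)
              o = median(v[0] for v in latest.values())
              r = median(v[1] for v in latest.values())
              return o - r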

      I did correct for electoral differences by showing the light blue line (9 states, each state equally weighted) and the dark blue line (weighted by electoral vote). So if Obama was -2 in Florida and +2 in New Hampshire, the light blue line would show the average as 0, but the dark blue line would show the average as about -1.5 (29/33*-2 + 4/33*+2).
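
      In the same sketchy Python terms, the two lines differ only in their weights (the Florida/New Hampshire numbers are the hypothetical ones above):

          # margin and electoral votes per state for the hypothetical example
          margins = {"FL": (-2, 29), "NH": (+2, 4)}

          light_blue = sum(m for m, ev in margins.values()) / len(margins)
          dark_blue = (sum(m * ev for m, ev in margins.values())
                       / sum(ev for m, ev in margins.values()))

          print(light_blue)   # 0.0, each state counted equally
          print(dark_blue)    # about -1.5, i.e. 29/33*-2 + 4/33*+2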

      I shouldn't have made a headline that purports to explore causality, because it's just as likely that the narrative drives the polls as that the polls drive the narrative. My point was that there has been a significant change in the polls. I don't see how you can argue that the drop in post-October 3 polls is negligible, even if you discount my numbers (which you should! I'm not a pollster! Go check out the many other sites run by professional pollsters!)

      •  False equivalence (0+ / 0-)

        About the liberal Unskewers comment: when the Unskewed Polls guy came out, he was ridiculed in part because he was trying to take "reality" (poll numbers from credible polling firms) and augment it with his partisan-weighted viewpoints. This was when the polls weren't going his way. We could look at him and say, ha ha, this guy is just another Rethuglican ignoring reality. But since the polls have turned, you see a lot of comments on DailyKos about how Rasmussen just makes up numbers to make the race seem closer than it is. We amateurs are trying to take "reality" (poll numbers from "credible" polling firms) and augment it with our partisan-weighted viewpoints. Unskewed Guy tries to turn every poll into a Rasmussen poll. DailyKos commenters try to turn every poll into a UNH poll.
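
        To spell out what that "augmenting" move actually looks like, here's a toy re-weighting of a finished poll to a preferred party-ID mix (every number below is invented for illustration):

            # Toy example only: re-weight a completed poll's topline to an assumed
            # party-ID split -- the basic "unskewing" move.  All numbers are invented.
            support_by_party = {              # candidate support within each party-ID group
                "Dem": {"Obama": 0.92, "Romney": 0.06},
                "Rep": {"Obama": 0.07, "Romney": 0.91},
                "Ind": {"Obama": 0.46, "Romney": 0.45},
            }
            assumed_mix = {"Dem": 0.32, "Rep": 0.36, "Ind": 0.32}   # the "corrected" electorate

            topline = {
                cand: sum(assumed_mix[p] * support_by_party[p][cand] for p in assumed_mix)
                for cand in ("Obama", "Romney")
            }
            print(topline)   # the topline shifts toward whichever party the assumed mix favors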

        That kind of re-weighting is wholly different from acknowledging the Republican-leaning house effects in Rasmussen and many others. But we shouldn't forget they were Republican-leaning during the Democratic poll surge of September too! It's not a new narrative-driving plot!

        •  Well, I'll respond to the above methods later... (1+ / 0-)
          Recommended by:
          ybruti

          ...and I sincerely would like to say at the outset that I am not trying to get into a chest-thumping contest with you, and I mean none of my quibbles as a personal affront. Just a discussion. A few quick things though:

          First, I'm not exactly "an amateur," since I work in survey research, design, and methods. I have not seen any Kos comments or diarists re-weighting polls arbitrarily and arguing that their arbitrary party weights more accurately reflect the state of play. Please point me to those, and, if they exist, I'll say up front that they are ridiculous. However, I think the comments and contributors mostly point out what I pointed out above, which is that the flood of polls from conservative outfits using questionable methods makes it very hard to buy the "Romney surge" narrative, particularly since the polls from non-partisan firms nearly universally contradict the Ras et al. "polls." Some might look at poll aggregators (TPM, Pollster) without Ras et al. to examine the state of play without their effects, but that is still significantly different from re-weighting already-conducted polls by unscientifically pegging the weights to a fluid characteristic like party ID. As for Ras et al., when conservative pollsters are on an island, it's not because everyone else is wrong...

          Second, while there were Republican polls pre-first-debate, the number of polls from conservative outfits has increased substantially relative to polls from non-conservative outfits. Seriously, the number of polls from conservative firms doubled in the week after the debate compared to the week before, whereas the number of polls from non-conservative outfits increased only mildly (by 10 or 11, I think, if memory serves).

          Finally, don't sell yourself short as just "a blogger." As bloggers, we all put things into the public sphere that people can read, absorb, and be influenced by. Otherwise, we wouldn't bother doing it. Consequently, we should always be clear and (relatively) careful about what we put out there.

          Blogs: http://mediadeconstruction.com/ Twitter: realsteveholt

          by steveholt on Thu Oct 25, 2012 at 12:17:08 PM PDT

          [ Parent ]
