(image created by Jed Lewison)
You see, for years, when I get to the lesson about public opinion polling, internal polling—polls sponsored by a campaign or an interested outside group—gets seriously pooh-poohed. "Don't read too much into them," I have repeatedly cautioned.
It is then that the standard caveats are eagerly offered. A campaign may conduct a dozen polls, and only release the single one that is most favorable to them. Plus, you can never be sure that things like question wording and the order of questions in the survey haven't mucked up the trial heat numbers. Plus, in the worst cases, the organizations or campaigns may be less than honest about how they arrived at those lofty trial heat numbers (think: the always sketchy "push poll").
Not that any of these caveats aren't legitimate—indeed, all of them are. What's more: It is accepted practice in the political press to examine any internal poll results and offer the immediate caution that these polls should be taken "with a grain of salt."
However, the time has come for me to atone for my sins, and offer some counterpoint. A little time, plus a not-so-little database of polls (over 6,000 in all, culled from the last three election cycles), offers legitimate evidence that internal polls can tell us a heck of a lot more than we might think about the state of play in an election. Indeed, by looking at larger lessons, and not necessarily individual horse-race results, there is a fair amount of predictive value hidden amidst all those data points encrusted in grains of salt.
Three lessons in particular warrant keeping an eye on, as what is already a pretty sizable load of data (over 1,000 polls thus far, according to my own unofficial tally) will only grow exponentially by November.
And those three lessons await you just past the jump ...
LESSON ONE: Who is releasing those internal polls matters every bit as much as what those numbers might say.
In 2008 and 2010, over 850 partisan sponsored polls were made available for public consumption (one could use the word "leaked," though a press release doesn't exactly match the secretive connotations of a word like "leaked").
For the nitpickers, this tally does NOT include partisan pollsters who released polls for generic public consumption. In other words, PPP's polls for their campaign and private clients did count in the tally, but their polls released on their website did not. Rasmussen, while partisan, was not working for clients in 2008-2010, so its numbers did not count, except when its field work subsidiary (Pulse Opinion Research) polled for private clients (like a Some Dude candidate in FL-03 back in 2010).
In 2008, more polls were released by Democratic candidates and organizations than by their Republican counterparts. The margin was fairly modest (188 to 174), given what eventually became a strong Democratic election year.
In 2010, considerably more polls were released by Republican sources than Democratic sources. The final tally was 301 Republican sponsored polls, versus just 189 Democratic sponsored polls. Or, for those who prefer percentages, 61 percent of the internal or private polls released in the 2010 election cycle were released by GOP sources.
This is almost certainly not a coincidence. If the electoral winds are at your back, the chances that your internal polling will look good enough to share with others is pretty darned high. Conversely, if your political party is straining against a mighty political headwind, you might feel pretty motivated to keep your (lousy) poll numbers under wraps.
With this in mind, do we know anything about 2012? Well, it's awfully early, of course. But thus far, we have had 92 private poll releases for Democratic campaigns and causes, versus just 68 private poll releases for Republican campaigns and causes. That's a 58 percent Democratic majority. Not quite as good as 2010 was for the Republicans, but still a clear edge.
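For readers who want to check my math, here is a minimal Python sketch that recomputes the partisan shares from the tallies quoted above (the cycle labels and the dictionary structure are my own illustration; the counts are the ones cited in this piece):

```python
# Partisan shares of publicly released internal polls, per the tallies
# cited above: 2008 (D 188, R 174), 2010 (D 189, R 301), 2012 to date (D 92, R 68).
tallies = {
    "2008": {"D": 188, "R": 174},
    "2010": {"D": 189, "R": 301},
    "2012 (so far)": {"D": 92, "R": 68},
}

for cycle, counts in tallies.items():
    total = counts["D"] + counts["R"]
    d_share = 100 * counts["D"] / total
    # Print each cycle's total and the Democratic share, rounded to a whole percent
    print(f"{cycle}: {total} polls released, {d_share:.0f}% from Democratic sources")
```

Running it confirms the figures in the text: a near-even 2008 split, a 61 percent Republican share in 2010, and a 58 percent Democratic share so far in 2012.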
LESSON TWO: When only one side in a race is comfortable releasing their numbers, that is often telling.
On Thursday of this week, the DCCC released a poll in upstate New York showing second-term Rep. Bill Owens, who has always been up against it in ancestrally Republican territory, staked to a comfy 12-point lead over Republican Matt Doheny. Whether or not you really believe Owens has a double-digit lead in the New York 21st district, the response from Doheny's campaign was pretty soft:
“The fact that Bill Owens can’t break 50 percent as an incumbent in a poll commissioned by his own party is indicative of how much trouble he’s in this fall.”

Of course, if Team Doheny was that confident that Owens was in trouble, they (or the GOP ... or its many affiliated pressure groups) would probably have some contradictory data of their own to share. The fact that said data has not emerged (at least not yet) may tell us more about the state of that race than the D-Trip's poll release did earlier in the week.
One of the reasons why a lot more polls dropped from GOP sources than Democratic sources in 2010 is that a lot of campaign polls released by buoyant Republican campaigns went unanswered by their Democratic counterparts. This, too, is telling, and perhaps more telling than the larger partisan disparities. That is because when you look at it race by race, you can begin to see which races might be developing a decided tilt. For example, Democratic incumbent Phil Hare's ouster in nominally Democratic territory in IL-17 two years ago was not as surprising as some might have thought, given the terrain. Why? Because over the course of 2010, we saw a half-dozen GOP sponsored polls showing surprising strength for Republican Bobby Schilling, while polls boosting Hare were conspicuously rare.
When campaigns play dueling polls, it is, at the very least, a sign that both campaigns feel confident enough in the strength of their campaign to put some numbers out there. When only one side is releasing data, either the silent partner in that race is keeping their awesome strength on the down-low for some inexplicable reason, or they don't like the numbers they are seeing.
For example, in 2012, if I were the GOP, I'd be a touch nervous about freshman Rep. Jim Renacci in Ohio. When the GOP gerrymandered Ohio so artfully in 2011, they threw Renacci in with fellow incumbent Betty Sutton (a Democrat) in a GOP-leaning district. Since then, Sutton or supportive groups have released three polls, the latter two of which staked Sutton to small leads. Meanwhile, the GOP has offered not a single counterpunch. Again, barring some kind of absurdly clever political jiujitsu, the only rational explanation for the silence on their end is that their numbers either mirror those of Sutton's advocates, or are worse than those of Sutton's advocates.
LESSON THREE: Looking at who is polling, and where they are polling, can give us hints about a shifting target list.
One thing that we often forget about public opinion polling: It ain't free. So, one potentially instructive thing to watch for as the 2012 campaign cycle really begins to ramp up is what races are getting polled by the parties and affiliated organizations (such as the Democratic-affiliated House Majority PAC).
There are times when the choice of targeted races might prove a little bit surprising, and could hint at potential strategy for the fall. One recent example was a DCCC poll looking at FL-13, home to one of the most venerable Republican incumbents in the game: octogenarian Rep. Bill Young.
The DCCC's new in-house IVR polling team took a quickie look at the 13th, and found that their likely nominee (Jessica Ehrlich) trailed Young by a 49-35 margin. Virtually no one in the pundit class had the Florida 13th on their radar screen at the start of the campaign, so one now has to wonder if this was just a simple act of temperature-taking by the DCCC, or whether this portends a bold decision to try to pick off a district that has long been considered a GOP lock.
As August folds into September, and as the marathon campaign cycle reaches its latter stages, watch for what new districts start getting polled, and what races are curiously quiet. That silence could be an indicator that one side has conceded the district, and newfound noise could be a sign that one side has sensed an opportunity.
Of course, taking horse-race numbers from a poll paid for by an entity with a rooting interest is always somewhat of a calculated risk. The old caveats are still valid, and should be respected.
However, the three lessons outlined above remind us that there is, when all is said and done, good reason to examine both the forest and the trees in sponsored/private polling data. History teaches us that we can learn an awful lot beyond the simple analysis of "who's up" and "who's down."