I’ve been tracking the polling aggregates all year using the Huffington Post’s Pollster.com aggregator. Starting today, I’m transitioning (mostly) to our own models, now that our Elections vertical is fully up and running. I’ll still use their national numbers, since we don’t model those, but we’ll be using our own for the states. That means the trendlines won’t match up exactly, so I want to take this moment to show how the two models differ:
2016 BATTLEGROUND PRESIDENTIAL MATCHUPS

| State (EVs) | Daily Kos | Pollster.com |
|-------------|-----------|--------------|
| US          | C+8.4*    | C+8.4        |
| AZ (11)     | T+0.1     | T+4          |
| CO (9)      | C+6.1     | C+4          |
| FL (29)     | C+4.2     | C+3          |
| GA (16)     | T+0.2     | T+3          |
| IA (6)      | T+0.4     | T+3          |
| MI (16)     | C+10.3    | C+7          |
| MO (10)     | T+4.8     | T+7          |
| NV (6)      | C+2.1     | C+1          |
| NH (4)      | C+6.4     | C+5          |
| NC (15)     | C+2.6     | C+1          |
| OH (18)     | C+3.0     | C+3          |
| PA (20)     | C+7.4     | C+6          |
| VA (13)     | C+8.1     | C+7          |
| WI (10)     | C+7.9     | C+5          |
As you can see, the differences range from nothing (Ohio) to quite large (Wisconsin). What explains them?
1. We always use the four-way numbers, while Pollster uses the two-way numbers. Folding in the third-party candidates may shift the margins only 1-2 points, but that’s enough to account for many of the differences you see above.
2. We include more polls than they do. Our crack Elections team has an uncanny ability to dig up polls from who-knows-where, so our polling database is more comprehensive. That’s sometimes an advantage, but it could also hurt us if it sweeps in shadier or less-proven polls. I’m not going to claim one approach is better than the other until after the election, when we can compare the models against the final results.
3. There are other differences in the models. Every aggregator has to decide how much to weight particular polls, how quickly to decay old numbers, how much weight to assign new ones, which polls to exclude, and so on. There’s a “secret sauce” component to these choices, which is why every aggregator ends up with different results (538, RCP, and the rest included). That’s why we can get trendlines like these:
Pollster, Arizona:
Daily Kos, Arizona:
NY Times, Arizona:
As you can see, that statistical “secret sauce” can yield significant differences in the topline numbers.
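To make the “secret sauce” idea concrete, here’s a minimal sketch of one common ingredient: weighting each poll by its sample size and decaying its influence as it ages. The half-life, the sample-size weighting, and the polls themselves are all hypothetical illustrations, not any aggregator’s actual method.

```python
from datetime import date

def weighted_average(polls, today, half_life_days=14.0):
    """Decay older polls exponentially: a poll half_life_days old
    counts half as much as one taken today. Weights also scale with
    sample size. This is an illustrative toy, not a real model."""
    num = den = 0.0
    for poll_date, margin, sample_size in polls:
        age = (today - poll_date).days
        weight = sample_size * 0.5 ** (age / half_life_days)
        num += weight * margin
        den += weight
    return num / den

# Hypothetical polls: (field date, Clinton margin, sample size)
polls = [
    (date(2016, 6, 1), 4.0, 800),
    (date(2016, 6, 20), 6.0, 1000),
    (date(2016, 6, 28), 7.0, 600),
]
print(round(weighted_average(polls, date(2016, 6, 30)), 1))  # → 6.1
```

Change the half-life to 7 days and the newest polls dominate; stretch it to 30 and the old C+4 poll drags the average down. That single knob, before you even get to house-effect corrections or poll exclusions, is enough to make two honest aggregators disagree by a couple of points.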
I, of course, have the utmost confidence in our numbers. They are based on Drew Linzer’s methodology that was (arguably) the most accurate of 2012, and inarguably the most accurate of 2014. This is from the NY Times:
So yeah, I’m comfortable making the shift now. But no matter whose numbers you look at, it’s still looking like a killer election for us. So our job is to nod our heads in approval, then work our butts off to make it even better than any stupid model says.
p.s. Our model is not stupid.
You don’t live in a swing state? Click here to sign up for a phonebanking shift with MoveOn. You’ll be calling voters in the swing states in no time.