There’s been a lot of cherrypicking of polls, early voting data, anecdotal evidence and historical precedent to try to game out what is going to happen next Tuesday. And given that the state of democracy in the country is at stake, most of us aren’t entirely content to simply wait it out and see what happens.
One of the big criticisms (from me and others) is that RW pollsters have been flooding the field in an effort to control the narrative. And it does seem to be working. But what if they’re right? Is there any way to test whether Trafalgar and their ilk are on to something?
There are a lot of a priori reasons to suppose that many of these RW polls are off the mark. For one thing, a lot of them don’t pass the sniff test. E.g.:
For another, the pollsters themselves clearly have an agenda.
On the other hand, the RWers would claim, polls underestimated the Republicans in 2016 and 2020, so they must be underestimating them now. Even the estimable Nate Cohn has jumped on this bandwagon (2 such articles in 2 days):
In short, Mr. Trump’s supporters were less likely to respond to surveys than Joe Biden’s, even among people who had the same demographic characteristics.
…
The re-emergence of this pattern (or, perhaps, the persistence of this pattern in battleground states) raises the possibility that nonresponse might continue to be a big challenge for pollsters.
In asking who’s right about the polls, I did a quick study. I took every pollster who reported at least 3 horserace polls in competitive races in both 2018 and 2020 and looked at their polling error (the mean error if they polled at least 5 races, otherwise the median). The results are shown in the plot above (though admittedly, some of the color labels are a bit tough to distinguish).
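As a sketch of that tabulation: the pollster names and error numbers below are invented purely for illustration (error here means poll margin minus actual margin, with positive = overestimated the Dem), but the aggregation rule — mean if at least 5 races, otherwise median — is the one described above:

```python
from collections import defaultdict
from statistics import mean, median

# Hypothetical records: (pollster, year, error), where error is the poll's
# margin minus the actual margin (positive = overestimated the Dem).
records = [
    ("Pollster A", 2018, 2.0), ("Pollster A", 2018, 3.5),
    ("Pollster A", 2018, 1.0), ("Pollster A", 2018, 4.0),
    ("Pollster A", 2018, 2.5),
    ("Pollster B", 2018, -1.0), ("Pollster B", 2018, -6.0),
    ("Pollster B", 2018, -2.0),
]

def pollster_error(errors):
    # Mean if at least 5 races; otherwise the median, which is less
    # sensitive to a single wild miss in a tiny sample.
    return mean(errors) if len(errors) >= 5 else median(errors)

by_pollster = defaultdict(list)
for name, year, err in records:
    by_pollster[(name, year)].append(err)

# Keep only pollsters with at least 3 horserace polls in the cycle.
summary = {k: pollster_error(v) for k, v in by_pollster.items() if len(v) >= 3}
```

The median fallback for small samples is what keeps one fluke race from defining a pollster’s lean.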
An obvious point (that people make again and again and again): in 2020, most pollsters overestimated the Dems. True. Yes. Absolutely. But also there was a pandemic, people changed voting patterns dramatically, the mail was being suppressed, Trump was on the ballot, etc., etc., etc. We’ve heard all of this. Is this a good reason to suppose that RW voters will always be underestimated?
Consider those two outlier points in the lower left, the ones who overestimated the R’s both years, and missed (and for one of them, missed hilariously in 2018): Rasmussen and Trafalgar. None of the others polluting the discussion (Cygnal, InsiderAdvantage, etc.) have enough data to actually estimate their lean, but in this cycle, all of them have reported similar results. Basically, their argument goes: we were right in 2020, we haven’t pushed our methodology even further rightward (doubtful, but possible), so we’re right now, and everyone else is making the same mistakes as last time.
If you excise those baddies, the remaining polling errors are essentially uncorrelated. In other words, it’s pointless to assume that Marist or Quinnipiac or whoever will necessarily be too D this cycle because they were last cycle. (Though I should note that they aren’t anti-correlated either: there’s no reason to suppose that pollsters are overcorrecting past mistakes.)
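To make “essentially uncorrelated” concrete, here’s a minimal Pearson-correlation sketch. The error pairs are made up, not real pollster data; they’re chosen only to show the mechanism at work: a couple of shared outliers can manufacture a strong year-over-year correlation that vanishes once they’re dropped:

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical (2018 error, 2020 error) per pollster, Dem-positive.
errors = {
    "Pollster 1": (1.0, 2.0),
    "Pollster 2": (-1.0, 2.0),
    "Pollster 3": (1.0, 4.0),
    "Pollster 4": (-1.0, 4.0),
    "Outlier X":  (-6.0, -2.0),   # R-leaning miss in both cycles
    "Outlier Y":  (-5.0, -1.5),   # R-leaning miss in both cycles
}
outliers = {"Outlier X", "Outlier Y"}

x_all, y_all = zip(*errors.values())
x_kept, y_kept = zip(*(v for k, v in errors.items() if k not in outliers))

r_all = pearson(x_all, y_all)     # the two outliers drive a strong correlation
r_kept = pearson(x_kept, y_kept)  # drops to zero without them
```

This is the same reason “pollsters were too D last time, so they’ll be too D this time” doesn’t follow once the two persistent R-leaning houses are set aside.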
Given all other data, which year do you think 2022 is more similar to? I’d go with the previous midterm.