David Beard:
Hello and welcome. I'm David Beard, contributing editor for Daily Kos Elections.
David Nir:
And I'm David Nir, political director of Daily Kos. The Downballot is a weekly podcast dedicated to the many elections that take place below the presidency, from Senate to City Council. If you haven't yet, please subscribe to The Downballot wherever you listen to podcasts, and leave us a five-star rating and review.
David Beard:
The political news is moving fast here in the last week of September. So what are we going to cover?
David Nir:
In Pennsylvania in the governor's race, Republican Doug Mastriano's campaign is exploding in spectacular fashion. We also want to talk about the ad spending wars, where Democrats are doing a much better job of actually getting their spots before viewers, even when they're getting outspent by Republicans.
David Nir:
We're also going to give you an update on Republican J.R. Majewski and his stolen valor scandal in Ohio. And finally, there are the dispiriting results of the Italian general election, which saw the far right gain its first victory since World War II.
David Nir:
After that, we are talking with G. Elliott Morris, data journalist for The Economist and author of the new book Strength in Numbers, an exploration of the history of polling and why it matters. So many interesting things to discuss. So let's get rolling.
David Nir:
So next week I will be taking off from The Downballot for Yom Kippur. We have a great guest host who's going to be joining David Beard next week, Joe Sudbay. You are going to love him. Of course, what do Jews do on Yom Kippur? Well, we spend the day fasting. Now, if you are not Jewish, but you still want to get in on the fasting action, believe it or not, I have a solution for you. You can join forces with Doug Mastriano, Pennsylvania's Republican nominee for governor. His campaign is going so well that he has called for quote, “40 days of fasting and prayer.”
David Nir:
Now, believe it or not, Mastriano is not the first Republican candidate we have seen call for fasting. Sam Brownback, believe it or not, did that in 2018 when he was in his last year as governor of Kansas. But this is just hilariously emblematic of what a disastrous situation Mastriano finds himself in. He's been down by double digits in many, many polls, but here is a quote that I just loved.
David Nir:
This is from a Republican operative who opposed Mastriano in the primary. He said, "We were opposed to Doug's candidacy in the primary because we feared that he would not be able to connect with the independent and moderate Democratic voters that are necessary for Republicans to win in Pennsylvania. Unfortunately, six weeks from the election, I haven't seen anything to suggest we were inaccurate in our assessment."
David Nir:
But here's the amazing thing. This dude who gave those quotes is head of the only super PAC that is actually spending money to help Mastriano beat Democratic State Attorney General Josh Shapiro. This is who his friends are, and even his friends think that he sucks. And the numbers are really, truly remarkable when you look at the airwaves. The Philadelphia Inquirer had a great piece about this race on Wednesday, the one discussing the 40 days of fasting and prayer.
David Nir:
And the authors note that on TV and online ads, Shapiro has outspent Mastriano $21.6 million to $6,300. No, I am not leaving off any zeros. Mastriano simply has no money, and what little money he does have, he hasn't even put on the airwaves. That group that's supposedly helping Mastriano but still hates him has spent $5 million, but that is obviously just a small fraction of Shapiro's spending. And also the Inquirer notes that Shapiro has another $8 million of TV time reserved through election day. Mastriano's total is $0. And this huge disparity, not just in the raw dollar numbers, but in terms of who is actually spending, a campaign versus a super PAC, also has a huge impact on the number of ads that these groups can each run for the same dollar amounts. And Beard, I know you definitely want to talk about that.
David Beard:
Yeah, as bad as Mastriano is, we've also got some very bad Republican Senate candidates, and as a result, the GOP's independent expenditure arms are having to bail these candidates out. Now overall, to take a broad view of the nine key Senate battleground races, the GOP is spending more money than the Democrats when you include both candidate spending and outside group spending, $106 million to $93 million in the past three weeks.
David Beard:
But Democratic money is mostly coming from the candidates themselves, with a smaller amount of outside spending, because, like Shapiro, a lot of our Senate candidates have had really strong fundraising quarters. A number of Republican candidates, by contrast, have been very bad fundraisers, like Mastriano, so on the Senate side the GOP is getting much worse ad rates because so much more of its spending comes from outside groups.
David Beard:
Now, candidates have to be offered the lowest available ad rates whenever they're buying ad time, and those discounted rates let each dollar go further. But that's not true for these outside groups, which have to pay higher rates, and particularly as election day gets closer and there's less and less ad space available, those rates go up and up and up. You can find yourself paying three times, four times, or even more what a candidate would be paying to get your ads on TV in front of voters.
David Beard:
Now, to look at a couple of specific examples: in Arizona, Senator Mark Kelly, the Democrat, and his allies are outspending Republican Blake Masters 52% to 48%. So in raw dollars it's very close, thanks to a good amount of outside GOP spending. But in terms of eyeballs watching ads, Kelly's side has a four-to-one advantage in what we call gross rating points, which measure how many people are actually seeing the ads. And that's because Kelly's dollars go so much further than the outside GOP groups' dollars.
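To make that rate math concrete, here is a rough sketch of how near-even spending can still yield a roughly four-to-one gap in gross rating points when candidates pay far lower rates than outside groups. All of the dollar figures and cost-per-point numbers below are hypothetical, purely for illustration.

```python
# Gross rating points (GRPs) bought = dollars spent / cost per rating point.
# Hypothetical figures: the Democratic side spends 52% of the money at low
# candidate rates, the GOP side spends 48% mostly at super PAC rates that
# run three to four times higher per point.
def grps(spend: float, cost_per_point: float) -> float:
    """Gross rating points purchased for a given spend and cost per point."""
    return spend / cost_per_point

dem_grps = grps(5_200_000, cost_per_point=500)    # candidate-rate dollars
gop_grps = grps(4_800_000, cost_per_point=1_800)  # outside-group-rate dollars
print(f"GRP ratio, Dem to GOP: {dem_grps / gop_grps:.1f} to 1")  # about 3.9 to 1
```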
David Beard:
Masters has only spent $9,000 on ads during most of September, whereas Kelly accounts for the majority of the ad spending on the Democratic side. And there's a similar situation going on in Ohio, where Republicans are managing to actually air more commercials than Democrats, but that's because they're spending three times as much money as the Democrats are. Tim Ryan, the Democratic nominee for Senate in Ohio, accounts for almost all of the Democratic spending. He's responsible for 83% of the ads coming from his side, whereas J.D. Vance, the Republican, is responsible for just 8% of the ads.
David Beard:
So the vast, vast majority of the ads in Ohio on the Republican side are independent expenditure ads, which is why the Senate Leadership Fund, one of those outside groups, has had to commit $28 million to Ohio, which I'm sure they did not have budgeted in January of this year, to bail out Vance because his fundraising has been so poor and Ryan's ad dollars go so far. So it's a major issue. Obviously, the GOP is funded by a bunch of very, very wealthy individuals who like to cut million-dollar checks, so it's not like voters in these key states aren't going to be seeing GOP ads, but boy, those are some expensive ads people are watching.
David Nir:
Speaking of Ohio, we also have to circle back and update an amazing story from last week that got even amazing-er. We told you about Republican J.R. Majewski in the Toledo-based Ninth Congressional District in Ohio and his appalling stolen valor scandal. That broke a week ago. The following day, the very same afternoon that we released last week's episode, it turned out that the NRCC had cut him loose. The GOP's chief campaign arm for House races cut a million dollars, all of its planned advertising for Majewski, in his attempt to unseat veteran Democratic Congresswoman Marcy Kaptur. This is a seat that, as we mentioned, the GOP gerrymandered to an extreme, turning it from one that Democrats typically won by about 20 points into one that actually would have given Donald Trump the edge.
David Nir:
There's also another angle here that is just really strange and has not been reported on. The Congressional Leadership Fund, which is the top super PAC affiliated with House GOP leadership, announced back in April a huge smorgasbord of fall TV ad reservations in dozens of media markets, including Toledo, where they said they were going to book $700,000 in TV time. And we had wondered, well, what happened to this? Is CLF also cutting Majewski loose, because sometimes the NRCC and CLF are not on the same page?
David Nir:
Well, something even weirder than that has happened. It appears that the Congressional Leadership Fund never actually made a reservation in Toledo at all. What happens is these groups announce their reservations, and then they go through the actual process of putting this money on the board. And that's a somewhat involved process: you're buying TV time on multiple TV stations and markets all across the country. But when these groups put out these press releases, whether it's the NRCC or CLF or the DCCC, and say, "We're reserving a million dollars here and a million dollars there," yes, those reservations can change later, but you actually expect those initial reservations to be made.
David Nir:
And it's particularly strange because CLF is so heavily funded. They are by far the most deep-pocketed group on the GOP side, much bigger even than the NRCC. So what's going on? Do they have other fake reservations that they haven't actually followed through on? I would really love to know more on this, and I certainly hope that some enterprising reporters dig into this because this is a strange one.
David Beard:
I want to end us this week on a bit of a dispiriting update from Italy. Italy held its general election this past Sunday, and the far right there is expected to take power for the first time since Mussolini and the end of World War II, after the far-right alliance comfortably won the general election. Giorgia Meloni, the leader of Brothers of Italy, which is a party that can trace its roots back to the neo-fascist movement immediately after World War II, is expected to become the country's first female Prime Minister. Her party won 26% of the vote, along with the far-right League winning 9%, and former Prime Minister Silvio Berlusconi's Forza Italia winning 8%.
David Beard:
Now, that doesn't add up to 50%, and all told they won in the low 40s. But due to Italy's mixed system, where some members of parliament are elected proportionally and some members are elected first-past-the-post, the way that we do it here, that alliance, which ran candidates together, was able to win the vast majority of the first-past-the-post seats. And that gives them an overall majority in both chambers, despite only winning 43% of the vote. Now, the center-left alliance, which was led by the Democratic Party, sort of the traditional center-left party in Italy, won 26% of the vote. A centrist alliance that was an offshoot of the left alliance won 8%, and the populist Five Star Movement won 15%.
David Beard:
Now, if those parties had all run together against the far right, there was a good chance that the election would have been very close or even that alliance could have won. But due to how the parties ran and due to the system, it was never particularly close in the end. And I'll also note that while the far right did grow its share of the vote by about 7%, there was also a really significant vote shift within the far right groups, where both the League and Forza Italia dropped votes versus the last election while Brothers of Italy shot up and took a ton of votes from those other two parties, which allowed Meloni to finish first and to claim the Prime Minister's office.
That was, at least in part, because both of those other parties were in coalition governments at different times in the past few years, leaving the Brothers of Italy as really one of the few parties in Italy that was outside of government the entire time, through all of the COVID issues, and debt issues, and other struggles of the Italian government. And it was allowed to play the role of the opposition, which I'm sure helped attract voters to the party, since it was clean of the tough decisions that had to be made by the Italian government in recent years.
David Beard:
Now, one silver lining is that the far-right alliance did not win a two-thirds majority, so they cannot amend the Italian constitution without a referendum. So that means there's a bit of a check. We've seen in other countries, most notably Hungary, that when the far right took power and could unilaterally change the constitution, things got very bad very quickly, so there is at least that safeguard.
David Beard:
And we've also seen Meloni back off a lot of her previous anti-euro positions, and she's also strongly backed Ukraine in recent months. So the early expectation is that, internationally, she's not looking to rock the boat as much as she might have, but we can certainly expect big changes within Italy itself. They ran a very strongly anti-immigrant, racist campaign, and the Brothers of Italy have taken a lot of very strong anti-LGBTQ positions. So those are things that we can expect this far-right alliance to go after in the upcoming years.
David Nir:
That does it for our weekly hits. Coming up, we are going to be talking with G. Elliott Morris, a data journalist who you may know from Twitter, who recently published a fascinating new book on polling called Strength in Numbers. We're going to be discussing the history of polling and so much more with him. A lot of great stuff in store. Check it out.
David Nir:
Joining us now is G. Elliott Morris, who is a data journalist for The Economist and the author of the new book, Strength in Numbers: How Polls Work and Why We Need Them. Elliott, thank you so much for joining us today.
G. Elliott Morris:
Yeah, thanks for having me on.
David Nir:
So tell us about the inspiration for writing your new book, Strength in Numbers, and what you're trying to get across.
G. Elliott Morris:
Well, first let me acknowledge that, on the surface, I understand the book might sound a little bit crazy, to be writing a book right now about the polls, after 2016 and 2020, when they didn't have the best track record. I'm both a consumer of polls via my reporting duties at The Economist and an election forecaster. So I also know that those errors are not necessarily unprecedented.
G. Elliott Morris:
So in the history sections of the book, there's a lot on early polling, which was even worse, really, than it is now. And so I'm sitting here in early 2020, right as the pandemic launches us, basically, into a permanent lockdown at home, which gives me lots of time to write, thinking, "Well, maybe I'll write this polling book about that, about how the media is full of misconceptions about how accurate polls are, and maybe we just shouldn't trust them as much." So I sit down to write this book about everything I know about polls, and why everything is wrong.
G. Elliott Morris:
And as I'm doing that, as I'm sort of reading a lot of this old political science and polling archive literature from the '30s and '40s, I'm realizing there's a much bigger story here. The story about polls is not, hey, these are actually pretty accurate. It's that they're actually an incredibly important tool for democracy. And so that's where the book goes, and that was my biggest inspiration for writing it. Actually, we shouldn't be using this tool only for predicting elections, and we shouldn't be expecting, as I'm sure we'll get into, laser-like predictive accuracy with these polls. We should be using them for something much more important, which is writing about what people want from their government, and pushing leaders to do those things.
David Nir:
So you alluded to this a moment ago. You talked about the bad polling misses in the early days of polling. And I think probably a lot of folks imagine that polling really started in the first half of the 20th century, but you go back much deeper than that. So how did you trace it back to its earliest roots, and what did you find with the early history of polling?
G. Elliott Morris:
Well, we have evidence of unscientific polls, or what we now call straw polls, stretching back to at least 1824. That was an election between four Democratic-Republican candidates, not necessarily mapping onto any current partisan divides, but important because it was still closely watched. And around the country you had newspapers, partisan newspapers mainly, getting their editors and journalists to go around and basically ask anyone they could find who they were going to vote for, and then they would report it back to the newspaper, which typically had a favored candidate and would say, "Hey, look at all these people who are going to vote for our candidate that we like."
G. Elliott Morris:
And those earliest examples of straw polls are probably not very accurate. The evidence we have of them is like, "Oh, here's 20 people we talked to in Kentucky, or at a 4th of July parade in Massachusetts." It's not the type of thing we would use today to predict an outcome, or to measure public opinion, but it is nevertheless the first evidence of polling that we have in America.
G. Elliott Morris:
That evidence, by the way, comes from an early pollster for the Franklin Roosevelt administration, who unearthed it and later gave it to George Gallup, who cited it in his book. So that's how I come to that information.
G. Elliott Morris:
And straw polls really took off thereafter. The most famous example of straw polls, obviously, is the Literary Digest poll, which again, I'm sure we'll talk about, but there were other straw polls, or unscientific polls, before the 1936 Literary Digest fiasco. I mean, they were actually pretty accurate in earlier elections in the 20th century; 1912, 1920, and 1924 had pretty accurate presidential straw polls, at least more accurate than the prognosticators at the time, and more accurate than the legal election betting markets of the era, too. But they're not scientific. And so that sort of paved the way for a better tool to be introduced later in history. I won't spoil the ending.
David Nir:
So those early straw polls, were they born out of the same desire that we have when we look at polls today? Simply, people don't want to wait for the election, they want to get a sense of where things are going, they're hoping to predict the future.
G. Elliott Morris:
I think there's two answers here that come from the literature. The first is this suggestion that there's basically just some innate human desire to try to divine the electoral future. And fair enough. I mean, elections are very important, as the listeners of this podcast, I'm sure, will agree. Politics, then and now, is a very high-stakes game for both candidates and people. And so yeah, you want to anticipate outcomes so you can prepare. Maybe that's mentally, maybe that's resources, or what have you.
G. Elliott Morris:
But I mean really, to be totally honest, at the time, lots of these straw polls were conducted, and the really unscientific ones, especially, by partisan newspapers. They wanted to be able to say, "Look how bad all the other candidates are that we don't like. And to prove to you how bad they are, here's our straw poll that says 20 people are going to vote for a guy we like, and only two for the guys that we don't." And that was the motivation for at least lots of the isolated examples of this that we have in history.
G. Elliott Morris:
Now I'm sure there's lots of straw polls that we don't have evidence for, but at least drawing on the small-n sample that we do have, that motivation definitely comes through in multiple ways.
David Nir:
So you mentioned the 1936 Literary Digest straw poll, and that may or may not be something our listeners are familiar with. So why don't you walk us through that one, and tell us why it's so notorious?
G. Elliott Morris:
Yeah. So the Literary Digest was a news magazine in the early 1900s. It conducted what I think, and we can probably not get in legal trouble for saying this, is the largest straw poll ever conducted, at least if you mean a direct straw poll in the American context. The sample was roughly 2 million people, and it asked them how they were going to vote. And the way they came to that sample is by surveying people who signed up to receive Literary Digest magazine, and then they also purchased lists of people who owned telephones and automobiles, along with their addresses. They'd send them postcards with instructions to tick off the candidate that you like, either the Republican Alf Landon or Franklin Roosevelt, the Democrat, and send them back to the Literary Digest headquarters, where journalists and proto-pollsters, or research director type employees, would tally all of these votes. And then the magazine would report them, and they'd report them along the way as people were returning their ballots. They'd say, "Here's what the Literary Digest poll is saying now." And then they'd add ballots later and say, "Here's an updated Literary Digest poll count." And they did this for quite some time.
G. Elliott Morris:
Like I said, they were pretty accurate in previous elections, but in 1936, they have a bit of a downfall. So their poll, even though they sampled 2 million people, says that Roosevelt is going to lose by 14 percentage points in the national popular vote, 43% to 57% for Landon, which is probably the biggest polling error ever, given that Roosevelt won by 24 percentage points, making this a 38 percentage point error on margin. And again, it's not scientific, so we can't necessarily judge it by the standards that we judge modern polls, but certainly it's a horrendous misfire. And it spells death basically for the Literary Digest magazine also, which closed down in the next few years.
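To spell out the arithmetic behind that 38-point figure, here is a quick back-of-the-envelope sketch using only the numbers just quoted:

```python
# The Digest predicted Landon 57% to Roosevelt 43%; Roosevelt actually won by 24 points.
predicted_margin = 43 - 57           # Roosevelt minus Landon in the Digest poll: -14
actual_margin = 24                   # Roosevelt's actual popular-vote margin
error_on_margin = actual_margin - predicted_margin
print(error_on_margin)               # 38 percentage points
```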
David Nir:
So with the benefit of hindsight, and what we understand now about modern polling and sampling, you mentioned that the Literary Digest obtained lists of owners of automobiles and, in particular, telephones, not necessarily universally owned items in 1936. So it seems like they had a pretty clear sampling problem. Was this something that anyone understood or called out at the time, or did it just seem to make sense, and it was only later, like I said, with the benefit of hindsight, that it seemed totally borked?
G. Elliott Morris:
Well, right. So George Gallup spots this error first. The pollster we all know as sort of the first scientific pollster in America. At the time, he's getting his public polling firm, the American Institute for Public Opinion, off the ground. He's conducted a few polls nationwide. He sells, basically, stories about these polls to magazines to republish them, and he's trying to make a name for himself. So he does spot this error ahead of time. It's unclear if he comes to this error because of a sampling issue.
G. Elliott Morris:
Gallup actually tries to redo the Literary Digest poll, not by asking 2 million people, but by asking a few thousand people how they're going to vote, both by mail and in person. And he says this is a bad poll not necessarily because you're sampling telephone and automobile owners, but because you're not enforcing demographic quotas in the poll. He says you should have representative numbers of white men and white women, and educated men and women, and upper-class whites and lower-class whites. And only then are you going to have a representative poll of the time.
G. Elliott Morris:
And he also analyzes some of the patterns in their past data, which show a Republican bias. Now, I said earlier that their surveys were accurate. Well, technically they’re accurate if you adjust for this bias, which was pretty predictive. But they have a pretty stable four or five percentage point overestimation of Republicans most years. This was a well-known fact at the time. Aside from the sampling issue, what Gallup also understands is that there's non-response in the Literary Digest poll.
That means it's not only prone to error because they're sampling owners of telephones and automobiles, who tend to have more income and to be more prone to voting for Republicans, but also because Alf Landon supporters, he thinks, might be more likely to respond to the poll for whatever reason. Maybe they just have more time to read the newspaper and to send back mail and fill out a ballot, or what have you. The political science after the fact says that this non-response error is the bigger cause of the Literary Digest poll's miss. Now, there's probably some sampling bias too, but Gallup's polling data from this time is public. You can actually look at the interviews.
G. Elliott Morris:
If you re-weight the data to match demographically, you still have big problems. You still have too many Republicans answering the poll. Now, I'm sure we'll get here, but this is really important because if the first ever example of scientific polling in American history has a non-response problem by party, you have too many Republicans or Democrats answering it, well, that tees up pretty nicely with the problems we're seeing in polling today and points to a broader, more fundamental issue with polls that I try to talk about in the book, which is this non-response or partisan non-response problem.
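To illustrate what re-weighting to match demographics does, and does not, fix, here is a minimal sketch in Python. The numbers are entirely hypothetical, not Gallup's actual data; the point is simply that weighting on demographics cannot remove a partisan non-response bias that exists within every demographic group.

```python
# A minimal sketch of demographic re-weighting (post-stratification).
# Even after weights force the sample to match the population on demographics,
# a partisan non-response problem can survive inside each demographic cell.
import pandas as pd

# Hypothetical raw sample: too many college graduates AND, within every education
# group, too many Republicans relative to the true electorate.
sample = pd.DataFrame({
    "education": ["college"] * 600 + ["non_college"] * 400,
    "vote":      ["R"] * 420 + ["D"] * 180 + ["R"] * 240 + ["D"] * 160,
})

# Population targets for the weighting variable (education only).
targets = {"college": 0.35, "non_college": 0.65}

# Weight each respondent so the weighted education shares match the targets.
shares = sample["education"].value_counts(normalize=True)
sample["weight"] = sample["education"].map(lambda g: targets[g] / shares[g])

weighted_r_share = (sample.loc[sample["vote"] == "R", "weight"].sum()
                    / sample["weight"].sum())
print(f"Weighted Republican share: {weighted_r_share:.1%}")
# Still well above the (hypothetical) true figure, because the bias lives
# inside the cells the weights can see, not between them.
```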
David Beard:
Now, before we get into the misses of the most recent years that a lot of people are very familiar with, I want to ask you a broader, more conceptual question, which is, what are polls good for? Even before the Trump era problems, people never like the idea of poll-tested candidates. More often than not, people seem to get excited by polls and then be disappointed when they don't turn out. We have here in America elections every two years. It's not like we don't have a good understanding of who people are supporting on a regular basis. Why do we need polls?
G. Elliott Morris:
I think we need polls for two reasons. First, because a world in which we have no polls is not a world where we have no election predictions. It's one where we have bad election predictions. You have prognosticators or pundits telling you who's going to win elections, and that misreads or misrepresents races to readers. It's a bad read of politics for people. That's, I think, the electoral case for polls. Polling aggregates have better track records than election experts, who, according to some famous psychological studies, are basically no more accurate than monkeys throwing darts when it comes to predicting events. So that's the electoral case for why we still need polls, but there's a much more fundamental reason why we need them. That is because a world in which you don't have polls is one where you don't have, at least in the American context, any idea nationally about what people want from the government.
G. Elliott Morris:
That's because, for multiple reasons, presidential elections, for example, are decided by the Electoral College, not the electrical college, that would be interesting. If you're judging what people want by the presidents they elect, you're going to have a really skewed understanding of what they actually want on a policy dimension or how they want to be represented in social policy or culturally. Similarly, if you're judging them based on the outputs of the US Congress, the Senate or the House, in some elections, say in 2012, where there's a big mismatch between the popular vote for the House and the members that are elected, you're going to similarly misunderstand the American people. Polls are really important in this bigger representational sense in how you understand the public.
G. Elliott Morris:
Finally, they are used by lawmakers. It's not like members of Congress, parties, presidents don't want this information. They do want to know what the people want, partly so they can represent them, but also so they can campaign on the most popular parts of their agenda. Maybe that's nefarious, but also maybe it means people are more likely to get what they want. Again, I'm a big political science reader and there's lots of political science in the book. The political science studies here say people on average get what they want when you have polling in elections.
G. Elliott Morris:
There's been a large increase in congruence of policy basically over the past half century. A lot of that is because you've increased the pool of members that can vote in this legislature, but I also think that part of that is because you increase the size of the pool of voters that participate in democracy. I think a lot of it has to be also because you have better signals of what the voters want. You have direct quotes, say from members of Congress in the '60s saying, "We want to do what the people want to do." I don't think we can just totally discount that.
David Beard:
If we don't want to throw polling out with the bathwater, if you will, how do we develop a better relationship with polling? People seem to get so upset about it. Should we just accept that polling is not going to give us what we want, which is the exact right answer to what's going to happen in the next election? Do we need to accept that if the race is within a single-digit margin, the polls just aren't going to tell us that much?
G. Elliott Morris:
Yeah, no, that's exactly right. I'm so happy that you have distilled the book into two points. There are a few points at the end of the book that I'll give here, with specific recommendations for different types of readers of the polls.
G. Elliott Morris:
The first is for journalists, and I think they should listen most to what you've said here. That is: there is a margin of error. There's uncertainty in every single poll, as you say, but there's also a very long historical track record of multiple polls being subject to the same bias, like we saw in 2012, sorry, 2016 and 2020... actually 2012 also, but in the opposite direction. What that should mean for political journalists is you should not be expecting laser-like predictive accuracy out of these surveys. After all, they are just estimates of what the people think. Those estimates are the result of a very complex process that is both artful and scientific. Journalists reporting on races should just not expect, bluntly, that a poll in a 49/51 race is going to be telling them much more than guesswork at that point. It is useful to have that indicator to know that the race is close, but you should not be treating the 51% as meaning this candidate is 90% likely to win or whatever.
G. Elliott Morris:
That brings me into the second group of people for whom I have some recommendations. That is, if you're the type of person who reads election forecasts, then there's a more specific problem with the polling aggregation models that you need to be aware of. That is, although basically we've been told that averaging polls gives us a better signal of the race, that's not true, as I said earlier, when you have lots of individual polls subject to the same uniform biases. If you are a consumer of election forecasts, my recommendation at the end of the book is not to listen, basically, to the probabilities of those forecasts, but instead to ask yourself what the forecast would say if the polls were as wrong as they were in the past, to produce what statisticians call conditional forecasts. That's on me, the forecaster, to give you, the reader. We've invested some resources in doing that this year.
G. Elliott Morris:
It is also just important, I think, for people to realize that what we're doing when we're forecasting, when we're averaging polls together, is not canceling out all of the possible uncertainty from these processes. We're trying to capture that and distill it into a single number, but more importantly into a narrative about elections. That can be helpful for the political reporter, who sees that 59/40, or sorry, 51/49 races are actually closer to a coin flip than people would think. More importantly, I think people should just treat these surveys as more uncertain.
G. Elliott Morris:
Speaking of treating polls with more uncertainty, the final group I have recommendations for in the book is pollsters, who I think have been pretty misleading, to be frank, about the accuracy of their surveys. If you read reports of polls from the most important pollsters, for example, or the most prestigious pollsters, the Pew Research Center, Monmouth University, the New York Times, they are all reporting margins of sampling error, uncertainty in the poll that is only due to statistical randomness in who they talk to. Now, again, this is kind of technical, so sorry, but that's not the only source of uncertainty in polls. We know polls could also be wrong by, say, phrasing questions incorrectly, or if pollsters aren't making adjustments for the types of groups that are least likely to respond to them, like Trump supporters in 2016 or 2020. The statistical margin of error that they're reporting is actually about half the size of the empirical margin of error for election polling going back to the 1990s.
G. Elliott Morris:
The final recommendation, for everyone reading surveys, is to, as a rough heuristic, take the margin of error that's published by a pollster and double it. That gets you, I think, a pretty good read of the polls. So there are a few rules there. Treat polls with uncertainty when you're writing about them. Think of the uncertainty as about twice as big as a pollster will tell you. Pay attention to the forecasts, the rough shape of them, but not the probability. Don't take that probability as the Oracle at Delphi. It is giving us a pretty rough statistical sense of the race, but it's not a perfectly calibrated prediction.
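For readers who want to see what that doubling heuristic looks like in practice, here is a minimal sketch. It uses the standard textbook approximation that the published margin of sampling error for a result near 50% is roughly one divided by the square root of the sample size; the doubling is the rule of thumb described above.

```python
import math

def reported_moe(n: int) -> float:
    """Approximate published margin of sampling error, in percentage points."""
    return 100 / math.sqrt(n)

def heuristic_moe(n: int) -> float:
    """Rule-of-thumb margin that also allows for non-sampling error: double it."""
    return 2 * reported_moe(n)

n = 1000  # a typical poll sample size
print(f"reported: +/-{reported_moe(n):.1f} pts, doubled: +/-{heuristic_moe(n):.1f} pts")
# reported: +/-3.2 pts, doubled: +/-6.3 pts
```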
David Nir:
Can you give us an example of what you were just talking about a moment ago in terms of viewing current polling or polling averages in light of errors in previous cycles?
G. Elliott Morris:
In light of the recent misses, 2016 and 2020 being the biggest or most notable misfires for the averages, but also misses like in 2018, when you had some pretty big errors in Ohio, and in 2012, when you also had misses in the opposite direction, my recommendation, to better ground yourself in what the polls are telling you and what they could be telling you if they're wrong, is to, if you can use Microsoft Excel, download the polls from 538, which makes them freely available, and then swing them by however many points you think they're biased, and see what the election outcome would be in that scenario. That is what our election forecasts are doing under the hood. We are asking ourselves, or sorry, we're asking the computer to tell us: if all the polls are biased by, say, five points toward Democrats, here's the election outcome. In this case, it's probably that they're going to win 54 seats or whatever. If they're biased in the other direction, then Democrats in the Senate would win maybe 48 seats, or whatever that number would be.
G. Elliott Morris:
I don't think that necessarily comes across in our probabilistic predictions. Instead, what I like to do is this exercise: swing all the individual polls, add them up, and then just tell people, "Hey, if the polls are as wrong as they were before, here's what's going to happen." I think this prompts the reaction from readers, from consumers of polls, that forecasters are actually trying to get after, which is anchoring to the inherent uncertainty in the polls themselves. If I tell you there's an 80% chance that Democrats are going to win the Senate, you'll be like, "Oh, well hey, that's pretty likely." If I tell you, "If the polls are even less biased than they were in 2020, Democrats could lose the Senate," I think that gives you a better sense of the uncertainty in concrete terms, of just how wrong or how right the polls are going to have to be for a certain candidate to win. That's why we think it's useful.
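For listeners who want to try the exercise Morris describes, here is a minimal sketch of the poll-swinging idea. It assumes you have already downloaded a CSV of Senate polls (538 publishes its polling database for download); the file name, column names, and the seats-not-up figure here are illustrative assumptions, not the actual format of any particular dataset.

```python
import pandas as pd

def conditional_seat_count(polls: pd.DataFrame, bias_toward_dems: float,
                           dem_seats_not_up: int) -> int:
    """Count Democratic Senate seats if every poll overstates Democrats by the given number of points."""
    averages = polls.groupby("state")[["dem_pct", "rep_pct"]].mean()
    adjusted_margin = (averages["dem_pct"] - averages["rep_pct"]) - bias_toward_dems
    return dem_seats_not_up + int((adjusted_margin > 0).sum())

polls = pd.read_csv("senate_polls.csv")   # hypothetical local file of Senate polls
for bias in (-4, 0, 4):                   # polls biased toward the GOP, unbiased, biased toward Democrats
    seats = conditional_seat_count(polls, bias, dem_seats_not_up=36)  # 36 is illustrative
    print(f"If polls overstate Democrats by {bias:+} points: roughly {seats} Democratic seats")
```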
David Nir:
Talking about those recent polling misses, obviously everyone listening to this podcast understands that the polls in both 2016 and 2020 were definitely tilted toward Democrats. So much ink has been spilled on the number one question about those two polling years in particular, and that question is why? I've certainly read a lot about this. I am certain that you have as well. Do you have any answers that feel satisfying, or are there any possible answers that you think maybe are under-considered? Or is it really like that one big study from a collection of political scientists saying, "We really don't know. It could have been any one of a million different things"? What are your feelings on this?
G. Elliott Morris:
In our data, there are a few different explanations that I think, as you mentioned, are under-considered. And the big one is that there aren't enough Republicans responding to surveys, but more importantly, surveys, or pollsters, are talking to the wrong types of Republicans. In the polling that The Economist does with YouGov, for example, that poll has been balanced by 2016 presidential vote, or in 2020, it had been. That means that when YouGov surveyed 1,000 to 1,500 Americans, actually, every week, it made sure to have the right percentage of white and non-college-educated white and Black and high-income and low-income, et cetera, voters. In addition to that, it made sure that it had the right number of Trump voters and Clinton voters.
G. Elliott Morris:
And yet the poll was still as biased as the average of all the other polls, or the vast majority, which are not making these same political adjustments. And so that tells us that the 2016 Trump voters we were getting in our poll were not 2020 Trumpy enough, and that finding comes through for some other pollsters too. The New York Times' Nate Cohn's polls run into some similar problems. The pollster at Monmouth University, Patrick Murray, uses a similar methodology to balance the poll by party, and he has also found that they're just not talking to conservative-enough Republicans. And that's the big issue with polling bias in 2016 and 2020.
G. Elliott Morris:
There's a related issue in the way that pollsters predict likely voters. There's not as much concrete evidence for this, but the problem, according to Murray, is in the likely voter model, the statistics they're running behind the scenes before they give you the poll result, which is, again, another thing that can add to the uncertainty of the poll. They're predicting likely voters based off of the past vote history of each person they're interviewing and that person's stated likelihood to vote, and they find there's just lots of error in this model that can lead to larger misfires. And these models are not easy to get right.
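As an illustration of the kind of likely-voter model being described, here is a minimal sketch: predict each respondent's probability of voting from past vote history and stated intention, then use that probability when tabulating the poll. The features, data, and model choice are hypothetical; actual pollsters' models differ and, as Morris notes, are easy to get wrong.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [voted in 2018, voted in 2020, stated likelihood to vote (0-10)]
X = np.array([[1, 1, 10], [0, 1, 8], [0, 0, 3], [1, 0, 6], [0, 0, 9], [1, 1, 2]])
y = np.array([1, 1, 0, 1, 0, 0])  # whether each person actually turned out last time

model = LogisticRegression().fit(X, y)

# Turnout probability for a new respondent, used to weight their answers in the poll.
new_respondent = np.array([[0, 1, 7]])
turnout_prob = model.predict_proba(new_respondent)[0, 1]
print(f"Estimated probability of voting: {turnout_prob:.0%}")
```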
G. Elliott Morris:
And so that's why, for example, the Monmouth poll misfires in the 2021 New Jersey election and also has problems in these years where there's Republican bias in non-response. But other pollsters are having both of these problems at the same time. I think those are the two biggest considerations that people should think through: there aren't enough very Republican people answering surveys, and when we're predicting likely voters in high-turnout elections, the models still seem to underestimate turnout among these very Republican voters, in addition to not having enough of them in the sample.
David Nir:
So is there any way to fix this twin set of problems?
G. Elliott Morris:
Yeah, there are some steps pollsters can take to try to avoid these issues, but at the end of the day, there's a fundamental problem with polling. I don't mean there's some earth-shattering issue with all polls and you shouldn't listen to them. I mean there's a statistical problem that is hard to solve, and that is getting enough people from both parties to respond to your poll. That is just a hard math problem, basically, and a hard survey design problem. As for what pollsters have been doing to try to address this issue: the Pew Research Center, for example, has moved part of its surveying operation to mail surveys, which had gone out of vogue, basically, as telephone polls came to prominence in the '70s because they were much cheaper, and as online polls became more available and predictive, or accurate, I should say, in the 2000s and 2010s.
G. Elliott Morris:
But Pew says we can reach these very Republican voters, and also conservative, religious, evangelical voters, better by sending them a postcard asking them to go to a website and fill out the survey online. Each postcard gets a unique link for the person they're trying to reach, and if that person doesn't fill out the poll, the Pew Research Center will send them another piece of mail. And if they still don't answer, they'll eventually, after as many as five attempts, send them the questionnaire in an envelope to fill out on paper and send back. The Pew Research Center's solution here is saying, "We're just going to work really, really hard to try to get these people who we think aren't responding to us," and it works. They get response rates close to 30%, but it's also insanely expensive. Most pollsters do not have the resources to do polls this way.
G. Elliott Morris:
The other way you can invest money in fixing the fundamental problem with polls is not in the design phase, the trying-to-reach-people phase, but in the modeling phase, where you, say, pull people off of the voter file, using their vote history. Voter file companies also have pretty good predictions for the partisanship of registered voters. And you make sure, essentially, that you make enough phone calls, or send enough mail, or what have you, to predicted Republicans and Democrats, and you make sure to sample these groups in proportion to their numbers in the electorate. And that does a pretty good job, too. But this was the methodology that Monmouth University and The Times-Siena poll used in 2020, and they were still off, because again, it's really hard to solve this problem of not reaching enough conservative Republicans or very Trumpy Republicans.
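Here is a minimal sketch of that voter-file approach: draw a contact list so that each modeled-partisanship group appears in proportion to its assumed share of the electorate. The file name, column name, and target shares are hypothetical, meant only to show the shape of the technique, not any pollster's actual design.

```python
import pandas as pd

def stratified_sample(voter_file: pd.DataFrame, targets: dict, n: int, seed: int = 0) -> pd.DataFrame:
    """Draw n records, with each modeled-partisanship group at its target share of the sample."""
    parts = []
    for group, share in targets.items():
        pool = voter_file[voter_file["modeled_party"] == group]
        parts.append(pool.sample(n=int(round(n * share)), random_state=seed))
    return pd.concat(parts, ignore_index=True)

voter_file = pd.read_csv("voter_file.csv")    # hypothetical file with a modeled_party column
targets = {"R": 0.33, "D": 0.33, "I": 0.34}   # illustrative shares of the electorate
contact_list = stratified_sample(voter_file, targets, n=1500)
```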
G. Elliott Morris:
By the way, I should say, I can imagine this error going in the opposite direction. There's not necessarily a reason why it should be conservative Republicans who aren't answering their phones as much as other people; very liberal Democrats could easily be doing that too. And it takes a lot of money and statistical power and manpower to solve this issue. I think the real takeaway here is that people should trust pollsters who release their methods and who tell you they're putting lots of resources into trying to talk to people, a lot more than the random surveys they see published by, say, a partisan outlet or a campaign or a low-quality online pollster or what have you.
David Beard:
So now that we've had this very responsible discussion about the positives and negatives of polling, I'm going to do what all of our listeners want and ask you about the 2022 election. You recently released a forecasting model with The Economist. What can you tell us about what the polls are showing, what your model is showing? Anything that's surprised you so far?
G. Elliott Morris:
Well, I'm certainly surprised by how optimistic these models are for the Democrats, and that's partly due to polling data. Our average of the generic ballot poll, for example, the question that asks people which party they're going to vote for in their congressional district, is almost D plus two, so that's pushing the model toward the Democrats quite significantly. Our model also, though, looks at other indicators that are more favorable to Democrats than I thought they would be maybe four months ago, and that is fundraising data. Democrats are winning much higher percentages of contributions from in-state donors than we would've predicted based off of the national environment and past dynamics in each race.
G. Elliott Morris:
And special elections have, as the readers of Daily Kos would know, because we use your data, been very favorable to Democrats as well. Democrats are doing much better than we would've predicted based off of, say, the incumbency in the race or the presidential or past legislative lean of those districts. On the one hand, like you're saying, being the responsible consumer of the polls, we should say, "Well, they can be uncertain," so this D plus two might be like R plus two or R plus four instead, whatever. But there are also these other indicators that are optimistic.
G. Elliott Morris:
On the Senate side, I'm a bit more skeptical, to be honest, of the polling. If you look at a race like Ohio or Pennsylvania, for example, a lot of the polls currently in the polling averages, both ours and 538's, mainly because we rely on the underlying polls that are collected by 538 like everyone else does, are partisan polls or polls from low-quality outlets. I wrote a newsletter article about a week ago about how the average quality of a pollster this year is much lower than in previous midterms, and that could mean that our models are prone to more error this year. That's not necessarily something we're going to know ahead of time. What I'm fairly confident in is that the polls are overestimating Democratic support in states like Wisconsin and Ohio, where they've misfired relatively recently and where pollsters haven't made all that many changes to their methods.
David Nir:
We have been talking with G. Elliott Morris, data journalist for The Economist and author of Strength in Numbers: How Polls Work and Why We Need Them. Elliott, please let our listeners know exactly where they can find all of your work, and in particular, where they can find the model you were just telling us about. We are not going to rattle off any percentages from the model, because by the time someone listens to this episode, of course, they could change, but they should go and click and hit refresh plenty. So, tell us all the ways they can find you and your work.
G. Elliott Morris:
For the model, readers could go to economist.com/midterms, and you'll be redirected to the right page. You can buy my book also at any bookstore near you or read more blog posts and writing about the book at my website, gelliottmorris.com.
David Nir:
And where can folks find you on Twitter?
G. Elliott Morris:
At gelliottmorris. Yes, of course, Twitter.
David Nir:
Thank you so much for joining us.
G. Elliott Morris:
Thank you.
David Beard:
That's all from us this week. Thanks to G. Elliott Morris for joining us. The Downballot comes out every Thursday, everywhere you listen to podcasts. You can reach out to us by emailing thedownballot@dailykos.com. If you haven't already, please subscribe to The Downballot and leave us a five-star rating and review. Thanks to our producer, Cara Zelaya, and editor, Tim Einenkel. We'll be back next week with a new episode.