This week on The Downballot, we take a look at Pennsylvania’s governor's race, where Republican Doug Mastriano's campaign is exploding in spectacular fashion; ad spending wars, where Democrats are doing a much better job of actually getting their spots before viewers, even when they're getting outspent by Republicans; and an update on Republican J.R. Majewski and his stolen valor scandal in Ohio. Lastly, we review the dispiriting results of the Italian general election, which saw the far right gain its first victory since World War II.
After that, cohosts David Nir and David Beard spoke with this week’s guest, G. Elliott Morris, data journalist for The Economist and author of the book “Strength in Numbers: How Polls Work and Why We Need Them.”
You can listen below or subscribe to The Downballot wherever you listen to podcasts. You can also find a transcript for this week right here. New episodes come out every Thursday!
First up, out of Pennsylvania, a surprising update: Nir recalled how a Republican operative who opposed Mastriano in the primary offered up a very telling statement. "We were opposed to Doug's candidacy in the primary because we feared that he would not be able to connect with the independent and moderate Democratic voters that are necessary for Republicans to win in Pennsylvania,” the operative said. “Unfortunately, six weeks from the election, I haven't seen anything to suggest we were inaccurate in our assessment."
Things are looking worse and worse for Mastriano. On TV and online ads alone, Shapiro has outspent Mastriano, $21.6 million to $6,300. “No, I am not leaving off any zeros,” Nir quipped as he cited these figures. “Mastriano simply has no money, and what little money he doesn't have, he hasn't even put on the airwaves.”
The Philadelphia Inquirer notes that Shapiro has another $8 million of TV time reserved through Election Day, whereas Mastriano's total is $0. This represents a glaring disparity, not just in raw dollar terms, but in terms of who is actually doing the spending: Whether the money comes from a campaign or a super PAC has a huge impact on how many ads each side can run for the same dollar amount, since campaigns are entitled to much lower ad rates than outside groups.
Beard added additional context on the GOP’s overall approach to investing in races this cycle: “As bad as Mastriano is, we've also got some very bad Republican Senate candidates, and as a result, the GOP independent arms are having to bail these candidates out. Now, overall, to take a broad view of the nine key Senate battleground races, the GOP is spending more money than the Democrats when you include both candidate spending and outside group spending, $106 million to $93 million in the past three weeks.”
This doesn’t tell the whole story, though, Beard continued. Democratic money comes mostly from the candidates themselves, with a smaller share from outside groups, and many Democratic Senate candidates have had very strong fundraising quarters. By contrast, a number of Republican Senate candidates have been very bad fundraisers, much like Mastriano, so the GOP is seeing much worse ad rates because so much more of its spending comes from outside groups.
Moving on to Ohio, the cohosts circled back to offer a major update to a story from last week's episode, in which they discussed Republican J.R. Majewski, a candidate for Ohio's Toledo-based 9th Congressional District, and his appalling stolen valor scandal. The following day, in fact the very same afternoon that last week's episode was released, the NRCC cut him loose. The GOP's chief campaign arm for House races canceled $1 million in advertising, all of its planned spending on Majewski's attempt to unseat veteran Democratic Congresswoman Marcy Kaptur. This is a seat that, as Nir and Beard noted, the GOP gerrymandered to an extreme, turning a district Democrats typically won by about 20 points into one that would have given Donald Trump the edge.
Beard took listeners across the Atlantic Ocean to Italy, which held its general election this past Sunday. The far right there is now expected to take power for the first time since Benito Mussolini and the end of World War II, after the far-right alliance comfortably won the election. Giorgia Meloni, the leader of Brothers of Italy, a party that can trace its roots back to the neo-fascist movement that emerged immediately after World War II, is expected to become the country's first woman prime minister. Her party won 26% of the vote, with its far-right ally the League winning 9% and former Prime Minister Silvio Berlusconi's Forza Italia winning 8%.
As Beard explained, that doesn't add up to 50%; all told, the alliance won in the low 40s. But due to Italy's mixed system, in which some members of parliament are elected proportionally and some are elected first-past-the-post, the way we do it here, the alliance, whose parties ran candidates together, was able to win the vast majority of the first-past-the-post seats. That gives them an overall majority in both chambers despite winning only 43% of the vote. The center-left alliance, led by the Democratic Party, Italy's traditional center-left party, won 26% of the vote; a centrist alliance that was an offshoot of the left alliance won 8%; and the populist Five Star Movement won 15%.
”Now, if those parties had all run together against the far right, there was a good chance that the election would have been very close or even that alliance could have won,” he added. “But due to how the parties ran and due to the system, it was never particularly close in the end.”
Next, G. Elliott Morris joined Nir and Beard to talk more about his book, how we can fix polling, and why polls matter.
The origins of polling, and of polling misses, can be traced back to straw polls, Morris said:
Well, we have evidence of unscientific polls, or what we now call straw polls, stretching back to at least 1824. That was an election between four Democratic-Republican candidates, not necessarily mapping onto any current partisan divides, but important because it was still closely watched. And around the country you had newspapers, partisan newspapers mainly, getting their editors and journalists to go around and basically ask anyone they could find who they were going to vote for, and then they would report it back to the paper, which typically had a preferred candidate and would say, "Hey, look at all these people who are going to vote for our candidate that we like."
Those earliest examples of straw polls are probably not very accurate. The evidence we have of them is like, "Oh, here's 20 people we talked to in Kentucky, or at a 4th of July parade in Massachusetts." It's not the type of thing we would use today to predict an outcome, or to measure public opinion, but it is nevertheless the first evidence of polling that we have in America.
That evidence comes from an early pollster for the Franklin Roosevelt administration, who unearthed it and later gave it to George Gallup, who cited it in his book. After that, straw polls took off in popularity.
Nir started off by asking Morris what polls are good for: “Even before the Trump-era problems, people never liked the idea of poll-tested candidates. More often than not, people seem to get excited by polls and then be disappointed when they don't turn out. Here in America, we have elections every two years. It's not like we don't have a good understanding of who people are supporting on a regular basis. Why do we need polls?”
Morris replied:
I think we need polls for two reasons. First, because a world in which we have no polls is not a world where we have no election predictions. It's one where we have bad election predictions. You have prognosticators or pundits telling you who's going to win elections, and that misreads or misrepresents races to readers. It's a bad read of politics for people. That's, I think, the electoral case for polls. Polling aggregates have better track records than election experts who, according to some famous psychological studies, are basically monkeys throwing darts in accuracy of predicting events. That, I think, is the electoral case for why we still need polls, but there's a much more fundamental reason why we need them. That is because a world in which you don't have polls is one where you don't have, at least in the American context, any idea nationally about what people want from the government.
That's because, for multiple reasons, presidential elections, for example, are decided by the Electoral College ... If you're judging what people want by the presidents they elect, you're going to have a really skewed understanding of what they actually want on a policy dimension or how they want to be represented in social policy or culturally. Similarly, if you're judging them based on the outputs of the US Congress, whether the Senate or the House in some elections, say in 2012, where there was a big mismatch between the popular vote for the House and the members who were elected, you're going to similarly misunderstand the American people. Polls are really important in this bigger representational sense in how you understand the public.
Finally, they are used by lawmakers. It's not like members of Congress, parties, presidents don't want this information. They do want to know what the people want, partly so they can represent them, but also so they can campaign on the most popular parts of their agenda. Maybe that's nefarious, but also maybe it means people are more likely to get what they want. Again, I'm a big political science reader and there's lots of political science in the book. The political science studies here say people on average get what they want when you have polling in elections.
Beard followed up by inquiring about how we can develop a better relationship with polling, as “we don't want to throw the polling out with the bath water.” He asked, “Should we just accept that polling is not going to give us what we want, which is the exact right answer to what's going to happen in the next election? Do we need to accept that if the race is within a single digit margin, the polls just aren't going to tell us that much?”
Morris admitted the complexity of polling, and offered several recommendations:
There are a few points at the end of the book that I will give, and they have specific recommendations for different types of readers of the polls.
The first is for journalists, and I think they should listen most to what you've said here. That is, there is a margin of error. There's uncertainty in every single poll, as you say, but there's also a very long historical track record of multiple polls being subject to the same bias, like we saw in 2016 and 2020, and in 2012 as well, but in the opposite direction. What that should mean for political journalists is you should not be expecting laser-like predictive accuracy out of these surveys. After all, they are just estimates of what the people think. Those estimates are the result of a very complex process that is both artful and scientific. Journalists reporting on races should just not expect, bluntly, that a poll in a 49/51 race is going to be telling them much more than guesswork at that point. It is useful to have that indicator to know that the race is close, but you should not be treating the 51% as this candidate is 90% likely to win or whatever.
That brings me into the second group of people for whom I have some recommendations. That is, if you're the type of person who reads election forecasts, then there's a more specific problem with the polling aggregation models that you need to be aware of. That is that although basically we've been told that averaging polls gives us a better signal of the race, that's not true, as I said earlier, when you have lots of individual polls subject to the same uniform biases. If you are a consumer of election forecasts, my recommendation at the end of the book is not to listen basically to the probabilities of those forecasts, but instead to ask yourself what the forecast would say if the polls were as wrong as they were in the past, to produce what statisticians call conditional forecasts. That's on me, the forecaster, to give you, the reader. We've invested some resources in doing that this year.
”It is also just important, I think, for people to realize what we're doing when we're forecasting, when we're averaging polls together, is not canceling out all of the possible uncertainty from these processes,” Morris added. “We're trying to capture that and distill that into a single number, but more importantly into a narrative about elections. That can be helpful for the political reporter who sees that 51/49 races are actually closer to a coin flip than people would think. More importantly, I think people should just treat these surveys as more uncertain.”
The final recommendation Morris offered was for everyone reading surveys to, as a rough heuristic, take the margin of error that's published by a pollster and double it. There are some steps pollsters can take to try to avoid these issues, but at the end of the day, there remains a fundamental problem with polling, he noted:
I don't mean there's this earth-shattering issue with all polls and you shouldn't listen to them. I mean there's a statistical problem that is hard to solve, and that is getting enough people from both parties to respond to your poll. That is just a hard math problem, basically, and a hard survey design problem. So one thing pollsters have been doing to try to address this issue: the Pew Research Center, for example, has moved part of its surveying operation to mail surveys, which had gone out of vogue as telephone polls came to prominence in the '70s, because they were much cheaper, and as online polls became more available and accurate in the 2000s and 2010s.
“The other way you can invest money into fixing the fundamental problem with polls is not in the design phase, the trying to reach people phase, but in the modeling phase, where you, say, pull people off of the voter file, their vote history,” Morris said.
”So now that we've had this very responsible discussion about the positives and negatives of polling, I'm going to do what all of our listeners want and ask you about the 2022 election. You recently released a forecasting model with The Economist. What can you tell us about what the polls are showing, what your model is showing? Anything that's surprised you so far?” Beard asked.
Morris has been surprised by how optimistic these models are for the Democrats, which is partly due to polling data:
Our average of the generic ballot poll, for example, the question that asks people which party they're going to vote for in their congressional district, is almost D plus two, so that's pushing the model toward the Democrats quite significantly. Our model also, though, looks at other indicators that are more favorable to Democrats than I thought they would be maybe four months ago, and that is fundraising data. Democrats are winning much higher percentages of contributions from in-state donors than we would've predicted based off of the national environment and past dynamics in each race.
And special elections have, as the readers of Daily Kos would know because we use your data, been very favorable to Democrats as well. Democrats are doing much better than we would've predicted based off of, say, the incumbency in the race or the presidential or past legislative lean of those districts. On the one hand, like you're saying, being the responsible consumer of the polls, we should say, "Well, they can be uncertain," so this D plus two might be like R plus two or R plus four instead, whatever. But there are also other indicators that are optimistic.
On the Senate side, he feels a bit more skeptical of the polling:
If you look at a race like Ohio or Pennsylvania, for example, in the current polling averages, both ours and FiveThirtyEight's (mainly because we rely on the underlying polls collected by FiveThirtyEight, like everyone else does), a lot of those polls are partisan polls or polls from low-quality outlets. I wrote a newsletter article about a week ago about how the average quality of a pollster this year is much lower than in previous midterms, and that could mean that our models are prone to more error this year. That's not necessarily something we're going to know ahead of time. What I'm fairly confident in is that the polls are overestimating Democratic support in states like Wisconsin and Ohio, where they've misfired relatively recently and where pollsters haven't made all that many changes to their methods.
To find the model Morris discusses, and more, readers can go to economist.com/midterms. Morris’s book is available at any bookstore near you, and those interested in reading more of his blog posts and writing about the book can visit his website, gelliottmorris.com.
The Downballot comes out every Thursday everywhere you listen to podcasts! As a reminder, you can reach our hosts by email at thedownballot@dailykos.com. Please send in any questions you may have for next week's mailbag. You can also reach out via Twitter at @DKElections.