Some folks may have noticed a diary that ran in mid-January on a study that said that an "election verification exit poll" provided evidence that the 2008 California Prop 8 vote count had been hacked. The diarist said, "Just to clarify, this study is saying that in some places the actual votes that people cast differed from what was recorded by the state by nearly 18%" (emphasis in original). That is assuming, of course, that the "actual votes" matched the exit poll.
To me, the most remarkable thing about the study is that it concludes that Republicans were 23 points less likely to participate in the exit poll than Democrats. In a sane world, that result should sink most of the attempts to use exit polls as evidence that election results are "mathematically impossible." Of course, nothing is ever that simple.
Before I go any further, let me make plain that I'm not a defender of "electronic voting." I think voting on paperless DREs is a lousy idea: even if the machines happen to work well in a particular election, there is no way to prove it. (If they worked terribly, it may or may not be possible to prove that.) I think the best system uses paper ballots, hand-marked by most voters, that can be counted both by optical scanners and by hand. Post-election vote tabulation audits can help to ensure that the scanner counts are materially correct. And so on -- I won't try to put my entire wish list here.
On with the story. In 2008, the Election Defense Alliance conducted election verification exit polls at ten Los Angeles County polling places. EDA analyzed people's reported votes on both Prop 8 (which made same-sex marriage unconstitutional) and Prop 4, which would have imposed new restrictions on non-emancipated minors seeking abortions. EDA also did exit polling in other places, which I won't discuss here. EDA sent multiple interviewers to each polling place in an attempt to approach every voter; over 54% of the voters filled out a questionnaire. In the official returns for these polling places, Prop 8 trailed by about 5.6 percentage points -- but in the exit poll, it trailed by about 21.1 percentage points! You can see why EDA would be suspicious: either the exit poll, or the official count, or both must be substantially biased.
EDA leaves no reasonable doubt that the exit poll is substantially biased. If one makes it to page 38 of EDA's 49-page report, one reads that 66.0% of respondents said they were Democrats, versus just 10.2% who said they were Republicans. Even in L.A. County, that means trouble. EDA found that 61.5% of the people who voted at these polling places were registered Democrats, and 15.6% were registered Republicans. EDA doesn't mention that some people's party identification may not match their official registration, but it is very unlikely that that could explain the 4.5-point excess of Democrats and 5.4-point shortfall of Republicans in the sample. EDA estimates (on page 40) that 60.2% of Democratic voters participated in the survey, versus just 36.7% of Republican voters.
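The arithmetic behind estimates like these is simple Bayes-style bookkeeping: a party's implied response rate is its share of respondents, times the overall completion rate, divided by its share of voters. A minimal sketch -- the overall completion rate of 56.1% is my back-fitted assumption (the report says only "over 54%"), while the composition figures are from the report:

```python
# Implied party-level response rates from sample composition.
# ASSUMPTION: overall completion rate ~0.561 (back-fitted to match the
# report's page-40 estimates; the report says only "over 54%").
overall_rate = 0.561                    # assumed overall completion rate
sample_dem, sample_rep = 0.660, 0.102   # party shares among respondents (p. 38)
voters_dem, voters_rep = 0.615, 0.156   # party shares among all voters

# response_rate(party) = P(respond | party)
#                      = P(party | respond) * P(respond) / P(party)
dem_rate = sample_dem * overall_rate / voters_dem
rep_rate = sample_rep * overall_rate / voters_rep
print(f"Democratic response rate: {dem_rate:.1%}")   # ~60.2%
print(f"Republican response rate: {rep_rate:.1%}")   # ~36.7%
```

That the two implied rates land on the report's own 60.2%/36.7% figures under a single plausible overall rate is a consistency check, not a derivation from the report itself.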
At this point, my irony meter redlines. Back in January 2005, Edison/Mitofsky (E/M) released an evaluation report of the November 2004 national exit poll. On page 31 of that report, E/M stated, "While we cannot measure the completion rate by Democratic and Republican voters, hypothetical completion rates of 56% among Kerry voters and 50% among Bush voters overall would account for the entire Within Precinct Error that we observed in 2004." EDA principal Jonathan Simon was among the people who scoffed that a 6-point gap in response rates was at best an unsupported speculation. One wonders what he thinks now.
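To see how much a gap like E/M's hypothetical one can move a poll, here is a toy calculation. The 56%/50% completion rates come from the E/M report; the 50/50 true split is my own simplifying assumption, chosen to isolate the bias:

```python
# Toy calculation: how differential completion rates bias a poll margin.
# Completion rates (56% Kerry, 50% Bush) are E/M's hypothetical figures;
# the 50/50 true split is an assumption for illustration.
true_kerry, true_bush = 0.50, 0.50
rate_kerry, rate_bush = 0.56, 0.50

obs_kerry = true_kerry * rate_kerry
obs_bush = true_bush * rate_bush
kerry_share = obs_kerry / (obs_kerry + obs_bush)
margin_bias = (kerry_share - (1 - kerry_share)) * 100
print(f"Observed Kerry share: {kerry_share:.1%}")   # ~52.8%
print(f"Margin bias: {margin_bias:.1f} points")     # ~5.7 points
```

A 6-point gap in who answers the poll turns a dead heat into an apparent 5.7-point lead -- which is the scale of discrepancy the "impossible" arguments rest on.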
Simon might argue (as he has in the past) that if E/M were right about 2004, then response rates should have been higher in the most Republican precincts -- and they weren't. This reasoning exemplifies the ecological fallacy. By the same reasoning, if blacks' incomes are lower on average than whites', then states with the largest proportions of blacks should have the lowest per capita incomes -- and they don't.* The fact is that whites and blacks alike have higher incomes in some states than in others, but whites' average income outstrips blacks' in every state. The EDA data show the same sort of pattern: (estimated) response rates for both Democrats and Republicans vary markedly across polling places, but the Democratic response rate is always larger.
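The fallacy is easy to reproduce with two made-up precincts (the numbers below are mine, purely for illustration): the Democratic response rate exceeds the Republican rate by 10 points in both, yet the more Republican precinct has the higher overall response rate, because baseline willingness to respond differs across precincts.

```python
# Ecological-fallacy illustration with invented precincts: a constant
# 10-point Democratic response-rate advantage within every precinct,
# yet overall response rates do NOT track partisanship across precincts.
precincts = [
    # (label, dem_share_of_voters, dem_response_rate, rep_response_rate)
    ("heavily Democratic", 0.80, 0.50, 0.40),
    ("heavily Republican", 0.30, 0.70, 0.60),
]
for label, dem_share, dem_rate, rep_rate in precincts:
    assert dem_rate > rep_rate   # partisan gap holds in every precinct
    overall = dem_share * dem_rate + (1 - dem_share) * rep_rate
    print(f"{label}: overall response rate {overall:.0%}")
    # heavily Democratic: 48%; heavily Republican: 63%
```

Within-precinct gaps and between-precinct baselines are separate quantities; aggregate response rates can't settle a question about individual-level behavior.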
*Wonk note: by "whites" and "blacks," I refer specifically to people categorized in the Census as "White alone" or as "Black or African American alone."
In this case, EDA argues that the 23-point difference in response rates between Democrats and Republicans actually doesn't matter much. Their analysis weights the results for both Prop 4 and Prop 8, on the assumption that the Democrats who did not respond voted for or against the propositions in the same proportions as those who did respond, and likewise for other party affiliations. EDA argues that their adjusted exit poll results for Prop 4 match the vote counts very closely, while the adjusted results for Prop 8 still differ from the vote counts by more than 11 points on the margin.
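The adjustment amounts to post-stratification on party: reweight each party's respondents so the party mix matches the actual electorate, under the assumption that a party's non-respondents voted like its respondents. A hedged sketch -- the party-composition figures are from the report, but the yes-shares are invented, and I lump everyone else into an "other" bucket as the residual:

```python
# Post-stratification on party. Party composition figures are from the
# EDA report; yes_share values are INVENTED for illustration, and the
# "other" shares are simply the residuals (my assumption).
yes_share = {"dem": 0.20, "rep": 0.80, "other": 0.45}          # hypothetical
respondent_mix = {"dem": 0.660, "rep": 0.102, "other": 0.238}  # from report
voter_mix = {"dem": 0.615, "rep": 0.156, "other": 0.229}       # from report

raw = sum(respondent_mix[p] * yes_share[p] for p in yes_share)
adjusted = sum(voter_mix[p] * yes_share[p] for p in yes_share)
print(f"Raw poll Yes share:      {raw:.1%}")       # ~32.1%
print(f"Adjusted Yes share:      {adjusted:.1%}")  # ~35.1%
```

Note what the reweighting cannot do: it only swaps one party mix for another. If respondents within a party voted differently from non-respondents in that party, the adjusted number is still biased.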
Without wading too far into the muck, there are three big problems here. One is conceptual: there is no reason to assume that (say) Democrats who didn't respond to the survey voted the same way as those who did respond. At this point in the analysis, EDA has already documented and controlled for biases in gender, age, race/ethnicity, and partisanship; as far as I can tell from the report, none of these controls compensates for any of the other known biases. To assume that controlling for partisanship has finally eradicated sampling bias might be described as a triumph of hope over experience.
Second, EDA assumes that the Prop 4 results should be fairly similar to the Prop 8 results -- which, in a rough sense, is true. (Statewide, Prop 8 passed by about 4 1/2 points; Prop 4 failed by about 4 points. Support for the two tended to rise and fall together.) However, both precinct-level results within L.A. County and county-by-county results across the state indicate that Prop 8 was more polarizing than Prop 4 -- that is, its vote shares tended to vary more from place to place. This result is also consistent with a Survey USA pre-election poll, which showed Democrats and Republicans disagreeing more sharply about Prop 8 than about Prop 4. This pattern holds in the official returns from the ten polling places in EDA's study, but not in the exit poll results (either before or after adjustment). That divergence suggests that EDA's adjustment didn't work. Also, compared to the official returns, the adjusted results generally show too little support for Prop 4 where it was unpopular and too much support where it was popular. That pattern offers further reason to doubt the adjustment.
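"More polarizing" here can be operationalized as greater dispersion of precinct-level vote shares. A sketch with invented precinct shares (illustration only, not EDA's data):

```python
# Dispersion of precinct-level Yes shares as a polarization measure.
# Both lists are INVENTED for illustration; they merely mimic the
# qualitative pattern described (Prop 8 shares varying more).
from statistics import pstdev

prop8_yes = [0.30, 0.42, 0.55, 0.68, 0.75]   # hypothetical precinct shares
prop4_yes = [0.38, 0.44, 0.52, 0.58, 0.63]   # hypothetical precinct shares

print(f"Prop 8 spread: {pstdev(prop8_yes):.3f}")
print(f"Prop 4 spread: {pstdev(prop4_yes):.3f}")
```

The check is then simple: in the official returns the Prop 8 spread exceeds the Prop 4 spread, and a faithful exit poll (adjusted or not) should reproduce that ordering.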
Finally, California actually hand-counted paper ballots or voter-verifiable paper records in over 1% of its precincts, in its Post-Election Manual Tally (PEMT). The PEMT in Los Angeles County tallied votes cast on InkaVote ballots in over 50 election day precincts (as well as the Vote By Mail results from over 50 precincts). It found a net change of two votes. Of course, it is possible that not only were voting machines "hacked" in L.A. County and perhaps throughout the state, but that the physical ballots and/or the PEMT were tampered with as well. But at some point one really ought to consider that maybe the exit poll was simply way off -- especially given all the evidence that the exit poll was way off.
I'm pretty much done with "election verification exit polling" of this sort. Exit polls could be useful to gather data about people's voting experiences -- for instance, whether voters received clear instructions, and whether they had trouble with the voting equipment. Overall, I think the country needs pollworkers, election day observers, manual tally/audit observers, and reform advocates a lot more than it needs people handing out questionnaires. All that energy seems worthy of a better cause.