
A report was released by the Election Defense Alliance yesterday that everyone should read. Here's the press release:

Major Miscount of Vote in 2006 Election:

Reported Results Skewed Toward Republicans by 4 percent, 3 million votes
Election Defense Alliance Calls for Investigation

BOSTON, MA - November 16, 2006
CONTACT: Jonathan Simon 617.538.6012

Election Defense Alliance, a national election integrity organization, issued an urgent call for further investigation into the 2006 election results and a moratorium on deployment of all electronic election equipment, after analysis of national exit polling data indicated a major undercount of Democratic votes and an overcount of Republican votes in U.S. House and Senate races across the country. "These findings raise urgent questions about the electoral machinery and vote counting systems used in the United States," according to Sally Castleman, National Chair of EDA. This is a national indictment of the vote counting process in the United States!

As in 2004, the exit polling data and the reported election results don't add up. "But this time there is an objective yardstick in the methodology which establishes the validity of the Exit Poll and challenges the accuracy of the election returns," said Jonathan Simon, co-founder of Election Defense Alliance. The Exit Poll findings are detailed in a paper published today on the EDA website.

The 2006 Edison-Mitofsky Exit Poll was commissioned by a consortium of major news organizations. Its conclusions were based on the responses of a very large sample of more than 10,000 voters nationwide, and posted at 7:07 p.m. Election Night on the CNN website. That Exit Poll showed Democratic House candidates had out-polled Republicans by 55.0 percent to 43.5 percent - an 11.5 percent margin - in the total vote for the U.S. House, sometimes referred to as the "generic" vote.

By contrast, the election results showed Democratic House candidates won 52.7 percent of the vote to 45.1 percent for Republican candidates, producing a 7.6 percent margin in the total vote for the U.S. House -- 3.9 percent less than the Edison-Mitofsky poll. This discrepancy, far beyond the poll's +/- 1 percent margin of error, has less than a one in 10,000 likelihood of occurring by chance.
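Taken at face value, the one-in-10,000 claim can be sanity-checked with a few lines of arithmetic. The sketch below assumes (my assumption, not stated in the release) that the ±1 percent figure is a 95 percent margin of error on the Democratic-Republican margin itself:

```python
import math

# Rough check of the press release's probability claim, assuming the
# stated +/-1% figure is a 95% margin of error on the House margin.
moe = 1.0                  # reported 95% margin of error, in points
se = moe / 1.96            # implied standard error
discrepancy = 11.5 - 7.6   # exit-poll margin minus official margin, in points

z = discrepancy / se       # number of standard errors
# two-tailed probability of a discrepancy at least this large by chance
p = math.erfc(z / math.sqrt(2))
print(round(z, 1), p < 1e-4)
```

Under these assumptions the discrepancy sits more than seven standard errors out, comfortably below the one-in-10,000 threshold; of course, the whole calculation inherits whatever bias the poll itself has.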

The full report is here: Landslide Denied: Exit Polls vs. Vote Count 2006

I don't know enough about statistics or this organization to know just how credible this is, and perhaps some of you will be able to shed light on it. If it's credible, what can (or should) people do?

Originally posted to iconoclastic cat on Fri Nov 17, 2006 at 11:16 AM PST.


  •  however, remember (3+ / 0-)
    Recommended by:
    Febble, HudsonValleyMark, slksfca

    that every national exit poll done since 1988 has had a dem bias.

    a 3.9% difference doesn't look unreasonable based on historical biases toward Democrats in exit polls.

    Election 2006: When the output made contact with the atmospheric circulation facilitator

    by FleetAdmiralJ on Fri Nov 17, 2006 at 11:20:49 AM PST

    •  I didn't know that! (4+ / 0-)

      Thanks. I think I have some reading to do.

    •  to move on to the next argument (1+ / 0-)
      Recommended by:
      Febble

      the paper argues that we know the weighted House exit poll is wrong because 49% of voters report voting for Bush and only 43% for Kerry, and that gap is Too Wide.

      But the argument is wrong because exit polls, as far as I can tell, always overstate the previous winner's vote share (see Table 3, numbered page 11). I haven't run all the numbers for midterm exit polls yet, and I expect those gaps to be narrower, but in 1998 Clinton's retrospective margin over Dole was about 3 points bigger than his margin in the official returns.

      And before someone tells me that just proves that Dole stole votes, I hope they will look at Table 3 for a while. Conceivably it might help.

      •  just to put those 1998 numbers out there (1+ / 0-)
        Recommended by:
        Febble

        The raw 1998 numbers are almost embarrassing, since as usual the Dem share is overstated -- Clinton retrospectively beats Dole by over 14 points. But applying the weights and filtering out the folks who say they didn't vote in 1996, we have Clinton 50.76%, Dole 39.25%, for an 11.5% difference. The vote shares in official returns, per Leip, were 49.2% and 40.7%, an 8.5% difference. So, a 3-point gap.

    •  ugh (reading through the report) (3+ / 0-)
      Recommended by:
      Febble, RyoCokey, HudsonValleyMark

      they're trying to disprove that the above is a factor by looking at the question of "who did you vote for in 2004" and then taking the margin this time and the actual margin in 2004 and trying to make conclusions from that.

      This is a terrible way to make a point, for several reasons:

      1. people may lie, forget, etc., especially about an election that isn't even taking place this time
      2. no conclusion about votes in 2006 can be drawn from votes cast in 2004.
      3. no conclusion about who voted in 2006 can be drawn from who voted in 2004, etc.

      I don't even see what they're trying to prove.  The fact that the 7:07 exit polls show that there is a national 2% margin between those who said they voted for Bush or Kerry in 2004 means absolutely nothing, and the fact that the final adjusted margin was 49% Bush 43% Kerry means nothing.  There is no expectation that exactly the same people who voted in 2004 would vote in 2006 and in the same proportions.

      Election 2006: When the output made contact with the atmospheric circulation facilitator

      by FleetAdmiralJ on Fri Nov 17, 2006 at 11:31:34 AM PST

      [ Parent ]

    •  Circular Logic? (0+ / 0-)

      A D bias as measured against what, the official elections? Doesn't that just confirm the conclusion that the official elections are biased against Ds? At the very least, measuring the exit polls against the very elections those polls are meant to measure is purely circular. Is there some other way you're measuring the exit poll bias objectively?

      "When the going gets weird, the weird turn pro." - HST

      by DocGonzo on Fri Nov 17, 2006 at 12:18:47 PM PST

      [ Parent ]

      •  Very good question (2+ / 0-)
        Recommended by:
        alisonk, DocGonzo

        I don't fully trust these claims about Democratic bias in exit polls.  And if it's true that the polls have shown Democrats outperforming the actual vote since 1988, I'd like to know how that proves anything -- it might be that the Republicans have been cheating successfully since 1988.

        •  Well, that's certainly one (0+ / 0-)

          hypothesis.  But it's not supported by the data, and the alternative is well supported.

          •  Okay, but what is the data? (0+ / 0-)

            I'm still not hearing anything except assertions.  

            How does the data support the hypothesis that the difference between the vote totals and the poll projections is caused by bad polls and not by election fraud?

            •  OK (0+ / 0-)

              Edison-Mitofsky issued an evaluation of the 2004 exit polls in which they looked for correlations between the magnitude of the precinct level discrepancy and factors likely to be associated with fraud and factors likely to be associated with a biased sample.  They found no correlation between voting method and discrepancy, but they did find correlations between methodological factors and bias - notably, where the interviewing rate was low, the discrepancy was greater than where it was high.  Low interviewing rates will tend to allow more opportunities for unwilling voters to escape selection.

              I myself was then contracted (as a result of a series of DKos diaries - check my diaries) to reanalyse the data, because I was critical of the measure used to quantify precinct level discrepancy.  I ran multiple regression analyses and found that the net discrepancy was well accounted for by factors likely to be associated with departures from the random sampling protocol.  More importantly, I found absolutely no correlation between the magnitude of the discrepancy and change in Bush's vote share.  This analysis was presented by Mitofsky at an American Statistical Association meeting in Philadelphia last year, and was written up by Mark Lindeman (with whom I worked on the quantitative measure) here:

              http://inside.bard.edu/...

              In addition to these findings, which strongly support the hypothesis of an underlying differential in propensity to participate in polls between supporters of the two main parties, and if anything contra-indicate the hypothesis that Bush's change in vote share was due to the same factors that contributed to the discrepancy (e.g. fraud), there is evidence from a recent poll:

              http://www.pollster.com/...

              that indicates that Republicans are significantly more likely than Democrats to indicate willingness to participate in an exit poll.

              There is also evidence from previous elections.  For the last five presidential elections there has been a significant discrepancy in the direction of "redshift" (count "redder" than the poll), and Michael Butterworth of CBS News presented an analysis of data from the 1996 election at this year's AAPOR conference in Montreal that found very similar factors associated with redshift in that election to those found in the 2004 data.  I do not know of an online version of this paper, but you could contact him.

              At AAPOR 2005, a paper was presented that showed the results of an experiment into factors designed to improve response rates.  

              http://www.mysterypollster.com/...

              (scroll down; the first bit's about the work that resulted in my work for Mitofsky).

              Conditions were randomly allocated, and one condition did result in greater response rates.  Unfortunately it also resulted in greater pro-Democratic bias.  This is an important finding because, as the variables were experimentally manipulated, by definition they would have been orthogonal to fraud.  Therefore, if one condition was associated with greater redshift than the other, the causal factor must have been methodological.  There is therefore direct experimental evidence for an underlying differential willingness to participate that interacts with polling methodology.

              Cheers

              Lizzie

              •  Wow -- that'll learn me... (0+ / 0-)

                I asked for data (or annotated analysis, even better) and you dropped the kitchen sink on me.  I didn't realize you were involved in this issue; so many posters make bald assertions -- sometimes original, sometimes just parroting others -- that I reserve my trust for those who present evidence.

                It's likely that no one else will read these posts, but just in case, I'll point out an error in your post: in the paragraph that "indicates that Republicans are significantly more likely than Democrats to indicate willingness to participate in an exit poll," you have reversed the comparison.

                However, thank you for a great answer.

                •  Just lost my response (0+ / 0-)

                  Yes, of course.  I've typed that stuff so often my fingers took off on their own, and reversed the meaning!

                  Thanks for responding.  My name is Elizabeth Liddle, in case you want to google.

                  Cheers

                  Lizzie

                  •  Thank you for ALL your work (1+ / 0-)
                    Recommended by:
                    Febble

                    It is critically important to have a large body of committed, partisan Democrats / progressives / liberals active in our democracy.

                    It is just as important that those with true expertise bring hard facts to the debates, so that we avoid the blind alleys and wild goose chases.

                    Thank you for bringing your expertise to bear.

                •  Not a great answer (0+ / 0-)

                  Don't back down so fast - you had the right instinct.

                  She resorted to discussing the polling methodology from the 2004 election, not the 2006 one.

                  I'm disappointed at how easily people are cowed by the team of FleetAdmiralJ, HudsonValleyMark, and Febble, who post together, I've noticed.

                  None of them HAVE provided data to rebut the CURRENT polling issue. Old arguments from 2004 don't wash here. New election, new issue.

                  •  how so? (0+ / 0-)

                    Are you saying that even though exit poll reports of previous votes have been demonstrated to be unreliable, we should assume that they were accurate in 2006 because no one has actually proven otherwise yet?

                    Are you actually saying anything?

                    •  We shouldn't assume ANYTHING (0+ / 0-)

                      We should not automatically assume they were conducted à la the 2004 methods, so arguments about 2006 results based on 2004 methodology are less than useless.

                      I'm interested in facts, not theories. If you have any facts to share re 2006 and the new study, let 'er rip. But so far all you have is theories.

                      I find it fascinating how people are so overimpressed by people with credentials, as if being trained in a profession somehow makes one incapable of bias, mistake, or even lying. Anyone can lie, err, and make prejudicial comments, no matter their degree or profession.

                  •  Um, guy (0+ / 0-)

                    the question was a general one, and I discussed methodology from several elections.  I do not have data on the methodology in 2006. Are you arguing that data from previous elections is irrelevant?

                    And yes, HudsonValleyMark and I often post together because we have worked together.  I don't know FleetAdmiralJ personally, though I enjoy his comments.  It would seem we are all interested in statistics and public opinion research.

                    •  Alternative hypotheses... (0+ / 0-)

                      ... are also possible for your continued association. Just sayin'.

                      •  Well, perhaps you'd like to suggest one (0+ / 0-)

                        instead of the innuendo.

                        Put up or shut up.

                        •  What could I say that you wouldn't just deny? (0+ / 0-)
                          •  Well, you could say the truth (0+ / 0-)

                            I wouldn't deny that.

                            And the truth is that I am a British social scientist with a keen interest in Democratic US politics, who was devastated by Kerry's loss, and spent a substantial amount of time trying to figure out whether the election was fair (check my diaries).

                            However, being a trained data analyst, I got involved in the exit poll debate, and, eventually, in the data itself (check my diaries).  I also made some good online friends, including HudsonValleyMark, with whom I collaborated on some original research.  One result of that work was presented (an invited paper) at a meeting of the American Statistical Association in Minneapolis in 2005.  We were both also, together with Steve Freeman and Ron Baiman, invited to present at the AAPOR meeting in Montreal this year (google).  We tend to respond to the same kinds of diaries, being interested in the same issues, and indeed, having acquired some expertise and knowledge of those areas.

                            I cannot speak for FleetAdmiralJ, whom I only know as a poster on Daily Kos.

                            Well, you can check out some of that.  When you've done so, please come back and present your allegations.

                            Elizabeth Liddle

                          •  Oh, and as you will also know (0+ / 0-)

                            if you've actually looked into this, that I was contracted by Warren Mitofsky to re-analyse the exit poll data.  This contract arose as a result of my criticisms of the original analysis, again, on Daily Kos, and actually frontpaged by DemfromCT.

      •  No, not circular (0+ / 0-)

        If you read my post downthread you will see the kinds of evidence that supports the hypothesis that Democrats tend to be more willing to participate in exit polls, including an actual poll on the issue a month before the last election.

        Another piece of evidence was an actual controlled experiment with randomly assigned variables.  Other evidence is correlational, and so needs to be treated with caution, but nonetheless powerful.  For example, precinct level discrepancy tends to be correlated with the interviewing rate - where the interviewing interval is large, discrepancy tends to be greater, suggesting that where the opportunity for unwilling voters to avoid selection is greater, so is the bias in the poll. Other factors unlikely to be correlated with fraud are nonetheless correlated with discrepancy.  And, interestingly, in 2004, discrepancy was completely uncorrelated with benefit to Bush.  
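The correlational logic described above can be illustrated on made-up data (every number below is synthetic, not the actual E-M precinct data): generate precinct discrepancies driven by a methodological factor, then check that discrepancy correlates with that factor but not with the candidate's vote-share change.

```python
import math, random

random.seed(0)
n = 1000  # synthetic "precincts" -- illustrative only

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs)
                           * sum((y - my) ** 2 for y in ys))

# Methodological factor: interviewing interval (larger = fewer voters asked)
interval = [random.uniform(1, 10) for _ in range(n)]
# Discrepancy driven by the interval plus noise -- by construction, no fraud
discrepancy = [0.5 * i + random.gauss(0, 1) for i in interval]
# Vote-share change, generated independently of the discrepancy
vote_change = [random.gauss(0, 2) for _ in range(n)]

print(round(corr(interval, discrepancy), 2))    # strong: methodology drives it
print(round(corr(vote_change, discrepancy), 2)) # near zero: no "fraud" signal
```

This only demonstrates the shape of the argument, not its conclusion: the real analysis stands or falls on the actual precinct data.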

  •  Well, I have considerable admiration (3+ / 0-)
    Recommended by:
    alisonk, Blue Shark, neroden

    for both the authors, but their conclusion is based on a key assumption that does not appear to stand up to scrutiny.

    They claim that it is implausible that, when the exit poll was "adjusted" to match reported results, Bush's retrospective vote margin was 5 points over his official 2004 margin of about 2.5 points, and from that discrepancy infer that

    ...the degree of statistical distortion now required to force exit polls to match the official tally is the clearest possible warning that the ever-growing catalog of reported vulnerabilities in America’s electronic vote counting systems are not only possible to exploit, they are actually being exploited.

    So the first question is: is there any precedent for a winning candidate's margin being inflated in the following exit poll?  And, according to a paper presented by Mark Lindeman at the Annual Meeting of the Midwest Political Science Association in Chicago, 2006, the answer seems to be yes, every time, even when the previous winner isn't running (Clinton, Reagan), is losing (GHW Bush, Carter), or even is no longer the incumbent (Nixon).  I don't have data for midterms, but I believe Mark is working on them at this very moment.

    I'd also take issue with another point in the paper: that the hypothesis "that Republicans had been more reluctant to respond and that therefore Democrats were 'oversampled'" was "never supported by evidence".  There is actually a wealth of evidence. Some of this was reported in the E-M evaluation document; I myself have been privileged to have had the opportunity to confirm that evidence; it has been found in at least two actual experiments (with randomly allocated variables) in previous elections; it was found by Michael Butterworth in an analysis of the 1996 election; "redshift" has been found in presidential elections at least as far back as 1988, was substantial in 1992, and therefore can only with difficulty be ascribed to systematic election fraud; non-response bias is actually observed in the age, race and sex data on non-respondents; pro-Kerry bias was suspected in 2004 before a single vote result was in, on the basis of deviation from pre-election polls; and a poll on exit polls about a month before the election indicated that Republicans would be significantly less willing than Democrats to take part.

    So given the evidence for differential participation rates in exit polls, and given the evidence that the margin of even unpopular winners tends to be retrospectively inflated, it seems somewhat hyperbolic to be headlining the inference from these particular data that a "landslide" was "denied".

    There was clearly a great deal of corruption in 2006.   The robocalls and the push polls were outrageous, and may have cost the Dems a Senate seat (Tennessee) and House seats (e.g., Jennings).  The undervotes in Florida 13 were clearly miscounts of some sort, whether malicious or simply due to crappy programming.  If it happened there, it could easily have happened elsewhere too.  E-voting, as it stands right now, sucks.  But that is precisely why I am concerned about inferences made from polls that are unlikely to tell us much about where to look, even if unbiased, AND are unlikely to be unbiased.

    I fear this is misdirection, however well-intentioned.

  •  Election Integrity (3+ / 0-)

    Thanks for posting this diary.

    Added the "Election Integrity" tag.

    Folks interested in this topic are pooling information at
    http://groups.yahoo.com/...
    and check dkosopedia on Voting_Rights for further resources.

    Please think about volunteering to be a poll worker in your local precinct
    Serving_as_an_election_official

    Solar is civil defense. Video of my small scale solar experiments at http://solarray.blogspot.com/2006/03/solar-video.html

    by gmoke on Fri Nov 17, 2006 at 11:46:37 AM PST

  •  We should be concerned but (0+ / 0-)

    There could be election fraud. But the discrepancy between the actual vote count and exit polling doesn't necessarily mean election fraud. It could indicate that more Democrats are open to being interviewed than Republicans.

    But I do admit something weird is going on. Prior to 2000, exit polling tended to be quite accurate. Since 2000, exit polls have generally overestimated the Democratic vote. It's hard to explain why Republicans would suddenly be less willing to take a survey (or lie on the surveys).

    So the question is, is the exit polling actually more accurate than the actual vote count?

    •  Thoughtful post, but you do perpetrate a myth (1+ / 0-)
      Recommended by:
      RyoCokey

      Exit polls weren't particularly accurate prior to 2000. In fact, the actual precinct-level data was more accurate in 2000 than in any of the past five presidential elections.  It does appear to fluctuate, and one theory is that the discrepancy tends to be greater where there is more interest in the election.  But it's difficult to draw general conclusions, because by definition, elections are rare events, and each one is different from the previous one. There was a big discrepancy in 1992, the year Perot was a major factor.

  •  Although there is a large sample size (0+ / 0-)

    Does this mean they polled 10,000 voters across the country on the generic ballot?  Is this 10,000 supposed to represent approximately 100 million people who voted this past election (this is my guess)?

    Let's say that a congressional district is made up of 500,000 people, of which around 450,000 can vote (just to throw out some numbers).  If the district had a voter turnout of about 50-60% this election, that would give us 225,000-270,000 voters per district.  When we saw all those polls during the campaign, a poll with 1,000 people would be considered very good, with an uncertainty of 2-3% in the numbers.

    Let's say 250,000 people voted in this imaginary district.  If polling for 1,000 people was performed, then the actual number of voters is 250 times the sample size of the poll.

    Now to the national generic Congressional ballot, if we use the same factor of 250 that we find in the imaginary district, then about 400,000 people need to be exit polled to estimate accurately the 100 million people who voted across the country.

    Is there something wrong with my math?  Is there some other model that allows 10,000 exit polled voters to even begin to estimate all the voters in the country?

    With that being said, I'm not sure if this is cause for a declaration of voting fraud.  It is better to use these results as evidence for the need to have a paper trail and to secure electronic voting machines if they are to be used in the future.

    Science without religion is lame, religion without science is blind -- Albert Einstein

    by BasharH on Fri Nov 17, 2006 at 12:52:30 PM PST

    •  Easier Answer (1+ / 0-)
      Recommended by:
      Febble

      There's an even easier solution to the issue, and that is to use actual results to verify exit polling and machine counting and vice versa.

      Where you have auditable paper ballots -- like in precincts that use optical scanning equipment -- you simply take a random sample, say 5%, of the voted ballots and you compare those results against the machine tapes and the exit polls for those districts.

      There were enough of those precincts around the country to give a very large -- and accurate -- picture of how close (or far) the exit polling was to actual results.

      This should be an easy, and incontrovertible, analytical tool to see what actually happened in the 2006 election.

      If unverifiable touch-screen voting precincts ended up dramatically different in their actual results than the exit polling for those races, but paper ballot audit precincts turned out to be very close to exit poll results, then we know for sure there was tampering somewhere with the TS machines.

      If, on the other hand, the paper ballot counts had essentially the same error rate as the TS machines vs. exit polls, then we also have the right answer.

      This is not rocket science; but putting out press releases based on paranoia and unprovable assumptions is of no help in getting our country on the right track to using voting systems that have a high degree of integrity.
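The audit procedure described above can be sketched in a few lines (precinct names, tallies, and the hand-count function below are all invented for illustration; a real audit hand-counts physical ballots):

```python
import random

# Invented per-precinct machine tallies: (dem_votes, rep_votes)
machine = {f"precinct_{i}": (500 + i, 480 - i) for i in range(20)}

def hand_count(precinct):
    """Stand-in for a hand count of the audited paper ballots.
    Here we pretend the paper matches the tape exactly, so the audit passes."""
    return machine[precinct]

random.seed(1)
# Draw a random ~10% sample of precincts to audit
sample = random.sample(sorted(machine), k=max(1, len(machine) // 10))

mismatches = [p for p in sample if hand_count(p) != machine[p]]
print("audit passed" if not mismatches else f"investigate: {mismatches}")
```

In practice the interesting design question is the sample size needed to detect an outcome-changing error with high probability, which is exactly the issue raised in the replies below about 5% versus larger samples.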

      •  Well, I agree about the press release (0+ / 0-)

        And your idea in principle is good, although 5% of each precinct would be far too few in most cases.  I'd recount all the polled precincts.

        But better still would be to recount a random sample of precincts throughout each state.  That's what the Holt Bill HR 550 is about, and it would be an excellent start.

        •  No Problem (1+ / 0-)
          Recommended by:
          Febble

          I was using the 5% figure as something that many statisticians would probably say is "statistically significant" but if 10% or 20% gave people more comfort that things were accurate, let's do the largest number logical to assure people that the votes cast are the votes recorded.

          Also, the Holt bill is a good start for the future.  We're in agreement on that.

          I was just trying to put some real science behind what happened last week, eliminating the paranoia.

          Just for the record, I'm an election inspector in Santa Barbara county where we use Optical Scan equipment.  Our Board of Elections does a paper ballot audit of 10% of the scanned ballots a couple of weeks after every election to make sure the AccuVote scanning machines are correctly recording the votes and that when they are transmitted by modem on election night by poll workers, the transmitted results are accurate.

          We have confidence here that we get the correct result, but I haven't seen any data regarding exit polls vs. our reported results.  Since I could be pretty sure that the results we get here are reliable, I would love to see how exit polling matched up.

    •  Well, there's something a little wrong (0+ / 0-)

      with your math.  In general, the statistical power you have depends on the size of your sample, not the proportion of the population it represents.  A sample size of 1000 will be as good for a population of 10,000 as for 100,000.

      The problem is the diversity of the population, and getting a truly representative sample that is nonetheless random.  Polling is tricky.  

    •  Something is wrong with your math (1+ / 0-)
      Recommended by:
      Febble

      The margin of error for a survey is determined entirely by the size of the sample, not the size of the population being sampled from (there's actually an exception involving what's called the "finite population correction," but it only matters when the sample is a substantial fraction of the population).  Roughly, the MOE decreases according to the square root of the sample size; poll 4 times as many people and you cut your MOE in half.

      I know this is pretty counterintuitive; in fact it's one of the hardest statistical concepts to learn.  But it can be easily verified by doing experiments with random numbers if you have access to a programming language or a stats package.
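Following that suggestion, here is one such experiment with random numbers, a minimal sketch assuming simple random sampling from a 50/50 population:

```python
import random

def simulated_moe(pop_size, sample_size, trials=2000, true_p=0.5, seed=42):
    """Estimate the 95% margin of error empirically by repeated sampling."""
    rng = random.Random(seed)
    ones = int(pop_size * true_p)
    population = [1] * ones + [0] * (pop_size - ones)
    estimates = sorted(sum(rng.sample(population, sample_size)) / sample_size
                       for _ in range(trials))
    # Half-width of the central 95% of the sampling distribution
    return (estimates[int(0.975 * trials)] - estimates[int(0.025 * trials)]) / 2

# MOE depends on the sample size, barely at all on the population size:
print(simulated_moe(10_000, 1000))     # ~0.03
print(simulated_moe(1_000_000, 1000))  # ~0.03 as well
print(simulated_moe(1_000_000, 4000))  # ~0.015: 4x the sample, half the MOE
```

The first two numbers come out nearly identical even though the populations differ by a factor of 100, while quadrupling the sample roughly halves the margin of error, matching the square-root rule described above.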
