Like many of the rest of you, I really enjoyed watching the right's meltdown this week. While they were busy unskewing polls to avoid confronting reality, our side was busy winning the election. But while this election was a broad triumph for science over ignorance, that doesn't mean that we shouldn't look at polling results critically.

The polls right before the election showed amazing agreement. The last 30 polls released before the election all fit into a narrow 5% range, from Obama +4 to Romney +1. In fact, the only poll to show Obama +4 came from Democracy Corps, a Democratic pollster, and the only polls to show Romney leading were Gallup and Rasmussen, two pollsters with some of the strongest pro-Romney house effects. If we set those pollsters aside, the remaining polls fit into an astoundingly small 3% window.

This is fantastic considering some of the obstacles pollsters have to deal with:

**Non-response bias:** Up to 9 out of 10 people refuse to answer polls

**Sampling bias:** Are some types of voters more likely to answer polls? Should we include cellphones or not?

**Likely voter screen:** The mother of all polling dilemmas: how do we determine whether someone will actually vote?

Yet, with all of these challenges, nearly all of our pollsters projected results within a tiny 3% window. These results seem to be too good to be true.

In fact, the results may be too good to be true. I'll let you decide.

We tend to judge polls unfairly. If the last poll that a polling organization conducts before the election is close to the actual result, we consider them a good pollster. If they miss by more than a few points, we call them bad.

But the truth is, if we are looking at only one poll, the results can vary wildly and it's not the pollster's fault. Consider a poll that shows a 50-50 Obama-Romney tie with a 3% margin of error. This means that 95% of the time, the result will be between Obama 53% Romney 47% (Obama +6) and Romney 53% Obama 47% (Romney +6) for a whopping 12 point swing in the winning margin. And that's only if the pollster is lucky enough not to miss outside the margin of error.
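The arithmetic above can be checked with a few lines of Python, using the hypothetical tied poll and 3% margin of error from the example:

```python
# A 3% margin of error applies to each candidate's share of the vote,
# so the gap between the two candidates can swing by twice that amount.
moe = 3.0
obama, romney = 50.0, 50.0  # the hypothetical tied poll

best_case = (obama + moe) - (romney - moe)   # Obama's margin if he outperforms
worst_case = (obama - moe) - (romney + moe)  # Obama's margin if Romney outperforms
swing = best_case - worst_case               # width of the 95% window

print(best_case, worst_case, swing)  # 6.0 -6.0 12.0
```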

Polls *must* miss outside the margin of error about 1 out of every 20 times. This is not to say that a pollster who conducts 20 polls will miss on exactly one of them; a pollster has about a 64% chance of missing at least once in 20 polls. However, if we conduct 1,000 polls, we should miss on average about 50 of them. The probability of missing on at least 40 polls is about 94%, and the probability of missing on at least 30 polls is about 99.85%.
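These probabilities can be reproduced with a short sketch. The only assumption is the textbook one: each poll independently lands outside its 95% margin of error with probability 0.05.

```python
from math import comb

p = 0.05  # chance a single poll lands outside its 95% margin of error

def prob_at_least(n, k):
    """Exact P(X >= k) for X ~ Binomial(n, p): at least k misses in n polls."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

at_least_one_in_20 = 1 - (1 - p) ** 20    # ~0.64
at_least_40_in_1000 = prob_at_least(1000, 40)  # ~0.94
at_least_30_in_1000 = prob_at_least(1000, 30)  # ~0.9985

print(round(at_least_one_in_20, 2), round(at_least_40_in_1000, 2))
```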

Let's limit our discussion to polls in which all of the interviews took place in November. October 31st was the day that President Obama surveyed the storm damage with Governor Christie, so if there was a storm bounce, these polls should capture all of it. There were 18 such polls.

For all of our polls, we're going to ignore the small number of undecided and "other" likely voters and focus on the margin of victory between Romney and Obama.

Of these 18 polls, not a single one missed the correct result by more than its margin of error. This is a very good thing. Most of our polls have about a 3% margin of error. They're still counting votes, but right now, the election results are: Obama 50.6, Romney 47.9 (Obama +2.7%). A poll with a perfect likely voter model just missing the margin of error in Romney's favor would show Romney +4. On the other side, a poll missing in Obama's favor would show Obama +9. Can you imagine if a pollster published these results the day before the election? They would be laughed out of Pollster Town. With 18 polls, assuming all of them had perfect models, there was a 60% chance that at least one of them would miss. It's slightly lucky that nobody missed, but nothing overly unusual.
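A rough sketch of the boundaries and the 60% figure, using the vote totals above. Note that a miss just outside a 3% margin of error shifts the reported margin by roughly twice that, 6 points, so the exact boundary values here are slightly smaller than the rounded figures in the text:

```python
actual_margin = 50.6 - 47.9  # final result: Obama +2.7

# A miss just beyond a 3% margin of error shifts the margin by ~2 * 3 points.
romney_side = actual_margin - 6  # ~Romney +3.3 (rounded to +4 in the text)
obama_side = actual_margin + 6   # ~Obama +8.7 (rounded to +9)

# With 18 polls, each missing independently with probability 0.05:
p_at_least_one_miss = 1 - 0.95 ** 18  # ~0.60

print(round(romney_side, 1), round(obama_side, 1), round(p_at_least_one_miss, 2))
```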

Let's go a step further. A poll will miss by more than half of its margin of error about 32% of the time. For a poll with a 3% margin of error, staying within half the margin of error means each candidate's share can move by up to 1.5%. This narrower mini margin of error allows a result between Obama 49.1, Romney 49.4 (Romney +0.3%) and Obama 52.1, Romney 46.4 (Obama +5.7%). Of the 18 polls in our list, only two, Rasmussen and Gallup, missed by more than half of their margin of error. A collective result this good or better only happens about 4% of the time.
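Both numbers fall out of the same independence assumption. A 95% margin of error spans about 1.96 standard errors, so half the margin of error is about 0.98 standard errors, and the rest is a binomial tail sum:

```python
from math import comb, erf, sqrt

def norm_cdf(x):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

# Half of a 95% margin of error is ~0.98 standard errors from the truth.
p_big_miss = 2 * (1 - norm_cdf(0.98))  # ~0.33: one poll misses by more than half its MoE

# Probability that at most 2 of 18 polls miss by more than half their MoE
p_two_or_fewer = sum(
    comb(18, i) * p_big_miss**i * (1 - p_big_miss) ** (18 - i) for i in range(3)
)

print(round(p_big_miss, 2), round(p_two_or_fewer, 2))  # 0.33 0.04
```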

Let me repeat that: **assuming every single one of our pollsters has a perfect likely voter screen and has overcome a crippling non-response bias, results with this little statistical noise should only occur about 4% of the time.**

But the models were not perfect. The mean margin in our 18 polls is Obama +1%, an average error of 1.7%. From our list of polls, not a single poll missed this average by more than half of its margin of error. This is likely to occur only about 0.1% of the time. Marvel at the narrow polling gap while you can, folks. Results like this should only occur once every 4000 years in presidential elections!
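The 0.1% figure is the same per-poll probability applied to all 18 polls at once. This sketch treats each poll's deviation from the group average as an independent half-margin-of-error event, which is the assumption the text is making:

```python
# Chance a single poll misses by more than half its margin of error (~33%),
# so the chance that all 18 polls stay within half their MoE of the average:
p_big_miss = 0.327
p_all_within = (1 - p_big_miss) ** 18

print(round(p_all_within, 4))  # ~0.0008, i.e. roughly 0.1%
```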

Let's now go back in time. There are 86 presidential polls listed on Real Clear Politics dating back to the beginning of August. If we compare the winning margin in each of these polls to the RCP average on the day before the poll was released, not one poll missed the RCP average by more than its margin of error. Over 86 polls we should expect about four misses, but we had zero. There is just over a 1% chance of having no polling misses. How fortunate for our pollsters!
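The expected-miss count and the 1% streak probability follow directly from the 5% per-poll miss rate:

```python
n = 86        # polls on Real Clear Politics since the beginning of August
p_miss = 0.05  # chance a single poll misses beyond its margin of error

expected_misses = n * p_miss       # ~4.3 misses expected
p_zero_misses = (1 - p_miss) ** n  # chance that not one poll misses

print(round(expected_misses, 1), round(p_zero_misses, 3))  # 4.3 0.012
```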

We put a lot of stock in our public polls. They drive enthusiasm and donations in what has become a $6 billion election industry. While we may treat one poll with cynicism, we tend to accept the wisdom of polling averages without question. Maybe that should change.

There is a terrible lack of transparency in public polls. Most pollsters don't release their registered-voter (RV) results or their actual raw data. We have no idea what goes into the secret sauce of a pollster's likely voter screen. If a pollster wanted to put a thumb on the scale, they could produce just about any result they wanted. How do we know that they didn't do it here? How do we know that pollsters are releasing all of their polls and not holding back results that might appear embarrassing? Are pollsters cheating by looking at other results before releasing their own?

I'm not saying that all pollsters are bad. Some pollsters (PPP for example) take great pains to be transparent. We need to be a little bit more wary of polling averages. It's time to start asking polling companies some hard questions.