Cross posted at my personal blog.
I have always been interested in things involving probability (stochastic events), and my interest has been piqued by a couple of things:
- The recent market crash, and
- The current climate of fear surrounding rare but tragic events (e.g., the Virginia Tech campus shootings).
It turns out that we sometimes overreact to rare events, and at other times it is fatal to ignore their possibility.
More below the fold...
This post is inspired by this 3-quarks daily article.
I've been fascinated by the mathematics of the "rare event" and the consequences.
First, let's examine the "rare event" from a "fear and security" point of view (e.g., we fear airplane crashes, when auto crashes are far more likely to kill us):
A high school student was suspended for customizing a first-person shooter game with a map of his school. A contractor was fired from his government job for talking about a gun, and then visited by the FBI when he created a comic about the incident. A dean at Yale banned realistic stage weapons from the university theaters -- a policy that was reversed within a day. And some teachers terrorized a sixth-grade class by staging a fake gunman attack, without telling them that it was a drill.
These things all happened, even though shootings like this are incredibly rare; even though -- for all the press -- less than one percent of homicides and suicides of children ages 5 to 19 occur in schools. In fact, these overreactions occurred, not despite these facts, but because of them.
The Virginia Tech massacre is precisely the sort of event we humans tend to overreact to. Our brains aren't very good at probability and risk analysis, especially when it comes to rare occurrences. We tend to exaggerate spectacular, strange and rare events, and downplay ordinary, familiar and common ones. There's a lot of research in the psychological community about how the brain responds to risk -- some of it I have already written about -- but the gist is this: Our brains are much better at processing the simple risks we've had to deal with throughout most of our species' existence, and much poorer at evaluating the complex risks society forces us to face today.
Novelty plus dread equals overreaction.
[...]
Our greatest recent overreaction to a rare event was our response to the terrorist attacks of 9/11. I remember then-Attorney General John Ashcroft giving a speech in Minnesota -- where I live -- in 2003, and claiming that the fact there were no new terrorist attacks since 9/11 was proof that his policies were working. I thought: "There were no terrorist attacks in the two years preceding 9/11, and you didn't have any policies. What does that prove?" [...]
On the other hand, there are cases where NOT paying attention to the potentially catastrophic rare event can lead to, well, catastrophe:
Taleb continues his examination of Black Swans, the highly improbable and unpredictable events that have massive impact. He claims that those who are putting society at risk are "no true statisticians", merely people using statistics either without understanding them, or in a self-serving manner. "The current subprime crisis did wonders to help me drill my point about the limits of statistically driven claims," he says.
Taleb, looking at the cataclysmic situation facing financial institutions today, points out that "the banking system, betting against Black Swans, has lost over 1 Trillion dollars (so far), more than was ever made in the history of banking".
So, what is going on here?
Well, I can't do the Nassim Nicholas Taleb article justice in a paragraph or two; you'll have to read it for yourself (and it is well worth reading). Yes, he wrote the book Fooled by Randomness, which I highly recommend.
But here is my oversimplified summary of the article I am referring to:
- Statistical methods work fairly well in relatively simple situations (e.g., games of chance) or where there is a ton of data (e.g., mortality tables). On the other hand, it is very difficult to model complex situations (e.g., financial markets) accurately, and it is even more difficult to test one's model.
When one talks about a rare event (say, one that your model says will occur once per 1000 years), how in the world can you validate that claim, when doing so would require several thousand years of data?
- Even if one could model complex situations with a probability distribution, often the "tails" of these distributions (where the rare events occur) are NOT robust with respect to a small change in parameters.
- One must also consider the consequences of a rare event. Sometimes that 1 in 1000 event can completely wipe out a market and completely outweigh all previous gains.
- Beware of the "billions of monkeys with typewriters" phenomenon. Here is what I mean: you have, say, 500 economic experts, each of whom follows their own advice. By pure chance, ONE of them will have a lucky streak and end up with lots of money to show for their plan. Guess which one writes the book? The 499 who fared worse won't write books.
So, to judge whether that author is genuinely skilled, one should ask what would have happened had things gone just a bit differently. Would you still have been successful, or would you have been wiped out?
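To make the "monkeys with typewriters" point concrete, here is a minimal sketch in Python. The numbers (500 experts, 10 years of yearly up/down market calls, each call a 50/50 coin flip) are my own illustrative assumptions, not figures from the Taleb article:

```python
import random

# Illustrative assumptions: 500 experts each make 10 yearly up/down
# market calls, and every call is really just a coin flip.
N_EXPERTS = 500
N_YEARS = 10

# Chance a SINGLE expert gets every call right by pure luck:
p_single = 0.5 ** N_YEARS
print(f"P(a given expert is perfect)     = {p_single:.4f}")   # ~0.001

# Chance that AT LEAST ONE of the 500 has a perfect record:
p_at_least_one = 1 - (1 - p_single) ** N_EXPERTS
print(f"P(some expert looks like a guru) = {p_at_least_one:.4f}")  # ~0.39

# Monte Carlo sanity check of the same quantity.
random.seed(1)
trials = 5_000
hits = 0
for _ in range(trials):
    # Did any of the 500 coin-flipping experts call all 10 years correctly?
    if any(all(random.random() < 0.5 for _ in range(N_YEARS))
           for _ in range(N_EXPERTS)):
        hits += 1
print(f"simulated estimate               = {hits / trials:.3f}")
```

Even though any individual expert has only about a 1-in-1000 chance of a perfect record, there is roughly a 39% chance that SOMEONE in the pool looks like a guru; and that someone is the one who writes the book.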