Diary: Morning feature: The Monty Hall problem (with poll and statistics questions answered)

  •  Our brains (9+ / 0-)

    are pattern recognition machines, not probability engines. Intuition can easily be fooled.

    For example, Mary was a hippie who grew up in a commune, had numerous body piercings, and changed her hair color weekly. After twenty years, which do you think is more likely?

    1. Mary paints her toenails and works in a bank.
    2. Mary works in a bank.

    Surprisingly, many people pick choice 1, even though choice 2 is always at least as likely: choice 1 is just choice 2 with an extra condition attached. Mathematically, the probability of A is always greater than or equal to the probability of A and B, so the conjunction can never be the more probable option. This is the classic conjunction fallacy.
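
    A quick way to see the inequality is to simulate it.  The base rates below are invented purely for illustration; whatever numbers you plug in, the conjunction count can never exceed the single-event count:

    ```python
    import random

    random.seed(0)

    # Hypothetical base rates, invented for illustration only:
    # A = "Mary works in a bank", B = "Mary paints her toenails".
    P_BANK = 0.05
    P_TOENAILS = 0.30

    trials = 1_000_000
    bank = both = 0
    for _ in range(trials):
        works_in_bank = random.random() < P_BANK
        paints_toenails = random.random() < P_TOENAILS
        bank += works_in_bank                      # event A
        both += works_in_bank and paints_toenails  # event A and B

    print(f"P(bank)           ~ {bank / trials:.4f}")
    print(f"P(bank and nails) ~ {both / trials:.4f}")  # always <= the line above
    ```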

    If you don't know where you're going, any road will do.

    by exregis on Tue May 05, 2009 at 06:07:26 AM PDT

    •  Sort of.... (4+ / 0-)
      Recommended by:
      SoCalJayhawk, Dave in RI, plf515, Toon

      We're actually better at probabilistic reasoning than many mathematicians care to admit.  Puzzles like the Monty Hall problem posit something that usually doesn't exist in the real world: a finite, knowable sample set.  In the real world, the sample set may (or may not) be finite, but it's often far too large to be even computationally knowable.
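
      And when the sample set is finite and fully specified, as in Monty Hall, brute force settles the argument.  A minimal simulation sketch, assuming the standard rules (the host always opens a non-chosen door hiding a goat, then offers the switch):

      ```python
      import random

      def play(switch: bool) -> bool:
          """One round of Monty Hall; True means the player wins the car."""
          doors = [0, 1, 2]
          car = random.choice(doors)
          pick = random.choice(doors)
          # Host opens a door that is neither the player's pick nor the car.
          opened = random.choice([d for d in doors if d not in (pick, car)])
          if switch:
              # Switch to the one remaining closed door.
              pick = next(d for d in doors if d not in (pick, opened))
          return pick == car

      trials = 100_000
      stay = sum(play(switch=False) for _ in range(trials)) / trials
      swap = sum(play(switch=True) for _ in range(trials)) / trials
      print(f"stay:   {stay:.3f}")   # ~ 1/3
      print(f"switch: {swap:.3f}")   # ~ 2/3
      ```

      In the real world, though, we rarely get that kind of closed, enumerable game.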

      So we fall back on other tools which are less precise, but which can still be rigorous and very useful in a probabilistic way.  They don't guarantee success, but they "nudge the odds" toward success.  And we humans have been "nudging the odds" toward success well enough, often enough, to have survived and thrived very well as a species.

      A classic example is the set of studies showing that we tend to "overvalue" short-term risks and rewards and "undervalue" long-term ones.

      It's true that we weigh short-term risks/rewards more heavily than long-term ones, but does that mean we "overvalue" them?  There is, after all, a non-zero probability that we won't be alive to see a long-term risk/reward play out, and the longer the term, the higher that probability.  Even if we don't die, the longer the term, the greater the probability that intervening causal elements and/or later decision points will change the risk/reward calculation in ways we can't precisely estimate.  Given those vagaries, we tend to weight long-term risks/rewards lower than a naive mathematical calculation might suggest.
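
      That survival argument can be made quantitative.  A back-of-envelope sketch, with a made-up 1% annual hazard rate standing in for all the ways a payoff might never arrive:

      ```python
      # Discount a delayed reward by the probability of being around to collect it.
      # The 1% annual hazard rate is an assumption chosen only for illustration.
      def expected_payoff(reward: float, years: int, annual_survival: float = 0.99) -> float:
          """Expected value if collecting requires surviving `years` periods."""
          return reward * annual_survival ** years

      for years in (1, 10, 30):
          print(f"{years:2d} years: {expected_payoff(100.0, years):6.2f}")
      # -> 1 year ~ 99.00, 10 years ~ 90.44, 30 years ~ 73.97
      ```

      Even a small per-year hazard pushes the rational weight on a distant reward well below its face value, so weighting the short term more heavily isn't automatically a bias.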

      Does that mean we're reasoning poorly ... or that the mathematical calculation isn't considering enough of the real-world variables that we weigh, often without being able to articulate them?

      Good morning! ::hugggggggggs::
