Bayes’ Theorem

There is a very long and somewhat windy, if often funny, introduction to Bayes’ theorem by a researcher in artificial intelligence. In particular, it contains several Java applets that show how intuition about posterior probabilities can be wrong. The whole text is devoted to constructing Bayes’ theorem for simple binomial outcomes with two possible causes. It is indeed funny and entertaining (at least at the beginning) but, as a mathematician, I do not see how these many pages build more intuition than looking at the mere definition of a conditional probability and at the inversion that is the essence of Bayes’ theorem. The author agrees with this to some extent: “By this point, Bayes’ Theorem may seem blatantly obvious or even tautological, rather than exciting and new. If so, this introduction has entirely succeeded in its purpose.” Quite right.

Looking further, there is however a whole crowd on the blogs that seems to see more in Bayes’ theorem than a mere probability inversion; see here and there and there again for examples. This focus actually confuses, to some extent, the theorem itself [a two-line proof, no problem, Bayes’ theorem being indeed tautological] with the construction of prior probabilities or densities [a forever-debatable issue]. The theorem per se offers no difficulty, so the fascination may come from counter-intuitive inversions of probabilities such as the one found in the example of the first blog. But the fact that people often confuse probabilities of causes with probabilities of effects, i.e. the right order of conditioning, does not call for a deeper explanation of Bayes’ theorem; it calls for a pointer at causal reasoning!
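For completeness, the two-line proof alluded to above is nothing more than the definition of conditional probability written twice and inverted, here in a minimal LaTeX rendering for events A and B with P(B) > 0:

\[
  P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \qquad
  P(B \mid A) = \frac{P(A \cap B)}{P(A)},
\]
\[
  \text{hence}\quad
  P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}
              = \frac{P(B \mid A)\,P(A)}{P(B \mid A)\,P(A) + P(B \mid A^c)\,P(A^c)},
\]

where the last equality uses the law of total probability and covers the case of two possible causes, A and its complement, mentioned above.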

18 Responses to “Bayes’ Theorem”


  6. “I do not see how these many pages build more intuition than looking at the mere definition of a conditional probability”

    Sir… I fear I must ask if you have ever, in your entire life, attempted to explain a point of mathematics to someone who was not a mathematician.

  7. Sure, I keep seeing the magic in all those points, which all are consequences of Bayes’ Theorem. As you say, people get confused between the tool and the consequences. After all, if they end up using Bayesian tools, where is the problem?!

  8. There are several different points of fascination about Bayes:

    1. Surprising results from conditional probability. For example, if you test positive for a disease with a 1% prevalence rate, and the test is 95% effective, then you probably don’t have the disease (a numerical sketch at the end of this comment works this out).

    2. Bayesian data analysis as a way to solve statistical problems. For example, the classic partial-pooling examples of Lindley, Novick, Efron, Morris, Rubin, etc.

    3. Bayesian inference as a way to include prior information in statistical analysis.

    4. Bayes or Bayes-like rules for decision analysis and inference in computer science, for example identifying spam.

    5. Bayesian inference as coherent reasoning, following the principles of Von Neumann, Keynes, Savage, etc.

    My impression is that people have difficulty separating these ideas. In my opinion, all five of the above items are cool but they don’t always go together in any given problem. For example, the conditional probability laws in point 1 above are always valid, but not always particularly relevant, especially in continuous problems. (Consider the example in chapter 1 of Bayesian Data Analysis of empirical probabilities for football point spreads, or the example of kidney cancer rates in chapter 2.) Similarly, subjective probability is great, but in many many applications it doesn’t arise at all.

    Anyway, all of the five items above are magical, but a lot of the magic comes from the specific models being used–and, for many statisticians, the willingness to dive into the unknown by using an unconventional model at all–not just from the simple formula.
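    A minimal numerical sketch of point 1 in Python, assuming that “95% effective” means 95% sensitivity and 95% specificity (figures not spelled out above, so treat them as an illustration only):

        # Posterior probability of disease given a positive test, via Bayes' theorem.
        # Assumed inputs: 1% prevalence, 95% sensitivity, 95% specificity.
        prevalence = 0.01                  # P(disease)
        sensitivity = 0.95                 # P(positive | disease)
        specificity = 0.95                 # P(negative | no disease)
        false_positive = 1 - specificity   # P(positive | no disease)

        # Law of total probability: P(positive)
        p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)

        # Bayes' theorem: P(disease | positive)
        posterior = sensitivity * prevalence / p_positive
        print(f"P(disease | positive) = {posterior:.3f}")   # about 0.161

    So even after a positive result, the probability of having the disease is only about 16%, which is the surprising inversion behind point 1.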

  9. Especially if it is well-designed! But my point there was my surprise at how fascinating Bayes’ Theorem seems to be outside Statistics. Does the Math kill the magic or am I getting dull with old age?!

  10. What’s funny to me is how authoritative any document seems if it’s sitting on the web.
