There is a very long and somewhat windy, if often funny, introduction to Bayes’ theorem by a researcher in artificial intelligence. In particular, it contains several Java applets that show how intuition about posterior probabilities can be misleading. The whole text is about constructing Bayes’ theorem for simple binomial outcomes with two possible causes. It is indeed funny and entertaining (at least at the beginning) but, as a mathematician, I do not see how these many pages build more intuition than looking at the mere definition of a conditional probability and at the inversion that is the essence of Bayes’ theorem. The author agrees with this to some extent: “By this point, Bayes’ Theorem may seem blatantly obvious or even tautological, rather than exciting and new. If so, this introduction has entirely succeeded in its purpose.” Quite right.
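The two-cause binomial setting the text dwells on takes only a few lines to write down. Here is a minimal sketch of the inversion (the numbers are invented for illustration, not taken from the applets):

```python
# Bayes' theorem for a binary effect with two possible causes:
#   P(cause | effect) = P(effect | cause) * P(cause) / P(effect)
# Hypothetical numbers, chosen so the rare cause has the strong likelihood:
prior = {"A": 0.01, "B": 0.99}        # P(cause)
likelihood = {"A": 0.80, "B": 0.10}   # P(effect | cause)

# Law of total probability: P(effect) = sum over causes of P(effect|cause) P(cause)
evidence = sum(likelihood[c] * prior[c] for c in prior)

# The inversion itself: posterior P(cause | effect)
posterior = {c: likelihood[c] * prior[c] / evidence for c in prior}

print(posterior)  # cause A remains unlikely despite its high likelihood
```

Note how P(effect | A) = 0.80 while P(A | effect) is below 0.08: the prior drags the posterior down, which is exactly the counter-intuitive part, not the two-line theorem.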
Looking further, there is however a whole crowd on the blogs that seems to see more in Bayes’ theorem than a mere probability inversion, see here and there and there again for examples, a focus that actually confuses, to some extent, the theorem itself [a two-line proof, no problem, Bayes’ theorem being indeed tautological] with the construction of prior probabilities or densities [a forever-debatable issue]. The theorem per se offers no difficulty, so the fascination may be due to the counter-intuitive inversion of probabilities, as in the example of the first blog. But the fact that people often confuse probabilities of causes with probabilities of effects, i.e. the right order of conditioning, does not call for a deeper explanation of Bayes’ theorem, but rather for a pointer to causal reasoning!