a resolution of the Jeffreys-Lindley paradox

“…it is possible to have the best of both worlds. If one allows the significance level to decrease as the sample size gets larger (…) there will be a finite number of errors made with probability one. By allowing the critical values to diverge slowly, one may catch almost all the errors.” (p.1527)

In a comment on another post, Michael Naaman pointed me to his 2016 Electronic Journal of Statistics paper, in which he resolves the Jeffreys-Lindley paradox. The argument there is to let the Type I error go to zero as the sample size n goes to infinity, but slowly enough that both Type I and Type II errors vanish, which further guarantees that only a finite number of errors is made as n grows to infinity. For the Jeffreys-Lindley paradox, this translates into requiring the pivotal quantity sitting inside the posterior probability of the null to diverge with n, so that this posterior probability converges to zero and hence becomes (most) agreeable with the vanishing Type I error. Except that there is little reason to assume this pivotal quantity goes to infinity with n, given that its distribution remains constant in n. Remaining constant is less unrealistic, by comparison! That there exists a hypothetical sequence of observations along which the p-value and the posterior probability agree, even exactly, does not “solve” the paradox in my opinion.
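For concreteness, here is a minimal sketch of the usual normal-mean rendering of the paradox and of the type of reconciliation described in the quote; the notation (σ, τ, c_n, γ) is purely illustrative and not Naaman's own construction.

```latex
% Illustrative normal-mean setup (my notation, not Naaman's):
% x_1,...,x_n iid N(theta, sigma^2) with sigma known, H_0: theta = 0 vs H_1: theta != 0,
% prior mass 1/2 on H_0 and theta ~ N(0, tau^2) under H_1; phi and Phi denote the
% standard normal density and cdf, and z_n = sqrt(n) * xbar_n / sigma is the usual
% pivotal quantity, a N(0,1) draw under H_0 for every n.
\[
  B_{01}(z_n) = \sqrt{1+\frac{n\tau^2}{\sigma^2}}
  \exp\!\left\{-\frac{z_n^2}{2}\,\frac{n\tau^2}{n\tau^2+\sigma^2}\right\},
  \qquad
  P(H_0 \mid x_{1:n}) = \frac{B_{01}(z_n)}{1+B_{01}(z_n)}.
\]
% The paradox: holding z_n = 1.96 fixed (a p-value of 0.05 for every n), B_{01} grows
% like sqrt(n) and P(H_0 | x_{1:n}) -> 1 while the p-value stays at 0.05.
% The reconciliation described in the quote: reject only when |z_n| > c_n for a slowly
% diverging critical value, e.g. c_n = sqrt(2*gamma*log n) with gamma > 1, so that
\[
  \alpha_n = 2\{1-\Phi(c_n)\} \le \frac{2\varphi(c_n)}{c_n} \lesssim n^{-\gamma}
  \longrightarrow 0,
  \qquad
  \sum_n \alpha_n < \infty,
\]
% and, by the first Borel-Cantelli lemma, only finitely many Type I errors occur with
% probability one; under a fixed theta != 0, |z_n| is of order sqrt(n)|theta|/sigma,
% far above c_n, so Type II errors vanish (and sum finitely) as well.
```

In this notation, a sequence of observations along which the p-value and the posterior probability of the null agree is one along which the realised z_n² grows at least like log n, which is precisely the divergence questioned above.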

4 Responses to “a resolution of the Jeffreys-Lindley paradox”

  1. Radford Neal Says:

    The solution to the Lindley paradox is that there is no paradox. The Bayesian answer is correct, and quite intuitive, in circumstances where the assumed prior corresponds to what you actually believe. Whatever the frequentist answer is, it is of no consequence. Frequentist methods are fundamentally broken, in many ways beyond this example, so it is hardly surprising, or a paradox, that they disagree with the correct Bayesian answer.

    More typically, however, one doesn’t think that the mean is either exactly zero or drawn from some prior that is flat in the vicinity of zero. Instead, one usually thinks that there is a substantial probability for the mean to be close to zero, but not exactly equal to zero. This of course leads to a different Bayesian conclusion (sketched after this comment), which is again entirely intuitive if this prior corresponds to what you actually believe.

    Either way, there is no problem. There is nothing to “resolve”.
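A minimal sketch of the contrast drawn in this comment, assuming the same illustrative normal-mean notation as in the earlier display (ε, the small scale of the near-zero prior, is my own placeholder):

```latex
% Same illustrative notation as above, with the point mass at zero replaced by a tight
% continuous prior theta ~ N(0, eps^2), eps > 0 small, against theta ~ N(0, tau^2),
% tau >> eps, under the alternative. The marginal densities of xbar_n are then
% N(0, eps^2 + sigma^2/n) and N(0, tau^2 + sigma^2/n), hence
\[
  B_{01}(\bar{x}_n) =
  \sqrt{\frac{\tau^2+\sigma^2/n}{\varepsilon^2+\sigma^2/n}}\,
  \exp\!\left\{-\frac{\bar{x}_n^2}{2}
  \left(\frac{1}{\varepsilon^2+\sigma^2/n}-\frac{1}{\tau^2+\sigma^2/n}\right)\right\}
  \xrightarrow[n\to\infty]{}
  \frac{\tau}{\varepsilon}\,
  \exp\!\left\{-\frac{\theta^2}{2}
  \left(\frac{1}{\varepsilon^2}-\frac{1}{\tau^2}\right)\right\},
\]
% since xbar_n -> theta almost surely: the Bayes factor settles at a finite value driven
% by where theta actually sits, rather than drifting to infinity at a fixed significance
% level, so no Jeffreys-Lindley-type divergence occurs under this kind of prior.
```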

  2. I must admit that, sadly, I do not quite follow your analysis, but I did follow the paper itself and agree with your conclusions (which are perhaps slightly less harsh than your original reply to the author! OK, he was asking for it).

    Hearing that a solution to Lindley’s paradox has been found by allowing the significance level to depend on the sample size should be a good stimulus for the rolling of eyes.

    Also, is Lindley’s paradox really asking to be resolved? Or does it simply exist?
