a resolution of the Jeffreys-Lindley paradox
“…it is possible to have the best of both worlds. If one allows the significance level to decrease as the sample size gets larger (…) there will be a finite number of errors made with probability one. By allowing the critical values to diverge slowly, one may catch almost all the errors.” (p.1527)
When commenting on another post, Michael Naaman pointed me to his 2016 Electronic Journal of Statistics paper in which he resolves the Jeffreys-Lindley paradox. The argument there is to consider a Type I error that goes to zero as the sample size n goes to infinity, but slowly enough for both Type I and Type II errors to go to zero, and to guarantee a finite number of errors as the sample size n grows to infinity. For the Jeffreys-Lindley paradox, this translates into the posterior probability of the null converging to zero as n goes to infinity, provided a pivotal quantity appearing within this probability diverges with n, hence making it (most) agreeable with the Type I error going to zero. Except that there is little reason to assume this pivotal quantity goes to infinity with n, given that its distribution remains constant in n: assuming it stays constant is less unrealistic, by comparison! That there exists a hypothetical sequence of observations such that the p-value and the posterior probability agree, even exactly, does not “solve” the paradox in my opinion.
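To make the mechanism concrete, here is a minimal numerical sketch with my own hypothetical choices rather than Naaman’s actual construction: a two-sided Gaussian test with critical value c⁠ₙ = √(2 log n) on the z-statistic, a fixed alternative θ = 0.1, and τ = σ = 1. With this choice, exp(−c⁠ₙ²/2) = 1/n, so the approximate Bayes factor in favour of the null at the boundary z = c⁠ₙ behaves like τ/(σ√n), vanishing along with the p-value.

```python
# Hypothetical illustration (my choices, not Naaman's exact construction):
# a two-sided Gaussian test of H0: theta = 0 whose critical value
# c_n = sqrt(2 log n) on the z-statistic diverges slowly with n.
import numpy as np
from scipy.stats import norm

sigma, tau, theta = 1.0, 1.0, 0.1   # assumed scales and a fixed alternative

for n in [10**2, 10**4, 10**6, 10**8]:
    c_n = np.sqrt(2 * np.log(n))            # slowly diverging critical value
    type_I = 2 * norm.sf(c_n)               # P(|Z| > c_n) under the null
    shift = np.sqrt(n) * theta / sigma      # mean of the z-statistic under theta
    type_II = norm.cdf(c_n - shift) - norm.cdf(-c_n - shift)
    # Approximate Bayes factor for H0 at the boundary z = c_n under a
    # N(0, tau^2) prior on theta: B01 ~ (tau sqrt(n) / sigma) exp(-z^2 / 2),
    # so the posterior probability of the null vanishes with the p-value.
    B01 = (tau * np.sqrt(n) / sigma) * np.exp(-c_n**2 / 2)
    post_null = B01 / (1 + B01)             # assuming equal prior odds
    print(f"n={n:>9}  c_n={c_n:4.2f}  TypeI={type_I:.1e}  "
          f"TypeII={type_II:.1e}  P(H0|z=c_n)={post_null:.3f}")
```

Both error types indeed vanish as n grows, and at the boundary the posterior probability of the null decays like n^(−1/2): this is exactly the hypothetical sequence of observations along which both answers agree.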
April 25, 2019 at 4:03 pm
The solution to the Lindley paradox is that there is no paradox. The Bayesian answer is correct, and is quite intuitive, in circumstances where the assumed prior corresponds to what you actually believe. Whatever the frequentist answer is, it is of no consequence. Frequentist methods are fundamentally broken, in many ways beyond this example, so it is hardly surprising that they disagree with the correct Bayesian answer.
More typically, however, one doesn’t think that the mean is either exactly zero or drawn from some prior that is flat in the vicinity of zero. Instead, one usually thinks that there is a substantial probability for the mean to be close to zero, but not exactly equal to zero. This of course leads to a different Bayesian conclusion, which is again entirely intuitive if this prior corresponds to what you actually believe.
Either way, there is no problem. There is nothing to “resolve”.
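[To make the contrast in the above comment concrete, here is my own sketch, with hypothetical values σ = τ = 1, ε = 0.05, and the z-statistic held at 1.96 (a p-value near 0.05), comparing the posterior probability of the point null under a spike-and-slab prior with the posterior probability that |θ| ≤ ε under a purely continuous N(0, τ²) prior:]

```python
# Editor's illustration, not taken from the comment: two prior specifications
# for the Lindley setting, (a) a point mass at zero mixed (equal odds) with a
# N(0, tau^2) slab, and (b) a purely continuous N(0, tau^2) prior where the
# "null" is read as |theta| <= eps.
import numpy as np
from scipy.stats import norm

sigma, tau, eps, z = 1.0, 1.0, 0.05, 1.96   # hypothetical values; z fixed

for n in [10, 10**3, 10**5, 10**7]:
    xbar = z * sigma / np.sqrt(n)           # data kept at the same z-statistic
    # (a) spike-and-slab: posterior probability of theta = 0 via marginals
    m0 = norm.pdf(xbar, 0, sigma / np.sqrt(n))
    m1 = norm.pdf(xbar, 0, np.sqrt(tau**2 + sigma**2 / n))
    post_point_null = m0 / (m0 + m1)
    # (b) continuous prior: posterior is N(m, v); report P(|theta| <= eps | x)
    v = 1 / (n / sigma**2 + 1 / tau**2)
    m = v * n * xbar / sigma**2
    post_near_null = norm.cdf((eps - m) / np.sqrt(v)) - norm.cdf((-eps - m) / np.sqrt(v))
    print(f"n={n:>8}  P(theta=0|x)={post_point_null:.3f}  "
          f"P(|theta|<=eps|x)={post_near_null:.3f}")
```

[At moderate n the two priors give quite different conclusions, while for large n with a fixed z-statistic both posterior probabilities go to one, against the fixed p-value of 0.05: the Lindley behaviour, intuitive under either prior once the prior is taken seriously.]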
April 26, 2019 at 9:45 am
Thanks! My review was presumably too cryptic and reserved, but I do agree that the “paradox” is not a paradox, either at the frequentist/Bayesian interface or within the Bayesian paradigm itself. Most of the posts I have published on that theme support this argument.
April 24, 2019 at 4:25 am
I must admit that, sadly, I do not quite follow your analysis, but I did follow the paper itself and agree with your conclusions (which are perhaps slightly less harsh than your original reply to the author! OK, he was asking for it).
Hearing that a solution to Lindley’s paradox has been found by allowing the significance level to depend on the sample size should be enough to set eyes rolling.
Also, is Lindley’s paradox really asking to be resolved? Or does it simply exist?
April 26, 2019 at 9:41 am
Radford’s comment provides the answer to your question!