## Archive for Type I error

## at last the type IX error

Posted in Statistics with tags comics, false discovery rate, Neyman-Pearson tests, significance test, testing of hypotheses, Type I error, Type II error, xkcd on May 11, 2020 by xi'an

## a resolution of the Jeffreys-Lindley paradox

Posted in Books, Statistics, University life with tags Electronic Journal of Statistics, Jeffreys-Lindley paradox, p-value, Type I error, Type II error on April 24, 2019 by xi'an

“…it is possible to have the best of both worlds. If one allows the significance level to decrease as the sample size gets larger (…) there will be a finite number of errors made with probability one. By allowing the critical values to diverge slowly, one may catch almost all the errors.” (p.1527)

**W**hen commenting on another post, Michael Naaman pointed me to his 2016 Electronic Journal of Statistics paper, where he resolves the Jeffreys-Lindley paradox. The argument there is to let the Type I error go to zero as the sample size n goes to infinity, but slowly enough that both Type I and Type II errors go to zero, which guarantees a finite number of errors as n grows to infinity. For the Jeffreys-Lindley paradox, this translates into a pivotal quantity within the posterior probability of the null that converges to zero as n goes to infinity, hence makes it (mostly) agreeable with the Type I error going to zero. Except that there is little reason to assume this pivotal quantity goes to infinity with n, given that its distribution remains constant in n. Being constant is less unrealistic, by comparison! That there exists a hypothetical sequence of observations such that the p-value and the posterior probability agree, even exactly, does not “solve” the paradox in my opinion.
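The quoted mechanism, letting the significance level shrink with n so that the critical value diverges slowly while both error probabilities vanish, is easy to see numerically. Below is a minimal sketch for a one-sided z-test of H₀: μ=0 against μ=0.5; the schedule αₙ = 1/√n is my own illustrative choice, not Naaman's exact rate.

```python
import numpy as np
from scipy.stats import norm

def error_rates(n, mu1=0.5, sigma=1.0):
    """Type I and Type II errors of a one-sided z-test of H0: mu = 0
    against the alternative mu = mu1, with a level shrinking in n.
    (alpha_n = 1/sqrt(n) is one illustrative schedule, not Naaman's rate.)"""
    alpha_n = 1.0 / np.sqrt(n)          # significance level decreasing with n
    z_crit = norm.ppf(1.0 - alpha_n)    # critical value diverges slowly, like sqrt(log n)
    # Type II error: P(sqrt(n) * xbar / sigma < z_crit) under mu = mu1
    beta_n = norm.cdf(z_crit - np.sqrt(n) * mu1 / sigma)
    return alpha_n, beta_n

for n in (100, 1_000, 10_000):
    a, b = error_rates(n)
    print(f"n={n:6d}  Type I={a:.4f}  Type II={b:.2e}")
```

Because the critical value grows like √(log n) while the drift of the test statistic grows like √n, both error probabilities tend to zero together, which is the "best of both worlds" of the quote.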

## uniformly most powerful Bayesian tests???

Posted in Books, Statistics, University life with tags Bayes factors, Bayesian tests, minimaxity, Neyman-Pearson, power, Type I error, UMP tests, uniformly most powerful tests on September 30, 2013 by xi'an

“The difficulty in constructing a Bayesian hypothesis test arises from the requirement to specify an alternative hypothesis.”

**V**alen Johnson published (and arXived) a paper in the *Annals of Statistics* on uniformly most powerful Bayesian tests. This is in line with earlier writings of Valen on the topic and good quality mathematical statistics, but I cannot really buy the arguments contained in the paper as being compatible with (my view of) Bayesian tests. A “uniformly most powerful Bayesian test” (acronymed as UMPBT) is defined as

“UMPBTs provide a new form of default, nonsubjective Bayesian tests in which the alternative hypothesis is determined so as to maximize the probability that a Bayes factor exceeds a specified threshold”

which means selecting *the prior* under the alternative so that the *frequentist* probability of the Bayes factor exceeding the threshold is maximal *for all* values of the parameter. This does not sound very Bayesian to me indeed, due to

- this averaging over all possible values of the observations **x** and comparing the probabilities for all values of the parameter *θ*, rather than integrating against a prior or posterior,
- selecting the prior under the alternative with the sole purpose of favouring the alternative, meaning its further use *when* the null is rejected is not considered at all,
- and catering to non-Bayesian theories, i.e. trying to sell Bayesian tools as supplementing *p*-values and arguing the method is objective because the solution satisfies a frequentist coverage (at best, this maximisation of the rejection probability reminds me of minimaxity, except there is no clear and generic notion of minimaxity in hypothesis testing).
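To make the construction under debate concrete, here is a sketch of the canonical case in Johnson's paper: a one-sided z-test of H₀: μ=0 with known σ. For a point alternative μ₁, the Bayes factor exceeds γ exactly when the sample mean exceeds σ²log(γ)/(nμ₁) + μ₁/2, and minimising this threshold over μ₁ maximises the exceedance probability simultaneously for every true μ; the numerical values below are my own illustration.

```python
import numpy as np

def bf_threshold(mu1, n, gamma, sigma=1.0):
    """Value of xbar above which the Bayes factor of the point alternative mu1
    against H0: mu = 0 exceeds gamma (normal model, known sigma).
    BF_10 = exp(n * (mu1 * xbar - mu1**2 / 2) / sigma**2) > gamma
    <=> xbar > sigma^2 * log(gamma) / (n * mu1) + mu1 / 2."""
    return sigma**2 * np.log(gamma) / (n * mu1) + mu1 / 2.0

def umpbt_alternative(n, gamma, sigma=1.0):
    """The point alternative minimising the threshold above (by AM-GM),
    i.e. Johnson's UMPBT alternative mu1* = sigma * sqrt(2 log(gamma) / n)."""
    return sigma * np.sqrt(2.0 * np.log(gamma) / n)

n, gamma = 100, 10.0
mu_star = umpbt_alternative(n, gamma)
# mu1* attains the lowest rejection threshold over a grid of point alternatives,
# hence the highest frequentist probability of {BF_10 > gamma} whatever the true mu
grid = np.linspace(0.01, 1.0, 500)
best_on_grid = min(bf_threshold(m, n, gamma) for m in grid)
print(f"UMPBT alternative mu1* = {mu_star:.3f}, "
      f"threshold = {bf_threshold(mu_star, n, gamma):.3f}, "
      f"grid minimum = {best_on_grid:.3f}")
```

Note that the optimisation is entirely frequentist, over a sampling probability of the rejection event, which is precisely the point of contention above: the "prior" under the alternative is picked to make rejection as likely as possible, not to reflect prior beliefs.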