## Archive for point null hypotheses

## unrejected null [xkcd]

Posted in Statistics with tags evidence, Nature, point null hypotheses, preregistered experiments, replication crisis, xkcd on July 18, 2018 by xi'an

## estimation versus testing [again!]

Posted in Books, Statistics, University life with tags Bayes factors, Bayesian inference, Harold Jeffreys, hypothesis testing, parameter estimation, point null hypotheses, psychology, refereeing, review, spike-and-slab prior, unification on March 30, 2017 by xi'an

**T**he following text is a review I wrote of the paper “Parameter estimation and Bayes factors”, by J. Rouder, J. Haff, and J. Vandekerckhove. (The journal to which it was submitted gave me the option to sign my review.)

The opposition between estimation and testing as a matter of prior modelling rather than inferential goals is quite unusual in the Bayesian literature. In particular, if one follows Bayesian decision theory as in Berger (1985) there is no such opposition, but rather the use of different loss functions for different inference purposes, while the Bayesian model remains single and unitarian.

Following Jeffreys (1939), it sounds more congenial to the Bayesian spirit to return the posterior probability of an hypothesis H⁰ as an answer to the question whether this hypothesis holds or does not hold. This however proves impossible when the “null” hypothesis H⁰ has prior mass equal to zero (or is not measurable under the prior). In such a case the mathematical answer is a probability of zero, which may not satisfy the experimenter who asked the question. More fundamentally, the said prior proves inadequate to answer the question and hence to incorporate the information contained in this very question. This is how Jeffreys (1939) justifies the move from the original (and deficient) prior to one that puts some weight on the null (hypothesis) space. It is often argued that the move is unnatural and that the null space does not make sense, but this only applies when believing very strongly in the model itself. When considering the issue from a modelling perspective, accepting the null H⁰ means using a new model to represent the data and hence testing becomes a model choice problem, namely whether or not one should use a complex or a simplified model to represent the generation of the data. This is somehow the “unification” advanced in the current paper, albeit it does appear originally in Jeffreys (1939) [and then in numerous others] rather than in the relatively recent Mitchell & Beauchamp (1988), who may have launched the spike & slab denomination.

**I** have trouble with the analogy drawn in the paper between the spike & slab estimate and the Stein effect. While the posterior mean derived from the spike & slab posterior is indeed a quantity drawn towards zero by the Dirac mass at zero, it is rarely the point in using a spike & slab prior, since this point estimate does not lead to a conclusion about the hypothesis: for one thing, it is never exactly zero (if zero corresponds to the null). For another thing, the construction of the spike & slab prior is both artificial and dependent on the weights given to the spike and to the slab, respectively, to borrow expressions from the paper. This approach thus leads to model averaging rather than hypothesis testing or model choice and therefore fails to answer the (possibly absurd) question as to which model to choose. Or refuse to choose. But there are cases when a decision must be made, like continuing a clinical trial or putting a new product on the market. Or not.
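To make this concrete, here is a minimal sketch (not from the paper under review; the unit-variance normal likelihood, the weight w and the slab scale τ are illustrative assumptions of mine) showing that the spike & slab posterior mean is shrunk towards zero without ever equalling it, and hence does not by itself settle the test:

```python
import math

def norm_pdf(x, mean, var):
    """Normal density, used for the marginal likelihoods of spike and slab."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def spike_slab_summary(x, w=0.5, tau=1.0):
    """x ~ N(theta, 1); theta = 0 with prior mass w (spike),
    theta ~ N(0, tau^2) with mass 1 - w (slab).
    Returns P(H0 | x) and the posterior mean of theta."""
    m0 = norm_pdf(x, 0.0, 1.0)            # marginal likelihood under the spike
    m1 = norm_pdf(x, 0.0, 1.0 + tau**2)   # marginal likelihood under the slab
    p_h0 = w * m0 / (w * m0 + (1 - w) * m1)
    # under the slab, E[theta | x] = tau^2 / (1 + tau^2) * x (normal conjugacy)
    post_mean = (1 - p_h0) * tau**2 / (1 + tau**2) * x
    return p_h0, post_mean

p_h0, post_mean = spike_slab_summary(2.0)
# the posterior mean is pulled towards zero but is not exactly zero,
# and this point estimate says nothing about accepting or rejecting H0
```

The same marginal likelihoods m0 and m1 would instead feed a Bayes factor if the question really is which of the two models to choose.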

In conclusion, the paper surprisingly bypasses the decision-making aspect of testing and hence ends up with an inconclusive setting, staying midstream between Bayes factors and credible intervals. And failing to provide a tool for decision making. The paper also fails to acknowledge the strong dependence of the Bayes factor on the tail behaviour of the prior(s), which cannot be [completely] corrected by a finite sample, hence its relativity and the unreasonableness of a fixed scale like Jeffreys’ (1939).

## Measuring statistical evidence using relative belief [book review]

Posted in Books, Statistics, University life with tags ABC, Bayes factor, CHANCE, CRC Press, discrepancies, Error and Inference, improper prior, integrated likelihood, Jeffreys-Lindley paradox, Likelihood Principle, marginalisation paradoxes, model checking, model validation, Monty Hall problem, Murray Aitkin, p-value, point null hypotheses, relative belief ratio, University of Toronto on July 22, 2015 by xi'an

“It is necessary to be vigilant to ensure that attempts to be mathematically general do not lead us to introduce absurdities into discussions of inference.” (p.8)

**T**his new book by Michael Evans (Toronto) summarises his views on statistical evidence (expanded in a large number of papers), which are a quite unique mix of Bayesian principles and less-Bayesian methodologies. I am quite glad I could receive a version of the book before it was published by CRC Press, thanks to Rob Carver (and Keith O’Rourke for warning me about it). *[Warning: this is a rather long review and post, so readers may choose to opt out now!]*

“The Bayes factor does not behave appropriately as a measure of belief, but it does behave appropriately as a measure of evidence.” (p.87)

## Adaptive revised standards for statistical evidence [guest post]

Posted in Books, Statistics, University life with tags Bayes factors, Carlos Pereira, Luis Pericchi, María-Eglée Pérez, p-values, PNAS, point null hypotheses, Valen Johnson on March 25, 2014 by xi'an

*[Here is a discussion of Valen Johnson’s PNAS paper written by Luis Pericchi, Carlos Pereira, and María-Eglée Pérez, in conjunction with an arXived paper of theirs I never came to discuss. This has been accepted by PNAS along with a large number of other letters. Our discussion permuting the terms of the original title also got accepted.]*

**J**ohnson [1] argues for decreasing the bar of statistical significance from 0.05 and 0.01 to 0.005 and 0.001 respectively. There is growing evidence that the canonical fixed standards of significance are inappropriate. However, the author simply proposes other fixed standards. The essence of the problem of classical testing of significance lies in its goal of minimizing type II error (false negative) for a fixed type I error (false positive). A real departure instead would be to minimize a weighted sum of the two errors, as proposed by Jeffreys [2]. Significance levels that are constant with respect to sample size do not balance errors. Size levels of 0.005 and 0.001 certainly will lower false positives (type I error) at the expense of increasing type II error, unless the study is carefully designed, which is not always the case or not even possible. If the sample size is small, the type II error can become unacceptably large. On the other hand, for large sample sizes, 0.005 and 0.001 levels may be too high. Consider the Psychokinetic data, Good [3]: the null hypothesis is that individuals cannot change by mental concentration the proportion of 1’s in a sequence of n = 104,490,000 0’s and 1’s, generated originally with a proportion of 1/2. The proportion of 1’s recorded was 0.5001768. The observed p-value is p = 0.0003, therefore according to the present revision of standards, the null hypothesis is still rejected and a Psychokinetic effect claimed. This is contrary to intuition and to virtually any Bayes factor. On the other hand, to make the standards adaptable to the amount of information (see also Raftery [4]), Perez and Pericchi [5] approximate the behavior of Bayes factors by,

**T**his formula establishes a bridge between carefully designed tests and the adaptive behavior of Bayesian tests: the value n_{0} comes from a theoretical design for which a value of both errors has been specified, and n is the actual (larger) sample size. In the Psychokinetic data, n_{0} = 44,529 for a type I error of 0.01 and a type II error of 0.05 to detect a difference of 0.01. Then α_{ref}(104,490,000) = 0.00017 and the null of no Psychokinetic effect is accepted.
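The formula itself did not survive the transcription above. Assuming (my reconstruction, not guaranteed) that it is the Perez–Pericchi adaptive level α_ref(n) = α √[ n₀(log n₀ + χ²_α(1)) / ( n(log n + χ²_α(1)) ) ], the quoted value α_ref(104,490,000) = 0.00017 is reproduced:

```python
import math

CHI2_01_1DF = 6.635  # 1-df chi-square upper quantile at level 0.01

def alpha_ref(n, n0, alpha):
    """Adaptive significance level decreasing with the sample size n,
    calibrated on a reference design of size n0 (Perez & Pericchi,
    as reconstructed here)."""
    q = CHI2_01_1DF
    return alpha * math.sqrt(n0 * (math.log(n0) + q) / (n * (math.log(n) + q)))

a = alpha_ref(104_490_000, 44_529, 0.01)
# a is about 0.00017, well below the 0.005 fixed standard, so the null of
# no Psychokinetic effect is accepted at the adaptive level
```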

**A** simple constant recipe is not the solution to the problem. The standard by which to judge the evidence should be a function of the amount of information. Johnson’s main message is to toughen the standards and design the experiments accordingly. This is welcomed whenever possible. But it does not balance type I and type II errors: it would be misleading to pass the message “use now standards divided by ten” regardless of type II errors and sample sizes. This would move the problem without solving it.
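Good's numbers above are easy to check; the sketch below recomputes the p-value and contrasts it with a toy Bayes factor (the normal approximation is standard, but the slab scale τ = 0.1 under the alternative is my own illustrative choice, not taken from the letter):

```python
import math

n, prop_hat = 104_490_000, 0.5001768   # Good's Psychokinetic data, as quoted
se = math.sqrt(0.25 / n)               # sd of the sample proportion under H0: p = 1/2
z = (prop_hat - 0.5) / se
p_value = math.erfc(z / math.sqrt(2))  # two-sided normal p-value, about 0.0003

def norm_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# toy Bayes factor of H0: p = 1/2 against p ~ N(1/2, tau^2), tau illustrative
tau = 0.1
bf01 = norm_pdf(prop_hat, 0.5, se**2) / norm_pdf(prop_hat, 0.5, se**2 + tau**2)
# bf01 exceeds 1: the Bayes factor favours the null despite the tiny p-value
```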

## workshop a Venezia (2)

Posted in pictures, Statistics, Travel, University life with tags ABC, approximate likelihood, Ca' Foscari University, clustering, composite likelihood, empirical likelihood, hypothesis testing, Italia, loss functions, normalising constant, point null hypotheses, Venezia on October 10, 2012 by xi'an

**I** could only attend one day of the workshop on likelihood, approximate likelihood and nonparametric statistical techniques with some applications, and I wish I could have stayed a day longer (and definitely not only for the pleasure of being in Venezia!) Yesterday, Bruce Lindsay started the day with an extended review of composite likelihood, followed by recent applications of composite likelihood to clustering (I was completely unaware he had worked on the topic in the 80’s!). His talk was followed by several talks working on composite likelihood and other pseudo-likelihoods, which made me think about potential applications to ABC. During my tutorial talk on ABC, I got interesting questions on multiple testing and how to combine the different “optimal” summary statistics (*answer:* take all of them, as it would not make sense to compare one pair with one summary statistic and another pair with another summary statistic), and on why we were using empirical likelihood rather than another pseudo-likelihood (*answer:* I do not have a definite answer. I guess it depends on the ease with which the pseudo-likelihood is derived and what we do with it. I would e.g. feel less confident using the pairwise composite as a substitute likelihood rather than as the basis for a score function.) In the final afternoon, Monica Musio presented her joint work with Phil Dawid on score functions and their connection with pseudo-likelihood and estimating equations (another possible opening for ABC), mentioning a score family developed by Hyvärinen that involves the gradient of the square-root of a density, in the best James-Stein tradition!
(Plus an approach bypassing the annoying missing normalising constant.) Then, based on joint work with Nicola Sartori and Laura Ventura, Erlis Ruli exposed a 3rd-order tail approximation towards (marginal) posterior simulation called HOTA. As Ruli will visit me in Paris in the coming weeks, I hope I can explore the possibilities of this method when he is (t)here. At last, Stéfano Cabras discussed higher-order approximations for Bayesian point-null hypotheses (jointly with Walter Racugno and Laura Ventura), mentioning the Pereira and Stern (so special) loss function mentioned in my post on Måns’ paper the very same day! It was thus a very informative and beneficial day for me, furthermore spent in a room overlooking the Canal Grande in the most superb location!

## testing via credible sets

Posted in Statistics, University life with tags Bayesian decision theory, Bayesian tests, credible intervals, HPD region, loss functions, measure theory, p-values, point null hypotheses, polymath, precision, StackExchange, The Bayesian Choice on October 8, 2012 by xi'an

**M**åns Thulin released today an arXiv document on some decision-theoretic justifications for [running] Bayesian hypothesis testing through credible sets. His main point is that using the unnatural prior setting mass on a point-null hypothesis can be avoided by rejecting the null when the point-null value of the parameter does not belong to the credible interval, and that this decision procedure can be validated through the use of special loss functions. While I stress to my students that point-null hypotheses are very unnatural and should be avoided at all cost, and also that constructing a confidence interval is not the same as designing a test—the former assesses the precision in the estimation, while the latter opposes two different and even incompatible models—let us consider Måns’ arguments for their own sake.
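The rejection rule itself is straightforward to write down; here is a minimal sketch for a normal posterior (the normal form, the equal-tailed interval and the 95% level are illustrative assumptions of mine, not the paper's general setting):

```python
def reject_via_credible_set(theta0, post_mean, post_sd):
    """Reject H0 : theta = theta0 when theta0 falls outside the central
    (equal-tailed) 95% credible interval of a N(post_mean, post_sd^2)
    posterior. A sketch of the credible-set testing rule, normal case only."""
    z = 1.959964  # standard normal 97.5% quantile
    lo, hi = post_mean - z * post_sd, post_mean + z * post_sd
    return not (lo <= theta0 <= hi)

# e.g. with a N(0.2, 0.1^2) posterior, the 95% interval is roughly
# (0.004, 0.396), so the point null theta0 = 0 is rejected; with a
# N(0.1, 0.1^2) posterior it is not
```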

**T**he idea of the paper is that there exist loss functions for testing point-null hypotheses that lead to HPD, symmetric and one-sided intervals as acceptance regions, depending on the loss function. This was already found in Pereira & Stern (1999). The issue with these loss functions is that they involve the corresponding credible sets in their definition, hence are somehow tautological. For instance, when considering the HPD set and *T(x)* as the largest HPD set not containing the point-null value of the parameter, the corresponding loss function is

parameterised by *a,b,c*. And depending on the HPD region.

**M**åns then introduces new loss functions that do not depend on *x* and still lead to either the symmetric or the one-sided credible intervals as acceptance regions. However, one test actually has two different alternatives (Theorem 2), which makes it essentially a composition of two one-sided tests, while the other test reduces to a one-sided test (Theorem 3), so even at this face-value level, I do not find the result that convincing. (For the one-sided test, George Casella and Roger Berger (1986) established links between Bayesian posterior probabilities and frequentist p-values.) Both Theorem 3 and the last result of the paper (Theorem 4) use a generic, set-free and observation-free loss function (related to eqn. (5.2.1) in my book!, as quoted by the paper) but (and this is a big *but*) they only hold for prior distributions setting (prior) mass on both the null and the alternative. Otherwise, the solution is to always reject the hypothesis with zero probability… This is actually an interesting argument in the why-are-credible-sets-unsuitable-for-testing debate, as it cannot bypass the introduction of a prior mass on Θ_{0}!
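The dependence on a prior mass for the null can be spelled out with the a₀/a₁ testing loss alluded to above (a sketch under my reading of that loss; the helper names are hypothetical):

```python
def post_prob_null(bf01, w):
    """Posterior probability of H0 given the Bayes factor BF01 and a prior
    mass w on the null; identically zero whenever w = 0, whatever the data."""
    return w * bf01 / (w * bf01 + (1 - w))

def accept_null(bf01, w, a0, a1):
    """a0/a1 decision rule (in the spirit of eqn. (5.2.1) of The Bayesian
    Choice): accept H0 iff P(H0 | x) >= a1 / (a0 + a1)."""
    return post_prob_null(bf01, w) >= a1 / (a0 + a1)

# with w = 0 the null is always rejected, however large the Bayes factor,
# which is exactly the degenerate behaviour deplored above
```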

**O**verall, I furthermore consider that a decision-theoretic approach to testing should encompass future steps rather than focussing on the reply to the (admittedly dumb) question *is θ* zero? Therefore, it must have both plan A and plan B at the ready, which means preparing (and using!) prior distributions under both hypotheses. Even on point-null hypotheses.

**N**ow, after I wrote the above, I came upon a Stack Exchange page initiated by Måns last July. This is presumably not the first time a paper stems from Stack Exchange, but this is a fairly interesting outcome: thanks to the debate on his question, Måns managed to get a coherent manuscript written. Great! (In a sense, this reminded me of the polymath experiments of Terry Tao, Timothy Gowers and others. Meaning that maybe most contributors could have become coauthors to the paper!)

## Bayesian ideas and data analysis

Posted in Books, R, Statistics, Travel, University life with tags Bayes factors, Bayesian ideas, Bayesian inference, Bayesian model choice, Bayesian statistics, book review, harmonic mean, Harold Jeffreys, Kaikoura, measure theory, New Zealand, Pierre Simon de Laplace, point null hypotheses, prior construction on October 31, 2011 by xi'an

**H**ere is [yet!] another Bayesian textbook that appeared recently. I read it in the past few days and, despite my obvious biases and prejudices, I liked it very much! It has a lot in common (at least in spirit) with our *Bayesian Core*, which may explain why I feel so benevolent towards *Bayesian ideas and data analysis*. Just like ours, the book by Ron Christensen, Wes Johnson, Adam Branscum, and Timothy Hanson is indeed focused on explaining the Bayesian ideas through (real) examples and it covers a lot of regression models, all the way to non-parametrics. It contains a good proportion of WinBUGS and R code. It intermingles methodology and computational chapters in the first part, before moving to the serious business of analysing more and more complex regression models. Exercises appear throughout the text rather than at the end of the chapters. As their book is larger (over 500 pages), the authors spend more time analysing various datasets in each chapter and, more importantly, provide a rather unique entry on prior assessment and construction, especially in the regression chapters. The author index is rather original in that it links the authors with more than one entry to the topics they are connected with (Ron Christensen winning the game with the highest number of entries).