contemporary issues in hypothesis testing

This week [at Warwick], among other things, I attended the CRiSM workshop on hypothesis testing, giving the same talk as at ISBA last June. There was a most interesting and unusual talk by Nick Chater (from Warwick) about the psychological aspects of hypothesis testing, namely the unnatural features of an hypothesis in everyday life, i.e., how far this formalism stands from human psychological functioning, or from what we know about it. Then my Warwick colleague Tom Nichols explained how his recent work on permutation tests for fMRI, published in PNAS, which tested hypotheses that should be null on real data and yet found a high rate of false positives, got the medical imaging community all up in arms, due to over-simplified reports in the media questioning the validity of 15 years of research on fMRI and the related 40,000 papers! For instance, some headlines called into question the entire body of research in the area, or turned a software bug missing boundary effects into a major flaw. (See this podcast on Not So Standard Deviations for a thoughtful discussion of the issue.) One conclusion of this story is to be wary of assertions when submitting a hot story to journals with a substantial non-scientific readership! The afternoon talks were equally exciting, with Andrew explaining to us live from New York why he hates hypothesis testing and prefers model building, with the birthday model as an example, and David Draper giving an encompassing talk on the distinction between inference and decision, proposing a Jaynes information criterion and illustrating it on Mendel's historical [and massaged!] pea dataset. The next morning, Jim Berger gave an overview of the frequentist properties of the Bayes factor, including a novel [to me] upper bound on the Bayes factor associated with a p-value (Sellke, Bayarri and Berger, 2001):

B¹⁰(p) ≤ 1/(−e p log p)

with the particularity that B¹⁰(p) does not test the original hypothesis [problem] but a substitute one, where the null is the hypothesis that p is uniformly distributed, versus a non-parametric alternative under which p is more concentrated near zero. This reminded me of our PNAS paper on the impact of summary statistics upon Bayes factors. And of some forgotten reference studying Bayesian inference based solely on the p-value… It is too bad I had to rush back to Paris, as this made me miss the last talks of this fantastic workshop, centred on maybe the most important aspect of statistics!
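To get a feel for the numbers, here is a minimal Python sketch (the function name and sample p-values are mine, purely illustrative) evaluating the upper bound 1/(−e p log p), which only applies for p < 1/e:

```python
import math

def sbb_upper_bound(p):
    """Upper bound on the Bayes factor B10 against the null, as a function
    of the p-value p (Sellke, Bayarri and Berger, 2001).
    Only valid for 0 < p < 1/e."""
    if not 0.0 < p < 1.0 / math.e:
        raise ValueError("the bound 1/(-e p log p) requires 0 < p < 1/e")
    return 1.0 / (-math.e * p * math.log(p))

for p in (0.05, 0.01, 0.001):
    print(f"p = {p:5}   B10 <= {sbb_upper_bound(p):6.2f}")
# p =  0.05   B10 <=   2.46
# p =  0.01   B10 <=   7.99
# p = 0.001   B10 <=  53.26
```

At the conventional p = 0.05, the evidence against the null is thus at most about 2.5 to 1, much weaker than the "significant" label suggests.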

5 Responses to “contemporary issues in hypothesis testing”

  1. Vladimir Vovk Says:

    I have just read the Kamary et al. 2014 paper (arXiv:1412.2044) and agree that embedding the two models to be compared into one bigger model is a natural Bayesian solution. (It’s a pity you can’t give a philosophical reason for using arithmetic mixtures, but I can easily believe that the alternatives are “less practical to manage”).

    • Thank you for the support. I would also prefer using a geometric mixture to preserve the tail behaviour, but this is fraught with many difficulties.
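[As a rough schematic of the mixture embedding discussed in this exchange (an illustrative sketch, not the Kamary et al. code; the Gaussian models and all names below are made up for the example), the two competing models enter a single encompassing model through an arithmetic mixture with weight α, and posterior inference on α then takes the place of the Bayes factor:]

```python
import math

def arithmetic_mixture_loglik(x, alpha, loglik1, loglik2):
    """Log-likelihood of one observation under the encompassing model
    alpha * f1(x) + (1 - alpha) * f2(x), computed stably on the log scale."""
    a = math.log(alpha) + loglik1(x)
    b = math.log(1 - alpha) + loglik2(x)
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

# Two hypothetical Gaussian models differing only in their mean:
norm_logpdf = lambda x, mu: -0.5 * math.log(2 * math.pi) - 0.5 * (x - mu) ** 2
f1 = lambda x: norm_logpdf(x, 0.0)   # model 1: N(0, 1)
f2 = lambda x: norm_logpdf(x, 1.0)   # model 2: N(1, 1)
print(arithmetic_mixture_loglik(0.3, 0.5, f1, f2))
```

A geometric mixture would instead combine the densities as f1^α f2^(1−α) up to a normalising constant, which preserves tail behaviour but requires evaluating that constant, presumably one of the difficulties alluded to in the reply above.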

  2. Vladimir Vovk Says:

    The inequality B¹⁰(p) ≤ 1/(−e p log p) can be obtained by maximizing the expression c p^{c−1} over c ∈ (0,1). But the advantage of the expression c p^{c−1} for a fixed c ∈ (0,1) is that it transforms p-values into Bayes factors, as Shafer et al. argue in their paper “Test Martingales, Bayes Factors and p-Values” (Statistical Science 26, 84-101, 2011, Section 6). Glenn Shafer used “Calibrate p-values by taking the square root” as the title of his talk at the Rutgers Foundations of Probability seminar in September. So one possibility is to replace a p-value p by the Bayes factor 0.5 p^{−0.5}. The advantage of this transformation is that it gives a Bayes factor, not an upper bound (and it’s more cautious).
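[A small numerical illustration of this comment (my own sketch, not part of the original exchange): maximising c p^{c−1} over c indeed recovers the upper bound, while the fixed choice c = 1/2 gives a genuine, more cautious Bayes factor:]

```python
import math

def calibrated_bf(p, c):
    """Shafer et al. (2011) calibration: maps a p-value p to the
    Bayes factor c * p**(c - 1), for a fixed c in (0, 1)."""
    return c * p ** (c - 1)

p = 0.05
c_star = -1.0 / math.log(p)                 # maximiser of c * p**(c-1) over c, here about 0.33
print(calibrated_bf(p, c_star))             # ~2.46, equal to the bound 1/(-e p log p)
print(1.0 / (-math.e * p * math.log(p)))    # ~2.46
print(calibrated_bf(p, 0.5))                # square-root calibration: 0.5 * p**(-0.5) ~ 2.24
```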
