In yet another permutation of the original title (!), Andrew Gelman posted the answer Val Johnson sent him after our (submitted) letter to PNAS. As Val did not send me a copy (although Andrew did!), I will not reproduce it here and rather refer interested readers to Andrew's blog… In addition to Andrew's (sensible) points, here are a few idle (post-X'mas and pre-skiing) reflections:

- *“evidence against a false null hypothesis accrues exponentially fast”* - that makes me wonder in which metric this exponential rate (in γ?) occurs;
- *“most decision-theoretic analyses of the optimal threshold to use for declaring a significant finding would lead to evidence thresholds that are substantially greater than 5 (and probably also greater than 25)”* - this is difficult to accept as an argument since there is no trace of a decision-theoretic argument in the whole paper;
- *“[UMPBTs] do not involve minimization of maximum loss”* - Val rejects our minimaxity argument on this basis, but the prior that corresponds to those tests is minimising the integrated probability of not rejecting at threshold level γ, a loss function integrated against parameter and observation, a Bayes risk in other words… Point masses or spike priors are clearly characteristics of minimax priors;
- *“in most applications, however, a unique loss function/prior distribution combination does not exist”* - this additional argument has been used by many to refute the Bayesian perspective and makes me wonder what arguments are left in using a (pseudo-)Bayesian approach;
- *“no other test, based on either a subjectively or objectively specified alternative hypothesis, is as likely to produce a Bayes factor that exceeds the specified evidence threshold”* - the next paragraph is pure tautology: this statement is a paraphrase of the definition of UMPBTs, not an argument;
- *“worry about false negatives”* - I do not see why we should solely worry about false negatives, since minimising those should lead to a point mass on the null (or, more seriously, should not lead to the minimax-like selection of the prior under the alternative).
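As a side note on the minimax-like flavour of the construction: in the one-sided normal case with known variance, the UMPBT point-mass alternative is the one that minimises the sample-mean threshold at which the Bayes factor clears γ. A minimal numerical sketch (function names are mine, for illustration only):

```python
import math

def umpbt_alternative(n, sigma, gamma):
    """Point-mass alternative that minimises the rejection threshold.

    Testing mu = 0 against mu = mu1 with known sigma, the Bayes factor
    based on the sample mean xbar is
        exp(n*mu1*xbar/sigma**2 - n*mu1**2/(2*sigma**2)),
    so BF > gamma iff xbar > sigma**2*log(gamma)/(n*mu1) + mu1/2.
    Minimising that bound over mu1 yields sigma*sqrt(2*log(gamma)/n).
    """
    return sigma * math.sqrt(2.0 * math.log(gamma) / n)

def rejection_threshold(n, sigma, gamma, mu1):
    """Smallest xbar for which the Bayes factor against mu = 0 exceeds gamma."""
    return sigma ** 2 * math.log(gamma) / (n * mu1) + mu1 / 2.0

n, sigma, gamma = 100, 1.0, 25.0
mu_star = umpbt_alternative(n, sigma, gamma)
t_star = rejection_threshold(n, sigma, gamma, mu_star)
# Any other point-mass alternative needs a larger xbar to clear gamma:
for mu1 in (0.5 * mu_star, 2.0 * mu_star):
    assert rejection_threshold(n, sigma, gamma, mu1) > t_star
print(f"UMPBT alternative mu* = {mu_star:.4f}, threshold on xbar = {t_star:.4f}")
```

Note that at the optimum the threshold on the sample mean coincides with the alternative itself, which is one way to see why the construction singles out a spike prior rather than a genuine subjective alternative.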

## Valen in Le Monde

Posted in Books, Statistics, University life with tags blogging, comments, False positive, Le Monde, Monsanto, p-values, Passeur de Sciences, statistical significance, UMPB test, uniformly most powerful tests, Valen Johnson on November 21, 2013 by xi'an

Valen Johnson made the headline in *Le Monde* last week. (More precisely, on the scientific blog *Passeur de Sciences*. Thanks, Julien, for the pointer!) With the alarming title of “Une étude ébranle un pan de la méthode scientifique” (A study questions one major tool of the scientific approach). The reason for this French fame is Valen's recent paper in PNAS, *Revised standards for statistical evidence*, where he puts forward his uniformly most powerful Bayesian tests (recently discussed on the 'Og) to argue against the standard 0.05 significance level and in favour of “the 0.005 or 0.001 level of significance.” While I do plan to discuss the PNAS paper later (and possibly write a comment letter to PNAS with Andrew), I find interesting the way it made the headlines within days of its (early edition) publication: the argument suggesting to replace .05 with .001 to increase the proportion of reproducible studies is both simple and convincing for a scientific journalist. If only the issue with p-values and statistical testing could be that simple… For instance, the above quote from Valen is reproduced as “an [alternative] hypothesis that stands right below the significance level has in truth only 3 to 5 chances to 1 to be true”, the “truth” popping out of nowhere. (If you read French, the 300+ comments on the blog are also worth their weight in jellybeans…)
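The arithmetic behind “replacing .05 with .001 increases the proportion of reproducible studies” is easy to sketch with Bayes' rule. The numbers below (half of the tested nulls false, 80% power) are purely illustrative and not taken from the PNAS paper:

```python
def prop_true_findings(alpha, power, prior_alt):
    """P(null is false | test rejects), by Bayes' rule:
    power*pi / (power*pi + alpha*(1 - pi)), with pi the prior
    proportion of false nulls among tested hypotheses."""
    return power * prior_alt / (power * prior_alt + alpha * (1.0 - prior_alt))

# Purely illustrative numbers: half of the tested nulls are false,
# and every test has 80% power.
for alpha in (0.05, 0.005, 0.001):
    print(f"alpha = {alpha}: {prop_true_findings(alpha, 0.8, 0.5):.3f}")
```

Lowering the significance level mechanically raises the fraction of significant results whose null really is false, which is the simple and convincing part; the contentious part is everything this computation sweeps under the rug (the prior proportion, the fixed power, the point-null setting).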
