## Archive for False positive

## Bayesian basics in Le Monde

Posted in Statistics with tags Bayes theorem, causality, COVID-19, False positive, Le Monde, medical statistics, pandemic on September 12, 2020 by xi'an

## exoplanets at 99.999…%

Posted in Books, pictures, Statistics, University life with tags astrostatistics, book reviews, confidence intervals, False positive, Monte Carlo technique, Significance on January 22, 2016 by xi'an

**T**he latest Significance has a short article providing some coverage of the growing trend in the discovery of exoplanets, including new techniques used to detect those exoplanets from their impact on the associated stars. This [presumably] comes from the recent book *Cosmos: The Infographics Book of Space* *[a side comment: new books seem to provide material for many articles in Significance these days!]* and the above graph is also from the *book*, not the ultimate infographic representation in my opinion given that a simple superposition of lines could do as well. Or better.

“A common approach to ruling out these sorts of false positives involves running sophisticated numerical algorithms, called Monte Carlo simulations, to explore a wide range of blend scenarios (…) A new planet discovery needs to have a confidence of (…) a one in a million chance that the result is in error.”

The above sentence is obviously of interest, first because the detection of false positives by Monte Carlo hints at a rough version of ABC to assess the likelihood of the observed phenomenon under the null [no detail provided] and second because the probability statement in the end is quite unclear as to its foundations… Reminding me of the Higgs boson controversy. The very last sentence of the article is however brilliant, albeit maybe unintentionally so:
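As a concrete (and entirely toy) illustration of this kind of Monte Carlo assessment under the null, the sketch below estimates how often pure noise alone produces a “detection” above a given threshold; the function name and the single-dip detection statistic are my own inventions, not anything from the Significance article or the underlying astronomy pipelines.

```python
import random

def false_positive_rate(n_sims=5000, n_points=200, threshold=4.0, seed=1):
    """Estimate how often a planet-free signal crosses a detection
    threshold: simulate light curves that are pure Gaussian noise
    (no transit, no blended binary) and count exceedances."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        # one toy light curve of standardised noise
        noise = [rng.gauss(0.0, 1.0) for _ in range(n_points)]
        # toy detection statistic: deepest single-point "dip" in noise units
        stat = -min(noise)
        if stat > threshold:
            hits += 1
    return hits / n_sims
```

Note that pinning down a one-in-a-million error claim this way would require many millions of null simulations (or some variance-reduction trick), which hints at how heavy such blend-scenario computations must be.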

“To date, 1900 confirmed discoveries have been made. We have certainly come a long way from 1989.”

Yes, 89 down, strictly speaking!

## Statistical evidence for revised standards

Posted in Statistics, University life with tags Bayes factors, Bayesian tests, evidence, False positive, minima, PNAS, statistical significance, UMPBTs, uniformly most powerful tests, Valen Johnson on December 30, 2013 by xi'an

**I**n yet another permutation of the original title (!), Andrew Gelman posted the answer Val Johnson sent him after our (submitted) letter to PNAS. As Val did not send me a copy (although Andrew did!), I will not reproduce it here and I rather refer the interested readers to Andrew's blog… In addition to Andrew's (sensible) points, here are a few idle (post-X'mas and pre-skiing) reflections:

- *“evidence against a false null hypothesis accrues exponentially fast”*: that makes me wonder in which metric this exponential rate (in γ?) occurs;
- *“most decision-theoretic analyses of the optimal threshold to use for declaring a significant finding would lead to evidence thresholds that are substantially greater than 5 (and probably also greater than 25)”* is difficult to accept as an argument since there is no trace of a decision-theoretic argument in the whole paper;
- Val rejects our minimaxity argument on the basis that *“[UMPBTs] do not involve minimization of maximum loss”*, but the prior that corresponds to those tests is minimising the integrated probability of not rejecting at threshold level γ, a loss function integrated against parameter and observation, a Bayes risk in other words… Point masses or spike priors are clearly characteristics of minimax priors. Furthermore, the additional argument that *“in most applications, however, a unique loss function/prior distribution combination does not exist”* has been used by many to refute the Bayesian perspective and makes me wonder what arguments are left in favour of a (pseudo-)Bayesian approach;
- the next paragraph is pure tautology: the fact that *“no other test, based on either a subjectively or objectively specified alternative hypothesis, is as likely to produce a Bayes factor that exceeds the specified evidence threshold”* is a paraphrase of the definition of UMPBTs, not an argument. I do not see why we should solely *“worry about false negatives”*, since minimising those should lead to a point mass on the null (or, more seriously, should not lead to the minimax-like selection of the prior under the alternative).
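Since the UMPBT definition keeps recurring in this exchange, here is a small numerical check of the one-sided normal-mean case (my own sketch, with made-up function names): the Bayes factor of the point alternative μ = δ against μ = 0 exceeds γ on a half-line of sample means, and the δ that pushes that rejection boundary as low as possible — hence makes exceeding γ most likely under every alternative — is δ* = σ√(2 log γ / n).

```python
import math

def rejection_boundary(delta, gamma, n=1, sigma=1.0):
    """Smallest sample mean at which the Bayes factor of mu = delta
    (vs. mu = 0) exceeds gamma, solving
    exp{(n/sigma^2)(delta*xbar - delta^2/2)} > gamma for xbar."""
    return sigma**2 * math.log(gamma) / (n * delta) + delta / 2.0

def umpbt_delta(gamma, n=1, sigma=1.0):
    """Closed-form minimiser of the boundary: the UMPBT alternative."""
    return sigma * math.sqrt(2.0 * math.log(gamma) / n)

gamma = 25.0
# crude grid search over delta confirms the closed form
grid = [d / 1000.0 for d in range(1, 8000)]
best = min(grid, key=lambda d: rejection_boundary(d, gamma))
closed = umpbt_delta(gamma)
```

At the optimum the boundary equals δ* itself, so (with n = σ = 1) the rejection region is z > √(2 log γ): for γ = 25 this is z > 2.54, roughly a one-sided 0.005 level, which is the kind of correspondence between evidence thresholds and significance levels the paper trades on.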

## Revised evidence for statistical standards

Posted in Kids, Statistics, University life with tags Andrew Gelman, Bayesian tests, False positive, letter, non-reproducible research, PNAS, Ronald Fisher, uniformly most powerful tests, Valen Johnson on December 19, 2013 by xi'an

**W**e just submitted a letter to PNAS with Andrew Gelman last week, in reaction to Val Johnson's recent paper “Revised standards for statistical evidence”, essentially summing up our earlier comments within 500 words. Actually, we wrote one draft each! In particular, Andrew came up with the (neat) rhetorical idea of alternative Ronald Fishers living in parallel universes who had each set a different significance reference level and for whom alternative Val Johnsons would rise and propose a modification of the corresponding Fisher's level. For which I made the above graph, left out of the letter and its 500 words. It relates “the old z” and “the new z”, meaning the boundaries of the rejection zones when, for each golden dot, the “old z” is the previous “new z” and “the new z” is Johnson's transform. We even figured out that Val's transform was bringing the significance down by a factor of 10 in a large range of values. As an aside, we also wondered why most of the supplementary material was spent on deriving UMPBTs for specific (formal) problems when the goal of the paper sounded much more global…

**A**s I am aware we are not the only ones to have submitted a letter about Johnson's proposal, I am quite curious at the reception we will get from the editor! (Although I have to point out that all of my earlier submissions of letters to PNAS got accepted.)

## Valen in Le Monde

Posted in Books, Statistics, University life with tags blogging, comments, False positive, Le Monde, Monsanto, p-values, Passeur de Sciences, statistical significance, UMPB test, uniformly most powerful tests, Valen Johnson on November 21, 2013 by xi'an

Valen Johnson made the headline in Le Monde, last week. (More precisely, in the scientific blog Passeur de Sciences. Thanks, Julien, for the pointer!) With the alarming title of “Une étude ébranle un pan de la méthode scientifique” (a study questions one major tool of the scientific approach). The reason for this French fame is Valen's recent paper in PNAS, Revised standards for statistical evidence, where he puts forward his uniformly most powerful Bayesian tests (recently discussed on the 'Og) to argue against the standard 0.05 significance level and in favour of “the 0.005 or 0.001 level of significance.” While I do plan to discuss the PNAS paper later (and possibly write a comment letter to PNAS with Andrew), I find interesting the way it made the headlines within days of its (early edition) publication: the argument suggesting to replace .05 with .001 to increase the proportion of reproducible studies is both simple and convincing for a scientific journalist. If only the issue with p-values and statistical testing could be that simple… For instance, the above quote from Valen is reproduced as “an [alternative] hypothesis that stands right below the significance level has in truth only 3 to 5 chances to 1 to be true”, the “truth” popping out of nowhere. (If you read French, the 300+ comments on the blog are also worth their weight in jellybeans…)
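The journalist-friendly version of the argument can at least be checked in a toy “two-groups” simulation (my own sketch, not from the paper or the blog post): when most tested hypotheses are true nulls, dropping the significance level from .05 to .005 sharply cuts the fraction of declared discoveries that are false.

```python
import random
from statistics import NormalDist

def false_discovery_proportion(alpha, prior_null=0.9, effect=2.8,
                               n_tests=100000, seed=7):
    """Among one-sided z-tests declared significant at level alpha,
    return the fraction that come from true nulls. Toy model: a test
    is a true null with probability prior_null; otherwise its
    z-statistic is shifted by `effect`."""
    rng = random.Random(seed)
    z_alpha = NormalDist().inv_cdf(1.0 - alpha)
    discoveries = false_disc = 0
    for _ in range(n_tests):
        null = rng.random() < prior_null
        z = rng.gauss(0.0 if null else effect, 1.0)
        if z > z_alpha:
            discoveries += 1
            false_disc += null
    return false_disc / discoveries
```

With these (arbitrary) settings, the false discovery proportion falls from roughly a third at the .05 level to under a tenth at .005, which is the reproducibility argument in its simplest possible form; of course the figures depend entirely on the assumed proportion of true nulls and the effect size.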
