## Archive for empirical Bayes methods

## bootstrap in Nature

Posted in Statistics with tags ASA, bootstrap, Brad Efron, empirical Bayes methods, IMS, International Prize in Statistics, ISI, Nature, Nobel Prize, RSS, Stanford on December 29, 2018 by xi'an

**A** news item in the latest issue of Nature I received covers Brad Efron winning the “Nobel Prize of Statistics” this year. The bootstrap is certainly an invention worth the recognition, not to mention Efron’s contribution to empirical Bayes analysis, even though I remain overall reserved about the very notion of a Nobel prize in any field… The item comes with an appropriate XXL quote calling the bootstrap method the ‘best statistical pain reliever ever produced’!

## a Bayesian interpretation of FDRs?

Posted in Statistics with tags baseball data, empirical Bayes methods, false discovery rate, FDRs, ferry harbour, FNR, hypothesis testing, multiple tests, Seattle, shrinkage estimation, Washington State on April 12, 2018 by xi'an

**T**his week, I happened to re-read John Storey’s 2003 paper “The positive false discovery rate: a Bayesian interpretation and the q-value”, because I wanted to check a connection with our testing-by-mixture [still in limbo] paper. I however failed to find what I was looking for, because I could not find any Bayesian flavour in the paper apart from an FDR expressed as a “posterior probability” of the null, in the sense that the setting was one of opposing two simple hypotheses. When there is an unknown parameter common to the multiple hypotheses being tested, a prior distribution on the parameter connects these multiple hypotheses. What makes the connection puzzling is the assumption that the observed statistics defining the significance region are *independent* (Theorem 1). And it seems to depend on the choice of the significance region, which should be induced by the Bayesian modelling, not the opposite. (This alternative explanation does not help either, maybe because it is about baseball… Or maybe because the sentence “If a player’s [posterior mean] is above .3, it’s more likely than not that their true average is as well” does not seem to appear naturally from a Bayesian formulation.) *[Disclaimer: I am not hinting at anything wrong or objectionable in Storey’s paper, just being puzzled by the Bayesian tag!]*
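Storey’s q-value (the minimal estimated positive FDR over all rejection regions containing a given p-value) can be sketched in a few lines. The following is a minimal, illustrative implementation of my own, not Storey’s code: it uses a single tuning value λ for the null-proportion estimate π̂₀, where Storey’s actual estimator is more refined.

```python
import numpy as np

def q_values(p, lam=0.5):
    """Storey-style q-values: for each p-value, the minimal estimated
    positive FDR over all rejection regions containing it."""
    p = np.asarray(p, dtype=float)
    m = len(p)
    # crude estimate of the null proportion pi_0 from p-values above lambda
    pi0 = min(1.0, (p > lam).sum() / ((1.0 - lam) * m))
    order = np.argsort(p)
    p_sorted = p[order]
    # estimated pFDR when rejecting the k smallest p-values
    q = pi0 * m * p_sorted / np.arange(1, m + 1)
    # enforce monotonicity in p by taking running minima from the right
    q = np.minimum.accumulate(q[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.minimum(q, 1.0)
    return out

print(q_values([0.01, 0.02, 0.03, 0.8, 0.9]))  # three small q-values, two large
```

The monotonisation step is what turns the pointwise pFDR estimates into proper q-values, i.e., a non-decreasing function of the p-values.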

## double yolk priors [a reply from the authors]

Posted in Books, Statistics, University life with tags Brad Efron, David Spiegelhalter, egg double yolks, empirical Bayes methods, frequentist inference, reply on March 14, 2018 by xi'an

*[Here is an email I received from Subhadeep Mukhopadhyay, one of the authors of the paper I discussed yesterday.]*

*There are two Legendre polynomial systems: Leg_j(u) and T_j(θ) = Leg_j(G(θ)). One is an orthonormal polynomial basis of L₂[0,1] and the other one is a polynomial of the rank-transform G(θ), orthonormal in L₂[G]. The ratio f(θ)/g(θ), evaluated at θ = G⁻¹(u), is the d(u;G,F) over the unit interval. Now, this new transformed function is a proper density. And d(G(θ)) can be expanded (NOT in the Leg-basis but in the T_j basis), in eq (2.2), as it lives in the Hilbert space L₂(G) of g.*

## double yolk priors

Posted in Statistics with tags Brad Efron, David Spiegelhalter, egg double yolks, empirical Bayes methods, frequentist inference on March 13, 2018 by xi'an

“To develop a “defendable and defensible” Bayesian learning model, we have to go beyond blindly ‘turning the crank’ based on a “go-as-you-like” [approximate guess] prior. A lackluster attitude towards prior modeling could lead to disastrous inference, impacting various fields from clinical drug development to presidential election forecasts. The real questions are: How can we uncover the blind spots of the conventional wisdom-based prior? How can we develop the science of prior model-building that combines both data and science [DS-prior] in a testable manner – a double-yolk Bayesian egg?”

**I** came across, via R bloggers, this presentation of a paper by Subhadeep Mukhopadhyay and Douglas Fletcher, Bayesian modelling via goodness of fit, that aims at solving all existing problems with classical Bayesian solutions, apparently! (With also apparently no awareness of David Spiegelhalter’s take on the matter.) As illustrated by both quotes, above and below:

“The two key issues of modern Bayesian statistics are: (i) establishing principled approach for distilling statistical prior that is consistent with the given data from an initial believable scientific prior; and (ii) development of a Bayes-frequentist consolidated data analysis workflow that is more effective than either of the two separately.”

(I wonder who else in this Universe would characterise “modern Bayesian statistics” in such a non-Bayesian way! And love the notion of distillation applied to priors!) The setup is actually one of empirical Bayes inference where repeated values of the parameter θ drawn from the prior are behind independent observations. Which is not the usual framework for a statistical analysis, where a single value of the parameter is supposed to hide behind the data, but most convenient for frequency based arguments behind empirical Bayes methods (which is the case here). The paper adopts a far-from-modern discourse on the “truth” of “the” prior… (Which is always conjugate in that Universe!) Instead of recognising the relativity of a statistical analysis based on a given prior.
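The setup described above — many θᵢ drawn from a common prior, each hiding behind one independent observation — is the textbook empirical Bayes frame, and can be illustrated with the standard normal-normal model. A hedged sketch, where the model, the hyperparameter values, and all names are mine, not the paper’s:

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative generative setup: theta_i ~ N(mu, tau^2), x_i | theta_i ~ N(theta_i, 1)
n, mu_true, tau_true = 5000, 2.0, 1.0
theta = rng.normal(mu_true, tau_true, n)
x = rng.normal(theta, 1.0)

# empirical Bayes step: estimate the prior's hyperparameters from the
# marginal distribution x_i ~ N(mu, 1 + tau^2)
mu_hat = x.mean()
tau2_hat = max(x.var() - 1.0, 0.0)

# plug-in posterior means shrink each observation toward the estimated prior mean
shrink = tau2_hat / (1.0 + tau2_hat)
theta_eb = mu_hat + shrink * (x - mu_hat)

mse_raw = np.mean((x - theta) ** 2)
mse_eb = np.mean((theta_eb - theta) ** 2)
print(mse_eb < mse_raw)  # True: shrinkage dominates the raw estimates here
```

This is exactly the frequency-based argument empirical Bayes rests on: the hyperparameters are identifiable from the marginal of the xᵢ only because many draws from the same prior are observed.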

When I tried to read the paper any further, I hit a wall as I could not understand the principle described therein. And how it “consolidates Bayes and frequentist, parametric and nonparametric, subjective and objective, quantile and information-theoretic philosophies”. Presumably the lack of oxygen at the altitude of Chamonix… Given an “initial guess” at the prior, g, a conjugate prior (in dimension one with an invertible cdf), a family of priors is created in what first looks like a form of non-parametric exponential tilting of g. But a closer look [at (2.1)] exposes the “family” as the tautological π(θ) = g(θ) × π(θ)/g(θ). The ratio is expanded into a Legendre polynomial series, whose use in Bayesian statistics dates a wee bit further back than indicated in the paper (see, e.g., Friedman, 1985; Diaconis, 1986). With the side issue that the resulting approximation does not integrate to one. Another side issue is that the coefficients of the truncated Legendre series are approximated by simulations from the prior [Step 3 of the Type II algorithm], rarely an efficient approach to the posterior.
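The tautology and the Legendre expansion can be made concrete with a toy numerical sketch, entirely of my own making and not the paper’s algorithm: expand the ratio d(u) = π(G⁻¹(u))/g(G⁻¹(u)) in an orthonormal Legendre basis on [0,1], with coefficients estimated by Monte Carlo from draws of θ, then form the “corrected” prior g(θ)·d̂(G(θ)).

```python
import numpy as np
from scipy.stats import norm
from scipy.special import eval_legendre

rng = np.random.default_rng(42)

def leg(j, u):
    """Orthonormal shifted Legendre polynomial on [0,1]."""
    return np.sqrt(2 * j + 1) * eval_legendre(j, 2 * u - 1)

# conjugate "guess" prior g ~ N(0,1); hypothetical "true" prior pi ~ N(0.5,1)
g_cdf, g_pdf = norm(0, 1).cdf, norm(0, 1).pdf
theta = rng.normal(0.5, 1.0, size=200_000)   # draws from the true prior

m = 4                                        # truncation order of the series
u = g_cdf(theta)                             # rank transform G(theta)
c = np.array([leg(j, u).mean() for j in range(m + 1)])  # c_j = E[Leg_j(G(theta))]

def d_hat(u):
    """Truncated Legendre series approximating the ratio pi/g on [0,1]."""
    return sum(c[j] * leg(j, u) for j in range(m + 1))

def pi_hat(theta):
    """Corrected prior g(theta) * d_hat(G(theta))."""
    return g_pdf(theta) * d_hat(g_cdf(theta))

print(round(c[0], 3))                        # c_0 equals 1 by construction
```

The truncation is visible here: nothing forces the truncated d̂ to stay non-negative, which is one reason the resulting approximation need not be a proper density.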

## Approximate Bayesian model choice

Posted in Books, R, Statistics, Travel, University life with tags ABC, ABC model selection, 真夏方程式, Detective Galileo, empirical Bayes methods, integrated likelihood, Jeffreys-Lindley paradox, model posterior probabilities, Montréal, vague priors on March 17, 2014 by xi'an

The above is the running head of the arXived paper with full title “Implications of uniformly distributed, empirically informed priors for phylogeographical model selection: A reply to Hickerson et al.” by Oaks, Linkem and Sukumaran, which I (again) read in the plane to Montréal (third one in this series! and the last, because I also watched the Japanese psycho-thriller Midsummer’s Equation, featuring a physicist turned detective in one of many TV episodes. I just found some common features with The Devotion of Suspect X, only to discover now that the book has been turned into another episode in the series.)

“Here we demonstrate that the approach of Hickerson et al. (2014) is dangerous in the sense that the empirically-derived priors often exclude from consideration the true values of the models’ parameters. On a more fundamental level, we question the value of adopting an empirical Bayesian stance for this model-choice problem, because it can mislead model posterior probabilities, which are inherently measures of belief in the models after prior knowledge is updated by the data.”

**T**his paper actually is a reply to Hickerson et al. (2014, *Evolution*), which is itself a reply to an earlier paper by Oaks et al. (2013, *Evolution*). *[Warning: I did not check those earlier references!]* The authors object to the use of “narrow, empirically informed uniform priors” for the reason reproduced in the above quote, in connection with the msBayes software of Huang et al. (2011, *BMC Bioinformatics*). The discussion is less about ABC used for model choice and posterior probabilities of models and more about the impact of vague priors, Oaks et al. (2013) arguing that this leads to a bias towards models with fewer parameters, a “statistical issue” in their words, while Hickerson et al. (2014) think this is due to msBayes’ way of selecting models and their parameters at random.

“…it is difficult to choose a uniformly distributed prior on divergence times that is broad enough to confidently contain the true values of parameters while being narrow enough to avoid spurious support of models with less parameter space.”

**S**o quite an interesting debate that takes us in fine far away from the usual worries about ABC model choice! We are more at the level of empirical versus natural Bayes, as seen in the literature of the 1980’s. (The meaning of empirical Bayes is not that clear in the early pages, as the authors seem to involve any method using the data “twice”.) I actually do not remember reading papers about the formal properties of model choice done through classical empirical Bayes techniques, except for the special case of Aitkin’s (1991, 2009) integrated likelihood, which is essentially the analysis performed on the coin toy example (p.7).

“…models with more divergence parameters will be forced to integrate over much greater parameter space, all with equal prior density, and much of it with low likelihood.”

**T**he above argument is an interesting rephrasing of Lindley’s paradox, which I cannot dispute, but of course it does not solve the fundamental issue of how to choose the prior away from vague uniform priors… I also like the quote “the estimated posterior probability of a model is a single value (rather than a distribution) lacking a measure of posterior uncertainty” as this is an issue on which we are currently working. I fully agree with the statement and we think an alternative assessment to posterior probabilities could be more appropriate for model selection in ABC settings (paper soon to come, hopefully!).
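The “greater parameter space” effect can be put in numbers with a toy illustration of my own (not from the paper): for a single observation x ~ N(θ,1), compare the point null θ = 0 against an alternative carrying a uniform prior on (−w/2, w/2), and watch the Bayes factor swing toward the null as the prior widens, the Jeffreys-Lindley phenomenon in miniature.

```python
from scipy.stats import norm

x = 1.5  # a moderately "significant" observation

def marginal_alt(w):
    """Marginal likelihood of x under theta ~ Uniform(-w/2, w/2), x | theta ~ N(theta, 1)."""
    return (norm.cdf(w / 2 - x) - norm.cdf(-w / 2 - x)) / w

for w in (5.0, 50.0):
    bf_null = norm.pdf(x) / marginal_alt(w)  # Bayes factor of the null vs the alternative
    print(f"prior width {w:>4}: BF(null/alt) = {bf_null:.2f}")
```

Widening the uniform prior by a factor of ten deflates the marginal likelihood of the alternative by nearly the same factor, flipping the Bayes factor from mildly favouring the alternative to clearly favouring the null for the very same data.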

## séminaire à Laval, Québec

Posted in pictures, Statistics, Travel, University life with tags 14w5125, ABC, Banff, BIRS, Canada, empirical Bayes methods, ice climbing, Québec, seminar, simulation, Université Laval on February 24, 2014 by xi'an

**O**n Friday, I am giving a talk on ABC at Université Laval, in the old city of Québec, while on my way to the 14w5125 workshop on scalable Bayesian computation at BIRS, Banff. I have not visited Laval since the late 1980’s (!) even though my last trip to Québec (the city) was in 2009, when François Perron took me there for ice-climbing and skiing after a seminar in Montréal… (This trip, I will not stay long enough in Québec, alas. Keeping my free day-off for another attempt at ice-climbing near Banff.) Here are slides I have used often in the past year, but this may be the last occurrence as we are completing a paper on the topic with my friends from Montpellier.

## my week at War[wick]

Posted in pictures, Running, Statistics, Travel, Uncategorized with tags ABC, AMIS, Bayesian asymptotics, COLT2014, empirical Bayes methods, empirical likelihood, MASDOC, University of Warwick, Warwickshire, Zeeman building on February 1, 2014 by xi'an

**T**his was a most busy and profitable week in Warwick as, in addition to meeting with local researchers and students on a wide range of questions and projects, giving an extended seminar to MASDOC students, attending as many seminars as humanly possible (!), and preparing a 5k race by running in the Warwickshire countryside (in the dark and in the rain), I received the visits of Kerrie Mengersen, Judith Rousseau and Jean-Michel Marin, with whom I made some progress on papers we are writing together. In particular, Jean-Michel and I wrote the skeleton of a paper we (still) plan to submit to COLT 2014 next week. And Judith, Kerrie and I drafted new if paradoxical connections between empirical likelihood and model selection. Jean-Michel and Judith also gave talks at the CRiSM seminar, Jean-Michel presenting the latest developments on the convergence of our AMIS algorithm, Judith summarising several papers on the analysis of empirical Bayes methods in non-parametric settings.