Archive for classics

Jeffreys priors for hypothesis testing [Bayesian reads #2]

Posted in Books, Statistics, University life on February 9, 2019 by xi'an

A second (re)visit to a reference paper I gave to my OxWaSP students for the last round of this CDT joint program. Indeed, this may be my first complete read of Susie Bayarri and Gonzalo Garcia-Donato's 2008 Series B paper, inspired by Jeffreys' and by Zellner and Siow's proposals in the Normal case, a paper I saw presented as a talk at the O'Bayes 2009 meeting in Phillie. (Disclaimer: I was not the JRSS B editor for this paper.)

The paper aims at constructing formal rules for objective proper priors when testing embedded hypotheses, in the spirit of the “hidden gem” that is Chapter 3 of Jeffreys' Theory of Probability. The proposal is based on a symmetrised version of the Kullback-Leibler divergence κ between null and alternative, used through a transform like an inverse power of 1+κ, with the power taken large enough to make the prior proper, and eventually multiplied by a reference measure (i.e., the arbitrary choice of a dominating measure). The construction can be generalised to any intrinsic loss (not to be confused with an intrinsic prior à la Berger and Pericchi!) and turns out to be approximately Cauchy or Student's t by a Taylor expansion. To be compared with Jeffreys' original prior, equal to the derivative of the arctangent transform of the root divergence (!). There remains a delicate calibration by an effective sample size, which lacks a general definition.
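To make this concrete, here is a minimal sketch in the simplest Normal setting, with my own notation and shortcuts rather than the paper's exact formulation. Starting from the symmetrised divergence

\kappa(\theta,\theta_0)=\text{KL}(f_\theta\,\|\,f_{\theta_0})+\text{KL}(f_{\theta_0}\,\|\,f_\theta)

the prior under the alternative is taken as an inverse power of 1+κ,

\pi(\theta)\propto[1+\kappa(\theta,\theta_0)]^{-q}

with q large enough for properness. For X ~ N(θ,σ²) with σ known and the point null θ=θ₀, the divergence is κ(θ,θ₀)=(θ−θ₀)²/σ², so the choice q=1 returns a Cauchy prior centred at θ₀ with scale σ, in line with the approximately Cauchy or Student's t behaviour mentioned above.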

At the start the authors rightly insist on letting the nuisance parameter ν differ between models but, as we all often do, they relapse into having the “same ν” in both models for integrability reasons. Nuisance parameters make the definition of the divergence prior somewhat harder, or somewhat arbitrary. Indeed, as in reference prior settings, the authors work first conditional on the nuisance and then use a prior on ν that may be improper, by the “same ν” argument. (Although conditioning is not the proper term if the marginal prior on ν is improper.)
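In symbols, and still as my reading rather than the paper's exact notation, the prior under the alternative then factorises as

\pi(\theta,\nu)=\pi_D(\theta\mid\nu)\,\pi^N(\nu)

with π_D the proper divergence-based conditional prior and π^N a possibly improper reference prior on ν. Since the same π^N appears in both models, its arbitrary multiplicative constant cancels from the Bayes factor, which is what makes the impropriety tolerable.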

The paper also contains an interesting case of the translated Exponential, where the prior is a Student's t with 2 degrees of freedom. And another on mixture models, albeit in the simple case of a location parameter on one component only.

three ½ [out of 159] versions of Johnny B. Goode

Posted in Statistics on April 2, 2017 by xi'an

reading classics (The End)

Posted in Books, Kids, Statistics, University life on February 24, 2015 by xi'an

[La Défense from Paris-Dauphine, Nov. 15, 2012]

Today was the final session of our Reading Classics Seminar for the academic year 2014-2015. I have not reported much on this seminar so far because it had starting problems, namely hardly any student present in the first classes, and therefore several re-starts until we reached a small group of interested students. And this is truly The End for this enjoyable experiment, as this is the final year of my TSI Master at Paris-Dauphine, which will be integrated within the new MASH Master next year.

As a last presentation for the entire series, my student picked John Skilling’s Nested Sampling, not that it was in my list of “classics”, but he had worked on the paper in a summer project and was thus reasonably fluent with the topic. As he did a good enough job (!), here are his slides.
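For readers who never met the algorithm, here is a minimal sketch of Skilling's scheme on a toy one-dimensional target, assuming a flat prior and relying on naive rejection for the constrained simulation step, which is precisely what one cannot afford in realistic settings:

import numpy as np

rng = np.random.default_rng(0)

# toy target: standard Normal likelihood, flat prior on [-5, 5],
# hence an evidence of Z ≈ 1/10, i.e. log Z ≈ -2.30
def log_likelihood(theta):
    return -0.5 * theta ** 2 - 0.5 * np.log(2 * np.pi)

def prior_draw():
    return rng.uniform(-5.0, 5.0)

def nested_sampling(n_live=100, n_iter=600):
    live = np.array([prior_draw() for _ in range(n_live)])
    live_logL = log_likelihood(live)
    logZ = -np.inf        # running log-evidence
    log_X_prev = 0.0      # log prior volume, starting from X_0 = 1
    for i in range(1, n_iter + 1):
        worst = np.argmin(live_logL)
        logL_star = live_logL[worst]
        log_X = -i / n_live   # deterministic shrinkage X_i = exp(-i/n_live)
        # weight of the discarded point, w_i = X_{i-1} - X_i
        log_w = log_X_prev + np.log1p(-np.exp(log_X - log_X_prev))
        logZ = np.logaddexp(logZ, logL_star + log_w)
        log_X_prev = log_X
        # replace the worst live point by a prior draw constrained to L > L*,
        # by brute-force rejection: fine for a toy, hopeless in general
        while True:
            cand = prior_draw()
            if log_likelihood(cand) > logL_star:
                live[worst] = cand
                live_logL[worst] = log_likelihood(cand)
                break
    # final contribution of the surviving live points: X_final times their average L
    logZ = np.logaddexp(logZ, log_X_prev + np.logaddexp.reduce(live_logL) - np.log(n_live))
    return logZ

print(nested_sampling(), "to compare with", np.log(0.1))

Replacing the deterministic exp(−i/n_live) shrinkage with simulated Beta(n_live,1) order statistics is one way to propagate the Monte Carlo variability alluded to below.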

Some of the questions that came to me during the talk were on how to run nested sampling sequentially, both in the data and in the number of simulated points, and on incorporating more deterministic moves in order to remove some of the Monte Carlo variability. I was about to ask about (!) the Hamiltonian version of nested sampling but then he mentioned his last summer internship on this very topic! I also realised during that talk that the formula (for positive random variables)

\int_0^\infty(1-F(x))\text{d}x = \mathbb{E}_F[X]

does not require absolute continuity of the distribution F.
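Indeed, writing X = ∫₀^∞ 𝕀(x<X) dx for X ≥ 0 and swapping expectation and integral by Tonelli's theorem,

\mathbb{E}_F[X]=\int_0^\infty \mathbb{P}_F(X>x)\,\text{d}x=\int_0^\infty(1-F(x))\,\text{d}x

an argument that only uses the non-negativity of X, whether F is discrete, continuous, or neither.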

reading classics (#1,2)

Posted in Books, Kids, Statistics, University life on December 4, 2014 by xi'an

[La Défense from Paris-Dauphine, Nov. 15, 2012]

Today was the second session of our Reading Classics Seminar for the academic year 2014-2015. I have not reported on this seminar so far because it has had starting problems, namely hardly any student present in the first classes, and therefore several re-starts until we reached a small group of interested students. Actually, this is the final year for my TSI Master at Paris-Dauphine, as it will become integrated within the new MASH Master next year. The latter started this year and drew away half of our potential applicants, presumably because of its wider spectrum, spanning machine learning, optimisation, programming, and a tiny bit of statistics… If we manage to salvage [within the new Master] our speciality of offering the only Bayesian Statistics training in France, this will not be a complete disaster!

Anyway, the first seminar was about the great 1939 Biometrika paper by Pitman, where the best invariant estimator magically appears as a Bayes estimator! Alas, the student did not grasp the invariance part and hence focussed on less relevant technical parts, which was not a great experience (and therefore led me to abstain from posting the slides here).

The second paper was not on my list but was proposed by another student as of yesterday, when he realised he was to present today! This paper, entitled “The Counter-intuitive Non-informative Prior for the Bernoulli Family”, was published in the Journal of Statistics Education in 2004 by Zhu and Lu. I had not heard of the paper (or of the journal) previously and I do not think it is worth advertising any further, as it gives a very poor entry to non-informative priors in the simplest of settings, namely for Bernoulli B(p) observations. Indeed, the stance of the paper is to define a non-informative prior as one returning the MLE of p as its posterior expectation (missing altogether the facts that such a definition is not parameterisation-invariant and that, given the modal nature of the MLE, a posterior mode would be much more appropriate, leading to the uniform prior on p as a solution), and to conclude that the corresponding prior is made of two Dirac masses at 0 and 1! Which again misses several key points, like properly defining convergence in a space of probability distributions and using an improper prior differently from a proper prior. Esp. since, in the next section, the authors switch to Haldane's prior, the Be(0,0) distribution..! A prior that cannot be used since the posterior is not defined when all the observations are identical. Certainly not a paper to make it to the list! (My student simply pasted pages from this paper as his slides and so I see again no point in reposting them here.)
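For the record, the algebra behind the above is a one-liner: with a Be(a,b) prior and n Bernoulli observations summing to s, the posterior expectation is

\mathbb{E}[p\mid x_1,\ldots,x_n]=\dfrac{a+s}{a+b+n}

which coincides with the MLE s/n for every sample only when a=b=0, i.e., in Haldane's Be(0,0) limit. And since the Be(ε,ε) density accumulates half of its mass near 0 and half near 1 as ε goes to 0, the limiting “prior” is indeed the two-point mixture of Dirac masses at 0 and 1.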

marauders of the lost sciences

Posted in Books, Statistics, University life on October 26, 2014 by xi'an

The editors of a new blog entitled Marauders of the Lost Sciences (Learn from the giants) sent me an email to signal its start, with a short excerpt from a giant in maths or stats posted every day:

There is a new blog I wanted to tell you about which excerpts one interesting or classic paper or book a day from the mathematical sciences. We plan on daily posting across the range of mathematical fields and at any level, but about 20-30% of the posts in queue are from statistics.

The goal is to entice people to read the great works of old.

The first post today was from an old paper by Fisher applying Group Theory to the design of experiments.

Interesting concept, which will hopefully generate comments to put the quoted passages into context. Somewhat connected to my Reading Statistical Classics posts. Which incidentally, after I feared the seminar would not take place this year with only two students registered, should take place in the end since more students registered! (I am unsure about the references behind the title of that blog, besides Spielberg's Raiders of the Lost Ark and Norman's Marauders of Gor… I just hope Statistics does not qualify as a lost science!)