## Archive for JRSSB

## congrats [IMS related]

Posted in Statistics with tags associate editor, Biometrika, COPSS Award, Data augmentation, ERC, European Research Council, IMS, IMS Bulletin, Institute of Mathematical Statistics, JRSSB, MCMC, Series B on July 21, 2021 by xi'an

**W**hen I read through the June-July issue of the IMS Bulletin, I saw many causes for celebration and congratulations, from Richard Samworth's award of an Advanced ERC grant, to the new IMS Fellows, including my friends Ismael Castillo, Steve MacEachern, and Natesh Pillai, as well as my current or former associate editors, Johan Segers (JRSS B) and Changbao Wu (Biometrika), and to my friends Alicia Carriquiry, David Dunson, and Tamara Broderick receiving 2021 COPSS awards, along with others, including Wing Hung Wong (of the precursor Tanner & Wong, 1987, fame!). Natesh also figures among the "Quadfecta 23", the exclusive club of authors having published at least one paper in each of the four Annals published by the IMS!

## congrats, Håvard!!!

Posted in Statistics with tags approximate Bayesian inference, computational statistics, Gaussian Markov fields, Guy Medal, honours, INLA, Journal of the Royal Statistical Society, JRSSB, Laplace approximation, R-INLA, Royal Statistical Society, software on March 4, 2021 by xi'an

## misspecified [but published!]

Posted in Statistics with tags ABC, Approximate Bayesian computation, Journal of the Royal Statistical Society, JRSSB, misspecified model, Series B on April 1, 2020 by xi'an

## Jeffreys priors for hypothesis testing [Bayesian reads #2]

Posted in Books, Statistics, University life with tags Arnold Zellner, Bayes factor, Bayesian tests of hypotheses, CDT, class, classics, Gaussian mixture, improper priors, Jeffreys prior, JRSSB, Kullback-Leibler divergence, Oxford, PhD course, Saint Giles cemetery, Susie Bayarri, Theory of Probability, University of Oxford on February 9, 2019 by xi'an

A second (re)visit to a reference paper I gave to my OxWaSP students for the last round of this CDT joint program. Indeed, this may be my first complete read of Susie Bayarri and Gonzalo Garcia-Donato's 2008 Series B paper, inspired by Jeffreys', Zellner's, and Siow's proposals in the Normal case. *(Disclaimer: I was not the JRSS B editor for this paper.)* I had first seen it as a talk at the O'Bayes 2009 meeting in Philadelphia.

The paper aims at constructing formal rules for objective proper priors in testing embedded hypotheses, in the spirit of the "hidden gem" Chapter 3 of Jeffreys' Theory of Probability. The proposal is based on a symmetrised version of the Kullback-Leibler divergence κ between the null and the alternative, used through a transform such as an inverse power of 1+κ, with the power taken large enough to make the prior proper, and eventually multiplied by a reference measure (i.e., an arbitrary choice of a dominating measure). The construction can be generalised to any intrinsic loss (not to be confused with an intrinsic prior à la Berger and Pericchi!), and a Taylor expansion shows the resulting prior to be approximately Cauchy or Student's t. It is worth comparing with Jeffreys' original prior, equal to the derivative of the arctangent transform of the root divergence (!). A delicate point is the calibration by an effective sample size, which lacks a general definition.
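In symbols, and glossing over the effective-sample-size rescaling, the divergence-based construction can be sketched as follows (my own transcription and notation, which may differ from the paper's exact formulation):

```latex
% Sketch of the divergence-based (DB) prior, paraphrased from
% Bayarri & Garcia-Donato (2008); notation is mine, not theirs.
\[
\kappa(\theta) = \mathrm{KL}\{f(\cdot\mid\theta)\,\|\,f(\cdot\mid\theta_0)\}
               + \mathrm{KL}\{f(\cdot\mid\theta_0)\,\|\,f(\cdot\mid\theta)\}
\]
\[
\pi^{D}(\theta) \propto \{1+\kappa(\theta)\}^{-q}\,\pi^{N}(\theta)
\]
```

where π^N is the reference (possibly improper) measure and q is chosen just large enough for π^D to be proper. For instance, testing μ=0 in a N(μ,σ²) model with σ known gives κ(μ)=μ²/σ², so that q=1 produces a Cauchy-type prior, consistent with the approximate Cauchy or Student's t behaviour mentioned above.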

At the start the authors rightly insist on letting the nuisance parameter ν differ between models but, as we all often do, they relapse into having the "same ν" in both models for integrability reasons. Nuisance parameters make the definition of the divergence prior somewhat harder. Or somewhat arbitrary. Indeed, as in reference prior settings, the authors first work conditional on the nuisance parameter, then use a prior on ν that may be improper by the "same ν" argument. (Although *conditioning* is not the proper term if the marginal prior on ν is improper.)
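Schematically, and again in my own notation rather than the paper's, the joint prior is built conditionally as

```latex
% Conditional construction of the DB prior with nuisance
% parameter nu; a schematic rendering, not the paper's notation.
\[
\pi(\theta,\nu) = \pi^{D}(\theta\mid\nu)\,\pi^{N}(\nu)
\]
```

where π^D(·|ν) is the proper divergence-based prior given ν and π^N(ν) is the common, possibly improper, reference prior shared by both models, hence the caveat about calling π^D(θ|ν) a conditional density.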

The paper also contains an interesting case of the translated Exponential, where the resulting prior is a Student's t with 2 degrees of freedom, and another one on mixture models, albeit in the simple case of a location parameter on one component only.