Archive for Biometrika

estimation of a normal mean matrix

Posted in Statistics on May 13, 2021 by xi'an

A few days ago, I noticed that the paper Estimation under matrix quadratic loss and matrix superharmonicity by Takeru Matsuda and my friend Bill Strawderman had appeared in Biometrika. (Disclaimer: I was not involved in handling the submission!) This is a “classical” shrinkage estimation problem, in that estimators of a Normal mean matrix are compared under a matrix quadratic loss, using Charles Stein’s technique of unbiased estimation of the risk. The authors show that the Efron–Morris estimator is minimax. They also introduce a notion of superharmonicity for matrix-variate functions, towards showing that generalized Bayes estimators associated with matrix superharmonic priors are minimax, including a generalization of Stein’s prior. This superharmonicity relates to (much) earlier results by Ed George (1986), Mary-Ellen Bock (1988), and Dominique Fourdrinier, Bill Strawderman, and Marty Wells (1998). (All of whom I worked with in the 1980’s and 1990’s, in Rouen, Purdue, and Cornell!) This paper also made me realise that Dominique, Bill, and Marty had published a Springer book on shrinkage estimation a few years ago and that I had missed it..!
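As a reminder, and from my (possibly faulty!) memory rather than from the paper itself, for an n×p matrix X of independent N(0,1) entries around a mean matrix M, with n ≥ p+2, the Efron–Morris (1972) estimator shrinks the singular values of X as

$$\hat{M}_{\text{EM}} = X\left\{I_p - (n-p-1)(X^\top X)^{-1}\right\},$$

a matrix counterpart of the James–Stein estimator.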

ratio of Gaussians

Posted in Books, Statistics, University life on April 12, 2021 by xi'an

Following (as usual) an X validated question, I came across two papers by George Marsaglia on the ratio of two arbitrary (i.e. unnormalised and possibly correlated) Normal variates. One was a 1965 JASA paper,

where the density of the ratio X/Y is exhibited, based on the fact that this random variable can always be represented, up to a location-scale transformation, as (a+ε)/(b+ξ), where ε,ξ are iid N(0,1) and a,b are constants. Surprisingly (?), this representation was challenged in a 1969 paper by David Hinkley (corrected in 1970).
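As a sanity check, here is a quick simulation sketch (mine, with arbitrary parameter values) of this reduction, comparing the ratio of two correlated Normals with the corresponding location-scale transform of (a+ε)/(b+ξ):

```python
# ratio of correlated Normals vs. Marsaglia's (a+eps)/(b+xi) representation
import numpy as np

rng = np.random.default_rng(1)
mu1, mu2, s1, s2, rho = 1.5, 2.0, 1.0, 0.5, 0.3
m = 10**6

# direct simulation of X/Y for (X,Y) bivariate Normal
cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
z = rng.multivariate_normal([mu1, mu2], cov, size=m)
ratio = z[:, 0] / z[:, 1]

# reduction: X/Y = rho s1/s2 + (s1 sqrt(1-rho^2)/s2) (a+eps)/(b+xi)
a = (mu1 / s1 - rho * mu2 / s2) / np.sqrt(1 - rho**2)
b = mu2 / s2
eps, xi = rng.standard_normal(m), rng.standard_normal(m)
rep = rho * s1 / s2 + (s1 * np.sqrt(1 - rho**2) / s2) * (a + eps) / (b + xi)

# compare central quantiles (tails are too heavy for moments to exist)
print(np.quantile(ratio, [.1, .25, .5, .75, .9]))
print(np.quantile(rep, [.1, .25, .5, .75, .9]))
```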

And less surprisingly the ratio distribution behaves almost like a Cauchy, since its density is

$$f(z)=\frac{e^{-(a^2+b^2)/2}}{\pi(1+z^2)}\left\{1+q\,e^{q^2/2}\int_0^q e^{-t^2/2}\,\text{d}t\right\},\qquad q=\frac{b+az}{\sqrt{1+z^2}},$$

meaning it is a two-component mixture of a Cauchy distribution, with weight exp(-a²/2-b²/2), and of an altogether more complex distribution ƒ². This is remarked by Marsaglia in the second paper, from 2006, although the description of the second component remains vague, besides a possible bimodality. (It could have a mean, actually.) The density ƒ² however resembles (at least graphically) the generalised inverse Normal density I played with, eons ago.
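For the record, here is a small numerical rendering (my own sketch, with arbitrary values of a and b) of this density, evaluating the Cauchy weight and checking that the whole thing integrates to one:

```python
# Marsaglia's density for z = (a+eps)/(b+xi), with the inner integral
# expressed through the Normal cdf
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

a, b = 1.0, 2.0  # arbitrary illustration values

def marsaglia_density(z):
    q = (b + a * z) / np.sqrt(1 + z**2)
    # int_0^q exp(-t^2/2) dt = sqrt(2 pi) (Phi(q) - 1/2)
    bracket = 1 + q * np.exp(q**2 / 2) * np.sqrt(2 * np.pi) * (norm.cdf(q) - 0.5)
    return np.exp(-(a**2 + b**2) / 2) / (np.pi * (1 + z**2)) * bracket

weight = np.exp(-(a**2 + b**2) / 2)  # mass of the Cauchy component
total, _ = quad(marsaglia_density, -np.inf, np.inf)
print(weight, total)  # total should be numerically one
```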

approximation of Bayes Factors via mixing

Posted in Books, Statistics, University life on December 21, 2020 by xi'an

A [new version of a] paper by Chenguang Dai and Jun S. Liu got my attention when it appeared on arXiv yesterday, due to its title reminding me of a solution to the normalising constant approximation that we proposed in the 2010 nested sampling evaluation paper written with Nicolas, recovering bridge sampling (mentioned by Dai and Liu as an alternative to their approach rather than an early version) by a type of Charlie Geyer (1990-1994) trick. (The attached slides are taken from my MCMC graduate course, with a section on the approximation of Bayesian normalising constants I first wrote for a short course at Jim Berger’s 70th birthday conference, in San Antonio.)

A difference with the current paper is that the authors “form a mixture distribution with an adjustable mixing parameter tuned through the Wang-Landau algorithm”, while we chose the weight by hand to achieve sampling from both components. Their weight is updated by a simple (binary) Wang-Landau version, where the partition is determined by which component is simulated, i.e. by the mixture indicator auxiliary variable, towards using both components on an even basis (à la Wang-Landau) and stabilising the resulting evaluation of the normalising constant. More generally, the strategy applies to a sequence of surrogate densities, chosen as variational approximations in the paper. A toy rendering of the binary Wang-Landau tuning follows.
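Here is my own sketch of the idea (not the authors' code, with all targets and tuning choices arbitrary): two unnormalised Gaussian kernels with known constants, a Metropolis-within-Gibbs sampler on the joint (component, value) pair, and adaptive log-weights whose limiting difference recovers the log ratio of the normalising constants.

```python
# estimate Z1/Z2 by sampling a two-component mixture with weights tuned
# Wang-Landau style so that both components are visited evenly
import numpy as np

rng = np.random.default_rng(2)
# q1 = N(0,1) kernel (Z1 = sqrt(2 pi)), q2 = N(3,4) kernel (Z2 = 2 sqrt(2 pi))
log_q = [lambda x: -x**2 / 2, lambda x: -(x - 3)**2 / 8]

theta = np.zeros(2)  # adaptive log-weights of the two components
x, k = 0.0, 0
for t in range(1, 200_000):
    # Metropolis move on x within the current component k
    prop = x + rng.normal(0, 2.0)
    if np.log(rng.random()) < log_q[k](prop) - log_q[k](x):
        x = prop
    # Gibbs update of the component indicator given x
    logits = np.array([log_q[0](x) - theta[0], log_q[1](x) - theta[1]])
    pk = np.exp(logits - logits.max())
    k = rng.choice(2, p=pk / pk.sum())
    # binary Wang-Landau step: penalise the visited component
    theta[k] += 1.0 / np.sqrt(t)  # naive decay, flat-histogram checks omitted

print(np.exp(theta[0] - theta[1]))  # should approach Z1/Z2 = 0.5
```

At stationarity the chain spends time in component k proportionally to Z_k exp(-θ_k), so forcing even occupancy drives the difference of the log-weights to the log ratio of the constants.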

my talk in Newcastle

Posted in Mountains, pictures, Running, Statistics, Travel, University life on November 13, 2020 by xi'an

I will be talking (or rather zooming) at the statistics seminar at the University of Newcastle this afternoon on the paper Component-wise approximate Bayesian computation via Gibbs-like steps that just got accepted by Biometrika (yay!). Sadly I cannot be there for real, as I would have definitely enjoyed reuniting with friends and visiting again this multi-layered city after discovering it at the RSS meeting of 2013, which I attended along with Jim Hobert and where I re-discussed the re-Read DIC paper, before traveling south to Warwick to start my new appointment there. (I started with a picture of Seoul taken from the slopes of Gwanaksan about a year ago, as a reminder of how much had happened, or failed to happen, over the past year… Writing 2019 as the year was unintentional but reflected as well the distortion of time induced by the lockdowns!)


marginal likelihood as exhaustive X validation

Posted in Statistics on October 9, 2020 by xi'an

In the June issue of Biometrika (for which I am deputy editor), Edwin Fong and Chris Holmes have a short paper (that I did not process!) on the validation of the marginal likelihood as the unique coherent updating rule. Marginal in the general sense of Bissiri et al. (2016). Coherent in the sense of being invariant to the order of input of exchangeable data, if in a somewhat self-defining version (Definition 1). As a consequence, the marginal likelihood arises as the unique prequential scoring rule under coherent belief updating in the Bayesian framework. (It is unique given the prior, or its generalisation, obviously.)
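For context, the prequential decomposition underlying the result is the standard chain rule for the marginal likelihood,

$$\log p(y_{1:n}) = \sum_{i=1}^{n} \log p(y_i \mid y_{1:i-1}),$$

which holds for every ordering of the data, exchangeability making the left-hand side invariant to that ordering.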

“…we see that 10% of terms contributing to the marginal likelihood come from out-of-sample predictions, using on average less than 5% of the available training data.”

The paper also contains the interesting remark that the log marginal likelihood is the average leave-p-out X-validation score, across all values of p, which shows that, provided the marginal can be approximated, the X-validation assessment is feasible. This puts a highly relevant (imho) spotlight on how the (deadly) impact of the prior selection expresses itself in the numerical value of the marginal likelihood. Leaving out some of the least informative terms in the X-validation leads to exactly the log geometric intrinsic Bayes factor of Berger & Pericchi (1996). A most interesting connection with the Bayes factor community, but one that depends on the choice of the dismissed fraction of p’s. A toy numerical check of the identity is sketched below.
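To make the remark concrete, here is a minimal numerical check (my own sketch, on an arbitrary conjugate Normal-Normal model with known variance, not an example from the paper) of the identity, in the notation I reconstruct as

$$\log p(y_{1:n}) = \sum_{p=1}^{n} \binom{n}{p}^{-1} \sum_{|V|=p} \frac{1}{p} \sum_{j\in V} \log p(y_j \mid y_{V^c}),$$

where V runs over the held-out subsets of size p:

```python
# check that the log marginal equals the sum over p of leave-p-out CV scores
# (toy conjugate model: y_i ~ N(theta, sigma^2), theta ~ N(mu0, tau0^2))
import itertools
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(0)
n, sigma, mu0, tau0 = 5, 1.0, 0.0, 2.0
y = rng.normal(1.0, sigma, size=n)  # the identity holds for any sample y

# exact log marginal: y ~ N(mu0 1, sigma^2 I + tau0^2 J)
cov = sigma**2 * np.eye(n) + tau0**2 * np.ones((n, n))
log_marginal = multivariate_normal(mean=np.full(n, mu0), cov=cov).logpdf(y)

def log_predictive(y_new, y_train):
    # log posterior predictive density of y_new given the training points
    prec = 1 / tau0**2 + len(y_train) / sigma**2
    mean = (mu0 / tau0**2 + np.sum(y_train) / sigma**2) / prec
    return norm(mean, np.sqrt(sigma**2 + 1 / prec)).logpdf(y_new)

def s_cv(p):
    # leave-p-out score: average log predictive of a held-out point
    scores = []
    for out in itertools.combinations(range(n), p):
        train = y[[i for i in range(n) if i not in out]]
        scores.extend(log_predictive(y[j], train) for j in out)
    return np.mean(scores)

print(log_marginal, sum(s_cv(p) for p in range(1, n + 1)))  # should agree
```

The agreement is exact (up to floating point), since each leave-p-out score is the average over random orderings of the corresponding sequential predictive term in the prequential decomposition above.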
