Archive for reference prior

no country for odd means

Posted in Books, Kids, Statistics, University life on November 16, 2015 by xi'an

This morning, Clara Grazian and I arXived a paper about Jeffreys priors for mixtures. This is a part of Clara’s PhD dissertation between Roma and Paris, on which she has worked for the past year. Jeffreys priors cannot be computed analytically for mixtures, which is such a drag that it led us to devise the delayed acceptance algorithm. However, the main message from this detailed study of Jeffreys priors is that they mostly do not work for Gaussian mixture models, in that the posterior is almost invariably improper! This is a definite death knell for Jeffreys priors in this setting, meaning that alternative reference priors, like the one we advocated with Kerrie Mengersen and Mike Titterington, or the similar solution in Roeder and Wasserman, have to be used. [Disclaimer: the title has little to do with the paper, except that posterior means are off for mixtures…]
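To illustrate why the Jeffreys prior is out of analytical reach here, consider the simplest possible case: the weight of a two-component Gaussian mixture with everything else fixed. Even then the Fisher information is an intractable integral and must be approximated numerically. The sketch below (my own toy illustration, not the algorithm of the paper; the function name and settings are made up for the example) estimates it by Monte Carlo:

```python
import numpy as np

def jeffreys_weight_prior(w, mu1=-1.0, mu2=2.0, n_mc=50_000, rng=None):
    """Monte Carlo approximation of the Jeffreys prior for the weight w of the
    mixture w N(mu1, 1) + (1 - w) N(mu2, 1), all other parameters held fixed.
    The Fisher information I(w) = E[(d log f / dw)^2] has no closed form, so it
    is estimated from draws of the mixture itself (illustrative sketch only)."""
    rng = np.random.default_rng(rng)
    # sample x from the mixture
    comp = rng.random(n_mc) < w
    x = np.where(comp, rng.normal(mu1, 1.0, n_mc), rng.normal(mu2, 1.0, n_mc))
    # component densities and mixture density at the sampled points
    phi1 = np.exp(-0.5 * (x - mu1) ** 2) / np.sqrt(2 * np.pi)
    phi2 = np.exp(-0.5 * (x - mu2) ** 2) / np.sqrt(2 * np.pi)
    f = w * phi1 + (1 - w) * phi2
    score = (phi1 - phi2) / f          # d log f(x) / dw
    fisher = np.mean(score ** 2)       # Monte Carlo estimate of I(w)
    return np.sqrt(fisher)             # Jeffreys prior, up to normalisation
```

With all the other parameters unknown the Fisher information becomes a full matrix whose determinant must be approximated at every value of the parameter, which is what makes a direct MCMC implementation so costly.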

SAMSI workshop

Posted in Statistics, Travel on March 22, 2010 by xi'an

Taking advantage of the people gathered at Frontiers of Statistical Decision Making and Bayesian Analysis, Dongchu Sun organised a one-day SAMSI workshop on reference priors for spatio-temporal models. Talking with a small group focused on this topic was quite enjoyable and a change from the larger crowds at the conference (even though talks were also enjoyable there!). I particularly appreciated the discussion we had around AR(p) models and the difficulty of assessing whether or not non-stationary regions should be included in the analysis. The generalisation of the Berger-Yang (1994) paradigm to general values of p seems to put too much mass on the non-stationary region, even when using a symmetrisation technique… I came out of the meeting (exhausted and) wondering whether or not it was at all meaningful to consider testing for stationarity, even though Bayes factors can be constructed in this setting.
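For readers less familiar with the stationarity region in question: an AR(p) process is stationary exactly when all roots of its characteristic polynomial lie strictly inside the unit circle. A minimal check (the function name is mine, for illustration only):

```python
import numpy as np

def is_stationary(ar_coefs):
    """Check stationarity of the AR(p) model
        x_t = a_1 x_{t-1} + ... + a_p x_{t-p} + e_t
    by testing whether all roots of the characteristic polynomial
        z^p - a_1 z^{p-1} - ... - a_p
    lie strictly inside the unit circle."""
    poly = np.concatenate(([1.0], -np.asarray(ar_coefs, dtype=float)))
    roots = np.roots(poly)
    return bool(np.all(np.abs(roots) < 1.0))
```

For p = 1 this is just |a_1| < 1, but for general p the region is a complicated subset of R^p, which is part of why putting a reference prior on it (or on its complement) is delicate.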

Model choice by Kullback projection

Posted in Statistics on February 3, 2009 by xi'an

Nott and Leng just posted a paper on arXiv that expands on previous papers of ours (Dupuis and Robert, written in 1998, published in 2003; Goutis and Robert, 1998) by incorporating a Lasso perspective. Besides the fact that it relates to one of my preferred papers—Kullback-Leibler projections being a natural way for me to propagate priors on submodels when defining a single prior on the “big” or encompassing model—, it contains interesting extensions, one being that they can achieve a consistency result (while I am not sure our approach always was consistent) and the other that the computation of the projection is made much easier via the Lasso perspective. One may wonder where the Lasso appears in this setting, but using a dual Lagrangian representation of the Lasso penalty as an L1 ball or something similar means defining a parameter subspace as a ball indeed. Computing the projected parameters is then equivalent to finding a Lasso estimate. Further, because the Lasso perspective allows for all possible submodels, the need to approximately explore the tree of submodels that was a problem with Dupuis and Robert (2003) vanishes. Also interestingly, although the Lasso defines a single constraint, all submodels—in the classical sense of variable selection—can be assessed from this perspective as well.
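To see why the projection reduces to a Lasso fit, note that in a Gaussian linear model the Kullback-Leibler divergence between the full model with coefficients β_full and a constrained submodel with coefficients β is proportional to ||Xβ_full − Xβ||², so projecting onto the L1 ball is a Lasso problem with pseudo-response Xβ_full. A self-contained coordinate-descent sketch (my own illustration under these Gaussian assumptions, not the authors' code):

```python
import numpy as np

def lasso_projection(X, beta_full, lam, n_iter=200):
    """KL projection of a Gaussian linear model onto the L1-penalised submodel.
    For Gaussian likelihoods the projection is the Lasso fit to the
    pseudo-response X @ beta_full, solved here by coordinate descent with
    soft-thresholding (illustrative sketch)."""
    y = X @ beta_full                  # fitted values of the full model
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding coordinate j
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r
            # soft-threshold update for (1/2)||y - Xb||^2 + lam * ||b||_1
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta
```

As lam grows, more coordinates of the projection are set exactly to zero, which is how a single penalty sweeps through all variable-selection submodels at once.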

There is however one point with which I disagree, namely that the predictive on the submodel is obtained in the current paper by projecting a Monte Carlo or an MCMC sample from the predictive on the full model, while I think this is incorrect because the likelihood is then computed using the parameter for the full model. Using a projection of such a sample means at least reweighting by the ratio of the likelihoods…
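The correction I have in mind can be sketched as a simple importance-reweighting step: each projected draw keeps a weight proportional to the ratio of the submodel likelihood at the projected value to the full-model likelihood at the original value. This is my own illustration of the point, with hypothetical function names:

```python
import numpy as np

def reweight_projected_sample(loglik_full, loglik_sub, thetas, projections):
    """Importance reweighting of a projected posterior sample: draw i from the
    full-model posterior, projected onto the submodel, gets normalised weight
    proportional to f(y | projection_i) / f(y | theta_i), computed on the log
    scale for stability (illustrative sketch)."""
    logw = np.array([loglik_sub(p) - loglik_full(t)
                     for t, p in zip(thetas, projections)])
    logw -= logw.max()                 # stabilise before exponentiating
    w = np.exp(logw)
    return w / w.sum()                 # normalised importance weights
```

Only with such weights (or a resampling step based on them) does the projected sample target the submodel predictive rather than a plain push-forward of the full-model predictive.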