## Model choice by Kullback projection

**N**ott and Leng just posted a paper on arXiv that expands on previous papers of ours (Dupuis and Robert, written in 1998, published in 2003; Goutis and Robert, 1998) by incorporating a Lasso perspective. Besides the fact that it relates to one of my preferred papers (Kullback-Leibler projections being a natural way for me to propagate priors to submodels when defining a single prior on the "big" or encompassing model), it contains interesting extensions: one is that they achieve a consistency result (while I am not sure our approach was always consistent), the other that the computation of the projection is made much easier via the Lasso perspective. One may wonder where the Lasso appears in this setting, but the dual Lagrangian representation of the Lasso penalty defines the constrained parameter subspace as an *L₁* ball. Computing the projected parameters is then equivalent to finding a Lasso estimate. Further, because the Lasso perspective allows for all possible submodels, the need to approximately explore the tree of submodels, which was a difficulty in Dupuis and Robert (2003), vanishes. Also interestingly, although the Lasso defines a single constraint, all submodels (in the classical sense of variable selection) can be assessed from this perspective as well.
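To give a flavour of why the projection reduces to a Lasso-type computation (this is my own illustration, not code from the paper): in the special case of a Gaussian likelihood with orthonormal design, the Kullback projection onto the *L₁* ball becomes a Euclidean projection, whose solution is a soft-thresholding of the full-model parameter, i.e. exactly a Lasso estimate. A minimal numpy sketch of that projection:

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of v onto the L1 ball {x : ||x||_1 <= radius}.

    In the orthonormal-Gaussian special case this coincides with the KL
    projection of the full-model parameter onto the Lasso constraint set,
    and the solution is a soft-thresholding of v.
    """
    if np.abs(v).sum() <= radius:
        return v.copy()  # already inside the ball: nothing to do
    u = np.sort(np.abs(v))[::-1]          # sorted magnitudes, decreasing
    css = np.cumsum(u)
    # largest index rho such that u[rho] > (css[rho] - radius) / (rho + 1)
    rho = np.nonzero(u - (css - radius) / (np.arange(len(u)) + 1.0) > 0)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

# projecting (3, 1) onto the L1 ball of radius 2 soft-thresholds to (2, 0):
print(project_l1_ball(np.array([3.0, 1.0]), radius=2.0))
```

The zeroed coordinates of the projected parameter are what single out a submodel, which is how the one *L₁* constraint can reach every variable-selection submodel.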

**T**here is however one point with which I disagree, namely that the predictive on the submodel is obtained in the current paper by projecting a Monte Carlo or an MCMC sample simulated from the predictive on the full model. I think this is incorrect because the likelihood is then computed with the full-model parameter: using a projection of such a sample requires at least reweighting by the ratio of the likelihoods…
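To make the reweighting I have in mind explicit (a generic importance-sampling sketch under my own notation, not the paper's): if the full-model draws have log-likelihoods `loglik_full` and their projections have log-likelihoods `loglik_proj`, each projected draw would carry a weight proportional to the likelihood ratio, computed on the log scale for stability:

```python
import numpy as np

def reweight(loglik_proj, loglik_full):
    """Normalised importance weights for a projected posterior sample.

    Each projected draw is weighted by the ratio of its likelihood to the
    likelihood of the original full-model draw it was projected from.
    """
    logw = loglik_proj - loglik_full   # log of the likelihood ratio
    logw -= logw.max()                 # stabilise before exponentiating
    w = np.exp(logw)
    return w / w.sum()
```

A submodel predictive estimate would then be a weighted average over the projected draws with these weights, rather than the unweighted average the projection alone would give.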

February 20, 2009 at 7:42 am
