## Another slice

**N**o, this is not yet another post-Christmas/NY ‘Og entry about food! Iain Murray, Ryan Adams and David MacKay posted a small piece on arXiv on Tuesday where they advocate a new type of slice sampler for cases when the posterior distribution on the parameter $f$ is associated with a Gaussian prior,

$$f \sim \mathcal{N}(0, \Sigma),$$

and where the update in the Markov chain is based on an elliptic update,

$$f' = f \cos\theta + \nu \sin\theta, \qquad \nu \sim \mathcal{N}(0, \Sigma),$$

except that $\theta$ is also updated at each MCMC step by a slice sampler. The resulting algorithm is a slice sampler in that it does not reject new values of $f$.
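The update above can be sketched in a few lines of NumPy. This is only an illustrative rendering of the elliptical slice sampling step described in the paper, not the authors' own code; the function names and the simple shrinking bracket on $\theta$ follow the standard presentation of the algorithm.

```python
import numpy as np

def elliptical_slice(f, log_lik, chol_sigma, rng=None):
    """One elliptical slice sampling update for a N(0, Sigma) prior.

    f          : current state (array), assumed compatible with the prior
    log_lik    : function returning the log-likelihood of a state
    chol_sigma : lower Cholesky factor of the prior covariance Sigma
    """
    rng = np.random.default_rng() if rng is None else rng
    # auxiliary draw from the prior defines the ellipse through f
    nu = chol_sigma @ rng.standard_normal(f.shape[0])
    # slice threshold on the likelihood (log scale)
    log_u = log_lik(f) + np.log(rng.uniform())
    # initial angle and bracket covering the full ellipse
    theta = rng.uniform(0.0, 2.0 * np.pi)
    theta_min, theta_max = theta - 2.0 * np.pi, theta
    while True:
        f_prop = f * np.cos(theta) + nu * np.sin(theta)
        if log_lik(f_prop) > log_u:
            return f_prop  # no rejection: the bracket only shrinks
        # shrink the bracket towards theta = 0 (the current state) and retry
        if theta < 0.0:
            theta_min = theta
        else:
            theta_max = theta
        theta = rng.uniform(theta_min, theta_max)
```

Since $\theta = 0$ recovers the current state $f$, which always satisfies the slice condition, the shrinking loop is guaranteed to terminate, and the sampler indeed has no free tuning parameters beyond the prior covariance.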

**I** find the proposal interesting, especially because it incorporates a “cyber-parameter” like $\theta$ within the Markov chain, but I wonder how widely the efficiency of the algorithm persists. Indeed, simulating $\nu$ from the prior cannot be very efficient when the likelihood strongly differs from the Gaussian prior. A lack of rejection is not a positive property *per se*, and Gibbs sampling (incl. slice sampling) is notoriously slow for this very lack of rejection…

January 7, 2010 at 10:40 pm

Thanks for your interest!

Indeed, MCMC in general is hard, and one could easily construct examples that break our algorithm.

The main “selling point” isn’t really the lack of rejections (which I still think are nice), but that the algorithm is simple to use: it has short code and zero free parameters. Despite its simplicity, it works well for at least some models containing Gaussian process priors.

January 8, 2010 at 6:57 am

Given this reliance on Gaussian process priors, I wonder if a connection or comparison could be made with the Rue, Martino, & Chopin approach…