## Nested sampling, the return

**A**fter a fairly lengthy period of pondering whether or not we should invest in a revision, prompted by some scathing referee’s reports, Nicolas Chopin and I have at last taken the plunge: we have just completed the revision of our assessment of *nested sampling* and resubmitted it to Biometrika.

**T**o recap, nested sampling was introduced by John Skilling around 2004 [if this page is a clue], presented at the Valencia 8 meeting in 2006 as an invited paper, and subsequently published in Bayesian Analysis the same year in a slightly modified form. The method is a stochastic climb up the likelihood function that approximates the marginal likelihood (or the *evidence* dear to Jeffreys) by a Riemann-like sum, using the prior distribution as a proposal. It has been adopted enthusiastically by astronomers (I first heard of it at the Bayesian Cosmology meeting in Sussex in June 2006) and physicists, but much less so by statisticians, to the point that it remains mostly unknown within our community. When the paper was presented, Nicolas Chopin and I were rather unconvinced by some of the claims made there, partly because of the unorthodox style of John Skilling’s writing, and we wrote a skeptical discussion for the Valencia volume that led to an exchange of emails with the author. In order to study the exact properties of the method in more detail, we embarked upon a larger experiment and ended up with the following conclusions:

- nested sampling enjoys the same speed of convergence as regular Monte Carlo methods, a fact already noted in the discussion of Evans in the Valencia volume, and a normal asymptotic approximation as well;
- there exists an importance sampling version of nested sampling where simulating under the likelihood constraint is straightforward;
- the unidimensional features of the method are not absolute, in that the computational effort still grows as d³, where d is the dimension of the space;
- the implementation requires a computational effort equivalent to a specific kind of slice sampling, since the method simulates from the prior under a minimum likelihood constraint;
- for a given computational effort, nested sampling does not necessarily dominate alternative approaches to evidence approximation like bridge sampling or reverse importance sampling.
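For readers unfamiliar with the mechanics, the basic scheme can be sketched as follows. This is a minimal, illustrative Python implementation of plain nested sampling on a toy one-dimensional problem of my own choosing (uniform prior on (0,1), likelihood e^{-θ}, so the true evidence is 1−e^{-1}); the naive rejection step stands in for the constrained simulation that, in realistic problems, is the hard part discussed above:

```python
import math
import random

random.seed(1)

def nested_sampling(loglike, prior_draw, n_live=100, n_iter=600):
    """Plain nested sampling estimate of the evidence Z = ∫ L(θ) π(θ) dθ."""
    # draw the initial set of live points from the prior
    live = [prior_draw() for _ in range(n_live)]
    logL = [loglike(x) for x in live]
    Z = 0.0
    X_prev = 1.0  # prior mass above the current likelihood constraint
    for i in range(1, n_iter + 1):
        # the worst live point sets the current likelihood constraint
        j = min(range(n_live), key=lambda k: logL[k])
        L_min = logL[j]
        # deterministic estimate of the remaining prior mass
        X = math.exp(-i / n_live)
        # Riemann-sum contribution of the discarded point
        Z += math.exp(L_min) * (X_prev - X)
        X_prev = X
        # replace it with a prior draw satisfying L > L_min; rejection
        # sampling works on this toy problem but is exactly where an
        # MCMC or slice-sampling move is needed in general
        while True:
            x = prior_draw()
            l = loglike(x)
            if l > L_min:
                live[j], logL[j] = x, l
                break
    # the remaining live points cover the final strip of prior mass
    Z += sum(math.exp(l) for l in logL) * X_prev / n_live
    return Z

# toy example: true evidence is 1 - exp(-1) ≈ 0.632
Z = nested_sampling(lambda t: -t, random.random)
print(Z)
```

The estimate is within a few percent of the truth here; as noted above, the stochastic error decreases at the usual Monte Carlo rate in the number of live points.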

**W**e however botched the illustration of those points with poor programming choices (like using an infinite-variance proposal in the MCMC step!) and were promptly rejected by Biometrika! Besides valid criticisms of the programming choices and surprising ones on the (ir)relevance of the CLT, we also received a fairly helpful suggestion of a possible closed-form implementation of the importance sampling version of nested sampling that removes one of the error terms in the Riemann approximation. When applied to a standard probit posterior, this version of nested sampling proved very efficient indeed, outperforming a well-tuned importance sampler.
