Nested sampling, the return

After a fairly lengthy period of pondering whether or not we should invest in a revision, thanks to some scathing referees' reports, Nicolas Chopin and I have at last made the jump: we just completed this revision of our assessment of nested sampling and resubmitted it to Biometrika.

To recap, nested sampling was introduced by John Skilling, in 2004 or so [if this page is a clue], presented at the Valencia 8 meeting in 2006 as an invited paper and subsequently published in Bayesian Analysis the same year in a slightly modified format. The method is a stochastic climb up the likelihood function that approximates the marginal likelihood (or the evidence dear to Jeffreys) by a Riemann-like sum, using the prior distribution as a proposal. It has been adopted enthusiastically by astronomers (I first heard of it at the Bayesian Cosmology meeting in Sussex in June 2006) and physicists, but much less so by statisticians, to the point that it remains mostly unknown within our community. When the paper was presented, Nicolas Chopin and I were rather unconvinced by some of the claims made there, partly because of the unorthodox style of John Skilling's writing, and we wrote a skeptical discussion for the Valencia volume that led to an exchange of emails with the author. In order to study the exact properties of the method in more detail, we embarked upon a larger experiment and ended up with the following conclusions:

  1. nested sampling enjoys the same speed of convergence as regular Monte Carlo methods, a fact already noted in Evans' discussion in the Valencia volume, as well as an asymptotic normal approximation;
  2. there exists an importance sampling version of nested sampling where simulating under the likelihood constraint is straightforward;
  3. the unidimensional features of the method are not absolute, in that the computational effort still grows as d³, where d is the dimension of the parameter space;
  4. the implementation requires a computational effort equivalent to a specific kind of slice sampling, since the method simulates from the prior under a minimum likelihood constraint (see the sketch after this list);
  5. for a given computational effort, nested sampling does not necessarily dominate alternative approaches to evidence approximation like bridge sampling or reverse importance sampling.
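
To make the mechanics concrete, here is a minimal sketch of the basic scheme in Python (a toy illustration, not the implementation used in the paper): the evidence is accumulated as a Riemann-like sum over shrinking prior-mass slabs, and the constrained simulation of point 4 is handled by naive rejection from the prior, which is only workable on toy problems. The names log_prior_sample and log_likelihood are placeholders to be supplied by the user.

```python
import numpy as np
from scipy.special import logsumexp

def nested_sampling_log_evidence(log_prior_sample, log_likelihood,
                                 n_live=100, n_iter=1000, rng=None):
    """Toy nested sampling estimate of log Z (the evidence).

    log_prior_sample(rng, size) returns `size` draws from the prior;
    log_likelihood(theta) returns the log-likelihood of one draw.
    Constrained draws are obtained by naive rejection from the prior.
    """
    rng = np.random.default_rng() if rng is None else rng
    live = log_prior_sample(rng, n_live)                      # current live points
    live_logL = np.array([log_likelihood(t) for t in live])
    log_Z, log_X_prev = -np.inf, 0.0                          # log evidence, log prior mass X_0 = 1
    for i in range(1, n_iter + 1):
        worst = np.argmin(live_logL)                          # lowest-likelihood live point
        logL_i = live_logL[worst]
        log_X = -i / n_live                                   # deterministic shrinkage, E[log X_i] = -i/N
        # Riemann-like slab (X_{i-1} - X_i) * L_i, accumulated on the log scale
        log_Z = np.logaddexp(log_Z,
                             np.log(np.exp(log_X_prev) - np.exp(log_X)) + logL_i)
        log_X_prev = log_X
        # replace the worst point with a prior draw satisfying L(theta) > L_i
        while True:
            cand = log_prior_sample(rng, 1)[0]
            cand_logL = log_likelihood(cand)
            if cand_logL > logL_i:
                break
        live[worst], live_logL[worst] = cand, cand_logL
    # contribution of the remaining live points over the last prior-mass slab
    log_Z = np.logaddexp(log_Z, log_X_prev + logsumexp(live_logL) - np.log(n_live))
    return log_Z
```

In any serious use the rejection step would be replaced by the slice-sampling-like move of point 4, i.e. an MCMC kernel targeting the prior restricted to the region where the likelihood exceeds the current threshold.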

We however botched the illustration of those points with poor programming choices (like using an infinite variance proposal in the MCMC step!) and got promptly rejected by Biometrika! Besides valid criticisms of the programming choices and surprising ones on the (ir)relevance of the CLT, we also received a fairly helpful suggestion of a possible closed-form implementation of the importance sampling version of nested sampling that kills one of the error terms in the Riemann approximation. When applied to a standard probit posterior, this version of nested sampling proved indeed very efficient, outperforming a well-tuned importance sampler.
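
For context on that comparison, here is what a plain importance sampling estimate of the evidence looks like, again as a hedged sketch rather than the well-tuned sampler used in the paper; the Gaussian proposal (mean and covariance taken, say, from a Laplace approximation of the posterior) is an assumption of this example.

```python
import numpy as np
from scipy.stats import multivariate_normal

def is_log_evidence(log_prior, log_likelihood, prop_mean, prop_cov,
                    n_draws=10_000, rng=None):
    """Plain importance sampling estimate of log Z with a Gaussian proposal.

    log_prior(theta) and log_likelihood(theta) evaluate one parameter vector;
    prop_mean / prop_cov would typically come from a Laplace approximation
    of the posterior (an assumption of this sketch, not the paper's tuning).
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = rng.multivariate_normal(prop_mean, prop_cov, size=n_draws)   # (n_draws, d)
    log_q = multivariate_normal(mean=prop_mean, cov=prop_cov).logpdf(theta)
    log_w = np.array([log_prior(t) + log_likelihood(t) for t in theta]) - log_q
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))   # numerically stabilised log-mean weight
```

Point 5 of the list above is the relevant comparison: at matched computational cost, estimators of this type (or bridge and reverse importance sampling) are not necessarily dominated by nested sampling.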

