“This criticism is receivable when there is a huge number of possible values of N, even though I see no fundamental contradiction with my ideas about Bayesian computation. However, it is more debatable when there are a few possible values for N, given that the exploration of the augmented space by a RJMCMC algorithm is often very inefficient, in particular when the proposed parameters are generated from the prior.”

Yes, I agree. This paragraph was aimed at astronomers, many of whom only know about the ‘different trial values of N’ approach.

“The more when nested sampling is involved and simulations are run under the likelihood constraint!”

I think it’s less. The diffusive nested sampling (DNS) target distribution is usually easier than the posterior: the posterior might be dominated by levels 50–70 (say), yet the trans-dimensional moves might be accepted frequently around level 30, where the likelihood constraint is lower.

“I live in fear of a posterior that contains a minute region of parameter space with a huge spike of likelihood!”

It’s much more common that the phase transition occurs at a higher temperature, and that will only affect marginal likelihoods. I’d bet there are many wrong marginal likelihoods in the literature because of phase transitions, but I doubt there are many incorrect posterior distributions. One example of an incorrect posterior distribution is this strange paper by Carlos Rodriguez (http://arxiv.org/abs/0709.1067), where he argues we should all use Jeffreys priors. For his non-Jeffreys prior, the only thing that failed was his MCMC run, which didn’t mix between the two phases.

I think John Skilling’s obtuse writing style is to blame for people’s lack of understanding of these problems. If you read his 2006 paper in Bayesian Analysis, it’s mostly about phase transitions, yet many papers since then just use NS because they feel like it / it sounds cool.

Imperial: it’s being organised by the DIDE (Infectious Diseases) people. Jeff Eaton is the organiser.

Where? I wouldn’t mind learning STAN and I don’t currently have anything I’m working on that it’s appropriate for… (I got close the other week, but it became obvious that the data wouldn’t support the model.)

Then you can also engage Michael on nested sampling at this course! Enjoy.

Have you seen the most recent revision of Mike Betancourt’s Adiabatic Monte Carlo paper? His discussion of metastabilities in the contact form might offer an explanation.

The slab-and-spike likelihood would be one case: in principle the nested sampler will keep shrinking its restricted likelihood region until it lassos the spike, provided that the slab part has at least a slight gradient in all directions leading towards the spike, so to speak. On the other hand, the NS sampler might well reach its stopping condition while still exploring the slab, so I’d hardly say it’s guaranteed to succeed in practice.
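To make the geometry concrete, here is a minimal sketch of a 1D slab-and-spike log-likelihood; the spike location, width and height are invented for the illustration and are not from the original discussion:

```python
import numpy as np

def log_likelihood(x, spike_loc=0.5, spike_width=1e-6, spike_height=1e6):
    """Toy 1D slab-and-spike log-likelihood: a broad, gently sloping
    'slab' plus a very narrow, very tall Gaussian 'spike' hidden in it.
    The slab's slight gradient towards spike_loc is what would let a
    likelihood-constrained sampler home in on the spike, in principle."""
    log_slab = -0.5 * (x - spike_loc) ** 2  # gentle tilt towards the spike
    log_spike = np.log(spike_height) - 0.5 * ((x - spike_loc) / spike_width) ** 2
    # log of (slab density + spike density), computed stably
    return np.logaddexp(log_slab, log_spike)
```

Away from `spike_loc` the spike term is astronomically small and the sampler only ever sees the slab’s gentle tilt, which is exactly why the stopping condition can trigger before the spike is found.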

Having played with some toy models (e.g. a row of positively or negatively charged ‘atoms’ whose log-likelihood is proportional to the number of matched neighbours), I think there is an argument for running one extra chain on a powered-up version of the posterior (e.g. L^10) during practical data analyses, just in case the likelihood has a ‘phase’ that has not yet been discovered.
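A minimal sketch of that kind of toy model, under my own assumptions about the details (single-site Metropolis, a uniform prior over ±1 configurations, and 10 as the powering-up exponent):

```python
import numpy as np

rng = np.random.default_rng(1)

def log_L(spins):
    # log-likelihood proportional to the number of matched neighbours
    return float(np.sum(spins[:-1] == spins[1:]))

def sample_powered_posterior(n_atoms=20, power=10.0, n_steps=20000):
    """Single-site Metropolis on the 'powered-up' posterior L**power,
    with a uniform prior over +/-1 configurations.  power=1 is the
    ordinary posterior; power=10 sharpens the target enough that the
    two fully aligned 'phases' dominate."""
    spins = rng.choice([-1, 1], size=n_atoms)
    for _ in range(n_steps):
        i = rng.integers(n_atoms)
        proposal = spins.copy()
        proposal[i] *= -1
        # Metropolis accept/reject on the powered-up log target
        if np.log(rng.random()) < power * (log_L(proposal) - log_L(spins)):
            spins = proposal
    return spins

final = sample_powered_posterior()  # ends near one of the aligned phases
```

A random configuration matches about half its neighbour pairs; the powered-up chain drifts towards an almost fully aligned state, which is the sort of ‘phase’ an unpowered run could easily miss.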

Thanks, Ewan: this “phase transition” is a wee bit of a mystery to me. As described, it consists of a highly concentrated spike on top of a rather flattish likelihood. It is hard to get an intuition as to why simulating points at random over the restricted likelihood levels would favour a visit to the spike region when using an imperfect method like MCMC. For instance, when simulating from a Gaussian mixture posterior distribution, there are funnels around zero variance with the mean set at one of the observations, funnels that go up to infinity, and the nested sampler does not usually visit them.
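The mixture funnel can be shown in a few lines; this is a toy illustration with invented data and a two-component mixture of my own choosing, not from the original discussion:

```python
import numpy as np

data = np.array([0.0, 1.2, 3.5])

def mixture_log_lik(mu, sigma, w=0.5):
    """Log-likelihood of a two-component Gaussian mixture: one component
    fixed at N(0, 1), the other a free N(mu, sigma**2) component."""
    fixed = w * np.exp(-0.5 * data ** 2) / np.sqrt(2 * np.pi)
    free = ((1 - w) * np.exp(-0.5 * ((data - mu) / sigma) ** 2)
            / (sigma * np.sqrt(2 * np.pi)))
    return float(np.sum(np.log(fixed + free)))

# Pin the free mean on an observation and shrink its variance:
# the likelihood climbs without bound -- the funnel.
wide, narrow, narrower = (mixture_log_lik(1.2, s) for s in (1.0, 1e-3, 1e-6))
```

The other observations stay supported by the fixed component, so nothing penalises the collapse: every decade of shrinkage in `sigma` buys another `log(10)` of likelihood, all the way up.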

I wonder what you think of the ‘phase change’ problem as a difficulty for thermodynamic methods, but not in principle for nested sampling? I live in fear of a posterior that contains a minute region of parameter space with a huge spike of likelihood!
