Bayesian computation: fore and aft

With my friends Peter Green (Bristol), Krzysztof Łatuszyński (Warwick) and Marcello Pereyra (Bristol), we have just arXived the first version of “Bayesian computation: a perspective on the current state, and sampling backwards and forwards”, whose first title was the title of this post. This is a survey of our own perspective on Bayesian computation, from what has occurred in the last 25 years [a lot!] to what could occur in the near future [a lot as well!]. It was submitted to Statistics and Computing for the special 25th anniversary issue, as announced in an earlier post. Pulling strength and breadth from each other’s opinions, we have certainly achieved more than the sum of our initial respective contributions, but we welcome comments about important bits and pieces we may have missed, and even more about promising new directions that are not covered in this survey. (A warning that should go with most of my surveys: my input in this paper does not differ by a large margin from ideas expressed here or in previous surveys.)

8 Responses to “Bayesian computation: fore and aft”

  1. A few small suggestions for page 13 of the archived version:

    1) The “Barber et al. (2013)” paper finally appeared yesterday; the final reference is as follows:
    S. Barber, J. Voss, M. Webster: The Rate of Convergence for Approximate Bayesian Computation. Electronic Journal of Statistics, vol. 9, pp. 80–105, 2015. doi:10.1214/15-EJS988.

    2) The exponent q/(q+4) is for the RMSE, not for the MSE. Definitely for our paper, and I believe also for Blum (2010).

    3) I believe there is an excess closing bracket after “(re-expressed as a bandwidth) of order $n^{−1/(q+4)}$”.

    I hope this helps,
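    To make the bandwidth/tolerance discussion above concrete, here is a minimal ABC rejection sketch on a toy Gaussian model, where the tolerance eps plays the role of the kernel bandwidth; the model, names and settings are purely illustrative, not taken from the papers under discussion:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy setting: y_i ~ N(theta, 1); summary s(y) = mean(y).
    n_obs = 100
    theta_true = 2.0
    y_obs = rng.normal(theta_true, 1.0, size=n_obs)
    s_obs = y_obs.mean()

    def abc_rejection(n_sims=20000, eps=0.05):
        # Draw parameters from a (flat) prior over [-5, 5].
        theta = rng.uniform(-5, 5, size=n_sims)
        # The sample mean of n_obs draws from N(theta, 1) is N(theta, 1/n_obs),
        # so we can simulate the summary statistic directly.
        s_sim = rng.normal(theta, 1.0 / np.sqrt(n_obs))
        # Keep draws whose summary lands within eps of the observed summary.
        keep = np.abs(s_sim - s_obs) < eps
        return theta[keep]

    post = abc_rejection()
    ```

    Shrinking eps with the simulation budget, at the rates discussed above, trades Monte Carlo error against the bias induced by the tolerance.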

  2. The particle marginal Metropolis-Hastings (PMMH) presented in your tutorial can indeed be re-interpreted as a pseudo-marginal method, but stating that the whole class of particle MCMC is a pseudo-marginal method is wrong.

    First, the PMMH presented in our JRSSB paper samples the joint posterior of the states and parameter and not only the marginal posterior distribution of the parameter. This cannot be established using a simple pseudo-marginal argument. We establish the validity of our PMMH sampling from the joint posterior by introducing an artificial target distribution on the space of all random variables used to build the particle filter.

    Second, this artificial target distribution based on the conditional Sequential Monte Carlo kernel allows us to develop algorithms such as the particle Gibbs sampler.
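    For readers less familiar with the pseudo-marginal idea referred to above, here is a minimal sketch on a toy latent-variable model, using a simple importance-sampling estimator of the likelihood in place of a particle filter; all names and settings are illustrative, not the PMMH of the JRSSB paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy model: x ~ N(theta, 1), y_i | x ~ N(x, 1).
    theta_true = 0.5
    x_true = rng.normal(theta_true, 1.0)
    y = rng.normal(x_true, 1.0, size=30)

    def loglik_hat(theta, n_particles=64):
        """Unbiased Monte Carlo estimate of p(y | theta), on the log scale."""
        x = rng.normal(theta, 1.0, size=n_particles)  # particles x_k ~ N(theta, 1)
        # log p(y | x_k) for each particle
        logw = (-0.5 * ((y[None, :] - x[:, None]) ** 2).sum(axis=1)
                - 0.5 * y.size * np.log(2 * np.pi))
        return np.logaddexp.reduce(logw) - np.log(n_particles)

    def pseudo_marginal_mh(n_iter=2000, step=0.5):
        theta, ll = 0.0, loglik_hat(0.0)
        chain = []
        for _ in range(n_iter):
            prop = theta + step * rng.normal()
            ll_prop = loglik_hat(prop)
            # The current estimate ll is recycled, never recomputed: this is
            # what makes the chain target the exact (marginal) posterior.
            if np.log(rng.uniform()) < ll_prop - ll:  # flat prior on theta
                theta, ll = prop, ll_prop
            chain.append(theta)
        return np.array(chain)

    chain = pseudo_marginal_mh()
    ```

    The point of the comment stands: this argument only covers the marginal chain in theta, while the PMMH of the paper also delivers samples of the latent states via the extended artificial target.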

  3. The only thing that I have to say on this (it’s a very nice review) is that the thing you ask for in section 4.3 is precisely what INLA does for latent Gaussian models. (It may not be clear from the original paper, but the structure is that a high-dimensional penalised MLE is computed and then the model is expanded around it, and a similar computation is used for the integration.) Again, it’s a very specific algorithm built around high-dimensional optimisation for a very specific class of problems. I’m not sure it’s the whole enchilada that section 4.3 is aiming for, but it’s definitely a partial example.

    I’m also less sure (although this is probably my lack of understanding) about your characterisation of ABC as “proper Bayesian inference applied on approximate models”. My takeaway from your model choice papers was that ABC was materially different from vanilla Bayes and that the “approximate” was at least as important (probably more) as the “Bayes”. Part of the joy of ABC has been watching people develop basically a new inference theory for a certain class of approximate models. I’m just not sure it’s Bayes. (Side note: I don’t think that that’s a bad thing.)
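    The optimise-then-expand structure mentioned above (compute a penalised mode, then build a Gaussian expansion there) can be illustrated on a one-dimensional toy posterior; this is only a caricature of the Laplace step underlying INLA, with an illustrative Poisson model and a flat prior:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy model: y_i ~ Poisson(exp(theta)), flat prior on theta, so
    # log p(theta | y) = theta * sum(y) - n * exp(theta) + const.
    y = rng.poisson(np.exp(1.2), size=40)

    # Step 1: find the posterior mode by Newton iterations.
    mode = 0.0
    for _ in range(50):
        grad = y.sum() - y.size * np.exp(mode)   # d/dtheta log-posterior
        hess = y.size * np.exp(mode)             # -d2/dtheta2 log-posterior
        mode += grad / hess                      # Newton step

    # Step 2: expand around the mode, giving a Gaussian N(mode, sd^2)
    # and a Laplace estimate of the normalising constant.
    sd = 1.0 / np.sqrt(hess)
    log_post_at_mode = mode * y.sum() - y.size * np.exp(mode)
    log_evidence = log_post_at_mode + 0.5 * np.log(2 * np.pi / hess)
    ```

    The real INLA machinery repeats this on a high-dimensional latent Gaussian field and then integrates numerically over the hyperparameters, but the mode-plus-curvature skeleton is the same.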

  4. Michael Betancourt Says:

    What happened to the HMC section?!

    • Oh darn, I fear it is still leapfrogging somewhere… Thanks for pointing out this gap: the section was forgotten by no one but me when putting the paper together for arXival!
