ABC vs. PMCMC for MRFs
Before going to the Banff meeting, Richard Everitt posted on arXiv a paper, “Bayesian Parameter Estimation for Latent Markov Random Fields and Social Networks”, which I only finished reading in the limo from JFK to Princeton… It is a fairly interesting comparison of ABC and MCMC algorithms applied to the case of MRFs observed with MRF errors (latent MRF models) and to exponential random graphs with errors, such as those used in social network modelling. The MCMC algorithm combines SMC, particle MCMC à la Andrieu et al. (2010), and the exchange algorithm of Murray et al. (2006), which improves upon the single auxiliary variable method of Møller et al. (2004) [and can also be reinterpreted à la Andrieu-Roberts (2009)]. Recall that the exchange algorithm provides a direct evaluation of the ratio of the normalising constants, based on a running pair of parameters (hence the possible “exchange”). The issue of simulating exactly from an MRF is bypassed by validating an MCMC algorithm based on a finite number of iterations (under strong conditions). The SMC sampler for MRFs mixes hot coupling (based on a clique completion of a spanning tree of the true graph) with tempering. The ABC algorithm uses the same approach as ours (in Grelaud et al., 2009), relying on the (sufficient!) summary statistics, plus the ABC-SMC sampler of Del Moral et al. (2011). The comparison is run on a small 10×10 Ising model and… on the Florentine family network Yves Atchadé used in our Wang-Landau paper!
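To see how the normalising constants cancel in the exchange algorithm, here is a minimal sketch on an Ising model, not taken from Richard's paper: it assumes a flat prior on the interaction parameter and a symmetric random-walk proposal, and replaces exact simulation from the MRF by a finite number of Gibbs sweeps, in the spirit of the validation mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

def suff_stat(x):
    # S(x) = sum of products over horizontal and vertical neighbour pairs
    return (x[:, :-1] * x[:, 1:]).sum() + (x[:-1, :] * x[1:, :]).sum()

def gibbs_sweeps(theta, n, sweeps, rng):
    # approximate draw from the Ising model f(x|theta) ∝ exp(theta * S(x)),
    # exact simulation being bypassed by a finite number of Gibbs sweeps
    x = rng.choice([-1, 1], size=(n, n))
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                s = 0
                if i > 0:     s += x[i - 1, j]
                if i < n - 1: s += x[i + 1, j]
                if j > 0:     s += x[i, j - 1]
                if j < n - 1: s += x[i, j + 1]
                p = 1.0 / (1.0 + np.exp(-2.0 * theta * s))
                x[i, j] = 1 if rng.random() < p else -1
    return x

def exchange_mcmc(y, n_iter=200, sigma=0.1, sweeps=20, rng=rng):
    n = y.shape[0]
    S_y = suff_stat(y)
    theta = 0.1
    chain = []
    for _ in range(n_iter):
        theta_prop = theta + sigma * rng.normal()
        # auxiliary variable simulated at the proposed parameter value
        w = gibbs_sweeps(theta_prop, n, sweeps, rng)
        # Z(theta) and Z(theta_prop) cancel from this acceptance ratio:
        log_alpha = (theta_prop - theta) * S_y + (theta - theta_prop) * suff_stat(w)
        if np.log(rng.random()) < log_alpha:
            theta = theta_prop
        chain.append(theta)
    return np.array(chain)

y = gibbs_sweeps(0.3, 10, 50, rng)  # synthetic 10x10 data
chain = exchange_mcmc(y)
print(chain.mean())
```

The acceptance ratio only involves the unnormalised densities evaluated at the running pair (θ, θ′), which is the whole point of the exchange move.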
Now, comparing ABC with MCMC is not something that would come naturally to my mind, and my answer to the question about their relative merits (as in my talk in London last Thursday) is that you only use ABC when MCMC cannot work. Well, this study shows more depth in the analysis! First, ABC managed to pick up the major features of the posterior in both cases, while a regular MCMC either got stuck in one region or was fairly inefficient. Second, the involved fusion algorithm constructed by Richard managed to overcome those difficulties and provided a richer sample than ABC in the same number of runs (as it should, ABC being a slow learner).
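Why can plain ABC still pick up the major features? Because the Ising summary statistic is sufficient, a basic rejection sampler already targets the true posterior as the tolerance goes to zero. A minimal sketch along those lines, with an assumed uniform prior and, again, finite Gibbs sweeps standing in for exact simulation (none of this being Richard's actual ABC-SMC implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

def suff_stat(x):
    # sufficient statistic of the Ising model: sum over neighbour pairs
    return (x[:, :-1] * x[:, 1:]).sum() + (x[:-1, :] * x[1:, :]).sum()

def ising_gibbs(theta, n, sweeps, rng):
    # approximate simulation of f(x|theta) ∝ exp(theta * S(x)) by Gibbs sweeps
    x = rng.choice([-1, 1], size=(n, n))
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                s = sum(x[a, b]
                        for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= a < n and 0 <= b < n)
                x[i, j] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * theta * s)) else -1
    return x

def abc_rejection(y, n_sim=200, eps=20, rng=rng):
    n, S_y = y.shape[0], suff_stat(y)
    accepted = []
    for _ in range(n_sim):
        theta = rng.uniform(0, 1)              # draw from the (assumed) prior
        z = ising_gibbs(theta, n, 10, rng)     # pseudo-data at theta
        if abs(suff_stat(z) - S_y) < eps:      # compare sufficient statistics
            accepted.append(theta)
    return np.array(accepted)

y = ising_gibbs(0.3, 10, 30, rng)  # synthetic 10x10 data
sample = abc_rejection(y)
print(len(sample))
```

The slow-learner aspect shows up immediately: most prior draws are rejected, so matching the MCMC sample quality requires many more simulations from the model.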