The approximation in ABC
An interesting new paper about ABC was posted by Richard Wilkinson on arXiv today. Its main point is to replace the approximation error resulting from the loose acceptance condition in regular ABC (i.e. accepting a simulated value within a distance ε of the observation) with an exact ABC simulation from a controlled approximation to the target, essentially a convolution of the regular target with an arbitrary kernel π. The idea is indeed interesting in that the outcome is completely controlled, thanks to the degree of freedom brought by the choice of the kernel π, but I think its scope does not compare with the kernel smoothing perspective of Beaumont et al. (2002). Convolving the distribution f of the observables with an arbitrary kernel π does not bring us closer to the ideal inference based on the true posterior, while the nonparametric motivations of Beaumont et al. (2002) and, more recently, of Blum and François (2008) aim at improving the approximation to the true posterior, based on previous simulations. (The paper's criticism of the Epanechnikov kernel as a poor choice for a measurement error distribution is thus off-key, since the incentive in Beaumont et al. (2002) is nonparametric. Note also that the final extension of the paper, both to Monte Carlo integration and to model choice, is quite indistinguishable from a nonparametric approximation of the posterior distribution.) In that sense, both Beaumont et al. (2002) and Blum and François (2008) develop adaptive methods in the spirit of sequential Monte Carlo, like our own PMC version.
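To make the contrast concrete, here is a toy sketch (my own illustration, not code from the paper) of the two acceptance rules on a trivial Gaussian model: standard ABC accepts a simulation when it falls within ε of the observation, while the kernel version accepts with probability π(y_sim − y_obs)/c, for a kernel π bounded by c. The prior, model, and Gaussian choice of π are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
y_obs = 1.2      # observed datum (toy scalar example, assumed)
eps = 0.1        # tolerance for standard ABC
sigma = 0.1      # bandwidth of the convolution kernel pi (Gaussian, assumed)

def simulate(theta):
    # toy model: a single observation y ~ N(theta, 1)
    return rng.normal(theta, 1.0)

def abc_rejection(n):
    # standard ABC: hard 0/1 acceptance within distance eps
    out = []
    while len(out) < n:
        theta = rng.normal(0.0, 2.0)            # prior N(0, 2^2), assumed
        if abs(simulate(theta) - y_obs) <= eps:
            out.append(theta)
    return np.array(out)

def abc_kernel(n):
    # kernel version: accept with probability pi(d)/c, where pi is the
    # N(0, sigma^2) density and c = pi(0) is its upper bound
    c = 1.0 / (sigma * np.sqrt(2 * np.pi))
    out = []
    while len(out) < n:
        theta = rng.normal(0.0, 2.0)
        d = simulate(theta) - y_obs
        p = np.exp(-0.5 * (d / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        if rng.uniform() < p / c:
            out.append(theta)
    return np.array(out)

sample = abc_kernel(200)
```

In this Gaussian toy case the kernel version is exact for the convolved model y_obs ~ N(θ, 1 + σ²), which is the sense in which the approximation is "controlled": the error is entirely described by the choice of π rather than by a tolerance ε.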
This type of work also relates to the theory of model approximation built by Tony O’Hagan in the early 2000s (as pointed out in the paper).
My main criticism of the paper is that the approach proposed by Wilkinson requires a modification of the ABC algorithm, so the algorithm is exact only after this modification, i.e. after changing the problem. The algorithm also relies on an upper bound c on the convolution kernel π, which is not an enormous requirement given that π is arbitrary, but still a requirement. The extension of the method to include MCMC steps, as in ABC-MCMC, is altogether not surprising, but the comparison of two MCMC versions is rather interesting, one avoiding the use of the upper bound c by integrating both approximations into a single acceptance step. The paper rightly concludes that more work is needed, in particular to account for the additional approximation step due to the use of summary (insufficient) statistics, which is much more of an issue, as discussed in yesterday’s post.
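The way the bound c can be avoided is worth spelling out: folding the kernel π into the Metropolis–Hastings ratio makes any normalising or bounding constant of π cancel between numerator and denominator. A toy sketch of that acceptance step, under the same assumed Gaussian model and prior as above (again my own illustration, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
y_obs = 1.2   # observed datum (toy scalar example, assumed)
sigma = 0.1   # bandwidth of the kernel pi (Gaussian, assumed)

def log_prior(theta):
    return -0.5 * (theta / 2.0) ** 2   # N(0, 2^2) prior, up to a constant

def log_kernel(d):
    return -0.5 * (d / sigma) ** 2     # Gaussian pi, up to a constant

def abc_mcmc(n_iter):
    theta = 0.0
    y = rng.normal(theta, 1.0)         # pseudo-data attached to current state
    chain = []
    for _ in range(n_iter):
        theta_p = theta + rng.normal(0.0, 0.5)   # symmetric random walk
        y_p = rng.normal(theta_p, 1.0)           # fresh pseudo-data
        # the kernel ratio replaces the 0/1 tolerance check; because pi
        # enters as a ratio, its bound c cancels and is never needed
        log_a = (log_prior(theta_p) + log_kernel(y_p - y_obs)
                 - log_prior(theta) - log_kernel(y - y_obs))
        if np.log(rng.uniform()) < log_a:
            theta, y = theta_p, y_p
        chain.append(theta)
    return np.array(chain)

chain = abc_mcmc(2000)
```

The design point is that the constants of π appear identically in numerator and denominator of the acceptance ratio, so only the kernel's shape matters at each step.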