rare events for ABC
Dennis Prangle, Richard G. Everitt and Theodore Kypraios just arXived a new paper on ABC, aiming at handling high dimensional data with latent variables, thanks to a cascading (or nested) approximation of the probability of a near coincidence between the observed data and the ABC simulated data. The approach amalgamates a rare event simulation method based on SMC, pseudo-marginal Metropolis-Hastings, and of course ABC. The rare event is the near coincidence of the observed summary and of a simulated summary, an event so rare that regular ABC is forced to accept not-so-near coincidences, especially as the dimension increases. I mentioned nested above purposely because I find that the rare event simulation method of Cérou et al. (2012) has a nested sampling flavour, in that each move of the particle system (in the sample space) is done according to a constrained MCMC move, the constraint being derived from the distance between the observed and simulated samples. Finding an efficient move of that kind may prove difficult or impossible. The authors opt for the slice sampler proposed by Murray and Graham (2016), which however requires assuming that the distribution of the latent variables is uniform over a unit hypercube, an assumption I do not fully understand. For the pseudo-marginal aspect, note that while the approach produces a better and faster evaluation of the likelihood, it remains an ABC likelihood and not the original likelihood. Because the estimate of the ABC likelihood is monotonic in the number of terms, a proposal can be terminated early without inducing a bias in the method.
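For concreteness, here is a minimal sketch of what such a constrained move could look like when the latent variables are iid U(0,1): a coordinate-wise shrinkage slice sampler whose "slice" is the set of latent vectors keeping the simulated summary within eps of the observed one. This is a generic stand-in in the spirit of Murray and Graham (2016), not the authors' exact algorithm, and the names simulate, s_obs and eps are mine:

```python
import numpy as np

def constrained_slice_move(u, theta, simulate, s_obs, eps, rng):
    """One coordinate-wise shrinkage slice-sampling update of the latent
    vector u (iid U(0,1) a priori) under the hard constraint that the
    simulated summary stays within eps of the observed one.  `simulate`
    maps (theta, u) deterministically to a summary vector; the input u
    is assumed to already satisfy the constraint."""
    u = u.copy()
    for i in rng.permutation(len(u)):
        lo, hi = 0.0, 1.0                      # initial bracket: all of [0,1]
        while True:
            prop = u.copy()
            prop[i] = rng.uniform(lo, hi)
            if np.linalg.norm(simulate(theta, prop) - s_obs) < eps:
                u = prop                       # landed on the slice: accept
                break
            # missed the slice: shrink the bracket towards the current point
            if prop[i] < u[i]:
                lo = prop[i]
            else:
                hi = prop[i]
    return u
```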
This is certainly an innovative approach of clear interest and I hope we will discuss it at length at our BIRS ABC 15w5025 workshop next February. At this stage of a light first read, I am slightly overwhelmed by the combination of so many computational techniques towards a single algorithm. The authors argue there is very little calibration involved, but so many steps still depend on as many configuration choices.
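As a toy illustration of the early-termination point above: when the likelihood estimate is a product of per-level factors, each lying in [0,1] (e.g., survival fractions in the rare event SMC), the running product can only decrease, so the Metropolis-Hastings test can abort as soon as acceptance is out of reach. A minimal sketch under that assumption, with hypothetical names:

```python
def accept_with_early_stopping(level_factors, accept_bound):
    """Pseudo-marginal acceptance check exploiting monotonicity: each
    factor lies in [0,1], so the running product is non-increasing and
    the test can stop as soon as acceptance becomes impossible.
    `accept_bound` bundles the uniform draw and the other, fixed,
    terms of the Metropolis-Hastings ratio."""
    running = 1.0
    for k, factor in enumerate(level_factors):
        running *= factor
        if running < accept_bound:   # can never recover: reject now
            return False, k + 1      # number of levels actually computed
    return True, len(level_factors)
```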
November 24, 2016 at 12:20 pm
Hi Christian, thanks for reading and blogging about our paper!
In reply to a few of your comments:
* The idea with the latent variables is that we assume they are iid U(0,1). There’s not much loss of generality, as they can be transformed to other distributions. One way of thinking about these latent variables is that they represent all the uniform draws required by the simulator (see the toy sketch after this list).
* You’re right that there are still several calibration choices, but using slice sampling moves avoids many which would be needed if we used Metropolis-Hastings in its place.
* Yes the algorithm ends up being quite complicated, or at least having several layers. When we started the project we envisioned something simpler than this! Hopefully there’s scope to improve on this in future. One contribution of the paper is to illustrate (along with other recent work) the efficiency improvements that are possible from latent variable methods in ABC, and give some rough asymptotics on this.
* Ewan Cameron mentioned the connection to nested sampling as well. I hope to think about this before the BIRS workshop.
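To make the first point concrete, a toy sketch (names purely illustrative) of a simulator expressed as a deterministic map of its uniform draws:

```python
import numpy as np
from scipy.stats import norm

def simulator(theta, u):
    """Toy simulator written as a deterministic transform of its latent
    uniforms: Gaussian innovations are obtained by inverting the normal
    CDF at the U(0,1) draws, then accumulated into a random walk."""
    innovations = theta * norm.ppf(u)   # theta plays the role of a scale
    return np.cumsum(innovations)

rng = np.random.default_rng(1)
u = rng.uniform(size=100)   # *all* the randomness the simulator needs
x = simulator(0.7, u)       # fixing u makes the simulator deterministic
```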
November 24, 2016 at 12:52 pm
Thanks Dennis! I now see a trend in writing models for ABC as transforms of a vector of uniforms, from Meeds and Welling to Graham and Storkey to your paper. What about connecting with quasi-Monte Carlo as well?
November 24, 2016 at 5:04 pm
I worry that QMC won’t be much help in general, as the dimension of the latent variables is too high. There may be some specific applications where it’s very useful though.
November 24, 2016 at 2:03 pm
I also hope to finish reading your paper by then … it’s a long one!