## exact ABC

**M**inh-Ngoc Tran and Robert Kohn have devised an “exact” ABC algorithm. They claim in their paper to remove the error due to the non-zero tolerance by using an unbiased estimator of the likelihood. Most interestingly, they start from the debiasing technique of Rhee and Glynn [also at the basis of the Russian roulette], which amounts to writing the limit of a sequence of converging biased estimates as a telescoping sum, truncating that infinite sum at a random stopping time, and reweighting the surviving terms so that unbiasedness is preserved.
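The Rhee–Glynn construction can be sketched in a few lines. As a toy stand-in for the biased ABC approximations, I take the deterministic partial sums Y_i = Σ_{k≤i} 1/k!, which converge to e; the geometric stopping rule and the choice of target are purely illustrative and not taken from the paper:

```python
import math, random

def debiased_estimate(p=0.5):
    """One draw of a Rhee-Glynn debiased estimator (toy version).

    Biased 'estimates' Y_i = sum_{k<=i} 1/k! converge to e. The
    telescoping sum of increments Delta_i = Y_i - Y_{i-1} = 1/i! is
    truncated at a geometric time N with P(N >= i) = (1-p)^i, and each
    increment is reweighted by 1/P(N >= i), which makes the randomized
    estimator unbiased for the limit e.
    """
    z = 1.0          # Y_0 = 1/0!
    i = 0
    term = 1.0
    while random.random() < 1.0 - p:   # continue with probability 1-p
        i += 1
        term /= i                      # Delta_i = 1/i!
        z += term / (1.0 - p) ** i     # reweight by 1 / P(N >= i)
    return z

random.seed(1)
n_rep = 200_000
est = sum(debiased_estimate() for _ in range(n_rep)) / n_rep
# the average over many replicates concentrates around e = 2.71828...
```

Each draw touches only a random finite number of terms, yet the estimator has no truncation bias; the price is the extra variance injected by the random stopping time.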

“Our article proposes an ABC algorithm to estimate [the observed likelihood] that completely removes the error due to [the ABC] approximation…”

The sequence of biased but converging approximations is associated with a sequence of decreasing tolerances, and the corresponding sequence of weights that determines the truncation of the series is connected to the decrease of the bias only implicitly in all realistic settings. Theorem 1 does produce conditions on the ABC kernel and on the sequences of tolerances and pseudo-sample sizes that guarantee unbiasedness and finite variance of the likelihood estimate: for a geometric stopping rule with rejection probability p, both tolerance and pseudo-sample size decrease as a power of p. As a side product, the method also returns an unbiased estimate of the evidence.

The overall difficulty I have with the approach is its dependence on the stopping rule and its calibration, and the resulting impact on the computing time of the likelihood estimate. When this estimate is used in a pseudo-marginal scheme à la Andrieu and Roberts (2009), I fear it requires new pseudo-samples at each iteration of the Metropolis-Hastings algorithm, which then becomes prohibitively expensive. Later today, Mark Girolami pointed out to me that Anne-Marie Lyne [one of the authors of the Russian roulette paper] also considered this exact approach in her thesis and concluded that it entails an infinite expected computing time.
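To make the pseudo-marginal connection concrete, here is a minimal sketch of a Metropolis-Hastings chain driven by a noisy likelihood, in the spirit of Andrieu and Roberts (2009); the Gaussian model, the multiplicative-noise mechanism standing in for the ABC estimate, and all function names are my own illustration, not Tran and Kohn's algorithm. The chain only ever sees a non-negative unbiased estimate of the likelihood, re-computed at each proposed value (hence the cost concern above), while the estimate at the current value is recycled, which is what keeps the chain exact:

```python
import math, random

def pseudo_marginal_mh(loglik_hat, logprior, theta0, n_iter=5000, step=0.5):
    """Pseudo-marginal Metropolis-Hastings sketch.

    loglik_hat(theta) returns the log of a non-negative unbiased
    estimate of the likelihood. A fresh estimate is drawn at every
    proposal; the current one is reused, never refreshed.
    """
    theta, ll = theta0, loglik_hat(theta0)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * random.gauss(0.0, 1.0)   # symmetric proposal
        ll_prop = loglik_hat(prop)                     # new pseudo-samples here
        log_alpha = ll_prop + logprior(prop) - ll - logprior(theta)
        if math.log(random.random()) < log_alpha:
            theta, ll = prop, ll_prop
        chain.append(theta)
    return chain

# Illustrative model: one observation y ~ N(theta, 1), vague N(0, 100) prior.
y = 1.3

def loglik_hat(theta, noise_sd=0.5):
    exact = -0.5 * (y - theta) ** 2          # log N(y; theta, 1), up to a constant
    # multiply the likelihood by log-normal noise with mean one:
    # the estimate stays unbiased on the natural scale
    return exact + random.gauss(0.0, noise_sd) - 0.5 * noise_sd ** 2

logprior = lambda t: -0.5 * t ** 2 / 100.0

random.seed(2)
chain = pseudo_marginal_mh(loglik_hat, logprior, theta0=0.0)
post_mean = sum(chain[1000:]) / len(chain[1000:])
```

Despite the noise in the likelihood evaluations, the invariant distribution is the exact posterior; noisier estimates only slow the mixing down, which is exactly where an expensive-per-evaluation ABC estimate hurts.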

January 21, 2016 at 2:52 pm

I looked at this idea of debiasing ABC estimates in my PhD thesis, as it followed on nicely from our work using debiased estimates in pseudo-marginal MCMC schemes (https://projecteuclid.org/euclid.ss/1449670853).

Unfortunately, I found that the results of Rhee and Glynn showing both finite variance and finite expected compute time of debiased estimators do not apply in the ABC situation. Essentially, this is because the computation required to produce each successive estimate Y_i in the infinite series increases too fast relative to the rate at which E[(Y_i – Y)^2] converges to zero. The stopping distribution can thus be chosen so that the variance is finite, but then the expected compute time will not be.
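This trade-off can be illustrated with a back-of-the-envelope computation; the rates below are hypothetical, chosen only to exhibit the mechanism. Writing t_i for the cost of producing Y_i and ν_i for E[(Y_i – Y)^2], Cauchy-Schwarz bounds the product of the expected cost, Σ t_i P(N ≥ i), and the variance-type sum, Σ ν_i / P(N ≥ i), from below by (Σ √(t_i ν_i))², so whenever √(t_i ν_i) fails to be summable, no choice of truncation distribution makes both finite:

```python
import math

def sqrt_work_mse(i, cost_base=4.0, mse_decay=0.5):
    """sqrt(t_i * nu_i) for hypothetical rates t_i = 4^i, nu_i = 2^-i.

    Here the per-level cost grows faster than the MSE shrinks, the
    regime described above for ABC, and sqrt(t_i * nu_i) = sqrt(2)^i.
    """
    return math.sqrt(cost_base ** i * mse_decay ** i)

# partial sums of sqrt(t_i * nu_i): they grow without bound, so finite
# variance and finite expected compute time cannot hold simultaneously
partial = [sum(sqrt_work_mse(i) for i in range(n)) for n in (5, 10, 20)]
```

With cost growing like 4^i against an MSE shrinking like 2^-i, the partial sums diverge geometrically, matching the conclusion that any finite-variance truncation law forces an infinite expected compute time.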

After this finding (and some lengthy simulations!) I concluded that the method is only of use for unrealistically small problems, and you’re unlikely to have resorted to ABC unless your problem is fairly large-scale to start with!