Unfortunately, I found that the results of Rhee and Glynn, which guarantee both finite variance and finite expected compute time for debiasing estimators, do not apply in the ABC setting. Essentially, this is because the computation required to produce each successive estimate Y_i in the infinite series grows too fast relative to the rate at which E[(Y_i – Y)^2] converges. The truncation distribution can be chosen so that the variance is finite, but then the expected compute time will not be.
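To make the trade-off concrete, here is a minimal sketch of a Rhee–Glynn "coupled sum" estimator on a toy problem (not ABC, and not the author's code): the target is E[X] for X ~ N(0, 1), the approximations Y_i are means of 2^i samples, and the truncation level is geometric with parameter p. All of these choices are illustrative assumptions.

```python
import math
import random


def debiased_estimate(p=0.6, rng=None):
    """One draw of a Rhee-Glynn 'coupled sum' debiasing estimator.

    Toy setup (an assumption for illustration, NOT ABC): the target is
    E[X] for X ~ N(0, 1), and the approximation sequence is
    Y_i = mean of 2**i samples.  The truncation level N is geometric,
    so P(N >= i) = (1 - p)**i.
    """
    rng = rng or random.Random()
    # Draw the geometric truncation level N.
    N = 0
    while rng.random() > p:
        N += 1
    samples = []
    Z = 0.0       # the unbiased estimate being accumulated
    prev = 0.0    # Y_{i-1}, with Y_{-1} taken to be 0
    cost = 0      # total number of simulator calls ("compute time")
    for i in range(N + 1):
        # Level i reuses earlier samples and tops up to 2**i of them,
        # so the incremental cost of level i is roughly 2**(i - 1).
        while len(samples) < 2 ** i:
            samples.append(rng.gauss(0.0, 1.0))
            cost += 1
        Y_i = sum(samples) / len(samples)
        # Coupled-sum term: (Y_i - Y_{i-1}) / P(N >= i).
        Z += (Y_i - prev) / (1 - p) ** i
        prev = Y_i
    return Z, cost
```

In this toy, the cost of level i grows like 2^i while E[(Y_i – Y)^2] shrinks like 2^{-i}, so finite variance requires (1 – p) > 1/2 while finite expected cost requires (1 – p) < 1/2: no choice of p gives both, which is exactly the tension described above when per-level cost grows as fast as the mean-squared error shrinks.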

After this finding (and some lengthy simulations!) I concluded that the method is only useful for unrealistically small problems, and you're unlikely to have resorted to ABC unless your problem was fairly large-scale to begin with!
