improved convergence of regression-adjusted ABC
“These results highlight the tension in ABC between choices of the summary statistics and bandwidth that will lead to more accurate inferences when using the ABC posterior, against choices that will reduce the computational cost or Monte Carlo error of algorithms for sampling from the ABC posterior.”
Wentao Li and Paul Fearnhead have arXived a new paper on the asymptotics of ABC that shows the benefit of including the Beaumont et al. (2002) post-processing step. This step modifies the simulated values of the parameter θ by a regression adjustment, moving each value to what it would have been had the corresponding simulated summary statistic equalled the observed summary. Under some assumptions on the model and summary statistics, the regression step allows for a tolerance ε of order O(n^{-1/2}), n being the sample size, or even ε = o(n^{-3/10}), while keeping the Monte Carlo noise at a negligible level and driving the acceptance probability to 1 as n grows, in the sense that the adjusted ABC estimator satisfies the same CLT as the true Bayes estimate (Theorem 3.1). As such this is a genuine improvement over our respective recent results (which both lead to Proposition 3.1 in the current paper), provided the implementation does not require significant additional computing time (and I do not see why it should). Surprisingly, the Monte Carlo effort (or sample size N) does not seem to matter at all if the tolerance is indeed of order O(n^{-1/2}), while I am under the impression that it should increase with n, since otherwise the Monte Carlo error dominates. Note also that the regression adjustment is linear here, rather than the local or non-parametric version of the original Beaumont et al. (2002).
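For readers unfamiliar with the adjustment, here is a minimal sketch of the linear version on a toy Gaussian model of my own choosing (not an example from the paper): rejection ABC with the sample mean as summary, followed by a least-squares regression of θ on the summary among the accepted draws, each θ then being shifted as if its summary had equalled the observed one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (illustration only): data are N(theta, 1), summary statistic is
# the sample mean, prior theta ~ N(0, 10).
n = 100
theta_true = 1.5
s_obs = rng.normal(theta_true, 1, size=n).mean()

# Plain rejection ABC: simulate from the prior, keep draws whose summary
# falls within the tolerance band around the observed summary.
N = 20000
theta = rng.normal(0, np.sqrt(10), size=N)     # prior draws
s = rng.normal(theta, 1 / np.sqrt(n))          # summary of simulated data
eps = 0.1                                      # tolerance
keep = np.abs(s - s_obs) < eps
theta_acc, s_acc = theta[keep], s[keep]

# Linear regression adjustment in the spirit of Beaumont et al. (2002):
# fit theta ≈ beta0 + beta1 * s on the accepted pairs, then move each
# accepted theta to its fitted value at s = s_obs.
X = np.column_stack([np.ones(s_acc.size), s_acc])
beta = np.linalg.lstsq(X, theta_acc, rcond=None)[0]
theta_adj = theta_acc - beta[1] * (s_acc - s_obs)

# The adjusted sample is more concentrated than the raw accepted sample.
print(theta_acc.std(), theta_adj.std())
```

The adjustment removes the component of posterior spread due to the tolerance band, which is why the paper can afford a larger ε without degrading the estimator.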