Semi-automatic ABC [revised]
Paul Fearnhead and Dennis Prangle have posted a revised version of their semi-automatic ABC paper. Compared with the earlier version commented upon in that post, the paper makes a better case for the ABC algorithm, considered there from a purely inferential viewpoint and calibrated for estimation purposes. In particular, the paper contains an important result in the form of a consistency theorem, showing that ABC is a convergent estimation method when the number of observations or datasets grows to infinity. I had not seen this result before and it definitely is an argument to remember when presenting ABC methods to newcomers.
Of course, I still remain skeptical about the “optimality” resulting from the choice of summary statistics in the paper, partly because
- practice shows that proper approximation to genuine posterior distributions stems from using a (much) larger number of summary statistics than the dimension of the parameter;
- the validity of the approximation to the optimal summary statistics depends on the quality of the pilot run;
- important inferential issues like model choice are not covered by this approach.
But, nonetheless, the paper provides a way to construct default summary statistics that should come as a supplement to summary statistics provided by the experts, if not as a substitute.
The new version of Section 3 is much more satisfactory with respect to the criticisms I voiced earlier, spelling out the computing cost and, more importantly, the connection with indirect inference. A clear strength of the paper remains Section 4, which provides a major simulation experiment. My only criticism is the absence of a phylogeny example that would relate to the models that launched ABC methods. This is less of a mainstream statistics example, but it would be highly convincing to those primary users of ABC.
In conclusion, I find the paper both exciting and one that brings new questions to the fore. The appeal of this new field and the particularly hotly debated issue of the choice of summary statistics will certainly create the opportunity for a wide discussion by the ABC community, were it to become a discussion paper.
April 18, 2011 at 9:02 am
Thank you, Ajay, for pointing out your paper that I missed on the arXiv newslist! I will take a look at it as soon as possible.
April 18, 2011 at 8:21 am
Just to mention that there is some recent work on consistency of ABC in the following paper (indeed on the noisy ABC algorithm of the above paper): arXiv:1103.5399.