ABC with neural nets
Blum and François proposed on arXiv last September a generalisation of Beaumont et al.'s (Genetics, 2002) ABC, where the local linear regression of the parameter θ on the sufficient (or summary) statistics s is replaced by a nonlinear regression with heteroskedasticity. The nonlinear mean and variance are estimated by a neural net with one hidden layer, using the R package nnet. The result is interesting in that it seems to allow for the inclusion of more or even all of the simulated pairs (θ,s), compared with Beaumont et al. (2002). This is somewhat to be expected, since the nonlinear fit adapts differently to different parts of the space. Weighting the simulated s′ by a kernel Kδ(s−s′) is therefore not very relevant, and it is thus not surprising that the window δ is not influential, in contrast with basic ABC and even with Beaumont et al. (2002), where δ has a different meaning. I do like the nonparametric perspective adopted in the paper, even though neural nets are not the only possibility, since more generic (or more statistical) estimation techniques could be used instead.
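To illustrate the regression-adjustment idea underlying both papers, here is a minimal sketch in Python with numpy on a toy Gaussian-mean model (the model, prior, and all settings are hypothetical choices of mine, not from the paper). It fits the conditional mean m(s) of θ given s on the accepted pairs and shifts each accepted θ to θ − m(s) + m(s_obs); a plain linear least-squares fit stands in for Beaumont et al.'s local linear regression, which Blum and François replace with a neural net.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): theta ~ Uniform(-5, 5), data are n_obs draws
# from N(theta, 1), summary statistic s = sample mean.
n_obs = 20
theta_true = 1.5
s_obs = rng.normal(theta_true, 1.0, n_obs).mean()

# Reference table of simulated (theta, s) pairs
n_sim = 5000
theta = rng.uniform(-5, 5, n_sim)
s = rng.normal(theta, 1.0, (n_obs, n_sim)).mean(axis=0)

# Rejection ABC: keep draws whose summaries fall closest to s_obs
eps = np.quantile(np.abs(s - s_obs), 0.1)
keep = np.abs(s - s_obs) <= eps
theta_acc, s_acc = theta[keep], s[keep]

# Regression adjustment: fit m(s) on accepted pairs, then shift
# theta -> theta - m(s) + m(s_obs).  A global linear fit is used here
# as a simple stand-in for the local linear / neural-net regressions.
X = np.column_stack([np.ones_like(s_acc), s_acc])
beta, *_ = np.linalg.lstsq(X, theta_acc, rcond=None)
m_s = X @ beta
m_obs = beta[0] + beta[1] * s_obs
theta_adj = theta_acc - m_s + m_obs

print("posterior mean (adjusted):", theta_adj.mean())
print("spread before/after:", theta_acc.std(), theta_adj.std())
```

Since the adjusted sample is the regression residuals recentred at m(s_obs), its spread can never exceed that of the raw accepted draws, which is why the adjustment lets one be more generous with the tolerance (or, in the nonlinear version, with the whole reference table).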
The part of the paper I understand less well is the adaptive feature. While we did advertise adaptivity in our ABC-PMC paper, the adaptive stage here is restricted to a single step and seems to only involve a restriction on the support of s. This is rather surprising, in that importance sampling usually cannot operate on restricted samples, because the fundamental importance identity is then lost.
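To spell out the concern in the notation of a generic importance sampler (my notation, not the paper's): the identity

```latex
\mathbb{E}_{\pi}\left[h(\theta)\right]
  = \int h(\theta)\,\frac{\pi(\theta)}{q(\theta)}\,q(\theta)\,\mathrm{d}\theta
```

requires the support of the proposal q to cover the support of the target π. If q is instead confined to a subset A of that support, the same weighted average no longer targets the full expectation but the truncated quantity

```latex
\frac{\int_{A} h(\theta)\,\pi(\theta)\,\mathrm{d}\theta}
     {\int_{A} \pi(\theta)\,\mathrm{d}\theta},
```

which is biased for the original expectation unless π(A)=1 or the weights are corrected by the (usually unknown) mass π(A).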