> “One aim is to extend the approach of Sisson et al. (2007) to provide an algorithm that is robust to implement.” (C.C. Drovandi & A.N. Pettitt)

A paper by Drovandi and Pettitt appeared in the Early View section of *Biometrics*. It uses a combination of particles and of MCMC moves to adapt to the true target, with an acceptance probability

$$\min\left\{1,\ \frac{\pi(\theta^*)\,q(\theta\mid\theta^*)}{\pi(\theta)\,q(\theta^*\mid\theta)}\,\mathbb{I}\left\{\rho(s^*,s_{\mathrm{obs}})\le\epsilon_t\right\}\right\}$$

where θ* is the proposed value and θ is the current value (picked at random from the particle population), while *q* is a proposal kernel used to simulate the proposed value. The algorithm is adaptive in that the previous population of particles is used to make the choice of the proposal *q*, as well as of the tolerance level ε_t. Although the method is valid as a particle system applied in the ABC setting, I find it difficult to gauge the level of novelty of the method (here applied to a model of Riley et al., 2003, *J. Theoretical Biology*). Learning from previous particle populations to build a better kernel

*q* is indeed a constant feature of SMC methods, from Sisson et al.’s ABC-PRC (2007)—note that Drovandi and Pettitt mistakenly believe the ABC-PRC method to include partial rejection control, as argued in this earlier post—to Beaumont et al.’s ABC-PMC (2009). The paper also advances the idea of adapting the tolerance on-line as an α-quantile of the previous particle population, but this is the same idea as in Del Moral et al.’s ABC-SMC. The only strong methodological difference, as far as I can tell, is that the MCMC steps are repeated “numerous times” in the current paper, instead of once as in the earlier papers. This, however, partly cancels the appeal of an O(*N*) method versus the O(*N²*) PMC and SMC methods. An interesting remark made in the paper is that more advances are needed in cases where simulating the pseudo-observations is highly costly, as in Ising models. However, replacing exact simulation [as we did in the model choice paper] with a Gibbs sampler cannot be *that* detrimental.
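To fix ideas, the two ingredients discussed above—repeated ABC-MCMC moves on each particle, with the tolerance adapted on-line as a quantile of the previous particle population—can be sketched as follows. This is a minimal toy sketch and not the authors’ implementation: the one-dimensional Gaussian model, the sample-mean summary, the prior, and all tuning constants (N, α, the number R of MCMC repeats) are my own assumptions for illustration.

```python
import random
import math

random.seed(1)

# --- toy setup (my own assumptions, not from the paper) ---
# data: y_i ~ N(theta_true, 1); summary statistic = sample mean
theta_true = 2.0
n_obs = 50
y = [random.gauss(theta_true, 1.0) for _ in range(n_obs)]
s_obs = sum(y) / n_obs

def prior_logpdf(theta):
    # N(0, 10^2) prior, up to an additive constant
    return -0.5 * (theta / 10.0) ** 2

def simulate_summary(theta):
    # simulate pseudo-observations and return their summary
    z = [random.gauss(theta, 1.0) for _ in range(n_obs)]
    return sum(z) / n_obs

def distance(s):
    return abs(s - s_obs)

# --- initialise the particle population from the prior ---
N = 500
particles = [(th, distance(simulate_summary(th)))
             for th in (random.gauss(0.0, 10.0) for _ in range(N))]

alpha = 0.5   # quantile used to adapt the tolerance on-line
R = 5         # MCMC moves repeated "numerous times" per particle

for t in range(10):
    # adapt the tolerance as the alpha-quantile of current distances
    dists = sorted(d for _, d in particles)
    eps_t = dists[int(alpha * N)]
    # keep particles within the tolerance, resample the rest from them
    alive = [p for p in particles if p[1] <= eps_t]
    particles = [random.choice(alive) for _ in range(N)]
    # proposal scale learned from the previous particle population
    thetas = [th for th, _ in particles]
    mean = sum(thetas) / N
    sd = math.sqrt(sum((th - mean) ** 2 for th in thetas) / N) + 1e-12
    # ABC-MCMC move with the acceptance probability above; the Gaussian
    # random walk is symmetric, so the q terms cancel in the ratio
    new_particles = []
    for th, d in particles:
        for _ in range(R):
            th_prop = random.gauss(th, sd)
            d_prop = distance(simulate_summary(th_prop))
            log_ratio = prior_logpdf(th_prop) - prior_logpdf(th)
            if d_prop <= eps_t and random.random() < math.exp(min(0.0, log_ratio)):
                th, d = th_prop, d_prop
        new_particles.append((th, d))
    particles = new_particles

est = sum(th for th, _ in particles) / N
print(f"posterior mean estimate: {est:.2f} (true theta = {theta_true})")
```

Note that each of the R inner moves costs a fresh simulation of the pseudo-observations, which is where the O(*N*) versus O(*N²*) comparison loses some of its bite when the simulator is expensive.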