Archive for Approximate Bayesian computation

adaptive ABC tolerance

Posted in Books, Statistics, University life on June 2, 2020 by xi'an

“There are three common approaches for selecting the tolerance sequence (…) [they] can lead to inefficient sampling”

Umberto Simola, Jessi Cisewski-Kehe, Michael Gutmann and Jukka Corander recently arXived a paper entitled Adaptive Approximate Bayesian Computation Tolerance Selection. I appreciate that they start from our ABC-PMC paper, i.e., Beaumont et al. (2009) [although the representation that the ABC tolerances are fixed in advance is somewhat incorrect, in that our code set the tolerances from quantiles of the simulated distances]. This quantile-based approach is also the one advocated by the current paper for its initialisation step, albeit in a somewhat vague manner. Subsequent steps are based on the proximity between the successive approximations to the ABC posterior, more exactly on a quantile derived from the maximum of the ratio between two estimated successive ABC posteriors, mimicking the Accept-Reject step, if always one step too late. The iteration stops when the ratio is almost one, possibly missing the target due to Monte Carlo variability. (Recall that the “optimal” tolerance is not zero for a finite sample size.)
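For concreteness, here is a minimal sketch of such a quantile-based tolerance schedule on a toy Normal mean problem; the model, prior, perturbation kernel and all numerical settings are my own choices for illustration, not the authors':

```python
import numpy as np

rng = np.random.default_rng(0)

def quantile_tolerances(y_obs, n_keep=200, n_steps=4, q=0.5):
    """Sequential rejection ABC for the mean of a N(theta, 1) model, with
    each new tolerance set as a quantile of the currently accepted distances
    rather than being fixed in advance."""
    # initial step: plain rejection ABC from the prior theta ~ N(0, 10)
    thetas = rng.normal(0, 10, size=20000)
    dists = np.abs(rng.normal(thetas, 1) - y_obs)
    tolerances = []
    for _ in range(n_steps):
        keep = np.argsort(dists)[:n_keep]
        thetas, dists = thetas[keep], dists[keep]
        eps = np.quantile(dists, q)  # next tolerance: quantile of kept distances
        tolerances.append(eps)
        # move particles around the survivors and re-simulate under the new eps
        thetas = rng.choice(thetas, size=20000) + rng.normal(0, 1, size=20000)
        dists = np.abs(rng.normal(thetas, 1) - y_obs)
        thetas, dists = thetas[dists <= eps], dists[dists <= eps]
    return np.array(tolerances)

tols = quantile_tolerances(y_obs=1.5)
# the resulting tolerance sequence is non-increasing by construction
```

Since each quantile is taken over distances already below the previous tolerance, the sequence can only decrease, at the cost of a shrinking acceptance rate.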

“…the decrease in the acceptance rate is mitigated by the improvement in the proposed particles.”

A problem is that the method depends on the form of the density approximation and requires non-parametric, hence imprecise, estimation steps. Maybe variational encoders could help there. The approach of Sugiyama et al. (2012), of which I knew nothing, is interesting: the core idea is that the ratio of two densities is also the solution to minimising a distance between the numerator density and a variable function times the denominator density. However, since only the maximum of the ratio is needed, a more focused approach could be devised, rather than first approximating the ratio and then maximising the estimated ratio. Maybe the solution of Goffinet et al. (1992) for estimating an Accept-Reject constant could work.
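As a crude illustration of why the non-parametric step worries me, here is the stopping rule as I understand it, with scipy's Gaussian kernel density estimator standing in for whatever estimator is actually used in the paper (function names, grid, and samples are mine):

```python
import numpy as np
from scipy.stats import gaussian_kde

def max_posterior_ratio(sample_prev, sample_new, grid_size=512):
    """Estimate two successive ABC posteriors non-parametrically and return
    the maximum of their ratio over the support of the newest sample; the
    iterations stop once this maximum is close enough to one."""
    kde_prev = gaussian_kde(sample_prev)
    kde_new = gaussian_kde(sample_new)
    grid = np.linspace(sample_new.min(), sample_new.max(), grid_size)
    return (kde_prev(grid) / kde_new(grid)).max()

rng = np.random.default_rng(1)
tight = rng.normal(0.0, 1.0, 2000)
wide = rng.normal(0.0, 3.0, 2000)
# successive posteriors still differ: the maximum ratio sits well above one
print(max_posterior_ratio(wide, tight))
# identical samples: the ratio is one everywhere, so the rule would stop
print(max_posterior_ratio(tight, tight))
```

The Monte Carlo noise in both kernel estimates propagates directly to the maximum of the ratio, which is precisely the imprecision that bothers me.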

A further comment is that the estimated density is not properly normalised, which weakens the Accept-Reject analogy, since the optimum may well stand above one and thus stop the algorithm “too soon”. (Incidentally, the paper contains the mixture example of Sisson et al. (2007), for which our own graphs were strongly criticised during our Biometrika submission!)

PhD position for research in ABC in Chalmers University

Posted in Statistics on May 27, 2020 by xi'an

[Posting a call for PhD candidates from Umberto Picchini, as the deadline is June 1, next Monday!]

A PhD student position in mathematical statistics on simulation-based inference methods for models with an “intractable” likelihood is available at the Dept. Mathematical Sciences, Chalmers University, Gothenburg (Sweden).

You will be part of an international collaboration to create new methodology bridging between simulation-based inference (such as approximate Bayesian computation and other likelihood-free methods) and deep neural networks. The goal is to ease inference for stochastic modelling.

Details on the project and the essential requirements are at

The PhD student position is fully funded and runs for up to 5 years in the dynamic and international city of Gothenburg, the second largest city in Sweden. As a PhD student in Mathematical Sciences you will have opportunities for many inspiring conversations, a lot of autonomous work, and some travel.

The position will be supervised by Assoc. Prof. Umberto Picchini.

Apply by 01 June 2020 following the instructions at

For informal enquiries, please get in touch with Umberto Picchini

my demonic talk

Posted in Statistics on May 13, 2020 by xi'an

from Svalbard [with snow]

Posted in Statistics on April 25, 2020 by xi'an

ABC webinar, first!

Posted in Books, pictures, Statistics, University life on April 13, 2020 by xi'an


The première of the ABC World Seminar last Thursday was most successful! It took place at the scheduled time, with no technical interruption, and allowed 130⁺ participants from most of the World [sorry, West Coast friends!] to listen to the first speaker, Dennis Prangle, presenting normalising flows and distilled importance sampling, and to ask him questions. As I had already commented on the earlier version of his paper, I will not reproduce my remarks here. In short, I remain uncertain, albeit not skeptical, about the use of normalising flows and variational encoders for estimating densities, when perceived as non-parametric estimators, due to the large number of parameters they involve, and I wonder about the availability of convergence rates. Incidentally, I had forgotten about the remarkable link between the KL distance and importance sampling variability. Adding Müller et al. (2018) on neural importance sampling to the to-read list.
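That link can be spelled out on a toy Normal example (entirely mine): for target p and proposal q, the importance weight w = p/q satisfies E_q[w²] = 1 + χ²(p‖q), while KL(p‖q) = E_p[log w], so a proposal close to the target in KL terms also tames the variability of the weights:

```python
import numpy as np

rng = np.random.default_rng(2)

def log_w(x, s):
    """Log importance weight for target N(0, 1) against proposal N(0, s^2)."""
    return -0.5 * x**2 + 0.5 * (x / s) ** 2 + np.log(s)

results = {}
for s in (1.5, 3.0):
    x_q = rng.normal(0, s, size=100_000)   # draws from the proposal
    var_w = np.exp(log_w(x_q, s)).var()    # importance weight variability
    x_p = rng.normal(0, 1, size=100_000)   # draws from the target
    kl = log_w(x_p, s).mean()              # Monte Carlo estimate of KL(p||q)
    results[s] = (var_w, kl)

# both the KL divergence and the weight variance grow as q drifts away from p
```

This is of course only a Monte Carlo illustration of the connection, not of the distillation scheme itself.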