Archive for ABC


Posted in Books, Statistics, University life on October 8, 2019 by xi'an

In connection with the recent PhD thesis defence of Juliette Chevallier, in which I took part somewhat virtually, being physically in Warwick, I read a paper she wrote with Stéphanie Allassonnière on stochastic approximation versions of the EM algorithm. Computing the MAP estimator can be done via versions of EM adapted for simulated annealing, possibly using MCMC as for instance in the Monolix software and its MCMC-SAEM algorithm, where SA stands sometimes for stochastic approximation and sometimes for simulated annealing. The approach was originally developed by Gilles Celeux and Jean Diebolt, then reframed by Marc Lavielle and Eric Moulines [friends and coauthors], with an MCMC step because the simulation of the latent variables involves an intractable normalising constant. (Contrary to this paper, Umberto Picchini and Adeline Samson proposed in 2015 a genuine ABC version of this approach, a paper that I thought I had missed, although I now remember discussing it with Adeline at JSM in Seattle, in which ABC is used as a substitute for the conditional distribution of the latent variables given data and parameter, that is, as a substitute for the Q step of the (SA)EM algorithm. One more approximation step and one more simulation step and we would reach a form of ABC-Gibbs!) In this version, there are very few assumptions made on the approximating sequence, except that it converges with the iteration index to the true distribution (for a fixed observed sample) if convergence of ABC-SAEM is to happen. The paper takes as an illustrative sequence a collection of tempered versions of the true conditionals, but this is quite formal, as I cannot fathom a feasible simulation from the tempered version when simulation from the untempered one is infeasible. It is thus much more a version of tempered SAEM than truly connected with ABC (although a genuine ABC-EM version could be envisioned).
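To make the SAEM mechanics concrete, here is a minimal sketch on a hypothetical toy model (a two-component Gaussian mixture with unit variances and equal known weights, not the models of the paper): a simulation step drawing the latent labels from their conditional, a stochastic approximation step averaging the complete-data sufficient statistics with a decreasing step size, and a closed-form maximisation step.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: two-component Gaussian mixture, unit variances, equal weights
true_mu = np.array([-2.0, 2.0])
n = 500
z_true = rng.integers(0, 2, n)
y = rng.normal(true_mu[z_true], 1.0)

mu = np.array([-1.0, 1.0])   # initial guess for the component means
S1 = np.zeros(2)             # running count of labels per component
S2 = np.zeros(2)             # running sum of y per component
for t in range(1, 201):
    # simulation step: draw latent labels from their conditional given (y, mu)
    logp = -0.5 * (y[:, None] - mu[None, :]) ** 2
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    z = (rng.random(n) < p[:, 1]).astype(int)
    # stochastic approximation step with decreasing step size gamma_t = 1/t
    gamma = 1.0 / t
    s1 = np.array([(z == k).sum() for k in (0, 1)])
    s2 = np.array([y[z == k].sum() for k in (0, 1)])
    S1 += gamma * (s1 - S1)
    S2 += gamma * (s2 - S2)
    # maximisation step, available in closed form for this exponential family
    mu = S2 / np.maximum(S1, 1e-9)

print(np.sort(mu))  # should settle near the true means
```

Replacing the exact conditional in the simulation step with an MCMC kernel gives MCMC-SAEM, and replacing it with an ABC approximation of that conditional is the move discussed above.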

Hausdorff school on MCMC [28 March-02 April, 2020]

Posted in pictures, Statistics, Travel on September 26, 2019 by xi'an

The Hausdorff Centre for Mathematics will hold a week on recent advances in MCMC in Bonn, Germany, March 30 – April 3, 2020, preceded by two days of tutorials. (“These tutorials will introduce basic MCMC methods and mathematical tools for studying the convergence to the invariant measure.”) There is travel support available, but the application deadline, 30 September, is quite close.

Note that, in a Spring of German conferences, the SIAM Conference on Uncertainty Quantification will take place in Munich (Garching) the week before, on March 24-27, with at least one likelihood-free session. Not to mention the ABC in Grenoble workshop in France, on 19-20 March. (Although these places are not exactly nearby!)

ABC in Clermont-Ferrand

Posted in Mountains, pictures, Statistics, Travel, University life on September 20, 2019 by xi'an

Today I am taking part in a one-day workshop on ABC at the Université Clermont Auvergne, with applications to cosmostatistics, along with Martin Kilbinger [with whom I worked on PMC schemes], Florent Leclerc and Grégoire Aufort. This should prove a most exciting day! (With not enough time to run up Puy de Dôme in the morning, though.)

unimaginable scale culling

Posted in Books, pictures, Statistics, Travel on September 17, 2019 by xi'an

Despite the evidence brought by ABC of the inefficiency of culling the British Isles badger population in massive proportions against bovine tuberculosis, the [sorry excuse for a] United Kingdom government has permitted a massive expansion of badger culling, with up to 64,000 animals likely to be killed this autumn… Since cows are the primary vectors of the disease, what about starting with these captive animals?!

likelihood-free inference by ratio estimation

Posted in Books, Mountains, pictures, Running, Statistics, Travel, University life on September 9, 2019 by xi'an

“This approach for posterior estimation with generative models mirrors the approach of Gutmann and Hyvärinen (2012) for the estimation of unnormalised models. The main difference is that here we classify between two simulated data sets while Gutmann and Hyvärinen (2012) classified between the observed data and simulated reference data.”

A 2018 arXiv posting by Owen Thomas et al. (including my colleague at Warwick, Rito Dutta, CoI warning!) about estimating the likelihood (and the posterior) when it is intractable. Likelihood-free but not ABC, since the ratio of the likelihood to the marginal is estimated in a non- or semi-parametric (and biased) way. Following Geyer’s 1994 fabulous estimate of an unknown normalising constant via logistic regression, the current paper, which I read in preparation for my discussion of ABC optimal design in Salzburg, uses probabilistic classification and an exponential family representation of the ratio, opposing data from the density and data from the marginal, assuming both can be readily produced. The logistic regression minimizing the asymptotic classification error is the logistic transform of the log-ratio. For a finite (double) sample, this minimization thus leads to an empirical version of the ratio, or to a smooth version if the log-ratio is represented as a convex combination of summary statistics, turning the approximation into an exponential family, which is a clever way to close the loop towards ABC notions. And synthetic likelihood. Although with a difference in estimating the exponential family parameters β(θ) by minimizing the classification error, parameters that are indeed conditional on the parameter θ. Actually the paper introduces a further penalisation or regularisation term on those parameters β(θ), which could have been processed by a Bayesian Lasso instead. This step essentially drives the selection of the summaries, except that it occurs for each value of the parameter θ, at the expense of an X-validation step. This is quite an original approach, as far as I can tell, but I wonder at the link with more standard density estimation methods, in particular in terms of the precision of the resulting estimate (and the speed of convergence with the sample size, if convergence there is).
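The core Geyer trick is easy to sketch in numpy, with everything below a hypothetical toy rather than the paper's classifier: label draws from a “numerator” density p = N(0,1) as class 1 and draws from a “denominator” density q = N(1,1) as class 0, fit a logistic regression, and read the fitted logit as an estimate of log p(x)/q(x). For these two Gaussians the exact log-ratio is 1/2 − x, so the fitted intercept and slope should approach (0.5, −1).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# class 1: draws from the "numerator" density p = N(0,1)
# class 0: draws from the "denominator" density q = N(1,1)
xp = rng.normal(0.0, 1.0, n)
xq = rng.normal(1.0, 1.0, n)
x = np.concatenate([xp, xq])
t = np.concatenate([np.ones(n), np.zeros(n)])

# design matrix with intercept; logistic regression fitted by gradient ascent
X = np.column_stack([np.ones_like(x), x])
w = np.zeros(2)
for _ in range(2000):
    prob = 1.0 / (1.0 + np.exp(-X @ w))          # P(class 1 | x)
    w += 0.5 * X.T @ (t - prob) / len(t)         # ascent on the log-likelihood

# with equal sample sizes, the fitted logit X @ w estimates log p(x)/q(x);
# here the exact log-ratio is 1/2 - x, so w should be close to (0.5, -1.0)
print(w)
```

Replacing the linear feature x by a vector of summary statistics is what turns the fitted log-ratio into the exponential family representation discussed above.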

likelihood-free Bayesian design [SimStat 2019 discussion]

Posted in Statistics on September 5, 2019 by xi'an

off to SimStat2019, Salzburg

Posted in Mountains, Running, Statistics, University life on September 2, 2019 by xi'an

Today, I am off to Salzburg for the SimStat 2019 workshop, or more formally the 10th International Workshop on Simulation and Statistics, where I give a talk on ABC. The program of the workshop is quite rich and diverse, so I do not think I will have time to take advantage of the Hohe Tauern or the Berchtesgaden Alps to go climbing. Especially since I am also discussing papers in an ABC session.