## Archive for ABC-Gibbs

## off to Vancouver

Posted in Mountains, pictures, Running, Statistics, Travel, University life with tags ABC, ABC-Gibbs, approximate Bayesian inference, British Columbia, colloquium, Institute of Applied Mathematics, lottery, NeurIPS 2019, Pacific Institute for the Mathematical Sciences, PIMS, Squamish, The Chief, UBC, University of British Columbia, workshop on December 7, 2019 by xi'an**T**oday I am flying to Vancouver for an ABC workshop, the second Symposium on Advances in Approximate Bayesian Inference, a pre-NeurIPS workshop following five earlier editions, in some of which I took part. With an intense and exciting programme. I am not attending the following NeurIPS, as I had not submitted any paper (and was not considering relying on a lottery!). Instead, I will give a talk at ~~ABC~~ UBC on Monday at 4pm, as, coincidence, coincidence!, I was independently invited by UBC to the IAM-PIMS Distinguished Colloquium series. Speaking on ABC on a broader scale than in the workshop. Where I will focus on ABC-Gibbs. (With alas no time for climbing, missing an opportunity for a winter attempt at The Stawamus Chief!)

## ABC-SAEM

Posted in Books, Statistics, University life with tags ABC, ABC-Gibbs, ABC-MCMC, Alan Turing, École Polytechnique, EM, JSM 2015, MAP estimators, MCMC, MCMC-SAEM, Monolix, Paris-Saclay campus, PhD thesis, SAEM, Seattle, simulated annealing, stochastic approximation, University of Warwick, well-tempered algorithm on October 8, 2019 by xi'an**I**n connection with the recent PhD thesis defence of Juliette Chevallier, in which I took part somewhat virtually, being physically in Warwick, I read a paper she wrote with Stéphanie Allassonnière on stochastic approximation versions of the EM algorithm. Computing the MAP estimator can be done via versions of EM adapted to simulated annealing, possibly using MCMC, as for instance in the Monolix software and its MCMC-SAEM algorithm. Where SA stands sometimes for stochastic approximation and sometimes for simulated annealing, originally developed by Gilles Celeux and Jean Diebolt, then reframed by Marc Lavielle and Eric Moulines [friends and coauthors]. With an MCMC step because the simulation of the latent variables involves an intractable normalising constant. (Contrary to this paper, Umberto Picchini and Adeline Samson proposed in 2015 a genuine ABC version of this approach, a paper that I thought I had missed—although I now remember discussing it with Adeline at JSM in Seattle—in which ABC is used as a substitute for the conditional distribution of the latent variables given data and parameter, standing in for the Q step of the (SA)EM algorithm. One more approximation step and one more simulation step and we would reach a form of ABC-Gibbs!) In this version, very few assumptions are made on the approximating sequence, except that it converges with the iteration index to the true distribution (for a fixed observed sample) if convergence of ABC-SAEM is to happen.
The paper takes as an illustrative sequence a collection of tempered versions of the true conditionals, but this is quite formal as I cannot fathom a feasible simulation from the tempered version and not from the untempered one. It is thus much more a version of tempered SAEM than truly connected with ABC (although a genuine ABC-EM version could be envisioned).
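To make the ABC-within-SAEM idea concrete, here is a minimal sketch on a hypothetical toy model of my own choosing (a Gaussian latent mean; none of the settings come from either paper): the simulation of the latent variables in the E step is replaced by an ABC rejection step with a tolerance shrinking along iterations, followed by the usual stochastic-approximation update of the sufficient statistic and an M step.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical toy model: z_i ~ N(mu, 1) latent, y_i ~ N(z_i, 0.5^2) observed
n, sigma, mu_true = 50, 0.5, 2.0
z_true = rng.normal(mu_true, 1.0, n)
y = rng.normal(z_true, sigma)

def abc_latent(mu, y, eps, max_tries=100):
    """ABC substitute for the E step: draw z_i from p(z|mu) and keep it when
    the pseudo-observation it generates lands within eps of the datum.
    (If no draw is accepted within max_tries, the last one is kept — a
    pragmatic shortcut for this sketch.)"""
    z = np.empty_like(y)
    for i, yi in enumerate(y):
        for _ in range(max_tries):
            zi = rng.normal(mu, 1.0)
            if abs(rng.normal(zi, sigma) - yi) < eps:
                break
        z[i] = zi
    return z

mu, S = 0.0, 0.0
for k in range(1, 201):
    z = abc_latent(mu, y, eps=3 * sigma / np.sqrt(k))  # shrinking tolerance
    gamma = 1.0 / k                          # stochastic-approximation step size
    S = (1 - gamma) * S + gamma * z.mean()   # SA update of the sufficient statistic
    mu = S                                   # M step: MLE of mu given S
```

With the tolerance decreasing in the iteration index, the ABC draws approach the true conditional of the latent variables, which is exactly the convergence-of-the-approximating-sequence assumption made in the paper.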

## ABC in Clermont-Ferrand

Posted in Mountains, pictures, Statistics, Travel, University life with tags ABC, ABC-Gibbs, Approximate Bayesian computation, Auvergne, Clermont-Ferrand, conditional sufficiency, cosmostats, dimension reduction, Gibbs sampling, likelihood-free methods, PMC, volcano on September 20, 2019 by xi'an**T**oday I am taking part in a one-day workshop at the Université Clermont Auvergne on ABC. With applications to cosmostatistics, along with Martin Kilbinger [with whom I worked on PMC schemes], Florent Leclerc and Grégoire Aufort. This should prove a most exciting day! (With not enough time to run up Puy de Dôme in the morning, though.)

## likelihood-free approximate Gibbs sampling

Posted in Books, Statistics with tags ABC, ABC-Gibbs, ABC-within-Gibbs, curse of dimensionality, expectation-propagation, Gibbs sampling, local regression, neural network, summary statistics on June 19, 2019 by xi'an

“Low-dimensional regression-based models are constructed for each of these conditional distributions using synthetic (simulated) parameter value and summary statistic pairs, which then permit approximate Gibbs update steps (…) synthetic datasets are not generated during each sampler iteration, thereby providing efficiencies for expensive simulator models, and only require sufficient synthetic datasets to adequately construct the full conditional models (…) Construction of the approximate conditional distributions can exploit known structures of the high-dimensional posterior, where available, to considerably reduce computational overheads”

**G**uilherme Souza Rodrigues, David Nott, and Scott Sisson have just arXived a paper on approximate Gibbs sampling. Since this comes a few days after we posted our own version, here are some of the differences I could spot in the paper:

- Further references to earlier occurrences of Gibbs versions of ABC, esp. in cases when the likelihood function factorises into components and allows for summaries with lower dimensions. And even to ESP.
- More an ABC version of Gibbs sampling than a Gibbs version of ABC, in that approximations to the conditionals are first constructed and then used with no further corrections.
- Inherently related to regression post-processing à la Beaumont et al. (2002), in that the regression model is the starting point for designing an approximate full conditional, conditional on the "other" parameters and on the overall summary statistic. The construction of the approximation is far from automated. And may involve neural networks or other machine-learning estimates.
- As a consequence of the above, a preliminary ABC step to design the collection of approximate full conditionals using a single and all-purpose multidimensional summary statistic.
- Once the approximations are constructed, no further pseudo-data is generated.
- Drawing from the approximate full conditionals is done exactly, possibly via a bootstrapped version.
- Handling a highly complex g-and-k dynamic model with 13,140 unknown parameters, requiring a ten-day simulation.
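A minimal sketch of the overall scheme, on a hypothetical Normal toy model of my own choosing (the paper's regression constructions are far richer, involving for instance neural networks): (parameter, summary) pairs are simulated once upfront, linear-regression approximations of each full conditional are fitted to them, and the Gibbs sampler then draws from these fitted Gaussian conditionals, with no further pseudo-data generation.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical toy model: y_1..y_n ~ N(mu, sigma^2), summaries s = (mean, log sd)
n = 30

def summaries(y):
    return np.array([y.mean(), np.log(y.std())])

# observed data, generated with mu = 1, sigma = 2
y_obs = rng.normal(1.0, 2.0, n)
s_obs = summaries(y_obs)

# a single upfront batch of (parameter, summary) pairs -- no simulation
# takes place during the sampler itself
M = 5000
mu_tr = rng.normal(0.0, 2.0, M)          # prior on mu
ls_tr = rng.normal(0.0, 0.5, M)          # prior on log sigma
s_tr = np.array([summaries(rng.normal(m, np.exp(l), n))
                 for m, l in zip(mu_tr, ls_tr)])

def fit_conditional(target, other):
    """Linear-regression approximation of p(target | other, s(y))."""
    X = np.column_stack([np.ones(M), other, s_tr])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    sd = np.std(target - X @ beta)       # residual spread of the fit
    return beta, sd

beta_mu, sd_mu = fit_conditional(mu_tr, ls_tr)
beta_ls, sd_ls = fit_conditional(ls_tr, mu_tr)

# approximate Gibbs: draw from the fitted Gaussian conditionals, plugging in
# the observed summary and the current value of the other parameter
mu, ls = 0.0, 0.0
draws = []
for _ in range(3000):
    mu = np.array([1.0, ls, *s_obs]) @ beta_mu + rng.normal(0, sd_mu)
    ls = np.array([1.0, mu, *s_obs]) @ beta_ls + rng.normal(0, sd_ls)
    draws.append((mu, ls))
draws = np.array(draws)[500:]            # discard burn-in
```

The key efficiency gain is visible in the structure: the expensive simulator is called M times before the sampler starts, and never inside the Gibbs loop.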

“In certain circumstances it can be seen that the likelihood-free approximate Gibbs sampler will exactly target the true partial posterior (…) In this case, then Algorithms 2 and 3 will be exact.”

Convergence and coherence are handled in the paper by setting the algorithm(s) as noisy Monte Carlo versions, à la Alquier et al., although the issue of incompatibility between the full conditionals is acknowledged, with the main reference being the finite state space analysis of Chen and Ip (2015). It thus remains unclear whether or not the Gibbs samplers implemented there converge and, if they do, what the resulting stationary distribution signifies.

## A precursor of ABC-Gibbs

Posted in Books, R, Statistics with tags ABC, ABC-Gibbs, compatible conditional distributions, Genetics, Gibbs sampler, high dimensions, incoherent inference, incompatible conditionals, insufficiency, likelihood-free methods, sufficient statistics on June 7, 2019 by xi'an**F**ollowing our arXival of ABC-Gibbs, Dennis Prangle pointed out to us a 2016 paper by Athanasios Kousathanas, Christoph Leuenberger, Jonas Helfer, Mathieu Quinodoz, Matthieu Foll, and Daniel Wegmann, Likelihood-Free Inference in High-Dimensional Models, published in Genetics, Vol. 203, 893–904, in June 2016. This paper contains a version of ABC-Gibbs where parameters are sequentially simulated from conditionals that depend on the data only through low-dimensional conditionally sufficient statistics. I had actually blogged about this paper in 2015 but had since then completely forgotten about it. (The comments I had made at the time still hold, already pertaining to the coherence or lack thereof of the sampler. I had also forgotten I had run an experiment of an exact Gibbs sampler with incoherent conditionals, which then seemed to converge to something, if not the exact posterior.)

"All ABC algorithms, including ABC-PaSS introduced here, require that statistics are sufficient for estimating the parameters of a given model. As mentioned above, parameter-wise sufficient statistics as required by ABC-PaSS are trivial to find for distributions of the exponential family. Since many population genetics models do not follow such distributions, sufficient statistics are known for the most simple models only. For more realistic models involving multiple populations or population size changes, only approximately-sufficient statistics can be found."

While Gibbs sampling is not mentioned in the paper, this is indeed a form of ABC-Gibbs, with the advantage of not facing convergence issues thanks to the sufficiency. The drawback being that this setting is restricted to exponential families and hence difficult to extrapolate to non-exponential distributions, as using almost-sufficient (or not) summary statistics leads to incompatible conditionals and thus jeopardises the convergence of the sampler. When thinking a wee bit more about the case treated by Kousathanas et al., I am actually uncertain about the validation of the sampler. When the tolerance is equal to zero, this is not an issue as it reproduces the regular Gibbs sampler. Otherwise, each conditional ABC step amounts to introducing an auxiliary variable represented by the simulated summary statistic. Since the distribution of this summary statistic depends on more than the parameter for which it is sufficient, in general, it should also appear in the conditional distribution of the other parameters. At least from this Gibbs perspective, the sampler thus relies on incompatible conditionals, which makes the conditions proposed in our own paper all the more relevant.
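To illustrate the component-wise ABC updates discussed above, here is a minimal sketch on a hypothetical hierarchical Normal model (my own choice, not an example from Kousathanas et al.), where each parameter is refreshed by a rejection-ABC step driven by a low-dimensional conditionally sufficient statistic; the cap on proposal attempts, which keeps the current value on failure, is a pragmatic shortcut rather than part of any exact scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical hierarchical model: alpha ~ N(0, 3^2), mu_i | alpha ~ N(alpha, 1),
# y_i | mu_i ~ N(mu_i, 1); mean(mu) is sufficient for alpha given mu
n = 20
y = rng.normal(rng.normal(1.0, 1.0, n), 1.0)   # data generated with alpha = 1

def abc_draw(propose, simulate, target, eps, current, max_tries=200):
    """One rejection-ABC conditional draw: keep the current value when no
    proposal yields a pseudo-statistic within eps (a pragmatic cap)."""
    for _ in range(max_tries):
        prop = propose()
        if abs(simulate(prop) - target) < eps:
            return prop
    return current

eps = 0.2
alpha, mu = y.mean(), y.copy()                  # crude initialisation
keep = []
for it in range(1000):
    # ABC step for each mu_i, with y_i itself as the (sufficient) statistic
    for i in range(n):
        mu[i] = abc_draw(lambda: rng.normal(alpha, 1.0),
                         lambda m: rng.normal(m, 1.0), y[i], eps, mu[i])
    # ABC step for alpha with statistic mean(mu); simulating the mean directly
    # stands in for simulating n fresh mu's and averaging them
    alpha = abc_draw(lambda: rng.normal(0.0, 3.0),
                     lambda a: rng.normal(a, 1.0 / np.sqrt(n)),
                     mu.mean(), eps, alpha)
    keep.append(alpha)
alpha_post = np.mean(keep[200:])
```

With a positive eps, each accepted parameter implicitly carries along an auxiliary simulated statistic, which is precisely where the incompatibility between conditionals creeps in when the statistics are only approximately sufficient.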