Archive for likelihood-free methods

another mirror of ABC in Gre[e]noble

Posted in Statistics on March 3, 2020 by xi'an

There will now be a second mirror workshop of ABC in Grenoble, taking place at the Université de Montpellier, more precisely at the Alexander Grothendieck Montpellier Institute, Building 9, room 430 (4th floor), Triolet campus, and organised by my friend Jean-Michel Marin. Great to see a mirror at one of the major breeding places of ABC, where I personally heard of ABC for the first time and met several of the main A[B]Ctors..! The dates are 19-20 March, with talks transmitted from 9am to 5pm [GMT+1]. Since the video connection can accommodate 18 more mirrors, if anyone else is interested in organising another mirror, please contact me for technical details.

in Bristol for the day

Posted in pictures, Statistics, Travel, University life on February 28, 2020 by xi'an

I am in Bristol for the day, giving a seminar at the Department of Statistics, where I had not been for quite a while (and not since the Department moved to a beautifully renovated building). The talk is on ABC-Gibbs, the revision of which is on the verge of being resubmitted. (I also hope Greta will let me board my plane tonight…)

mirror of ABC in Grenoble

Posted in pictures, Statistics, Travel, University life on February 26, 2020 by xi'an

I am quite glad to announce that there will definitely be at least one mirror workshop of ABC in Grenoble, taking place at Warwick University, in the Zeeman building (room MS0.05), and organised by my colleague Rito Dutta. The dates are 19-20 March, with talks transmitted from 9am to 5pm [GMT+1]. Since the video connection can accommodate 19 more mirrors, if anyone is interested in organising another mirror, please contact me for technical details.

Bayesian inference with no likelihood

Posted in Books, Statistics, University life on January 28, 2020 by xi'an

This week I made a quick trip to Warwick for the defence (or viva) of the PhD thesis of Jack Jewson, containing novel perspectives on constructing Bayesian inference without likelihood or without complete trust in said likelihood. The thesis aimed at constructing minimum divergence posteriors in an M-open perspective and built a rather coherent framework from principles to implementation. There is a clear link with the earlier work of Bissiri et al. (2016), with the further consistency constraint that the outcome must recover the true posterior in the M-closed scenario (even if this is not always the case with the procedures proposed in the thesis).

Although I am partial to the use of empirical likelihoods in this setting, I appreciated the position of the thesis and the discussion of the various divergences towards the posterior derivation (already discussed on this blog), with interesting perspectives on the calibration of the pseudo-posterior à la Bissiri et al. (2016). Among other things, the thesis pointed out a departure from the likelihood principle and some of its most established consequences, like Bayesian additivity. In that regard, there were connections with generative adversarial networks (GANs) and their Bayesian versions that could have been explored. And an impression that the type of Bayesian robustness explored in the thesis has more to do with outliers than with misspecification. Epsilon-contamination models are quite specific as it happens, in terms of tails and other things.
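
For the record, the generalised Bayesian update of Bissiri et al. (2016) replaces the log-likelihood with a loss function ℓ, tempered by a calibration weight w,

$$\pi_w(\theta\mid x)\ \propto\ \pi(\theta)\,\exp\{-w\,\ell(\theta,x)\},$$

recovering the standard posterior when ℓ is the negative log-likelihood and w=1; the choice (or calibration) of w is precisely the delicate point revisited in the thesis.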

The next chapter is somewhat “less” Bayesian in my view as it considers a generalised form of variational inference. I agree that the view of the posterior as the solution to an optimisation problem is tempting, but changing the objective function makes the notion less precise. Which makes reading it somewhat delicate as it seems to dilute the meaning of both prior and posterior to the point of becoming irrelevant.
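
To make the optimisation view explicit (a generic representation, not specific to the thesis), the exact posterior solves

$$\pi(\cdot\mid x)\ =\ \arg\min_{q}\ \Big\{\,\mathbb{E}_{q}\big[-\log p(x\mid\theta)\big]\ +\ \mathrm{KL}\big(q\,\|\,\pi\big)\Big\},$$

the minimisation being over all distributions q on the parameter space; generalised variational inference then swaps the negative log-likelihood for another loss, the KL penalty for another divergence, and/or restricts q to a tractable family, which is exactly where the interpretation of the solution as a posterior starts to dissolve.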

The last chapter on change-point models is quite alluring in that it capitalises on the previous developments to analyse a fairly realistic if traditional problem, applied to traffic in London, prior and posterior to the congestion tax. However, there is always an issue with robustness and outliers in that the notion is somewhat vague or informal. Things start clarifying at the end but I find it surprising that conjugates are robust optimal solutions, since the usual folk theorem from the 80’s is that they are not robust.

distilling importance

Posted in Books, Statistics, University life on November 13, 2019 by xi'an

As I was about to leave Warwick at the end of last week, I noticed a new arXival by Dennis Prangle, distilling importance sampling. In connection with [our version of] population Monte Carlo, “each step of [Dennis’] distilled importance sampling method aims to reduce the Kullback Leibler (KL) divergence from the distilled density to the current tempered posterior.”  (The introduction of the paper points out various connections with ABC, conditional density estimation, adaptive importance sampling, X entropy, &tc.)

“An advantage of [distilled importance sampling] over [likelihood-free] methods is that it performs inference on the full data, without losing information by using summary statistics.”

A notion used therein I had not heard of before is that of normalising flows, apparently more common in machine learning and in particular with GANs. (The slide below is from Shakir Mohamed and Danilo Rezende.) The notion is to represent an arbitrary variable as the bijective transform of a standard variate like a N(0,1) variable or a U(0,1) variable (calling for the inverse cdf transform). The only link I can think of is perfect sampling, where the representation of all simulations as a function of a white noise vector helps with coupling.
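
To spell out the mechanics (in my own words, not the paper's notation): if x = f(z) with f bijective and z a standard variate, the change-of-variables formula gives log pₓ(x) = log p_z(f⁻¹(x)) + log |det ∂f⁻¹/∂x|, and a flow simply stacks such transforms, the log-Jacobian terms adding up. A minimal numpy sketch with a single, purely illustrative affine layer:

```python
import numpy as np
from scipy.stats import norm

# toy one-layer "flow": x = mu + exp(log_sigma) * z, with z ~ N(0,1)
# (a stand-in for the neural-network transforms used in actual flows)
def flow_sample(mu, log_sigma, n, rng):
    z = rng.standard_normal(n)            # base standard normal variate
    return mu + np.exp(log_sigma) * z     # bijective transform

def flow_logpdf(x, mu, log_sigma):
    z = (x - mu) * np.exp(-log_sigma)     # inverse transform
    # change of variables: log p_x(x) = log p_z(z) + log |dz/dx|
    return norm.logpdf(z) - log_sigma

rng = np.random.default_rng(1)
xs = flow_sample(1.0, 0.5, 5, rng)
print(flow_logpdf(xs, 1.0, 0.5))
```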

I read a blog entry by Eric Jang on the topic (who produced this slide among other things) but did not emerge much the wiser. As the text instantaneously moves from the Jacobian formula to TensorFlow code… In Dennis’ paper, it appears that the concept is appealing for quickly producing samples and providing a rich family of approximations, especially when neural networks are included as transforms. They are used to substitute for a tempered version of the posterior target, validated as importance functions and aiming at being the closest to this target in Kullback-Leibler divergence. With the importance function interpretation, unbiased estimators of the gradient [in the parameter of the normalising flow] can be derived, with potential variance reduction. What became clearer to me from reading the illustration section is that the prior x predictive joint can also be modeled this way towards producing reference tables for ABC (or GANs) much faster than with the exact model. (I came across several proposals of that kind in the past months.) However, I deem mileage should vary depending on the size and dimension of the data. I also wonder at the connection between the (final) distribution simulated by distilled importance [the least tempered target?] and the ABC equivalent.
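
To fix ideas on the kind of loop involved, here is a very crude sketch of adaptive importance sampling driven by importance-weighted KL gradients, with a plain Gaussian proposal standing in for the normalising flow and a made-up bimodal target; this only illustrates the general principle, not Dennis' actual algorithm or tempering schedule:

```python
import numpy as np
from scipy.stats import norm

def log_target(theta):
    # illustrative unnormalised (tempered) target: a bimodal density
    return np.logaddexp(norm.logpdf(theta, -2, 0.5), norm.logpdf(theta, 2, 0.5))

def fit_proposal(n_iter=200, n_samples=500, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    mu, log_sigma = 0.0, 1.0                        # proposal parameters
    for _ in range(n_iter):
        sigma = np.exp(log_sigma)
        theta = mu + sigma * rng.standard_normal(n_samples)
        logw = log_target(theta) - norm.logpdf(theta, mu, sigma)
        w = np.exp(logw - logw.max())
        w /= w.sum()                                # self-normalised weights
        # stochastic gradient step on the inclusive KL(target || proposal),
        # estimated by importance sampling from the current proposal
        z = (theta - mu) / sigma
        mu += lr * np.sum(w * z) / sigma
        log_sigma += lr * np.sum(w * (z ** 2 - 1.0))
    return mu, np.exp(log_sigma)

print(fit_proposal())   # proposal roughly covering both modes
```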

Hausdorff school on MCMC [28 March-02 April, 2020]

Posted in pictures, Statistics, Travel on September 26, 2019 by xi'an

The Hausdorff Centre for Mathematics will hold a week on recent advances in MCMC in Bonn, Germany, March 30 – April 3, 2020. Preceded by two days of tutorials. (“These tutorials will introduce basic MCMC methods and mathematical tools for studying the convergence to the invariant measure.”) There is travel support available, but the application deadline, 30 September, is quite close.

Note that, in a Spring of German conferences, the SIAM Conference on Uncertainty Quantification will take place in Munich (Garching) the week before, on March 24-27. With at least one likelihood-free session. Not to mention the ABC in Grenoble workshop in France, on 19-20 March. (Although these places are not exactly nearby!)

ABC in Clermont-Ferrand

Posted in Mountains, pictures, Statistics, Travel, University life on September 20, 2019 by xi'an

Today I am taking part in a one-day workshop on ABC at the Université Clermont Auvergne, with applications to cosmostatistics, along with Martin Kilbinger [with whom I worked on PMC schemes], Florent Leclerc and Grégoire Aufort. This should prove a most exciting day! (With not enough time to run up Puy de Dôme in the morning, though.)