Archive for approximate Bayesian inference

Scott Sisson’s ABC seminar in Paris [All about that Bayes]

Posted in pictures, Statistics, Travel, University life on January 20, 2020 by xi'an

At the “All about that Bayes” seminar tomorrow (Tuesday 21, 3 p.m., room 42, AgroParisTech, 16 rue Claude Bernard, Paris 5ième), Scott Sisson, from the School of Mathematics and Statistics at UNSW and visiting Paris-Dauphine this month, will give a talk on

Approximate posteriors and data for Bayesian inference

Abstract
For various reasons, including large datasets and complex models, approximate inference is becoming increasingly common. In this talk I will provide three vignettes of recent work. These cover a) approximate Bayesian computation for Gaussian process density estimation, b) likelihood-free Gibbs sampling, and c) MCMC for approximate (rounded) data.

repulsive postdoc!

Posted in Statistics on December 20, 2019 by xi'an

Rémi Bardenet has been awarded an ERC grant on Monte Carlo integration via repulsive point processes and is now looking for a postdoc starting next March. (Our own ABSINT ANR grant still has an open offer of a postdoctoral position on approximate Bayesian methods; feel free to contact me if potentially interested.)

off to Vancouver

Posted in Mountains, pictures, Running, Statistics, Travel, University life on December 7, 2019 by xi'an

Today I am flying to Vancouver for an ABC workshop, the second Symposium on Advances in Approximate Bayesian Inference, a pre-NeurIPS workshop following five earlier editions, in some of which I took part. It comes with an intense and exciting programme. I am not attending the following NeurIPS, as I had not submitted any paper (and was not considering relying on a lottery!). Instead, I will give a talk on ABC at UBC on Monday at 4 p.m., as, coincidence, coincidence!, I was independently invited by UBC to the IAM-PIMS Distinguished Colloquium series. I will thus be speaking on ABC on a broader scale than in the workshop, where I will focus on ABC-Gibbs. (With, alas, no time for climbing, missing an opportunity for a winter attempt at The Stawamus Chief!)

label switching by optimal transport: Wasserstein to the rescue

Posted in Books, Statistics, Travel on November 28, 2019 by xi'an

A new arXival by Pierre Monteiller et al. on resolving label switching by optimal transport. To appear in NeurIPS 2019, next month (where I will be, but extra muros, as I have not registered for the conference). Among other things, the paper was inspired by an answer of mine on X validated, presumably a première (and a dernière?!). Rather than picketing [in the likely unpleasant weather] on the pavement outside the conference centre, here are my raw reactions to the proposal made in the paper. (Usual disclaimer: I was not involved in the review of this paper.)

“Previous methods such as the invariant losses of Celeux et al. (2000) and pivot alignments of Marin et al. (2005) do not identify modes in a principled manner.”

Unprincipled, me?! We did not aim at identifying all modes but only one of them, since the posterior distribution is invariant under reparameterisation. Without any bad feeling (!), I still maintain my position that using a permutation-invariant loss function is a most principled and Bayesian approach towards a proper resolution of the issue, even though figuring out the resulting Bayes estimate may prove tricky.
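
To make this concrete, a permutation-invariant loss on the component-specific parameters of a k-component mixture can be sketched, in generic notation of my own rather than that of any of the papers cited, as

\[ \mathrm{L}(\theta,\hat\theta) \;=\; \min_{\sigma\in\mathfrak{S}_k}\;\sum_{j=1}^{k}\bigl\|\theta_{\sigma(j)}-\hat\theta_j\bigr\|^2, \]

where the minimisation over the symmetric group \( \mathfrak{S}_k \) makes the loss blind to the labelling of the components, so that the associated Bayes estimate is unaffected by which of the k! symmetric modes the sampler happens to visit.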

The paper thus adopts a different approach, towards giving a manageable meaning to the average of the mixture distributions over all permutations, not in a linear Euclidean sense but thanks to a Wasserstein barycentre, which indeed allows for an averaged mixture density, although a point-by-point estimate that does not require switching to occur at all was already proposed in earlier papers of ours, including the Bayesian Core. As shown above. What was first unclear to me was how necessary the Wasserstein formalism proves to be in this context. In fact, the major difference with the above picture is that the estimated barycentre is a mixture with the same number of components. Computing time? Bayesian estimate?
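
As a side illustration of the label-switching issue itself (and not of the Wasserstein-barycentre solution of the paper), here is a minimal sketch of the simpler pivot-style relabelling in the spirit of Marin et al. (2005), aligning each MCMC draw of the component means to a reference pivot via the Hungarian algorithm; the array layout and the toy data are assumptions of mine.

```python
# Minimal sketch of pivot-based relabelling for the MCMC output of a
# k-component mixture (in the spirit of Marin et al., 2005), NOT the
# Wasserstein-barycentre approach of Monteiller et al.
import numpy as np
from scipy.optimize import linear_sum_assignment

def relabel_to_pivot(means, pivot):
    """means: (T, k) array of simulated component means, pivot: (k,) reference.

    Returns a copy of `means` in which each draw is permuted so that its
    components match the pivot as closely as possible (squared-distance cost).
    """
    T, k = means.shape
    out = np.empty_like(means)
    for t in range(T):
        # cost[i, j] = squared distance between pivot component i and component j of draw t
        cost = (pivot[:, None] - means[t][None, :]) ** 2
        _, col = linear_sum_assignment(cost)
        out[t] = means[t][col]  # reorder draw t so component i aligns with pivot i
    return out

# toy usage: two well-separated components whose labels switch at random
rng = np.random.default_rng(0)
truth = np.array([-2.0, 2.0])
draws = truth + 0.1 * rng.standard_normal((1000, 2))
switched = rng.random(1000) < 0.5
draws[switched] = draws[switched][:, ::-1]  # simulate label switching
aligned = relabel_to_pivot(draws, pivot=truth)
print(draws.mean(axis=0), aligned.mean(axis=0))  # near (0, 0) before, (-2, 2) after
```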

Green’s approach to the problem via a point process representation of the mixture itself [briefly mentioned on page 6], as for instance presented in our mixture analysis handbook, should have been considered, as should the issues about Bayes factors examined in Gelman et al. (2003) and in our more recent work with Kate Jeong Eun Lee, where the practical impossibility of considering all possible permutations is handled by importance sampling.

An idle thought that came to me while reading this paper (in Seoul) was that a more challenging problem would be to face a model invariant under the action of a group of which only a subset of the elements is known, or simply a group with too many elements, in which case averaging over the orbit would become an issue.
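
For the record, the orbit average alluded to here is, in my own generic notation,

\[ \bar f(\theta) \;=\; \frac{1}{|G|}\sum_{g\in G} f(g\cdot\theta), \]

which is only available when the group G is fully known and small enough to enumerate; with unknown or too numerous elements, the sum has to be truncated or estimated, e.g. by sampling group elements, which is where the difficulty lies.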

off to SimStat2019, Salzburg

Posted in Mountains, Running, Statistics, University life on September 2, 2019 by xi'an

Today, I am off to Salzburg for the SimStat 2019 workshop, or more formally the 10th International Workshop on Simulation and Statistics, where I give a talk on ABC. The programme of the workshop is quite diverse and rich, so I do not think I will have time to take advantage of the Hohe Tauern or the Berchtesgaden Alps to go climbing, especially since I am also discussing papers in an ABC session.

Introductory overview lecture: the ABC of ABC [JSM19 #1]

Posted in Statistics on July 28, 2019 by xi'an

Here are my slides [more or less] for the introductory overview lecture I am giving today at JSM 2019, 4:00-5:50, CC-Four Seasons I. There is obviously quite an overlap with earlier courses I gave on the topic, although I refrained here from mentioning any specific application (like population genetics) to focus on statistical and computational aspects.

Along with the other introductory overview lectures in this edition of JSM.

a generalized representation of Bayesian inference

Posted in Books on July 5, 2019 by xi'an

Jeremias Knoblauch, Jack Jewson and Theodoros Damoulas, all affiliated with Warwick (hence a potentially biased reading!), arXived a paper on loss-based Bayesian inference that Jack discussed with me on my last visit to Warwick. I was somewhat scared by the 61 pages, of which the first eight are in NeurIPS style. The authors argue for a decision-theoretic approach to Bayesian inference that involves a loss over distributions and a divergence from the prior. For instance, when using the log-score as the loss and the Kullback-Leibler divergence, the regular posterior emerges, as shown by Arnold Zellner. Variational inference also falls under this hat. The argument for this generalization is that any form of loss can be used and still returns a distribution that is used to assess uncertainty about the parameter (of interest). Among the axioms they produce to justify the derivation of the optimal procedure, including cases where the posterior is restricted to a certain class, one [Axiom 4] generalizes the likelihood principle. Given the freedom brought by this general framework, plenty of fringe Bayes methods like standard variational Bayes can be seen as solutions to such a decision problem, while others like EP cannot. Of interest to me is the potential for this formal framework to encompass misspecification and likelihood-free settings, as well as to assess priors, which is always a fishy issue. (The authors mention in addition the capacity to build related specific-design Bayesian deep networks, of which I know nothing.) My obvious reaction is one of facing an abundance of wealth (!), but encompassing approximate Bayesian solutions within a Bayesian framework remains an exciting prospect.
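
In symbols, and with my own notation rather than necessarily the authors', the representation can be sketched as the optimisation problem

\[ q^\star \;=\; \arg\min_{q\in\Pi}\;\Big\{\, \mathbb{E}_{q}\Big[\textstyle\sum_{i=1}^{n}\ell(\theta,x_i)\Big] \;+\; D(q\,\|\,\pi)\,\Big\}, \]

where \( \ell \) is a loss linking parameter and data, \( D \) a divergence from the prior \( \pi \), and \( \Pi \) the class of admissible distributions: taking \( \ell(\theta,x)=-\log p(x\mid\theta) \), \( D \) the Kullback-Leibler divergence, and \( \Pi \) unrestricted returns the regular posterior (Zellner's argument), while restricting \( \Pi \) to a parametric family with the same choices recovers standard variational Bayes.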