Archive for webinar

connection between tempering & entropic mirror descent

Posted in Books, pictures, Running, Statistics, Travel, University life on April 30, 2024 by xi'an

The next One World ABC webinar is this Thursday, 2 May, at 9am UK time, with Francesca Crucinio (King’s College London, formerly CREST and even more formerly Warwick) presenting

“A connection between Tempering and Entropic Mirror Descent”,

joint work with Nicolas Chopin and Anna Korba (both from CREST), whose abstract follows:

This work explores the connections between tempering (for Sequential Monte Carlo; SMC) and entropic mirror descent to sample from a target probability distribution whose unnormalized density is known. We establish that tempering SMC corresponds to entropic mirror descent applied to the reverse Kullback-Leibler (KL) divergence and obtain convergence rates for the tempering iterates. Our result motivates the tempering iterates from an optimization point of view, showing that tempering can be seen as a descent scheme of the KL divergence with respect to the Fisher-Rao geometry, in contrast to Langevin dynamics that perform descent of the KL with respect to the Wasserstein-2 geometry. We exploit the connection between tempering and mirror descent iterates to justify common practices in SMC and derive adaptive tempering rules that improve over other alternative benchmarks in the literature.
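The adaptive tempering rules the abstract refers to are, in common SMC practice, chosen so that each reweighting step keeps the effective sample size (ESS) above a target. A minimal numpy sketch of that standard ESS-controlled bisection rule (a generic illustration, not the authors' specific scheme; `next_temperature` is a hypothetical helper):

```python
import numpy as np

def ess(logw):
    """Effective sample size of normalised importance weights exp(logw)."""
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

def next_temperature(loglik, lam, ess_target, n_steps=50):
    """Bisect for the largest increment delta in [0, 1 - lam] such that
    reweighting particles by exp(delta * loglik) keeps ESS >= ess_target."""
    lo, hi = 0.0, 1.0 - lam
    for _ in range(n_steps):
        delta = 0.5 * (lo + hi)
        if ess(delta * loglik) >= ess_target:
            lo = delta  # feasible: ESS still large enough
        else:
            hi = delta
    return lam + lo
```

The mirror-descent view of the paper gives this kind of rule an optimisation interpretation: each temperature increment is one descent step on the reverse KL divergence.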

Bayesian score calibration at One World ABC’minar [only comes every 1462 days]

Posted in Books, Statistics, University life on February 22, 2024 by xi'an

insufficient Gibbs at One World ABC [25/01]

Posted in Kids, pictures, Statistics, Travel, University life on January 22, 2024 by xi'an

The next [on-line] One World Approximate Bayesian Computation (ABC) Seminar will be delivered by Antoine Luciano, who is currently writing his PhD with Robin Ryder and me. It will take place at 9am, UK/GMT time, on Thursday 25 January, with members of the stats lab here in CEREMADE attending Antoine’s lecture live at the PariSanté campus. Here is the abstract for the talk:

In some applied scenarios, the availability of complete data is restricted, often due to privacy concerns, and only aggregated, robust and inefficient statistics derived from the data are accessible. These robust statistics are not sufficient, but they demonstrate reduced sensitivity to outliers and offer enhanced data protection due to their higher breakdown point. In this article, operating within a parametric framework, we propose a method to sample from the posterior distribution of parameters conditioned on different robust and inefficient statistics: specifically, the pairs (median, MAD) or (median, IQR), or one or more quantiles. Leveraging a Gibbs sampler and the simulation of latent augmented data, our approach facilitates simulation according to the posterior distribution of parameters belonging to specific families of distributions. We demonstrate its applicability on the Gaussian, Cauchy, and translated Weibull families.
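For a sense of the target, a naive rejection sampler for the same posterior, conditioning on (median, MAD) in the Gaussian case, can be sketched as below. This is only a crude baseline, not the Gibbs sampler with latent augmented data presented in the talk; `abc_median_mad` and its flat prior ranges are illustrative assumptions:

```python
import numpy as np

def abc_median_mad(obs_median, obs_mad, n, n_sims=20000, tol=0.25, rng=None):
    """Naive rejection sampler for the posterior of (mu, sigma) given only
    the observed (median, MAD) of n Gaussian observations: keep parameter
    draws whose simulated statistics fall within `tol` of the observed pair."""
    rng = np.random.default_rng(rng)
    kept = []
    for _ in range(n_sims):
        mu = rng.uniform(-5, 5)      # hypothetical flat prior on mu
        sigma = rng.uniform(0.1, 5)  # hypothetical flat prior on sigma
        x = rng.normal(mu, sigma, size=n)
        med = np.median(x)
        mad = np.median(np.abs(x - med))
        if abs(med - obs_median) < tol and abs(mad - obs_mad) < tol:
            kept.append((mu, sigma))
    return np.array(kept)
```

The Gibbs approach of the paper avoids the tolerance entirely by simulating latent complete data consistent with the observed robust statistics, hence samples from the exact posterior.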

based on our recent arXival.

is it necessary to learn summary statistics? [One World ABC seminar]

Posted in Books, pictures, Statistics, University life on November 19, 2023 by xi'an

Next week, on 30 November at 9am (UK time), Yanzhi Chen (Cambridge) will give a One World ABC webinar on “Is It Necessary to Learn Summary Statistics for Likelihood-free Inference?”, a PMLR paper joint with Michael Gutmann and Adrian Weller:

Likelihood-free inference (LFI) is a set of techniques for inference in implicit statistical models. A longstanding question in LFI has been how to design or learn good summary statistics of data, but this might now seem unnecessary due to the advent of recent end-to-end (i.e. neural network-based) LFI methods. In this work, we rethink this question with a new method for learning summary statistics. We show that learning sufficient statistics may be easier than direct posterior inference, as the former problem can be reduced to a set of low-dimensional, easy-to-solve learning problems. This suggests explicitly decoupling summary statistics learning from posterior inference in LFI. Experiments on five inference tasks with different data types validate our hypothesis.
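The decoupling advocated in the abstract has classical precedents: in the Fearnhead–Prangle approach, for instance, summary statistics are learned by regressing parameters on simulated data, separately from the downstream inference step. A minimal numpy sketch of that regression-based idea (a classical stand-in, not the authors' method; `learn_summary` and its linear form are illustrative assumptions):

```python
import numpy as np

def learn_summary(simulate, prior_sample, n_train=2000, rng=None):
    """Fearnhead-Prangle-style summary learning: regress a scalar parameter
    on raw simulated data; the fitted linear map then serves as a
    low-dimensional summary statistic, decoupled from posterior inference."""
    rng = np.random.default_rng(rng)
    thetas = np.array([prior_sample(rng) for _ in range(n_train)])
    xs = np.array([simulate(t, rng) for t in thetas])
    X = np.column_stack([xs, np.ones(len(xs))])  # append an intercept column
    beta, *_ = np.linalg.lstsq(X, thetas, rcond=None)
    return lambda x: np.append(x, 1.0) @ beta  # learned summary statistic
```

The returned map approximates the posterior mean of the parameter, a well-known choice of low-dimensional summary, and can then be plugged into any ABC or LFI scheme.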


Arnak Dalalyan at the RSS Journal Webinar

Posted in Books, pictures, Statistics, Travel, University life on October 15, 2023 by xi'an

My friend and CREST colleague Arnak Dalalyan will (re)present [online] a Read Paper at the RSS on 31 October with my friends Hani Doss and Alain Durmus as discussants:

‘Theoretical Guarantees for Approximate Sampling from Smooth and Log-concave Densities’

Arnak Dalalyan (ENSAE Paris, France)

Sampling from various kinds of distributions is an issue of paramount importance in statistics since it is often the key ingredient for constructing estimators, test procedures or confidence intervals. In many situations, exact sampling from a given distribution is impossible or computationally expensive and, therefore, one needs to resort to approximate sampling strategies. However, there is no well-developed theory providing meaningful non-asymptotic guarantees for the approximate sampling procedures, especially in high dimensional problems. The paper makes some progress in this direction by considering the problem of sampling from a distribution having a smooth and log-concave density defined on ℝᵖ, for some integer p > 0. We establish non-asymptotic bounds for the error of approximating the target distribution by the distribution obtained by the Langevin Monte Carlo method and its variants. We illustrate the effectiveness of the established guarantees with various experiments. Underlying our analysis are insights from the theory of continuous time diffusion processes, which may be of interest beyond the framework of log-concave densities that are considered in the present work.
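The Langevin Monte Carlo method analysed in the paper simulates a discretisation of the Langevin diffusion dX_t = ∇log π(X_t) dt + √2 dW_t. A minimal sketch of the unadjusted Euler–Maruyama variant (ULA) follows; the step size and iteration count are arbitrary illustrative choices:

```python
import numpy as np

def ula(grad_logpi, x0, step, n_iters, rng=None):
    """Unadjusted Langevin algorithm: Euler-Maruyama discretisation of
    dX_t = grad log pi(X_t) dt + sqrt(2) dW_t with step size `step`."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    chain = np.empty((n_iters, x.size))
    for i in range(n_iters):
        x = x + step * grad_logpi(x) + np.sqrt(2 * step) * rng.normal(size=x.size)
        chain[i] = x
    return chain
```

For a log-concave target the paper's bounds quantify how close the law of the iterates is to π as a function of the step size, the number of iterations and the dimension p; the discretisation bias is why the unadjusted chain only samples π approximately.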