Archive for webinar

manifold learning [BNP Seminar, 11/01/23]

Posted in Books, Statistics, University life on January 9, 2023 by xi'an

An upcoming BNP webinar on Zoom by Judith Rousseau and Paul Rosa (U of Oxford), on 11 January at 1700 Greenwich time:

Bayesian nonparametric manifold learning

In high dimensions it is common to assume that the data have a lower dimensional structure. We consider two types of low dimensional structure: in the first part the data are assumed to be concentrated near an unknown low dimensional manifold, while in the second they are assumed to be possibly concentrated on an unknown manifold. In both cases neither the manifold nor the density is known. A typical example is that of noisy observations on an unknown low dimensional manifold.
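
To fix ideas, here is a minimal sketch (my own illustration, not taken from the talk) of the first setting: data concentrated near, rather than on, a low dimensional manifold, here a circle embedded in three dimensions and observed with Gaussian noise. The sample size and noise level are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 500, 0.05          # illustrative sample size and noise level

# points on a circle (a one-dimensional manifold) embedded in R^3
t = rng.uniform(0, 2 * np.pi, size=n)
manifold = np.column_stack([np.cos(t), np.sin(t), np.zeros(n)])

# noisy observations concentrated *near* the manifold
data = manifold + sigma * rng.normal(size=(n, 3))
```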

We first consider a family of Bayesian nonparametric density estimators based on location-scale Gaussian mixture priors and we study the asymptotic properties of the posterior distribution. Our work shows in particular that non-conjugate location-scale Gaussian mixture models can adapt to complex geometries and to spatially varying regularity when the density is supported near a low dimensional manifold.
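
As a reminder of the model class (a generic sketch, not the speakers' estimator, which places a nonparametric prior on the mixing measure), a location-scale Gaussian mixture density is a weighted sum of Gaussian components with their own means and scales:

```python
import numpy as np
from scipy.stats import multivariate_normal

def mixture_density(x, weights, locations, scales):
    """Evaluate a location-scale Gaussian mixture density at the rows of x.

    weights:   (K,) mixture weights summing to one
    locations: (K, D) component means
    scales:    (K,) per-component (isotropic) standard deviations
    """
    D = locations.shape[1]
    dens = np.zeros(len(x))
    for w, mu, s in zip(weights, locations, scales):
        dens += w * multivariate_normal.pdf(x, mean=mu, cov=(s ** 2) * np.eye(D))
    return dens
```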

In the second part of the talk we will also consider the case where the distribution is supported on a low dimensional manifold. In this non-dominated model, we study different types of posterior contraction rates: Wasserstein and L_1(\mu_\mathcal{M}), where \mu_\mathcal{M} is the Hausdorff measure on the manifold \mathcal{M} supporting the density. Some more generic results on Wasserstein contraction rates are also discussed.
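
For readers less familiar with the terminology, posterior contraction at rate ε_n with respect to a metric d (here the Wasserstein distance or the L_1(\mu_\mathcal{M}) norm) is the textbook notion below, not a result specific to the talk:

```latex
% posterior contraction at rate \varepsilon_n with respect to a metric d:
% the posterior mass outside a d-ball of radius M\varepsilon_n around the
% true distribution P_0 vanishes, in P_0-probability, as n grows
\mathbb{E}_{P_0}\!\left[\, \Pi\big( P : d(P, P_0) > M \varepsilon_n \mid X_1,\dots,X_n \big) \right]
\longrightarrow 0 \qquad \text{for some fixed } M > 0 .
```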

 

Adversarial Bayesian Simulation [One World ABC’minar]

Posted in Statistics on November 15, 2022 by xi'an

The next One World ABC webinar will take place on 24 November, at 1:30 UK Time (GMT) and will be presented by Yi Yuexi Wang (University of Chicago) on “Adversarial Bayesian Simulation”, available on arXiv. [The link to the webinar is available to those who have registered.]

In the absence of explicit or tractable likelihoods, Bayesians often resort to approximate Bayesian computation (ABC) for inference. In this talk, we will cover two summary-free ABC approaches, both inspired by adversarial learning. The first one adopts a classification-based KL estimator to quantify the discrepancy between real and simulated datasets. We consider the traditional accept/reject kernel as well as an exponential weighting scheme which does not require the ABC acceptance threshold. In the second paper, we develop a Bayesian GAN (B-GAN) sampler that directly targets the posterior by solving an adversarial optimization problem. B-GAN is driven by a deterministic mapping learned on the ABC reference by conditional GANs. Once the mapping has been trained, iid posterior samples are obtained by filtering noise at a negligible additional cost. We propose two post-processing local refinements using (1) data-driven proposals with importance reweighting, and (2) variational Bayes. For both methods, we support our findings with frequentist-Bayesian theoretical results and highly competitive performance in empirical analysis. (Joint work with Veronika Rockova)
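
A rough sketch of the first ingredient, estimating a KL-type discrepancy between observed and simulated samples with a probabilistic classifier (logistic regression here purely for illustration; the paper relies on neural classifiers within an adversarial setup):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def classifier_kl_estimate(real, simulated):
    """Estimate KL(real || simulated) from two samples via classification.

    A classifier separating real (label 1) from simulated (label 0) data has
    a logit approximating the log density ratio; averaging it over the real
    sample gives a plug-in KL estimate.
    """
    X = np.vstack([real, simulated])
    y = np.concatenate([np.ones(len(real)), np.zeros(len(simulated))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    log_ratio = clf.decision_function(real)            # logit = log-odds of "real"
    log_ratio -= np.log(len(real) / len(simulated))    # correct for class imbalance
    return log_ratio.mean()
```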

nonparametric ABC [seminar]

Posted in pictures, Statistics, University life on June 3, 2022 by xi'an

Puzzle: How do you run ABC when you mistrust the model?! We somewhat considered this question in our misspecified ABC paper with David and Judith. An AISTATS 2022 paper by Harita Dellaporta (Warwick), Jeremias Knoblauch, Theodoros Damoulas (Warwick), and François-Xavier Briol (formerly Warwick) addresses this same question, and Harita presented the paper at the One World ABC webinar yesterday.

It is inspired by Lyddon, Walker & Holmes (2018), who place a nonparametric prior on the generating model, on top of the assumed parametric model (with an intractable likelihood). This induces a push-forward prior on the pseudo-true parameter, that is, the value that brings the parametric family as close as possible to the true distribution of the data, here defined as the minimiser of the maximum mean discrepancy (MMD). Choosing an RKHS framework allows for a practical implementation, resorting to simulations for posterior realisations from the Dirichlet posterior and from the parametric model, and to stochastic gradient descent for computing the pseudo-true parameter, which may prove somewhat heavy in terms of computing cost.
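
As a reminder of the criterion defining the pseudo-true parameter, here is a minimal MMD² estimate between two samples (Gaussian kernel and biased V-statistic chosen only for brevity; the bandwidth is an arbitrary illustrative value, not the paper's choice):

```python
import numpy as np

def gaussian_gram(a, b, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between the rows of a and b."""
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2 * bandwidth ** 2))

def mmd_squared(x, y, bandwidth=1.0):
    """Biased (V-statistic) estimate of the squared maximum mean discrepancy."""
    return (gaussian_gram(x, x, bandwidth).mean()
            + gaussian_gram(y, y, bandwidth).mean()
            - 2 * gaussian_gram(x, y, bandwidth).mean())
```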

The paper also contains a consistency result in an ε-contaminated setting (contamination of the assumed parametric family). Comparisons with a fully parametric Wasserstein-ABC approach show that this alternative better resists misspecification, as could be expected since the latter is not constructed for that purpose.

Next talk is on 23 June by Cosma Shalizi.

Recent Advances in Approximate Bayesian Inference [YSE, 15.2.22]

Posted in Statistics, University life on May 11, 2022 by xi'an

On June 15, the Young Statisticians Europe initiative is organising an on-line seminar on approximate Bayesian inference. With talks by

starting at 7:00 PT / 10:00 EST / 16:00 CET. The registration form is available here.

Concentration and robustness of discrepancy-based ABC [One World ABC ‘minar, 28 April]

Posted in Statistics, University life on April 15, 2022 by xi'an

Our next speaker at the One World ABC Seminar will be Pierre Alquier, who will talk about “Concentration and robustness of discrepancy-based ABC”, on Thursday April 28, at 9.30am UK time, with an abstract reported below.
Approximate Bayesian Computation (ABC) typically employs summary statistics to measure the discrepancy between the observed data and the synthetic data generated from each proposed value of the parameter of interest. However, finding good summary statistics (that are close to sufficiency) is non-trivial for most models for which ABC is needed. In this paper, we investigate the properties of ABC based on integral probability semi-metrics, including MMD and Wasserstein distances. We exhibit conditions ensuring the contraction of the approximate posterior. Moreover, we prove that MMD with an adequate kernel leads to very strong robustness properties.
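
For concreteness, a bare-bones version of discrepancy-based ABC (plain rejection with the one-dimensional Wasserstein distance from SciPy; the talk covers MMD and more general settings, and `prior_sampler` / `simulator` are placeholder callables, not objects from the paper):

```python
import numpy as np
from scipy.stats import wasserstein_distance   # 1-d Wasserstein distance

def discrepancy_abc(observed, prior_sampler, simulator, n_draws=10_000, tol=0.1):
    """Plain rejection ABC keeping the parameter values whose simulated
    data sets lie within `tol` of the observed data in Wasserstein distance."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()                       # draw from the prior
        fake = simulator(theta, size=len(observed))   # simulate a synthetic data set
        if wasserstein_distance(observed, fake) < tol:
            accepted.append(theta)
    return np.array(accepted)
```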