Archive for One World ABC Seminar

ABC with path signatures [One World ABC seminar, 2/2/23]

Posted in Books, pictures, Running, Statistics, Travel, University life on January 29, 2023 by xi'an

The next One World ABC seminar is by Joel Dyer (Oxford) at 1:30pm (UK time) on 02 February.

Title: Approximate Bayesian Computation with Path Signatures

Abstract: Simulation models often lack tractable likelihood functions, making likelihood-free inference methods indispensable. Approximate Bayesian computation (ABC) generates likelihood-free posterior samples by comparing simulated and observed data through some distance measure, but existing approaches are often poorly suited to time series simulators, for example due to an independent and identically distributed data assumption. In this talk, we will discuss our work on the use of path signatures in ABC as a means of handling the sequential nature of time series data of different kinds. We will begin by discussing popular approaches to ABC and how they may be extended to time series simulators. We will then introduce path signatures, and discuss how signatures naturally lead to two instances of ABC for time series simulators. Finally, we will demonstrate that the resulting signature-based ABC procedures can produce competitive Bayesian parameter inference for simulators generating univariate, multivariate, irregularly spaced, and even non-Euclidean sequences.
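
To make the comparison concrete, here is a minimal numpy sketch (mine, not the speaker's!) of rejection ABC where time series are compared through their truncated signatures: the level-1 and level-2 iterated integrals of a piecewise-linear path are available in closed form, and the resulting feature vector replaces summary statistics. The simulate and prior_sample callables are hypothetical stand-ins for one's simulator and prior.

```python
import numpy as np

def signature_level2(path):
    """Level-1 and level-2 signature terms of a piecewise-linear path.

    path: (T, d) array of points. Returns the d level-1 increments
    concatenated with the d*d level-2 iterated integrals, computed
    exactly for the piecewise-linear interpolation (Chen's relation).
    """
    inc = np.diff(path, axis=0)                      # increments dX_k
    s1 = inc.sum(axis=0)                             # level 1: X_T - X_0
    run = np.vstack([np.zeros(path.shape[1]),        # X_k - X_0 before step k
                     np.cumsum(inc, axis=0)[:-1]])
    s2 = run.T @ inc + 0.5 * inc.T @ inc             # level-2 iterated integrals
    return np.concatenate([s1, s2.ravel()])

def abc_signature(y_obs, simulate, prior_sample, n=10_000, q=0.01):
    """Rejection ABC where datasets are compared through their signatures."""
    sig_obs = signature_level2(y_obs)
    thetas = np.array([prior_sample() for _ in range(n)])
    dists = np.array([np.linalg.norm(signature_level2(simulate(t)) - sig_obs)
                      for t in thetas])
    return thetas[dists <= np.quantile(dists, q)]    # keep the closest q-fraction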

Reference: J. Dyer, P. Cannon, S. M. Schmon (2022). Approximate Bayesian Computation with Path Signatures. arXiv preprint arXiv:2106.12555.

Adversarial Bayesian Simulation [One World ABC’minar]

Posted in Statistics on November 15, 2022 by xi'an

The next One World ABC webinar will take place on 24 November, at 1:30pm UK time (GMT), and will be presented by Yuexi Wang (University of Chicago) on "Adversarial Bayesian Simulation", available on arXiv. [The link to the webinar is available to those who have registered.]

In the absence of explicit or tractable likelihoods, Bayesians often resort to approximate Bayesian computation (ABC) for inference. In this talk, we will cover two summary-free ABC approaches, both inspired by adversarial learning. The first adopts a classification-based KL estimator to quantify the discrepancy between real and simulated datasets; we consider the traditional accept/reject kernel as well as an exponential weighting scheme that does not require an ABC acceptance threshold. The second develops a Bayesian GAN (B-GAN) sampler that directly targets the posterior by solving an adversarial optimization problem. B-GAN is driven by a deterministic mapping learned on the ABC reference table by conditional GANs. Once the mapping has been trained, i.i.d. posterior samples are obtained by filtering noise at negligible additional cost. We propose two post-processing local refinements, using (1) data-driven proposals with importance reweighting and (2) variational Bayes. For both methods, we support our findings with frequentist-Bayesian theoretical results and highly competitive performance in empirical analyses. (Joint work with Veronika Rockova)
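
As a crude illustration of the first approach (my sketch, not the authors' code), the density-ratio trick recovers KL(observed || simulated) from the log-odds of a probabilistic classifier trained to tell the two samples apart; the logistic regression and class-balance correction below are standard choices, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def kl_via_classifier(x_obs, x_sim):
    """Estimate KL(p_obs || p_sim) with the density-ratio (classification) trick.

    A logistic regression is trained to separate observed (label 1) from
    simulated (label 0) samples; its log-odds on the observed points estimate
    log p_obs/p_sim up to a class-balance constant, which is corrected for.
    """
    X = np.vstack([x_obs, x_sim])
    y = np.r_[np.ones(len(x_obs)), np.zeros(len(x_sim))]
    clf = LogisticRegression(max_iter=1_000).fit(X, y)
    log_odds = clf.decision_function(x_obs)    # log-odds of the "observed" class
    return log_odds.mean() + np.log(len(x_sim) / len(x_obs))
```

A proposed parameter can then be retained via the accept/reject kernel 1{KL ≤ ε}, or weighted by exp(−KL/ε) in the thresholdless scheme mentioned above.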

nonparametric ABC [seminar]

Posted in pictures, Statistics, University life on June 3, 2022 by xi'an

Puzzle: How do you run ABC when you mistrust the model?! We somewhat considered this question in our misspecified ABC paper with David and Judith. An AISTATS 2022 paper by Harita Dellaporta (Warwick), Jeremias Knoblauch, Theodoros Damoulas (Warwick), and François-Xavier Briol (formerly Warwick) addresses this same question, and Harita presented the paper at the One World ABC webinar yesterday.

It is inspired by Lyddon, Walker & Holmes (2018), who place a nonparametric prior on the generating model, on top of the assumed parametric model (with an intractable likelihood). This induces a push-forward prior on the pseudo-true parameter, that is, the value that brings the parametric family as close as possible to the true distribution of the data. Here the pseudo-true parameter is defined as a minimum distance parameter, the distance being the maximum mean discrepancy (MMD). Choosing an RKHS framework allows for a practical implementation, resorting to simulations for posterior realisations from a Dirichlet posterior and from the parametric model, and to stochastic gradient descent for computing the pseudo-true parameter, which may prove somewhat heavy in terms of computing cost.
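
My bare-bones reading of the loop, under strong simplifications: each Dirichlet weight vector defines a weighted empirical measure on the data, and one posterior draw is the parameter value minimising the MMD between that measure and the simulator output. A Gaussian kernel and a grid search stand in below for the paper's kernel choice and stochastic-gradient minimisation; simulate and theta_grid are hypothetical.

```python
import numpy as np

def gaussian_gram(x, y, bw=1.0):
    """Gram matrix of the Gaussian RKHS kernel exp(-||x-y||^2 / (2 bw^2))."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw**2))

def weighted_mmd2(y_obs, w, z_sim, bw=1.0):
    """Squared MMD between the w-weighted empirical measure on y_obs (n, d)
    and the uniform empirical measure on simulator draws z_sim (m, d)."""
    return (w @ gaussian_gram(y_obs, y_obs, bw) @ w
            + gaussian_gram(z_sim, z_sim, bw).mean()
            - 2 * (w @ gaussian_gram(y_obs, z_sim, bw)).mean())

def npl_mmd_draws(y_obs, simulate, theta_grid, n_draws=200, m=200, seed=0):
    """One posterior draw per Dirichlet weight vector (Bayesian bootstrap)."""
    rng = np.random.default_rng(seed)
    draws = []
    for _ in range(n_draws):
        w = rng.dirichlet(np.ones(len(y_obs)))   # nonparametric posterior draw
        mmds = [weighted_mmd2(y_obs, w, simulate(t, m)) for t in theta_grid]
        draws.append(theta_grid[int(np.argmin(mmds))])
    return np.array(draws)
```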

The paper also contains a consistency result in an ε-contaminated setting (contamination of the assumed parametric family). Comparisons with a fully parametric Wasserstein-ABC approach show that this alternative resists misspecification better, as could be expected since the latter was not constructed for that purpose.

Next talk is on 23 June by Cosma Shalizi.

Concentration and robustness of discrepancy-based ABC [One World ABC ‘minar, 28 April]

Posted in Statistics, University life on April 15, 2022 by xi'an

Our next speaker at the One World ABC Seminar will be Pierre Alquier, who will talk about “Concentration and robustness of discrepancy-based ABC“, on Thursday April 28, at 9.30am UK time, with an abstract reported below.
Approximate Bayesian Computation (ABC) typically employs summary statistics to measure the discrepancy between the observed data and the synthetic data generated from each proposed value of the parameter of interest. However, finding good summary statistics (that are close to sufficiency) is non-trivial for most of the models for which ABC is needed. In this paper, we investigate the properties of ABC based on integral probability semi-metrics, including MMD and Wasserstein distances. We exhibit conditions ensuring the contraction of the approximate posterior. Moreover, we prove that MMD with an adequate kernel leads to very strong robustness properties.
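
For scalar data, the Wasserstein version is straightforward to prototype, e.g. as the following summary-free rejection sampler built on scipy's one-dimensional Wasserstein distance (an illustration of the general recipe, not the paper's experiments; simulate and prior_sample are again placeholders).

```python
import numpy as np
from scipy.stats import wasserstein_distance

def wasserstein_abc(y_obs, simulate, prior_sample, n=20_000, q=0.005):
    """Summary-free rejection ABC for univariate data: keep the proposals
    whose simulated sample is closest to y_obs in 1-Wasserstein distance."""
    thetas = np.array([prior_sample() for _ in range(n)])
    dists = np.array([wasserstein_distance(y_obs, simulate(t)) for t in thetas])
    return thetas[dists <= np.quantile(dists, q)]
```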

One World ABC seminar [31.3.22]

Posted in Statistics, University life on March 16, 2022 by xi'an

The next One World ABC seminar is on Thursday 31 March, with David Warne (QUT) talking on "Multifidelity multilevel Monte Carlo for approximate Bayesian computation". It will take place at 10:30 CET (GMT+1).

Models of stochastic processes are widely used in almost all fields of science. However, data are almost always incomplete observations of reality. This leads to a great challenge for statistical inference, because the likelihood function will be intractable for almost all partially observed stochastic processes. As a result, it is common to apply likelihood-free approaches that replace likelihood evaluations with realisations of the model and observation process. However, likelihood-free techniques are computationally expensive for accurate inference, as they may require millions of high-fidelity, expensive stochastic simulations. To address this challenge, we develop a novel approach that combines the multilevel Monte Carlo telescoping summation, applied to a sequence of approximate Bayesian posterior targets, with a multifidelity rejection sampler that learns from low-fidelity, computationally inexpensive model approximations to minimise the number of high-fidelity, computationally expensive simulations required for accurate inference. Using examples from systems biology, we demonstrate improvements of more than two orders of magnitude over standard rejection sampling techniques.
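
For intuition, here is a toy version of the multifidelity rejection step (in the spirit of Prescott & Baker's multifidelity ABC, on which this line of work builds, and with all callables hypothetical): the cheap simulator makes a provisional accept/reject decision, the expensive one is only run with a small continuation probability, and a sign-corrected weight keeps the estimator unbiased.

```python
import numpy as np

def multifidelity_abc(y_obs, sim_lo, sim_hi, dist, prior_sample,
                      eps, n=10_000, eta=0.3, seed=0):
    """Multifidelity ABC rejection with an unbiased high-fidelity correction.

    sim_lo is a cheap approximate simulator, sim_hi the expensive one.
    Each proposal gets weight I_lo + (I_hi - I_lo) / eta on the event
    (probability eta) that the high-fidelity model is actually run.
    """
    rng = np.random.default_rng(seed)
    thetas, weights = [], []
    for _ in range(n):
        theta = prior_sample()
        w = float(dist(sim_lo(theta), y_obs) < eps)   # low-fidelity decision
        if rng.random() < eta:                        # occasionally verify
            hi = float(dist(sim_hi(theta), y_obs) < eps)
            w += (hi - w) / eta                       # sign-corrected weight
        if w != 0.0:
            thetas.append(theta)
            weights.append(w)
    return np.array(thetas), np.array(weights)
```

Posterior expectations are then computed with the (possibly negative) weights; the multilevel telescoping summation of the talk layers such estimators across a sequence of decreasing ABC tolerances.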
