Archive for approximate Bayesian inference

Recent Advances in Approximate Bayesian Inference [YSE, 15.6.22]

Posted in Statistics, University life on May 11, 2022 by xi'an


On June 15, the Young Statisticians Europe initiative is organising an on-line seminar on approximate Bayesian inference, with talks starting at 7:00 PT / 10:00 EST / 16:00 CET. The registration form is available here.

Concentration and robustness of discrepancy-based ABC [One World ABC ‘minar, 28 April]

Posted in Statistics, University life on April 15, 2022 by xi'an

Our next speaker at the One World ABC Seminar will be Pierre Alquier, who will talk about “Concentration and robustness of discrepancy-based ABC”, on Thursday April 28, at 9.30am UK time, with an abstract reported below.
Approximate Bayesian Computation (ABC) typically employs summary statistics to measure the discrepancy between the observed data and the synthetic data generated from each proposed value of the parameter of interest. However, finding good summary statistics (that are close to sufficiency) is non-trivial for most of the models for which ABC is needed. In this paper, we investigate the properties of ABC based on integral probability semi-metrics, including the maximum mean discrepancy (MMD) and Wasserstein distances. We exhibit conditions ensuring the contraction of the approximate posterior. Moreover, we prove that MMD with an adequate kernel leads to very strong robustness properties.
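For concreteness, here is a minimal sketch (in Python) of what a discrepancy-based ABC rejection sampler looks like, using a biased MMD estimate with a Gaussian kernel; the toy Gaussian model, prior, bandwidth, and tolerance are placeholder assumptions for illustration, not taken from the talk or the paper.

```python
import numpy as np

def mmd2(x, y, bw=1.0):
    """Biased estimate of the squared MMD between 1-d samples x and y,
    using a Gaussian kernel with bandwidth bw."""
    k = lambda a, b: np.exp(-(a[:, None] - b[None, :])**2 / (2 * bw**2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def abc_mmd(y_obs, prior_sample, simulate, eps, n_prop=10_000, seed=None):
    """ABC rejection: keep parameter draws whose synthetic data fall
    within MMD distance eps of the observed sample."""
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_prop):
        theta = prior_sample(rng)
        z = simulate(theta, len(y_obs), rng)
        if mmd2(y_obs, z) < eps**2:
            accepted.append(theta)
    return np.array(accepted)

# assumed toy example: location of a Gaussian, flat-ish N(0, 5²) prior
y_obs = np.random.default_rng(0).normal(1.5, 1.0, size=200)
post = abc_mmd(
    y_obs,
    prior_sample=lambda rng: rng.normal(0, 5),            # N(0, 5²) prior
    simulate=lambda th, n, rng: rng.normal(th, 1.0, n),   # model N(θ, 1)
    eps=0.15,
)
print(len(post), post.mean() if len(post) else None)
```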

Big Bayes postdoctoral position in Oxford [UK]

Posted in Statistics on March 3, 2022 by xi'an

Forwarding a call for postdoctoral applications from Prof Judith Rousseau, with deadline 30 March:

Seeking a Postdoctoral Research Assistant to join our group at the Department of Statistics. The Postdoctoral Research Assistant will be carrying out research for the ERC project General Theory for Big Bayes, reporting to Professor Judith Rousseau. They will provide guidance to junior members of the research group, such as PhD students and/or project volunteers.

The aim of this project is to develop a general theory for the analysis of Bayesian methods in complex and high- (or infinite-) dimensional models, covering not only a fine understanding of the posterior distributions but also an analysis of the output of the algorithms used to implement the approaches. The main objectives of the project are (briefly): 1) asymptotic analysis of the posterior distribution of complex high-dimensional models; 2) interactions between the asymptotic theory of high-dimensional posterior distributions and computational complexity; and 3) enriching these theoretical developments via strongly related domains of application, namely neuroscience, terrorism and crime, and ecology.

The postholder will hold, or be close to completing, a PhD/DPhil in statistics, together with relevant experience. They will have the ability to manage their own academic research and associated activities, and will have previous experience of contributing to publications and presentations. They will contribute ideas for new research projects and research income generation. Ideally, the postholder will also have experience with the theoretical properties of Bayesian procedures and/or approximate Bayesian methods.

robust inference using posterior bootstrap

Posted in Books, Statistics, University life on February 18, 2022 by xi'an

The famous 1994 Read Paper by Michael Newton and Adrian Raftery was entitled Approximate Bayesian inference with the weighted likelihood bootstrap, where the bootstrap aspect lies in randomly (exponentially) weighting each observation in the iid sample through a power of the corresponding density, a proposal that appeared at about the same time as Tony O'Hagan's suggestion of the related fractional Bayes factor. (The paper may be equally famous for suggesting the harmonic mean estimator of the evidence (!), although it only appeared in an appendix.) What is unclear to me is the nature of the distribution g(θ) associated with the weighted bootstrap sample, conditional on the original sample, since the outcome results from both a random Exponential sample and an optimisation step. The prior has no impact on this distribution (although it could have been used as a penalisation factor), an omission corrected by Michael and Adrian via an importance sampling step involving the estimation of g(·).
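For readers unfamiliar with the mechanism, here is a minimal sketch of one weighted likelihood bootstrap draw as described above: each observation's log-density receives an iid Exp(1) weight, and the weighted log-likelihood is then maximised; the Gaussian location model is an assumed toy example, not from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def wlb_draw(y, neg_loglik, theta0, rng):
    """One weighted likelihood bootstrap draw: re-weight each
    observation's negative log-density by an iid Exp(1) weight,
    then optimise the weighted sum."""
    w = rng.exponential(1.0, size=len(y))
    obj = lambda theta: np.dot(w, neg_loglik(theta, y))
    return minimize(obj, theta0).x

# assumed toy model: N(mu, 1), so the negative log-density is quadratic in mu
neg_loglik = lambda theta, y: 0.5 * (y - theta[0])**2
rng = np.random.default_rng(1)
y = rng.normal(2.0, 1.0, size=100)
draws = np.array([wlb_draw(y, neg_loglik, np.zeros(1), rng) for _ in range(500)])
print(draws.mean(), draws.std())  # close to the posterior mean/sd under a flat prior
```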

At the Algorithm Seminar today in Warwick, Emilie Pompe presented her recent research, including joint work with Pierre Jacob [which I have not yet read], that does exactly that: it includes the log-prior as a penalisation factor, along with an extra weight different from one, motivated by the possibility of misspecification, and it also proposes a new approach to cut models. An alternative mentioned during the talk, which reminds me of GANs, is to generate a pseudo-sample from the prior predictive and add it to the original sample. (Some attendees commented on the dependence of the latter version on the chosen parameterisation, an issue that had X'ed my mind as well.)
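Not having read the paper, the following is only a hedged guess at what such a penalised objective could look like: the log-prior is added to the exponentially weighted log-likelihood, with an extra tempering weight alpha ≠ 1 allowing for misspecification; the exact form used by Pompe and Jacob may well differ.

```python
import numpy as np
from scipy.optimize import minimize

def penalised_wlb_draw(y, neg_loglik, neg_logprior, theta0, rng, alpha=1.0):
    """One posterior-bootstrap draw with the log-prior as a penalisation
    term and an extra weight alpha != 1 to accommodate misspecification
    (an assumed form, not necessarily the paper's)."""
    w = rng.exponential(1.0, size=len(y))
    obj = lambda th: alpha * np.dot(w, neg_loglik(th, y)) + neg_logprior(th)
    return minimize(obj, theta0).x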

noisy importance sampling

Posted in Statistics on February 14, 2022 by xi'an

A recent short arXival by Fernando Llorente, Luca Martino, Jesse Read, and David Delgado–Gómez analyses settings where (only) a noisy version of the target density is available, not necessarily in an unbiased fashion, although the paper is somewhat unclear as to which integral is targeted in its equation (6), since the integrand is not the original target p(x). The ensuing development is about finding the optimal importance function, which differs from the usual one due to the random nature of the approximation, but it does not seem to reconnect with the true target p(x), except when the noisy realisation is unbiased… To me this is a major issue in simulation methodology, in that getting away from the unbiasedness constraint opens (rather obviously) a much wider choice of techniques.
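As an illustration of why unbiasedness saves the day, here is a minimal sketch of self-normalised importance sampling where the unnormalised target density can only be evaluated up to a multiplicative mean-one noise; the Gaussian target and proposal and the Gamma noise are assumptions for illustration, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(2)

def p_noisy(x, rng):
    """Unbiased noisy evaluation of an unnormalised N(0,1) target:
    the exact density times an independent mean-one Gamma noise."""
    exact = np.exp(-0.5 * x**2)
    return exact * rng.gamma(shape=10.0, scale=0.1, size=x.shape)  # mean one

# proposal q = N(0, 2²), written up to its normalising constant,
# which cancels in the self-normalised ratio below
n = 100_000
x = rng.normal(0.0, 2.0, size=n)
q = np.exp(-0.5 * (x / 2.0)**2) / 2.0
w = p_noisy(x, rng) / q                 # noisy importance weights
est = np.sum(w * x**2) / np.sum(w)      # self-normalised estimate of E[X²]
print(est)  # close to 1 despite the noise, because the evaluations are unbiased
```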
