Archive for ABC

BayesComp Satellite [AG:DC] program

Posted in Statistics on February 1, 2023 by xi'an

The programme for our [AG:DC] 12-14 March satellite of BayesComp 2023 in Levi, Finland, is now on-line. (There will be a gondola shuttle running from town to hotel for all sessions.)

ABC with path signatures [One World ABC seminar, 2/2/23]

Posted in Books, pictures, Running, Statistics, Travel, University life on January 29, 2023 by xi'an

The next One World ABC seminar is by Joel Dyer (Oxford) at 1:30pm (UK time) on 02 February.

Title: Approximate Bayesian Computation with Path Signatures

Abstract: Simulation models often lack tractable likelihood functions, making likelihood-free inference methods indispensable. Approximate Bayesian computation (ABC) generates likelihood-free posterior samples by comparing simulated and observed data through some distance measure, but existing approaches are often poorly suited to time series simulators, for example due to an independent and identically distributed data assumption. In this talk, we will discuss our work on the use of path signatures in ABC as a means to handling the sequential nature of time series data of different kinds. We will begin by discussing popular approaches to ABC and how they may be extended to time series simulators. We will then introduce path signatures, and discuss how signatures naturally lead to two instances of ABC for time series simulators. Finally, we will demonstrate that the resulting signature-based ABC procedures can produce competitive Bayesian parameter inference for simulators generating univariate, multivariate, irregularly spaced, and even non-Euclidean sequences.

Reference: J. Dyer, P. Cannon, S. M. Schmon (2022). Approximate Bayesian Computation with Path Signatures. arXiv:2106.12555
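
To make the signature idea concrete, here is a minimal depth-2 signature computation in R (my own sketch, not the authors' code), treating a multivariate series as a piecewise-linear path; in practice the path is often augmented with a time coordinate and rescaled, which I skip here:

sig2 = function(X){                        # X: T x d matrix, assuming T >= 3
  D = diff(as.matrix(X))                   # increments of the piecewise-linear path
  S1 = colSums(D)                          # level one: total increment X_T - X_1
  C = apply(D, 2, cumsum)                  # running sums of increments
  P = rbind(0, C[-nrow(C), , drop=FALSE])  # sums of strictly earlier increments
  S2 = t(P) %*% D + crossprod(D)/2         # level two: iterated integrals S^{ij}
  c(S1, S2)                                # signature truncated at depth two
}
sig_dist = function(x, y) sqrt(sum((sig2(x) - sig2(y))^2))

Since the truncated signature has fixed length d+d² whatever the number of time points, sig_dist can compare series of different lengths or sampling grids, and plugging it into a standard ABC accept/reject step gives a signature-based ABC in the spirit of the talk.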

dynamic mixtures and frequentist ABC

Posted in Statistics on November 30, 2022 by xi'an

This early morning in NYC, I spotted this new arXival by Marco Bee (whom I know from the time he was writing his PhD with my late friend Bernhard Flury) and found he has been working for a while on ABC-related problems. The mixture model he considers therein is a form of mixture of experts, where the weights of the mixture components are not constant but functions of the entry x, taking values in (0,1). This model was introduced by Frigessi, Haug and Rue in 2002 and is often used as a benchmark for ABC methods, since it is missing its normalising constant, as in e.g.

f(x) \propto p(x) f_1(x) + (1-p(x)) f_2(x)

even with all entries being standard pdfs and cdfs. Rather than using a (costly) numerical approximation of the “constant” (as a function of all unknown parameters involved), Marco follows the approximate maximum likelihood approach of my Warwick colleagues, Javier Rubio [now at UCL] and Adam Johansen. It is based on the [SAME] remark that, under a uniform prior, the MAP estimator for an approximation of the likelihood is also the MLE of that approximation. The approximation is ABC-esque in that a pseudo-sample is generated from the true model (attached to a simulation of the parameter) and the pair is accepted if the pseudo-sample stands close enough to the observed sample. The paper proposes to use the Cramér-von Mises distance, which only involves ranks. Given this “posterior” sample, an approximation of the posterior density is constructed and then numerically optimised. From a frequentist viewpoint, a direct estimate of the mode would be preferable. From my Bayesian perspective, this sounds like a step backwards, given that, once a posterior sample is available, reconnecting with an approximate MLE does not sound highly compelling.
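
As a crude illustration of the whole pipeline (a toy version of mine, with component, weight and prior choices that are not Marco Bee's), take a dynamic mixture of two Gamma densities with a logistic weight: the model can be simulated exactly by rejection without ever touching the normalising constant, since f1+f2 dominates the unnormalised density, and the accepted parameters of a Cramér-von Mises ABC pass are then turned into an approximate MLE by maximising a kernel density estimate:

f1 = function(x) dgamma(x, 2, 1)
f2 = function(x) dgamma(x, 6, 1)
pw = function(x, mu) plogis(x - mu)       # weight function p(x) in (0,1)
funnorm = function(x, mu) pw(x,mu)*f1(x) + (1-pw(x,mu))*f2(x)

rmix = function(n, mu){                   # exact rejection sampler from the
  out = NULL                              # normalised f, proposal (f1+f2)/2
  while(length(out) < n){
    x = ifelse(runif(n) < .5, rgamma(n, 2, 1), rgamma(n, 6, 1))
    out = c(out, x[runif(n) < funnorm(x, mu)/(f1(x)+f2(x))])
  }
  out[1:n]
}

cvm = function(x, y){                     # Cramér-von Mises discrepancy between
  z = c(x, y)                             # the two empirical cdfs
  mean((ecdf(x)(z) - ecdf(y)(z))^2)
}

aml = function(obs, N=1e4, keep=.01){
  mu = runif(N, 0, 10)                    # uniform prior, support my own choice
  d = sapply(mu, function(m) cvm(obs, rmix(length(obs), m)))
  post = mu[d <= quantile(d, keep)]       # ABC “posterior” sample
  den = density(post)
  den$x[which.max(den$y)]                 # its mode, i.e., the approximate MLE
}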

Adversarial Bayesian Simulation [One World ABC’minar]

Posted in Statistics on November 15, 2022 by xi'an

The next One World ABC webinar will take place on 24 November, at 1:30pm UK time (GMT), and will be presented by Yuexi Wang (University of Chicago) on “Adversarial Bayesian Simulation”, available on arXiv. [The link to the webinar is available to those who have registered.]

In the absence of explicit or tractable likelihoods, Bayesians often resort to approximate Bayesian computation (ABC) for inference. In this talk, we will cover two summary-free ABC approaches, both inspired by adversarial learning. The first one adopts a classification-based KL estimator to quantify the discrepancy between real and simulated datasets. We consider the traditional accept/reject kernel as well as an exponential weighting scheme which does not require the ABC acceptance threshold. In the second paper, we develop a Bayesian GAN (B-GAN) sampler that directly targets the posterior by solving an adversarial optimization problem. B-GAN is driven by a deterministic mapping learned on the ABC reference by conditional GANs. Once the mapping has been trained, iid posterior samples are obtained by filtering noise at a negligible additional cost. We propose two post-processing local refinements using (1) data-driven proposals with importance reweighting, and (2) variational Bayes. For both methods, we support our findings with frequentist-Bayesian theoretical results and highly competitive performance in empirical analysis. (Joint work with Veronika Rockova)
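
For the first of these two approaches, here is a minimal sketch of a classification-based KL estimator (my own illustration, not the authors' code): with balanced samples, the log-odds of a classifier trained to tell observed from simulated data estimate the log density ratio, so their average over the observed sample estimates the KL divergence,

klhat = function(obs, sim){               # assumes length(obs) == length(sim),
  dat = data.frame(y = rep(1:0, c(length(obs), length(sim))),
                   x = c(obs, sim))       # so log-odds match the log ratio
  fit = glm(y ~ x + I(x^2), family=binomial, data=dat)  # crude classifier
  mean(predict(fit, data.frame(x = obs))) # average log-odds over observed data
}

and such an estimate can then drive the accept/reject kernel or the exponential weighting scheme mentioned in the abstract.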

another drawer of socks

Posted in Books, Kids, R, Statistics on November 6, 2022 by xi'an

A socks riddle from the Riddler, but with no clear ABC connection! Twenty-eight socks from fourteen pairs are taken from a drawer, one by one, and laid on a surface that only fits nine socks at a time, with complete pairs removed as soon as they occur. What is the probability that all pairs are put away without running out of space? No orphan socks then!!

Writing an R code for this experiment is straightforward

F = 0                                 # overflow counter
for(v in 1:1e6){
  S = sample(rep(1:14, 2))            # random draw order of the 28 socks
  x = S[1]                            # unmatched socks on the surface
  for(t in 2:18){                     # no overflow is possible past draw 18
    if(S[t] %in% x){ x = x[S[t] != x] }else{ x = c(x, S[t]) } # remove pair or add singleton
    if(length(x) > 9){                # more than nine socks: out of space
      F = F + 1; break
    }
  }
}
1 - F/1e6                             # estimated probability of success

and it returns a value quite close to 0.7 for the probability of success. I was expecting a less brute-force resolution, but the Riddler only provided the answer of 70.049%, based on a tree of probabilities (which I was too lazy to code).
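
For the record, the exact answer can also be reached without the tree, by a short dynamic programming pass over the number of unmatched socks on the surface (my own sketch):

p = c(1, rep(0, 9))                # p[s+1] = P(s singletons, no overflow so far)
for(t in 0:27){                    # t socks already drawn, 28 - t remain
  q = rep(0, 10)
  left = 28 - t
  for(s in which(p > 0) - 1){
    if(s > 0) q[s] = q[s] + p[s+1]*s/left               # match: s -> s - 1
    if(s < 9) q[s+2] = q[s+2] + p[s+1]*(left - s)/left  # new singleton: s -> s + 1
  }                                # a non-match at s = 9 overflows: mass dropped
  p = q
}
p[1]                               # exact success probability, the Riddler's 70.049%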
