Archive for Approximate Bayesian computation

ABC in… everywhere [programme]

Posted in Mountains, pictures, Statistics, Travel, University life on April 8, 2021 by xi'an

The ABC in Svalbard workshop is taking place on-line next week (and most sadly not in Svalbard). The programme is available on the ABC site. It starts (in Australia) at 4:00 GMT (14:00 AEST) and finishes (in France) at 15:30 GMT (17:30 CEST). Registration is free but needed to access the Zoom codes! See you on Zoom next week!!!

γ-ABC

Posted in Statistics on March 24, 2021 by xi'an

An AISTATS 2021 paper by Masahiro Fujisawa, Takeshi Teshima, Issei Sato and Masashi Sugiyama (RIKEN, Tokyo) just appeared on arXiv. (AISTATS 2021 is again virtual this year.)

“ABC can be sensitive to outliers if a data discrepancy measure is chosen inappropriately (…) In this paper, we propose a novel outlier-robust and computationally-efficient discrepancy measure based on the γ-divergence”

The focus is on measures of robustness for ABC distances, as those can prove lethal when insufficient summarisation is used. (Note that a referenced paper by Erlis Ruli, Nicola Sartori and Laura Ventura from Padova appeared last year on robust ABC.) The current approach mixes the γ-divergence of Fujisawa and Eguchi with a k-nearest-neighbour density estimator. Which may not prove too costly, of order O(n log n), but may also be a poor, if robust, approximation, even though it enjoys asymptotic unbiasedness and almost-sure convergence. These are the properties established in the paper, which only demonstrates convergence in the sample size n to an ABC approximation with the true γ-divergence but with a fixed tolerance ε, whereas the most recent results are rather concerned with the rate at which ε(n) converges to zero. (An extensive simulation section compares this approach with several ABC alternatives, including ours based on the Wasserstein distance. If I read the comparison graphs properly, there is no huge discrepancy between the two approaches in the absence of contamination.) Incidentally, the paper contains a substantial survey section and a massive reference list, albeit missing the publication, more than a year earlier, of our Wasserstein paper in Series B.
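To fix ideas on the kind of discrepancy under discussion, here is a minimal, illustrative sketch (not the paper's exact estimator, whose construction and tuning I am not reproducing): it plugs k-nearest-neighbour density estimates into the sample version of the Fujisawa–Eguchi γ-divergence, D_γ(p,q) = (1/(γ(1+γ))) log ∫p^{1+γ} − (1/γ) log ∫p q^γ + (1/(1+γ)) log ∫q^{1+γ}. The function names, γ and k values, and the brute-force O(n²) neighbour search (instead of an O(n log n) tree) are all choices of mine.

```python
import math
import numpy as np

def knn_density(queries, sample, k=5, exclude_self=False):
    """k-NN density estimate: p(x) ~ k / (n * V_d * r_k(x)^d),
    with r_k(x) the distance from x to its k-th nearest neighbour in sample."""
    n, d = sample.shape
    kk = k + 1 if exclude_self else k  # skip the zero self-distance
    d2 = ((queries[:, None, :] - sample[None, :, :]) ** 2).sum(-1)
    r_k = np.sqrt(np.sort(d2, axis=1)[:, kk - 1])
    v_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)  # volume of the unit ball
    return k / (n * v_d * np.maximum(r_k, 1e-12) ** d)

def gamma_discrepancy(x, y, g=0.2, k=5):
    """γ-divergence D_γ(p, q) with p, q replaced by k-NN density estimates
    built from samples x ~ p (observed) and y ~ q (simulated)."""
    p_at_x = knn_density(x, x, k, exclude_self=True)  # for ∫ p^{1+γ} = E_p[p^γ]
    q_at_x = knn_density(x, y, k)                     # for ∫ p q^γ   = E_p[q^γ]
    q_at_y = knn_density(y, y, k, exclude_self=True)  # for ∫ q^{1+γ} = E_q[q^γ]
    return (np.log(np.mean(p_at_x ** g)) / (g * (1 + g))
            - np.log(np.mean(q_at_x ** g)) / g
            + np.log(np.mean(q_at_y ** g)) / (1 + g))
```

The robustness motivation is visible in the middle term: the power γ damps the contribution of outlying observed points where the simulated density is tiny, which a Kullback–Leibler-type log term would instead blow up.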

computing Bayes 2.0

Posted in Books, Statistics, University life on December 11, 2020 by xi'an

Our survey paper on “computing Bayes“, written with my friends Gael Martin [who led this project most efficiently!] and David Frazier, has now been revised and resubmitted, the new version now being available on arXiv. Recognising that the entire range of the literature cannot be encompassed within a single review, esp. wrt the theoretical advances made on MCMC, the revised version is more focussed on the approximative solutions (when considering MCMC as “exact”!). As put by one of the referees [who were all very supportive of the paper], “the authors are very brave. To cover in a review paper the computational methods for Bayesian inference is indeed a monumental task and in a way an hopeless one”. This is the opportunity to congratulate Gael on her election to the Academy of Social Sciences of Australia last month. (Along with her colleague from Monash, Rob Hyndman.)

A is for…

Posted in Statistics on December 10, 2020 by xi'an

ABC with inflated tolerance

Posted in Mountains, pictures, Statistics, Travel, University life on December 8, 2020 by xi'an


For the last One World ABC seminar of the year 2020, this coming Thursday, Matti Vihola is speaking from Finland on his recent Biometrika paper “On the use of ABC-MCMC with inflated tolerance and post-correction”. To attend the talk, all that is required is a registration on the seminar webpage.

The Markov chain Monte Carlo (MCMC) implementation of ABC is often sensitive to the tolerance parameter: low tolerance leads to poor mixing and large tolerance entails excess bias. We propose an approach that involves using a relatively large tolerance for the MCMC sampler to ensure sufficient mixing, and post-processing of the output which leads to estimators for a range of finer tolerances. We introduce an approximate confidence interval for the related post-corrected estimators and propose an adaptive ABC-MCMC algorithm, which finds a balanced tolerance level automatically based on acceptance rate optimization. Our experiments suggest that post-processing-based estimators can perform better than direct MCMC targeting a fine tolerance, that our confidence intervals are reliable, and that our adaptive algorithm can lead to reliable inference with little user specification.
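The gist of the inflated-tolerance idea can be sketched as follows (a toy illustration of the principle, not the paper's algorithm: the Gaussian model, the N(0, 10²) prior, the tuning constants and the simple indicator reweighting are all my own hypothetical choices): run ABC-MCMC at a generous tolerance ε₀ while recording each state's distance, then recover estimators at any finer ε ≤ ε₀ by filtering the stored chain, without re-running the sampler.

```python
import numpy as np

def abc_mcmc(y_obs, eps0, n_iter=20000, step=0.5, seed=1):
    """ABC-MCMC with a (deliberately inflated) tolerance eps0 for good mixing.
    Toy model: y_i ~ N(theta, 1), prior theta ~ N(0, 10^2), summary = mean.
    Returns the chain and the recorded distance of each state."""
    rng = np.random.default_rng(seed)
    theta, dist = 0.0, np.inf
    chain, dists = [], []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()            # symmetric random walk
        z = rng.normal(prop, 1.0, size=y_obs.size)    # pseudo-data
        d = abs(z.mean() - y_obs.mean())              # summary distance
        log_ratio = (theta ** 2 - prop ** 2) / (2 * 10.0 ** 2)  # prior ratio
        if d <= eps0 and np.log(rng.uniform()) < log_ratio:
            theta, dist = prop, d
        chain.append(theta)
        dists.append(dist)
    return np.array(chain), np.array(dists)

def post_corrected_mean(chain, dists, eps):
    """Posterior-mean estimator post-corrected to a finer tolerance eps <= eps0:
    keep only states whose recorded distance meets the finer tolerance."""
    keep = dists <= eps
    return chain[keep].mean() if keep.any() else np.nan
```

A single run at ε₀ thus yields a whole family of estimators indexed by ε, which is what makes the confidence intervals and the acceptance-rate-based adaptation of the paper possible; the sketch above only shows the reweighting step.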