Archive for One World ABC Seminar

One World ABC seminar [season 2]

Posted in Books, Statistics, University life on March 23, 2021 by xi'an

The One World ABC seminar will resume its talks on ABC methods with a talk on Thursday, 25 March, 12:30CET, by Mijung Park, from the Max Planck Institute for Intelligent Systems, on the exciting topic of producing differential privacy by ABC. (Talks will take place on a monthly basis.)

ABC with inflated tolerance

Posted in Mountains, pictures, Statistics, Travel, University life on December 8, 2020 by xi'an


For the last One World ABC seminar of the year 2020, this coming Thursday, Matti Vihola is speaking from Finland on his recent Biometrika paper “On the use of ABC-MCMC with inflated tolerance and post-correction”. To attend the talk, all that is required is a registration on the seminar webpage.

The Markov chain Monte Carlo (MCMC) implementation of ABC is often sensitive to the tolerance parameter: low tolerance leads to poor mixing and large tolerance entails excess bias. We propose an approach that involves using a relatively large tolerance for the MCMC sampler to ensure sufficient mixing, and post-processing of the output which leads to estimators for a range of finer tolerances. We introduce an approximate confidence interval for the related post-corrected estimators and propose an adaptive ABC-MCMC algorithm, which finds a balanced tolerance level automatically based on acceptance rate optimization. Our experiments suggest that post-processing-based estimators can perform better than direct MCMC targeting a fine tolerance, that our confidence intervals are reliable, and that our adaptive algorithm can lead to reliable inference with little user specification.
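To give a rough feel for the post-correction idea described in the abstract (this is a toy sketch, not the paper's implementation), the chain can be run with an inflated tolerance while storing the distance at each state, and estimators at finer tolerances recovered afterwards by keeping only the states within each finer tolerance. The Gaussian toy model, the standard normal prior, and all numerical settings below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
y_obs = 1.0

def simulate(theta):
    # toy simulator: the observation is theta plus Gaussian noise
    return theta + rng.normal()

def abc_mcmc(n_iter, eps, prop_sd=0.5):
    """ABC-MCMC with tolerance eps, storing the distance at each state."""
    theta = 0.0
    dist = abs(simulate(theta) - y_obs)
    while dist > eps:                      # start inside the ABC support
        dist = abs(simulate(theta) - y_obs)
    thetas, dists = [], []
    for _ in range(n_iter):
        prop = theta + rng.normal(0.0, prop_sd)
        d = abs(simulate(prop) - y_obs)
        # standard normal prior, symmetric random-walk proposal
        log_alpha = -0.5 * (prop**2 - theta**2)
        if d <= eps and np.log(rng.uniform()) < log_alpha:
            theta, dist = prop, d
        thetas.append(theta)
        dists.append(dist)
    return np.array(thetas), np.array(dists)

# run once with an inflated tolerance to keep the chain mixing
thetas, dists = abc_mcmc(20000, eps=1.0)

# post-correction: posterior mean estimates at a range of finer tolerances,
# obtained from the same chain output without rerunning the sampler
for eps_fine in (1.0, 0.5, 0.25):
    keep = dists <= eps_fine
    print(eps_fine, thetas[keep].mean(), keep.mean())
```

The last column printed (the fraction of retained states) illustrates the trade-off driving the paper's adaptive tolerance choice: too fine a post-correction tolerance leaves few effective samples.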

David Frazier’s talk on One World ABC seminar tomorrow [watch for the time!]

Posted in pictures, Statistics, Travel, University life on October 14, 2020 by xi'an

My friend and coauthor from Melbourne is giving the One World ABC seminar tomorrow. He will be talking at 10:30 UK time, 11:30 Brussels time, and 20:30 Melbourne time, on Robust and Efficient Approximate Bayesian Computation: A Minimum Distance Approach. Be on time!

One World ABC seminar [term #2]

Posted in Statistics on September 29, 2020 by xi'an

The One World ABC seminar continues on-line this semester, with talks every other Thursday at 11:30 UK time (12:30 Central European Time). Incoming speakers are

with presenters to be confirmed for 29 October. Anyone interested in presenting at this webinar in the near future should not hesitate to contact Massimiliano Tamborrino in Warwick or any of the other organisers of the seminar!

improving synthetic likelihood

Posted in Books, Statistics, University life on July 9, 2020 by xi'an

Chris Drovandi gave an after-dinner [QUT time!] talk for the One World ABC webinar on a recent paper he wrote with Jacob Priddle, Scott Sisson and David Frazier, on using a regular MCMC step on a synthetic likelihood approximation to the posterior, or on a (simulation-based) unbiased estimator of it.

By evaluating the variance of the log-likelihood estimator, the authors show that the number of simulations n needs to scale like n²d² to keep the variance under control, and they suggest PCA decorrelation of the summary statistic components as a means to reduce the variance, since it then scales as n²d. Rather idly, I wonder at the final relevance of precisely estimating the (synthetic) likelihood, considering that it is not the true likelihood and that the n² part seems more damning. Moving from d² to d seems directly related to estimating a full correlation matrix for the Normal synthetic distribution of the summary statistic versus estimating a diagonal matrix. The usual complaint that performance highly depends on the choice of the summary statistic also applies here, in particular when its dimension is much larger than the dimension d of the parameter (as in the MA example), although this does not seem to impact the scale of the variance.
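To make the full-versus-diagonal covariance point concrete, here is a hedged toy sketch (not the authors' code): a Gaussian synthetic log-likelihood computed once with a full covariance estimate, and once after whitening the summaries with a PCA transform estimated from a pilot run at a fixed parameter value, which is what justifies the diagonal covariance afterwards. The simulator, the deliberately correlated summaries, and all settings are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def summaries(theta, n_rep=50):
    """Simulate n_rep datasets and return deliberately correlated summaries."""
    x = theta + rng.normal(size=(n_rep, 3))
    return np.column_stack([x[:, 0], x[:, 0] + x[:, 1], x.sum(axis=1)])

def synthetic_loglik(s_obs, sims, whiten=None):
    """Gaussian synthetic log-likelihood of the observed summaries s_obs."""
    if whiten is not None:
        # decorrelated summaries: a diagonal covariance suffices,
        # so only d variances are estimated instead of d(d+1)/2 entries
        sims, s_obs = sims @ whiten, s_obs @ whiten
        mu, var = sims.mean(0), sims.var(0, ddof=1)
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (s_obs - mu) ** 2 / var)
    mu = sims.mean(0)
    cov = np.cov(sims, rowvar=False)
    resid = s_obs - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (logdet + resid @ np.linalg.solve(cov, resid)
                   + len(mu) * np.log(2 * np.pi))

# estimate the PCA whitening transform from a pilot run at a fixed theta
pilot = summaries(0.0, n_rep=500)
evals, evecs = np.linalg.eigh(np.cov(pilot, rowvar=False))
W = evecs / np.sqrt(evals)   # columns decorrelate and rescale the summaries

s_obs = summaries(0.0, n_rep=1).ravel()
sims = summaries(0.0, n_rep=50)
print(synthetic_loglik(s_obs, sims))             # full covariance estimate
print(synthetic_loglik(s_obs, sims, whiten=W))   # diagonal after whitening
```

The whitening matrix is estimated once and then held fixed across parameter values, which is the step that makes the diagonal approximation (and the d² to d gain) only approximate away from the pilot parameter.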