## Archive for One World ABC Seminar

## David Frazier’s talk on One World ABC seminar tomorrow [watch for the time!]

Posted in pictures, Statistics, Travel, University life with tags ABC, Australia, Bayesian robustness, daylight saving time, Melbourne, Monash University, One World ABC Seminar, University of Warwick, Victoria, webinar on October 14, 2020 by xi'an

**M**y friend and coauthor from Melbourne is giving the One World ABC seminar tomorrow, on Robust and Efficient Approximate Bayesian Computation: A Minimum Distance Approach. He will be talking at 10:30 UK time, 11:30 Brussels time, and 20:30 Melbourne time! Be on time!

## one World ABC seminar [term #2]

Posted in Statistics with tags ABC, ABCruise, Approximate Bayesian computation, approximate Bayesian inference, Bayesian synthetic likelihood, gaussian process, MCMC, One World ABC Seminar, online seminar, University of Warwick, webinar on September 29, 2020 by xi'an

**T**he One World ABC seminar continues on-line this semester! With talks every other Thursday at 11:30 UK time (12:30 central European time). Incoming speakers are

- Marko Järvenpää, on Batch simulations and uncertainty quantification in Gaussian process surrogate ABC, on the first of October
- David Frazier, on a minimum distance approach to ABC, on 15 October **[warning: 10:30 UK time, 11:30 EU time, 20:30 Victoria time!!!]**
- David Nott, on Marginally calibrated deep distributional regression, on 12 November
- Matti Vihola, on the use of approximate Bayesian computation Markov chain Monte Carlo with inflated tolerance and post-correction, on 10 December

with the speaker for 29 October still to be confirmed. Anyone interested in presenting at this webinar in the near future should not hesitate to contact Massimiliano Tamborrino in Warwick or any of the other organisers of the seminar!

## improving synthetic likelihood

Posted in Books, Statistics, University life with tags ABC, approximate Bayesian inference, Bayesian synthetic likelihood, Brisbane, MA(p) model, One World ABC Seminar, QUT, summary statistics, webinar on July 9, 2020 by xi'an

**C**hris Drovandi gave an after-dinner [QUT time!] talk for the One World ABC webinar on a recent paper he wrote with Jacob Priddle, Scott Sisson and David Frazier, using a regular MCMC step on a synthetic likelihood approximation to the posterior. Or a (simulation based) unbiased estimator of it.

By evaluating the variance of the log-likelihood estimator, the authors show that the number of simulations needs to scale like n²d² to keep the variance under control. And they suggest PCA decorrelation of the summary statistic components as a means to reduce the variance, since it then scales as n²d. Rather idly, I wonder at the final relevance of precisely estimating the (synthetic) likelihood when considering that it is not the true likelihood and when the n² part seems more damning. Moving from d² to d seems directly related to estimating a full correlation matrix for the Normal synthetic distribution of the summary statistic versus estimating a diagonal matrix. The usual complaint that performance highly depends on the choice of the summary statistic also applies here, in particular when its dimension is much larger than the dimension d of the parameter (as in the MA example), although this does not seem to impact the scale of the variance.
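The full-versus-diagonal covariance point can be illustrated with a small simulation. Everything below is a toy sketch of my own: the mixing matrix `A`, the `simulate_summaries` helper, and all the dimensions are hypothetical, and the one-off whitening rotation merely stands in for the PCA decorrelation discussed in the talk.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
d, n, reps = 20, 50, 200           # summary dimension, simulations per estimate, replicates
A = rng.normal(size=(d, d))        # hypothetical mixing matrix inducing correlation

def simulate_summaries(m):
    """Draw m correlated d-dimensional summary statistics (toy model)."""
    return rng.normal(size=(m, d)) @ A.T

def synthetic_loglik(s_obs, sims, diagonal=False):
    """Gaussian synthetic log-likelihood estimated from simulated summaries."""
    mu = sims.mean(axis=0)
    if diagonal:
        cov = np.diag(sims.var(axis=0, ddof=1))   # d parameters to estimate
    else:
        cov = np.cov(sims, rowvar=False)          # d(d+1)/2 parameters to estimate
    return multivariate_normal.logpdf(s_obs, mean=mu, cov=cov)

# One-off pilot run to estimate the decorrelating (PCA) rotation.
pilot = simulate_summaries(5000)
_, W = np.linalg.eigh(np.cov(pilot, rowvar=False))

s_obs = simulate_summaries(1).ravel()

# Monte Carlo variance of the log-likelihood estimator, full vs decorrelated+diagonal.
ll_full = [synthetic_loglik(s_obs, simulate_summaries(n)) for _ in range(reps)]
ll_diag = [synthetic_loglik(s_obs @ W, simulate_summaries(n) @ W, diagonal=True)
           for _ in range(reps)]
var_full, var_diag = np.var(ll_full), np.var(ll_diag)
```

Under this artificial setup the diagonal estimator on the rotated summaries comes out markedly less variable, mirroring the d² to d reduction in the number of covariance parameters.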

## focused Bayesian prediction

Posted in Books, pictures, Statistics, Travel, University life with tags Australia, Bayesian non-parametrics, Bayesian predictive, Casa Matemática Oaxaca, econometrics, likelihood-free inference, Mexico, misspecification, Monash University, One World ABC Seminar, prediction, pseudo-marginal MCMC, score function, webinar on June 3, 2020 by xi'an

**I**n this fourth session of our One World ABC Seminar, my friend and coauthor Gael Martin gave an after-dinner talk on focused Bayesian prediction, more in the spirit of Bissiri et al. than following a traditional ABC approach, because, along with Ruben Loaiza-Maya and [my friend and coauthor] David Frazier, they consider the possibility of a (mild?) misspecification of the model, and thus use scoring rules à la Gneiting and Raftery. Gael had in fact presented an earlier version at our workshop in Oaxaca, in November 2018. As in other solutions of that kind, there is a difficulty in turning the score into a distribution by weighting it. While asymptotically irrelevant, this has a direct impact on the current predictions, at least for the early dates in the time series… along with the further calibration of the set of interest A, or focus of the prediction. As a side note, the talk perfectly fits the One World likelihood-free seminar as it does not use the likelihood function!

“The very premise of this paper is that, in reality, any choice of predictive class is such that the truth is not contained therein, at which point there is no reason to presume that the expectation of any particular scoring rule will be maximized at the truth or, indeed, maximized by the same predictive distribution that maximizes a different (expected) score.”

This approach requires the proxy class to be close enough to the true data generating model, or in the words of the authors to contain *plausible predictive* models, and, the score being proper, to produce the true distribution, or else the closest to the true model within the misspecified family. I thus wonder at a possible extension with a non-parametric version, the prior being then on functionals rather than parameters, if I understand properly the meaning of Π(P_{θ}). (Could the score function be misspecified itself?!) Since the score is replaced with its empirical version, the implementation resorts to off-the-shelf MCMC. (I wondered for a few seconds if the approach could be seen as a pseudo-marginal MCMC, but the estimation is always based on the same observed sample, hence it does not directly fit the pseudo-marginal framework.)

*[Notice: Next talk in the series is tomorrow, 11:30am GMT+1.]*

## stratified ABC [One World ABC webinar]

Posted in Books, Statistics, University life with tags Ascension, Australia, BayesComp 2020, bootstrap likelihood, Gainesville, groundhog day, Monash University, One World ABC Seminar, stratified sampling on May 15, 2020 by xi'an

**T**he third episode of the One World ABC seminar (Season 1!) was kindly delivered by Umberto Picchini on Stratified sampling and bootstrapping for ABC, which I had already, if briefly, discussed after BayesComp 2020. Which sounds like a million years ago… His introduction on the importance of estimating the likelihood using a kernel, while 600% justified wrt his talk, made the One World ABC seminar sound almost like groundhog day! The central argument is in the computational gain brought by simulating a single θ-dependent [expensive] dataset followed by [cheaper] bootstrap replicates. Which de facto turns into bootstrapping the summary statistics.

If I understand correctly, the post-stratification approach of Art Owen (2013?, I cannot find the reference) corrects a misrepresentation of mine. Indeed, defining a partition with unknown probability weights seemed to me to annihilate the appeal of stratification, because the Bernoulli variance of the estimated probabilities brought back the same variability as the mother estimator. But with bootstrap, this requires only two simulations, one for the weights and one for the target. And further allows for a larger ABC tolerance *in fine*. Free lunch?!
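The single-expensive-simulation-plus-cheap-bootstrap idea can be sketched in a few lines; the Gaussian model, the two summary statistics, and all the sizes below are toy choices of mine, and the stratification layer of the talk is left out entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_dataset(theta, size=500):
    """Stand-in for a single, notionally expensive, model simulation at theta."""
    return rng.normal(loc=theta, scale=1.0, size=size)

def summaries(x):
    """Illustrative summary statistics: sample mean and log sample variance."""
    return np.array([x.mean(), np.log(x.var())])

theta = 1.0
x = simulate_dataset(theta)        # one expensive simulation...

# ...then B cheap bootstrap replicates of the summary statistics.
B = 200
idx = rng.integers(0, x.size, size=(B, x.size))
boot_summaries = np.array([summaries(x[i]) for i in idx])

# The bootstrap cloud approximates the sampling distribution of the summaries
# at this theta, without any further calls to the expensive simulator.
boot_mean = boot_summaries.mean(axis=0)
```

The resulting cloud of bootstrapped summaries can then feed a kernel estimate of the likelihood, or an ABC acceptance step, at the cost of a single model simulation per θ.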

The speaker in two weeks (21 May or Ascension Thursday!) is my friend and co-author Gael Martin from Monash University, who will speak on Focused Bayesian prediction, at quite a late time down under..!