Archive for Monash University

focused Bayesian prediction

Posted in Books, pictures, Statistics, Travel, University life on June 3, 2020 by xi'an

In this fourth session of our One World ABC Seminar, my friend and coauthor Gael Martin gave an after-dinner talk on focused Bayesian prediction, more in the spirit of Bissiri et al. than of a traditional ABC approach, because, along with Ruben Loaiza-Maya and [my friend and coauthor] David Frazier, she considers the possibility of a (mild?) misspecification of the model, thus resorting to scoring rules à la Gneiting and Raftery. Gael had in fact presented an earlier version at our workshop in Oaxaca, in November 2018. As in other solutions of that kind, there is a difficulty in turning the score into a distribution. And, although asymptotically irrelevant, this has a direct impact on the current predictions, at least for the early dates in the time series… There is also the further calibration of the set of interest A, that is, of the focus of the prediction. As a side note, the talk perfectly fits the One World likelihood-free seminar as it does not use the likelihood function!

“The very premise of this paper is that, in reality, any choice of predictive class is such that the truth is not contained therein, at which point there is no reason to presume that the expectation of any particular scoring rule will be maximized at the truth or, indeed, maximized by the same predictive distribution that maximizes a different (expected) score.”

This approach requires the proxy class to be close enough to the true data generating model, or, in the words of the authors, to contain plausible predictive models. And, since the score is proper, to produce the true distribution, or at least the model closest to the truth within the misspecified family. I thus wonder about a possible extension to a non-parametric version, the prior then bearing on functionals rather than on parameters, if I understand properly the meaning of Π(Pθ). (Could the score function be misspecified itself?!) Since the score is replaced with its empirical version, the implementation resorts to off-the-shelf MCMC. (I wondered for a few seconds whether the approach could be seen as a pseudo-marginal MCMC, but the estimation is always based on the same observed sample, hence it does not directly fit the pseudo-marginal framework.)
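
For the sake of illustration, here is a minimal sketch (under my own toy assumptions, not the authors' code) of what such a score-based posterior may look like once the expected score is replaced with its empirical version: an AR(1) predictive class, the log score as the proper scoring rule, and a plain random-walk Metropolis, whereas the paper allows for other scores and for a focus set A.

```python
import numpy as np

# Toy sketch (my own assumptions, not the authors' code): a score-based posterior
# proportional to exp{ sum_t S(P_theta, y_t) } x prior, with the log score of an
# AR(1) one-step-ahead predictive as S, sampled by random-walk Metropolis.
rng = np.random.default_rng(1)

T = 200
y = np.zeros(T)
for t in range(1, T):                      # data generated outside the predictive class
    y[t] = 0.7 * y[t - 1] + rng.standard_t(df=3)

def cumulative_score(rho):
    """Sum of one-step-ahead log scores of the N(rho*y[t-1], 1) predictive."""
    resid = y[1:] - rho * y[:-1]
    return -0.5 * np.sum(resid ** 2) - 0.5 * (T - 1) * np.log(2 * np.pi)

def log_prior(rho):
    return 0.0 if -1.0 < rho < 1.0 else -np.inf    # flat prior on (-1, 1)

rho, chain = 0.0, []
log_post = cumulative_score(rho) + log_prior(rho)
for _ in range(5000):                      # random-walk Metropolis
    prop = rho + 0.05 * rng.standard_normal()
    lp = cumulative_score(prop) + log_prior(prop)
    if np.log(rng.uniform()) < lp - log_post:
        rho, log_post = prop, lp
    chain.append(rho)

print("score-based posterior mean of rho:", np.mean(chain[1000:]))
```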

[Notice: Next talk in the series is tomorrow, 11:30am GMT+1.]

stratified ABC [One World ABC webinar]

Posted in Books, Statistics, University life on May 15, 2020 by xi'an

The third episode of the One World ABC seminar (Season 1!) was kindly delivered by Umberto Picchini on Stratified sampling and bootstrapping for ABC, which I already, if briefly, discussed after BayesComp 2020. Which sounds like a million years ago… His introduction on the importance of estimating the likelihood using a kernel, while 600% justified wrt his talk, made the One World ABC seminar sound almost like Groundhog Day! The central argument is the computational gain brought by simulating a single θ-dependent [expensive] dataset followed by [cheaper] bootstrap replicates, which de facto turns into bootstrapping the summary statistics.
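
Here is a minimal sketch of that bootstrap idea as I understand it (toy Normal model, my own notation, and without the stratification layer of the talk): a single expensive dataset is simulated per proposed θ, cheap bootstrap resamples provide a cloud of summary statistics, and the fraction of these falling within the ABC tolerance serves as an estimated likelihood inside an ABC-MCMC step.

```python
import numpy as np

# Toy sketch (my reading, not the speaker's code, and without the stratification
# layer): one expensive dataset per proposed theta, B cheap bootstrap resamples,
# and the fraction of bootstrapped summaries within the ABC tolerance used as an
# estimated likelihood in an ABC-MCMC step (the estimate of the current state is
# recycled, pseudo-marginal style).
rng = np.random.default_rng(0)
n, B, eps = 500, 200, 0.2

y_obs = rng.normal(loc=1.0, scale=1.0, size=n)          # "observed" data
s_obs = np.array([y_obs.mean(), y_obs.std()])           # observed summaries

def abc_lik_hat(theta):
    z = rng.normal(loc=theta, scale=1.0, size=n)         # single expensive simulation
    z_boot = z[rng.integers(0, n, size=(B, n))]          # B cheap bootstrap resamples
    s_boot = np.column_stack([z_boot.mean(axis=1), z_boot.std(axis=1)])
    dist = np.linalg.norm(s_boot - s_obs, axis=1)
    return np.mean(dist < eps)                           # estimated acceptance probability

theta = s_obs[0]                                         # start at the observed mean
lik, chain = abc_lik_hat(theta), []
for _ in range(3000):                                    # ABC-MCMC, flat prior on theta
    prop = theta + 0.1 * rng.standard_normal()
    lik_prop = abc_lik_hat(prop)
    if rng.uniform() * max(lik, 1e-12) < lik_prop:
        theta, lik = prop, lik_prop
    chain.append(theta)

print("ABC posterior mean:", np.mean(chain[500:]))
```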

If I understand correctly, the post-stratification approach of Art Owen (2013?, I cannot find the reference) corrects a misrepresentation of mine. Indeed, defining a partition with unknown probability weights seemed to me to annihilate the appeal of stratification, because the Bernoulli variance of the estimated probabilities brought back the same variability as the mother estimator. But with bootstrap, this requires only two simulations, one for the weights and one for the target. And further allows for a larger ABC tolerance in fine. Free lunch?!
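
For what it is worth, here is a generic illustration of post-stratification with estimated weights (my reading of the general idea, not the paper's exact construction): one sample is spent on estimating the stratum probabilities, a second one on the within-stratum means, and the two are recombined into a single estimate.

```python
import numpy as np

# Generic illustration (not the paper's construction): post-stratification with
# estimated stratum weights, one sample for the weights, another for the
# within-stratum means, recombined into a single estimate of E[f(X)].
rng = np.random.default_rng(2)
f = lambda x: x ** 2                       # E[f(X)] = 1 for X ~ N(0, 1)
edges = np.array([-1.0, 0.0, 1.0])         # cut points defining four strata

def strata_of(x):
    return np.digitize(x, edges)           # stratum label 0..3 for each point

x_w = rng.standard_normal(2000)            # sample 1: stratum weights only
w_hat = np.bincount(strata_of(x_w), minlength=4) / len(x_w)

x_m = rng.standard_normal(2000)            # sample 2: within-stratum means only
labels = strata_of(x_m)
m_hat = np.array([f(x_m[labels == j]).mean() for j in range(4)])

post_strat = np.sum(w_hat * m_hat)         # post-stratified estimator
plain = f(np.concatenate([x_w, x_m])).mean()
print("post-stratified:", post_strat, "plain Monte Carlo:", plain)
```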

The speaker in two weeks (21 May or Ascension Thursday!) is my friend and co-author Gael Martin from Monash University, who will speak on Focused Bayesian prediction, at quite a late time down under..!

Computing Bayes: Bayesian Computation from 1763 to the 21st Century

Posted in Books, pictures, Statistics, Travel, University life on April 16, 2020 by xi'an

Last night, Gael Martin, David Frazier (from Monash U) and myself arXived a survey on the history of Bayesian computations. This project started when Gael presented a historical overview of Bayesian computation, then entitled ‘Computing Bayes: Bayesian Computation from 1763 to 2017!’, at ‘Bayes on the Beach’ (Queensland, November, 2017). She then decided to build a survey from the material she had gathered, with her usual dedication and stamina, asking David and me to join forces and bring additional perspectives on this history. While this is a short and hence necessarily incomplete history (of not everything!), it hopefully brings some different threads together in an original enough fashion (as I think there is little overlap with recent surveys I wrote). We welcome comments about aspects we missed, skipped or misrepresented, most obviously!

research position at Monash

Posted in Statistics, Travel, University life on March 23, 2020 by xi'an


My friends (and coauthors) Gael Martin and David Frazier forwarded me this call for a two-year research fellow position at Monash, working with the team there on the most exciting topic of approximate Bayes!

The Research Fellow will conduct research associated with ARC Discovery Grant DP200101414: “Loss-Based Bayesian Prediction”. This project proposes a new paradigm for prediction. Using state-of-the-art computational methods, the project aims to produce accurate, fit for purpose predictions which, by design, reduce the loss incurred when the prediction is inaccurate. Theoretical validation of the new predictive method is an expected outcome, as is extensive application of the method to diverse empirical problems, including those based on high-dimensional and hierarchical data sets. The project will exploit recent advances in Bayesian computation, including approximate Bayesian computation and variational inference, to produce predictive distributions that are expressly designed to yield accurate predictions in a given loss measure. The Research Fellow would be expected to engage in all aspects of the research and would therefore build expertise in the methodological, theoretical and empirical aspects of this new predictive approach.

Deadline is 13 May 2020. This is definitely an offer to consider!

robust Bayesian synthetic likelihood

Posted in Statistics on May 16, 2019 by xi'an

David Frazier (Monash University) and Chris Drovandi (QUT) have recently come up with a robustness study of Bayesian synthetic likelihood that somehow mirrors our own work with David. In a sense, Bayesian synthetic likelihood is definitely misspecified from the start in assuming a Normal distribution on the summary statistics. When the data generating process is misspecified, even were the Normal distribution the “true” model or an appropriately converging pseudo-likelihood, the simulation-based evaluation of the first two moments of the Normal is biased. Of course, for a choice of a summary statistic with limited information, the model can still be weakly compatible with the data, in that there exists a pseudo-true value of the parameter θ⁰ for which the synthetic mean μ(θ⁰) is the mean of the statistics. (Sorry if this explanation of mine sounds unclear!) Or rather for which the Monte Carlo estimate of μ(θ⁰) coincides with that mean. The same Normal toy example as in our paper leads to very poor performances in the MCMC exploration of the (unsympathetic) synthetic target. The robustification of the approach as proposed in the paper is to bring in an extra parameter to correct for the bias in the mean, using an additional Laplace prior on the bias to aim at sparsity. Or the same for the variance matrix, towards inflating it. This over-parameterisation of the model obviously prevents the MCMC from getting stuck (when implementing a random walk Metropolis with the target as a scale).
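
To make the robustification more concrete, here is a sketch of the mean-adjusted synthetic log-likelihood as I understand it (toy contaminated-Normal example, my own parameter names): the simulated mean is shifted by σ(θ)∘Γ, with a Laplace prior on Γ shrinking the adjustment towards zero, the pair (θ, Γ) being then explored jointly by a random-walk Metropolis.

```python
import numpy as np

# Sketch of a mean-adjusted synthetic log-likelihood (my reading of the
# robustification, not the authors' code): the simulated mean mu(theta) is
# shifted by sigma(theta) * Gamma, with a Laplace prior on Gamma shrinking the
# adjustment towards zero; (theta, Gamma) would then be sampled jointly by a
# random-walk Metropolis.
rng = np.random.default_rng(3)

def summaries(x):
    return np.array([x.mean(), np.log(x.var())])

# "observed" data from a contaminated, hence misspecified, process
y = np.concatenate([rng.normal(0, 1, 900), rng.normal(5, 1, 100)])
s_obs = summaries(y)

def adjusted_synlik(theta, gamma, m=100, n=1000):
    """Gaussian synthetic log-likelihood with a mean adjustment."""
    sims = np.array([summaries(rng.normal(theta, 1, n)) for _ in range(m)])
    mu, cov = sims.mean(axis=0), np.cov(sims, rowvar=False)
    mu_adj = mu + np.sqrt(np.diag(cov)) * gamma        # bias-corrected mean
    diff = s_obs - mu_adj
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (logdet + diff @ np.linalg.solve(cov, diff))

def log_laplace_prior(gamma, lam=0.5):
    return -np.sum(np.abs(gamma)) / lam                # sparsity-inducing prior

theta, gamma = 0.3, np.zeros(2)
print("adjusted log-target:", adjusted_synlik(theta, gamma) + log_laplace_prior(gamma))
```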