Archive for seminar

Bayesian score calibration at One World ABC’minar [only comes every 1462 days]

Posted in Statistics, University life, Books on February 22, 2024 by xi'an

mostly MC[bruary]

Posted in Books, Kids, Statistics, University life on February 18, 2024 by xi'an

All About that Bayes stroll

Posted in pictures, Statistics, University life on February 9, 2024 by xi'an

For all Bayesians and sympathisers in the Paris area, an upcoming All about that Bayes seminar¹ by Elisabeth Gassiat (Institut de Mathématiques d’Orsay) on 13 February, 16h00, on Campus Pierre & Marie Curie, SCAI:

A stroll through hidden Markov models

Hidden Markov models are latent variable models producing dependent sequences. I will survey recent results providing guarantees for their use in various fields such as clustering, multiple testing, nonlinear ICA, and variational autoencoders.
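As a side illustration (not part of the talk), here is a minimal Python simulation of a two-state Gaussian hidden Markov model, showing how the latent chain makes the observed sequence serially dependent; all parameter values below are made up.

import numpy as np

rng = np.random.default_rng(0)

# hypothetical two-state Gaussian HMM (illustrative values only)
A = np.array([[0.9, 0.1],            # transition matrix of the hidden chain
              [0.2, 0.8]])
means, sds = np.array([-1.0, 2.0]), np.array([0.5, 1.0])

T = 200
states = np.empty(T, dtype=int)
obs = np.empty(T)
states[0] = rng.choice(2)                                  # arbitrary start
obs[0] = rng.normal(means[states[0]], sds[states[0]])
for t in range(1, T):
    states[t] = rng.choice(2, p=A[states[t - 1]])          # Markov transition
    obs[t] = rng.normal(means[states[t]], sds[states[t]])  # emission

# obs is marginally a Gaussian mixture, yet serially dependent through the
# hidden chain, which is the structure exploited in the clustering and
# multiple testing applications mentioned in the abstract.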


¹Incidentally, I came across an unrelated All about that Bayes YouTube video, a talk given by Kristin Lennox (Lawrence Livermore National Laboratory). And then found a myriad of talks and courses using that pun.

Bayesian model averaging with exact inference of likelihood-free scoring rule posteriors [23/01/2024, PariSanté campus]

Posted in pictures, Statistics, Travel, University life on January 16, 2024 by xi'an

A special “All about that Bayes” seminar in Paris (PariSanté campus, 23/01, 16:00-17:00) next week by my Warwick colleague and friend Rito:

Bayesian Model Averaging with exact inference of likelihood-free Scoring Rule Posteriors

Ritabrata Dutta, University of Warwick

We present a novel application of Bayesian Model Averaging to generative models parameterized with neural networks (GNN) and characterized by intractable likelihoods. We leverage a likelihood-free generalized Bayesian inference approach with Scoring Rules. To tackle the challenge of model selection in neural networks, we adopt a continuous shrinkage prior, specifically the horseshoe prior. We introduce an innovative blocked sampling scheme, offering compatibility both with the Boomerang Sampler (a type of piecewise deterministic Markov process sampler) for exact but slower inference and with Stochastic Gradient Langevin Dynamics (SGLD) for faster yet biased posterior inference. This approach serves as a versatile tool bridging the gap between intractable likelihoods and robust Bayesian model selection within the generative modelling framework.
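To give a concrete (if greatly simplified) flavour of such scoring rule posteriors, here is a minimal Python sketch: a one-parameter toy simulator stands in for the neural generative model, the energy score replaces the intractable likelihood, and a plain SGLD-type Langevin update with a Gaussian prior is used instead of the horseshoe prior and blocked Boomerang scheme of the talk. Every name and value below is an illustrative assumption, not the speakers' implementation.

import numpy as np

rng = np.random.default_rng(1)

def simulate(theta, m):
    # toy location simulator x = theta + z, z ~ N(0,1), standing in for a
    # generative neural network with intractable likelihood
    return theta + rng.standard_normal(m)

y = rng.normal(1.5, 1.0, size=100)        # hypothetical observed data

def grad_energy_score(theta, y, m=50):
    # gradient in theta of an unbiased energy-score estimate; with the
    # reparameterisation x_j = theta + z_j the pairwise |x_j - x_k| term
    # does not depend on theta, so only the data term contributes
    x = simulate(theta, m)
    return np.sign(x[:, None] - y[None, :]).mean()

eps, w, n_iter = 1e-3, len(y), 2000       # step size, loss weight, iterations
theta, samples = 0.0, []
for it in range(n_iter):
    grad_log_prior = -theta / 10.0        # N(0, 10) prior, not the horseshoe
    drift = grad_log_prior - w * grad_energy_score(theta, y)
    theta += 0.5 * eps * drift + np.sqrt(eps) * rng.standard_normal()
    if it >= n_iter // 2:                 # keep the second half as draws
        samples.append(theta)

# samples now approximates a generalised (scoring rule) posterior on theta.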

mostly MC[nuary]

Posted in Books, Kids, Statistics, University life on January 6, 2024 by xi'an