Archive for exact inference

Bayesian model averaging with exact inference of likelihood-free scoring rule posteriors [23/01/2024, PariSanté campus]

Posted in pictures, Statistics, Travel, University life on January 16, 2024 by xi'an

A special “All about that Bayes” seminar in Paris (PariSanté campus, 23/01, 16:00-17:00) next week by my Warwick colleague and friend Rito:

Bayesian Model Averaging with exact inference of likelihood-free Scoring Rule Posteriors

Ritabrata Dutta, University of Warwick

A novel application of Bayesian Model Averaging to generative models parameterized with neural networks (GNN) and characterized by intractable likelihoods is presented. We leverage a likelihood-free generalized Bayesian inference approach with Scoring Rules. To tackle the challenge of model selection in neural networks, we adopt a continuous shrinkage prior, specifically the horseshoe prior. We introduce an innovative blocked sampling scheme, compatible both with the Boomerang Sampler (a type of piecewise deterministic Markov process sampler) for exact but slower inference and with Stochastic Gradient Langevin Dynamics (SGLD) for faster yet biased posterior inference. This approach serves as a versatile tool bridging the gap between intractable likelihoods and robust Bayesian model selection within the generative modelling framework.
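For readers unfamiliar with the SGLD side of the abstract, here is a minimal sketch on a toy one-dimensional Gaussian target (my own illustration, not from the talk, and nowhere near the actual setting of neural network parameters under a horseshoe prior): each iteration takes half a gradient step on the log-posterior and injects Gaussian noise scaled by the square root of the step size, producing a biased-but-fast approximation to the posterior.

```python
import math
import random

def sgld(grad_log_post, theta0, step, n_iter, seed=0):
    """Stochastic Gradient Langevin Dynamics, scalar toy version.

    Each update is half a gradient step on the log-posterior plus
    N(0, step) noise; in practice grad_log_post would be a minibatch
    estimate, which is where the speed (and the bias) comes from.
    """
    rng = random.Random(seed)
    theta = theta0
    chain = []
    for _ in range(n_iter):
        theta += 0.5 * step * grad_log_post(theta) \
                 + math.sqrt(step) * rng.gauss(0.0, 1.0)
        chain.append(theta)
    return chain

# toy target: posterior N(2, 1), so grad log p(theta) = -(theta - 2)
chain = sgld(lambda t: -(t - 2.0), 0.0, step=0.05, n_iter=100_000)
```

With a fixed step size the invariant distribution is only approximately the target (the discretization bias the abstract alludes to), which is the trade-off against the exact but slower piecewise deterministic samplers.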

approximate computation for exact statistical inference from differentially private data

Posted in Books, Mountains, pictures, Running, Statistics, Travel, University life on September 10, 2023 by xi'an

“the employment of ABC for differentially private data serendipitously eradicates the “approximate” nature of the resulting posterior samples, which otherwise would be the case if the data were noise-free.”

In parallel or conjunction with the 23w5601 workshop, I was reading some privacy literature and came across Exact inference with approximate computation for differentially private data via perturbation by Ruobin Gong, which appeared in the Journal of Privacy and Confidentiality last year (2022). When differential privacy is implemented by perturbation, i.e., by replacing the private data with a randomised, usually Gaussian, version, the exact posterior distribution involves a convolution which, if unavailable in closed form, can be handled by a standard ABC step. Which, most interestingly, does not impact the accuracy of the (public) posterior, i.e., it does not modify this posterior when the probability of acceptance in ABC is the density of the perturbation noise at the public data given the pseudo-data. Which follows from the 1984 introduction of the ABC idea. By contrast, EM does not enjoy an exact version, as the E step must be (unbiasedly) approximated by a Monte Carlo representation that relies on the same ABC algorithm. Which usually makes the M step harder, although a Monte Carlo version of the gradient is also available.
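A toy illustration of this exactness phenomenon (my own sketch, with a made-up conjugate model, not an example from the paper): take a scalar model x ~ N(θ, 1) with prior θ ~ N(0, 1), and release the perturbed value y = x + N(0, σ²) as the public data. Accepting each prior draw with probability equal to the (scaled) privacy-noise density at y − x*, where x* is the pseudo-data, targets the exact posterior p(θ | y) — no tolerance, no approximation.

```python
import math
import random

def exact_abc(y, sigma, n_prop=200_000, seed=1):
    """ABC rejection sampler that is exact for perturbed (private) data.

    Toy conjugate setting: prior theta ~ N(0,1), model x ~ N(theta,1),
    released value y = x + N(0, sigma^2).  The acceptance probability is
    the Gaussian privacy-noise density at y - x*, scaled so its maximum
    is one, which makes the accepted thetas exact draws from p(theta|y).
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n_prop):
        theta = rng.gauss(0.0, 1.0)      # draw from the prior
        x_star = rng.gauss(theta, 1.0)   # pseudo-data from the model
        # accept w.p. exp(-(y - x*)^2 / (2 sigma^2))  (noise density, scaled)
        if rng.random() < math.exp(-(y - x_star) ** 2 / (2 * sigma ** 2)):
            samples.append(theta)
    return samples

# with sigma = 1, y ~ N(theta, 2) marginally, so the exact posterior is
# N(y/3, 2/3); the accepted samples should match that, not an approximation
samples = exact_abc(y=1.5, sigma=1.0)
```

In this conjugate toy the target is available in closed form (posterior mean y/3 when σ = 1), which is what makes the exactness checkable; the point of the paper is that the same acceptance rule stays exact when the convolution is intractable.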