Archive for approximate Bayesian inference

the Bayesian learning rule [One World ABC’minar, 27 April]

Posted in Books, Statistics, University life on April 24, 2023 by xi'an

The next One World ABC seminar is taking place (on-line, requiring pre-registration) on 27 April, 9:30am UK time, with Mohammad Emtiyaz Khan (RIKEN-AIP, Tokyo) speaking about the Bayesian learning rule:

We show that many machine-learning algorithms are specific instances of a single algorithm called the Bayesian learning rule. The rule, derived from Bayesian principles, yields a wide range of algorithms from fields such as optimization, deep learning, and graphical models. This includes classical algorithms such as ridge regression, Newton's method, and the Kalman filter, as well as modern deep-learning algorithms such as stochastic-gradient descent, RMSprop, and Dropout. The key idea in deriving such algorithms is to approximate the posterior using candidate distributions estimated by using natural gradients. Different candidate distributions result in different algorithms, and further approximations to natural gradients give rise to variants of those algorithms. Our work not only unifies, generalizes, and improves existing algorithms, but also helps us design new ones.
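For attendees wanting a preview, the rule in Khan and Rue's companion paper can be written (in my own notation, so take this as a sketch rather than the paper's exact statement) as a natural-gradient step on a variational objective, updating the natural parameter λ of an exponential-family candidate q_λ:

```latex
% Bayesian learning rule (schematic, my notation):
% natural-gradient step on expected loss minus entropy
\lambda_{t+1} \;=\; \lambda_t \;-\; \rho_t\,
  \widetilde{\nabla}_{\lambda}
  \Big\{\, \mathbb{E}_{q_{\lambda_t}}\!\big[\bar\ell(\theta)\big]
        \;-\; \mathcal{H}(q_{\lambda_t}) \,\Big\}
```

with ρ_t a learning rate and the gradient taken in its natural (Fisher-preconditioned) form; different candidate families and different approximations of the natural gradient are what produce the algorithms listed above.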

ABC with privacy

Posted in Books, Statistics on April 18, 2023 by xi'an


I very recently read a 2021 paper by Mijung Park, Margarita Vinaroz, and Wittawat Jitkrittum, published in Entropy, on running ABC while ensuring data privacy.

“…adding noise to the distance computed on the real observations and pseudo-data suffices the privacy guarantee of the resulting  posterior samples”

For the ABC distance, they use the maximum mean discrepancy (MMD), and for privacy the standard, if unconvincing, notion of differential privacy, defined by an upper bound on the variation of the probability ratio when an observation is replaced, removed, or added. (A guarantee that does not obviously convince users their data is secure.)
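As a reminder, and as a minimal sketch under my own choice of a Gaussian kernel and a biased V-statistic estimate (not necessarily the estimator the authors adopt), the squared MMD between observed data x and pseudo-data y can be computed as:

```python
import numpy as np

def mmd2(x, y, bandwidth=1.0):
    """Biased (V-statistic) estimate of the squared maximum mean
    discrepancy between samples x (n, d) and y (m, d), using a
    Gaussian kernel with the given bandwidth."""
    def k(a, b):
        # pairwise squared distances between rows of a and rows of b
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```

In an ABC accept step, this quantity plays the rôle of the distance ρ between observed and pseudo-data.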

While I have no reservation about the validation of the double-noise approach, I find it surprising that noise must be added (twice) when vanilla ABC is already (i) noisy, since it is based on random pseudo-data, and (ii) producing only a sample from an approximate posterior rather than returning an exact posterior. My impression indeed was that ABC should be good enough by itself to achieve privacy protection, in the sense that the accepted parameter values are those that generated pseudo-samples sufficiently close to the actual data, hence not only compatible with the true data but also able to produce artificial datasets close enough to it. Presumably these artificial datasets should not be released, as the intersection of their ε neighbourhoods may prove enough to identify the actual data. (The proposed algorithm does return all generated datasets.) Instead, privacy is achieved by randomising both the tolerance ε and the distance ρ to the observed data (with the side issue that either may become negative, since the noise is Laplace).
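To make the randomisation concrete, here is a purely illustrative version of the accept step, with Laplace noise added to both the distance and the tolerance; calibrating the noise scales to the differential-privacy budget is the object of the paper and is not attempted in this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_abc_accept(dist, eps, scale_d, scale_e):
    """Accept step with Laplace noise on both the distance and the
    tolerance (illustrative only; the Laplace scales scale_d and
    scale_e are assumed calibrated to the privacy budget).
    Note that either perturbed quantity may turn negative."""
    noisy_dist = dist + rng.laplace(scale=scale_d)
    noisy_eps = eps + rng.laplace(scale=scale_e)
    return noisy_dist <= noisy_eps
```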

[A]ABC in Hawai’i

Posted in Statistics on April 6, 2023 by xi'an

off to BayesComp!

Posted in Mountains, pictures, Running, Travel on March 11, 2023 by xi'an

call for posters at BayesComp²³ satellite [AG:DC]

Posted in Mountains, pictures, Statistics, Travel, University life on November 22, 2022 by xi'an

An urgent reminder that the early bird deadline for BayesComp²³ and the different satellites is 30 November (registration fees increase by $50 afterwards), along with a call for poster presentations at our AG:DC (aka, Bayesian computing without exact likelihood) satellite workshop. Poster spots will be attributed to presenters on a first come, first served basis, so do not delay in sending me an abstract at my gmail account bayesianstatistics
