Archive for Bayesian inference

JAGS Workshop [10-14 July 2023]

Posted in Books, pictures, R, Statistics, Travel, University life on March 21, 2023 by xi'an

Hey, JAGS users and would-be users, be warned that registration is now open for the annual JAGS workshop on probabilistic modelling for cognitive science. The tenth instalment of this workshop takes place July 10–14, 2023 in Amsterdam and online. This workshop is meant for researchers who want to learn how to apply Bayesian inference in practice. Most applications we discuss are taken from the field of cognitive science. The workshop is based on the book Bayesian Cognitive Modeling: A practical course written by Michael Lee and Eric-Jan Wagenmakers. It is followed by a shorter workshop (15-16 July) on Theory and Practice of Bayesian Hypothesis Testing.

mini-Bayes in Nature [and Paris-Saclay]

Posted in Books, Running, Statistics, University life on February 7, 2023 by xi'an

MaxEnt im Garching

Posted in pictures, Statistics, Travel, University life on December 28, 2022 by xi'an


The next edition of the MaxEnt conferences, or more precisely workshops on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, MaxEnt2023, will take place in Garching (bei München) next July 3-7, at the Max-Planck-Institut für Plasmaphysik. While the conference is usually of strong interest, it is rather improbable I will attend it this year. (The only time I took part in a MaxEnt conference was in 2009, in Oxford. Oxford, Mississippi!)

inferring the number of components [remotely]

Posted in Statistics on October 14, 2022 by xi'an

learning optimal summary statistics

Posted in Books, pictures, Statistics on July 27, 2022 by xi'an

“Despite the pursuit of the holy grail of sufficient statistics, most applications will have to settle for the weakest concept of optimal statistics.”

Quiz #1: How does Bayes sufficiency [which preserves the posterior density] differ from sufficiency [which preserves the likelihood function]?

Quiz #2: How does Fisher-information sufficiency [which preserves the information matrix] differ from standard sufficiency [which preserves the likelihood function]?
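For readers who want a handle on the quizzes, the three notions can be written out as follows (standard textbook formulations, not taken from the paper):

$$p(x\mid\theta)=g\bigl(t(x),\theta\bigr)\,h(x)\qquad\text{[classical sufficiency, via the factorisation theorem]}$$

$$\pi(\theta\mid x)=\pi\bigl(\theta\mid t(x)\bigr)\ \text{for every }x\qquad\text{[Bayes sufficiency]}$$

$$I_{t(X)}(\theta)=I_X(\theta)\ \text{for every }\theta\qquad\text{[Fisher-information sufficiency, with }I\text{ the Fisher information]}$$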

Read a recent arXival by Till Hoffmann and Jukka-Pekka Onnela that I frankly found most puzzling… Maybe due to the Norman train I was traveling on being particularly noisy.

The argument in the paper is to find a summary statistic that minimises the [empirical] expected posterior entropy, which equivalently means minimising the expected Kullback-Leibler divergence to the full posterior. And maximising the mutual information between the parameters θ and the summaries t(.). And maximising the expected surprise. Which obviously requires breaking the sample into iid components and hence considering the gain brought by a specific transform of a single observation. The paper also contains a long comparison with other criteria for choosing summaries.
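To make the criterion concrete, here is a toy sketch (entirely my own, not the authors' implementation) of what minimising an empirical expected posterior entropy over a handful of candidate summaries can look like; the model, the candidate summaries, and the crude conditional-Gaussian approximation of the posterior are all assumptions made for the sake of illustration.

# Toy sketch: rank candidate summaries t(.) by an estimate of the expected
# posterior entropy E[-log p(theta | t(x))], using a crude conditional
# Gaussian (linear regression) approximation of the posterior.
import numpy as np

rng = np.random.default_rng(0)

# toy model (an assumption): prior theta ~ N(0,1), data x_1..x_n ~ N(theta,1)
n_sims, n_obs = 5000, 20
theta = rng.normal(0.0, 1.0, size=n_sims)
x = rng.normal(theta[:, None], 1.0, size=(n_sims, n_obs))

candidates = {                      # candidate summaries set by the user
    "mean":   lambda x: x.mean(axis=1),
    "median": lambda x: np.median(x, axis=1),
    "max":    lambda x: x.max(axis=1),
}

def expected_posterior_entropy(theta, t):
    # fit theta | t as N(a + b t, s^2) by least squares, then report the
    # average negative log-density, i.e. the Gaussian entropy 0.5 log(2 pi e s^2)
    A = np.column_stack([np.ones_like(t), t])
    coef, *_ = np.linalg.lstsq(A, theta, rcond=None)
    s2 = (theta - A @ coef).var()
    return 0.5 * np.log(2 * np.pi * np.e * s2)

for name, summary in candidates.items():
    print(name, expected_posterior_entropy(theta, summary(x)))

In this Gaussian location toy example the sample mean is sufficient and should come out with the lowest value; since I(θ, t) = H(θ) − H(θ|t) with H(θ) fixed, ranking by expected posterior entropy is the same as ranking by mutual information, which is the equivalence invoked above.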

“Minimizing the posterior entropy would discard the sufficient statistic t such that the posterior is equal to the prior–we have not learned anything from the data.”

Furthermore, the expected aspect of the criterion takes us away from a proper Bayes analysis (and exhibits artifacts such as the one above), which somehow makes me question the relevance of comparing entropies under different distributions. It took me a long while to realise that the collection of summaries was set by the user and quite limited. Like a neural network representation of the posterior mean. And the intractable posterior is further approximated by a closed-form function of the parameter θ and of the summary t(.). Using there a neural density estimator. Or a mixture density network.
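For readers unfamiliar with the latter, here is a minimal sketch of a mixture density network taking a summary t as input and returning a Gaussian-mixture approximation of the posterior; it is a generic toy version written for this post (PyTorch assumed available), not the architecture used by the authors.

# Minimal mixture density network q(theta | t): the summary t is mapped to the
# weights, means and scales of a K-component Gaussian mixture, trained by
# maximising the log-likelihood of simulated (theta, t) pairs.
import torch
import torch.nn as nn

class MDN(nn.Module):
    def __init__(self, t_dim=1, n_components=5, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(t_dim, hidden), nn.Tanh())
        self.logits = nn.Linear(hidden, n_components)      # mixture weights
        self.means = nn.Linear(hidden, n_components)       # component means
        self.log_scales = nn.Linear(hidden, n_components)  # component log-sds

    def log_prob(self, t, theta):
        h = self.body(t)
        log_w = torch.log_softmax(self.logits(h), dim=-1)
        comp = torch.distributions.Normal(self.means(h), self.log_scales(h).exp())
        # log q(theta | t) = logsumexp_k [ log w_k + log N(theta; mu_k, sd_k) ]
        return torch.logsumexp(log_w + comp.log_prob(theta.unsqueeze(-1)), dim=-1)

# placeholder simulated pairs (theta_i, t_i); in practice they would come from
# the generative model and the chosen summary
theta = torch.randn(5000)
t = theta.unsqueeze(-1) + 0.2 * torch.randn(5000, 1)

mdn = MDN()
opt = torch.optim.Adam(mdn.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = -mdn.log_prob(t, theta).mean()  # average negative log q(theta | t)
    loss.backward()
    opt.step()

The averaged negative log q(θ|t) then serves as the empirical counterpart of the expected posterior entropy that the summary is meant to minimise, up to the approximation error of the network.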
