Archive for University of Amsterdam

Bayesian thinking for toddler & Bayesian probabilities for babies [book reviews]

Posted in Statistics on January 27, 2023 by xi'an

My friend E.-J. Wagenmakers sent me a copy of Bayesian Thinking for Toddlers, “a must-have for any toddler with even a passing interest in Ockham’s razor and the prequential principle.” E.-J. wrote the story and Viktor Beekman (of thesis’ cover fame!) drew the illustrations. The book can be read for free at https://psyarxiv.com/w5vbp/, but could not be purchased, as publishers were not interested and self-publishing could not reach a high enough quality level. Hence, in the end, 200 copies were printed as JASP material, with me being the happy owner of one of them. The story follows two young girls competing for dinosaur expertise and being rewarded with cookies, in proportion to the probability of providing the correct answer to two dinosaur questions. Toddlers may get less enthusiastic than grown-ups about the message, but they will love the drawings (and the questions, if they are into dinosaurs).

This reminded me of the book Bayesian Probabilities for Babies, by Chris Ferrie, which details the computation of the probability that a cookie contains candy when the first bite holds none. It is more genuinely intended for young kids, in shape and design, as can be checked in a YouTube video, with a hypothetical population of cookies (with and without candy) serving as a proxy for the prior distribution. I hope no baby will be traumatised by being exposed too early to the notions of prior and posterior. Only data can tell, twenty years from now, whether the book induced a spike or a collapse in the proportion of Bayesian statisticians!
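The cookie computation is a one-line application of Bayes' theorem. A minimal sketch, with made-up numbers (the book's actual figures are not reproduced here):

```python
# Hypothetical cookie example: P(candy | first bite empty) via Bayes' theorem.
# All numbers below are invented for illustration, not taken from the book.
prior_candy = 0.5            # prior probability a cookie contains candy
p_empty_given_candy = 0.6    # chance the first bite misses the candy anyway
p_empty_given_plain = 1.0    # a plain cookie always yields an empty first bite

# marginal probability of an empty first bite
p_empty = (prior_candy * p_empty_given_candy
           + (1 - prior_candy) * p_empty_given_plain)

# Bayes' theorem: posterior = prior × likelihood / marginal
posterior_candy = prior_candy * p_empty_given_candy / p_empty
print(round(posterior_candy, 3))  # 0.375
```

An empty first bite thus lowers, but does not kill, the hope of candy, which is presumably the intended lesson for the baby.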

[Disclaimer about potential self-plagiarism: this post or an edited version may eventually appear in my Books Review section in CHANCE.]

Calling Bullshit: The Art of Scepticism in a Data‑Driven World [EJ’s book review]

Posted in Books, Statistics on August 26, 2020 by xi'an

“…this book will train readers to be statistically savvy at a time when immunity to misinformation is essential: not just for the survival of liberal democracy, as the authors assert, but for survival itself. Perhaps a crash course on bullshit detection should be a mandatory part of the school curriculum.”

In the latest issue of Nature, EJ Wagenmakers has written a review of Calling Bullshit, by Carl Bergstrom and Jevin West. The book grew out of a course the authors taught at the University of Washington during Spring Quarter 2017, aimed at teaching students how to debunk bullshit, that is, the misleading exploitation of statistics and machine learning. Which I have not read. In his overall positive review, EJ regrets the poor data-visualisation scholarship of the authors, who could have demonstrated and supported the opportunity for a visual debunking of the original data, as well as the lack of alternative solutions, like Bayesian analysis, to counteract p-fishing. Of course, the need for debunking and exposing statistically-sounding misinformation has never been more pressing.

aftermaths of retiring significance

Posted in Books, pictures, Statistics, University life on April 10, 2019 by xi'an


Beyond mentions in the general press of the retire significance paper, as in Retraction Watch, Bloomberg, The Guardian, Vox, and NPR, not to mention the large number of comments on Andrew’s blog, and Deborah Mayo’s tribune on a ban on free speech (!), Nature of “the week after” contained three letters from Ioannidis, calling for more stringent thresholds, Johnson, essentially if unclearly stating the same, and my friends from Amsterdam, Alexander Ly and E.J. Wagenmakers, along with Julia Haaf, getting back to the Great Old Ones, to defend the usefulness of testing versus estimation.

Dutch summer workshops on Bayesian modeling

Posted in Books, pictures, Statistics, Travel, University life on March 21, 2019 by xi'an

Just received an email about two Bayesian workshops in Amsterdam this summer:

both taking place at the University of Amsterdam. And focussed on Bayesian software.

are there frequentist and Bayesian likelihoods?

Posted in Statistics on June 7, 2018 by xi'an

A question that came up on X validated led me to spot rather poor entries in Wikipedia about both the likelihood function and Bayes’ Theorem, where unnecessary and confusing distinctions are made between the frequentist and Bayesian versions of these notions. I have already discussed the latter (Bayes’ theorem) a fair amount here. The discussion about the likelihood is quite bemusing, in that the likelihood function is the … function of the parameter equal to the density indexed by this parameter at the observed value.

“What we can find from a sample is the likelihood of any particular value of r, if we define the likelihood as a quantity proportional to the probability that, from a population having the particular value of r, a sample having the observed value of r, should be obtained.” R.A. Fisher, On the “probable error” of a coefficient of correlation deduced from a small sample. Metron 1, 1921, p.24

By mentioning an informal side to likelihood (rather than to the likelihood function), and then stating that the likelihood is not a probability in the frequentist version but is a probability in the Bayesian version, the Wikipedia page makes a complete and unnecessary mess. Whoever is ready to rewrite this introduction is more than welcome! (Which reminded me of an earlier question, also on X validated, asking why a common reference measure was needed to define a likelihood function.)
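For concreteness, a minimal sketch of the single notion at stake, using a binomial model as an assumed example: the likelihood is the density of the observed data, read as a function of the parameter, and it is the same object whether one then maximises it (frequentist) or multiplies it by a prior (Bayesian).

```python
from math import comb

def likelihood(theta, x, n):
    """Binomial density of the observed x, viewed as a function of theta."""
    return comb(n, x) * theta**x * (1 - theta)**(n - x)

# One observed sample: x = 7 successes out of n = 10 trials (invented numbers).
x, n = 7, 10

# Evaluate the same likelihood function over a grid of parameter values.
curve = {round(0.1 * k, 1): likelihood(round(0.1 * k, 1), x, n)
         for k in range(1, 10)}

# The curve peaks at theta = x/n = 0.7, the maximum likelihood estimate.
mle = max(curve, key=curve.get)
```

There is no frequentist curve and a separate Bayesian curve here; the two paradigms only diverge in what they subsequently do with this one function.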

This also led me to read a recent paper by Alexander Etz, whom I met at E.J. Wagenmakers‘ lab in Amsterdam a few years ago. Following Fisher, as Jeffreys complained about

“…likelihood, a convenient term introduced by Professor R.A. Fisher, though in his usage it is sometimes multiplied by a constant factor. This is the probability of the observations given the original information and the hypothesis under discussion.” H. Jeffreys, Theory of Probability, 1939, p.28

Alexander defines the likelihood up to a constant, which creates extra confusion, for free!, as there is no foundational reason to introduce this degree of freedom rather than imposing an exact equality with the density of the data (albeit with an arbitrary choice of dominating measure, never neglect the dominating measure!). The paper also repeats the message that the likelihood is not a probability (the qualifier density being missing from the paper). And provides intuitions about maximum likelihood, likelihood ratio and Wald tests. But does not venture into a separate definition of the likelihood, being satisfied with the fundamental notion to be plugged into the magical formula

posterior ∝ prior × likelihood
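A grid sketch of that formula, under an assumed binomial observation and a flat prior, which also shows why a constant factor in the likelihood is harmless: it cancels in the normalisation.

```python
# Posterior ∝ prior × likelihood on a grid of theta values, for an assumed
# binomial observation of x = 7 successes in n = 10 trials (invented numbers).
x, n = 7, 10
thetas = [k / 100 for k in range(1, 100)]
prior = [1 / len(thetas)] * len(thetas)          # flat prior on the grid

# Binomial kernel with the constant binomial coefficient deliberately dropped:
# any constant multiplying the likelihood cancels in the normalisation below.
lik = [t**x * (1 - t)**(n - x) for t in thetas]

unnorm = [p * l for p, l in zip(prior, lik)]     # prior × likelihood
total = sum(unnorm)
posterior = [u / total for u in unnorm]          # normalised: sums to one
```

The normalised posterior is identical whichever constant one multiplies the likelihood by, which is the only sense in which the up-to-a-constant definition is innocuous.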
