Archive for Bayes formula

2021 Whova Meeting of the International Society for Bayesian Analysis

Posted in pictures, Statistics, Travel, University life on May 5, 2021 by xi'an

The website for the upcoming ISBA 2021 meeting is now operational and open to all! The program is ready as well, with short courses starting on 23 June and the main event on 28 June, with very long days running from 5:15am till 9:30pm in US Eastern Daylight Time (EDT, i.e. UTC-04:00, which is 6 hours behind CEST, 9½ behind IST, 13 behind JST, and 14 behind AEST). The number of registered participants is currently above 1700!, which shows the positive side of having a free on-line event, since everyone interested (with an Internet connection!) can participate. On the negative side, namely the limited human interactions and the challenge of staying focussed 24/5, one solution is to create local clusters where a group could attend the sessions together, provided local health policies allow. I am still working on organising such a gathering at CIRM, Marseille, if the centre reopens on 27 June. And am happy to broadcast any initiative to this effect.

are there frequentist and Bayesian likelihoods?

Posted in Statistics on June 7, 2018 by xi'an

A question that came up on X validated led me to spot rather poor entries in Wikipedia about both the likelihood function and Bayes’ theorem, where unnecessary and confusing distinctions are made between the frequentist and Bayesian versions of these notions. I have already discussed the latter (Bayes’ theorem) a fair amount here. The discussion about the likelihood is quite bemusing, in that the likelihood function is the … function of the parameter equal to the density indexed by this parameter at the observed value.

“What we can find from a sample is the likelihood of any particular value of r, if we define the likelihood as a quantity proportional to the probability that, from a population having the particular value of r, a sample having the observed value of r, should be obtained.” R.A. Fisher, On the “probable error” of a coefficient of correlation deduced from a small sample. Metron 1, 1921, p.24

By mentioning an informal side to likelihood (rather than to the likelihood function), and then stating that the likelihood is not a probability in the frequentist version but a probability in the Bayesian version, the Wikipedia page makes a complete and unnecessary mess. Whoever is ready to rewrite this introduction is more than welcome! (Which reminded me of an earlier question, also on X validated, asking why a common reference measure was needed to define a likelihood function.)
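To make the point concrete, here is a minimal sketch (my own illustration, not from the Wikipedia page) of the single, unqualified definition above: the likelihood of p for a Binomial(n, p) observation is just the density at the observed value, read as a function of p. It is in particular not a probability density in p, as integrating it over p does not return 1.

```python
from math import comb

def binomial_likelihood(p, n=10, k=7):
    """Density of Binomial(n, p) at the observed value k,
    viewed as a function of the parameter p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# The same function serves frequentist and Bayesian purposes alike:
# there is no separate "frequentist" or "Bayesian" likelihood.
# It is NOT a probability density in p: integrating over p in [0,1]
# gives 1/(n+1), not 1 (here n = 10, so 1/11).
N = 10_000
integral = sum(binomial_likelihood(i / N) for i in range(N)) / N
print(round(integral, 3))  # ≈ 1/11 ≈ 0.091
```

The density interpretation also depends on the (here implicit counting) dominating measure, which is why a common reference measure is needed before likelihoods can even be compared.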

This also led me to read a recent paper by Alexander Etz, whom I met at E.J. Wagenmakers’ lab in Amsterdam a few years ago. Etz follows Fisher’s usage, about which Jeffreys complained:

“…likelihood, a convenient term introduced by Professor R.A. Fisher, though in his usage it is sometimes multiplied by a constant factor. This is the probability of the observations given the original information and the hypothesis under discussion.” H. Jeffreys, Theory of Probability, 1939, p.28

Alexander defines the likelihood up to a constant, which adds extra confusion, for free!, as there is no foundational reason to introduce this degree of freedom rather than imposing exact equality with the density of the data (albeit with an arbitrary choice of dominating measure, never neglect the dominating measure!). The paper also repeats the message that the likelihood is not a probability (the qualifier density being missing in the paper). And provides intuitions about maximum likelihood, likelihood ratio, and Wald tests. But it does not venture into a separate definition of the likelihood, being satisfied with the fundamental notion to be plugged into the magical formula of Bayes’ theorem, posterior ∝ prior × likelihood.
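In fairness to the up-to-a-constant convention, everything one actually computes is invariant to it, which a short sketch (my own numbers, using the same binomial example) makes plain: multiplying the likelihood by an arbitrary c > 0 changes neither the MLE, nor likelihood ratios, nor the posterior, since c cancels in the normalisation.

```python
from math import comb

n, k = 10, 7

def lik(p, c=1.0):
    # binomial likelihood of p, times an arbitrary constant c > 0
    return c * comb(n, k) * p**k * (1 - p)**(n - k)

grid = [i / 1000 for i in range(1, 1000)]

# the MLE is unchanged by c (both maximisers equal k/n = 0.7)
mle_1 = max(grid, key=lambda p: lik(p, c=1.0))
mle_c = max(grid, key=lambda p: lik(p, c=37.5))

# likelihood ratios are unchanged by c
r1 = lik(0.7) / lik(0.5)
rc = lik(0.7, c=37.5) / lik(0.5, c=37.5)

# the posterior under a flat prior is unchanged: c cancels
# when dividing by the normalising constant
Z1 = sum(lik(p) for p in grid)
Zc = sum(lik(p, 37.5) for p in grid)
gap = max(abs(lik(p) / Z1 - lik(p, 37.5) / Zc) for p in grid)

print(mle_1, mle_c, abs(r1 - rc) < 1e-12, gap < 1e-12)
```

Which is precisely why the constant is operationally harmless, if conceptually superfluous.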


Bayes’ Rule [book review]

Posted in Books, Statistics, University life on July 10, 2014 by xi'an

This introduction to Bayesian analysis, Bayes’ Rule, was written by James Stone from the University of Sheffield, who contacted CHANCE suggesting a review of his book. I thus bought it from amazon to check the contents. And to write a review.

First, the format of the book: it is a short volume of 127 pages, plus 40 pages of glossary, appendices, references, and index. I eventually found the name of the publisher, Sebtel Press, but for a while thought the book was self-produced. While the LaTeX output is fine and the (Matlab) graphs are readable, the pictures are not of the best quality and the display editing is minimal, in that there are several huge white spaces between pages. Nothing major there, obviously; it simply makes the book look like course notes, but this is in no way detrimental to its potential appeal. (I will not comment on the numerous appearances of Bayes’ alleged portrait in the book.)

“… (on average) the adjusted value θ_MAP is more accurate than θ_MLE.” (p.82)

Bayes’ Rule has the interesting feature that, in the very first chapter, after spending a rather long time on Bayes’ formula, it introduces Bayes factors (p.15). With the somewhat confusing choice of calling the prior probabilities of hypotheses marginal probabilities: even though they are indeed marginal given the joint, marginal is usually reserved for the sample, as in marginal likelihood. Before returning to more (binary) applications of Bayes’ formula for the rest of the chapter. The second chapter is about probability theory, which means here introducing the three axioms of probability and discussing geometric interpretations of those axioms and of Bayes’ rule. Chapter 3 moves to the case of discrete random variables with more than two values, i.e. contingency tables, on which the range of probability distributions is (re-)defined, producing a new entry to Bayes’ rule. And to the MAP. Given this pattern, it is not surprising that Chapter 4 does the same for continuous parameters, here the parameter of a coin flip. This allows for a discussion of uniform and reference priors, including maximum entropy priors à la Jaynes, and of bootstrap samples presented as approximating the posterior distribution under the “fairest prior”. And even two pages on standard loss functions. This chapter is followed by a short chapter dedicated to estimating a normal mean, then another short one exploring the notion of a continuous joint (Gaussian) density.
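The p.82 claim about the “adjusted” MAP beating the MLE can be sketched in the book’s coin-flip setting. A minimal illustration, assuming a Beta(a, b) prior of my own choosing (not necessarily the book’s): the Beta(k+a, n-k+b) posterior has mode (k+a-1)/(n+a+b-2) when a, b > 1, so the prior pulls the estimate away from extreme sample frequencies.

```python
def mle(k, n):
    """Maximum likelihood estimate of the coin bias from k heads in n flips."""
    return k / n

def map_estimate(k, n, a=2.0, b=2.0):
    """Posterior mode under a Beta(a, b) prior: the Beta(k+a, n-k+b)
    posterior has mode (k + a - 1) / (n + a + b - 2) for a, b > 1."""
    return (k + a - 1) / (n + a + b - 2)

# Three heads out of three flips: the MLE jumps to 1,
# while the prior "adjusts" the MAP back towards 1/2.
print(mle(3, 3))           # 1.0
print(map_estimate(3, 3))  # 0.8
# With a uniform Beta(1, 1) prior the two estimates coincide:
print(map_estimate(3, 3, a=1.0, b=1.0))  # 1.0
```

The last line also shows why the MAP under a flat prior adds nothing over the MLE, which is where the choice of prior, rather than the formula, does the work.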

“To some people the word Bayesian is like a red rag to a bull.” (p.119)

Bayes’ Rule concludes with a chapter entitled Bayesian wars. A rather surprising choice, given the intended audience, which is rather bound to be confused by it… The first part is about probabilistic ways of representing information, leading to subjective probability. The discussion goes on for a few pages to justify the use of priors, but I find completely unfair the argument that, because Bayes’ rule is a mathematical theorem, it “has been proven to be true”. It is indeed a maths theorem; however, that does not imply that any inference based on this theorem is correct! (A surprising parallel is Kadane’s Principles of Uncertainty with its anti-objective final chapter.)

All in all, I remain puzzled after reading Bayes’ Rule. Puzzled by the intended audience, as, contrary to other books I recently reviewed, the author does not shy away from mathematical notations and concepts, even though he proceeds quite gently through the basics of probability. Potential readers therefore need a modicum of mathematical background that some students may lack (although it actually corresponds to what my kids would have learned in high school). It could thus constitute a soft entry to Bayesian concepts, before taking a formal course on Bayesian analysis. Hence doing no harm to the perception of the field.

Bayes on the radio (regrets)

Posted in Books, Kids, Running, Statistics on November 13, 2012 by xi'an

While running this morning, I was reconsidering (over and over) my discussion of Bayes’ formula on the radio and thought I should have presented Bayes’ theorem differently. I spent far too much time on the math side of Bayes’ formula and not enough on the stat side. The math aspect is not of real importance, as it is a mere reformulation of conditional probabilities. The stat side is what matters, as introducing a (prior) distribution on the parameter (space) is the #1 specific feature of Bayesian statistics… And the focus point of most criticisms, as expressed later by the physicist working on the Higgs boson, Dirk Zerwas.
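The “mere reformulation” point fits in a few lines. A toy sketch, with purely illustrative numbers of my own (not from the radio discussion): the math side is one rearrangement of conditional probability, while the stat side is the choice of the prior p_A.

```python
# Bayes' formula P(A|B) = P(B|A) P(A) / P(B) is conditional probability
# rearranged: both sides equal P(A and B) / P(B).
# Toy numbers (purely illustrative): a condition with 1% prevalence,
# a test with 95% sensitivity and a 10% false-positive rate.
p_A = 0.01             # prior P(A): the stat side, a modelling choice
p_B_given_A = 0.95     # P(B|A): test positive given the condition
p_B_given_notA = 0.10  # P(B|not A): false-positive rate

# law of total probability: P(B) = P(B|A)P(A) + P(B|not A)P(not A)
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)

# the math side: a single line, no statistics involved
p_A_given_B = p_B_given_A * p_A / p_B
print(round(p_A_given_B, 3))  # 0.088: still below 9% after a positive test
```

The counter-intuitive smallness of the answer is entirely driven by the prior, which is exactly the point I wish I had made on air.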

I also regret not mentioning that Bayes’ formula was taught in French high schools, as illustrated by the anecdote of Bayes at the bac. And not reacting to the question about Bayes in the courtroom with yet another anecdote, of Bayes’ formula being thrown out of the accepted tools by an English court of appeal about a year ago. Oh well, another argument for sticking to the written word.