## are there frequentist and Bayesian likelihoods?

**A** question that came up on X validated led me to spot rather poor entries in Wikipedia about both the likelihood function and Bayes' Theorem, where unnecessary and confusing distinctions are made between the frequentist and Bayesian versions of these notions. I have already discussed the latter (Bayes' theorem) a fair amount here. The discussion about the likelihood is quite bemusing, in that the likelihood function is simply the function of the parameter equal to the density indexed by this parameter at the observed value.
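To make the definition above concrete, here is a minimal sketch (with a made-up observed value and a Normal location model, both assumptions of mine, not from the post): the likelihood is the very same density formula, read as a function of the parameter with the data held fixed.

```python
import math

def normal_density(x, theta):
    """Density of a N(theta, 1) distribution evaluated at x."""
    return math.exp(-0.5 * (x - theta) ** 2) / math.sqrt(2 * math.pi)

x_obs = 1.3  # hypothetical observed value

def likelihood(theta):
    # Same expression as the density, but now a function of theta,
    # with the data fixed at its observed value x_obs.
    return normal_density(x_obs, theta)
```

In this model the likelihood is maximal at theta equal to the observed value, where it equals the density's value at its own mode.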

> "What we can find from a sample is the likelihood of any particular value of r, if we define the likelihood as a quantity proportional to the probability that, from a population having the particular value of r, a sample having the observed value of r, should be obtained." R.A. Fisher, On the "probable error" of a coefficient of correlation deduced from a small sample. Metron 1, 1921, p. 24

By mentioning an informal side to likelihood (rather than to the likelihood function), and then stating that the likelihood is not a probability in the frequentist version but a probability in the Bayesian version, the Wikipedia page makes a complete and unnecessary mess. Whoever is ready to rewrite this introduction is more than welcome! (Which reminded me of an earlier question, also on X validated, asking why a common reference measure was needed to define a likelihood function.)

This also led me to read a recent paper by Alexander Etz, whom I met at E.J. Wagenmakers' lab in Amsterdam a few years ago. The paper follows Fisher's usage, about which Jeffreys complained:

> "..likelihood, a convenient term introduced by Professor R.A. Fisher, though in his usage it is sometimes multiplied by a constant factor. This is the probability of the observations given the original information and the hypothesis under discussion." H. Jeffreys, Theory of Probability, 1939, p. 28

Alexander defines the likelihood up to a constant, which adds extra confusion, for free!, as there is no foundational reason to introduce this degree of freedom rather than imposing an exact equality with the density of the data (albeit with an arbitrary choice of dominating measure; never neglect the dominating measure!). The paper also repeats the message that the likelihood is not a probability (density, *a word missing in the paper*), and provides intuitions about maximum likelihood, likelihood-ratio and Wald tests. But it does not venture into a separate definition of the likelihood, being satisfied with the fundamental notion to be plugged into the magical formula

posterior ∝ prior × likelihood

June 7, 2018 at 4:49 pm

Minor comment. In a discrete situation, of course, it's not a probability density, just a discrete set of probabilities. So it's not necessarily "wrong" to leave out 'density'.

June 7, 2018 at 5:35 pm

Thank you, Bill. Why keep 'density'? Well, formally, it remains a density with respect to the counting measure. Cheers!
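The point in this reply can be sketched in a few lines (a Binomial(5, 0.4) example of my own choosing): a pmf is a density with respect to the counting measure, so "integrating" it against that measure just means summing over the support, and the total is one as for any probability density.

```python
from math import comb

n, p = 5, 0.4

# pmf of Binomial(n, p): a density wrt the counting measure on {0, ..., n}
pmf = {j: comb(n, j) * p**j * (1 - p)**(n - j) for j in range(n + 1)}

# the "integral" wrt the counting measure is a plain sum over the support
total = sum(pmf.values())
```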

June 19, 2018 at 2:39 pm

Good point, Christian.

The reason why I made that comment is that I have taught a course on decision theory many times for freshman/sophomore nonscience honors college students in which I assume no calculus and hence stuck to discrete probability densities (see, I can say that too!) as a pedagogical tool. Obviously, I couldn’t introduce measure theory in such a context!