## Archive for inverse probability

## baseless!

Posted in Books, Statistics with tags 1922, Bayes theorem, epistemic probability, frequency properties, George Boole, history of statistics, inverse probability, normalised maximum likelihood, Pierre Simon Laplace, R.A. Fisher, Siméon Poisson on July 13, 2021 by xi'an

## Fisher, Bayes, and predictive Bayesian inference [seminar]

Posted in Statistics with tags fiducial inference, Foundations of Probability, inverse probability, Jerzy Neyman, Karl Pearson, R.A. Fisher, Rutgers University, seminar, Thomas Bayes, webinar on April 4, 2021 by xi'an

**A**n interesting Foundations of Probability seminar at Rutgers University this Monday, at 4:30 ET (8:30 GMT), by Sandy Zabell (the password is Angelina's birthdate):

R. A. Fisher is usually perceived to have been a staunch critic of the Bayesian approach to statistics, yet his last book (Statistical Methods and Scientific Inference, 1956) is much closer in spirit to the Bayesian approach than the frequentist theories of Neyman and Pearson. This mismatch between perception and reality is best understood as an evolution in Fisher’s views over the course of his life. In my talk I will discuss Fisher’s initial and harsh criticism of “inverse probability”, his subsequent advocacy of fiducial inference starting in 1930, and his admiration for Bayes expressed in his 1956 book. Several of the examples Fisher discusses there are best understood when viewed against the backdrop of earlier controversies and antagonisms.

## why is the likelihood not a pdf?

Posted in Books, pictures, Statistics, University life with tags Bayesian statistics, dominating measure, history of statistics, integration, invariance, inverse probability, likelihood function, Ronald Fisher, uniform prior on January 4, 2021 by xi'an

**T**he return of an old debate on X validated: *can the likelihood be a pdf?!* Even though there exist cases where a [version of the] likelihood function shows such a symmetry between the sufficient statistic and the parameter (as, e.g., in the Normal mean model) that they are somewhat exchangeable w.r.t. the same measure, the question is essentially meaningless, for a number of reasons we can all link back to Ronald Fisher:

- when defining the likelihood function, Fisher (in his 1912 undergraduate memoir!) warns against integrating it w.r.t. the parameter: *"the integration with respect to m is illegitimate and has no definite meaning with respect to inverse probability"*. The likelihood *"is a relative probability only, suitable to compare point with point, but incapable of being interpreted as a probability distribution over a region, or of giving any estimate of absolute probability."* And again in 1922: *"[the likelihood] is not a differential element, and is incapable of being integrated: it is assigned to a particular point of the range of variation, not to a particular element of it"*.
- He introduced the term "likelihood" especially to avoid the confusion: *"I perceive that the word probability is wrongly used in such a connection: probability is a ratio of frequencies, and about the frequencies of such values we can know nothing whatever (…) I suggest that we may speak without confusion of the likelihood of one value of p being thrice the likelihood of another (…) likelihood is not here used loosely as a synonym of probability, but simply to express the relative frequencies with which such values of the hypothetical quantity p would in fact yield the observed sample"*.
- Another point he makes repeatedly (both in 1912 and 1922) is the lack of invariance of the probability measure obtained by attaching a dθ to the likelihood function L(θ) and normalising it into a density: while the likelihood *"is entirely unchanged by any [one-to-one] transformation"*, this definition of a probability distribution is not. Fisher actually distanced himself from a Bayesian "uniform prior" throughout the 1920s.

which sums up as the urge never to neglect the dominating measure!
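Fisher's invariance point can be checked numerically: likelihood *ratios* survive a one-to-one reparameterisation, but the "probabilities" obtained by attaching dθ to L(θ) and normalising it into a density do not. Here is a minimal pure-Python sketch, where the Binomial data (x = 7 successes out of n = 10 trials) and the one-to-one map ψ = p² are illustrative choices of mine, not taken from the post:

```python
import math

# Illustrative Binomial data (my choice, not from the post): x successes in n trials.
n, x = 10, 7

def lik_p(p):
    """Binomial likelihood in the success-probability parameterisation."""
    return p**x * (1 - p)**(n - x)

def lik_psi(psi):
    """The same likelihood re-expressed in psi = p**2, a one-to-one map on [0,1]."""
    return lik_p(math.sqrt(psi))

def normalised_prob(lik, a, b, m=20_000):
    """Attach d(theta) to lik on [0,1], normalise it into a "density",
    and return the probability of [a, b] (midpoint rule)."""
    h = 1.0 / m
    total = sum(lik((i + 0.5) * h) for i in range(m)) * h
    part = sum(lik((i + 0.5) * h) for i in range(int(a / h), int(b / h))) * h
    return part / total

# Likelihood ratios are invariant under the reparameterisation...
r_p = lik_p(0.7) / lik_p(0.5)
r_psi = lik_psi(0.7**2) / lik_psi(0.5**2)   # same ratio, as Fisher notes

# ...but the normalised "probability" of the same event {0.6 <= p <= 0.8} is not:
prob_p = normalised_prob(lik_p, 0.6, 0.8)            # ≈ 0.54 (a Beta(8,4) area)
prob_psi = normalised_prob(lik_psi, 0.6**2, 0.8**2)  # ≈ 0.57, a different answer
```

The two "probabilities" disagree because the Jacobian dψ = 2p dp is silently absorbed into the flat measure, which is exactly the dominating-measure issue above.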