Archive for Bayesian foundations

a case for Bayesian deep learning

Posted in Books, pictures, Statistics, Travel, University life on September 30, 2020 by xi'an

Andrew Wilson wrote a piece about Bayesian deep learning last winter. Which I just read. It starts with the (posterior) predictive distribution being the core of Bayesian model evaluation or of model (epistemic) uncertainty.

“On the other hand, a flat prior may have a major effect on marginalization.”

Interesting sentence, as, from my viewpoint, using a flat prior is a no-no when running model evaluation, since the marginal likelihood (or evidence) is then no longer a probability density. (Check the Lindley-Jeffreys paradox in this tribune.) The author then argues in favour of a Bayesian approach to deep neural networks on the ground that the data cannot be informative about every parameter in the network, which should then be integrated out wrt a prior. He also draws a parallel between deep ensemble learning, where random initialisations produce different fits, and posterior distributions, although the equivalent of the prior distribution in an optimisation exercise remains somewhat vague.
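To make the objection concrete, a flat prior is only defined up to an arbitrary constant, and that constant does not cancel from the evidence:

$$\pi_c(\theta)=c \qquad\Longrightarrow\qquad m_c(x)=\int f(x\mid\theta)\,c\,\text{d}\theta=c\int f(x\mid\theta)\,\text{d}\theta,$$

hence a Bayes factor comparing two models under such priors carries the arbitrary ratio c₁/c₂ and can be pushed to any value, one face of the Lindley-Jeffreys paradox.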

“…we do not need samples from a posterior, or even a faithful approximation to the posterior. We need to evaluate the posterior in places that will make the greatest contributions to the [posterior predictive].”

The paper also contains an interesting point distinguishing between priors over parameters and priors over functions, only the latter mattering for prediction. Which must be structured enough to compensate for the lack of data information about most aspects of the functions. The paper further discusses uninformative priors (over the parameters) in the O'Bayes sense as a default way to select priors. It is however unclear to me how this discussion accounts for the problems met in high dimensions by standard uninformative solutions. More aggressively penalising priors may be needed, such as those found in high-dimensional variable selection, as in e.g. the 10⁷-dimensional space mentioned in the paper. Interesting read all in all!
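On the marginalisation theme, here is a minimal sketch (in Python, with made-up names like predict and weight_draws, nothing taken from the paper itself) of the point that the posterior predictive is a mixture over weight draws, whether these come from a posterior approximation or from an ensemble of independently retrained networks:

```python
import numpy as np

def posterior_predictive(x, weight_draws, predict):
    """Monte Carlo approximation of p(y | x, data): average the
    per-draw predictive distributions over the available weight
    draws (posterior samples or ensemble members alike)."""
    probs = np.stack([predict(x, w) for w in weight_draws])
    return probs.mean(axis=0)  # a mixture, not a single "best" network

# toy usage: three "networks" differing only through a scalar weight,
# standing in for ensemble members / posterior draws
def predict(x, w):
    logits = np.array([w * x, -w * x])   # two-class toy model
    e = np.exp(logits - logits.max())    # stable softmax
    return e / e.sum()

draws = [0.1, -0.3, 0.5]                 # hypothetical weight draws
print(posterior_predictive(1.0, draws, predict))
```

The averaging is where the epistemic uncertainty enters: individual fits only matter through the mixture, which is why deep ensembles can be read as a crude approximation to Bayesian marginalisation.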

Mea Culpa

Posted in Statistics on April 10, 2020 by xi'an

[A quote from Jaynes about improper priors that I had missed in his book, Probability Theory.]

For many years, the present writer was caught in this error just as badly as anybody else, because Bayesian calculations with improper priors continued to give just the reasonable and clearly correct results that common sense demanded. So warnings about improper priors went unheeded; just that psychological phenomenon. Finally, it was the marginalization paradox that forced recognition that we had only been lucky in our choice of problems. If we wish to consider an improper prior, the only correct way of doing it is to approach it as a well-defined limit of a sequence of proper priors. If the correct limiting procedure should yield an improper posterior pdf for some parameter α, then probability theory is telling us that the prior information and data are too meager to permit any inferences about α. Then the only remedy is to seek more data or more prior information; probability theory does not guarantee in advance that it will lead us to a useful answer to every conceivable question.

Generally, the posterior pdf is better behaved than the prior because of the extra information in the likelihood function, and the correct limiting procedure yields a useful posterior pdf that is analytically simpler than any from a proper prior. The most universally useful results of Bayesian analysis obtained in the past are of this type, because they tended to be rather simple problems, in which the data were indeed so much more informative than the prior information that an improper prior gave a reasonable approximation – good enough for all practical purposes – to the strictly correct results (the two results agreed typically to six or more significant figures).

In the future, however, we cannot expect this to continue because the field is turning to more complex problems in which the prior information is essential and the solution is found by computer. In these cases it would be quite wrong to think of passing to an improper prior. That would lead usually to computer crashes; and, even if a crash is avoided, the conclusions would still be, almost always, quantitatively wrong. But, since likelihood functions are bounded, the analytical solution with proper priors is always guaranteed to converge properly to finite results; therefore it is always possible to write a computer program in such a way (avoid underflow, etc.) that it cannot crash when given proper priors. So, even if the criticisms of improper priors on grounds of marginalization were unjustified, it remains true that in the future we shall be concerned necessarily with proper priors.
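Jaynes' "well-defined limit of a sequence of proper priors" can be checked on the textbook Normal example: for x₁,…,xₙ iid N(θ,σ²) with known σ² and a proper N(0,τ²) prior on θ, the posterior is

$$\theta\mid x \sim \mathcal{N}\!\left(\frac{n\tau^2}{n\tau^2+\sigma^2}\,\bar{x},\ \frac{\sigma^2\tau^2}{n\tau^2+\sigma^2}\right)\ \xrightarrow{\ \tau\to\infty\ }\ \mathcal{N}\!\left(\bar{x},\ \frac{\sigma^2}{n}\right),$$

i.e., the limit is the same proper posterior as the one formally produced by the flat improper prior, which is why the improper shortcut looks so innocuous in simple problems.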

BFF⁷ postponed

Posted in Books, pictures, Statistics, Travel, University life on March 31, 2020 by xi'an

dodging bullets, IEDs, and fingerprint detection at SimStat19

Posted in pictures, Statistics, University life on September 10, 2019 by xi'an

I attended a fairly interesting forensic science session at SimStat 2019 in Salzburg, as it concentrated on evidence and measures of evidence rather than on strict applications of Bayesian methodology to forensic problems. Even though American agencies like the FBI or various police departments were involved. It was a highly coherent session and I had a pleasant discussion with some of the speakers afterwards. For instance, my friend Alicia Carriquiry presented an approach to determining from images of bullets whether or not they were fired from the same gun, leading to an interesting case of a point null hypothesis where the point null makes complete sense. The work has been published in the Annals of Applied Statistics and is used in practice. The second talk, by Danica Ommen, was on fiducial forensics for IEDs, asking whether or not the copper wires used in the bombs are the same, another point null illustration. Which also raised an interesting question about the dependence of the alternative prior on the distribution of the material chosen, as it is supposed to cover all possible origins of the disputed item. But more interestingly, this talk launched into a discussion of making decisions based on finite samples and unknown parameters, not that specific to forensics, with a definitely surprising representation of the Bayes factor as an expected likelihood ratio, which first reminded me of Aitkin's (1991) infamous posterior likelihood (!) before it dawned on me that this was a form of bridge sampling identity where the likelihood ratio only involved parameters common to both models, making it an expression well-defined under both models. This identity could be generalised by considering a ratio of (partially) integrated likelihoods, the extreme case being when the ratio equals the Bayes factor itself. The following two talks by Larry Tang and Christopher Saunders also focused on likelihood ratios and their statistical estimation, debating the coherence of using a score function and presenting a functional ABC algorithm where the prior is a Dirichlet (functional) prior. Thus a definitely relevant session from a Bayesian perspective!
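Reconstructing the representation from my notes (my reading, not the speaker's slides): when both models share the parameter θ and the prior π, the Bayes factor writes as a posterior expectation of the likelihood ratio,

$$B_{12}=\frac{m_1(x)}{m_2(x)}=\int \frac{f_1(x\mid\theta)}{f_2(x\mid\theta)}\,\frac{f_2(x\mid\theta)\,\pi(\theta)}{m_2(x)}\,\text{d}\theta=\mathbb{E}_{\pi(\theta\mid x,\mathfrak{M}_2)}\!\left[\frac{f_1(x\mid\theta)}{f_2(x\mid\theta)}\right],$$

which is well-defined under both models since the ratio only involves the common θ; replacing f₁/f₂ with a ratio of partially integrated likelihoods covers the case when the models share only some of their parameters.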

 

ABC intro for Astrophysics

Posted in Books, Kids, Mountains, R, Running, Statistics, University life on October 15, 2018 by xi'an

Today I received in the mail a copy of the short book published by EDP Sciences after the courses we gave last year at the astrophysics summer school in Autrans. Which contains a quick introduction to ABC extracted from my notes (which I still hope to turn into a book!). As well as a longer coverage of Bayesian foundations and computations by David Stenning and David van Dyk.
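For readers who do not have the notes at hand, a minimal sketch of the vanilla ABC rejection sampler such an introduction starts from (toy Normal setting, with the tolerance and summary statistic picked arbitrarily, nothing taken from the book itself):

```python
import numpy as np

rng = np.random.default_rng(1)
x_obs = rng.normal(2.0, 1.0, size=50)   # pretend-observed data
s_obs = x_obs.mean()                    # summary statistic

def abc_rejection(n_keep, eps=0.1):
    """Vanilla ABC: draw theta from the prior, simulate data from
    the model, keep theta when the simulated summary falls within
    eps of the observed one."""
    kept = []
    while len(kept) < n_keep:
        theta = rng.normal(0.0, 10.0)            # prior draw
        x_sim = rng.normal(theta, 1.0, size=50)  # simulator
        if abs(x_sim.mean() - s_obs) < eps:
            kept.append(theta)
    return np.array(kept)

sample = abc_rejection(200)
print(sample.mean(), sample.std())  # crude ABC approximation to the posterior
```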