Archive for model uncertainty

a case for Bayesian deep learning

Posted in Books, pictures, Statistics, Travel, University life on September 30, 2020 by xi'an

Andrew Wilson wrote a piece about Bayesian deep learning last winter. Which I just read. It starts with the (posterior) predictive distribution being the core of Bayesian model evaluation or of model (epistemic) uncertainty.

“On the other hand, a flat prior may have a major effect on marginalization.”

Interesting sentence, as, from my viewpoint, using a flat prior is a no-no when running model evaluation since the marginal likelihood (or evidence) is no longer a probability density. (Check Lindley-Jeffreys' paradox in this tribune.) The author then goes for an argument in favour of a Bayesian approach to deep neural networks for the reason that data cannot be informative on every parameter in the network, which should then be integrated out wrt a prior. He also draws a parallel between deep ensemble learning, where random initialisations produce different fits, and posterior distributions, although the equivalent to the prior distribution in an optimisation exercise is somewhat vague.
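The parallel between deep ensembles and Bayesian marginalisation can be caricatured in a few lines: average the predictions of several randomly initialised fits rather than trusting a single one. A minimal numpy sketch, with ridge regression on random ReLU features standing in for the networks (entirely my own toy setup, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy sine observed on a short interval.
x = np.linspace(-1, 1, 40)[:, None]
y = np.sin(3 * x[:, 0]) + 0.1 * rng.standard_normal(40)

def fit_member(seed, n_features=50, ridge=1e-2):
    """One 'ensemble member': ridge regression on random ReLU features.
    The random (W, b) play the role of a random initialisation."""
    r = np.random.default_rng(seed)
    W = r.standard_normal((1, n_features))
    b = r.standard_normal(n_features)
    phi = np.maximum(x @ W + b, 0.0)          # fixed random features
    beta = np.linalg.solve(phi.T @ phi + ridge * np.eye(n_features),
                           phi.T @ y)
    return W, b, beta

def predict(member, x_new):
    W, b, beta = member
    return np.maximum(x_new @ W + b, 0.0) @ beta

# Different seeds produce different fits; averaging them is the crude
# stand-in for integrating the parameters out wrt a prior.
members = [fit_member(s) for s in range(20)]
x_test = np.array([[0.5]])
preds = np.array([predict(m, x_test)[0] for m in members])
mean, spread = preds.mean(), preds.std()
```

The spread across members is only a rough proxy for epistemic uncertainty, which is precisely where the question of what plays the role of the prior in an optimisation exercise arises.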

“…we do not need samples from a posterior, or even a faithful approximation to the posterior. We need to evaluate the posterior in places that will make the greatest contributions to the [posterior predictive].”

The paper also contains an interesting point distinguishing between priors over parameters and priors over functions, only the latter mattering for prediction. Which must be structured enough to compensate for the lack of data information about most aspects of the functions. The paper further discusses uninformative priors (over the parameters) in the O'Bayes sense as a default way to select priors. It is however unclear to me how this discussion accounts for the problems met in high dimensions by standard uninformative solutions. More aggressively penalising priors may be needed, as those found in high-dimensional variable selection, as in, e.g., the 10⁷-dimensional space mentioned in the paper. Interesting read all in all!

can we trust computer simulations?

Posted in Books, pictures, Statistics, University life on July 10, 2015 by xi'an


How can one validate the outcome of a simulation model? Or can we even imagine validation of this outcome? This was the starting question for the conference I attended in Hannover. Which obviously engaged me to the utmost. Relating to some past experiences like advising a student working on accelerated tests for fighter electronics. And failing to agree with him on validating a model to translate those accelerated tests into a realistic setting. Or reviewing this book on climate simulation three years ago while visiting Monash University. Since I discuss in detail below most talks of the day, here is an opportunity to opt away!

EQUIP launch

Posted in Statistics, University life on October 10, 2013 by xi'an

Today, as I was around (!), I attended the launch of the new Warwick research project EQUIP (which stands for Enabling quantification of uncertainty for inverse problems). This is an EPSRC funded project merging mathematics, numerical analysis, statistics and geophysics, with a primary target application [alas!] in the oil industry. It will start hiring four (4!) postdocs pretty soon. The talks were all interesting, but I particularly liked the idea that they were addressed primarily to students who were potentially interested in the positions. In addition, Mark Girolami gave a most appreciated insight on the modelling of uncertainty in PDE models, connecting with earlier notions set by Tony O'Hagan, modelling that I hope we can discuss further when both in Warwick!

Snapshots from CRiSM workshop

Posted in Statistics, University life on May 31, 2010 by xi'an

The workshop is a mix of theory, methodology, computational techniques and applications that makes it quite exciting. Although I had already given several versions of my talk in the past year, I still got interesting feedback, in particular a connection between our Savage-Dickey representation and bridge sampling. Among the interesting talks I attended, a few snapshots: both Jim Berger and Dongchu Sun presented new closed-form expressions for effective numbers of parameters in complex linear models; Antonio Lijoi covered some mathematics of species estimation models [with some possible connections with Geoff Nicholls' earlier talk on language classification]; Chris Holmes exposed the advances he had made since Edinburgh (at least for those of us who attended both meetings), highlighting an interesting link with LDA, both Chris and Matthew Stephens considering direction selection for discrimination and clustering; Peter Müller talked about the analysis of a genetic pathway model represented as a graph; Robert Kohn zoomed through an adaptive particle MCMC algorithm where the likelihood was estimated (with the interesting side comment that what looks like an unbiased estimator from one perspective is also an auxiliary joint likelihood from another perspective); and David Spiegelhalter gave a both hilarious and thought-provoking talk on the "deeper uncertainty" that surrounds statistical models.
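For readers unfamiliar with the Savage-Dickey representation mentioned above: for a point null nested within a larger model (with matching priors), the Bayes factor reduces to the ratio of posterior to prior density evaluated at the null value. A minimal sketch for a conjugate normal-mean model (my own toy example, not taken from the talk):

```python
import numpy as np
from math import sqrt, pi, exp

def normal_pdf(t, mu, var):
    """Density of N(mu, var) at t."""
    return exp(-(t - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)

rng = np.random.default_rng(1)
tau2 = 1.0                       # prior variance of theta under M1
n, theta_true = 30, 0.0
y = theta_true + rng.standard_normal(n)   # y_i ~ N(theta, 1)

# Conjugate posterior under M1: theta | y ~ N(mu_n, var_n)
var_n = 1.0 / (n + 1.0 / tau2)
mu_n = var_n * n * y.mean()

# Savage-Dickey: Bayes factor of H0: theta = 0 against M1 is the
# ratio of posterior to prior density at the point null.
B01 = normal_pdf(0.0, mu_n, var_n) / normal_pdf(0.0, 0.0, tau2)
```

In practice the posterior density at the null is rarely available in closed form, which is where simulation-based estimates of this ratio (and the connection with bridge sampling raised at the workshop) come in.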

Talk at CRiSM

Posted in R, Statistics, University life on May 30, 2010 by xi'an

This is the talk I am giving at the workshop on model uncertainty organised by the Centre for Research in Statistical Methodology (CRiSM) at the University of Warwick, on May 30-June 1. Careful readers will notice there is not much difference from my previous talk on the topic, as I only included the Savage-Dickey slides from the talk in San Antonio!