Archive for Bayesian model comparison

never mind the big data here’s the big models [workshop]

Posted in Kids, pictures, Statistics, Travel, University life on December 22, 2015 by xi'an

Maybe the last occurrence this year of the pastiche of the iconic Sex Pistols LP, made by Tamara Polajnar! It was also the last workshop of the big data year in Warwick, organised by the Warwick Data Science Institute. I appreciated the different talks this afternoon, but particularly enjoyed Dan Simpson’s and Rob Scheichl’s. The presentation by Dan was so hilarious that I could not resist asking him for permission to post the slides here:

Not only hilarious [and I have certainly missed 67% of the jokes], but quite deep about the meaning(s) of modelling and his views on getting around the most blatant issues. Rob presented a more computational talk on the ways to reach petaflops on current supercomputers, in connection with the weather prediction models used (or soon to be used) by the Met Office, at a prediction resolution of 1 km², along with significant improvements resulting from multiscale Monte Carlo and quasi-Monte Carlo. Definitely impressive! And a brilliant conclusion to the Year of Big Data (and big models).

never mind the big data here’s the big models [workshop]

Posted in Kids, pictures, Statistics on December 10, 2015 by xi'an

A perfect opportunity to recycle the pastiche of the iconic Sex Pistols LP that Mark Girolami posted for the ATI Scoping workshop last month in Warwick. There is an open workshop on the theme of big data/big models next week in Warwick, organised by the Warwick Data Science Institute. It will take place on December 15, from noon till 5:30pm in the Zeeman Building. The invited speakers are

• Robert Scheichl (University of Bath, Dept of Mathematical Sciences)
• Shiwei Lan (University of Warwick, Dept of Statistics)
• Konstantinos Zygalakis (University of Southampton, Dept of Mathematical Sciences)
• Dan Simpson (University of Bath, Dept of Mathematical Sciences), with the enticing title of “To avoid fainting, keep repeating ‘It’s only a model’…”

a unified treatment of predictive model comparison

Posted in Books, Statistics, University life on June 16, 2015 by xi'an

“Applying various approximation strategies to the relative predictive performance derived from predictive distributions in frequentist and Bayesian inference yields many of the model comparison techniques ubiquitous in practice, from predictive log loss cross validation to the Bayesian evidence and Bayesian information criteria.”

Michael Betancourt (Warwick) just arXived a paper formalising predictive model comparison in an almost Bourbakian sense! Meaning that he adopts therein a very general representation of the issue, with minimal assumptions on the data generating process (excluding a specific metric and obviously the choice of a testing statistic). He opts for an M-open perspective, meaning that this generating process stands outside the hypothetical statistical model or, in Lindley’s terms, a small world. Within this paradigm, the only way to assess the fit of a model seems to be through the predictive performances of that model, using for instance an f-divergence like the Kullback-Leibler divergence with the true data generating process as the reference. I think this however puts a restriction on the choice of small worlds, as the probability measure on a small world has to be absolutely continuous with respect to the true data generating process for the divergence to be finite. While there are arguments in favour of absolutely continuous small worlds, this assumes a knowledge about the true process that we simply cannot gather. Ignoring this difficulty, a relative Kullback-Leibler divergence can be defined in terms of an almost arbitrary reference measure. But as it still relies on the true measure, its evaluation proceeds via cross-validation “tricks” like the jackknife and the bootstrap. On the Bayesian side, however, using the prior predictive links the Kullback-Leibler divergence with the marginal likelihood, and Michael argues further that the posterior predictive can be seen as the unifying tool behind information criteria like DIC and WAIC (the widely applicable information criterion). This does not convince me of the utility of those criteria as model selection tools, as there is too much freedom in the way the approximations are used and a potential for using the data several times.
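
Since the discussion above links the Kullback-Leibler notion of predictive performance both to cross-validation and to the marginal likelihood, here is a minimal sketch, mine and not taken from Michael’s paper, contrasting the log marginal likelihood (prior predictive) with a leave-one-out predictive log density on a conjugate Normal model with known variance; the model, prior values, and toy data are purely illustrative assumptions.

# A minimal sketch (my own, not from the paper): contrasting the log marginal
# likelihood with a leave-one-out predictive log density for the conjugate model
#   y_i ~ N(theta, sigma2),  theta ~ N(mu0, tau02),  sigma2 known.
# The prior values and the toy data below are illustrative assumptions.
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(0)
y = rng.normal(loc=1.0, scale=1.0, size=20)           # toy data
sigma2, mu0, tau02 = 1.0, 0.0, 10.0
n = len(y)

# Prior predictive (marginal likelihood): y ~ N(mu0 1, sigma2 I + tau02 1 1')
cov = sigma2 * np.eye(n) + tau02 * np.ones((n, n))
log_evidence = multivariate_normal.logpdf(y, mean=np.full(n, mu0), cov=cov)

# Leave-one-out predictive log density, one of the cross-validation "tricks":
# the posterior given y_{-i} yields the predictive N(m_i, sigma2 + v_i) for y_i.
loo = 0.0
for i in range(n):
    y_rest = np.delete(y, i)
    prec = 1.0 / tau02 + len(y_rest) / sigma2          # posterior precision
    v = 1.0 / prec                                     # posterior variance
    m = v * (mu0 / tau02 + y_rest.sum() / sigma2)      # posterior mean
    loo += norm.logpdf(y[i], loc=m, scale=np.sqrt(sigma2 + v))

print(f"log marginal likelihood: {log_evidence:.2f}")
print(f"LOO predictive log density: {loo:.2f}")

The log marginal likelihood also decomposes as a sum of sequential one-step-ahead predictive log densities, which is one way to see why both quantities fall under the same predictive umbrella.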

can you help?

Posted in Statistics, University life with tags , , , , , , , on October 12, 2013 by xi'an

An email received a few days ago:

Can you help me answer my query about AIC and DIC?

I want to compare the predictive power of a non-Bayesian model (GWR, geographically weighted regression) and a Bayesian hierarchical model (spLM).
For GWR, DIC is not defined, but AIC is.
For spLM, AIC is not defined, but DIC is.

How can I compare the predictive ability of these two models? Does it make sense to compare AIC of one with DIC of the other?

I did not reply as the answer is in the question: the numerical values of AIC and DIC do not compare. And since one estimation is Bayesian while the other is not, I do not think the predictive abilities can be compared either. This is not even mentioning my reluctance to use DIC… as renewed in yesterday’s post.
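
For what it is worth, here is a minimal sketch, on a hypothetical Normal example of my own and unrelated to GWR or spLM, of how the two criteria are built from different ingredients, a maximum likelihood plug-in with a 2k penalty for AIC versus posterior expectations of the deviance for DIC, which is one reason their numerical values are not exchangeable across models fitted under different paradigms.

# A minimal sketch (hypothetical example, unrelated to GWR or spLM): AIC and DIC
# for the same Normal model with known variance, to show that the two criteria
# are built from different ingredients (MLE plug-in versus posterior deviance).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
y = rng.normal(loc=2.0, scale=1.0, size=30)            # toy data
sigma = 1.0                                             # known sd, unknown mean
n = len(y)

def deviance(theta):
    # D(theta) = -2 x log-likelihood of the data at mean theta
    return -2.0 * norm.logpdf(y, loc=theta, scale=sigma).sum()

# AIC: deviance at the maximum likelihood estimate plus 2 x (number of parameters)
aic = deviance(y.mean()) + 2 * 1

# DIC: posterior mean deviance plus the effective number of parameters p_D,
# under an arbitrary conjugate N(0, 10^2) prior on the mean.
mu0, tau0 = 0.0, 10.0
post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
post_mean = post_var * (mu0 / tau0**2 + y.sum() / sigma**2)
draws = rng.normal(post_mean, np.sqrt(post_var), size=5000)   # posterior draws
mean_dev = np.mean([deviance(t) for t in draws])
p_d = mean_dev - deviance(post_mean)
dic = mean_dev + p_d

print(f"AIC: {aic:.2f}   DIC: {dic:.2f}")

With an almost flat prior the two numbers end up close on this toy model, but the constructions, and hence the comparisons they support, are not the same.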