Archive for University of Warwick

Bayesian model comparison with intractable constants

Posted in Books, Kids, pictures, Statistics, Travel, University life on February 8, 2016 by xi'an

Richard Everitt, Adam Johansen (Warwick), Ellen Rowing and Melina Evdemon-Hogan have updated [on arXiv] a survey paper on the computation of Bayes factors in the presence of intractable normalising constants. Judging by the style, it is apparently destined for Statistics and Computing. A great entry, in particular for those attending the CRiSM workshop Estimating Constants in a few months!

A question that came to me from reading the introduction to the paper is why a method like Møller et al.'s (2006) auxiliary variable trick should be considered more "exact" than the pseudo-marginal approach of Andrieu and Roberts (2009), since the latter can equally be seen as an auxiliary variable approach. The answer was on the next page (!) as it is indeed a special case of Andrieu and Roberts (2009). Murray et al. (2006) also belongs to this group, with a product-type importance sampling estimator based on a sequence of tempered intermediaries… As noted by the authors, there is a whole spectrum of related methods in this area, some of which qualify as exact-approximate, others as inexact-approximate or noisy versions.
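For readers unfamiliar with the pseudo-marginal principle, here is a minimal sketch (my own toy illustration, not a construction from the paper): the intractable likelihood is replaced by an unbiased nonnegative estimator inside the Metropolis-Hastings ratio, and the chain nonetheless targets the exact posterior. The one-observation latent Gaussian model and flat prior below are entirely made up for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
y = 1.5  # single (made-up) observation

def lik_hat(theta, m=20):
    """Unbiased nonnegative estimator of p(y|theta) = N(y; theta, 2),
    obtained by averaging over the latent layer x ~ N(theta, 1),
    since p(y|theta) = E_x[ N(y; x, 1) ]."""
    x = rng.normal(theta, 1.0, size=m)
    return np.mean(np.exp(-0.5 * (y - x) ** 2) / np.sqrt(2.0 * np.pi))

def pseudo_marginal_mh(n_iter=5000, step=1.0):
    theta = 0.0
    z = lik_hat(theta)  # current estimate is recycled, never refreshed
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + step * rng.normal()
        z_prop = lik_hat(prop)
        # flat prior, symmetric proposal: accept on the ratio of
        # likelihood *estimates*, not of the exact likelihoods
        if rng.uniform() < z_prop / z:
            theta, z = prop, z_prop
        chain[i] = theta
    return chain

chain = pseudo_marginal_mh()  # exact posterior here is N(1.5, 2)
```

Recycling the current estimate z rather than refreshing it at every iteration is precisely what makes the scheme exact; a "noisy" variant that re-estimates both terms of the ratio loses this property.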

Their main argument is to support importance sampling as the method of choice, including sequential Monte Carlo (SMC) for large dimensional parameters. The auxiliary variable scheme of Møller et al. (2006) is then part of the importance scheme. In the first toy example, a Poisson is opposed to a Geometric distribution, as in our ABC model choice papers, for which a multiple auxiliary variable approach dominates both ABC and Simon Wood's synthetic likelihood for a given computing cost. I did not spot which artificial choice was made for the Z(θ)'s in both models, since the constants are entirely known in those densities. A very interesting section of the paper is the one envisioning biased approximations to the intractable density. If only because the importance weights are most often biased due to the renormalisation (possibly by resampling). And because the variance derivations are then intractable as well. However, due to this intractability, the paper can only approach the impact of those approximations via empirical experiments. This however leads to the question of how to evaluate the validity of the approximation in settings where the truth, and even its magnitude, is unknown… Cross-validation and bootstrap type evaluations may prove too costly in realistic problems. Using biased solutions thus mostly remains an open problem in my opinion.
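As an aside on where the renormalisation bias comes from: a raw importance sampling estimate of a single normalising constant is unbiased, as in this toy check of mine (a Gaussian target whose constant is known, not one of the paper's examples); the bias only appears once weights are normalised to sum to one.

```python
import numpy as np

rng = np.random.default_rng(1)

def gamma_fn(x):
    # unnormalised density exp(-x^2/2); its true constant is sqrt(2*pi)
    return np.exp(-0.5 * x ** 2)

def is_constant(n=100_000, scale=2.0):
    """Unbiased importance sampling estimate of Z = integral of gamma_fn,
    using a wider N(0, scale^2) proposal with known density."""
    x = rng.normal(0.0, scale, size=n)
    q = np.exp(-0.5 * (x / scale) ** 2) / (scale * np.sqrt(2.0 * np.pi))
    w = gamma_fn(x) / q       # importance weights, finite variance here
    return w.mean(), w.std() / np.sqrt(n)

z_hat, z_se = is_constant()   # compare with sqrt(2*pi) ≈ 2.5066
```

The ratio of two such unbiased estimators, which is what self-normalised or resampled weights implicitly compute, is no longer unbiased, hence the issue raised in the paper.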

The SMC part of the paper is equally interesting, if only because it focuses on the data thinning idea studied by Chopin (2002) and many other papers in recent years. This made me wonder why an alternative relying on a sequence of approximations to the target with tractable normalising constants could not be considered. A whole sequence of auxiliary variable completions sounds highly demanding in terms of computing budget and also requires a corresponding sequence of calibrations. (Now, ABC fares no better since it requires heavy simulations and repeated calibrations, while further exhibiting a damning missing link with the target density.) Unfortunately, embarking upon a theoretical exploration of the properties of approximate SMC is quite difficult, as shown by the strong assumptions made in the paper to bound the total variation distance to the true target.
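For concreteness, here is a minimal tempered SMC sketch (again my own toy, not the data-thinning scheme of the paper): a sequence of bridging distributions connects a tractable reference to the target, and the incremental weights deliver an estimate of the normalising constant along the way. The quartic target below is an arbitrary choice whose constant, √2·Γ(1/4)/2 ≈ 2.564, can be checked.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_gamma(x):
    # unnormalised log target: gamma(x) = exp(-x^4/4)
    return -0.25 * x ** 4

def log_ref(x):
    # tractable, normalised N(0,1) reference
    return -0.5 * x ** 2 - 0.5 * np.log(2.0 * np.pi)

def smc_tempering(n=2000, n_temps=21):
    """Move particles from the reference to the target through bridges
    pi_b ∝ ref^(1-b) * gamma^b, accumulating log Z from the weights."""
    betas = np.linspace(0.0, 1.0, n_temps)
    x = rng.normal(size=n)
    log_Z = 0.0
    for b0, b1 in zip(betas[:-1], betas[1:]):
        # incremental importance weights for the bridge b0 -> b1
        lw = (b1 - b0) * (log_gamma(x) - log_ref(x))
        log_Z += np.log(np.mean(np.exp(lw)))
        # multinomial resampling
        w = np.exp(lw - lw.max())
        x = x[rng.choice(n, size=n, p=w / w.sum())]
        # a few random-walk Metropolis moves targeting pi_{b1}
        for _ in range(5):
            prop = x + 0.5 * rng.normal(size=n)
            logr = ((1 - b1) * (log_ref(prop) - log_ref(x))
                    + b1 * (log_gamma(prop) - log_gamma(x)))
            acc = np.log(rng.uniform(size=n)) < logr
            x[acc] = prop[acc]
    return log_Z

log_Z = smc_tempering()
```

Each bridge has its own implicit constant, which is exactly what the incremental weight averages estimate; an approximate-SMC variant would replace log_gamma with a biased stand-in, and the accumulated error is what the paper's total variation bounds try to control.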

off to Oxford

Posted in Kids, pictures, Travel, University life on January 31, 2016 by xi'an

I am off to Oxford this evening to teach once again in the Bayesian module of the OxWaSP programme, a joint PhD programme between Oxford and Warwick supported by the EPSRC, with around a dozen new [excellent!] PhD students every year. Here are the slides of a longer course that I will use in the coming days:

And by popular request (!) here is the heading of my Beamer file:

\documentclass[xcolor=dvipsnames,professionalfonts]{beamer}
\usepackage{colordvi}
\usetheme{Montpellier}
\usecolortheme{beaver}
% rather use my own colour definitions than the theme defaults
\definecolor{LightGrey}{rgb}{0.84,0.83,0.83}
\definecolor{LightYell}{rgb}{0.90,0.83,0.70}
\definecolor{StroYell}{rgb}{0.95,0.88,0.72}
\definecolor{myem}{rgb}{0.797,0.598,0.598}
\definecolor{lightred}{rgb}{0.75,0.033,0}
\definecolor{shadecolor1}{rgb}{0.90,0.83,0.70}
\setbeamercovered{transparent=20}
\setbeamercolor{structure}{fg=myem!120}
\setbeamercolor{alerted text}{fg=lightred}
\setbeamertemplate{blocks}[rounded][shadow=true]

Dublin [porter]

Posted in pictures, Travel, Wines on January 20, 2016 by xi'an


CRiSM workshop on estimating constants [#1]

Posted in pictures, Statistics, Travel, University life on January 19, 2016 by xi'an

The registration for the CRiSM workshop on estimating constants that Nial Friel, Helen Ogden and myself host next April 20-22 at the University of Warwick is now open. The plain registration fee is £40 and accommodation on the campus is available through the same form.

Besides the invited talks, the workshop will host two poster sessions with speed (2-5mn) oral presentations, so we encourage all interested researchers to submit a poster via the appropriate form. Once again, this should be an exciting workshop, given the on-going activity in this area.

the Force awakens… some memories

Posted in Books, Kids, pictures, Travel on January 10, 2016 by xi'an

In what may become a family tradition, I managed to accompany my daughter to the movies on the day off she takes just before her medical school finals. After last year's catastrophic conclusion to the Hobbit trilogy, we went to watch the new Star Wars on the day it appeared in Paris. (Which involved me going directly to the movie theatre from the airport, on my way back from Warwick.) I am afraid I have to admit I enjoyed the movie a lot, despite my initial misgivings and the blatant shortcomings of this new instalment.

Indeed, it somewhat brought back [to me] the magic of watching the very first Star Wars, in the summer of 1977 and in a theatre located in down-town Birmingham, to make the connection complete! A new generation of (admittedly implausible) heroes takes over with very little help from the (equally implausible) old guys (so far). It is just brilliant to watch the scenario unfold towards the development of those characters and tant pis! if the battle scenes and the fighters and the whole Star Wars universe have not changed that much. While the new director has recovered the pace of the original film, he also builds the relations between most characters towards more depth and ambiguity. Once again, I like very much the way the original characters are treated, with just the right distance and irony, a position that would not have been possible with new actors. And again tant pis! if the new heroes share too much with the central characters of Hunger Games or The Maze Runner. This choice definitely appealed to my daughter, who did not complain in the least about the weaknesses in the scenario and about the very stretched ending. To the point of watching the movie a second time during the X'mas vacations.

approximating evidence with missing data

Posted in Books, pictures, Statistics, University life on December 23, 2015 by xi'an

Panayiota Touloupou (Warwick), Naif Alzahrani, Peter Neal, Simon Spencer (Warwick) and Trevelyan McKinley arXived a paper yesterday on Model comparison with missing data using MCMC and importance sampling, where they propose an importance sampling strategy based on an early MCMC run to approximate the marginal likelihood, a.k.a. the evidence. Another instance of estimating a constant. It is thus similar to our Frontier paper with Jean-Michel, as well as to the recent Pima Indian survey of James and Nicolas. The authors cite the difficulty of calibrating reversible jump MCMC as the starting point of their research. The importance sampler they use is the natural choice of a Gaussian or t distribution centred at some estimate of θ, with covariance matrix associated with Fisher's information or derived from the warmup MCMC run. The comparison between the different approximations to the evidence is first conducted on longitudinal epidemiological models, involving 11 parameters in the example processed therein. The competitors to the 9 versions of importance samplers investigated in the paper are the raw harmonic mean [rather than our HPD truncated version], Chib's, path sampling and RJMCMC [which does not make much sense when comparing two models]. But neither bridge sampling nor nested sampling. Without any surprise (!), harmonic means do not converge to the right value, but more surprisingly Chib's method happens to be less accurate than most importance solutions studied therein. It may be due to the fact that Chib's approximation requires three MCMC runs and hence is quite costly. The fact that the mixture (or defensive) importance sampling [with 5% weight on the prior] did best begs for a comparison with bridge sampling, no? The difficulty with such a study is obviously that the results only apply in the setting of the simulation, hence that e.g. another mixture importance sampler or Chib's solution would behave differently in another model. In particular, it is hard to judge the impact of the dimensions of the parameter and of the missing data.
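To make the defensive mixture idea concrete, here is a sketch of mine on a toy conjugate model where the evidence is available in closed form for checking; the exact posterior moments stand in for the warmup MCMC estimates, and nothing here reflects the authors' epidemiological models.

```python
import numpy as np

rng = np.random.default_rng(3)

# toy conjugate model: theta ~ N(0,1), y_i | theta ~ N(theta, 1)
n_obs = 10
y = rng.normal(0.8, 1.0, size=n_obs)

def log_prior(theta):
    return -0.5 * theta ** 2 - 0.5 * np.log(2.0 * np.pi)

def log_lik(theta):
    # vectorised over a 1-d array of theta values
    return (-0.5 * np.sum((y[None, :] - theta[:, None]) ** 2, axis=1)
            - 0.5 * n_obs * np.log(2.0 * np.pi))

# exact conjugate posterior moments, standing in for warmup MCMC output
post_var = 1.0 / (n_obs + 1)
post_mean = y.sum() * post_var

def log_evidence_defensive(n=50_000, eps=0.05):
    """Importance sampling estimate of the log evidence with a defensive
    mixture proposal: eps * prior + (1 - eps) * N(post_mean, post_var)."""
    from_prior = rng.uniform(size=n) < eps
    theta = np.where(from_prior,
                     rng.normal(0.0, 1.0, size=n),
                     rng.normal(post_mean, np.sqrt(post_var), size=n))
    log_q = np.logaddexp(
        np.log(eps) + log_prior(theta),
        np.log(1 - eps) - 0.5 * (theta - post_mean) ** 2 / post_var
        - 0.5 * np.log(2.0 * np.pi * post_var))
    lw = log_prior(theta) + log_lik(theta) - log_q
    m = lw.max()
    return m + np.log(np.mean(np.exp(lw - m)))

log_m_hat = log_evidence_defensive()
# closed-form log evidence of this conjugate model, for checking
log_m_exact = (-0.5 * n_obs * np.log(2.0 * np.pi) - 0.5 * np.log(n_obs + 1)
               - 0.5 * (np.sum(y ** 2) - y.sum() ** 2 / (n_obs + 1)))
```

The small prior component bounds the importance weights by L(θ)/ε, guarding against the Gaussian component having lighter tails than the integrand, which would otherwise let the weight variance blow up.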

never mind the big data here’s the big models [workshop]

Posted in Kids, pictures, Statistics, Travel, University life on December 22, 2015 by xi'an

Maybe the last occurrence this year of the pastiche, made by Tamara Polajnar, of the iconic Sex Pistols LP! It was also the last workshop of the big data year at Warwick, organised by the Warwick Data Science Institute. I appreciated the different talks this afternoon, but particularly enjoyed Dan Simpson's and Rob Scheichl's. The presentation by Dan was so hilarious that I could not resist asking him for permission to post the slides here:

Not only hilarious [and I have certainly missed 67% of the jokes], but quite deep about the meaning(s) of modelling and his views on getting around the most blatant issues. Rob presented a more computational talk on the ways to reach petaflops on current supercomputers, in connection with weather prediction models used (or soon to be used) by the Met Office, for a prediction area of 1 km². Along with significant improvements resulting from multilevel Monte Carlo and quasi-Monte Carlo. Definitely impressive! And a brilliant conclusion to the Year of Big Data (and big models).
