There will be another i-like workshop this spring, over two days at St Anne’s College, Oxford, with talks by Xiao-Li Meng and Eric Moulines, as well as by researchers from the participating universities. Registration is now open. (I will take part as a part-time participant, travelling from Nottingham, where I give a seminar on the 20th.)
Willie Neiswanger, Chong Wang, and Eric Xing (from CMU) recently arXived a paper entitled as above. The “embarrassing” in the title refers to the “embarrassingly simple” solution proposed therein, namely to handle very large datasets by running completely independent MCMC samplers on parallel threads or computers and turning their outputs into density estimates, pulled together as a product towards an approximation of the true posterior density. In other words, the idea is to break the posterior as

p(θ|x) ∝ ∏_{i=1}^m p(θ)^{1/m} p(x_i|θ)

and to use the estimate

p̂(θ|x) ∝ ∏_{i=1}^m p̂_i(θ|x_i)
where the individual estimates are obtained by, say, non-parametric estimation. The method is then “asymptotically exact” in the weak (and unsurprising) sense of converging as the number of MCMC iterations grows. Still, there is a theoretical justification not found in earlier parallel methods, which pooled all resulting samples without accounting for the subsampling. And I also appreciate the point that, in many cases, running MCMC samplers on subsamples produces faster convergence.
In the paper, the division of p into its components is done by partitioning the iid data into m subsets and taking a power 1/m of the prior in each case. (Which may induce improperness issues.) However, the subdivision is arbitrary and can thus be implemented in other cases than the fairly restrictive iid setting. Because each (subsample) non-parametric estimate involves T terms, the resulting overall product estimate contains T^m terms, and the authors suggest using an independent Metropolis-within-Gibbs sampler to handle this complexity. Which is necessary [took me a while to realise this!] for producing a final sample from the (approximate) true posterior distribution. As an aside, I wonder why the bandwidths are equal across all subsamples, when they should depend on the subsamples themselves. Not that it would make much of a difference. It would also be interesting to build a typology of cases where subsampling leads to subposteriors that are close to orthogonal, preventing the implementation of the method.
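To make the combination step concrete, here is a minimal one-dimensional sketch under invented settings (Gaussian mean with known unit variance, m=4 subsets, a common bandwidth h, sample sizes and tuning picked for illustration) — not the authors’ implementation. Each worker runs a random-walk Metropolis sampler on its subposterior (fractionated prior times subset likelihood); the product of the m Gaussian-kernel density estimates is then sampled by a Metropolis-within-Gibbs move on the kernel indices, exploiting the fact that a product of m Gaussian kernels with bandwidth h is itself Gaussian with standard deviation h/√m:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated iid data: x_j ~ N(mu, 1); we infer mu under a N(0, 10^2) prior.
true_mu, n, m = 2.0, 1000, 4
x = rng.normal(true_mu, 1.0, size=n)
subsets = np.array_split(x, m)

def log_subposterior(mu, data):
    # Fractionated prior p(mu)^(1/m) times the subset likelihood.
    log_prior = -0.5 * mu**2 / 10.0**2
    log_lik = -0.5 * np.sum((data - mu) ** 2)
    return log_prior / m + log_lik

def rw_metropolis(data, iters=4000, step=0.1):
    # Plain random-walk Metropolis on one subposterior.
    mu, lp = 0.0, log_subposterior(0.0, data)
    draws = []
    for _ in range(iters):
        prop = mu + step * rng.normal()
        lp_prop = log_subposterior(prop, data)
        if np.log(rng.uniform()) < lp_prop - lp:
            mu, lp = prop, lp_prop
        draws.append(mu)
    return np.array(draws[iters // 2:])  # discard burn-in

# Independent subposterior samplers (run sequentially here for simplicity).
chains = np.array([rw_metropolis(s) for s in subsets])  # shape (m, T)
T = chains.shape[1]
h = 0.05  # common KDE bandwidth, an arbitrary illustrative choice

def log_weight(idx):
    # Unnormalised log-weight of one of the T^m components of the
    # product-of-KDEs mixture: product of m N(.; theta_{t_i}, h^2) kernels.
    pts = chains[np.arange(m), idx]
    return -0.5 * np.sum((pts - pts.mean()) ** 2) / h**2

# Metropolis-within-Gibbs over the kernel indices t_1..t_m, then a draw of
# theta from the Gaussian implied by the selected components.
idx = rng.integers(T, size=m)
lw = log_weight(idx)
combined = []
for _ in range(5000):
    for i in range(m):
        new = idx.copy()
        new[i] = rng.integers(T)  # independent uniform proposal on index i
        lw_new = log_weight(new)
        if np.log(rng.uniform()) < lw_new - lw:
            idx, lw = new, lw_new
    centre = chains[np.arange(m), idx].mean()
    combined.append(rng.normal(centre, h / np.sqrt(m)))

combined = np.array(combined[1000:])  # discard combination burn-in
print(combined.mean())  # should sit near the full-data posterior mean
```

The index update is an independent Metropolis step within a Gibbs sweep, which is what keeps the cost per iteration at O(m) instead of enumerating all T^m mixture components.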
As it happened, I read this paper on the very day Nial Friel (University College Dublin) gave a talk at the Big’MC seminar on the convergence of approximations to ergodic transition kernels, based on the recent results of Mitrophanov on the stability of Markov chains, where he introduced the topic with the case of datasets too large to allow computation of the likelihood function.
A new EPSRC programme grant, called i-like, has been awarded to researchers in Bristol, Lancaster, Oxford, and Warwick, to conduct research on intractable likelihoods. (I am also associated with this programme as a [grateful] collaborator.) This covers several areas of statistics, like big data and inference on stochastic processes, but my own primary interest in the programme is of course the possibility of collaborating on ABC and composite likelihood methods. (Great website design, by the way!)
A first announcement is that there will be a half-day launch in Oxford on January 31, 2013, whose programme is now available, followed by a workshop in mid-May in Warwick (in which I will participate). This event is particularly aimed at PhD students and early-career researchers. The second announcement is that the EPSRC programme grant provides funding for five postdoctoral positions over a duration of four years, which is of course stupendous! So if you like i-like as much as I like it, and are a new researcher looking for opportunities in exciting areas, you should definitely consider applying!