Archive for modularisation

Markov melding

Posted in Books, Statistics, University life on July 2, 2020 by xi'an

“An alternative approach is to model smaller, simpler aspects of the data, such that designing these submodels is easier, then combine the submodels.”

An interesting paper by Andrew Manderson and Robert Goudie that I read on arXiv, on merging (or melding) several models together. With different data and different parameters. The assumption is that of a common parameter φ shared by all (sub)models. Since the product of the joint distributions across the m submodels would involve m replicates of φ, the melded distribution is instead defined as the product of the conditional distributions given φ, times a common (or pooled) prior on φ. Which leads to a perfectly well-defined joint distribution, provided the support of this pooled prior is compatible with all conditionals.
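In my own notation [so the indexing is mine rather than the paper's, caveat lector], the melded joint can be written as

\[
p_{\text{meld}}(\phi,\psi_{1:m},Y_{1:m}) \;\propto\; p_{\text{pool}}(\phi)\,\prod_{j=1}^{m} p_j(\psi_j,Y_j\mid\phi),
\qquad
p_j(\psi_j,Y_j\mid\phi)=\frac{p_j(\phi,\psi_j,Y_j)}{p_j(\phi)},
\]

where p_j(φ) denotes the j-th submodel marginal prior on φ, whose frequent unavailability drives the computational treatment discussed next.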

The MCMC aspects of such a target are interesting in that the submodels can easily be exploited to return proposal distributions on their own parameters (plus φ). Although the notion is fraught with danger when considering a flat prior on φ, since the corresponding posterior is not necessarily well-defined. Or at the very least unrelated to the actual marginal posterior. This first stage is used to build a particle approximation to the posterior distribution of φ, exploited in the later simulation of the other submodel parameters and updates of φ. Due to the rare availability of the (submodel) marginal prior on φ, it is replaced in the paper by a kernel density estimate. Not a great idea as (a) it is unstable and (b) the exact density, while costly to evaluate, does exist! Which brings the authors to set a goal of estimating a ratio. Of the same marginal density at two different values of φ. (Not our frequent problem of the ratio of different marginals!) They achieve this by targeting another joint, using a weight function both for the simulation and for the kernel density estimation… which requires calibrating the weight function and produces a biased estimate of the ratio.
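For illustration only, here is a naive numerical sketch of this kernel replacement and of the resulting ratio of the same (estimated) marginal density at two values of φ. The Gaussian stand-in for the submodel prior and all names are mine, not the authors', and the weighted scheme they actually use is not reproduced here.

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# stand-in draws of phi from submodel m's prior (in practice, MCMC output)
phi_draws = rng.normal(loc=0.0, scale=1.0, size=5_000)
kde = gaussian_kde(phi_draws)        # plug-in bandwidth estimate of p_m(phi)

def log_ratio(phi_new, phi_old):
    """KDE estimate of log p_m(phi_new) - log p_m(phi_old), the quantity
    entering the acceptance ratio when updating phi."""
    return float(np.log(kde(phi_new)[0]) - np.log(kde(phi_old)[0]))

print(log_ratio(0.5, -0.5))   # reasonable near the bulk of the draws
print(log_ratio(2.5, 3.5))    # already shaky further out in the tails

Even in this one-dimensional toy case, the estimate deteriorates quickly away from the bulk of the draws, which is the instability mentioned above.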

While the paper concentrates very much on computational improvements, including the possible recourse to unbiased MCMC, I also feel it falls short on the Bayesian aspects, since the construction of such a multi-level Bayesian model faces many challenges. In a sense this is an alternative to our better together paper, where cuts are used to avoid the duplication of common parameters.

Better together in Kolkata [slides]

Posted in Books, pictures, Statistics, Travel, University life on January 4, 2018 by xi'an

Here are the slides of the talk on modularisation I am giving today at the PC Mahalanobis 125 Conference in Kolkata, mostly borrowed from Pierre's talk at O'Bayes 2017 last month:

[which made me realise Slideshare has discontinued the option to update one's presentation, forcing users to create a new presentation for each update!] Incidentally, the amphitheatre at ISI is located right on top of a geological exhibit room with a reconstituted Barapasaurus tagorei, so I will figuratively ride a dinosaur during my talk!

ISBA 2016 [#5]

Posted in Mountains, pictures, Running, Statistics, Travel on June 18, 2016 by xi'an

[from above Forte Village, Santa Margherita di Pula, Sardinia, June 17, 2016]

On Thursday, I started the day with a rather masochistic run to the nearby hills, not only because of the early hour but also because, by following rabbit trails that were not intended for my size, I ended up scratched by thorns and brambles all over! But also with neat views of the coast around Pula. From there, it was all downhill [joke]. The first morning talk I attended was by Paul Fearnhead, about efficient change point estimation (which is an NP-hard problem, or close to it). The method relies on dynamic programming [which reminded me of one of my earliest Pascal codes, about optimising a dam debit]. From my spectator's perspective, I wonder[ed] at easier models, from Lasso optimisation to spline modelling followed by testing equality between bits. Later that morning, James Scott delivered the first Bayarri Lecture, created in honour of our friend Susie, who passed away between the previous ISBA meeting and this one. James gave an impressive coverage of regularisation through three complex models, with the [hopefully not degraded by my translation] message that we should [as Bayesians] focus on important parts of those models and use non-Bayesian tools like regularisation. I can understand the practical constraints for doing so, but optimisation leads us away from a Bayesian handling of inference problems, by removing the ascertainment of uncertainty…
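In case it helps, here is a textbook dynamic-programming sketch for change point detection [optimal partitioning with a fixed penalty per change, written in Python]; it is a generic illustration of the trick, not the method presented in the talk.

import numpy as np

def changepoints(y, penalty):
    """Optimal partitioning: minimise the sum of within-segment squared errors
    plus a fixed penalty per change point, via dynamic programming."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    csum = np.concatenate(([0.0], np.cumsum(y)))
    csum2 = np.concatenate(([0.0], np.cumsum(y ** 2)))

    def seg_cost(i, j):
        # sum of squared deviations of y[i:j] around its own mean
        s, s2, m = csum[j] - csum[i], csum2[j] - csum2[i], j - i
        return s2 - s * s / m

    F = np.full(n + 1, np.inf)       # F[t] = optimal cost of segmenting y[:t]
    F[0] = -penalty                  # so that the first segment pays no penalty
    last = np.zeros(n + 1, dtype=int)
    for t in range(1, n + 1):
        costs = [F[s] + seg_cost(s, t) + penalty for s in range(t)]
        s_star = int(np.argmin(costs))
        F[t], last[t] = costs[s_star], s_star

    cps, t = [], n                   # backtrack the optimal segmentation
    while t > 0:
        t = last[t]
        if t > 0:
            cps.append(t)
    return sorted(cps)

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
print(changepoints(y, penalty=3 * np.log(len(y))))  # expect a change near 100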

Later in the afternoon, I took part in the Bayesian foundations session, discussing the shortcomings of the Bayes factor and suggesting the use of mixtures instead. With rebuttals from [friends in] the audience!

This session also included a talk by Victor Peña and Jim Berger analysing and answering the recent criticisms of the Likelihood principle. I am not sure this answer will convince the critics, but I won't comment further as I now see the debate as resulting from a vague notion of inference in Birnbaum's expression of the principle. Jan Hannig gave another foundation talk introducing fiducial distributions (a.k.a., Fisher's Bayesian mimicry) but failing to provide a foundational argument for replacing Bayesian modelling. (Obviously, I am definitely prejudiced in this regard.)

The last session of the day was sponsored by BayesComp and saw talks by Natesh Pillai, Pierre Jacob, and Eric Xing. Natesh talked about his paper on accelerated MCMC recently published in JASA. Which surprisingly did not get discussed here, but would definitely deserve to be! As hopefully corrected within a few days, once I recover from conference burnout!!! Pierre Jacob presented a work we are currently completing with Chris Holmes and Lawrence Murray on modularisation, inspired by the cut problem (as presented by Plummer at MCMski IV in Chamonix). And Eric Xing spoke about embarrassingly parallel solutions, discussed a while ago here.