Archive for path sampling

Pre-processing for approximate Bayesian computation in image analysis

Posted in R, Statistics, University life on March 21, 2014 by xi'an

With Matt Moores and Kerrie Mengersen, from QUT, we wrote this short paper just in time for the MCMSki IV Special Issue of Statistics & Computing. And arXived it as well. The global idea is to cut down on the cost of running an ABC experiment by removing the simulation of a humongous state-space vector, as in Potts and hidden Potts models, and replacing it with an approximate simulation of the 1-d sufficient (summary) statistic. In that case, we used a partition of the 1-d parameter interval to simulate the distribution of the sufficient statistic for each of those parameter values and to compute its expectation and variance. The conditional distribution of the sufficient statistic is then approximated by a Gaussian with these two parameters, and those Gaussian approximations substitute for the true distributions within an ABC-SMC algorithm à la Del Moral, Doucet and Jasra (2012).
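Here is a minimal R sketch of that precomputation step (my own reconstruction of the idea, not Matt's code!): sim_suff_stat() is a hypothetical stand-in for a Swendsen-Wang or Gibbs simulation of the Potts sufficient statistic at a given inverse temperature, and the interpolated mean and variance functions are then recycled inside the ABC distance.

```r
## Minimal sketch of the precomputation idea (an illustration, not the
## paper's code); sim_suff_stat() is a hypothetical stand-in for a
## Swendsen-Wang or Gibbs simulation of the Potts sufficient statistic.
sim_suff_stat <- function(beta, n = 50) {
  ## placeholder surrogate returning n draws of S(z) at parameter beta
  rnorm(n, mean = 1e4 * plogis(4 * (beta - 1)), sd = 50)
}

## 1. discretise the 1-d parameter interval and estimate E[S|beta], Var[S|beta]
beta_grid <- seq(0, 2, length.out = 50)
moments <- t(sapply(beta_grid, function(b) {
  s <- sim_suff_stat(b)
  c(mean = mean(s), var = var(s))
}))

## 2. interpolate the mapping beta -> (mu(beta), sigma^2(beta))
mu_fun  <- splinefun(beta_grid, moments[, "mean"])
var_fun <- splinefun(beta_grid, moments[, "var"])

## 3. within ABC-SMC, a Gaussian draw replaces simulating the whole image
abc_distance <- function(beta, s_obs) {
  s_sim <- rnorm(1, mu_fun(beta), sqrt(max(var_fun(beta), 0)))
  abs(s_sim - s_obs)
}
```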

[Figure: residuals]

Across 20 simulated 125 × 125 pixel images, Matt's algorithm took an average of 21 minutes per image for between 39 and 70 SMC iterations, while resorting to pseudo-data and deriving the genuine sufficient statistic took an average of 46.5 hours for 44 to 85 SMC iterations. On a realistic Landsat image, with a total of 978,380 pixels, the precomputation of the mapping function took 50 minutes, while the total CPU time on 16 parallel threads was 10 hours 38 minutes. By comparison, it took 97 hours for 10,000 MCMC iterations on this image, with a poor effective sample size of 390 values. Regular SMC-ABC algorithms cannot handle this scale: it takes 89 hours to perform a single SMC iteration! (Note that path sampling also operates in this framework, thanks to the same precomputation: in that case it took 2.5 hours for 10⁵ iterations, with an effective sample size of 10⁴…)

Since my student’s paper on Seaman et al (2012) got promptly rejected by TAS for quoting too extensively from my post, we decided to include me as an extra author and submitted the paper to this special issue as well.

Importance sampling schemes for evidence approximation in mixture models

Posted in R, Statistics, University life on November 27, 2013 by xi'an

[Figure: boxplot comparison of the marginal likelihood estimators]

Jeong Eun (Kate) Lee and I completed this paper, “Importance sampling schemes for evidence approximation in mixture models”, now posted on arXiv. (With the customary one-day lag for posting, making me bemoan the days of yore when arXiv would give a definitive arXiv number at the time of submission.) Kate came twice to Paris in the past years to work with me on this evaluation of Chib’s original marginal likelihood estimate (also called the candidate formula by Julian Besag), and on the improvement proposed by Berkhof, van Mechelen, and Gelman (2003), based on averaging over all permutations, an idea that we rediscovered in an earlier paper with Jean-Michel Marin. (And that Andrew seemed to have completely forgotten, despite being the very first one to publish [in English] a paper on a Gibbs sampler for mixtures.) Given that this averaging can get quite costly, we propose a preliminary step to reduce the number of relevant permutations to be considered in the averaging, removing far-away modes that do not contribute to the Rao-Blackwell estimate, a scheme we call dual importance sampling. We also considered modelling the posterior as a product of k-component mixtures on the components, following a vague idea I had had in the back of my mind for many years, but it did not help. In the above boxplot comparison, the marginal likelihood estimators are

  1. Chib’s method using T = 5000 samples with a permutation correction by multiplying by k!.
  2. Chib’s method (1), using T = 5000 samples which are randomly permuted.
  3. Importance sampling estimate (7), using the maximum likelihood estimate (MLE) of the latents as centre.
  4. Dual importance sampling using q in (8).
  5. Dual importance sampling using an approximation in (14).
  6. Bridge sampling (3). Here, label switching is imposed in the hyperparameters.
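As a rough illustration of estimator 1 (and definitely not Kate's code), here is how the candidate formula and the k! correction fit together in R: loglik(), logprior() and rb_log_ordinate() are hypothetical stand-ins for the mixture log-likelihood, the log-prior, and the Rao-Blackwellised posterior ordinate at the plug-in value θ*.

```r
## Sketch of Chib's candidate formula with the k! label-switching
## correction (estimator 1 above); all three function arguments are
## hypothetical stand-ins supplied by the user for a k-component mixture.
chib_log_evidence <- function(theta_star, gibbs_draws, k,
                              loglik, logprior, rb_log_ordinate) {
  ## Rao-Blackwell estimate of log pi(theta* | x) from the Gibbs output
  log_post_ord <- rb_log_ordinate(theta_star, gibbs_draws)
  ## candidate formula: log m(x) = log f(x|theta*) + log pi(theta*) - log pi(theta*|x)
  log_m <- loglik(theta_star) + logprior(theta_star) - log_post_ord
  ## if the sampler never switches labels, correct by multiplying m(x) by k!
  log_m + lfactorial(k)
}
```

Estimator 2 instead randomly permutes the T draws before forming the posterior ordinate, a cheaper stand-in for the full averaging over all k! permutations.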

Carnon [and Core, end]

Posted in Books, Kids, pictures, R, Running, Statistics, Travel, University life on June 16, 2012 by xi'an

Yet another full day working on Bayesian Core with Jean-Michel in Carnon… This morning, I ran along the canal for about an hour and at last saw some pink flamingos close enough to take pictures (if only to convince my daughter that there were flamingos in the area!). Then I worked full-time on the spatial statistics chapter, using a small dataset on sedges that we found in Gaetan and Guyon’s Spatial Statistics and Modeling. I am almost done tonight, with both path sampling and ABC R codes documented and working for this dataset. But I’d like to re-run both codes for longer to achieve smoother outcomes.
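(For the record, the path sampling side boils down to a thermodynamic-integration identity. A generic sketch, not the Bayesian Core code, could look as follows, with sim_expected_stat() a hypothetical function returning an MCMC estimate of E_β[S(z)] at a given β.)

```r
## Generic path-sampling (thermodynamic integration) sketch, not the
## Bayesian Core code: for a Potts-type field with sufficient statistic
## S(z) and normalising constant Z(beta),
##   log Z(beta_1) - log Z(beta_0) = integral of E_beta[S(z)] over beta,
## so MCMC estimates of E_beta[S] on a grid plus a trapezoidal rule give
## the log-ratio; sim_expected_stat() is a hypothetical stand-in.
path_log_Z_ratio <- function(beta_grid, sim_expected_stat) {
  e_s <- sapply(beta_grid, sim_expected_stat)  ## E_beta[S(z)] on the grid
  sum(diff(beta_grid) * (head(e_s, -1) + tail(e_s, -1)) / 2)
}
```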

Harmonic means, again again

Posted in Books, R, Statistics, University life on January 10, 2012 by xi'an

Another arXiv posting I had had no time to comment upon is Nial Friel’s and Jason Wyse’s “Estimating the model evidence: a review“. This is a review in the spirit of two of our papers, “Importance sampling methods for Bayesian discrimination between embedded models” with Jean-Michel Marin (published in the Jim Berger Festschrift, Frontiers of Statistical Decision Making and Bayesian Analysis: In Honor of James O. Berger, but not mentioned in the review) and “Computational methods for Bayesian model choice” with Darren Wraith (referred to by the review). Indeed, it considers a series of competing computational methods for approximating the evidence, aka the marginal likelihood.

The paper correctly points out the difficulty with the naïve harmonic mean estimator. (But it does not cover the extension to the finite-variance solutions found in “Importance sampling methods for Bayesian discrimination between embedded models” and in “Computational methods for Bayesian model choice“.) It also misses the whole collection of bridge and umbrella sampling techniques covered in, e.g., Chen, Shao and Ibrahim (2000). In their numerical evaluations of the methods, the authors use the Pima Indian diabetes dataset we also used in “Importance sampling methods for Bayesian discrimination between embedded models“. The outcome is that the Laplace approximation does extremely well in this case (due to the fact that the posterior is very close to normal), with Chib’s method a very near second. The harmonic mean estimator does extremely poorly (not a surprise!) and the nested sampling approximation is not as accurate as the other (non-harmonic) methods. If we compare with our 2009 study, importance sampling based on the normal approximation (almost the truth!) did best, followed by our harmonic mean solution based on the same normal approximation. (Chib’s solution was then third, with a standard deviation ten times larger.)
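To see the issue on a toy example of my own (not one taken from the review), consider a conjugate normal model where the evidence is available exactly: the naive harmonic mean estimate below is trivially computed from posterior draws, but its weights 1/f(x|μ) are heavy-tailed, which is the source of its instability.

```r
## Toy illustration (my own, not from the review): x_i ~ N(mu, 1) with a
## N(0, 1) prior on mu, so the evidence is available in closed form via
## the candidate identity m(x) = f(x|mu0) pi(mu0) / pi(mu0|x) at mu0 = 0.
set.seed(1)
x <- rnorm(20, mean = 1)
n <- length(x)
mu_n <- n * mean(x) / (n + 1)   ## posterior mean of mu
s_n  <- sqrt(1 / (n + 1))       ## posterior sd of mu

log_m_exact <- sum(dnorm(x, 0, 1, log = TRUE)) + dnorm(0, 0, 1, log = TRUE) -
  dnorm(0, mu_n, s_n, log = TRUE)

## naive harmonic mean: inverse of the average of 1/f(x|mu) over posterior draws
mu_post <- rnorm(1e5, mu_n, s_n)
loglik  <- sapply(mu_post, function(m) sum(dnorm(x, m, 1, log = TRUE)))
a <- -loglik                    ## log of the (heavy-tailed) weights
log_m_hm <- -(max(a) + log(mean(exp(a - max(a)))))

c(exact = log_m_exact, harmonic = log_m_hm)
```

Repeating the last block over several seeds exhibits the variability of the harmonic mean solution, even in this near-Gaussian setting.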

Computing evidence

Posted in Books, R, Statistics on November 29, 2010 by xi'an

The book Random effects and latent variable model selection, edited by David Dunson in 2008 as a Springer Lecture Note, contains several chapters dealing with evidence approximation in mixed effect models. (Incidentally, I would be interested in the story behind the Lecture Note, as I found no explanation on the back cover or in the preface. Some chapters, but not all, refer to a SAMSI workshop on model uncertainty…) The final chapter, written by Joyee Ghosh and David Dunson (similar to a corresponding paper in JCGS), contains in particular the interesting identity that the Bayes factor opposing model h-1 to model h can be unbiasedly approximated by (the average of the terms)

\dfrac{f(x|\theta_{i,h},\mathfrak{M}=h-1)}{f(x|\theta_{i,h},\mathfrak{M}=h)}

when

  • \mathfrak{M} is the model index,
  • the \theta_{i,h}’s are simulated from the posterior under model h,
  • the model \mathfrak{M}=h-1 only considers the h-1 first components of \theta_{i,h},
  • the prior under model h-1 is the projection of the prior under model h. (Note that this marginalisation is not the projection used in Bayesian Core.)
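A literal transcription of this identity in R (a sketch assuming user-supplied log-likelihood functions, not Ghosh and Dunson’s code) would be:

```r
## Sketch of the identity above (not Ghosh & Dunson's code): theta_draws
## holds one posterior draw of theta under model h per row, loglik_h() is
## the log-likelihood under model h, and loglik_hm1() the log-likelihood
## under model h-1, which only uses the first h-1 components of theta.
bf_hm1_vs_h <- function(theta_draws, loglik_hm1, loglik_h) {
  log_ratio <- apply(theta_draws, 1,
                     function(th) loglik_hm1(th) - loglik_h(th))
  mean(exp(log_ratio))   ## unbiased estimate of the Bayes factor B_{h-1,h}
}
```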

