AMIS revised & resubmitted

Posted in R, Statistics, University life on December 19, 2010 by xi'an

After a thorough revision that removed most of the theoretical attempts at improving our understanding of AMIS convergence, we have now resubmitted the AMIS paper to Scandinavian Journal of Statistics and arXived the new version as well. (I remind the reader that AMIS stands for adaptive mixture importance sampling and that it implements an adaptive version of Owen and Zhou’s (2000, JASA) stabilisation mixture technique, applying this correction to the past and present importance weights at each iteration of this iterative algorithm.) The AMIS method is starting to be used in population genetics, including on-going work by Jean-Marie Cornuet and a published paper in Molecular Biology and Evolution by Sirén, Marttinen and Corander. The challenge of properly demonstrating AMIS convergence remains open!

IMIS & AMIS

Posted in R, Statistics, University life on July 30, 2010 by xi'an

A most interesting paper by Adrian Raftery and Le Bao appeared in the Early View section of Biometrics. It aims at better predictions for HIV prevalence (in the original UNAIDS implementation, a naïve SIR procedure based on the prior as importance function was used, which sometimes resulted in terrible degeneracy), but its methodological input is about incremental mixture importance sampling (IMIS), and thus relates to the general topic of adaptive Monte Carlo methods I am interested in. (And to some extent to our recent AMIS paper.) Actually, a less elaborate (and less related) version of the IMIS algorithm first appeared in a 2006 paper by Steele, Raftery and Emond in JCGS in the setting of finite mixture likelihoods, and I somehow managed to miss it…

Raftery and Bao propose to replace SIR with an iterative importance sampling technique developed in 2003 by Steele et al. that has some similarities with population Monte Carlo (PMC). (A negligible misrepresentation of PMC in the current paper: contrary to what is stated there, our method does not use “the prior as importance function”.) In its current format, the IMIS algorithm starts from a first guess (e.g., the prior distribution) and builds a sequence of Gaussian (or Gaussian mixture) approximations whose parameters are estimated from the current population, while all simulations are merged together at each step, using a mixture stabilising weight

$\dfrac{\pi(\theta_i^s|x)}{\omega_0 p_0(\theta_i^s)+\sum_r \omega_r \hat q_r(\theta_i^s)}$

where the weights $\omega_r$ depend on the number of simulations at step $r$. This pattern also appears in our adaptive multiple importance sampling (AMIS) algorithm developed in this arXiv paper with Jean-Marie Cornuet, Jean-Michel Marin and Antonietta Mira, and in the original paper by Owen and Zhou (2000, JASA) that inspired us. Raftery and Bao extend the methodology to an IMIS with optimisation at the initial stage, while AMIS incorporates the natural population Monte Carlo stepwise optimisation developed in Douc et al. (2008, Annals of Statistics) that brings the proposal kernel closer to the target after each iteration. The application of the simulations to conduct model choice found in the current paper and in Steele et al. can also be paralleled with the one using population Monte Carlo we conducted for cosmological data in MNRAS.
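To make the mixture-stabilising weight above concrete, here is a minimal toy sketch of the idea, not the authors' implementation: a univariate Gaussian target, Gaussian proposals, equal sample sizes at each stage (so the $\omega_r$ are equal), with all past and present draws reweighted against the mixture of every proposal used so far, and the next proposal fitted by moment matching on the weighted population. All names and settings here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: standard normal density (a normalised pi(theta|x) stand-in).
def target_pdf(x):
    return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Initial proposal p0: a wide normal, standing in for "the prior".
proposals = [(0.0, 3.0)]        # list of (mu, sigma) used so far
samples, n_per_stage = [], 200

for stage in range(5):
    mu, sigma = proposals[-1]
    samples.append(rng.normal(mu, sigma, n_per_stage))
    all_x = np.concatenate(samples)

    # Deterministic-mixture (stabilised) weights: every past and present
    # draw is weighted against the mixture of all proposals used so far;
    # component weights omega_r are proportional to the number of draws
    # per stage (equal here, since each stage draws the same number).
    omegas = np.full(len(proposals), 1.0 / len(proposals))
    mix = sum(w * normal_pdf(all_x, m, s)
              for w, (m, s) in zip(omegas, proposals))
    weights = target_pdf(all_x) / mix

    # Fit the next Gaussian proposal from the weighted population
    # (a moment-matching adaptation step, as in the PMC update).
    w_norm = weights / weights.sum()
    new_mu = np.sum(w_norm * all_x)
    new_sigma = np.sqrt(np.sum(w_norm * (all_x - new_mu) ** 2))
    proposals.append((new_mu, new_sigma))

# Self-normalised estimate of E[theta] from the final weighted population.
est_mean = np.sum(w_norm * all_x)
```

Because every draw is weighted against the full mixture denominator rather than its own proposal alone, no single badly-fitted component can blow up the weights, which is the stabilising effect at the heart of both IMIS and AMIS.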

Interestingly, Raftery and Bao (and also Steele et al.) refer to the defensive mixture paper of Hesterberg (1995, Technometrics), which has been very influential in my research on importance sampling, and (less directly) to Owen and Zhou (2000, JASA), who proposed the deterministic mixture scheme that inspired AMIS. Besides the foundational papers of Oh and Berger (1991, JASA) and West (1993, J. Royal Statistical Society Series B), they also mention a paper by Raghavan and Cox (1998, J. Statistical Simulation & Computation) I was not aware of, which also introduces a mixture of importance proposals as a variance stabilising technique.