
speeding up MCMC

Posted in Books, Mountains, pictures, Running, Statistics, Travel, University life on May 13, 2014 by xi'an

Just before I left for Iceland, Matias Quiroz, Mattias Villani and Robert Kohn arXived a paper entitled "speeding up MCMC by efficient data subsampling". Somewhat connected with the earlier papers by Korattikara et al. and Bardenet et al., both discussed on the 'Og, the idea is to replace the log-likelihood with an unbiased subsampled version and to correct for the bias that results from exponentiating this (Horvitz-Thompson or Hansen-Hurwitz) estimator. They ground their approach within the (currently cruising!) pseudo-marginal paradigm, even though their likelihood estimates are not completely unbiased. Since the optimal weights in the sampling step are proportional to the log-likelihood terms, they need to build a surrogate of the true likelihood, using either a Gaussian process or a spline approximation. This is all in all a very interesting contribution to the ongoing debate about increasing MCMC speed when dealing with large datasets and ungainly likelihood functions. The proposed solution however has a major drawback in that the entire dataset must be stored at all times to ensure unbiasedness: for instance, the paper considers a bivariate probit model with a sample of 500,000 observations, which must remain available throughout the run. Further, unless I am confused, each iteration requires computing the surrogate likelihood for all observations before the subsampling step can be run, another costly requirement.
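To fix ideas, here is a minimal sketch (my reading, not the authors' code) of the mechanism: the log-likelihood is estimated by a Hansen-Hurwitz subsample drawn with replacement, with selection probabilities proportional to a surrogate of the individual log-likelihood terms, and the estimate, corrected by half its estimated variance under a normal approximation, is plugged into a pseudo-marginal Metropolis-Hastings acceptance ratio. The functions loglik_term, surrogate_term, log_prior and propose are hypothetical placeholders, and the data is assumed to sit in a NumPy array.

```python
# A sketch of subsampled pseudo-marginal MH (an illustration, not the authors' code).
# loglik_term(theta, y) and surrogate_term(theta, y) are assumed to return the
# vector of per-observation (approximate) log-likelihood terms; log_prior and
# propose are likewise hypothetical user-supplied functions.
import numpy as np

def hansen_hurwitz_loglik(theta, data, m, loglik_term, surrogate_term, rng):
    """Estimate sum_i log f(y_i|theta) from m PPS draws with replacement."""
    n = len(data)
    # Selection probabilities proportional to a cheap surrogate of the
    # log-likelihood terms (spline or GP); note this touches every observation.
    size = np.abs(surrogate_term(theta, data)) + 1e-12
    p = size / size.sum()
    idx = rng.choice(n, size=m, replace=True, p=p)
    z = loglik_term(theta, data[idx]) / p[idx]   # Hansen-Hurwitz terms
    ell_hat = z.mean()                           # unbiased for the total
    var_hat = z.var(ddof=1) / m                  # estimated variance of ell_hat
    return ell_hat, var_hat

def pm_mh_step(theta, ell_corr, data, m, loglik_term, surrogate_term,
               log_prior, propose, rng):
    """One pseudo-marginal MH step (symmetric proposal assumed) using the
    bias-corrected estimate ell_hat - var_hat/2, which is exactly unbiased
    for the likelihood only if ell_hat were Gaussian with known variance."""
    theta_prop = propose(theta, rng)
    ell_hat, var_hat = hansen_hurwitz_loglik(
        theta_prop, data, m, loglik_term, surrogate_term, rng)
    ell_corr_prop = ell_hat - 0.5 * var_hat
    log_alpha = (ell_corr_prop + log_prior(theta_prop)
                 - ell_corr - log_prior(theta))
    if np.log(rng.uniform()) < log_alpha:
        return theta_prop, ell_corr_prop
    return theta, ell_corr       # recycle the current estimate if rejected
```

Note that in this sketch surrogate_term is evaluated over the entire dataset at every call, which is precisely the storage and computation concern raised above.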