## Archive for multilevel Monte Carlo

## off to Singapore [IMS workshop]

Posted in pictures, Statistics, Travel, University life with tags ABC, IMS, Institute of Mathematical Statistics, MCMC, multilevel Monte Carlo, NUS, particle filters, Singapore, workshop on August 26, 2018 by xi'an

**T**onight I am off to the National University of Singapore, at the Institute for Mathematical Sciences [and not the Institute of Mathematical Statistics!], to take part in the (first) week of a workshop on Bayesian Computation for High-Dimensional Statistical Models, covering topics like Approximate Bayesian Computation, Markov chain Monte Carlo, Multilevel Monte Carlo, and Particle Filters. Having just barely recovered from the time difference with Vancouver, I now hope I can switch to the Singapore time zone without too much difficulty! As well as face the twenty-plus degree temperature gap with the cool weather this morning in the Parc…

## multilevel Monte Carlo for estimating constants

Posted in Books, Statistics, University life with tags multilevel Monte Carlo, normalising constant, particle filter, sequential Monte Carlo, telescoping formula, unbiased estimation on March 18, 2016 by xi'an

**P**ierre Del Moral, Ajay Jasra, Kody Law, and Yan Zhou have just arXived a paper entitled Sequential Monte Carlo samplers for normalizing constants. Which obviously attracted my interest! The context is a sequential Monte Carlo problem, with an associated sequence of targets and of attached normalising constants. While the quantity of interest only relates to the final distribution in the sequence, using Mike Giles' multilevel Monte Carlo approach allows for more accurate estimation and recycles all past particles, thanks to the telescoping formula. The sequential representation also allows for an unbiased estimator, as is well known in the sequential Monte Carlo literature. The paper derives accurate bounds on both the variances of two normalisation-constant estimators and the costs of producing such estimators (assuming there is an index typo in Corollary 3.1, where L-2 should be L-1). The improvement over traditional SMC is clear on the example contained in the paper. As I read the paper rather quickly and without much attention to the notations, I may have missed the point, but I did not see any conclusion on the choice of the particle population size at each iteration of the SMC. After asking Ajay about it, he pointed out that this size can be derived as

(with notations taken from the paper).
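The unbiasedness mentioned above rests on the telescoping product $Z_L/Z_0=\prod_{l=1}^{L} Z_l/Z_{l-1}$, with each ratio estimated by the average incremental importance weight of the particle system. As a minimal sketch of this mechanism (and not of the paper's own multilevel estimator), here is a plain SMC sampler on a tempered Gaussian path; the target sequence, the number of levels, and all tuning constants are my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tempering path: gamma_l(x) = exp(-lambda_l * x**2 / 2), moving from
# lambda_0 = 1 (a standard normal, up to constants) to lambda_L = 4.
# The true normalising-constant ratio is
#   Z_L / Z_0 = sqrt(2*pi/lambda_L) / sqrt(2*pi/lambda_0) = 1/2.
lambdas = np.linspace(1.0, 4.0, 11)
N = 10_000

x = rng.standard_normal(N)  # exact draws from the initial target
log_Z_ratio = 0.0
for lam_prev, lam in zip(lambdas[:-1], lambdas[1:]):
    # incremental importance weights gamma_l(x) / gamma_{l-1}(x)
    logw = -0.5 * (lam - lam_prev) * x**2
    # telescoping update: accumulate log of the average incremental weight
    log_Z_ratio += np.log(np.mean(np.exp(logw)))
    # multinomial resampling according to the normalised weights
    w = np.exp(logw - logw.max())
    idx = rng.choice(N, size=N, p=w / w.sum())
    x = x[idx]
    # one random-walk Metropolis move targeting gamma_l, to rejuvenate
    prop = x + 0.5 * rng.standard_normal(N)
    accept = np.log(rng.random(N)) < 0.5 * lam * (x**2 - prop**2)
    x = np.where(accept, prop, x)

print(np.exp(log_Z_ratio))  # true ratio: 0.5
```

With multinomial resampling at every step, the product of average incremental weights is an unbiased estimator of $Z_L/Z_0$; on this path the ratio is analytically $\sqrt{\lambda_0/\lambda_L}=1/2$, which the run should approximate closely for this particle size.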