## Archive for stratified resampling

## multinomial resampling by Metropolis

Posted in Books, Statistics with tags Metropolis-Hastings algorithm, multinomial distribution, particle degeneracy, Raftery and Lewis' number of iterations, stratified resampling, systematic resampling on December 28, 2017 by xi'an

**A** few years ago Lawrence Murray wrote a note on accelerating the resampling stage in particle filters by using a Metropolis step, and GPUs. The notion that Metropolis can be applied in this setting is at first puzzling, since exact multinomial sampling is available while Metropolis requires convergence guarantees, which Lawrence covers by a Raftery and Lewis assessment. That assessment has severe limitations in general but may well be adequate for this very case, although possibly too conservative in the number of recommended Metropolis iterations. The gain brought by Metropolis is that it does not require summing up all the particle weights, and the gain is real in that Metropolis beats all other approaches (time-wise) when the number of particles is not too large and the heterogeneity of the weights not too high. (I did not know of this note until Richard Everitt brought it to my attention.)

## resampling methods

Posted in Books, pictures, Running, Statistics, Travel, University life with tags Book, Clifton, hidden Markov models, Hilbert curve, iterated importance sampling, resampling, sequential Monte Carlo, stratified resampling, systematic resampling, Université Paris Dauphine, University of Bristol on December 6, 2017 by xi'an

**A** paper that was arXived [and that I missed!] last summer is a work on resampling by Mathieu Gerber, Nicolas Chopin (CREST), and Nick Whiteley. Resampling is used to sample from a weighted empirical distribution and to correct for very small weights in a weighted sample that would otherwise lead to degeneracy in sequential Monte Carlo (SMC). Since this step is based on random draws, it induces noise (while improving the estimation of the target); as reducing this noise is preferable, there is appeal in replacing plain multinomial sampling with more advanced schemes. The initial motivation is sequential Monte Carlo, where resampling is rife and seemingly compulsory, but this also applies to importance sampling when considering several schemes at once. I remember discussing alternative schemes with Nicolas, then completing his PhD, as well as Olivier Cappé, Randal Douc, and Eric Moulines at the time (circa 2004) we were working on the Hidden Markov book. And getting then a somewhat vague idea as to why systematic resampling failed to converge.

In this paper, Mathieu, Nicolas and Nick show that stratified sampling (where a uniform is generated on every interval of length 1/n) enjoys some form of consistency, while systematic sampling (where the “same” uniform is generated on every interval of length 1/n) does not necessarily enjoy this consistency. There actually exist cases where convergence does not occur. However, a residual version of systematic sampling (where systematic sampling is applied to the residuals, i.e. the fractional parts of the n-enlarged weights) is itself consistent.
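For concreteness, the three schemes can be sketched in a few lines of numpy (a toy version assuming normalised weights; the function names and the `rng` seeding convention are mine, not the paper's):

```python
import numpy as np

def stratified_resample(w, rng=None):
    """One uniform per stratum [i/n, (i+1)/n): the consistent scheme."""
    rng = np.random.default_rng(rng)
    n = len(w)
    u = (np.arange(n) + rng.random(n)) / n
    return np.searchsorted(np.cumsum(w), u)

def systematic_resample(w, rng=None):
    """A single uniform shared by all strata: not necessarily consistent."""
    rng = np.random.default_rng(rng)
    n = len(w)
    u = (np.arange(n) + rng.random()) / n
    return np.searchsorted(np.cumsum(w), u)

def residual_systematic_resample(w, rng=None):
    """Copy each particle floor(n w_i) times deterministically, then apply
    systematic resampling to the fractional parts of the n-enlarged weights."""
    rng = np.random.default_rng(rng)
    n = len(w)
    nw = n * np.asarray(w)
    floors = np.floor(nw).astype(int)
    idx = np.repeat(np.arange(n), floors)   # deterministic copies
    m = n - floors.sum()                    # draws left for the residuals
    if m > 0:
        r = nw - floors                     # fractional parts
        u = (np.arange(m) + rng.random()) / m
        idx = np.concatenate([idx, np.searchsorted(np.cumsum(r / r.sum()), u)])
    return idx

w = np.array([0.05, 0.05, 0.6, 0.3])
strat = stratified_resample(w, rng=0)
syst = systematic_resample(w, rng=0)
resid = residual_systematic_resample(w, rng=0)
```

All three return n ancestor indices; under the residual scheme the heavy particle (weight 0.6) is guaranteed at least ⌊n·0.6⌋ = 2 copies before any randomness enters.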

The paper also studies the surprising feature uncovered by Kitagawa (1996) that stratified sampling applied to an ordered sample brings an error of O(1/n²) between the cdfs rather than the usual O(1/n). It took me a while to even understand the distinction between the original and the ordered version (maybe because Nicolas used the empirical cdf during his SAD (Stochastic Algorithm Day!) talk, an ecdf that is the same for the ordered and initial samples). And both systematic and deterministic sampling become consistent in this case. The result was shown in dimension one by Kitagawa (1996) but extends to larger dimensions via the magical trick of the Hilbert curve.
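The ordering step is cheap to make explicit in dimension one. A minimal sketch (function name and example values mine): sort the particles by value, then apply stratified resampling to the correspondingly reordered weights.

```python
import numpy as np

def ordered_stratified_resample(x, w, rng=None):
    """Stratified resampling after sorting the particles by value
    (the one-dimensional version of Kitagawa's ordering trick)."""
    rng = np.random.default_rng(rng)
    n = len(w)
    order = np.argsort(x)                       # sort particles by value
    cdf = np.cumsum(np.asarray(w)[order])
    u = (np.arange(n) + rng.random(n)) / n      # one uniform per stratum
    return np.asarray(x)[order][np.searchsorted(cdf, u)]

x = np.array([0.3, -1.2, 2.1, 0.7])
w = np.array([0.4, 0.1, 0.2, 0.3])
y = ordered_stratified_resample(x, w, rng=0)
```

Since the uniforms are increasing and the particles are sorted, the resampled values come out in nondecreasing order, which is what drives the O(1/n²) error between the cdfs.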

## resampling and [GPU] parallelism

Posted in Statistics, University life with tags GPU, particle MCMC, Raftery and Lewis' number of iterations, random number generator, resampling, stratified resampling, systematic resampling on March 13, 2012 by xi'an

**I**n a recent note posted on arXiv, Lawrence Murray compares the implementation of resampling schemes for parallel systems like GPUs. Given a system of weighted particles, *(x_i, ω_i)*, there are several ways of drawing a sample according to those weights:

- regular *multinomial resampling*, where each point in the (new) sample is one of the *x_i*, with probability *ω_i*, meaning there is a uniform generated for each point;
- *stratified resampling*, where the weights are summed, divided into equal pieces, and a uniform is sampled on each piece, which means that points with large weights are sampled at least once and those with small weights at most once;
- *systematic resampling*, which is the same as the above except that *the same* uniform is used for each piece;
- *Metropolis resampling*, where a Markov chain converges to the distribution *(ω_1, …, ω_P)* on {1, …, P}.

**T**he first three resamplers are common in the particle system literature (incl. Nicolas Chopin’s PhD thesis), but difficult to adapt to GPUs (and I always feel uncomfortable with the fact that systematic resampling uses *a single uniform*!), while the last one is more unusual, but actually well-suited to a parallel implementation. While Lawrence Murray suggests using Raftery and Lewis’ (1992) assessment of the required number of Metropolis iterations to “achieve convergence”, I would instead suggest taking advantage of the toric nature of the space (as represented above) to run a random walk and wait for the equivalent of a complete cycle. In any case, this is a cool illustration of the new challenges posed by parallel implementations (like the development of proper random generators).
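The Metropolis resampler can be sketched in a few lines (a plain numpy version of my own, with the number of iterations B left as a free tuning parameter rather than calibrated by Raftery and Lewis): each particle runs an independent chain on the indices, only ever evaluating ratios of weights, so the weights never need to be summed or normalised.

```python
import numpy as np

def metropolis_resample(w, B, rng=None):
    """Draw len(w) ancestor indices without ever summing the weights:
    each particle runs its own Metropolis chain on {0, ..., n-1} whose
    stationary distribution is proportional to w."""
    rng = np.random.default_rng(rng)
    w = np.asarray(w)
    n = len(w)
    idx = np.arange(n)                  # each chain starts at its own index
    for _ in range(B):
        prop = rng.integers(n, size=n)  # uniform proposal over indices
        u = rng.random(n)
        accept = u * w[idx] < w[prop]   # accept w.p. min(1, w[prop]/w[idx])
        idx = np.where(accept, prop, idx)
    return idx

# with one dominant weight, most chains should settle on index 3
w = np.array([1e-6, 1e-6, 1e-6, 1.0])
idx = metropolis_resample(w, B=200, rng=0)
```

Every chain updates in lockstep with the same number of iterations, which is what makes the scheme attractive on a GPU: no prefix sum over the weights, just independent, identically structured threads.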