resampling and [GPU] parallelism
In a recent note posted on arXiv, Lawrence Murray compares implementations of resampling schemes on parallel architectures such as GPUs. Given a system of weighted particles (x_i, ω_i), there are several ways of drawing a new sample according to those weights:
- regular multinomial resampling, where each point in the (new) sample is one of the x_i, drawn with probability ω_i, meaning one uniform is generated for each point (the first three schemes are sketched in code after this list);
- stratified resampling, where the cumulative weights are divided into equal pieces and one uniform is sampled within each piece, which means that points with large weights are sampled at least once and those with small weights at most once;
- systematic resampling, which is the same as the above except that the same uniform is used for every piece;
- Metropolis resampling, where a Markov chain on the index set {1,…,P} converges to the distribution (ω_1,…,ω_P), its final state giving the resampled index.
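To make the differences between the first three schemes concrete, here is a minimal sketch in Python/NumPy; the function names and the use of searchsorted over the cumulative weights are my own illustrative choices, not code from the note.

```python
import numpy as np

def multinomial_resample(weights, rng):
    """One independent uniform per draw: ancestor indices are i.i.d. from the weights."""
    w = weights / weights.sum()
    return rng.choice(len(w), size=len(w), p=w)

def stratified_resample(weights, rng):
    """Cumulative weights cut into N equal pieces, one uniform drawn inside each piece."""
    n = len(weights)
    positions = (rng.random(n) + np.arange(n)) / n   # one uniform per piece
    c = np.cumsum(weights / weights.sum())
    c[-1] = 1.0                                      # guard against round-off
    return np.searchsorted(c, positions)

def systematic_resample(weights, rng):
    """Same pieces as above, but a single uniform shared by every piece."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n    # one uniform, shifted by i/n
    c = np.cumsum(weights / weights.sum())
    c[-1] = 1.0
    return np.searchsorted(c, positions)

# tiny usage example: resample 8 weighted particles
rng = np.random.default_rng(0)
w = rng.random(8)
print(systematic_resample(w, rng))   # indices of the surviving particles
```

The stratified and systematic versions differ by a single line: whether the offset inside each piece is drawn anew or shared across all pieces.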
The first three resamplers are common in the particle system literature (including Nicolas Chopin’s PhD thesis), but difficult to adapt to GPUs (and I always feel uncomfortable with the fact that systematic resampling uses a single uniform!), while the last one is more unusual but actually well-suited to a parallel implementation. While Lawrence Murray suggests using Raftery and Lewis’ (1992) assessment of the required number of Metropolis iterations to “achieve convergence”, I would instead suggest taking advantage of the toric nature of the space (as represented above) to run a random walk and wait for the equivalent of a complete cycle. In any case, this is a cool illustration of the new challenges posed by parallel implementations (like the development of proper random number generators).
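As for the Metropolis resampler, here is a rough sequential sketch, assuming the formulation where each particle runs its own chain on the index set and keeps its final state as ancestor; the function name and the fixed iteration count n_iter are illustrative choices, and picking n_iter is exactly the convergence question raised above.

```python
import numpy as np

def metropolis_resample(weights, n_iter, rng):
    """Each of the P particles runs its own Metropolis chain on {0, ..., P-1},
    targeting probabilities proportional to the weights; the final state of
    chain i is the ancestor index of particle i.  Only weight ratios are
    needed, so the P chains are embarrassingly parallel (one per GPU thread)
    and no cumulative sum of the weights is ever required."""
    P = len(weights)
    ancestors = np.arange(P)                        # chain i starts at index i
    for _ in range(n_iter):
        proposals = rng.integers(0, P, size=P)      # uniform proposals on the indices
        accept = rng.random(P) < weights[proposals] / weights[ancestors]
        ancestors = np.where(accept, proposals, ancestors)
    return ancestors

# usage example with 8 weighted particles
rng = np.random.default_rng(1)
w = rng.random(8)
print(metropolis_resample(w, n_iter=32, rng=rng))
```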