Special Issue of ACM TOMACS on Monte Carlo Methods in Statistics
As posted here a long, long while ago, following a suggestion from the editor (and North America Cycling Champion!) Pierre Lécuyer (Université de Montréal), Arnaud Doucet (University of Oxford) and myself acted as guest editors for a special issue of ACM TOMACS on Monte Carlo Methods in Statistics. (Coincidentally, I am attending a board meeting for TOMACS tonight in Berlin!) The issue is now ready for publication (next February, unless I am confused!) and comprises the following papers:
* Massive parallelization of serial inference algorithms for a complex generalized linear model MARC A. SUCHARD, IVAN ZORYCH, PATRICK RYAN, DAVID MADIGAN 

* Convergence of a Particle-based Approximation of the Block Online Expectation Maximization Algorithm SYLVAIN LE CORFF and GERSENDE FORT

* Efficient MCMC for Binomial Logit Models AGNES FUSSL, SYLVIA FRÜHWIRTH-SCHNATTER, and RUDOLF FRÜHWIRTH

* Adaptive Equi-Energy Sampler: Convergence and Illustration AMANDINE SCHRECK, GERSENDE FORT, and ERIC MOULINES

* Particle algorithms for optimization on binary spaces CHRISTIAN SCHÄFER 

* Posterior expectation of regularly paved random histograms RAAZESH SAINUDIIN, GLORIA TENG, JENNIFER HARLOW, and DOMINIC LEE 

* Small variance estimators for rare event probabilities MICHEL BRONIATOWSKI and VIRGILE CARON 

* Self-Avoiding Random Dynamics on Integer Complex Systems FIRAS HAMZE, ZIYU WANG, and NANDO DE FREITAS

* Bayesian learning of noisy Markov decision processes SUMEETPAL S. SINGH, NICOLAS CHOPIN, and NICK WHITELEY 
Here is the draft of the editorial that will appear at the beginning of this special issue. (All faults are mine, of course!)
While Monte Carlo methods are used in a wide range of domains, starting with particle physics in the 1940s, statistics has a particular connection with those methods in that it both relies on them to handle complex models and validates their convergence by providing assessment tools. Both the bootstrap and the Markov chain Monte Carlo (MCMC) revolutions of the 1980s and 1990s have changed for good the way Monte Carlo methods are perceived by statisticians, moving them from a peripheral tool to an essential component of statistical analysis. We are thus pleased to have been given the opportunity of editing this special issue of ACM TOMACS and handling a fine collection of submissions.
The accepted papers in this issue cover almost the whole range of the use of simulation methods in statistics, from optimisation (Le Corff and Fort, Schäfer) to posterior simulation (Fussl et al., Hamze et al., Sainudiin et al., Singh et al.), to rare event inference (Broniatowski and Caron), to parallelisation (Suchard et al.), with a collection of Monte Carlo techniques, from particle systems (Le Corff and Fort, Schäfer) to Markov chain Monte Carlo (Fussl et al., Hamze et al., Sainudiin et al., Singh et al., Suchard et al.), to importance sampling (Broniatowski and Caron).
The paper by Le Corff and Fort furthermore offers insights on the “workhorse” of computational statistics, namely the Expectation–Maximisation (EM) algorithm introduced by Dempster, Laird, and Rubin (1977). It characterises the convergence speed of some online (sequential Monte Carlo) versions of the EM algorithm, thus helping to quantify the folklore that “EM converges fast”. In the same area of missing-variable models, Fussl et al. reassess the classical (Bayesian) logit model and propose a new completion scheme that aggregates the missing variables, leading to a Metropolis–Hastings sampler much more efficient than existing schemes. The paper by Singh et al. can also be connected to this theme, as they study Bayesian inverse reinforcement learning problems involving latent variables that are estimated and used in prediction, thanks to an efficient MCMC sampler.
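As a reminder of the mechanism these papers build upon, here is a minimal batch (not online, not particle-based) EM sketch in Python for a two-component Gaussian mixture with unit variances; the synthetic data, starting values, and fixed variances are all illustrative assumptions of mine, not taken from any of the papers above:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data from a two-component Gaussian mixture (known truth:
# 30% from N(-2, 1) and 70% from N(3, 1)).
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

# EM for a two-component mixture with unit variances, estimating the
# weight of component 1 and the two means only.
w, mu = 0.5, np.array([-1.0, 1.0])
for _ in range(100):
    # E-step: posterior responsibility of component 1 for each point.
    d0 = np.exp(-0.5 * (x - mu[0]) ** 2) * (1 - w)
    d1 = np.exp(-0.5 * (x - mu[1]) ** 2) * w
    r = d1 / (d0 + d1)
    # M-step: update weight and means from the responsibilities.
    w = r.mean()
    mu = np.array([np.sum((1 - r) * x) / np.sum(1 - r),
                   np.sum(r * x) / np.sum(r)])

print(round(w, 2), np.round(mu, 1))
```

With well-separated components like these, the iterates settle near the true weight and means within a handful of passes, which is the “EM converges fast” folklore the Le Corff and Fort results make precise in the online setting.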
The paper by Schreck et al. on the (MCMC) equi-energy sampler expands on a state-of-the-art sampler by constructing and completely validating an adaptive version of the algorithm. As this area is currently very active, the paper represents a major step for the field. Another paper, by Sainudiin et al., is also concerned with theoretical aspects, namely the construction and validation of an MCMC algorithm on an unusual space of tree-based histograms. This is the paper in this issue closest to nonparametric statistical estimation, a significant domain otherwise missing here, since simulation in functional spaces offers highly topical idiosyncrasies. The paper by Broniatowski and Caron also remains on a rather theoretical plane, looking at large or moderate deviations in connection with importance sampling and cross-entropy techniques, and aiming at some degree of optimality in the long run.
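To illustrate why importance sampling matters for rare events (in the spirit of, though far simpler than, the Broniatowski and Caron setting), here is a small Python sketch of my own, estimating the Gaussian tail probability P(X &gt; 4) by shifting the sampling distribution into the tail; the threshold and the N(4, 1) proposal are arbitrary choices for the example:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
t = 4.0  # rare-event threshold: estimate P(X > t) for X ~ N(0, 1)

# Naive Monte Carlo: only a handful of the n samples land in the tail,
# so the estimate is extremely noisy.
naive = (rng.normal(size=n) > t).mean()

# Importance sampling: draw from N(t, 1) and reweight each draw by the
# likelihood ratio phi(y) / phi(y - t) = exp(t^2 / 2 - t * y).
y = rng.normal(loc=t, size=n)
weights = np.exp(t ** 2 / 2 - t * y)
is_est = np.mean((y > t) * weights)

# Exact tail probability for comparison, via the complementary
# error function: P(X > t) = erfc(t / sqrt(2)) / 2.
exact = 0.5 * math.erfc(t / math.sqrt(2))
print(naive, is_est, exact)
```

The shifted proposal puts about half its mass past the threshold, so the weighted estimator attains a relative error of well under a percent here, where the naive one barely registers any hits at all.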
As mentioned above, two papers specifically address statistical problems of optimisation on binary-valued systems: the particle algorithm of Schäfer, which builds specially designed parametric families on binary spaces and brings significant improvements over existing schemes, and the paper by Hamze et al. on self-avoiding random walks, coupled with Bayesian optimisation, which handles complex models remarkably well.
A final area in rapid development represented in this issue is parallelisation. As discussed in Suchard et al., more and more models require parallel implementation to be handled properly and, once more, specific statistical methodologies can and must be devised to answer such challenges. The paper by Suchard et al. handles Bayesian maximum a posteriori estimation in generalized linear models for massive datasets using GPUs (graphics processing units), despite the serial nature of their cyclic coordinate descent algorithm. It can be seen as an outlier in this special issue in the sense that it deals more with statistical computing than with computational statistics, but we think it fully has its place in the field, as it reaches the implementation levels necessary to face the “big data” challenges.
February 16, 2014 at 4:18 pm
Are there simple textbooks or articles describing Monte Carlo? Samples that I can code using ‘R’. How do I take a project plan and generate a simulation?
February 16, 2014 at 6:45 pm
Yes, there are many textbooks about Monte Carlo. I am sure a simple Google search can lead you to those.
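For instance, the classic first Monte Carlo exercise, estimating π by throwing uniform points at the unit square, takes only a few lines (sketched here in Python, but the same lines translate almost directly to R):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Throw n uniform points into the unit square and count the fraction
# falling inside the quarter circle of radius 1.
x, y = rng.random(n), rng.random(n)
inside = (x ** 2 + y ** 2 <= 1.0).mean()

pi_hat = 4 * inside  # the area ratio estimates pi / 4
print(pi_hat)
```

The error shrinks like 1 over the square root of n, which is the basic fact any of those textbooks will develop.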
December 10, 2012 at 9:23 pm
Hi,
the abstract of our paper seems to be missing. Here it is; see also http://arxiv.org/abs/1211.5901
Bayesian learning of noisy Markov decision processes
Sumeetpal S. Singh, Nicolas Chopin, Nick Whiteley
We consider the inverse reinforcement learning problem, that is, the problem of learning from, and then predicting or mimicking a controller based on state/action data. We propose a statistical model for such data, derived from the structure of a Markov decision process. Adopting a Bayesian approach to inference, we show how latent variables of the model can be estimated, and how predictions about actions can be made, in a unified framework. A new Markov chain Monte Carlo (MCMC) sampler is devised for simulation from the posterior distribution. This step includes a parameter expansion step, which is shown to be essential for good convergence properties of the MCMC sampler. As an illustration, the method is applied to learning a human controller.
December 11, 2012 at 10:48 am
ok, ok, cut&paste mistake stands corrected now!