Archive for Monte Carlo Statistical Methods

computational statistics and molecular simulation [18w5023]

Posted in pictures, Statistics, Travel, University life on November 19, 2018 by xi'an

On the last day of the X fertilisation workshop at the Casa Matemática Oaxaca, there were only three talks and only half of the participants. I lost the subtleties of the first talk, by Andrea Agazzi, on large deviations for chemical reactions, due to an emergency at work (Warwick). The second talk, by Igor Barahona, was somewhat disconnected from the rest of the conference, working on document textual analysis by way of algebraic data analysis (analyse des données) methods à la Benzécri. (Who was my office neighbour at Jussieu in the early 1990s.) In the final talk, Eric Vanden-Eijnden made a link between importance sampling and PDMPs, as an integral can be expressed via the trajectory of a path. A generalisation of path sampling, for almost any ODE. But also a competitor to nested sampling, waiting for the path to reach a Hamiltonian level, without some of the difficulties plaguing nested sampling, like resampling. And involving continuous time processes. (Is there a continuous time version of ABC as well?!) Returning unbiased estimators of the mean (the original integral) and of the variance. He illustrated the method on a mixture in dimension d=10 with k=50 components, using only 100 paths.

computational statistics and molecular simulation [18w5023]

Posted in pictures, Statistics, Travel, University life on November 16, 2018 by xi'an

This Thursday, our X fertilisation workshop at the interface between molecular dynamics and Monte Carlo statistical methods saw a wee bit of a reduction in the audience, as some participants had already left Oaxaca. Meaning they missed the talk of Christophe Andrieu on hypocoercivity, which could have been another hands-on lecture, given the highly pedagogical contents of the talk. I had seen parts of the talk at MCqMC 2018 in Rennes and at NUS, but still enjoyed the whole of it very much, and so did the audience, given the induced discussion. For instance, I had not previously seen the connection between the guided random walks of Gustafson and Diaconis, and continuous time processes like PDMPs. Which Christophe also covered in his talk. (Also making me realise my colleague Jean Dolbeault in Dauphine was strongly involved in the theoretical analysis of PDMPs!) Then Samuel Power gave another perspective on PDMPs. With another augmentation, connected with time, which he calls trajectorial reversibility. This has the impact of diminishing the event rate, but creates some kind of reversibility, which seems to go against the motivation for PDMPs. (Remember that all talks are available as videos on the BIRS webpage.) A remark in the talk worth reiterating is the importance of figuring out which kinds of approximations are acceptable in these methods. Connecting somewhat with the next talk, by Luc Rey-Bellet, on a theory of robust approximations. In the sense of Poincaré, Gibbs, Bernstein, &tc., concentration inequalities and large deviations. With applications to rare events.

The fourth and final “hands-on” session was run by Miranda Holmes-Cerfon on simulating under constraints. Motivated by research on colloids. For which the overdamped Langevin diffusion applies, surprisingly, as an accurate model. Which makes a major change from the other talks [most of the workshop!] relying on this diffusion. (With an interesting interlude on molecular velcro made of DNA strands.) Connected with this example, exotic energy landscapes are better described by hard constraints. (Potentially interesting extension to the case when there are too many constraints to explore all of them?) Now, the definition of the measure projected on the manifold defined by the constraints is obviously an important step in simulating the distribution, whose density is induced by the gradient of the constraints ∇q(x). The proposed algorithm is in the same spirit as the one presented by Tony the previous day, namely moving along the tangent space, then along the normal space to get back to the manifold. A solution that causes issues when the gradient is (near) zero. A great hands-on session which induced massive feedback from the audience.
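To fix ideas on this tangent-then-normal move, here is a minimal sketch of one such constrained proposal on the unit circle; the constraint q, the step size, and the Newton tolerances are illustrative choices of mine, and the sketch omits the Metropolis correction and the reverse-projection check that make the method exact.

```python
import numpy as np

rng = np.random.default_rng(5)

# manifold {x : q(x) = 0}: here the unit circle in R^2 (illustrative choice)
q      = lambda x: np.array([x @ x - 1.0])
grad_q = lambda x: 2 * x.reshape(1, -1)     # Jacobian of q, one row per constraint

def project(x, G, tol=1e-10, max_iter=50):
    """Newton solve for a such that q(x + G.T a) = 0, moving along the normal space."""
    a = np.zeros(G.shape[0])
    for _ in range(max_iter):
        r = q(x + G.T @ a)
        if np.max(np.abs(r)) < tol:
            return x + G.T @ a
        J = grad_q(x + G.T @ a) @ G.T       # Newton breaks down when grad q is near zero
        a -= np.linalg.solve(J, r)
    return None

def tangent_step(x, sigma=0.3):
    """Propose in the tangent space at x, then project back onto the manifold."""
    G = grad_q(x)
    xi = sigma * rng.normal(size=x.shape)
    v = xi - G.T @ np.linalg.solve(G @ G.T, G @ xi)   # strip the normal component
    y = project(x + v, G)
    return x if y is None else y            # treat a failed projection as a rejection

x = np.array([1.0, 0.0])
samples = []
for _ in range(1000):
    x = tangent_step(x)
    samples.append(x)
```

The failure mode mentioned above shows up in the Newton solve: when ∇q(x) is (near) zero, the linear system is (near) singular and the projection back to the manifold cannot be computed.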

In the afternoon session, Gersende Fort gave a talk on a generalisation of the Wang-Landau algorithm, which modifies the true weights of the elements of a partition of the sampling space, to increase visits to low [probability] elements and jumps between modes. The idea is to rely on tempered versions of the original weights, learned by stochastic approximation, with an extra layer of adaptivity. Leading to an improvement, with parameters that depend on the phase of the stochastic approximation. The second talk was by David Sanders, on a recent paper in Chaos about importance sampling for rare events of (deterministic) billiard dynamics. With diffusive limits whose tails are hard to evaluate, except by importance sampling. And the last talk of the day was by Anton Martinsson, on simulated tempering for a molecular alignment problem. With weights of the different temperatures proportional to the inverses of the corresponding normalising constants, which can themselves be learned by a form of bridge sampling, if I got it right.
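For readers unfamiliar with the baseline that Gersende's talk generalises, here is a minimal sketch of a standard Wang-Landau sampler on a bimodal toy target; the partition, the step-size schedule, and the target are illustrative choices of mine, and the tempering of the weights discussed in the talk is not included.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
# bimodal toy target: equal mixture of N(-3,1) and N(3,1)
log_pi = lambda v: np.logaddexp(norm.logpdf(v, -3), norm.logpdf(v, 3))

edges = np.linspace(-6, 6, 13)                  # 12-element partition of [-6, 6]
idx = lambda v: int(np.clip(np.searchsorted(edges, v) - 1, 0, 11))

theta = np.zeros(12)        # log-weights of the partition, learned on the fly
x, chain = -3.0, []
for t in range(1, 50_001):
    # Metropolis step targeting the biased density pi(x) * exp(-theta[I(x)])
    y = x + rng.normal()
    log_acc = (log_pi(y) - theta[idx(y)]) - (log_pi(x) - theta[idx(x)])
    if np.log(rng.random()) < log_acc:
        x = y
    # stochastic approximation: penalise the element just visited
    theta[idx(x)] += 10.0 / (100.0 + t)
    theta -= theta.mean()                       # keep the weights identifiable
    chain.append(x)
```

The penalisation of the current element is what pushes the chain out of the mode it sits in, which is exactly the behaviour the talk's tempered and adaptive version aims at improving.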

On a very minor note, I heard at breakfast a pretty good story from a fellow participant who had to give a talk at a conference that was moved to a very early time in the morning, due to an official appearing at a later time, and who as a result “enjoyed” an audience so small that a cleaning lady appeared and started cleaning the board, as she could not conceive that the talks had already started! Reminding me of this picture at IHP.

computational statistics and molecular simulation [18w5023]

Posted in Books, Kids, pictures, Statistics, Travel, University life on November 15, 2018 by xi'an

I truly missed the gist of the first talk of the Wednesday morning of our X fertilisation workshop, by Jianfeng Lu, partly due to notations, although the topic correlated very much with my interests, like path sampling, with an augmented version of HMC using an auxiliary indicator. And mentions made of BAOAB. Next, Marcelo Pereyra spoke about Bayesian image analysis, with the difficulty of setting a prior on an image. In the case of astronomical images, there are motivations for an L¹ penalisation sparse prior. Sampling is an issue. Moreau-Yosida proximal optimisation is used instead, in connection with our MCMC survey published in Statistics & Computing two years ago. Transferability was a new concept for me, as introduced by Kerrie Mengersen (QUT), to extrapolate an estimated model to another system without using the posterior as a prior. With a great interlude about the crown-of-thorns starfish killer robot! Rather, a prior determination based on historical data, in connection with recent (2018) Technometrics and Bayesian Analysis papers, towards rejecting non-plausible priors. Without reading the papers (!), and before discussing the matter with Kerrie, here or in Marseille, I wonder at which level of precision this can be conducted. The use of summary statistics for prior calibration gave the approach an ABC flavour.
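As a pointer on the Moreau-Yosida proximal step mentioned above, here is a minimal sketch of a proximal unadjusted Langevin move (à la MYULA) for a toy L¹-penalised Gaussian model; the data, penalty weight α, smoothing parameter λ, and step size γ are illustrative choices of mine, not those of the talk.

```python
import numpy as np

rng = np.random.default_rng(2)
# toy sparse regression: y = A x* + noise, target pi(x) ∝ exp(-f(x) - g(x))
d = 50
A = rng.normal(size=(20, d))
x_true = (rng.random(d) < 0.1).astype(float)
y = A @ x_true + 0.1 * rng.normal(size=20)

alpha, lam, gamma = 5.0, 0.01, 1e-4     # L1 weight, MY smoothing, step size

grad_f = lambda x: A.T @ (A @ x - y)    # gradient of the smooth (Gaussian) part
# proximal operator of g(x) = alpha * ||x||_1: soft-thresholding
prox_g = lambda x: np.sign(x) * np.maximum(np.abs(x) - lam * alpha, 0.0)

x = np.zeros(d)
samples = []
for k in range(10_000):
    # replace grad g by the gradient of its Moreau-Yosida envelope, (x - prox(x))/lam
    x = (x - gamma * grad_f(x)
         - gamma * (x - prox_g(x)) / lam
         + np.sqrt(2 * gamma) * rng.normal(size=d))
    samples.append(x.copy())
```

The point of the envelope is that its gradient (x - prox(x))/λ is Lipschitz, so the unadjusted Langevin step applies despite the non-smooth L¹ prior.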

The hands-on session was Jonathan Mattingly’s discussion of gerrymandering, reflecting on his experience in court! Hard to beat for an engaging talk reaching across communities. As it happens, I discussed the original paper last year. Of course, it was much more exciting to listen to Jonathan explaining his vision of the problem! Too bad I “had” to leave before the end for a [most enjoyable] rock climbing afternoon… To be continued at the dinner table! (Plus we got the complete explanation of the term gerrymandering, including this salamander rendering of the first district ever identified as gerrymandered!)

computational statistics and molecular simulation [18w5023]

Posted in Statistics on November 13, 2018 by xi'an

This X fertilisation workshop, which Gabriel Stoltz, Luke Bornn, and I organised towards reinforcing the interface between molecular dynamics and Monte Carlo statistical methods, has now started! At the Casa Matemática Oaxaca, the Mexican campus of BIRS, which is currently housed in a very nice hotel on the heights of Oaxaca. And after a fairly long flight for a large proportion of the participants. On the first day, Arthur Voter gave a fantastic “hands-on” review of molecular dynamics for materials science, which was aimed at the statistician side of the audience and most helpful for my own understanding of the concepts and techniques at the source of HMC and PDMP algorithms. (Although I could not avoid a few mini-dozes induced by jetlag.) Including the BAOAB version of HMC, which sounded to me like an improvement worth investigating. The part on metastability, completed by a talk by Florian Maire, remained a wee bit mysterious [to me].
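For the record, here is a minimal sketch of the BAOAB integrator for underdamped Langevin dynamics, splitting each update into half momentum kicks (B), half position drifts (A), and an exact Ornstein-Uhlenbeck refresh of the momentum (O); the double-well potential and the parameter values are illustrative choices of mine.

```python
import numpy as np

rng = np.random.default_rng(3)

def baoab(q, p, grad_U, h, gamma, n_steps, kT=1.0, m=1.0):
    """BAOAB integrator for dq = p/m dt, dp = -grad U(q) dt - gamma p dt + noise."""
    c1 = np.exp(-gamma * h)                 # exact OU decay over one step
    c2 = np.sqrt(kT * m * (1.0 - c1**2))    # matching fluctuation scale
    for _ in range(n_steps):
        p = p - 0.5 * h * grad_U(q)         # B: half momentum kick
        q = q + 0.5 * h * p / m             # A: half position drift
        p = c1 * p + c2 * rng.normal()      # O: exact momentum refresh
        q = q + 0.5 * h * p / m             # A: half position drift
        p = p - 0.5 * h * grad_U(q)         # B: half momentum kick
    return q, p

# illustrative double-well potential U(q) = (q^2 - 1)^2
q, p = baoab(q=0.0, p=0.0, grad_U=lambda q: 4.0 * q * (q**2 - 1.0),
             h=0.05, gamma=1.0, n_steps=10_000)
```

The appeal of this particular splitting is that the O part is solved exactly rather than discretised, which is one reason BAOAB enjoys its reputation for small sampling bias at a given step size.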

The shorter talks of the day all brought new perspectives and information to me (although they were definitely more oriented towards their “own” side of the audience than the hands-on lecture). For instance, Jesús María Sanz-Serna gave a wide-ranging overview of numerical integrators, and Tony Lelièvre presented recent work on simulating measures supported by manifolds via an HMC technique constantly projecting onto the manifold, with proper validation. (I had struggled with the paper this summer and this talk helped a lot.) There was a talk by Josh Fass on simulating implicit solvent models that mixed high-level programming and reversible jump MCMC, with an earlier talk by Yong Chen on variable dimension hidden Markov models that could also have alluded to reversible jump. Angela Bito talked about using ASIS (ancillarity-sufficiency interweaving strategy) for improving the dynamics of an MCMC sampler associated with a spike & slab prior, the recentering-decentering cycle always being a sort of mystery to me [as to why it works better despite introducing multimodality in this case], and Gael Martin presented some new results on her ongoing work with David Frazier about approximate Bayes with misspecified models, with the summary statistic being a score function that relates the work to the likelihood-free approach of Bissiri et al.
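To illustrate the interweaving idea behind ASIS on the simplest possible case, here is a toy sketch for a normal-normal model (not the spike & slab setting of the talk), alternating the centered and non-centered updates of the hyperparameter; the model and the values are illustrative choices of mine.

```python
import numpy as np

rng = np.random.default_rng(4)
y = 1.3      # single observation: y | theta ~ N(theta, 1)
V = 25.0     # theta | mu ~ N(mu, V), flat prior on mu

mu, draws = 0.0, []
for t in range(5_000):
    # centered step: theta | mu, y
    var_c = 1.0 / (1.0 + 1.0 / V)
    theta = rng.normal(var_c * (y + mu / V), np.sqrt(var_c))
    # centered update of mu: mu | theta ~ N(theta, V)
    mu = rng.normal(theta, np.sqrt(V))
    # interweave: switch to the ancillary (non-centered) theta and redo mu
    theta_nc = theta - mu                  # theta_nc ~ N(0, V), independent of mu
    mu = rng.normal(y - theta_nc, 1.0)     # mu | theta_nc, y, under the flat prior
    theta = theta_nc + mu                  # map back to the centered theta
    draws.append((theta, mu))
```

The interweaving works because the two parameterisations have complementary dependence structures, so alternating between them breaks the strong posterior correlation that slows either single-parameterisation Gibbs sampler.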

more of the same!

Posted in Books, pictures, Statistics, University life on December 10, 2015 by xi'an

[Photo: aboriginal artist, NGV, Melbourne, July 30, 2012]

Daniel Seita, Haoyu Chen, and John Canny arXived last week a paper entitled “Fast parallel SAME Gibbs sampling on general discrete Bayesian networks”. The distributions of the observables are defined by full conditional probability tables on the nodes of a graphical model. The distributions on the latent or missing nodes of the network are multinomial, with Dirichlet priors. To derive the MAP in such models, although this goal is not explicitly stated in the paper till the second page, the authors refer to the recent paper by Zhao et al. (2015), discussed on the ‘Og just as recently, which applies our SAME methodology. Since the paper is mostly computational (and submitted to ICLR 2016, which takes place juuust before AISTATS 2016), I do not have much to comment about it. Except to notice that the authors mention our paper as “Technical report, Statistics and Computing, 2002”. I am not sure the editor of Statistics and Computing will appreciate! The proper reference is in Statistics and Computing, 12:77-84, 2002.

“We argue that SAME is beneficial for Gibbs sampling because it helps to reduce excess variance.”

Still, I am a wee bit surprised, both at the above statement and at the comparison with a JAGS implementation. Because SAME augments the number of latent vectors as the number of iterations increases, it should be slower, by a mere curse of dimension, than a regular Gibbs sampler with a single latent vector. And because I do not get the connection with JAGS either: SAME could be programmed in JAGS, couldn’t it? If the authors mean a regular Gibbs sampler with no latent vector augmentation, the comparison makes little sense, as one algorithm aims at the MAP (with a modest five replicas), while the other encompasses the complete posterior distribution. But this sounds unlikely when considering that the larger the number m of replicas, the better their alternative to JAGS performs. It would thus be interesting to understand what the authors mean by JAGS in this setup!
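For readers unfamiliar with SAME (state augmentation for marginal estimation), here is a minimal sketch on a toy two-component Gaussian mixture, where the number m of replicated latent allocation vectors grows along iterations so that the θ-marginal concentrates on the MAP; the data, priors, and annealing schedule are illustrative choices of mine, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy data: two-component Gaussian mixture, unit variances, known equal weights
x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(3, 1, 150)])
n, K, tau2 = len(x), 2, 10.0            # N(0, tau2) prior on each component mean

def same_gibbs(m_final=5, iters=200):
    mu = rng.normal(0, 1, K)
    for t in range(iters):
        m = min(m_final, 1 + t * m_final // iters)   # anneal the replica count up
        suff_n, suff_s = np.zeros(K), np.zeros(K)
        # m independent copies of the latent allocations given the current means
        for _ in range(m):
            logp = -0.5 * (x[:, None] - mu[None, :]) ** 2
            p = np.exp(logp - logp.max(1, keepdims=True))
            p /= p.sum(1, keepdims=True)
            z = (rng.random(n)[:, None] > p.cumsum(1)).sum(1)
            for k in range(K):
                suff_n[k] += np.sum(z == k)
                suff_s[k] += np.sum(x[z == k])
        # means given all m replicas: the prior is counted m times as well
        post_prec = m / tau2 + suff_n
        mu = rng.normal(suff_s / post_prec, 1.0 / np.sqrt(post_prec))
    return mu                                        # approximate MAP of the means

print(same_gibbs())    # roughly (-2, 3), up to label switching
```

Targeting the m-fold replicated posterior makes the θ-marginal proportional to the posterior raised to the power m, hence the concentration on the MAP, and also the growing per-iteration cost mentioned above.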

another vanilla Rao-Blackwellisation

Posted in Statistics, University life on September 16, 2013 by xi'an

In the latest issue of Statistics and Computing (2013, Volume 23, pages 577-587), Iliopoulos and Malefaki published a paper that relates to our vanilla Rao-Blackwellisation paper with Randal Douc. The idea is to derive another approximation to the ideal importance sampling weight using the “accepted” Markov chain. (With Randal, we had a Bernoulli factory representation.) The density g(x) of the accepted chain being unknown, it is represented as the expectation under π of the function

\min\left\{q(z|x)/\pi(z),q(x|z)/\pi(x)\right\}

and hence estimated by a self-normalised average based on the whole Markov chain. This means the resulting importance estimate uses the output of the algorithm twice, that it is biased, and that its computing cost is of order O(n²), thus the same order as our original Rao-Blackwellised estimator (Robert & Casella, 1996)… This also means convergence and a CLT are very hard to establish: the main convergence theorem of the paper holds only for finite state spaces, with a surprisingly smaller asymptotic variance for this self-normalised average than for the ideal importance sampling estimator in the independent Metropolis-Hastings case. (Both are biased by being self-normalised and the paper does not consider the magnitude of those biases.)
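As I read the construction (and this is my own hedged sketch, not the authors’ code), the weights can be estimated as follows for an independent Metropolis-Hastings sampler on a toy Gaussian target, at the advertised O(n²) cost:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
log_pi = lambda v: norm.logpdf(v)             # toy target: N(0,1)
log_q  = lambda v: norm.logpdf(v, scale=2.0)  # independent proposal: N(0,4)

# independent Metropolis-Hastings, keeping the whole chain
N, x = 5_000, 0.0
chain = np.empty(N)
for t in range(N):
    z = rng.normal(0.0, 2.0)
    if np.log(rng.random()) < (log_pi(z) - log_q(z)) - (log_pi(x) - log_q(x)):
        x = z
    chain[t] = x

accepted = np.unique(chain)     # the distinct states, i.e. the accepted chain
# O(n^2) step: for each accepted state, a self-normalised average over the whole
# chain of min{q(z|x)/pi(z), q(x|z)/pi(x)}, with q(z|x) = q(z) in this case
r = np.exp(log_q(chain) - log_pi(chain))
lam = np.array([np.mean(np.minimum(r, np.exp(log_q(a) - log_pi(a))))
                for a in accepted])
w = 1.0 / lam                   # importance weights pi(x)/g(x), up to a constant
print(np.sum(w * accepted**2) / np.sum(w))   # should be close to E[X^2] = 1
```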

Interestingly, the authors also ran a comparison with our parallelised Rao-Blackwellised version (with Pierre Jacob and Murray Smith), but conclude (p.58) that a larger CPU time (should be GPU!!) is required by the parallelisation, which does not really make sense: when compared with the plain Metropolis-Hastings implementation, run on a single processor, the parallel version only requires an extra random permutation per thread or per processor. I thus suspect a faulty implementation that makes this CPU time linear in the size of the blocks, like maybe only saving one output per block… Also interestingly, the paper re-analyses the Pima Indian probit model Jean-Michel Marin and I (and many others) used as a benchmark in several of our papers. As in the most standard examples, the outcome shows a mild reduction in variance when using this estimated importance sampling version. Maybe a comparison with the ideal importance sampler (i.e., the one that does not divide by the sum of the weights, since it uses normalised versions of the target and importance densities) would have helped in the comparison.

Special Issue of ACM TOMACS on Monte Carlo Methods in Statistics

Posted in Books, R, Statistics, University life on December 10, 2012 by xi'an

As posted here a long, long while ago, following a suggestion from the editor (and North America Cycling Champion!) Pierre Lécuyer (Université de Montréal), Arnaud Doucet (University of Oxford) and myself acted as guest editors for a special issue of ACM TOMACS on Monte Carlo Methods in Statistics. (Coincidentally, I am attending a board meeting for TOMACS tonight in Berlin!) The issue is now ready for publication (next February unless I am confused!) and made of the following papers:

* Massive parallelization of serial inference algorithms for a complex generalized linear model
MARC A. SUCHARD, IVAN ZORYCH, PATRICK RYAN, DAVID MADIGAN
* Convergence of a Particle-based Approximation of the Block Online Expectation Maximization Algorithm
SYLVAIN LE CORFF and GERSENDE FORT
* Efficient MCMC for Binomial Logit Models
AGNES FUSSL, SYLVIA FRÜHWIRTH-SCHNATTER, RUDOLF FRÜHWIRTH
* Adaptive Equi-Energy Sampler: Convergence and Illustration
AMANDINE SCHRECK and GERSENDE FORT and ERIC MOULINES
* Particle algorithms for optimization on binary spaces
CHRISTIAN SCHÄFER
* Posterior expectation of regularly paved random histograms
RAAZESH SAINUDIIN, GLORIA TENG, JENNIFER HARLOW, and DOMINIC LEE
* Small variance estimators for rare event probabilities
MICHEL BRONIATOWSKI and VIRGILE CARON
* Self-Avoiding Random Dynamics on Integer Complex Systems
FIRAS HAMZE, ZIYU WANG, and NANDO DE FREITAS
* Bayesian learning of noisy Markov decision processes
SUMEETPAL S. SINGH, NICOLAS CHOPIN, and NICK WHITELEY

Here is the draft of the editorial that will appear at the beginning of this special issue. (All faults are mine, of course!)