Archive for multiple-try Metropolis algorithm

more multiple proposal MCMC

Posted in Books, Statistics on July 26, 2018 by xi'an

Luo and Tjelmeland just arXived a paper on a new version of multiple-try Metropolis-Hastings, the novelty being in defining the additional proposed copies via a dependence graph like (a) above, with one version from the target and all others from conditional proposal kernels respecting the dependence graph, as in (b). As I did, you may then wonder where both the graph and the conditionals come from. Which reminds me of the pseudo-posteriors of Carlin and Chib (1995), even though this is not terribly connected. (But not disconnected either, since the authors mention Green, 1995.) Given the graph, and following a Gibbs scheme, one of the 17 nodes is chosen as the one generated from the target, using the proper conditional on that index [which is purely artificial from the point of view of the original simulation problem!]. As noted above, the graph is an issue, but since it is artificial, it can be devised either to allow for quasi-independence between the proposed values or, on the opposite, to induce long-range dependence, which corresponds to conducting multiple MCMC steps before reaching the end nodes, a feature that is very appealing in my opinion. And reminds me of prefetching. (As I am listening to Nicolas Chopin’s lecture in Warwick at the moment, there also seems to be a connection with pMCMC.) Still, I remain unclear as to the devising of the graph of dependent proposals, as its depth should be somehow connected with the mixing properties of the original proposal. Gains in convergence may thus come at a high cost at the construction stage.
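For intuition only, here is a tiny Python sketch of the generation stage along a path-shaped dependence graph, where each proposed copy is drawn conditionally on its parent so that the end node sits several kernel steps away from the current value. This is my own toy illustration with made-up names, not Luo and Tjelmeland's actual construction: their Gibbs selection of the node index and the acceptance probabilities are not reproduced here.

```python
import numpy as np

def path_graph_copies(x, depth=4, sigma=1.0, rng=None):
    """Toy generation of proposed copies along a path-shaped dependence graph:
    each copy is drawn conditionally on its parent, so deeper nodes lie
    several random-walk steps away from the current value x.
    (Generation stage only; no selection or acceptance step here.)"""
    rng = rng or np.random.default_rng()
    copies = [x]
    for _ in range(depth):
        # child node conditioned on its parent in the graph
        copies.append(copies[-1] + sigma * rng.standard_normal())
    return np.array(copies[1:])
```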

MCMC with multiple tries

Posted in Books, pictures, Statistics, University life on April 5, 2018 by xi'an

Earlier this year, Luca Martino wrote and arXived a review of multiple-try MCMC. As its name suggests, the starting point of this algorithm is to propose N potential moves simultaneously instead of one, possibly according to N different proposal (conditional) densities, and to select one of them with probability proportional to its normalised importance sampling weight. The move is accepted by a Metropolis-Hastings step based on the ratio of the normalisation constants [at the current and at the one-before-current stages]. Besides the cost of computing the summation and generating the different variates, this method also faces the drawback of requiring N-1 supplementary simulations that are only used for achieving detailed balance and computing a backward summation of importance weights. (A first section of the review is dedicated to independent Metropolis-Hastings proposals, q(θ), which make life simpler, but are less realistic in my opinion since some prior knowledge or experimentation is necessary to build a relevant distribution q(θ).) An alternative covered in the survey is ensemble Monte Carlo (Neal, 2011), which produces a whole sample at each iteration, whose target is the product of the initial targets. This reminded me of our pinball sampler, which aimed at producing a spread-out sample while keeping the marginal correct, although the motivation sounds closer to a particle sampler, especially with the associated notion of an empirical approximation of the target. The next part of the review is about delayed rejection, an approach started in Antonietta Mira‘s 1999 PhD thesis, which is a natural alternative for speeding up MCMC by considering several possibilities, if sequentially. The difficulty with this approach is that the acceptance probability gets increasingly complex as the number of delays grows, which may annihilate its appeal relative to simultaneous multiple tries.
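For concreteness, here is a minimal Python sketch of one multiple-try Metropolis step in the symmetric random-walk case, where the weights reduce to the (unnormalised) target values and the acceptance ratio compares the forward and backward weight sums. The names log_pi, sigma and N are placeholders of my own, not the review's notation.

```python
import numpy as np

def mtm_step(x, log_pi, sigma=1.0, N=5, rng=None):
    """One multiple-try Metropolis step (symmetric proposal, weights = target)."""
    rng = rng or np.random.default_rng()
    # forward stage: N simultaneous proposals around the current value
    ys = x + sigma * rng.standard_normal(N)
    log_w = np.array([log_pi(y) for y in ys])
    # select one proposal with probability proportional to its weight
    p = np.exp(log_w - log_w.max())
    y = ys[rng.choice(N, p=p / p.sum())]
    # backward stage: N-1 supplementary simulations from the selected value,
    # completed by the current value itself
    xs = np.append(y + sigma * rng.standard_normal(N - 1), x)
    log_w_back = np.array([log_pi(z) for z in xs])
    # accept with probability min(1, sum of forward weights / sum of backward weights)
    log_alpha = np.logaddexp.reduce(log_w) - np.logaddexp.reduce(log_w_back)
    return y if np.log(rng.uniform()) < log_alpha else x
```

Iterating this step with, say, log_pi = lambda t: -t**2/2 should leave a standard normal target invariant, at the cost of 2N-1 proposal draws per iteration.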

multiple try Metropolis

Posted in Books, Statistics, University life on February 18, 2016 by xi'an

Luca Martino and Francisco Louzada recently wrote a paper in Computational Statistics about some difficulties with the multiple-try Metropolis algorithm. This version of Metropolis-Hastings, due to Liu et al. (2000), makes several proposals in parallel and picks one among them by multinomial sampling, with weights proportional to the corresponding importance weights. This is followed by a Metropolis acceptance step that requires simulating the same number of proposed moves from the selected value. While this is necessary to achieve detailed balance, this mixture of MCMC and importance sampling is inefficient in that it simulates a large number of particles and ends up using only one of them. By comparison, a particle filter for the same setting would propagate all N particles along the iterations and only resample occasionally, when the ESS gets too small. (I also wonder if the method could be seen as a special kind of pseudo-marginal approach, given that the acceptance ratio is an empirical average whose expectation is the missing normalising constant [as I later realised the authors had pointed out!]… In which case the efficiency comparisons by Christophe Andrieu and Matti Vihola could prove useful.)
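As a quick sanity check on that pseudo-marginal reading (a toy example of my own, not taken from the paper), the average of importance weights under an independence proposal does indeed target the missing normalising constant:

```python
import numpy as np

rng = np.random.default_rng(0)

# unnormalised target exp(-x^2/2); its missing constant is sqrt(2*pi) ~ 2.5066
log_pi_unnorm = lambda x: -0.5 * x**2

# independence proposal N(0, tau^2), chosen wider than the target
tau = 2.0
y = tau * rng.standard_normal(100_000)
log_q = -0.5 * (y / tau) ** 2 - np.log(tau * np.sqrt(2 * np.pi))

# importance weights pi(y)/q(y): their average estimates the missing constant
weights = np.exp(log_pi_unnorm(y) - log_q)
print(weights.mean(), np.sqrt(2 * np.pi))
```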

The issue raised by Martino and Louzada is that the estimator of the normalising constant can be poor at times, especially when the chain is in low-probability regions of the target, and hence gets the chain stuck. The above graph, taken from the paper, illustrates this setting. However, the reason for the failure is mostly that the proposal distribution is inappropriate for the purpose of approximating the normalising constant, i.e., that importance sampling does not converge in this situation, since otherwise the average of the importance weights should a.s. converge to the normalising constant. And the method should not worsen when the number of proposals at a given stage increases. (The solution proposed by the authors, namely a random number of proposals, seems unlikely to solve the issue in a generic situation. Changing the proposals towards different tail behaviours, as in population Monte Carlo, is more akin to defensive sampling and thus more likely to avoid trapping states. Interestingly, the authors eventually resort to a mixture denominator in the importance sampler, following AMIS.)
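To illustrate the defensive-sampling flavour alluded to above (a sketch of my own, not the correction actually adopted in the paper), one can mix a heavy-tailed component into the proposal so that the importance denominator does not collapse for proposals landing far from the current value:

```python
import numpy as np

def defensive_log_weight(y, x, log_pi, sigma=1.0, tail=10.0, eps=0.1):
    """Log importance weight pi(y) / q_mix(y|x), where
    q_mix = (1 - eps) * N(x, sigma^2) + eps * N(x, tail^2).
    The wide component keeps the denominator from becoming negligible
    for values y that the narrow local kernel would almost never propose."""
    def log_normal(z, scale):
        return -0.5 * ((z - x) / scale) ** 2 - np.log(scale * np.sqrt(2 * np.pi))
    log_q_mix = np.logaddexp(np.log1p(-eps) + log_normal(y, sigma),
                             np.log(eps) + log_normal(y, tail))
    return log_pi(y) - log_q_mix
```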

multiple try/point Metropolis algorithm

Posted in Statistics, Travel on January 23, 2012 by xi'an

Among the arXiv documents I printed at the turn of the year in order to get a better look at them (in the métro if nowhere else!), there were two papers by Luca Martino and co-authors from Universidad Carlos III, Madrid, A multi-point Metropolis scheme with generic weight functions and Different acceptance functions for multiple try Metropolis schemes. The multiple-try algorithm sounds like another version of the delayed rejection algorithm of Tierney and Mira (1999) and Green and Mira (2001). I had somehow missed it, even though it was introduced in Liu et al. (2000) and Qin and Liu (2001). Multiple-try Metropolis builds upon the idea that, instead of making one proposal at a time, it is feasible to build a sequence of proposals and to pick one among those, presumably rather likely and hence more open to being accepted. The sequence of proposals may depend upon past proposals as well as on the current value, lending some degree of adaptability to the scheme. In the current implementation, the algorithm remains rather clumsy [in my opinion] in that (a) a horizon N needs to be fixed and (b) an additional series of backward simulations needs to be produced simply to keep the balance equation happy… Hence a total number of simulations O(N) for a single possible acceptance. The first note slightly extends Qin and Liu (2001) by using a fairly general weighting scheme. The second paper studies some particular choices for the weights in a much less adaptive scheme (where parallelisation would be an appropriate alternative, since each proposal in the multiple try only depends on the current value of the chain). But it does not demonstrate a more efficient behaviour than when using a cycle or a mixture of Metropolis-Hastings algorithms. The method seems to be regaining popularity, though, as Roberto Casarin, Radu Craiu and Fabrizio Leisen (also from Carlos III) arXived a paper on a multiple-try algorithm, connected with population Monte Carlo, and more recently published it in Statistics and Computing.
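For comparison, the cycle-of-kernels baseline mentioned at the end is easy to write down; here is a minimal sketch (scales and helper names of my own choosing), where random-walk Metropolis kernels with different scales are applied in turn, each of which preserves the target on its own:

```python
import numpy as np

def rw_metropolis_step(x, log_pi, sigma, rng):
    """One random-walk Metropolis step with proposal scale sigma."""
    y = x + sigma * rng.standard_normal()
    return y if np.log(rng.uniform()) < log_pi(y) - log_pi(x) else x

def cycle_step(x, log_pi, scales=(0.1, 1.0, 10.0), rng=None):
    """A cycle of Metropolis-Hastings kernels: each scale is applied in turn,
    mixing small local moves with occasional large jumps."""
    rng = rng or np.random.default_rng()
    for sigma in scales:
        x = rw_metropolis_step(x, log_pi, sigma, rng)
    return x
```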