Archive for independent Metropolis-Hastings algorithm

Metropolis gets off the ground

Posted in Books, Kids, Statistics with tags , , , , , , , on April 1, 2019 by xi'an

An X validated discussion that to-ed and fro-ed around a misunderstanding of the Metropolis-Hastings algorithm. It started by blaming George Casella's and Roger Berger's Statistical Inference (p.254), when the real issue was the inquirer having difficulties with the notation V ~ f(v), that is, with the notion of generating a random variable, mistaking identically distributed for identical. Even my crawling from one iteration to the next did not help at the beginning. Another illustration of the strong tendency on this forum to jettison fundamental prerequisites…
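
A minimal illustration of the point at issue (my own sketch, in Python for concreteness): writing V ~ f(v) means each iteration generates a fresh realisation from f, so the successive draws are identically distributed but by no means identical.

```python
import random

random.seed(1)  # for reproducibility

# V ~ N(0,1): five realisations of the SAME distribution,
# not five copies of the same value
draws = [random.gauss(0, 1) for _ in range(5)]
print(draws)  # five distinct numbers, one law
```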

adaptive independent Metropolis-Hastings

Posted in Statistics with tags , , , , , , on May 8, 2018 by xi'an

When rereading this paper by Holden et al. (2009), I was reminded of the earlier and somewhat under-appreciated Gåsemyr (2003). But I find the convergence results therein rather counter-intuitive, in that they seem to justify adaptive independent proposals with no strong requirement besides the massive Doeblin condition:

“The Doeblin condition essentially requires that all the proposal distribution [sic] has uniformly heavier tails than the target distribution.”

Even when the adaptation is based on a history vector made of rejected values and non-replicated accepted values. Actually, convergence of this sequence of adaptive proposal kernels is established under a concentration of the Doeblin constants a¹, a², … towards one.

The reason may be that, for chains satisfying a Doeblin condition, there is at each step a probability, equal to a¹, a², …, of reaching stationarity, and hence adaptivity can be ignored since each kernel keeps the target π invariant. So in the end this is not so astounding. (The paper also reminded me of Wolfgang [or Vincent] Doeblin's short and tragic life.)
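
As a toy check of this Doeblin condition (my own illustrative choice of densities, not taken from the paper): with a standard normal target π and a standard Cauchy independent proposal q, the constant a = inf q(x)/π(x) is strictly positive because the Cauchy tails uniformly dominate the normal tails, and the independent Metropolis-Hastings chain then reaches stationarity with probability at least a at each step.

```python
import math

def target(x):    # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def proposal(x):  # standard Cauchy density, uniformly heavier tails
    return 1.0 / (math.pi * (1 + x * x))

# Doeblin constant a = inf_x q(x)/pi(x), approximated on a grid;
# for this pair the infimum is attained at x = ±1
a = min(proposal(x) / target(x) for x in [i / 100 for i in range(-1000, 1001)])
print(a)  # strictly positive, about 0.66
```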

a programming bug with weird consequences

Posted in Kids, pictures, R, Statistics, University life with tags , , , , , , on November 25, 2015 by xi'an

One of my students mistakenly coded an independent Metropolis-Hastings algorithm with too small a variance in the proposal when compared with the target variance. Here is the R code of this implementation:

#target is N(0,1), proposal is N(0,.01) [sd=.1]
T=1e4; x=rep(0,T)
prop=rnorm(T,sd=.1); logu=log(runif(T))
for (t in 2:T){
  #log Metropolis-Hastings ratio for an independent proposal
  ratav=dnorm(prop[t],log=TRUE)-dnorm(x[t-1],log=TRUE)+
    dnorm(x[t-1],sd=.1,log=TRUE)-dnorm(prop[t],sd=.1,log=TRUE)
  x[t]=ifelse(logu[t]<ratav,prop[t],x[t-1])}

It produces outputs of the following shape

[figure smalvar: trace of the simulated chain, with lengthy freezes]

which is quite amazing because of the small variance. The reason for the lengthy freezes of the chain is the occurrence, with positive probability, of realisations from the proposal with very small proposal density values, as they induce very small Metropolis-Hastings acceptance probabilities and are almost "impossible" to leave. This is due to the lack of control of the target, which is flat over the domain of the proposal for all practical purposes. Obviously, in such a setting, the outcome is unrelated to the N(0,1) target!

It is also unrelated to the normal proposal, in that switching to a t distribution with 3 degrees of freedom produces a similar outcome:

It is only when using a Cauchy proposal that the pattern vanishes:
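
The contrast can be reproduced in a few lines (a Python sketch of my own, not the student's code): running the independent sampler with the N(0,.01) proposal against a Cauchy proposal and recording the longest freeze of each chain.

```python
import math
import random

random.seed(42)

def log_target(x):  # N(0,1) target, up to a constant
    return -x * x / 2

def longest_freeze(T, rprop, log_dprop):
    """Independent Metropolis-Hastings; returns the longest run of rejections."""
    x = 0.0
    lw = log_target(x) - log_dprop(x)  # log importance weight of current state
    longest = run = 1
    for _ in range(T):
        y = rprop()
        lwy = log_target(y) - log_dprop(y)
        if math.log(random.random()) < lwy - lw:  # accept the fresh proposal
            x, lw, run = y, lwy, 1
        else:                                     # reject: the chain freezes
            run += 1
            longest = max(longest, run)
    return longest

# N(0,.01) proposal: unbounded target-to-proposal ratio, lengthy freezes
frozen = longest_freeze(10_000, lambda: random.gauss(0, .1),
                        lambda y: -y * y / .02 - math.log(.1 * math.sqrt(2 * math.pi)))
# standard Cauchy proposal: bounded ratio, no such pattern
cauchy = longest_freeze(10_000, lambda: math.tan(math.pi * (random.random() - .5)),
                        lambda y: -math.log(math.pi * (1 + y * y)))
print(frozen, cauchy)  # the first is typically much larger
```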

independent Metropolis-Hastings

Posted in Books, Statistics with tags , , , , , , on November 24, 2015 by xi'an

“In this paper we have demonstrated the potential benefits, both theoretical and practical, of the independence sampler over the random walk Metropolis algorithm.”

Peter Neal and Tsun Man Clement Lee arXived a paper on optimising the independent Metropolis-Hastings algorithm. I was a bit surprised at this "return" of the independent sampler, which I hardly mention in my lectures, so I had a look at the paper. The goal is to produce an equivalent to what Gelman, Roberts and Gilks (1996) obtained for random walk samplers. In the formal setting where the target is a product of n identical densities f, the optimal number k of components to update in one Metropolis-Hastings (within Gibbs) round is approximately 2.835/I, where I is the symmetrised Kullback-Leibler divergence between the (univariate) target f and the independent proposal q (assuming I is finite). The most surprising part is that the optimal acceptance rate is again 0.234, as in the random walk case. This is surprising in that I usually associate the independent Metropolis-Hastings algorithm with high acceptance rates. But this is of course when calibrating the proposal q, not the block size k of the Gibbs part. Hence, while this calibration of the independent Metropolis-within-Gibbs sampler is worth the study and almost automatically applicable, it only applies to the category of problems where blocking can take place, as in the disease models illustrating the paper. And it requires an adequate choice of proposal distribution since, otherwise, the above quote becomes inappropriate.
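
To fix ideas on the 2.835/I rule, here is a quick numerical illustration (my own toy setting, not taken from the paper): taking both the target f and the independent proposal q to be centred normals, the symmetrised Kullback-Leibler divergence is available in closed form and the approximately optimal block size follows.

```python
import math

def sym_kl_normal(s):
    """Symmetrised KL divergence between N(0,1) and N(0,s^2), closed form:
    I = KL(f||q) + KL(q||f) = (s^2 + 1/s^2 - 2) / 2."""
    return (s ** 2 + s ** -2 - 2) / 2

def optimal_block_size(s):
    """Approximately optimal number of components per round, k ~ 2.835/I."""
    return 2.835 / sym_kl_normal(s)

for s in (1.5, 2.0, 3.0):
    print(s, sym_kl_normal(s), optimal_block_size(s))
# the closer the proposal to the target (I small), the larger the block
```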

parallel Metropolis Hastings [published]

Posted in Statistics, University life with tags , , , , on October 27, 2011 by xi'an

As I was looking at the discussion paper by Yaming Yu and Xiao-Li Meng on improved efficiency for MCMC algorithms, which is available (for free) on-line, I realised that the paper on parallel Metropolis-Hastings algorithms we wrote with Pierre Jacob and Murray Smith is now published in the Journal of Computational and Graphical Statistics (on-line). This is a special issue for the 20th anniversary of the Journal of Computational and Graphical Statistics and our paper is within the "If Monte Carlo Be a Food of Computing, Simulate on" section! (My friends Olivier Cappé and Radu V. Craiu also have a paper in this issue.) Here is the complete reference:

P. Jacob, C. P. Robert, & M. H. Smith. Using Parallel Computation to Improve Independent Metropolis–Hastings Based Estimation. Journal of Computational and Graphical Statistics. September 1, 2011, 20(3): 616-635. doi:10.1198/jcgs.2011.10167

The [20th Anniversary Featured Discussion] paper by Yaming Yu and Xiao-Li Meng has already been mentioned on Andrew's blog. It is full of interesting ideas and remarks about improving Gibbs efficiency, in the spirit of the very fine work Jim Hobert and his collaborators have been developing in the past decade, of fun titles ("To center or not center – that is not the question", "coupling is more promising than compromising", "be all our insomnia remembered", and "needing inception", in connection with the talk Xiao-Li gave in Paris two months ago…), and above all of the fascinating puzzle of linking statistical concepts and Monte Carlo concepts. How come sufficiency and ancillarity are to play a role in simulation?! Where is the simulation equivalent of Basu's theorem? These questions obviously relate to the idea of turning simulation into a measure estimation issue, discussed in a post of mine after the Columbia workshop. This interweaving paper also brings back memories of the fantastic 1994 Biometrika interleaving paper by Liu, Wong, and Kong, with its elegant proof of positive decreasing correlation and of improvement by Rao-Blackwellisation [another statistics theorem!] for data augmentation.