Archive for independent Metropolis-Hastings algorithm

adaptive incremental mixture MCMC

Posted in Statistics on August 12, 2022 by xi'an

Sadly, I missed this adaptive incremental mixture MCMC paper by my friends Florian Maire, Nial Friel, Antonietta Mira, and Adrian E. Raftery when it came out in JCGS in 2019. The core of the paper is about building a time-inhomogeneous mixture independent proposal, starting from an initial distribution and adding one component whenever the chain hits a point where the target-to-proposal ratio is large, since such a point signals a part of the space that is not well-enough explored; the other components do not change, apart from a proportional decrease in their weights. This construction reminded me of the inspiring Gåsemyr (2003), which in some ways inspired our population Monte Carlo sampler. Obviously, there is a what-you-get-is-what-you-see drawback to the approach, in that regions where this ratio is high may never be visited by the proposal, despite its adaptivity.

The added component is Normal, centred at the associated (accepted) proposed value ø and with covariance matrix a local estimate based on past iterations of the algorithm, and with weight proportional to the (powered) target density at ø, which does not require a normalising constant. The method however requires setting a certain number of calibration parameters, like the power γ for the weight, the lower bound M for the target-to-proposal ratio, and the rate of diminishing adaptation (also needed for ergodicity à la Roberts and Rosenthal, 2007), plus the implicit choice of a parameterisation of the Normal mixture that can get close enough to the target. In the posted experiments, the number of components in the mixture does not grow to unmanageable figures, but a further adaptation could be to remove components that are inactive or lead to systematic rejection, as we did in the population Monte Carlo paper.
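
Here is a minimal R sketch of the mechanism on a univariate toy target, where the power γ, the threshold M, and the local scale estimate are all arbitrary illustration choices of mine, and where the diminishing-adaptation schedule is simply omitted; this is not the authors' implementation.

#minimal sketch of an incremental mixture independent proposal (univariate toy
#target); gam, M, and the local scale estimate are arbitrary illustration choices,
#and the diminishing-adaptation schedule of the paper is omitted
ltarget=function(x) dnorm(x,2,.5,log=TRUE)
mu=0; sig=1; w=1                        #initial single-component proposal
gam=.5; M=3                             #weight power and ratio threshold
dprop=function(x) log(sum(w*dnorm(x,mu,sig))/sum(w))
lrat=function(x) ltarget(x)-dprop(x)    #log target-to-proposal ratio
T=1e4; x=numeric(T)
for (t in 2:T){
  k=sample(length(w),1,prob=w)
  y=rnorm(1,mu[k],sig[k])               #draw from the current mixture
  x[t]=ifelse(log(runif(1))<lrat(y)-lrat(x[t-1]),y,x[t-1])
  if ((x[t]==y)&&(lrat(y)>log(M))){     #large ratio: add a component centred at y
    mu=c(mu,y)
    sig=c(sig,max(sd(x[1:t]),.1))       #crude local scale estimate
    w=c(w,exp(gam*ltarget(y)))          #weight from the powered target density
  }
}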

Metropolis gets off the ground

Posted in Books, Kids, Statistics on April 1, 2019 by xi'an

An X validated discussion that went to and fro around a misunderstanding of the Metropolis-Hastings algorithm. Which started by putting the blame on George Casella‘s and Roger Berger’s Statistical Inference (p.254), when the real issue was the inquisitor having difficulties with the notation V ~ f(v), or with the notion of random variable [generation], mistaking identically distributed for identical. Even (my) crawling from one iteration to the next did not help at the beginning. Another illustration of the strong tendency on this forum to jettison fundamental prerequisites…

adaptive independent Metropolis-Hastings

Posted in Statistics on May 8, 2018 by xi'an

When rereading this paper by Holden et al. (2009), I was reminded of the earlier and somewhat under-appreciated Gåsemyr (2003). But I find the convergence results therein rather counter-intuitive, in that they seem to justify adaptive independent proposals under no strong requirement, besides the massive Doeblin condition:

“The Doeblin condition essentially requires that all the proposal distribution [sic] has uniformly heavier tails than the target distribution.”

This holds even when the adaptation is based on a history vector made of rejected values and non-replicated accepted values. Actually, convergence of this sequence of adaptive proposal kernels is established under a concentration of the Doeblin constants a¹, a², … towards one, in the sense that

E[(1-a¹)(1-a²)…]=0.

The reason may be that, for chains satisfying a Doeblin condition, there is a probability of reaching stationarity at each step, equal to a¹, a², …, and hence of forgetting about adaptivity, since each kernel keeps the target π invariant. So in the end this is not so astounding. (The paper also reminded me of Wolfgang [or Vincent] Doeblin‘s short and tragic life.)
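
For the sake of illustration, here is a hedged R sketch of an adaptive independent Metropolis-Hastings sampler whose Normal proposal is refit from such a history vector; the refit schedule and the tail inflation are arbitrary choices of mine and the Doeblin requirement is not enforced, so this is only a caricature of the construction in the paper.

#illustrative sketch only (not the construction of the paper): an independent MH
#sampler whose Normal proposal is refit from a history vector containing every
#proposed value once, i.e. rejected values plus non-replicated accepted values;
#the refit schedule and the tail inflation are arbitrary choices
ltarget=function(x) dnorm(x,3,2,log=TRUE)
T=1e4; x=numeric(T)
histo=x[1]; m=0; s=5                    #history vector and proposal parameters
for (t in 2:T){
  y=rnorm(1,m,s)
  lacc=ltarget(y)-dnorm(y,m,s,log=TRUE)-ltarget(x[t-1])+dnorm(x[t-1],m,s,log=TRUE)
  x[t]=ifelse(log(runif(1))<lacc,y,x[t-1])
  histo=c(histo,y)                      #each proposed value enters the history once
  if (t%%100==0){                       #occasional refit of the independent proposal
    m=mean(histo); s=1.5*sd(histo)      #inflated scale, aiming at heavier tails
  }
}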

a programming bug with weird consequences

Posted in Kids, pictures, R, Statistics, University life on November 25, 2015 by xi'an

One of my students mistakenly coded an independent Metropolis-Hastings algorithm with too small a variance in the proposal when compared with the target variance. Here is the R code of this implementation:

#target is N(0,1)
#proposal is N(0,.01^2), i.e. sd=.01 (the mistakenly small scale)
T=1e5
prop=x=rnorm(T,sd=.01)           #all proposals drawn at once
ratop=dnorm(prop,log=TRUE)-dnorm(prop,sd=.01,log=TRUE)  #log target/proposal ratios
ratav=ratop[1]                   #log ratio at the current state of the chain
logu=ratop-log(runif(T))
for (t in 2:T){
  if (logu[t]>ratav){            #accept when u < ratio(prop)/ratio(current)
    x[t]=prop[t];ratav=ratop[t]}else{x[t]=x[t-1]}
  }

It produces outputs of the following shape [trace of the chain, with lengthy freezes], which is quite amazing given the small variance. The reason for the lengthy freezes of the chain is the occurrence, with positive probability, of realisations from the proposal with very small proposal density values: such values enjoy a huge target-to-proposal ratio, which induces very small Metropolis-Hastings acceptance probabilities for subsequent proposals and makes them almost “impossible” to leave. This is due to the lack of control from the target, which is flat over the effective range of the proposal for all practical purposes, so the ratio is driven by the proposal density alone. Obviously, in such a setting, the outcome is unrelated to the N(0,1) target!
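
A quick back-of-the-envelope check of this imbalance (numbers are mine and only indicative):

#log target-to-proposal ratios in the student's setting: a typical proposal value
#versus a (rare but possible) value five proposal standard deviations out
dnorm(.01,log=TRUE)-dnorm(.01,sd=.01,log=TRUE)   #about -4.1
dnorm(.05,log=TRUE)-dnorm(.05,sd=.01,log=TRUE)   #about +7.9
#once the chain sits at such a tail value, accepting a typical proposal has log
#probability about -4.1-7.9=-12, i.e. a probability below 1e-5, hence the freezes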

It is also not specific to the normal proposal, in that switching to a t distribution with 3 degrees of freedom produces a similar outcome.

It is only when using a Cauchy proposal that the pattern vanishes.
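
For completeness, here is my own rewrite of the above code with a Cauchy proposal of the same tiny scale (not the original experiment); the point is that the target-to-proposal ratio is then bounded, which tames the freezes:

#same algorithm with a Cauchy proposal of scale .01 (my own rewrite, not the
#original code); the target-to-proposal ratio is now bounded, hence no endless freeze
T=1e5
prop=x=rcauchy(T,scale=.01)
ratop=dnorm(prop,log=TRUE)-dcauchy(prop,scale=.01,log=TRUE)
ratav=ratop[1]
logu=ratop-log(runif(T))
for (t in 2:T){
  if (logu[t]>ratav){
    x[t]=prop[t];ratav=ratop[t]}else{x[t]=x[t-1]}
  }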

independent Metropolis-Hastings

Posted in Books, Statistics on November 24, 2015 by xi'an

“In this paper we have demonstrated the potential benefits, both theoretical and practical, of the independence sampler over the random walk Metropolis algorithm.”

Peter Neal and Tsun Man Clement Lee arXived a paper on optimising the independent Metropolis-Hastings algorithm. I was a bit surprised at this “return” of the independent sampler, which I hardly mention in my lectures, so I had a look at the paper. The goal is to produce an equivalent to what Gelman, Roberts and Gilks (1996) obtained for random walk samplers. In the formal setting where the target is a product of n identical densities f, the optimal number k of components to update in one Metropolis-Hastings (within Gibbs) round is approximately 2.835/I, where I is the symmetrised Kullback-Leibler distance between the (univariate) target f and the independent proposal q, provided I is finite. The most surprising part is that the optimal acceptance rate is again 0.234, as in the random walk case. This is surprising in that I usually associate the independent Metropolis-Hastings algorithm with high acceptance rates. But this is of course when calibrating the proposal q, not the block size k of the Gibbs part. Hence, while this calibration of the independent Metropolis-within-Gibbs sampler is worth the study and almost automatically applicable, it only applies to the category of problems where blocking can take place, as in the disease models illustrating the paper. And it requires an adequate choice of proposal distribution since, otherwise, the above quote becomes inappropriate.
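
As a rough illustration of the rule, with a univariate pair (f,q) of my own choosing rather than one from the paper, the symmetrised Kullback-Leibler distance can be computed by numerical integration and plugged into 2.835/I:

#rough illustration of the 2.835/I rule with an arbitrary univariate pair (f,q)
f=function(x) dnorm(x)                  #single-component target
q=function(x) dnorm(x,0,1.5)            #independent proposal
I=integrate(function(x)
  (f(x)-q(x))*(dnorm(x,log=TRUE)-dnorm(x,0,1.5,log=TRUE)),-Inf,Inf)$value
k=2.835/I                               #suggested number of components per block
c(I,k)                                  #here I is about 0.35 and k about 8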
