a programming bug with weird consequences
One student of mine coded by mistake an independent Metropolis-Hastings algorithm with too small a variance in the proposal when compared with the target variance. Here is the R code of this implementation:
#target is N(0,1)
#proposal is N(0,.01)
T=1e5
prop=x=rnorm(T,sd=.01)
ratop=dnorm(prop,log=TRUE)-dnorm(prop,sd=.01,log=TRUE)
ratav=ratop[1]
logu=ratop-log(runif(T))
for (t in 2:T){
  if (logu[t]>ratav){
    x[t]=prop[t];ratav=ratop[t]
  }else{
    x[t]=x[t-1]}
}
It produces outputs of the following shape
which is quite amazing given the small variance. The reason for the lengthy freezes of the chain is the occurrence, with positive probability, of realisations from the proposal with very small proposal density values: such draws carry very large importance ratios, hence induce very small Metropolis-Hastings acceptance probabilities for all subsequent proposals, and are almost “impossible” to leave. This is due to the lack of control by the target, which is flat over the support of the proposal for all practical purposes. Obviously, in such a setting, the outcome is unrelated to the N(0,1) target!
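To put a number on the freezes, here is a small check of my own (in Python, not part of the original post) of the log importance ratio log f(x) − log q(x), for the N(0,1) target f and the too-narrow N(0,.01²) proposal q. A draw a mere five proposal standard deviations from zero already faces an acceptance probability of order 10⁻⁶ against a typical draw near the proposal mode, so escapes take on the order of 10⁵ iterations:

```python
import math

def norm_logpdf(x, sd=1.0):
    # log density of the N(0, sd^2) distribution at x
    return -0.5 * math.log(2 * math.pi * sd * sd) - x * x / (2 * sd * sd)

def log_ratio(x, prop_sd=0.01):
    # log importance ratio log f(x) - log q(x):
    # N(0,1) target over the narrow N(0, prop_sd^2) proposal
    return norm_logpdf(x) - norm_logpdf(x, prop_sd)

r_mode = log_ratio(0.0)   # typical draw near the proposal mode, ~ -4.6
r_tail = log_ratio(0.05)  # rare draw five proposal sd's out, ~ +7.9

# acceptance probability of leaving the tail value for a mode value
accept = min(1.0, math.exp(r_mode - r_tail))
print(accept)  # ~ 3.7e-06
```

The gap r_tail − r_mode equals x²(1/(2·.01²) − 1/2) ≈ 12.5 at x = .05, and it keeps growing quadratically further out, so even rarer tail draws freeze the chain for even longer.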
It is also unrelated to the normal proposal, in that switching to a t distribution with 3 degrees of freedom produces a similar outcome:
It is only when using a Cauchy proposal that the pattern vanishes:
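A quick numerical check (my own Python illustration, not from the post) shows why the Cauchy proposal behaves: with a standard Cauchy proposal q, the importance ratio f(x)/q(x) = π φ(x)(1+x²) is bounded, peaking at x = ±1, and a bounded importance ratio is the classical sufficient condition for the independent Metropolis-Hastings chain to be uniformly ergodic:

```python
import math

def log_ratio_cauchy(x):
    # log importance ratio log f(x) - log q(x), with
    # N(0,1) target f and standard Cauchy proposal q = 1/(pi*(1+x^2))
    log_f = -0.5 * math.log(2 * math.pi) - 0.5 * x * x
    log_q = -math.log(math.pi) - math.log(1 + x * x)
    return log_f - log_q

# scan a wide grid: the ratio peaks at x = +/-1 and stays bounded
xs = [i / 100 for i in range(-1000, 1001)]
argmax = max(xs, key=log_ratio_cauchy)
bound = log_ratio_cauchy(argmax)
print(abs(argmax), round(bound, 3))  # 1.0 0.419
```

By contrast, for the N(0,.01) proposal the same ratio grows without bound in the proposal's tails, which is precisely what traps the chain.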
November 25, 2015 at 5:06 pm
Just wondering: Wikipedia suggests adaptive rejection sampling is more robust. Any reason M-H is being used here?
November 25, 2015 at 5:11 pm
Carl: this is an illustration of how things can go wrong with a poorly chosen proposal in a Metropolis-Hastings algorithm, in a toy problem. In realistic situations, actually in any problem with dimension more than one, adaptive rejection sampling is simply not available.