## amazing appendix

**I**n the first appendix of the 1995 Statistical Science paper on MCMC by Besag, Green, Higdon and Mengersen, “Bayesian Computation and Stochastic Systems”, stands a fairly neat result I was not aware of (and which Arnaud Doucet, with his unrivalled knowledge of the literature!, pointed out to me in Oxford, sparing me the tedium of trying to prove it from scratch!). I remember well reading a version of the paper in Fort Collins, Colorado, in 1993 (I think!) but nothing about this result.

It goes as follows: when running a Metropolis-within-Gibbs sampler for component x¹ of a collection of variates x¹,x²,…, thus aiming at simulating from the full conditional of x¹ given x⁻¹ by making a proposal q(x|x¹,x⁻¹), it is perfectly acceptable to use a proposal that depends on a parameter α (no surprise so far!) *and* to generate this parameter α anew at each iteration (still unsurprising, as α can be taken as an auxiliary variable) *and* to have the distribution of this parameter α depend on the other variates x²,…, i.e., x⁻¹. This is the surprising part, as adding α as an auxiliary variable would seem to mess up the update of x⁻¹. But the proof as found in the 1995 paper [page 35] does not require treating α as an auxiliary variable, as it establishes global balance directly. (Or maybe still detailed balance, when writing the whole Gibbs sampler as a cycle of Metropolis steps.) Terrific! And a whiff mysterious..!
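To make the scheme concrete, here is a minimal sketch of my own (the target, the proposal-scale law, and all constants are made up for illustration, not taken from the paper): a bivariate normal target, a Metropolis step on x¹ whose proposal mean and scale α are redrawn at every sweep from quantities depending on x², and an exact Gibbs step on x²:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target (my choice, not from the paper): a bivariate
# normal with correlation rho, so the x2-update below can be an exact
# Gibbs draw from its full conditional.
rho = 0.8

def log_target(x1, x2):
    # log density of N(0, [[1, rho], [rho, 1]]), up to an additive constant
    return -(x1 ** 2 - 2 * rho * x1 * x2 + x2 ** 2) / (2 * (1 - rho ** 2))

n_iter = 20_000
x1, x2 = 0.0, 0.0
samples = np.empty((n_iter, 2))
for t in range(n_iter):
    # draw the proposal scale alpha anew, from a distribution that
    # depends on the *other* component x2 -- the surprising ingredient
    alpha = rng.uniform(0.5, 1.0 + abs(x2))
    mu = rho * x2                      # proposal mean also depends on x2
    prop = mu + alpha * rng.standard_normal()

    # the proposal is not symmetric in (x1, prop), so q enters the
    # Metropolis ratio; alpha is held fixed within this one update, so
    # its normalising constant cancels between numerator and denominator
    def log_q(y):
        return -((y - mu) ** 2) / (2 * alpha ** 2)

    log_acc = (log_target(prop, x2) + log_q(x1)) \
            - (log_target(x1, x2) + log_q(prop))
    if np.log(rng.uniform()) < log_acc:
        x1 = prop

    # exact Gibbs step: x2 | x1 ~ N(rho * x1, 1 - rho^2)
    x2 = rho * x1 + np.sqrt(1 - rho ** 2) * rng.standard_normal()
    samples[t] = (x1, x2)
```

The law α ∼ U(0.5, 1 + |x²|) is arbitrary; the point is only that α is redrawn every sweep from a distribution involving x⁻¹, and by the appendix's result the chain still targets the correct joint distribution.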

May 20, 2018 at 7:22 pm

Thanks for posting this!

In my work, in each iteration I would like to use a normal proposal density for a parameter x_1 whose proposal mean and covariance matrix depend on the current values of parameters x_2 and x_3, which in turn depend on the previous value of x_1, and so on. While this works well in practice, I was wondering whether such a scheme preserves ergodicity (it seems it does!) and whether to treat the proposal density as independent of the previous value of x_1 when calculating the acceptance ratio. Would highly appreciate any thoughts on this.