non-reversible jump MCMC

Philippe Gagnon and Arnaud Doucet have recently arXived a paper on a non-reversible version of reversible jump MCMC, the methodology introduced by Peter Green in 1995 to tackle Bayesian model choice/comparison/exploration, which Philippe presented at BayesComp20.

“The objective of this paper is to propose sampling schemes which do not suffer from such a diffusive behaviour by exploiting the lifting idea (…)”

The idea is related to lifting, creating non-reversible behaviour by adding a direction index (a spin) to the exploration of the models, which are assumed to be totally ordered, as with nested models (mixtures, changepoints, &tc.). As with earlier versions of lifting, the chain proceeds along one (spin) direction until a proposal is rejected, at which point the spin flips. The acceptance probability in the event of a change of model (upwards or downwards) is essentially the same as the reversible one (meaning it includes the dreaded Jacobian!). The original difficulty with reversible jump remains active with non-reversible jump, in that the move from one model to the next must produce plausible values. The paper recalls two methods proposed by Christophe Andrieu and his co-authors. One consists in inserting a tempering sequence between models, but this proves costly. Pursuing the interesting underlying theme that both reversible and non-reversible versions are noisy approximations of the marginal ratio, the other consists in marginalising out the parameter to approximate the marginal probability of moving between nearby models, combined with a multiple-choice step that preserves stationarity while selecting more likely moves. This still requires multiplying the number of simulations, but the computations are parallelisable. The paper contains an exact comparison result showing that non-reversible jump leads to a smaller asymptotic variance than reversible jump, but it is unclear to me whether or not this accounts for the extra computing time resulting from the multiple paths in the proposed algorithms. (Even though the numerical illustration shows an improvement brought by the non-reversible side for the same computational budget.)
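To give a flavour of the lifting mechanism described above, here is a minimal sketch (mine, not the authors' algorithm) of a lifted chain on an ordered index k∈{0,…,K−1}, standing in for the model index: the state carries a spin, proposals keep moving in the spin direction while accepted, and a rejection (or hitting a boundary) flips the spin. The target `log_weights` and the function name are illustrative assumptions; a genuine non-reversible jump sampler would also propose parameter values and include the Jacobian term in the acceptance ratio.

```python
import math
import random
from collections import Counter

def lifted_chain(log_weights, n_iter, seed=0):
    """Non-reversible (lifted) sampler on the ordered set {0, ..., K-1}.

    Illustrative sketch only: the state is (k, spin); the chain keeps
    moving in the spin direction while proposals are accepted, and flips
    the spin on rejection, producing persistent rather than diffusive
    exploration of the (model) indices.
    """
    rng = random.Random(seed)
    K = len(log_weights)
    k, spin = 0, +1
    counts = Counter()
    for _ in range(n_iter):
        counts[k] += 1
        k_new = k + spin
        # Metropolis ratio on the index; out-of-range proposals are rejected
        if 0 <= k_new < K and math.log(rng.random()) < log_weights[k_new] - log_weights[k]:
            k = k_new       # accepted: keep going in the same direction
        else:
            spin = -spin    # rejected or boundary hit: flip the spin
    return counts
```

On a uniform target the chain simply sweeps back and forth across the indices, and more generally the rejection-flip rule preserves the target jointly with a uniform spin, which is the standard lifting construction.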

2 Responses to “non-reversible jump MCMC”

  1. […] for… reversibility reasons. In that case, the acceptance probability is simply the Jacobian of the transform to the proposal, just as in reversible jump MCMC. […]

  2. Philippe Gagnon Says:

    Thank you for taking the time to review the paper.

    Let me clarify the part of the paper that is unclear: “The paper contains an exact comparison result that non-reversible jump leads to a smaller asymptotic variance than reversible jump, but it is unclear to me whether or not this accounts for the extra computing time resulting from the multiple paths in the proposed algorithms.” Consider that P_NRJ is the Markov kernel simulated by any non-reversible jump (NRJ) algorithm (e.g., the vanilla version without the approaches of Andrieu) and P_RJ that simulated by its reversible counterpart. If P_NRJ is the Markov kernel simulated by the vanilla NRJ, then P_RJ is the Markov kernel simulated by the vanilla RJ. These algorithms have the same computational cost because using a direction/spin does not increase the computational cost. Our result allows us to compare these two algorithms in terms of asymptotic variance.

    Our result does not allow us to compare the vanilla NRJ with the RJ using the approaches of Andrieu, for instance. As you mentioned, we compared such algorithms in a numerical example (in which all algorithms are given the same computational budget).

    Best regards,

    Philippe
