As reading and commenting on the importance tempering for variable selection paper by Giacomo Zanella (previously Warwick) and Gareth Roberts (Warwick) has been on my to-do list for quite a while, the fact that Giacomo presented this work at the CIRM Bayesian masterclass last week was the right nudge to write this post.
The starting point for the method is to simulate from a tempered version of a Gibbs sampler, selecting the component [of the parameter vector θ] to update with probability proportional to an importance weight that is the inverse of the conditional posterior raised to the complementary power 1−β, that is, to the ratio of the tempered conditional to the true one. This approach differs from classical (MCMC) tempering in that it does not target the original distribution. It hence produces a weighted sample, at a computing cost per iteration of the order of the dimension of θ, even though the tempered simulation of a single conditional can reduce the variance of the resulting estimator. The method generalises to any collection of one-component proposal/importance distributions, under the assumption that they have fatter tails than the true conditionals. The resulting Markov chain is reversible with respect to a different stationary measure, namely the original distribution multiplied by the normalisation factor of the importance weights, but this ensures that weighted averages converge to the right quantity. Interestingly so, because the powered conditionals are not necessarily coherent from a Gibbsic perspective.
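To make the mechanics concrete, here is a minimal sketch of such a tempered Gibbs step on a toy bivariate Gaussian target, using tempered conditionals f(x_i|x_{-i})^β; this is my own illustration of the scheme (the choice of target, the value of β, and all names are mine), not the authors' code.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

rho, beta, d = 0.95, 0.5, 2       # correlation, tempering power, dimension
s2_f = 1 - rho**2                 # conditional variance under the target f
s2_g = s2_f / beta                # variance of the tempered conditional f^beta

def cond_ratio(x, i):
    """p_i(x) = g(x_i|x_-i) / f(x_i|x_-i): the coordinate-selection weight,
    i.e. the inverse of the conditional raised to the complementary power."""
    m = rho * x[1 - i]
    return norm.pdf(x[i], m, np.sqrt(s2_g)) / norm.pdf(x[i], m, np.sqrt(s2_f))

x = np.zeros(d)
states, weights = [], []
for _ in range(20000):
    p = np.array([cond_ratio(x, i) for i in range(d)])
    weights.append(d / p.sum())        # w(x) = 1/Z(x), Z(x) = (1/d) sum_i p_i(x)
    states.append(x.copy())
    i = rng.choice(d, p=p / p.sum())   # pick the coordinate via the p_i's
    x[i] = rng.normal(rho * x[1 - i], np.sqrt(s2_g))  # draw from the tempered conditional

states, weights = np.array(states), np.array(weights)
# self-normalised importance estimate of E_f[x_1], which should be close to 0
print((weights * states[:, 0]).sum() / weights.sum())
```

The recorded weight at each state is the inverse of the local normalisation Z(x), so the self-normalised average corrects for the chain targeting f×Z rather than f itself.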
The method is applied to Bayesian [spike-and-slab] variable selection, the importance selection of a subset of covariates being restricted to changing one inclusion indicator at a time. I did not understand at first how the computation of the normalising constant avoids involving 2^p terms, until Giacomo explained to me that the constant is only computed for the conditionals. The complexity thus gets down from O(|γ|2^p) to O(|γ|p), where |γ| is the number of selected variables and p the total number of covariates. Another question I had was about the tempering power β, whose selection remains a wee bit of an art!
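Again as a mere sketch of this flip-one-indicator-at-a-time version, one coordinate update could look as follows, assuming a user-supplied log_post function returning the unnormalised log posterior of a model γ, and taking the fully tempered (uniform) conditionals as one-component proposals; all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def flip(gamma, i):
    out = gamma.copy()
    out[i] = 1 - out[i]
    return out

def tempered_flip_step(gamma, log_post):
    """One coordinate update over the inclusion indicators, with uniform
    (fully tempered) conditionals; log_post(gamma) is the unnormalised
    log posterior of model gamma, supplied by the user."""
    p_dim = len(gamma)
    lp0 = log_post(gamma)
    lp_flip = np.array([log_post(flip(gamma, i)) for i in range(p_dim)])
    # conditional posterior of the current gamma_i given gamma_-i: only the
    # p neighbouring models enter the computation, never all 2^p of them
    log_cond = lp0 - np.logaddexp(lp0, lp_flip)
    p_sel = 0.5 * np.exp(-log_cond)     # p_i = g/f with g uniform on {0,1}
    weight = p_dim / p_sel.sum()        # w = 1/Z with Z = (1/p) sum_i p_i
    i = rng.choice(p_dim, p=p_sel / p_sel.sum())
    if rng.random() < 0.5:              # resample gamma_i from Unif{0,1}
        gamma = flip(gamma, i)
    return gamma, weight
```

Averaging the successive γ's with the recorded weights then estimates the posterior inclusion probabilities of the covariates.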