## control variates [seminar]

Posted in pictures, Statistics, Travel, University life on November 5, 2021 by xi'an

Today, Petros Dellaportas (whom I have known since the early days of MCMC, when we met at CIRM) gave a talk at the Warwick algorithms seminar on control variates for MCMC, reminding me of his 2012 JRSS Series B paper. The approach is based on the Poisson equation, using a second control variate to stabilise the Monte Carlo approximation to the first control variate. The difference with usual control variates lies in finding a first approximation G(x)-q(y|x)G(y) to F-πF. The first Poisson equation uses α(x,y)q(y|x) rather than π, and the second expands log α(x,y)q(y|x) to achieve a manageable term.

Abstract: We provide a general methodology to construct control variates for any discrete-time random-walk Metropolis and Metropolis-adjusted Langevin algorithm Markov chains that can achieve, in a post-processing manner and with negligible additional computational cost, impressive variance reduction when compared to standard MCMC ergodic averages. Our proposed estimators are based on an approximate solution of the Poisson equation for multivariate Gaussian target densities of any dimension.
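Schematically (my paraphrase, not the paper's exact notation), the post-processed estimator takes the form

$$\hat\mu_\theta = \frac{1}{n}\sum_{i=1}^n \left\{ F(x_i) - \theta\,\Psi(x_i)\right\}, \qquad \Psi(x) = G(x) - (KG)(x),$$

where K is the Markov kernel of the chain, so that Ψ has zero expectation under the stationary distribution, and the coefficient θ is chosen (or estimated) to minimise the asymptotic variance of the average.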

I wonder whether there is a neural network version that would first build G from scratch and later optimise it towards solving the Poisson equation. As in this recent arXival I haven't read (yet).

## MCMC with control variates

Posted in Books, Statistics, University life on February 17, 2012 by xi'an

In the latest issue of JRSS Series B (74(1), Jan. 2012), I just noticed that no paper is “from my time” as co-editor, i.e. that all of them have been submitted after I completed my term in Jan. 2010. Given the two-year delay, this is not that surprising, but it also means I can make comments on some papers w/o reservation! A paper I had seen earlier (as a reader, not as an editor nor as a referee!) is Petros Dellaportas’ and Ioannis Kontoyiannis’ Control variates for estimation based on reversible Markov chain Monte Carlo samplers. The idea is one of post-processing MCMC output, by stabilising the empirical average via control variates. There are two difficulties, one in finding control variates, i.e. functions $\Psi(\cdot)$ with zero expectation under the target distribution, and another in estimating the optimal coefficient in a consistent way. The paper solves the first difficulty by using the Poisson equation, namely that G(x)-KG(x) has zero expectation under the stationary distribution associated with the Markov kernel K. Therefore, if KG can be computed in closed form, this is a generic control variate taking advantage of the MCMC algorithm. Of course, the above if is a big if: it seems difficult to find closed-form solutions when using a Metropolis-Hastings algorithm for instance, and the paper only contains illustrations within the conjugate prior/Gibbs sampling framework. The second difficulty is also met by Dellaportas and Kontoyiannis, who show that the asymptotic variance of the resulting central limit theorem can be equal to zero in some cases.
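To see the zero-expectation property (and the zero-asymptotic-variance phenomenon) in a toy case, here is my own sketch, not taken from the paper: a Gaussian AR(1) chain is reversible with respect to N(0,1), and for G(x)=x² the kernel average KG is available in closed form, giving the control variate Ψ(x)=G(x)-KG(x).

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.9, 50_000

# Reversible Gaussian AR(1) chain with stationary distribution N(0,1):
# x_{t+1} = rho * x_t + sqrt(1 - rho^2) * eps_t
x = np.empty(n)
x[0] = rng.standard_normal()
eps = rng.standard_normal(n - 1)
for t in range(n - 1):
    x[t + 1] = rho * x[t] + np.sqrt(1 - rho**2) * eps[t]

# Poisson-equation control variate: for G(x) = x^2, the kernel average is
# (KG)(x) = rho^2 * x^2 + (1 - rho^2), hence
# Psi(x) = G(x) - (KG)(x) = (1 - rho^2) * (x^2 - 1),
# which has zero mean under N(0,1).
F = x**2                          # estimating E[X^2] = 1
Psi = (1 - rho**2) * (x**2 - 1)

# estimated optimal coefficient (regression of F on Psi)
theta = np.cov(F, Psi, ddof=0)[0, 1] / np.var(Psi)

plain = F.mean()                        # standard ergodic average
controlled = (F - theta * Psi).mean()   # control-variate estimator
```

In this (admittedly degenerate) example Ψ is an affine transform of F, so the estimated θ equals 1/(1-ρ²) and the controlled estimator returns E[X²]=1 up to floating-point error, an instance of the zero-asymptotic-variance case, while the plain ergodic average carries the usual Monte Carlo noise.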