**L**ast August, Felipe Medina-Aguayo (a former student at Warwick) and Richard Everitt (who has now joined Warwick) arXived a paper on multiple importance sampling (for normalising constants) that goes “exploring some improvements and variations of the balance heuristic via a novel extended-space representation of the estimator, leading to straightforward annealing schemes for variance reduction purposes”, with the interesting side remark that Rao-Blackwellisation may prove sub-optimal when there are many terms in the proposal family, in the sense that not every term in the mixture gets sampled. As already noticed by Victor Elvira and co-authors, getting rid of the components that are not used is an improvement that does not induce a bias. The paper also notices that the loss due to using sample sizes rather than expected sample sizes is of second order, compared with the variance of the compared estimators. It further relates to a completion or auxiliary perspective that reminds me of the approaches we adopted in the population Monte Carlo papers and in the vanilla Rao-Blackwellisation paper. But it somewhat diverges from this literature when entering a simulated annealing perspective, in that the importance distributions it considers are freely chosen as powers of a generic target. It is quite surprising that, despite the normalising weights being unknown, a simulated annealing approach produces an unbiased estimator of the initial normalising constant. Another surprise is that the extended target associated with their balance heuristic does not admit the right density as marginal but preserves the same normalising constant… (This paper will be presented at BayesComp 2020.)
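For context, the balance heuristic weighs every draw by the full mixture of proposals: with n_k draws from each component q_k, the estimator of Z = ∫f averages f(x)/Σ_k (n_k/N)q_k(x). A minimal numerical sketch, with a toy Gaussian target and hand-picked proposal components of my own choosing (not the tempered family of the paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Unnormalised target f = Z * N(0,1), so the true normalising constant is Z = 2
Z_true = 2.0
f = lambda x: Z_true * stats.norm.pdf(x)

# Illustrative proposal family with deterministic sample sizes n_k
proposals = [stats.norm(-1, 1.5), stats.norm(0, 1), stats.norm(1, 1.5)]
sizes = [500, 500, 500]
N = sum(sizes)

# Draw from each component, weight every draw by the *full* mixture density
xs = np.concatenate([q.rvs(size=m, random_state=rng)
                     for q, m in zip(proposals, sizes)])
mix = sum((m / N) * q.pdf(xs) for q, m in zip(proposals, sizes))
Z_hat = np.mean(f(xs) / mix)   # balance-heuristic estimate of Z
print(Z_hat)                   # close to 2
```

The estimator is unbiased for Z whatever the (fixed) allocation of sample sizes, which is what makes the extended-space and annealing variations of the paper possible.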

## Archive for variance reduction

## revisiting the balance heuristic

Posted in Statistics with tags Mathematical Sciences Building, multiple importance methods, normalising constant, population Monte Carlo, Rao-Blackwellisation, United Kingdom, University of Warwick, variance reduction on October 24, 2019 by xi'an

## ABC by QMC

Posted in Books, Kids, Statistics, University life with tags ABC, ABC-PMC, ABC-SMC, CREST, JCGS, PhD thesis, population genetics, population Monte Carlo, qMC, quasi-Monte Carlo methods, variance reduction on November 5, 2018 by xi'an

**A** paper by Alexander Buchholz (CREST) and Nicolas Chopin (CREST) on quasi-Monte Carlo methods for ABC is going to appear in the *Journal of Computational and Graphical Statistics*. I had missed the opportunity when it was posted on arXiv and only became aware of the paper’s contents when I reviewed Alexander’s thesis for the doctoral school. The fact that the parameters are simulated (in ABC) from a prior that is quite generally a standard distribution while the pseudo-observations are simulated from a complex distribution (associated with the intractability of the likelihood function) means that the use of quasi-Monte Carlo sequences is in general only possible for the first part.

The ABC context studied there is close to the original version of the ABC rejection scheme [as opposed to SMC and importance versions], the main difference being the use of M pseudo-observations instead of one (of the same size as the initial data). This repeated version has been discussed and abandoned in a strict Monte Carlo framework in favor of M=1, as it increases the overall variance, but the paper uses this version to show that the multiplication of pseudo-observations in a quasi-Monte Carlo framework does not increase the variance of the estimator. (Since the variance apparently remains constant when taking into account the generation time of the pseudo-data, we can however dispute the interest of this multiplication, except to produce a constant variance estimator, for some targets, or to be used for convergence assessment.) The article also covers the bias correction solution of Lee and Łatuszyński (2014).

Due to the simultaneous presence of pseudo-random and quasi-random sequences in the approximations, the authors use the notion of mixed sequences, for which they extend a one-dimensional central limit theorem. The paper’s focus on the estimation of Z(ε), the normalising constant of the ABC density, i.e., the predictive probability of accepting a simulation, which can be estimated at a speed of O(N⁻¹) where N is the number of QMC simulations, is a wee bit puzzling, as I cannot figure out the relevance of this constant (a function of ε), especially since the result does not seem to generalise directly to other ABC estimators.
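A small sketch of the mixed-sequence idea on a toy normal model (my own choices of model, tolerance, and number M of pseudo-observations, not the paper's examples): the parameters come from a scrambled Sobol' sequence pushed through the prior inverse CDF, while the pseudo-data remain pseudo-random, and both Z(ε) and the ABC posterior mean are read off the per-parameter acceptance frequencies.

```python
import numpy as np
from scipy import stats
from scipy.stats import qmc

rng = np.random.default_rng(1)
y_obs = 1.0   # observed datum in a toy model y | theta ~ N(theta, 1)

# QMC on the prior part only: map a scrambled Sobol' sequence through the
# inverse CDF of the N(0,1) prior
N, M, eps = 1024, 4, 0.3
u = qmc.Sobol(d=1, scramble=True, seed=1).random(N).ravel()
theta = stats.norm.ppf(u)

# The pseudo-observations stay pseudo-random: M replicates per parameter
pseudo = theta[:, None] + rng.standard_normal((N, M))

# Per-parameter acceptance frequency over the M replicates
w = (np.abs(pseudo - y_obs) < eps).mean(axis=1)

Z_eps = w.mean()                         # estimate of Z(eps)
post_mean = (w * theta).sum() / w.sum()  # ABC posterior mean of theta
print(Z_eps, post_mean)
```

With this conjugate toy model the exact posterior mean (as ε goes to zero) is y_obs/2, which the weighted average approaches.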

A second half of the paper considers a sequential version of ABC, as in ABC-SMC and ABC-PMC, where the proposal distribution is based on a Normal mixture with a *small* number of components, estimated from the (particle) sample of the previous iteration. Even though efficient techniques for estimating this mixture are available, this innovative step requires a calculation time that should be taken into account in the comparisons. The construction of a decreasing sequence of tolerances ε also seems pushed beyond and below what a sequential approach like that of Del Moral, Doucet and Jasra (2012) would produce, seemingly with the justification that lower tolerances are always preferable. This is not necessarily the case, as recent articles by Li and Fearnhead (2018a, 2018b) and ours (Frazier et al., 2018) have shown. Overall, since ABC methods are large consumers of simulation, it is interesting to see how the contribution of QMC sequences results in the reduction of variance and to hope to see appropriate packages added for standard distributions. However, since the most consuming part of the algorithm is due to the simulation of the pseudo-data, in most cases, it would seem that the most relevant focus should be on QMC add-ons on this part, which may be feasible for models with a huge number of standard auxiliary variables, as for instance in population evolution.

## Markov chain importance sampling

Posted in Books, pictures, Running, Statistics, Travel, University life with tags Berlin, Euler discretisation, Freie Universität Berlin, importance sampling, Ingmar Schuster, Langevin MCMC algorithm, marginal, MCMC algorithms, Metropolis-Hastings algorithm, Rao-Blackwellisation, Université Paris Dauphine, variance reduction on May 31, 2018 by xi'an

**I**ngmar Schuster (formerly a postdoc at Dauphine and now in Freie Universität Berlin) and Ilja Klebanov (from Berlin) have recently arXived a paper on recycling proposed values in [a rather large class of] Metropolis-Hastings and unadjusted Langevin algorithms. This means using the proposed variates of one of these algorithms in an importance sampler, with the importance weight moving from the target over the (fully conditional) proposal to the target over the marginal proposal distribution in the stationary regime. In the Metropolis-Hastings case, since the latter is not available in most setups, the authors suggest using a Rao-Blackwellised nonparametric estimate based on the entire MCMC chain. Or a subset.
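A sketch of the recycling idea on a toy Gaussian target with random-walk Metropolis (tuning, target, and the thinning of the chain for the Rao-Blackwellised mixture are illustrative choices of mine, not the authors' settings):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
log_pi = lambda x: stats.norm.logpdf(x, 2.0, 1.0)  # toy target: N(2,1)
sigma = 1.0                                        # random-walk scale

# Random-walk Metropolis, keeping every *proposed* value as well
T, x = 3000, 0.0
states, props = np.empty(T), np.empty(T)
for t in range(T):
    y = x + sigma * rng.standard_normal()
    if np.log(rng.random()) < log_pi(y) - log_pi(x):
        x = y
    states[t], props[t] = x, y

# Rao-Blackwellised estimate of the marginal proposal density: a mixture of
# the conditional kernels q(.|x) over a thinned subset of the chain
centres = states[::10]
q_marg = stats.norm.pdf(props[:, None], centres[None, :], sigma).mean(axis=1)

# Self-normalised importance weights on the proposals, target over marginal
w = np.exp(log_pi(props)) / q_marg
est = np.sum(w * props) / np.sum(w)   # IS estimate of E_pi[X]
print(est)                            # close to 2
```

Every proposed value contributes to the estimate, accepted or not, which is the practical gain the paper reports for a fixed simulation budget.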

“Our estimator refutes the folk theorem that it is hard to estimate [the normalising constant] with mainstream Monte Carlo methods such as Metropolis-Hastings.”

The paper thus brings an interesting focus on the proposed values, rather than on the original Markov chain, which naturally brings back to mind the derivation of the joint distribution of these proposed values we made in our (1996) Rao-Blackwellisation paper with George Casella. Where we considered a parametric and non-asymptotic version of this distribution, which brings a guaranteed improvement to MCMC (Metropolis-Hastings) estimates of integrals. In subsequent papers with George, we tried to quantify this improvement and to compare different importance samplers based on some importance sampling corrections, but as far as I remember, we only got partial results along the way, and did not cover the special case of the normalising constant… Normalising constants did not seem such a pressing issue at that time, I figure. (A *Monte Carlo 101* question: how can we be certain the importance sampler offers a finite variance?)

Ingmar’s views about this:

I think this is interesting future work. My intuition is that for Metropolis-Hastings importance sampling with random walk proposals, the variance is guaranteed to be finite because the importance distribution ρ_θ is a convolution of your target ρ with the random walk kernel q. This guarantees that the tails of ρ_θ are no lighter than those of ρ. What other forms of q mean for the tails of ρ_θ I have less intuition about.

When considering the Langevin alternative with transition (4), I was first confused and thought it was incorrect for moving from one value of Y (proposal) to the next. But that’s what unadjusted means in “unadjusted Langevin”! As pointed out in the early Langevin literature, e.g., by Gareth Roberts and Richard Tweedie, using a discretised Langevin diffusion in an MCMC framework means there is a risk of non-stationarity & non-ergodicity. Obviously, the corrected (MALA) version is more delicate to approximate (?) but at the very least it ensures the Markov chain does not diverge. Even when the unadjusted Langevin has a stationary regime, its joint distribution is likely quite far from the joint distribution of a proper discretisation. Now this also made me think about a parameterised version in the 1996 paper spirit, but there is nothing specific about MALA that would prevent the implementation of the general principle. As for the unadjusted version, the joint distribution is directly available. (But not necessarily the marginals.)

Here is an answer from Ingmar about that point:

Personally, I think the most interesting part is the practical performance gain in terms of estimation accuracy for fixed CPU time, combined with the convergence guarantee from the CLT. ULA was particularly important to us because of the papers of Arnak Dalalyan, Alain Durmus & Eric Moulines and recently from Mike Jordan’s group, which all look at an unadjusted Langevin diffusion (and unimodal target distributions). But MALA admits a Metropolis-Hastings importance sampling estimator, just as Random Walk Metropolis does – we didn’t include MALA in the experiments to not get people confused with MALA and ULA. But there is no delicacy involved whatsoever in approximating the marginal MALA proposal distribution. The beauty of our approach is that it works for almost all Metropolis-Hastings algorithms where you can evaluate the proposal density q, there is no constraint to use random walks at all (we will emphasize this more in the paper).

## a neat (theoretical) Monte Carlo result

Posted in Books, Statistics, University life with tags confidence sets, importance sampling, Monte Carlo integration, unbiased estimation, variance reduction on December 19, 2014 by xi'an

**M**ark Huber just arXived a short paper where he develops a Monte Carlo approach that bounds the probability of large errors by computing a lower bound on the sample size r. I wondered at the presence of μ in the bound, as it indicates the approach is not translation invariant. One reason is that the standard deviation of the simulated random variables is bounded by cμ. Another reason is that Mark uses the median of randomised partial averages as his estimator, where the S’s are partial averages of sufficient length and the R’s are independent uniforms over (1−ε,1+ε): using those uniforms may improve the coverage of given intervals but it also means that the absolute scale of the error is multiplied by the scale of S, namely μ. I first thought that some a posteriori recentering could improve the bound but, since this does not impact the variance of the simulated random variables, I doubt it is possible.
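As far as I can tell the estimator has the following shape; a hedged sketch only, as the grouping size k, the value of ε, and the Gamma toy example are illustrative choices of mine, not Mark's calibration:

```python
import numpy as np

rng = np.random.default_rng(3)

def median_of_randomised_means(draws, k, eps, rng):
    # S's: k partial averages of equal length; R's: independent U(1-eps, 1+eps)
    S = draws.reshape(k, -1).mean(axis=1)
    R = rng.uniform(1 - eps, 1 + eps, size=k)
    return np.median(R * S)

# Nonnegative toy variables with mean mu = 3, so the scale of S is mu
draws = rng.gamma(shape=3.0, scale=1.0, size=6000)
est = median_of_randomised_means(draws, k=30, eps=0.1, rng=rng)
print(est)   # close to 3
```

The multiplication by the R’s smooths the distribution of the median, but it also shows where the factor μ enters: the uniform perturbation acts multiplicatively on S, whose scale is μ.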

## MCMC with control variates

Posted in Books, Statistics, University life with tags Athens, control variate, Gibbs sampling, JRSSB, MCMC, Monte Carlo Statistical Methods, Poisson equation, Royal Statistical Society, Series B, simulation, variance reduction on February 17, 2012 by xi'an

**I**n the latest issue of JRSS Series B (74(1), Jan, 2012), I just noticed that no paper is “from my time” as co-editor, i.e. that all of them have been submitted after I completed my term in Jan. 2010. Given the two year delay, this is not that surprising, but it also means I can make comments on some papers w/o reservation! A paper I had seen earlier *(as a reader, not as an editor nor as a referee!)* is Petros Dellaportas’ and Ioannis Kontoyiannis’ *Control variates for estimation based on reversible Markov chain Monte Carlo samplers*. The idea is one of post-processing MCMC output, by stabilising the empirical average via control variates. There are two difficulties, one in finding control variates, i.e. functions $\Psi(\cdot)$ with zero expectation under the target distribution, and another one in estimating the optimal coefficient in a consistent way. The paper solves the first difficulty by using the Poisson equation, namely that *G(x)-KG(x)* has zero expectation under the stationary distribution associated with the Markov kernel *K*. Therefore, if *KG* can be computed in closed form, this is a generic control variate taking advantage of the MCMC algorithm. Of course, the above *if* is a big *if*: it seems difficult to find closed form solutions when using a Metropolis-Hastings algorithm for instance, and the paper only contains illustrations within the conjugate prior/Gibbs sampling framework. The second difficulty is also met by Dellaportas and Kontoyiannis, who show that the asymptotic variance in the resulting central limit theorem can be equal to zero in some cases.
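The construction can be illustrated on a case where KG is available in closed form, here a toy AR(1) kernel rather than the Gibbs samplers of the paper: with G(x)=x, E[X(t+1)|X(t)=x]=ρx, so Ψ(x)=G(x)−KG(x)=(1−ρ)x has zero expectation under the stationary N(0,1) distribution, and the optimally weighted correction essentially annihilates the variance when estimating E[X], an instance of the zero-asymptotic-variance phenomenon:

```python
import numpy as np

rng = np.random.default_rng(4)
rho, T = 0.9, 20000

# AR(1) kernel x' = rho*x + sqrt(1-rho^2)*z, stationary distribution N(0,1)
x = np.empty(T)
x[0] = rng.standard_normal()
for t in range(T - 1):
    x[t + 1] = rho * x[t] + np.sqrt(1 - rho**2) * rng.standard_normal()

# Poisson-equation control variate: KG(x) = rho*x in closed form for G(x) = x
psi = x - rho * x            # G - KG, zero mean under the stationary law

# Plain ergodic average vs control-variate-corrected average of h(x) = x
theta = np.cov(x, psi)[0, 1] / psi.var(ddof=1)  # estimated optimal coefficient
plain = x.mean()
corrected = (x - theta * psi).mean()
print(plain, corrected)      # the corrected average is numerically zero
```

Here the correction is exact because h itself solves the Poisson equation up to a constant; in general the gain depends on how well G approximates that solution.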

## yet more questions about Monte Carlo Statistical Methods

Posted in Books, Statistics, University life with tags Brigham Young University, Cauchy-Schwarz inequality, finite variance, importance sampling, Monte Carlo Statistical Methods, Provo, simulation, textbook, typos, Utah, variance reduction on December 8, 2011 by xi'an

**A**s a coincidence, here is the third email I got this week about typos in *Monte Carlo Statistical Methods*, from Peng Yu this time. (Which suits me well in terms of posts as I am currently travelling to Provo, Utah!)

I’m reading the section on importance sampling. But there are a few cases in your book MCSM2 that are not clear to me.

On page 96: “Theorem 3.12 suggests looking for distributions g for which |h|f/g is almost constant with finite variance.”

What is the precise meaning of “almost constant”? If |h|f/g is almost constant, how come its variance is not finite?

“Almost constant” is not a well-defined property, I am afraid. By this sentence on page 96 we meant using densities g that made *|h|f/g* as little varying as possible while being manageable. Hence the insistence on the finite variance. Of course, the closer *|h|f/g* is to a constant function the more likely the variance is to be finite.

“It is important to note that although the finite variance constraint is not necessary for the convergence of (3.8) and of (3.11), importance sampling performs quite poorly when (3.12) ….”

It is not obvious to me why importance sampling performs poorly when (3.12) holds. I might have overlooked some very simple facts. Would you please remind me why it is the case? From the previous discussion in the same section, it seems that h(x) is missing in (3.12). I think that (3.12) should be (please compare with the first equation in section 3.3.2)

The preference for a finite variance of *f/g* and against (3.12) is that we would like the importance function *g* to work well for most integrable functions *h*. Hence a requirement that the importance weight *f/g* itself behaves well. It guarantees some robustness across the *h*‘s and also avoids checking for the finite variance (as in your displayed equation) for all functions *h* that are square-integrable against *g*, by virtue of the Cauchy-Schwarz inequality.
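A quick numerical illustration of why the tail behaviour of f/g matters, with a toy target and proposals of my own choosing: with a heavier-tailed proposal the weight f/g is bounded and the estimate is reliable for any square-integrable h, while a lighter-tailed proposal makes f/g explode in the tails, even though both samplers are formally valid.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
f = stats.norm(0, 1)        # target density
h = lambda x: x**2          # one choice of integrand, with E_f[h(X)] = 1

def importance_estimate(g, n=50000):
    x = g.rvs(size=n, random_state=rng)
    w = f.pdf(x) / g.pdf(x)             # importance weights f/g
    return np.mean(w * h(x)), w.max() / w.mean()

# Heavy-tailed proposal: f/g is bounded, hence finite-variance weights
est_t, ratio_t = importance_estimate(stats.t(df=3))
# Light-tailed proposal: f/g blows up in the tails (infinite-variance weights)
est_n, ratio_n = importance_estimate(stats.norm(0, 0.5))
print(est_t, ratio_t)
print(est_n, ratio_n)
```

The maximum-to-mean weight ratio is a crude diagnostic: it stays moderate for the Student proposal and is typically enormous for the narrow Normal one, whose estimate is dominated by a few extreme draws.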

## Alternative evidence estimation in phylogenetics

Posted in Statistics with tags ABC, Bayesian model choice, evidence, harmonic mean estimator, MCMC, variance reduction on January 21, 2010 by xi'an

**An alternative marginal likelihood estimator for phylogenetic models** is the title of the paper recently posted by Arima and Tardella on arXiv. While working on phylogenetic trees, it does not consider an ABC approach for computing the evidence but instead relies on harmonic mean estimators (since thermodynamic alternatives take too long). It is based upon the harmonic mean identity for the marginal (already discussed in this post), namely

$$\dfrac{1}{m(x)} = \int \dfrac{\varphi(\theta)}{\pi(\theta)f(x|\theta)}\,\pi(\theta|x)\,\text{d}\theta\,,$$

which holds for *any* density *φ*. As Arima and Tardella point out, the special case when *φ* is the prior (Newton and Raftery, 1994, JRSS Series B) is still the one used in phylogenetic software like **MrBayes**, **PHASE** and **BEAST**, despite clear warnings about its unreliability! In **An alternative marginal likelihood estimator for phylogenetic models**, the authors propose a choice of *φ* that avoids the infinite variance problem of the original harmonic mean estimator. They perturb the unnormalised posterior distribution (in one dimension, when the mode is near zero) by pulling the density up by a controlled amount. I find it interesting if only because it goes against my own intuition (and solution) of removing mass from the posterior by concentrating around the mode. The variance of the corresponding harmonic mean estimator is finite if the tails of the perturbed density do not decrease too rapidly, I think. The remainder of the paper studies the impact of this approximation technique in the setting of phylogenetic models for comparing trees.
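The identity can be checked on a conjugate toy model where the evidence is known exactly; below I use it with φ equal to the prior (the unreliable Newton–Raftery version, which has infinite variance even in this mild example) and with a φ more concentrated than the posterior (in the spirit of my removing-mass intuition, not the authors' inflation device); the model and the tuning are mine, not Arima and Tardella's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
y = 1.5   # one observation; conjugate model theta ~ N(0,1), y|theta ~ N(theta,1)
m_true = stats.norm(0, np.sqrt(2)).pdf(y)   # exact evidence m(y) = N(y; 0, 2)

post = stats.norm(y / 2, np.sqrt(0.5))      # exact posterior
theta = post.rvs(size=20000, random_state=rng)
unnorm = stats.norm.pdf(theta) * stats.norm.pdf(y - theta)  # prior x likelihood

def harmonic(phi_pdf):
    # evidence via 1/m(y) = E_post[ phi(theta) / (prior x likelihood) ]
    return 1.0 / np.mean(phi_pdf(theta) / unnorm)

m_prior = harmonic(stats.norm.pdf)              # phi = prior: unstable weights
m_conc = harmonic(stats.norm(y / 2, 0.5).pdf)   # concentrated phi: stable
print(m_true, m_prior, m_conc)
```

With φ lighter-tailed than the posterior, the ratio φ/(prior×likelihood) is bounded and the estimator is well behaved; with φ equal to the prior, the ratio 1/likelihood has no posterior second moment and the estimate is erratic.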