Archive for Monte Carlo Statistical Methods

more and more control variates

Posted in Statistics on October 5, 2018 by xi'an

A few months ago, François Portier and Johan Segers arXived a paper on a question that has always puzzled me, namely how to add control variates to a Monte Carlo estimator and when to stop if needed! The paper is called Monte Carlo integration with a growing number of control variates. It is related to the earlier Oates, Girolami and Chopin (2017), which I remember discussing with Chris when he was in Warwick. The puzzling issue of control variates is [for me] that, while the optimal weight always decreases the variance of the resulting estimate, in practical terms, implementing the method may increase the actual variance. Glynn and Szechtman at MCqMC 2000 identify six different ways of creating the estimate, depending on how the covariance matrix, denoted P(hh’), is estimated, with only one version integrating constant functions and control variates exactly. This version also happens to be a solution to an empirical likelihood maximisation under the empirical constraints imposed by the control variates. Another interesting feature is that, when the number m of control variates grows with the number n of simulations, the asymptotic variance goes to zero, meaning that the control variate estimator converges at a faster speed.

Creating an infinite sequence of control variates sounds unachievable in a realistic situation. Legendre polynomials are used in the paper, but is there a generic and cheap way of getting these? And … control variate selection, anyone?!
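
Here is a minimal R sketch of the basic construction (my own toy, not the paper's method), using the first two shifted Legendre polynomials as mean-zero control variates for a Uniform(0,1) target, with the weights estimated by OLS regression, a common implementation that returns the control variate estimate as the regression intercept:

## toy sketch (mine, not the paper's): control variates for E[exp(U)], U~U(0,1)
set.seed(1)
n <- 1e4
u <- runif(n)
f <- exp(u)                                  # integrand, true mean exp(1)-1
h <- cbind(2 * u - 1, 6 * u^2 - 6 * u + 1)   # shifted Legendre polynomials, zero mean
fit <- lm(f ~ h)                             # OLS of the integrand on the controls
c(plain = mean(f),                           # crude Monte Carlo estimate
  cntrl = unname(coef(fit)[1]),              # control variate estimate (intercept)
  exact = exp(1) - 1)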

Gibbs for incompatible kids

Posted in Books, Statistics, University life on September 27, 2018 by xi'an

In continuation of my earlier post on Bayesian GANs, which resort to strongly incompatible conditionals, I read a 2015 paper of Chen and Ip that I had missed. (Published in the Journal of Statistical Computation and Simulation, which I first confused with JCGS and which I do not know at all. Actually, when looking at its editorial board, I recognised only one name.) But the study therein is quite disappointing, as it considers Markov chains on finite state spaces, meaning that the transition distributions are matrices, and hence that convergence is ensured whenever these matrices have no null probability term. And while the paper is motivated by realistic situations where incompatible conditionals can reasonably appear, it only produces illustrations on two- and three-state Markov chains. Not that helpful, in the end… The game is still afoot!
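
As a toy illustration (mine, not the paper's), take two binary components with incompatible conditionals p(x|y) and p(y|x): since both conditional matrices have no null entry, the x-marginal of the systematic-scan "Gibbs" chain still admits a unique stationary distribution, recovered below as a left eigenvector:

## toy illustration (not from the paper): incompatible conditionals on {1,2}x{1,2}
pxy <- matrix(c(.7, .3,                  # p(x=1|y=1), p(x=2|y=1)
                .2, .8), 2, byrow = TRUE)
pyx <- matrix(c(.4, .6,                  # p(y=1|x=1), p(y=2|x=1)
                .9, .1), 2, byrow = TRUE)
K <- pyx %*% pxy                         # x-transition after one (y then x) sweep
statio <- Re(eigen(t(K))$vectors[, 1])   # left eigenvector for eigenvalue 1
statio / sum(statio)                     # unique stationary law of the x-chain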

rethinking the ESS

Posted in Statistics on September 14, 2018 by xi'an

Following Victor Elvira‘s visit to Dauphine, a year and a half ago, where we discussed the many defects of ESS as a default measure of efficiency for importance sampling estimators, and then some more efforts (mostly from Victor!) to formalise these criticisms, Victor, Luca Martino and I wrote a paper on this notion, now arXived. (Victor most kindly attributes the origin of the paper to a 2010 ‘Og post on the topic!) The starting thread of the (re?)analysis of this tool introduced by Kong (1992) is that the ESS used in the literature is an approximation to the “true” ESS, which is generally unavailable. An approximation that is pretty crude and hence impacts the relevance of using it as the assessment tool for comparing importance sampling methods. In the paper, we re-derive (with the utmost precision) the resulting approximation and list the many assumptions that [would] validate it. The resulting drawbacks are many, from the absurd property of always being worse than direct sampling, to being independent of the target function and of the sample per se, since only the importance weights matter. This list of issues is not exactly brand new, but we think it is worth signaling given that this approximation has been widely used in the last 25 years, due to its simplicity, as a practical rule of thumb [!] in a wide variety of importance sampling methods. In continuation of the directions drafted in Martino et al. (2017), we also indicate some alternative notions of importance efficiency. Note that this paper does not cover the use of ESS for MCMC algorithms, where it is somewhat more legit, if still too rudimentary to really catch convergence or lack thereof! [Note: I refrained from the post title resinking the ESS…]
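
For the record, the approximation in question only involves the importance weights, as in this minimal R sketch (target, proposal and sample size are arbitrary choices of mine):

## minimal sketch: the usual ESS approximation, (sum w)^2 / sum(w^2), which
## depends on the weights alone, not on the integrand or on the sample values
set.seed(2)
n <- 1e4
x <- rnorm(n, 0, 3)                  # proposal N(0,9)
w <- dnorm(x) / dnorm(x, 0, 3)       # importance weights for target N(0,1)
ess <- sum(w)^2 / sum(w^2)
c(ESS = ess, ratio = ess / n)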

asymptotics of M³C²L

Posted in Statistics on August 19, 2018 by xi'an

In a recent arXival, Blazej Miasojedow, Wojciech Niemiro and Wojciech Rejchel establish the convergence of a maximum likelihood estimator based on an MCMC approximation of the likelihood function. As in intractable normalising constants. The main result in the paper is a central limit theorem for the M³C²L estimator that incorporates an additional asymptotic variance term for the Monte Carlo error, where both the sample size n and the number m of simulations go to infinity, independently so. However, I do not fully perceive the relevance of using an MCMC chain to target an importance function [which is used in the approximation of the normalising constant or otherwise of the intractable likelihood], relative to picking an importance function h(.) that can be directly simulated.
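
For illustration, here is a toy version of the Monte Carlo likelihood idea (my own example, with a normalising constant that is in fact available), where the ratio Z(theta)/Z(theta₀) is replaced by an average over m simulations from the reference parameter theta₀=0:

## toy illustration (mine): Monte Carlo MLE for p(x|theta) proportional to
## exp(theta*x - x^2/2), i.e. a N(theta,1) model where Z(theta) is pretended
## intractable; Z(theta)/Z(0) is estimated by an average over m draws from N(0,1)
set.seed(3)
theta0 <- 0.8                             # parameter behind the data
n <- 500; m <- 5e4                        # sample size and number of simulations
y <- rnorm(n, mean = theta0)              # observed sample
x <- rnorm(m)                             # simulations from the reference density
mcloglik <- function(theta)
  theta * sum(y) - n * log(mean(exp(theta * x)))
optimize(mcloglik, c(-3, 3), maximum = TRUE)$maximum   # close to mean(y)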

approximative Laplace

Posted in Books, R, Statistics on August 18, 2018 by xi'an

I came across this question on X validated that wondered about one of our examples in Monte Carlo Statistical Methods. We have included a section on Laplace approximations in the Monte Carlo integration chapter, with a bit of reluctance on my side as this type of integral approximation does not directly connect to Monte Carlo methods. Even less in the case of the example, as we aimed at replacing a coverage probability for a Gamma distribution with a formal Laplace approximation. Formal due to the lack of asymptotics, besides the length of the interval (a,b) whose probability is approximated. Hence, on top of the typos, the point of the example is not crystal clear, in that it does not show much more than that the approximation converges as the interval length gets to zero. For instance, using instead a flat approximation produces an almost equally good approximation:

> xact(5,2,7,9)
[1] 0.1933414
> laplace(5,2,7,9)
[1] 0.1933507
> flat(5,2,7,9)
[1] 0.1953668

What may be more surprising is the resilience of the approximation as the width of the interval increases:

> xact(5,2,5,11)
[1] 0.53366
> laplace(5,2,5,11)
[1] 0.5354954
> flat(5,2,5,11)
[1] 0.5861004
> quad(5,2,5,11)
[1] 0.434131
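
For completeness, here is a reconstruction of the four functions that is consistent with the outputs above (my own code, not necessarily the one behind the book example): the exact Gamma(shape α, scale β) probability of (a,b), a Laplace (Gaussian) approximation centred at the mode, a flat approximation freezing the density at the mode, and a trapezoidal rule for quad:

## reconstruction consistent with the outputs above (my code, not the book's)
xact <- function(al, be, a, b)           # exact Gamma(al, scale=be) coverage of (a,b)
  pgamma(b, shape = al, scale = be) - pgamma(a, shape = al, scale = be)
laplace <- function(al, be, a, b) {      # Gaussian (Laplace) approximation at the mode
  mo <- (al - 1) * be                    # mode of the Gamma density
  si <- be * sqrt(al - 1)                # scale from the curvature of the log-density
  dgamma(mo, shape = al, scale = be) * si * sqrt(2 * pi) *
    (pnorm(b, mo, si) - pnorm(a, mo, si))
}
flat <- function(al, be, a, b) {         # flat approximation: density taken at the mode
  mo <- (al - 1) * be
  dgamma(mo, shape = al, scale = be) * (b - a)
}
quad <- function(al, be, a, b)           # trapezoidal rule on the two endpoints
  (b - a) * (dgamma(a, shape = al, scale = be) +
             dgamma(b, shape = al, scale = be)) / 2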

optimal approximations for importance sampling

Posted in Mountains, pictures, Statistics, Travel on August 17, 2018 by xi'an

“…building such a zero variance estimator is most of the times not practical…”

As I was checking [while looking at Tofino inlet from my rental window] on optimal importance functions following a question on X validated, I came across this arXived note by Pantaleoni and Heitz, where they suggest using weighted sums of step functions to reach minimum variance. However, the difficulty with probability densities that are step functions is that they necessarily have a compact support, which thus makes them unsuitable for target integrands with non-compact support, and makes the purpose of the note and the derivation of the optimal weights somewhat moot. The note points out its connection with the reference paper of Veach and Guibas (1995), as well as with He and Owen (2014), a follow-up to the other reference paper by Owen and Zhou (2000).
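
As a quick sanity check of the step-function idea (a toy of my own, not the note's algorithm), a piecewise-constant importance density roughly proportional to the integrand on a compact support already crushes the variance relative to plain Monte Carlo:

## toy sketch (mine): piecewise-constant importance density proportional to the
## integrand f on (0,1) versus plain Monte Carlo for E[f(U)], U ~ U(0,1)
set.seed(4)
f <- function(x) x^4                        # integrand, exact mean 1/5
K <- 20                                     # number of steps
br <- seq(0, 1, length.out = K + 1)         # step boundaries
p <- f((br[-1] + br[-(K + 1)]) / 2)         # step heights from midpoint values
p <- p / sum(p)                             # step selection probabilities
n <- 1e4
j <- sample(K, n, replace = TRUE, prob = p) # pick a step
x <- runif(n, br[j], br[j + 1])             # draw uniformly within that step
dens <- p[j] * K                            # value of the step density at x
c(plain = var(f(runif(n))) / n,             # variance of the crude estimator
  steps = var(f(x) / dens) / n)             # variance of the step-function estimator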

JSM 2018 [#3]

Posted in Mountains, pictures, Statistics, Travel, University life on August 2, 2018 by xi'an

Third day at JSM2018 and the audience is already much smaller than on the previous days! Although it is hard to tell with a humongous conference centre spread between two buildings. And not getting hooked by the tantalising view of the bay, with waterplanes taking off every few minutes…

Still, there were (too) few participants in the two computational statistics (MCMC) sessions I attended in the morning, the first one being organised by James Flegal on different assessments of MCMC convergence. (Although this small audience made the session quite homely!) In his own talk, James developed an interesting version of multivariate ESS that he related to a stopping rule for minimal precision. Vivek Roy also spoke about a multiple importance sampling construction I missed when it came out on arXiv last May.
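
For context (and only as my guess at the quantity underlying the talk), the multivariate ESS of Vats, Flegal and Jones compares the determinant of the iid covariance estimate with that of the asymptotic covariance estimated by batch means:

## sketch of the multivariate ESS, mESS = n (det(Lambda)/det(Sigma))^(1/p),
## with Sigma estimated by non-overlapping batch means (a guess at the talk's
## starting point, not its actual content)
multiESS <- function(chain, b = floor(sqrt(nrow(chain)))) {
  n <- nrow(chain); p <- ncol(chain); a <- floor(n / b)
  Lambda <- cov(chain)                                  # iid covariance estimate
  bmns <- sapply(1:p, function(k)
    colMeans(matrix(chain[1:(a * b), k], nrow = b)))    # a x p matrix of batch means
  Sigma <- b * cov(bmns)                                # asymptotic covariance estimate
  n * (det(Lambda) / det(Sigma))^(1 / p)
}
# e.g. multiESS(matrix(rnorm(2e4), ncol = 2))   # iid case: mESS close to n = 1e4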

In the second session, Mylène Bédard presented the construction of, and the improvement brought by, local scaling in MALA, with a 20% gain from using non-local tuning. Making me idly muse over whether block sizes in block-Gibbs sampling could also be locally optimised… Then Aaron Smith discussed how HMC should be scaled for optimal performance, under rather idealised conditions and in very high dimensions, mentioning a running time of d, the dimension, to the power ¼, but not addressing the practical question of calibrating the scale versus the number of steps in the discretised version. (At which time my hands were [sort of] frozen solid thanks to the absurd air conditioning in the conference centre and I had to get out!)