**T**he recent arXival by Takashi Goda of Computing the variance of a conditional expectation via non-nested Monte Carlo led me to read it, as I could not be certain of the contents from the title alone! The short paper considers the issue of estimating the variance of a conditional expectation when one is able to simulate from the joint distribution behind the quantity of interest. The second moment E(E[f(X)|Y]²) can be written as a triple integral with two versions of x given y and one marginal y, which means that it can be approximated in an unbiased manner by simulating a realisation of y and then, conditionally, two realisations of x. The variance requires a third simulation of x, which the author seems to deem too costly and which he hence replaces with another unbiased version based on two conditional generations only. (He notes that a faster biased version is available, with a bias going down faster than the Monte Carlo error, which makes the alternative somewhat irrelevant, as it is also costly to derive.) An open question after reading the paper is the optimal version of the generic estimator (5), although finding the optimum may require more computing time than it is worth spending. Another one is whether or not this version of the variance of the conditional expectation is more interesting (computation-wise) than the difference between the variance and the expected conditional variance as reproduced in (3), given that both quantities can equally be approximated by unbiased Monte Carlo…
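The non-nested idea fits in a few lines: one draw of y and two conditional draws of x give an unbiased estimate of the second moment E(E[f(X)|Y]²), while cross-products of f values attached to *distinct* y's give an unbiased estimate of the squared mean. The sketch below is my own illustration of this construction, not necessarily the paper's estimator (5); the toy model (Y standard normal, X|Y normal with mean Y, f the identity) is chosen so that the target variance is exactly 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def var_cond_expect(sample_y, sample_x_given_y, f, n):
    """Unbiased estimate of Var(E[f(X)|Y]) with two conditional draws per y.

    A sketch of the non-nested idea: f(x1)f(x2), with x1, x2 conditionally
    independent given the same y, is unbiased for E[E[f(X)|Y]^2]; the
    cross-products of f values across *different* y's are unbiased for
    (E[f(X)])^2. Subtracting the two gives an unbiased variance estimate.
    """
    ys = np.array([sample_y() for _ in range(n)])
    f1 = np.array([f(sample_x_given_y(y)) for y in ys])
    f2 = np.array([f(sample_x_given_y(y)) for y in ys])
    second_moment = np.mean(f1 * f2)                      # E[E[f(X)|Y]^2]
    s = f1.sum()
    mean_sq = (s * s - np.sum(f1 * f1)) / (n * (n - 1))   # unbiased (E f)^2
    return second_moment - mean_sq

# toy check: Y ~ N(0,1), X|Y ~ N(Y,1), f(x)=x, so E[X|Y]=Y and the target is 1
est = var_cond_expect(lambda: rng.normal(),
                      lambda y: rng.normal(y, 1.0),
                      lambda x: x, 100_000)
```

With 10⁵ outer draws the estimate lands within a few hundredths of the true value 1, at the cost of only two conditional simulations per y.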

## Archive for Monte Carlo Statistical Methods

## Computing the variance of a conditional expectation via non-nested Monte Carlo

Posted in Books, pictures, Statistics, University life with tags conditional probability, debiasing, Monte Carlo approximations, Monte Carlo Statistical Methods, Rao-Blackwellisation on May 26, 2016 by xi'an

## exact, unbiased, what else?!

Posted in Books, Statistics, University life with tags control variate, exact subsampling, Gaussian copula, likelihood, MCMC, Monte Carlo Statistical Methods, pseudo-marginal, Russian roulette, unbiasedness on April 13, 2016 by xi'an

**L**ast week, Matias Quiroz, Mattias Villani, and Robert Kohn arXived a paper on exact subsampling MCMC, a paper that contributes to the current literature on approximating MCMC samplers for large datasets, in connection with an earlier paper of Quiroz et al. discussed here last week.

The “exact” in the title is to be understood in the Russian roulette sense: by using the Rhee and Glynn debiasing device, the authors achieve an unbiased estimator of the likelihood, as in Bardenet et al. (2015). The central tool for the derivation of an unbiased and positive estimator is to find a control variate for each component of the log-likelihood that is good enough for the difference between the component and the control to be bounded from below by a constant *a*. The individual terms *d* in the product are iid unbiased estimates of the log-likelihood difference, and *q* is the sum of the control variates, or maybe more accurately of the cheap substitutes to the exact log-likelihood components, which is thus still of complexity O(n), making the application to tall data more difficult to contemplate.
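The Russian-roulette flavour of the debiasing device can be illustrated on a toy target: truncate an infinite series at a random time and reweight the surviving terms by their inverse survival probabilities, so that the truncated sum remains unbiased for the full one. The sketch below (my own illustration, not the estimator of Quiroz et al.) applies this to the Taylor series of exp(a); positivity of all terms when a ≥ 0 plays the role of the lower bound in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def roulette_exp(a, q=0.6):
    """Unbiased Russian-roulette estimate of exp(a) from its Taylor series.

    Term k of the series survives with probability q**k (k successive coin
    flips), so weighting each surviving term by q**-k keeps the estimator
    unbiased. All terms are positive when a >= 0, mirroring the role of the
    lower bound a in the paper. Illustrative sketch only.
    """
    total, k, term, weight = 0.0, 0, 1.0, 1.0
    while True:
        total += term * weight
        # kill all remaining terms with probability 1 - q
        if rng.random() > q:
            return total
        k += 1
        term *= a / k          # a^k / k!
        weight /= q            # inverse survival probability q^-k
```

Averaging many such draws recovers exp(a), each draw requiring only a geometric number of terms; the price of unbiasedness is the extra variance injected by the random truncation.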

The $64 question is obviously how to produce cheap and efficient control variates that kill the curse of the tall data. (It still irks to resort to this term of *control variate*, really!) Section 3.2 in the paper suggests clustering the data and building an approximation for each cluster, which seems to imply manipulating the whole dataset at this early stage, at a cost of O(Knd). Furthermore, because finding a correct lower bound *a* is close to impossible in practice, the authors use a “soft lower bound”, meaning that it is only an approximation and thus that the estimator in (3.4) can turn negative from time to time, which cancels the validation of the method as a pseudo-marginal approach. The resolution of this difficulty is to resort to the same proxy as in the Russian roulette paper, replacing the unbiased estimator with its absolute value, an answer I already discussed for the Russian roulette paper. An additional step is proposed by Quiroz et al., namely correlating the random numbers between numerator and denominator in their final importance sampling estimator, via a Gaussian copula as in Deligiannidis et al.
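The control-variate mechanism behind this subsampling scheme can be sketched as a difference estimator: build a cheap proxy qᵢ for each log-likelihood term (here a second-order Taylor expansion around a cluster centre) and subsample only the residuals lᵢ − qᵢ. The code below is a hedged illustration of this idea, not the paper's construction: the rounding-based "clustering" is a stand-in for a real clustering step, and since the Gaussian log-likelihood is quadratic the proxy happens to be exact, so the subsampled estimator has zero variance; in general the residuals are small but nonzero.

```python
import numpy as np

rng = np.random.default_rng(2)

theta = 1.0
x = rng.normal(theta, 1.0, size=10_000)

def loglik_terms(theta, x):
    """Per-observation Gaussian log-likelihood terms."""
    return -0.5 * (x - theta) ** 2 - 0.5 * np.log(2 * np.pi)

# proxies: second-order Taylor expansion of each term (in x) around its
# cluster centre; rounding stands in for an actual clustering of the data
centres = np.round(x)
q = (loglik_terms(theta, centres)
     - (centres - theta) * (x - centres)      # first-order term
     - 0.5 * (x - centres) ** 2)              # second-order term

def subsampled_loglik(m):
    """Difference estimator: full sum of proxies + scaled residual subsample."""
    idx = rng.integers(0, len(x), size=m)
    resid = loglik_terms(theta, x[idx]) - q[idx]
    return q.sum() + len(x) / m * resid.sum()
```

The O(n) cost of computing the proxies once, visible in `q.sum()`, is exactly the complexity concern raised above: the subsample only pays for the residuals, but the proxies still touch every observation.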

This paper made me wonder (idly wonder, mind!) anew how to get rid of the vexing unbiasedness requirement. From a statistical and especially from a Bayesian perspective, unbiasedness is a second-order property that cannot be achieved for most transforms of the parameter θ, and that is not preserved under reparameterisation. It is thus vexing and perplexing that unbiasedness is so central to the validation of our Monte Carlo techniques and that any divergence from this canon leaves us wandering blindly with no guarantee of ever reaching the target of the simulation experiment…

## Rémi Bardenet’s seminar

Posted in Kids, pictures, Statistics, Travel, University life with tags ABC in Roma, big data, BiPS, CREST, defense, ENSAE, Institut Henri Poincaré, MCMC algorithms, Monte Carlo Statistical Methods, Nicolas Chopin, PhD thesis, quasi-Monte Carlo methods, seminar, tall data on April 7, 2016 by xi'an

**N**ext week, Rémi Bardenet is giving a seminar in Paris, Thursday April 14, 2pm, in ENSAE [room 15] on MCMC methods for tall data. Unfortunately, I will miss this opportunity to discuss with Rémi as I will be heading to La Sapienza, Roma, for Clara Grazian’s PhD defence the next day. And on Monday afternoon, April 11, Nicolas Chopin will give a talk on quasi-Monte Carlo for sequential problems at Institut Henri Poincaré.

## afternoon on Bayesian computation

Posted in Statistics, Travel, University life with tags advanced Monte Carlo methods, Antonietta Mira, Arnaud Doucet, Bayesian computation, CRiSM, estimating a constant, Ingmar Schuster, Monte Carlo Statistical Methods, pub, United Kingdom, Université Paris Dauphine, University of Oxford, University of Reading, University of Warwick on April 6, 2016 by xi'an

**R**ichard Everitt organises an afternoon workshop on Bayesian computation in Reading, UK, on April 19, the day before the Estimating Constants workshop in Warwick, following a successful afternoon last year. Here is the programme:

- 12:30–13:15 Antonietta Mira, Università della Svizzera italiana
- 13:15–13:45 Ingmar Schuster, Université Paris-Dauphine
- 13:45–14:15 Francois-Xavier Briol, University of Warwick
- 14:15–14:45 Jack Baker, University of Lancaster
- 14:45–15:15 Alexander Mihailov, University of Reading
- 15:15–15:45 Coffee break
- 15:45–16:30 Arnaud Doucet, University of Oxford
- 16:30–17:00 Philip Maybank, University of Reading
- 17:00–17:30 Elske van der Vaart, University of Reading
- 17:30–18:00 Reham Badawy, Aston University
- 18:15–late Pub and food (SCR, UoR campus)

and the general abstract:

The Bayesian approach to statistical inference has seen major successes in the past twenty years, finding application in many areas of science, engineering, finance and elsewhere. The main drivers of these successes were developments in Monte Carlo methods and the wide availability of desktop computers. More recently, the use of standard Monte Carlo methods has become infeasible due to the size and complexity of data now available. This has been countered by the development of next-generation Monte Carlo techniques, which are the topic of this meeting.

The meeting takes place in the Nike Lecture Theatre, Agriculture Building [building number 59].

## Statistical rethinking [book review]

Posted in Books, Kids, R, Statistics, University life with tags Amazon, Bayes theorem, Bayesian data analysis, Bayesian Essentials with R, book review, CHANCE, code, convergence diagnostics, E.T. Jaynes, generalised linear models, golem, maths, matrix algebra, MCMC algorithms, mixtures of distributions, Monte Carlo Statistical Methods, Prague, R, robots, STAN, statistical modelling, Statistical rethinking on April 6, 2016 by xi'an

Statistical Rethinking: A Bayesian Course with Examples in R and Stan is a new book by Richard McElreath that CRC Press sent me for review in CHANCE. While the book was already discussed on Andrew’s blog three months ago, and [rightly so!] enthusiastically recommended by Rasmus Bååth on Amazon, here are the reasons why I am quite impressed by Statistical Rethinking!

“Make no mistake: you will wreck Prague eventually.” (p.10)

While the book has a lot in common with Bayesian Data Analysis, from being in the same CRC series to adopting a pragmatic and weakly informative approach to Bayesian analysis, to supporting the use of STAN, it also nicely develops its own ecosystem and idiosyncrasies, with a noticeable Jaynesian bent. To start with, I like the highly personal style, with clear attempts to make the concepts memorable for students by resorting to external concepts. The best example is the call to the myth of the golem in the first chapter, which McElreath uses as a warning about the use of statistical models (which are almost anagrams of golems!). Golems and models [and robots, another concept invented in Prague!] are man-made devices that strive to accomplish the goal set to them without heeding the consequences of their actions. This first chapter of Statistical Rethinking sets the ground for the rest of the book and gets quite philosophical (albeit in a readable way!) as a result. In particular, there is a most coherent call against hypothesis testing, which by itself justifies the title of the book. Continue reading