Archive for Markov chain Monte Carlo algorithm

rethinking the ESS published!
Posted in Statistics with tags effective sample size, ESS, importance sampling, International Statistical Review, Markov chain Monte Carlo algorithm, MCMC, Monte Carlo methods, Monte Carlo Statistical Methods, simulation on May 3, 2022 by xi'an

Our paper Rethinking the Effective Sample Size, with Victor Elvira (the driving force behind the paper!) and Luca Martino, has now been published in the International Statistical Review! As discussed earlier on this blog, we wanted to re-evaluate the pros and cons of the effective sample size (ESS) as a tool for assessing the quality [or lack thereof] of a Monte Carlo approximation, as particularly exploited in the specific context of importance sampling. Following a 1992 construction by Augustine Kong, this approximation has been widely used over the last 25 years, in part due to its simplicity as a practical rule of thumb. However, we show in this paper that the assumptions made in the derivation of this approximation make it difficult to consider it a reasonable approximation of the ESS. Note that this reevaluation does not cover the use of the ESS for Markov chain Monte Carlo algorithms, although there would also be much to tell about it..!
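Since Kong's rule of thumb is so easily coded, here is a minimal R sketch (function name and toy example my own) of the approximation in question, namely one over the sum of the squared normalised importance weights:

# minimal sketch of Kong's rule-of-thumb ESS approximation for
# importance sampling: one over the sum of the squared normalised
# weights (the toy example below is purely illustrative)
ess_kong <- function(logw) {        # log importance weights
  w <- exp(logw - max(logw))        # numerically stabilised weights
  w <- w / sum(w)                   # normalised weights
  1 / sum(w^2)                      # = (sum w)^2 / sum(w^2) for raw weights
}

# toy check: N(0,1) proposal for a N(1,1) target
x <- rnorm(1e4)
logw <- dnorm(x, mean = 1, log = TRUE) - dnorm(x, log = TRUE)
ess_kong(logw)                      # well below the nominal 10^4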
MCqMC 2020 live and free and online
Posted in pictures, R, Statistics, Travel, University life with tags COVID-19, England, Markov chain Monte Carlo algorithm, MCQMC 2020, Monte Carlo methods, Monte Carlo Statistical Methods, Oxford, quasi-Monte Carlo methods, remote conference, tutorial, University of Oxford, virtual reality, webinar, youtube on July 27, 2020 by xi'an

The MCqMC 2020 conference that was supposed to take place in Oxford next 9-14 August has been turned into an online free conference, since travelling remains a challenge for most of us. Tutorials and plenaries will be live, with questions on Zoom, and with live-streaming and recorded copies on YouTube. They will probably run 14:00-17:00 UK time (GMT+1), 15:00-18:00 CET (GMT+2), and 9:00-12:00 ET. (Which will prove a wee bit of a challenge for West Coast researchers and for most of Asia and Australasia, which is why, for our One World IMS-Bernoulli conference, we asked plenary speakers to duplicate their talks.) All other talks will be pre-recorded by contributors and uploaded to a website, with an online Q&A discussion section for each. As a reminder, here are the tutorials and plenaries:
Invited plenary speakers:
Aguêmon Yves Atchadé (Boston University)
Jing Dong (Columbia University)
Pierre L’Écuyer (Université de Montréal)
Mark Jerrum (Queen Mary University London)
Peter Kritzer (RICAM Linz)
Thomas Muller (NVIDIA)
David Pfau (Google DeepMind)
Claudia Schillings (University of Mannheim)
Mario Ullrich (JKU Linz)
Tutorials:
Fred Hickernell (IIT) — Software for Quasi-Monte Carlo Methods
Aretha Teckentrup (Edinburgh) — Markov chain Monte Carlo methods
unbiased MCMC discussed at the RSS tomorrow night
Posted in Books, Kids, pictures, Statistics, Travel, University life with tags AABI, coupling, discussion paper, Journal of the Royal Statistical Society, Markov chain Monte Carlo algorithm, MCMC, Read paper, Royal Statistical Society, Series B, unbiasedness, Université Paris Dauphine, Vancouver on December 10, 2019 by xi'an

The paper ‘Unbiased Markov chain Monte Carlo methods with couplings’ by Pierre Jacob et al. will be discussed (or Read) at the Royal Statistical Society, 12 Errol Street, London, tomorrow night, Wed 11 December, at 5pm London time, with a pre-discussion session at 3pm, involving Chris Sherlock and Pierre Jacob, and chaired by Ioanna Manolopoulou. While I will alas miss this opportunity, due to my trip to Vancouver over the weekend, it is great that the young tradition of pre-discussion sessions has been rekindled, as it helps put the paper into perspective for a wider audience and thus makes the more formal Read Paper session more profitable. As we discussed the paper at Paris Dauphine with our graduate students a few weeks ago, we will for certain send one or several written discussions to Series B!
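For readers missing the session, here is a toy R rendering of my own (not the authors' code) of the estimator at the core of the paper, for a N(0,1) target, a Gaussian random walk proposal, a lag of one, and a rejection-based maximal coupling of the proposals: two coupled chains are run until they meet, and telescoping correction terms remove the burn-in bias.

# toy sketch (not the authors' code) of the unbiased MCMC estimator
log_target <- function(x) dnorm(x, log = TRUE)

# maximal coupling of N(x,s^2) and N(y,s^2): the returned pair is equal
# with the largest possible probability
max_couple <- function(x, y, s) {
  xp <- rnorm(1, x, s)
  if (log(runif(1)) + dnorm(xp, x, s, log = TRUE) <= dnorm(xp, y, s, log = TRUE))
    return(c(xp, xp))
  repeat {
    yp <- rnorm(1, y, s)
    if (log(runif(1)) + dnorm(yp, y, s, log = TRUE) > dnorm(yp, x, s, log = TRUE))
      return(c(xp, yp))
  }
}

# one Metropolis-Hastings move for the pair, sharing the acceptance uniform
coupled_mh <- function(x, y, s) {
  prop <- max_couple(x, y, s)
  u <- log(runif(1))
  c(ifelse(u < log_target(prop[1]) - log_target(x), prop[1], x),
    ifelse(u < log_target(prop[2]) - log_target(y), prop[2], y))
}

# H_k = h(X_k) + sum_{t=k+1}^{tau-1} {h(X_t)-h(Y_{t-1})}, unbiased for E[h(X)]
unbiased_est <- function(h = identity, k = 20, s = 2) {
  y <- rnorm(1)                           # Y_0
  x <- rnorm(1)                           # X_0
  prop <- x + s * rnorm(1)                # one plain MH move for X alone
  if (log(runif(1)) < log_target(prop) - log_target(x)) x <- prop
  t <- 1; est <- if (k == 1) h(x) else 0
  while (x != y || t < k) {               # run until meeting and past burn-in
    z <- coupled_mh(x, y, s); x <- z[1]; y <- z[2]; t <- t + 1
    if (t == k) est <- est + h(x)
    if (t > k && x != y) est <- est + h(x) - h(y)
  }
  est
}

mean(replicate(1e3, unbiased_est(h = function(x) x^2)))  # about E[X^2] = 1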
what if what???
Posted in Books, Statistics with tags Markov chain Monte Carlo algorithm, MCMC, Monte Carlo integration, Monte Carlo methods, what if?, wikipedia on October 7, 2019 by xi'an

[Here is a section of the Wikipedia page on Monte Carlo methods which makes little sense to me. What if it was not part of this page?!]
Monte Carlo simulation versus “what if” scenarios
There are ways of using probabilities that are definitely not Monte Carlo simulations – for example, deterministic modeling using single-point estimates. Each uncertain variable within a model is assigned a “best guess” estimate. Scenarios (such as best, worst, or most likely case) for each input variable are chosen and the results recorded.[55]
By contrast, Monte Carlo simulations sample from a probability distribution for each variable to produce hundreds or thousands of possible outcomes. The results are analyzed to get probabilities of different outcomes occurring.[56] For example, a comparison of a spreadsheet cost construction model run using traditional “what if” scenarios, and then running the comparison again with Monte Carlo simulation and triangular probability distributions shows that the Monte Carlo analysis has a narrower range than the “what if” analysis. This is because the “what if” analysis gives equal weight to all scenarios (see quantifying uncertainty in corporate finance), while the Monte Carlo method hardly samples in the very low probability regions. The samples in such regions are called “rare events”.
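To make the quoted comparison concrete, here is a minimal R sketch (with made-up cost figures) of both approaches on a two-component cost model, using triangular distributions as in the excerpt:

# toy illustration (figures invented) of "what if" scenarios versus
# Monte Carlo simulation with triangular distributions
rtri <- function(n, a, c, b) {            # triangular on [a,b] with mode c
  u <- runif(n)
  ifelse(u < (c - a) / (b - a),
         a + sqrt(u * (b - a) * (c - a)),
         b - sqrt((1 - u) * (b - a) * (b - c)))
}

# two cost components, each with (best, most likely, worst) values:
# labour 80/100/150 and material 40/60/70
c(best = 80 + 40, worst = 150 + 70)       # "what if" range: 120 to 220

total <- rtri(1e5, 80, 100, 150) + rtri(1e5, 40, 60, 70)
quantile(total, c(.05, .95))              # a much narrower probable range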
normal variates in Metropolis step
Posted in Books, Kids, R, Statistics, University life with tags cross validated, Gaussian random walk, Markov chain Monte Carlo algorithm, MCMC, Metropolis-Hastings algorithm, Monte Carlo Statistical Methods, normal distribution, normal generator, random variates on November 14, 2017 by xi'an

A definitely puzzled participant on X validated, confusing the Normal variate or variable used in the random walk Metropolis-Hastings step with its Normal density… It took some cumulated efforts to point out the distinction. Especially as the originator of the question had a rather strong a priori about his or her background:
“I take issue with your assumption that advice on the Metropolis Algorithm is useless to me because of my ignorance of variates. I am currently taking an experimental course on Bayesian data inference and I’m enjoying it very much, i believe i have a relatively good understanding of the algorithm, but i was unclear about this specific.”
despite pondering the meaning of the call to rnorm(1)… I will keep this question in store to use in class when I teach Metropolis-Hastings in a couple of weeks.
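For the record, here is a minimal R version of the Gaussian random walk Metropolis step at stake (with a toy target of my own choosing), which may stress the distinction: rnorm(1) produces a Normal variate driving the proposal, while the Normal density never needs evaluating, as it cancels from the acceptance ratio of this symmetric proposal.

# minimal sketch of a Gaussian random walk Metropolis step
# (toy double-exponential target, chosen only for illustration)
target <- function(x) exp(-abs(x))   # unnormalised target density

mh_step <- function(x, sigma = 1) {
  prop <- x + sigma * rnorm(1)       # a Normal variate, not a density value
  # symmetric proposal: the Normal density cancels from the ratio
  if (runif(1) < target(prop) / target(x)) prop else x
}

x <- numeric(1e4)
for (t in 2:1e4) x[t] <- mh_step(x[t - 1])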