Archive for SDEs

MCqMC 2014 [day #1]

Posted in pictures, Running, Statistics, Travel, University life on April 9, 2014 by xi'an


As I have been kindly invited to give a talk at MCqMC 2014, here I am in Leuven, Belgium, for a conference I had never attended before. (I was also invited to MCqMC 2012 in Sydney.) The talk topics and the attendees’ “sociology” are quite similar to those of the IMACS meeting in Annecy last summer: rather little on MCMC, particle filters, and other tools familiar in Bayesian computational statistics, but a lot on diffusions and stochastic differential equations, and of course on quasi-Monte Carlo methods. I thus find myself at the boundary of the conference range and a wee bit lost by some talks, whose very titles make little sense to me.

For instance, I have trouble connecting multi-level Monte Carlo with my own referential. My understanding of the method is as a control variate version of tempering, namely using a sequence of approximations to the true target and using the rougher approximations as control variates for the finer ones. But I cannot find on the Web a statistical application of the method outside of diffusions and SDEs, i.e. outside of continuous-time processes… Maybe using a particle filter to move from one approximation to the next, down in terms of roughness, could help.
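To fix ideas (and for my own benefit!), here is a minimal R sketch of the multi-level idea on a toy SDE, geometric Brownian motion, where each level runs an Euler scheme on a grid twice as fine as the previous one and the coarser scheme acts as a control variate through the telescoping sum. All function names and parameter values below are mine and purely illustrative, not taken from any of the talks.

```r
## Multi-level Monte Carlo sketch for dS = mu*S*dt + sigma*S*dW, estimating
## E[S_T] through the telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
## where P_l is the Euler approximation with 2^l time steps.

euler_pair <- function(n, level, mu = 0.05, sigma = 0.2, S0 = 1, Tend = 1) {
  Mf <- 2^level              # number of fine-grid steps
  hf <- Tend / Mf            # fine step size
  Sf <- rep(S0, n)           # fine-grid paths
  Sc <- rep(S0, n)           # coarse-grid paths (half as many steps)
  for (m in 1:(Mf / 2)) {
    dW1 <- rnorm(n, sd = sqrt(hf))
    dW2 <- rnorm(n, sd = sqrt(hf))
    Sf <- Sf + mu * Sf * hf + sigma * Sf * dW1
    Sf <- Sf + mu * Sf * hf + sigma * Sf * dW2
    Sc <- Sc + mu * Sc * (2 * hf) + sigma * Sc * (dW1 + dW2)  # same Brownian increments
  }
  cbind(fine = Sf, coarse = Sc)
}

mlmc <- function(L = 5, n = 1e4) {
  # level 0: crude single-step Euler estimate (coarse member of the level-1 pair)
  est <- mean(euler_pair(n, level = 1)[, "coarse"])
  # levels 1..L: coupled corrections, the coarser scheme acting as control variate
  # (in practice the sample size would shrink with the level; kept constant here)
  for (l in 1:L) {
    S <- euler_pair(n, level = l)
    est <- est + mean(S[, "fine"] - S[, "coarse"])
  }
  est
}

mlmc()   # close to the exact value exp(0.05) ≈ 1.0513
```

The appeal of the construction is that most of the simulation budget can go to the cheap coarse levels, the coupled corrections at the finer levels having ever smaller variance.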

“Several years ago, Giles (2008) introduced an intriguing multi-level idea to deal with such biased settings that can dramatically improve the rate of convergence and can even, in some settings, achieve the canonical “square root” convergence rate associated with unbiased Monte Carlo.” Rhee and Glynn, 2012

Those were my thoughts before lunchtime today (namely April 7, 2014). And then, after lunch, Peter Glynn gave his plenary talk, which answered those very questions of mine!!! Essentially, he showed the formula Pierre Jacob also used in his Bernoulli factory paper to turn a convergent but biased estimator into an unbiased one, based on a telescoping series representation and a random truncation… This approach is described in a paper with Chang-han Rhee, arXived a few years ago. The talk also covered more recent work (presumably related to Chang-han Rhee’s thesis) extending the above to Markov chains. As explained to me later by Pierre Jacob [of Statisfaction fame!], a regular chain does not converge fast enough to compensate for the explosive behaviour of the correction factor, which is why Rhee and Glynn used instead a backward chain, linking to the exact or perfect samplers of the 1990’s (whose origin can be traced to a 1992 paper of Asmussen, Glynn and Thorisson). This was certainly the most riveting talk I have attended in the past few years, in that it brought a direct answer to a question I was starting to investigate. And more. I was also wondering how connected it was with our “exact” representation of the stationary distribution (in an Annals of Probability paper with Jim Hobert), since we use a stopping rule based on renewal and a geometric waiting time, a somewhat empirical version of the inverse probability found in Peter’s talk. This talk also led me to reconsider a recent discussion we had in my CREST office with Andrew about using square root(ed) importance weights, since one of Peter’s slides exhibited those square roots as optimal. Paradoxically, Peter started the talk by downplaying it, stating there was a single idea therein and a single important slide, making it a perfect after-lunch talk: I wish I had actually had thrice more time to examine each slide! (In the afternoon session, Éric Moulines also gave a thought-provoking talk on particle islands and double bootstrap, a research project I will comment on in more detail the day it gets arXived.)
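For the record, here is my (hedged) R rendering of the debiasing device, on the same geometric Brownian motion toy as above: the limit is written as a telescoping sum of coupled Euler corrections, the sum is truncated at a random level N, and each term is reweighted by the inverse probability 1/P(N ≥ n). The geometric truncation and all parameter values are illustrative choices of mine, and the precise conditions (and much more) are in the Rhee and Glynn paper.

```r
## Random-truncation debiasing of the Euler scheme for geometric Brownian
## motion: an unbiased estimate of E[S_T], despite each P_n being biased.

unbiased_euler <- function(mu = 0.05, sigma = 0.2, S0 = 1, Tend = 1, p = 0.4) {
  # p < 1/2 keeps the variance finite here, the coupled Euler differences
  # shrinking like 2^-n while P(N >= n) = (1-p)^n
  N <- rgeom(1, p)                      # random truncation level
  M <- 2^N                              # finest grid has 2^N steps
  dW <- rnorm(M, sd = sqrt(Tend / M))   # one Brownian path at the finest resolution
  P <- numeric(N + 1)                   # Euler approximations P_0, ..., P_N
  for (n in 0:N) {
    h <- Tend / 2^n
    incr <- colSums(matrix(dW, nrow = 2^(N - n)))  # increments aggregated in blocks
    S <- S0
    for (m in 1:2^n) S <- S + mu * S * h + sigma * S * incr[m]
    P[n + 1] <- S
  }
  delta <- diff(c(0, P))                # P_0, P_1 - P_0, ..., P_N - P_{N-1}
  sum(delta / (1 - p)^(0:N))            # inverse-probability reweighting
}

## averaging independent copies estimates E[S_T] = exp(mu*Tend) without bias
mean(replicate(1e4, unbiased_euler()))  # about exp(0.05) ≈ 1.0513
```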

Latent Gaussian Models in Zürich [day 2]

Posted in pictures, R, Statistics, Travel, University life on February 7, 2011 by xi'an

The second day of the Latent Gaussian Models workshop in Zürich was equally interesting. Among the morning talks, let me mention Daniel Bové, who gave a talk connected with the hyper-g prior paper he wrote with Leo Held (commented in an earlier post), and the duo of Janine Illian and Daniel Simpson, who gave enthusiastic arguments as to why point pattern datasets should be analysed in a completely novel way, using stochastic partial differential equations (SPDEs), and showed us how this could be done via INLA. This perspective (purposely?) contrasted with the modelling assumptions of Alan Gelfand, who concluded the meeting with a highly interesting modelling/estimation of species distribution in the Cape area. He also ran a comparison with the Maxent approach to the same problem. As for my own talk, I somehow spent too much time on the introduction to ABC, trying to link the method with non-parametric perspectives, and so ended up rushing through the sufficiency part and the population genetics results obtained by Jean-Marie Cornuet and Jean-Michel Marin the previous day. (The updated slides are available on slideshare.) I hope the main message was still spelled out clearly enough… In conclusion, this was a very interesting workshop, maybe the first of a series, since there is a possible follow-up next year in Trondheim. It showed the clear emergence of a very active INLA community, able to tackle old and new problems with this new technology, and illustrated once again the importance of developing user-friendly code for promoting such technologies.