In the summer of 2017, the biennial International Conference on Monte Carlo Methods and Applications (MCM) will take place in Montréal, Québec, Canada, on July 3-7. This is a mathematically oriented meeting that alternates with MCqMC and that is “devoted to the study of stochastic simulation and Monte Carlo methods in general, from the theoretical viewpoint and in terms of their effective applications in different areas such as finance, statistics, machine learning, computer graphics, computational physics, biology, chemistry, and scientific computing in general. It is one of the most prominent conference series devoted to research on the mathematical aspects of stochastic simulation and Monte Carlo methods.” I attended one edition in Annecy three years ago and very much enjoyed the range of topics and backgrounds. The program is under construction and everyone is warmly invited to contribute talks or special sessions, with a deadline of January 20, 2017. In addition, Montréal is a Monte Carlo Mecca of sorts, with leading researchers in the field like Luc Devroye and Pierre L'Ecuyer working there. (And a great place to visit in the summer!)
In the train to Annecy, I read the recently arXived paper by my former PhD student Pierre Jacob (now at NUS), along with Lawrence Murray (Perth) and Sylvain Rubenthaler (Nice), where they obtain precise degeneracy rates for the regular particle filter applied to hidden Markov models with a compact observation space, precise enough to consider storing the entire paths at a linear occupancy rate. Interestingly, the distance to the most recent common ancestor is of order N log N, where N is the number of particles, and the number of nodes in the genealogical tree is O(N log N) as well. This means that the whole paths can indeed be stored, which offers a lot of potential in terms of Rao-Blackwellisation and parallelisation. I was first bemused by a silly misunderstanding about the purpose of the algorithm: it is directed at inference at the current time index, not over the whole past and not over the parameters of the model, for else how could we consider that the algorithm has converged when it degenerates to a single path at some finite horizon in the past? Pierre has also written a further comment on the paper on Statisfaction.
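To see the degeneracy phenomenon at work, here is a minimal toy sketch (not the paper's algorithm or its rates): a bootstrap particle filter on an assumed AR(1) model with Gaussian observations fixed at zero, storing the ancestor indices at each resampling step and then tracing the surviving lineages backwards until they merge into a single common ancestor. All model choices and parameter values are illustrative assumptions.

```python
import numpy as np

def mrca_distance(rng, N, T):
    """Run a bootstrap particle filter on a toy AR(1) hidden Markov model
    (observations fixed at zero), storing the ancestor indices at each
    resampling step; return how many steps back from time T the N
    surviving paths coalesce into a single most recent common ancestor."""
    x = rng.normal(size=N)                       # initial particles
    ancestors = np.empty((T, N), dtype=int)      # the stored genealogy
    for t in range(T):
        x = 0.9 * x + rng.normal(size=N)         # propagate under the prior
        logw = -0.5 * x ** 2                     # N(x, 1) likelihood of y_t = 0
        w = np.exp(logw - logw.max())
        idx = rng.choice(N, size=N, p=w / w.sum())  # multinomial resampling
        ancestors[t] = idx
        x = x[idx]
    lineage = np.arange(N)                       # trace lineages backwards
    for d in range(1, T + 1):
        lineage = ancestors[T - d][lineage]
        if np.all(lineage == lineage[0]):
            return d                             # distance to the MRCA
    return T                                     # no coalescence within T steps
```

Averaging `mrca_distance` over repeated runs for increasing N gives a rough empirical check of how the coalescence distance grows with the number of particles.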
As astute ‘Og’s readers may have gathered (!), I am now in Annecy, Savoie, for the 9th IMACS seminar on Monte Carlo Methods, where I was kindly invited to give a talk on ABC. IMACS stands for “International Association for Mathematics and Computers in Simulation” and the conference gathers themes and sensibilities I am not familiar with. And very few statisticians. For instance, I attended a stochastic particle session that had nothing to do with my understanding of particle systems (except for Pierre Del Moral’s mean field talk). The overall focus seems to be much more on SDEs and quasi-Monte Carlo methods, both topics for which I have a genuine interest but little background, so I cannot report much on the talks I attended beyond their titles. I for instance discovered the multilevel Monte Carlo techniques for SDEs, which sound like a control variate methodology to reduce the variance without reducing the discretisation step. (Another instance is that the proceedings will be published in Mathematics and Computers in Simulation or Monte Carlo Methods and Applications, two journals I have never published in.) Although I still have a few months before attending my first MCqMC conference, I presume it has a somewhat similar spirit and mix of communities.
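The control variate reading of multilevel Monte Carlo can be sketched in a few lines: couple a fine and a coarse Euler-Maruyama discretisation of the same SDE by feeding them the same Brownian increments, so that correction terms E[X_fine - X_coarse] have small variance, and telescope them on top of a crude one-step estimate. The geometric Brownian motion below and all its parameter values are merely illustrative assumptions, not taken from any of the talks.

```python
import numpy as np

def euler_pair(rng, n_fine, n_paths, T=1.0, x0=1.0, mu=0.05, sigma=0.2):
    """Euler-Maruyama for geometric Brownian motion on a fine grid of
    n_fine steps and, reusing the *same* Brownian increments, on a coarse
    grid of n_fine // 2 steps. Returns the two coupled terminal values."""
    h = T / n_fine
    dW = rng.normal(scale=np.sqrt(h), size=(n_paths, n_fine))
    xf = np.full(n_paths, x0)
    for k in range(n_fine):
        xf = xf + mu * xf * h + sigma * xf * dW[:, k]
    xc = np.full(n_paths, x0)
    for k in range(0, n_fine, 2):    # one coarse step = two fine increments
        xc = xc + mu * xc * 2 * h + sigma * xc * (dW[:, k] + dW[:, k + 1])
    return xf, xc

def mlmc_estimate(rng, levels, n_paths):
    """Telescoping multilevel estimator of E[X_T]: a crude one-step
    estimate plus correction terms E[X_fine - X_coarse], each a
    low-variance difference because the coupled pair shares its driving
    noise -- the control variate effect."""
    # level 0: the coarse member of a 2-step/1-step pair, i.e. one Euler step
    est = euler_pair(rng, 2, n_paths)[1].mean()
    for l in range(1, levels + 1):
        xf, xc = euler_pair(rng, 2 ** l, n_paths)  # 2^l vs 2^(l-1) steps
        est += (xf - xc).mean()
    return est
```

The telescoping sum collapses to the finest discretisation in expectation, while the variance of each correction term shrinks with the level, which is what lets the method keep the fine-grid bias without paying the fine-grid simulation cost at full sample size.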
At another level, attending a conference in Annecy is a blessing: the city is beautiful, the lake pristine and tantalising in the hot weather, and the surrounding mountains (we are actually quite close to Chamonix!) induce me to go running both mornings and evenings.