Archive for Pierre Jacob

Couplings and Monte Carlo [advanced graduate course at Dauphine by Pierre Jacob]

Posted in Kids, pictures, Statistics, Travel on January 20, 2020 by xi'an

As a visiting professor at Paris-Dauphine next month, Pierre Jacob will give a series of lectures on couplings and Monte Carlo. The lectures take place on Feb. 13, 14, 25, and 27 at Université Paris-Dauphine, the first two starting at 8:30 (room E) and the last two at 13:45 (rooms F and D201, respectively). Attendance is open to all and material will be made available on the lecture webpage.

correlation for maximal coupling

Posted in Books, Kids, pictures, R, Statistics, University life on January 3, 2018 by xi'an

An interesting (if vaguely formulated) question on X validated: given two Gaussian variates that are maximally coupled, what is the correlation between these variates? The answer depends on the parameters of both Gaussians, with a correlation of one when the two Gaussians are identical. Answering the question by simulation (as I could not figure out the analytical formula on Boxing Day…) led me back to Pierre Jacob’s entry on the topic on Statisfaction, where simulating the maximal coupling stems from the decompositions

p(x)=p(x)∧q(x)+{p(x)-p(x)∧q(x)}  and  q(x)=p(x)∧q(x)+{q(x)-p(x)∧q(x)}

and incidentally to the R function image.plot (from the R library fields) for including the side legend.
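
To answer the original question by simulation, here is a minimal R sketch (my own code, not Pierre’s) of the maximal coupling of two Gaussians p = N(μ₁,σ₁²) and q = N(μ₂,σ₂²), following the decomposition above: draw X from p, keep Y = X with probability p(X)∧q(X)/p(X), and otherwise draw Y from the residual q − p∧q by rejection. The function name maximal_coupling and the parameter values below are mine, for illustration only.

```r
maximal_coupling <- function(mu1, sigma1, mu2, sigma2) {
  x <- rnorm(1, mu1, sigma1)                     # X ~ p
  if (runif(1) * dnorm(x, mu1, sigma1) <= dnorm(x, mu2, sigma2)) {
    return(c(x, x))                              # coupled, with probability p(X)∧q(X)/p(X)
  }
  repeat {                                       # otherwise Y ~ {q - p∧q}, by rejection
    y <- rnorm(1, mu2, sigma2)
    if (runif(1) * dnorm(y, mu2, sigma2) > dnorm(y, mu1, sigma1)) return(c(x, y))
  }
}

# correlation between maximally coupled N(0,1) and N(1,4) variates, by simulation
sims <- t(replicate(1e5, maximal_coupling(0, 1, 1, 2)))
cor(sims[, 1], sims[, 2])
```

The correlation is then estimated from the simulated pairs and, as noted above, equals one only when the two Gaussians coincide.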

two Parisian talks by Pierre Jacob in January

Posted in pictures, Statistics, University life on December 21, 2017 by xi'an

While back in Paris from Harvard in early January, Pierre Jacob will give two talks on works of his:

January 09, 10:30, séminaire d’Analyse-Probabilités, Université Paris-Dauphine: Unbiased MCMC

Markov chain Monte Carlo (MCMC) methods provide consistent approximations of integrals as the number of iterations goes to infinity. However, MCMC estimators are generally biased after any fixed number of iterations, which complicates both parallel computation and the construction of confidence intervals. We propose to remove this bias by using couplings of Markov chains and a telescopic sum argument, inspired by Glynn & Rhee (2014). The resulting unbiased estimators can be computed independently in parallel, and confidence intervals can be directly constructed from the Central Limit Theorem for i.i.d. variables. We provide practical couplings for important algorithms such as the Metropolis-Hastings and Gibbs samplers. We establish the theoretical validity of the proposed estimators, and study their variances and computational costs. In numerical experiments, including inference in hierarchical models, bimodal or high-dimensional target distributions, logistic regressions with the Pólya-Gamma Gibbs sampler and the Bayesian Lasso, we demonstrate the wide applicability of the proposed methodology as well as its limitations. Finally, we illustrate how the proposed estimators can approximate the “cut” distribution that arises in Bayesian inference for misspecified models.
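
As a rough illustration of the telescopic-sum construction (a sketch of the Glynn & Rhee idea, not the authors’ code), the estimator starting at iteration k is H_k = h(X_k) + Σ_{t=k+1}^{τ−1} {h(X_t) − h(Y_{t−1})}, where τ is the meeting time of the two coupled chains. In R, assuming user-supplied functions rinit (initial draw), single_kernel, coupled_kernel (returning a list with components x and y), and a test function h, this could look like:

```r
unbiased_estimator <- function(single_kernel, coupled_kernel, rinit, h, k = 10) {
  x <- rinit(); y <- rinit()
  x <- single_kernel(x)                          # X is now one step ahead of Y
  t <- 1
  tau <- if (isTRUE(all.equal(x, y))) 1 else Inf # meeting time of X_t and Y_{t-1}
  h_at_k <- if (k == 1) h(x) else NA             # will store h(X_k)
  correction <- 0
  while (t < max(k, tau)) {
    t <- t + 1
    res <- coupled_kernel(x, y)                  # advance (X_t, Y_{t-1}) jointly
    x <- res$x; y <- res$y
    if (is.infinite(tau) && isTRUE(all.equal(x, y))) tau <- t
    if (t == k) h_at_k <- h(x)
    if (t > k && t < tau) correction <- correction + h(x) - h(y)
  }
  h_at_k + correction                            # unbiased for the limiting E[h(X)]
}
```

Averaging many independent replicates of this quantity then gives a consistent estimate, with confidence intervals obtained from the ordinary i.i.d. Central Limit Theorem, which is the point of the construction.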

January 11, 10:30, CREST-ENSAE, Paris-Saclay: Better together? Statistical learning in models made of modules [Warning: Paris-Saclay is not in Paris!]

In modern applications, statisticians are faced with integrating heterogeneous data modalities relevant for an inference or decision problem. It is convenient to use a graphical model to represent the statistical dependencies, via a set of connected “modules”, each relating to a specific data modality, and drawing on specific domain expertise in their development. In principle, given data, the conventional statistical update then allows for coherent uncertainty quantification and information propagation through and across the modules. However, misspecification of any module can contaminate the update of others. In various settings, particularly when certain modules are trusted more than others, practitioners have preferred to avoid learning with the full model in favor of “cut distributions”. In this talk, I will discuss why these modular approaches might be preferable to the full model in misspecified settings, and propose principled criteria to choose between modular and full-model approaches. The question is intertwined with computational difficulties associated with the cut distribution, and new approaches based on recently proposed unbiased MCMC methods will be described.
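
For a concrete two-module picture (my notation, not taken from the abstract): with a first module p(θ₁)p(Y₁|θ₁) and a second module p(θ₂)p(Y₂|θ₁,θ₂), the cut distribution replaces the full posterior p(θ₁,θ₂|Y₁,Y₂) by

π_cut(θ₁,θ₂) = p(θ₁|Y₁) p(θ₂|θ₁,Y₂)

so that the (possibly misspecified) second module cannot feed back into θ₁. The computational difficulty alluded to above is that p(θ₂|θ₁,Y₂) involves a normalising constant that depends on θ₁ and hence does not cancel in a standard Metropolis-Hastings ratio on the joint.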

Long enough after the New Year festivities (if any) to be fully operational for them!

Luke and Pierre at big’MC

Posted in Linux, pictures, Statistics, Travel, University life on May 19, 2014 by xi'an

[crossing Rue Soufflot on my way to IHP from Vieux Campeur, March 28, 2013]

Yesterday, Luke Bornn and Pierre Jacob gave talks at our big’MC ‘minar. While I had seen most of the slides earlier, either at MCMski IV, Banff, Leuven, or yet again in Oxford, I really enjoyed those talks as they provided further intuition about the techniques of Wang-Landau and non-negative unbiased estimators, leading to a few seeds of ideas for potential further research. For instance, I understood way better the option to calibrate the Wang-Landau algorithm on levels of the target density rather than in the original space, which means (a) a one-dimensional partition target (just as in nested sampling); (b) taking advantage of the existing computations of the likelihood function; and (c) a somewhat automatic implementation of the Wang-Landau algorithm. I do wonder why this technique is not more popular as a default option. (Like, would it be compatible with Stan?) The impossibility theorem of Pierre about the existence of non-negative unbiased estimators never ceases to amaze me. I started wondering during the seminar whether a positive (!) version of the result could be found. Namely, whether perturbations of the exact (unbiased) Metropolis-Hastings acceptance ratio could be substituted in order to guarantee positivity. Possibly creating drifted versions of the target…
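
Here is a toy R illustration (my own sketch, not from the talk, and using a stochastic-approximation step rather than a proper flat-histogram schedule) of what binning Wang-Landau by levels of the log target density looks like: the bins live on log π(x), and the penalties θ are updated so that all density levels end up being visited, reusing only the log-target evaluations the sampler computes anyway.

```r
log_target <- function(x) dnorm(x, 0, 1, log = TRUE)   # toy N(0,1) target

wang_landau_levels <- function(niter = 1e5, nbins = 10, sd_prop = 2) {
  # one-dimensional partition on log-density levels (range chosen for this toy target)
  breaks <- seq(log_target(4), log_target(0), length.out = nbins + 1)
  bin_of <- function(lp) min(max(findInterval(lp, breaks), 1), nbins)
  log_theta <- rep(0, nbins)            # log penalties, one per density level
  x <- 0; lp <- log_target(x)
  xs <- numeric(niter)
  for (i in 1:niter) {
    xp <- x + rnorm(1, 0, sd_prop)      # random-walk proposal
    lpp <- log_target(xp)
    # Metropolis ratio for the biased target pi(x) / theta(level of x)
    log_alpha <- (lpp - log_theta[bin_of(lpp)]) - (lp - log_theta[bin_of(lp)])
    if (log(runif(1)) < log_alpha) { x <- xp; lp <- lpp }
    # increase the penalty of the currently visited level (decreasing step size)
    log_theta[bin_of(lp)] <- log_theta[bin_of(lp)] + 1 / i
    xs[i] <- x
  }
  list(chain = xs, log_theta = log_theta)
}
```

The appeal, as mentioned above, is that the partition stays one-dimensional whatever the dimension of x, since it only depends on the value of the target density.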

One request in connection with this post: please connect the Institut Henri Poincaré to the eduroam wireless network! The place is dedicated to visiting mathematicians and theoretical physicists, so it should have been the first one [in Paris] to get connected to eduroam. The cost cannot be that horrendous, so I wonder what the reason is. Preventing guests from connecting to the Internet towards better concentration? Avoiding “parasites” taking advantage of the network? Ensuring seminar attendees are following the talks? (The irony is that Institut Henri Poincaré has a local wireless network available for free, except that it most often does not work with my current machine, and hence wastes much more of my time as I attempt to connect over and over again while there.) Just in connection with IHP, here is a video of Persi giving a talk there about Poincaré, two years ago:

big’MC’minar next week

Posted in Kids, Statistics, Travel, University life on May 9, 2014 by xi'an

[crossing Rue Soufflot on my way to IHP from Vieux Campeur, March 28, 2013]

The next big’MC seminar in Paris will be delivered on Thursday, May 15, by

15 h : Luke Bornn, Towards the Derandomization of Markov chain Monte Carlo

16 h 15 : Pierre Jacob, On exact inference and unbiased estimation 

See the seminar webpage for more details, and make sure to attend if in or near Paris! It is definitely big and MC. Most sadly (for us!), Chris Holmes will give a Smile (Statistical machine learning) seminar at the very same time a few streets away… At least, we can conveniently meet right after for a drink!
