## Archive for Université Paris Dauphine

## p-value graffiti in the lift [jatp]

Posted in Statistics with tags 1960s, basic statistics, course, jatp, lift, p-value, picture, teaching, Université Paris Dauphine on January 3, 2019 by xi'an

## a glaringly long explanation

Posted in Statistics with tags ABC, Bayesian Choice, cross validated, exponential families, proof, socks, sufficient statistics, teaching, Thomas Bayes' portrait, undergraduates, Université Paris Dauphine on December 19, 2018 by xi'an

**I**t is funny that, while I am teaching the rudiments of Bayesian statistics to my undergraduate students in Paris-Dauphine, including ABC via Rasmus’ socks, specific questions about the book (The Bayesian Choice) start popping up on X validated! Last week was about the proof that ABC is exact when the tolerance is zero and the summary statistic is sufficient.
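As a toy illustration (my own example, not the one from the book or from X validated), here is a minimal ABC rejection sampler in a Beta-Bernoulli setting where the sufficient statistic is discrete, so a zero tolerance is actually achievable and the ABC output coincides with the exact Beta posterior:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy check that ABC with zero tolerance on a sufficient statistic is exact:
# Bernoulli(theta) data under a uniform prior, sufficient statistic = number
# of successes (hypothetical values of n and s_obs, for illustration only).
n, s_obs = 10, 7
N = 200_000                                # number of ABC simulations

theta = rng.uniform(size=N)                # draws from the Uniform(0,1) prior
s_sim = rng.binomial(n, theta)             # simulate the sufficient statistic
accepted = theta[s_sim == s_obs]           # tolerance zero: exact match required

# the exact posterior is Beta(1 + s_obs, 1 + n - s_obs), mean (1+s_obs)/(2+n)
print(accepted.mean(), (1 + s_obs) / (2 + n))
```

The accepted draws match the exact posterior because, with a sufficient statistic and zero tolerance, the acceptance event carries the full likelihood.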

This week is about conjugate distributions for exponential families (not that there are many others!). Which led me to explain both the validation of the conjugacy and the derivation of the posterior expectation of the mean of the natural sufficient statistic in far more detail than in the book itself. Hopefully in a profitable way.
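For the record, a quick sketch of the two computations (in my own notation, with prior hyperparameters λ and x₀ that need not match those of the book):

```latex
% exponential family and conjugate prior
f(x\mid\theta) = h(x)\,\exp\{\theta\cdot T(x) - \psi(\theta)\},
\qquad
\pi(\theta\mid\lambda, x_0) \propto \exp\{\theta\cdot x_0 - \lambda\,\psi(\theta)\}
% conjugacy: the posterior stays within the same parametric family
\pi(\theta\mid\lambda, x_0, x_{1:n}) \propto
  \exp\Big\{\theta\cdot\Big(x_0 + \sum_{i=1}^n T(x_i)\Big)
            - (\lambda+n)\,\psi(\theta)\Big\}
% posterior expectation of the mean of the natural sufficient statistic,
% since E_\theta[T(X)] = \nabla\psi(\theta), as in Diaconis and Ylvisaker (1979)
\mathbb{E}\big[\nabla\psi(\theta)\mid x_{1:n}\big]
  = \frac{x_0 + \sum_{i=1}^n T(x_i)}{\lambda + n}
```

The last identity exhibits the posterior expectation as a weighted average of the prior guess x₀/λ and the empirical mean of the T(xᵢ)'s.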

## irreversible Markov chains

Posted in Books, pictures, Statistics, University life with tags Bernie Alder, École Normale Supérieure, factorised Metropolis, Nicolas Metropolis, PDMP, phase transition, prefetching, seminar, torus, Université Paris Dauphine on November 20, 2018 by xi'an

**W**erner Krauth (ENS, Paris) was in Dauphine today to present his papers on irreversible Markov chains at the probability seminar. He went back to the 1953 Metropolis et al. paper. And mentioned a 1962 paper by Alder and Wainwright I had never heard of, demonstrating via simulation that a phase transition can occur. The whole talk was about simulating the stationary distribution of a large number of hard spheres on a one-dimensional ring, which made it hard for me to understand. (Maybe the triathlon before did not help.) And even to realise a part was about PDMPs… His slides included this interesting entry on factorised MCMC, which reminded me of delayed acceptance, thinning, and prefetching. Plus a notion of lifted Metropolis that could have applications in a general setting, if it differs from delayed rejection.
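To fix ideas on the factorised flavour, here is a minimal sketch (my own toy setup, not taken from Werner's slides) of a factorised Metropolis filter: each factor of the target runs its own accept/reject step and the move goes through only when every factor agrees. Since the product of the factorwise ratios reproduces the usual Metropolis ratio, detailed balance is preserved:

```python
import numpy as np

rng = np.random.default_rng(1)

# Factorised Metropolis sketch: target pi(x) ∝ pi_1(x) * pi_2(x), here two
# Gaussian factors, so the normalised target is N(1/2, 1/2).
log_factors = [lambda x: -0.5 * x**2, lambda x: -0.5 * (x - 1.0) ** 2]

x, samples = 0.0, []
for _ in range(100_000):
    y = x + rng.normal(scale=1.0)          # symmetric random-walk proposal
    # accept only if EVERY factor accepts on its own Metropolis ratio,
    # each with an independent uniform draw
    if all(np.log(rng.uniform()) < lf(y) - lf(x) for lf in log_factors):
        x = y
    samples.append(x)

samples = np.asarray(samples)
print(samples.mean(), samples.var())       # close to 0.5 and 0.5
```

The overall acceptance probability is the product of the min(1, ratio) terms, lower than the standard Metropolis one, but each factor can be evaluated (and rejected on) separately, which is the point of the factorisation.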

## computational statistics and molecular simulation [18w5023]

Posted in pictures, Statistics, Travel, University life with tags 18w5023, Banff, Banff International Research Station for Mathematical Innovation, BIRS, bouncy particle sampler, Casa Matemática Oaxaca, CMO, computational statistics, Donsker-Varadhan, eigenvalue, local scaling, Mars, Mexico, molecular dynamics, Monte Alban, Monte Carlo Statistical Methods, optimal acceptance rate, PDMP, spectroscopy, tempering, Université Paris Dauphine, workshop, Zapotec civilization, Zig-Zag on November 14, 2018 by xi'an

**O**n Day 2, Carsten Hartmann used a representation of the log cumulant as solution to a minimisation problem over a collection of importance functions (by the Donsker-Varadhan principle), with links to X entropy and optimal control, a theme also considered by Alain Durmus when considering the uncorrected discretised Langevin diffusion with a decreasing sequence of discretisation scale factors (Jordan, Kinderlehrer and Otto), in the spirit of convex regularisation à la Rockafellar. Also representing ULA as an inexact gradient descent algorithm.

Murray Pollock (Warwick) presented a new technique called fusion to simulate from products of d densities, as in scalable MCMC (but not only). With an (early) starting and startling remark that simulating one realisation from each density in the product and waiting for all of them to be equal amounts to simulating from the product, in a strong link to the (A)BC fundamentals. This is of course impractical, and Murray proposes to follow d Brownian bridges all ending up at the average of these simulations, constructing an acceptance probability that is computable and validates the output.
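The starting remark can be checked on a discrete toy case (my own example, not Murray's construction), where exact coincidence of the draws has positive probability:

```python
import numpy as np

rng = np.random.default_rng(7)

# Draw one value from each density and keep it only when all draws agree:
# the kept values are then distributed according to the normalised product.
p1 = np.array([0.2, 0.5, 0.3])             # two hypothetical pmfs on {0,1,2}
p2 = np.array([0.6, 0.1, 0.3])

N = 500_000
x1 = rng.choice(3, size=N, p=p1)
x2 = rng.choice(3, size=N, p=p2)
kept = x1[x1 == x2]                        # "wait for all of them to be equal"

product = p1 * p2
product /= product.sum()                   # normalised product density
freqs = np.bincount(kept, minlength=3) / len(kept)
print(freqs, product)                      # the two vectors should agree
```

For continuous densities the event {all draws equal} has probability zero, hence the Brownian-bridge construction to make the idea practical.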

The second “hands-on” lecture was given by Gareth Roberts (Warwick) on the many aspects of scaling MCMC algorithms, which started with the famous 0.234 acceptance rate paper in 1996. While I was aware of some of these results (!), the overall picture was impressive, including a notion of complexity I had not seen before. And a last section on PDMPs where Gareth presented very recent results on the different scales of convergence of Zig-Zag and bouncy particle samplers, mostly to the advantage of Zig-Zag.

In the afternoon, Jeremy Heng presented a continuous time version of simulated tempering by adding a drift to the Langevin diffusion with time-varying energy, which must be a solution to the Liouville pde. Which connects to a flow transport problem when solving the pde under additional conditions. Unclear to me was the creation of the infinite sequence. This talk was very much at the interface, in the spirit of the workshop! (Maybe surprisingly complex when considering the endpoint goal of simulating from a given target.) Jonathan Weare’s talk was about quantum chemistry, which translated into finding eigenvalues of an operator. Turning into a change of basis in an inhumanly large space (10¹⁸⁰ dimensions!). Matt Moore presented the work on Raman spectroscopy he did while a postdoc at Warwick, with an SMC based classification of the peaks of a spectrum (to be used on Mars?), and Alessandra Iacobucci (Dauphine) showed us the unexpected thermal features exhibited by simulations of chains of rotors subjected to both thermal and mechanical forcings, which we never discussed in Dauphine beyond joking about her many batch jobs running on our cluster!
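The 0.234 story can be peeked at empirically; the sketch below (my own toy experiment, not from Gareth's lecture) runs a random-walk Metropolis on a d-dimensional standard normal with the asymptotically optimal proposal scale 2.38/√d and prints the observed acceptance rate:

```python
import numpy as np

rng = np.random.default_rng(3)

# Random-walk Metropolis on a d-dimensional standard normal target, with
# the asymptotically optimal proposal scale 2.38 / sqrt(d) that yields an
# acceptance rate near 0.234 as d grows.
d, n_iter = 50, 20_000
scale = 2.38 / np.sqrt(d)

x = np.zeros(d)
logp = -0.5 * x @ x
accepts = 0
for _ in range(n_iter):
    y = x + scale * rng.standard_normal(d)
    logp_y = -0.5 * y @ y
    if np.log(rng.uniform()) < logp_y - logp:
        x, logp = y, logp_y
        accepts += 1

print(accepts / n_iter)   # close-ish to 0.234 for moderate d
```

At d=50 the finite-dimensional rate is only approximately 0.234, the result being an asymptotic one, but the scaling in 1/√d of the optimal step size is already visible.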

And I remembered today that there is currently, and in parallel, another BIRS workshop on statistical model selection [with a lot of overlap with our themes] taking place in Banff! With snow already there! Unfair or rather #unfair, as someone much too well-known would whine..! Not that I am in a position to complain about the great conditions here in Oaxaca (except that having to truly worry about stray dogs, rather than conceptually about bears, makes running more of a challenge, if not the altitude, since both places are at about the same elevation).

## congratulations, Dr. Wu!

Posted in pictures, Statistics, University life with tags academic position, HMC, PDMP, PhD thesis, thesis defence, Université Paris Dauphine on October 4, 2018 by xi'an

**T**his afternoon, my (now former) PhD student Changye Wu defended his thesis on Accelerated methods for MCMC, for which the jury awarded him the title of Docteur de l’Université Paris Dauphine. Congratulations to him and best wishes for his job hunting!

## coordinate sampler as a non-reversible Gibbs-like MCMC sampler

Posted in Books, Kids, Statistics, University life with tags arXiv, Cox process, MCqMC 2018, NIPS 2018, PDMP, PhD students, Rennes, Université Paris Dauphine, Zig-Zag on September 12, 2018 by xi'an

**I**n connection with the talk I gave last July in Rennes for MCqMC 2018, I posted yesterday a preprint on arXiv of the work that my [soon to defend!] Dauphine PhD student Changye Wu and I did on an alternative PDMP. This novel avatar of the zig-zag sampler is a non-reversible, continuous-time MCMC sampler that we called the Coordinate Sampler, based on a piecewise deterministic Markov process. In addition to establishing the theoretical validity of this new sampling algorithm, we show, in the same line as Deligiannidis et al. (2018), that the Markov chain it induces exhibits geometrical ergodicity for distributions whose tails decay at least as fast as an exponential distribution and at most as fast as a Gaussian distribution. A few numerical examples (a 2D banana shaped distribution à la Haario et al., 1999, strongly correlated high-dimensional normals, a log-Gaussian Cox process) highlight that our coordinate sampler is more efficient than the zig-zag sampler in terms of effective sample size.

Actually, we had sent this paper before the summer as a NIPS [2018] submission, but it did not make it through [the 4900 submissions this year and] the final review process, being eventually rated above the acceptance bar but not that much above!
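For readers curious about the PDMP mechanics, here is a minimal one-dimensional zig-zag sketch for a standard normal target (a toy illustration only, not the coordinate sampler of the preprint), with the event times simulated exactly by inverting the integrated rate:

```python
import numpy as np

rng = np.random.default_rng(0)

# One-dimensional Zig-Zag sampler for a N(0,1) target: event rate
# lambda(x, v) = max(0, v * x); between events the state moves linearly,
# x(t) = x + v t, and at each event the velocity flips sign.
x, v = 0.0, 1.0
T = 0.0                    # total trajectory time
sum_x = sum_x2 = 0.0       # exact time integrals of x and x^2 along the path

for _ in range(100_000):
    a = v * x
    e = rng.exponential()
    # invert the integrated rate to get the next event time tau:
    # for a >= 0, a*tau + tau^2/2 = e; for a < 0, the rate stays zero
    # until s = -a, giving (tau + a)^2 / 2 = e
    tau = -a + np.sqrt(max(a, 0.0) ** 2 + 2.0 * e)
    # accumulate int x dt and int x^2 dt over the linear segment (v^2 = 1)
    sum_x += x * tau + v * tau**2 / 2.0
    sum_x2 += x**2 * tau + x * v * tau**2 + tau**3 / 3.0
    T += tau
    x += v * tau           # deterministic drift up to the event
    v = -v                 # flip the velocity

print(sum_x / T, sum_x2 / T)   # time averages, close to E[X]=0 and E[X^2]=1
```

Ergodic averages are computed along the continuous trajectory rather than from discrete samples, one of the appealing features of these continuous-time samplers.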

## ICM 2018

Posted in pictures, Statistics, Travel, University life with tags deep learning, ICM 2018, International Congress of Mathematicians, Maria Esteban, Michael Jordan, Rio de Janeiro, stochastic optimisation, Université Paris Dauphine on August 4, 2018 by xi'an

**W**hile I am not following the International Congress of Mathematicians which just started in Rio, and even less attending, I noticed an entry on their webpage on my friend and colleague Maria Esteban which I would have liked to repost *verbatim* but cannot figure out how. (ICM 2018 also features a plenary lecture by Michael Jordan on gradient based optimisation [which was also Michael’s topic at ISBA 2018] and another one by Sanjeev Arora on the maths of deep learning, two talks broadly related to statistics, which is presumably a première at this highly selective maths conference!)