## Archive for Université Paris Dauphine

## on-line course

Posted in Kids, pictures, Statistics, University life with tags courses in English, drawing tablet, Huion, LaTeX, MathJax, multiple choice question, on-line course, pdf, questionnaire, teaching, Université Paris Dauphine, xournalpp on September 4, 2020 by xi'an

**S**ince this teaching semester is 100% on-line at Paris Dauphine for the third-year students, I started last Monday to teach my statistical modelling course from my office, on my computer: using a Teams connection with the 180+ students, sharing my slides *or* my webcam with them, and writing on the pdf slides (or on the Teams whiteboard) with a Huion drawing tablet. I found the most convenient way to annotate the slides was via Xournalpp. From my end of the exchange, the class went on rather smoothly, with students interacting about the practicals and the contents of the course. As I have also recorded my lectures (in August) and hope the students will first go through the recordings, I will see next time how much effort they have put into assimilating the material, as the on-line class will mostly concentrate on questions and applications. I also hope to run a quick multiple choice questionnaire at the end (or beginning?) of each class, but am still fishing for an interface that would (a) handle LaTeX or MathJax, (b) shuffle questions from a pool so that each student gets a different question (or thinks so), and (c) store the outcome of the test. Any suggestion welcome!

## scalable Langevin exact algorithm [Read Paper]

Posted in Books, pictures, Statistics, University life with tags Albert Einstein, Brownian motion, control variates, COVID-19, doubly intractable problems, hazard function, Ising model, Langevin diffusion, MCMC, Ornstein-Uhlenbeck process, Paul Langevin, QSMC, quarantine, Read paper, rejection sampler, thinning, Université Paris Dauphine on June 23, 2020 by xi'an

Murray Pollock, Paul Fearnhead, Adam M. Johansen and Gareth O. Roberts (**CoI:** all with whom I have strong professional and personal connections!) have a Read Paper discussion happening tomorrow [under relaxed lockdown conditions in the UK, except for the absurd quatorzaine on all travellers, but still in a virtual format] that we discussed together [from our respective homes] at Paris Dauphine. And which I already discussed on this blog when it first came out.

Here are quotes I spotted during this virtual Dauphine discussion, although we did not come up with enough material to build a significant contribution, beyond wondering at the potential for solving the O(n) bottleneck and for handling doubly intractable cases like the Ising model, and noticing the nice feature of the log target being estimable by unbiased estimators, as well as the use of control variates, for once well-justified in a non-trivial environment.

“However, in practice this simple idea is unlikely to work. We can see this most clearly with the rejection sampler, as the probability of survival will decrease exponentially with t—and thus the rejection probability will often be prohibitively large.”

“This can be viewed as a rejection sampler to simulate from μ(x,t), the distribution of the Brownian motion at time t conditional on its surviving to time t. Any realization that has been killed is ‘rejected’ and a realization that is not killed is a draw from μ(x,t). It is easy to construct an importance sampling version of this rejection sampler.”
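The construction in this quote can be mimicked in a few lines. Here is a toy sketch (not the paper's algorithm, and with a made-up killing rate κ(x) = x²) of rejection sampling Brownian motion conditioned on survival, where each realization is killed at rate κ along its path and only surviving paths are kept:

```python
import numpy as np

rng = np.random.default_rng(0)

def survive_bm(x0, t, dt=0.01, kappa=lambda x: x**2):
    """Toy rejection sampler: run Brownian motion from x0, killing it at
    rate kappa(x); a path surviving to time t is a draw from mu(x, t),
    i.e. BM conditioned on survival. The rate kappa is illustrative."""
    n = int(t / dt)
    x = x0
    for _ in range(n):
        x += np.sqrt(dt) * rng.standard_normal()
        if rng.random() < kappa(x) * dt:   # killed: reject the whole path
            return None
    return x                               # survived: accepted draw

# keep resampling until some paths survive to time t = 1
draws = [s for s in (survive_bm(0.0, 1.0) for _ in range(200)) if s is not None]
```

As the quote warns, the acceptance rate here decays exponentially with t, which is exactly why the naive version does not scale.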

## Cédric Villani on COVID-19 [and Zoom for the local COVID-19 seminar]

Posted in Statistics, University life with tags app, Astérix et Obélix, capture-recapture, Cédric Villani, coronavirus epidemics, COVI, French politics, OPECST, StopCovid, survey sampling, Université Paris Dauphine, webinar, Zoom on June 19, 2020 by xi'an

**F**rom the “start” of the COVID-19 crisis in France (or more accurately after lockdown on March 13), the math department at Paris-Dauphine has run an internal webinar around this crisis, not solely focusing on the math or stats aspects but also involving speakers from other domains, from epidemiology to sociology, to economics. The speaker today was [Field medalist then elected member of Parliament] Cédric Villani, as a member of the French Parliament sciences and technology committee, l’Office parlementaire d’évaluation des choix scientifiques et technologiques (OPECST), which adds its recommendations to those of the several committees advising the French government. The discussion was interesting as an insight on the political processing of the crisis and the difficulties caused by the heavy-handed French bureaucracy, which still required filling in form A-3-b6 in emergency situations. And the huge delays in launching a genuine survey of the range and diffusion of the epidemic. Which, as far as I understand, has not yet started….

## neural importance sampling

Posted in Books, Kids, pictures, Statistics, University life with tags autoencoder, Bayesian neural networks, bias, coupling layer, Dennis Prangle, Disney, importance sampling, light transport, normalizing flow, path sampling, Université Paris Dauphine on May 13, 2020 by xi'an

**D**ennis Prangle signaled this paper during his talk of last week, the first of our ABC ‘minars, now rechristened as The One World ABC Seminar to join the “One World xxx Seminar” franchise! The paper is written by Thomas Müller and co-authors, all from Disney research [hence the illustration], and we discussed it in our internal reading seminar at Dauphine. The authors propose to parameterise the importance sampling density via neural networks, just like Dennis is using auto-encoders. Starting with the goal of approximating the integral

ℑ = ∫ f(x) dx

(where they should assume *f* to be non-negative for the following), the authors aim at simulating from an approximation of f(x)/ℑ, since this “ideal” pdf would give zero variance.
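The zero-variance claim is easy to check numerically. With the illustrative target f(x) = exp(-x²/2), a N(0,1) proposal is exactly proportional to f, so every importance weight equals ℑ = √(2π); a mismatched proposal gives the same mean but strictly positive variance (all choices here are toy examples, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.exp(-x**2 / 2)           # toy unnormalised target, I = sqrt(2*pi)

def is_estimate(sigma, n=10_000):
    """Importance-sampling estimate of I = integral of f,
    with a N(0, sigma^2) proposal density q."""
    x = sigma * rng.standard_normal(n)
    q = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    w = f(x) / q                          # importance weights
    return w.mean(), w.std()

est_ideal, sd_ideal = is_estimate(1.0)    # proposal proportional to f: constant weights
est_poor, sd_poor = is_estimate(2.0)      # mismatched proposal: positive weight variance
```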

“Unfortunately, the above integral is often not solvable in closed form, necessitating its estimation with another Monte Carlo estimator.”

Among the discussed solutions, the Latent-Variable Model one is based on a pdf represented as a marginal. A mostly intractable integral, which the authors surprisingly seem to deem an issue as they do not mention the standard solution of simulating from the joint and using the conditional in the importance weight. (Or even more surprisingly and obviously wrongly see the latter as a biased approximation to the weight.)
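The standard solution alluded to above can be sketched with a toy two-component latent-variable proposal (all specifics here are illustrative): simulate (z, x) from the joint and weight by f(x)/q(x|z), which is unbiased for ℑ without ever evaluating the intractable marginal q(x), since E[f(X)/q(X|Z)] = Σ_z p(z) ∫ f(x) dx = ℑ:

```python
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: np.exp(-x**2 / 2)    # toy unnormalised target, I = sqrt(2*pi)

# latent-variable proposal: z ~ Bernoulli(1/2), x | z ~ N(mu_z, 1),
# whose marginal q(x) is a mixture we never need to evaluate
mu = np.array([-1.0, 1.0])
n = 200_000
z = rng.integers(0, 2, n)
x = mu[z] + rng.standard_normal(n)
q_cond = np.exp(-(x - mu[z])**2 / 2) / np.sqrt(2 * np.pi)   # q(x | z)

# unbiased for I: weight by the conditional, not the marginal
estimate = (f(x) / q_cond).mean()
```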

“These “autoregressive flows” offer the desired exact evaluation of q(x;θ). Unfortunately, they generally only permit either efficient sample generation or efficient evaluation of q(x;θ), which makes them prohibitively expensive for our application to Monte Carlo integration.”

The paper then presents normalizing flows, namely the representation of the simulation output as the result of an invertible mapping of a standard (e.g., Gaussian or Uniform) random variable, x=h(u,θ), which can itself be decomposed into a composition of such invertible functions. And I am thus surprised this cannot be done in an efficient manner if the transforms are well chosen…

“The key proposition of Dinh et al. (2014) is to focus on a specific class of mappings—referred to as coupling layers—that admit Jacobian matrices where determinants reduce to the product of diagonal terms.”

Using a transform with a triangular Jacobian at each stage has the appeal of keeping the change of variable simple while allowing for non-linear transforms, namely piecewise polynomials. When reading about the one-blob (!) encoding, I am however uncertain the approach is more than the choice of a particular functional basis, as for instance wavelets (which may prove more costly to handle, granted!).
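A minimal sketch of an additive coupling layer (in the spirit of NICE, with a fixed tanh standing in for the trained network) shows both the exact inversion and the triangular Jacobian at work: the first half of x passes through unchanged and shifts the second half, so the Jacobian has unit diagonal and |det J| = 1, making the density of the output exactly computable:

```python
import numpy as np

def coupling_forward(x, shift):
    """Additive coupling layer: y1 = x1, y2 = x2 + shift(x1).
    The Jacobian is lower triangular with unit diagonal, so |det J| = 1."""
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    return np.concatenate([x1, x2 + shift(x1)], axis=-1)

def coupling_inverse(y, shift):
    """Exact inverse: subtract the same shift, no iterative solve needed."""
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    return np.concatenate([y1, y2 - shift(y1)], axis=-1)

shift = lambda u: np.tanh(u)     # stand-in for a learned network
x = np.array([0.3, -1.2, 0.7, 2.0])
y = coupling_forward(x, shift)
x_back = coupling_inverse(y, shift)
```

Stacking several such layers, permuting which half is frozen at each stage, yields the composed flows the quote refers to.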

“Given that NICE scales well to high-dimensional problems…”

It is always unclear to me why almost *every* ML paper feels the urge to redefine & motivate the KL divergence. And to recall that it avoids bothering about the normalising constant. Looking at the variance of the MC estimator & seeking minimal values is praiseworthy, but *only* when the variance exists. What are the guarantees on the density estimate for this to happen? And where are the arguments for NICE scaling nicely to high dimensions? Interesting intrusion of path sampling, but is it of any use outside image analysis (I had forgotten Eric Veach’s original work was on light transport)?
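For the record, the normalising-constant point is a one-line check: minimising the Kullback–Leibler divergence between f/ℑ and q(·;θ) does not involve ℑ in the optimisation, since

$$\arg\min_\theta \int \frac{f(x)}{\mathfrak{I}}\,\log\frac{f(x)/\mathfrak{I}}{q(x;\theta)}\,\mathrm{d}x \;=\; \arg\max_\theta \int f(x)\,\log q(x;\theta)\,\mathrm{d}x,$$

as ℑ merely rescales the objective and shifts it by a constant in θ.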

## value of a chess game

Posted in pictures, Statistics, University life with tags CEREMADE, chess, France, Isle of Lewis, maximin, minimaxity, Paris, PNAS, Scotland, Shannon, Université Paris Dauphine, value of a game, webinar on April 15, 2020 by xi'an

**I**n our (internal) webinar at CEREMADE today, Miguel Oliu Barton gave a talk on the recent result he and his student Luc Attia obtained, namely a tractable way of finding the value of a game (when minimax equals maximin), a result that was recently published in PNAS:

“Stochastic games were introduced by the Nobel Memorial Prize winner Lloyd Shapley in 1953 to model dynamic interactions in which the environment changes in response to the players’ behavior. The theory of stochastic games and its applications have been studied in several scientific disciplines, including economics, operations research, evolutionary biology, and computer science. In addition, mathematical tools that were used and developed in the study of stochastic games are used by mathematicians and computer scientists in other fields. This paper contributes to the theory of stochastic games by providing a tractable formula for the value of finite competitive stochastic games. This result settles a major open problem which remained unsolved for nearly 40 years.”

While I did not see a direct consequence of this result in regular statistics, I found most interesting the comment made at one point that chess (with forced nullity after repetitions) has a value, by virtue of Zermelo’s theorem, a question I had never considered (contrary to Shannon!). This value remains unknown.