Archive for La Défense

Irène Waldspurger, CNRS bronze medal

My colleague at Paris Dauphine, Irène Waldspurger, received one of the prestigious CNRS bronze medals this year. Irène works on inverse problems and machine learning, with applications to sensing and imaging. Congratulations!

Posted in Statistics with tags bois de Boulogne, CNRS, CNRS Bronze Medal, inverse problems, La Défense, machine learning, Université Paris Dauphine on February 14, 2020 by xi'an

ABC with Gibbs steps
Posted in Statistics with tags ABC, ABC-Gibbs, Approximate Bayesian computation, Bayesian inference, bois de Boulogne, compatible conditional distributions, contraction, convergence, ergodicity, France, Gibbs sampler, hierarchical Bayesian modelling, incompatible conditionals, La Défense, Paris, stationarity, tolerance, Université Paris Dauphine on June 3, 2019 by xi'an

With Grégoire Clarté, Robin Ryder and Julien Stoehr, all from Paris Dauphine, we have just arXived a paper on the specifics of ABC-Gibbs, a version of ABC in which the generic ABC accept-reject step is replaced by a sequence of n conditional ABC accept-reject steps, each aiming at an ABC version of a conditional distribution extracted from the joint and intractable target. Hence an ABC version of the standard Gibbs sampler. What makes it special is that each conditional can (and should) condition on a different statistic, in order to decrease the dimension of that statistic, ideally down to the dimension of the corresponding component of the parameter. This successfully bypasses the curse of dimensionality, but immediately meets two difficulties. The first is that the resulting sequence of conditionals is not coherent, since it is not a Gibbs sampler on the ABC target: the conditionals are incompatible, and convergence of the associated Markov chain therefore becomes an issue. We produce sufficient conditions for the Gibbs sampler to converge to a stationary distribution despite the incompatible conditionals. The second problem is that, provided it exists, the limiting and also intractable distribution does not enjoy a Bayesian interpretation, and hence may fail to be justified from an inferential viewpoint. We however succeed in producing a version of ABC-Gibbs in a hierarchical model where the limiting distribution can be made explicit and, even better, can be weighted towards recovering the original target (at least in the limit of zero tolerance).
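To fix ideas, here is a minimal toy sketch of the component-wise scheme, not the paper's implementation: a small hierarchical normal model where the hyperparameter conditional conditions only on the mean of the lower-level parameters, a one-dimensional statistic matching its own dimension. The model, statistics, tolerance, and helper names (`abc_step`, `sim_mu`) are all my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hierarchical model (hypothetical, only to illustrate the scheme):
#   mu ~ N(0, 10),  theta_i | mu ~ N(mu, 1),  x_i | theta_i ~ N(theta_i, 1)
n = 5
theta_star = rng.normal(2.0, 1.0, n)
x = rng.normal(theta_star, 1.0)          # observed data

eps = 0.5     # ABC tolerance (fixed here; its limit is what the paper studies)
sweeps = 200  # number of Gibbs sweeps

def abc_step(simulate, observed_stat, eps, max_tries=1_000):
    """One conditional ABC accept-reject step: propose until the
    simulated statistic falls within eps of the observed one."""
    for _ in range(max_tries):
        candidate, stat = simulate()
        if abs(stat - observed_stat) < eps:
            break
    return candidate

mu, theta = 0.0, x.copy()
for _ in range(sweeps):
    # conditional for mu: condition only on the mean of theta,
    # a statistic of the same dimension as mu itself
    def sim_mu():
        m = rng.normal(0.0, np.sqrt(10.0))       # draw mu from its prior
        return m, rng.normal(m, 1.0, n).mean()   # simulate theta | mu, keep the mean
    mu = abc_step(sim_mu, theta.mean(), eps)

    # conditionals for each theta_i: condition only on the single datum x_i
    for i in range(n):
        def sim_theta():
            th = rng.normal(mu, 1.0)             # theta_i | mu
            return th, rng.normal(th, 1.0)       # simulate x_i | theta_i
        theta[i] = abc_step(sim_theta, x[i], eps)
```

Each conditional step only ever compares a low-dimensional statistic, which is the whole point: a joint ABC accept-reject on all of (mu, theta) would need a statistic of much higher dimension and a far smaller acceptance rate.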
postdoc position still open
Posted in pictures, Statistics, University life with tags ABC, Agence Nationale de la Recherche, ANR, approximate Bayesian inference, bois de Boulogne, La Défense, misspecified model, Paris, Paris-Saclay campus, PhD thesis, postdoctoral position, PSL Research University, Université de Montpellier, Université Paris Dauphine, University of Oxford on May 30, 2019 by xi'an

The postdoctoral position supported by the ANR funding of our Paris-Saclay-Montpellier research conglomerate on approximate Bayesian inference and computation remains open for the time being. We are more particularly looking for candidates with a strong background in mathematical statistics, especially Bayesian nonparametrics, towards the analysis of the limiting behaviour of approximate Bayesian inference. Candidates should email me (gmail address: bayesianstatistics) with a detailed vita (CV) and a motivation letter including a research plan. Letters of recommendation may also be emailed to the same address.
absint[he] postdoc on approximate Bayesian inference in Paris, Montpellier and Oxford
Posted in Statistics with tags ABC, Agence Nationale de la Recherche, ANR, approximate Bayesian inference, bois de Boulogne, La Défense, misspecified model, Paris, Paris-Saclay campus, PhD thesis, postdoctoral position, Université de Montpellier, Université Paris Dauphine, University of Oxford on March 18, 2019 by xi'an

As a consequence of its funding by the Agence Nationale de la Recherche (ANR) in 2018, the ABSint research conglomerate is now actively recruiting a postdoctoral collaborator for up to 24 months. The acronym ABSint stands for Approximate Bayesian solutions for inference on large datasets and complex models. The ABSint conglomerate involves researchers located in Paris, Saclay, and Montpellier, as well as Lyon, Marseille, and Nice. This call seeks candidates with an excellent research record who are interested in collaborating with local researchers on approximate Bayesian techniques such as ABC, variational Bayes, PAC-Bayes, Bayesian nonparametrics, scalable MCMC, and related topics. A potential direction of research would be the derivation of new Bayesian tools for model checking in such complex environments. The postdoctoral collaborator will be primarily located at Université Paris-Dauphine, with supported periods in Oxford and visits to Montpellier. No teaching duty is attached to this research position.
Applications can be submitted in either English or French. Sufficient working fluency in English is required. While mastering some French does help with daily life in France (!), it is not a prerequisite. The candidate must hold a PhD degree by the date of application (not the date of employment). The position opens on July 1, with possible accommodation for a later start in September or October.
The deadline for applications is April 30, or until the position is filled. The estimated gross salary is around 2500 EUR, depending on experience (years since PhD). Candidates should contact Christian Robert (gmail address: bayesianstatistics) with a detailed vita (CV) and a motivation letter including a research plan. Letters of recommendation may also be emailed to the same address.
O’Bayes in action
Posted in Books, Kids, Statistics, University life with tags bois de Boulogne, Charles de Gaulle, invariance, Jeffreys priors, La Défense, mathematical puzzle, noninformative priors, OBayes 2017, objective Bayes, randomisation, RER B, Roissy, Université Paris Dauphine on November 7, 2017 by xi'an

My next-door colleague [at Dauphine] François Simenhaus shared a paradox [to be developed in a forthcoming exam!] with Julien Stoehr and me last week, namely that, when selecting the larger of two numbers a [observed] and b [unobserved], drawing a random boundary on a [meaning that a is chosen iff a is larger than this boundary] raises the probability of picking the larger number above ½…
When thinking about it in the wretched RER train [train that got immobilised for at least two hours just a few minutes after I went through!, good luck to the passengers travelling to the airport…] to De Gaulle airport, I lost the argument: if a<b, the probability [for this random bound] to be larger than a, and hence for selecting b, is 1−Φ(a), while, if a>b, the probability [of winning] is Φ(a). Hence the probability is ½ only when a is the median of this random variable. But, when discussing the issue further with Julien, I worked out an interesting noninformative prior characterisation. Namely, if I assume a,b to be iid U(0,M) and set an improper prior 1/M on M, the conditional probability that b>a given a is ½. Furthermore, the posterior probability of picking the right [largest] number with François's randomised rule is also ½, no matter what the distribution of the random boundary is. Now, the most surprising feature of this coffee room derivation is that these properties only hold for the prior 1/M: any other power of M induces an asymmetry between a and b. (The same properties hold when a,b are iid Exp(M).) Of course, this is not absolutely unexpected, since 1/M is the invariant prior and the "intuitive" symmetry only holds under this prior. Power to O'Bayes!
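For the uniform case, the coffee room computation can be written out in a couple of lines (my own reconstruction of the argument, not taken from the post):

```latex
% a, b iid U(0,M), improper prior pi(M) = 1/M, having observed a:
% the posterior on M, normalised on M > a, is
\pi(M \mid a) \;=\; a\,M^{-2}\,\mathbf{1}\{M > a\},
% so the probability that the unobserved b exceeds a is
\mathbb{P}(b > a \mid a)
  \;=\; \int_a^\infty \Bigl(1 - \frac{a}{M}\Bigr)\, a\,M^{-2}\,\mathrm{d}M
  \;=\; 1 \;-\; a^2 \int_a^\infty M^{-3}\,\mathrm{d}M
  \;=\; 1 - \tfrac{1}{2} \;=\; \tfrac{1}{2}.
% With pi(M) proportional to M^{-k} instead, the same integral yields
% P(b > a | a) = 1/(k+1), so the symmetry indeed holds only for k = 1.
```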
When discussing the matter again with François yesterday, I realised I had changed his wording of the puzzle. The original setting is one with two cards hiding the unknown numbers a and b, and a player picking one of the cards. If the player picks a card at random, there is indeed a probability of ½ of picking the largest number. If the decision to switch or not depends on an independent random draw being larger or smaller than the number on the observed card, the probability of getting max(a,b) in the end hits 1 when this random draw falls into (a,b) and remains ½ outside (a,b). Randomisation pays.
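The randomised rule is easy to check by simulation. The sketch below uses arbitrary illustrative values for the two hidden numbers and a standard normal boundary (both my own choices); the rule's win rate should come out as ½ + ½·P(boundary ∈ (a,b)), strictly above ½.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

a, b = 0.3, 1.7        # hidden numbers on the two cards (illustrative values)
N = 200_000            # Monte Carlo replications

seen = np.where(rng.random(N) < 0.5, a, b)    # card revealed, uniformly at random
other = np.where(seen == a, b, a)             # the card left face down
bound = rng.standard_normal(N)                # independent random boundary ~ N(0,1)
final = np.where(seen > bound, seen, other)   # keep iff above the boundary, else switch
win_rate = (final == max(a, b)).mean()

def Phi(z):  # standard normal cdf
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# theoretical win probability: 1/2 + 1/2 * P(boundary in (a, b)) > 1/2
theory = 0.5 + 0.5 * (Phi(b) - Phi(a))
```

Any boundary distribution with full support works; the edge over ½ is exactly half the probability that the boundary separates the two hidden numbers.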
unusual view of my office [jatp]
Posted in pictures, Travel with tags bois de Boulogne, flight, Italia, La Défense, office, Paris, Seine, Université Paris Dauphine, Venezia on October 18, 2017 by xi'an

Journée algorithmes stochastiques
Posted in Books, pictures, Statistics, University life with tags Jussieu, La Défense, Monte Carlo Statistical Methods, PAC-Bayesian, Paris, PSL, stochastic algorithms, Université Paris Dauphine, Université Pierre et Marie Curie, workshop on September 27, 2017 by xi'an

On December 1, 2017, we will hold a one-day workshop on stochastic algorithms at Université Paris-Dauphine, with the following speakers:

Rémi Bardenet – CNRS Lille / CRISTAL [10:00]

Nicolas Chopin – ENSAE / CREST [11:00]

Aymeric Dieuleveut – ENS / DI & INRIA [14:00]

Aude Genevay – Dauphine / CEREMADE & INRIA [15:00]

Pierre Monmarché – UPMC / LJLL [16:30]
Details and abstracts of the talks are available on the workshop webpage. Attendance is free, but registration is requested to help plan the morning and afternoon coffee breaks. Looking forward to seeing 'Og's readers there, at least those in the vicinity!
And while I am targeting Parisians, crypto-Bayesians, and nearly-Parisians, there is another one-day workshop on Bayesian and PAC-Bayesian methods on November 16, at Université Pierre et Marie Curie (campus Jussieu), with invited speakers
and a similar request for (free) registration.