## Archive for inverse problems

## Irène Waldspurger, CNRS bronze medal

Posted in Statistics with tags bois de Boulogne, CNRS, CNRS Bronze Medal, inverse problems, La Défense, machine learning, Université Paris Dauphine on February 14, 2020 by xi'an

**M**y colleague at Paris Dauphine, Irène Waldspurger, got one of the prestigious CNRS bronze medals this year. Irène works on inverse problems and machine learning, with applications to sensing and imaging. Congrats!

## G²S³18, Breckenridge, CO, June 17-30, 2018

Posted in Statistics with tags Breckenridge, Colorado, computational statistics, Edinburgh, Gene Golub, inverse problems, ISBA 2018, MCqMC 2018, Monte Carlo Statistical Methods, poster, Rennes, SIAM, summer school on October 3, 2017 by xi'an

## the curious incident of the inverse of the mean

Posted in R, Statistics, University life with tags astronomy, Bayesian inference, inverse problems, parallaxes on July 15, 2016 by xi'an

**A**s I figured out while working with astronomer colleagues last week, a strange if understandable difficulty arises from the simplest and most studied statistical model, namely the Normal model

x~N(θ,1)

Indeed, if one reparametrises this model as x~N(υ⁻¹,1) with υ>0, a *single* observation x brings very little information about υ! (This is not a toy problem, as it corresponds to estimating distances from observations of parallaxes.) If x is large, υ is very likely to be small, but if x is small or negative, υ is certainly large, with no power to discriminate between highly different values. For instance, Fisher’s information for this model and parametrisation is (dυ⁻¹/dυ)² = υ⁻⁴, which thus collapses to zero as υ grows.

While one can always hope for Bayesian miracles, they do not automatically occur. For instance, working with a Gamma prior Ga(3,10³) on υ [as informed by a large astronomy dataset] leads to a posterior expectation hardly impacted by the value of the observation x.

And an alternative estimate like the harmonic posterior mean, associated with the relative squared error loss, does not show much more impact from the observation either.

There is simply not enough information contained in one datapoint (or even several datapoints, for that matter) to infer about υ.
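A quick numerical check of this flat behaviour (my own sketch, in Python rather than R): the Ga(3,10³) prior is from the post, while the grid quadrature and the precise form of the harmonic-type estimate, E[υ⁻¹|x]/E[υ⁻²|x] as the Bayes rule under the loss (δ/υ−1)², are my assumptions.

```python
import numpy as np

def post_estimates(x, a=3.0, b=1e3):
    """Posterior estimates of upsilon for x ~ N(1/upsilon, 1), upsilon ~ Ga(a, b),
    by brute-force quadrature on a uniform grid (log scale for stability)."""
    ups = np.linspace(1e-3, 1.0, 100_000)
    logpost = (a - 1) * np.log(ups) - b * ups - 0.5 * (x - 1.0 / ups) ** 2
    w = np.exp(logpost - logpost.max())        # unnormalised posterior weights
    mean = np.sum(ups * w) / np.sum(w)         # posterior expectation of upsilon
    # Bayes estimate under the relative squared error loss (delta/upsilon - 1)^2
    harm = np.sum(w / ups) / np.sum(w / ups ** 2)
    return mean, harm

for x in (-2.0, 0.0, 2.0, 4.0):
    print(x, post_estimates(x))
```

On this grid both estimates sit near υ ≈ 0.1 whatever the observation, in line with the above: the prior dominates.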

## EQUIP launch

Posted in Statistics, University life with tags inverse problems, model uncertainty, oil statistics, partial differential equations, postdoctoral position, University of Warwick, Warwick on October 10, 2013 by xi'an

**T**oday, as I was around (!), I attended the launch of the new Warwick research project EQUIP (which stands for *Enabling quantification of uncertainty for inverse problems*). This is an EPSRC-funded project merging mathematics, numerical analysis, statistics and geophysics, with a primary target application [alas!] in the oil industry. It will start hiring four (4!) postdocs pretty soon. The talks were all interesting, but I particularly liked the idea that they were addressed primarily to students potentially interested in the positions. In addition, Mark Girolami gave a most appreciated insight on the modelling of uncertainty in PDE models, connecting with earlier notions set by Tony O’Hagan, modelling that I hope we can discuss further when we are both in Warwick!

## MCMC at ICMS (3)

Posted in pictures, Statistics, Travel, University life with tags Adapski, adaptive MCMC methods, airlines, Bristol, climate modelling, Edinburgh, ICMS, inverse problems, Langevin diffusion, MALA, MCMC, Scotland, United Kingdom, weather modelling, weather prediction, workshop on April 26, 2012 by xi'an

**T**he intense pace of the first two days of our workshop on MCMC at ICMS had apparently taken a heavy toll on the participants, as part of the audience was missing this morning! This was not a consequence of the haggis at the previous night’s conference dinner, nor even of the above pace: the missing participants had opted ahead of time for leaving the workshop early, which is understandable given everyone’s busy schedule, esp. for those attending both the Bristol and Edinburgh workshops, even though it slightly impacted the atmosphere of the final day. (*Except for Mark Girolami, who most unfortunately suffered such a tooth infection that he had to seek urgent medical assistance yesterday afternoon. Best wishes to Mark for a prompt recovery, say I with a dental appointment tomorrow…!*)

**T**he plenary talk of the day was delivered by Heikki Haario, who provided us with a survey of the (adaptive) MCMC advances he and his collaborators had made in the analysis of complex and immensely high-dimensional weather models. This group of Finnish researchers, who started from inverse problem analysis rather than from MCMC, have had a major impact on the design and validation of adaptive MCMC algorithms, especially in the late 1990s. (Heikki also was a co-organizer of the Adap’ski workshops, workshops that may be continued, stay tuned!) The next talk, by Marko Laine, was also about adaptive MCMC algorithms, with the difference that the application was climate modelling. It contained interesting directions about early stopping (“early rejection”, as opposed to “delayed rejection”) of diverging proposals (gaining 80% in computing time!) and about parallel adaptation. Still in the same theme, Gersende Fort explained the adaptive version of the equi-energy sampler she and co-authors had recently developed. Although she had briefly presented this paper in Banff a month ago, I found the talk quite informative about the implementation of the method and at the perfect technical level *(for me!)*.
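Early rejection can be sketched quite simply (my own toy example, not from Laine’s talk): for a Gaussian-likelihood target, the negative log-likelihood is a sum of squared residuals that can only grow as data blocks are added, so a proposal can be rejected as soon as its partial cost crosses the acceptance bound, without completing the sum.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=1000)        # synthetic data with unit noise
blocks = np.array_split(y, 20)             # residuals evaluated block by block

def full_cost(theta):
    return 0.5 * np.sum((y - theta) ** 2)  # negative log-likelihood up to a constant

def er_step(theta, c_curr, scale=0.1):
    """One random-walk Metropolis step with early rejection: the quadratic
    cost only grows as residual blocks are added, so the proposal is rejected
    as soon as its partial cost crosses the acceptance bound."""
    prop = theta + scale * rng.normal()
    bound = c_curr - np.log(rng.uniform()) # accept iff full cost(prop) < bound
    c_prop = 0.0
    for block in blocks:
        c_prop += 0.5 * np.sum((block - prop) ** 2)
        if c_prop >= bound:                # partial cost already too high: reject
            return theta, c_curr
    return prop, c_prop

theta, c, chain = 0.0, full_cost(0.0), []
for _ in range(5000):
    theta, c = er_step(theta, c)
    chain.append(theta)
print(np.mean(chain[1000:]))               # should sit close to y.mean()
```

The savings obviously depend on how quickly bad proposals reveal themselves: the 80% figure quoted in the talk applies to their climate models, not to this toy target.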

**I**n *[what I now perceive as]* another recurrent theme of the workshop, namely the recourse to Gaussian structures like Gaussian processes (see, e.g., Ian Murray’s talk yesterday), Andrew Stuart gave us a light introduction to random walk Metropolis-Hastings algorithms on Hilbert spaces. In particular, he connected with Ian Murray’s talk of yesterday regarding the definition of a “new” random walk (due to Radford Neal) that makes a proposal

x′ = √(1−β²) x + β ξ,   ξ ∼ N(0, C),

that still preserves the acceptance probability of the original (“old”) random walk proposal. The final talks of the morning were Krys Latuszynski’s and Nick Whiteley’s very pedagogical presentations of the convergence properties of manifold MALA and of particle filters for hidden Markov models. In both cases, the speakers avoided overly technical details and provided clear intuition about the presented results, a great feat after those three intense days of talks! (Having attended Nick’s talk in Paris two weeks ago helped, of course.)
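A minimal simulation of this property (my own sketch, not from the talk, assuming the proposal in question is Neal’s autoregressive move x′ = √(1−β²)x + βξ, here in one dimension with a standard Gaussian reference): the move leaves N(0,1) exactly invariant, which is why no accept/reject correction is needed for the Gaussian part of the target.

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 0.3
x, draws = 0.0, []
for _ in range(200_000):
    # autoregressive move: N(0,1) is exactly invariant for this kernel,
    # so the Gaussian reference part needs no accept/reject correction
    x = np.sqrt(1 - beta ** 2) * x + beta * rng.normal()
    draws.append(x)
d = np.asarray(draws)
print(d.mean(), d.var())                   # both close to the reference (0, 1)
```

In the Hilbert-space setting of the talk, the same move with ξ ∼ N(0, C) is what gives a dimension-robust acceptance rate, the acceptance ratio involving only the likelihood part of the posterior.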

**U**nfortunately, due to very limited flight options (after one week of traveling around the UK), and also being slightly worried about missing my flight, I had to leave the meeting along with all my French colleagues right after Jean-Michel Marin’s talk on (hidden) Potts driven mixtures, explaining the computational difficulties in deriving marginal likelihoods. I thus missed the final talk of the workshop, by Gareth Tribello, and delivered my final remarks at the lunch break instead.

**O**verall, when reflecting on those two Monte Carlo workshops, I feel I preferred the pace of the Bristol workshop, because it allowed for more interactions between the participants by scheduling fewer talks… This being said, the organization at ICMS was superb (as usual!) and the talks were uniformly very good, so it also was a very profitable meeting, of a different kind! As written earlier, among other things, it induced (in me) some reflections on a possible new research topic with friends there. Looking forward to visiting Scotland again, of course!