## high-dimensional stochastic simulation and optimisation in image processing [day #1]

Posted in pictures, Statistics, Travel, Uncategorized, University life, Wines on August 29, 2014 by xi'an

Even though I flew through Birmingham (and had to endure the fundamental randomness of trains in Britain), I managed to reach the “High-dimensional Stochastic Simulation and Optimisation in Image Processing” conference location (in Goldney Hall Orangery) in due time to attend the (second) talk by Christophe Andrieu. He started with an explanation of the notion of a controlled Markov chain, which reminded me of our early and famous-if-unpublished paper on controlled MCMC. (The label “controlled” was inspired by Peter Green, who pointed out to us the different meanings of controlled in French [meaning checked or monitored] and in English. We use it here in the English sense, obviously.) The main focus of the talk was on the stability of controlled Markov chains, with of course connections with our controlled MCMC of old, for instance the case of the coerced acceptance probability. Which happened to be not that stable! The central tool was Lyapounov functions. (Making me wonder whether or not it would make sense to envision the meta-problem of adaptively estimating the adequate Lyapounov function from the MCMC outcome.)

As I had difficulties following the details of the convex optimisation talks in the afternoon, I eloped to work on my own and returned to the posters & wine session, where the small number of posters allowed for the proper amount of interaction with the speakers! Talking about the relevance of variational Bayes approximations and of possible tools to assess it, about the use of new metrics for MALA and of possible extensions to Hamiltonian Monte Carlo, about Bayesian modellings of fMRI and of possible applications of ABC in this framework. (No memorable wine to make the ‘Og!) Then a quick if reasonably hot curry and it was already bed-time after a rather long and well-filled day!

## dans le noir

Posted in Kids, pictures, Travel, Wines on August 27, 2014 by xi'an

Yesterday night, we went to a very special restaurant in down-town Paris, called “dans le noir”, where meals take place in complete darkness (truly “dans le noir”, i.e., in the dark!). Complete in the sense that it is impossible to see one’s hand or one’s glass. The waiters are blind and the experience turns them into our guides, as we are unable to move or eat in the dark! In addition to this highly informative experience, it was fun to guess the food (easy!) and even more to fail miserably at guessing the colour of the wine (a white Minervois made from Syrah that tasted very much like a red, either from Languedoc-Roussillon or from Bordeaux…!) The food was fine if not outstanding (the owner told us how cooking too refined a meal led to terrible feedback from the customers, as they could not guess what they were eating) and the wine very good (no picture for the ‘Og, obviously!). This was my daughter’s long-time choice for her 18th birthday dinner and a definitely outstanding idea! So if you have the opportunity to try one of those restaurants (in Barcelona Paseo Picasso, London Clerkenwell, New York, Paris Les Halles, or Saint Petersburg), I strongly suggest you make the move. Eating will never feel the same!

## Mas de Martin Ecce Vino

Posted in Wines on August 20, 2014 by xi'an

## Boston skyline

Posted in pictures, Running, Travel, University life, Wines on August 9, 2014 by xi'an

## Bangalore workshop [ಬೆಂಗಳೂರು ಕಾರ್ಯಾಗಾರ]

Posted in pictures, Running, Statistics, Travel, University life, Wines on August 3, 2014 by xi'an

As I am now back home after a rather lengthy and somewhat eventful trip [getting too early to Bangalore airport with 3 hours to spend in the nice and very quiet lounge, followed by another 5 hour wait in the very nice but not so quiet Bombay airport lounge, no visit to the cockpit this time!, and then the usual sick passenger blocking all trains from Paris-Charles de Gaulle airport for one hour, reaching home to find my 97-year-old neighbour fallen in her kitchen and calling for help!], I cannot but reflect on the difference between my two trips to India, from the chaos of Varanasi to the orderly peace of the campus of the Indian Institute of Science of Bangalore and even to some extent of the whole city of Bangalore, all proportions guarded. Even managing to get a [new] pair of [new] prescription glasses (or rather spectacles) within three days!

I thus found this trip much less stressful and much more profitable, from enjoying the local food to discussing with Indian statisticians. The purpose of the IFCAM workshop was to bring both groups together for potential joint projects funded by IFCAM (at the travel level). While I found most talks were driven by specific applications, esp. in genomics, there are directions where we could indeed collaborate, from capture-recapture to astrostatistics. So it may be that I’ll be back in India in the near future!

## Bangalore workshop [ಬೆಂಗಳೂರು ಕಾರ್ಯಾಗಾರ]

Posted in pictures, R, Running, Statistics, Travel, University life, Wines on July 31, 2014 by xi'an

Second day at the Indo-French Centre for Applied Mathematics and the workshop. Maybe not the most exciting day in terms of talks (as I missed the first two plenary sessions by (a) oversleeping and (b) running across the campus!). However I had a neat talk with another conference participant that led to [what I think are] interesting questions… (And a very good meal in a local restaurant as the guest house had not booked me for dinner!)

To wit: given a target like

$\lambda \exp(-\lambda) \prod_{i=1}^n \dfrac{1-\exp(-\lambda y_i)}{\lambda}\quad (*)$

the simulation of λ can be demarginalised into the simulation of

$\pi (\lambda,\mathbf{z})\propto \lambda \exp(-\lambda) \prod_{i=1}^n \exp(-\lambda z_i) \mathbb{I}(z_i\le y_i)$

where z is a latent (and artificial) variable. This means a Gibbs sampler simulating λ given z and z given λ can produce an outcome from the target (*). Interestingly, another completion is to consider that the z_i‘s are U(0,y_i) and to see the quantity

$\pi(\lambda,\mathbf{z}) \propto \lambda \exp(-\lambda) \prod_{i=1}^n \exp(-\lambda z_i) \mathbb{I}(z_i\le y_i)$

as an unbiased estimator of the target. What’s quite intriguing is that the quantity remains the same but with different motivations: (a) demarginalisation versus unbiasedness and (b) z_i ∼ Exp(λ) versus z_i ∼ U(0,y_i). The stationary distribution is the same, as shown by the graph below, the core distributions are [formally] the same, … but the reasoning deeply differs.
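The first (demarginalisation) reading can be sketched as a two-step Gibbs sampler: from the completion, λ given z is Gamma(2, 1+Σz_i), while each z_i given λ is an Exp(λ) truncated to (0,y_i), which can be simulated by inversion. A minimal illustration (not the post’s own code, and with hypothetical simulated y_i’s, since no data is given):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.exponential(scale=2.0, size=20)  # hypothetical data: the y_i's are not given in the post

def gibbs(y, n_iter=5000, lam0=1.0, rng=rng):
    """Gibbs sampler for pi(lambda, z) ∝ lambda exp(-lambda) prod_i exp(-lambda z_i) I(z_i <= y_i)."""
    n = len(y)
    lam = lam0
    lams = np.empty(n_iter)
    for t in range(n_iter):
        # z_i | lambda: Exp(lambda) truncated to (0, y_i), simulated by inverse cdf
        u = rng.uniform(size=n)
        z = -np.log1p(-u * (1.0 - np.exp(-lam * y))) / lam
        # lambda | z: Gamma(shape=2, rate=1 + sum z_i), since the completed density
        # is proportional to lambda * exp(-lambda * (1 + sum z_i)) as a function of lambda
        lam = rng.gamma(shape=2.0, scale=1.0 / (1.0 + z.sum()))
        lams[t] = lam
    return lams

samples = gibbs(y)
```

Integrating the completion over each z_i recovers the (1−exp(−λy_i))/λ terms of (*), so the λ-marginal of the chain is the original target.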

Obviously, since unbiased estimators of the likelihood can be justified by auxiliary variable arguments, this is not in fine a big surprise. Still, I had not thought of the analogy between demarginalisation and unbiased likelihood estimation previously.
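The second (unbiasedness) reading leads to a pseudo-marginal Metropolis–Hastings scheme: with z_i ∼ U(0,y_i), the quantity λ exp(−λ) ∏ exp(−λz_i) is unbiased for the target up to the fixed constant ∏ y_i, and plugging a fresh estimate into the acceptance ratio leaves the target invariant. A sketch under the same assumptions (simulated y_i’s, a log-normal random walk proposal of my own choosing):

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.exponential(scale=2.0, size=20)  # hypothetical data, as the y_i's are not given

def target_estimate(lam, y, rng):
    # z_i ~ U(0, y_i) gives E[prod exp(-lam z_i)] = prod (1 - exp(-lam y_i)) / (lam y_i),
    # so this is unbiased for the target (*) up to the constant prod(y_i)
    z = rng.uniform(0.0, y)
    return lam * np.exp(-lam - lam * z.sum())

def pm_mh(y, n_iter=5000, lam0=0.5, step=0.5, rng=rng):
    """Pseudo-marginal MH: the noisy estimate replaces the exact target density."""
    lam, est = lam0, target_estimate(lam0, y, rng)
    out = np.empty(n_iter)
    for t in range(n_iter):
        prop = lam * np.exp(step * rng.normal())  # log-normal random walk on lambda > 0
        est_prop = target_estimate(prop, y, rng)
        # pseudo-marginal acceptance ratio; prop/lam is the log-normal proposal correction
        if rng.uniform() < (est_prop / est) * (prop / lam):
            lam, est = prop, est_prop
        out[t] = lam
    return out

chain = pm_mh(y)
```

The key pseudo-marginal move is to recycle the current estimate `est` until the next acceptance, rather than refreshing it at every step, which is what makes the noisy ratio exact in the stationary sense.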

## Bangalore workshop [ಬೆಂಗಳೂರು ಕಾರ್ಯಾಗಾರ]

Posted in pictures, Running, Statistics, Travel, University life, Wines on July 30, 2014 by xi'an

First day at the Indo-French Centre for Applied Mathematics and the get-together (or speed-dating!) workshop. The campus of the Indian Institute of Science of Bangalore where we all stay is very pleasant, with plenty of greenery in the middle of a very busy city. Plus, being at about 1000m means the temperature remains tolerable for me, to the point of letting me run in the morning. Plus, staying in a guest house on the campus also means genuine and enjoyable south Indian food.

The workshop is a mix of statisticians and of mathematicians from the neurosciences, from both India and France, and we are few enough to have a lot of opportunities for discussion and potential joint projects. I gave the first talk this morning (hence a fairly short run!) on ABC model choice with random forests and, given the mixed audience, may have launched too quickly into the technicalities of the forests. Even though I think I kept the statisticians on-board for most of the talk. While the mathematical biology talks mostly went over my head (esp. when I could not resist dozing!), I enjoyed the presentation by Francis Bach of a fast stochastic gradient algorithm, where the stochastic average is only updated one term at a time, for apparently much faster convergence results. This is related to joint work with Éric Moulines that both Éric and Francis presented in the past month. And makes me wonder at the intuition behind the major speed-up. Shrinkage to the mean, maybe?