Archive for trail running
Sylvia Richardson gave a great talk yesterday on clustering applied to variable selection, which first raised [in me] my usual worry about the lack of a background model for clustering. But the way she used this notion meant there was an infinite Dirichlet process mixture model behind it. This is quite novel [at least for me!] in that it addresses the covariates and not the observations themselves. I still wonder at the meaning of the clusters as, if I understood properly, the dependent variable is not involved in the clustering. Check her R package PReMiuM for a practical implementation of the approach. Later, Adeline Samson showed us the results of using pMCMC versus particle Gibbs for diffusion processes, where (a) pMCMC behaved much worse than particle Gibbs and (b) EM required very few particles and Metropolis-Hastings steps to achieve convergence, when compared with posterior approximations.
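The infinite Dirichlet process mixture behind such an approach is often pictured through the stick-breaking construction. As a minimal, purely illustrative sketch (in Python, and not how her PReMiuM package proceeds), here is a draw of the mixture weights of a truncated Dirichlet process:

```python
import random

def stick_breaking(alpha, truncation=50, rng=random):
    """Draw mixture weights from a (truncated) Dirichlet process via
    stick breaking: w_k = v_k * prod_{j<k} (1 - v_j), v_k ~ Beta(1, alpha)."""
    weights, remaining = [], 1.0
    for _ in range(truncation):
        v = rng.betavariate(1.0, alpha)
        weights.append(remaining * v)
        remaining *= 1.0 - v  # length of stick left to break
    return weights

random.seed(1)
w = stick_breaking(alpha=2.0)
print(sum(w))  # close to 1 for a large enough truncation level
```

The weights decay geometrically in expectation, which is why a finite truncation level already captures essentially all the mass and makes posterior simulation over the clusters feasible.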
Today Pierre Druilhet explained to the audience of the summer school his measure theoretic approach [I discussed a while ago] to the limit of proper priors via q-vague convergence, with the paradoxical phenomenon that a Be(n⁻¹,n⁻¹) prior converges to an equally weighted sum of two Dirac masses at 0 and 1 when the parameter space is [0,1], but to Haldane’s prior when the space is (0,1)! He also explained why the Jeffreys-Lindley paradox vanishes when considering different measures [with an illustration that came from my Statistica Sinica 1993 paper]. Pierre concluded with the above opposition between two Bayesian paradigms, a [sort of] tale of two sigma [fields]! Not that I necessarily agree with the first paradigm that priors are supposed to have generated the actual parameter. If only because it mechanistically excludes all improper priors…
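The concentration at the endpoints is easy to check numerically: the mass a Be(n⁻¹,n⁻¹) distribution keeps in the interior of [0,1] vanishes as n grows, the rest piling up near 0 and 1. A quick standard-library sketch (mine, not from Pierre's talk), integrating the density away from the endpoints where it is bounded:

```python
import math

def beta_interior_mass(eps, delta=0.01, steps=20000):
    """Mass a Be(eps, eps) distribution puts on [delta, 1-delta],
    by trapezoidal quadrature (the density is bounded there)."""
    norm = math.gamma(eps) ** 2 / math.gamma(2 * eps)  # B(eps, eps)
    h = (1 - 2 * delta) / steps
    total = 0.0
    for i in range(steps + 1):
        x = delta + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * (x * (1 - x)) ** (eps - 1)
    return total * h / norm

for n in (1, 10, 100, 1000):
    print(f"n={n:5d}  P(0.01 < X < 0.99) = {beta_interior_mass(1 / n):.4f}")
```

For n=1 this is the uniform distribution (interior mass 0.98), while by n=1000 almost all the mass sits within 0.01 of the endpoints, in line with the two-Dirac limit on [0,1].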
Darren Wilkinson talked about yeast, which is orders of magnitude more exciting than it sounds, because this is Bayesian big data analysis in action! With significant (and hence impressive) results based on stochastic dynamic models. And massive variable selection techniques. Scala, Haskell, Frege, and OCaml were among the [functional] languages he mentioned, some of which I had never heard of before! And Daniel Rudolf concluded the [intense] second day of this Bayesian week at CIRM with a description of his convergence results for (rather controlled) noisy MCMC algorithms.
As posted earlier, this week is a Bayesian week at CIRM, the French mathematical society centre near Marseilles. Where we meet with about 80 researchers and students interested in Bayesian statistics, from all possible sides. (And possibly in climbing in the Calanques and trail running, if not swimming at this time of year…) With Jean-Michel we will be teaching a short course on Bayesian computational methods, namely ABC and MCMC, over the first two days… Here are my slides for the MCMC side:
As should be obvious from the first slides, this is a very introductory course that should only appeal to students with no previous exposure. The remainder of the week will see advanced talks on state-of-the-art Bayesian computational methods, including some on noisy MCMC and on the mysterious expectation-propagation technique.
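For readers with no previous exposure, the flavour of the algorithms in such an introductory course can be conveyed by the most basic of them, a random-walk Metropolis sampler. A minimal sketch (targetting a standard normal for illustration, and not taken from the course slides):

```python
import math
import random

def rw_metropolis(logpdf, x0, scale, n_iter, rng=random):
    """Random-walk Metropolis: propose x' = x + scale * N(0,1) noise,
    accept with probability min(1, pi(x') / pi(x))."""
    chain, x, lp = [], x0, logpdf(x0)
    accepted = 0
    for _ in range(n_iter):
        prop = x + scale * rng.gauss(0.0, 1.0)
        lp_prop = logpdf(prop)
        if math.log(rng.random()) < lp_prop - lp:  # Metropolis acceptance
            x, lp = prop, lp_prop
            accepted += 1
        chain.append(x)
    return chain, accepted / n_iter

# target: standard normal, known only up to a constant
random.seed(42)
chain, rate = rw_metropolis(lambda x: -0.5 * x * x,
                            x0=3.0, scale=1.0, n_iter=20000)
mean = sum(chain) / len(chain)
print(f"chain mean {mean:.2f}, acceptance rate {rate:.2f}")
```

Note that the target density only needs to be known up to its normalising constant, since that constant cancels in the acceptance ratio, which is the very reason the method applies to Bayesian posteriors.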
I received a (mass) email from Taylor & Francis about creating a few cartoons related to recent papers… As in the example above about the foot strike of Kilian Jornet. With a typo on Font-Romeu. Apart from the authors themselves (and maybe some close relatives!), I have trouble seeing the point of this offer, as cartoons are unlikely to attract academic readers interested in the contents of the paper.
This weekend, I ran another race (yes, yet another running post!) on my other “home turf” (since Malakoff is also my training ground!), Le Parc de Sceaux. This was the 30th Cross de la Ville de Sceaux (lagging one year behind Malakoff!) and there were many more runners on the starting line than last week (500 vs. 127), some of them clearly good. (For some unfathomable reason, there are women-only (3.1k) and men-only (7.2k) races in this event.) Thanks to a strong wind, it was deadly cold if bright and sunny before the start (after that it did not matter). I managed to stay with a V2 runner for most of the race, except at the very end when he pushed harder and gained a dozen metres. (It did not matter so much as we ranked 5th and 6th, almost two minutes behind the first V2…) This was not much of a cross-country race in that there was hardly any mud on the track and only moderate slopes, just a few narrow passages through which runners had to squeeze on the first lap, not so much on the second. My time is worse than last week’s, meaning I miss longer-distance training sessions (which are not compensated by longer bike rides!). But this was enjoyable nonetheless!