## Archive for poster

## Winter sports [poster]

Posted in Mountains, pictures, Travel with tags 1950's, advertising, Bernard Villemot, French Alps, poster, winter sports on March 11, 2018 by xi'an

## Der Kunst ihre Freiheit [and the scare of the nude]

Posted in Books, pictures, Travel with tags #DerKunstihreFreiheit, #ToArtItsFreedom in English, advertising, censorship, Egon Schiele, Facebook, London, London Tube, modernism, nude, poster, Vienna, Wien, Wiener Moderne 2018 on February 17, 2018 by xi'an

**A** poster campaign advertising several exhibits of modernist painters in Vienna, including major paintings by Egon Schiele, has met with astonishing censorship from the transport companies posting these advertisements. (And from Facebook, whose AIs are visibly too artificial and none too intelligent to recognise well-known works of art.) Not very surprising, given the well-known conservatism of the advertising units of transportation companies, but nonetheless appalling, especially when set against the truly indecent posters advertising, e.g., gas-guzzling machines and junk food.

## BAYSM’18 [British summer conference series]

Posted in Kids, pictures, Statistics, Travel, University life with tags BAYSM 2018, Coventry, ISBA, ISBA 2018, jISBA, poster, University of Warwick on January 22, 2018 by xi'an

## G²S³18, Breckenridge, CO, June 17-30, 2018

Posted in Statistics with tags Breckenridge, Colorado, computational statistics, Edinburgh, Gene Golub, inverse problems, ISBA 2018, MCqMC 2018, Monte Carlo Statistical Methods, poster, Rennes, SIAM, summer school on October 3, 2017 by xi'an

## keine Angst vor Niemand…

Posted in Statistics with tags Berlin, Berlin wall, French elections, Germany, morning run, no fear, poster, Tocotronic on April 1, 2017 by xi'an

## mixtures are slices of an orange

Posted in Kids, R, Statistics with tags CFE 2015, Gaussian mixture, hyperparameter, improper priors, invariance, Lenzerheide, location-scale parameterisation, London, MCMskv, Metropolis-Hastings algorithm, mixtures of distributions, non-informative priors, poster, R, reference priors, Switzerland, Ultimixt on January 11, 2016 by xi'an

**A**fter presenting this work in both London and Lenzerheide, Kaniav Kamary, Kate Lee and I arXived and submitted our paper on a new parametrisation of location-scale mixtures. Although it took a long while to finalise the paper, given that we came up with the original and central idea about a year ago, I remain quite excited by this new representation of mixtures, because the use of a global location-scale (hyper-)parameter, doubling as the mean and standard deviation of the mixture itself, implies that all the other parameters of this mixture model [besides the weights] belong to the intersection of a unit hypersphere with a hyperplane. [Hence the title above, which I regretted not using for the poster at MCMskv!]

This realisation that using a (meaningful) hyperparameter (μ,σ) leads to a compact parameter space for the component parameters is important for inference in such mixture models, in that the hyperparameter (μ,σ) is easily estimated from the entire sample, while the other parameters can be studied using a non-informative prior like the Uniform prior on the ensuing compact space. Such a non-informative prior for mixtures is something I have been seeking for many years, hence my on-going excitement! In the mid-1990s, Kerrie Mengersen and I looked at a Russian-doll type parametrisation that used the "first" component as the location-scale reference for the entire mixture, expressing each new component as a local perturbation of the previous one.
While this is a similar idea to the current one, it falls short of leading to a natural non-informative prior, forcing us to devise a proper prior on the variance that was a mixture of a Uniform U(0,1) and of an inverse Uniform 1/U(0,1), because of the lack of compactness of the parameter space. Here, fixing both mean and variance (or even just the variance) binds the mixture parameters to an ellipse conditional on the weights, a space that can be turned into the unit sphere via a natural reparameterisation. Furthermore, the intersection with the hyperplane leads to a closed-form spherical reparameterisation. Yay!
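The sphere-and-hyperplane constraint can be checked numerically. Below is a minimal sketch (in Python rather than the R of Ultimixt, and with notation of my own choosing, not necessarily the paper's): standardising the component means and standard deviations by the global (μ,σ) and weighting by the square roots of the weights lands the resulting coordinates on the unit hypersphere, intersected with a hyperplane.

```python
import numpy as np

rng = np.random.default_rng(42)
k = 3
w = rng.dirichlet(np.ones(k))        # mixture weights
mu_k = rng.normal(0.0, 2.0, k)       # component means
sd_k = rng.uniform(0.5, 2.0, k)      # component standard deviations

# global mean and variance of the mixture (standard moment identities)
mu = np.sum(w * mu_k)
sigma = np.sqrt(np.sum(w * (sd_k**2 + mu_k**2)) - mu**2)

# standardised component coordinates
gamma = np.sqrt(w) * (mu_k - mu) / sigma
eta = np.sqrt(w) * sd_k / sigma

# unit hypersphere: sum of squares equals one
print(np.sum(gamma**2 + eta**2))
# hyperplane: the gamma-part is orthogonal to the sqrt(w) direction
print(np.sum(np.sqrt(w) * gamma))
```

Whatever the mixture drawn, the first print is 1 and the second 0 (up to rounding), which is exactly the compactness exploited by the parametrisation.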

While I do not wish to get into the debate about the [non-]existence of "non-informative" priors at this stage, I think being able to use the invariant reference prior π(μ,σ)=1/σ is quite neat here, because the inference on the mixture parameters should be location- and scale-equivariant. The choice of the prior on the remaining parameters is of lesser importance, the Uniform over the compact space being one example, although we did not study this impact in depth, being satisfied with the outputs produced from the default (Uniform) choice.
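Since the compact is (conditional on the weights) the intersection of the unit sphere with a hyperplane through the origin, the Uniform prior on it is easy to simulate from. A sketch, again in Python and under my own notation rather than the paper's: draw a standard Gaussian vector, project its mean-part onto the hyperplane, and normalise.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 3
w = rng.dirichlet(np.ones(k))   # weights, held fixed here
u = np.sqrt(w)                  # unit normal of the hyperplane (sum w = 1)

def runif_compact(rng, u, k):
    """Draw (gamma, eta) uniformly on the unit sphere of R^{2k}
    intersected with the hyperplane u . gamma = 0."""
    z = rng.standard_normal(2 * k)
    g, e = z[:k], z[k:]
    g = g - (g @ u) * u          # project the gamma-part onto the hyperplane
    v = np.concatenate([g, e])
    return v / np.linalg.norm(v) # normalise back onto the sphere

gamma_eta = runif_compact(rng, u, k)
gamma, eta = gamma_eta[:k], gamma_eta[k:]
print(np.sum(gamma_eta**2))     # on the sphere
print(gamma @ u)                # on the hyperplane
```

The projected Gaussian is isotropic within the (2k−1)-dimensional subspace, so the normalised draw is indeed Uniform on the intersection sphere.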

From a computational perspective, the new parametrisation can easily be turned back into the old one, hence leads to a closed-form likelihood. This implies that a Metropolis-within-Gibbs strategy can be easily implemented, as we did in the derived Ultimixt R package. (I was not involved in the programming, solely suggesting the name *Ultimixt*, from "ultimate mixture parametrisation", a former title that we eventually dropped for the paper.)
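To illustrate this closed form (a Python sketch, not the Ultimixt code, and with an inverse map written in my own notation): going from the standardised coordinates back to component means and standard deviations is a one-line inversion, after which the mixture log-likelihood is available directly.

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.array([0.4, 0.6])
mu_k = np.array([-1.0, 1.0])    # component means
sd_k = np.array([0.8, 1.2])     # component standard deviations

# forward map: natural parameters -> (mu, sigma, gamma, eta)
mu = np.sum(w * mu_k)
sigma = np.sqrt(np.sum(w * (sd_k**2 + mu_k**2)) - mu**2)
gamma = np.sqrt(w) * (mu_k - mu) / sigma
eta = np.sqrt(w) * sd_k / sigma

# inverse map: back to the old parametrisation
mu_k_back = mu + sigma * gamma / np.sqrt(w)
sd_k_back = sigma * eta / np.sqrt(w)

# closed-form mixture log-likelihood, evaluated via the inverse map
x = rng.normal(size=30)
z = (x[:, None] - mu_k_back[None, :]) / sd_k_back[None, :]
logphi = -0.5 * z**2 - np.log(sd_k_back[None, :]) - 0.5 * np.log(2 * np.pi)
loglik = np.sum(np.log(np.exp(logphi) @ w))
print(mu_k_back, sd_k_back)     # recovers the original components exactly
print(loglik)
```

Since each Metropolis-within-Gibbs move only requires evaluating this likelihood at the proposed (μ,σ,γ,η), no completion or latent allocation step is needed.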

Discussing the paper at MCMskv was very helpful in that I got very positive feedback about the approach and stronger arguments to justify it and its appeal, and was led to think about several extensions outside location-scale families, if not in higher dimensions, which remain a practical challenge (in the sense of designing a parametrisation of the covariance matrices in terms of the global covariance matrix).

## MCMskv #3 [town with a view]

Posted in Statistics with tags ABC, bootstrap, doubly intractable problems, exact Monte Carlo, Holy Grail, Lenzerheide, likelihood-free methods, MCMskv, Metropolis-Hastings algorithm, Monty Python, poster, SIR, Switzerland, unbiasedness on January 8, 2016 by xi'an

**T**hird day at MCMskv, where I took advantage of the gap left by the elimination of the Tweedie Race [second time in a row!] to complete and submit our mixture paper, despite the nice weather. The rest of the day was quite busy, with David Dunson giving a plenary talk on various approaches to approximate MCMC solutions, with a broad overview of the potential methods and of the need for better solutions. (On a personal basis, great line from David: "five minutes or four minutes?". It almost beat David's question on the previous day, about the weight of a finch, which sounded suspiciously close to the question about the air-speed velocity of an unladen swallow. I was quite surprised the speaker did not reply with the Arthurian "An African or a European finch?") In particular, I appreciated the notion that some problems call for a reduction in the number of parameters, rather than in the number of observations. At which point I wrote down "multiscale approximations required" in my black pad, a requirement David stated a few minutes later. (The talk conditions were also much better than during Michael's talk, in that the man standing between the screen and myself was David rather than the cameraman! Joking apart, it did not really prevent me from reading the slides, except for most of the jokes in small print!)

The first session of the morning involved a talk by Marc Suchard, who used continued fractions to find a closed-form likelihood for the SIR epidemiology model (I love continued fractions!), and a talk by Donatello Telesca, who studied non-local priors to build a regression tree. While I am somewhat skeptical about non-local testing priors, I found this approach to the construction of a tree quite interesting! In the afternoon, I obviously went to the intractable-likelihood session, with talks by Chris Oates on a control variate method for doubly intractable models, Brenda Vo on mixing sequential ABC with the Bayesian bootstrap, and Gael Martin on our consistency paper. I was not aware of the Bayesian bootstrap proposal and need to read through the paper, as I fail to see the appeal of the bootstrap part! I later attended a session on exact Monte Carlo methods that was pleasantly homogeneous, with talks by Paul Jenkins (Warwick) on the exact simulation of the Wright-Fisher diffusion, Anthony Lee (Warwick) on designing perfect samplers for chains with atoms, and Chang-han Rhee and Sebastian Vollmer on extensions of the Glynn-Rhee debiasing technique that I previously discussed on the blog. (Once again, I regretted having to make a choice between the parallel sessions!)

The poster session (after a quick home-made pasta dish with an exceptional Valpolicella!) was almost universally great, with just the right number of posters to go around all of them in the allotted time, including in particular the Breaking News! posters of Giacomo Zanella (Warwick), Beka Steorts and Alexander Terenin. A high-quality session that made me regret not touring the previous one, due to my own poster presentation.