Archive for poster

just in case your summer of British conferences is not yet fully booked…

Posted in Statistics on May 11, 2018 by xi'an

Winter sports [poster]

Posted in Mountains, pictures, Travel on March 11, 2018 by xi'an

Der Kunst ihre Freiheit [and the scare of the nude]

Posted in Books, pictures, Travel on February 17, 2018 by xi'an

A poster campaign advertising several exhibits of modernist painters in Vienna, including major paintings by Egon Schiele, has met with astonishing censorship from the transport companies posting these advertisements. (And from Facebook, whose AIs are visibly too artificial and none too intelligent to recognise well-known works of art.) Not very surprising, given the well-known conservatism of advertising units in transportation companies, but nonetheless appalling, especially when setting these posters against the truly indecent ones advertising, e.g., gas-guzzling machines and junk food.

BAYSM’18 [British summer conference series]

Posted in Kids, pictures, Statistics, Travel, University life on January 22, 2018 by xi'an

G²S³18, Breckenridge, CO, June 17-30, 2018

Posted in Statistics on October 3, 2017 by xi'an

keine Angst vor Niemand…

Posted in Statistics on April 1, 2017 by xi'an

mixtures are slices of an orange

Posted in Kids, R, Statistics on January 11, 2016 by xi'an

After presenting this work in both London and Lenzerheide, Kaniav Kamary, Kate Lee and I arXived and submitted our paper on a new parametrisation of location-scale mixtures. Although it took a long while to finalise the paper, given that we came up with the original and central idea about a year ago, I remain quite excited by this new representation of mixtures, because the use of a global location-scale (hyper-)parameter doubling as the mean and standard deviation of the mixture itself implies that all the other parameters of this mixture model [besides the weights] belong to the intersection of a unit hypersphere with a hyperplane. [Hence the title above, which I regretted not using for the poster at MCMskv!]

[Figure: fitted density for the galaxy data after 500 iterations.]

This realisation that using a (meaningful) hyperparameter (μ,σ) leads to a compact parameter space for the component parameters is important for inference in such mixture models, in that the hyperparameter (μ,σ) is easily estimated from the entire sample, while the other parameters can be studied using a non-informative prior like the Uniform prior on the ensuing compact space. This non-informative prior for mixtures is something I have been seeking for many years, hence my on-going excitement! In the mid-1990s, Kerrie Mengersen and I looked at a Russian-doll type parametrisation that used the "first" component as the location-scale reference for the entire mixture, expressing each new component as a local perturbation of the previous one. While this is a similar idea to the current one, it falls short of leading to a natural non-informative prior because of the lack of compactness of the parameter space, forcing us to devise a proper prior on the variance that was a mixture of a Uniform U(0,1) and of an inverse Uniform 1/U(0,1). Here, fixing both mean and variance (or even just the variance) binds the mixture parameters to an ellipse conditional on the weights, a space that can be turned into the unit sphere via a natural reparameterisation. Furthermore, the intersection with the hyperplane admits a closed-form spherical reparameterisation. Yay!
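The sphere-and-hyperplane constraint can be checked numerically. Here is a small sketch of my own (not code from the paper), using only the standard moment identities for a Gaussian mixture: with weights w, component means μ_i and standard deviations σ_i, rescale by the global (μ,σ) to get coordinates a_i = √w_i (μ_i−μ)/σ and b_i = √w_i σ_i/σ.

```python
import numpy as np

# Numerical check (my own sketch, not the authors' code): once the global
# mean mu and variance sigma^2 of the mixture are fixed, the rescaled
# component parameters (a, b) lie on the intersection of the unit
# hypersphere with a hyperplane.
rng = np.random.default_rng(0)
k = 3
w = rng.dirichlet(np.ones(k))            # mixture weights
mu_i = rng.normal(size=k)                # component means
sigma_i = rng.uniform(0.5, 2.0, size=k)  # component standard deviations

# global moments of the mixture
mu = np.sum(w * mu_i)
sigma = np.sqrt(np.sum(w * (sigma_i**2 + (mu_i - mu) ** 2)))

a = np.sqrt(w) * (mu_i - mu) / sigma
b = np.sqrt(w) * sigma_i / sigma

print(np.sum(a**2) + np.sum(b**2))  # unit hypersphere: 1 up to rounding
print(np.sum(np.sqrt(w) * a))       # hyperplane: 0 up to rounding
```

Whatever the weights and component parameters, the rescaled coordinates land on this compact set, which is what makes a default Uniform prior possible.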

While I do not wish to get into the debate about the [non-]existence of "non-informative" priors at this stage, I think being able to use the invariant reference prior π(μ,σ)=1/σ is quite neat here, because the inference on the mixture parameters should be location- and scale-equivariant. The choice of the prior on the remaining parameters is of lesser importance, the Uniform over the compact space being one example, although we did not study this impact in depth, being satisfied with the output produced by the default (Uniform) choice.
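One elementary way to simulate from the Uniform distribution on this compact set — an illustrative sketch of my own, not necessarily the construction used in the paper — is to project a standard Gaussian vector onto the hyperplane and renormalise onto the sphere:

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): a Uniform draw on
# {(a, b): sum a_i^2 + sum b_i^2 = 1, sum sqrt(w_i) a_i = 0}, obtained by
# projecting a standard Gaussian onto the hyperplane through the origin
# and normalising onto the unit sphere.
rng = np.random.default_rng(1)
k = 3
w = rng.dirichlet(np.ones(k))

# unit normal vector of the hyperplane in the 2k-dimensional (a, b) space
n = np.concatenate([np.sqrt(w), np.zeros(k)])
n /= np.linalg.norm(n)

z = rng.normal(size=2 * k)
z -= np.dot(z, n) * n    # project onto the hyperplane
z /= np.linalg.norm(z)   # normalise onto the unit sphere

a, b = z[:k], z[k:]
print(np.sum(a**2) + np.sum(b**2))  # 1 up to rounding
print(np.sum(np.sqrt(w) * a))       # 0 up to rounding
```

(Since the b-coordinates stand for rescaled standard deviations, a draw intended as component parameters would further restrict b to positive values, e.g. by taking absolute values.)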

From a computational perspective, the new parametrisation can easily be turned back into the old one, hence leads to a closed-form likelihood. This implies that a Metropolis-within-Gibbs strategy can be easily implemented, as we did in the derived Ultimixt R package. (I was not involved in the programming, solely suggesting the name Ultimixt, from "ultimate mixture parametrisation", a former title that we eventually dropped for the paper.)
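To illustrate this closed-form likelihood, here is a minimal sketch (with hypothetical helper names, not the Ultimixt API) of the inverse map from the new parametrisation back to the usual component parameters of a Gaussian mixture:

```python
import numpy as np

# Sketch (hypothetical helper names, not the Ultimixt API): map the global
# (mu, sigma), the weights w and the spherical coordinates (a, b) back to
# the usual component parameters, yielding the closed-form log-likelihood.
def to_components(mu, sigma, w, a, b):
    mu_i = mu + sigma * a / np.sqrt(w)  # component means
    sigma_i = sigma * b / np.sqrt(w)    # component standard deviations
    return mu_i, sigma_i

def log_likelihood(x, w, mu_i, sigma_i):
    # log prod_j sum_i w_i N(x_j | mu_i, sigma_i^2)
    dens = w * np.exp(-0.5 * ((x[:, None] - mu_i) / sigma_i) ** 2) / (
        np.sqrt(2 * np.pi) * sigma_i)
    return np.log(dens.sum(axis=1)).sum()

# toy values on the sphere-hyperplane intersection
mu, sigma = 0.0, 1.0
w = np.array([0.3, 0.7])
a = np.array([0.5, -0.5 * np.sqrt(0.3 / 0.7)])       # sum sqrt(w_i) a_i = 0
b = np.full(2, np.sqrt((1.0 - np.sum(a**2)) / 2.0))  # sum a^2 + b^2 = 1
mu_i, sigma_i = to_components(mu, sigma, w, a, b)

# sanity check: the global moments are recovered exactly
print(np.sum(w * mu_i))                             # mu
print(np.sum(w * (sigma_i**2 + (mu_i - mu) ** 2)))  # sigma^2

x = np.array([-1.0, 0.2, 1.5])
print(log_likelihood(x, w, mu_i, sigma_i))
```

Since the map is explicit in both directions, any Metropolis-within-Gibbs move on (μ,σ) or on the spherical coordinates can evaluate the likelihood directly.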

Discussing the paper at MCMskv was very helpful, in that I got very positive feedback about the approach and stronger arguments to justify it and its appeal, and was led to think about several extensions outside location-scale families, if not in higher dimensions, which remain a practical challenge (in the sense of designing a parametrisation of the covariance matrices in terms of the global covariance matrix).