Archive for “unknown number of components”

Bill’s 80th!!!

Posted in pictures, Statistics, Travel, University life on April 17, 2022 by xi'an

“It was the best of times,
it was the worst of times”
[Dickens’ Tale of Two Cities (which plays a role in my friendship with Bill!)]

My flight to NYC last week was uneventful and rather fast, and I worked rather well, even though the seat in front of me was reclined to the max for the entire flight! (Still got glimpses of Aline and of Deepwater Horizon from my neighbours.) Taking a very early flight from Paris was great for making a full day once in NYC, but it “forced” me to take a taxi, which almost ended in disaster since the Uber driver did not show up. At all. And never replied to my messages. Fortunately trains were running, I was also running despite the broken rib, and I arrived at the airport some time before access was closed, grateful for the low activity that day. I also had another worrying moment at the US border control in JFK, as I ended up in a back office of the Border Police after the machine could not capture my fingerprints. And another stop at the luggage control, as my lack of luggage sounded suspicious!

The conference was delightful in celebrating Bill’s career and kindness (tinted with the most gentle irony!). Among the stories told at the banquet, I was surprised to learn of Bill’s jazz career side, as I had never heard him play the piano or the clarinet! Even though we had chatted about music and literature on many occasions. Since our meeting in 1989… The (scientific side of the) conference included many talks around shrinkage, from loss estimation to predictive estimation, reminding me of the roaring 70’s and 80’s [James-Stein wise]. And demonstrating the impact of Bill’s work throughout this era (incl. on my own PhD thesis). I started wondering about the (Bayesian) use of the loss estimate, though, as I pictured myself facing two point estimators, each attached to an estimator of its loss: it did not seem a particularly good idea to systematically pick the one with the smallest estimate (and Jim Berger confirmed this feeling in a later discussion).
Among the talks on topics less familiar to me, I discovered the work of Genevera Allen on inferring massive networks of neuron connections under sparse information. And that of Emma Jingfei Zhang, equally centred on network inference, with applications to brain connectivity.

In a somewhat remote connection with Bill’s work (and our joint and hilarious assessment of Pitman closeness), I presented part of our current joint work with Adrien Hairault and Judith Rousseau on inferring the number of components in a mixture by Bayes factors when the alternative is an infinite mixture (i.e., a Dirichlet process mixture). Of which Ruobin Gong gave a terrific discussion. (With a connection to her current work on Sense and Sensitivity.)

I was most sorry to miss Larry Wasserman’s and Rob Strawderman’s talks, having to rush back to the airport, the more so because I am sure Larry’s talk would have brought a new light on causality (possibly equating it with tequila and mixtures!). The flight back was uneventful, the plane rather empty, and I slept most of the time. Overall, it was most wonderful to re-connect with so many friends. Most of whom I had not seen for ages, even before the pandemic. And to meet new friends. (Nothing original in the reported feeling, just saying that the break in conferences and workshops was primarily a hatchet job on social relations and friendships.)

a book and two chapters on mixtures

Posted in Books, Statistics, University life on January 8, 2019 by xi'an

The Handbook of Mixture Analysis is now out! After a few years of planning, contacts, meetings, discussions about notations, interactions with authors, further interactions with late authors, repeated rounds of editing towards homogenisation, and a final professional edit last summer, this collection of nineteen chapters involved thirty-five contributors. I am grateful to all contributors to this piece of work, especially to Sylvia Frühwirth-Schnatter for being a driving force in the project and for achieving a much higher degree of homogeneity in the book than I expected. I would also like to thank Rob Calver and Lara Spieker of CRC Press for their boundless patience through the many missed deadlines and their overall support.

Two chapters which I co-authored are now available as arXived documents:

5. Gilles Celeux, Kaniav Kamary, Gertraud Malsiner-Walli, Jean-Michel Marin, and Christian P. Robert, Computational Solutions for Bayesian Inference in Mixture Models
7. Gilles Celeux, Sylvia Frühwirth-Schnatter, and Christian P. Robert, Model Selection for Mixture Models – Perspectives and Strategies

along with other chapters

1. Peter Green, Introduction to Finite Mixtures
8. Bettina Grün, Model-based Clustering
12. Isobel Claire Gormley and Sylvia Frühwirth-Schnatter, Mixtures of Experts Models
13. Sylvia Kaufmann, Hidden Markov Models in Time Series, with Applications in Economics
14. Elisabeth Gassiat, Mixtures of Nonparametric Components and Hidden Markov Models
19. Michael A. Kuhn and Eric D. Feigelson, Applications in Astronomy

repulsive mixtures

Posted in Books, Statistics on April 10, 2017 by xi'an

Fangzheng Xie and Yanxun Xu arXived today a paper on Bayesian repulsive modelling for mixtures. Not that Bayesian modelling is repulsive in any psychological sense, but rather that the components of the mixture are repulsive towards one another. The device towards this repulsiveness is to add a penalty term to the original prior so that close means are penalised. (In the spirit of the sugar loaf with water drops represented on the cover of Bayesian Choice, which we used in our pinball sampler, repulsiveness acting there on the particles of a simulated sample and not on components.) Which embeds a prior assumption that close covariance matrices are of lesser importance. One question I had was why empty components are not excluded as well, but this does not make much sense in the Dirichlet process formulation of the current paper. And in the finite mixture version the Dirichlet prior on the weights has coefficients less than one.
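To make the device concrete, here is a minimal numerical sketch (mine, not the paper’s: the penalty form g(d) = d²/(d² + τ), the flat base prior, and the parameter τ are all illustrative assumptions) of a prior whose log density picks up a repulsive factor that diverges to −∞ as two component means coalesce:

```python
import numpy as np

def repulsive_log_prior(means, base_logpdf, tau=1.0):
    """Unnormalised log prior: base prior on each component mean, times
    a pairwise repulsive factor that vanishes as any two means coalesce.
    The penalty g(d) = d^2 / (d^2 + tau) is an illustrative choice,
    not necessarily the form used in the paper."""
    means = np.asarray(means, dtype=float)
    logp = sum(base_logpdf(m) for m in means)
    K = len(means)
    for i in range(K):
        for j in range(i + 1, K):
            d2 = np.sum((means[i] - means[j]) ** 2)
            logp += np.log(d2 / (d2 + tau))  # -> -inf as the means merge
    return logp

flat = lambda m: 0.0  # flat base prior, purely for illustration
close = repulsive_log_prior([0.0, 0.1], flat)   # nearly merged components
far = repulsive_log_prior([0.0, 3.0], flat)     # well-separated components
```

Under such a prior, well-separated configurations (far) receive more mass than near-merged ones (close), which is the whole point of the repulsion.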

The paper establishes consistency results for such repulsive priors, both for estimating the distribution itself and the number of components, K, under a collection of assumptions on the distribution, prior, and repulsiveness factors. While I have no mathematical issue with such results, I always wonder at their relevance for a given finite sample from a finite mixture, in that they give the impression that the number of components is a perfectly estimable quantity, which it is not (in my opinion!) because of the fluid nature of mixture components and therefore the inevitable impact of prior modelling. (As Larry Wasserman would pound in, mixtures like tequila are evil and should likewise be avoided!)

The implementation of this modelling goes through a “block-collapsed” Gibbs sampler that exploits the latent variable representation (as in our early mixture paper with Jean Diebolt). The paper includes the Old Faithful data as an illustration (for which a submission of ours was recently rejected for using too old datasets). And it uses the logarithm of the conditional predictive ordinate as an assessment tool, which is a posterior predictive quantity estimated by MCMC, using the data a second time for the fit.
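For the record, the standard MCMC estimate of the conditional predictive ordinate CPOᵢ is the harmonic mean of the likelihood of yᵢ over the posterior draws, CPOᵢ ≈ [T⁻¹ Σₜ 1/p(yᵢ|θₜ)]⁻¹. A quick sketch on a toy Normal model (my own illustration, not the paper’s code; the synthetic “posterior draws” are assumptions):

```python
import numpy as np

def log_cpo(y, draws):
    """Monte Carlo estimate of the log conditional predictive ordinates
    for a toy Normal model. y: data array (n,); draws: array (T, 2) of
    posterior draws of (mu, sigma). Uses the harmonic-mean identity
    log CPO_i = log T - logsumexp_t(-loglik_{t,i}), computed stably."""
    mu, sigma = draws[:, 0][:, None], draws[:, 1][:, None]  # shape (T, 1)
    loglik = (-0.5 * np.log(2 * np.pi) - np.log(sigma)
              - 0.5 * ((y[None, :] - mu) / sigma) ** 2)      # shape (T, n)
    a = -loglik
    m = a.max(axis=0)  # stabilise the log-sum-exp
    return np.log(draws.shape[0]) - (m + np.log(np.exp(a - m).sum(axis=0)))

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=50)
# pretend posterior draws concentrated near the truth (mu=0, sigma=1)
draws = np.column_stack([rng.normal(0.0, 0.1, 2000),
                         np.abs(rng.normal(1.0, 0.05, 2000))])
lpml = log_cpo(y, draws).sum()  # log pseudo-marginal likelihood
```

Summing the log CPOs gives the log pseudo-marginal likelihood, the usual scalar summary for model comparison, with the caveat noted above that the data is used twice.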

Dirichlet process mixture inconsistency

Posted in Books, Statistics on February 15, 2016 by xi'an

Judith Rousseau pointed out to me this NIPS paper by Jeff Miller and Matthew Harrison on the possible inconsistency of Dirichlet process mixture priors for estimating the (true) number of components in a (true) mixture model. The resulting posterior on the number of components does not concentrate on the right number of components. Which is not the case when setting a prior on the unknown number of components of a finite mixture, where consistency occurs. (The inconsistency results established in the paper actually focus on iid Gaussian observations, for which the estimated number of Gaussian components is almost never equal to 1.) In a more recent arXiv paper, they also show that a Dirichlet prior on the weights together with a prior on the number of components can still reproduce the same features as a Dirichlet process mixture prior. Even the stick-breaking representation! (A paper that I already reviewed last Spring.)
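A related and standard fact helps see why such behaviour is unsurprising: under the Dirichlet process prior itself, the expected number of clusters among n observations keeps growing like α log n (a Chinese restaurant process computation; the concentration parameter α = 1 below is an arbitrary choice for illustration):

```python
import numpy as np

def expected_dp_clusters(n, alpha=1.0):
    """Prior expectation of the number of clusters among n draws from a
    Dirichlet process (Chinese restaurant process):
    E[K_n] = alpha * sum_{i=0}^{n-1} 1/(alpha + i) ~ alpha * log(n).
    The prior keeps creating new components as the sample grows, even
    when the data come from a single Gaussian."""
    return alpha * np.sum(1.0 / (alpha + np.arange(n)))
```

With α = 1 this is the harmonic number Hₙ, so the prior expects about 7.5 clusters at n = 1000 and keeps climbing, which squares with the posterior rarely settling on a single component.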

Bruce Lindsay (March 7, 1947 — May 5, 2015)

Posted in Books, Running, Statistics, Travel, University life on May 22, 2015 by xi'an

When registering early for Seattle (JSM 2015) today, I discovered on the ASA webpage the very sad news that Bruce Lindsay had passed away on May 5. While Bruce was not a very close friend, we had met and interacted enough times for me to feel quite strongly about his most untimely death. Bruce was indeed “Mister Mixtures” in many ways, and I have always admired the unusual and innovative ways he had found for analysing mixtures. Including algebraic ones, through the rank of associated matrices. Which is why I first met him—besides a few words at the 1989 Gertrude Cox (first) scholarship race in Washington DC—at the workshop I organised with Gilles Celeux and Mike West in Aussois, in the French Alps, in 1995. After this meeting, we met twice in Edinburgh, at ICMS workshops on mixtures organised with Mike Titterington. I remember sitting next to Bruce at one workshop dinner (at Blonde) and him talking about his childhood in Oregon, his father being a journalist, and how this induced him to become an academic. He also contributed a chapter on estimating the number of components [of a mixture] to the Wiley book we edited out of this workshop. Obviously, his work extended beyond mixtures, to a general neo-Fisherian theory of likelihood inference. (Bruce was certainly not a Bayesian!) The last time I met him was in Italia, at a likelihood workshop in Venezia in October 2012, mixing Bayesian nonparametrics, intractable likelihoods, and pseudo-likelihoods. He gave a survey talk about composite likelihood, telling me about his extended stay in Italy (Padua?) around that time… So, Bruce, I hope you are now running great marathons in a place so full of mixtures that you can always keep ahead of the pack! Fare well!
