Archive for clustering

Big Bayes goes South

Posted in Books, Mountains, pictures, Running, Statistics, Travel, University life on December 5, 2018 by xi'an

At the Big [Data] Bayes conference this week [which I found quite exciting despite a few last-minute cancellations by speakers] there were a lot of clustering talks, including the one by Amy Herring (Duke), using a notion of centering that should soon appear on arXiv. And the one by Peter Müller (UT Austin) towards handling large datasets, based on a predictive recursion that takes one value at a time, unsurprisingly similar to the update of Dirichlet process mixtures (and inspired by a 1998 paper by Michael Newton and co-authors). The recursion doubles in size at each observation, requiring the culling of negligible components. Does order matter? There are links with the Malsiner-Walli et al. (2017) mixtures of mixtures. Also talks by Antonio Lijoi and Igor Pruenster (Bocconi, Milan) on completely random measures used to create clusters. And by Sylvia Frühwirth-Schnatter (WU Wien) on creating clusters in the Austrian labor market to assess the impact of company closures. And by Gregor Kastner (WU Wien) on multivariate factor stochastic models, with a video of a large covariance matrix evolving over time and catching economic crises. And by David Dunson (Duke) on distance clustering, reflecting, as I do, on the definitely ill-defined nature of the [clustering] object: as the sample size increases, spurious clusters appear. (Which reminded me of a disagreement I had had with David MacKay at an ICMS conference on mixtures twenty years ago.) It made me realise I had missed the recent JASA paper by Miller and Dunson on that perspective.

Some further snapshots (with short comments visible by hovering on the pictures) of a very high quality meeting [says one of the organisers!]. Following suggestions from several participants, it would be great to hold another meeting at CIRM in the near future.

Handbook of Mixture Analysis [cover]

Posted in Books, Statistics, University life on August 15, 2018 by xi'an

On the occasion of my talk at JSM2018, CRC Press sent me the cover of our forthcoming handbook on mixture analysis, courtesy of Rob Calver, who managed to get it to me on very short notice! We are about ready to send the manuscript to CRC Press and hopefully the volume will get published fairly soon. It would have been better to have it ready for JSM2018, but we editors got delayed by a few months for the usual reasons.

a Ca’Foscari [first Italian-French statistics seminar]

Posted in Kids, pictures, Statistics, Travel, University life on October 26, 2017 by xi'an

Apart from subjecting my [surprisingly large!] audience to three hours of ABC tutorial today, and after running the Ponte della Libertà to Mestre and back in a deep fog, I attended the second part of the 1st Italian-French statistics seminar at Ca’Foscari, Venetiarum Universitas, with talks by Stefano Tonellato and Roberto Casarin. Stefano discussed a most interesting if puzzling notion of clustering via Dirichlet process mixtures. Which indeed puzzles me for its dependence on the Dirichlet measure and for the potential for an unlimited number of clusters as the sample size increases. The method offers similarities with an approach from our 2000 JASA paper on running inference on mixtures without proper label switching, in that looking at pairs of observations allocated to the same cluster is revealing about the [true or pseudo-true] number of clusters. It diverges from ours in using eigenvalues of Laplacians of similarity matrices. But because of the potential for the number of components to diverge, I wonder at the robustness of the approach via non-parametric [Bayesian] modelling. Maybe my difficulty lies with the very notion of cluster, which I find poorly defined and mostly in the eye of the beholder! And Roberto presented recent work on SURE and VAR models, with a great graphical representation of the estimated connections between factors in a sparse graphical model.
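As an aside, here is a minimal numerical sketch of the spectral idea mentioned above (my own illustration, not Tonellato's actual procedure): build a Gaussian-kernel similarity matrix, form its graph Laplacian, and count the near-zero eigenvalues as a hint at the number of clusters. The bandwidth, the threshold, and the toy data are all arbitrary choices of the example.

```python
import numpy as np

def laplacian_spectrum(X, bandwidth=1.0):
    """Toy spectral heuristic: the number of near-zero eigenvalues of the graph
    Laplacian of a Gaussian-kernel similarity matrix hints at the number of clusters."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.exp(-d2 / (2 * bandwidth ** 2))                # similarity matrix
    D = np.diag(W.sum(axis=1))
    L = D - W                                             # unnormalised graph Laplacian
    return np.sort(np.linalg.eigvalsh(L))                 # inspect the spectral gap

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
print(laplacian_spectrum(X)[:5])                          # roughly two near-zero eigenvalues
```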

at CIRM [#2]

Posted in Mountains, pictures, Running, Statistics, Travel, University life on March 2, 2016 by xi'an

Sylvia Richardson gave a great talk yesterday on clustering applied to variable selection, which first raised [in me] the usual worry about the lack of a background model for clustering. But the way she used this notion meant there was an infinite Dirichlet process mixture model behind it. This is quite novel [at least for me!] in that it addresses the covariates and not the observations themselves. I still wonder at the meaning of a cluster since, if I understood properly, the dependent variable is not involved in the clustering. Check her R package PReMiuM for a practical implementation of the approach. Later, Adeline Samson showed us the results of using pMCMC versus particle Gibbs for diffusion processes, where (a) pMCMC was behaving much worse than particle Gibbs and (b) EM required very few particles and Metropolis-Hastings steps to achieve convergence, when compared with posterior approximations.

Today Pierre Druilhet explained to the audience of the summer school his measure-theoretic approach [which I discussed a while ago] to the limit of proper priors via q-vague convergence, with the paradoxical phenomenon that a Be(n⁻¹,n⁻¹) prior converges to a sum of two Dirac masses when the parameter space is [0,1] but to Haldane’s prior when the space is (0,1)! He also explained why the Jeffreys-Lindley paradox vanishes when considering different measures [with an illustration that came from my Statistica Sinica 1993 paper]. Pierre concluded with the above opposition between two Bayesian paradigms, a [sort of] tale of two sigma [fields]! Not that I necessarily agree with the first paradigm, that priors are supposed to have generated the actual parameter. If only because it mechanistically excludes all improper priors…
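For what it is worth, here is a quick numerical illustration (my own, unrelated to Druilhet's q-vague construction) of how the mass of a Be(n⁻¹,n⁻¹) prior piles up at the endpoints of [0,1] as n grows, consistent with a limit made of two Dirac masses at 0 and 1; the 10⁻³ window is an arbitrary choice of the example.

```python
from scipy.stats import beta

# Mass that a Be(1/n, 1/n) prior puts within eps of the endpoints of [0,1]:
# it tends to 1 as n grows, i.e. the prior degenerates onto {0, 1}.
eps = 1e-3
for n in (1, 10, 100, 1000):
    a = 1 / n
    mass_near_edges = beta.cdf(eps, a, a) + (1 - beta.cdf(1 - eps, a, a))
    print(f"n={n:5d}  P(within {eps} of an endpoint) = {mass_near_edges:.4f}")
```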

Darren Wilkinson talked about yeast, which is orders of magnitude more exciting than it sounds, because this is Bayesian big data analysis in action! With significant (and hence impressive) results based on stochastic dynamic models. And massive variable selection techniques. Scala, Haskell, Frege, OCaml were [functional] languages he mentioned that I had never heard of before! And Daniel Rudolf concluded the [intense] second day of this Bayesian week at CIRM with a description of his convergence results for (rather controlled) noisy MCMC algorithms.

mixtures of mixtures

Posted in pictures, Statistics, University life on March 9, 2015 by xi'an

And yet another arXival of a paper on mixtures! This one is written by Gertraud Malsiner-Walli, Sylvia Frühwirth-Schnatter, and Bettina Grün, from the Johannes Kepler University Linz and the Wirtschaftsuniversität Wien I visited last September. The exact title is Identifying mixtures of mixtures using Bayesian estimation.

So, what is a mixture of mixtures if not a mixture?! Or, rather, if not only a mixture. The upper mixture level is associated with clusters, while the lower mixture level is used for modelling the distribution within a given cluster. Because a cluster needs to be real enough, the components of the lower-level mixture are assumed to be heavily overlapping. The paper thus spends a large amount of space detailing the construction of the associated hierarchical prior, which in particular implies defining through the prior what a cluster means. The paper also connects with the overfitting mixture idea of Rousseau and Mengersen (2011, Series B). At the cluster level, the Dirichlet hyperparameter is chosen to be very small, 0.001, which empties superfluous clusters but sounds rather arbitrary (which is the reason why we did not go for such small values in our testing/mixture modelling). In contrast, the within-cluster mixture weights have a hyperparameter staying (far) away from zero. The MCMC implementation is based on a standard Gibbs sampler and the outcome is analysed and sorted by estimating the “true” number of clusters as the MAP and by selecting the MCMC simulations conditional on that value. From there, clusters are identified via the point process representation of the mixture posterior, using a standard k-means algorithm.
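As a rough numerical aside (my own sketch, not the paper's implementation), one can check how a very small symmetric Dirichlet hyperparameter on the cluster weights empties superfluous components in an overfitted mixture; I use e0 = 0.01 rather than the paper's 0.001 only to keep numpy's gamma sampling away from underflow, and the 0.01 weight threshold is arbitrary.

```python
import numpy as np

# Prior draws of K cluster weights under a sparse symmetric Dirichlet(e0,...,e0):
# with e0 very small, almost all mass sits on a handful of components, which is
# what empties the superfluous clusters of an overfitted mixture.
rng = np.random.default_rng(2)
K, e0 = 10, 0.01                     # the paper goes as low as e0 = 0.001
w = rng.dirichlet(np.full(K, e0), size=5_000)
active = (w > 0.01).sum(axis=1)      # components carrying non-negligible weight
print("average number of active components out of", K, ":", active.mean())
```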

The remainder of the paper illustrates the approach on simulated and real datasets, recovering in those low-dimensional setups the number of clusters used in the simulations or found in other studies. As noted in the conclusion, using solely a Gibbs sampler with such a large number of components is rather perilous, since it may get stuck close to suboptimal configurations, especially with very small Dirichlet hyperparameters.

machine learning [book review]

Posted in Books, R, Statistics, University life on October 21, 2013 by xi'an

I have to admit the rather embarrassing fact that Machine Learning, A Probabilistic Perspective by Kevin P. Murphy is the first machine learning book I have really read in detail…! It is a massive book, with close to 1,100 pages, and I thus hesitated to carry it around, until I grabbed it in my bag for Warwick. (And on the train to Argentan.) It is also massive in its contents, as it covers most (all?) of what I call statistics (but evidently corresponds to machine learning as well!). With a Bayesian bent most of the time (which is the secret meaning of probabilistic in the title).

“…we define machine learning as a set of methods that can automatically detect patterns in data, and then use the uncovered patterns to predict future data, or to perform other kinds of decision making under uncertainty (such as planning how to collect more data!).” (p.1)

Apart from the Introduction, which I find rather confusing for not dwelling on the nature of errors and randomness, or on the reason for using probabilistic models (since they are all wrong), and charming for including a picture of the author’s family as an illustration of face recognition algorithms, I cannot say I found the book more lacking in foundations or in the breadth of methods and concepts it covers than a “standard” statistics book. In short, this is a perfectly acceptable statistics book! Furthermore, it has a very relevant and comprehensive selection of references (sometimes favouring “machine learning” references over “statistics” references!). Even the vocabulary seems pretty standard to me. All this makes me wonder why we distinguish at all between the two domains, following Larry Wasserman’s view (for once!) that the difference is mostly in the eye of the beholder, i.e., in which department one teaches… This was already my perspective before I read the book, and it comforted me in it even further. And the author agrees as well (“The probabilistic approach to machine learning is closely related to the field of statistics, but differs slightly in terms of its emphasis and terminology”, p.1). Let us all unite!

[..part 2 of the book review to appear tomorrow…]

AMOR at 5000ft in a water tank…

Posted in Mountains, pictures, Statistics, University life on November 22, 2012 by xi'an

On Monday, I attended the thesis defence of Rémi Bardenet in Orsay as a member (referee) of his thesis committee. While this was a thesis in computer science, which took place in the Linear Accelerator Lab in Orsay, it was clearly rooted in computational statistics, hence justifying my presence on the committee. The justification (!) for the splashy headline of this post is that Rémi’s work was motivated by the Pierre Auger experiment on ultra-high-energy cosmic rays, where particles are detected through a network of 1600 water tanks spread over the Argentinian Pampa Amarilla, an area the size of Rhode Island (where I am incidentally going next week).

The part of Rémi’s thesis presented during the defence concentrated on his AMOR algorithm, arXived in a paper written with Olivier Cappé and Gersende Fort. AMOR stands for adaptive Metropolis online relabelling and combines adaptive MCMC techniques with relabelling strategies to fight label switching (e.g., in mixtures). I have been interested in mixtures for eons (starting in 1987 in Ottawa with applying Titterington, Smith, and Makov to chest radiographs) and in label switching for ages (starting at the COMPSTAT conference in Bristol in 1998). Rémi’s approach to the label switching problem follows the relabelling path, namely a projection of the original parameter space into a smaller subspace (that is also a quotient space) to avoid permutation invariance and lack of identifiability. (In the survey I wrote with Kate Lee, Jean-Michel Marin and Kerrie Mengersen, we suggest using the mode as a pivot to determine which permutation to use on the components of the mixture.) The paper suggests using a distance to an adaptively determined mean $\mu_t$, based on a quadratic form $\Sigma_t$ also determined on the go, minimising $(P\theta-\mu_t)^\mathsf{T}\Sigma_t(P\theta-\mu_t)$ over all permutations $P$ at each step of the algorithm (a toy sketch of this relabelling step follows the list below). The intuition behind the method is that the posterior over the restricted space should look like a roughly elliptically symmetric distribution, or at least like a unimodal distribution, rather than borrowing bits and pieces from different modes. While I appreciate the technical tour de force represented by the proof of convergence of the AMOR algorithm, I remain somewhat sceptical about the approach and voiced the following objections during the defence: first, the assumption that the posterior becomes unimodal under an appropriate restriction is not necessarily realistic, as secondary modes often pop up with real data (as in the counter-example we used in our paper with Alessandra Iacobucci and Jean-Michel Marin). Next, the whole apparatus of fighting multiple modes and non-identifiability, i.e. of fighting label switching, exists to fall back on posterior means as Bayes estimators. As stressed in our JASA paper with Gilles Celeux and Merrilee Hurn, there is no reason for doing so and there are several reasons for not doing so:

  • it breaks down under model misspecification, i.e., when the number of components is not correct
  • it does not improve the speed of convergence but, on the contrary, restricts the space visited by the Markov chain
  • it may fall victim to the fatal attraction of secondary modes by fitting too small an ellipse around one of those modes
  • it ultimately depends on the parameterisation of the model
  • there is no reason for using posterior means in mixture problems; posterior modes or cluster centres can be used instead
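As announced above, here is a toy sketch of the relabelling step (my own illustration, not the authors' implementation): the component parameters of one MCMC draw are stacked into a matrix, and the permutation minimising the quadratic form against the running mean and matrix is found by brute-force enumeration, which is only feasible for small numbers of components.

```python
import numpy as np
from itertools import permutations

def relabel(theta, mu, Sigma):
    """Toy relabelling in the spirit of AMOR: pick the permutation P of the K
    component parameters minimising (P·theta - mu)^T Sigma (P·theta - mu)."""
    K = theta.shape[0]
    best_perm, best_val = None, np.inf
    for perm in permutations(range(K)):        # brute force, small K only
        diff = theta[list(perm)].ravel() - mu
        val = diff @ Sigma @ diff
        if val < best_val:
            best_perm, best_val = perm, val
    return theta[list(best_perm)]

# usage: theta holds the K component-wise parameters of one MCMC draw, while
# mu and Sigma stand for the running mean and quadratic form updated along the chain
theta = np.array([[3.0, 1.0], [-2.0, 0.5]])    # e.g. (mean, sd) for each of two components
mu = np.array([-2.0, 0.5, 3.0, 1.0])
Sigma = np.eye(4)
print(relabel(theta, mu, Sigma))               # components reordered to match mu
```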

I am therefore very much more in favour of producing a posterior distribution that exhibits as much label switching as possible (since the true posterior is completely symmetric in this respect). Post-processing the resulting sample can then be done with off-the-shelf clustering in the component space, derived from the point process representation used by Matthew Stephens in his thesis and subsequent papers. It also allows for a direct estimation of the number of components.
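For illustration only (this is my sketch of the generic idea, not Stephens' actual procedure), the post-processing could look like the following: pool the component-level parameters of all MCMC draws, ignore their labels, and run k-means on the resulting point cloud; the scikit-learn call and the fake two-component output are assumptions of the example.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_components(draws, n_clusters):
    """Pool component parameters from all draws (shape (n_iter, K, d)), forget
    the labels, and cluster the resulting point process with k-means."""
    points = draws.reshape(-1, draws.shape[-1])
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(points)
    return km.cluster_centers_                 # candidate component estimates

rng = np.random.default_rng(3)
# fake label-switching output: two components near (-2, .5) and (3, 1), in random order
draws = np.stack([rng.permutation([[-2.0, 0.5], [3.0, 1.0]]) + rng.normal(0, 0.05, (2, 2))
                  for _ in range(200)])
print(cluster_components(draws, n_clusters=2))
```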

In any case, this was a defence worth attending, one that led me to think afresh about the label switching problem, with directions worth exploring next month while Kate Lee is visiting from Auckland. Rémi Bardenet is now headed for a postdoc in Oxford, a perfect location to discuss label switching further and to engage in new computational statistics research!