**R**émi Bardenet has been awarded an ERC grant on Monte Carlo integration via repulsive point processes and is now looking for a postdoc starting next March. (Our own ABSINT ANR grant still has an open offer of a postdoctoral position on approximate Bayesian methods, feel free to contact me if potentially interested.)

## Archive for repulsiveness

## repulsive postdoc!

Posted in Statistics with tags ABC, Agence Nationale de la Recherche, ANR, approximate Bayesian inference, ERC, European Research Council, France, Lille, postdoctoral position, repulsiveness, Université Paris Dauphine on December 20, 2019 by xi'an

## repulsive mixtures

Posted in Books, Statistics with tags consistency, Dirichlet mixture priors, finite mixtures, Gibbs sampling, Larry Wasserman, repulsiveness, reversible jump MCMC, tequila, unknown number of components on April 10, 2017 by xi'an

**F**angzheng Xie and Yanxun Xu arXived today a paper on Bayesian repulsive modelling for mixtures. Not that Bayesian modelling is repulsive in any psychological sense, but rather that the components of the mixture are repulsive towards one another. The device towards this repulsiveness is to add a penalty term to the original prior so that close means are penalised. (In the spirit of the sugar loaf with water drops represented on the cover of Bayesian Choice that we used in our pinball sampler, repulsiveness being there on the particles of a simulated sample and not on components.) Which also means a prior assumption that close covariance matrices are of lesser importance. An interrogation I had was why empty components are not excluded as well, but this does not make too much sense in the Dirichlet process formulation of the current paper. And in the finite mixture version the Dirichlet prior on the weights has coefficients less than one.
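To make the repulsion device concrete, here is a minimal sketch of such a penalised prior, not the paper's exact construction: the base prior density on the component means is multiplied by a pairwise factor g(d²) = d²/(d²+g₀) that vanishes as two means coincide (the function `repulsive_penalty` and the scale `g0` are illustrative choices, not taken from the paper).

```python
import numpy as np

def repulsive_penalty(means, g0=1.0):
    """Pairwise repulsion factor: product over component pairs of
    g(d2) = d2 / (d2 + g0), which tends to 0 as two means get close
    and to 1 as they separate. (Illustrative form, not the paper's.)"""
    means = np.asarray(means, dtype=float)
    k = len(means)
    penalty = 1.0
    for i in range(k):
        for j in range(i + 1, k):
            d2 = np.sum((means[i] - means[j]) ** 2)
            penalty *= d2 / (d2 + g0)
    return penalty

def log_repulsive_prior(means, base_logpdf, g0=1.0):
    """Unnormalised log prior: base prior on the means plus the
    log repulsion penalty, so close means are jointly penalised."""
    return base_logpdf(means) + np.log(repulsive_penalty(means, g0))

# Nearly coincident means are heavily penalised relative to spread ones:
close = repulsive_penalty([0.0, 0.01])   # tiny factor
spread = repulsive_penalty([0.0, 5.0])   # factor near 1
```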

The paper establishes consistency results for such repulsive priors, both for estimating the distribution itself and the number of components, K, under a collection of assumptions on the distribution, prior, and repulsiveness factors. While I have no mathematical issue with such results, I always wonder at their relevance for a given finite sample from a finite mixture in that they give an impression that the number of components is a perfectly estimable quantity, which it is not (in my opinion!) because of the fluid nature of mixture components and therefore the inevitable impact of prior modelling. (As Larry Wasserman would pound in, mixtures like tequila are evil and should likewise be avoided!)

The implementation of this modelling goes through a “block-collapsed” Gibbs sampler that exploits the latent variable representation (as in our early mixture paper with Jean Diebolt). Which includes the Old Faithful data as an illustration (for which a submission of ours was recently rejected for using too old datasets). And which uses the logarithm of the conditional predictive ordinate as an assessment tool, which is a posterior predictive estimated by MCMC, using the data a second time for the fit.
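The conditional predictive ordinate for observation i is the harmonic mean of the per-draw likelihoods, CPO_i = [M⁻¹ Σ_m 1/p(y_i|θ⁽ᵐ⁾)]⁻¹, hence computable from MCMC output; below is a generic numerically-stable sketch of its logarithm (the function name and the log-sum-exp implementation are mine, not the paper's code).

```python
import numpy as np

def log_cpo(loglik):
    """loglik: (M, n) array of log p(y_i | theta^(m)) over M MCMC draws
    and n observations. Returns log CPO_i for each observation, where
    CPO_i = [ (1/M) sum_m 1 / p(y_i | theta^(m)) ]^{-1},
    computed via a log-sum-exp on the negated log-likelihoods."""
    neg = -loglik                      # log of 1 / p(y_i | theta^(m))
    m = neg.max(axis=0)                # per-observation shift for stability
    log_inv_cpo = m + np.log(np.exp(neg - m).mean(axis=0))
    return -log_inv_cpo

# Summing log CPO_i over observations gives the log pseudo-marginal
# likelihood (LPML), a common model-assessment summary.
lcpo = log_cpo(np.full((100, 5), -2.3))  # degenerate check: constant draws
```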