## sliced Wasserstein estimation of mixtures

Posted in Books, pictures, R, Statistics on November 28, 2017 by xi'an

A paper by Soheil Kolouri and co-authors was arXived last week about using the Wasserstein distance for inference on multivariate Gaussian mixtures. The basic concept is that the parameter is estimated by minimising the p-Wasserstein distance to the empirical distribution, smoothed by a Normal kernel. As the general Wasserstein distance is quite costly to compute, the approach relies on a sliced version, which means computing the Wasserstein distance between one-dimensional projections of the distributions. Optimising over the projection directions is an additional computational constraint.
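As a reminder of why slicing helps, in one dimension the p-Wasserstein distance between two samples of equal size reduces to matching order statistics, hence a mere sort. Here is a minimal R sketch of this reduction (my own illustration, not the authors' code, and combining random projection directions by averaging, where the paper instead optimises over the directions):

```r
# sketch (not the authors' code): one-dimensional p-Wasserstein distance
# between two samples of equal size, via matched order statistics
wass1d = function(x, y, p = 2)
  mean(abs(sort(x) - sort(y))^p)^(1 / p)

# sliced version: combine one-dimensional distances over random
# projection directions (the paper optimises over directions instead)
sliced_wass = function(X, Y, p = 2, ndir = 100) {
  tot = 0
  for (i in 1:ndir) {
    theta = rnorm(ncol(X))
    theta = theta / sqrt(sum(theta^2))  # uniform direction on the sphere
    tot = tot + wass1d(X %*% theta, Y %*% theta, p)^p
  }
  (tot / ndir)^(1 / p)
}
```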

“To fit a finite GMM to the observed data, one is required to answer the following questions: 1) how to estimate the number of mixture components needed to represent the data, and 2) how to estimate the parameters of the mixture components.”

The paper contains a most puzzling comment opposing maximum likelihood estimation to minimum Wasserstein distance estimation, on the basis that the latter would not suffer from multimodality. This sounds incorrect, as the multimodality of a mixture model (likelihood) stems from the lack of identifiability of the parameters: if all permutations of these parameters induce exactly the same distribution, they all stand at the same distance from the data distribution, whatever the distance. Furthermore, the above tartan-like picture clashes with the representation of the log-likelihood of a Normal mixture, as exemplified by the picture below, based on a sample of 150 points from a mixture with means 0 and 2, common unit variance, and weights 0.3 and 0.7, which shows a smooth if bimodal structure. And for the same dataset, my attempt at producing a Wasserstein "energy landscape" does return a multimodal structure (this is the surface of minus the logarithm of the 2-Wasserstein distance).

"Jin et al. proved that with random initialization, the EM algorithm will converge to a bad critical point with high probability."

This statement is most curious in that the "probability" in the assessment must depend on the choice of the random initialisation, hence on a sort of prior distribution that is never made explicit in the paper. Which remains blissfully unaware of Bayesian approaches.
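One quick way to make that dependence concrete (my own sketch, not taken from the paper) is to run EM from many draws of a given initial distribution and tally the terminal modes; the resulting frequency is explicitly a function of that initial law:

```r
# sketch (mine, not the paper's): EM on a two-mean mixture with known
# weights .3/.7 and unit variances, run from many random starting points,
# so the "probability" of ending at a given mode depends on the initial
# distribution, here N(0,25) on each mean
set.seed(1)
x = c(rnorm(45), 2 + rnorm(105))   # 150 points, true means 0 and 2
em = function(mu, iters = 50) {
  for (t in 1:iters) {
    z = .3 * dnorm(x, mu[1]) /
        (.3 * dnorm(x, mu[1]) + .7 * dnorm(x, mu[2]))
    mu = c(sum(z * x) / sum(z), sum((1 - z) * x) / sum(1 - z))
  }
  mu
}
ends = t(replicate(100, em(rnorm(2, sd = 5))))
mean(ends[, 1] > ends[, 2])        # frequency of the label-switched mode
```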

Another [minor mode] puzzling statement is that the p-Wasserstein distance is defined on the space of probability measures with finite p-th moment, which does not make much sense when what matters is rather the finiteness of the expectation of the distance d(X,Y) raised to the power p. Many of the mathematical details either do not make sense or seem superfluous.
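For the record, the textbook definition involves an infimum over couplings γ of the two measures,

$$W_p(\mu,\nu)=\Big(\inf_{\gamma\in\Gamma(\mu,\nu)}\int d(x,y)^p\,\mathrm d\gamma(x,y)\Big)^{1/p}=\Big(\inf_{\gamma}\mathbb E_\gamma[d(X,Y)^p]\Big)^{1/p},$$

where the finite p-th moment condition, ∫d(x,x₀)ᵖ dμ(x)<∞ for some fixed x₀, is the usual sufficient condition guaranteeing this expectation is finite under at least one coupling.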

Posted in R, Statistics, University life on April 14, 2011 by xi'an

For my second lecture today, I need to plot a likelihood surface for a basic two-component mixture with only the means unknown; here is the R code, to speed things up:

```r
llsurf = function(trumyn = 2., wayt = .3, var2 = 1., ssiz = 500) {
  # draws the log-likelihood surface of a two-mean Normal mixture
  # (plus N(0,10) priors on the means) and returns a random sample
  sd2 = sqrt(var2)
  parti = (runif(ssiz) > wayt)          # component indicators
  sampl = (1 - parti) * rnorm(ssiz) + parti * (trumyn + sd2 * rnorm(ssiz))
  mu2 = mu1 = seq(min(sampl), max(sampl), .1)
  mo1 = mu1 %*% t(rep(1, length(mu2))) # grid of first means
  mo2 = rep(1, length(mu1)) %*% t(mu2) # grid of second means
  like = -.05 * (mo1^2 + mo2^2)         # log prior N(0,10), up to a constant
  for (i in 1:ssiz)
    like = like + log(wayt * dnorm(sampl[i], mo1, 1) +
                      (1 - wayt) * dnorm(sampl[i], mo2, sd2))
  par(mar = c(4, 4, 1, 1))
  image(mu1, mu2, like, xlab = expression(mu[1]), ylab = expression(mu[2]),
        col = heat.colors(250))
  sampl
}
```

resulting in an outcome (very) similar to the above. The EM steps are then easily written as

```r
# sampl is the sample returned by llsurf() above
mu = cmu = rnorm(2, sd = 5)            # random starting point
for (t in 1:30) {
  # E step: posterior probabilities of the first component
  z = .3 * dnorm(sampl, mu[1]) /
      (.3 * dnorm(sampl, mu[1]) + .7 * dnorm(sampl, mu[2]))
  cmu = rbind(cmu, mu)
  # M step: weighted means
  mu[1] = sum(z * sampl) / sum(z)
  mu[2] = sum((1 - z) * sampl) / sum(1 - z)
}
lines(cmu[, 1], cmu[, 2])
```

while the Gibbs sampler is

```r
mu = cmu = rnorm(2, sd = 5)
for (t in 1:10^3) {
  # allocation probabilities for the first component
  z = .3 * dnorm(sampl, mu[1]) /
      (.3 * dnorm(sampl, mu[1]) + .7 * dnorm(sampl, mu[2]))
  u = runif(length(z))                 # simulated allocations: u < z
  # Gibbs updates of the means, with prior precision .01 (N(0,100) priors)
  mu[1] = sum((u < z) * sampl) / (.01 + sum(u < z)) +
          rnorm(1) / sqrt(.01 + sum(u < z))
  mu[2] = sum((u > z) * sampl) / (.01 + sum(u > z)) +
          rnorm(1) / sqrt(.01 + sum(u > z))
  cmu = rbind(cmu, mu)
}
lines(cmu[, 1], cmu[, 2])
```