On the last day of the IFCAM workshop in Bangalore, Marc Lavielle from INRIA gave a talk on mixed effects models in which he illustrated his original computer language Monolix, and mentioned that his CRC Press book Mixed Effects Models for the Population Approach was out! (Appropriately listed on amazon as published on a 14th of July!) He demonstrated the abilities of Monolix live, on diabetes data provided by an earlier speaker from Kolkata, which was a perfect way to initiate a collaboration! Nice cover (which is all I have seen of the book at this stage!) that may induce candidates to write a review for CHANCE. Estimation of those mixed effects models relies on the stochastic EM algorithms developed by Marc Lavielle and Éric Moulines in the 90's, as well as on MCMC methods.
My Paris colleague (and fellow runner) Aurélien Garivier has produced an interesting comparison of four (or six, if you count scilab and octave as different from matlab) computer languages in terms of speed for producing the MLE in a hidden Markov model, using the EM (Baum-Welch) algorithm. His conclusions are that
- matlab is a lot faster than R and python, especially when vectorization matters: this is why the difference is spectacular on filtering/smoothing, but much less so on the creation of the sample;
- octave is a good matlab emulator, if no special attention is paid to execution speed…;
- scilab appears to be a credible and efficient alternative to matlab;
- still, C is a lot faster; the inefficiency of matlab loops is well known, and it clearly shows in the creation of the sample.
(In this implementation, R is “only” three times slower than matlab, so this is not so damning…) All the codes are available and you are free to make suggestions to improve the speed of your favourite language!
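To see where vectorisation bites, here is a minimal sketch (not Aurélien's code, and with generic textbook notation A, B, π) of the forward filtering recursion of Baum-Welch, the step on which the cross-language gap is spectacular: each time step is a single matrix-vector product over the hidden states rather than a loop.

```python
import numpy as np

def forward_filter(obs, A, B, pi):
    """Normalised forward probabilities and log-likelihood for a discrete HMM.

    obs : sequence of observation indices
    A   : (K, K) transition matrix, A[i, j] = P(z_t = j | z_{t-1} = i)
    B   : (K, M) emission matrix,  B[i, m] = P(x_t = m | z_t = i)
    pi  : (K,) initial distribution
    """
    K, T = A.shape[0], len(obs)
    alpha = np.empty((T, K))
    a = pi * B[:, obs[0]]
    c = a.sum()                      # scaling constant, avoids underflow
    alpha[0] = a / c
    loglik = np.log(c)
    for t in range(1, T):
        a = (alpha[t - 1] @ A) * B[:, obs[t]]   # vectorised over states
        c = a.sum()
        alpha[t] = a / c
        loglik += np.log(c)
    return alpha, loglik

# toy sticky two-state chain
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.7, 0.3], [0.1, 0.9]])
pi = np.array([0.5, 0.5])
alpha, ll = forward_filter([0, 1, 1, 0], A, B, pi)
```

Only the loop over time remains; in matlab or numpy the per-step work is vectorised, whereas a naïve double loop over states is exactly the pattern that makes interpreted loops so costly.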
Today I gave my first lecture at the Universidad Autónoma de Madrid. Apart from a shaky start due to my new computer not recognising the videoprojector, I covered EM for mixtures in the hour and a half of the course. I obviously finished the day with tapas in a nearby bar, vaguely watching Barcelona play an improbable team so as to merge with the other patrons… In the second lecture, I hope to illustrate both EM and Gibbs sampling on a simple mixture likelihood surface.
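For readers who missed the lecture, the EM iteration for mixtures is short enough to sketch in full. Below is a minimal version (my own toy setup, not the lecture's) for a two-component Gaussian mixture with known weight and unit variances, estimating only the two means, the classical example whose likelihood surface is bimodal:

```python
import numpy as np

rng = np.random.default_rng(0)
# simulated mixture: 60% N(0,1), 40% N(4,1); only the means are unknown
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(4.0, 1.0, 200)])
p = 0.6                       # known weight of the first component
mu = np.array([-1.0, 1.0])    # deliberately poor starting point

for _ in range(200):
    # E-step: posterior probability that each point belongs to component 1
    d1 = p * np.exp(-0.5 * (x - mu[0]) ** 2)
    d2 = (1 - p) * np.exp(-0.5 * (x - mu[1]) ** 2)
    t1 = d1 / (d1 + d2)
    # M-step: means weighted by the posterior allocations
    mu = np.array([np.sum(t1 * x) / np.sum(t1),
                   np.sum((1 - t1) * x) / np.sum(1 - t1)])
```

Starting the iterations from different points is precisely what exhibits the modes of the surface, which is the planned contrast with a Gibbs sampler exploring it.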
A graduate student came to see me the other day with a bivariate Poisson distribution and a question about using EM in this framework. The problem boils down to adding one correlation parameter and an extra term in the likelihood.
Both terms involving sums are easy to deal with, using latent variables as in mixture models. The subtractions are trickier, as the negative parts cannot appear in a conditional distribution. Even though the problem can be handled by direct numerical maximisation or by an almost standard Metropolis-within-Gibbs sampler, my suggestion regarding EM per se was to proceed by conditional EM, one parameter at a time. For instance, when considering the correlation parameter conditional on both Poisson parameters, one can, depending on the case, either complete one term,
thus producing a Beta-like target function after completion, or turn to the other term,
to produce a Beta-like target function after completion as well. In the end, this is a rather pedestrian exercise, and I am still frustrated at missing the trick to handle the subtractions directly; it was nonetheless a nice question!
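For comparison, in the standard representation of the bivariate Poisson, X = U + W and Y = V + W with independent Poisson components (which is not necessarily the student's exact parameterisation, and involves no subtraction), completion on the shared component W gives a plain EM with a closed-form M-step. A minimal sketch, with all names and starting values my own:

```python
import numpy as np
from math import exp, factorial

def bp_pmf(x, y, l1, l2, l3):
    # pmf of (X, Y) = (U + W, V + W), U ~ P(l1), V ~ P(l2), W ~ P(l3)
    if x < 0 or y < 0:
        return 0.0
    s = sum(l1 ** (x - k) / factorial(x - k)
            * l2 ** (y - k) / factorial(y - k)
            * l3 ** k / factorial(k)
            for k in range(min(x, y) + 1))
    return exp(-(l1 + l2 + l3)) * s

rng = np.random.default_rng(1)
n = 2000
u, v, w = rng.poisson(2.0, n), rng.poisson(1.0, n), rng.poisson(0.5, n)
x, y = u + w, v + w            # only the pairs (x, y) are observed

l1, l2, l3 = 1.0, 1.0, 1.0     # arbitrary starting values
for _ in range(50):
    # E-step: E[W | x, y] = l3 * p(x-1, y-1) / p(x, y)
    s = np.array([l3 * bp_pmf(int(xi) - 1, int(yi) - 1, l1, l2, l3)
                  / bp_pmf(int(xi), int(yi), l1, l2, l3)
                  for xi, yi in zip(x, y)])
    # M-step: closed-form updates of the three Poisson rates
    l3 = s.mean()
    l1 = x.mean() - l3
    l2 = y.mean() - l3
```

Note that the M-step ties the estimates together: the fitted marginal means always match the sample means, since l1 + l3 equals the average of x by construction.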
We have now completed editing the book Mixture Estimation and Applications with Kerrie Mengersen and Mike Titterington, made of contributions from participants in the ICMS workshop on mixtures that took place in Edinburgh last March. Here is the prospective table of contents: