Archive for University of Edinburgh

off to Edinburgh [and SMC 2024]

Posted in Books, Mountains, pictures, Statistics, Travel, University life on May 12, 2024 by xi'an

Today I am off to Edinburgh for the SMC 2024 workshop run by the ICMS. Looking forward to meeting with long-time friends and new ones, and to learning about novel directions in the field. And to returning to Edinburgh, which I last visited in 2019 for the opening of the Bayes Centre. Hoping to enjoy the nearby Arthur’s Seat volcano and maybe farther away Munros, depending on the program, train schedules, and… weather forecasts!

Mike’s obituary in the IMS Bulletin

Posted in Statistics on August 17, 2023 by xi'an

de-MCM’d

Posted in Statistics, University life on June 9, 2023 by xi'an


This morning I received a message from the MCM 23 conference organisers that my registration [submitted two months ago] was declined for lack of room! I wonder why the organisers did not opt for broadcasting in a second amphitheater, as was done for ISBA in Edinburgh.

Unfortunately, we have attained the maximal capacity of the amphitheater where the plenary talks will take place (this is the largest amphitheater that one can rent on the Jussieu campus). This amphitheater capacity was significantly larger than the number of attendees of previous MCM conferences. We feel really sorry that we can’t confirm your registration to MCM2023.

Which is pretty frustrating given that the program is of the highest standards and that many friends, coauthors, and students of mine are giving talks there. Being a local, I’ll try to gatecrash some talks, of course, but I would not bet on my chances, unless I can borrow a badge!

CANSSI on HMMs

Posted in Statistics, University life on September 21, 2020 by xi'an

The Canadian Statistical Sciences Institute/Institut canadien des sciences statistiques is launching a series of on-line seminars, held once a month, with journal clubs to prepare each seminar and student-only meetings with the speakers afterwards.

Seminars will be broadcast live on the fourth Thursday of the month from 1-2:15 pm Eastern time (18 GMT+2).  Students will meet virtually with the speaker from 2:30-3:30 pm Eastern time. Talks in the fall will focus on Hidden Markov Models, starting on Thursday, September 24, 2020 with Ruth King of the University of Edinburgh.

sequential neural likelihood estimation as ABC substitute

Posted in Books, Kids, Statistics, University life on May 14, 2020 by xi'an

A JMLR paper by Papamakarios, Sterratt, and Murray (Edinburgh), first presented at the AISTATS 2019 meeting, on a new form of likelihood-free inference, away from non-zero tolerance and from the distance-based versions of ABC, following earlier papers by Iain Murray and co-authors in the same spirit. Which I got pointed to during the ABC workshop in Vancouver. At the time I had no idea what autoregressive flows meant. We were supposed to hold a reading group in Paris-Dauphine on this paper last week, unfortunately cancelled as a coronaviral precaution… Here are some notes I had prepared for the meeting that did not take place.

“A simulator model is a computer program, which takes a vector of parameters θ, makes internal calls to a random number generator, and outputs a data vector x.”

Just the usual generative model then.

“A conditional neural density estimator is a parametric model q(.|φ) (such as a neural network) controlled by a set of parameters φ, which takes a pair of datapoints (u,v) and outputs a conditional probability density q(u|v,φ).”

Less usual, in that the outcome is guaranteed to be a probability density.

“For its neural density estimator, SNPE uses a Mixture Density Network, which is a feed-forward neural network that takes x as input and outputs the parameters of a Gaussian mixture over θ.”

In which theoretical sense would it improve upon classical or Bayesian density estimators? Where are the error evaluation, the optimal rates, the sensitivity to the dimension of the data? Of the parameter?
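As a point of reference, here is a minimal PyTorch-style sketch of what such a mixture density network can look like, with purely hypothetical layer sizes and number of components; it only illustrates the “x in, Gaussian mixture over θ out” structure of the quote, not the architecture actually used in SNPE.

```python
import torch.nn as nn
from torch.distributions import Categorical, Independent, MixtureSameFamily, Normal

class MDN(nn.Module):
    """Toy mixture density network: maps data x to a Gaussian mixture over θ."""
    def __init__(self, x_dim, theta_dim, n_components=5, hidden=64):
        super().__init__()
        self.theta_dim, self.n_components = theta_dim, n_components
        self.net = nn.Sequential(nn.Linear(x_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, hidden), nn.Tanh())
        self.logits = nn.Linear(hidden, n_components)               # mixture weights
        self.means = nn.Linear(hidden, n_components * theta_dim)    # component means
        self.log_sds = nn.Linear(hidden, n_components * theta_dim)  # component scales

    def forward(self, x):
        h = self.net(x)
        mix = Categorical(logits=self.logits(h))
        means = self.means(h).view(-1, self.n_components, self.theta_dim)
        sds = self.log_sds(h).view(-1, self.n_components, self.theta_dim).exp()
        comp = Independent(Normal(means, sds), 1)
        return MixtureSameFamily(mix, comp)   # the conditional density q(θ | x, φ)

# training would maximise the conditional log-density over simulated pairs (θ, x):
# loss = -mdn(x_batch).log_prob(theta_batch).mean()
```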

“Our new method, Sequential Neural Likelihood (SNL), avoids the bias introduced by the proposal, by opting to learn a model of the likelihood instead of the posterior.”

I do not get the argument, in that the final outcome (of using the approximation within an MCMC scheme) remains biased, since the likelihood is not the exact likelihood. Where is the error evaluation? Note that in the associated Algorithm 1, the learning set is enlarged at each round, as in AMIS, rather than reset to the empty set ∅ (as in the sketch below).
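For what it is worth, here is a minimal sketch of how I read the rounds of Algorithm 1, on a toy Gaussian example. The simulator, the crude linear-Gaussian surrogate standing in for the neural likelihood estimator (the paper uses a masked autoregressive flow), the random-walk Metropolis step, and every name below are placeholders of mine, not the authors’ code; the point is only the round structure and the ever-growing training set.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta):
    # toy simulator standing in for an intractable model: x | θ ~ N(θ, 1)
    return theta + rng.standard_normal()

def prior_sample(n):
    # uniform prior on (-5, 5)
    return rng.uniform(-5.0, 5.0, size=n)

def fit_surrogate(thetas, xs):
    # crude stand-in for the conditional neural density estimator q(x | θ):
    # a linear-Gaussian fit of x on θ (SNL would train a MAF here instead)
    slope, intercept = np.polyfit(thetas, xs, 1)
    resid_sd = np.std(xs - (slope * thetas + intercept)) + 1e-6
    def log_q(x, theta):
        mean = slope * theta + intercept
        return -0.5 * ((x - mean) / resid_sd) ** 2 - np.log(resid_sd)
    return log_q

def mcmc(log_target, n_iter=2000, step=0.5):
    # plain random-walk Metropolis on θ
    theta, lp, draws = 0.0, log_target(0.0), []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal()
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        draws.append(theta)
    return np.array(draws)

x_obs = 1.3                        # observed data x⁰
thetas, xs = np.array([]), np.array([])
proposal = prior_sample            # round 1 proposes from the prior

for r in range(5):                 # SNL rounds
    new_thetas = proposal(200)
    new_xs = np.array([simulator(t) for t in new_thetas])
    # the training set is enlarged, never reset (the AMIS-like feature noted above)
    thetas = np.concatenate([thetas, new_thetas])
    xs = np.concatenate([xs, new_xs])
    log_q = fit_surrogate(thetas, xs)
    # approximate posterior ∝ q(x⁰ | θ) π(θ), explored by MCMC
    log_post = lambda t: log_q(x_obs, t) + (0.0 if -5 < t < 5 else -np.inf)
    draws = mcmc(log_post)
    # the second half of the chain serves as next-round proposal
    proposal = lambda n: rng.choice(draws[len(draws) // 2:], size=n)
```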

“…given enough simulations, a sufficiently flexible conditional neural density estimator will eventually approximate the likelihood in the support of the proposal, regardless of the shape of the proposal. In other words, as long as we do not exclude parts of the parameter space, the way we propose parameters does not bias learning the likelihood asymptotically. Unlike when learning the posterior, no adjustment is necessary to account for our proposing strategy.”

This is a rather vague statement, with the only support being that the Monte Carlo approximation to the Kullback-Leibler divergence does converge to its actual value, i.e. a direct application of the Law of Large Numbers! But an interesting point, one I informally made a (long) while ago, is that all that matters is the estimate of the density at x⁰. Or at the value of the statistic at x⁰. The masked auto-encoder density estimator is based on a sequence of bijections with a lower-triangular Jacobian matrix, meaning the conditional density estimate is available in closed form. Which makes it sound like a form of neurotic variational Bayes solution.
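To make the closed-form claim concrete, the change-of-variables identity behind the masked autoregressive flow (as I read the companion paper, with xᵢ = uᵢ e^{αᵢ} + μᵢ, where μᵢ and αᵢ are functions of x₁:ᵢ₋₁ and u is standard Normal) reads

```latex
% the Jacobian ∂u/∂x is lower triangular, so its log-determinant is just -Σ_i α_i
\[
  \log q(x)
  = \log \mathcal{N}\!\big(u;\,0,\,I\big) \;-\; \sum_{i} \alpha_i(x_{1:i-1}),
  \qquad
  u_i = \big(x_i - \mu_i(x_{1:i-1})\big)\, e^{-\alpha_i(x_{1:i-1})}.
\]
```

the lower-triangular Jacobian ∂u/∂x being what reduces the log-determinant to the plain sum of the −αᵢ’s.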

The paper also links with ABC (too costly?), other parametric approximations to the posterior (like Gaussian copulas and variational likelihood-free inference), synthetic likelihood, Gaussian processes, noise contrastive estimation… With experiments involving some of the above. But the experiments involve rather smooth models with relatively few parameters.

“A general question is whether it is preferable to learn the posterior or the likelihood (…) Learning the likelihood can often be easier than learning the posterior, and it does not depend on the choice of proposal, which makes learning easier and more robust (…) On the other hand, methods such as SNPE return a parametric model of the posterior directly, whereas a further inference step (e.g. variational inference or MCMC) is needed on top of SNL to obtain a posterior estimate”

A fair point in the conclusion. Which also mentions the curse of dimensionality (both for parameters and observations) and the possibility to work directly with summaries.

Getting back to the earlier and connected Masked autoregressive flow for density estimation paper, by Papamakarios, Pavlakou and Murray:

“Viewing an autoregressive model as a normalizing flow opens the possibility of increasing its flexibility by stacking multiple models of the same type, by having each model provide the source of randomness for the next model in the stack. The resulting stack of models is a normalizing flow that is more flexible than the original model, and that remains tractable.”

Which makes it sound like a sort of neural network in the density space. Optimised by Kullback-Leibler minimisation to get asymptotically close to the likelihood. But a form of Bayesian indirect inference in the end, namely an MLE on a pseudo-model, using the estimated model as a proxy in Bayesian inference…
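And to spell out why the stack “remains tractable”: composing bijections f₁,…,f_K only adds log-Jacobian terms under the same change-of-variables identity, so the density of the stacked flow stays exact,

```latex
% z_K is the data, z_0 the base noise with density π; each z_k feeds the next model
\[
  \log q(x) \;=\; \log \pi\big(z_0\big)
  \;+\; \sum_{k=1}^{K} \log\big|\det J_{f_k^{-1}}(z_k)\big|,
  \qquad z_K = x,\quad z_{k-1} = f_k^{-1}(z_k),
\]
```

each intermediate zₖ playing the rôle of the “source of randomness” for the next model in the stack.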