Archive for Edinburgh

asymptotically exact inference in likelihood-free models [a reply from the authors]

Posted in R, Statistics on December 1, 2016 by xi'an

[Following my post of last Tuesday, Matt Graham commented on the paper with a wealth of detail. Here are those comments. A nicer HTML version of the Markdown reply below is also available on Github.]

Thanks for the comments on the paper!

A few additional replies to augment what Amos wrote:

This however sounds somewhat intense in that it involves a quasi-Newton resolution at each step.

The method is definitely computationally expensive. If the constraint function maps an M-dimensional space to an N-dimensional space, with M ≥ N, for large N the dominant costs at each timestep are usually the evaluation of the constraint Jacobian ∂c/∂u (with reverse-mode automatic differentiation this can be evaluated at a cost of O(N) generator / constraint evaluations) and the Cholesky decomposition of the Jacobian product (∂c/∂u)(∂c/∂u)ᵀ, with O(N³) cost (though in many cases, e.g. i.i.d. or Markovian simulated data, structure in the generator Jacobian can be exploited to give a significantly reduced cost). Each inner quasi-Newton update involves a pair of triangular solve operations with O(N²) cost, two matrix-vector multiplications with O(MN) cost, and a single constraint / generator function evaluation; the number of quasi-Newton updates required for convergence in the numerical experiments tended to be much less than N, hence the quasi-Newton iteration tended not to be the main cost.
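For concreteness, here is a minimal numpy sketch (my own illustration, not the authors' code) of such a projection step, assuming hypothetical `constr` and `jacob_constr` functions for the constraint and its Jacobian; the Jacobian and its Cholesky factor are computed once at the previous point on the manifold and reused across inner updates, so each update reduces to a pair of triangular solves, a matrix-vector product, and one constraint evaluation.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def project_onto_manifold(u, u_prev, constr, jacob_constr, tol=1e-8, max_iters=50):
    """Quasi-Newton projection of u onto {u : constr(u) = 0}, reusing the
    Jacobian evaluated at the previous point u_prev on the manifold."""
    jac_prev = jacob_constr(u_prev)                 # (N, M) Jacobian: O(N) constraint evals with reverse-mode AD
    gram_chol = cho_factor(jac_prev @ jac_prev.T)   # O(N^3) Cholesky of the N x N Gram matrix
    for _ in range(max_iters):
        c = constr(u)                               # single constraint / generator evaluation
        if np.max(np.abs(c)) < tol:
            return u, True
        # quasi-Newton move: u <- u - J_prev^T (J_prev J_prev^T)^{-1} c(u)
        u = u - jac_prev.T @ cho_solve(gram_chol, c)  # two triangular solves + O(MN) mat-vec
    return u, False
```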

The high computational cost per update is traded off, however, against often being able to make much larger proposed moves in high-dimensional state spaces with a high chance of acceptance, compared to ABC MCMC approaches. Even in the relatively small Lotka-Volterra example we provide, which has an input dimension of 104 (four inputs which map to ‘parameters’, and 100 inputs which map to ‘noise’ variables), the ABC MCMC chains using the coarse ABC kernel radius ϵ=100, despite their comparatively very cheap updates, were significantly less efficient in terms of effective sample size / computation time than the proposed constrained HMC approach. This was in large part due to the elliptical slice sampling updates in the ABC MCMC chains generally collapsing down to very small moves, even for this relatively coarse ϵ. Performance was even worse using non-adaptive ABC MCMC methods and for smaller ϵ, and for higher input dimensions (e.g. using a longer sequence with correspondingly more random inputs) the comparison becomes even more favourable for the constrained HMC approach.

asymptotically exact inference in likelihood-free models

Posted in Books, pictures, Statistics on November 29, 2016 by xi'an

“We use the intuition that inference corresponds to integrating a density across the manifold corresponding to the set of inputs consistent with the observed outputs.”

Following my earlier post on that paper by Matt Graham and Amos Storkey (University of Edinburgh), I have now read through it. The beginning is somewhat unsettling, albeit mildly!, as it starts by mentioning notions like variational auto-encoders, generative adversarial nets, and simulator models, by which they mean generative models represented by a (differentiable) function g that essentially turns basic variates with density p into the variates of interest (with intractable density). A setting similar to Meeds’ and Welling’s optimisation Monte Carlo. Another proximity pointed out in the paper is Meeds et al.’s Hamiltonian ABC.

“…the probability of generating simulated data exactly matching the observed data is zero.”

The section on the standard ABC algorithms mentions the fact that ABC MCMC can be (re-)interpreted as a pseudo-marginal MCMC, albeit one targeting the ABC posterior instead of the original posterior. The starting point of the paper is the above quote, which echoes a conversation I had with Gabriel Stoltz a few weeks ago, when he presented his free energy method to me and I could not see how to connect it with ABC, because having an exact match seemed to cancel the appeal of ABC, all parameter simulations then producing an exact match under the right constraint. However, the paper maintains this can be done, by looking at the joint distribution of the parameters, latent variables, and observables. Under the implicit restriction imposed by keeping the observables constant. Which defines a manifold. The mathematical validation is achieved by designing the density over this manifold, which looks like

p(u)\left|\frac{\partial g^0}{\partial u}\frac{\partial g^0}{\partial u}^\text{T}\right|^{-1/2}

if the constraint can be rewritten as g⁰(u)=0. (This actually follows from a 2013 paper by Diaconis, Holmes, and Shahshahani.) In the paper, the simulation is conducted by Hamiltonian Monte Carlo (HMC), the leapfrog steps consisting of an unconstrained move followed by a projection onto the manifold. This however sounds somewhat intense in that it involves a quasi-Newton resolution at each step. I also find it surprising that this projection step does not jeopardise the stationary distribution of the process, as the argument found therein about the approximation of the approximation is not particularly deep. But the main thing that remains unclear to me after reading the paper is how the constraint that the pseudo-data be equal to the observed data can be turned into a closed-form condition like g⁰(u)=0. As mentioned above, the authors assume a generative model based on uniform (or other simple) random inputs but this representation seems impossible to achieve in reasonably complex settings.
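To make the leapfrog-with-projection idea more concrete, here is a very rough sketch, in the spirit of RATTLE-type constrained integrators, of the kind of update involved; it is my own schematic illustration rather than the authors' implementation, with placeholder arguments `grad_log_density`, `jacob_g0` and a position-projection routine such as the quasi-Newton one sketched in the reply above.

```python
import numpy as np

def project_momentum(p, jac):
    # remove the component of p normal to the manifold: p <- (I - J^T (J J^T)^{-1} J) p
    return p - jac.T @ np.linalg.solve(jac @ jac.T, jac @ p)

def constrained_leapfrog_step(u, p, eps, grad_log_density, jacob_g0, project_position):
    # half momentum step, then restrict the momentum to the tangent space of {u : g0(u) = 0}
    p = project_momentum(p + 0.5 * eps * grad_log_density(u), jacob_g0(u))
    # unconstrained position move followed by a projection back onto the manifold
    u_new = project_position(u + eps * p)
    # implied momentum consistent with the projected move, then second half momentum step
    p = (u_new - u) / eps
    p = project_momentum(p + 0.5 * eps * grad_log_density(u_new), jacob_g0(u_new))
    return u_new, p
```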

MCqMC 2016 [#4]

Posted in Mountains, pictures, Running, Statistics, Travel, University life on August 21, 2016 by xi'an

In his plenary talk this morning, Arnaud Doucet discussed the application of pseudo-marginal techniques to the latent variable models he has been investigating for many years. And their limiting behaviour in terms of efficiency, with the idea of introducing correlation in the estimation of the likelihood ratio. Reducing complexity from O(T²) to O(T√T). With the very surprising conclusion that the correlation must go to 1 at a precise rate to get this reduction, since perfect correlation would induce a bias. A massive piece of work, indeed!
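As a toy illustration of the correlation idea (my own sketch, not the algorithm as presented in the talk), a correlated pseudo-marginal move keeps the auxiliary variates behind the likelihood estimator strongly correlated between current and proposed states via a Crank-Nicolson refresh; `log_lik_estimate` and `propose_theta` are hypothetical placeholders, and the acceptance ratio below assumes a symmetric proposal on θ.

```python
import numpy as np

def correlated_pm_step(theta, u, log_lik_hat, log_lik_estimate, log_prior,
                       propose_theta, rng, rho=0.999):
    """One correlated pseudo-marginal Metropolis-Hastings update, with the
    auxiliary standard-normal variates u refreshed by a Crank-Nicolson move."""
    theta_prop = propose_theta(theta, rng)                        # assumed symmetric proposal
    u_prop = rho * u + np.sqrt(1.0 - rho**2) * rng.standard_normal(u.shape)
    log_lik_hat_prop = log_lik_estimate(theta_prop, u_prop)       # noisy log-likelihood estimate
    log_alpha = (log_lik_hat_prop + log_prior(theta_prop)) - (log_lik_hat + log_prior(theta))
    if np.log(rng.uniform()) < log_alpha:
        return theta_prop, u_prop, log_lik_hat_prop
    return theta, u, log_lik_hat
```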

The next session of the morning was another instance of conflicting talks and I hopped from one room to the next to listen to Hani Doss’s empirical Bayes estimation with intractable constants (where maybe SAME could be of interest), Youssef Marzouk’s transport maps for MCMC, which sounds like an attractive idea provided the construction of the map remains manageable, and Paul Russel’s adaptive importance sampling, which somehow sounded connected with our population Monte Carlo approach. (With the additional step of considering transport maps.)

An interesting item of information I got from the final announcements at MCqMC 2016, just before heading to Monash, Melbourne, is that MCqMC 2018 will take place in the city of Rennes, Brittany, on July 2-6. Not only is it a nice location in its own right, but it is most conveniently located in space and time to attend ISBA 2018 in Edinburgh the week after! Just moving from one Celtic city to another Celtic city. Along with other planned satellite workshops, this should make ISBA 2018 more attractive [if need be!] for participants from overseas.

even dogs in the wild

Posted in Books, Mountains, Travel on August 10, 2016 by xi'an

A new Rankin, a new Rebus! (New as in 2015, since I waited to buy the paperback version.) It sounds like Ian Rankin cannot let his favourite character rest in retirement and hence sets him back into action, along with the new Malcolm Fox [working in the Complaints] and most major characters of the Rebus series. Including the unbreakable villain, Big Ger Cafferty. This is as classical as you get, borrowing from half a dozen former Rebus novels, not to mention the neo-Holmes novel I reviewed a while ago. But it is gritty, deadly efficient, and captivating. I read the book within a few days of returning from Warwick.

About the title: it is a song by The Associates that plays a role in the book. I did not know this band, but looking it up got me to a clip that used an excerpt from The Night of the Hunter. Fantastic movie, one of my favourites.

ISBA 2016 [#7]

Posted in Mountains, pictures, Running, Statistics, Travel, Wines on June 20, 2016 by xi'an

This series of posts is most probably getting by now an imposition on the ‘Og readership, which either attended ISBA 2016 and does (do?) not need my impressions or did not attend and hence does (do?) not need vague impressions about talks they (it?) did not see, but indulge me in reminiscing about this last ISBA meeting (or more reasonably ignore this post altogether). Now that I am back home (with most of my Sard wine bottles intact!, and a good array of Sard cheeses).

This meeting seems to be the largest ISBA meeting ever, with hundreds of young statisticians taking part in it (despite my early misgivings about the deterrent represented by the overall cost of attending the meeting). I presume holding the meeting in Europe made it easier and cheaper for most Europeans to attend (and hopefully the same will happen in Edinburgh in 2018!), as did the (somewhat unsuspected) wide availability of rental alternatives in the close vicinity of the conference resort. I also presume the same travel opportunities would not have existed in Banff, although local costs would have been lower. It was fantastic to see so many new researchers interested in Bayesian statistics and to meet some of them. And to have more sessions run by the j-Bayes section of ISBA (although I found it counterproductive that such sessions do not focus on a coherent theme). As a result, the meeting was more intense than ever and I found it truly exhausting, despite skipping most poster sessions. Maybe also because I did not skip a single session, thanks to the availability of an interesting theme for each block in the schedule. (And because I attended more [great] Sard dinners than I originally intended.) Having five sessions in parallel indeed means there is a fabulous offer of themes for every taste. It also means there are inevitably conflicts when picking one’s session.

Back to poster sessions: I feel I missed an essential part of the meeting, the part which made ISBA meetings so unique, but it also seems to me the organisation of those sessions should be reconsidered against the rise in attendance. (And my growing inability to stay up late!) One solution, suggested by my recent AISTATS experience, is to select posters so as to lower the number of posters in the four poster sessions. (The success rate for the Cadiz meeting was 35%.) The obvious downsides are the selection process (but this was done quite efficiently for AISTATS) and the potential reduction in the number of participants. A middle ground could see a smaller fraction of posters selected by this process (and published one way or another, as in machine-learning conferences) and presented during the evening poster sessions, with other posters being given during the coffee breaks [which certainly does not help in reducing the intensity of the schedule]. Another, altogether different, solution is to extend the parallelism of oral sessions to poster sessions, by regrouping them into five or six themes or keywords chosen by the presenters and having those presented in different rooms, to split the attendance down to a human level and tolerable decibels. Nothing prevents participants from visiting several rooms in a given evening. Or from keeping posters up for several nights in a row, if the number of rooms allows.

It may also be that this edition of ISBA 2016 sees the end of the resort-style meeting in the spirit of the early Valencia meetings. Edinburgh 2018 will certainly be an open-space conference, in that meals and lodgings will be “on” the participants, who may choose where and how much. I have heard many times the argument that conferences held in single hotels or resorts facilitate contacts between young and senior researchers, but I fear this is not sustainable against the growth of the audience. Holding the meeting in a reasonably close and compact location, such as a University building, should allow for a sufficient degree of interaction, as was the case at ISBA 2016. (Kerrie Mengersen also suggested that a few restaurants nearby could be designated as “favourites” for participants to interact at dinner time.) Another suggestion to reinforce networking and interaction would be to hold more satellite workshops before the main conference. It seems there could be a young Bayesian workshop in England the week before, as well as a summer short course on simulation methods.

Organising meetings is getting increasingly complex and provides few rewards at the academic level, so I am grateful to the organisers of ISBA 2016 for having agreed to carry the burden this year. And to the scientific committee for setting the quality bar that high. (A special thought too for my friend Walter Racugno, who had the ultimate bad luck of having an accident the very week of the meeting he had helped organise!)

[Even though I predict this is my last post on ISBA 2016 I would be delighted to have guest posts on others’ impressions on the meeting. Feel free to send me entries!]

the comforts of a muddy Saturday [book review]

Posted in Books, Travel, University life on March 12, 2016 by xi'an

Besides the fantastic No. 1 Ladies’ Detective Agency series, which takes place in Botswana, Alexander McCall Smith has also written another series, located in Edinburgh and featuring Isabel Dalhousie, a philosopher and occasional detective. While the detective story is light to the point of being evanescent (and I lost interest by the middle of the book), The Comforts of a Muddy Saturday was still pleasant to re-read, as Isabel is the editor of an academic philosophy journal, the Review of Applied Ethics, and reflects on her duties as editor as well as bringing philosophical musings into the novel.

“In fact, sometimes we publish papers that I suspect next to nobody reads.”

There is also a somewhat melancholic tone to the book, in that it takes place at a time when submissions and replies were sent by regular mail, and faxes were for administrative matters and only those. The description of Isabel’s duties is such that I am not convinced she needs 37 hours per week (!) to handle the submissions and editorial duties connected with the journal, although she ponders and hesitates so much before sending back a particularly poor piece on the trolley dilemma that this may indeed end up being a full-time job! Light reading for a rainy Saturday afternoon, then…

Bruce Lindsay (March 7, 1947 — May 5, 2015)

Posted in Books, Running, Statistics, Travel, University life on May 22, 2015 by xi'an

When early registering for Seattle (JSM 2015) today, I discovered on the ASA webpage the very sad news that Bruce Lindsay had passed away on May 5. While Bruce was not a very close friend, we had met and interacted enough times for me to feel quite strongly about his most untimely death. Bruce was indeed “Mister mixtures” in many ways and I have always admired the unusual and innovative ways he had found for analysing mixtures. Including algebraic ones through the rank of associated matrices. Which is why I first met him—besides a few words at the 1989 Gertrude Cox (first) scholarship race in Washington DC—at the workshop I organised with Gilles Celeux and Mike West in Aussois, French Alps, in 1995. After this meeting, we met twice in Edinburgh at ICMS workshops on mixtures, organised with Mike Titterington. I remember sitting next to Bruce at one workshop dinner (at Blonde) and him talking about his childhood in Oregon, his father being a journalist, and how this induced him to become an academic. He also contributed a chapter on estimating the number of components [of a mixture] to the Wiley book we edited out of this workshop. Obviously, his work extended beyond mixtures to a general neo-Fisherian theory of likelihood inference. (Bruce was certainly not a Bayesian!) The last time I met him was in Italia, at a likelihood workshop in Venezia, in October 2012, mixing Bayesian nonparametrics, intractable likelihoods, and pseudo-likelihoods. He gave a survey talk about composite likelihood, telling me about his extended stay in Italy (Padua?) around that time… So, Bruce, I hope you are now running great marathons in a place so full of mixtures that you can always keep ahead of the pack! Fare well!