Archive for Edinburgh

G²S³18, Breckenridge, CO, June 17-30, 2018

Posted in Statistics on October 3, 2017 by xi'an

wet summer reads [book reviews]

Posted in Books, Kids, Mountains, pictures, Travel on September 24, 2017 by xi'an

“‘Oh ye of little faith.’ Rebus picked up his lamb chop and bit into it.” Ian Rankin, Rather be the Devil

Rebus’ latest case, a stray cat, a tree that should not be there, psychological worries in Uppsala, maths formulas, these are the themes of some of my vacation books. I read more than usual because of the heavy rains we faced in Northern Italy (rather than Scotland!). Ian Rankin’s latest novel Rather be the Devil reunites most of the characters of past novels, from John Rebus to Siobhan Clarke, Malcolm Fox, Big Ger’ Cafferty, and others. The book is just as fun to read as the previous ones (but only if one has read those I presume!), not particularly innovative in its plot, which recalls some earlier ones, and a wee bit disappointing in the way Big Ger’ seems to get the upper hand against Rebus and the (actual) police. Nonetheless pleasant for the characters themselves, including the City of Edinburgh itself!, and the dialogues. Rebus is not dead yet (spoiler?!) so there should be more volumes to come as Rankin does not seem to manage without his trademark detective. (And the above quote comes in connection with the muttonesque puzzle I mention in my post about Skye.)

The second book is a short story by Takashi Hiraide called The Guest Cat (in French, The cat who came from Heaven, both differing from the Japanese Neko no kyaku), which reads more like a prose poem than like a novel. It is about a (Japanese) middle-aged childless couple living in a small rented house next to a beautiful and decaying Japanese garden. And starting a relationship with the neighbours’ beautiful and mysterious cat. Until the cat dies, somewhat inexplicably, and the couple has to get over its sorrow, compounded by the need to leave the special place where they live. This does not sound like much of a story but I appreciated the beautiful way it is written (and translated), and I also related to it because of the stray cat that visits us on a regular basis! (I do not know how well the book has been translated from Japanese into English.)

The third book is called Debout les Morts (translated as The Three Evangelists) and is one of the first detective stories of Fred Vargas, written in 1995. It is funny, with well-conceived characters (although they sometimes verge so much on caricature as to make the novel neo-picaresque) and a fairly original scenario that has a Russian doll or onion structure, involving many (many) layers. I was definitely expecting anything but the shocking ending! The three main characters (hence the English translation title) in the novel are 35-ish jobless historians whose interests range from hunter-gatherers [shouldn’t he then be a pre-historian?!] to the Great [WWI] War, with a medieval expert in the middle. (The author herself is a medieval historian.) As written above, it is excessive in everything, from the characters to the plot, to the number of murders, but, or maybe hence, it is quite fun to read.

The fourth book is Kjell Eriksson‘s Jorden må rämna, which I would translate from the French version as The earth may well split (as it has not been translated into English at this stage), the second volume of the Ann Lindell series, which takes place in Uppsala and in the nearby Swedish countryside. I quite enjoyed this book as the detective part was almost irrelevant. To the point of having the killer known from the start. As in many Scandinavian noir novels, especially Swedish ones, the social and psychological aspects are predominant, from the multiple events leading a drug addict to commit a series of crimes, to the endless introspection of both the main character and her solitude-seeking boyfriend, from the failures of the social services to deal with the addict, to a global yearning for the old and vanished countryside community spirit, to the replacement of genuine workers’ Unions by bureaucratic structures. Not the most comforting read for a dark and stormy night, but definitely a good and well-written book.

And the last book is yet again a Japanese novel, by Yôko Ogawa, The Housekeeper and The Professor, whose French title is closer to the Japanese one, The professor’s favourite equation (博士の愛した数式). It is about an invalid maths professor who has an 80-minute memory span, following a car accident. His PhD thesis was about the Artin conjecture. And about his carer (rather than housekeeper) who looks after him and manages to communicate despite the 80-minute barrier. And about the carer’s son, who is nicknamed Root for having a head like a square root symbol (!). The book is enjoyable enough to read, with a few basic explanations of number theory, but the whole construct is very contrived, as one wonders why the professor would manage to solve mathematical puzzles and keep some memory of older baseball games despite the 80-minute window. (I also found the naivety of the carer as represented throughout the book a wee bit on the heavy side.)

Not a bad summer for books, in the end!!!

fast ε-free ABC

Posted in Books, Mountains, pictures, Running, Statistics, Travel, University life on June 8, 2017 by xi'an

Last Fall, George Papamakarios and Iain Murray from Edinburgh arXived an ABC paper on fast ε-free inference on simulation models with Bayesian conditional density estimation, a paper that I had missed. The idea there is to approximate the posterior density by maximising the likelihood associated with a parameterised family of distributions on θ, conditional on the associated x. The data then being the ABC reference table. The family chosen there is a mixture of K Gaussian components, whose parameters are estimated by a (Bayesian) neural network using x as input and θ as output. The parameter values are simulated from an adaptive proposal that aims at approximating the posterior better and better. As in population Monte Carlo, actually. Except for the neural network part, as I fail to understand why it makes a significant improvement when compared with EM solutions. The overall difficulty with this approach is that I do not see a way out of the curse of dimensionality: when the dimension of θ increases, the approximation to the posterior distribution of θ does deteriorate, even in the best of cases, as does any other non-parametric resolution. It would have been of (further) interest to see a comparison with a most rudimentary approach, namely the one we proposed based on empirical likelihoods.
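As a rough illustration of this conditional density estimation step, here is a minimal mixture density network sketch, trained by straight maximum likelihood on the (θ, x) pairs of the reference table. It is not the authors' code (which relies on stochastic variational inference for a Bayesian network and an adaptively refined proposal); the class and function names and all settings below are my own assumptions.

```python
# Minimal mixture-density-network sketch (illustrative, not the paper's code):
# a network maps simulated data x to the weights, means, and scales of a
# K-component Gaussian mixture over theta, fitted by maximum likelihood.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MDN(nn.Module):
    def __init__(self, x_dim, theta_dim, K=5, hidden=64):
        super().__init__()
        self.K, self.theta_dim = K, theta_dim
        self.body = nn.Sequential(nn.Linear(x_dim, hidden), nn.Tanh())
        self.logits = nn.Linear(hidden, K)                  # mixture weights
        self.means = nn.Linear(hidden, K * theta_dim)       # component means
        self.log_scales = nn.Linear(hidden, K * theta_dim)  # diagonal scales

    def log_prob(self, theta, x):
        h = self.body(x)
        log_w = F.log_softmax(self.logits(h), dim=-1)               # (n, K)
        mu = self.means(h).view(-1, self.K, self.theta_dim)         # (n, K, d)
        sigma = self.log_scales(h).view(-1, self.K, self.theta_dim).exp()
        comp = torch.distributions.Normal(mu, sigma)
        log_comp = comp.log_prob(theta.unsqueeze(1)).sum(-1)        # (n, K)
        return torch.logsumexp(log_w + log_comp, dim=-1)            # (n,)

def fit_mdn(theta, x, epochs=500, lr=1e-3):
    """theta, x: 2-d tensors holding the simulated (parameter, data) reference table."""
    model = MDN(x.shape[1], theta.shape[1])
    optim = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        optim.zero_grad()
        loss = -model.log_prob(theta, x).mean()   # maximum likelihood fit
        loss.backward()
        optim.step()
    return model  # model.log_prob(theta_grid, x_obs) then approximates the posterior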

ISBA 2018, Edinburgh, 24-28 June

Posted in Statistics on March 1, 2017 by xi'an

[Edinburgh, Sept. 03, 2011]

The ISBA 2018 World Meeting will take place in Edinburgh, Scotland, on 24-29 June 2018. (Since there was some confusion about the date, it is worth stressing that these new dates are definitive!) Note also that there are other relevant conferences and workshops in the surrounding weeks:

  • a possible ABC in Edinburgh the previous weekend, 23-24 June [to be confirmed!]
  • the Young Bayesian Meeting (BaYSM) in Warwick, 2-3 July 2018
  • a week-long school on fundamentals of simulation in Warwick, 9-13 July 2018 with courses given by Nicolas Chopin, Art Owen, Jeff Rosenthal and others
  • MCqMC 2018 in Rennes, 1-6 July 2018
  • ICML 2018 in Stockholm, 10-15 July 2018
  • the 2018 International Biometrics Conference in Barcelona, 8-13 July 2018

asymptotically exact inference in likelihood-free models [a reply from the authors]

Posted in R, Statistics on December 1, 2016 by xi'an

[Following my post of last Tuesday, Matt Graham commented on the paper in great detail. Here are those comments. A nicer HTML version of the Markdown reply below is also available on Github.]

Thanks for the comments on the paper!

A few additional replies to augment what Amos wrote:

“This however sounds somewhat intense in that it involves a quasi-Newton resolution at each step.”

The method is definitely computationally expensive. If the constraint function maps an M-dimensional space to an N-dimensional space, with M ≥ N, for large N the dominant costs at each timestep are usually the constraint Jacobian (∂c/∂u) evaluation (with reverse-mode automatic differentiation this can be evaluated at a cost of O(N) generator / constraint evaluations) and the Cholesky decomposition of the Jacobian product (∂c/∂u)(∂c/∂u)ᵀ with O(N³) cost (though in many cases, e.g. i.i.d. or Markovian simulated data, structure in the generator Jacobian can be exploited to give a significantly reduced cost). Each inner Quasi-Newton update involves a pair of triangular solve operations which have an O(N²) cost, two matrix-vector multiplications with O(MN) cost, and a single constraint / generator function evaluation; the number of Quasi-Newton updates required for convergence in the numerical experiments tended to be much less than N, hence the Quasi-Newton iteration tended not to be the main cost.
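For concreteness, here is a rough numpy/scipy sketch of the kind of projection iteration whose cost is being discussed; it is my own reconstruction under simplifying assumptions (dense Jacobian, fixed at the starting point of the projection), not the code accompanying the paper.

```python
# Sketch of a quasi-Newton projection onto the constraint manifold {u : c(u) = 0}
# (a reconstruction, not the authors' implementation). The Gram matrix of the
# constraint Jacobian is Cholesky-factored once; each subsequent update costs a
# pair of triangular solves, a matrix-vector product, and one constraint evaluation.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def project_onto_manifold(u, constr, jac, tol=1e-8, max_iter=50):
    """Move u onto the zero set of constr, along the range of jac(u)^T.

    constr : u -> c(u), an N-vector of constraints
    jac    : u -> (N, M) Jacobian dc/du, with M >= N
    """
    J = jac(u)                    # O(N) generator evaluations via reverse-mode AD
    gram = cho_factor(J @ J.T)    # O(N^3) Cholesky, done once per projection
    for _ in range(max_iter):
        c = constr(u)             # one constraint / generator evaluation
        if np.max(np.abs(c)) < tol:
            return u
        lam = cho_solve(gram, c)  # two O(N^2) triangular solves
        u = u - J.T @ lam         # O(MN) matrix-vector product
    raise RuntimeError("quasi-Newton projection did not converge")
```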

The high computation cost per update is however traded off against often being able to make much larger proposed moves in high-dimensional state spaces with a high chance of acceptance compared to ABC MCMC approaches. Even in the relatively small Lotka-Volterra example we provide, which has an input dimension of 104 (four inputs which map to ‘parameters’, and 100 inputs which map to ‘noise’ variables), the ABC MCMC chains using the coarse ABC kernel radius ϵ=100, with comparably very cheap updates, were significantly less efficient in terms of effective sample size / computation time than the proposed constrained HMC approach. This was in large part due to the elliptical slice sampling updates in the ABC MCMC chains generally collapsing down to very small moves even for this relatively coarse ϵ. Performance was even worse using non-adaptive ABC MCMC methods and for smaller ϵ, and for higher input dimensions (e.g. using a longer sequence with correspondingly more random inputs) the comparison becomes even more favourable for the constrained HMC approach.

asymptotically exact inference in likelihood-free models

Posted in Books, pictures, Statistics on November 29, 2016 by xi'an

“We use the intuition that inference corresponds to integrating a density across the manifold corresponding to the set of inputs consistent with the observed outputs.”

Following my earlier post on that paper by Matt Graham and Amos Storkey (University of Edinburgh), I have now read through it. The beginning is somewhat unsettling, albeit mildly!, as it starts by mentioning notions like variational auto-encoders, generative adversarial nets, and simulator models, by which they mean generative models represented by a (differentiable) function g that essentially turns basic variates with density p into the variates of interest (with intractable density). A setting similar to Meeds’ and Welling’s optimisation Monte Carlo. Another proximity pointed out in the paper is Meeds et al.’s Hamiltonian ABC.
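As a toy illustration of this generator notion (my own example, not one from the paper): a differentiable map g sends simple random inputs u, bundling parameters and noise together, to the observables, so that matching the observed data exactly becomes the closed-form condition g⁰(u)=0 discussed further below.

```python
# Toy differentiable simulator (illustrative only): the generator g turns basic
# variates u into the observables, and exact matching of the observed data x_obs
# is recast as the closed-form constraint g0(u) = 0.
import numpy as np

def g(u):
    """u[0] plays the role of the parameter, u[1:] of standard normal noise."""
    theta, noise = u[0], u[1:]
    return theta + 0.1 * noise     # observables as a smooth function of the inputs

def g0(u, x_obs):
    """Constraint whose zero set is the manifold of inputs consistent with x_obs."""
    return g(u) - x_obs
```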

“…the probability of generating simulated data exactly matching the observed data is zero.”

The section on the standard ABC algorithms mentions the fact that ABC MCMC can be (re-)interpreted as a pseudo-marginal MCMC, albeit one targeting the ABC posterior instead of the original posterior. The starting point of the paper is the above quote, which echoes a conversation I had with Gabriel Stoltz a few weeks ago, when he presented his free energy method to me and I could not see how to connect it with ABC, because having an exact match seemed to cancel the appeal of ABC, all parameter simulations then producing an exact match under the right constraint. However, the paper maintains this can be done, by looking at the joint distribution of the parameters, latent variables, and observables. Under the implicit restriction imposed by keeping the observables constant. Which defines a manifold. The mathematical validation is achieved by designing the density over this manifold, which looks like

p(u)\left|\frac{\partial g^0}{\partial u}\,\frac{\partial g^0}{\partial u}^{\text{T}}\right|^{-1/2}

if the constraint can be rewritten as g⁰(u)=0. (This actually follows from a 2013 paper by Diaconis, Holmes, and Shahshahani.) In the paper, the simulation is conducted by Hamiltonian Monte Carlo (HMC), the leapfrog steps consisting of an unconstrained move followed by a projection onto the manifold. This however sounds somewhat intense in that it involves a quasi-Newton resolution at each step. I also find it surprising that this projection step does not jeopardise the stationary distribution of the process, as the argument found therein about the approximation of the approximation is not particularly deep. But the main thing that remains unclear to me after reading the paper is how the constraint that the pseudo-data be equal to the observable data can be turned into a closed form condition like g⁰(u)=0. As mentioned above, the authors assume a generative model based on uniform (or other simple) random inputs but this representation seems impossible to achieve in reasonably complex settings.
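To make the displayed density concrete, here is a minimal sketch (my own notation and function names) of how the log-density of u on the manifold g⁰(u)=0 would be evaluated, the Gram determinant being computed through a log-determinant for numerical stability.

```python
# Log-density on the manifold {u : g0(u) = 0}: log p(u) minus half the
# log-determinant of the Gram matrix of the constraint Jacobian (a sketch
# following the formula displayed above, with assumed function names).
import numpy as np

def log_manifold_density(u, log_p, jac_g0):
    """log p(u) - (1/2) log |(dg0/du)(dg0/du)^T| evaluated at u."""
    J = jac_g0(u)                              # (N, M) Jacobian of the constraint
    sign, logdet = np.linalg.slogdet(J @ J.T)  # positive whenever J has full row rank
    return log_p(u) - 0.5 * logdet
```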

MCqMC 2016 [#4]

Posted in Mountains, pictures, Running, Statistics, Travel, University life on August 21, 2016 by xi'an

In his plenary talk this morning, Arnaud Doucet discussed the application of pseudo-marginal techniques to the latent variable models he has been investigating for many years. And its limiting behaviour towards efficiency, with the idea of introducing correlation in the estimation of the likelihood ratio. Reducing complexity from O(T²) to O(T√T). With the very surprising conclusion that the correlation must go to 1 at a precise rate to get this reduction, since perfect correlation would induce a bias. A massive piece of work, indeed!
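As a schematic illustration of the correlation idea (my own sketch, not Arnaud Doucet's algorithm or code): the auxiliary variables u behind the unbiased likelihood estimator are refreshed through an autoregressive move with correlation ρ close to one, so that the two estimates entering the acceptance ratio are strongly positively correlated.

```python
# One step of a correlated pseudo-marginal sampler (schematic sketch, assuming a
# symmetric proposal on theta and standard normal auxiliaries u; the AR(1) move
# on u leaves N(0, I) invariant and is reversible, so it cancels in the ratio).
import numpy as np

def correlated_pm_step(theta, u, log_lik_hat, propose_theta, log_prior,
                       rho=0.99, rng=None):
    """log_lik_hat(theta, u) is the log of an unbiased likelihood estimator."""
    rng = np.random.default_rng() if rng is None else rng
    theta_prop = propose_theta(theta, rng)
    u_prop = rho * u + np.sqrt(1.0 - rho**2) * rng.standard_normal(u.shape)
    log_alpha = (log_prior(theta_prop) + log_lik_hat(theta_prop, u_prop)
                 - log_prior(theta) - log_lik_hat(theta, u))  # current value cached in practice
    if np.log(rng.uniform()) < log_alpha:
        return theta_prop, u_prop   # accept: move both theta and the auxiliaries
    return theta, u                 # reject: keep the current pair
```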

The next session of the morning was another instance of conflicting talks and I hopped from one room to the next to listen to Hani Doss’s empirical Bayes estimation with intractable constants (where maybe SAME could be of interest), Youssef Marzouk’s transport maps for MCMC, which sounds like an attractive idea provided the construction of the map remains manageable, and Paul Russel’s adaptive importance sampling that somehow sounded connected with our population Monte Carlo approach. (With the additional step of considering transform maps.)

An interesting item of information I got from the final announcements at MCqMC 2016, just before heading to Monash, Melbourne, is that MCqMC 2018 will take place in the city of Rennes, Brittany, on July 2-6. Not only is it a nice location on its own, but it is most conveniently located in space and time to attend ISBA 2018 in Edinburgh the week after! Just moving from one Celtic city to another Celtic city. Along with other planned satellite workshops, this occurrence should make ISBA 2018 more attractive [if need be!] for participants from overseas.