Archive for ABC in Helsinki

ABC gas

Posted in pictures, Running, Travel on August 9, 2017 by xi'an

art brut

Posted in pictures, Travel on June 4, 2016 by xi'an

window on the Silja Symphony

ABC random forests for Bayesian parameter inference

Posted in Books, Kids, R, Statistics, Travel, University life, Wines on May 20, 2016 by xi'an

Before leaving Helsinki, we arXived [from the Air France lounge!] the paper Jean-Michel presented on Monday at ABCruise in Helsinki. This paper summarises the experiments Louis conducted over the past months to assess the strong performance of a random forest regression approach to ABC parameter inference, thus validating, in this experimental sense, the use of random forests for conducting ABC Bayesian inference. (And not for ABC model choice, as in the Bioinformatics paper with Pierre Pudlo and others.)

I think the major incentives for exploiting the (still mysterious) tool of random forests [against more traditional ABC approaches like Fearnhead and Prangle (2012) on summary selection] are that (i) forests do not require a preliminary selection of the summary statistics, since an arbitrary number of summaries can be used as input for the random forest, even when including a large number of useless white-noise variables; (ii) there is no longer a tolerance level involved in the process, since the many trees in the random forest define a natural, if rudimentary, distance that corresponds to being or not being in the same leaf as the observed vector of summary statistics η(y); and (iii) the size of the reference table simulated from the prior (predictive) distribution does not need to be as large as in usual ABC settings, hence this approach leads to significant gains in computing time, since producing the reference table usually is the costly part! To the point that deriving a different forest for each univariate transform of interest is truly a minor drag in the overall computing cost of the approach.
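To give a rough idea of point (i), here is a minimal sketch of ABC parameter inference by random forest regression, using a generic regressor rather than our abcrf code; the toy model (a Normal mean with a Normal prior) and all names are purely illustrative:

```python
# Sketch of ABC-RF parameter inference: simulate a reference table from the
# prior predictive, regress the parameter on the summaries with a random
# forest, and read the prediction at the observed summaries as a posterior
# point estimate. Toy model and settings are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
N = 10_000                          # size of the reference table

# prior: theta ~ N(0, 10); model: y_1..y_20 ~ N(theta, 1)
theta = rng.normal(0, np.sqrt(10), N)
data = rng.normal(theta[:, None], 1, (N, 20))

# summaries: mean, median, variance, plus useless white-noise columns
# the forest is free to ignore (no preliminary summary selection needed)
summaries = np.column_stack([data.mean(1), np.median(data, 1), data.var(1),
                             rng.normal(size=(N, 5))])

forest = RandomForestRegressor(n_estimators=200, min_samples_leaf=5,
                               random_state=0)
forest.fit(summaries, theta)

# pseudo-observed data generated with theta = 2: the forest prediction
# approximates the posterior expectation E[theta | eta(y)]
y_obs = rng.normal(2.0, 1, 20)
eta_obs = np.concatenate([[y_obs.mean(), np.median(y_obs), y_obs.var()],
                          rng.normal(size=5)])
post_mean = forest.predict(eta_obs.reshape(1, -1))[0]
```

Note that no tolerance level appears anywhere: the trees alone decide which reference-table points are "close" to the observation.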

An intriguing point we uncovered through Louis’ experiments is that an unusual version of the variance estimator is preferable to the standard estimator: we indeed observed better estimation performance when using a weighted version of the out-of-bag residuals (which are computed as the differences between the simulated values of the parameter transforms and their expectations obtained by removing the random trees involving this simulated value). Another intriguing feature [to me] is that the regression weights as proposed by Meinshausen (2006) are obtained as an average of the inverse of the number of terms in the leaf of interest. When estimating the posterior expectation of a transform h(θ) given the observed η(y), this summary statistic η(y) ends up in a given leaf for each tree in the forest, and all that matters for computing the weight is the number of points from the reference table ending up in this very leaf. I do find this difficult to explain when confronting the case when many simulated points are in the leaf against the case when a single simulated point makes the leaf. This single point ends up being much more influential than all the points in the other situation… while being an outlier of sorts against the prior simulation. But now that I think more about it (after an expensive Lapin Kulta beer in the Helsinki airport while waiting for a change of tire on our airplane!), it somewhat makes sense that rare simulations that agree with the data should be weighted much more than values that stem from the prior simulations and hence do not carry much of the information brought by the observation. (If this sounds murky, blame the beer.) What I found great about this new approach is that it produces a non-parametric evaluation of the cdf of the quantity of interest h(θ) at hardly any calibration cost. (An R package is in the making, to be added to the existing R functions of abcrf we developed for the ABC model choice paper.)
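For concreteness, here is how I would sketch those Meinshausen (2006) weights: for each tree, a reference-table point falling in the same leaf as η(y) receives weight one over the leaf occupancy, and the weights are averaged over trees; the weighted empirical cdf then estimates the posterior cdf of θ. The toy model and all names below are mine, not code from the paper:

```python
# Illustrative computation of Meinshausen-style regression weights and of the
# resulting weighted posterior summaries; the weights sum to one by design,
# since each tree contributes exactly 1 to the total.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
N = 5000
theta = rng.normal(0, 2, N)
summaries = rng.normal(theta[:, None], 1, (N, 10))   # 10 noisy replicates

forest = RandomForestRegressor(n_estimators=200, min_samples_leaf=5,
                               random_state=1)
forest.fit(summaries, theta)

eta_obs = rng.normal(1.5, 1, 10).reshape(1, -1)      # pseudo-observed summaries

leaves_ref = forest.apply(summaries)                 # (N, n_trees) leaf indices
leaves_obs = forest.apply(eta_obs)                   # (1, n_trees)

weights = np.zeros(N)
for t in range(forest.n_estimators):
    in_leaf = leaves_ref[:, t] == leaves_obs[0, t]
    weights[in_leaf] += 1.0 / in_leaf.sum()          # inverse leaf occupancy
weights /= forest.n_estimators                       # average over the trees

post_mean = np.sum(weights * theta)                  # weighted posterior mean
cdf_at_0 = np.sum(weights[theta <= 0.0])             # weighted ecdf at 0
```

The single-point-leaf puzzle discussed above is visible here: a point alone in the observed leaf of a tree collects that tree's full unit of weight.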

snapshot from Stockholm

Posted in pictures, Running, Travel on May 18, 2016 by xi'an

ABC in Stockholm [on-board again]

Posted in Kids, pictures, Statistics, Travel, University life on May 18, 2016 by xi'an

After a smooth cruise from Helsinki to Stockholm, a glorious sunrise over the Åland Islands, and a morning break to get a hasty view of the city, ABC in Helsinki (a.k.a. ABCruise) resumed while still in Stockholm. The first talk was by Laurent Calvet, about dynamic (state-space) models when the likelihood is not available and is replaced with a proximity between the observed and the simulated observables, at each discrete time in the series. The authors use a proxy predictive for the incoming observable and derive an optimal (in a non-parametric sense) bandwidth based on this proxy. Michael Gutmann then gave a presentation that somewhat connected with his talk at ABC in Roma, and his poster at NIPS 2014, about using Bayesian optimisation to reduce the rejections in ABC algorithms. Which means building a model of a discrepancy or distance by Bayesian optimisation. I definitely like this perspective, as it reduces the simulation to one of a discrepancy (after a learning step). And does not require a threshold. Aki Vehtari expanded on this idea with a series of illustrations. A difficulty I have with the approach is the construction of the acquisition function… The last session, while pretty late, was definitely exciting, with talks by Richard Wilkinson on surrogate or emulator models, which go very much in a direction I support, namely that approximate models should be accepted on their own; by Julien Stoehr, on clustering and machine-learning tools to incorporate more summary statistics; and by Tim Meeds, who concluded with two (short) talks!, centred on the notion of deterministic algorithms that explicitly incorporate the random generators within the comparison, resulting in post-simulation recentering à la Beaumont et al. (2003), plus new advances with further incorporations of those random generators, turned deterministic functions, within variational Bayes inference.
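As I understand the Bayesian optimisation idea in Gutmann's talk, one models the ABC discrepancy as a function of the parameter with a Gaussian process and uses an acquisition rule to decide where to simulate next. A rough sketch under my own (illustrative) choices, with a plain lower-confidence-bound acquisition rather than his actual construction:

```python
# Illustrative BO-for-ABC loop: fit a GP to (theta, discrepancy) pairs and
# simulate next where the lower confidence bound of the GP is smallest,
# so that simulations concentrate where the discrepancy is plausibly small.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
y_obs = rng.normal(1.0, 1, 50)                  # pseudo-data, true theta = 1

def discrepancy(theta):
    sim = rng.normal(theta, 1, 50)
    return abs(sim.mean() - y_obs.mean())       # distance between summaries

# a handful of initial simulations over the prior range
thetas = list(rng.uniform(-5, 5, 8))
discs = [discrepancy(t) for t in thetas]

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                              alpha=1e-2, normalize_y=True)
grid = np.linspace(-5, 5, 200).reshape(-1, 1)
for _ in range(15):                             # BO loop: fit GP, minimise LCB
    gp.fit(np.array(thetas).reshape(-1, 1), discs)
    mean, sd = gp.predict(grid, return_std=True)
    nxt = float(grid[np.argmin(mean - sd)])     # lower confidence bound
    thetas.append(nxt)
    discs.append(discrepancy(nxt))

best = thetas[int(np.argmin(discs))]            # should sit near theta = 1
```

The appeal, as said above, is that no tolerance threshold appears: the learned discrepancy surface itself drives the simulations.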

On Wednesday morning, we will land back in Helsinki and head back to our respective homes, after another exciting ABC in… workshop. I am terribly impressed by the way this workshop at sea operated, providing perfect opportunities for informal interactions and collaborations, without ever getting claustrophobic or dense. Enjoying very long days also helped. While it seems unlikely we can repeat this successful implementation, I hope we can aim at similar formats in future occurrences. Kiitos paljon to our Finnish hosts!

sunrise on Bothnia

Posted in pictures, Running, Travel on May 17, 2016 by xi'an

ABC in Helsinki [on-board]

Posted in pictures, Statistics, Travel, University life on May 17, 2016 by xi'an

ABC in Helsinki (a.k.a. ABCruise) has started! With terrific weather, most adequate for a cruise on the Baltic. The ship on which the workshop takes place is certainly larger than any I have been on, including the Channel ferries, and the inside alley looks rather like a shopping centre! However, the setting is exceptional, with comfy sea-facing cabins and pleasant breaks (including fancy tea!). Plus, we have a quiet and cosy conference room that makes one forget one is on a boat. Until it starts rocking. Or listing! The cruise boat is definitely large enough to be fairly stable. A unique experience we could consider for future (AB-see) workshops (with the caveat that we benefited from exceptional circumstances that brought the costs down to ridiculous amounts).

Richard Everitt talked about the synthetic likelihood approach and its connection with ABC, making clear for me a point I had somewhat forgotten, namely that the approximate likelihood is a Gaussian evaluated at the observed summary statistics, but one centred at the empirical moments derived from simulating pseudo-summaries for a given value of the parameter θ. So it is not an exact approach, in that it does not converge to the true likelihood as the number of simulations grows to infinity. (While a kernel version would converge.) That means it may (will) misrepresent the tails unless the distribution of the summary statistic is close to Normal. Richard also introduced the bootstrap, or bags of little bootstraps, in order to speed up the generation of the pseudo-data, which makes sense even though it moves the sampling away from the true model, since it is conditional on a single simulation.
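The Gaussian-at-the-summaries construction recalled above can be sketched in a few lines; the toy model (a Normal mean) and every name below are illustrative, not Richard's code:

```python
# Sketch of the synthetic likelihood: at a given theta, simulate batches of
# pseudo-data, compute their summaries, fit a Gaussian by empirical mean and
# covariance, and evaluate that Gaussian density at the observed summaries.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)

def summaries(data):
    return np.array([data.mean(), data.std()])

y_obs = rng.normal(2.0, 1, 100)                 # pseudo-observed data, theta = 2
s_obs = summaries(y_obs)

def synthetic_loglik(theta, n_sim=200):
    sims = np.array([summaries(rng.normal(theta, 1, 100))
                     for _ in range(n_sim)])
    mu = sims.mean(axis=0)                      # empirical mean of summaries
    sigma = np.cov(sims, rowvar=False)          # empirical covariance
    return multivariate_normal.logpdf(s_obs, mean=mu, cov=sigma)

# the synthetic log-likelihood is larger near the generating value than far away
ll_true = synthetic_loglik(2.0)
ll_far = synthetic_loglik(5.0)
```

Note that increasing `n_sim` only improves the estimates of the Gaussian moments, not the Gaussian assumption itself, which is the non-convergence point made above.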

Jean-Michel Marin introduced the ABC inference algorithm we are currently working on, using regression random forests that differ from the classification forests we used for model selection. (The paper is close to completion, so I hope to be able to tell more in the near future!) Clara Grazian presented her semi-parametric work using ABC with Brunero Liseo, which was part of her thesis. Thomas Schön presented an extension of his particle Gibbs with adaptive sampling to the case of degenerate transitions, using an ABC approximation to get around this central problem. A very interesting entry that I need to study deeper. And Caroline Colijn talked about ABC for trees, mostly about the selection of summary statistics towards comparing tree topologies, with a specific distance between trees that caters to the topology and only the topology.