## Pre-processing for approximate Bayesian computation in image analysis

Posted in R, Statistics, University life on March 21, 2014 by xi'an

With Matt Moores and Kerrie Mengersen, from QUT, we wrote this short paper just in time for the MCMSki IV Special Issue of Statistics & Computing, and arXived it as well. The global idea is to cut down on the cost of running an ABC experiment by removing the simulation of a humongous state-space vector, as in Potts and hidden Potts models, and replacing it with an approximate simulation of the 1-d sufficient (summary) statistic. In that case, we used a division of the 1-d parameter interval to simulate the distribution of the sufficient statistic for each of those parameter values and to compute the expectation and variance of the sufficient statistic. The conditional distribution of the sufficient statistic is then approximated by a Gaussian with these two parameters, and those Gaussian approximations substitute for the true distributions within an ABC-SMC algorithm à la Del Moral, Doucet and Jasra (2012).
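To fix ideas, here is a minimal sketch of the surrogate trick in Python, on a toy Gaussian model rather than the Potts model of the paper; the grid, prior, tolerance and rejection step are all illustrative (the paper plugs the surrogate into ABC-SMC, not plain rejection):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the expensive model: n iid N(theta, 1) observations,
# with the sample mean as the (sufficient) summary statistic.
n = 100

def simulate_summary(theta, reps=200):
    """Simulate `reps` replicates of the summary statistic at theta."""
    return rng.normal(theta, 1.0, size=(reps, n)).mean(axis=1)

# Pre-computation: on a grid of parameter values, estimate the expectation
# and standard deviation of the summary statistic by simulation.
grid = np.linspace(-2.0, 2.0, 41)
mu = np.empty_like(grid)
sd = np.empty_like(grid)
for i, t in enumerate(grid):
    s = simulate_summary(t)
    mu[i], sd[i] = s.mean(), s.std()

def surrogate_summary(theta):
    """Draw the summary from the fitted Gaussian (interpolating the grid),
    instead of simulating a full dataset."""
    return rng.normal(np.interp(theta, grid, mu), np.interp(theta, grid, sd))

# Plain ABC rejection using the Gaussian surrogate.
obs = rng.normal(0.5, 1.0, size=n).mean()   # observed summary, true theta = 0.5
eps = 0.05
prior_draws = rng.uniform(-2.0, 2.0, size=20_000)
accepted = [t for t in prior_draws if abs(surrogate_summary(t) - obs) < eps]
print(len(accepted), float(np.mean(accepted)))
```

On this toy example the accepted draws concentrate around the true parameter value; the point of the paper is that the two-moment mapping is precomputed once and then reused across every SMC iteration, avoiding any further simulation of the full image.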

Across twenty simulated 125 × 125 pixel images, Matt's algorithm took an average of 21 minutes per image for between 39 and 70 SMC iterations, while resorting to pseudo-data and deriving the genuine sufficient statistic took an average of 46.5 hours for 44 to 85 SMC iterations. On a realistic Landsat image, with a total of 978,380 pixels, the precomputation of the mapping function took 50 minutes, while the total CPU time on 16 parallel threads was 10 hours 38 minutes. By comparison, it took 97 hours for 10,000 MCMC iterations on this image, with a poor effective sample size of 390 values. Regular SMC-ABC algorithms cannot handle this scale: it takes 89 hours to perform a single SMC iteration! (Note that path sampling also operates in this framework, thanks to the same precomputation: in that case it took 2.5 hours for 10⁵ iterations, with an effective sample size of 10⁴…)

Since my student’s paper on Seaman et al (2012) got promptly rejected by TAS for quoting too extensively from my post, we decided to include me as an extra author and submitted the paper to this special issue as well.

## Statistics and Computing special MCMSk’issue [call for papers]

Posted in Books, Mountains, R, Statistics, University life on February 7, 2014 by xi'an

Following the exciting and innovative talks, posters and discussions at MCMski IV, the editor of Statistics and Computing, Mark Girolami (who also happens to be the new president-elect of the BayesComp section of ISBA, which is taking over the management of future MCMski meetings), kindly proposed to publish a special issue of the journal open to all participants in the meeting. Not only the speakers, mind you, but all participants.

So if you are interested in submitting a paper to this special issue of a computational statistics journal that is very close to our MCMski themes, I encourage you to do so. (Especially if you missed the COLT 2014 deadline!) The deadline for submissions is set for March 15 (a wee bit tight, but we would dearly like to publish the issue in 2014, the same year as the meeting). Submissions are to be made through the Statistics and Computing portal, with a mention that they are intended for the special issue.

An editorial committee chaired by Antonietta Mira and composed of Christophe Andrieu, Brad Carlin, Nicolas Chopin, Jukka Corander, Colin Fox, Nial Friel, Chris Holmes, Gareth Jones, Peter Müller, Geoff Nicholls, Gareth Roberts, Håvard Rue, Robin Ryder, and myself will examine the submissions and get back to the authors within a few weeks. In a spirit similar to the JRSS Read Paper procedure, submissions will first be examined collectively before being sent to referees. We plan to publish the reviews as well, in order to include a global set of comments on the accepted papers; we intend to do so in The Economist style, i.e., as a set of edited anonymous comments. The usual instructions for Statistics and Computing apply, with the additional requirements that the paper should be around 10 pages long and include at least one author who took part in MCMski IV.

## MCMSki IV [mistakes and regrets]

Posted in Books, Mountains, pictures, R, Statistics, Travel, University life, Wines on January 13, 2014 by xi'an

Now that the conference and the Bayesian non-parametric satellite workshop (thanks to Judith!) are over, with (almost) everyone back home, and that the post-partum conference blues settle in (!), I can reflect on how things ran for those meetings and what I could have done to improve them… (Not yet considering proposing a second edition of MCMSki in Chamonix, obviously!)

Although this was clearly a side issue for most participants, the fact that the ski race did not take place still rattles me! In retrospect, adding a mere 5€ to the registration fee of every participant would have been enough to cover the (fairly high) fares asked by the local ski school. Late planning for the ski race led me to overlook this basic fact…

Since MCMSki is now the official conference of the BayesComp section of ISBA, I should have planned a section meeting within the program well in advance, if only to discuss the structure of the next meeting and how to keep the section alive. Waiting till the last session of the final day was not the best idea!

Another thing I postponed for too long was seeking sponsors: fortunately, the O'Bayes meeting in Duke woke me up to the potential of a poster prize, and re-fortunately Academic Press, CRC Press, and Springer-Verlag reacted quickly enough to have plenty of books to hand to the winners. If we could have had another group of sponsors financing a beanie or something similar, it would have been an additional perk… even though I gathered enough support from participants for the minimalist conference "package" made of a single A4 sheet.

Last, I did not properly advertise the special issue of Statistics and Computing, open to all presenters at MCMSki IV, on the webpage, nor at all during the meeting! We now need to send them a reminder…

## Approximate Bayesian computational methods on-line

Posted in R, Statistics, University life on October 25, 2011 by xi'an

Fig. 4 – Boxplots of the evolution [against ε] of ABC approximations to the Bayes factor. The representation is made in terms of frequencies of visits to [accepted proposals from] models MA(1) and MA(2) during an ABC simulation, when ε corresponds to the 10, 1, .1, .01% quantiles of the simulated autocovariance distances. The data is a time series of 50 points simulated from an MA(2) model. The true Bayes factor is then equal to 17.71, corresponding to posterior probabilities of 0.95 and 0.05 for the MA(2) and MA(1) models, respectively.

The survey we wrote with Jean-Michel Marin, Pierre Pudlo, and Robin Ryder is now published in [the expensive] Statistics and Computing (on-line). Besides recycling a lot of 'Og posts on ABC, this paper has the (personal) appeal of giving us the first hint that all was not so rosy in terms of ABC model choice. I wonder whether or not it will be part of the ABC special issue.
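For readers new to ABC model choice, the frequency-of-visits estimate of Fig. 4 can be sketched as follows, in a toy MA(1) versus MA(2) setup in Python; the uniform priors on the MA coefficients, the use of the first two autocovariances as summaries, and all settings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ma(order, T=50):
    """Simulate an MA(order) series, coefficients drawn from a toy prior."""
    theta = rng.uniform(-1.0, 1.0, size=order)
    e = rng.normal(size=T + order)
    x = e[order:].copy()
    for k in range(order):
        x += theta[k] * e[order - 1 - k : T + order - 1 - k]
    return x

def autocov(x, lags=2):
    """First `lags` empirical autocovariances, used as summary statistics."""
    xc = x - x.mean()
    return np.array([(xc[: len(x) - k] * xc[k:]).mean() for k in range(1, lags + 1)])

# "Observed" series of 50 points from an MA(2) model, as in the figure
s_obs = autocov(simulate_ma(2))

# ABC model choice: draw the model index from its prior, simulate a series,
# and keep the draws whose summaries fall within a quantile-based tolerance.
N = 20_000
models = rng.integers(1, 3, size=N)                  # model index, 1 or 2
dist = np.array([np.linalg.norm(autocov(simulate_ma(m)) - s_obs) for m in models])
eps = np.quantile(dist, 0.01)                        # epsilon as a distance quantile
kept = models[dist <= eps]

# Frequencies of accepted visits approximate the posterior model probabilities,
# and their ratio gives a crude estimate of the Bayes factor.
p2 = (kept == 2).mean()
print(p2 / max(1.0 - p2, 1e-12))
```

As the survey warns, such raw frequency estimates can be unreliable when the summary statistics are not sufficient for model comparison, which is precisely the "not so rosy" hint mentioned above.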

## Questions on ABC

Posted in Statistics, University life on May 31, 2011 by xi'an

Our ABC survey for Statistics and Computing (and the ABC special issue!) has been quickly revised, resubmitted, and rearXived. Here is our conclusion about some issues that remain unsolved (much more limited in scope than the program drafted by Halton!):

1. The convergence results obtained so far are impractical, in that they require either the tolerance to go to zero or the sample size to go to infinity. Obtaining exact error bounds for positive tolerances and finite sample sizes would bring a strong improvement, both in the implementation of the method and in the assessment of its worth.
2. In particular, the choice of the tolerance has so far been handled from a very empirical perspective. Recent theoretical assessments show that a balance between Monte Carlo variability and target approximation is necessary, but how to strike that balance in a practical implementation remains unclear.
3. Even though ABC is often presented as a convergent method that approximates Bayesian inference, it can also be perceived as an inference technique per se and hence analysed in its own right. Connections with indirect inference have already been drawn; however, the finer asymptotics of ABC would be most useful to derive. Moreover, they could indirectly provide indications about the optimal calibration of the algorithm.
4. In connection with the above, the relation of ABC-based inference to other approximative methods, like variational Bayes inference, is so far unexplored. Comparing and interbreeding those different methods should become a research focus as well.
5. The construction and selection of the summary statistics are so far highly empirical. An automated approach based on the principles of data analysis and approximate sufficiency would be much more attractive and convincing, especially in non-standard and complex settings.
6. The debate about ABC-based model choice is so far inconclusive, in that we cannot guarantee the validity of the approximation while considering that a "large enough" collection of summary statistics provides an acceptable level of approximation. Evaluating the discrepancy by exploratory methods like the bootstrap would shed a much more satisfactory light on this issue.
7. The method necessarily faces limitations imposed by large datasets or complex models, in that simulating the pseudo-data may itself become an impossible task. Dimension-reducing techniques that would simulate the summary statistics directly will soon become necessary.
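On point 5 of the list above, one direction that later proved fruitful is the semi-automatic construction of a summary by regressing the parameter on a pilot set of raw statistics, à la Fearnhead and Prangle; here is a toy Python sketch, with the model, the raw statistics and all settings purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Pilot run: simulate (theta, raw summaries) pairs from the prior predictive.
n, M = 50, 5_000
theta = rng.uniform(-2.0, 2.0, size=M)
data = rng.normal(theta[:, None], 1.0, size=(M, n))
raw = np.column_stack([data.mean(axis=1), np.median(data, axis=1), data.std(axis=1)])

# Linear regression of theta on the raw summaries: the fitted value then
# serves as a single, approximately sufficient summary statistic.
X = np.column_stack([np.ones(M), raw])
beta, *_ = np.linalg.lstsq(X, theta, rcond=None)

def summary(x):
    """Learned summary: regression prediction from the raw statistics."""
    feats = np.array([1.0, x.mean(), np.median(x), x.std()])
    return float(feats @ beta)

# ABC rejection with the learned summary and a quantile-based tolerance.
obs = rng.normal(0.7, 1.0, size=n)                    # true theta = 0.7
s_obs = summary(obs)
prior = rng.uniform(-2.0, 2.0, size=20_000)
s_sim = np.array([summary(rng.normal(t, 1.0, size=n)) for t in prior])
keep = np.abs(s_sim - s_obs) <= np.quantile(np.abs(s_sim - s_obs), 0.005)
post = prior[keep]
print(len(post), float(post.mean()))
```

The design choice is to spend simulations on a pilot run once, so that the subsequent ABC comparison happens in one dimension per parameter rather than on an arbitrary collection of statistics.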

## ABC in London [quick recap']

Posted in Statistics, Travel, University life on May 6, 2011 by xi'an

The meeting yesterday went on very smoothly and nicely. Despite a tight schedule of 12 talks that made the meeting a very full day (and a very early start from Paris), it did not feel that exhausting, as also shown by the ensuing discussion in the Queens Arm after the talks. (The organisation of the meeting by Michael Stumpf and his group at Imperial was splendid, with plenty of tea and food to sustain the audience, and a very nice conference room.) It obviously helped that I had read a large portion of the papers related to the talks.

The meeting started with David Balding recalling a few quotes from Alan Templeton to stress that ABC was not uniformly well-received in all circles, then Adam Powell gave a fascinating talk about an implementation of ABC for tracking the evolution of dairy farming in Europe. One amazing result in this work was that the whole of European cattle originated from a small herd of a few hundred domesticated aurochs in the Fertile Crescent! Simon Tavaré presented an equally fascinating study on the ancestral tree of primates that used a mix of ABC and MCMC, recently published in Systematic Biology, with the age of the common ancestor estimated to be between 80 and 90 million years ago (and an additional estimate of the divergence between humans and chimpanzees closer to 8 million years than the 5 million years previously thought).

Tina Toni talked about the application of ABC-SMC and ABC model choice to complex biochemical dynamics. Pierre Pudlo and Mohammed Sedki introduced the new ABC-SMC scheme for selecting the tolerance that we are developing (with Jean-Michel Marin and Jean-Marie Cornuet), which builds on Del Moral, Doucet and Jasra's ABC-SMC (and will hopefully be completed soon, to be submitted to the Statistics and Computing special ABC issue). Oliver Ratmann showed an application of his model assessment to several epidemic datasets, including a superb influenza sequence. Ajay Jasra explained the main ideas in the ABC-HMM paper I recently discussed (even mentioning the post during the talk!). Mark Beaumont started with a recollection of the developments on his GIMH algorithm and illustrated the use of particle MCMC with an ABC target in a dynamic admixture model, with a sort of Dirichlet random walk on the admixture parameters. Michael Blum presented his study of the clear improvement in estimation error brought by linear and non-linear adjustments to the raw ABC output. Dennis Prangle then followed with a pedagogical introduction to the semi-automated ABC discussed several times on the 'Og.

In the final session, on ABC model choice, Xavier Didelot opened the discussion by stating the problem of Bayes factor approximation and its resolution in the case of exponential families, and Chris Barnes showed us a new method for picking summary statistics via a Kullback-Leibler criterion (Michael Stumpf had sent me the draft of the paper a few days ago and I will comment on the approach once it is available on arXiv).

Again, a very full but exhilarating day! Looking forward to the next edition in Roma!

## Statistics and Computing and ABC

Posted in R, Statistics on February 23, 2011 by xi'an

Statistics and Computing has received several papers on ABC and plans to make a special ABC issue out of these. All submissions related to ABC that are made prior to late June 2011 and that are accepted will be published in this special issue. The special issue is identified as a specific article type on the on-line submissions page.

In case you have questions or requests about this special issue, please directly contact the Editor Gilles Celeux or the publishing editor. Not me! I am simply forwarding the announcement from the Editor to all those interested.