ABC in London [quick recap’]

The meeting yesterday went very smoothly. Despite a tight schedule of 12 talks that made for a very full day (and a very early start from Paris), it did not feel that exhausting, as shown by the ensuing discussion in the Queens Arm after the talks. (The organisation of the meeting by Michael Stumpf and his group at Imperial was splendid, with plenty of tea and food to sustain the audience, and a very nice conference room.) It obviously helped that I had read a large portion of the papers related to the talks.

The meeting started with David Balding recalling a few quotes from Alan Templeton to stress that ABC was not uniformly well-received in all circles, then Adam Powell gave a fascinating talk about an implementation of ABC for tracking the evolution of dairy farming in Europe. One amazing result in this work was that the whole of European cattle originated from a small herd of a few hundred domesticated aurochs in the Fertile Crescent! Simon Tavaré presented an equally fascinating study on the ancestral tree of primates that used a mix of ABC and MCMC, recently published in Systematic Biology, with the age of the common ancestor estimated to be between 80 and 90 million years ago (and an additional estimation of the divergence between humans and chimpanzees to be closer to 8 million years than 5 million years as thought previously). Tina Toni talked about the application of ABC-SMC and ABC model choice to complex biochemical dynamics. Pierre Pudlo and Mohammed Sedki introduced the new ABC-SMC scheme for selecting the tolerance we are developing (with Jean-Michel Marin and Jean-Marie Cornuet), which builds on Del Moral, Doucet and Jasra's ABC-SMC (and which will hopefully be completed soon for submission to the Statistics and Computing special ABC issue). Oliver Ratmann showed an implementation of his model assessment on several epidemic datasets, including a superb influenza sequence dataset. Ajay Jasra explained the main ideas in the ABC HMM paper I recently discussed (even mentioning the post during the talk!). Mark Beaumont started with a recollection of the developments on his GIMH algorithm and illustrated the use of particle MCMC with an ABC target in a dynamic admixture model with a sort of Dirichlet random walk on the admixture parameters. Michael Blum presented his study on the clear estimation error improvement brought by linear and non-linear adjustments to the raw ABC output. 
Dennis Prangle followed with a pedagogical introduction to the semi-automatic ABC discussed several times on the 'Og. In the final session on ABC model choice, Xavier Didelot opened the discussion by stating the problem of Bayes factor approximation and its resolution in the case of exponential families, and Chris Barnes showed us a new method for picking summary statistics by a Kullback-Leibler criterion (Michael Stumpf had sent me the draft of the paper a few days ago and I will comment on the approach once it is available on arXiv).
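To fix ideas on what is at stake in this last session, here is a minimal sketch of ABC model choice by plain rejection sampling: under a uniform prior over the models, the acceptance frequencies of simulations falling within a tolerance of the observed summary approximate the posterior model probabilities. This is an illustration only, not any speaker's actual implementation; the Poisson-versus-geometric pair, the Exp(4) prior on the mean, and the tolerance `eps` are all hypothetical choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed data: for illustration, 50 draws from a Poisson(4)
n = 50
x_obs = rng.poisson(4.0, size=n)
s_obs = x_obs.mean()                     # summary statistic: the sample mean

def simulate(model, rng, n):
    """Draw a mean parameter from a (hypothetical) Exp(4) prior, simulate n points."""
    lam = rng.exponential(4.0)
    if model == 0:                       # Poisson model with mean lam
        return rng.poisson(lam, size=n)
    # geometric model on {0, 1, ...} with the same mean lam
    p = 1.0 / (1.0 + lam)                # mean (1 - p) / p equals lam
    return rng.geometric(p, size=n) - 1  # numpy's geometric lives on {1, 2, ...}

N = 20_000
eps = 0.2                                # tolerance on the summary distance
counts = [0, 0]
for _ in range(N):
    m = int(rng.integers(2))             # uniform prior over the two models
    x = simulate(m, rng, n)
    if abs(x.mean() - s_obs) < eps:      # basic rejection step
        counts[m] += 1

total = counts[0] + counts[1]
print("accepted:", total)
print("approx. P(Poisson   | data):", counts[0] / total)
print("approx. P(geometric | data):", counts[1] / total)
```

The catch, of course, is that the resulting model probabilities depend on the choice of summary, metric and tolerance: the sample mean may be perfectly adequate for estimating the mean within each model while carrying much less information for discriminating between the models.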

Again, a very full but exhilarating day! Looking forward to the next edition in Roma!

5 Responses to “ABC in London [quick recap’]”

  1. […] They add to this impressive battery of methods the potential use of AIC and BIC. (Last year after ABC in London I reported here on the use of the alternative DIC by Francois and Laval, but the paper is not in […]

  2. […] Approaches to Achieving Sufficiency for ABC model selection, was presented by Chris Barnes during ABC in London two months ago. (Note that all talks of the meeting are now available in Nature Precedings. A neat […]

  3. […] another paper on ABC model choice was posted on arXiv a few days ago, just prior to the ABC in London meeting that ended in the pub above (most conveniently located next to my B&B!). It is written by […]

  4. Yes, an interesting meeting. I was surprised that there wasn't a little more debate about the problem of model choice (I'm obviously as much to blame as anyone, having kept quiet…) — you seemed to be about the only one to take a pessimistic point of view, others shrugging their shoulders a little and saying we rely on summaries for parameter estimation so it's not a big deal to do the same for model choice. The point David Balding raised, that apparently small issues such as different metrics and different tolerances can make quite a large difference to model choice (even in the case of full-data use), was a valid one and was not addressed by anyone.

    The message for me was that, when doing ABC model choice, one needs to be very careful that the summaries are informative for model choice and that the results are not overly sensitive to the settings of the particular algorithm used. Michael Blum's abc R package seems worth a look and includes relevant sensitivity checks.

    Also, a small typo in the post: it is Adam Powell, not Alan.

    • This is a comment I heard from several participants, that we should have had more of a debate. Maybe this was not the right time: at the end of twelve talks the audience was exhausted. Maybe my talk was not sufficiently convincing, because the 24-parameter case did not show "such" a discrepancy… I am working on a more theoretical appraisal to show that this is indeed intrinsically different from parameter estimation (a discussion we started with David Balding and Mark Beaumont at the Queens Arm). I think my final slide is worth repeating, though, namely that turning away from the Bayes factor and the posterior probabilities [towards a data-analytic empirical assessment, missing the elegance of decision theory but much more practical] seems to be the only safe option at the moment!
