Lack of confidence in ABC model choice

Over the past weeks, we have worked on a population genetics illustration for our ABC model choice paper. Jean-Marie Cornuet and Jean-Michel Marin set up an experiment where two scenarios involving three populations are compared: two populations diverged 100 generations ago, and the third one either results from a recent admixture between the first two (scenario 1) or simply diverged from population 1 (scenario 2), in both cases 5 generations in the past. In scenario 1, the admixture rate from population 1 is 0.7. One hundred pseudo-observed datasets of the same size as in the first experiment (15 diploid individuals per population, 5 independent microsatellite loci) were generated, assuming an effective population size of 1000 and a mutation rate of 0.0005. There are six parameters, with the corresponding priors: the admixture rate (U[0.1,0.9]), three effective population sizes (U[200,2000]), the time of admixture/second divergence (U[1,10]), and the time of the first divergence (U[50,500]).

Although the computation is rather costly, the posterior probability can nonetheless be estimated by importance sampling, using 1000 parameter values and 1000 trees per parameter value, following the scheme of Stephens and Donnelly (JRSS Series B, 2000). The ABC approximation is obtained from DIYABC, using a reference table of two million simulations and 24 summary statistics.

The result of this experiment is shown in the graph above, with a clear divergence in the numerical values despite the stability of both approximations. Taking the importance sampling approximation as the reference value, the error rates in using the ABC approximation to choose between scenarios 1 and 2 are 14.5% and 12.5% (under scenarios 1 and 2, respectively). Although a simpler experiment with a single parameter and the same 24 summary statistics shows a reasonable agreement between both approximations, this result lends additional support to our earlier warning against a blind use of ABC for model selection. The article, written with Jean-Marie Cornuet, Jean-Michel Marin and Natesh Pillai, is now posted on arXiv (and submitted to PNAS).
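
For readers curious about the mechanics, here is a minimal sketch of the rejection-ABC side of the computation, in Python. The `simulate` function is a placeholder (the actual summaries come out of DIYABC's coalescent simulator), so every name and numerical setting below is illustrative rather than the implementation used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(model, rng):
    """Hypothetical stand-in for the coalescent simulator behind DIYABC:
    draw parameters from the scenario's priors, simulate the microsatellite
    data, and return the vector of 24 summary statistics. Faked here with
    Gaussian noise purely so the sketch runs end to end."""
    admix = rng.uniform(0.1, 0.9)            # admixture rate prior U[0.1, 0.9]
    return rng.normal(loc=model + admix, size=24)

def abc_prob_model1(s_obs, n_sim=200_000, tol_quantile=1e-3):
    """Rejection-ABC estimate of P(scenario 1 | data): simulate from both
    scenarios with equal prior weight, keep the simulations whose summary
    statistics fall closest to the observed ones, and return the fraction
    of kept simulations that came from scenario 1. (The experiment in the
    post uses a reference table of two million simulations.)"""
    models = rng.integers(1, 3, size=n_sim)              # uniform prior on {1, 2}
    stats = np.array([simulate(m, rng) for m in models])
    dist = np.linalg.norm(stats - s_obs, axis=1)         # distance between summaries
    eps = np.quantile(dist, tol_quantile)                # tolerance = small quantile
    kept = models[dist <= eps]
    return float(np.mean(kept == 1))
```

The fraction returned on the last line is precisely the quantity the paper worries about: whether it tracks the true posterior probability depends on what the 24 summary statistics retain about the model index.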

Responses to “Lack of confidence in ABC model choice”

  1. ihateaphids Says:

    Hi Christian
    15-20% seems problematic, but within the realm of improvement (and at least, using “fake” datasets, you can test whether your models are likely to suffer from the problem of false inferences). Do you and Jean-Marie see any possible improvement on this issue? Many like the idea of ABC for its flexibility in terms of model specification (I have yet to jump on the bandwagon, phew, now that this study has emerged), as well as for model choice. I, for one, would be willing to sacrifice CPU time for accuracy, so do you think it will be possible to develop a method that incorporates model flexibility and improved discriminatory power, even if at the expense of computing time?
    Jeff

    • Jeff: we do not think model flexibility is at stake here, in that using several models and running through them with ABC is feasible and already proposed in current ABC software. The difficulty I see in using ABC is that the “posterior probability” associated with the ABC output may have nothing to do with the true posterior probability. This is not directly related to computing power, even though increasing the number of summary statistics should help.

    • ihateaphids Says:

      Well, I suppose what I meant is: do you foresee a method that can accurately “choose” among models, while still allowing for model flexibility?

    • “Accurately” is the keyword! Our point in the paper is that you cannot trust the ABC approximation to accurately approximate the true Bayes factor, even when the tolerance is zero. However, if you can run a massive Monte Carlo experiment to evaluate the performance of a selection procedure based on the ABC approximation and show that, under all scenarios, the probability of picking the wrong model is very small, this brings sufficient confidence to use the procedure (a sketch of such a pods-based check is given after this thread). (Hope this helps!)

    • ihateaphids Says:

      Thanks xi’an! Yes, I think the PODS approach is the only useful way to check how well your model choice is likely to work at this point.

      By the way, out of curiosity, how long does the importance sampling method you used in the paper take, relative to the ABC simulations? Do you trust that this approach can validly ‘choose’ models?

    • In the case of scenario 2, it takes 5 to 6 hours per simulated dataset to produce an IS approximation to the Bayes factor. Given that this is a rather simple scenario, this shows how demanding IS is in terms of computing time.

  2. For those of us who are empiricists, false allocation rates (alluded to in the last paragraph of the arXiv manuscript) might be more useful at this point? It seems that false positives/negatives are easy to calculate from prior predictive distributions, and at least provide an ad hoc measure of the ability of ABC methods to discriminate among a discrete set of a priori models.

    • Chris: Certainly, empirical (or ad hoc) exploitation of the ABC output is still feasible and useful. One should simply be aware of the limitations of such interpretations: false allocation rates should not be interpreted as probabilities, as p-values too often are. From a Bayesian point of view, the drawback is that the evaluation is made on average rather than conditionally on the data and current parameter values.
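
To make the pods-based check discussed in this thread concrete, here is a minimal sketch along the same lines, again in Python and again with hypothetical stand-ins: `simulate` is the placeholder simulator from the sketch above and `abc_prob_model1` any routine returning the ABC estimate of P(scenario 1 | data); neither corresponds to DIYABC's actual interface:

```python
import numpy as np

def false_allocation_rates(simulate, abc_prob_model1, n_pods=100, seed=1):
    """Frequency with which ABC-based model choice picks the wrong scenario,
    estimated from pseudo-observed datasets (pods) drawn from each
    scenario's prior predictive distribution."""
    rng = np.random.default_rng(seed)
    rates = {}
    for true_model in (1, 2):
        wrong = 0
        for _ in range(n_pods):
            s_obs = simulate(true_model, rng)                # one pod under true_model
            pick = 1 if abc_prob_model1(s_obs) > 0.5 else 2  # majority rule on ABC output
            wrong += pick != true_model
        # note: an average over the prior predictive, not a probability
        # conditional on the observed data (the caveat raised just above)
        rates[true_model] = wrong / n_pods
    return rates
```

Kept small under every scenario, these rates are what the reply higher up calls sufficient confidence to use the procedure; they nonetheless remain averages over the prior predictive, not statements conditional on the data actually observed.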
