Error in ABC versus error in model choice

Following the earlier posts about our lack of confidence in ABC model choice, I got an interesting email from Christopher Drummond, a postdoc at the University of Idaho working on an empirical project on the landscape genetics of tailed frogs. Along the lines of the empirical test we advocated at the end of our paper, Chris evaluated the type I error (or false allocation rate) in a controlled ABC experiment, using simulated pseudo-observed data sets (pods) for validation, and ended up with a large overall error of the order of 10% across four different models, ranging from 5% to 25% for each. He further reported “an exponentially decreasing rate of improvement in predictive accuracy as the number of ABC simulations increases” and then extrapolated about the huge [impossibly large] number of ABC simulations [hence the value of the ABC tolerance] that would be required to achieve, say, a 5% error rate. This was a most interesting extrapolation and we ended up exchanging a few emails around this theme… My main argument in the ensuing discussion was that there is a limiting error rate, presumably different from zero, simply because Bayesian procedures are fallible, just like any other statistical procedure, unless the priors are highly differentiated from one model to the next.
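
To make the exercise concrete, here is a minimal sketch of such a validation experiment. The two toy simulators, the summary statistics and all settings are illustrative stand-ins, not the landscape-genetics models or code used in Chris's study:

# Minimal sketch of the validation exercise described above: estimate the
# false allocation rate of ABC model choice from pseudo-observed data sets
# (pods). The simulators and summary statistics are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def simulate(model, theta, n=50):
    """Simulate data from one of two toy models and return summary statistics."""
    if model == 0:
        x = rng.normal(theta, 1.0, size=n)
    else:
        x = rng.laplace(theta, 1.0, size=n)
    return np.array([x.mean(), x.std()])

def sample_prior():
    return rng.uniform(-2.0, 2.0)

def abc_model_choice(s_obs, n_sim=10_000, tol=0.01):
    """Posterior model probabilities estimated by keeping the closest simulations."""
    models = rng.integers(0, 2, size=n_sim)            # uniform prior on the two models
    sims = np.array([simulate(m, sample_prior()) for m in models])
    dist = np.linalg.norm(sims - s_obs, axis=1)
    keep = dist <= np.quantile(dist, tol)              # ABC tolerance set as a distance quantile
    return np.bincount(models[keep], minlength=2) / keep.sum()

# False allocation rate: draw pods from a known model and count how often
# ABC picks the wrong one.
n_pods, errors = 100, 0
for _ in range(n_pods):
    true_model = rng.integers(0, 2)
    s_pod = simulate(true_model, sample_prior())
    post = abc_model_choice(s_pod)
    errors += int(np.argmax(post) != true_model)

print(f"estimated false allocation rate: {errors / n_pods:.2%}")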

Chris also noticed that calibrating the value of the Bayes factor in terms of the false allocation rate itself, rather than against an absolute scale like Jeffreys’, might provide some trust in the actual (log10) ABC Bayes factors recovered for the models fit to the data he observed, since the validation simulations showed no wrong allocation for values above log10(BF) > 5, versus log10(BF) ≈ 8 for the model that best fit the observed data collected from real frogs. Although this sounds like a Bayesian p-value, it illustrates very precisely our suggestion in the conclusion of our paper of turning to such empirical measures to calibrate the ABC output, without overly trusting the ABC approximation of the Bayes factor itself.
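
Continuing the sketch above (and reusing its hypothetical simulate, sample_prior and abc_model_choice helpers), one way to implement this calibration is to find the log10 Bayes factor threshold above which none of the pods is misallocated, instead of reading the value against Jeffreys’ scale:

# Calibrate the log10 Bayes factor against the false allocation rate.
# With equal prior model probabilities, the ratio of posterior model
# probabilities equals the Bayes factor; eps guards against zeros.
import numpy as np

def log10_bf(post, eps=1e-12):
    """log10 Bayes factor of the best-fit model against the runner-up."""
    p = np.sort(post)[::-1]
    return np.log10((p[0] + eps) / (p[1] + eps))

records = []  # pairs of (log10 BF, correctly allocated?)
for _ in range(200):
    true_model = rng.integers(0, 2)
    s_pod = simulate(true_model, sample_prior())
    post = abc_model_choice(s_pod)
    records.append((log10_bf(post), np.argmax(post) == true_model))

# Smallest threshold such that every pod with a larger log10 BF is correctly
# allocated -- the empirical analogue of "no wrong allocation above log10(BF) > 5".
threshold = max((bf for bf, ok in records if not ok), default=0.0)
print(f"calibrated log10(BF) threshold: {threshold:.2f}")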

3 Responses to “Error in ABC versus error in model choice”

  1. […] another paper on ABC model choice was posted on arXiv a few days ago, just prior to the ABC in London meeting that ended in the pub […]

  2. […] some rather excellent discussion of these issues on Christian Robert’s blog, again thanks to Leo for pointing me to this, still some more reading to do on this before we here […]

  3. An update and additional thoughts, as I have been rapidly adding simulations… Interestingly, as the number of ABC simulations increases beyond ca. 100K, the overall false allocation rate remains nearly constant, but the threshold for Bayes factor ‘error’ decreases considerably. For example, log10(BF) > 5 for a reference table with 250K simulations always selects the ‘true’ model from 1000 pseudo-observed data sets, but with 500K simulations this threshold drops to log10(BF) > 2, closer to Jeffreys’ absolute scale for ‘decisive’ evidence. (NB: in this case the BF comparison is between the best-fit and second-best-fit models for each pod.) While these results are specific to the data at hand, I find this pattern encouraging…
