Error in ABC versus error in model choice

Posted in pictures, Statistics, University life on March 8, 2011 by xi'an

Following the earlier posts about our lack of confidence in ABC model choice, I got an interesting email from Christopher Drummond, a postdoc at the University of Idaho working on an empirical project in the landscape genetics of tailed frogs. Along the lines of the empirical test we advocated at the end of our paper, Chris evaluated the type I error (or false allocation rate) in a controlled ABC experiment with simulated pseudo-observed data (pods) for validation, and ended up with a large overall error on the order of 10% across four different models, ranging from 5% to 25% for each. He further reported "an exponentially decreasing rate of improvement in predictive accuracy as the number of ABC simulations increases" and then extrapolated about the huge [impossibly large] number of ABC simulations [hence the value of the ABC tolerance] that would be required to achieve, say, a 5% error rate. This was a most interesting extrapolation and we ended up exchanging a few emails around this theme… My main argument in the ensuing discussion was that there is a limiting error rate, presumably different from zero, simply because Bayesian procedures are fallible, just like any other statistical procedure, unless the priors are highly differentiated from one model to the next.
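To make the idea concrete, here is a minimal sketch of the kind of validation exercise described above: simulate pods with known model labels, run a basic rejection-ABC model choice on each, and count misallocations. This is a toy illustration with two invented Gaussian models, not Chris's actual landscape-genetics pipeline; all names and settings are assumptions.

```python
# Toy sketch (not the actual analysis): estimating the false allocation
# rate of ABC model choice using pseudo-observed datasets (pods).
import numpy as np

rng = np.random.default_rng(0)

def simulate(model, n=50):
    # Two invented models, distinguished only by their location parameter.
    mu = 0.0 if model == 0 else 1.0
    return rng.normal(mu, 1.0, n)

def summary(x):
    # Summary statistics used for the ABC comparison.
    return np.array([x.mean(), x.std()])

# Reference table: model indices drawn uniformly, then data simulated.
n_ref = 20_000
models = rng.integers(0, 2, n_ref)
table = np.array([summary(simulate(m)) for m in models])

def abc_choice(obs, tol=0.5):
    # Accept reference simulations whose summaries lie within the tolerance.
    d = np.linalg.norm(table - summary(obs), axis=1)
    accepted = models[d < tol]
    # ABC posterior probability of model 1 ~ frequency among accepted draws.
    return accepted.mean() if accepted.size else 0.5

# Validation: pods with known true models, counting wrong allocations.
n_pods = 200
errors = 0
for _ in range(n_pods):
    truth = rng.integers(0, 2)
    pod = simulate(truth)
    chosen = int(abc_choice(pod) > 0.5)
    errors += (chosen != truth)

print(f"false allocation rate: {errors / n_pods:.2%}")
```

With real models the rate would be driven by how well the summaries separate the candidates, which is exactly what makes the 5-25% spread across models informative.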

Chris also noticed that calibrating the value of the Bayes factor in terms of the false allocation rate itself, rather than on an absolute scale like Jeffreys's, might provide some trust in the actual (log10) ABC Bayes factors recovered for the models fitted to the data he observed, since the validation simulations indicated no wrong allocation for values of log10(BF) > 5, versus log10(BF) ≈ 8 for the model that best fitted the data collected from real frogs. Although this sounds like a Bayesian p-value, it illustrates very precisely our suggestion in the conclusion of our paper to turn to empirical measures such as this to calibrate the ABC output, without overly trusting the ABC approximation of the Bayes factor itself.
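The calibration step can be sketched as follows: given the log10 Bayes factors from the validation pods together with a flag for whether each allocation was correct, scan thresholds and keep the smallest one above which no misallocation was observed. The numbers below are invented for illustration; in Chris's case this empirical threshold came out around log10(BF) > 5.

```python
# Hypothetical illustration of calibrating a log10 Bayes-factor threshold
# against the empirical false allocation rate (all numbers invented).
import numpy as np

rng = np.random.default_rng(1)

# Pretend validation output: for each pod, the log10 Bayes factor of the
# chosen model, with wrong allocations clustering at low values.
bf = np.abs(rng.normal(3.0, 2.0, 200))
correct = bf + rng.normal(0.0, 1.5, 200) > 1.0

thresholds = np.linspace(0.0, 8.0, 81)
rates = np.array([
    # False allocation rate among pods exceeding the threshold
    # (taken as 0 when no pod exceeds it).
    (~correct[bf > t]).mean() if (bf > t).any() else 0.0
    for t in thresholds
])

# Smallest threshold with zero observed misallocations among the pods.
safe = thresholds[np.argmax(rates == 0.0)]
print(f"calibrated log10(BF) threshold: {safe:.1f}")
```

An observed log10(BF) well above the calibrated threshold (as in the 8-versus-5 comparison reported above) then carries an empirical, rather than purely scale-based, guarantee.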
