ABC for model criticism

An ABC paper by Oliver Ratmann, Christophe Andrieu, Carsten Wiuf and Sylvia Richardson has just appeared in PNAS Early Edition. (It will also be presented at the “ABC in Paris” meeting on June 26.) It uses the ABC approximation error $\epsilon$ in an altogether different way, namely as a tool for assessing the goodness of fit of a given model. It reminds me of Richard Wilkinson’s earlier ABC paper, discussed in an earlier post, but the scope is somewhat different. The fundamental idea is to treat $\epsilon$ as an additional parameter of the model, simulating from the joint posterior distribution

$f(\theta,\epsilon|x_0) \propto \xi(\epsilon|x_0,\theta)\times\pi_\theta(\theta)\times\pi_\epsilon(\epsilon)$

where $x_0$ is the data and $\xi(\epsilon|x_0,\theta)$ plays the role of the likelihood. (The $\pi$’s are obviously the priors on $\theta$ and $\epsilon$.) In fact, $\xi(\epsilon|x_0,\theta)$ is the prior predictive density of $\rho(S(x),S(x_0))$ given $\theta$ and $x_0$ when $x$ is drawn from $f(x|\theta)$. The authors then derive an ABC algorithm they call ABCμ to simulate an MCMC chain targeting this joint distribution, replacing $\xi(\epsilon|x_0,\theta)$ with a non-parametric kernel approximation. For each model under comparison, the marginal posterior distribution on the error $\epsilon$ is then used to assess the fit of the model, the logic being that this posterior should include 0 in a reasonable credible interval. (Contrary to other ABC papers, $\epsilon$ can be negative and multidimensional in this paper.)

As written above, this is a very interesting paper, full of innovations, that should open new directions in the way one perceives ABC. It is also quite challenging, partly due to the frustrating constraints PNAS imposes on the organisation (and submission) of papers. The paper thus contains a rather sketchy main part, a Materials and Methods addendum, and a Supplementary Material file! Flipping back and forth between those files certainly does not improve reading. I have never understood why PNAS is so rigid about a format that does not suit non-experimental sciences…

Given the wealth of innovations contained in the paper, I will certainly post again on it, but let me add here that, while the authors stress they use the data only once (a point always uncertain to me), they also define the above target by simultaneously using a prior distribution on $\epsilon$ and a conditional distribution on the same $\epsilon$, which they interpret as the likelihood in $(\epsilon,\theta)$. The product, being most often defined as a density in $(\epsilon,\theta)$, can be simulated from, but I have trouble seeing this as a regular Bayesian problem, especially because it seems the prior on $\epsilon$ significantly contributes to the final assessment (but is not particularly discussed in the paper, except in the S1.10 section).

Another Bayesian conundrum is the fact that both $\theta$ and $\epsilon$ are taken to be the same across models. In a sense, I presume $\theta$ can be completely different, but using the same prior on $\epsilon$ over all models under comparison is more of an issue…

8 Responses to “ABC for model criticism”

1. Any thoughts on the use of “discrepancy variables” for “Model checking using posterior predictive simulation”, from Gelman and Meng in Gilks, Richardson, Spiegelhalter (eds), MCMC IN PRACTICE? It looks a lot like a setup for a generic set of summary statistics, $T(y,\theta)$, for ABC, but, then, perhaps you’ve written on that some place?

2. […] assessments also are available is quite correct, as demonstrated in the multicriterion approach of Olli Ratmann and co-authors. This is simply another approach, not followed by most geneticists so […]

3. […] six steps concluding with preferring M_1 to M_0 avoids the problem. This is why ABC solutions like Ollie Ratmann‘s or others based on predictive performances would be of huge interest to bypass the delicate […]

4. […] the approximation of Bayes factor. The only solution seems to be using discrepancy measures as in Ratmann et al. (2009), ie (empirical) model criticism rather than (decision-theoretic) model choice. Bayes […]

5. […] in 2/(2+d). There should thus be a way to link this decision-theoretic approach with the one of Ratmann et al. since the latter take h to be part of the parameter […]

6. […] Indian restaurant closed!, I enjoy very much reading this very rich and unusual thesis. As posted earlier, I have disagreements with some of the choices made in this thesis, in particular the […]

7. […] argument in the criticisms of predictive Bayes inference. I have difficulties with the concept in general and, in the present case, there is no difficulty with using to predict the distribution of […]

8. […] on ABC for model criticism As noted in an earlier post, the ABCμ paper by Oliver Ratmann, Christophe Andrieu, Carsten Wiuf and Sylvia Richardson that […]