Thanks, Pierre, it [alas!] shows I read the paper too quickly!! The afternoon post (Frankly &tc.) would apply to me as well.

There are two local error rates at play: the conditional one introduced in Stoehr et al. (2014) and the posterior (predictive) error rate in Pudlo, Marin et al.

The only common feature between the two local error rates is that they depend on the observed data (through some summaries, because of ABC). The first one is the conditional expected value of the misclassification loss, knowing that the data (or more precisely some summaries of the data) are what we have observed. Hence, when we integrate this conditional error over the marginal distribution of the summaries of the data, we recover the misclassification error integrated over the whole prior space.
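In symbols (a sketch in my own shorthand, not notation from either paper: $S$ for the summaries, $M$ for the model index, $\hat m(\cdot)$ for the selection procedure), the conditional error rate and its integrated version read

```latex
\tau(s) \;=\; \mathbb{P}\bigl(\hat m(S) \neq M \,\big|\, S = s\bigr),
\qquad
\mathbb{E}_S\bigl[\tau(S)\bigr] \;=\; \mathbb{P}\bigl(\hat m(S) \neq M\bigr),
```

where the right-hand identity is the integration over the marginal of the summaries, recovering the prior-integrated misclassification error.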

The posterior (predictive) error rate (presented in the paper “*ABC via random forests*”) relies on an expected value over the predictive distribution knowing the observed data. Thus, it includes a second integral over the data space, which does not appear in the conditional error rate of Stoehr et al., and its computation requires new simulations drawn from the posterior distribution.
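A toy Monte Carlo sketch may make the contrast concrete. Every numeric choice below (the two Gaussian models, the sample-mean summary, the threshold classifier) is an illustrative assumption of mine, not taken from either paper; the point is only that the conditional error is a posterior probability computed at the observed summary, while the posterior predictive error adds a second layer of simulation over the data space.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Toy two-model setting (all assumptions, for illustration only):
#   model M=0: observations ~ N(0, 1); model M=1: observations ~ N(2, 1);
#   equal prior weights; summary s = sample mean of n observations;
#   a fixed classifier that selects M=1 iff s > 1.
n = 10

def classify(s):
    return (np.asarray(s) > 1.0).astype(int)

def model_posterior(s_obs):
    # s | M=m ~ N(mu_m, 1/n), so posterior model weights follow from Bayes
    like0 = math.exp(-0.5 * n * (s_obs - 0.0) ** 2)
    like1 = math.exp(-0.5 * n * (s_obs - 2.0) ** 2)
    return like1 / (like0 + like1)  # P(M=1 | s_obs)

def conditional_error(s_obs):
    # Stoehr et al. style: expected misclassification loss given S = s_obs,
    # i.e. the posterior probability of the model the classifier did NOT pick
    post1 = model_posterior(s_obs)
    return post1 if s_obs <= 1.0 else 1.0 - post1

def posterior_predictive_error(s_obs, draws=100_000):
    # Pudlo, Marin et al. style: draw model indices from the posterior,
    # simulate fresh summaries from them (the extra integral over the data
    # space), reclassify, and count disagreements
    post1 = model_posterior(s_obs)
    m = rng.binomial(1, post1, size=draws)
    s_new = rng.normal(np.where(m == 1, 2.0, 0.0), 1.0 / math.sqrt(n))
    return float(np.mean(classify(s_new) != m))

s_obs = 0.8
print(conditional_error(s_obs))           # ~0.018 in this toy setting
print(posterior_predictive_error(s_obs))  # much smaller here, ~8e-4
```

In this toy the two quantities differ by more than an order of magnitude at the same observed summary, which matches the point that they are genuinely different error rates, not two estimators of the same thing.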

As a consequence, the conditional error rate of Stoehr et al. is on the same ground as the posterior probabilities (see, for instance, Proposition 2 of the paper), a feature not shared by the posterior predictive error.