## ABC of simulation estimation with auxiliary statistics

“In the ABC literature, an estimator that uses a general kernel is known as a noisy ABC estimator.”

**A**nother arXival relating M-estimation econometrics techniques with ABC. Written by Jean-Jacques Forneron and Serena Ng from the Department of Economics at Columbia University, the paper tries to draw links between indirect inference and ABC, following the tracks of Drovandi and Pettitt [not quoted there] and proposes a *reverse* ABC sampler by

- given a realisation of the randomness, ε, creating a *one-to-one* transform of the parameter θ that corresponds to a realisation of the summary statistic;
- determining the value of the parameter θ that minimises the distance between this summary statistic and the observed summary statistic;
- weighting the above value of the parameter θ by π(θ) J(θ), where J is the Jacobian of the one-to-one transform.
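The three steps above can be sketched on a toy example; the Gaussian location model, sample-mean summary, and flat prior below are my own illustrative choices, not examples from the paper. With the summary equal to the sample mean, the map θ ↦ s(θ, ε) = θ + mean(ε) is one-to-one with Jacobian 1, so the minimiser is available in closed form and the distance reaches zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (illustrative, not from the paper): y_i ~ N(theta, 1),
# summary statistic = sample mean.
n = 50
y_obs = rng.normal(1.0, 1.0, n)
s_obs = y_obs.mean()

def reverse_sampler(n_draws, prior_pdf):
    thetas, weights = [], []
    for _ in range(n_draws):
        eps = rng.standard_normal(n)   # fixed randomness realisation
        # s(theta, eps) = theta + mean(eps) is one-to-one in theta, so the
        # minimiser of |s(theta, eps) - s_obs| has a closed form here.
        theta_star = s_obs - eps.mean()
        jac = 1.0                      # |ds/dtheta| = 1 for this transform
        thetas.append(theta_star)
        weights.append(prior_pdf(theta_star) * jac)
    thetas = np.array(thetas)
    weights = np.array(weights)
    return thetas, weights / weights.sum()

# Flat prior: the weights are then uniform and the weighted mean of the
# draws should sit close to the observed summary.
thetas, w = reverse_sampler(2000, lambda t: 1.0)
post_mean = (thetas * w).sum()
```

In this linear-Gaussian setting the draws θ* = s_obs − mean(ε) do look like posterior draws, which is precisely why the toy case hides the difficulties (non-zero minimum distance, intractable Jacobian) raised below.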

I have difficulty seeing why this sequence produces a weighted sample associated with the posterior. Unless perhaps when the minimum of the distance is zero, in which case this amounts to some inversion of the summary statistic (function). And even then, the role of the random bit ε is unclear, since there is no rejection step. The inversion of the summary statistic seems hard to promote in practice, since the transform of the parameter θ into a (random) summary is most likely highly complex.

“The posterior mean of θ constructed from the reverse sampler is the same as the posterior mean of θ computed under the original ABC sampler.”

The authors also state (p.16) that the estimators derived by their reverse method are the same as those of the original ABC approach, but this only holds asymptotically in the sample size. I am not even sure of this weaker statement, both because the tolerance does not seem to play a role then and because the authors later contrast ABC with their reverse sampler, the latter producing iid draws from the posterior (p.25).

“The prior can be potentially used to further reduce bias, which is a feature of the ABC.”

As an aside, while the paper reviews extensively the literature on minimum distance estimators (called M-estimators in the statistics literature) and on ABC, the first quote is missing the meaning of noisy ABC, which consists in a randomised version of ABC where the observed summary statistic is randomised at the same level as the simulated statistics. And the last quote does not sound right either, as it should be seen as a feature of the Bayesian approach rather than of the ABC algorithm. The paper also attributes the paternity of ABC to Don Rubin’s 1984 paper, “who suggested that computational methods can be used to estimate the posterior distribution of interest even when a model is analytically intractable” (pp.7-8). This is incorrect in that Rubin uses ABC to explain the nature of the Bayesian reasoning, but does not in the least address computational issues.
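To make the noisy ABC notion concrete, here is a minimal sketch under my own assumptions (toy Gaussian model, sample-mean summary, Gaussian kernel; none of these choices come from the paper): the observed summary is perturbed once, at the same scale as the kernel applied to the simulated summaries.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model (illustrative): y_i ~ N(theta, 1), summary = sample mean.
n = 50
y_obs = rng.normal(1.0, 1.0, n)
s_obs = y_obs.mean()
eps_tol = 0.1  # kernel scale / tolerance

def noisy_abc(n_draws):
    # Randomise the *observed* summary once, at the same level as the
    # simulated statistics -- this is what makes the algorithm "noisy".
    s_obs_noisy = s_obs + eps_tol * rng.standard_normal()
    kept = []
    for _ in range(n_draws):
        theta = rng.uniform(-5.0, 5.0)            # prior draw
        s_sim = rng.normal(theta, 1.0, n).mean()  # simulated summary
        # Gaussian kernel acceptance probability exp(-d^2 / (2 eps^2))
        d = s_sim - s_obs_noisy
        if rng.random() < np.exp(-0.5 * (d / eps_tol) ** 2):
            kept.append(theta)
    return np.array(kept)

draws = noisy_abc(20_000)
```

The randomisation of s_obs is what gives noisy ABC its calibration property, at the price of targeting the posterior given the perturbed rather than the actual summary.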

March 10, 2015 at 3:59 am

I agree with the sentiment of your post, Xian. I guess my comments on the reverse ABC method would be the following. The method promises to be easy to implement (one only needs access to an indirect inference estimator with a different random ‘seed’ substituted each time it is called) and is embarrassingly parallel. However, the method is limited to the case where there is a one-to-one correspondence between the parameter and the summary statistic, so the summary statistic must at least have the same dimension as the parameter. Thus the method is already limited in application. Further, I do not see how to obtain the Jacobian term, as the relationship between the summary statistic and the parameter is typically unknown. Perhaps it is possible to estimate the derivative terms from the first-stage output of this method (i.e., prior to weighting the samples). One setting where I could see the method ‘working’ is when the Fearnhead and Prangle (2012) summary statistics (estimates of posterior means) are used, so that there is one summary per parameter and a linear relationship between parameters and summaries might be assumed.
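On the Jacobian point, one workaround worth sketching (my own assumptions throughout, with a made-up binding function rather than anything from the paper): with ε held fixed across both evaluations, common random numbers make the map θ ↦ s(θ, ε) smooth, so a central finite difference can estimate ds/dθ even when the relationship is analytically unknown.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical binding function: with eps held fixed, theta -> s(theta, eps)
# is a smooth deterministic map. The exp form is an arbitrary toy choice.
def summary(theta, eps):
    return np.exp(theta) + eps.mean()

def jacobian_fd(theta, eps, h=1e-5):
    # Same eps on both sides: the randomness cancels in the difference,
    # leaving a central-difference estimate of ds/dtheta.
    return (summary(theta + h, eps) - summary(theta - h, eps)) / (2 * h)

eps = rng.standard_normal(50)
j = jacobian_fd(0.3, eps)   # exact derivative here is exp(0.3)
```

Each simulated summary in the reverse sampler would then carry such a finite-difference Jacobian alongside its prior weight, at the cost of two extra model evaluations per draw (more in the multivariate case).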