## ABC and SMC²

**T**oday, Nicolas Chopin gave his talk at CREST. While he tried to encompass as much as possible of the background on ABC for a general audience (he is also giving the same talk in Nice this week), the talk led me to think about the parameterisation of ABC methods. He chose a non-parametric presentation, as in Fearnhead and Prangle. From this viewpoint, the choice of the kernel **K** and of the distance measure should not matter so much, when compared with the choice of the bandwidth/tolerance. Further, the non-parametric flavour tells us that the tolerance itself can be optimised for a given sample size, i.e., in ABC settings, for a given number of iterations. When looking at MCMC-ABC, I briefly wondered whether the tolerance should be optimised against the Metropolis-Hastings acceptance probability, because, from this perspective, the ratio of the kernels is a ratio of estimators of the densities of the data at both values of the parameter(s). (Rather than an estimator of the ratio.)
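To make the roles of the kernel and the tolerance concrete, here is a minimal sketch of vanilla rejection ABC on a toy Gaussian model (my own illustrative example, not from Nicolas' talk; the prior, summary, and settings are all assumptions). The kernel **K** is a hard indicator on the distance between summaries, so the tolerance eps plays exactly the role of a bandwidth:

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_rejection(y_obs, n_sims=50_000, eps=0.1):
    """Vanilla rejection ABC for the mean of a N(theta, 1) sample.

    Summary statistic: the sample mean. Kernel K: an indicator on the
    distance between summaries, so eps acts as the bandwidth/tolerance.
    """
    s_obs = y_obs.mean()
    theta = rng.normal(0.0, 10.0, size=n_sims)            # diffuse prior draws
    y_sim = rng.normal(theta[:, None], 1.0, size=(n_sims, y_obs.size))
    dist = np.abs(y_sim.mean(axis=1) - s_obs)             # distance between summaries
    return theta[dist < eps]                              # accepted draws = ABC posterior sample

y_obs = rng.normal(2.0, 1.0, size=100)
post = abc_rejection(y_obs)
```

Shrinking eps sharpens the approximation but lowers the acceptance rate, which is the trade-off the non-parametric viewpoint optimises for a given simulation budget.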

**T**he second part of Nicolas’ talk was about SMC² and hence unrelated to ABC, except that he mentioned that SMC is an (unbiased) approximation for the Metropolis-Hastings acceptance probability. Which is also an interpretation of the ideal ABC (zero tolerance) and the noisy ABC. (Plus, Marc Beaumont is a common denominator for both perspectives!) Unfortunately, Nicolas ran out of time (because of a tight schedule) and did not give much detail on SMC². Overall, his motivational introduction was quite worth it, though! Here are his slides:

**T**his talk also led me to reflect on the organisation of my incoming PhD class on ABC: I should

- justify MCMC-ABC from first principles, for regular and noisy ABC;
- aggregate the literature on non-parametric justifications of ABC. The 2002 paper by Beaumont et al. remains a reference there;
- understand the link with indirect inference (the book by Gouriéroux and Monfort is sitting on my desk!);
- answer the questions “is ABC the sole and unique solution?” and “how can we evaluate/estimate the error?”.

January 26, 2012 at 12:12 am

[…] mentioned in the latest post on ABC, I am giving a short doctoral course on ABC methods and convergence at CREST next week. I have now […]

January 17, 2012 at 11:17 pm

Dani

The simulated method of moments is a particular case of indirect inference, where the simulated instrumental (auxiliary) model is the correctly specified model, and is used only to generate the moment conditions, which are not available in analytic form.

In the article by Ron and Rob, the statistical model can be overidentified (the number of parameters of the statistical model is greater than the number of parameters of the scientific model), and so the mapping is not necessarily 1-to-1.

The connection between moment conditions and approximate Bayesian inference is explored in:

http://www.dynare.org/wp-repo/dynarewp008.pdf

INDIRECT LIKELIHOOD INFERENCE

MICHAEL CREEL AND DENNIS KRISTENSEN

ABSTRACT. Given a sample from a fully specified parametric model, let Zn be a given finite-dimensional statistic – for example, an initial estimator or a set of sample moments. We propose to (re-)estimate the parameters of the model by maximizing the likelihood of Zn. We call this the maximum indirect likelihood (MIL) estimator. We also propose a computationally tractable Bayesian version of the estimator which we refer to as a Bayesian Indirect Likelihood (BIL) estimator. In most cases, the density of the statistic will be of unknown form, and we develop simulated versions of the MIL and BIL estimators. We show that the indirect likelihood estimators are consistent and asymptotically normally distributed, with the same asymptotic variance as that of the corresponding efficient two-step GMM estimator based on the same statistic. However, our likelihood-based estimators, by taking into account the full finite-sample distribution of the statistic, are higher order efficient relative to GMM-type estimators. Furthermore, in many cases they enjoy a bias reduction property similar to that of the indirect inference estimator. Monte Carlo results for a number of applications including dynamic and nonlinear panel data models, a structural auction model and two DSGE models show that the proposed estimators indeed have attractive finite sample properties.

January 17, 2012 at 7:22 am

I recall that I had read, and liked a lot, the paper by Ron and Rob.

One thing that I believe is limiting their approach is the “identifiability” of the mapping (a 1-to-1 mapping) between scientific and statistical models.

I am new to the ABC literature (and this is the main motivation why I started reading your blog regularly)… please, Christian, correct me if I am wrong… but ABC, I believe, does not assume any form of “identifiability”.

The way I see it is that if there are two sets of parameters that generate the same summary statistics through the simulated data, we still have valid ABC-posteriors (even if they will probably appear multimodal).

January 17, 2012 at 7:27 am

Correct! There is no assumption of that kind. Once a set of statistics is chosen, ABC approximates the posterior distribution associated with those statistics. If the set is too small and fails to identify the parameters, the posterior will be the prior (on some combinations of the parameters)…
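To make this point concrete, here is a toy sketch (my own illustrative example, not from the discussion above): a model in which theta and −theta generate the same data, so the ABC posterior is perfectly valid but comes out bimodal, exactly as described in the comment:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy non-identified model: y ~ N(theta**2, 1), so theta and -theta
# generate the same data. Summary statistic: the sample mean.
y_obs = rng.normal(1.5**2, 1.0, size=200)   # "true" theta = 1.5 (or -1.5)
s_obs = y_obs.mean()

theta = rng.uniform(-4.0, 4.0, size=100_000)           # flat prior draws
# Simulate the summary directly: the mean of 200 draws from N(theta**2, 1)
# is distributed as N(theta**2, 1/sqrt(200)**2).
s_sim = rng.normal(theta**2, 1.0 / np.sqrt(200))
post = theta[np.abs(s_sim - s_obs) < 0.05]             # ABC posterior sample

# Both signs survive the rejection step: the posterior is bimodal,
# with modes near +1.5 and -1.5.
```

Nothing breaks in the algorithm; the lack of identifiability simply shows up as the two symmetric modes of the accepted sample.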

January 17, 2012 at 2:38 am

Christian

Another connection between Bayesian methods and indirect inference is:

“On the Determination of General Scientific Models With Application to Asset Pricing”

A. Ronald Gallant, Robert E. McCulloch. Journal of the American Statistical Association. March 1, 2009, 104(485): 117-131. doi:10.1198/jasa.2009.0008.

from the abstract:

The habit model exhibits four characteristics that are often present in models developed from scientific considerations: (1) a likelihood is not available; (2) prior information is available; (3) a portion of the prior information is expressed in terms of functionals of the model that cannot be converted into an analytic prior on model parameters; (4) the model can be simulated. The underpinning of our approach is that, in addition, (5) a parametric statistical model for the data, determined without reference to the scientific model, is known. In general one can expect to be able to determine a model that satisfies (5) because very richly parameterized statistical models are easily accommodated. We develop a computationally intensive, generally applicable, Bayesian strategy for estimation and inference for scientific models that meet this description together with methods for assessing model adequacy. An important adjunct to the method is that a map from the parameters of the scientific model to functionals of the scientific and statistical models becomes available. This map is a powerful tool for understanding the properties of the scientific model.

January 17, 2012 at 7:04 am

Thanks Marcio, I will look at the paper today!

January 19, 2012 at 8:34 am

I tried to read the paper but found it extremely convoluted. From what I understand, this is a pseudo-likelihood method with a simulation component, but there is no approximation assessment as in ABC, as far as I can judge. The pseudo-model stands as is.

January 17, 2012 at 1:28 am

I see you are correctly pointing to the indirect inference literature. What about econometric techniques like the Simulated Method of Moments (SMM)? Do you think they have a connection with ABC as well?

The way I see it is that, in SMM, the “summary statistics” are a collection of moments that are thought to be relevant by the economists. The way they are matched is through a moment-score equation following the GMM literature. You could formulate similar ABC algorithms…

D.

January 17, 2012 at 7:05 am

Yes, this is right: the simulated method of moments is a special case of simulated inference and thus relates to ABC algorithms as well.
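A rough sketch of what such an SMM-flavoured ABC algorithm could look like (my own toy construction, not from the exchange above): the summaries are a vector of sample moments, compared through a GMM-style weighted distance, here with crude diagonal weights where GMM theory would suggest an optimal weighting matrix:

```python
import numpy as np

rng = np.random.default_rng(2)

def smm_abc(y_obs, n_sims=20_000, eps=0.15):
    """Rejection ABC with SMM-style summaries for a N(theta, 1) model.

    Summaries: the first two sample moments (mean and variance),
    matched through a diagonal GMM-type weighted distance.
    """
    m_obs = np.array([y_obs.mean(), y_obs.var()])
    theta = rng.uniform(0.0, 5.0, size=n_sims)          # flat prior on the mean
    y = rng.normal(theta[:, None], 1.0, size=(n_sims, y_obs.size))
    m_sim = np.stack([y.mean(axis=1), y.var(axis=1)], axis=1)
    w = np.array([1.0, 0.5])      # crude diagonal weights; GMM would use an optimal W
    d = np.sqrt(((m_sim - m_obs) ** 2 * w).sum(axis=1))
    return theta[d < eps]                               # accepted draws = ABC posterior sample

y_obs = rng.normal(2.0, 1.0, size=100)
post = smm_abc(y_obs)
```

Replacing the hard rejection step with a score-based matching criterion would bring this even closer to the SMM/GMM estimators discussed in the comment.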