αBC

The week before last, Peters, Sisson and Fan posted a new ABC paper on arXiv, where they propose to use ABC to run Bayesian inference on α-stable distributions. Since those distributions are defined through their characteristic function (their densities having no closed form in general),

\varphi(t) = \exp \left\{ \iota t \mu - |ct|^\alpha(1-\iota \beta \text{sign}(t)\Phi ) \right\}

where \iota^2=-1, \alpha,\beta,\mu,c are the parameters of the distribution, and \Phi=\tan(\pi\alpha/2) for \alpha\ne 1 (with \Phi=-(2/\pi)\log|t| when \alpha=1), it is certainly of interest to find a way to handle those complex distributions by likelihood-free methods. My former student Roberto Casarin worked on this problem in the univariate case and proposed an MCMC approach, but he was alas not successful in getting it published, due to another paper appearing at the same time… From a quick perusal, I think Peters et al.’s approach relies on a discretisation of the characteristic function which, while maybe unavoidable, modifies the nature of the problem. Within this framework, the paper examines the performance of several sets of summary statistics in terms of mean square error on the true parameters, and compares the results with MCMC approaches when available. Because the behaviour of a genuine Bayesian estimate is unknown in this case, this comparison, while interesting, does not tell us whether or not the ABC approximation is doing well in this complex setting.
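To make the likelihood-free idea concrete, here is a minimal ABC rejection sketch for a symmetric α-stable model (β=0, μ=0, c=1), using the Chambers–Mallows–Stuck sampler and the real part of the empirical characteristic function as summary statistics. All tuning choices (grid of t values, prior range, acceptance fraction) are my own illustrative assumptions, not the summaries or settings of Peters, Sisson and Fan:

```python
import numpy as np

def rstable(alpha, size, rng):
    """Chambers-Mallows-Stuck draws from a symmetric alpha-stable law (beta=0)."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))

def ecf_summary(x, ts=(0.2, 0.5, 1.0, 2.0)):
    """Real part of the empirical characteristic function at a few t values;
    for a symmetric law the imaginary part is negligible, so this suffices."""
    return np.array([np.mean(np.cos(t * x)) for t in ts])

rng = np.random.default_rng(0)
x_obs = rstable(1.7, 500, rng)           # synthetic "observed" data, true alpha = 1.7
s_obs = ecf_summary(x_obs)

n_sim, keep = 5000, 100
alphas = rng.uniform(1.1, 2.0, n_sim)    # flat prior on alpha
dists = np.array([np.linalg.norm(ecf_summary(rstable(a, 500, rng)) - s_obs)
                  for a in alphas])
accepted = alphas[np.argsort(dists)[:keep]]  # keep the closest 2% of simulations
print(f"ABC posterior mean for alpha: {accepted.mean():.2f}")
```

Note that evaluating the characteristic function only on a finite grid of t values is exactly the kind of discretisation mentioned above: the summaries are no longer sufficient and the target is an ABC approximation rather than the genuine posterior.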

One Response to “αBC”

  1. Dear Christian,
    Your blog is an infinite source of information. Thanks for that.
    Here you write that we do not know whether ABC is doing well in this setting because we do not have a genuine Bayesian estimate. I am quite puzzled by this opinion: if we had a genuine Bayesian estimate, we would not need ABC and would not really care whether ABC is doing well or not. If the setting is complex enough to require ABC, it is likely that no genuine estimate is available for comparison.
    In the favourable situation where an MCMC algorithm exists for the same problem, we can compare the ABC and MCMC posterior distributions. However, we have no guarantee that the MCMC posterior is the right one: because of convergence issues, the MCMC posterior can for instance be too concentrated around its mode. I find validation of ABC algorithms quite difficult, and the mean square error criterion does not seem Bayesian to me.

    Cheers, Michael
