Bayesian synthetic likelihood [a reply from the authors]

[Following my comments on the Bayesian synthetic likelihood paper in JCGS, the authors sent me the following reply by Leah South (previously Leah Price).]

Thanks, Christian, for your comments!

The pseudo-marginal idea is useful here because it tells us that, in the ideal case where the model summary statistic is normal, using the unbiased estimator of the normal density gives an MCMC algorithm that converges to the same target regardless of n (the number of model simulations per MCMC iteration). It is true that the bias reappears under misspecification. We found that the target based on the simple plug-in Gaussian density was also remarkably insensitive to n. Given this insensitivity, we again call on the pseudo-marginal literature for guidance in choosing n to minimise computational effort, and we recommend the plug-in Gaussian density in BSL because it is simpler to implement.
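To make the plug-in version concrete, here is a minimal sketch of the Gaussian synthetic log-likelihood estimated from n model simulations. The names `simulate` and `summarise` are hypothetical stand-ins for a user-supplied model simulator and summary-statistic function; this is an illustration under those assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal

def synthetic_loglik(theta, s_obs, simulate, summarise, n, rng):
    """Estimate log N(s_obs; mu_n(theta), Sigma_n(theta)) from n simulations.

    `simulate(theta, rng)` and `summarise(data)` are assumed user-supplied.
    """
    # Simulate n datasets at theta and reduce each to its summary statistics.
    stats = np.array([summarise(simulate(theta, rng)) for _ in range(n)])
    mu = stats.mean(axis=0)              # plug-in Gaussian mean
    Sigma = np.cov(stats, rowvar=False)  # plug-in Gaussian covariance
    return multivariate_normal.logpdf(s_obs, mean=mu, cov=Sigma)
```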

“I am also lost to the argument that the synthetic version is more efficient than ABC, in general”

Given the parametric approximation to the summary statistic likelihood, we expect BSL to be computationally more efficient than ABC. We show this is the case theoretically in a toy example in the paper and find empirically, across a number of examples, that BSL is more computationally efficient, but we agree that further analysis would be of interest.

The concept of using random forests to handle additional summary statistics is interesting and useful. BSL was able to utilise all the information in the high-dimensional summary statistics that we considered, rather than resorting to dimension reduction (which implies a loss of information), and we believe that is a benefit of BSL over standard ABC. Further, in high-dimensional parameter applications the summary statistic dimension will necessarily be large even if there is only one statistic per parameter. BSL can be very useful in such problems. In fact we have done some work on exactly this, combining variational Bayes with synthetic likelihood.

Another benefit of BSL is that it is easier to tune: there are fewer tuning parameters, and the BSL target is highly insensitive to n. Surprisingly, BSL performs reasonably well even when the summary statistics are not normally distributed, as long as they are not highly irregular!
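To illustrate the tuning point, here is a minimal sketch of a BSL random-walk Metropolis sampler built on the `synthetic_loglik` estimator above, again assuming the hypothetical `simulate` and `summarise` plus a user-supplied `log_prior`. Beyond the proposal scale, n is essentially the only tuning parameter. In pseudo-marginal fashion, the estimated log-likelihood at the current point is retained rather than recomputed at every iteration.

```python
def bsl_mcmc(theta0, s_obs, simulate, summarise, log_prior,
             n=50, step=0.1, iters=10_000, seed=0):
    """Random-walk Metropolis on the BSL posterior (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    ll = synthetic_loglik(theta, s_obs, simulate, summarise, n, rng)
    chain = []
    for _ in range(iters):
        # Gaussian random-walk proposal.
        prop = theta + step * rng.standard_normal(theta.shape)
        ll_prop = synthetic_loglik(prop, s_obs, simulate, summarise, n, rng)
        # Metropolis-Hastings accept/reject on the estimated posterior;
        # the current log-likelihood estimate is kept, pseudo-marginal style.
        if np.log(rng.uniform()) < ll_prop + log_prior(prop) - ll - log_prior(theta):
            theta, ll = prop, ll_prop
        chain.append(theta.copy())
    return np.array(chain)
```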
