In the context of Minh-Ngoc Tran’s talk, having an unbiased estimator of the log-likelihood (of the summary statistic) is useful in the optimisation algorithm for Variational Bayes (http://eprints.qut.edu.au/98023/8/98023.pdf).

So the quest for unbiasedness in these two papers has nothing to do with how the target of the synthetic likelihood methods approximates the actual posterior.

The hypothesis test comment is very interesting and something that we considered (and that Wood 2010 also commented on). But for the test of multivariate normality to have sufficient power, the value of n needs to be so large that the computational gains of the synthetic likelihood are lost in the first place. I guess it is possible to at least look at normality of the marginals, although I’m not sure how useful that would be: presumably, with n large enough, this test would give a small p-value most of the time, as the summary statistic is very unlikely to be perfectly normally distributed. We have found that the BSL approach seems to be quite robust to some deviation from normality, but we have also considered examples with very heavy-tailed summaries where BSL fails completely.
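The marginal check mentioned above can be sketched as follows (a toy illustration only, not the authors’ code; the simulated summaries here are placeholders for summaries computed from data simulated at a fixed parameter value):

```python
# Sketch: checking normality of simulated summary statistics one
# marginal at a time, as a cheap alternative to a full multivariate
# normality test. The array S is a stand-in for n simulated summaries.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, d = 200, 3                    # n simulated summaries of dimension d
S = rng.normal(size=(n, d))      # placeholder: replace with model summaries

# Shapiro-Wilk test on each marginal; a small p-value flags a deviation
# from normality on that coordinate (marginal normality does not
# guarantee joint normality, of course).
pvals = [stats.shapiro(S[:, j]).pvalue for j in range(d)]
print([round(p, 3) for p in pvals])
```

As the comment notes, with n large enough such tests will reject almost surely, so the p-values are better read as a rough diagnostic than as a formal validation.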

You are correct that BSL has a hidden curse of dimensionality, in that the normality assumption is likely to degrade as summary statistics are added, so one must be careful about how summaries are chosen. In future research we plan to try to relax the normality assumption (while hopefully maintaining some computational advantage).

Sorry for the long reply.

Thanks, Anthony, for such a prompt reply! I understand and appreciate the connection with pseudo-marginal methods, as it provides a validation of the algorithm, in that the Markov chain converges to a well-defined limit. In line with your last paragraph, I would suggest running a goodness-of-fit test on simulated data (since it is available) in order to assess how well the synthetic likelihood fits the distribution of the summary statistics, or of a subset of those.

There, the use of a non-negative, unbiased estimate of the auxiliary likelihood is important primarily because it ensures that the Markov chain is a pseudo-marginal Markov chain with a known invariant probability measure (assuming the product of the auxiliary likelihood and the prior is integrable). The adequacy of the approximation is then determined solely by the closeness of the well-defined auxiliary likelihood to the actual likelihood.
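For concreteness, a minimal sketch of the synthetic likelihood estimate at a fixed parameter value is given below. This is the plug-in Gaussian version, which is biased; the unbiased, non-negative estimator of the Gaussian density discussed above differs in its normalising constants. The simulator and parameter values are placeholders for illustration:

```python
# Sketch (not the papers' actual code): plug-in Gaussian synthetic
# log-likelihood. Summaries are simulated from the model at theta,
# a Gaussian is fitted to them, and the observed summary is evaluated
# under that fitted Gaussian.
import numpy as np
from scipy import stats

def synthetic_loglik(s_obs, simulate, theta, n, rng):
    """Plug-in Gaussian synthetic log-likelihood from n model simulations."""
    S = np.array([simulate(theta, rng) for _ in range(n)])
    mu = S.mean(axis=0)                 # sample mean of simulated summaries
    Sigma = np.cov(S, rowvar=False)     # sample covariance
    return stats.multivariate_normal.logpdf(s_obs, mean=mu, cov=Sigma)

# Toy usage with a stand-in simulator (an assumption, for illustration):
rng = np.random.default_rng(0)
sim = lambda theta, rng: rng.normal(theta, 1.0, size=2)
ll = synthetic_loglik(np.array([0.1, -0.2]), sim, theta=0.0, n=100, rng=rng)
print(ll)
```

Plugging such an estimate into a Metropolis-Hastings acceptance ratio yields the pseudo-marginal chain referred to above, whose invariant measure is well-defined when the estimator is non-negative and unbiased.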

I completely agree that it would be interesting if there were rigorously-justified ways to assess this adequacy in practice, taking into consideration both the parametric auxiliary model as well as the choice of n, which together define the auxiliary likelihood.
