## Archive for misspecification

## HW AMS & EPSRC MAC-MIGS CDT seminar

Posted in Statistics with tags ABC convergence, Bayes Centre, Centre for Doctoral Training, Edinburgh, EPSRC, Heriot-Watt University, Maxwell Institute Graduate School, misspecification, Scotland, University of Edinburgh on October 10, 2019 by xi'an

**S**ome explanation for all these acronyms! I am giving an Actuarial Mathematics & Statistics (AMS) seminar at Heriot-Watt (HW) University, in Edinburgh, tomorrow. But it takes place in the (new) Bayes Centre, at the University of Edinburgh, rather than on the campus of Heriot-Watt, as this is also the launch day of the Centre for Doctoral Training (CDT) on Mathematical Modelling, Analysis, & Computation (MAC), shared between Heriot-Watt and the University of Edinburgh, funded by the EPSRC, and located in the Maxwell Institute Graduate School (MIGS) in the Bayes Centre. My talk will be on ABC convergence and misspecification.

## asymptotics of synthetic likelihood [a reply from the authors]

Posted in Books, Statistics, University life with tags ABC, approximate Bayesian inference, Bayesian inference, Bayesian synthetic likelihood, central limit theorem, effective sample size, frequentist confidence, local regression, misspecification, pseudo-marginal MCMC, response, tolerance, uncertainty quantification on March 19, 2019 by xi'an

*[Here is a reply from David, Chris, and Robert on my earlier comments, highlighting some points I had missed or misunderstood.]*

Dear Christian

Thanks for your interest in our synthetic likelihood paper and the thoughtful comments you wrote about it on your blog. We’d like to respond to the comments to avoid some misconceptions.

Your first claim is that we don’t account for the differing number of simulation draws required for each parameter proposal in ABC and synthetic likelihood. This doesn’t seem correct; see the discussion below Lemma 4 at the bottom of page 12. The comparison between methods is made on the basis of effective sample size per model simulation.

As you say, in the comparison of ABC and synthetic likelihood, we consider the ABC tolerance ε and the number of simulations per likelihood estimate M in synthetic likelihood as functions of n. Then, for tuning parameter choices that result in the same uncertainty quantification asymptotically (and the same asymptotically as the true posterior given the summary statistic), we can look at the effective sample size per model simulation. Your objection here seems to be that even though uncertainty quantification is similar for large n, for a finite n the uncertainty quantification may differ. This is true, but similar arguments can be directed at almost any asymptotic analysis, so this doesn’t seem a serious objection to us at least. We don’t find it surprising that the strong synthetic likelihood assumptions, when accurate, give you something extra in terms of computational efficiency.
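*[A note for readers: one way to make the "effective sample size per model simulation" criterion concrete, with notation of my own rather than the paper's, is*

\[
\text{efficiency} \;=\; \frac{\mathrm{ESS}}{\text{total number of model simulations}}
\;=\;
\begin{cases}
\mathrm{ESS}/(M\,T) & \text{synthetic likelihood run for } T \text{ MCMC iterations,}\\
\mathrm{ESS}/N_{\mathrm{prop}} & \text{ABC with } N_{\mathrm{prop}} \text{ proposals simulated in total,}
\end{cases}
\]

*so that both methods are charged for every model simulation, whether or not the corresponding proposal is accepted.]*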

We think mixing up the synthetic likelihood/ABC comparison with the comparison between correctly specified and misspecified covariance in Bayesian synthetic likelihood is a bit unfortunate, since these situations are quite different. The first involves correct uncertainty quantification asymptotically for both methods. Only a very committed reader who looked at our paper in detail would understand what you say here. The question we are asking with the misspecified covariance is the following. If the usual Bayesian synthetic likelihood analysis is too much for our computational budget, can something still be done to quantify uncertainty? We think the answer is yes, and with the misspecified covariance we can reduce the computational requirements by an order of magnitude, but with an appropriate cost statistically speaking. The analyses with misspecified covariance give valid frequentist confidence regions asymptotically, so this may still be useful if it is all that can be done. The examples as you say show something of the nature of the trade-off involved.

We aren’t quite sure what you mean when you are puzzled about why we can avoid requiring M to be O(√n). Note that because of the way the summary statistics satisfy a central limit theorem, elements of the covariance matrix of S are already O(1/n), and so, for example, in estimating μ(θ) as an average of M simulations of S, the elements of the covariance matrix of the estimator of μ(θ) are O(1/(Mn)). Similar remarks apply to the estimation of Σ(θ). We are not sure whether that gets to the heart of what you are asking here or not.
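*[For what it is worth, this scaling is easy to check numerically. Below is a quick sketch of mine on a toy Gaussian model that is not from the paper: the summary statistic is a mean of n observations, so its variance is O(1/n), and averaging M simulated summaries to estimate μ(θ) yields a Monte Carlo variance of order O(1/(Mn)).]*

```python
import numpy as np

rng = np.random.default_rng(0)
theta, reps = 0.5, 1000   # toy location parameter, Monte Carlo replications

for n in (100, 400):       # sample size behind each summary statistic
    for M in (10, 50):     # model simulations per likelihood estimate
        est = np.empty(reps)
        for r in range(reps):
            # one summary = mean of n draws from N(theta, 1), so Var(S) = 1/n
            S = rng.normal(theta, 1.0, size=(M, n)).mean(axis=1)
            # mu(theta) estimated by averaging the M simulated summaries
            est[r] = S.mean()
        print(f"n={n:4d} M={M:3d}  empirical var={est.var():.2e}  1/(Mn)={1/(M*n):.2e}")
```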

In our email discussion you mention the fact that if M increases with n, then the computational burden of a single likelihood approximation, and hence of generating a single parameter sample, also increases with n. This is true, but unavoidable if you want exact uncertainty quantification asymptotically, and M can be allowed to increase with n at any rate. With a fixed M there will be some approximation error, which is often small in practice. The situation with vanilla ABC methods will be even worse, in terms of the number of proposals required to generate a single accepted sample, in the case where exact uncertainty quantification is desired asymptotically. As shown in Li and Fearnhead (2018), if regression adjustment is used with ABC and you can find a good proposal in their sense, one can avoid this. For vanilla ABC, if the focus is on point estimation and exact uncertainty quantification is not required, the situation is better. Of course, as you show in your nice recent ABC paper for misspecified models, joint with David Frazier and Judith Rousseau, the choice of whether to use regression adjustment can be subtle in the case of misspecification.
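*[For readers unfamiliar with regression adjustment, here is a minimal sketch of the idea on a toy model of my own, in the spirit of Beaumont et al.'s local-linear correction rather than the specific Li and Fearnhead construction: accepted parameter values are shifted along a linear regression of θ on the simulated summaries, as if the summaries had matched the observed value exactly.]*

```python
import numpy as np

rng = np.random.default_rng(1)

# toy model: data are n draws from N(theta, 1), summary = sample mean
n, theta_true = 100, 1.0
s_obs = rng.normal(theta_true, 1.0, size=n).mean()

# vanilla rejection ABC
N = 50_000
theta = rng.normal(0.0, 2.0, size=N)            # draws from a N(0, 4) prior
s_sim = rng.normal(theta, 1.0 / np.sqrt(n))     # simulated summaries (exact in this toy model)
dist = np.abs(s_sim - s_obs)
eps = np.quantile(dist, 0.01)                   # tolerance keeping the closest 1%
keep = dist <= eps
theta_acc, s_acc = theta[keep], s_sim[keep]

# local-linear regression adjustment of the accepted draws
X = np.column_stack([np.ones(s_acc.size), s_acc - s_obs])
beta, *_ = np.linalg.lstsq(X, theta_acc, rcond=None)
theta_adj = theta_acc - beta[1] * (s_acc - s_obs)   # shift draws as if s_sim equalled s_obs

print("rejection ABC : mean %.3f  sd %.3f" % (theta_acc.mean(), theta_acc.std()))
print("adjusted  ABC : mean %.3f  sd %.3f" % (theta_adj.mean(), theta_adj.std()))
```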

In our previous paper Price, Drovandi, Lee and Nott (2018) (which you also reviewed on this blog) we observed that if the summary statistics are exactly normal, then you can sample from the summary statistic posterior exactly with finite M in the synthetic likelihood by using pseudo-marginal ideas together with an unbiased estimate of a normal density due to Ghurye and Olkin (1962). When S satisfies a central limit theorem so that S is increasingly close to normal as n gets large, we conjecture that it is possible to get exact uncertainty quantification asymptotically with fixed M if we use the Ghurye and Olkin estimator, but we have no proof of that yet (if it is true at all).
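*[As a reminder of the pseudo-marginal argument underlying this point: if p̂(s|θ) is a non-negative, unbiased estimator of the summary-statistic likelihood p(s|θ), then a Metropolis-Hastings algorithm that recycles the current estimate rather than refreshing it, and accepts a move from θ to θ′ with probability*

\[
\min\left\{1,\;\frac{\hat{p}(s\mid\theta')\,\pi(\theta')\,q(\theta\mid\theta')}{\hat{p}(s\mid\theta)\,\pi(\theta)\,q(\theta'\mid\theta)}\right\},
\]

*targets the exact posterior π(θ|s) as its θ-marginal, whatever the value of M. This is why an unbiased estimator of the normal density removes the dependence on M in the exactly normal case.]*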

Thanks again for being interested enough in the paper to comment, much appreciated.

David, Chris, Robert.

## ISBA 18 tidbits

Posted in Books, Mountains, pictures, Running, Statistics, Travel, University life with tags ABC, ABC in Edinburgh, Bayesian inference, coreset, deep learning, Edinburgh, empirical likelihood, Ironman Edinburgh, ISBA 2018, kilt, marginal likelihood, misspecification, misspecified model, non-local priors, non-parametrics, PAC-Bayesian, posters, Scotland on July 2, 2018 by xi'an

**A**mong a continuous sequence of appealing sessions at this ISBA 2018 meeting [says a member of the scientific committee!], I happened to attend two talks [with a wee bit of overlap] by Sid Chib in two consecutive sessions, because his co-author Ana Simoni (CREST) was unfortunately sick. Their work was about models defined by a collection of moment conditions, as often happens in econometrics, developed in a recent JASA paper by Chib, Shin, and Simoni (2017), with an extension towards defining conditional expectations through a functional basis. The main approach relies on exponentially tilted empirical likelihoods, which reminded me of the empirical likelihood [BCel] implementation we ran with Kerrie Mengersen and Pierre Pudlo a few years ago, as a substitute to ABC. This approach made me wonder how Bayesian the estimating equation concept really is, as it should somewhat involve a nonparametric prior under the moment constraints.
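As a quick reminder of what an exponentially tilted empirical likelihood is, here is a minimal sketch on a toy moment condition of my own choosing (not the Chib, Shin, and Simoni implementation): for a moment condition E[g(X,θ)]=0, one solves a convex dual problem for the tilting vector λ, reweights the observations accordingly, and takes the product of the resulting weights as the ETEL.

```python
import numpy as np
from scipy.optimize import minimize

def log_etel(theta, x, g):
    """Log exponentially tilted empirical likelihood for the moment condition E[g(x, theta)] = 0."""
    G = g(x, theta)                                   # n x d matrix of moment evaluations
    dual = lambda lam: np.mean(np.exp(G @ lam))       # convex dual in the tilting vector lambda
    lam = minimize(dual, np.zeros(G.shape[1]), method="BFGS").x
    logw = G @ lam - np.log(np.exp(G @ lam).sum())    # log of the normalised tilted weights
    return logw.sum()                                 # log ETEL(theta) = sum_i log w_i

# toy example: data from N(1, 1), single moment condition g(x, theta) = x - theta
rng = np.random.default_rng(2)
x = rng.normal(1.0, 1.0, size=200)
g = lambda x, theta: (x - theta)[:, None]

for theta in (0.5, 1.0, 1.5):
    print(f"theta = {theta:.1f}   log ETEL = {log_etel(theta, x, g):.2f}")
```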

Note that Sid’s [talks and] papers are disconnected from ABC, as everything comes in closed form, apart from the empirical likelihood derivation, as we actually found in our own work! But this could become a substitute model for ABC uses. For instance, identifying the parameter θ of the model by identifying equations. Would that impose too much input from the modeller? I figure I came up with this notion mostly because of the emphasis on proxy models the previous day at ABC in ‘burgh! Another connected item of interest in the work is the possibility of accounting for misspecification of these moment conditions by introducing a vector of errors with a spike & slab distribution, although I am not sure this is 100% necessary without getting further into the paper(s) [blame conference pressure on my time].

Another highlight was attending a fantastic poster session Monday night on computational methods, except that I would have needed four more hours to get through each and every poster. This new version of ISBA has split the posters between two sites (great) and by themes (not so great!), while I would have preferred more sites covering all themes over all nights, to lower the noise (still bearable this year) and to increase the chance of checking all posters of interest in a particular theme…

Mentioning as well a great talk by Dan Roy about assessing deep learning performances by what he calls non-vacuous error bounds, namely through PAC-Bayesian bounds. One major comment of his was about deep learning models being much more non-parametric (with the number of parameters rising with the number of observations) than parametric models, meaning that generative adversarial constructs such as the one I discussed a few days ago may face a fundamental difficulty, as models are taken at face value there.

On closed-form solutions, a closed-form Bayes factor for component selection in mixture models by Fúquene, Steel and Rossell that resembles the Savage-Dickey version, without the measure-theoretic difficulties. But with non-local priors. And closed-form conjugate priors for the probit regression model, using unified skew-normal priors, as exhibited by Daniele Durante. These are products of normal cdfs and pdfs, and allow for closed-form marginal likelihoods and marginal posteriors as well. (The approach is not exactly conjugate as the prior and the posterior are not in the same family.)

And in the final session I attended, there were two talks on scalable MCMC: one on coresets, which will require some time and effort to assimilate, by Trevor Campbell and Tamara Broderick, and another one using Poisson subsampling, by Matias Quiroz and co-authors, which did not completely convince me (but this was the end of a long day…)

All in all, this has been a great edition of the ISBA meetings, if quite intense due to a non-stop schedule, with a very efficient organisation that made parallel sessions manageable and poster sessions back to a reasonable scale [although I did not once manage to cross the street to the other session]. Being in unreasonably sunny Edinburgh helped a lot obviously! I am a wee bit disappointed that no one else followed my call to wear a kilt, but I had low expectations to start with… And too bad I missed the Ironman 70.3 Edinburgh by one day!

## Bayesian synthetic likelihood [a reply from the authors]

Posted in Books, pictures, Statistics, University life with tags Bayesian synthetic likelihood, misspecification, pseudo-marginal, variational Bayes methods on December 26, 2017 by xi'an

*[Following my comments on the Bayesian synthetic likelihood paper in JCGS, the authors sent me the following reply by Leah South (previously Leah Price).]*

Thanks Christian for your comments!

The pseudo-marginal idea is useful here because it tells us that in the ideal case in which the model statistic is normal, and if we use the unbiased density estimator of the normal, then we have an MCMC algorithm that converges to the same target regardless of the value of n (the number of model simulations per MCMC iteration). It is true that the bias reappears in the case of misspecification. We found that the target based on the simple plug-in Gaussian density was also remarkably insensitive to n. Given this insensitivity, we consider calling again on the pseudo-marginal literature to offer guidance in choosing n to minimise computational effort, and we recommend the use of the plug-in Gaussian density in BSL because it is simpler to implement.
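*[To make the plug-in version concrete for readers, here is a minimal sketch of mine of a BSL-style random-walk MCMC with the simple plug-in Gaussian density, on a toy simulator that is not from the paper; n below is the number of model simulations per iteration, as in the reply.]*

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)

def simulate_summaries(theta, n, rng):
    """Toy simulator: each of the n simulations yields a 2-d summary (mean, variance) of 50 draws."""
    x = rng.normal(theta, 1.0, size=(n, 50))
    return np.column_stack([x.mean(axis=1), x.var(axis=1)])

def synthetic_loglik(s_obs, theta, n, rng):
    """Plug-in Gaussian synthetic log-likelihood based on n model simulations."""
    S = simulate_summaries(theta, n, rng)
    mu, Sigma = S.mean(axis=0), np.cov(S, rowvar=False)
    return multivariate_normal.logpdf(s_obs, mean=mu, cov=Sigma)

# observed summaries generated under theta = 1
s_obs = simulate_summaries(1.0, 1, rng)[0]

# random-walk MCMC on theta (flat prior), re-estimating the synthetic likelihood at each proposal
theta, ll = 0.0, synthetic_loglik(s_obs, 0.0, n=50, rng=rng)
chain = []
for _ in range(2000):
    prop = theta + 0.3 * rng.normal()
    ll_prop = synthetic_loglik(s_obs, prop, n=50, rng=rng)
    if np.log(rng.uniform()) < ll_prop - ll:
        theta, ll = prop, ll_prop
    chain.append(theta)

print("posterior mean estimate:", np.mean(chain[500:]))
```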

“I am also lost to the argument that the synthetic version is more efficient than ABC, in general”

Given the parametric approximation to the summary statistic likelihood, we expect BSL to be computationally more efficient than ABC. We show this is the case theoretically in a toy example in the paper and find empirically on a number of examples that BSL is more computationally efficient, but we agree that further analysis would be of interest.

The concept of using random forests to handle additional summary statistics is interesting and useful. BSL was able to utilise all the information in the high-dimensional summary statistics that we considered rather than resorting to dimension reduction (implying a loss of information), and we believe that is a benefit of BSL over standard ABC. Further, in high-dimensional parameter applications the summary statistic dimension will necessarily be large even if there is one statistic per parameter. BSL can be very useful in such problems. In fact we have done some work on exactly this, combining variational Bayes with synthetic likelihood.

Another benefit of BSL is that it is easier to tune (there are fewer tuning parameters and the BSL target is highly insensitive to n). Surprisingly, BSL performs reasonably well when the summary statistics are not normally distributed — as long as they aren’t highly irregular!