**D**avid Frazier (Monash University) and Chris Drovandi (QUT) have recently come up with a robustness study of Bayesian synthetic likelihood that somehow mirrors our own work with David. In a sense, Bayesian synthetic likelihood is definitely misspecified from the start in assuming a Normal distribution on the summary statistics. When the data generating process is misspecified, even were the Normal distribution the “true” model or an appropriately converging pseudo-likelihood, the simulation-based evaluation of the first two moments of the Normal is biased. Of course, for a choice of a summary statistic with limited information, the model can still be *weakly compatible* with the data in that there exists a pseudo-true value of the parameter θ⁰ for which the synthetic mean μ(θ⁰) is the mean of the statistics. (Sorry if this explanation of mine sounds unclear!) Or rather the Monte Carlo estimate of μ(θ⁰) coincides with that mean. The same Normal toy example as in our paper leads to very poor performances in the MCMC exploration of the (unsympathetic) synthetic target. The robustification of the approach as proposed in the paper is to bring in an extra parameter to correct for the bias in the mean, with an additional Laplace prior on that bias parameter to aim at sparsity. Or, similarly, an extra parameter inflating the variance matrix. This over-parameterisation of the model obviously prevents the MCMC from getting stuck (when implementing a random walk Metropolis scaled to the target).
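To fix ideas, the synthetic likelihood evaluation described above, with the optional mean-adjustment parameter of the robust version, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the simulator interface, summary dimension, and the small jitter on the covariance are all assumptions of mine.

```python
import numpy as np

def synthetic_loglik(theta, s_obs, simulate, m=200, gamma=None, rng=None):
    """Gaussian synthetic log-likelihood of observed summaries s_obs.

    theta    : model parameter passed to the (user-supplied) simulator
    simulate : function (theta, rng) -> vector of summary statistics
    m        : number of simulated summary vectors used to fit the Normal
    gamma    : optional mean-adjustment vector (robust BSL); None = standard BSL
    """
    rng = np.random.default_rng() if rng is None else rng
    sims = np.array([simulate(theta, rng) for _ in range(m)])
    mu = sims.mean(axis=0)
    if gamma is not None:
        # shift the synthetic mean to absorb misspecification bias
        mu = mu + gamma
    # empirical covariance of the simulated summaries, slightly jittered
    cov = np.cov(sims, rowvar=False) + 1e-8 * np.eye(len(mu))
    diff = s_obs - mu
    _, logdet = np.linalg.slogdet(cov)
    quad = diff @ np.linalg.solve(cov, diff)
    return -0.5 * (logdet + quad + len(mu) * np.log(2 * np.pi))
```

With a badly biased simulator, a suitable γ restores a high synthetic likelihood at the pseudo-true value, which is what lets the adjusted chain move where the plain one gets stuck.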

## Archive for pseudo-likelihood

## robust Bayesian synthetic likelihood

Posted in Statistics with tags ABC, Australia, Bayesian synthetic likelihood, Brisbane, industrial ruins, MCMC, Melbourne, Metropolis-Hastings algorithm, misspecified model, Monash University, pseudo-likelihood, QUT, summary statistics, Sydney Harbour on May 16, 2019 by xi'an

## “more Bayesian” GANs

Posted in Books, Statistics with tags Bayesian GANs, compatible conditional distributions, GANs, MCMC convergence, pseudo-likelihood on December 21, 2018 by xi'an

**O**n X validated, I got pointed to this recent paper by He, Wang, Lee and Tiang, which proposes a new form of Bayesian GAN, although I do not see it as really Bayesian, as explained below.

“[The] existing Bayesian method (Saatchi & Wilson, 2017) may lead to incompatible conditionals, which suggest that the underlying joint distribution actually does not exist.”

## weak convergence (…) in ABC

Posted in Books, Statistics, University life with tags ABC, Bernstein-von Mises theorem, consistency, likelihood-free methods, maximum likelihood estimation, pseudo-likelihood, summary statistics on January 18, 2016 by xi'an

**S**amuel Soubeyrand and Eric Haon-Lasportes recently published a paper in Statistics and Probability Letters that has some common features with the ABC consistency paper we wrote a few months ago with David Frazier and Gael Martin, and with the recent Li and Fearnhead paper on the asymptotic normality of the ABC distribution. Their approach is however based on a Bernstein-von Mises [CLT] theorem for the MLE or a pseudo-MLE. They assume that the density of this estimator is asymptotically equivalent to a Normal density, in which case the true posterior conditional on the estimator is also asymptotically equivalent to a Normal density centred at the (p)MLE. This in turn makes the ABC distribution normal when both the sample size grows to infinity and the tolerance decreases to zero, which is not completely unexpected. However, in complex settings, establishing the asymptotic normality of the (p)MLE may prove a formidable or even impossible task.
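In symbols, the chain of approximations is the usual Bernstein–von Mises sketch, writing $\hat\theta_n$ for the (p)MLE and $I(\theta_0)$ for the Fisher information at the true value:

$$
\hat\theta_n \;\approx\; \mathcal N\big(\theta_0,\, I(\theta_0)^{-1}/n\big)
\quad\Longrightarrow\quad
\pi(\theta \mid \hat\theta_n) \;\approx\; \mathcal N\big(\hat\theta_n,\, I(\theta_0)^{-1}/n\big),
$$

so that an ABC posterior conditioning on $\hat\theta_n$ up to a tolerance ε inherits the same Gaussian limit when n grows to infinity and ε shrinks to zero fast enough.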

## scalable Bayesian inference for the inverse temperature of a hidden Potts model

Posted in Books, R, Statistics, University life with tags ABC, Approximate Bayesian computation, Australia, Brisbane, exchange algorithm, Ising model, JCGS, path sampling, Potts model, pseudo-likelihood, QUT, Statistics and Computing on April 7, 2015 by xi'an

**M**att Moores, Tony Pettitt, and Kerrie Mengersen arXived a paper yesterday comparing different computational approaches to the processing of hidden Potts models and of the intractable normalising constant in the Potts model. This is a very interesting paper, first because it provides a comprehensive survey of the main methods used in handling this annoying normalising constant Z(β), namely pseudo-likelihood, the exchange algorithm, path sampling (a.k.a., thermodynamic integration), and ABC. A massive simulation experiment with individual simulation times up to 400 hours leads to select path sampling (what else?!) as the (XL) method of choice, thanks to a pre-computation of the expectation of the sufficient statistic E[S(Z)|β]. I just wonder why the same was not done for ABC, as in the recent Statistics and Computing paper we wrote with Matt and Kerrie. As it happens, I was actually discussing yesterday at Columbia potentially huge improvements in processing Ising and Potts models by approximating first the distribution of S(X) for some or all β before launching ABC or the exchange algorithm. (In fact, this is a more generic desideratum for all ABC methods: simulating the summary statistics directly, if approximately, would bring huge gains in computing time, and thus possibly in final precision.) Simulating the distribution of the summary and sufficient Potts statistic S(X) reduces to simulating this distribution with a null correlation, as exploited in Cucala and Marin (2013, JCGS, Special ICMS issue). However, there does not seem to be an efficient way to do so, i.e. without reverting to simulating the entire grid X…
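As a toy illustration of the path sampling identity d log Z(β)/dβ = E[S(X)|β] exploited above, here is a minimal sketch on a small two-colour Potts (Ising-type) grid, with single-site Gibbs updates standing in for the far more elaborate machinery of the paper; the grid size, sweep counts and β grid are arbitrary choices of mine.

```python
import numpy as np

def neighbours(i, j, n):
    # free-boundary 4-neighbourhood of site (i, j) on an n x n grid
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        a, b = i + di, j + dj
        if 0 <= a < n and 0 <= b < n:
            yield a, b

def suff_stat(x):
    # S(x): number of agreeing horizontal + vertical neighbour pairs
    return int(np.sum(x[:-1, :] == x[1:, :]) + np.sum(x[:, :-1] == x[:, 1:]))

def gibbs_mean_S(beta, n=3, q=2, sweeps=1200, burn=300, rng=None):
    # Monte Carlo estimate of E[S(X) | beta] by single-site Gibbs sampling
    rng = np.random.default_rng() if rng is None else rng
    x = rng.integers(q, size=(n, n))
    total, count = 0.0, 0
    for t in range(sweeps):
        for i in range(n):
            for j in range(n):
                logits = np.array([beta * sum(x[a, b] == k
                                              for a, b in neighbours(i, j, n))
                                   for k in range(q)])
                p = np.exp(logits - logits.max())
                x[i, j] = rng.choice(q, p=p / p.sum())
        if t >= burn:
            total += suff_stat(x)
            count += 1
    return total / count

def log_Z_path(beta, n=3, q=2, grid=9, rng=None):
    # path sampling: log Z(beta) = N log q + integral of E[S|b] db over [0, beta]
    betas = np.linspace(0.0, beta, grid)
    ES = [gibbs_mean_S(b, n=n, q=q, rng=rng) for b in betas]
    lz = n * n * np.log(q)  # log Z(0) for q colours on N = n*n sites
    for k in range(grid - 1):
        # trapezoid rule over the beta grid
        lz += 0.5 * (ES[k] + ES[k + 1]) * (betas[k + 1] - betas[k])
    return lz
```

On a 3×3 grid the estimate can be checked against brute-force enumeration of all q⁹ configurations; the pre-computation of E[S|β] on a grid of β values is precisely what the selected method amortises across MCMC iterations.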

## another instance of ABC?

Posted in Statistics with tags ABC, ABC-MCMC, Kullback-Leibler divergence, measure theory, Metropolis-Hastings algorithms, pseudo-likelihood on December 2, 2014 by xi'an

“These characteristics are (1) likelihood is not available; (2) prior information is available; (3) a portion of the prior information is expressed in terms of functionals of the model that cannot be converted into an analytic prior on model parameters; (4) the model can be simulated. Our approach depends on an assumption that (5) an adequate statistical model for the data are available.”

**A** 2009 JASA paper by Ron Gallant and Rob McCulloch, entitled “On the Determination of General Scientific Models With Application to Asset Pricing”, may or may not have a connection with ABC, to wit the above quote, but I have trouble checking whether or not this is the case.

The true (scientific) model parametrised by θ is replaced with a (statistical) substitute that is available in closed form. And parametrised by g(θ). [If you can get access to the paper, I’d welcome opinions about *Assumption 1* therein which states that the intractable density is equal to a closed-form density.] And the latter is over-parametrised when compared with the scientific model. As in, e.g., a N(θ,θ²) scientific model versus a N(μ,σ²) statistical model. In addition, the prior information is only available on θ. However, this does not seem to matter that much since (a) the Bayesian analysis is operated on θ only and (b) the Metropolis approach adopted by the authors involves simulating a massive number of pseudo-observations, given the current value of the parameter θ *and* the scientific model, so that the transform g(θ) can be estimated by maximum likelihood over the statistical model. The paper suggests using a secondary Markov chain algorithm to find this MLE. Which is claimed to be a simulated annealing resolution (p.121) although I do not see the temperature decreasing. The pseudo-model is then used in a primary MCMC step.
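A minimal caricature of this two-level scheme, using the toy N(θ,θ²) scientific model versus N(μ,σ²) statistical model mentioned above: here the inner MLE of g(θ)=(μ,σ²) is available in closed form from the pseudo-observations, where the paper would run a secondary chain, and a flat prior on θ>0 is an assumption of mine for the sketch.

```python
import numpy as np

def gauss_loglik(data, mu, sigma2):
    # log-likelihood of the statistical (substitute) model N(mu, sigma2)
    return (-0.5 * len(data) * np.log(2 * np.pi * sigma2)
            - 0.5 * np.sum((data - mu) ** 2) / sigma2)

def estimate_g(theta, rng, n_sim=4000):
    # simulate a massive number of pseudo-observations from the
    # scientific model N(theta, theta^2), then fit the statistical
    # model by maximum likelihood (closed form for a Gaussian)
    y = rng.normal(theta, abs(theta), size=n_sim)
    return y.mean(), y.var()

def run_chain(data, n_iter=3000, step=0.2, theta0=1.0, seed=0):
    # primary random walk Metropolis on theta, evaluating the target
    # through the estimated transform g(theta) at every proposal
    rng = np.random.default_rng(seed)
    theta = theta0
    mu, s2 = estimate_g(theta, rng)
    logpost = gauss_loglik(data, mu, s2)  # flat prior on theta > 0
    out = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        if prop > 0:
            mu_p, s2_p = estimate_g(prop, rng)
            lp = gauss_loglik(data, mu_p, s2_p)
            if np.log(rng.uniform()) < lp - logpost:
                theta, logpost = prop, lp
        out.append(theta)
    return np.array(out)
```

Since g(θ) is only estimated, the acceptance ratio is noisy, which hints at why the authors need a large number of pseudo-observations (and a secondary chain) at each primary step.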

Hence, not truly an ABC algorithm. In the same setting, ABC would use a simulated dataset the same size as the observed dataset, compute the MLEs for both and compare them. Faster, if less accurate, when Assumption 1 [that the statistical model holds for a restricted parametrisation] does not hold.

Another interesting aspect of the paper is about creating and using a prior distribution around the manifold η=g(θ). This clearly relates to my earlier query about simulating on measure zero sets. The paper does not bring a definitive answer, as it never simulates exactly on the manifold, but this constitutes another entry on this challenging problem…

## O’Bayes 2013 [#2]

Posted in pictures, Running, Statistics, Travel, University life with tags copulas, Duke University, Durham, ISBA, O-Bayes 2013, physics, pseudo-likelihood, reference priors on December 19, 2013 by xi'an

**A**nother day at O'Bayes 2013, recovering from the flow of reminiscences of yesterday. Talks from Guido Consonni on running reference model selection in complex designs, from Dimitris Fouskakis on integrating out imaginary observations in a *g*-prior, which seems to bring more sparsity than the hyper-*g* prior in variable selection, from François Perron on Bayesian inference for copulas, with an innovative parametrisation and links with Polya trees, from Nancy Reid and Laura Ventura on likelihood approximations and pseudo-likelihoods, offering a wide range of solutions for ABC (or BC) references (with the lingering question of the validation of the approximation for a given sample, as discussed by Brunero Liseo), and from two physicists to conclude the day! Tomorrow is the final day and I hope I can go running a last time in the woods before the flights back to Paris.

## Julian Besag memorial

Posted in Statistics, Travel, University life with tags Bristol, Gibbs sampling, Hammersley-Clifford theorem, Julian Besag, MCMC, pseudo-likelihood, spatial statistics on April 3, 2011 by xi'an

Free man, you will always cherish the sea!

The sea is your mirror; you contemplate your soul

In the infinite unrolling of its swell,

And your mind is no less bitter an abyss.

Charles Baudelaire, Les Fleurs du Mal

**T**he first afternoon of the memorial session for Julian Besag in Bristol was an intense and at times emotional moment, where friends and colleagues of Julian shared memories and stories. This collection of tributes showed how much of a larger-than-life character he was, from his long-term and wide-ranging impact on statistics to his very high expectations, both for himself and for others, leading to a total and uncompromising research ethic, and to his passion for [extreme] sports and the outdoors. (The stories during and after dinner were of a more personal nature, but at least as enjoyable!) The talks on the second day showed how much and how deeply Julian had contributed to spatial statistics and agricultural experiments, to pseudo-likelihood, to Markov random fields and image analysis, and to MCMC methodology and practice. I hope I did not botch my presentation on the history of MCMC too much, while I found reading through the 1974, 1986 and 1993 Read Papers and their discussions an immensely rewarding experience (one I wish I had had prior to completing our Statistical Science paper, though that paper was bound to be incomplete by nature!). Some interesting links made by the audience were the prior publication of proofs of the Hammersley-Clifford theorem in 1973 (by Grimmet, Preston, and Steward, respectively), as well as the proposal of a Gibbs sampler by Brian Ripley as early as 1977 (even though Hastings did use Gibbs steps in one of his examples). Christophe Andrieu also pointed out to me a very early Monte Carlo review by John Halton in the 1970 SIAM Review, which I will read (and comment on) as soon as possible. Overall, I am quite glad I could take part in this memorial and I am grateful to both Peters for organising it as a fitting tribute to Julian.