Archive for ABC

delayed-acceptance. ADA boosted

Posted in Statistics on August 11, 2019 by xi'an

Samuel Wiqvist and co-authors from Scandinavia have recently arXived a paper on a new version of delayed acceptance MCMC. The ADA in the novel algorithm stands for approximate and accelerated, where the approximation in the first stage is to use a Gaussian process to replace the likelihood. In our own approach, we used data subsets to build partial likelihoods, ordering them so that the most variable sub-likelihoods were evaluated first. Furthermore, when a proposal reaches the second stage, the likelihood is not necessarily evaluated there either, the decision being based on a global estimate of the probability that the second stage accepts or rejects. Which of course creates a further approximation, even when using a local predictor of this probability. The outcome of a comparison on two complex models is that the delayed approach does not necessarily do better than particle MCMC in terms of effective sample size per second, since it rejects significantly more. Using various types of surrogate likelihoods and assessing the impact of the approximation could boost the appeal of the method. Maybe using ABC first could suggest another surrogate?
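
As an illustration of the generic two-stage construction (not of the ADA algorithm itself, which replaces the likelihood by a Gaussian process in the first stage and may further skip the second-stage evaluation), here is a minimal Python sketch with placeholder surrogate, likelihood, and prior functions:

```python
import numpy as np

def delayed_acceptance_mh(theta0, n_iter, step, surrogate_loglik,
                          true_loglik, log_prior, rng=None):
    rng = np.random.default_rng(rng)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    s_cur = surrogate_loglik(theta)      # cheap approximation (e.g. a GP prediction)
    l_cur = true_loglik(theta)           # expensive evaluation
    chain = [theta.copy()]
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.shape)
        s_prop = surrogate_loglik(prop)
        # stage 1: screen with the surrogate (symmetric random-walk proposal)
        log_a1 = (s_prop + log_prior(prop)) - (s_cur + log_prior(theta))
        if np.log(rng.uniform()) < log_a1:
            # stage 2: correct with the ratio of true to surrogate likelihoods
            l_prop = true_loglik(prop)
            log_a2 = (l_prop - s_prop) - (l_cur - s_cur)
            if np.log(rng.uniform()) < log_a2:
                theta, s_cur, l_cur = prop, s_prop, l_prop
        chain.append(theta.copy())
    return np.array(chain)
```

As long as the second-stage ratio is always evaluated, the two-stage kernel targets the exact posterior; the point of ADA is precisely to skip or approximate that stage, trading exactness for speed.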

a problem that did not need ABC in the end

Posted in Books, pictures, Statistics, Travel on August 8, 2019 by xi'an

While in Denver, at JSM, I came across [across validated!] this seemingly challenging problem of finding the posterior of the 10³-long probability vector of a Multinomial M(10⁶,p) when only observing the range of a realisation of that Multinomial. This sounded challenging because the distribution of the pair (min,max) is not available in closed form. (Although this allowed me to find a paper on the topic by the late Shanti Gupta, who was chair at Purdue University when I visited 32 years ago…) This seemed to call for ABC (especially since I was about to give an introductory lecture on the topic!, law of the hammer…), but simulating datasets compatible with the extreme values of both minimum and maximum, m=80 and M=12000, proved difficult when using a uniform Dirichlet prior on the probability vector, since these extremes called for both small and large values of the probabilities. However, I later realised that the problem could be brought down to a Multinomial with only three categories and the observation (m,M,n−m−M), leading to an obvious Dirichlet posterior and a predictive for the remaining 10³−2 cell counts.
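
For the record, here is a minimal numerical sketch of this three-category reduction in Python, assuming a uniform Dirichlet prior on the full probability vector (which aggregates into a Dirichlet(1,1,10³−2) prior on the three cells) and ignoring the constraint that the remaining counts should stay between m and M:

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, m, M = 10**6, 10**3, 80, 12000             # figures taken from the question

# aggregated prior and observed counts for (min cell, max cell, all other cells)
alpha = np.array([1.0, 1.0, K - 2.0])
counts = np.array([m, M, n - m - M])

# conjugate update: Dirichlet posterior on the three aggregated probabilities
post = rng.dirichlet(alpha + counts, size=1000)
print(post.mean(axis=0))                         # posterior means of (p_min, p_max, p_rest)

# predictive for the remaining K-2 cells: conditional on their total n-m-M,
# the counts are multinomial with symmetric-Dirichlet weights within the lump
w_rest = rng.dirichlet(np.ones(K - 2))
pred_counts = rng.multinomial(n - m - M, w_rest)
```

The aggregation property of the Dirichlet distribution is what makes the three-cell posterior immediate.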

unbiased product of expectations

Posted in Books, Statistics, University life on August 5, 2019 by xi'an

While I was not involved in any way, or even aware of this research, Anthony Lee, Simone Tiberi, and Giacomo Zanella have an incoming paper in Biometrika, partly written while all three authors were at the University of Warwick. Its purpose is to design an efficient way to approximate the product of n unidimensional expectations (or integrals) all computed against the same reference density. Which is not a real constraint. A neat remark that motivates the method in the paper is that an improved estimator can be connected with the permanent of the n×N matrix A made of the values of the n functions computed at N different simulations from the reference density. This permanent involves N!/(N−n)! terms, rather than the Nⁿ terms of the naïve product of empirical averages. Since the permanent is #P-hard to compute, a manageable alternative uses random draws from constrained permutations that are reasonably easy to simulate. Especially since, given that the estimator recycles most of the particles, it requires a much smaller value of N, essentially N=O(n) in this scenario, instead of the O(n²) required by the basic Monte Carlo solution to reach a similar variance.
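
Here is a minimal Python sketch of the recycling idea, using plain uniform injections rather than the constrained permutations of the paper, and toy functions of my own choosing:

```python
import numpy as np

def product_of_expectations(funcs, sampler, N, n_perms, rng=None):
    rng = np.random.default_rng(rng)
    x = sampler(N, rng)                               # N draws from the reference density
    A = np.array([f(x) for f in funcs])               # n x N matrix of function values
    n = len(funcs)
    est = 0.0
    for _ in range(n_perms):
        sigma = rng.choice(N, size=n, replace=False)  # random injection {1,...,n} -> {1,...,N}
        est += np.prod(A[np.arange(n), sigma])        # one unbiased product along the injection
    return est / n_perms

# toy check: three expectations under a standard normal reference density
fs = [lambda x: x**2, lambda x: np.cos(x), lambda x: np.abs(x)]
print(product_of_expectations(fs, lambda N, rng: rng.standard_normal(N), N=500, n_perms=200))
```

Each single injection already gives an unbiased estimate of the product of expectations, since the n selected particles are distinct; averaging over many injections approaches the permanent-based estimator.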

This framework offers many applications in latent variable models, including pseudo-marginal MCMC, of course, but also ABC, since the ABC posterior based on getting each simulated observation close enough to the corresponding actual observation fits this pattern (although the dependence on the chosen ordering of the data is an issue that can make the example somewhat artificial).

Introductory overview lecture: the ABC of ABC [JSM19 #1]

Posted in Statistics on July 28, 2019 by xi'an

Here are my slides [more or less] for the introductory overview lecture I am giving today at JSM 2019, 4:00-5:50, CC-Four Seasons I. There is obviously quite an overlap with earlier courses I gave on the topic, although I refrained here from mentioning any specific application (like population genetics) to focus on statistical and computational aspects.

Along with the other introductory overview lectures in this edition of JSM:

off to Denver! [JSM2019]

Posted in Statistics on July 27, 2019 by xi'an

As of today, I am attending JSM 2019 in Denver, giving an “Introductory Overview Lecture” on The ABC of Approximate Bayesian Computation on Sunday afternoon and chairing an ABC session on Monday morning. As far as I know these are the only ABC sessions at JSM this year… And hence the only sessions I will be attending. (I have not been to Denver and the area since 1993, when I visited Kerrie Mengersen and Richard Tweedie in Fort Collins. And hiked up to Longs Peak with Gerard. Alas, no time for climbing in the Rockies this time.)

uncertainty in the ABC posterior

Posted in Statistics on July 24, 2019 by xi'an

In the most recent Bayesian Analysis, Marko Järvenpää et al. (including my coauthor Aki Vehtari) consider an ABC setting where the number of available simulations of pseudo-samples is limited. And where they want to quantify the amount of uncertainty resulting from the estimation of the ABC posterior density. Which is a version of the Monte Carlo error in practical ABC, in that this is the difference between the ABC posterior density for a given choice of summaries and a given choice of tolerance, and the actual approximation based on a finite number of simulations from the prior predictive. As in earlier works by Michael Gutmann and co-authors, the focus is on designing a sequential strategy to decide where to sample the next parameter value, towards minimising a certain expected loss. And on adopting a Gaussian process model for the discrepancy between observed data and simulated data, hence generalising the synthetic likelihood approach. This allows them to compute the expectation and the variance of the unnormalised ABC posterior, based on plug-in estimators. From which the authors derive a loss defined as the expected variance of the acceptance probability (although it is not parameterisation invariant). I am unsure I see the point of this choice, in that there is no clear reason for the resulting sequence of parameter choices to explore the support of the posterior distribution in a relatively exhaustive manner. The paper also mentions alternatives where the next parameter is chosen at the location where “the uncertainty of the unnormalised ABC posterior is highest”. Which sounds more pertinent to me. And further avoids integrating out the parameter. I also wonder if ABC mis-specification analysis could apply in this framework, since the Gaussian process is most certainly a “wrong” model. (When concluding this post, I realised I had written a similar entry two years ago about the earlier version of the paper!)
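
As a crude illustration of the Gaussian process surrogate view, and not of the authors' expected-variance loss, here is a Python sketch that turns the GP predictive on the discrepancy into an estimated unnormalised ABC posterior and picks the next simulation where the induced uncertainty band is widest, i.e. the simpler acquisition mentioned above; prior_pdf, tol, and the evaluation grid are placeholders of my own:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def abc_posterior_with_uncertainty(thetas, discrepancies, grid, prior_pdf, tol):
    # GP surrogate for the discrepancy as a function of a scalar parameter
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(thetas.reshape(-1, 1), discrepancies)
    mean, sd = gp.predict(grid.reshape(-1, 1), return_std=True)
    # plug-in estimate of the unnormalised ABC posterior: prior x P(discrepancy <= tol)
    post = prior_pdf(grid) * norm.cdf((tol - mean) / sd)
    # crude uncertainty band, shifting the GP mean by plus or minus one predictive sd
    hi = prior_pdf(grid) * norm.cdf((tol - (mean - sd)) / sd)
    lo = prior_pdf(grid) * norm.cdf((tol - (mean + sd)) / sd)
    next_theta = grid[np.argmax(hi - lo)]    # acquire where the band is widest
    return post, (lo, hi), next_theta
```

This is only meant to convey the flavour of the construction; the paper instead builds its acquisition on the expected variance of the acceptance probability.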

locusts in a random forest

Posted in pictures, Statistics, University life on July 19, 2019 by xi'an

My friends from Montpellier, where I am visiting today, Arnaud Estoup, Jean-Michel Marin, and Louis Raynal, along with their co-authors, have recently posted on bioRxiv a paper using ABC-RF (random forests) to analyse the divergence of two populations of desert locusts in Africa. (I actually first heard of their paper via an unsolicited email from one of these self-declared research aggregators.)

“…the present study is the first one using recently developed ABC-RF algorithms to carry out inferences about both scenario choice and parameter estimation, on a real multi-locus microsatellite dataset. It includes and illustrates three novelties in statistical analyses (…): model grouping analyses based on several key evolutionary events, assessment of the quality of predictions to evaluate the robustness of our inferences, and incorporation of previous information on the mutational setting of the used microsatellite markers”.

The construction of the competing models (or scenarios) is built upon data on past precipitation and desert evolution spanning several interglacial periods, back to the middle Pleistocene, concluding at a probable separation in the middle-late stages of the Holocene, which corresponds to the last transition from humid to arid conditions on the African continent. The probability of choosing the wrong model is exploited to determine which model(s) lead(s) to a posterior [ABC] probability lower than the corresponding prior probability, and only one scenario survives this test. As in previous ABC-RF implementations, the summary statistics are complemented by pure noise statistics in order to determine a threshold within the collection of statistics, even though those just above the noise elements (which often cluster together) may achieve a better Gini importance by mere chance. An aspect of the paper that I particularly like is the discussion of the various prior modellings one can derive from existing information (or lack thereof) and the evaluation of the impact of these modellings on the resulting inference based on simulated pseudo-data.
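
For readers curious about the noise-padding device, here is a minimal Python sketch with scikit-learn (rather than the authors' R tools), where the threshold is the best Gini importance achieved by artificial pure-noise statistics appended to the reference table:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def noise_threshold_selection(summaries, model_index, n_noise=20, rng=None):
    rng = np.random.default_rng(rng)
    noise = rng.standard_normal((summaries.shape[0], n_noise))   # pure-noise statistics
    X = np.hstack([summaries, noise])
    rf = RandomForestClassifier(n_estimators=500).fit(X, model_index)
    imp = rf.feature_importances_
    threshold = imp[summaries.shape[1]:].max()    # best Gini importance reached by noise
    keep = np.where(imp[:summaries.shape[1]] > threshold)[0]
    return keep, threshold
```

Real statistics whose importance falls below the best noise column would then be discarded, with the caveat discussed above that statistics barely above the noise level may clear the threshold by mere chance.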