Archive for divergence

locusts in a random forest

Posted in pictures, Statistics, University life on July 19, 2019 by xi'an

My friends from Montpellier, where I am visiting today, Arnaud Estoup, Jean-Michel Marin, and Louis Raynal, along with their co-authors, have recently posted on bioRxiv a paper using ABC-RF (random forests) to analyse the divergence of two populations of desert locusts in Africa. (I actually first heard of their paper through an unsolicited email from one of these self-declared research aggregators.)

“…the present study is the first one using recently developed ABC-RF algorithms to carry out inferences about both scenario choice and parameter estimation, on a real multi-locus microsatellite dataset. It includes and illustrates three novelties in statistical analyses (…): model grouping analyses based on several key evolutionary events, assessment of the quality of predictions to evaluate the robustness of our inferences, and incorporation of previous information on the mutational setting of the used microsatellite markers”.

The competing models (or scenarios) are built upon data on past precipitation and desert evolution spanning several interglacial periods, back to the middle Pleistocene, and point to a probable separation in the middle-to-late Holocene, which corresponds to the last transition from humid to arid conditions on the African continent. The probability of choosing the wrong model is exploited to determine which model(s) lead(s) to a posterior [ABC] probability lower than the corresponding prior probability, and only one scenario stands this test. As in previous ABC-RF implementations, the summary statistics are complemented by pure noise statistics in order to determine a barrier within the collection of statistics, even though those just above the noise elements (which often cluster together) may achieve a better Gini importance by mere chance. An aspect of the paper that I particularly like is the discussion of the various prior modellings one can derive from existing information (or the lack thereof) and the evaluation of the impact of these modellings on the resulting inference, based on simulated pseudo-data.
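To make the noise-barrier device concrete, here is a minimal sketch (my own illustration, not the authors' code) of how pure-noise statistics can calibrate variable importance in a random forest: any summary whose Gini importance does not clear the best noise column is treated as uninformative. The reference table, scenario labels, and dimensions below are hypothetical stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical reference table: n simulated datasets, d summary statistics,
# y the index of the scenario that generated each row (stand-ins only).
n, d, k_noise = 5_000, 20, 5
X = rng.normal(size=(n, d))
y = rng.integers(0, 3, size=n)

# Append pure-noise statistics; their Gini importances set the barrier.
X_aug = np.hstack([X, rng.normal(size=(n, k_noise))])

rf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
rf.fit(X_aug, y)

noise_barrier = rf.feature_importances_[d:].max()
kept = np.flatnonzero(rf.feature_importances_[:d] > noise_barrier)
print("summaries clearing the noise barrier:", kept)
```

The caveat above still applies: summaries sitting just over the noise maximum may clear this barrier by chance alone, so the cutoff is indicative rather than a formal test.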

χ divergence for approximate inference

Posted in Statistics on March 14, 2017 by xi'an

Dieng et al. arXived this morning a new version of their paper on using the χ divergence for variational inference. The χ divergence is essentially the expectation of the squared ratio of the target distribution to the approximation, under the approximation. It is somewhat related to Expectation Propagation (EP), which aims at the Kullback-Leibler divergence between the target distribution and the approximation, under the target. And to variational Bayes, which is the same thing just the opposite way! The authors also point out a link to our [adaptive] population Monte Carlo paper of 2008. (I wonder at a possible version through the Wasserstein distance.)
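In symbols, writing π for the target and q for the approximation (my notation, up to the paper's normalising conventions), the three criteria are

```latex
\[
\underbrace{\mathbb{E}_q\!\left[\Big(\frac{\pi(\theta)}{q(\theta)}\Big)^{2}\right]-1}_{\chi^2(\pi\,\|\,q)\ \text{(this paper)}}
\qquad
\underbrace{\mathbb{E}_\pi\!\left[\log\frac{\pi(\theta)}{q(\theta)}\right]}_{\mathrm{KL}(\pi\,\|\,q)\ \text{(EP)}}
\qquad
\underbrace{\mathbb{E}_q\!\left[\log\frac{q(\theta)}{\pi(\theta)}\right]}_{\mathrm{KL}(q\,\|\,\pi)\ \text{(variational Bayes)}}
\]
```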

Some of the arguments in favour of this new version of variational Bayes approximations are that (a) the support of the approximation over-estimates the posterior support; (b) it produces over-dispersed versions; (c) it relates to a well-defined and global objective function; (d) it allows for a sandwich inequality on the model evidence; (e) the function of the [approximation] parameter to be minimised is an expectation under the approximation, rather than under the target. The latter allows for gradient-based optimisation. While one of the applications is a Bayesian probit model applied to the Pima Indian women dataset [and will thus make James and Nicolas cringe!], the experimental assessment shows lower error rates for this and other benchmarks. Which in my opinion does not tell so much about the original Bayesian approach.
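On points (d) and (e): since the χ criterion is an expectation under q, a plain Monte Carlo estimate (and hence a reparameterised gradient) is available, and half the log of that expectation upper-bounds the log evidence, which the usual ELBO lower-bounds — this is the sandwich, with the upper part being the paper's χ upper bound (CUBO). A minimal numerical check on a toy conjugate model, my own example with a deliberately misplaced Gaussian approximation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Toy model: theta ~ N(0,1), x | theta ~ N(theta,1), so the exact
# evidence is p(x) = N(x; 0, 2) and the sandwich can be verified.
x = 1.3
def log_joint(th):
    return stats.norm.logpdf(th, 0, 1) + stats.norm.logpdf(x, th, 1)

# Deliberately off Gaussian approximation q = N(m, s^2).
m, s = 0.5, 1.0
th = rng.normal(m, s, size=200_000)
log_w = log_joint(th) - stats.norm.logpdf(th, m, s)  # log p(x,theta)/q(theta)

elbo = log_w.mean()                      # E_q[log w]: lower bound on log p(x)
shift = log_w.max()                      # max-shift for numerical stability
cubo = 0.5 * (np.log(np.mean(np.exp(2 * (log_w - shift)))) + 2 * shift)
                                         # (1/2) log E_q[w^2]: upper bound

exact = stats.norm.logpdf(x, 0, np.sqrt(2))
print(f"ELBO {elbo:.3f} <= log p(x) {exact:.3f} <= CUBO {cubo:.3f}")
```

An actual implementation would minimise the bound in (m, s) by stochastic gradient on reparameterised draws θ = m + sε; the toy numbers above merely check the ordering ELBO ≤ log p(x) ≤ CUBO.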
