An ‘Og’s reader pointed me to this paper by Li and Malik, which made it to arXiv after not making it to NIPS. While the NIPS reviews were not particularly informative and strongly discordant, the authors point out in the comments that the reviews are available for the sake of promoting discussion. (As made clear in earlier posts, I am quite supportive of this attitude! Disclaimer: I was not involved in the evaluation of this paper, neither for NIPS nor for another conference or journal!!) Although the paper does not seem to mention ABC in the setting of implicit likelihoods and generative models, there is a reference to the early (1984) paper by Peter Diggle and Richard Gratton that is often seen as the ancestor of ABC methods. The authors point out numerous issues with solutions proposed for parameter estimation in such implicit models. For instance, for GANs, they signal that “minimizing the Jensen-Shannon divergence or the Wasserstein distance between the empirical data distribution and the model distribution does not necessarily minimize the same between the true data distribution and the model distribution.” (Not mentioning the particular difficulty with Bayesian GANs.) Their own solution is the implicit maximum likelihood estimator, which picks the value of the parameter θ bringing a simulated sample the closest to the observed sample. Closest in the sense of the Euclidean distance between both samples. Or, in a variant, of the minimum of this distance over several simulated samples. (The modelling seems to imply the availability of n>1 observed samples.) They advocate a stochastic gradient descent approach for finding the optimal parameter θ, which presupposes that the dependence between θ and the simulated samples is somewhat differentiable. (And this does not account for using a min, which would make differentiation close to impossible.)

The paper then meanders into a lengthy discussion as to whether maximising the likelihood makes sense, with a rather naïve view of why using the empirical distribution in a Kullback-Leibler divergence does not make sense! What does not make sense, in my opinion, is considering the finite-sample approximation to the Kullback-Leibler divergence with the true distribution.
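To make the nearest-sample idea concrete, here is a toy sketch in Python. Everything below is an illustrative assumption rather than the Li and Malik algorithm: a one-parameter Gaussian location simulator, a grid search over θ in place of their stochastic gradient descent, and sorted samples compared in Euclidean distance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative simulator (an assumption, not from the paper): a Gaussian
# location model with a single parameter theta.
def simulate(theta, n, rng):
    return theta + rng.standard_normal(n)

def imle(observed, candidates, n_sims=50, rng=rng):
    """Return the candidate theta whose best simulated sample (the min
    over n_sims replicates) is closest to the observed sample in
    Euclidean distance, comparing sorted samples as point sets."""
    obs = np.sort(observed)
    best_theta, best_dist = None, np.inf
    for theta in candidates:
        # min over several simulated samples, as in the variant above
        dist = min(
            np.linalg.norm(np.sort(simulate(theta, obs.size, rng)) - obs)
            for _ in range(n_sims)
        )
        if dist < best_dist:
            best_theta, best_dist = theta, dist
    return best_theta

observed = simulate(2.0, 100, rng)   # pseudo-data with true theta = 2
theta_hat = imle(observed, np.linspace(-5, 5, 201))
print(theta_hat)                     # recovers a value close to 2
```

The grid search stands in for the gradient-based optimisation of the paper, precisely because the min over simulated samples makes differentiation awkward, as noted above.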
Archive for likelihood-free methods

Implicit maximum likelihood estimates

Posted in Statistics with tags ABC, Approximate Bayesian computation, GANs, Hyvärinen score, Kullback-Leibler divergence, likelihood-free methods, maximum likelihood estimation, NIPS 2018, Peter Diggle, untractable normalizing constant, Wasserstein distance on October 9, 2018 by xi'an

ABC in print
Posted in Books, pictures, Statistics, University life with tags ABC, Approximate Bayesian computation, CRC Press, handbook, Handbook of Approximate Bayesian computation, handbook of mixture analysis, likelihood-free methods, Mark Beaumont, Scott Sisson, Yanan Fan on September 5, 2018 by xi'an

The CRC Press Handbook of ABC is now out, after a rather long delay [the first version of our model choice chapter was written in 2015!] due to some late contributors, which is why I did not spot it at JSM 2018. As announced a few weeks ago, our Handbook of Mixture Analysis is soon to be published as well. (Not that I necessarily advocate the individual purchase of these costly volumes, especially given that most chapters are available online!)
ABC for vampires
Posted in Books, pictures, Statistics, University life with tags ABC, ABCpy, Bhattacharyya distance, likelihood-free methods, platelet, Python on September 4, 2018 by xi'an

Ritabrata Dutta (Warwick), along with co-authors including Antonietta Mira, published last week a paper in Frontiers in Physiology about using ABC for deriving the posterior distribution of the parameters of a dynamic blood (platelets) deposition model constructed by Bastien Chopard, the second author. While based on only five parameters, the model does not enjoy a closed-form likelihood, and even the simulation of a new platelet deposit takes about 10 minutes. The paper uses the simulated annealing ABC version, due to Albert, Künsch, and Scheidegger (2014), which relies on a sequence of Metropolis kernels, associated with a decreasing sequence of tolerances, and claims better efficiency at reaching a stable solution. It also relies on ABCpy, a Python package written by Ritabrata Dutta, for various aspects of the ABC analysis. One feature of interest is the use of 24 summary statistics to conduct the inference on the 5 model parameters, a 24-to-5 ratio that could possibly be improved by a variable selection tool such as random forests. Which would also avoid the choice of a specific loss function, here the Bhattacharyya distance (which sounds like an entropy distance in the normal case).
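The tolerance-decreasing Metropolis idea can be sketched on a toy problem. Everything below (one-parameter Gaussian simulator, flat prior, sample-mean summary, geometric cooling schedule) is an illustrative assumption; it is a generic annealed ABC-MCMC, not the exact SABC algorithm of Albert, Künsch, and Scheidegger, nor the platelet model of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins (assumptions for illustration): a one-parameter Gaussian
# simulator, a flat prior on [-10, 10], and the sample mean as summary.
def simulate(theta, n, rng):
    return theta + rng.standard_normal(n)

def summary(x):
    return x.mean()

def abc_mcmc_annealed(observed, n_iter=2000, eps0=2.0, eps_min=0.05, rng=rng):
    """ABC-MCMC with a decreasing tolerance sequence, in the spirit of
    simulated annealing ABC (not the exact SABC algorithm)."""
    s_obs = summary(observed)
    theta = summary(observed)  # crude starting value (sketch assumption)
    chain = []
    for t in range(n_iter):
        eps = max(eps0 * 0.995 ** t, eps_min)       # geometric cooling
        prop = theta + 0.5 * rng.standard_normal()  # random-walk proposal
        if abs(prop) <= 10:  # stay inside the flat prior's support
            s_sim = summary(simulate(prop, observed.size, rng))
            if abs(s_sim - s_obs) < eps:  # keep only near-matching summaries
                theta = prop
        chain.append(theta)
    return np.array(chain)

observed = simulate(3.0, 200, rng)  # pseudo-data with true theta = 3
chain = abc_mcmc_annealed(observed)
print(chain[-500:].mean())          # posterior mean estimate, near 3
```

The decreasing tolerance plays the role of the annealing temperature: early iterations accept almost everything and explore, while late iterations confine the chain to parameter values whose simulated summaries nearly match the observed ones.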
ABC-Day [arXivals]
Posted in Books, Statistics, University life with tags ABC, Approximate Bayesian computation, arXiv, Handbook of Approximate Bayesian computation, high dimensions, likelihood-free methods, Scott Sisson on March 2, 2018 by xi'an

A bunch of ABC papers on arXiv yesterday, most of them linked to the incoming Handbook of ABC:


- Overview of Approximate Bayesian Computation, by S. A. Sisson, Y. Fan, M. A. Beaumont
- Kernel Recursive ABC: Point Estimation with Intractable Likelihood, by Takafumi Kajihara, Keisuke Yamazaki, Motonobu Kanagawa, Kenji Fukumizu
- High-dimensional ABC, by D. J. Nott, V. M.-H. Ong, Y. Fan, S. A. Sisson
- ABC Samplers, by Y. Fan, S. A. Sisson