Archive for Università degli studi di Padova

γ-ABC

Posted in Statistics on March 24, 2021 by xi'an

An AISTATS 2021 paper by Masahiro Fujisawa, Takeshi Teshima, Issei Sato and Masashi Sugiyama (RIKEN, Tokyo) just appeared on arXiv. (AISTATS 2021 is again virtual this year.)

“ABC can be sensitive to outliers if a data discrepancy measure is chosen inappropriately (…) In this paper, we propose a novel outlier-robust and computationally-efficient discrepancy measure based on the γ-divergence”

The focus is on measures of robustness for ABC distances, as those can be lethal if insufficient summarisation is used. (Note that a referenced paper by Erlis Ruli, Nicola Sartori and Laura Ventura from Padova appeared last year on robust ABC.) The current approach mixes the γ-divergence of Fujisawa and Eguchi with a k-nearest neighbour density estimator. Which may not prove too costly, of order O(n log n), but may also be a poor, if robust, approximation, even though it enjoys asymptotic unbiasedness and almost sure convergence. These properties are the ones established in the paper, which only demonstrates convergence in the sample size n to an ABC approximation with the true γ-divergence but with a fixed tolerance ε, when the most recent results are rather concerned with the rates of convergence of ε(n) to zero. (An extensive simulation section compares this approach with several ABC alternatives, incl. ours using the Wasserstein distance. If I read the comparison graphs properly, there does not seem to be a huge discrepancy between the two approaches under no contamination.) Incidentally, the paper contains a substantial survey section and a massive reference list, if missing the publication, more than a year earlier, of our Wasserstein paper in Series B.
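For concreteness, here is a minimal sketch (in Python) of the kind of k-nearest-neighbour plug-in one could use to turn the γ-divergence into an ABC discrepancy between observed and simulated samples. This is not the exact estimator of Fujisawa et al.: the function names, the choice of k, and the KD-tree backend are my own illustrative choices, and the first (data-only) term of the divergence is dropped since it does not depend on the simulated sample.

```python
# Hedged sketch: a k-NN plug-in of the gamma-divergence used as an ABC
# discrepancy; names, k, and the KD-tree backend are illustrative choices.
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gammaln

def knn_density(queries, sample, k=5):
    """k-NN estimate of the sample's underlying density at each query point."""
    m, d = sample.shape
    tree = cKDTree(sample)                       # O(m log m) construction
    r, _ = tree.query(queries, k=k)              # distances to the k nearest neighbours
    r_k = r[:, -1] if k > 1 else r               # distance to the k-th neighbour
    log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)   # log volume of the unit d-ball
    return k / (m * np.exp(log_vd) * np.maximum(r_k, 1e-12) ** d)

def gamma_discrepancy(obs, sim, gamma=0.1, k=5):
    """Plug-in gamma-divergence discrepancy between observed and simulated
    samples, dropping the term that only involves the (fixed) data density."""
    f_obs = knn_density(obs, sim, k)             # density estimate at observed points
    f_sim = knn_density(sim, sim, k + 1)         # at simulated points (first neighbour is the point itself)
    cross = -np.log(np.mean(f_obs ** gamma)) / gamma       # -1/gamma log of the cross term
    norm = np.log(np.mean(f_sim ** gamma)) / (1 + gamma)   # 1/(1+gamma) log of the normalising term
    return cross + norm

# Toy usage: a contaminated Gaussian sample against a clean pseudo-sample.
rng = np.random.default_rng(0)
obs = np.vstack([rng.normal(0, 1, (95, 1)), rng.normal(8, 1, (5, 1))])
sim = rng.normal(0, 1, (200, 1))
print(gamma_discrepancy(obs, sim))
```

The KD-tree build and queries keep the cost of one discrepancy evaluation at roughly O((n+m) log m), in line with the complexity mentioned above.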

discussione a Padova

Posted in Statistics, University life on March 25, 2013 by xi'an

Here are the slides of my talk in Padova for the workshop Recent Advances in statistical inference: theory and case studies (very similar to the slides for the Varanasi and Gainesville meetings, obviously!, with Peter Müller commenting [at last!] that I had picked the wrong photos from Khajuraho!)

The worthy Padova addendum is that I had two discussants, Stefano Cabras from Universidad Carlos III in Madrid, whose slides are:

and Francesco Pauli, from Trieste, whose slides are:

These were kind and rich discussions with many interesting openings: Stefano's idea of estimating the pivotal function h is opening new directions, obviously, as it indicates an additional degree of freedom in calibrating the method. Esp. when considering the high variability of the empirical likelihood fit depending on the function h. For instance, one could start with a large collection of candidate functions and build a regression or a principal component reparameterisation from this collection… (Actually I did not get point #1 about ignoring f: the empirical likelihood is by essence ignoring anything outside the identifying equation, so long as the equation is valid…) Point #2: opposing sample-free and simulation-free techniques is another interesting avenue, although I would not say ABC is “sample free”. As to point #3, I will certainly take a look at Monahan and Boos (1992) to see if this can drive the choice of a specific type of pseudo-likelihood. I like the idea of checking the “coverage of posterior sets” and even more that “the likelihood must be the density of a statistic, not necessarily sufficient”, as it obviously relates to our current ABC model comparison work… Esp. when the very same paper is mentioned by Francesco as well. Grazie, Stefano!

I also appreciate the survey made by Francesco of the consistency conditions, because I think this is an important issue that should be taken into consideration when designing ABC algorithms. (Just pointing out again that, in the theorem of Fearnhead and Prangle (2012) quoting Bernardo and Smith (1992), some conditions are missing for the mathematical consistency to apply.) I also like the agreement we seem to reach about ABC being evaluated per se rather than as a poor man's Bayesian method. Francesco's analysis of Monahan and Boos (1992) as validating or not empirical likelihood points out a possible link with the recent coverage analysis of Prangle et al., discussed on the 'Og a few weeks ago. And an unsuspected link with Larry Wasserman! Grazie, Francesco!
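As a side note on the empirical likelihood point, here is a minimal sketch (in Python) of the profile empirical likelihood for a single estimating equation E[h(X,θ)] = 0, which makes explicit that the data only enter through the chosen function h. The mean constraint h(x,θ) = x − θ and the Brent root solver are my own illustrative choices, not anything taken from the discussions.

```python
# Hedged sketch: Owen-style profile empirical likelihood for a scalar
# estimating equation; the pivot h(x, theta) = x - theta is illustrative.
import numpy as np
from scipy.optimize import brentq

def log_el_ratio(x, theta, h=lambda x, t: x - t):
    """Profile empirical log-likelihood ratio for the constraint E[h(X, theta)] = 0."""
    hv = h(x, theta)
    if hv.min() >= 0 or hv.max() <= 0:
        return -np.inf                            # theta outside the convex hull of h
    # The Lagrange multiplier solves sum h_i / (1 + lam h_i) = 0 on the interval
    # keeping every 1 + lam h_i strictly positive (Owen's dual formulation).
    lam_lo = -1.0 / hv.max() + 1e-10
    lam_hi = -1.0 / hv.min() - 1e-10
    score = lambda lam: np.sum(hv / (1.0 + lam * hv))
    lam = brentq(score, lam_lo, lam_hi)
    w = 1.0 / (len(x) * (1.0 + lam * hv))         # optimal probability weights
    return float(np.sum(np.log(len(x) * w)))      # log R(theta) = -sum log(1 + lam h_i)

# Toy check: the ratio peaks at the sample mean and decays away from it.
rng = np.random.default_rng(1)
x = rng.normal(2.0, 1.0, 100)
for theta in (1.7, float(x.mean()), 2.3):
    print(round(theta, 2), round(log_el_ratio(x, theta), 3))
```

Replacing the default h with other candidate functions (or a collection of them) is exactly where the extra degree of freedom mentioned by Stefano would enter.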