Archive for Wasserstein distance

nonparametric ABC [seminar]

Posted in pictures, Statistics, University life on June 3, 2022 by xi'an

Puzzle: How do you run ABC when you mistrust the model?! We somewhat considered this question in our misspecified ABC paper with David and Judith. An AISTATS 2022 paper by Harita Dellaporta (Warwick), Jeremias Knoblauch, Theodoros Damoulas (Warwick), and François-Xavier Briol (formerly Warwick) addresses this same question, and Harita presented the paper at the One World ABC webinar yesterday.

It is inspired by Lyddon, Walker & Holmes (2018), who place a nonparametric prior on the generating model, on top of the assumed parametric model (with an intractable likelihood). This induces a push-forward prior on the pseudo-true parameter, that is, the value that brings the parametric family as close as possible to the true distribution of the data, here defined as a minimum-distance parameter for the maximum mean discrepancy (MMD). Choosing an RKHS framework allows for a practical implementation, resorting to simulations for posterior realisations from a Dirichlet posterior and from the parametric model, and to stochastic gradient descent for computing the pseudo-true parameter, which may prove somewhat heavy in terms of computing cost.
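As a rough illustration of the mechanism (my own toy sketch, not the authors' implementation), here is one posterior-bootstrap draw in Python: Dirichlet (Bayesian bootstrap) weights on the observed sample, then crude finite-difference gradient steps on a weighted V-statistic estimate of the squared MMD between the reweighted data and simulations from the parametric model. The Gaussian kernel, the contaminated Gaussian location model, and all tuning constants are assumptions of mine.

```python
import numpy as np

rng = np.random.default_rng(0)

def mmd2(x, wx, y):
    """Weighted V-statistic estimate of squared MMD with a Gaussian kernel.
    x: observed sample with weights wx; y: model sample (equal weights)."""
    k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / 2.0)
    wy = np.full(len(y), 1.0 / len(y))
    return wx @ k(x, x) @ wx - 2 * wx @ k(x, y) @ wy + wy @ k(y, y) @ wy

# toy data: Gaussian bulk plus a few gross outliers (an assumption, for illustration)
obs = np.concatenate([rng.normal(1.0, 1.0, 95), rng.normal(10.0, 1.0, 5)])

def posterior_bootstrap_draw(obs, n_sim=200, steps=200, lr=0.5, eps=1e-3):
    w = rng.dirichlet(np.ones(len(obs)))   # one draw from the Dirichlet posterior
    z = rng.normal(size=n_sim)             # common random numbers for the model
    theta = obs.mean()
    for _ in range(steps):                 # finite-difference stochastic gradient
        g = (mmd2(obs, w, theta + eps + z) - mmd2(obs, w, theta - eps + z)) / (2 * eps)
        theta -= lr * g
    return theta                           # pseudo-true parameter for this draw

draws = [posterior_bootstrap_draw(obs) for _ in range(20)]
print(np.mean(draws), np.std(draws))  # should land near the bulk location (~1) despite outliers
```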

The paper also contains a consistency result in an ε-contaminated setting (contamination of the assumed parametric family). Comparisons like the above with a fully parametric Wasserstein-ABC approach show that this alternative better resists misspecification, as could be expected since the latter is not constructed for that purpose.
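For reference, the ε-contaminated setting is the standard Huber neighbourhood (the paper's exact formulation may differ slightly): the data-generating distribution is a small perturbation of the parametric family,

$$P_0 = (1-\varepsilon)\,P_{\theta^\star} + \varepsilon\,Q,$$

for some contamination rate ε and an arbitrary distribution Q, so that robustness means the posterior on the pseudo-true parameter stays concentrated near θ⋆ for small ε.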

Next talk is on 23 June by Cosma Shalizi.

acronyms [CDT lectures]

Posted in Books, Statistics on May 16, 2022 by xi'an

This week, I gave a short introductory course at Warwick for the CDT (PhD) students on my perceived connections between reverse logistic regression à la Geyer and GANs, among other things. The first attempt was cancelled in 2020 due to the pandemic, and the second one in 2021 was on-line and thus offered little opportunity for interaction. Preparing for this third attempt made me read more papers on statistical analyses of GANs and WGANs, which was more satisfactory [for me] even though I could not get into the technical details…
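For readers curious about that connection, here is a minimal sketch of my own (a toy illustration, not the lecture material) of the shared mechanism: a logistic regression trained to classify "data" versus "model" points estimates the log density ratio, which is the engine of Geyer's reverse logistic regression and also what a GAN discriminator computes. The Gaussian toy densities and the feature choice are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# two samples: "data" p and "model" q (toy Gaussians, an assumption for illustration)
x_p = rng.normal(0.0, 1.0, (1000, 1))
x_q = rng.normal(1.0, 1.5, (1000, 1))

# features (x, x^2): for Gaussians the log density ratio is linear in these
X = np.vstack([x_p, x_q])
X = np.hstack([X, X ** 2])
y = np.concatenate([np.ones(1000), np.zeros(1000)])  # 1 = data, 0 = model

clf = LogisticRegression().fit(X, y)

# with balanced samples, the fitted logit estimates log p(x)/q(x),
# the quantity driving both Geyer's estimator and a GAN discriminator
x0 = np.array([[0.0, 0.0]])           # the point x = 0, in feature space
print(clf.decision_function(x0))      # true value here is log(1.5) + 1/4.5 ≈ 0.63
```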

Concentration and robustness of discrepancy-based ABC [One World ABC ‘minar, 28 April]

Posted in Statistics, University life on April 15, 2022 by xi'an

Our next speaker at the One World ABC Seminar will be Pierre Alquier, who will talk about “Concentration and robustness of discrepancy-based ABC”, on Thursday April 28, at 9.30am UK time, with an abstract reported below.
Approximate Bayesian Computation (ABC) typically employs summary statistics to measure the discrepancy between the observed data and the synthetic data generated from each proposed value of the parameter of interest. However, finding good summary statistics (that are close to sufficiency) is non-trivial for most of the models for which ABC is needed. In this paper, we investigate the properties of ABC based on integral probability semi-metrics, including MMD and Wasserstein distances. We exhibit conditions ensuring the contraction of the approximate posterior. Moreover, we prove that MMD with an adequate kernel leads to very strong robustness properties.
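To make "discrepancy-based ABC" concrete, here is a minimal summary-free rejection sampler of my own (not the paper's code): proposals from the prior are kept when the one-dimensional Wasserstein distance between observed and synthetic samples falls under a quantile-based tolerance. The Gaussian location model, uniform prior and all constants are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def wasserstein1(x, y):
    """W1 distance between two equal-size 1d samples: mean absolute
    difference of order statistics (the 'cdf distance')."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

# toy setting (an assumption): observed data from N(2, 1), model N(theta, 1)
obs = rng.normal(2.0, 1.0, 200)

def abc_rejection(obs, n_prop=20000, quantile=0.001):
    thetas = rng.uniform(-10, 10, n_prop)            # draws from the prior
    dists = np.array([wasserstein1(obs, rng.normal(t, 1.0, len(obs)))
                      for t in thetas])              # raw-data discrepancy, no summaries
    eps = np.quantile(dists, quantile)               # keep the closest proposals
    return thetas[dists <= eps]

post = abc_rejection(obs)
print(post.mean(), post.std())  # the kept proposals should concentrate near 2
```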

ABC for COVID spread reconstruction

Posted in Books, pictures, Statistics, Travel on December 27, 2021 by xi'an

A recent Nature paper by Jessica Davis et al. (with an assessment by Simon Cauchemez and X from INSERM) reassessed the appearance of COVID in European and American States, accounting for the massive under-reporting in the early days when there was no testing. The approach is based on a complex dynamic model whose parameters are estimated by an ABC algorithm (the reference being the PLoS article that initiated the ABC Wikipedia page). Results are quite interesting in that the distribution of the entry dates covers dates as early as December 2019 in most cases, with a proportion of missed cases as high as 99%.

“As evidence, E, we considered the cumulative number of SARS-CoV-2 cases internationally imported from China up to January 21, 2020”

The model behind remains a classical SLIR model, but with discrete and stochastic dynamics and a geographical compartmentalisation based on a Voronoi tessellation centred at airports, along with commuting intensity and population density. Interventions by local and State authorities are also accounted for. The ABC version is a standard rejection algorithm with a distance based on the evidence as quoted above, which is a form of cdf distance (as in our Wasserstein ABC paper); a toy sketch of both distances follows the quote below. For the posterior distribution of the IFR, a second ABC algorithm uses the relative distance between observed and generated deaths (per country). The paper further investigates different introduction sources (countries) before local transmission was established. For instance, China is shown to be the dominant source for the first EU countries impacted by the pandemic, such as Italy, the UK, Germany, France and Spain. Using a “counterfactual scenario where the surveillance systems of the US states and European countries are imagined to operate at levels able to identify 50% of all imported and locally generated infections”, the authors conclude that

“broadening testing specifications could have considerably slowed the pandemic progression, buying considerable time to prepare mitigation responses.”
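To make the two distances mentioned above concrete, here is a hedged toy sketch of my own (not the paper's code): a cdf-type distance on cumulative imported-case counts over time, and a relative distance on death counts per country, either of which can drive a rejection step. All numbers are hypothetical.

```python
import numpy as np

def cdf_distance(obs_cum, sim_cum):
    """Distance between observed and simulated cumulative imported-case counts
    over time, a discrete analogue of the cdf distance behind Wasserstein ABC."""
    return np.mean(np.abs(np.asarray(obs_cum, float) - np.asarray(sim_cum, float)))

def relative_distance(obs_deaths, sim_deaths):
    """Relative distance between observed and simulated deaths (per country)."""
    obs = np.asarray(obs_deaths, float)
    return np.abs(obs - np.asarray(sim_deaths, float)).sum() / obs.sum()

# hypothetical numbers, for illustration only
print(cdf_distance([0, 1, 3, 8, 20], [0, 2, 4, 9, 18]))  # 1.0
print(relative_distance([10, 50, 200], [12, 45, 210]))   # 17/260 ≈ 0.065
```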

ABC in Svalbard [#2]

Posted in Books, Mountains, pictures, R, Running, Statistics, Travel, University life on April 14, 2021 by xi'an

The second day of the ABC wwworkshop got a better start than yesterday [for me] as I managed to bike to Dauphine early enough to watch the end of Gael's talk and Matias Quiroz's in full on the Australian side (of zoom), with an interesting take on using frequency-domain (pseudo-)likelihoods in complex models. It was followed by two talks on BSL, by David Frazier from Monash and Chris Drovandi from Brisbane, the first on misspecification with a finer analysis as to why synthetic likelihood may prove worse: the Mahalanobis distance behind it may get very small and the predictive distribution of the distance may become multimodal (a sketch of how the synthetic likelihood is evaluated follows below). The talk also pointed out the poor coverage of both ABC and BSL credible intervals. Chris then gave a wide-ranging survey of summary-free likelihood-free approaches, with examples where they fared well against some summary-based solutions. Olivier from Grenoble [with a co-author from Monash, keeping up the Australian theme] discussed dimension reductions that could possibly lead to better summary statistics, albeit unrelated to ABC!
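For readers new to BSL, a minimal sketch of my own (toy Gaussian example, not the speakers' code) of how the synthetic likelihood is evaluated: simulate summaries at θ, fit a Normal to them, and evaluate its log-density at the observed summary, the Mahalanobis distance being the quadratic form inside that Gaussian. The (mean, variance) summary and all constants are assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)

def synthetic_loglik(theta, s_obs, simulate, n_sim=100):
    """Synthetic likelihood: Normal approximation to the summary distribution
    at theta, evaluated at the observed summary s_obs."""
    S = np.array([simulate(theta) for _ in range(n_sim)])  # n_sim x dim summaries
    mu, Sigma = S.mean(0), np.cov(S, rowvar=False)
    # under misspecification, s_obs may fall far from every simulated summary,
    # the failure mode discussed in the talk
    return multivariate_normal(mu, Sigma).logpdf(s_obs)

# toy model (an assumption): summaries = (mean, variance) of a N(theta, 1) sample
def simulate(theta, n=50):
    x = rng.normal(theta, 1.0, n)
    return np.array([x.mean(), x.var()])

s_obs = simulate(2.0)
print(synthetic_loglik(2.0, s_obs, simulate), synthetic_loglik(5.0, s_obs, simulate))
```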

Riccardo Corradin considered this most Milanese problem of all problems (!), namely how to draw inference on completely random distributions. The clustering involved in this inference being costly, the authors use our Wasserstein ABC approach on the partitions, with a further link to our ABC-Gibbs algorithm (which Grégoire had just presented) for the tolerance selection. Marko Järvenpää presented an approach related to a just-published paper in Bayesian Analysis, with a notion of noisy likelihood modelled as a Gaussian process, towards avoiding evaluating the (greedy) likelihood too often, as in the earlier Korattikara et al. (2014) (a toy sketch of the surrogate idea follows below). He also coined the term Bayesian Metropolis-Hastings sampler (as the regular Metropolis (Rosenbluth) is frequentist)! And Pedro Rodrigues discussed using normalising flows in poorly identified (or inverse) models, raising the issue of validating this approximation to the posterior and connecting with earlier talks.
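As a hedged illustration of the Gaussian-process idea (my own toy sketch, not Järvenpää's algorithm): fit a GP to a few noisy log-likelihood estimates over the parameter space and use its posterior mean and uncertainty as a cheap surrogate, deciding where the next expensive evaluation is most useful. The quadratic stand-in likelihood, the kernel choice and the acquisition rule are all assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)

# expensive, noisy log-likelihood estimator (toy stand-in, an assumption)
def noisy_loglik(theta):
    return -0.5 * (theta - 1.0) ** 2 + 0.1 * rng.normal()

thetas = np.linspace(-3, 4, 8)[:, None]      # only a few pilot evaluations
ys = np.array([noisy_loglik(t[0]) for t in thetas])

gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.01)).fit(thetas, ys)

grid = np.linspace(-3, 4, 200)[:, None]
mean, sd = gp.predict(grid, return_std=True)  # the surrogate replaces new evaluations
# e.g. evaluate next where an optimistic bound on the log-likelihood is largest
print(grid[np.argmax(mean + sd)])
```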

The afternoon session was a replay of the earliest talks from the Australian mirrors. Clara Grazian gave the first talk yesterday on using and improving a copula-based ABC, introducing empirical likelihood, Gaussian processes and splines. This led to a question as to whether or not the copula family could be chosen by ABC tools. David Nott raised the issue of conflicting summary statistics, illustrated by a Poisson example with the pair made of the empirical mean and the empirical variance as summary statistic: while the empirical mean is sufficient, conditioning on both leads to a different ABC outcome (a toy reproduction follows below). This indirectly relates to a work in progress in our Dauphine group. Anthony Ebert discussed the difficulty of handling state-space model parameters with ABC: in an ABCSMC² version, the likelihood is integrated out by a particle filter approximation, but this leads to difficulties with the associated algorithm, which I somewhat associate with the discrete nature of the approximation, possibly incorrectly. Jacob Priddle talked about a whitened version of Bayesian synthetic likelihood, arguing that the variance of the Monte Carlo approximation to the moments of the Normal synthetic likelihood is much reduced when the components of the summary statistic are assumed independent. I am somewhat puzzled by the proposal, though, in that the whitening matrix needs to be estimated as well.
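David Nott's Poisson example is easy to reproduce; here is a toy sketch of my own showing how adding the empirical variance to the (sufficient) empirical mean changes the ABC output when the two summaries conflict, as with overdispersed data that no Poisson can match. The negative-binomial data, uniform prior and acceptance rule are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# overdispersed "observed" data (an assumption): mean 5 but variance 10,
# so the mean and the variance point the Poisson model in different directions
obs = rng.negative_binomial(5, 0.5, 100)

def abc(obs, summary, n_prop=20000, keep=200):
    s_obs = summary(obs)
    lams = rng.uniform(0, 20, n_prop)                      # prior draws for lambda
    sims = rng.poisson(lams[:, None], (n_prop, len(obs)))  # one dataset per draw
    d = np.linalg.norm(np.apply_along_axis(summary, 1, sims) - s_obs, axis=-1)
    return lams[np.argsort(d)[:keep]]                      # nearest proposals

post_mean = abc(obs, lambda x: np.array([x.mean()]))           # sufficient summary
post_both = abc(obs, lambda x: np.array([x.mean(), x.var()]))  # conflicting pair
print(post_mean.mean(), post_both.mean())  # the two ABC posteriors differ
```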

Thanks to all colleagues and friends involved in building and running the mirrors and making some exchanges possible despite the distances and time differences! Looking forward to a genuine ABC meeting in a reasonable future and, who knows?!, to reuniting in Svalbard for real, rather than starting a new series of “ABC not in…”! (The temperature in Longyearbyen today was -14°, if this makes some feel better about missing the trip!!!)
