Archive for GANs

Adversarial Bayesian Simulation [One World ABC’minar]

Posted in Statistics on November 15, 2022 by xi'an

The next One World ABC webinar will take place on 24 November, at 1:30 UK time (GMT), and will be presented by Yuexi Wang (University of Chicago) on "Adversarial Bayesian Simulation", available on arXiv. [The link to the webinar is available to those who have registered.]

In the absence of explicit or tractable likelihoods, Bayesians often resort to approximate Bayesian computation (ABC) for inference. In this talk, we will cover two summary-free ABC approaches, both inspired by adversarial learning. The first one adopts a classification-based KL estimator to quantify the discrepancy between real and simulated datasets. We consider the traditional accept/reject kernel as well as an exponential weighting scheme which does not require the ABC acceptance threshold. In the second paper, we develop a Bayesian GAN (B-GAN) sampler that directly targets the posterior by solving an adversarial optimization problem. B-GAN is driven by a deterministic mapping learned on the ABC reference by conditional GANs. Once the mapping has been trained, iid posterior samples are obtained by filtering noise at a negligible additional cost. We propose two post-processing local refinements using (1) data-driven proposals with importance reweighting, and (2) variational Bayes. For both methods, we support our findings with frequentist-Bayesian theoretical results and highly competitive performance in empirical analysis. (Joint work with Veronika Rockova)
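To make the B-GAN mechanism a little more concrete, here is a minimal sketch of a conditional-GAN posterior sampler on a toy Gaussian model (my own illustration, not the authors' code; the simulator, network sizes, and hyper-parameters are all assumptions):

```python
# Sketch of the B-GAN idea: learn a conditional generator G(z, x) on an
# ABC reference table of (theta, x) pairs, then draw (approximate) iid
# posterior samples by pushing noise through G at the observed data.
# Toy model: theta ~ N(0,1), x_i | theta ~ N(theta, 0.5^2), i = 1,2,3.
import torch
import torch.nn as nn

torch.manual_seed(0)

def simulate(n, d=3):
    theta = torch.randn(n, 1)                 # prior draws
    x = theta + 0.5 * torch.randn(n, d)       # simulated datasets
    return theta, x

theta_ref, x_ref = simulate(5000)             # ABC reference table

G = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))
D = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for it in range(2000):
    idx = torch.randint(0, 5000, (128,))
    th, x = theta_ref[idx], x_ref[idx]
    z = torch.randn(128, 1)
    th_fake = G(torch.cat([z, x], 1))
    # discriminator: real pairs (theta, x) against generated (G(z,x), x)
    d_loss = bce(D(torch.cat([th, x], 1)), torch.ones(128, 1)) \
           + bce(D(torch.cat([th_fake.detach(), x], 1)), torch.zeros(128, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator: fool the discriminator
    g_loss = bce(D(torch.cat([th_fake, x], 1)), torch.ones(128, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# once trained, posterior sampling is mere noise filtering
x_obs = torch.full((1000, 3), 0.8)
post = G(torch.cat([torch.randn(1000, 1), x_obs], 1)).detach()
print(post.mean().item(), post.std().item())
# exact posterior here is N(0.74, 0.28^2), for comparison
```

The post-processing refinements of the second paper (importance reweighting, variational Bayes) would then operate on top of such draws.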

acronyms [CDT lectures]

Posted in Books, Statistics on May 16, 2022 by xi'an

This week, I gave a short introductory course in Warwick for the CDT (PhD) students on my perceived connections between reverse logistic regression à la Geyer and GANs, among other things. The first attempt was cancelled in 2020 due to the pandemic, and the second one in 2021 was on-line, thus offering few opportunities for interaction. Preparing for this third attempt made me read more papers on statistical analyses of GANs and WGANs, which was more satisfactory [for me] even though I could not get into the technical details…
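For what it is worth, here is one way to convey that connection in code (a toy sketch of my own, not Geyer's): a logistic regression trained to separate draws from two densities estimates their log density ratio, which is both the engine of reverse logistic regression and the form of the optimal GAN discriminator.

```python
# Classifier as density-ratio estimator: with balanced samples from p0
# and p1, the fitted logit approximates log p1(x) - log p0(x).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000
x0 = rng.normal(0.0, 1.0, n)          # draws from p0 = N(0, 1)
x1 = rng.normal(1.0, 1.5, n)          # draws from p1 = N(1, 1.5^2)

X = np.concatenate([x0, x1])
y = np.concatenate([np.zeros(n), np.ones(n)])
feats = np.column_stack([X, X**2])    # the log-ratio is quadratic here
clf = LogisticRegression().fit(feats, y)

x = 0.5
logit = clf.decision_function([[x, x**2]])[0]
truth = (-np.log(1.5) - 0.5 * ((x - 1) / 1.5) ** 2) - (-0.5 * x**2)
print(logit, truth)                   # both close to -0.34
```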

improving bridge samplers by GANs

Posted in Books, pictures, Statistics on July 20, 2021 by xi'an

Hanwen Xing from Oxford recently posted a paper on arXiv about using GANs to improve the overlap between the densities in bridge sampling, bringing out new connections with noise-contrastive estimation. The idea is to optimise a transform of one of the densities h(·) to bring it closer to the other density k(·), using for instance normalising flows. (The call to transforms for bridge sampling is not new, dating at least to Voter in 1985, the year I was starting my PhD!) Furthermore, using an f-divergence as a measure of functional distance allows for a reasonably straightforward update of the transform. This can be reformulated as a GAN target, which is somewhat natural in that the transform aims at confusing simulation from the transform of h and from k. This is quite an interesting proposal, even though computing the optimal transform is time-consuming and subject to the curse of dimensionality. I also wonder whether iterating the optimisation, one density after the other, would bring further improvement.
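As a stripped-down illustration of why the transform helps (my own toy version, in no way Xing's implementation), consider a bridge estimator of a ratio of normalising constants where a simple moment-matching affine map stands in for the learned flow:

```python
# Toy bridge sampling with a transform: h and k are unnormalised
# Gaussians with little overlap; an affine moment-matching map T stands
# in for the GAN/normalising-flow transform of the paper.
import numpy as np

rng = np.random.default_rng(1)

log_h = lambda x: -0.5 * x**2                   # Z_h = sqrt(2*pi)
log_k = lambda y: -0.5 * ((y - 5) / 2) ** 2     # Z_k = 2*sqrt(2*pi)

xh = rng.normal(0, 1, 100_000)                  # draws from h
xk = rng.normal(5, 2, 100_000)                  # draws from k

# affine map pushing h-draws onto the k region (moment matching)
a = xk.std() / xh.std()
b = xk.mean() - a * xh.mean()
T = lambda x: a * x + b
log_hT = lambda y: log_h((y - b) / a) - np.log(a)  # density of T(X), X ~ h
# note: T leaves the normalising constant of h unchanged

# geometric bridge: estimate Z_k / Z_h from both samples
log_r = lambda y: 0.5 * (log_k(y) - log_hT(y))
estimate = np.exp(log_r(T(xh))).mean() / np.exp(-log_r(xk)).mean()
print(estimate)                                 # true value Z_k / Z_h = 2
```

Without the map, the two samples barely overlap and the bridge estimator degenerates; after the (here essentially exact) transform, the geometric bridge is nearly optimal.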

ISBA 2021 low key

Posted in Kids, Mountains, pictures, Running, Statistics, Travel, University life, Wines on July 2, 2021 by xi'an

Fourth day of ISBA (and ISB@CIRM), which was a bit low key for me as I had a longer hike with my wife in the morning, including a swim in a sea as cold as the Annecy lake last month!, but nonetheless enjoyable and crystal clear. I then attacked my pile of Biometrika submissions that had accumulated beyond the reasonable since last week, chased late participants who had not yet paid, reviewed a paper that was due two weeks ago, chatted with participants before they left, and discussed a research problem, so that I ended up attending only four sessions over the whole day. These included one on Models and Methods for Networks and Graphs, with interesting computational challenges, especially in block models, and the session in memoriam of Hélène Massam, where Gérard Letac (part of ISB@CIRM!), Jacek Wesolowski, and Reza Mohammadi, all coauthors of Hélène, presented their joint advances. Hélène was born in Marseille in 1949 and, even though she did not stay in France after her École Normale studies, attending this session in her birth-place made for a further commemoration. I also found out that they are working on approximating a ratio of normalising constants for the G-Wishart distribution. The last session of my day was the Susie Bayarri memorial lecture, with Tamara Broderick as the lecturer, reporting on an impressive bunch of tricks to reduce computing costs for hierarchical models with Gaussian processes.

ISBA 2021.3

Posted in Kids, Mountains, pictures, Running, Statistics, Travel, University life, Wines on July 1, 2021 by xi'an

Now on the third day, which again started early with a 100% local j-ISBA session. (After a group run to and around Mont Puget, my first real run since 2020!!!) With a second round of talks by junior researchers, from master to postdoc level. Again well-attended. A talk about Bayesian non-parametric sequential taxonomy by Alessandro Zito used the BayesANT acronym, which reminded me of the new wave group Adam and the Ants I was listening to forty years ago, in case they need a song as well as a logo! (Note that BayesANT is also the name of a robot using Bayesian optimisation!) And more generally a wide variety in the themes. Thanks to the j-organisers of this 100% live session!

The next session was on PDMPs, which I helped organise, with Manon Michel speaking from Marseille, exploiting the symmetry around the gradient, which is distribution-free! Then, remotely, Kengo Kamatani, speaking from Tokyo, extended the high-dimensional scaling limit to the Zig-Zag sampler, exhibiting an argument against small refreshment rates, and Murray Pollock, from Newcastle, explained quite clearly the working principles of the Restore algorithm, including why coupling from the past is available in this setting. A well-attended session despite the early hour (in the USA).
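Not Kamatani's scaling argument, obviously, but for readers who have never met the Zig-Zag sampler, here is a bare-bones one-dimensional version for a standard Gaussian target, where the event times are available by exact inversion (the step dt, seed, and sample sizes are arbitrary choices of mine):

```python
# 1D Zig-Zag for N(0,1): velocity v in {-1,+1} flips at rate
# lambda(x, v) = max(0, v * x); event times are sampled by inverting
# the integrated rate along the deterministic trajectory.
import numpy as np

rng = np.random.default_rng(2)
x, v, dt = 0.0, 1.0, 0.01
samples = []

while len(samples) < 200_000:
    e = rng.exponential()
    vx = v * x
    tau = -vx + np.sqrt(max(vx, 0.0) ** 2 + 2 * e)   # next flip time
    # record (nearly) evenly-spaced positions along the segment; the
    # leftover fraction of dt at each event is ignored (O(dt) bias)
    s = dt * (1 + np.arange(int(tau / dt)))
    samples.extend((x + v * s)[: 200_000 - len(samples)])
    x, v = x + v * tau, -v                            # move, then flip

samples = np.asarray(samples)
print(samples.mean(), samples.var())                  # about 0 and 1
```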

Another session of interest for me [which I attended by myself as everyone else was at lunch at CIRM!] was the contributed session C16 on variational and scalable inference, which included a talk on hierarchical Monte Carlo fusion (with my friends Gareth and Murray as co-authors), Darren's call to adopt functional programming in order to save Bayesian computing from extinction, normalising flows for modularisation, and Dennis' adversarial solutions for Bayesian design, avoiding the computation of the evidence.

Wes Johnson's lecture was about stories of setting prior distributions based on experts' opinions. Which reminded me of the short paper Kaniav Kamary and I wrote about ten years ago, in response to a paper on the topic in The American Statistician. And I could not understand the discrepancy between two Bayes factors based on Normal versus Cauchy priors, until I was told they had been mistakenly used repeatedly.

Rushing out of dinner, I attended both the non-parametric session (live with Marta and Antonio!) and the high-dimensional computational session on Bayesian model choice (mute!). A bit of a schizophrenic moment, but one that allowed me to get a rough picture of both areas. At once. Including an adaptive MCMC scheme for selecting models by Jim Griffin, which could be run directly over the model space. With my ever-going wondering at the meaning of neighbour models.
