Archive for seminar
a year ago, a world away
Posted in Statistics with tags Bristol, COVID-19, demonstration, England, flight, Greta Thunberg, pandemics, plane trip, seminar, Travel, United Kingdom, University of Bristol, Wales on February 24, 2021 by xi'an

Metropolis-Hastings via classification
Posted in pictures, Statistics, Travel, University life with tags ABC, ABC consistency, Chicago, Chicago Booth School of Business, deep learning, discriminant analysis, GANs, logistic regression, seminar, summary statistics, synthetic likelihood, University of Oxford, webinar, winter running on February 23, 2021 by xi'an

Veronika Rockova (from Chicago Booth) gave a talk on this theme at the Oxford Stats seminar this afternoon, starting with a survey of ABC, synthetic likelihoods, and pseudo-marginals to motivate her approach via GANs, namely learning an approximation of the likelihood from the GAN discriminator. Her explanation of the GAN-type estimate was crystal clear and made me wonder about the connection with Geyer's 1994 logistic estimator of the likelihood (a form of discriminator with a fixed generator). She also expressed the resulting ABC approximation as the actual posterior times an exponential tilt, which she proved to be of order 1/n, and showed that a random variant of the algorithm (where the shift is averaged) is unbiased. Most interestingly, the method requires no calibration and no tolerance, except indirectly when building the discriminator, and no summary statistic. There remains a noteworthy tension between achieving the correct shape and the correct location.
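As a rough illustration of the classification idea, here is a minimal Python sketch in the spirit of Geyer's 1994 logistic estimator rather than Rockova's actual construction: a logistic regression is trained to discriminate simulations from two parameter values, and its logit then serves as an estimate of the log likelihood ratio. The toy Gaussian generator and all names below are my own choices for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate(theta, n):
    # toy generator: N(theta, 1) samples, standing in for an
    # intractable simulator
    return rng.normal(theta, 1.0, size=(n, 1))

def log_ratio_estimator(x_theta, x_ref):
    # fit a logistic "discriminator" between the two simulated samples;
    # its decision function estimates log p_theta(x) - log p_ref(x)
    X = np.vstack([x_theta, x_ref])
    y = np.concatenate([np.ones(len(x_theta)), np.zeros(len(x_ref))])
    clf = LogisticRegression().fit(X, y)
    return lambda x: clf.decision_function(x)

theta, ref = 1.0, 0.0
est = log_ratio_estimator(simulate(theta, 10_000), simulate(ref, 10_000))

# in this Gaussian toy case the exact log ratio is theta*x - theta^2/2,
# so the linear logistic model is well specified and the fit is close
x = np.linspace(-2.0, 3.0, 5).reshape(-1, 1)
exact = (theta * x - theta**2 / 2).ravel()
print(np.c_[est(x), exact])
```

Plugged into a Metropolis-Hastings ratio, such classifier-based estimates stand in for the intractable likelihood, which is presumably the connection the post title alludes to.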
ABC World seminar
Posted in Books, pictures, Statistics, Travel, University life with tags ABC, ABC in Edinburgh, ABC World seminar, bad map projection, Blackboard Collaborate, lunch, Scotland, seminar, University of Edinburgh, University of Warwick, virtual reality, webinar, xkcd on April 4, 2020 by xi'an

With most of the World more or less confined at home, and conferences cancelled one after the other, including ABC in Grenoble!, we are launching a fortnightly webinar on approximate Bayesian computation, methods, and inference. The idea is to gather members of the community and disseminate results and innovations during these coming weeks and months under lock-down. And hopefully after!
At this point, the interface will be Blackboard Collaborate, run from Edinburgh by Michael Gutmann, for which neither registration nor software is required. Before each talk, a guest link will be mailed to the mailing list. Please register here to join the list.
The seminar is planned on Thursdays at either 9am or, more likely, 11:30am UK time (GMT+1), as we are still debating the best schedule to reach as many populated time zones as possible! The first speakers are
09.04.2020 | Dennis Prangle | Distilling importance sampling
23.04.2020 | Ivis Kerama and Richard Everitt | Rare event SMC²
07.05.2020 | Umberto Picchini | Stratified sampling and bootstrapping for ABC |
Julyan’s talk on priors in Bayesian neural networks [cancelled!]
Posted in pictures, Statistics, Travel, University life with tags All about that Bayes, École Normale de Cachan, Bayesian deep learning, Bayesian neural networks, Cachan, conference cancellation, coronavirus epidemics, ENS Paris-Saclay, Gaussian priors, machine learning, neural network, ReLU, seminar, Université Paris-Saclay on March 5, 2020 by xi'an

Next Friday, 13 March at 1:30 p.m., Julyan Arbel, researcher at Inria Grenoble, will give an All about that Bayes talk at CMLA, ENS Paris-Saclay (building D’Alembert, room Condorcet, Cachan, RER stop Bagneux) on
Understanding Priors in Bayesian Neural Networks at the Unit Level
We investigate deep Bayesian neural networks with Gaussian weight priors and a class of ReLU-like nonlinearities. Bayesian neural networks with Gaussian priors are well known to induce an L² ("weight decay") regularization. Our results characterize a more intricate regularization effect at the level of the unit activations. Our main result establishes that the induced prior distribution on the units before and after activation becomes increasingly heavy-tailed with the depth of the layer. We show that first-layer units are Gaussian, second-layer units are sub-exponential, and units in deeper layers are characterized by sub-Weibull distributions. Our results provide new theoretical insight into deep Bayesian neural networks, which we corroborate with simulation experiments.
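A minimal simulation sketch of this heavy-tail phenomenon: for one fixed input, many networks are drawn from an i.i.d. Gaussian weight prior (the √(2/fan-in) scaling and the sizes below are my own choices, not taken from the paper), and the excess kurtosis of one pre-activation unit is tracked across layers; it starts near zero (Gaussian units in the first layer) and grows with depth.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(1)
d, width, depth, n_nets = 10, 30, 4, 10_000

x = rng.normal(size=d)            # one fixed input
h = np.tile(x, (n_nets, 1))       # same input fed to n_nets prior draws

for layer in range(1, depth + 1):
    fan_in = h.shape[1]
    # one independent Gaussian weight matrix per sampled network
    W = rng.normal(scale=np.sqrt(2.0 / fan_in),
                   size=(n_nets, fan_in, width))
    pre = np.einsum('ni,niw->nw', h, W)   # pre-activations at this layer
    # excess kurtosis ~ 0 at layer 1 (Gaussian), increasing with depth
    print(f"layer {layer}: excess kurtosis of unit 0 = "
          f"{kurtosis(pre[:, 0]):.2f}")
    h = np.maximum(pre, 0.0)              # ReLU
```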