Archive for seminar

Julyan’s talk on priors in Bayesian neural networks [cancelled!]

Posted in pictures, Statistics, Travel, University life on March 5, 2020 by xi'an

Next Friday, 13 March, at 1:30 p.m., Julyan Arbel, researcher at Inria Grenoble, will give an All about that Bayes talk at CMLA, ENS Paris-Saclay (building D’Alembert, room Condorcet, Cachan, RER stop Bagneux) on

Understanding Priors in Bayesian Neural Networks at the Unit Level

We investigate deep Bayesian neural networks with Gaussian weight priors and a class of ReLU-like nonlinearities. Bayesian neural networks with Gaussian priors are well known to induce an L², “weight decay”, regularization. Our results characterize a more intricate regularization effect at the level of the unit activations. Our main result establishes that the induced prior distribution on the units before and after activation becomes increasingly heavy-tailed with the depth of the layer. We show that first layer units are Gaussian, second layer units are sub-exponential, and units in deeper layers are characterized by sub-Weibull distributions. Our results provide new theoretical insight on deep Bayesian neural networks, which we corroborate with simulation experiments.
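As a rough illustration of the heavier-tails-with-depth result (a minimal sketch, not the authors’ simulation code; the width, depth and kurtosis summary are arbitrary illustrative choices), one can propagate a fixed input through a ReLU network with i.i.d. Gaussian weight priors and compare the tails of the unit pre-activations across layers:

```python
import numpy as np

# Toy forward pass: i.i.d. Gaussian weight priors, ReLU activations, fixed input.
rng = np.random.default_rng(1)
n_samples, width, depth = 5_000, 30, 4
x = np.ones(width)                              # fixed input

pre_acts = np.empty((n_samples, depth))
for s in range(n_samples):
    h = x
    for layer in range(depth):
        W = rng.normal(scale=1.0 / np.sqrt(width), size=(width, width))
        pre = W @ h                             # pre-activation of the layer
        pre_acts[s, layer] = pre[0]             # track the first unit of each layer
        h = np.maximum(pre, 0.0)                # ReLU nonlinearity

for layer in range(depth):
    z = pre_acts[:, layer]
    excess_kurtosis = np.mean((z - z.mean())**4) / np.var(z)**2 - 3.0
    print(f"layer {layer + 1}: excess kurtosis = {excess_kurtosis:.2f}")
# Excess kurtosis is zero for a Gaussian and grows with depth, in line with the
# Gaussian / sub-exponential / sub-Weibull progression described in the abstract.
```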


Gabriel’s talk at Warwick on optimal transport

Posted in Statistics on March 4, 2020 by xi'an

in Bristol for the day

Posted in pictures, Statistics, Travel, University life on February 28, 2020 by xi'an

I am in Bristol for the day, giving a seminar at the Department of Statistics, where I had not been for quite a while (and not since the Department moved to a beautifully renovated building). The talk is on ABC-Gibbs, whose revision is on the verge of being resubmitted. (I also hope Greta will let me board my plane tonight…)

Jana de Wiljes’ colloquium at Warwick

Posted in Statistics on February 25, 2020 by xi'an

unbiased MCMC with couplings [4pm, 26 Feb., Paris]

Posted in Books, pictures, Statistics, University life on February 24, 2020 by xi'an

On Wednesday, 26 February, Pierre Jacob (Harvard U, currently visiting Paris-Dauphine) is giving a seminar on unbiased MCMC methods with couplings at AgroParisTech, bvd Claude Bernard, Paris 5ème, Room 32, at 4 p.m., as part of the All about that Bayes seminar.

MCMC methods yield estimators that converge to integrals of interest in the limit of the number of iterations. This iterative asymptotic justification is not ideal; first, it stands at odds with current trends in computing hardware, with increasingly parallel architectures; secondly, the choice of “burn-in” or “warm-up” is arduous. This talk will describe recently proposed estimators that are unbiased for the expectations of interest while having a finite computing cost and a finite variance. They can thus be generated independently in parallel and averaged over. The method also provides practical upper bounds on the distance (e.g. total variation) between the marginal distribution of the chain at a finite step and its invariant distribution. The key idea is to generate “faithful” couplings of Markov chains, whereby pairs of chains coalesce after a random number of iterations. This talk will provide an overview of this line of research.
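As a rough sketch of the coupling construction (an illustrative toy, not Pierre Jacob’s implementation; the standard-normal target, proposal scale and burn-in k are arbitrary choices), one can couple two lag-one random-walk Metropolis-Hastings chains through a maximal coupling of the proposals and a common accept-reject uniform, then form the bias-corrected estimator:

```python
import numpy as np

# Toy setting: 1D standard-normal target, random-walk MH, lag-one coupled chains.
rng = np.random.default_rng(0)

def log_target(x):
    return -0.5 * x**2                           # unnormalised N(0,1) log-density

def mh_step(x, sigma=1.0):
    """A single random-walk Metropolis-Hastings step."""
    xp = rng.normal(x, sigma)
    return xp if np.log(rng.uniform()) < log_target(xp) - log_target(x) else x

def max_coupling_normal(mu1, mu2, sigma):
    """Sample (X, Y) from a maximal coupling of N(mu1, sigma^2) and N(mu2, sigma^2)."""
    logq = lambda z, mu: -0.5 * ((z - mu) / sigma)**2
    x = rng.normal(mu1, sigma)
    if np.log(rng.uniform()) + logq(x, mu1) <= logq(x, mu2):
        return x, x                              # proposals coincide
    while True:
        y = rng.normal(mu2, sigma)
        if np.log(rng.uniform()) + logq(y, mu2) > logq(y, mu1):
            return x, y

def coupled_mh_step(x, y, sigma=1.0):
    """One coupled MH step: maximally coupled proposals, common accept-reject uniform."""
    xp, yp = max_coupling_normal(x, y, sigma)
    logu = np.log(rng.uniform())
    return (xp if logu < log_target(xp) - log_target(x) else x,
            yp if logu < log_target(yp) - log_target(y) else y)

def unbiased_estimate(h, k=10, max_iter=100_000):
    """Unbiased estimator h(X_k) + sum over k<l<tau of [h(X_l) - h(Y_{l-1})]."""
    x0, y0 = rng.normal(scale=3.0), rng.normal(scale=3.0)   # over-dispersed starts
    Xs, Ys = [mh_step(x0)], [y0]                 # X leads Y by one step
    tau = None
    for t in range(2, max_iter):
        xn, yn = coupled_mh_step(Xs[-1], Ys[-1])
        Xs.append(xn); Ys.append(yn)
        if xn == yn:                             # chains have met (and stay together)
            tau = t
            break
    if tau is None:
        raise RuntimeError("chains did not meet within max_iter")
    while len(Xs) < k:                           # make sure X_k exists if meeting was early
        Xs.append(mh_step(Xs[-1]))
    est = h(Xs[k - 1])                           # Xs[l-1] stores X_l
    for l in range(k + 1, tau):                  # bias-correction terms
        est += h(Xs[l - 1]) - h(Ys[l - 1])
    return est

# Independent replicates can be generated in parallel and simply averaged.
draws = [unbiased_estimate(h=lambda x: x**2, k=10) for _ in range(500)]
print(np.mean(draws))                            # close to E[X^2] = 1 under N(0,1)
```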

Judith’s colloquium at Warwick

Posted in Statistics on February 21, 2020 by xi'an

séminaire P de S

Posted in Books, pictures, Statistics, University life on February 18, 2020 by xi'an

As I was in Paris and free for the occasion (!), I attended the Paris Statistics seminar this afternoon, in the Latin Quarter. With a first talk by Kweku Abraham on Bayesian inverse problems that set a prior on the quantity of interest, γ, rather than on its transform G(γ), which is observed with noise. I am always perturbed by the juggling of different distances, like L² versus Kullback-Leibler, in non-parametric frameworks. The setting reminded me of probabilistic numerics, at least in its framework, since the crux of the talk was entirely about convergence. Then a second talk by Lénaïc Chizat on convex neural networks corresponding to an infinite number of neurons, with surprising properties, including an implicit bias. And a third talk by Anne Sabourin on PCA for extremes, which assumed very little about the model but more about the geometry of the distribution, such as extremes being concentrated on a subspace. As I was rather tired from an intense week at Warwick, and after a weekend of reading grant applications and Biometrika submissions (!), my foggy brain kept switching to these proposals, trying to make connections with the talks, not completely inappropriately in two cases out of three. (I am afraid the same may happen tomorrow at our probability seminar on computer-based proofs!)