Archive for Stein’s method

JB³ [Junior Bayes beyond the borders]

Posted in Books, Statistics, University life on June 22, 2020 by xi'an

Bocconi and j-ISBA are launching a webinar series for and by junior Bayesian researchers. The first talk is on 25 June at 3pm UTC/GMT (5pm CET), given by François-Xavier Briol, one of the laureates of the 2020 Savage Thesis Prize (and a former graduate of OxWaSP, the Oxford-Warwick doctoral training program), on Stein's method for Bayesian computation, with Nicolas Chopin as discussant.

As pointed out on their webpage,

Due to the importance of the above endeavor, JB³ will continue after the health emergency as an annual series. It will include various refinements aimed at increasing the involvement of the whole junior Bayesian community and facilitating broader participation in the online seminars from all over the world via various online solutions.

Thanks to all my friends at Bocconi for running this experiment!

Stein’s method in machine learning [workshop]

Posted in pictures, Running, Statistics, Travel, University life on April 5, 2019 by xi'an

There will be an ICML workshop on Stein's method in machine learning & statistics on July 14 or 15 in Long Beach, CA, organised by François-Xavier Briol (formerly Warwick), Lester Mackey, Chris Oates (formerly Warwick), Qiang Liu, and Larry Goldstein. To quote from the workshop webpage:

Stein’s method is a technique from probability theory for bounding the distance between probability measures using differential and difference operators. Although the method was initially designed as a technique for proving central limit theorems, it has recently caught the attention of the machine learning (ML) community and has been used for a variety of practical tasks. Recent applications include goodness-of-fit testing, generative modeling, global non-convex optimisation, variational inference, de novo sampling, constructing powerful control variates for Monte Carlo variance reduction, and measuring the quality of Markov chain Monte Carlo algorithms.
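To make the opening definition concrete, here is a minimal numerical sketch (my own illustration, not material from the workshop) of a kernelised Stein discrepancy between a sample and a standard normal target. Applying the Langevin Stein operator to both arguments of an RBF kernel produces a "Stein kernel" u_p whose average over sample pairs vanishes, in expectation, exactly when the sample is drawn from the target p; the target, kernel, and bandwidth below are arbitrary illustrative choices.

    import numpy as np

    def ksd_gaussian(x, h=1.0):
        """V-statistic estimate of the kernel Stein discrepancy between the
        empirical distribution of a 1-D sample x and a standard normal target,
        using an RBF kernel with bandwidth h."""
        x = np.asarray(x, dtype=float)
        d = x[:, None] - x[None, :]          # pairwise differences x_i - x_j
        k = np.exp(-d**2 / (2 * h**2))       # RBF kernel values k(x_i, x_j)
        dkx = -d / h**2 * k                  # derivative of k in its first argument
        dky = d / h**2 * k                   # derivative of k in its second argument
        dkxy = (1 / h**2 - d**2 / h**4) * k  # mixed second derivative of k
        s = -x                               # score of N(0,1): (log p)'(x) = -x
        u = (s[:, None] * s[None, :] * k     # Stein kernel u_p(x_i, x_j)
             + s[:, None] * dky
             + s[None, :] * dkx
             + dkxy)
        return np.sqrt(u.mean())             # V-statistic (diagonal included)

    rng = np.random.default_rng(0)
    print(ksd_gaussian(rng.normal(size=500)))       # near zero: sample matches target
    print(ksd_gaussian(rng.normal(2.0, 1.0, 500)))  # larger: shifted sample detected

A sample from the target yields a value near zero while a shifted sample yields a markedly larger one, which is what makes such discrepancies usable for goodness-of-fit testing and for measuring the quality of Markov chain Monte Carlo output.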

Speakers include Anima Anandkumar, Lawrence Carin, Louis Chen, Andrew Duncan, Arthur Gretton, and Susan Holmes. I am quite sorry to miss two workshops in a row dedicated to Stein's work, the other being held at NUS, Singapore, around the Stein paradox.

Michael Jordan’s course at CREST

Posted in Statistics, University life on March 26, 2013 by xi'an

Next month, Michael Jordan will give an advanced course at CREST-ENSAE, Paris, on Recent Advances at the Interface of Computation and Statistics. The course will take place in Room #11 at ENSAE on April 4 (14:00), April 11 (14:00), April 15 (11:00), and April 18 (14:00). It is open to everyone and attendance is free; the only constraint is compulsory registration with Nadine Guedj (email: guedj[AT]ensae.fr), for security reasons. I strongly advise all graduate students who can attend to grasp this fantastic opportunity! Here is the abstract of the course:

“I will discuss several recent developments in areas where statistical science meets computational science, with particular concern for bringing statistical inference into contact with distributed computing architectures and with recursive data structures:

  1. How does one obtain confidence intervals in massive data sets? The bootstrap principle suggests resampling data to obtain fluctuations in the values of estimators, and thereby confidence intervals, but this is infeasible computationally with massive data. Subsampling the data yields fluctuations on the wrong scale, which have to be corrected to provide calibrated statistical inferences. I present a new procedure, the “bag of little bootstraps,” which circumvents this problem, inheriting the favorable theoretical properties of the bootstrap but also having a much more favorable computational profile. [A toy implementation is sketched after this abstract.]

  2. The problem of matrix completion has been the focus of much recent work, both theoretical and practical. To take advantage of distributed computing architectures in this setting, it is natural to consider divide-and-conquer algorithms for matrix completion. I show that these work well in practice, but also note that new theoretical problems arise when attempting to characterize the statistical performance of these algorithms. Here the theoretical support is provided by concentration theorems for random matrices, and I present a new approach to matrix concentration based on Stein’s method.

  3. Bayesian nonparametrics involves replacing the “prior distributions” of classical Bayesian analysis with “prior stochastic processes.” Of particular value are the class of “combinatorial stochastic processes,” which make it possible to express uncertainty (and perform inference) over combinatorial objects that are familiar as data structures in computer science.”
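The “bag of little bootstraps” of item 1 can be sketched in a few lines: draw small subsets of size b = n^γ, simulate full size-n resamples from each subset cheaply through multinomial weights over its b points, form a confidence interval per subset, and average the intervals. The sketch below is a toy illustration under those assumptions, not the paper's implementation; the values γ = 0.6, s = 20 subsets, and r = 100 resamples are arbitrary defaults.

    import numpy as np

    def blb_ci(data, estimator, gamma=0.6, s=20, r=100, alpha=0.05, rng=None):
        """Bag-of-little-bootstraps approximation of a (1 - alpha) confidence
        interval for estimator(data), never resampling n points directly."""
        if rng is None:
            rng = np.random.default_rng()
        n = len(data)
        b = int(n ** gamma)                       # small subset size b = n^gamma
        lo, hi = [], []
        for _ in range(s):                        # s random subsets of size b
            subset = rng.choice(data, size=b, replace=False)
            stats = []
            for _ in range(r):                    # r simulated size-n resamples,
                w = rng.multinomial(n, np.full(b, 1.0 / b))  # via multinomial weights
                stats.append(estimator(np.repeat(subset, w)))
            lo.append(np.quantile(stats, alpha / 2))
            hi.append(np.quantile(stats, 1 - alpha / 2))
        return np.mean(lo), np.mean(hi)           # average the per-subset intervals

    rng = np.random.default_rng(1)
    data = rng.exponential(size=100_000)
    print(blb_ci(data, np.mean, rng=rng))         # should bracket the true mean 1.0

Note that np.repeat materialises n points for simplicity; a weight-aware estimator would work directly with the b distinct values and their multinomial counts, which is where the computational savings come from.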
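And as one concrete member of the class of combinatorial stochastic processes of item 3, here is a sketch of the Chinese restaurant process (my choice of example, not drawn from the course), which generates the random partitions underlying Dirichlet process mixture models.

    import numpy as np

    def chinese_restaurant_process(n, alpha=1.0, rng=None):
        """Sample a random partition of n items from the Chinese restaurant
        process with concentration parameter alpha; returns table sizes."""
        if rng is None:
            rng = np.random.default_rng()
        counts = []                               # counts[k] = size of table k
        for i in range(n):
            # customer i joins table k w.p. counts[k]/(i + alpha),
            # or opens a new table w.p. alpha/(i + alpha)
            probs = np.array(counts + [alpha]) / (i + alpha)
            k = rng.choice(len(probs), p=probs)
            if k == len(counts):
                counts.append(1)                  # new table opened
            else:
                counts[k] += 1
        return counts

    print(chinese_restaurant_process(1000, alpha=2.0))  # a few large tables, many small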

References are available on Michael’s homepage.