Andrew Gelman will be visiting Paris-Dauphine and CREST next academic year, with support from those institutions as well as CNRS and the Ville de Paris. Which is why he is learning how to pronounce Le loup est revenu. (Maybe not why, as this is not the most useful sentence in downtown Paris…) Very exciting news for all of us local Bayesians (or bayésiens). In addition, Andrew will teach from the latest edition of his book Bayesian Data Analysis, co-authored with John Carlin, Hal Stern, David Dunson, Aki Vehtari, and Don Rubin. He will actually start teaching in mid-October, which means the book will not be out yet: so the students at Paris-Dauphine and ENSAE will get a true avant-première of Bayesian Data Analysis. Of course, this item of information will be sadistically tantalising to ‘Og’s readers who cannot spend the semester in Paris. For those who can, I presume there is a way to register for the course as an auditeur libre at either Paris-Dauphine or ENSAE.
Following the talks given today in Dauphine by Ed George (Wharton) and Feng Liang (University of Illinois at Urbana-Champaign), Natalia Bochkina (University of Edinburgh) will give a talk on Thursday, June 20, at 2pm in Room 18 at ENSAE (Malakoff) [not Dauphine!]. Here is her abstract:
Simultaneous local and global adaptivity of Bayesian wavelet estimators in nonparametric regression, by Natalia Bochkina
We consider wavelet estimators in the context of nonparametric regression, with the aim of finding estimators that simultaneously achieve the local and global adaptive minimax rate of convergence. It is known that one estimator, the James-Stein block thresholding estimator of T. Cai (2008), achieves both optimal rates of convergence simultaneously, but only over a limited set of Besov spaces; in particular, over the sets of spatially inhomogeneous functions (with 1 ≤ p < 2) the upper bound on the global rate of this estimator is slower than the optimal minimax rate.
Another possible candidate to achieve both rates of convergence simultaneously is the Empirical Bayes estimator of Johnstone and Silverman (2005), an adaptive estimator that achieves the global minimax rate over a wide range of Besov spaces and Besov balls. The maximum marginal likelihood approach is used to estimate the hyperparameters, and it can be interpreted as a Bayesian estimator with a uniform prior. We show that it also achieves the adaptive local minimax rate over all Besov spaces, and hence it does indeed achieve both local and global rates of convergence simultaneously over Besov spaces. We also give an example of how it works in practice.
Here are the slides (in French) of a presentation of my Master TSI given at ENSAE today:
Next month, Michael Jordan will give an advanced course at CREST-ENSAE, Paris, on Recent Advances at the Interface of Computation and Statistics. The course will take place on April 4 (14:00, ENSAE, Room #11), 11 (14:00, ENSAE, Room #11), 15 (11:00, ENSAE, Room #11), and 18 (14:00, ENSAE, Room #11). It is open to everyone and attendance is free. The only constraint is a compulsory registration with Nadine Guedj (email: guedj[AT]ensae.fr), for security reasons. I strongly advise all graduate students who can take advantage of this fantastic opportunity to do so! Here is the abstract of the course:
“I will discuss several recent developments in areas where statistical science meets computational science, with particular concern for bringing statistical inference into contact with distributed computing architectures and with recursive data structures:
How does one obtain confidence intervals in massive data sets? The bootstrap principle suggests resampling data to obtain fluctuations in the values of estimators, and thereby confidence intervals, but this is infeasible computationally with massive data. Subsampling the data yields fluctuations on the wrong scale, which have to be corrected to provide calibrated statistical inferences. I present a new procedure, the “bag of little bootstraps,” which circumvents this problem, inheriting the favorable theoretical properties of the bootstrap but also having a much more favorable computational profile.
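The resampling trick behind the bag of little bootstraps can be sketched in a few lines of Python. Everything below is an illustrative assumption rather than the procedure's tuned defaults: the exponential toy data, the subsample size b = n^0.6, and the numbers of subsamples and resamples are all my choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
data = rng.exponential(scale=2.0, size=n)  # toy "massive" sample; estimand: the mean

# Bag of little bootstraps: draw s small subsamples of size b = n^0.6, then
# within each subsample simulate full-size (size-n) bootstrap resamples via a
# multinomial over the b distinct values, so fluctuations are on the right scale.
s, b, r = 20, int(n ** 0.6), 50
widths = []
for _ in range(s):
    sub = rng.choice(data, size=b, replace=False)
    estimates = []
    for _ in range(r):
        counts = rng.multinomial(n, np.full(b, 1.0 / b))  # weights summing to n
        estimates.append(np.dot(counts, sub) / n)         # weighted sample mean
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    widths.append(hi - lo)

# Average the per-subsample interval widths to calibrate the final interval.
print(np.mean(widths))
```

The key point is that each multinomial draw simulates a full-size bootstrap resample while only ever touching the b distinct values of the subsample, so no resample of size n is materialised and the computation parallelises across subsamples.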
The problem of matrix completion has been the focus of much recent work, both theoretical and practical. To take advantage of distributed computing architectures in this setting, it is natural to consider divide-and-conquer algorithms for matrix completion. I show that these work well in practice, but also note that new theoretical problems arise when attempting to characterize the statistical performance of these algorithms. Here the theoretical support is provided by concentration theorems for random matrices, and I present a new approach to matrix concentration based on Stein’s method.
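A minimal numpy sketch of the divide-and-conquer idea, assuming a crude hard-impute base solver and a column-projection recombination step; both are hypothetical stand-ins for the algorithms discussed in the course, chosen only to make the divide/conquer structure concrete:

```python
import numpy as np

rng = np.random.default_rng(1)
# Rank-2 ground truth with roughly half of the entries observed.
n, rank = 60, 2
M = rng.normal(size=(n, rank)) @ rng.normal(size=(rank, n))
mask = rng.random((n, n)) < 0.5

def complete_block(obs, obs_mask, rank):
    # Hypothetical base solver: hard-impute, i.e. alternate a rank-r SVD
    # truncation with re-imposing the observed entries.
    X = np.where(obs_mask, obs, 0.0)
    for _ in range(200):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X[obs_mask] = obs[obs_mask]
    return X

# Divide: complete each column block independently (embarrassingly parallel).
blocks = np.array_split(np.arange(n), 3)
parts = [complete_block(np.where(mask[:, b], M[:, b], 0.0), mask[:, b], rank)
         for b in blocks]

# Conquer: project every block onto the column space estimated from one block,
# so the pieces agree on a common rank-r subspace.
U0 = np.linalg.svd(parts[0], full_matrices=False)[0][:, :rank]
est = np.hstack([U0 @ (U0.T @ P) for P in parts])
rel_err = np.linalg.norm(est - M) / np.linalg.norm(M)
print(rel_err)  # relative recovery error
```

The statistical question raised in the abstract is precisely how much this kind of blockwise completion plus projection loses relative to completing the full matrix at once.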
Bayesian nonparametrics involves replacing the “prior distributions” of classical Bayesian analysis with “prior stochastic processes.” Of particular value are the class of “combinatorial stochastic processes,” which make it possible to express uncertainty (and perform inference) over combinatorial objects that are familiar as data structures in computer science.”
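One canonical combinatorial stochastic process is the Chinese restaurant process, which induces a prior over random partitions of the data; here is a minimal sampler (the function name and parameter choices are mine, for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

def chinese_restaurant_process(n_customers, alpha):
    # Seat customers one at a time: customer i joins an existing table with
    # probability proportional to its occupancy, or opens a new table with
    # probability proportional to alpha (the concentration parameter).
    tables = []   # occupancy count of each table
    seats = []    # table index assigned to each customer
    for _ in range(n_customers):
        weights = np.array(tables + [alpha], dtype=float)
        k = rng.choice(len(weights), p=weights / weights.sum())
        if k == len(tables):
            tables.append(1)      # a new table is opened
        else:
            tables[k] += 1
        seats.append(k)
    return seats, tables

seats, tables = chinese_restaurant_process(200, alpha=1.0)
print(len(tables))  # the number of occupied tables grows like alpha * log(n)
```

The resulting partition (customers to tables) is exactly the kind of combinatorial object, familiar as a data structure in computer science, over which these processes let one express uncertainty and perform inference.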
References are available on Michael’s homepage.
On Monday, Ildar Ibragimov (St. Petersburg Department of Steklov Mathematical Institute, Russia) will give a seminar at CREST on “The Darmois – Skitovich and Ghurye – Olkin theorems revisited“. This sounds more like probability than statistics, as those theorems state that, if two linear combinations of independent rv’s are independent, then those rv’s are normal. See those remarks by Prof. Abram Kagan for historical details. Nonetheless, I find it quite an event to have a local seminar given by one of the fathers of asymptotic Bayesian theory. Here is the abstract of the talk. (The talk will be at ENSAE, Salle S8, at 3pm on Monday, March 18.)
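The phenomenon behind the Darmois–Skitovich theorem is easy to see numerically. The sketch below uses Cov(X+Y, (X−Y)²) as a crude dependence probe (the probe, the distributions, and the sample size are my choices, not from the talk): it vanishes when X+Y and X−Y are independent, which happens for normal variables but not, say, for exponential ones.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

def dependence_probe(x, y):
    # For independent X and Y, S = X + Y and D = X - Y are themselves
    # independent only when X and Y are normal (Darmois-Skitovich).
    # Cov(S, D^2) is a cheap probe: it vanishes when S and D are independent.
    s, d = x + y, x - y
    return np.cov(s, d ** 2)[0, 1]

probe_normal = dependence_probe(rng.normal(size=n), rng.normal(size=n))
probe_expo = dependence_probe(rng.exponential(size=n), rng.exponential(size=n))
print(probe_normal, probe_expo)  # near 0 for normals; nonzero (theory: 4) for Exp(1)
```

Of course a vanishing covariance with D² is only necessary, not sufficient, for independence; the theorem says the full independence of S and D characterises normality.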