Archive for Besov spaces

reading classics (#8)

Posted in Books, Kids, Statistics, University life on January 23, 2014 by xi'an

La Défense from Paris-Dauphine, Nov. 15, 2012

The Reading Classics Seminar today was about (the classic) Donoho and Johnstone's denoising through wavelets, a 1995 JASA paper entitled Adapting to unknown smoothness via wavelet shrinkage. These are two themes (shrinkage and wavelets) I discovered during my PhD years, although I did not work on wavelets then: I simply attended seminars where wavelets were the new thing. My student Anouar Seghir gave a well-prepared presentation, introducing wavelets and Stein's unbiased estimator of the risk; he clearly worked hard on the paper. However, I still feel the talk focussed too much on the maths and not enough on the motivations. For instance, I failed to understand why the variance of the white noise was assumed known and where the sparsity indicator came from. (This is anyway a common flaw in those Reading Classics presentations.) The presentation was helped by an on-line demonstration in Matlab, using the resident wavelet code. Here are the slides:
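As a minimal sketch of the wavelet shrinkage idea behind the paper (not the SureShrink procedure itself, which tunes the threshold by Stein's unbiased risk estimate): a one-level Haar transform with soft thresholding of the detail coefficients at the universal threshold σ√(2 log n), assuming the noise level σ is known, as in the white-noise model discussed above. All function names are illustrative.

```python
import math

def haar_dwt(x):
    """One-level Haar transform of an even-length signal:
    returns (approximation, detail) coefficients."""
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt: reconstruct the signal from (a, d)."""
    x = []
    for ai, di in zip(a, d):
        x.append((ai + di) / math.sqrt(2))
        x.append((ai - di) / math.sqrt(2))
    return x

def soft(w, t):
    """Soft thresholding: shrink w towards zero by t, kill it below t."""
    return math.copysign(max(abs(w) - t, 0.0), w)

def denoise(x, sigma):
    """VisuShrink-style denoising (assumed known sigma): soft-threshold
    the detail coefficients at the universal threshold sigma*sqrt(2 log n)."""
    n = len(x)
    t = sigma * math.sqrt(2 * math.log(n))
    a, d = haar_dwt(x)
    d = [soft(w, t) for w in d]
    return haar_idwt(a, d)
```

In practice one would recurse the transform over several levels and estimate σ from the finest-scale details (e.g. by the median absolute deviation), but the one-level version already shows where the shrinkage acts.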

top model choice week (#2)

Posted in Statistics, University life on June 18, 2013 by xi'an

La Défense and Maisons-Laffitte from my office, Université Paris-Dauphine, Nov. 05, 2011

Following the talks by Ed George (Wharton) and Feng Liang (University of Illinois at Urbana-Champaign) today in Dauphine, Natalia Bochkina (University of Edinburgh) will give a talk on Thursday, June 20, at 2pm in Room 18 at ENSAE (Malakoff) [not Dauphine!]. Here is her abstract:

Simultaneous local and global adaptivity of Bayesian wavelet estimators in nonparametric regression, by Natalia Bochkina

We consider wavelet estimators in the context of nonparametric regression, with the aim of finding estimators that simultaneously achieve the local and global adaptive minimax rates of convergence. It is known that one estimator, the James-Stein block thresholding estimator of T. Cai (2008), achieves both optimal rates simultaneously, but only over a limited set of Besov spaces; in particular, over the sets of spatially inhomogeneous functions (with 1 ≤ p < 2) the upper bound on the global rate of this estimator is slower than the optimal minimax rate.

Another possible candidate to achieve both rates of convergence simultaneously is the empirical Bayes estimator of Johnstone and Silverman (2005), an adaptive estimator that achieves the global minimax rate over a wide range of Besov spaces and Besov balls. The maximum marginal likelihood approach is used to estimate the hyperparameters, and it can be interpreted as a Bayesian estimator with a uniform prior. We show that it also achieves the adaptive local minimax rate over all Besov spaces, and hence that it does indeed achieve both local and global rates of convergence simultaneously over Besov spaces. We also give an example of how it works in practice.
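To fix ideas on the block thresholding estimator mentioned in the abstract, here is an illustrative sketch (not from the abstract) of James-Stein block shrinkage in the spirit of Cai: within each block of L wavelet coefficients, the block is shrunk by the factor (1 − λLσ²/S²)₊, where S² is the block energy. The constant λ ≈ 4.505 is the calibration used in Cai's BlockJS work; the function name and known noise level σ are assumptions of this sketch.

```python
def block_js(coeffs, sigma, L):
    """Illustrative James-Stein block shrinkage: split the coefficient
    sequence into blocks of length L and shrink each block by the
    nonnegative factor (1 - lam * L * sigma^2 / S2)_+, where S2 is the
    block's energy. Blocks with small energy are killed entirely."""
    lam = 4.505  # calibration constant from Cai's block thresholding (assumption)
    out = []
    for i in range(0, len(coeffs), L):
        block = coeffs[i:i + L]
        s2 = sum(w * w for w in block)
        factor = max(1.0 - lam * L * sigma**2 / s2, 0.0) if s2 > 0 else 0.0
        out.extend(w * factor for w in block)
    return out
```

The point of blocking is that neighbouring coefficients pool information: a block with energy well above the noise floor is kept (mildly shrunk), while an entire low-energy block is set to zero, which is what drives the local/global rate trade-off the abstract discusses.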