Archive for bounded normal mean problem

reading classics (#9,10)

Posted in Books, Kids, Statistics, University life on January 28, 2014 by xi'an

La Défense from Paris-Dauphine, Nov. 15, 2012

Today was the very last session of our Reading Classics Seminar for the academic year 2013-2014. We listened to two presentations, one on the Casella and Strawderman (1981) paper on the estimation of a bounded normal mean, and one on Hartigan and Wong's 1979 K-Means Clustering Algorithm paper in JRSS C. The first presentation did not go well, as my student had difficulties with the maths behind the paper. (As he did not come to ask me or others for help, it may well be that he put this talk together at the last minute, at a time busy with finals and project deliveries. He also failed to exploit the earlier presentations of the paper.) The innovative part of the talk was the presentation of several R simulations comparing the risk of the minimax Bayes estimator with that of the MLE, although the choice of simulating different samples of standard normals for different values of the parameter, and even for the two estimators, made the curves (unnecessarily) all wiggly.
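As an aside, the wiggliness is easy to avoid with common random numbers: reuse the very same normal draws for every value of the parameter and for both estimators. Here is a minimal sketch of such a comparison (in Python rather than the R used in the talk), taking the Bayes estimator associated with the symmetric two-point prior on ±m, which Casella and Strawderman show to be minimax for small enough bounds:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 1.0                             # bound on the mean: |theta| <= m
z = rng.standard_normal(10_000)     # common normal draws, reused throughout

def mle(x, m):
    # MLE of a mean restricted to [-m, m]: project x onto the interval
    return np.clip(x, -m, m)

def bayes_two_point(x, m):
    # Bayes estimator under the prior putting mass 1/2 on each of -m, +m,
    # least favourable (hence minimax) for small enough m
    return m * np.tanh(m * x)

thetas = np.linspace(-m, m, 21)
risk_mle = [np.mean((mle(t + z, m) - t) ** 2) for t in thetas]
risk_bayes = [np.mean((bayes_two_point(t + z, m) - t) ** 2) for t in thetas]
```

Plotting `risk_mle` and `risk_bayes` against `thetas` then produces smooth curves, the maximal risk of the truncated MLE sitting above that of the two-point-prior Bayes estimator.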

By contrast, the second presentation was very well-designed, with great Beamer slides, interactive features and a software-oriented focus. My student Mouna Berrada started from the existing R function kmeans to explain the principles of the algorithm, recycling the interactive presentation of last year as well (with my permission), and creating a dynamic flowchart that was most helpful. So she made the best of this very short paper, just (predictably) missing the question of the statistical model behind the procedure. During the discussion, I mused about why k-medians clustering is not more popular, as it offers higher robustness guarantees, albeit while standing further away from a genuine statistical model, and why k-means clustering is not more systematically compared with mixture (EM) estimation.
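For readers curious about the mechanics, the alternation underlying k-means, and the single change that turns it into the more robust k-medians, can be sketched in a few lines (in Python rather than R, and in the plain Lloyd form rather than Hartigan and Wong's refined updates; the function name and toy data are mine):

```python
import numpy as np

def k_centres(X, k, use_median=False, n_iter=50, seed=0):
    # Lloyd-style alternation: assign each point to its nearest centre,
    # then recompute each centre from its assigned points.
    # With use_median=True, centres are coordinate-wise medians (k-medians),
    # which resists outliers better than the mean.
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        agg = np.median if use_median else np.mean
        new = np.array([agg(X[labels == j], axis=0) if np.any(labels == j)
                        else centres[j] for j in range(k)])
        if np.allclose(new, centres):
            break
        centres = new
    return centres, labels

# toy example: two well-separated Gaussian clusters in the plane
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(8, 1, (50, 2))])
centres, labels = k_centres(X, 2)
```

Swapping `agg` for a weighted mean under posterior membership probabilities is essentially the M-step of the EM algorithm for a Gaussian mixture, which is one way to frame the comparison raised in the discussion.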

Here are the slides for the second talk:

bounded normal mean

Posted in R, Statistics, University life on November 25, 2011 by xi'an

A few days ago, one of my students, Jacopo Primavera (from La Sapienza, Roma) presented his "reading the classic" paper, namely the terrific bounded normal mean paper by my friends George Casella and Bill Strawderman (1981, Annals of Statistics). Even though I knew this paper quite well, having read (and studied) it myself many times, starting in 1987 in Purdue with Mary Ellen Bock, it was a pleasure to spend another hour on it, as I came up with new perspectives and new questions. Above are my scribbled notes on the back of the [Epson] beamer documentation. One such interesting question is whether or not it is possible to devise a computer code that would [approximately] produce the support of the least favourable prior for a given bound m (in a reasonable time). Another open question is to find the limiting bounds for which a 2-point, a 3-point, &tc., support prior is the least favourable prior. This was established in Casella and Strawderman for bounds less than 1.08 and for bounds between 1.4 and 1.6, but I am not aware of other results in that direction… Here are the slides used by Jacopo:
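On the first question, a naive numerical check is at least feasible: for a candidate two-point prior on ±m, one can compute the risk of its Bayes estimator, m tanh(mx), by quadrature and verify that the maximum over [−m, m] is attained at the support points, as least favourability requires. A rough sketch (in Python, with a simple Riemann sum standing in for the integral; the function name is mine):

```python
import numpy as np

def risk_two_point(theta, m, half_width=8.0, n=4001):
    # Risk of the Bayes estimator m*tanh(m*x) under the symmetric two-point
    # prior on {-m, +m}, integrating the squared error against the density
    # of x ~ N(theta, 1) by a Riemann sum on a symmetric grid
    grid = np.linspace(-half_width, half_width, n)
    dx = grid[1] - grid[0]
    phi = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi)
    delta = m * np.tanh(m * (theta + grid))
    return np.sum((delta - theta) ** 2 * phi) * dx

m = 0.8   # below the 1.08 threshold mentioned above
thetas = np.linspace(-m, m, 41)
risks = np.array([risk_two_point(t, m) for t in thetas])
# for a least favourable two-point prior, the risk should peak at the
# endpoints +/-m, i.e. on the prior's support
```

Repeating this over a grid of m values, the first bound at which the maximum escapes the endpoints would locate, numerically at least, the transition towards a 3-point support.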
