## Archive for shrinkage estimation

## 35 years ago…

Posted in Books, Kids, Statistics, Travel, University life with tags Jean-Pierre Raoult, José de Sam Lazaro, Normandy, PhD thesis, postdoctoral position, Purdue University, shrinkage estimation, Université de Rouen, visiting position on July 2, 2022 by xi'an

## Bill’s 80th birthday

Posted in Statistics, Travel, University life with tags 80th birthday, Cornell University, flight, frequentist inference, James-Stein estimator, mathematical statistics, New York, Pitman nearness, Rutgers University, shrinkage estimation, William Strawderman on March 30, 2022 by xi'an

## estimation of a normal mean matrix

Posted in Statistics with tags Biometrika, Charles Stein, Cornell University, James-Stein estimator, Purdue University, Rutgers University, shrinkage estimation, Springer-Verlag, superharmonicity, Université de Rouen on May 13, 2021 by xi'an

**A** few days ago, I noticed that the paper Estimation under matrix quadratic loss and matrix superharmonicity by Takeru Matsuda and my friend Bill Strawderman had appeared in Biometrika. *(Disclaimer: I was not involved in handling the submission!)* This is a “classical” shrinkage estimation problem, in that estimators of a normal mean matrix are compared under a quadratic loss, using Charles Stein’s technique of unbiased estimation of the risk. The authors show that the Efron–Morris estimator is minimax. They also introduce a notion of superharmonicity for matrix-variate functions, towards showing that generalized Bayes estimators with respect to matrix superharmonic priors are minimax, including for a generalization of Stein’s prior. This superharmonicity relates to (much) earlier results by Ed George (1986), Mary-Ellen Bock (1988), and Dominique Fourdrinier, Bill Strawderman, and Marty Wells (1998). (All of whom I worked with in the 1980’s and 1990’s, in Rouen, Purdue, and Cornell!) This paper also made me realise that Dominique, Bill, and Marty had published a Springer book on shrinkage estimators a few years ago, and that I had missed it..!
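As a quick numerical illustration (mine, not from the paper), here is a minimal Python sketch of the Efron–Morris estimator X(I − (n−p−1)(XᵀX)⁻¹) for an n×p observation with i.i.d. standard normal noise, compared by Monte Carlo with the maximum likelihood estimator X at the origin, where shrinkage gains are largest; the dimensions and replication count are arbitrary choices for the demo.

```python
import numpy as np

def efron_morris(X):
    """Efron-Morris shrinkage estimator of a normal mean matrix.

    For an n x p observation X with independent N(m_ij, 1) entries and
    n - p - 1 > 0, returns X (I_p - (n - p - 1) (X'X)^{-1}).
    """
    n, p = X.shape
    if n - p - 1 <= 0:
        raise ValueError("requires n > p + 1")
    return X @ (np.eye(p) - (n - p - 1) * np.linalg.inv(X.T @ X))

# Monte Carlo comparison with the MLE (X itself) at M = 0
rng = np.random.default_rng(42)
n, p, reps = 10, 3, 2000
M = np.zeros((n, p))                      # true mean matrix
loss_mle = loss_em = 0.0
for _ in range(reps):
    X = M + rng.standard_normal((n, p))
    loss_mle += np.sum((X - M) ** 2) / reps               # averages to n*p = 30
    loss_em += np.sum((efron_morris(X) - M) ** 2) / reps  # markedly smaller at M = 0
```

The averaged quadratic loss of the MLE is n·p = 30, while the Efron–Morris estimator does substantially better near the origin, consistent with its minimaxity.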

## a Bayesian interpretation of FDRs?

Posted in Statistics with tags baseball data, empirical Bayes methods, false discovery rate, FDRs, ferry harbour, FNR, hypothesis testing, multiple tests, Seattle, shrinkage estimation, Washington State on April 12, 2018 by xi'an

**T**his week, I happened to re-read John Storey’s 2003 “The positive false discovery rate: a Bayesian interpretation and the q-value”, because I wanted to check a connection with our testing by mixture [still in limbo] paper. I however failed to find what I was looking for, because I could not find any Bayesian flavour in the paper apart from an FDR expressed as a “posterior probability” of the null, in the sense that the setting was one of opposing two simple hypotheses. When there is an unknown parameter common to the multiple hypotheses being tested, a prior distribution on the parameter makes these multiple hypotheses connected. What makes the connection puzzling is the assumption that the observed statistics defining the significance region are *independent* (Theorem 1). And it seems to depend on the choice of the significance region, which should be induced by the Bayesian modelling, not the opposite. (This alternative explanation does not help either, maybe because it is on baseball… Or maybe because the sentence “If a player’s [posterior mean] is above .3, it’s more likely than not that their true average is as well” does not seem to follow naturally from a Bayesian formulation.) *[Disclaimer: I am not hinting at anything wrong or objectionable in Storey’s paper, just being puzzled by the Bayesian tag!]*
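The “posterior probability” reading can be checked numerically in a two-groups model (my toy version, not Storey’s actual example): with hypotheses drawn as Bernoulli, the pFDR of a fixed rejection region {x > c} coincides with P(H₀ | X ∈ {x > c}) by Bayes’ formula. All numerical values (π₀, the alternative mean, the cutoff) are arbitrary choices for the demo.

```python
import math
import numpy as np

def norm_sf(x):
    # standard normal survival function via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# two-groups model: H_i ~ Bernoulli(1 - pi0), X_i | H_i=0 ~ N(0,1), X_i | H_i=1 ~ N(mu1,1)
pi0, mu1, c = 0.9, 2.0, 1.5   # null proportion, alternative mean, rejection cutoff

# pFDR of the region {x > c} as a posterior probability, P(H=0 | X > c)
pfdr_exact = pi0 * norm_sf(c) / (pi0 * norm_sf(c) + (1 - pi0) * norm_sf(c - mu1))

# Monte Carlo check: proportion of true nulls among the rejected hypotheses
rng = np.random.default_rng(1)
m = 200_000
H = rng.random(m) < (1 - pi0)           # True means the alternative holds
X = rng.standard_normal(m) + mu1 * H
rejected = X > c
pfdr_mc = np.mean(~H[rejected])          # close to pfdr_exact
```

The frequentist quantity (long-run proportion of false discoveries among rejections) and the Bayesian one (posterior null probability given rejection) agree here precisely because the region is fixed and the statistics are independent, which is the assumption the post finds puzzling.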

## same risk, different estimators

Posted in Statistics with tags Annals of Statistics, complete statistics, hierarchical Bayesian modelling, Jim Berger, shrinkage estimation, William Strawderman on November 10, 2017 by xi'an

**A**n interesting question on X validated reminded me of the epiphany I had some twenty years ago when reading an Annals of Statistics paper by Anirban Das Gupta and Bill Strawderman on shrinkage estimators, namely that some estimators share the same risk function, meaning their integrated loss is the same for all values of the parameter. As indicated in this question, Stefan‘s instructor seems to believe that two estimators having the same risk function must be a.s. identical. This is not true, as exemplified by the James-Stein (1961) estimator with scale 2(p-2), which has constant risk p, just like the maximum likelihood estimator. I presume the confusion stemmed from the concept of *completeness*, where a function with constant expectation under all values of the parameter must be constant. But for loss functions the concept does not apply, since the loss depends both on the observation (which is complete in a Normal model) and on the parameter.
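The constant-risk claim is easy to verify by simulation: for X ~ N(θ, I_p), the James-Stein-type estimator (1 − c/‖x‖²)x with c = 2(p−2) has risk p at every θ, matching the MLE. A minimal Monte Carlo sketch (my own check, with arbitrary p and test points):

```python
import numpy as np

def js(x, c):
    # James-Stein-type estimator (1 - c / ||x||^2) x, applied row-wise
    return (1 - c / np.sum(x ** 2, axis=1, keepdims=True)) * x

def mc_risk(theta, c, reps=100_000, seed=0):
    # Monte Carlo estimate of the risk E||delta(X) - theta||^2 for X ~ N(theta, I_p)
    rng = np.random.default_rng(seed)
    X = theta + rng.standard_normal((reps, theta.size))
    return np.mean(np.sum((js(X, c) - theta) ** 2, axis=1))

p = 5
c = 2 * (p - 2)                         # the scale at which the risk is constant
risk_at_0 = mc_risk(np.zeros(p), c)     # about p = 5
risk_far = mc_risk(np.full(p, 3.0), c)  # also about p = 5
```

Both estimates hover around p = 5 whatever θ is, so this estimator and the MLE have identical risk functions while being obviously different estimators.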