Archive for Erich Lehmann

Colin Blyth (1922-2019)
Posted in Books, pictures, Statistics, University life with tags bagpipes, C.R. Rao, calligraphy, Canada, Colin Blyth, decision theory, discussion paper, Erich Lehmann, IMS Bulletin, JASA, La Trobe University, Lucien Le Cam, Melbourne, obituary, Ontario, Pitman nearness, Simpson's paradox, transitivity on March 19, 2020 by xi'an

While reading the IMS Bulletin (of March 2020), I found out that the Canadian statistician Colin Blyth had died last summer. While we had never met in person, I remember his very distinctive and elegant handwriting in a few letters he sent me, including the one above, which I have kept (along with a handwritten letter from Lucien Le Cam!). It contains suggestions about revising our paper Is Pitman nearness a reasonable criterion?, written with Gene Hwang and William Strawderman, which took three years to publish as it was deemed somewhat controversial. It eventually appeared in JASA with discussions from Malay Ghosh, John Keating and Pranab K. Sen, Shyamal Das Peddada, C. R. Rao, George Casella and Martin T. Wells, and Colin R. Blyth (with much stronger wording than in the above letter, like "What can be said but 'It isn't I, it's you that are crazy'?"). While I had used some of his admissibility results, including the admissibility of the Normal sample average in dimension one, e.g. in my book, I had not realised at the time that Blyth was (a) the first student of Erich Lehmann, (b) the originator of [the name] Simpson's paradox, (c) the scribe for Lehmann's notes that would eventually lead to Testing Statistical Hypotheses and Theory of Point Estimation, later revised with George Casella, and (d) a keen bagpipe player and scholar.

Bertrand-Borel debate
Posted in Books, Statistics with tags Bayes factor, Bayesian hypothesis testing, Bayesian model selection, Bertrand's paradox, conditioning, Deborah Mayo, Emile Borel, Erich Lehmann, Joseph Bertrand, Le Hasard, Pierre Simon Laplace, Pleiades, posterior probability, uniformly most powerful tests on May 6, 2019 by xi'an

On her blog, Deborah Mayo briefly mentioned the Bertrand-Borel debate on the (in)feasibility of hypothesis testing, as reported [and translated] by Erich Lehmann. A first interesting feature is that both mathematicians [both starting with a B!] discuss the probability of causes in the Bayesian spirit of Laplace, with Bertrand considering that the prior probabilities of the different causes are impossible to set, and then moving all the way to dismissing the use of probability theory in this setting, nipping the p-values in the bud..! Borel, as stressed by Lehmann, remains rather vague about the solution probability theory has to provide.
“The Pleiades appear closer to each other than one would naturally expect. This statement deserves thinking about; but when one wants to translate the phenomenon into numbers, the necessary ingredients are lacking. In order to make the vague idea of closeness more precise, should we look for the smallest circle that contains the group? the largest of the angular distances? the sum of squares of all the distances? the area of the spherical polygon of which some of the stars are the vertices and which contains the others in its interior? Each of these quantities is smaller for the group of the Pleiades than seems plausible. Which of them should provide the measure of implausibility? If three of the stars form an equilateral triangle, do we have to add this circumstance, which is certainly very unlikely a priori, to those that point to a cause?” Joseph Bertrand (p.166)
“But whatever objection one can raise from a logical point of view cannot prevent the preceding question from arising in many situations: the theory of probability cannot refuse to examine it and to give an answer; the precision of the response will naturally be limited by the lack of precision in the question; but to refuse to answer under the pretext that the answer cannot be absolutely precise, is to place oneself on purely abstract grounds and to misunderstand the essential nature of the application of mathematics.” Emile Borel (Chapter 4)
Another highly interesting objection of Bertrand's is somewhat linked with his conditioning paradox, namely that the density of the observed unlikely event depends on the choice of the statistic used to calibrate its unlikeliness. This makes complete sense in that the information contained in each of these statistics, and the resulting probability or likelihood, differ to an arbitrary extent, that there are few cases (monotone likelihood ratio) where the choice can be settled, and that Bayes factors share the same drawback if they do not condition upon the entire sample, in which case there is no selection of “circonstances remarquables” [remarkable circumstances], nor of uniformly most powerful tests.
absurdly unbiased estimators
Posted in Books, Kids, Statistics with tags best unbiased estimator, completeness, conditioning, Erich Lehmann, sufficiency, The American Statistician, UMVUE, unbiased estimation on November 8, 2018 by xi'an

“…there are important classes of problems for which the mathematics forces the existence of such estimators.”
Recently I came across a short paper written by Erich Lehmann for The American Statistician, Estimation with Inadequate Information. He analyses the apparent absurdity of using unbiased estimators, or even best unbiased estimators, in settings like that of a Poisson P(λ) observation X, producing the (unique) unbiased estimator of exp(-bλ) equal to

δ(X) = (1-b)^X,
which is indeed absurd when b>1. My first reaction to this example is that the question of what is “best” for a single observation is not very meaningful, and that adding n independent Poisson observations replaces b with b/n, which eventually gets below one. But Lehmann argues that the paradox stems from a case of missing information, as for instance in the Poisson example where the above quantity is the probability P(T=0) that T=0, for T=X+Y, with Y another, unobserved, Poisson variate with parameter (b-1)λ. In many such cases there is no unbiased estimator at all and, when there is one, it must take values outside the (0,1) range, as a consequence of a lemma shown by Lehmann, namely that the conditional expectation of this estimator given T is either zero or one.
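To make the absurdity concrete, here is a minimal Monte Carlo sketch (my own Python code, not part of Lehmann's paper, with the arbitrary choices λ=1 and b=2): the unique unbiased estimator (1-b)^X of exp(-bλ) then only takes the values ±1, while still averaging to the right probability.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, b, n_sim = 1.0, 2.0, 1_000_000       # arbitrary illustration values, with b > 1

# X ~ Poisson(lam); delta(X) = (1 - b)**X is the unique unbiased estimator of exp(-b*lam)
x = rng.poisson(lam, n_sim)
delta = (1.0 - b) ** x                    # here b = 2, so delta = (-1)**X, i.e. -1 or +1

print("target exp(-b*lam):       ", np.exp(-b * lam))
print("Monte Carlo mean of delta:", delta.mean())      # matches the target up to simulation noise
print("values taken by delta:    ", np.unique(delta))  # only -1 and +1, absurd for a probability
```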
I find this short paper quite interesting, in that it exposes some reasons why estimators cannot extract enough information from the data (often a single point) to achieve an efficient estimation of the targeted function of the parameter, even though the setting may appear rather artificial.
best unbiased estimators
Posted in Books, Kids, pictures, Statistics, University life with tags best unbiased estimator, complete statistics, cross validated, Erich Lehmann, Lehmann-Scheffé theorem, mathematical statistics, maximum likelihood estimation, Pitman best equivariant estimator, Rao-Blackwell theorem, Sankhya, sufficiency, Theory of Point Estimation, UMVUE on January 18, 2018 by xi'an

A question that came up on X validated today kept me busy for most of the day! It relates to an earlier question on the best unbiased nature of a maximum likelihood estimator, for which I had pointed out the simple case of the Normal variance, where the maximum likelihood estimate is not unbiased (but improves the mean square error). Here, the question is whether or not the maximum likelihood estimator of a location parameter, once corrected for its bias, is the best unbiased estimator (in the sense of minimal variance). The question is quite interesting in that it links to the mathematical statistics of the 1950's, of Charles Stein, Erich Lehmann, Henry Scheffé, and Debabrata Basu. For instance, if there exists a complete sufficient statistic for the problem, then there exists a best unbiased estimator of the location parameter, by virtue of the Lehmann-Scheffé theorem (it is also a consequence of Basu's theorem). And the existence is pretty limited in that, outside the two exponential families with location parameter, there is no other distribution meeting this condition, I believe. However, even if there is no complete sufficient statistic, there may still exist best unbiased estimators, as shown by . But Lehmann and Scheffé, in their magisterial 1950 Sankhya paper, exhibit a counter-example, namely the U(θ-1,θ+1) distribution, since no non-constant function of θ allows for a best unbiased estimator.
Looking in particular at the location parameter of a Cauchy distribution, I realised that the Pitman best equivariant estimator is unbiased as well [as for all location problems] and hence dominates the (equivariant) maximum likelihood estimator, which is unbiased in this symmetric case. However, as detailed in a nice paper by Gabriela Freue on this problem, I further discovered that there is no uniformly minimum variance estimator and no uniformly minimum variance unbiased estimator! (And that the Pitman estimator enjoys a closed-form expression, as opposed to the maximum likelihood estimator.) This sounds a bit paradoxical but simply means that there exist different unbiased estimators whose variance functions are not ordered, and hence not comparable, either between themselves or with the variance of the Pitman estimator.
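As a numerical aside (my own Python sketch, not from Freue's paper; the sample size, seed, and integration grid are arbitrary choices), the Pitman estimator can be computed as the posterior mean of the location under a flat prior, here by brute-force quadrature rather than through its closed-form expression, and set against a grid approximation of the maximum likelihood estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
theta0, n = 0.0, 11                        # arbitrary true location and sample size
x = theta0 + rng.standard_cauchy(n)

# grid of candidate location values (bounds are arbitrary but wide enough)
grid = np.linspace(x.min() - 20, x.max() + 20, 40001)

# Cauchy location log-likelihood, one value per grid point (constants dropped)
loglik = -np.log1p((x[:, None] - grid) ** 2).sum(axis=0)

# Pitman (minimum risk equivariant) estimator = posterior mean under a flat prior,
# approximated here by quadrature on the grid instead of the closed-form expression
w = np.exp(loglik - loglik.max())
pitman = np.sum(grid * w) / np.sum(w)

# maximum likelihood estimator, approximated by the grid argmax
mle = grid[np.argmax(loglik)]

print("Pitman estimate:  ", pitman)
print("MLE (grid search):", mle)
```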
which parameters are U-estimable?
Posted in Books, Kids, Statistics, University life with tags cross validated, epiphany, Erich Lehmann, George Casella, mathematical statistics, Theory of Point Estimation, U-estimability, unbiased estimation on January 13, 2015 by xi'an

Today (01/06) was a double epiphany, in that I realised that one of my long-time beliefs about unbiased estimators did not hold. Indeed, when checking on Cross Validated, I found this question: For which distributions is there a closed-form unbiased estimator for the standard deviation? And the presentation includes the normal case, for which there indeed exists an unbiased estimator of σ, namely

Γ((n-1)/2) / [√2 Γ(n/2)] × √( Σ (x_i - x̄)² ),
which derives directly from the chi-square distribution of the sum of squares divided by σ². When thinking further about it, if only a posteriori!, it is now fairly obvious, given that σ is a scale parameter. Better, any power of σ can be similarly estimated in an unbiased manner, since

E[ ( Σ (x_i - x̄)² )^{α/2} ] = 2^{α/2} Γ((n-1+α)/2) / Γ((n-1)/2) × σ^α.
And this property extends to all location-scale models.
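As a quick sanity check (my own Python sketch, with arbitrary values of μ, σ, n, and of the number of replications), the above closed-form corrections can be verified by simulation for a few powers of σ:

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(2)
mu, sigma, n, n_sim = 1.0, 2.0, 5, 200_000     # arbitrary illustration values

x = rng.normal(mu, sigma, size=(n_sim, n))
s = np.sqrt(np.sum((x - x.mean(axis=1, keepdims=True)) ** 2, axis=1))

def unbiased_sigma_power(s, n, alpha):
    """Unbiased estimator of sigma**alpha based on s = sqrt(sum (x_i - xbar)^2),
    using s/sigma ~ sqrt(chi^2_{n-1}) and E[(chi^2_k)^{a/2}] = 2^{a/2} Gamma((k+a)/2)/Gamma(k/2)."""
    log_c = gammaln((n - 1) / 2) - gammaln((n - 1 + alpha) / 2) - (alpha / 2) * np.log(2)
    return s ** alpha * np.exp(log_c)

for alpha in (1, 2, 3):
    est = unbiased_sigma_power(s, n, alpha)
    print(f"sigma^{alpha}: target {sigma ** alpha:.4f}, Monte Carlo mean {est.mean():.4f}")
```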
So how on Earth was I so convinced that there was no unbiased estimator of σ?! I think it stems from reading too quickly a result in, I think, Lehmann and Casella, a result due to Peter Bickel and Erich Lehmann stating that, for a convex family of distributions F, there exists an unbiased estimator of a functional q(F) (for a sample size n large enough) if and only if q(αF+(1-α)G) is a polynomial in 0≤α≤1. Because of this, I had the [wrong!] impression that only polynomials of the natural parameters of exponential families could be estimated by unbiased estimators… Note that Bickel's and Lehmann's theorem does not apply to the problem here because the collection of Gaussian distributions is not convex (a mixture of Gaussians is not a Gaussian).
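For a toy illustration of the Bickel-Lehmann criterion (my own sketch, not in the original post), consider the family of Bernoulli distributions, which is convex since a two-component mixture of Bernoullis is again a Bernoulli: the functional p² turns the mixture weight into a degree-two polynomial and is thus U-estimable (e.g., by X₁X₂ for n≥2), whereas 1/p does not and is therefore not U-estimable for any n.

```python
import sympy as sp

alpha, p, q = sp.symbols("alpha p q", positive=True)

# the mixture alpha*Ber(p) + (1-alpha)*Ber(q) is the Bernoulli with success probability:
mix = alpha * p + (1 - alpha) * q

# Bickel-Lehmann criterion: q(F) is U-estimable for some n iff q(alpha F + (1-alpha) G)
# is a polynomial in alpha
print((mix ** 2).is_polynomial(alpha))   # True  -> p^2 is U-estimable (X1*X2 works for n = 2)
print((1 / mix).is_polynomial(alpha))    # False -> 1/p is not U-estimable for any n
```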
This leaves open the question as to which transforms of the parameter(s) are unbiasedly estimable (or U-estimable) for a given parametric family, like the normal N(μ,σ²). I checked in Lehmann's first edition earlier today and could not find an answer, besides the definition of U-estimability. Not only is the question interesting per se, but the answer could correct my long-standing impression that unbiasedness is a rare event, i.e., that the collection of transforms of the model parameter that are U-estimable is a very small subset of the whole collection of transforms.