**W**hile the classical definition of a *statistic* is that of a real-valued random variable or vector, less usual situations call for broader definitions… For instance, in a homework problem from Mark Schervish's Theory of Statistics, a sample from the uniform distribution on a ball of unknown centre θ and radius ς is associated with the convex hull of said sample as a "sufficient statistic", albeit the object being a set. Similarly, if the radius ς is known, the set made of the intersection of all the balls of radius ς centred at the observations is sufficient, in that the likelihood is constant for θ inside this set and zero outside. As discussed in this X validated question, this does not define an optimal estimator of the centre θ, while Pitman's best location equivariant estimator does; the centre of this sufficient set is another candidate, but it is not sufficient as a statistic and, even when unbiased, is not necessarily the MVUE.
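In the known-radius case, checking whether a candidate centre θ belongs to this sufficient set amounts to the condition that every observation lies within ς of θ. A minimal NumPy sketch, with made-up values for ς, θ, and the sample size:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0                         # known radius ς (arbitrary value)
theta_true = np.array([2.0, -1.0])  # unknown centre θ (arbitrary value)

# sample n points uniformly in the ball B(θ, ς): uniform directions
# (normalised Gaussians) times a radius with density proportional to r^{d-1}
n, d = 200, 2
u = rng.normal(size=(n, d))
u /= np.linalg.norm(u, axis=1, keepdims=True)
r = sigma * rng.uniform(size=(n, 1)) ** (1.0 / d)
x = theta_true + r * u

def in_sufficient_set(theta, x, sigma):
    """θ lies in the intersection of the balls B(x_i, ς) iff max_i ‖x_i − θ‖ ≤ ς."""
    return np.max(np.linalg.norm(x - theta, axis=1)) <= sigma

print(in_sufficient_set(theta_true, x, sigma))              # True: likelihood positive
print(in_sufficient_set(theta_true + 3 * sigma, x, sigma))  # False: likelihood zero
```

The likelihood of the sample is thus constant on the set where `in_sufficient_set` returns `True` and zero elsewhere, which is exactly the sufficiency statement above.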

## Archive for best unbiased estimator

## set-valued sufficient statistic

Posted in Books, Kids, Statistics with tags best unbiased estimator, cross validated, sufficiency, sufficient statistics, theory of statistics, UMVUE, uniform distribution on June 18, 2022 by xi'an

## Yikes! “AI can predict which criminals may break laws again better than humans”

Posted in Books, pictures, Statistics with tags best unbiased estimator, COMPAS, crime prediction, criminals, predictive justice, recidivism, RSS, Science on February 28, 2020 by xi'an

Science (the journal!) has this heading on its RSS feed page, which makes me wonder whether they have been paying any attention to the well-documented issues with AI-driven “justice”.

“…some research has given reason to doubt that algorithms are any better at predicting arrests than humans are.”

Among other issues, the study compared volunteers’ predictive abilities with those of COMPAS or LSI-R for predicting violent crime behaviour, based on the same covariates. Volunteers, not experts! And the algorithms are only correct 80% of the time, which is a terrible performance when someone’s time in jail depends on it!

“Since neither humans nor algorithms show amazing accuracy at predicting whether someone will commit a crime two years down the line, “should we be using [those forecasts] as a metric to determine whether somebody goes free?” Farid says. “My argument is no.””

## efficiency and the Fréchet-Darmois-Cramér-Rao bound

Posted in Books, Kids, Statistics with tags Académie des Sciences, best unbiased estimator, Canada, Canadian Journal of Statistics, Cramér-Rao lower bound, cross validated, efficiency, Fréchet-Darmois-Cramér-Rao bound, George Darmois, James-Stein estimator, mathematical statistics, Maurice Fréchet on February 4, 2019 by xi'an

**F**ollowing some entries on X validated, and after grading a mathematical statistics exam involving Cramér-Rao, or Fréchet-Darmois-Cramér-Rao to include both French contributors pictured above, I wonder as usual at the relevance of a concept of *efficiency* outside [and even inside] the restricted case of unbiased estimators. The general (frequentist) version is that the variance of an estimator δ of [any transform of] θ with bias b(θ) is bounded from below by

I(θ)⁻¹ (1+b'(θ))²

while a Bayesian version is the van Trees inequality, bounding the integrated squared error loss from below by

(E(I(θ))+I(π))⁻¹

where I(θ) and I(π) are the Fisher information of the model and of the prior, respectively. But this opens a whole can of worms, in my opinion, since

- establishing that a given estimator is efficient requires computing both the bias and the variance of that estimator, not an easy task when considering a Bayes estimator or even the James-Stein estimator. I actually do not know whether any of the estimators dominating the standard Normal mean estimator has been shown to be efficient (although there exist closed-form expressions for the quadratic risk of the James-Stein estimator, including one of mine that the Canadian Journal of Statistics published verbatim in 1988). Or is there a result that a Bayes estimator associated with the quadratic loss is by default efficient in either the first or the second sense?
- while the initial Fréchet-Darmois-Cramér-Rao bound is restricted to unbiased estimators (i.e., b(θ)≡0) and unable to produce efficient estimators in all settings but for the natural parameter of an exponential family, moving to the general case means there exists one efficiency notion for every bias function b(θ), which makes the notion quite weak, while not necessarily producing efficient estimators anyway, the major impediment to taking this notion seriously;
- moving from the variance to the squared error loss is not more “natural” than using any [other] convex combination of variance and squared bias, creating a whole new class of optimalities (a grocery of cans of worms!);
- I never got into the van Trees inequality so cannot say much, except that the comparison between various priors is delicate since the integrated risks are against different parameter measures.
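The first two points can be illustrated on a toy example: for a Normal mean, the shrinkage estimator c·x̄ exactly attains the bound associated with its own bias function, which is precisely why there is one (rather empty) efficiency notion per bias. A minimal NumPy sketch, with arbitrary values of θ, n, and the shrinkage factor c:

```python
import numpy as np

# Check of the biased Cramér-Rao bound Var(δ) ≥ (1 + b'(θ))²/I(θ) for the
# shrinkage estimator δ(x) = c·x̄ of a N(θ,1) mean: its bias is b(θ) = (c−1)θ,
# hence b'(θ) = c−1, and its variance c²/n coincides with the bound.
rng = np.random.default_rng(1)
theta, n, c = 2.0, 10, 0.8       # arbitrary values for the illustration
reps = 200_000

xbar = rng.normal(theta, 1.0, size=(reps, n)).mean(axis=1)
delta = c * xbar

var_hat = delta.var()            # Monte Carlo variance of δ
bound = (1 + (c - 1)) ** 2 / n   # Fisher information I(θ) = n for a N(θ,1) sample
print(var_hat, bound)            # both ≈ c²/n = 0.064
```

Every choice of c produces a different bias function and hence a different bound, each attained by the corresponding δ, which shows how little the generalised notion discriminates between estimators.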

## absurdly unbiased estimators

Posted in Books, Kids, Statistics with tags best unbiased estimator, completeness, conditioning, Erich Lehmann, sufficiency, The American Statistician, UMVUE, unbiased estimation on November 8, 2018 by xi'an

“…there are important classes of problems for which the mathematics forces the existence of such estimators.”

**R**ecently I came across a short paper written by Erich Lehmann for The American Statistician, Estimation with Inadequate Information. He analyses the apparent absurdity of using unbiased estimators or even best unbiased estimators in settings like a single Poisson P(λ) observation X producing the (unique) unbiased estimator of exp(-bλ) equal to (1-b)ˣ,

which is indeed absurd when b>1. My first reaction to this example is that the question of what is “best” for a single observation is not very meaningful and that pooling n independent Poisson observations replaces b with b/n, which eventually falls below one. But Lehmann argues that the paradox stems from a case of missing information, as for instance in the Poisson example where the above quantity is the probability that T=0, when T=X+Y and Y is another, unobserved, Poisson variable with parameter (b-1)λ. In many such cases, there is no unbiased estimator at all. When there is one, it must take values outside the (0,1) range, thanks to a lemma shown by Lehmann that the conditional expectation of this estimator given T is either zero or one.
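The absurdity is easy to exhibit by simulation: a minimal NumPy sketch (with arbitrary values of λ and b) confirming that (1-b)ˣ is unbiased for exp(-bλ) while wildly oscillating outside (0,1):

```python
import numpy as np

# Monte Carlo check that δ(X) = (1 − b)^X is unbiased for exp(−bλ) when
# X ~ P(λ): indeed E[(1−b)^X] = exp(λ[(1−b) − 1]) = exp(−bλ).
rng = np.random.default_rng(2)
lam, b = 1.5, 2.0                      # arbitrary values; b > 1 is the absurd case
x = rng.poisson(lam, size=1_000_000)
delta = (1.0 - b) ** x                 # here δ = (−1)^X, alternating between ±1

print(delta.mean(), np.exp(-b * lam))  # both ≈ exp(−3) ≈ 0.0498
```

With b=2 the estimator only ever returns −1 or +1, yet averages out to the target probability, a perfect instance of an unbiased but useless answer.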

I find the short paper quite interesting in exposing some reasons why the estimators cannot find enough information within the data (often a single point) to achieve an efficient estimation of the targeted function of the parameter, even though the setting may appear rather artificial.

## best unbiased estimator of θ² for a Poisson model

Posted in Books, Kids, pictures, Statistics, Travel, University life with tags Bahamas, best unbiased estimator, complete statistics, counterexample, cross validated, George Forsythe, John von Neumann, Monte Carlo Statistical Methods, Poisson distribution, Rao-Blackwell theorem, sailing, shark, simulation on May 23, 2018 by xi'an

**A** mostly traditional question on X validated about the “best” [minimum variance] unbiased estimator of θ² from a Poisson P(θ) sample leads to the Rao-Blackwell solution S(S-1)/n², where S denotes the sum of the n observations,

and a similar estimator could be constructed for θ³, θ⁴, … With the interesting limitation that this procedure stops at the power equal to the number of observations (minus one?). But, since the expectation of a power of the sufficient statistic S [with distribution P(nθ)] is a polynomial in θ, there is *de facto* no limitation. More interestingly, there is no unbiased estimator of negative powers of θ in this context, while this neat comparison on Wikipedia (borrowed from the great book of counter-examples by Romano and Siegel, 1986, selling for a mere $180 on amazon!) shows why looking for an unbiased estimator of exp(-2θ) is particularly foolish: the only solution is (-1) to the power S [for a single observation]. (There is however a first way to circumvent the difficulty, if having access to an arbitrary number of generations from the Poisson, since the Forsythe–von Neumann algorithm allows for an unbiased estimation of exp(-F(x)). And, as a second way, as remarked by Juho Kokkala below, a sample of at least two Poisson observations leads to a more coherent best unbiased estimator.)
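The unbiasedness of the Rao-Blackwell answer S(S-1)/n², with S the Poisson sufficient statistic, can be checked in a few lines; a NumPy sketch with arbitrary values of θ and n:

```python
import numpy as np

# Monte Carlo check that S(S−1)/n², with S = X₁+…+Xₙ the sufficient statistic
# of a Poisson P(θ) sample [so that S ~ P(nθ)], is unbiased for θ²:
# E[S(S−1)] = (nθ)² is the second factorial moment of a Poisson variable.
rng = np.random.default_rng(3)
theta, n = 1.2, 5                     # arbitrary values for the illustration
reps = 500_000

s = rng.poisson(n * theta, size=reps) # simulate the sufficient statistic directly
umvue = s * (s - 1) / n**2

print(umvue.mean(), theta**2)         # both ≈ 1.44
```

The same factorial-moment identity, E[S(S-1)⋯(S-k+1)] = (nθ)ᵏ, is what delivers the unbiased estimators of the higher powers θ³, θ⁴, … mentioned above.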