**T**oday *(01/06)* was a double epiphany in that I realised that one of my long-time beliefs about unbiased estimators did not hold. Indeed, when checking on Cross Validated, I found this question: *For which distributions is there a closed-form unbiased estimator for the standard deviation?* And the answers include the normal case, for which there indeed exists an unbiased estimator of σ, namely
$$\hat\sigma=\frac{\Gamma\left(\frac{n-1}{2}\right)}{\sqrt{2}\,\Gamma\left(\frac{n}{2}\right)}\,\left\{\sum_{i=1}^n (X_i-\bar{X})^2\right\}^{1/2}$$
which derives directly from the chi-square distribution of the sum of squares divided by σ². When thinking further about it, it is now fairly obvious, if only *a posteriori*!, given that σ is a *scale* parameter. Better, any power of σ can be similarly estimated in an unbiased manner, since
$$\mathbb{E}\left[\left\{\sum_{i=1}^n (X_i-\bar{X})^2\right\}^{\alpha/2}\right]=\frac{2^{\alpha/2}\,\Gamma\left(\frac{n-1+\alpha}{2}\right)}{\Gamma\left(\frac{n-1}{2}\right)}\,\sigma^\alpha$$
And this property extends to all location-scale models.
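The Gamma-function corrections above can be checked by simulation. Here is a quick Monte Carlo sketch of my own (not part of the original derivation), verifying that both the estimator of σ and its α-power variant are unbiased in the normal case:

```python
# Monte Carlo check that the Gamma-corrected estimators of sigma and
# sigma^alpha are unbiased for normal samples (sketch, sample values chosen
# arbitrarily for illustration).
import math
import numpy as np

rng = np.random.default_rng(42)
n, sigma, alpha = 10, 2.0, 3.0
reps = 200_000

x = rng.normal(loc=1.0, scale=sigma, size=(reps, n))
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)  # sums of squares

# unbiased estimator of sigma
c = math.gamma((n - 1) / 2) / (math.sqrt(2) * math.gamma(n / 2))
sigma_hat = c * np.sqrt(ss)

# unbiased estimator of sigma**alpha
c_a = math.gamma((n - 1) / 2) / (2 ** (alpha / 2) * math.gamma((n - 1 + alpha) / 2))
sigma_a_hat = c_a * ss ** (alpha / 2)

print(sigma_hat.mean())    # close to sigma = 2.0
print(sigma_a_hat.mean())  # close to sigma**3 = 8.0
```

Both averages settle on the true values of σ and σ³, as the chi-square moments predict.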

So how on Earth was I so convinced that there was no unbiased estimator of σ?! I think it stems from reading too quickly a result in, I think, Lehmann and Casella, due to Peter Bickel and Erich Lehmann, which states that, for a convex family of distributions *F*, there exists an unbiased estimator of a functional *q(F)* (for a sample size *n* large enough) if and only if *q(αF+(1-α)G)* is a polynomial in *0≤α≤1*. Because of this, I had this impression that no unbiased estimator of σ could exist, overlooking that the result only applies to convex families, which a parametric family like the normal one is not.

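Spelling out this step (my own reconstruction, not in the original source): the variance of a mixture is exactly quadratic in α,

```latex
\operatorname{var}\{\alpha F+(1-\alpha)G\}
  = \alpha\,\sigma_F^2+(1-\alpha)\,\sigma_G^2
    +\alpha(1-\alpha)\,(\mu_F-\mu_G)^2,
```

so the standard deviation is the square root of a quadratic in α and hence not a polynomial: over a *convex* family, the Bickel–Lehmann criterion does rule out an unbiased estimator of σ. But a mixture of two normals is not normal, so the normal family is not convex and the criterion says nothing against the estimator above.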
This leaves open the question as to which transforms of the parameter(s) are unbiasedly estimable (or U-estimable) for a given parametric family, like the normal N(μ,σ²). I checked in Lehmann's first edition earlier today and could not find an answer, besides the definition of U-estimability. Not only is the question interesting *per se*, but the answer could correct my long-standing impression that unbiasedness is a rare event, i.e., that the collection of transforms of the model parameter that are U-estimable is a very small subset of the whole collection of transforms.
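As a small illustration of U-estimability in the N(μ,σ²) model (again a sketch of my own, not an answer to the open question), transforms beyond σ itself admit simple unbiased estimators: μ² via the identity E[X̄²]=μ²+σ²/n, and the product μσ by exploiting the independence of X̄ and the sum of squares:

```python
# Unbiased estimators of mu^2 and mu*sigma in the normal model (sketch,
# parameter values chosen arbitrarily for illustration).
import math
import numpy as np

rng = np.random.default_rng(0)
n, mu, sigma = 10, 1.5, 2.0
reps = 400_000

x = rng.normal(mu, sigma, size=(reps, n))
xbar = x.mean(axis=1)
ss = ((x - xbar[:, None]) ** 2).sum(axis=1)
s2 = ss / (n - 1)                       # usual unbiased estimator of sigma^2

mu2_hat = xbar**2 - s2 / n              # unbiased for mu^2
c = math.gamma((n - 1) / 2) / (math.sqrt(2) * math.gamma(n / 2))
musigma_hat = xbar * c * np.sqrt(ss)    # unbiased for mu*sigma, by independence

print(mu2_hat.mean())      # close to mu^2 = 2.25
print(musigma_hat.mean())  # close to mu*sigma = 3.0
```

Such product and polynomial transforms are thus U-estimable, while the general characterisation for a given family remains the open question above.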