## Bayes and unbiased

**S**iamak Noorbaloochi and Glen Meeden have arXived a note on some mathematical connections between Bayes estimators and unbiased estimators. To start with, I never completely understood the reasons for [desperately] seeking unbiasedness in estimators, given that most transforms of a parameter θ *do not allow* for unbiased estimators when based on a finite sample from a parametric family with such parameter θ (Lehmann, 1983). (This is not even a Bayesian objection!) The paper first seems to use *unbiasedness* in a generalised sense the authors introduced in 1983, a less intuitive notion since it depends on both loss and prior, and one that is not even guaranteed to exist, as it involves an infimum over a non-compact space. For the squared error loss adopted in this paper, however, it seems to reduce to the standard notion.
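As a reminder of why unbiased estimators so often fail to exist (a textbook illustration of Lehmann's point, not taken from the note): for a binomial observation, only polynomial transforms of θ are unbiasedly estimable.

```latex
% X ~ Bin(n, theta): for any estimator \delta,
\[
\mathbb{E}_\theta[\delta(X)]
  = \sum_{k=0}^{n} \delta(k)\,\binom{n}{k}\,\theta^{k}(1-\theta)^{n-k},
\]
% which is a polynomial in theta of degree at most n. Hence g(theta)
% admits an unbiased estimator if and only if g is itself such a
% polynomial; e.g. g(theta) = 1/theta, being unbounded near theta = 0,
% admits no unbiased estimator whatsoever.
```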

A first mathematical result therein is that the Bayes [i.e., posterior mean] and unbiasedness [i.e., sample mean] operators are adjoint in a Hilbert-space sense. But this does not seem much more than a consequence of Fubini’s theorem. The authors then proceed to the central (decomposition) result of the paper, namely that *every estimable function γ(θ) can be orthogonally decomposed into a function with an unbiased estimator plus a function whose Bayes estimator is zero*, and conversely that *every square-integrable estimator can be orthogonally decomposed into a Bayes estimator (of something) plus an unbiased estimator of zero*. This is a neat duality result, whose consequences I however fail to see, because the Bayes estimator is estimating *something else*. And, somewhere, somehow, I have some trouble with the notion of a function α whose Bayes estimator [i.e., posterior mean] is zero for *all values* of the sample, especially outside problems with finite observation and finite parameter spaces. For instance, if the sampling distribution belongs to an exponential family, the above property means that the Laplace transform of this function α is uniformly zero, hence that the function itself is uniformly zero.
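The adjointness claim can at least be checked numerically in a finite toy problem (my own sketch, not code from the note): with a finite parameter grid and a binomial likelihood, the expectation operator and the posterior-mean operator satisfy the duality identity exactly, both sides reducing by Fubini (here, exchanging a finite double sum) to the same quantity.

```python
import numpy as np
from math import comb

# Toy finite setting (illustrative only, not the note's general Hilbert setup):
# X ~ Binomial(n, theta), with theta restricted to a finite grid under prior pi.
n = 5
thetas = np.array([0.2, 0.5, 0.8])          # finite parameter space
prior = np.array([0.3, 0.4, 0.3])           # prior pi(theta)
xs = np.arange(n + 1)
F = np.array([[comb(n, x) * t**x * (1 - t)**(n - x) for x in xs]
              for t in thetas])             # likelihood matrix f(x | theta)
m = prior @ F                               # marginal density of X

def expectation_op(delta):
    # "unbiasedness" operator: delta(x) -> E_theta[delta(X)]
    return F @ delta

def posterior_mean_op(gamma):
    # Bayes operator: gamma(theta) -> E[gamma(theta) | X = x]
    return (gamma * prior) @ F / m

rng = np.random.default_rng(0)
delta = rng.standard_normal(n + 1)          # arbitrary estimator delta(x)
gamma = rng.standard_normal(len(thetas))    # arbitrary transform gamma(theta)

# Adjointness: <T delta, gamma> under pi equals <delta, T* gamma> under m,
# both being the double sum of pi(theta) f(x|theta) delta(x) gamma(theta).
lhs = np.sum(prior * expectation_op(delta) * gamma)
rhs = np.sum(m * delta * posterior_mean_op(gamma))
print(np.allclose(lhs, rhs))                # prints True
```

The decomposition quoted above is then the usual orthogonal split of a Hilbert space into the closed range of an operator and the kernel of its adjoint, applied to these two operators.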

December 21, 2015 at 2:34 pm

Unbiasedness has, beyond its confusing frequentist meaning, a practical meaning for animal breeders practicing selection.

If you select bulls to get more milk, you are interested in fairly comparing “old” bulls (with more information) and “young” bulls (with less information, and more selected, hence likely better than the old ones). In particular, you do not want to systematically over- or under-estimate the value of these young animals, among other reasons because a bad choice can cost you a fortune.

If you can make this “fair comparison”, you have some guarantee that the selection will not fail (much). Because these selection decisions are taken all the time, a frequentist interpretation comes easily (“I want a procedure that, across years, guarantees me unbiasedness…”).

This is basically what CR Henderson had in mind when he enthusiastically supported unbiasedness (e.g., in his famous 1973 paper) as a desirable property for BLUP genetic evaluation. Not that it is the only desirable property, but he thought it was important.

December 21, 2015 at 1:45 am

Nobody could tell me why one should use unbiased estimators:

Does any optimality property hold for them?