visualising bias and unbiasedness
A question on X validated led me to wonder at the point made by Christopher Bishop in his Pattern Recognition and Machine Learning book about the MLE of the Normal variance being biased. The point is illustrated by the above graph, which opposes the true (green) distribution of the data (made of two points) to the estimated (red) distribution. While it is true that the MLE under-estimates the variance on average, since its expectation is (n−1)σ²/n, the pictures are cartoonish caricatures in that the same downward deviation persists across all three replicas. When looking at 10⁵ replicas, rather than three, and at samples of size 10, rather than 2, the distinction between using the MLE (left) and the unbiased estimator of σ² (right) becomes hard to spot.
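For readers who want to reproduce the larger experiment, here is a minimal Python sketch (the post itself gives no code); it assumes μ=0 and σ²=1 for the true Normal, which is my choice, and simply compares the two variance estimators over the 10⁵ replicas of size-10 samples.

```python
# 10^5 replicas of Normal samples of size n=10, comparing the MLE of the
# variance with its unbiased version (assumed truth: mu=0, sigma^2=1).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
reps, n = 10**5, 10
samples = rng.normal(loc=0.0, scale=1.0, size=(reps, n))

mle = samples.var(axis=1, ddof=0)       # MLE: divide by n
unbiased = samples.var(axis=1, ddof=1)  # unbiased: divide by n-1

print("mean of the MLE      :", mle.mean())       # close to (n-1)/n = 0.9
print("mean of the unbiased :", unbiased.mean())  # close to 1

fig, axes = plt.subplots(1, 2, sharex=True, sharey=True)
axes[0].hist(mle, bins=100, density=True)
axes[0].set_title("MLE of $\\sigma^2$")
axes[1].hist(unbiased, bins=100, density=True)
axes[1].set_title("unbiased estimator of $\\sigma^2$")
plt.show()
```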
When looking more specifically at the case n=2, the humongous variability of the density estimate completely dwarfs the bias issue:
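A quick numerical check of that claim, again under an assumed σ²=1: for n=2 the bias of the variance MLE is −σ²/2, while its standard deviation is about 0.71σ², of the same order as the estimator itself.

```python
# Bias versus spread of the variance MLE for n=2 (assuming the true sigma^2 = 1):
# the standard deviation of the estimator is larger than the bias it carries.
import numpy as np

rng = np.random.default_rng(1)
mle2 = rng.normal(size=(10**5, 2)).var(axis=1, ddof=0)
print("bias of the MLE (n=2):", mle2.mean() - 1.0)  # close to -0.5
print("std of the MLE  (n=2):", mle2.std())         # close to 1/sqrt(2) ~ 0.71
```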
Even when averaging over all 10⁵ replications, the difference is hard to spot (and both averaged estimates are more dispersed than the truth!):
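The averaging can be sketched as follows, once more under the assumed μ=0 and σ²=1 truth, by averaging the plug-in Normal densities fitted to each two-point replica; the grid and the plug-in fit are my own choices, not taken from the post.

```python
# Average the Normal densities fitted to each of the 10^5 two-point samples,
# once with the MLE variance and once with the unbiased variance, then compare
# both averages with the true N(0,1) density (assumed truth, as above).
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
reps, n = 10**5, 2
samples = rng.normal(size=(reps, n))
means = samples.mean(axis=1)
sd_mle = np.sqrt(samples.var(axis=1, ddof=0))
sd_unb = np.sqrt(samples.var(axis=1, ddof=1))

x = np.linspace(-5, 5, 401)
avg_mle = np.zeros_like(x)
avg_unb = np.zeros_like(x)
# accumulate the fitted densities in chunks to keep memory use modest
for start in range(0, reps, 10_000):
    sl = slice(start, start + 10_000)
    avg_mle += norm.pdf(x[:, None], loc=means[sl], scale=sd_mle[sl]).sum(axis=1)
    avg_unb += norm.pdf(x[:, None], loc=means[sl], scale=sd_unb[sl]).sum(axis=1)
avg_mle /= reps
avg_unb /= reps

plt.plot(x, norm.pdf(x), label="truth")
plt.plot(x, avg_mle, label="average fit, MLE variance")
plt.plot(x, avg_unb, label="average fit, unbiased variance")
plt.legend()
plt.show()
```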
April 29, 2019 at 5:08 am
Unbiasedness is one of those properties that at first sight may seem intuitively appealing, but which can then turn itself into an obsession that is detached from any deep-rooted justification.
Frequentists would appear to be particularly guilty of this, but Bayesians, too, often behave in this obsessive way.
From what you have illustrated, we can clearly see that the case in favour of perhaps the best-known bias correction, that of the MLE of a Normal variance, is not strong. In other situations, some may be tempted to go to the extreme of achieving unbiasedness by moving their estimator further away from the truth on average (on an appropriate scale, of course).