a [counter]example of minimaxity

A chance question on X validated made me reconsider minimaxity over the weekend. Consider a Geometric G(p) variate X. What is the minimax estimator of p under squared error loss? I thought it could be obtained via (Beta) conjugate priors, but following Dyubin (1978) the minimax estimator corresponds to a prior with point masses at ¼ and 1, resulting in a constant estimator equal to ¾ everywhere, except when X=0 where it is equal to 1. The actual question used a penalised quadratic loss, dividing the squared error by p(1-p), which penalises errors at p=0,1 very strongly and hence suggested an estimator equal to 1 when X=0 and to 0 otherwise. This proves to be the (unique) minimax estimator, with constant risk equal to 1. It reminded me of the fantastic 1984 paper by George Casella and Bill Strawderman on the estimation of a bounded normal mean, where the least favourable prior is supported by two atoms if the bound is small enough. Figure 1 in the Negative Binomial extension by Morozov and Syrova (2022) exploits the same principle. (Nothing Orwellian there!) If nothing else, a nice illustration for my Bayesian decision theory course!
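The constant risk is straightforward to check, and here is a minimal numerical sketch in Python. It assumes the geometric variate counts failures before the first success, i.e. P(X=x)=p(1-p)^x for x=0,1,2,…, the only convention compatible with the X=0 case above; the estimator and the weighted loss are taken directly from the question.

```python
import numpy as np

# Check that d(X) = 1{X = 0} has constant risk 1 under the penalised
# loss (d - p)^2 / (p (1 - p)), assuming the geometric parameterisation
# P(X = x) = p (1 - p)^x for x = 0, 1, 2, ... (support starting at 0,
# so that the event X = 0 can occur).

def risk(p):
    # Exact risk: loss is (1 - p)^2 / (p (1 - p)) when X = 0 (prob. p)
    # and p^2 / (p (1 - p)) when X >= 1 (prob. 1 - p)
    return (p * (1 - p) ** 2 + (1 - p) * p ** 2) / (p * (1 - p))

print([risk(p) for p in (0.05, 0.25, 0.5, 0.9)])  # all 1 (up to rounding)

# Monte Carlo confirmation at an arbitrary p
rng = np.random.default_rng(1)
p = 0.3
x = rng.geometric(p, size=10**6) - 1           # numpy's geometric starts at 1
d = (x == 0).astype(float)                     # the candidate estimator
print(np.mean((d - p) ** 2) / (p * (1 - p)))   # close to 1
```

The algebra behind the first print is simply p(1-p)² + (1-p)p² = p(1-p), which the division by p(1-p) turns into a risk identically equal to one.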
