## loss functions for credible regions

When Éric Marchand came to give a talk last week, we discussed minimality and Bayesian estimation for confidence/credible regions. In the early 1990s, George Casella and I wrote a paper in this direction, entitled “*Distance weighted losses for testing and confidence set evaluation*” and published in TEST. It was restricted to the univariate case, but one could consider evaluating *α*-level confidence regions with a loss function like

$$\mathrm{L}(\theta,C)=\big\|\theta-\operatorname{proj}_C(\theta)\big\|$$

where the projection of the parameter over *C* is the element of *C* closest to the parameter. As in the original paper, this loss function penalises according to how far the parameter lies from the region, compared with the rudimentary 0–1 loss function, which penalises all misses the same way. The posterior loss is not straightforward to minimise, though, unless one considers an approximation based on a sample from the posterior: pick the *(1-α)*-fraction of the sample that gives the smallest sum of distances to the remaining *α*-fraction, then take a convexification of that *(1-α)*-fraction. This is not particularly “clean” and I would prefer to find an HPD-like region, i.e. an HPD region linked to a modified prior… But this may require another loss function than the one above. Incidentally, I was also playing with an alternative loss function that would avoid setting the level *α*. Namely

$$\mathrm{L}(\theta,C)=\big\|\theta-\operatorname{proj}_C(\theta)\big\|+\tau\,\operatorname{vol}(C)$$

which simultaneously penalises non-coverage and size. However, the choice of *τ* makes the function difficult to motivate in a realistic setting.
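The sample-based approximation described above can be sketched in code. This is only a rough illustration, not the method of the paper: exact minimisation over all *(1-α)*-subsets is combinatorial, so the sketch below uses a crude greedy heuristic (keeping the draws closest to the posterior mean) and then evaluates the empirical loss as the sum of distances from the excluded draws to their nearest retained point. The function names and the choice of centring rule are mine, for illustration only.

```python
import numpy as np

def credible_region_loss(kept, excluded):
    """Empirical analogue of the distance-based loss: sum of distances
    from each excluded draw to its projection (nearest point) on the
    retained set."""
    # pairwise distances, shape (n_excluded, n_kept)
    d = np.linalg.norm(excluded[:, None, :] - kept[None, :, :], axis=-1)
    return d.min(axis=1).sum()

def approximate_region(draws, alpha=0.05):
    """Greedy heuristic (not the exact combinatorial minimisation):
    keep the (1-alpha) fraction of posterior draws closest to the
    posterior mean, as a rough stand-in for the best subset."""
    center = draws.mean(axis=0)
    dist = np.linalg.norm(draws - center, axis=1)
    k = int(np.ceil((1 - alpha) * len(draws)))
    order = np.argsort(dist)
    return draws[order[:k]], draws[order[k:]]  # kept, excluded

# toy illustration on simulated "posterior" draws
rng = np.random.default_rng(0)
draws = rng.normal(size=(1000, 2))
kept, excluded = approximate_region(draws, alpha=0.1)
loss = credible_region_loss(kept, excluded)
```

The convexification step mentioned above could then be applied to the retained draws, e.g. via `scipy.spatial.ConvexHull`, at the cost of enlarging the region beyond the kept fraction.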

March 15, 2012 at 1:35 am

very interesting..

the *τ* looks like a “shrinkage parameter” in a ridge-type penalty.

by way of analogy, maybe you could use something like “cross-validation”…