## loss functions for credible regions

When Éric Marchand came to give a talk last week, we discussed minimality and Bayesian estimation for confidence/credible regions. In the early 1990s, George Casella and I wrote a paper in this direction, entitled “Distance weighted losses for testing and confidence set evaluation” and published in TEST. It was restricted to the univariate case, but one could consider evaluating α-level confidence regions with a loss function like

$L(\theta,C) = \left(\theta-\text{proj}_C(\theta)\right)^2$

where the projection of the parameter onto C is the element of C closest to the parameter. As in the original paper, this loss function penalises according to how far the parameter lies from the region, in contrast with the rudimentary 0–1 loss, which penalises all misses the same way. The posterior expected loss is not straightforward to minimise, though, unless one settles for an approximation based on a sample from the posterior: pick the (1-α)-fraction of the sample that gives the smallest sum of squared distances to the remaining α-fraction, and then take a convexification of that (1-α)-fraction. This is not particularly “clean” and I would prefer to find an HPD-like region, i.e., an HPD region linked to a modified prior… But this may require a loss function other than the one above. Incidentally, I was also playing with an alternative loss function that would avoid setting the level α. Namely

$L(\theta,C) = \left(\theta-\text{proj}_C(\theta)\right)^2 + \tau\, \text{diam}(C)^2,$

which simultaneously penalises non-coverage and size. However, the choice of τ makes the function difficult to motivate in a realistic setting.
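In the univariate case, the sample-based approximation sketched above is easy to try out: since the convexification of a contiguous block of order statistics is just an interval, one can scan all contiguous (1-α)-fractions of the sorted posterior sample and keep the one whose excluded points incur the smallest squared projection loss. The function name and implementation below are my own illustration of this heuristic, not code from the paper.

```python
import numpy as np

def credible_interval_sq_proj(sample, alpha=0.05):
    """Scan contiguous (1-alpha)-fractions of the sorted posterior sample;
    the convex hull of each fraction is an interval [a, b], and the Monte
    Carlo posterior loss is the sum of squared distances from the excluded
    points to that interval.  Return the interval with the smallest loss."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    k = int(np.ceil((1 - alpha) * n))     # number of points kept inside C
    best, best_loss = None, np.inf
    for i in range(n - k + 1):
        a, b = x[i], x[i + k - 1]         # convexification: interval [a, b]
        below = x[:i] - a                 # excluded points under the interval
        above = x[i + k:] - b             # excluded points over the interval
        loss = np.sum(below**2) + np.sum(above**2)
        if loss < best_loss:
            best, best_loss = (a, b), loss
    return best

rng = np.random.default_rng(0)
theta = rng.normal(size=10_000)           # stand-in posterior sample
# for a large N(0,1) sample this lands near the usual symmetric 95% interval
print(credible_interval_sq_proj(theta))
```

The scan is O(n²) at worst but trivial for the sample sizes above; by construction the returned interval contains exactly the required (1-α)-fraction of the sample.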
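For the α-free loss, the Monte Carlo posterior loss of an interval C = [a, b] is the average squared projection distance plus τ diam(C)², which is a convex function of (a, b) and can be handed to a generic optimiser. Again this is only a sketch under my own naming, using a standard Normal sample as a stand-in posterior; the choice of τ is exactly the open issue raised above.

```python
import numpy as np
from scipy.optimize import minimize

def combined_loss(bounds, x, tau):
    """Monte Carlo posterior loss of C = [a, b]: mean squared projection
    distance of the sample to the interval, plus tau * diam(C)**2."""
    a, b = bounds
    below = np.clip(a - x, 0.0, None)   # distance for points under a
    above = np.clip(x - b, 0.0, None)   # distance for points over b
    return np.mean(below**2 + above**2) + tau * (b - a)**2

def credible_interval_tau(sample, tau):
    """Minimise the combined loss over intervals, no level alpha needed."""
    x = np.asarray(sample, dtype=float)
    start = np.quantile(x, [0.25, 0.75])        # rough starting interval
    res = minimize(combined_loss, start, args=(x, tau), method="Nelder-Mead")
    a, b = res.x
    return min(a, b), max(a, b)

rng = np.random.default_rng(1)
theta = rng.normal(size=5_000)                  # stand-in posterior sample
print(credible_interval_tau(theta, tau=0.01))
print(credible_interval_tau(theta, tau=0.05))   # heavier size penalty
```

As expected, increasing τ shrinks the reported interval, which makes the ridge-penalty reading in the comment below quite natural: τ trades coverage against size exactly the way a shrinkage parameter trades fit against complexity.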

### One Response to “loss functions for credible regions”

1. Very interesting…

the τ looks like a “shrinkage parameter” in a ridge-type penalty.

by way of analogy, maybe you could use something like “cross-validation”…
