I obviously don’t think that! A prior is appropriate for a given circumstance. So it’s never true that all proper priors could be used for any problem.

Regarding admissibility, my understanding is that there are essentially two types of inadmissible priors: the howlingly inadmissible and the ones that are only just beaten.

I guess this view comes from my background in numerics. I just don’t trust computers with very big or very small numbers, so if the maths is dichotomizing estimators based on properties way out in the tails, then I find it hard to take the dichotomy seriously.

For instance, if you can beat my prior when |theta| > 10^10 but it performs better for more reasonable values, I’m not enormously concerned that the resulting estimator is inadmissible.
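To put rough numbers on this, here’s a minimal Python sketch (my own toy setup, not anything from the thread): a normal mean x ~ N(theta, 1), comparing the MLE with the posterior mean under a proper N(0, tau^2) prior. The comparison flips only once theta is well out in the tail.

```python
# Model: x ~ N(theta, 1). Two estimators of theta:
#   MLE:       theta_hat = x                (frequentist risk = 1 everywhere)
#   Shrinkage: theta_hat = x * tau2/(1+tau2), the posterior mean
#              under a proper N(0, tau2) prior.
def shrinkage_risk(theta, tau2):
    w = tau2 / (1.0 + tau2)                  # shrinkage factor
    # risk = variance + squared bias of w*x as an estimator of theta
    return w**2 * 1.0 + (1.0 - w)**2 * theta**2

tau2 = 100.0
for theta in [0.0, 5.0, 14.0, 30.0]:
    print(theta, shrinkage_risk(theta, tau2))
```

With tau^2 = 100 the shrinkage estimator beats the MLE’s constant risk of 1 for |theta| up to about sqrt(1 + 2*tau^2) ≈ 14 and loses beyond that; inflating tau^2 pushes the crossover further out, so the comparison is settled ever deeper in the tails.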

You know the maths of this much better than I do, but it was always my impression that the improper priors come from playing those sorts of games with infinity and infinitesimals.

For a lot of models, such as a Poisson GLM with a log-link, parameter values in [-20, 20] cover well beyond the reasonable range (if your data has a mean of 10^8, you should rescale it before doing things in a computer!), so arguments about infinity are not very meaningful. And priors that put non-trivial mass outside that interval (or that are defined as the limit of things that put non-trivial mass outside that interval) seem a-statistical.
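A two-line check of that [-20, 20] claim (my arithmetic, not from the comment):

```python
import math

# On the log-link scale, a linear predictor of 20 corresponds to a
# Poisson mean of exp(20), already far beyond any sensibly scaled data;
# -20 corresponds to a mean that is indistinguishable from zero.
print(math.exp(20))    # ~4.85e8
print(math.exp(-20))   # ~2.06e-9
```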

The problem (or potential problem) is that the set of sigma-finite measures is strictly bigger than the set of finite measures. So you might be able to “game” the scoring rule on the larger set (find a minimum that isn’t at the true distribution), or the rule may no longer be strictly proper (because there is an unnormalisable measure that has the same score as the true distribution).
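A hedged illustration of how the comparison set silently grows (my own one-dimensional sketch, using the Hyvärinen score H(x, p) = 2 (log p)''(x) + ((log p)'(x))^2): the improper “uniform on R” has constant log-density, so its score is exactly 0 at every point, a perfectly finite number for an unnormalisable measure.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)     # draws from the true N(0,1)

# Hyvarinen score of the true N(0,1):
#   log p = -x^2/2 + const, so H(x) = 2*(-1) + x^2 = -2 + x^2.
score_truth = np.mean(-2.0 + x**2)   # expected value is -1

# Improper "uniform on R": log p is constant, both derivatives vanish,
# so H = 0 at every point -- a finite score for an unnormalisable measure.
score_uniform = 0.0

print(score_truth, score_uniform)
```

Here the true N(0,1) still wins (expected score about -1 versus 0), but the point is that unnormalisable competitors now get finite scores at all; strict propriety over that larger class is exactly what would need proving.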

Thanks! Looking forward to your emails.

I still do not get the argument why a finite measure has more meaning than a sigma-finite measure. Limits are necessary “evils” in a topological world…

I don’t agree that, for complicated nonlinear functions of the prior, things that work when the prior is proper still work when the prior is at its improper limit. If that were true we could safely use gamma(epsilon,epsilon) priors.

I really don’t want to say it doesn’t work. I have literally no idea. But I’ve been burnt before. That’s why we have maths. And no one seems to have done it.
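As one concrete instance of getting burnt (my sketch, just the standard conjugate Poisson-Gamma arithmetic): under a Gamma(eps, eps) prior, the posterior after n observations summing to s is Gamma(eps + s, eps + n). The eps -> 0 limit is fine when s > 0 but collapses to a point mass at zero when s = 0.

```python
# Conjugate Poisson-Gamma: prior Gamma(eps, eps) on the rate lambda gives
# posterior Gamma(eps + s, eps + n) after n observations summing to s.
def posterior_moments(eps, s, n):
    a, b = eps + s, eps + n
    return a / b, a**0.5 / b            # posterior mean, posterior sd

n = 10
for eps in [1.0, 1e-4, 1e-8]:
    print("s=3:", posterior_moments(eps, 3, n))   # limit: sensible Gamma(3, 10)
    print("s=0:", posterior_moments(eps, 0, n))   # mean and sd both -> 0
```

So the limit of the proper-prior answers exists here, but for s = 0 it is a degenerate distribution asserting the rate is exactly zero — a certainty none of the proper posteriors along the way would endorse. (That collapse, not the arithmetic, is the worry.)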

If you look at their case study in linear models, you’ll see that they consider improper priors as limits of sequences of proper priors. Then, if you agree that the score has a meaning for any proper prior, and that its limit (as the prior becomes improper) is well-defined, this immediately “gives a meaning” to this score when using improper priors.

We have a similar toy example on page 2 of our paper, with a simple Normal-Normal conjugate model.

Or maybe it’s the idea of giving meaning through a limit argument that makes you uncomfortable? If so, then there’s not much to argue about; it’s a conceptual disagreement. I’m happy, e.g., with derivatives being defined as limits of finite differences, and integrals as limits of averages.
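For what it’s worth, the Normal-Normal version of that limit argument can be checked numerically; a sketch with my own toy numbers (x_i ~ N(theta, 1), prior theta ~ N(0, tau^2), scoring the one-step-ahead predictive for x_2; the Hyvärinen score of a N(m, v) density at x is -2/v + (x - m)^2 / v^2):

```python
# Hyvarinen score of a N(m, v) predictive density evaluated at x.
def hyvarinen_normal(x, m, v):
    return -2.0 / v + (x - m) ** 2 / v ** 2

x1, x2 = 0.7, 1.9                                # two observations (arbitrary)
for tau2 in [1.0, 1e2, 1e4, 1e8]:
    post_mean = tau2 * x1 / (1.0 + tau2)         # posterior mean after x1
    post_var = tau2 / (1.0 + tau2)               # posterior variance after x1
    pred_var = 1.0 + post_var                    # predictive variance for x2
    print(tau2, hyvarinen_normal(x2, post_mean, pred_var))

# Flat-prior (improper) limit: the predictive is N(x1, 2), so the score is
print(hyvarinen_normal(x2, x1, 2.0))             # ~ -0.64 for these numbers
```

The scores computed under increasingly diffuse proper priors converge to the flat-prior value, which is the sense in which the limit gives the improper-prior score a well-defined meaning in this conjugate case.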

All Dawid and Musio do is say that you can still compute it even if the distribution is improper. They don’t seem to talk about what that means. The theory in Hyvärinen’s paper (which deals with unknown normalisations, not unnormalisable densities) showing propriety of the score all assumes that the density is normalisable. It might still be proper against sigma-finite measures, but someone needs to prove it somewhere…
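The “unknown normalisation” part is easy to see concretely; a finite-difference sketch (mine, not from either paper):

```python
import math

# The Hyvarinen score depends on p only through d/dx log p and
# d^2/dx^2 log p, so multiplying p by any finite constant leaves the
# score unchanged -- that is the "unknown normalisation" setting.
def hyvarinen(logp, x, h=1e-4):
    d1 = (logp(x + h) - logp(x - h)) / (2 * h)
    d2 = (logp(x + h) - 2 * logp(x) + logp(x - h)) / h**2
    return 2 * d2 + d1**2

logp = lambda x: -0.5 * x**2                         # N(0,1) up to a constant
logp_scaled = lambda x: -0.5 * x**2 + math.log(7.3)  # same density times 7.3

x = 0.4
print(hyvarinen(logp, x), hyvarinen(logp_scaled, x))  # agree to numerical precision
```

Rescaling the density by a finite constant shifts log p by a constant and leaves both derivatives, hence the score, unchanged. The open question above is the different move of letting the normaliser be infinite, where these propriety arguments no longer apply as stated.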

This is well-explained in Dawid and Musio’s 2015 paper.
