There was a poster by Timothy Wallstrom last night at the O-Bayes09 poster session about marginalisation paradoxes, and we had a nice chat about this topic. Marginalisation paradoxes are fascinating and I always mention them in my Bayesian class, because I think they illustrate the limits of how far one can interpret an improper prior. There is a substantial literature on how to “solve” marginalisation paradoxes, following Jaynes’ comments on the foundational paper of Dawid, Stone and Zidek (Journal of the Royal Statistical Society, 1973), but—and this is where I disagree with Timothy—I do not think they need to be “solved”, either by uncovering the group action underlying the problem (left Haar versus right Haar measure) or by using different proper prior sequences. For me, the core of the “paradox” is that writing an improper prior as $\pi(\theta,\zeta) = \pi_1(\theta) \pi_2(\zeta)$
does not imply that $\pi_2$ is the marginal prior on $\zeta$ when $\pi_1$ is improper. Interpreting $\pi_2$ as such is what creates the “paradox”, but there is no mathematical difficulty involved. Starting from the joint improper prior $\pi(\theta,\zeta)$ leads to an undefined posterior if we only consider the part of the observations that depends on $\zeta$, because $\theta$ does not integrate out. Defining improper priors as limits of proper priors—as Jaynes and Timothy Wallstrom do—can also be attempted from a mathematical point of view, but (a) I do not think a global resolution is possible this way, in that not all Bayesian procedures under the improper prior can be constructed as limits of the corresponding procedures under the proper prior sequence (think, e.g., of testing), and (b) it amounts to giving a probabilistic meaning to improper priors and thus runs back into the over-interpretation danger mentioned above. Hence a very nice poster discussion!
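For readers who have not met the construction, here is a minimal sketch of the generic structure of the paradox; the densities $f$ and the statistic $z$ are placeholders rather than a specific example from the Dawid, Stone and Zidek paper:

```latex
% Generic sketch of a marginalisation paradox (placeholder notation).
% Statistician A starts from the joint improper prior and marginalises:
\[
\pi_A(\zeta \mid x) \;\propto\;
  \int f(x \mid \theta, \zeta)\,\pi_1(\theta)\,\pi_2(\zeta)\,\mathrm{d}\theta .
\]
% In the paradoxical examples, this posterior happens to depend on x only
% through a statistic z = z(x) whose sampling distribution depends on
% zeta alone. Statistician B therefore argues for using z directly:
\[
\pi_B(\zeta \mid z) \;\propto\; f(z \mid \zeta)\,\pi_2(\zeta) ,
\]
% and finds that no choice of \pi_2 reproduces \pi_A — precisely because
% \pi_2 is not the marginal prior on zeta when \pi_1 is improper.
```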