## On Congdon’s estimator

I got the following email from Bob:

> I’ve been looking at some methods for Bayesian model selection, and read your critique in Bayesian Analysis of Peter Congdon’s method. I was wondering if it could be fixed simply by including the prior densities of the pseudo-priors in the calculation of $P(M=k\mid y)$, i.e. simply removing the approximation in Congdon’s eqn. 3 so that the product over the parameters of the other models (i.e. $j\neq k$) is included in the calculation of $P(M=k\mid y, \theta^{(t)})$? This seems an easy fix, so I’m wondering why you didn’t suggest it.
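For context, in the Carlin and Chib (1995) setting the full conditional of the model indicator involves the pseudo-prior densities of all the other models’ parameters. A sketch of that weight (notation mine, not taken from Congdon’s paper):

$$P(M=k \mid y, \theta^{(t)}) \propto p\bigl(y \mid \theta_k^{(t)}, M=k\bigr)\, p\bigl(\theta_k^{(t)} \mid M=k\bigr)\, P(M=k)\, \prod_{j \ne k} p\bigl(\theta_j^{(t)} \mid M=k\bigr),$$

where the densities $p(\theta_j \mid M=k)$ for $j\neq k$ are the pseudo-priors, i.e. the product over the other models’ parameters that the email proposes to keep rather than approximate away.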

This relates to our Bayesian Analysis criticism of Peter Congdon’s approximation of posterior model probabilities. The difficulty with the estimator is that it uses simulations from the separate [model-based] posteriors when it should rely on simulations from the marginal [model-integrated] posterior (in order to satisfy an unbiasedness property). After a few email exchanges with Bob, I think I understand the fix he proposes correctly, i.e. that the “other model” parameters are simulated from the corresponding model-based posteriors, rather than being simulated jointly with the “current model” parameter from the joint posterior. However, the correct weight in Carlin and Chib’s approximation then involves the product of the [model-based] posteriors (including the normalisation constants) as “pseudo-priors”.

I also think that, even if the exact [model-based] posteriors were used, the fact that the weight involves a product over a large number of densities should induce an asymmetric behaviour. Indeed this product, while on average equal to one (or 1/M if M is the number of models), is more likely to take very small values than very large values (by a supermartingale argument)…
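This asymmetry is easy to see in a toy simulation (my own illustrative choice of densities, not Congdon’s actual pseudo-priors): take a product of density ratios $f(x_i)/g(x_i)$ with $x_i\sim g$, so each term, and hence the product, has expectation one; yet the log-product has negative drift by Jensen’s inequality, so the product is typically far below one.

```python
# Sketch: a product of unit-mean density ratios is usually much smaller
# than 1. Here f = N(0,1), g = N(0.5,1), and x is drawn from g, so each
# ratio f(x)/g(x) has expectation 1 under g, but E[log f(x)/g(x)] < 0.
import numpy as np

rng = np.random.default_rng(0)
n_terms, n_reps = 10, 100_000          # 10 factors per product, 100k replicates

x = rng.normal(0.5, 1.0, size=(n_reps, n_terms))
# log f(x) - log g(x) for standard normals with means 0 and 0.5:
log_ratio = -0.5 * x**2 + 0.5 * (x - 0.5) ** 2
w = np.exp(log_ratio.sum(axis=1))      # product of the 10 ratios

print("mean of product   ≈", w.mean())       # close to 1 (unbiasedness)
print("median of product ≈", np.median(w))   # well below 1
print("P(product < 1)    ≈", (w < 1).mean()) # clear majority below 1
```

With these choices the mean stays near one while the median sits around $\exp(-n\,\mathrm{KL}(g\|f))$, and the gap only widens as the number of factors grows, which is the asymmetric behaviour invoked above.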
