are pseudopriors required in Bayesian model selection?
An interesting question from X validated about constructing pseudo-priors for Bayesian model selection. Namely, are they of any use for the concept itself, rather than for its implementation? The only case where I am aware of pseudo-priors being used is in Bayesian MCMC algorithms such as Carlin and Chib (1995), where the distributions are used to complement the posterior distribution conditional on a single model (index) into a joint distribution across all model parameters. The trick of this construction is that the pseudo-priors can be essentially anything, including distributions depending on the data. And while they impact the ability of the resulting Markov chain to move between model spaces, they have no say in the resulting inference, either when choosing a model or when estimating the parameters of a chosen model. The concept of pseudo-priors was also central to the mis-interpretations found in Congdon (2006) and Scott (2002), which we reanalysed with Jean-Michel Marin in Bayesian Analysis (2008) through the distinction between model-based posteriors and joint pseudo-posteriors.
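To make the Carlin and Chib (1995) construction concrete, here is a minimal sketch, entirely of my own making and not taken from the original paper, comparing a Normal mean model against a point-null model. The toy data, the prior variance tau2, and the choice of a data-dependent Normal pseudo-prior for the mean are all illustrative assumptions; the point is that the pseudo-prior only enters the update of the model indicator and the between-model moves, not the resulting model posterior.

import numpy as np

rng = np.random.default_rng(0)

# hypothetical toy data: n unit-variance observations (true mean 0.4)
n = 50
y = rng.normal(0.4, 1.0, size=n)

# Model 1: y_i ~ N(theta, 1), prior theta ~ N(0, tau2)
# Model 2: y_i ~ N(0, 1), no free parameter
tau2 = 100.0
log_prior_M = np.log([0.5, 0.5])          # prior model probabilities

# conjugate conditional posterior of theta under model 1
post_var = 1.0 / (n + 1.0 / tau2)
post_mean = post_var * y.sum()

# pseudo-prior for theta while model 2 is current: a data-dependent
# Normal roughly matching the model-1 posterior (any proper choice works)
pseudo_mean, pseudo_var = y.mean(), 1.0 / n

def log_norm(x, m, v):
    return -0.5 * (np.log(2.0 * np.pi * v) + (x - m) ** 2 / v)

def log_lik(theta, model):
    mu = theta if model == 1 else 0.0
    return -0.5 * np.sum((y - mu) ** 2)

T, burn = 20_000, 1_000
theta, model = 0.0, 1
models = np.empty(T, dtype=int)

for t in range(T):
    # 1. update theta: true conditional posterior under model 1,
    #    a pseudo-prior draw while model 2 is the current model
    if model == 1:
        theta = rng.normal(post_mean, np.sqrt(post_var))
    else:
        theta = rng.normal(pseudo_mean, np.sqrt(pseudo_var))
    # 2. update the model indicator from its discrete full conditional;
    #    the pseudo-prior density enters the weight of model 2 only
    lw1 = log_lik(theta, 1) + log_norm(theta, 0.0, tau2) + log_prior_M[0]
    lw2 = log_lik(theta, 2) + log_norm(theta, pseudo_mean, pseudo_var) + log_prior_M[1]
    p1 = 1.0 / (1.0 + np.exp(np.clip(lw2 - lw1, -700.0, 700.0)))
    model = 1 if rng.random() < p1 else 2
    models[t] = model

print("estimated P(M1 | y):", np.mean(models[burn:] == 1))

Changing the pseudo-prior in this sketch changes how often the chain jumps between the two models, but the frequency of visits to model 1 still estimates the same posterior probability, which is the point made above.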
March 8, 2020 at 9:13 pm
Pseudopriors are also used in Gibbs variable selection methods to match the dimensions of nested regression/GLM models; see Dellaportas, Forster and Ntzoufras (2002, Statistics and Computing). Any Gibbs-based method with variable inclusion indicators (except SSVS) indirectly uses pseudopriors. Similarly, if we Metropolize any of these Gibbs methods, then the pseudoprior plays the same role as the proposal for the extra terms in RJMCMC.
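In case a formula helps, here is a schematic version, in my own notation rather than the exact formulation of Dellaportas, Forster and Ntzoufras (2002), of where the pseudoprior enters such a Gibbs variable selection scheme for a coefficient \(\beta_j\) with inclusion indicator \(\gamma_j\):

\[
\beta_j \mid \gamma_j \;\sim\; \gamma_j\,\mathrm{N}(0,\Sigma_j) \;+\; (1-\gamma_j)\,\mathrm{N}(\tilde\mu_j,\tilde S_j),
\]

the second component being the pseudoprior, so that the full conditional odds of inclusion are

\[
\frac{P(\gamma_j=1 \mid \beta,\gamma_{-j},y)}{P(\gamma_j=0 \mid \beta,\gamma_{-j},y)}
= \frac{f(y \mid \beta,\,\gamma_j=1,\gamma_{-j})\;\mathrm{N}(\beta_j;0,\Sigma_j)\;\pi(\gamma_j=1)}
       {f(y \mid \beta,\,\gamma_j=0,\gamma_{-j})\;\mathrm{N}(\beta_j;\tilde\mu_j,\tilde S_j)\;\pi(\gamma_j=0)},
\]

while \(\beta_j\) is drawn from its conditional posterior when \(\gamma_j=1\) and from the pseudoprior when \(\gamma_j=0\). The pseudoprior cancels from the marginal posterior of the visited models and only affects mixing, in line with the proposal interpretation in the comment above.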