But as I said, it’s a different problem to the one you’re considering.

And about the title: Testing sounded more generic and encompassing than Model choice or Model selection, I presume, so since we wanted to address the general problem it seemed more appropriate to use Testing… Of course, this is mostly a posteriori rationalisation.

To build on the remark by Dan Simpson: if we do a standard model selection via BF or the like, we only compare the ability of two (or more) model structures to conform with the data. If we formulate the model selection via a mixture model, we are essentially offering a number of additional intermediate models, which could have very different properties in terms of distribution etc.

So, if I get an alpha = 0.5, I wonder how we can distinguish whether both models are equally likely given the data, or whether the mixture is a lot more likely than either of the two.
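To make the mixture formulation concrete, here is a minimal sketch (not the paper’s setup; the two components N(0,1) and N(2,1), the flat prior on alpha, and the grid evaluation are all illustrative choices). It computes the posterior of the mixture weight alpha for data drawn from the first component; a posterior concentrated near 1 tells a different story from one concentrated near 0.5, which is precisely the distinction at issue.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_norm_pdf(x, mu):
    # log density of N(mu, 1) at x
    return -0.5 * (x - mu) ** 2 - 0.5 * np.log(2 * np.pi)

# Data generated from the first component (hypothetical example)
y = rng.normal(0.0, 1.0, size=100)

# Grid posterior of alpha under a flat prior, for the mixture
#   alpha * N(0,1) + (1 - alpha) * N(2,1)
alphas = np.linspace(0.001, 0.999, 999)
f1 = np.exp(log_norm_pdf(y, 0.0))  # first-component density at each datum
f2 = np.exp(log_norm_pdf(y, 2.0))  # second-component density at each datum
loglik = np.array([np.log(a * f1 + (1 - a) * f2).sum() for a in alphas])
post = np.exp(loglik - loglik.max())
post /= post.sum()

post_mean = (alphas * post).sum()
print(round(post_mean, 3))  # concentrates near 1 when the data favour N(0,1)
```

A posterior for alpha that piles up near 0.5 with little spread would then indicate that the intermediate mixtures genuinely fit better than either endpoint, whereas a flat posterior would indicate the data cannot tell the components apart.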

Side remark, but this is just semantics: I wondered why you used “testing hypotheses” and not “model selection” in the title.

This is analogous to how a Bayes factor can be computed by MCMC (with, say, the Carlin & Chib pseudo-prior approach, or RJMCMC) by picking a working prior over model probabilities and then taking the ratio of the posterior model probability to the prior model probability. And just as *that* working prior ought to be chosen for its computational properties, I would argue that MCMC in the estimation-as-model-testing setting should use a working prior chosen (or tuned on-line!) to provide good MCMC performance.
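The posterior-odds-over-prior-odds identity can be sketched in a toy case (all numbers hypothetical, and deliberately simple: two point-mass models for Bernoulli data, so each marginal likelihood is just a likelihood, and the model indicator can be sampled from its exact conditional rather than via a full Carlin & Chib scheme):

```python
import math
import random

random.seed(1)

# Toy data and two simple models: M1 says p = 0.5, M2 says p = 0.7
y = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
s, n = sum(y), len(y)

def log_marglik(p):
    # Bernoulli log marginal likelihood (no free parameters, so = log likelihood)
    return s * math.log(p) + (n - s) * math.log(1 - p)

exact_bf = math.exp(log_marglik(0.5) - log_marglik(0.7))

# Working prior over models, chosen for computational convenience only
prior1 = 0.5

# Exact conditional probability of the model indicator given the data
w1 = prior1 * math.exp(log_marglik(0.5))
w2 = (1 - prior1) * math.exp(log_marglik(0.7))
p1_post = w1 / (w1 + w2)

# Monte Carlo draws of the indicator, as an MCMC over models would produce
draws = [1 if random.random() < p1_post else 2 for _ in range(200_000)]
est_p1 = draws.count(1) / len(draws)

# Bayes factor = posterior odds divided by working-prior odds
est_bf = (est_p1 / (1 - est_p1)) / (prior1 / (1 - prior1))
print(round(est_bf, 3), round(exact_bf, 3))
```

The estimated Bayes factor does not depend on the working prior in expectation, only the Monte Carlo efficiency of the indicator chain does, which is the sense in which that prior is a tuning device rather than an inferential commitment.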
