Posterior likelihood

At the Edinburgh mixture estimation workshop, Murray Aitkin presented his proposal to compare models via the posterior distribution of the likelihood ratio.

$\dfrac{L_1(\theta_1|x)}{L_2(\theta_2|x)}$

As already commented in a post last July, the positive aspect of looking at this quantity rather than at the Bayes factor is that the priors are then allowed to be improper, as long as one simulates from the posterior of each model, as in Aitkin et al. (2007). My overall feeling has not changed, though: the ratio should instead be considered under the joint posterior of $(\theta_1,\theta_2)$, which is proportional to

$p_1 m_1(x) \pi_1(\theta_1|x) \pi_2(\theta_2)+p_2 m_2(x) \pi_2(\theta_2|x) \pi_1(\theta_1)$

instead of under the product of both posteriors. This of course makes a whole difference, as shown on the next R graph, which compares the distribution of the likelihood ratio under the true joint posterior and under the product of posteriors (when comparing a Poisson model against a negative binomial with $m=5$ successes, for the observation $x=3$). The joint simulation produces a much stronger argument in favour of the negative binomial model than the product of the posteriors does.
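The comparison can be sketched numerically. The following Python snippet (the original graph was produced in R) simulates the likelihood ratio both ways; the priors — an Exp(1) prior on the Poisson rate, a uniform prior on the negative binomial success probability — and the equal prior model weights $p_1=p_2=1/2$ are illustrative assumptions, not taken from the post:

```python
import numpy as np
from scipy import stats
from scipy.special import comb, beta as beta_fn

rng = np.random.default_rng(0)
x, m, N = 3, 5, 100_000  # observation, NegBin successes, simulation size

# Model 1: Poisson(lam); Model 2: NegBin, x failures before m successes, prob p
def L1(lam):
    return stats.poisson.pmf(x, lam)

def L2(p):
    return stats.nbinom.pmf(x, m, p)

# Assumed (proper) priors, not from the post: lam ~ Exp(1), p ~ Uniform(0,1),
# giving conjugate posteriors lam|x ~ Gamma(x+1, rate 2), p|x ~ Beta(m+1, x+1)

# (a) product of posteriors: independent draws from each posterior
lam_a = rng.gamma(x + 1, 1 / 2, N)  # shape, scale = 1/rate
p_a = rng.beta(m + 1, x + 1, N)
ratio_prod = L1(lam_a) / L2(p_a)

# (b) joint posterior: mixture with weights proportional to p_i * m_i(x)
m1 = 0.5 ** (x + 1)                              # marginal of x, Poisson/Exp(1)
m2 = comb(x + m - 1, x) * beta_fn(m + 1, x + 1)  # marginal of x, NegBin/Uniform
w1 = m1 / (m1 + m2)                              # with p1 = p2 = 1/2
pick1 = rng.random(N) < w1
# component 1: lam from its posterior, p from its prior; component 2: reversed
lam_b = np.where(pick1, rng.gamma(x + 1, 1 / 2, N), rng.exponential(1.0, N))
p_b = np.where(pick1, rng.random(N), rng.beta(m + 1, x + 1, N))
ratio_joint = L1(lam_b) / L2(p_b)

print("P(L1/L2 < 1), product of posteriors:", (ratio_prod < 1).mean())
print("P(L1/L2 < 1), joint posterior:     ", (ratio_joint < 1).mean())
```

Under the joint mixture, one of the two parameters is always drawn from its prior rather than its posterior, which is exactly what the product-of-posteriors simulation misses.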

Obviously, this joint perspective also cancels the appeal of the approach under improper priors: the mixture weights involve the marginal likelihoods $m_1(x)$ and $m_2(x)$, and each component requires simulating from the prior of the other model, neither of which is defined when the priors are improper.

