beyond Kingman’s coalescent

Zeeman Building, University of Warwick

Three of my colleagues at Warwick, Jere Koskela, Paul Jenkins, and Dario Spanò, just arXived a paper entitled "Computational inference beyond Kingman's coalescent". The paper is rather technical (for me), but its essence lies in extending Kingman's coalescent, which models the genetic evolution of a population from a common ancestor, and in proposing importance samplers that apply to those extensions and compare favourably with the reference importance sampler of Stephens and Donnelly (2000, JRSS Series B, Read Paper). The processes under study are the Λ-coalescent ("which allows for multiple mergers but only permits one merger at a time") and the Ξ-coalescent ("which permits any number of simultaneous, multiple mergers"). As in Stephens and Donnelly (2000), the authors derive optimal conditional (importance) sampling distributions, which are then approximated to achieve manageable proposals. The resulting importance sampler performs better than Stephens and Donnelly's (2000) when the model is not a Kingman coalescent. There is also a comparison with an alternative approach based on products of approximate conditionals (PAC), which approximate the MLEs rather well, if not the likelihood functions themselves, and hence can be used as calibration tools. I obviously wonder what a comparison with ABC (and the use of PAC proposals in an empirical likelihood version) would produce in this case.
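To see what "multiple mergers but one merger at a time" means in practice, here is a minimal sketch (mine, not the paper's code) of the block-counting chain of a Λ-coalescent: from b lineages, a given k-merger occurs at rate λ_{b,k} = ∫ x^{k-2}(1-x)^{b-k} Λ(dx). Kingman's coalescent is the case Λ = δ₀ (only pairwise mergers), while the Bolthausen–Sznitman coalescent (Λ uniform on [0,1]) gives λ_{b,k} = (k-2)!(b-k)!/(b-1)! and genuine multiple mergers. The function names are my own:

```python
import math
import random

def lam_kingman(b, k):
    # Kingman's coalescent (Lambda = point mass at 0):
    # only pairwise mergers, each pair merging at rate 1.
    return 1.0 if k == 2 else 0.0

def lam_bs(b, k):
    # Bolthausen–Sznitman coalescent (Lambda = Uniform[0,1]):
    # lambda_{b,k} = Beta(k-1, b-k+1) = (k-2)! (b-k)! / (b-1)!
    return math.factorial(k - 2) * math.factorial(b - k) / math.factorial(b - 1)

def simulate_block_counts(n, lam, seed=1):
    """Simulate the block-counting chain of a Lambda-coalescent:
    from b lineages, some k-merger (2 <= k <= b) occurs at total rate
    C(b, k) * lambda_{b,k}; return the (time, merger size) history."""
    rng = random.Random(seed)
    b, t, history = n, 0.0, []
    while b > 1:
        rates = [(k, math.comb(b, k) * lam(b, k)) for k in range(2, b + 1)]
        total = sum(r for _, r in rates)
        t += rng.expovariate(total)       # exponential holding time
        u = rng.random() * total          # pick the merger size k
        for k, r in rates:
            u -= r
            if u <= 0:
                break
        history.append((t, k))
        b -= k - 1                        # k lineages merge into one
    return history
```

Running this with `lam_kingman` only ever records mergers of size 2, while `lam_bs` produces multiple mergers; the Ξ-coalescent of the paper would further allow several such mergers simultaneously, which this one-merger-at-a-time chain does not capture.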

Besides the appeal of studying new importance samplers in this setting, an additional feature of the paper is that Jere Koskela worked on this project as part of his MASDOC (MSc) training at Warwick, which demonstrates the excellence of this elite Masters programme (in maths and stats).

2 Responses to “beyond Kingman’s coalescent”

  1. Not quite your point, but in this paper to appear in JCGS we propose a "marginal adjustment" for ABC, whereby, given a joint ABC posterior approximation, we separately estimate each parameter margin (more accurately than in the joint) and then replace the margins of the joint distribution with these estimates to obtain an improved approximation (recommended!). Now, in the extreme case where the dimension of the summary statistics is high, the dependence structure of the joint distribution will be poor and not too far from the prior. In this case, the marginal adjustment will produce an ABC posterior that is equivalent to a product of approximate marginals (PAM, anyone?). Not quite PAC, but here the PAM is a legitimate ABC posterior approximation…
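The adjustment described in the comment amounts to a quantile (rank) remapping of each margin: keep the dependence structure of the joint ABC sample, but push each coordinate onto a separately estimated, more accurate marginal sample. A minimal sketch under that reading (the function name and interface are mine, not the JCGS paper's):

```python
import numpy as np

def marginal_adjustment(joint_sample, marginal_samples):
    """Replace each margin of a joint ABC sample (n x d array) by the
    empirical quantiles of a separately estimated marginal sample of
    the same size, preserving the joint sample's ranks (and hence its
    dependence structure)."""
    n, d = joint_sample.shape
    adjusted = np.empty_like(joint_sample, dtype=float)
    for j in range(d):
        # rank of each draw within margin j (0 .. n-1, assuming no ties)
        ranks = joint_sample[:, j].argsort().argsort()
        # assign the rank-th smallest value of the accurate marginal
        adjusted[:, j] = np.sort(np.asarray(marginal_samples[j]))[ranks]
    return adjusted
```

If the joint approximation carries essentially no dependence beyond the prior, the output is indeed close to an independent product of the approximate marginals, the "PAM" of the comment.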
