Archive for reversible jump

improved approximate-Bayesian model-choice method for estimating shared evolutionary history [reply from the author]

Posted in Books, Statistics, University life on June 3, 2014 by xi'an

[Here is a very kind and detailed reply from Jamie Oaks to the comments I made on his ABC paper a few days ago:]

First of all, many thanks for your thorough review of my pre-print! It is very helpful and much appreciated. I just wanted to comment on a few things you address in your post.

I am a little confused about how my replacement of continuous uniform probability distributions with gamma distributions for priors on several parameters introduces a potentially crippling number of hyperparameters. Both uniform and gamma distributions have two parameters. So, the new model only has one additional hyperparameter compared to the original msBayes model: the concentration parameter on the Dirichlet process prior on divergence models. Also, the new model offers a uniform prior over divergence models (though I don’t recommend it).
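
To make the counting concrete, here is a minimal sketch (in Python, with illustrative hyperparameter values that are not taken from the paper or from msBayes) of the two two-parameter priors and of the single extra hyperparameter, the Dirichlet process concentration:

    # Hyperparameter counting: uniform and gamma priors both have two
    # hyperparameters; the only net addition is the DP concentration.
    # All numerical values below are illustrative placeholders.
    import numpy as np

    rng = np.random.default_rng(0)

    # Original-style prior: continuous uniform, two hyperparameters (lower, upper).
    theta_uniform = rng.uniform(low=0.0, high=10.0)

    # New-style prior: gamma, also two hyperparameters (shape, scale).
    theta_gamma = rng.gamma(shape=2.0, scale=1.5)

    # The one additional hyperparameter: the concentration parameter of the
    # Dirichlet process prior on divergence models (hypothetical value).
    dp_concentration = 1.0

    # Chinese-restaurant-process draw of a partition of taxon pairs into
    # divergence-time classes, governed by that single concentration parameter.
    def dp_partition(n_pairs, alpha, rng):
        labels = [0]
        for i in range(1, n_pairs):
            counts = np.bincount(labels)
            probs = np.append(counts, alpha) / (i + alpha)
            labels.append(int(rng.choice(len(probs), p=probs)))
        return labels

    print(dp_partition(8, dp_concentration, rng))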

Your comment about there being no new ABC technique is 100% correct. The model is new, the ABC numerical machinery is not. Also, your intuition is correct: I do not use the divergence times to calculate summary statistics. I mention the divergence times in the description of the ABC algorithm in the hope of making it clear that the times are scaled (see Equation (12)) prior to the simulation of the data (from which the summary statistics are calculated). This scaling simply converts units proportional to time into units proportional to the expected number of mutations. Clearly, my attempt at clarity only created unnecessary opacity. I'll have to make some edits.
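
For concreteness, here is a minimal sketch of this kind of rescaling; the parameterization (a simple product of time and a per-site mutation rate) is illustrative and is not Equation (12) itself:

    # Sketch: convert a divergence time from units proportional to time to
    # units proportional to the expected number of mutations per site.
    # Illustrative parameterization, not Equation (12) from the paper.
    def scale_divergence_time(tau, mu):
        """tau: divergence time in time-proportional units;
        mu: per-site mutation rate (hypothetical value in the example)."""
        return tau * mu

    print(scale_divergence_time(2.5, 1e-3))  # 0.0025 expected substitutions/site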

Regarding the reshuffling of the summary statistics calculated from different alignments of sequences, the statistics are not exchangeable. So, reshuffling them in a manner that is not consistent across all simulations and the observed data is not mathematically valid. Also, if elements are exchangeable, their order will not affect the likelihood (or the posterior, barring sampling error). Thus, if our goal is to approximate the likelihood, I would hope the reshuffling would also have little effect on the approximate posterior (otherwise my approximation is not so good?).
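
A toy illustration of the distinction (names and data are made up): a single permutation applied identically to the observed and all simulated statistic vectors is a harmless relabelling, whereas a fresh permutation per simulation breaks the correspondence between coordinates when the statistics are not exchangeable:

    # Toy sketch: consistent versus inconsistent reshuffling of summary statistics.
    import numpy as np

    rng = np.random.default_rng(1)
    obs_stats = np.array([0.12, 3.4, 0.88])   # one statistic per alignment (toy values)
    sim_stats = rng.normal(size=(1000, 3))    # one row per simulated dataset

    # Valid: one shared permutation, applied to the observed data and every simulation.
    perm = rng.permutation(3)
    obs_consistent = obs_stats[perm]
    sim_consistent = sim_stats[:, perm]

    # Invalid for non-exchangeable statistics: a new permutation per simulation
    # destroys the coordinate-wise correspondence with the observed vector.
    sim_reshuffled = np.array([row[rng.permutation(3)] for row in sim_stats])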

You are correct that my use of “bias” was not well defined in reference to the identity line of my plots of the estimated vs true probability of the one-divergence model. I think we can agree that, ideally (when all assumptions are met), the estimated posterior probability of a model should estimate the probability that the model is correct. For large numbers of simulation replicates, the proportion of the replicates for which the one-divergence model is true will approximate the probability that the one-divergence model is correct. Thus, if the method has the desirable (albeit “frequentist”) behavior that the estimated posterior probability of the one-divergence model is an unbiased estimate of the probability that the one-divergence model is correct, the points should fall near the identity line. For example, let us say the method estimates a posterior probability of 0.90 for the one-divergence model for 1000 simulated datasets. If the method is accurately estimating the probability that the one-divergence model is the correct model, then the one-divergence model should be the true model for approximately 900 of the 1000 datasets. Any trend away from the identity line indicates the method is biased in the (frequentist) sense that it is not correctly estimating the probability that the one-divergence model is the correct model. I agree this measure of “bias” is frequentist in nature. However, it seems like a worthwhile goal for Bayesian model-choice methods to have good frequentist properties. If a method strongly deviates from the identity line, it is much more difficult to interpret the posterior probabilities that it estimates. Going back to my example of the posterior probability of 0.90 for 1000 replicates, I would be alarmed if the model were true in only 100 of the replicates.
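
For concreteness, here is a toy sketch of this identity-line check, with synthetic inputs standing in for the simulation replicates (a real run would supply the estimated posterior probabilities and the true-model indicators from the simulations):

    # Sketch of the identity-line check: bin replicates by the estimated
    # posterior probability of the one-divergence model and compare each
    # bin's mean estimate with the fraction of replicates in which that
    # model was actually true. Synthetic inputs only.
    import numpy as np

    def calibration_curve(est_probs, is_true, n_bins=10):
        est_probs = np.asarray(est_probs)
        is_true = np.asarray(is_true, dtype=float)
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        points = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (est_probs >= lo) & (est_probs < hi)
            if mask.any():
                # for an unbiased method these two numbers agree (identity line)
                points.append((est_probs[mask].mean(), is_true[mask].mean()))
        return points

    # Toy data, perfectly calibrated by construction: replicates assigned
    # probability ~0.9 have the model true ~90% of the time.
    rng = np.random.default_rng(2)
    p = rng.uniform(size=5000)
    truth = rng.uniform(size=5000) < p
    print(calibration_curve(p, truth)[:3])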

My apologies if my citation of your PNAS paper seemed misleading. The citation was intended to be limited to the context of ABC methods that use summary statistics that are insufficient across the models under comparison (like msBayes and the method I present in the paper). I will definitely expand on this sentence to make this clearer in revisions. Thanks!

Lastly, my concluding remarks in the paper about full-likelihood methods in this domain are not as lofty as you might think. The likelihood function of the msBayes model is tractable and, in fact, has already been derived and implemented via reversible-jump MCMC (albeit not yet readily available). Also, there are plenty of examples of rich, Kingman-coalescent models implemented in full-likelihood Bayesian frameworks. There are too many to list, but many are implemented in the BEAST software package. One noteworthy example is the work of Bryant et al. (2012, Molecular Biology and Evolution, 29(8), 1917–32), which analytically integrates over all gene trees for biallelic markers under the coalescent.

Biometrika, volume 100

Posted in Books, Statistics, University life on March 5, 2013 by xi'an

I was privileged to have a look at a preliminary version of the now-published retrospective written by Mike Titterington on the first 100 issues of Biometrika (more exactly, “from volume 28 onwards”, as the title states). Mike was the dedicated editor of Biometrika for many years and edited a nice book for the 100th anniversary of the journal. He started from the 100 most highly cited papers within the journal to build a coherent chronological coverage. From a Bayesian perspective, this retrospective starts with Maurice Kendall trying to reconcile frequentists and non-frequentists in 1949, while having a hard time with fiducial statistics. Then Dennis Lindley makes it to the top 100 in 1957 with the Lindley-Jeffreys paradox. From 1958 till 1961, Darroch is quoted several times for his (fine) formalisation of the capture-recapture experiments we were to study much later (Biometrika, 1992) with Ed George… In the 1960s, Bayesian papers became more visible, including Don Fraser (1961), Arthur Dempster's Dempster-Shafer theory of evidence, George Box and co-authors (1965, 1968), and Arnold Zellner (1964). Keith Hastings' 1970 paper stands as the fifth most highly cited paper, even though it was ignored for almost two decades. The number of Bayesian papers kept increasing, including Binder's (1978) cluster estimation, Efron and Morris' (1972) James-Stein estimators, and Efron and Thisted's (1978) terrific evaluation of Shakespeare's vocabulary. From then on, the number of Bayesian papers gets too large to cover in its entirety. The 1980s saw papers by Julian Besag (1977, 1989, and 1989 with Peter Clifford, yet another MCMC precursor) and Luke Tierney's work (1989) on Laplace approximation. Carter and Kohn's (1994) MCMC algorithm for state-space models made it to the top 40, while Peter Green's (1995) reversible jump algorithm came close to Hastings' (1970) record, being the eighth most highly cited paper. Since the more recent papers do not make it to the top 100 list, Mike Titterington's coverage gets more exhaustive as the years approach the present, with an almost complete coverage of the final years. Overall, a fascinating journey through the years, and through the reasons why Biometrika is such a great journal, and remains so.

reversible jump on HMMs

Posted in Books, Mountains, pictures, Statistics, Travel, University life on December 19, 2011 by xi'an

Here is an email I received a few weeks ago about a paper written more than a decade ago in Glasgow with Tobias Rydén and Mike Titterington:

Sorry to bother you. I am a PhD student in economics. Recently, I have become very interested in your paper “Bayesian inference in hidden Markov models through the reversible jump Markov chain Monte Carlo method”. I would like to use your method to estimate a regime-switching economic model. Unfortunately, I do not exactly understand your paper. Hence, I am writing to ask for your help. My questions are:

  1. Is a split or merge move determined at the same time or sequentially? If the moves are determined at the same time, then accepting a split move implies that we cannot accept a merge move any more in the same sweep. If the moves are determined sequentially, it means that we can accept a split move first, then accept a merge move in the same sweep. [Answer: The first interpretation is correct, except that the type of move is first selected at random; only the corresponding move is then generated and potentially accepted. A sketch of one such sweep follows this list.]
  2. In the paper, you discuss in detail how to generate new transition probabilities in a split move. However, you did not discuss (probably I am wrong) how to generate probabilities in each new state (the series Zt in your paper). Could you please tell me how to generate the series Zt? [Answer: check eqn (3).]
  3. My economic model involves multiple series (a vector hidden Markov model); could you refer me to some other papers on the vector model? [Answer: If the observed series is multidimensional, the extension is formally straightforward, if potentially prone to slow mixing and low acceptance rates. If the hidden Markov chain is multidimensional, I have not seen a version of reversible jump in this setting. Maybe an extension of the variational methods described in Ghahramani and Jordan would help.]
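
Here is a minimal sketch of one such sweep, matching the answer to question 1: the move type is drawn at random first, then only that move is proposed and accepted or rejected, so a single sweep can never accept both a split and a merge. The acceptance probability and the proposal functions are hypothetical placeholders, not the actual quantities from the paper:

    # Skeleton of one reversible-jump sweep: choose the move type first,
    # then propose and accept/reject only that move. The callables
    # accept_prob, propose_split, propose_merge are hypothetical placeholders
    # (accept_prob would compute the Green (1995) acceptance ratio).
    import random

    def rj_sweep(state, accept_prob, propose_split, propose_merge):
        k = state["n_states"]
        move = random.choice(["split", "merge"]) if k > 1 else "split"
        proposal = propose_split(state) if move == "split" else propose_merge(state)
        if random.random() < accept_prob(state, proposal, move):
            return proposal   # move accepted; the sweep ends here
        return state          # move rejected; the chain stays put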

to which I replied that the questions showed a deep lack of understanding of what reversible jump is, and that the PhD student should first check the literature, for instance the great introductory paper by Charlie Geyer in the Handbook of Markov chain Monte Carlo, and then the original papers by Green (1995) and Richardson and Green (1997).

Handbook of Markov chain Monte Carlo

Posted in Books, R, Statistics, University life on September 22, 2011 by xi'an

At JSM, John Kimmel gave me a copy of the Handbook of Markov chain Monte Carlo, as I had not (yet?!) received it. This handbook is edited by Steve Brooks, Andrew Gelman, Galin Jones, and Xiao-Li Meng, all first-class jedis of the MCMC galaxy. I had not had a chance to look at the book until now, as Jean-Michel Marin took it home for me from Miami, but, as he remarked when giving it back to me last week, the outcome truly is excellent! Of course, the authors and editors being friends of mine, the reader may worry about the objectivity of this assessment; however, the quality of the contents is clearly there, and the book appears a worthy successor to the tremendous Markov chain Monte Carlo in Practice by Wally Gilks, Sylvia Richardson and David Spiegelhalter. (I can attest to the involvement of the editors from the many rounds of reviews we exchanged about our MCMC history chapter!) The style of the chapters is rather homogeneous, and there are a few R codes here and there. So, while I will still stick to our Monte Carlo Statistical Methods book for teaching MCMC to my graduate students next month, I think the book can be used at a teaching level as well as a reference on state-of-the-art MCMC technology.

Posterior model probabilities computed from model-specific Gibbs output [arXiv:1012.0073]

Posted in Books, Statistics on December 9, 2010 by xi'an

“Expressing RJMCMC as simple Gibbs sampling provides the key innovation of our formulation: it allows us to fit models one at a time using ordinary MCMC and then compute model weights or Bayes factors by post-processing the Monte Carlo output.”

Richard Barker (from the University of Otago, Dunedin, New Zealand) and William Link posted this new paper on arXiv. A point in their abstract attracted my attention, namely that they produce a “representation [that] allows [them] to fit models one at a time using ordinary MCMC and then compute model weights or Bayes factors by post-processing the Monte Carlo output”. This is quite interesting in that most attempts at building Bayes factor approximations from separate chains, each running on a separate model, have led to erroneous solutions. It appears, however, that the paper builds upon a technique fully developed in the authors' own book.
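
For the record, here is the elementary identity that any such post-processing ultimately targets, namely turning (estimated) marginal likelihoods and prior model probabilities into posterior model probabilities and Bayes factors; this is emphatically not Barker and Link's construction, and the numerical values are hypothetical placeholders:

    # From (estimated) log marginal likelihoods and prior model probabilities
    # to posterior model probabilities and Bayes factors. Placeholder values.
    import numpy as np

    log_marginal_liks = np.array([-1023.4, -1019.8, -1021.1])  # one per model
    prior_model_probs = np.array([1/3, 1/3, 1/3])

    log_w = log_marginal_liks + np.log(prior_model_probs)
    log_w -= log_w.max()                          # stabilize the exponentials
    post_model_probs = np.exp(log_w) / np.exp(log_w).sum()

    bf_21 = np.exp(log_marginal_liks[1] - log_marginal_liks[0])  # Bayes factor M2 vs M1
    print(post_model_probs, bf_21)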
