Archive for Ising

plenty of new arXivals!

Posted in Statistics, University life on October 2, 2014 by xi'an

Here are some entries I spotted in the past days as being of potential interest, for which I will not have enough time to comment:

  • arXiv:1410.0163: Instrumental Variables: An Econometrician’s Perspective by Guido Imbens
  • arXiv:1410.0123: Deep Tempering by Guillaume Desjardins, Heng Luo, Aaron Courville, Yoshua Bengio
  • arXiv:1410.0255: Variance reduction for irreversible Langevin samplers and diffusion on graphs by Luc Rey-Bellet, Konstantinos Spiliopoulos
  • arXiv:1409.8502: Combining Particle MCMC with Rao-Blackwellized Monte Carlo Data Association for Parameter Estimation in Multiple Target Tracking by Juho Kokkala, Simo Särkkä
  • arXiv:1409.8185: Adaptive Low-Complexity Sequential Inference for Dirichlet Process Mixture Models by Theodoros Tsiligkaridis, Keith W. Forsythe
  • arXiv:1409.7986: Hypothesis testing for Markov chain Monte Carlo by Benjamin M. Gyori, Daniel Paulin
  • arXiv:1409.7672: Order-invariant prior specification in Bayesian factor analysis by Dennis Leung, Mathias Drton
  • arXiv:1409.7458: Beyond Maximum Likelihood: from Theory to Practice by Jiantao Jiao, Kartik Venkat, Yanjun Han, Tsachy Weissman
  • arXiv:1409.7419: Identifying the number of clusters in discrete mixture models by Cláudia Silvestre, Margarida G. M. S. Cardoso, Mário A. T. Figueiredo
  • arXiv:1409.7287: Identification of jump Markov linear models using particle filters by Andreas Svensson, Thomas B. Schön, Fredrik Lindsten
  • arXiv:1409.7074: Variational Pseudolikelihood for Regularized Ising Inference by Charles K. Fisher

Another ABC paper

Posted in Statistics on July 24, 2010 by xi'an

“One aim is to extend the approach of Sisson et al. (2007) to provide an algorithm that is robust to implement.”

C.C. Drovandi & A.N. Pettitt

A paper by Drovandi and Pettitt appeared in the Early View section of Biometrics. It uses a combination of particles and MCMC moves to adapt to the true target, with an acceptance probability

\min\left\{1,\dfrac{\pi(\theta^*)q(\theta_c|\theta^*)}{\pi(\theta_c)q(\theta^*|\theta_c)}\right\}

where \theta^* is the proposed value and \theta_c is the current value (picked at random from the particle population), while q is a proposal kernel used to simulate the proposed value. The algorithm is adaptive in that the previous population of particles is used to make the choice of the proposal q, as well as of the tolerance level \epsilon_t. Although the method is valid as a particle system applied in the ABC setting, I have difficulties gauging the level of novelty of the method (then applied to a model of Riley et al., 2003, J. Theoretical Biology). Learning from previous particle populations to build a better kernel q is indeed a constant feature of SMC methods, from Sisson et al.'s ABC-PRC (2007) (note that Drovandi and Pettitt mistakenly believe the ABC-PRC method to include partial rejection control, as argued in this earlier post) to Beaumont et al.'s ABC-PMC (2009). The paper also advances the idea of adapting the tolerance on-line as an \alpha quantile of the previous particle population, but this is the same idea as in Del Moral et al.'s ABC-SMC. The only strong methodological difference, as far as I can tell, is that the MCMC steps are repeated "numerous times" in the current paper, instead of once as in the earlier papers. This however partly cancels the appeal of an O(N) order method versus the O(N²) order PMC and SMC methods. An interesting remark made in the paper is that more advances are needed in cases where simulating the pseudo-observations is highly costly, as in Ising models. However, replacing exact simulation [as we did in the model choice paper] with a Gibbs sampler cannot be that detrimental.
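For concreteness, here is a minimal Python sketch of this kind of adaptive particle ABC algorithm. It is not the authors' exact implementation: the symmetric Gaussian random-walk kernel scaled on the particle spread, the \alpha-quantile tolerance update, and the number of repeated MCMC moves are all illustrative choices, and simulate and distance stand for an assumed model simulator and ABC discrepancy.

```python
import numpy as np

def adaptive_abc_smc(prior_rvs, prior_pdf, simulate, distance, y_obs,
                     n_particles=1000, n_rounds=5, alpha=0.5, n_mcmc=5):
    """Sketch of a particle ABC sampler with repeated MCMC moves.

    prior_rvs(n) draws n values from the prior, prior_pdf evaluates the
    prior density (vectorised), simulate(theta) returns a pseudo-dataset,
    and distance(z, y) is the ABC discrepancy. All assumed interfaces.
    """
    # Initial population: plain rejection ABC from the prior
    theta = prior_rvs(n_particles)
    dist = np.array([distance(simulate(t), y_obs) for t in theta])
    eps = np.quantile(dist, alpha)

    for _ in range(n_rounds):
        # Resample from the particles already within the tolerance
        alive = np.where(dist <= eps)[0]
        idx = np.random.choice(alive, size=n_particles)
        theta, dist = theta[idx], dist[idx]

        # Random-walk scale adapted from the current particle population
        scale = 2.0 * np.std(theta)  # illustrative calibration

        for _ in range(n_mcmc):  # "numerous" MCMC moves per round
            prop = theta + scale * np.random.randn(n_particles)
            pdist = np.array([distance(simulate(t), y_obs) for t in prop])
            # Symmetric kernel, so the q terms cancel in the MH ratio;
            # the ABC constraint enters as the indicator pdist <= eps
            mh = prior_pdf(prop) / prior_pdf(theta)
            accept = (np.random.rand(n_particles) < mh) & (pdist <= eps)
            theta[accept], dist[accept] = prop[accept], pdist[accept]

        # Tolerance updated on-line as an alpha-quantile of the distances
        eps = np.quantile(dist, alpha)

    return theta, eps
```

With n_mcmc set to one, this reduces to the single-move schemes of the earlier papers; repeating the move is exactly what trades the O(N) appeal against the extra simulation cost mentioned above.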

ABC methods for model choice in Gibbs random fields

Posted in Statistics on February 19, 2009 by xi'an

1tqga from Thermotoga maritima

We have resubmitted to Bayesian Analysis a revised version of our paper "ABC methods for model choice in Gibbs random fields", available on arXiv. The only major change is the addition of a second protein example in the biophysical illustration. The core idea in this paper is that, for Gibbs random fields and in particular for Ising models, when comparing several neighbourhood structures, the computation of the posterior probabilities of the models/structures under competition can be carried out by likelihood-free simulation techniques akin to the Approximate Bayesian Computation (ABC) algorithm often discussed here. The point of this resolution is that, due to the specific structure of Gibbs random field distributions, there exists a sufficient statistic across models which allows for an exact (rather than Approximate) simulation from the posterior probabilities of the models. Obviously, when the structures grow more complex, it becomes necessary to introduce a true ABC step with a tolerance threshold \epsilon in order to avoid running the algorithm for too long. Our toy example shows that the accuracy of the approximation of the Bayes factor can be greatly improved by resorting to the original ABC approach, since it allows for the inclusion of many more simulations. In the biophysical application to the choice of a folding structure for two proteins, we also demonstrate that we can implement the ABC solution on realistic datasets and, in the examples processed there, that the Bayes factors allow for a ranking that more standard methods (FROST, TM-score) do not provide.
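As a companion illustration, here is a hypothetical Python sketch of the two ingredients involved: a single-site Gibbs sampler for an Ising field under a given neighbourhood structure, and ABC model choice based on the concatenated per-model sufficient statistics, where eps = 0 recovers the exact simulation from the model posterior. The interfaces (neigh, suff_stat, the uniform prior over models) are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def ising_gibbs(beta, neigh, shape=(20, 20), n_sweeps=100):
    """Single-site Gibbs sampler for an Ising field x in {-1, +1}.

    neigh(i, j, shape) returns the neighbours of site (i, j) under the
    chosen neighbourhood structure (an assumed interface)."""
    x = np.random.choice([-1, 1], size=shape)
    for _ in range(n_sweeps):
        for i in range(shape[0]):
            for j in range(shape[1]):
                s = sum(x[a, b] for a, b in neigh(i, j, shape))
                # Full conditional: P(x_ij = +1 | rest) = 1/(1+exp(-2*beta*s))
                p = 1.0 / (1.0 + np.exp(-2.0 * beta * s))
                x[i, j] = 1 if np.random.rand() < p else -1
    return x

def abc_model_choice(prior_rvs, simulate, suff_stat, y_obs,
                     n_sim=100_000, eps=0.0):
    """ABC estimate of posterior model probabilities for Gibbs random fields.

    prior_rvs[m]() draws a parameter for model m, simulate(m, theta)
    returns a field from model m (e.g. via ising_gibbs), and
    suff_stat(m, x) returns the model-m sufficient statistic vector."""
    n_models = len(prior_rvs)
    # Concatenating the per-model statistics gives a statistic that is
    # also sufficient for the model indicator across these models
    s_obs = np.concatenate([suff_stat(m, y_obs) for m in range(n_models)])
    accepted = []
    for _ in range(n_sim):
        m = np.random.randint(n_models)       # uniform prior over models
        x = simulate(m, prior_rvs[m]())
        s = np.concatenate([suff_stat(k, x) for k in range(n_models)])
        if np.linalg.norm(s - s_obs) <= eps:  # eps = 0: exact matching
            accepted.append(m)
    accepted = np.array(accepted)
    return np.array([(accepted == m).mean() for m in range(n_models)])
```

Note that in the Ising case the sufficient statistics are integer counts of concordant neighbour pairs, which is what makes exact matching with eps = 0 attainable; for more complex structures a positive tolerance becomes necessary, as discussed above.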