Archive for image analysis

Bayesian goodness of fit

Posted in Books, pictures, Statistics, University life on April 10, 2018 by xi'an

 

Persi Diaconis and Guanyang Wang have just arXived an interesting reflection on the notion of Bayesian goodness of fit tests, a notion that has always bothered me, in a rather positive sense (!), as

“I also have to confess at the outset to the zeal of a convert, a born again believer in stochastic methods. Last week, Dave Wright reminded me of the advice I had given a graduate student during my algebraic geometry days in the 70’s :`Good Grief, don’t waste your time studying statistics. It’s all cookbook nonsense.’ I take it back! …” David Mumford

The paper starts with a reference to David Mumford, whose paper with Wu and Zhou on exponential “maximum entropy” synthetic distributions is at the source (?) of this paper, and whose name appears in its very title: “A conversation for David Mumford”, about his conversion from pure (algebraic) maths to applied maths. The issue of (Bayesian) goodness of fit is addressed with card shuffling examples, the null hypothesis being that the permutation resulting from the shuffling is uniformly distributed provided shuffling goes on for long enough. Interestingly, even though the parameter space is compact, being the set of distributions on a finite set, Lindley’s paradox still occurs: under a “flat prior”, i.e. the Dirichlet D(1,…,1) over all permutations, the null (the permutation comes from a Uniform) is always accepted provided there is no repetition in the sample. (In this finite setting an improper prior is definitely improper, as it does not become proper after accounting for the observations. Although I do not understand why the Jeffreys prior is not the Dirichlet(½,…,½) in this case…) When resorting to the exponential family of distributions entertained by Zhou, Wu and Mumford, which includes the uniform distribution as one of its members, Diaconis and Wang advocate the use of a conjugate prior (exponential family, right?!) to compute a Bayes factor that simplifies into a ratio of two intractable normalising constants, for which the authors suggest importance sampling, thermodynamic integration, or the exchange algorithm. Except that they rely on the (dreaded) harmonic mean estimator for computing the Bayes factor in the following illustrative section! Due to the finite nature of the space, I presume this estimator still has a finite variance. (Remark 1 calls for convergence results on exchange algorithms, which I think can be found in the just as recent arXival by Christophe Andrieu and co-authors.) An interesting if rare feature of the example processed in the paper is that the sufficient statistic used for the permutation model can be directly simulated from a Multinomial distribution. This is rare, as seen when considering the benchmark of Ising models, for which the summary and sufficient statistic cannot be directly simulated. (If only…!) In fine, while I enjoyed the paper a lot, I remain uncertain as to its bearings, since defining an objective alternative for the goodness-of-fit test quickly becomes challenging outside simple enough models.
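To make that flat-prior remark concrete, here is a minimal numerical sketch (mine, not taken from Diaconis and Wang) of the Bayes factor pitting the uniform null against a Dirichlet D(α,…,α) alternative over the K = n! permutations, treating the observed shuffles as multinomial draws; when no permutation is repeated in the sample, the log Bayes factor is necessarily non-negative. The deck size, the sample size, and α = 1 are illustrative choices.

```python
import numpy as np
from math import factorial
from scipy.special import gammaln

def log_BF_uniform_vs_dirichlet(counts, K, alpha=1.0):
    """log Bayes factor of H0: uniform over K categories versus
    H1: category probabilities drawn from a Dirichlet(alpha,...,alpha),
    for an observed sequence with the given category counts.
    Only non-zero counts need to be passed, since empty categories
    contribute nothing, and the multinomial coefficient cancels."""
    counts = np.asarray(counts, dtype=float)
    N = counts.sum()
    log_m0 = -N * np.log(K)   # each draw has probability 1/K under H0
    log_m1 = (gammaln(K * alpha) - gammaln(K * alpha + N)
              + np.sum(gammaln(alpha + counts) - gammaln(alpha)))
    return log_m0 - log_m1

K = factorial(8)                                     # permutations of an 8-card deck
N = 200                                              # number of observed shuffles
no_repeat = np.ones(N)                               # every observed permutation distinct
print(log_BF_uniform_vs_dirichlet(no_repeat, K))     # non-negative: the null is favoured
repeats = np.concatenate(([50.], np.ones(N - 50)))   # one permutation seen 50 times
print(log_BF_uniform_vs_dirichlet(repeats, K))       # strongly negative: the alternative wins
```

Setting α = ½ in the same function corresponds to the Dirichlet(½,…,½) prior mentioned above.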

Advances in scalable Bayesian computation [day #2]

Posted in Books, Mountains, pictures, R, Statistics, University life on March 5, 2014 by xi'an

polyptych painting within the TransCanada Pipeline Pavilion, Banff Centre, Banff, March 21, 2012

And here is the second day of our workshop Advances in Scalable Bayesian Computation gone! This time, it sounded like the “main” theme was about brains… In fact, Simon Barthelmé‘s research originated from neurosciences, while Dawn Woodard dissected a brain (via MRI) during her talk! (Note that the BIRS website currently posts Simon’s video as being Dan Simpson’s talk, the late change in schedule being due to Dan most unfortunately losing his passport during a plane transfer and thus being prevented from attending…) I found Simon’s talk quite inspiring, with the Tibshirani et al. trick of using logistic regression to turn density estimation into a classification problem central to the method, suggesting a completely different vista for handling normalising constants… Then Raazesh Sainudiin gave a detailed explanation and validation of his approach to density estimation by multidimensional pavings/histograms, with a tree representation allowing for fast merging of different estimators. Raaz had given a preliminary version of the talk at CREST last Fall, which helped with focussing on the statistical aspects of the method. Chris Strickland then presented an image analysis of flooded Northern Queensland landscapes, using a spatio-temporal model with changepoints and about 18,000 parameters, still managing to get an O(np) efficiency thanks to two tricks. Then it was time for the group photograph outside in a balmy -18° and for an open research session that proved quite profitable.
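As an aside, here is my own toy rendering of that classification trick (density estimation by logistic regression against a known reference sample), which is not Simon’s actual construction: the Gaussian-mixture target, the wide Gaussian reference q, and the polynomial features are purely illustrative.

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(0)

# sample from the "unknown" target density (a two-component Gaussian mixture, for checking)
x_p = np.concatenate([rng.normal(-2.0, 0.7, 1500), rng.normal(1.5, 1.0, 1500)])
# sample from a known reference density q covering the same range
q = norm(loc=0.0, scale=4.0)
x_q = q.rvs(size=6000, random_state=1)

# turn density estimation into classification: label 1 = target draw, 0 = reference draw
X = np.concatenate([x_p, x_q])[:, None]
y = np.concatenate([np.ones(len(x_p)), np.zeros(len(x_q))])
clf = make_pipeline(PolynomialFeatures(degree=6, include_bias=False),
                    StandardScaler(),
                    LogisticRegression(max_iter=2000)).fit(X, y)

def density_estimate(x):
    # the Bayes-optimal log-odds equal log p(x)/q(x) + log(n_p/n_q),
    # so the target density is recovered from the fitted classifier
    log_odds = clf.decision_function(np.atleast_1d(x)[:, None])
    return q.pdf(x) * (len(x_q) / len(x_p)) * np.exp(log_odds)

grid = np.linspace(-5.0, 5.0, 11)
print(np.round(density_estimate(grid), 3))  # to be compared with the true mixture density
```

As I understand it, the link with normalising constants is that the same logistic fit run between samples from two unnormalised densities recovers the log ratio of their constants in the intercept, à la Geyer’s reverse logistic regression.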

In the afternoon sessions, Paul Fearnhead presented an auxiliary variable approach to particle Gibbs, which again opened new possibilities for handling state-space models, but also reminded me of Xiao-Li Meng’s reparameterisation devices. And made me wonder (out loud) whether or not the SMC algorithm was that essential in a static setting, since the sequence could be explored in any possible order for a fixed time horizon. Then Emily Fox gave a 2-for-1 talk, mostly focussing on the first part, where she introduced a new technique for approximating the gradient in Hamiltonian (or Hockey!) Monte Carlo using second-order Langevin dynamics. She did not have much time for the second part, which intersected with the talk she gave at BNP’ski in Chamonix, but focussed on a notion of sandwiched slice sampling where the target density only needs bounds that can be improved if needed. A cool trick! And the talks ended with Dawn Woodard‘s analysis of time-varying 3-D brain images towards lesion detection, through an efficient estimation of a spatial mixture of normals.
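For the curious, here is a minimal sketch of what a second-order Langevin update with noisy gradients may look like on a toy Gaussian target; this is my own illustration rather than the scheme presented in the talk, with arbitrary friction C, step size eps, and gradient-noise level.

```python
import numpy as np

rng = np.random.default_rng(42)

# toy target: N(mu, sigma^2), i.e. potential U(theta) = (theta - mu)^2 / (2 sigma^2)
mu, sigma = 2.0, 1.5

def noisy_grad_U(theta, noise_sd=0.5):
    """Exact gradient of U corrupted by Gaussian noise, mimicking a mini-batch estimate."""
    return (theta - mu) / sigma**2 + rng.normal(0.0, noise_sd)

def second_order_langevin(n_iter=50_000, eps=0.01, C=1.0):
    """Second-order Langevin updates with friction C and injected noise of variance 2*C*eps:
       v     <- v - eps * grad_U(theta) - eps * C * v + N(0, 2*C*eps)
       theta <- theta + eps * v
    """
    theta, v = 0.0, 0.0
    draws = np.empty(n_iter)
    for t in range(n_iter):
        v += -eps * noisy_grad_U(theta) - eps * C * v + rng.normal(0.0, np.sqrt(2.0 * C * eps))
        theta += eps * v
        draws[t] = theta
    return draws

draws = second_order_langevin()
print(draws[10_000:].mean(), draws[10_000:].std())  # roughly (2.0, 1.5) for the toy target
```

The friction term is what prevents the noise in the gradient estimate from inflating the kinetic energy, which is the point of moving from first- to second-order Langevin dynamics.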