**H**ere is an announcement from Oliver Ratman for a postdoc position at Imperial College London, with partners in Seattle, on epidemiology and new Bayesian methods for estimating sources of transmission with phylogenetics. As stressed by Ollie, no prerequisites in phylogenetics are required: they are really looking for someone with solid foundations in mathematics/statistics, especially Bayesian statistics, and good computing skills (R, GitHub, MCMC, Stan). The search is officially for a Postdoc in Statistics and Pathogen Phylodynamics. The reference number is NS2017189LH and the deadline is April 07, 2018.

## Archive for Durham

## postdoc position in London plus Seattle

Posted in Statistics with tags academic position, Bristol, Britain, Durham, Durham university, England, Imperial College London, London, postdoctoral position, professor of statistics, United Kingdom, University of Bristol on March 21, 2018 by xi'an

## more positions in the UK [postdoc & professor]

Posted in Statistics with tags academic position, Bristol, Britain, Durham, Durham university, England, Imperial College London, London, postdoctoral position, professor of statistics, United Kingdom, University of Bristol on October 13, 2017 by xi'an

**I** have received additional emails from England advertising positions in Bristol, Durham, and London, so here they are, with links to the complete adverts!

- The University of Bristol is seeking to appoint a number of Chairs in any area of Mathematics or Statistical Science, in support of a major strategic expansion of the School of Mathematics. Deadline is December 4.
- Durham University is opening a newly created position of Professor of Statistics, with research and teaching duties. Deadline is November 6.
- Oliver Ratman, in the Department of Mathematics at Imperial College London, is seeking a Research Associate in Statistics and Pathogen Phylodynamics. Deadline is October 30.

## Hamiltonian MC on discrete spaces [a reply from the authors]

Posted in Books, pictures, Statistics, University life with tags BNP11, discrete parameters, Duke University, Durham, Hamiltonian Monte Carlo, reply on July 8, 2017 by xi'an

*Q. Why not embed discrete parameters so that the resulting surrogate density function is smooth?*

A. This is only possible in very special settings. Let’s say we have a target distribution π(θ, n), where θ is continuous and n is discrete. To construct a smooth surrogate density, we would need to somehow smoothly interpolate the collection of functions f_{n}(θ) = π(θ, n) for n = 1, 2, …. It is not clear to us how to achieve this in a general and tractable way.

*Q. How to generalize the algorithm to a more complex parameter space?*

A. We provide a clear solution to dealing with a discontinuous target density defined on a continuous parameter space. We agree, however, that there remains the question of whether and how a more complex parameter space can be embedded into a continuous space. This certainly deserves further investigation. For example, a binary tree can be embedded into the interval [0,1] through the dyadic expansion of a real number.
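As a toy illustration of that last remark (my own sketch, not from the paper), a finite binary sequence, say the left/right choices along a path down a binary tree, maps to a point of [0,1] via its dyadic expansion, and the path can be read back off the real number:

```python
def encode(bits):
    """Dyadic expansion: map a bit sequence (b1, b2, ...) to
    x = sum_k b_k * 2^(-k), a point in [0, 1]."""
    return sum(b / 2 ** (k + 1) for k, b in enumerate(bits))

def decode(x, depth):
    """Recover the first `depth` bits of the dyadic expansion of x."""
    bits = []
    for _ in range(depth):
        x *= 2
        b = int(x)  # next bit: 0 or 1
        bits.append(b)
        x -= b
    return bits
```

For instance `encode([1, 0, 1])` gives 0.625 and `decode(0.625, 3)` recovers `[1, 0, 1]`; distinct paths of the same depth land in distinct dyadic subintervals, which is the sense in which the tree embeds into [0,1].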

*Q. The physical intuition behind discontinuous Hamiltonian dynamics is not clear from the theory of differential measure-valued equations and the selection principle.*

A. Hamiltonian dynamics with a discontinuous potential energy has long been used by physicists as a natural model for some physical phenomena (also known as “impulsive systems”). The main difference from a smooth system is that the gradient becomes a “delta function” at the discontinuity, causing an instantaneous “push” toward the direction of lower potential energy. The theory of differential measure-valued equations / inclusions and the selection principle is only a mathematical formalization of such physical systems.

*Q. (A special case of) DHMC looks like taking multiple Gibbs steps?*

A. The crucial difference from Metropolis-within-Gibbs is the presence of momentum in DHMC, which helps guide a Markov chain toward a high density region.

The effect of momentum is evident in the Jolly-Seber example of Section 5.1, where DHMC shows a 60-fold efficiency improvement over a “NUTS-Gibbs” sampler based on conditional updates. A direct comparison of DHMC and Metropolis-within-Gibbs can also be found in Section S4.1, where DHMC, thanks to the momentum, is about 7 times more efficient than Metropolis-within-Gibbs (with optimal proposal variances).

*Q. Unlike HMC, DHMC does not seem to use structural information about the parameter space and local information about the target density?*

A. It does. After all, apart from its use of a Laplace momentum and its handling of discontinuities in the target density, DHMC is based on the same principle as HMC: simulating Hamiltonian dynamics to generate a proposal.

The confusion is perhaps due to the fact that the coordinate-wise integrator of DHMC does not require gradients. The gradient of the log density, which may be a “delta” function at discontinuities, plays a clear role if you look at Hamilton’s equations, Eq. (10), corresponding to a Laplace momentum. It is just that, thanks to a property of the Laplace momentum and to the conservation of energy, we can approximate the exact dynamics without ever computing the gradient. This is in fact a remarkable property of the Laplace momentum and of our coordinate-wise integrator.
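To make the gradient-free claim concrete, here is a hedged sketch (names and simplifications mine, loosely following the paper's description) of one coordinate-wise update with a unit-mass Laplace momentum: each coordinate travels at constant speed sign(p_i), a jump in the potential U is paid for out of |p_i|, and the momentum is flipped when the kinetic energy does not suffice. Only differences of U are ever evaluated, never its gradient, which is why discontinuities cause no trouble.

```python
import math
import random

def coordwise_step(x, p, U, eps):
    """Sketch of one DHMC-style coordinate-wise update with unit-mass Laplace
    momentum K(p) = sum_i |p_i|.  Only potential-energy *differences* are
    used, so U may be discontinuous and no gradient is computed."""
    x, p = list(x), list(p)
    for i in random.sample(range(len(x)), len(x)):  # random scan order
        x_prop = list(x)
        x_prop[i] += eps * math.copysign(1.0, p[i])  # constant-speed move
        dU = U(x_prop) - U(x)
        if abs(p[i]) > dU:
            # enough kinetic energy in coordinate i to pay for the jump
            p[i] -= math.copysign(1.0, p[i]) * dU
            x = x_prop
        else:
            p[i] = -p[i]  # bounce off the energy barrier
    return x, p
```

On a flat potential the coordinate simply drifts with unchanged momentum, while a large enough step in U reflects the momentum in place, the same two behaviours as in the impulsive-system picture above.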

## Duke gardens

Posted in pictures, Running, Travel, University life with tags Duke University, Durham, pine trees, sunrise on December 26, 2013 by xi'an

## O’Bayes 2013 [#3]

Posted in pictures, Running, Statistics, Travel, University life with tags Duke University, Durham, hyper-g-prior, ISBA, median density, O-Bayes 2013, parallelisation, reference priors on December 23, 2013 by xi'an

**A** final day for this O’Bayes 2013 conference, where I missed the final session for travelling reasons. Several talks had highly attractive features (for me): David Dunson’s, on his recently arXived paper on parallel MCMC, which provides an alternative to the embarrassingly parallel algorithm I discussed a few weeks ago and will be discussed further in a future post; Marty Wells’, hindered by poor weather and delivered by phone, on L1 shrinkage estimators (a bit of a paradox since, as discussed by Yuzo Maruyama, most MAP estimators cannot be minimax and, more broadly, cannot be expressed as resolutions of loss minimisation); Malay Ghosh’s, revisiting g-priors from an almost frequentist viewpoint; and Gonzalo Garcia-Donato’s, presenting criteria for objective Bayesian model choice in a vision that was clearly the closest to my own perspective on the topic.

Overall, reflecting upon the diversity and high quality of the talks at this O’Bayes meeting, and also as the incoming chair-elect of the corresponding section of ISBA, I think that what emerges most significantly is an ongoing pondering on the nature of (objective Bayesian) testing, not only in the works extending the g-priors in various directions, but also in the whole debate between Bayes factors and information criteria, and between model averaging and model selection. During the discussion of Gonzalo’s talk, David Draper objected to the search for an automated approach to the comparison of models, but I strongly lean towards Gonzalo’s perspective, as we need to provide a reference solution able to tackle less formal and more realistic problems. I do hope to see more of those realistic problems tackled at O’Bayes 2015 (whose location is not yet settled).

In the meantime, a strong thank you to the local organising committee and most specifically to Jim Berger!