## Hamilton confronting intractability (with a little help from Metropolis)

**L**ast day of a great workshop! I filled more pages of my black notebook (“bloc”) than in the past month!!! This morning started with a Hamiltonian session, Paul Fearnhead presenting recent developments in this area. I liked his coverage very much, esp. because it moved away from the physics analogies that always put me off. The idea of getting away from the quadratic form had always seemed natural to me and opens an interesting range of investigations. (I think I rediscovered the topic during the talks, rephrasing almost the same questions as for Girolami’s and Calderhead’s Read Paper!) One thing that still intrigues me is the temporal dimension of the Hamiltonian representation. Indeed, it is “free” in the sense that the simulation problem does not depend on how long the pair (x,p) is moved along the equipotential curve. (In practice, there is a cost in running this move because it needs to be discretised.) But there is no clear target function for setting the time “right”. The only scale I can think of is when the pair comes back to its starting point. Which is less silly than it sounds, because the discretisation means that all intermediate points can be used, as suggested by Paul via a multiple-try scheme.

Mark then presented an application of Hamiltonian ideas and schemes to biochemical dynamics, with a supplementary trick of linearisation. Christian Lorenz Müller gave an ambitious *grand tour* of gradient-free optimisation techniques that sounded appealing from a simulation perspective (but would require a few more hours to apprehend!). Geoff Nicholls presented on-going research on approximating Metropolis-Hastings acceptance probabilities in a more general perspective than *à la* Andrieu-Robert, i.e. accepting some amount of bias, an idea he had explained to me when I visited Oxford. And Pierre Jacob concluded the meeting in the right tone with a *pot-pourri* of his papers on Wang-Landau. (Once again a talk I had already heard, but one that helped me make more sense of a complex notion…)
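To make the “free” time dimension concrete, here is a minimal Python sketch of the leapfrog discretisation of the Hamiltonian move, for a toy standard-normal target (so U(x)=x²/2; the target, step size and step count are all illustrative assumptions, not anyone’s actual implementation). The total integration time eps·L is exactly the parameter with no clear target function:

```python
import math

def leapfrog(x, p, grad_U, eps, L):
    """Leapfrog discretisation of Hamiltonian dynamics for
    H(x, p) = U(x) + p^2/2. Beyond the step size eps, the only
    tuning is the number of steps L, i.e. the total time eps * L."""
    p -= 0.5 * eps * grad_U(x)      # initial half step in momentum
    for _ in range(L - 1):
        x += eps * p                # full step in position
        p -= eps * grad_U(x)        # full step in momentum
    x += eps * p                    # last position step
    p -= 0.5 * eps * grad_U(x)      # final half step in momentum
    return x, p

# toy standard-normal target: U(x) = x^2 / 2, so grad_U(x) = x
grad_U = lambda x: x
H = lambda x, p: 0.5 * x * x + 0.5 * p * p

x1, p1 = leapfrog(1.0, 0.5, grad_U, eps=0.1, L=20)
# H(x1, p1) stays close to H(1.0, 0.5) = 0.625, so a Metropolis
# correction would accept such a move with high probability
```

The point of the sketch: changing L moves the pair further along the (near-)equipotential curve at no change in energy error per step, which is why every intermediate point is a usable candidate for a multiple-try scheme.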

**O**verall and talk-by-talk, a truly exceptional meeting. Which also set the bar quite high for us to compete at the ICMS meeting on advances in MCM next Monday! Esp. when a portion of the audience in Bristol will appear in Edinburgh as well!an In the meanwhile, I have to rewrite my talk for the seminar in Glasgow tomorrow in order to remove the overlap with my talk there last year… *(I note that I have just managed to fly to Scotland with no lost bag, a true achievement!)*

April 21, 2012 at 12:48 am

Hoffman and Gelman’s No-U-Turn Sampler deals with setting the number of leapfrog steps. Roughly speaking, NUTS takes increasing numbers of leapfrog steps until the path starts heading back to where it started, then slice samples from the elements on the path. The details are in their arXiv paper, and there’s a C++ implementation in Stan and a MATLAB implementation of Matt’s. In some cases, it can even be more efficient than optimally tuned HMC.
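A stripped-down sketch of the U-turn idea in Python (one time direction only, and no doubling or slice-sampling step, so this is only illustrative of the stopping rule, not of NUTS itself):

```python
def steps_until_u_turn(x, p, grad_U, eps, max_steps=10_000):
    """Leapfrog until the trajectory starts heading back towards
    its starting point, detected by (x - x0) * p turning negative."""
    x0 = x
    for n in range(1, max_steps + 1):
        # one leapfrog step
        p -= 0.5 * eps * grad_U(x)
        x += eps * p
        p -= 0.5 * eps * grad_U(x)
        if (x - x0) * p < 0:        # momentum now points back at x0
            return n
    return max_steps

# standard-normal toy target: the exact flow is a rotation, so the
# U-turn occurs after roughly half a period, about pi / eps steps
n = steps_until_u_turn(1.0, 0.0, lambda x: x, eps=0.1)
```

For this toy target the rule stops near n ≈ π/eps ≈ 31 steps, which is the “pair comes back towards its starting point” scale mentioned in the post.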

April 20, 2012 at 1:16 pm

How biased can the acceptance probability be? For large GMRF calculations (where the log-density is the “most impossible” thing to compute), it’s fairly easy to get an unbiased estimate of the log of the acceptance probability, but exponentiating it obviously leads to bias. Is this enough? (This has been annoying me since I did my PhD….)
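To make the bias in the comment concrete: if Z is an unbiased Gaussian estimate of log α with standard deviation σ, then E[exp(Z)] = α·exp(σ²/2) > α, so the plug-in acceptance probability is biased upwards by Jensen’s inequality. A toy numerical check (the Gaussian noise model and the numbers are assumptions for illustration, not the GMRF computation):

```python
import math
import random

random.seed(1)
alpha = 0.3          # true acceptance probability
sigma = 0.5          # std of the unbiased estimate of log(alpha)

# Z ~ N(log alpha, sigma^2) is unbiased for log(alpha), but
# E[exp(Z)] = alpha * exp(sigma^2 / 2), not alpha
n = 200_000
mean_exp = sum(math.exp(random.gauss(math.log(alpha), sigma))
               for _ in range(n)) / n
# mean_exp lands near 0.3 * exp(0.125) ~ 0.34 rather than 0.3
```

So the multiplicative bias exp(σ²/2) is benign only when the log-estimate is precise; with σ around one, the acceptance probability is inflated by roughly 65%.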