**M**ichael Betancourt, my colleague from Warwick, arXived a month ago a paper about a differential geometry approach to relaxation. *(In the Monte Carlo rather than the siesta sense of the term relaxation!)* He is considering the best way to link a simple base measure ϖ to a measure of interest π by the sequence
$$\pi_\beta(x) = \frac{\pi(x)^\beta\,\varpi(x)^{1-\beta}}{Z(\beta)}\,,\qquad 0\le\beta\le 1,$$
where Z(β) is the normalising constant (or *partition function* in the thermodynamic translation). Most methods are highly dependent on how the sequence of β’s is chosen. A first nice result (for me) is that the Kullback-Leibler distance and the partition function are strongly related in that
$$\text{KL}(\pi_\beta\,\|\,\varpi) = \beta\,\frac{\partial}{\partial\beta}\log Z(\beta) - \log Z(\beta)$$
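(As an aside, here is my own back-of-the-envelope check of this relation, assuming the geometric-bridge form for π_β and writing L(x)=π(x)/ϖ(x) for the likelihood ratio; this is my reconstruction, not the paper's derivation:

```latex
% Sketch, assuming \pi_\beta(x) = \pi(x)^\beta \varpi(x)^{1-\beta}/Z(\beta)
% and writing L(x) = \pi(x)/\varpi(x).
\begin{align*}
\text{KL}(\pi_\beta \,\|\, \varpi)
  &= \mathbb{E}_{\pi_\beta}\!\left[\beta \log L(x) - \log Z(\beta)\right]\\
  &= \beta\,\frac{\partial}{\partial\beta}\log Z(\beta) - \log Z(\beta),
\end{align*}
% using \partial_\beta \log Z(\beta) = \mathbb{E}_{\pi_\beta}[\log L(x)].
% Differentiating once more gives
% \partial_\beta \text{KL}(\pi_\beta\|\varpi) = \beta\,\partial_\beta^2 \log Z(\beta),
% i.e. the curvature of \log Z drives the variation of the KL distance.)
```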
which means that the variation in the normalising constant is driving the variation in the Kullback-Leibler distance. The next section goes into differential geometry and what remains from my Master's course on the subject is, alas, much too scattered for me to even remember notions like that of a *bundle*… So, like Andrew, I have trouble making sense of the resulting algorithm, which updates the temperature β along with the position and speed. (It sounds as if an extra, corresponding energy term were added to the original Hamiltonian function.) Even the Beta-Binomial
$$\pi_\beta(p) \propto p^{\beta(a-1)}(1-p)^{\beta(b-1)}$$
example is somewhat too involved for me. So I tried to write down the algorithm step by step in this special case. Which led to

- update β into β-εδp’²
- update p into p-εδp’
- update p’ into p’+ε{(1-a)/p+(b-1)/(1-p)}
- compute the average log-likelihood λ* under the tempered version of the target (at temperature β)
- update p’ into p’+2εβ{(1-a)/p+(b-1)/(1-p)}-ε[λ-λ*]p’
- update p’ into p’+ε{(1-a)/p+(b-1)/(1-p)}
- update β into β-εδp’²
- update p into p-εδp’

where p’ denotes the auxiliary momentum variable associated with the kinetic energy and λ is the current log-likelihood. (The parameter ε was equal to 0.005; I could not find the value of δ.) The only costly step in the above list is the approximation of the average log-likelihood λ*. The above details make the algorithm quite clear, but I am still missing the intuition behind it…
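For my own understanding, here is the above list of steps transcribed literally into Python for this Beta example. Several assumptions of mine are baked in: I take the log-likelihood as λ(p) = (a−1)log p + (b−1)log(1−p), in which case the tempered target at temperature β is a Beta(β(a−1)+1, β(b−1)+1) distribution; λ* is approximated by plain Monte Carlo under that distribution; and δ, whose value I could not find, is set to 1 as a placeholder.

```python
import math
import random

def log_lik(p, a, b):
    # assumed form of the log-likelihood for the Beta example
    return (a - 1) * math.log(p) + (b - 1) * math.log(1 - p)

def avg_log_lik(beta, a, b, n=1000, rng=random):
    # Monte Carlo estimate of the average log-likelihood lambda* under the
    # tempered target, here taken to be Beta(beta(a-1)+1, beta(b-1)+1)
    alpha_t = beta * (a - 1) + 1
    gamma_t = beta * (b - 1) + 1
    draws = (rng.betavariate(alpha_t, gamma_t) for _ in range(n))
    return sum(log_lik(x, a, b) for x in draws) / n

def grad(p, a, b):
    # the {(1-a)/p + (b-1)/(1-p)} term appearing in the momentum updates
    return (1 - a) / p + (b - 1) / (1 - p)

def one_step(p, pdot, beta, a, b, eps=0.005, delta=1.0):
    # literal transcription of the eight listed updates; delta is a placeholder
    beta -= eps * delta * pdot ** 2
    p -= eps * delta * pdot
    pdot += eps * grad(p, a, b)
    lam_star = avg_log_lik(beta, a, b)
    lam = log_lik(p, a, b)
    pdot += 2 * eps * beta * grad(p, a, b) - eps * (lam - lam_star) * pdot
    pdot += eps * grad(p, a, b)
    beta -= eps * delta * pdot ** 2
    p -= eps * delta * pdot
    return p, pdot, beta
```

Written out this way, the symmetry of the scheme (β and p updates bracketing the momentum refreshments) is at least visible, even if the intuition stays elusive.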