Archive for Barker’s algorithm

Monte Carlo calculations of the radial distribution functions for a proton-electron plasma

Posted in Books, pictures, Statistics, University life on October 11, 2017 by xi'an

“In conclusion, the Monte Carlo method of calculating radial distribution functions in a plasma is a feasible approach if significant computing time is available (…) The results indicate that at least 10000 iterations must be completed before the system can be considered near to its equilibrium state, and for a badly chosen starting configuration, the run would need to be considerably longer (…) for more conclusive results a longer run is needed so that the energy of the system can settle into an equilibrium pattern and steady-state radial distribution functions can be obtained.” A.A. Barker

Looking for the history behind Barker’s formula the other day made me look for the original 1965 paper, which was published in the Australian Journal of Physics at the beginning of Barker’s PhD at the University of Adelaide.

As shown in the above screenshot, the basis of Barker’s algorithm is indeed Barker’s acceptance probability, albeit written in a somewhat confusing way, since the current value of the chain is kept if a Uniform variate is smaller than what is actually the rejection probability. No mistake there! And more interestingly, Barker refers to Wood and Parker (1957) for the “complete and rigorous theory” behind the method. (Both Wood and Parker were affiliated with the Los Alamos Scientific Laboratory, while Barker acknowledges support from both the Australian Institute of Nuclear Science and Engineering and the Weapons Research Establishment, Salisbury… Those were times when nuclear weapon research was driving MCMC. Hopefully we will not come back to such times. Or, on the pessimistic side, we will not have time to come back to such times!)
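In modern notation, one such update reduces to the following minimal sketch (mine, with hypothetical names, and obviously not Barker’s original implementation), which makes the equivalence between the two phrasings explicit:

```python
import numpy as np

def barker_step(x, log_pi, step, rng):
    """One Barker (1965) update with a symmetric random-walk proposal
    (a sketch with hypothetical names, not Barker's original code)."""
    y = x + step * rng.standard_normal(np.shape(x))   # symmetric proposal
    # rejection probability pi(x) / (pi(x) + pi(y)), on the log scale
    log_rej = log_pi(x) - np.logaddexp(log_pi(x), log_pi(y))
    # Barker's phrasing: keep the current value when a Uniform variate
    # falls below this rejection probability, which is the same as
    # accepting y with probability pi(y) / (pi(x) + pi(y))
    return x if rng.uniform() < np.exp(log_rej) else y
```

Iterating this step with, e.g., rng = np.random.default_rng() and log_pi an unnormalised log-target satisfies detailed balance for that target, since π(x)q(y|x)·π(y)/(π(x)+π(y)) is symmetric in (x,y).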

As in Metropolis et al. (1953), the analysis is made on a discretised (finite) space, building the Markov transition matrix and stating the detailed balance equation (called microscopic reversibility). Interestingly, while Barker acknowledges that there are other ways of assigning the transition probability, his is the “most rapid” in terms of mixing. And equally interestingly, he discusses the scale of the random walk in the [not-yet-called] Metropolis-within-Gibbs move as a major issue, targeting 0.5 as the right acceptance rate and suggesting to adapt this scale on the go, as in the sketch below. There is also a side issue that is apparently not processed with all due rigour, namely the fact that the particles in the system cannot get arbitrarily close to one another. It is unclear how a proposal falling below this minimal distance is processed by Barker’s algorithm. When implemented on 32 particles, the algorithm took five hours to execute 6100 iterations. With a plot of the target energy function that does not shout convergence, far from it! As acknowledged by Barker himself (p.131).
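Barker gives no explicit recursion, but this on-the-go adaptation of the scale can be mimicked by a vanishing stochastic-approximation update towards the 0.5 target; a hypothetical sketch, with constants of my own choosing:

```python
import math

def adapt_scale(step, accepted, iteration, target=0.5, rate=0.7):
    """Diminishing adaptation of a random-walk scale towards a target
    acceptance rate (a hypothetical recursion, not Barker's)."""
    gamma = (iteration + 1) ** (-rate)    # vanishing adaptation weight
    # inflate the scale after an acceptance (pushing the rate down),
    # deflate it after a rejection (pushing the rate up)
    return step * math.exp(gamma * ((1.0 if accepted else 0.0) - target))
```

The vanishing weight γ is the standard diminishing-adaptation safeguard, so that the adaptation eventually stops perturbing the chain.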

The above quote is from the conclusion, and its acceptance of the need for increased computing times stands in sharp contrast with this week, when one of our papers was rejected based on this very feature…!

Barker at the Bernoulli factory

Posted in Books, Statistics on October 5, 2017 by xi'an

Yesterday, Flavio Gonçalves, Krzysztof Łatuszyński, and Gareth Roberts (Warwick) arXived a paper on Barker’s algorithm for Bayesian inference with intractable likelihoods.

“…roughly speaking Barker’s method is at worst half as good as Metropolis-Hastings.”

Barker’s acceptance probability (1965) is a smooth if less efficient version of Metropolis-Hastings. (Barker wrote his thesis in Adelaide, in the Mathematical Physics department. Most likely, he never interacted with Ronald Fisher, who died there in 1962.) This smoothness is exploited by devising a Bernoulli factory consisting of a 2-coin algorithm that manages to simulate the Bernoulli variable associated with the Barker probability, from a coin that can simulate Bernoullis with probabilities proportional to the [bounded] π(θ). For instance, using a bounded unbiased estimator of the target. And another coin that simulates another Bernoulli on a remainder term. Assuming the bound on the estimate of π(θ) is known [or part of the remainder term]. This is a neat result in that it expands the range of pseudo-marginal methods (and resuscitates Barker’s formula from oblivion!), as sketched below. The paper includes an illustration in the case of the far-from-toyish Wright-Fisher diffusion. [Making Fisher and Barker meet, in the end!]
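As I read it, the 2-coin construction reduces to the following minimal sketch (in Python, with hypothetical names of mine: coin1() and coin2() are the two black-box Bernoulli coins with success probabilities s1 and s2, while c1 and c2 are the known constants making the Barker probability equal to c1·s1/(c1·s1+c2·s2)):

```python
def two_coin(c1, coin1, c2, coin2, rng):
    """Bernoulli factory for p = c1*s1 / (c1*s1 + c2*s2), given black-box
    coins coin1(), coin2() with success probabilities s1, s2 and known
    constants c1, c2.  A sketch; the names are mine, not the paper's."""
    while True:
        if rng.uniform() < c1 / (c1 + c2):   # pick which coin to probe
            if coin1():                       # heads on coin 1: output 1
                return 1
        elif coin2():                         # heads on coin 2: output 0
            return 0
        # tails on the probed coin: discard the round and start again
```

In the Barker setting, returning 1 corresponds to accepting the proposed value of θ; the number of rounds is random, with more rounds needed when both coins rarely land heads, which is the practical cost of the factory.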

understanding the Hastings algorithm

Posted in Books, Statistics on August 26, 2014 by xi'an

David Minh and Paul Minh [who wrote the 2001 book Applied Probability Models] have recently arXived a paper on “understanding the Hastings algorithm”. They revert to the form of the acceptance probability suggested by Hastings (1970):

\rho(x,y) = s(x,y) \left(1+\dfrac{\pi(x) q(y|x)}{\pi(y) q(x|y)}\right)^{-1}

where s(x,y) is a symmetric function keeping the above between 0 and 1, and q is the proposal. This obviously includes the standard Metropolis-Hastings form of the ratio [taking s(x,y)=1+min(t,1/t) with t=π(x)q(y|x)/π(y)q(x|y), which is indeed symmetric since exchanging x and y turns t into 1/t], as well as Barker’s (1965) [with s≡1]:

\rho(x,y) = \left(1+\dfrac{\pi(x) q(y|x)}{\pi(y) q(x|y)}\right)^{-1}

which is known to be less efficient by accepting less often (see, e.g., Antonietta Mira’s PhD thesis). The authors also consider the alternative

\rho(x,y) = \min(\pi(y)/ q(y|x),1)\,\min(q(x|y)/\pi(x),1)

which I had not seen earlier. It is a rather intriguing quantity in that it can be interpreted as (a) a simulation of y from the cutoff target, corrected by reweighting the previous x into a simulation from q(x|y); or (b) a sequence of two acceptance-rejection steps, each concerned with a correspondence between target and proposal for x or y. There is an obvious caveat in this representation when the target is unnormalised, since the ratio may then be arbitrarily small… Yet another alternative could be proposed in this framework, namely the delayed acceptance probability of our paper with Marco and Clara, one special case being

\rho(x,y) = \min(\pi_1(y)q(x|y)/\pi_1(x) q(y|x),1)\,\min(\pi_2(y)/\pi_2(x),1)

where

\pi(x)\propto\pi_1(x)\pi_2(x)

is an arbitrary decomposition of the target. An interesting remark in the paper is that any Hastings representation can alternatively be written as

\rho(x,y) = \min(\pi(y)/k(x,y)q(y|x),1)\,\min(k(x,y)q(x|y)/\pi(x),1)

where k(x,y) is a (positive) symmetric function. Hence every single Metropolis-Hastings algorithm is also a delayed acceptance one, in the sense that it can be interpreted as a two-stage decision, as in the sketch below.
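To make this two-stage reading concrete, here is a minimal sketch of a delayed acceptance step for the decomposition of the target given above (in Python, with placeholder names of mine):

```python
import numpy as np

def delayed_acceptance_step(x, log_pi1, log_pi2, propose, log_q, rng):
    """One delayed-acceptance move for pi ∝ pi1 * pi2 (a sketch with
    placeholder names): propose(x, rng) draws y ~ q(.|x) and
    log_q(y, x) returns log q(y|x)."""
    y = propose(x, rng)
    # stage 1: Metropolis-Hastings test on the first factor pi1 alone
    log_a1 = log_pi1(y) + log_q(x, y) - log_pi1(x) - log_q(y, x)
    if np.log(rng.uniform()) >= min(log_a1, 0.0):
        return x                  # early rejection: pi2 is never evaluated
    # stage 2: separate test on the remaining, typically expensive, factor
    log_a2 = log_pi2(y) - log_pi2(x)
    return y if np.log(rng.uniform()) < min(log_a2, 0.0) else x
```

Since both independent uniform tests must pass, the overall acceptance probability is the product of the two mins, and the appeal is that π₂ is only evaluated when the first stage accepts.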

The second part of the paper considers an extension of the accept-reject algorithm where a value y proposed from a density q(y) is accepted with probability

\min(\pi(y)/ Mq(y),1)

and otherwise the current x is repeated, where M is an arbitrary constant (including of course the case where it is a genuine upper bound, as in the original accept-reject algorithm). Curiouser and curiouser, as Alice would say! While I think I have read some similar proposal in the past, I am a wee bit intrigued by the appeal of using only the proposed quantity y to decide about acceptance, since it does not provide the benefit of avoiding generations that are rejected. In this sense, it appears as the opposite of our vanilla Rao-Blackwellisation. (The paper however considers the symmetric version, called the independent Markovian minorizing algorithm, that only depends on the current x.) In the extension to proposals that depend on the current value x, the authors establish that this Markovian AR is in fine equivalent to the generic Hastings algorithm, hence providing an interpretation of the “mysterious” s(x,y) through a local maximising “constant” M(x,y). A possibly missing section in the paper is the comparison of the alternatives, albeit the authors mention Peskun’s (1973) result that exhibits the Metropolis-Hastings form as the optimum.
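For illustration, a minimal sketch of my reading of this Markovian accept-reject scheme, in its independent version where the acceptance probability only involves the proposed y (all names are hypothetical):

```python
import numpy as np

def markovian_ar(x0, log_pi, sample_q, log_q, log_M, n_iter, rng):
    """Markov-chain accept-reject (a sketch of my reading, with
    hypothetical names): propose y ~ q independently of x, accept with
    probability min(pi(y) / (M q(y)), 1), repeat the current x otherwise."""
    chain = [x0]
    for _ in range(n_iter):
        y = sample_q(rng)
        # the acceptance probability depends on the proposed y only
        log_a = log_pi(y) - log_M - log_q(y)
        if np.log(rng.uniform()) < min(log_a, 0.0):
            chain.append(y)           # accepted proposal
        else:
            chain.append(chain[-1])   # repeat the current value
    return np.array(chain)
```

When M is a genuine upper bound on π/q, the accepted values are exact draws from the target, recovering the validation of the original accept-reject algorithm.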