Archive for accept-reject algorithm

uniform on the sphere [or not]

Posted in pictures, R, Statistics on March 8, 2018 by xi'an

While looking at X validated questions, I came upon this comment that simulating a uniform distribution on a d-dimensional unit sphere does not proceed from generating angles at random on (0,2π) and computing spherical coordinates… Which I must confess would have been my initial suggestion! This is obvious, nonetheless, when computing the Jacobian of the spherical coordinate transform, which involves powers of the sines of the angles, in a decreasing sequence from d-1 to zero. This means that the angles should be simulated according to their respective sine-power densities. However, except for the d=3 case, where simulating from the density sin(φ) is straightforward by inverse cdf, i.e. φ=acos(1-2u), the cdfs for the higher powers are combinations of sines and cosines, and as such are not easily inverted. Take for instance the eighth power:

F₈(φ) = (840 φ – 672 sin(2φ) + 168 sin(4φ) – 32 sin(6φ) + 3 sin(8φ))/3072

While the densities are bounded by sin(φ), up to a constant, and hence an accept-reject can be easily derived, the efficiency decreases with the dimension according to the respective ratio of the Wallis’ integrals, unsurprisingly. A quick check for d=4 shows that the Normal simulation+projection-by-division-by-its-norm is faster.
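For the record, here is a minimal R sketch of the two routes compared above; the function names (rsinpow_ar, runifsphere) and the coding choices are mine, not the ones used for the quick check.

# Accept-reject for the target density proportional to sin(phi)^k on (0, pi):
# propose from the density sin(phi)/2 by inversion, phi = acos(1 - 2u), and
# accept with probability sin(phi)^(k-1), which is at most one.
rsinpow_ar <- function(n, k) {
  out <- numeric(n)
  for (i in 1:n) {
    repeat {
      phi <- acos(1 - 2 * runif(1))            # draw from the density sin(phi)/2
      if (runif(1) <= sin(phi)^(k - 1)) break  # accept-reject step
    }
    out[i] <- phi
  }
  out
}
# the competing Normal simulation + projection by division by its norm
runifsphere <- function(n, d) {
  x <- matrix(rnorm(n * d), n, d)
  x / sqrt(rowSums(x^2))                       # n uniform points on the unit sphere in R^d
}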

Puzzling a bit further about this while running, I wondered about simultaneous simulations from sin(φ), sin(φ)², sin(φ)³, &tc., but could not see a faster way to recycle simulations from sin(φ). Points (φ,u) located in-between two adjacent power curves are acceptable simulations from the corresponding upper curve, but they need to be augmented by points (φ,u) under the lower curve to constitute a representative sample. In the end, this amounts to multiplying simulations from the highest power density as many times as there are powers. No gain in sight… Sigh!

However, a few days later, while enjoying the sunset over Mont Blanc(!), I figured out that there exists a direct and efficient way to simulate from these powers of the sine function. Indeed, cos(φ) happens to be distributed as the signed root of a Beta(½,(d-1)/2) variate, which avoids the accept-reject step altogether. Presumably this is well-known, but I have not seen this proposal associated with the uniform distribution on the sphere.
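In R, and written in terms of the power k of the sine (so that k=d-2 recovers the Beta(½,(d-1)/2) above), this gives the following sketch, with a name and an illustration of my own:

# Direct simulation from the density proportional to sin(phi)^k on (0, pi):
# cos(phi) is the signed square root of a Beta(1/2, (k+1)/2) variate, so no
# accept-reject step is needed.
rsinpow_beta <- function(n, k) {
  z <- rbeta(n, 1/2, (k + 1) / 2)           # squared cosine of the angle
  s <- sample(c(-1, 1), n, replace = TRUE)  # random sign of the cosine
  acos(s * sqrt(z))
}
# illustration for the eighth power
phi <- rsinpow_beta(1e5, 8)
hist(phi, prob = TRUE, breaks = 100)
curve(sin(x)^8 / integrate(function(t) sin(t)^8, 0, pi)$value, add = TRUE)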

R package truncnorm

Posted in Statistics on November 8, 2017 by xi'an

This week in Warwick, thanks to a (rather incomprehensible) X validated question, I came across the CRAN R package truncnorm, which provides the “density, distribution function, quantile function, random generation and expected value function for the truncated normal distribution”. The short description of the sampler states that the method follows the accept-reject solution of John Geweke (1991), which I reproduced [independently!] a few years later. I may have missed the right code, but checking the GitHub repository associated with this package, I did not find in the C code a trace of our optimal solution via a translated exponential proposal, since the exponential proposal, when used, relies on a scale equal to the left truncation point, a in the above picture. Obviously, this does not make a major difference in the execution time (and the algorithm is still correct!).
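As a reminder of what that solution looks like, here is a bare-bones R version of the accept-reject sampler for the right tail of the standard Normal, with a translated exponential proposal whose optimal rate is (a+√(a²+4))/2 rather than a itself; the function name is mine and the loop is kept naive on purpose:

# Accept-reject for a N(0,1) restricted to [a, +Inf) with a translated
# exponential proposal of rate lambda = (a + sqrt(a^2 + 4)) / 2.
rnormtail <- function(n, a) {
  lambda <- (a + sqrt(a^2 + 4)) / 2
  out <- numeric(n)
  for (i in 1:n) {
    repeat {
      x <- a + rexp(1, rate = lambda)                  # translated exponential
      if (log(runif(1)) <= -(x - lambda)^2 / 2) break  # acceptance test
    }
    out[i] <- x
  }
  out
}
# for comparison: library(truncnorm); rtruncnorm(100, a = 2)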

Why is it necessary to sample from the posterior distribution if we already KNOW the posterior distribution?

Posted in Statistics on October 27, 2017 by xi'an

I found this question on X validated somewhat hilarious, all the more because of the shouted KNOW! And because of the confused impression that, since one can write down π(θ|x) up to a constant, one KNOWS this distribution… It is actually one of the paradoxes of simulation that, from a mathematical perspective, once π(θ|x) is available as a function of (θ,x), all other quantities related to this distribution are perfectly and uniquely defined. From a numerical perspective, however, this does not help. Actually, when starting my MCMC course at ENSAE a few days later, I had the same question from a student who thought that facing a density function like

f(x) ∝ exp{-||x||²-||x||⁴-||x||⁶}

was enough to immediately produce simulations from this distribution. (I also used this example to show the degeneracy of accept-reject as the dimension d of x increases, using for instance a Gamma proposal on y=||x||. The acceptance probability plunges to zero with d, with 9 acceptances out of 10⁷ for d=20.)
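Out of curiosity, here is one possible way of setting up such an experiment in R, working on the radial component y=||x||, whose density is proportional to y^(d-1) exp(-y²-y⁴-y⁶); the Gamma(2,1) proposal and the function name are arbitrary choices of mine, the post not specifying the proposal used in class, so the output will not reproduce the 9-in-10⁷ figure:

# Accept-reject on the radius y = ||x||, with target density proportional to
# y^(d-1) exp(-y^2 - y^4 - y^6) and a Gamma(shape, rate) proposal; the log of
# the target-to-proposal ratio is (d - shape) log(y) - y^2 - y^4 - y^6 + rate y.
ar_radius <- function(d, n = 1e6, shape = 2, rate = 1) {
  logratio <- function(y) (d - shape) * log(y) - y^2 - y^4 - y^6 + rate * y
  M <- optimize(logratio, c(1e-6, 10), maximum = TRUE)$objective  # log of the bound
  y <- rgamma(n, shape = shape, rate = rate)
  mean(log(runif(n)) <= logratio(y) - M)  # empirical acceptance probability
}
sapply(c(5, 10, 20), ar_radius)           # acceptance rates for growing d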

ARS: when to update?

Posted in Books, Kids, Statistics, University life on May 25, 2017 by xi'an

An email I got today from Heng Zhou wondered about the validity of the above form of the ARS algorithm, as printed in our book Monte Carlo Statistical Methods. The worry is that, in the original version of the algorithm, the envelope of the log-concave target f(·) is only updated for rejected values. My reply to the question is that both versions return values simulated from f, since changing the envelope between simulations does not modify the accept-reject nature of the algorithm. There is no issue of dependence between the simulations of this adaptive accept-reject method: all simulations remain independent. The question is rather one of efficiency, namely whether it pays to update the envelope(s) when accepting a new value, and I think it does, because the costly part is the computation of f(x) rather than the call to the piecewise-exponential envelope. Correct me if I am wrong!
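To make the point more concrete, here is a bare-bones tangent-line ARS in R (my own illustration, not the book's pseudo-code), which updates the abscissae at every proposed point, accepted or not: since any collection of tangents of a concave log-density dominates it, the accepted draws keep the correct distribution, and the extra update simply recycles the already-computed value of the log-density.

# Minimal adaptive rejection sampler for a log-concave target with log-density h
# and derivative dh; the initial abscissae x0 must bracket the mode (dh > 0 at
# the first point, dh < 0 at the last). The hull is recomputed at each step for
# simplicity of the code.
ars <- function(n, h, dh, x0) {
  xs <- sort(x0); out <- numeric(n); m <- 0
  while (m < n) {
    hs <- h(xs); dhs <- dh(xs); k <- length(xs)
    # breakpoints of the upper hull: intersections of consecutive tangents
    z  <- (hs[-1] - hs[-k] - xs[-1] * dhs[-1] + xs[-k] * dhs[-k]) / (dhs[-k] - dhs[-1])
    lo <- c(-Inf, z); hi <- c(z, Inf)
    w  <- exp(hs - xs * dhs) * (exp(dhs * hi) - exp(dhs * lo)) / dhs  # piece masses
    i  <- sample(k, 1, prob = w)            # pick a piece of the envelope
    u  <- runif(1)                          # inverse cdf within the chosen piece
    x  <- log(u * exp(dhs[i] * hi[i]) + (1 - u) * exp(dhs[i] * lo[i])) / dhs[i]
    ux <- hs[i] + dhs[i] * (x - xs[i])      # value of the upper hull at x
    if (log(runif(1)) <= h(x) - ux) { m <- m + 1; out[m] <- x }
    xs <- sort(c(xs, x))                    # update the envelope in either case
  }
  out
}
# example: standard Normal, h(x) = -x^2/2, dh(x) = -x
sims <- ars(1e3, function(x) -x^2 / 2, function(x) -x, x0 = c(-1, 0.5, 1.3))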

complexity of the von Neumann algorithm

Posted in Statistics on April 3, 2017 by xi'an

“Without the possibility of computing infimum and supremum of the density f over compact subintervals of the domain of f, sampling absolutely continuous distribution using the rejection method seems to be impossible in total generality.”

The von Neumann algorithm is another name for the rejection method, introduced by von Neumann circa 1951. It was thus most exciting to spot a paper by Luc Devroye and Claude Gravel in the latest issue of Statistics and Computing that assesses the method in terms of random bits and precision. Specifically, the authors assume that the only available random generator is a generator of random bits, which necessarily leads to an approximation when the target is a continuous density. They first propose a bisection algorithm for distributions defined on a compact interval, which compares random bits with recursive bisections of the unit interval and stops when the interval is small enough.
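My loose reading of the bisection idea (definitely not the authors' exact algorithm) is that random bits refine a dyadic interval for the underlying uniform until the corresponding quantile interval is shorter than the requested precision, as in the following R sketch with a made-up name and an arbitrary Beta example:

# Bit-by-bit inversion on a compact support: each random bit halves the dyadic
# interval (l, u) of the underlying uniform, and the recursion stops once the
# image of that interval by the quantile function qf is shorter than eps.
bit_inverse <- function(qf, eps = 2^-16) {
  l <- 0; u <- 1; nbits <- 0
  while (qf(u) - qf(l) > eps) {
    m <- (l + u) / 2
    if (runif(1) < 0.5) u <- m else l <- m   # consume one random bit
    nbits <- nbits + 1
  }
  list(value = qf((l + u) / 2), bits = nbits)
}
# example with the Beta(2, 3) quantile function
bit_inverse(function(p) qbeta(p, 2, 3))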

In higher dimensions, for densities f over the unit hypercube, they recall that the original algorithm consists in simulating uniforms x and u over the hypercube and [0,1], respectively, using the uniform as the proposal distribution and comparing the density at x, f(x), with the rescaled uniform. When using only random bits, the proposed method is based on a quadtree that subdivides the unit hypercube into smaller and smaller hypercubes until the selected hypercube is either entirely above or entirely below the density, and small enough for the desired precision. This obviously requires the computation of upper and lower bounds of the density over the hypercubes to be feasible, a property Devroye and Gravel consider necessary, as shown by the above quote. Densities with non-compact support can be re-expressed as densities on the unit hypercube thanks to the cdf transform. (Actually, this is equivalent to the general accept-reject algorithm, based on the associated proposal.)
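For completeness, the original (non-bit) version recalled at the start of the previous paragraph is nothing but the basic rejection sampler below, where the target density f on the unit hypercube and its bound fmax are mine, chosen only for illustration:

# Vanilla von Neumann rejection on the unit hypercube: simulate x uniformly on
# the hypercube and u uniformly on (0, 1), and accept x when u * fmax <= f(x).
vonneumann <- function(n, d, f, fmax) {
  out <- matrix(NA, n, d); m <- 0
  while (m < n) {
    x <- runif(d)
    if (runif(1) * fmax <= f(x)) { m <- m + 1; out[m, ] <- x }
  }
  out
}
# example: f proportional to a product of Beta(2, 2) densities on the unit square
sims <- vonneumann(1e3, 2, function(x) prod(6 * x * (1 - x)), fmax = 1.5^2)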

“With the oracles introduced in our modification of von Neumann’s method, we believe that it is impossible to design a rejection algorithm for densities that are not Riemann-integrable, so the question of the design of a universally valid rejection algorithm under the random bit model remains open.”

In conclusion, I enjoyed very much reading this paper, especially the reflection it proposes on the connection between Riemann integrability and rejection algorithms. (Actually, I cannot think straight away of a simulation algorithm that would handle non-Riemann-integrable densities, apart from nested sampling. Or of significant non-Riemann-integrable densities.)

Sobol’s Monte Carlo

Posted in Books, pictures, Statistics, University life on December 10, 2016 by xi'an


The name of Ilya Sobol is familiar to researchers in quasi-Monte Carlo methods for his Sobol' sequences. I was thus surprised to find in my office a small book entitled The Monte Carlo Method by this author, a translation of his 1968 book in Russian. I have no idea how it reached my office and I went to check with the library of Paris-Dauphine around the corner [of my corridor] whether it had been lost: apparently, the library got rid of it among a collection of old books… Now, having read through this 67-page book (or booklet, as Sobol puts it) makes me somewhat agree with the librarians, in that there is nothing of major relevance in this short introduction. It is quite interesting to go through the book and see the basics of simulation principles and Monte Carlo techniques unfold, from the inverse cdf principle [established by a rather convoluted proof] to importance sampling, but the amount of information is about equivalent to the Wikipedia entry on the topic. From an historical perspective, it is also captivating to see the efforts to connect physical random generators (such as those based on vacuum tube noise) to the shift-register pseudo-random generators created by Sobol in 1958 on a Soviet Strela computer.

While googling the title of that book did not return any connection, I found out that a 1994 version had been published under the title A Primer for the Monte Carlo Method, which is mostly the same as my version, except for a few additional sections on pseudo-random generation: from the congruential method (with a FORTRAN code), to the accept-reject method, then called von Neumann's instead of Neyman's, to the notion of constructive dimension of a simulation technique, which amounts to demarginalisation, and to quasi-Monte Carlo [for three pages]. A funny side note is that the author mentions in the preface that the first translation [now in my office] was published without his permission!

Poisson process model for Monte Carlo methods

Posted in Books on February 25, 2016 by xi'an

“Taken together this view of Monte Carlo simulation as a maximization problem is a promising direction, because it connects Monte Carlo research with the literature on optimization.”

Chris Maddison arXived today a paper on the use of Poisson processes in Monte Carlo simulation, based on the so-called Gumbel-max trick, which amounts to adding iid Gumbel variables to the log-probabilities log p(i) of the discrete target and taking the argmax as the result of the simulation. A neat trick, as it does not require the probability distribution to be normalised and, as indicated in the above quote, it relates simulation and optimisation. The generalisation considered here replaces the iid Gumbel variates with a Gumbel process, which is constructed as an “exponential race”, i.e., a Poisson process with an exponential auxiliary variable. The underlying variates can be generated from a substitute density, à la accept-reject, which means this alternative bounds the true target, as illustrated in the plot above.

The paper discusses two implementations of the principle found in an earlier NIPS 2014 paper [a paper that contains most of the novelty of this method]: one that refines the partition and the associated choice of proposals, and another that exploits a branch-and-bound tree structure to optimise the Gumbel process, with apparently higher performance. Overall, I wonder about the applicability of the approach because of the accept-reject structure: it seems unlikely to apply to high-dimensional problems.

While this is quite exciting, I find it surprising that the paper completely omits references to Brian Ripley's considerable input on simulation and point processes, as well as the relevant Geyer and Møller (1994). (I am obviously extremely pleased to see that our 2004 paper with George Casella and Marty Wells is quoted there. We had written this paper at Cornell, a few years earlier, right after the 1999 JSM in Baltimore, but it has hardly been mentioned since then!)