**W**hen looking for a link in a recent post, I came across Richard Brent's arXival of historical comments on George Forsythe's last paper (from 1972). Which is about the Forsythe-von Neumann approach to simulating exponential variates, covered in a special section of Luc Devroye's Non-Uniform Random Variate Generation (Section 2 of Chapter 4), about generating a random variable from a target density proportional to g(x)exp(-F(x)), where g is a density and F a function taking values in (0,1). After generating a realisation x⁰ from g and computing F(x⁰), one generates a sequence u¹,u²,… of uniforms as long as they keep decreasing, i.e., F(x⁰)>u¹>u²>… If the length k of this sequence, including the first uniform that breaks the decrease, is odd, the algorithm exits with the value x⁰, which is then a realisation from the density proportional to g(x)exp(-F(x)). Von Neumann (1949) treated the special case when g is constant and F(x)=x, which leads to an Exponential generator that never calls an exponential function. Which does not make the proposal a particularly efficient one, as it rejects about half of the simulations. Refinements of the algorithm lead to using on average 1.38 uniforms per Normal generation, which does not sound much faster than a call to the Box-Muller method, despite what is written in the paper. (Brent also suggests using David Wallace's 1999 Normal generator, which I had not encountered before. And which I am uncertain is relevant at the present time.)
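As a quick illustration, von Neumann's special case can be implemented with nothing but uniform draws (a minimal R sketch of my own, not Brent's or Forsythe's code): each round compares successive uniforms, an odd run length accepts the first uniform as the fractional part of the Exponential variate, and each rejected round adds one unit.

```r
# sketch of von Neumann's (1949) Exponential generator, using only uniforms:
# draw u1,u2,... while they keep decreasing; an odd run length accepts u1 as
# the fractional part, a rejected round adds one unit and starts over
vonneumann_exp <- function(){
  units <- 0
  repeat{
    u1 <- runif(1)
    u <- u1; k <- 1
    repeat{
      v <- runif(1)
      if (v > u) break   # sequence stopped decreasing
      u <- v; k <- k + 1
    }
    if (k %% 2 == 1) return(units + u1)  # odd run: accept
    units <- units + 1                   # even run: reject this round
  }
}
set.seed(1)
x <- replicate(1e4, vonneumann_exp())
mean(x) # should be close to 1, the Exp(1) expectation
```

The accepted u¹ has density proportional to exp(-x) on (0,1), and the Geometric number of rejected rounds supplies the integer part, so the output is Exp(1) without a single call to exp().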

## Archive for George Forsythe

## George Forsythe’s last paper

Posted in Books, Statistics, University life with tags George Forsythe, John von Neumann, last paper, random number generator on May 25, 2018 by xi'an

## best unbiased estimator of θ² for a Poisson model

Posted in Books, Kids, pictures, Statistics, Travel, University life with tags Bahamas, best unbiased estimator, complete statistics, counterexample, cross validated, George Forsythe, John von Neumann, Monte Carlo Statistical Methods, Poisson distribution, Rao-Blackwell theorem, sailing, shark, simulation on May 23, 2018 by xi'an

**A** mostly traditional question on X validated about the “best” [minimum variance] unbiased estimator of θ² from a Poisson P(θ) sample leads to the Rao-Blackwell solution
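The displayed solution (an image lost here) should be, if I reconstruct it correctly, δ(S)=S(S−1)/n² with S the sum of the observations, since S is distributed as P(nθ) and E[S(S−1)]=(nθ)². A two-line R check of that identity (my own sketch, with arbitrary illustration values of n and θ):

```r
# checking E[S(S-1)]/n^2 = theta^2 for S ~ Poisson(n*theta)
# (n and theta are arbitrary illustration values)
n <- 5; theta <- 2.3
s <- 0:200  # truncation: P(S > 200) is negligible when n*theta = 11.5
estimate <- sum(s * (s - 1) * dpois(s, n * theta)) / n^2
c(estimate, theta^2) # both equal 5.29
```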

and a similar estimator could be constructed for θ³, θ⁴, …, with the interesting limitation that this procedure stops at the power equal to the number of observations (minus one?). But, since the expectation of a power of the sufficient statistic S [with distribution P(nθ)] is a polynomial in θ, there is *de facto* no limitation. More interestingly, there is no unbiased estimator of negative powers of θ in this context, while this neat comparison on Wikipedia (borrowed from the great book of counter-examples by Romano and Siegel, 1986, selling for a mere $180 on amazon!) shows why looking for an unbiased estimator of exp(-2θ) is particularly foolish: the only solution is (-1) to the power S [for a single observation]. (There is however a first way to circumvent the difficulty, when having access to an arbitrary number of generations from the Poisson, since the Forsythe–von Neumann algorithm allows for an unbiased estimation of exp(-F(x)). And, as a second way, as remarked by Juho Kokkala below, a sample of at least two Poisson observations leads to a more coherent best unbiased estimator.)
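The foolishness of the exp(-2θ) case is easy to check: with a single observation S, E[(-1)^S] = Σ (-1)^s e^{-θ}θ^s/s! = e^{-θ}e^{-θ} = e^{-2θ}, so (-1)^S is indeed unbiased, while taking only the values ±1 for a quantity inside (0,1). A small R verification of the identity (my own sketch):

```r
# exact check that E[(-1)^S] = exp(-2*theta) for S ~ Poisson(theta)
theta <- 1.7  # arbitrary illustration value
s <- 0:100    # truncation is harmless at this range
lhs <- sum((-1)^s * dpois(s, theta))
c(lhs, exp(-2 * theta)) # the two values coincide
```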

## more e’s [and R’s]

Posted in Kids, pictures, R, Statistics with tags cross validated, debiasing, George Forsythe, Monte Carlo Statistical Methods, R, Russian roulette, simulation, unbiased estimation on February 22, 2016 by xi'an

**A**lex Thiéry suggested debiasing the biased estimate of e via Rhee and Glynn's truncated series method, so I tried the method to see how much of an improvement (if any!) this would bring. I first attempted to naïvely implement the raw formula of Rhee and Glynn

with a (large) Poisson distribution on the stopping rule N, but this took ages. I then realised that the index n did not have to be absolute, i.e., to start at n=1 and proceed snailwise one integer at a time: the formula remains equally valid after a change of time, i.e., n can start at an arbitrary value and proceed by steps of arbitrary size, which obviously speeds things up!
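To recall the debiasing formula (the displayed equation is lost here): for a sequence Yₙ converging to the quantity of interest, Rhee and Glynn's estimator is Z = Σ_{k=0}^N Δ_k / P(N ≥ k), with Δ_k = Y_k − Y_{k−1} and N an integer-valued stopping time, which is unbiased for lim Yₙ. A minimal R sketch of this estimator for e = Σ 1/k! with a Geometric stopping rule (my own toy version, not the code used in the post):

```r
# Rhee & Glynn debiased estimator of e = sum 1/k!, with a Geometric(p)
# stopping rule N so that P(N >= k) = (1-p)^k and Delta_k = 1/k!
rg_e <- function(p = 0.5){
  N <- rgeom(1, p)                  # counts "failures": support 0,1,2,...
  k <- 0:N
  sum(1 / factorial(k) / (1 - p)^k) # terms Delta_k / P(N >= k)
}
set.seed(2)
m <- mean(replicate(1e5, rg_e()))
m # close to exp(1) = 2.718...
```

Unbiasedness follows from E[Z] = Σ_k P(N ≥ k) Δ_k / P(N ≥ k) = Σ_k Δ_k = e, provided the weighted terms remain square-summable, which holds here since Δ_k/(1-p)^k = 2^k/k! for p = ½.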

## Гнеде́нко and Forsythe [and e]

Posted in Books, Kids, R, Statistics, University life with tags Гнеде́нко, exponential distribution, George Forsythe, Gnedenko, John von Neumann, Luc Devroye, Monte Carlo Statistical Methods, Theory of Probability, uniform distribution on February 16, 2016 by xi'an

**I**n the wake of my earlier post on the Monte Carlo estimation of *e* and *e⁻¹*, and after a discussion with my colleague Murray Pollock (Warwick) about Gnedenko's solution, he pointed out another (early) Monte Carlo approximation called Forsythe's method. That method is detailed quite neatly in Luc Devroye's bible, Non-uniform random variate generation (a free bible!). The idea is to run a sequence of uniform generations until the sequence goes up, i.e., until the latest uniform is larger than the previous one. The expectation of the corresponding stopping rule N, the number of generations until the uniform sequence stops decreasing, is then *e*, while the probability that N is odd is *e⁻¹*, most unexpectedly! Forsythe's method actually aims at a much broader goal, namely simulating from any density of the form g(x) exp{-F(x)}, F taking values in (0,1). This method generalises von Neumann's exponential generator (see Devroye, p.126), which only requires uniform generations.

This is absolutely identical to Gnedenko's approach in that both events have the same 1/n! probability of occurring [as pointed out by Gérard Letac in a comment on the previous entry]. *(I certainly cannot say whether or not one of the authors was aware of the other's result: Forsythe generalised von Neumann's method around 1972, while Gnedenko published Theory of Probability in 1969 at the latest, although this may be the date of the English translation, as I have not been able to find the reference on the Russian wikipedia page…)* Running a small R experiment to compare both distributions of N, the above barplot shows that they look quite similar:

```r
n=1e6
use=runif(n)
# Gnedenko's in action:
gest=NULL
i=1
while (i<(n-100)){
  sumuse=cumsum(use[i:(i+10)])
  if (sumuse[11]<1) sumuse=cumsum(use[i:(i+100)])
  j=min((1:length(sumuse))[sumuse>1])
  gest=c(gest,j)
  i=i+j}
# Forsythe's method:
fest=NULL
i=1
while (i<(n-100)){
  sumuse=c(-1,diff(use[i:(i+10)]))
  if (max(sumuse)<0) sumuse=c(-1,diff(use[i:(i+100)]))
  j=min((1:length(sumuse))[sumuse>0])
  fest=c(fest,j)
  i=i+j}
```

And the execution times of both approaches [with this rudimentary R code!] are quite close.
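As a more direct check of the two identities behind this post, one can also simulate the stopping time N itself, i.e., the index of the first uniform exceeding its predecessor, and compare its mean with e and its probability of being odd with e⁻¹ (again a quick sketch of my own):

```r
# N = index of the first uniform that goes up: E[N] = e and P(N odd) = 1/e
stoptime <- function(){
  u <- runif(1); n <- 1
  repeat{
    v <- runif(1); n <- n + 1
    if (v > u) return(n)  # the sequence just went up
    u <- v
  }
}
set.seed(3)
N <- replicate(1e5, stoptime())
c(mean(N), exp(1))            # both around 2.718
c(mean(N %% 2 == 1), exp(-1)) # both around 0.368
```

Both identities follow from P(N > n) = P(u₁ ≥ … ≥ uₙ) = 1/n!, summed over n.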

## Forsythe’s algorithm

Posted in R, Statistics with tags Bernoulli factory, George Forsythe, John von Neumann, Monte Carlo methods, normal distribution, series expansion, simulation on May 9, 2010 by xi'an

**I**n connection with the Bernoulli factory post of last week, Richard Brent arXived a short historical note recalling George Forsythe's algorithm for simulating variables with density proportional to exp{-G(x)} when 0 ≤ G(x) ≤ 1 (the extension to any upper bound is straightforward). The idea is to avoid computing the exponential function by simulating uniforms until

G(x) ≥ u¹ ≥ … ≥ uⁿ⁻¹ < uⁿ,

since the probability of this event is

G(x)ⁿ⁻¹/(n−1)! − G(x)ⁿ/n!,

its expectation is exp{G(x)}, and the probability that *n* is odd is exp{-G(x)}. This turns into a generation method if the support of *G* is bounded. In relation with the Bernoulli factory problem, I think this has potential applications in that, when the function *G(x)* is replaced with an unbiased estimator, the subsequent steps remain valid. This approach would indeed involve computing one single value of *G(x)*, but this is also the case with Latuszyński et al.'s and our solutions… So I am uncertain as to whether or not this has practical implications. (Brent mentions normal simulation but this is more history than methodology.)
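Since the decreasing run below G(x) has even length (equivalently, the stopping index is odd) with probability exp{-G(x)}, the comparisons deliver an unbiased {0,1} estimator, i.e., a Bernoulli factory coin, for exp{-G(x)} without a single call to exp(). A sketch of this step (my own illustration, with an arbitrary value of G(x)):

```r
# Forsythe's comparison trick as an unbiased {0,1} estimator of exp(-g)
# for 0 <= g <= 1: count the run g >= u1 >= u2 >= ...; an even run length
# (including length zero) occurs with probability exp(-g)
forsythe_coin <- function(g){
  run <- 0
  u <- g
  repeat{
    v <- runif(1)
    if (v > u) break   # the decreasing run just ended
    u <- v; run <- run + 1
  }
  as.numeric(run %% 2 == 0)
}
set.seed(4)
g <- 0.8  # arbitrary illustration value
p_hat <- mean(replicate(1e5, forsythe_coin(g)))
c(p_hat, exp(-g)) # both around 0.449
```

Replacing g with an unbiased estimate of G(x) in the first comparison is precisely where the Bernoulli factory connection enters.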