## The answer is e, what was the question?!

**A** rather exotic question on X validated: *since π can be approximated by random sampling over a unit square, is there an equivalent for approximating e?* This is an interesting question, as, indeed, why not focus on *e* rather than *π* after all?! But very quickly the very artificiality of the problem comes back to hit one in one's face… With no restriction, it is straightforward to think of a Monte Carlo average that converges to *e* as the number of simulations grows to infinity. However, methods like Poisson and normal simulations require complex functions like sine, cosine, or the exponential… But then someone came up with a connection to the great Russian probabilist Gnedenko, who gave as an exercise that the average number of uniforms one needs to add to exceed 1 is exactly *e*: writing $N=\min\{n:\,U_1+\cdots+U_n>1\}$, it writes as

$$\mathbb{E}[N]=\sum_{n=0}^{\infty}\mathbb{P}(N>n)=\sum_{n=0}^{\infty}\mathbb{P}(U_1+\cdots+U_n\le 1)=\sum_{n=0}^{\infty}\frac{1}{n!}=e$$

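A minimal R sketch of Gnedenko's exercise (the function and simulation-size names are illustrative choices): count how many uniforms are needed for the running sum to exceed 1, and average that count.

```r
# Gnedenko's exercise: the expected number of Unif(0,1) draws needed
# for their running sum to exceed 1 equals e
gnedenko_e <- function(n_sims = 1e5) {
  counts <- replicate(n_sims, {
    s <- 0; k <- 0
    while (s <= 1) { s <- s + runif(1); k <- k + 1 }
    k
  })
  mean(counts)
}
set.seed(1)
gnedenko_e()  # close to exp(1)
```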
(The result was later detailed in The American Statistician as an introductory simulation exercise akin to Buffon's needle.) This is a brilliant solution as it involves nothing but a standard uniform generator. I do not think it relates in any close way to generation from a Poisson process with parameter λ=1, for which the probability of exceeding one in a single step is *e*⁻¹; deriving a geometric variable from that process leads to an unbiased estimator of *e* as well. As an aside, W. Huber proposed the following elegantly concise line of R code to implement an approximation of *e*:

`1/mean(n*diff(sort(runif(n+1))) > 1)`

Hard to beat, isn’t it?! (Although it is more exactly a Monte Carlo approximation of

$$\left(1-\frac{1}{n}\right)^{-(n+1)}$$

since each spacing of n+1 uniforms exceeds 1/n with probability (1−1/n)ⁿ⁺¹, which adds a further level of approximation to the solution….)
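For the record, a runnable version of Huber's line (the value of n is an illustrative choice; increasing n shrinks both the Monte Carlo error and the residual bias):

```r
set.seed(42)
n <- 1e6
# sort n+1 uniforms, take their n spacings, rescale by n, and invert
# the proportion of spacings exceeding 1/n
1/mean(n * diff(sort(runif(n + 1))) > 1)  # close to exp(1)
```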

February 15, 2016 at 5:48 am

By the way, the only condition is that the sequence is exchangeable. There is no need to suppose that the r.v.’s are iid uniforms. The result is much more general.

February 15, 2016 at 4:10 pm

True. This is a rank property.

February 15, 2016 at 5:46 am

This wonderful vectorized R one-liner by Huber made me laugh out loud about the “C in R” code I wrote when answering this question last year!

http://stats.stackexchange.com/questions/148962/suppose-x-1-x-2-dotsc-x-n-are-i-i-d-random-variables-when-is-the-sequenc?rq=1

February 15, 2016 at 4:09 pm

This is very concise indeed. I am seeking a similar line for the Gnedenko solution, aiming at analysing a long uniform vector in one go with no recursion.
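One vectorised sketch along these lines (not necessarily the line eventually retained): draw an n × m matrix of uniforms, with the truncation at m columns an assumption justified by the chance 1/m! that m uniforms sum to less than 1 being negligible for m = 20.

```r
set.seed(7)
n <- 1e5; m <- 20
U <- matrix(runif(n * m), n, m)
S <- t(apply(U, 1, cumsum))   # row-wise running sums
# the number of partial sums <= 1 is one less than the first index exceeding 1
mean(rowSums(S <= 1) + 1)     # close to exp(1)
```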

February 17, 2016 at 6:19 pm

I also have a one-line solution for Forsythe’s method! Soon to be posted on the ‘Og, but already updated within my resolution on X validated.

February 12, 2016 at 3:05 pm

As correctly pointed out by Alex Thiery, the only non-trivial thing to prove is that P(U₁ + ⋯ + U_k ≤ 1) = 1/k! when the U_i’s are independent and uniform on (0,1). The best is to imagine that (U₁, …, U_k) is uniform in the unit cube [0,1]^k and consider the volume of the simplex (a tetrahedron when k = 3) defined by u₁ + ⋯ + u_k ≤ 1.
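A quick Monte Carlo check of this volume argument (k = 3 is an arbitrary illustration):

```r
set.seed(3)
k <- 3; n <- 1e6
U <- matrix(runif(n * k), n, k)
# fraction of points of the unit cube falling in the simplex u1+...+uk <= 1
mean(rowSums(U) <= 1)   # close to 1/factorial(k) = 1/6
```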

February 12, 2016 at 5:52 pm

Very nice and intuitive explanation as to why the probability is 1/k!


February 12, 2016 at 4:45 am

That’s neat!

One can also think about it in terms of random truncations (a.k.a. Russian roulette). Indeed, an unbiased estimate of “e” is given by the sum from k=0 to k=N of 1/[k! * P(k <= N)], where N is any reasonable integer-valued random variable.

If N is one less than the number of uniform random variables one needs to add up to exceed 1, then one precisely has that P(k <= N) = 1/k!, the sum reduces to N+1, and this gives back the mentioned result. But there are many other choices…
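A hedged sketch of this random-truncation estimator, taking a geometric truncation variable as one of those other choices (so that P(k <= N) = qᵏ is available in closed form; the function name and q = 0.7 are illustrative):

```r
set.seed(11)
roulette_e <- function(q = 0.7) {
  N <- rgeom(1, 1 - q)             # failures before first success: P(k <= N) = q^k
  k <- 0:N
  sum(1 / (factorial(k) * q^k))    # truncated series for e, reweighted term-wise
}
mean(replicate(1e5, roulette_e()))  # close to exp(1)
```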

February 12, 2016 at 7:02 am

Right! The Glynn-Rhee approach has indeed the power to turn it into an unbiased estimator. Thanks.