## flea circus

Posted in Books, Kids, pictures, R, Statistics on December 8, 2016 by xi'an

An old riddle, found on X validated asking for a Monte Carlo resolution, but originally given on Project Euler:

A 30×30 grid of squares contains 30² fleas, initially one flea per square. When a bell is rung, each flea jumps to an adjacent square at random. What is the expected number of unoccupied squares after 50 bell rings, up to six decimal places?

The debate on X validated is whether or not a Monte Carlo resolution is feasible. Up to six decimals, certainly not. But with some lower precision, certainly. Here is a rather basic R code where the 50 steps are operated on the 900 squares, rather than the 900 fleas. This saves some time by avoiding empty squares.

exprmt=function(n=10,T=50){
  mean=0
  for (t in 1:n){
    board=rep(1,900)   # one flea per square
    for (b in 1:T){    # bell rings
      beard=rep(0,900) # occupancy after the jump
      # top-left corner square
      if (board[1]>0){
        poz=c(0,1,0,30)
        ne=rmultinom(1,board[1],prob=(poz!=0))
        beard[1+poz]=beard[1+poz]+ne}
      # all remaining squares but the bottom-right corner
      for (i in (2:899)[board[2:899]>0]){
        u=(i-1)%%30+1;v=(i-1)%/%30+1
        poz=c(-(u>1),(u<30),-30*(v>1),30*(v<30))
        ne=rmultinom(1,board[i],prob=(poz!=0))
        beard[i+poz]=beard[i+poz]+ne}
      # bottom-right corner square
      if (board[900]>0){
        poz=c(-1,0,-30,0)
        ne=rmultinom(1,board[900],prob=(poz!=0))
        beard[900+poz]=beard[900+poz]+ne}
      board=beard}
    mean=mean+sum(board==0)}
  return(mean/n)}


The function returns an empirical average over n replications. The handling of the borderline squares is presumably awkward, since it involves adding zeros to keep the structure the same… Nonetheless, it produces an approximation rather close to the approximate expected value 900/e, in about 3 minutes on my laptop.

> exprmt(n=1e3)
[1] 331.082
> 900/exp(1)
[1] 331.0915


Further gains follow from considering only half of the squares, since there are two independent processes acting in parallel, one on each colour of the checkerboard. I then looked at an alternative and much faster approach using the stationary distribution, namely the Multinomial(450, (2/1740, 3/1740, …, 4/1740, …, 3/1740, 2/1740)), with probabilities proportional to 2 in the corners, 3 on the sides, and 4 in the inside. (Strictly speaking, the process has no stationary distribution, since it is periodic; but one can consider instead the subprocess indexed by even times.) The occupancy frequencies indeed agree with this stationary approximation, after defining the weights as:

inva=function(B=30){
  # weights proportional to the number of neighbours:
  # 2 in the corners, 3 on the sides, 4 inside
  return(c(2,rep(3,B-2),2,
           rep(c(3,rep(4,B-2),3),B-2),
           2,rep(3,B-2),2))}


namely

> mn=0;n=1e8 #14 clock hours!
> proz=rep(c(rep(c(0,1),15),rep(c(1,0),15)),15)*inva(30)
> for (t in 1:n)
+ mn=mn+table(rmultinom(1,450,prob=proz))[1:4]
> mn=mn/n
> mn[1]=mn[1]-450 #remove the 450 squares of the other half
> mn
     0      1      2      3
166.11 164.92  82.56  27.71
> exprmt(n=1e6) #modified to return the occupancy frequencies; 55 clock hours!!
[1] 165.36 165.69  82.92  27.57

my original confusion being that the Poisson approximation had not yet taken over… (Of course, computing the first frequency under the stationary distribution does not require any simulation, since it is the sum over the 450 squares of the complement probabilities to the power 450, i.e., 166.1069.)
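For the record, here is a short sketch of that closed-form computation: one half of the board holds 2 corner squares, 56 side squares and 392 interior squares, with weights 2, 3 and 4 summing to 1740.

```r
# expected number of empty squares on one half of the board, under the
# stationary multinomial with cell probabilities proportional to 2 (corners),
# 3 (sides) and 4 (interior)
w <- c(rep(2, 2), rep(3, 56), rep(4, 392))  # weights of the 450 active cells
sum((1 - w / sum(w))^450)                   # about 166.107
```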

## random walk on a torus [riddle]

Posted in Books, Kids, pictures on September 16, 2016 by xi'an

The Riddler of this week(-end) has a simple riddle to propose, namely given a random walk on the {1,2,…,N} torus with a ⅓ probability of death, what is the probability of death occurring at the starting point?

The question is close to William Feller’s famous Chapter III on random walks, with his equally famous reflection principle. Conditioning on the number n of moves made before death, which as we all know is definitely absorbing (!), the event of interest is a passage at zero, or at any multiple kN of N (accounting for the torus wrap-around), at time n, death striking at the following step. For a passage at zero, this cannot happen when n is odd, and for even n it is a Binomial event with probability

${n \choose n/2}\, 2^{-n}$

For a passage at kN, with k different from zero, n+kN must be even and the probability is then

${n \choose \frac{n+kN}{2}}\, 2^{-n}$

which leads to a global probability of

$\sum_{n=0}^\infty \dfrac{2^n}{3^{n+1}} \sum_{k=-\lfloor n/N \rfloor}^{\lfloor n/N \rfloor} {n \choose \frac{n+kN}{2}}\, 2^{-n}$

i.e.

$\sum_{n=0}^\infty \dfrac{1}{3^{n+1}} \sum_{k=-\lfloor n/N \rfloor}^{\lfloor n/N \rfloor} {n \choose \frac{n+kN}{2}}$
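Unwieldy as it is, the double sum can still be evaluated numerically by truncation; in the sketch below, the truncation point nmax is an arbitrary choice, and terms with n+kN odd are zero.

```r
# truncated evaluation of the probability of dying at the starting point
death_at_start <- function(N, nmax = 200) {
  tot <- 0
  for (n in 0:nmax) {
    k <- seq(-floor(n / N), floor(n / N))
    k <- k[(n + k * N) %% 2 == 0]   # parity constraint on n + kN
    if (length(k))
      tot <- tot + sum(choose(n, (n + k * N) / 2)) / 3^(n + 1)
  }
  tot
}
death_at_start(3)     # 0.5
death_at_start(1000)  # 0.4472136, i.e., 1/sqrt(5)
```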

Since this formula is rather unwieldy, I looked for another approach during a métro ride [to downtown Paris to enjoy a drink with Stephen Stigler]. An easier one is to attach to each point i of the torus the probability p[i] of dying at position 0 when starting from i, and to solve the associated system of equations. For instance, when N=3, the system reduces to

$p_0=1/3+2/3 p_1, \quad p_1=1/3 p_0 + 1/3 p_1$

which leads to a probability of ½ of dying at position 0 when starting from 0. When letting N grow to infinity, the torus structure no longer matters and dying at position 0 means returning to position 0, which is a special case of the above combinatorial formula, namely

$\sum_{m=0}^\infty \dfrac{1}{3^{2m+1}} {2m \choose m}$

which happens to be equal to

$\dfrac{1}{3}\,\dfrac{1}{\sqrt{1-4/9}}=\dfrac{1}{\sqrt{5}}\approx 0.4472$

as can be [unnecessarily] checked by a direct R simulation. This √5 is actually the most surprising part of the exercise!
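Said direct simulation can look as follows (a sketch: the torus is taken large enough that the wrap-around has negligible impact, and N and M are arbitrary illustrative choices).

```r
# Monte Carlo check: walk with death probability 1/3 at each step,
# recording whether death occurs at the starting point
set.seed(1)
sim_death_at_start <- function(N = 1e3, M = 1e5) {
  hits <- 0
  for (m in 1:M) {
    pos <- 0
    repeat {
      if (runif(1) < 1/3) {              # the walker dies here
        hits <- hits + (pos %% N == 0)
        break
      }
      pos <- pos + sample(c(-1, 1), 1)   # otherwise, jump left or right
    }
  }
  hits / M
}
sim_death_at_start()  # close to 1/sqrt(5), approximately 0.4472
```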

Posted in Books, Kids, University life on June 30, 2016 by xi'an

A very interesting issue of Nature, read this morning over breakfast: a post-Brexit read of a pre-Brexit issue. Apart from the several articles arguing against Brexit and its dire consequences for British science [but preaching to the converted, for what percentage of Brexit voters reads Nature?!], there is a short vignette on the differences between fields in the average time spent refereeing a paper (maths takes twice as long as the social sciences, and academics older than 65 take half the time of researchers under 36!). A letter calling for action against predatory publishers. And the first maths paper published since I started reading Nature on an almost-regular basis: it studies mean first-passage times for non-Markov random walks, specified as walks with time-homogeneous increments. It is sort of a weird maths paper in that I do not see where the mathematical novelty stands and why the paper contains only half a dozen formulas… Maybe not a maths paper after all.

## optimal simulation on a convex set

Posted in R, Statistics on February 4, 2016 by xi'an

This morning, we had a jam session at the maths department of Paris-Dauphine where a few researchers & colleagues of mine presented their fields of research to the whole department. Very interesting despite, or thanks to, the variety of topics, with forays into the three-body problem(s) [and Poincaré‘s mistake], mean-field games for Nash equilibria (or how to exit a movie theatre), approximate losses in machine learning, and so on. Somehow, there was some unity as well, through randomness, convexity and optimal transport. One talk close to my own interests was obviously the study of simulation within convex sets by Joseph Lehec from Paris-Dauphine [with Sébastien Bubeck & Ronen Eldan], as they have established a total-variation convergence result at a speed increasing only polynomially with the dimension. The underlying simulation algorithm is rather theoretical in that it involves random walk (or Langevin-corrected) moves where any excursion outside the convex support is replaced with its projection on the set. A projection that may prove pretty expensive to compute if the convex set is defined, for instance, as the intersection of many hyperplanes. So I do not readily see how the scheme can be recycled into a competitor to a Metropolis-Hastings solution, in that the resulting chain hits the boundary from time to time, and does so with a constant frequency over iterations. A solution is of course to use Metropolis-Hastings instead, while another one is to bounce on the boundary and then correct by Metropolis-Hastings… The optimal scales in the three different cases are quite different, from √d in the Metropolis-Hastings cases to d√d in the projection case. (I did not follow the bouncing option to the end, as it lacks a normalising constant.) Here is a quick and not particularly helpful comparison of the exploration patterns of both approaches in dimension 50 for the unit sphere and respective scales of 10/d√d [blue] and 1/√d [gold].
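As a crude illustration (not the authors' algorithm), the projection scheme is easiest on the unit ball, where projecting simply means rescaling to the boundary; the scale 10/d√d matches the comparison above.

```r
# projected random walk on the unit ball in dimension d: any move leaving
# the ball is projected back onto its boundary
proj_walk <- function(d = 50, T = 1e4) {
  scale <- 10 / (d * sqrt(d))
  x <- rep(0, d)
  radii <- numeric(T)
  for (t in 1:T) {
    y <- x + scale * rnorm(d)   # random walk move
    r <- sqrt(sum(y^2))
    if (r > 1) y <- y / r       # projection onto the unit sphere
    x <- y
    radii[t] <- sqrt(sum(x^2))
  }
  radii
}
set.seed(1)
radii <- proj_walk()
mean(radii > 1 - 1e-9)  # fraction of iterations spent on the boundary
```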

## R typos

Posted in Books, Kids, R, Statistics, Travel, University life on January 27, 2016 by xi'an

At MCMskv, Alexander Ly (from Amsterdam) pointed out to me some R programming mistakes I made in the introduction to Metropolis-Hastings algorithms I wrote a few months ago for the Wiley on-line encyclopedia! While the outcome (the Monte Carlo posterior) of the corrected version is only moderately changed, this is nonetheless embarrassing! The example (if not the R code) was a mixture of Poisson and Geometric distributions borrowed from our testing as mixture paper. Among other things, I used a flat prior on the mixture weights instead of a Beta(1/2,1/2) prior, and a simple log-normal random walk on the mean parameter instead of the more elaborate second-order expansion discussed in the text. And I also inverted the probabilities of success and failure for the Geometric density. The new version is now available on arXiv, and hopefully soon on the Wiley site, but one (the?) fact worth mentioning here is that the (right) corrections in the R code first led to overflows, because I was using the Beta random walk Be(εp,ε(1-p)), whose major drawback I discussed here a few months ago: nearly-zero or nearly-one values of the weight parameter produce infinite values of the density… Adding 1 (or 1/2) to each parameter of the Beta proposal solved the problem, and led to a posterior on the weight still concentrating on the correct corner of the unit interval. In any case, a big thank you to Alexander for testing the R code and spotting the several mistakes…
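The degeneracy can be reproduced in a couple of lines (a sketch, with eps and p arbitrary illustrative values):

```r
# the Be(eps*p, eps*(1-p)) random walk degenerates when the current weight p
# sits near zero: the first shape parameter falls far below 1 and the
# proposal density blows up at zero
eps <- 100
p <- 1e-4
dbeta(0, eps * p, eps * (1 - p))          # Inf
# adding 1 to each parameter keeps the density bounded at the boundary
dbeta(0, 1 + eps * p, 1 + eps * (1 - p))  # 0
```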

## independent Metropolis-Hastings

Posted in Books, Statistics on November 24, 2015 by xi'an

“In this paper we have demonstrated the potential benefits, both theoretical and practical, of the independence sampler over the random walk Metropolis algorithm.”

Peter Neal and Tsun Man Clement Lee arXived a paper on optimising the independent Metropolis-Hastings algorithm. I was a bit surprised at this “return” of the independence sampler, which I hardly mention in my lectures, so I had a look at the paper. The goal is to produce an equivalent to what Roberts, Gelman and Gilks (1997) obtained for random walk samplers. In the formal setting where the target is a product of n identical densities f, the optimal number k of components to update in one Metropolis-Hastings (within Gibbs) round is approximately 2.835/I, where I is the symmetrised Kullback-Leibler divergence between the (univariate) target f and the independent proposal q, provided I is finite. The most surprising part is that the optimal acceptance rate is again 0.234, as in the random walk case. This is surprising in that I usually associate the independent Metropolis-Hastings algorithm with high acceptance rates, but this is of course when calibrating the proposal q, not the block size k of the Gibbs part. Hence, while this calibration of the independent Metropolis-within-Gibbs sampler is worth the study and almost automatically applicable, it only applies to the category of problems where blocking can take place, as in the disease models illustrating the paper. And it requires an adequate choice of proposal distribution, for otherwise the above quote becomes inappropriate.
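For the record, a minimal independent Metropolis-Hastings sampler looks as follows; this is a sketch with a standard normal target and a Student t(3) proposal chosen for illustration, not the authors' blocked version.

```r
# independent Metropolis-Hastings: the proposal does not depend on the
# current state, and the acceptance ratio is f(y)q(x)/{f(x)q(y)}
set.seed(42)
T <- 1e4
x <- numeric(T)
acc <- 0
for (t in 2:T) {
  y <- rt(1, df = 3)   # proposal, independent of x[t-1]
  logr <- dnorm(y, log = TRUE) - dt(y, df = 3, log = TRUE) -
          dnorm(x[t - 1], log = TRUE) + dt(x[t - 1], df = 3, log = TRUE)
  if (log(runif(1)) < logr) {
    x[t] <- y
    acc <- acc + 1
  } else x[t] <- x[t - 1]
}
acc / (T - 1)  # acceptance rate, high since the t(3) proposal covers f well
```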

## Non-reversible Markov Chains for Monte Carlo sampling

Posted in pictures, Statistics, Travel, University life on September 24, 2015 by xi'an

This “week in Warwick” was not chosen at random as I was aware there is a workshop on non-reversible MCMC going on. (Even though CRiSM sponsored so many workshops in September that almost any week would have worked for the above sentence!) It has always been kind of a mystery to me that non-reversibility could make a massive difference in practice, even though I am quite aware that it does. And I can grasp some of the theoretical arguments why it does. So it was quite rewarding to sit in this Warwick amphitheatre and learn about overdamped Langevin algorithms and other non-reversible diffusions, to see results where convergence times moved from n to √n, and to grasp some of the appeal of lifting albeit in finite state spaces. Plus, the cartoon presentation of Hamiltonian Monte Carlo by Michael Betancourt was a great moment, not only because of the satellite bursting into flames on the screen but also because it gave a very welcome intuition about why reversibility was inefficient and HMC appealing. So I am grateful to my two colleagues, Joris Bierkens and Gareth Roberts, for organising this exciting workshop, with a most profitable scheduling favouring long and few talks. My next visit to Warwick will also coincide with a workshop on intractable likelihood, next November. This time part of the new Alan Turing Institute programme.