**M**att Asher posted an R experiment on R-bloggers yesterday simulating the random walk

$$x_{t+1} = x_t + \epsilon_{t+1}/x_t, \qquad \epsilon_{t+1} \sim \mathcal{N}(0,1),$$

which has the property of avoiding zero by quickly switching to a large value as soon as $x_t$ is small. He was then wondering about the “convergence” of the random walk, given that it moves very little once $x_t$ is large enough. The values he found for various horizons *t* seemed to indicate a stable regime.
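To make the zero-avoidance concrete, here is a minimal single-path sketch of the recursion $x_{t+1} = x_t + \epsilon_{t+1}/x_t$ (the seed and horizon are arbitrary choices of mine, not taken from Matt's post):

```r
# One path of x_{t+1} = x_t + eps_{t+1}/x_t: whenever |x_t| is small,
# the increment rnorm(1)/x_t is typically huge, so the chain jumps
# away from zero in a single step
set.seed(1)            # arbitrary seed, for reproducibility
n <- 1000              # arbitrary horizon
x <- numeric(n)
x[1] <- rnorm(1)
for (t in 2:n) x[t] <- x[t-1] + rnorm(1)/x[t-1]
range(abs(x))          # the path never settles near zero
```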

**I** reran the same experiment as Matt from a Monte Carlo perspective, using the following R program:

```r
resu=matrix(0,ncol=100,nrow=25)
sampl=rnorm(100)
for (i in 1:25){
  for (t in 2^(i-1):2^i)
    sampl=sampl+rnorm(100)/sampl
  resu[i,]=sampl
}
boxplot(as.data.frame(t(abs(resu))),names=as.character(1:25),col="wheat3")
```
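As a sanity check, a reduced, self-contained rerun (my own seed and smaller horizons than the $2^{25}$ steps above) already shows the average of $|x_t|$ drifting upwards:

```r
# 100 parallel chains of x_{t+1} = x_t + eps/x_t, recording the mean
# of |x_t| at the horizons t = 2^1, ..., 2^15
set.seed(123)                  # arbitrary seed
m <- 100                       # number of replications, as above
imax <- 15                     # smaller maximal horizon than the 2^25 above
sampl <- rnorm(m)
avg <- numeric(imax)
for (i in 1:imax) {
  for (t in 2^(i-1):2^i)
    sampl <- sampl + rnorm(m)/sampl
  avg[i] <- mean(abs(sampl))
}
avg                            # the average keeps growing with the horizon
```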

**T**he outcome of this R code, plotted above, shows that both the range and the average of the 100 replications increase with *t*. This behaviour indicates that the Markov chain is transient: it almost surely goes to infinity and never comes back, because the variance $1/x_t^2$ of the increment vanishes as $x_t$ grows. Another indication of transience is that $x_{t+1}$ falls back into the interval $(-1,1)$ with probability $\Phi(x_t - x_t^2) - \Phi(-x_t - x_t^2)$ (for $x_t > 0$), a probability which goes to zero with $x_t$. As suggested to me by Randal Douc, this transience can be established rigorously by considering the squared chain

$$x_{t+1}^2 = x_t^2 + 2\,\epsilon_{t+1} + \epsilon_{t+1}^2/x_t^2 \;\ge\; x_t^2 + 2\,\epsilon_{t+1},$$

which is thus bounded from below by the null recurrent random walk $x_0^2 + 2\sum_{s\le t}\epsilon_s$; since this random walk almost surely has $\limsup$ equal to $+\infty$, and since the positive terms $\epsilon_s^2/x_{s-1}^2$ keep accumulating, $x_t^2$ almost surely goes to infinity. Therefore the above Markov chain cannot have a stationary distribution or even a stationary measure: it almost surely goes to (plus or minus) infinity.
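Both transience indications can be checked numerically. The sketch below (my own seeds, starting point and horizon, none of them from the post) evaluates the one-step return probability $\Phi(x - x^2) - \Phi(-x - x^2)$ and verifies the lower bound $x_t^2 \ge x_0^2 + 2\sum_{s\le t}\epsilon_s$ along a simulated path of the squared chain:

```r
# One-step probability of returning to (-1,1) from x > 0:
# x + eps/x in (-1,1)  <=>  eps in (-x - x^2, x - x^2)
ret_prob <- function(x) pnorm(x - x^2) - pnorm(-x - x^2)
ret_prob(c(1, 2, 5))              # decays to zero very quickly in x

# Squared chain x_{t+1}^2 = x_t^2 + 2*eps_{t+1} + eps_{t+1}^2/x_t^2,
# which dominates the random walk x_0^2 + 2*cumsum(eps) pathwise
set.seed(42)                      # arbitrary seed
n <- 10000                        # arbitrary horizon
eps <- rnorm(n)
z <- numeric(n + 1)
z[1] <- 1                         # arbitrary start, x_0 = 1
for (t in 1:n) z[t + 1] <- z[t] + 2*eps[t] + eps[t]^2/z[t]
all(z[-1] >= 1 + 2*cumsum(eps))   # the domination holds along the path
```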