Archive for system.time

quantile functions: mileage may vary

Posted in Books, R, Statistics on May 12, 2015 by xi'an

When experimenting with various quantile functions in R, I was shocked [ok this is a bit excessive, let us say surprised] by how widely the execution times vary, to the point of initially blaming a completely different feature of R. Borrowing from Charlie Geyer's webpage on the topic of probability distributions in R, here is a table for some standard distributions: I ran

u=runif(1e7)
system.time(x<-qcauchy(u))

choosing an arbitrary parameter whenever needed.
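In full, the table below can be reproduced with a loop of this kind (a sketch of mine, with arbitrary parameter choices where a parameter is required):

u=runif(1e7)
qfuns=list(qcauchy=function(u) qcauchy(u),
           qchisq =function(u) qchisq(u,df=3),
           qgamma =function(u) qgamma(u,shape=1.5),
           qnorm  =function(u) qnorm(u))
sapply(qfuns, function(f) system.time(f(u))[3])  # elapsed times in seconds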

Distribution   Function   Time (s)
Cauchy         qcauchy        2.2
Chi-Square     qchisq        43.8
Exponential    qexp           0.95
F              qf            34.2
Gamma          qgamma        37.2
Logistic       qlogis         1.7
Log Normal     qlnorm         2.2
Normal         qnorm          1.4
Student t      qt            31.7
Uniform        qunif          0.86
Weibull        qweibull       2.9

Of course, it does not mean much, in that all the slow distributions (except for the Weibull) are parameterised. Nonetheless, that a chi-square inversion takes about 50 times longer than a uniform inversion remains puzzling: why is it not coded more efficiently? In particular, I was wondering why the chi-square inversion was slower than the Gamma inversion. Rerunning both inversions showed that they are in fact equivalent:

> u=runif(1e7)
> system.time(x<-qgamma(u,sha=1.5))
   user  system elapsed
 21.534   0.016  21.532
> system.time(x<-qchisq(u,df=3))
   user  system elapsed
 21.372   0.008  21.361

Which also shows how variable system.time can be.

take those hats off [from R]!

Posted in Books, Kids, R, Statistics, University life on May 5, 2015 by xi'an

[photo: from my office, La Défense & Bois de Boulogne, Paris, May 15, 2012]

This is presumably obvious to most if not all R programmers, but I became aware today of a hugely (?) delaying tactic in my R code. I was working with Jean-Michel and Natesh [who are visiting at the moment] and, when coding an MCMC run, I told them that I usually preferred to code Nsim=1000 as Nsim=10^3 for readability. Suddenly, I became worried that this representation involved an actual computation, as opposed to Nsim=1e3, and ran a little experiment:

> system.time(for (t in 1:10^8) x=10^3)
   user  system elapsed
 30.704   0.032  30.717
> system.time(for (t in 1:1e8) x=10^3)
   user  system elapsed
 30.338   0.040  30.359
> system.time(for (t in 1:10^8) x=1000)
   user  system elapsed
  6.548   0.084   6.631
> system.time(for (t in 1:1e8) x=1000)
   user  system elapsed
  6.088   0.032   6.115
> system.time(for (t in 1:10^8) x=1e3)
   user  system elapsed
  6.134   0.029   6.157
> system.time(for (t in 1:1e8) x=1e3)
   user  system elapsed
  6.627   0.032   6.654
> system.time(for (t in 1:10^8) x=exp(3*log(10)))
   user  system elapsed
 60.571   0.000  57.103

So using the usual scientific notation with powers takes its toll, while the calculator notation with e is cost-free… Weird!

I understand that the R notation 10^6 is an abbreviation for a call to the power function, one that can equally be applied to pi^pi, say, but I still feel aggrieved that a nice scientific notation like 10⁶ ends up as a computing trap! I thus asked the question on the Stack Overflow forum, getting the (predictable) answer that the R code 10^6 means calling the R power function, while 1e6 is a constant. Since, as an expression, 10^6 does not differ from pi^pi, there is no reason 10^6 should be recognised by R as a million. Except that it would make my coding more coherent.
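A quick way to see the difference in the parser itself (a sketch of mine, not from the Stack Overflow answer):

e1=quote(10^6)  # parsed as a call to the power function
e2=quote(1e6)   # parsed as a numeric constant
is.call(e1)     # TRUE: evaluating it means calling ^ every time
is.call(e2)     # FALSE: nothing left to evaluate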

> system.time( for (t in 1:10^8) x=pi^pi)
   user  system elapsed
 44.518   0.000  43.179
> system.time( for (t in 1:10^8) x=10^6)
   user  system elapsed
 38.336   0.000  37.860

Another thing I discovered from this answer to my question is that negative integers also require a function call (to the unary minus):

> system.time( for (t in 1:10^8) x=1)
   user  system elapsed
 10.561   0.801  11.062
> system.time( for (t in 1:10^8) x=-1)
   user  system elapsed
 22.711   0.860  23.098

This sounds even weirder.
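The same parser check as above confirms it (again a sketch of mine):

e=quote(-1)  # parsed as the unary minus applied to 1, not as a constant
is.call(e)   # TRUE
as.list(e)   # the function `-` and the constant 1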

scale acceleration

Posted in pictures, R, Statistics, Travel, University life on April 24, 2015 by xi'an

Kate Lee pointed me to a rather surprising inefficiency in Matlab, exploited in Sylvia Frühwirth-Schnatter's bayesf package: running a Gamma simulation by rgamma(n,a,b) takes longer, and sometimes much longer, than rgamma(n,a,1)/b, the latter taking advantage of the scale nature of b. I wanted to check on my own whether or not R faced the same difficulty, so I ran an experiment [while stuck in a Thalys train at Brussels, between Amsterdam and Paris…] using different values of a [click on the graph] and a range of values of b. To no visible difference between the two implementations, at least when using system.time for checking.

a=seq(.1,4,le=25)                # grid of rate values
for (t in 1:25) a[t]=system.time(
     rgamma(10^7,.3,a[t]))[3]    # elapsed time, overwriting the grid entry
a=a/system.time(rgamma(10^7,.3,1))[3]  # relative to the rate-1 baseline
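In its simplest form, the comparison described above boils down to the following check (a sketch, with arbitrary parameter values):

n=1e7; a=.3; b=2.5                  # arbitrary choices
system.time(x1<-rgamma(n,a,b))      # native rate argument
system.time(x2<-rgamma(n,a,1)/b)    # simulate at rate 1, then rescale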

Once I arrived home, I wondered about the relevance of the above comparison, since rgamma(10^7,.3,1) forces R to use 1 as a scale, which may differ from using rgamma(10^7,.3), where 1 is known to be the scale [does this sentence make sense?!]. So I reran an even bigger experiment as

a=seq(.1,4,le=25)
for (t in 1:25) a[t]=system.time(
     rgamma(10^8,.3,a[t]))[3]
a=a/system.time(rgamma(10^8,.3))[3]

and got the graph below, which is much more interesting because it shows that some values of a lead to a loss of efficiency of 50%. Indeed. (The most extreme cases correspond to a=0.3, 1.1, 5.8. No clear pattern emerging.)

Update

As pointed out by Martyn Plummer in his comment, the C function behind R's rgamma function and Gamma generator does take the scale nature of the second parameter into account, so the above time differences are not due to this function but rather to whatever else my computer was running at the same time…! Apologies to anyone I scared with this void warning!

R/Rmetrics in Paris [alas!]

Posted in Mountains, pictures, R, Statistics, Travel, University life on June 30, 2014 by xi'an

Today I gave a talk on Bayesian model choice in a fabulous 13th-century former monastery in the Latin Quarter of Paris… It is the Collège des Bernardins, close to Jussieu and the Collège de France, so unbelievably hidden that I was not aware of its existence despite having studied and worked in Jussieu since 1982… I mixed my earlier San Antonio survey on importance sampling approximations to Bayes factors with an entry to our most recent work on ABC with random forests. This was the first talk of the 8th R/Rmetrics workshop taking place in Paris this year. (Rmetrics aims at aggregating R packages with econometrics and finance applications.) And I had a full hour and a half to deliver my lecture to the workshop audience. Nice place, nice people, new faces and topics (and even andouille de Vire for lunch!): why should I complain with an alas in the title?!

What happened is that the R/Rmetrics meetings had until this year been organised in Meielisalp, Switzerland, which stands on top of the Thuner See and… just next to the most famous peaks of the Bernese Alps! And I had been invited last year but could not make it… meaning I lost a genuine opportunity to climb one of my five dream routes, the Mittelegi ridge of the Eiger, as the future R/Rmetrics meetings will not take place there.

A lunch discussion at the workshop led me to experiment with the compiler library in R, a library I was unaware of. The impact on the running time is obvious: recycling the fowler function from the last Le Monde puzzle,

> library(compiler)
> bowler=cmpfun(fowler)
> N=20;n=10;system.time(fowler(pred=N))
   user  system elapsed 
 52.647   0.076  56.332 
> N=20;n=10;system.time(bowler(pred=N))
   user  system elapsed 
 51.631   0.004  51.768 
> N=20;n=15;system.time(bowler(pred=N))
   user  system elapsed 
 51.924   0.024  52.429 
> N=20;n=15;system.time(fowler(pred=N))
   user  system elapsed 
 52.919   0.200  61.960 

shows a ten- to twenty-fold gain in system time, if not in elapsed time (re-alas!).
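Since fowler itself is not reproduced here, a self-contained illustration of cmpfun on a toy function may help (the function below is entirely hypothetical, and note that recent R versions JIT-compile functions by default, which shrinks the gap):

library(compiler)
slowsum=function(n){ s=0; for (i in 1:n) s=s+i; s }
fastsum=cmpfun(slowsum)       # byte-compiled version of the same function
system.time(slowsum(1e7))
system.time(fastsum(1e7))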

optimising accept-reject

Posted in R, Statistics, University life on November 21, 2012 by xi'an

I spotted on R-bloggers a post discussing how to optimise the efficiency of accept-reject algorithms. While it is about SAS programming, and apparently supported by the SAS company, there are two interesting features to this discussion. The first one is about avoiding the dreaded loop in accept-reject algorithms. For instance, taking the case of the truncated-at-one Poisson distribution (i.e., conditioned to be positive), the code

rtpois=function(n,lambda){
  sampl=c()                      # grows one accepted draw at a time
  while (length(sampl)<n){
    x=rpois(1,lambda)
    if (x!=0) sampl=c(sampl,x)}  # reject the zeros
  return(sampl)
  }

is favoured by my R course students but highly inefficient:

> system.time(rtpois(10^5,.5))
   user  system elapsed
61.600  27.781  98.727

both for the stepwise increase in the size of the vector and for the loop itself. For instance, defining the vector sampl first brings the cost down to a tenth of the above time per draw (note the switch from 10⁵ to 10⁶ samples, for roughly the same total time):

> system.time(rtpois(10^6,.5))
   user  system elapsed
 54.155   0.200  62.635
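For reference, a minimal sketch of that preallocated variant (my reconstruction, not the code from the post):

rtpois2=function(n,lambda){
  sampl=numeric(n)   # allocate the whole vector once
  i=0
  while (i<n){
    x=rpois(1,lambda)
    if (x!=0){ i=i+1; sampl[i]=x }}  # fill in accepted draws in place
  return(sampl)
  }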

As discussed by the author of the post, a more efficient program should aim at avoiding the loop by predicting the number of proposals necessary to accept a given number of values. Since the bound M used in accept-reject algorithms is also the expected number of attempts for one acceptance, one should start with something around Mn proposed values. (Assuming of course all densities are properly normalised.) For instance, in the case of the truncated-at-one Poisson distribution based on proposals from the regular Poisson, the bound is 1/(1-e^{-λ}). A first implementation of this principle is to build the sample via a few recursive rounds:

rtpois=function(n,lambda){
  propal=rpois(ceiling(n/(1-exp(-lambda))),lambda)  # about Mn proposals
  propal=propal[propal>0]                           # reject the zeros
  n0=length(propal)
  if (n0>=n)
    return(propal[1:n])
  else return(c(propal,rtpois(n-n0,lambda)))        # top up recursively
}

with a higher efficiency:

> system.time(rtpois(10^6,.5))
   user  system elapsed
  0.816   0.092   0.928

Replacing the expectation with an upper bound using the variance of the negative binomial distribution does not make a significant dent in the computing time

rtpois=function(n,lambda){
  M=1/(1-exp(-lambda))
  propal=rpois(ceiling(M*(n+2*sqrt(n/M)/(M-1))),lambda)
  propal=propal[propal>0]
  n0=length(propal)
  if (n0>=n)
    return(propal[1:n])
  else return(c(propal,rtpois(n-n0,lambda)))}

since we get

> system.time(rtpois(10^6,.5))
   user  system elapsed
  0.824   0.048   0.877

The second point about this Poisson example is that simulating a distribution with a restricted support using another distribution with a larger support is quite inefficient, especially when λ goes to zero. By comparison, using a Poisson proposal with parameter μ and translating it by 1 may bring a considerable improvement: without getting into the gory details, it can be shown that the optimal value of μ (in terms of maximal acceptance probability) is λ and that the corresponding probability of acceptance is

(1 - exp(-λ))/λ

which is larger than the acceptance probability of the original approach when λ is less than one. As shown by the graph below, this allows for a lower bound on the probability of acceptance that remains tolerable.
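A minimal sketch of this translated proposal (my own reconstruction, not the post's code): taking μ=λ, a quick computation of the accept-reject ratio against the bound M=λ/(1-exp(-λ)) shows that a proposed value y is accepted with probability 1/y.

rtpois3=function(n,lambda){
  out=c()
  while (length(out)<n){
    m=n-length(out)
    y=1+rpois(m,lambda)          # translated Poisson proposal
    out=c(out,y[runif(m)<1/y])}  # accept y with probability 1/y
  return(out[1:n])
  }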