Archive for R

Bayesian Essentials with R [book review]

Posted in Books, R, Statistics, University life on July 28, 2016 by xi'an

[A review of Bayesian Essentials that appeared in Technometrics two weeks ago, with the first author being rechristened Jean-Michael!]

“Overall this book is a very helpful and useful introduction to Bayesian methods of data analysis. I found the use of R, the code in the book, and the companion R package, bayess, to be helpful to those who want to begin using Bayesian methods in data analysis. One topic that I would like to see added is the use of Bayesian methods in change point problems, a topic that we found useful in a recent article and which could be added to the time series chapter. Overall this is a solid book and well worth considering by its intended audience.”
David E. BOOTH
Kent State University

Extending R

Posted in Books, Kids, R, Statistics on July 13, 2016 by xi'an

As I was previously unaware of this book coming out, my surprise and excitement were both extreme when I received it from CRC Press a few weeks ago! John Chambers, one of the fathers of S, the precursor of R, had just published a book about extending R. It covers some reflections of the author on programming and the story of R (Parts 2 and 1), and then focuses on object-oriented programming (Part 3) and the interfaces from R to other languages (Part 4). While this is “only” a programming book, and thus not strictly appealing to statisticians, reading one of the original actors’ thoughts on the past, present, and future of R is simply fantastic!!! And John Chambers is definitely not calling to simply start over and build something better, as Ross Ihaka did in this [most read] post a few years ago. (It is also great to see the names of friends appearing at times, like Julie, Luke, and Duncan!)

“I wrote most of the original software for S3 methods, which were useful for their application, in the early 1990s.”

In the (hi)story part, Chambers delves into the details of the evolution of S at Bell Labs, as described in his [first] “blue book” (which I kept on my shelf until very recently, next to the “white book”!) and of the occurrence of R in the mid-1990s. I find those sections fascinating, maybe all the more because I am somewhat of a contemporary, having first learned Fortran (and Pascal) in the mid-1980s, before moving in the early 1990s to C (that I mostly coded as translated Pascal!), S-plus, and eventually R, in conjunction with a (forced) migration from Unix to Linux, as my local computer managers abandoned Unix and mainframes in favour of some virtual Windows machines. And I then started running R on laptops with the help of friends more skilled than I (again keeping some of the early R manuals on my shelf until recently). Maybe one of the most surprising things about those reminiscences is that the very first version of R was dated Feb 29, 2000! Not because of the date itself (which, as Chambers points out, is the first use of the third-order correction to the Gregorian calendar, although I would have thought 1600 was the first one), but because I would have thought R appeared earlier, in conjunction with my first Linux laptop, but this memory is alas getting too vague!

As indicated above, the book is mostly about programming, which means in my case that some sections are definitely beyond my reach! For instance, reading “the onus is on the person writing the calling function to avoid using a reference object as the argument to an existing function that expects a named list” is not immediately clear… Nonetheless, most sections are readable [at my level] and enlightening about the mottoes “everything that exists is an object” and “everything that happens is a function” repeated throughout. (And about my psycho-rigid ways of translating Pascal into every other language!) I obviously learned about new commands and notions, like the difference between

x <- 3

and

x <<- 3

(but I was disappointed to learn that the number of <‘s was not related to the depth or height of the allocation!) In particular, I found the part about replacement functions fascinating, explaining how a command like

diag(x)[i] = 3

could modify x directly. (While definitely worth reading, the chapter on R packages could have benefited from more details. But as Chambers points out, there are whole books about this.) Overall, I am afraid the book will not improve my (limited) way of programming in R, but I definitely recommend it to anyone even moderately skilled in the language.
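As a concrete aside (a minimal sketch of my own, not code from the book), the contrast between the two assignment operators and the replacement mechanism can be seen as follows:

x <- 3
f <- function(){
  x <- 10    # <- assigns to a local x, the outer x is untouched
  x}
g <- function(){
  x <<- 10   # <<- assigns in the enclosing (here global) environment
  x}
f(); x       # returns 10, but x is still 3
g(); x       # returns 10, and x is now 10
# replacement: diag(x)[i] <- 3 expands into calls to `[<-` and `diag<-`
# that build a modified copy and reassign it to the name x
x <- diag(4)
diag(x)[2] <- 3
x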

automatic variational ABC

Posted in pictures, Statistics on July 8, 2016 by xi'an

“Stochastic Variational inference is an appealing alternative to the inefficient sampling approaches commonly used in ABC.”

Moreno et al. [including Ted Meeds and Max Welling] recently arXived a paper merging variational inference and ABC. The argument for turning variational is computational speedup. The traditional (in variational inference) divergence decomposition of the log-marginal likelihood is replaced by an ABC version, parameterised in terms of intrinsic generators (i.e., generators that do not depend on hyper-parameters, like the U(0,1) or the N(0,1) generators), or simulation code in the authors’ terms. Which leads to the automatic aspect of the approach: in the paper, the derivation of the gradient is indeed automated.

“One issue is that even assuming that the ABC likelihood is an unbiased estimator of the true likelihood (which it is not), taking the log introduces a bias, so that we now have a biased estimate of the lower bound and thus biased gradients.”

I wonder how much of an issue this is, since we consider the variational lower bound, to be optimised in terms of the parameters of the variational posterior. Indeed, the endpoint of the analysis is to provide an optimal variational approximation, which remains an approximation whether or not the likelihood estimator is unbiased. A more “severe” limitation may be in the inversion constraint, since it seems to eliminate Beta or Gamma distributions. (Even though calling qbeta(runif(1),a,b) definitely is achievable… and not rejected by a Kolmogorov-Smirnov test.)
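As a quick check of that last aside (my own code, not the paper's), inversion via qbeta does produce genuine Beta draws:

set.seed(101)
a <- 2.3; b <- 4.1
x <- qbeta(runif(1e4), a, b)   # Beta(a,b) draws by inversion of the cdf
ks.test(x, "pbeta", a, b)      # should not reject the Beta(a,b) null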

Incidentally, I discovered through the paper the existence of the Kumaraswamy distribution, whose main appeal seems to be the ability to produce a closed-form quantile function, while bearing some resemblance to the Beta distribution. (Another arXival by Baltasar Trancón y Widemann studies some connections between the two, but does not tell how to select the parameters to optimise the similarity.)
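For the record, the Kumaraswamy cdf is F(x) = 1 - (1 - x^a)^b on (0,1), hence the closed-form quantile function; a small sketch with arbitrarily chosen parameters:

qkumar <- function(u, a, b) (1 - (1 - u)^(1/b))^(1/a)  # closed-form quantile
set.seed(101)
x <- qkumar(runif(1e4), a = 2, b = 5)  # inversion-based Kumaraswamy draws
hist(x, breaks = 50, freq = FALSE)     # visually close to a Beta-like shape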

ABC random forests for Bayesian parameter inference [version 2.0]

Posted in Books, Kids, pictures, Statistics, Travel, University life, Wines on June 30, 2016 by xi'an

Just mentioning that a second version of our paper has been arXived and submitted to JMLR, the main input being the inclusion of a reference to the abcrf package. And just repeating our best selling arguments that (i) forests do not require a preliminary selection of the summary statistics, since an arbitrary number of summaries can be used as input for the random forest, even when including a large number of useless white-noise variables; (ii) there is no longer a tolerance level involved in the process, since the many trees in the random forest define a natural if rudimentary distance that corresponds to being or not being in the same leaf as the observed vector of summary statistics η(y); and (iii) the size of the reference table simulated from the prior (predictive) distribution does not need to be as large as in usual ABC settings, hence this approach leads to significant gains in computing time since the production of the reference table usually is the costly part! To the point that deriving a different forest for each univariate transform of interest is truly a minor drag in the overall computing cost of the approach.
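To illustrate point (i) on a toy example of my own (using the generic randomForest package rather than abcrf), a regression forest trained on a reference table that mixes informative summaries with pure white noise still produces a sensible approximation of the posterior expectation:

library(randomForest)
set.seed(1)
N <- 2e3
theta <- rnorm(N, 0, 2)                        # draws from the prior
y <- matrix(rnorm(N*50, theta, 1), N, 50)      # one pseudo-sample per draw
noise <- matrix(rnorm(N*10), N, 10, dimnames=list(NULL, paste0("n", 1:10)))
ref <- data.frame(theta=theta, s1=rowMeans(y), s2=apply(y, 1, var), noise)
rf <- randomForest(theta ~ ., data=ref, ntree=200)
obs <- rnorm(50, 1.5, 1)                       # the "observed" sample
eta <- data.frame(s1=mean(obs), s2=var(obs),
                  matrix(rnorm(10), 1, 10, dimnames=list(NULL, paste0("n", 1:10))))
predict(rf, eta)                               # approximates E[θ|η(y)]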

Le Monde puzzle [#965]

Posted in Kids, R on June 14, 2016 by xi'an

A game-related Le Monde mathematical puzzle:

Starting with a pile of 10⁴ tokens, Bob plays the following game: at each round, he picks one of the existing piles with at least 3 tokens, takes away one of the tokens in this pile, and separates the remaining ones into two non-empty piles of arbitrary size. Bob stops when all piles have identical size. What is this size and what is the maximal number of piles?

First, Bob can easily reach a decomposition that prevents all piles from having the same size: for instance, he can end up with both a pile of 1 and a pile of 2, which can no longer be split. From a more general perspective, an odd number of tokens, n=2k+1, can be partitioned into (1,1,2k-1), which means that the decomposition (1,1,…,1) involving k+1 ones can always be achieved. For an even number, n=2k, this is not feasible. If the number 2k can be partitioned into piles of equal size u, this means that the sequence 2k-(u+1), 2k-2(u+1), … ends up with u (each final pile of size u costs u+1 tokens, namely the u kept plus the one removed), hence that there exists m such that 2k-m(u+1)=u, i.e., that 2k+1 is a multiple of (u+1). Therefore, the smallest achievable common size is the smallest factor of 2k+1, minus one. For 2k=10⁴, this value is equal to 72, while it is 6 for 10³. The decomposition is impossible for 2k=100, since 101 is prime. Here are the R functions used to check this analysis (with small integers, if not 10⁴):

solvant <- function(piles){
 # stop when there are at least two piles and either no pile can be split
 # any further (max is 2) or all piles have the same size
 if ((length(piles)>1)&&((max(piles)==2)||(min(piles)==max(piles)))){
  return(piles)}else{
   # pick a pile with at least 3 tokens, favouring the larger ones
   i=sample(rep(1:length(piles),2),1,prob=rep(piles-min(piles)+.1,2))
   while (piles[i]<3)
    i=sample(rep(1:length(piles),2),1,prob=rep(piles-min(piles)+.1,2))
   # remove one token and split the remaining piles[i]-1 into two non-empty piles
   split=sample(rep(2:(piles[i]-1),2),1,
        prob=rep(abs(2:(piles[i]-1)-piles[i]/2)+.1,2))
   piles=c(piles[-i],split-1,piles[i]-split)
   solvant(piles)}}

disolvant <- function(piles){
 # rerun the random splitting until a configuration with equal piles is found
 # (warning: loops forever when no such decomposition exists, e.g. 100 tokens)
 sol=solvant(piles)
 while (min(sol)<max(sol))
  sol=solvant(piles)
 return(sol)}

resolvant <- function(piles){
 # repeat the search, keeping the solution with the largest number of piles
 # (piles is here the initial number of tokens, also used as the number of trials)
 sol=disolvant(piles)
 lasol=sol;maxle=length(sol)
 for (t in 1:piles){
  sol=disolvant(piles)
  if (length(sol)>maxle){
   lasol=sol;maxle=length(sol)}}
 return(lasol)}
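As a sanity check of the divisor argument above (my own addition, not part of the original code), the final pile size u must satisfy (u+1) | (n+1), so the smallest feasible size can be read off the divisors of n+1:

pilesize <- function(n){
 # smallest u such that (u+1) divides (n+1); the number of piles is (n+1)/(u+1)
 divs=(2:(n+1))[(n+1)%%(2:(n+1))==0]
 min(divs)-1}
pilesize(1e3) # 6
pilesize(1e4) # 72
pilesize(100) # 100, i.e. no split at all, since 101 is prime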

the new version of abcrf

Posted in pictures, R, Statistics, University life on June 7, 2016 by xi'an

A new version of the R package abcrf was posted on Friday by Jean-Michel Marin, in conjunction with the recent arXival of our paper on point estimation via ABC and random forests. The new R functions supplement the existing ones towards implementing ABC point estimation:

  1. covRegAbcrf, which predicts the posterior covariance between two response variables (parameters), given a new dataset of summaries;
  2. plot.regAbcrf, which provides a variable importance plot;
  3. predict.regAbcrf, which predicts the posterior expectation, median, variance, and quantiles for a given parameter and a new dataset;
  4. regAbcrf, which produces a regression random forest from a reference table aimed at predicting the posterior expectation, variance, and quantiles of a parameter, as in the sketch below;
  5. snp, a simulated example in population genetics used as reference table in our Bioinformatics paper.
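For illustration, here is a rough usage sketch on a toy reference table; the call structure (formula and data for regAbcrf, obs and training for predict) is my reading of the package and should be checked against its documentation:

library(abcrf)
set.seed(1)
N <- 1e4
theta <- rnorm(N, 0, 2)                           # prior draws
reftable <- data.frame(theta = theta,
                       s1 = rnorm(N, theta, .2),  # informative summary
                       s2 = rnorm(N))             # pure-noise summary
forest <- regAbcrf(theta ~ ., data = reftable, ntree = 500)
plot(forest)                                      # variable importance plot
obs <- data.frame(s1 = 1.2, s2 = 0.3)             # observed summaries
predict(forest, obs = obs, training = reftable)   # posterior mean, median, quantiles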

Unfortunately, we could not directly produce a diyabc2abcrf function for translating a regular DIYABC output into a proper abcrf format, since the translation has to occur within DIYABC itself. And even this is not a straightforward move (to be corrected in the next version of DIYABC).

Le Monde puzzle [#964]

Posted in Books, Kids, R on June 2, 2016 by xi'an

A not so enticing Le Monde mathematical puzzle:

Find the minimal value of a five-digit number divided by the sum of its digits.

This can be formalised as finding the minimum of N/(a+b+c+d+e) when N has digits abcde, and solved by brute force. Using a rough approach to extracting the digits of a five-digit number, the question can easily be solved as

pris=1e6
for (i in 1e4:(1e5-1)){
 # digits of i, from the leading digit down to the units
 pres=i/sum((i %% 10^{5:1}) %/% 10^{4:0})
 if (pres<pris){
  pris=pres;sol=i}}

which returns N=10999 as its solution. (The solution for six digits is 109999.) The mathematical solution as provided in the newspaper certainly sounded more exciting.
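For what it is worth, an equivalent vectorised version of the brute-force search (my own rewrite of the loop above):

N <- 10^4:(10^5 - 1)
digitsum <- rowSums(outer(N, 10^(4:0), function(n, p) (n %/% p) %% 10))
N[which.min(N / digitsum)]   # returns 10999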
