Archive for the R Category

new kids on the block

Posted in Kids, R, Statistics, University life on September 22, 2014 by xi'an

This summer, for the first time, I took three Dauphine undergraduate students into research projects, thinking they had had enough R training (with me!) and several stats classes to undertake such projects. In all cases, the concept was pre-defined and “all they had to do” was to run a massive flow of simulations in R (or whatever language suited them best!) to check whether or not the idea was sound. Unfortunately, for two of the projects, by the end of the summer we had not made any progress in any of the directions I wanted to explore, despite a fairly regular round of meetings and emails with those students. In one case the student had not even managed to reproduce the (fairly innocuous) method I wanted to improve upon. In the other case, despite programming inputs from me, the outcome was impossible to trust. A mostly failed experiment, which makes me wonder why it went that way. Granted, those students had no earlier training in research, either in exploiting the literature or in pushing experiments towards logical extensions. But I gave them entry points, discussed those possible new pathways with them, and kept updating schedules and work-charts. And the students were volunteers with no other incentive than discovering research (I even had two more candidates in the queue). So it may be (based on this sample of 3!) that our local training system is lacking in this respect, somewhat failing to promote critical thinking and innovation by imposing too many hours of class attendance and by evaluating students only through standard formalised tests. I do wonder, as I regularly see undergraduate internships and seminars [abroad] advertised in the stats journals, or even at conferences.

STEM forums

Posted in Books, R, Statistics, University life on August 15, 2014 by xi'an

“I can calculate the movement of stars, but not the madness of men.” Isaac Newton

When visiting the exhibition hall at JSM 2014, I spoke with people from STEM forums at the Springer booth. (Why STEM? Nothing to do with STAN! Nor directly with biology: it is the acronym for Science, Technology, Engineering, and Mathematics.) The concept is to create a sort of peer-reviewed Cross Validated where questions would be filtered, in order to avoid the most basic questions like “How can I learn about Bayesian statistics without opening a book?” or “What is the Binomial distribution?” that often clutter the Stack Exchange boards. That's an interesting approach which I will monitor in the future: on the one hand, it would be nice to have a Statistics forum without “lazy undergraduate” questions, as one of my interlocutors put it; on the other hand, it remains to be seen how STEM forums can compete with the well-established Cross Validated and its core of dedicated moderators and editors. I left the booth with a neat tee-shirt exhibiting the above quote, as well as “alpha-tester” on the back: STEM forums is indeed calling for entries into its Statistics section, with rewards of ebooks for the first 250 entries and a sweepstakes offering a free trip to Seattle next year!

new laptop with ubuntu 14.04

Posted in Linux, R, Statistics, University life on August 14, 2014 by xi'an

As I was getting worried about the chances of survival of my current laptop (bought in an emergency upon my return from Kyoto!), I decided to use some available grant money to buy a new laptop without stepping through the emergency square again. Thanks to my local computer engineer, Thomas, I found a local dealer selling light laptops with Ubuntu 14.04 already installed… and qwerty (UK) keyboards. Even though the previous move to Kubuntu 12.04 had been seamless, a failed attempt to switch a Mac to Ubuntu a few months later left me wary about buying a computer first and testing later whether or not it was truly Linux-compatible. I am therefore quite happy with the switch and grateful to Thomas for the suggestion. I managed to re-compile my current papers and to run my current R codes, plus connect by wireless and read photos from my camera, hence validating the basic operations I primarily require from a computer! And I reinstalled KDE. (I am still having difficulties with the size of the fonts in Firefox, though, which do not seem consistent from one tab to the next.) Enough to sacrifice a new sticker to cover the brand on its cover….

Bangalore workshop [ಬೆಂಗಳೂರು ಕಾರ್ಯಾಗಾರ] and new book

Posted in Books, pictures, R, Statistics, Travel, University life on August 13, 2014 by xi'an

On the last day of the IFCAM workshop in Bangalore, Marc Lavielle from INRIA presented a talk on mixed effects models where he illustrated his original computer language Monolix. And mentioned that his CRC Press book on Mixed Effects Models for the Population Approach was out! (Appropriately listed as out on a 14th of July on amazon!) He actually demonstrated the abilities of Monolix live, on diabetes data provided by an earlier speaker from Kolkata, which was a perfect way to initiate a collaboration! Nice cover (which is all I have seen of the book at this stage!) that may induce candidates to write a review for CHANCE. Estimation of those mixed effects models relies on stochastic EM algorithms developed by Marc Lavielle and Éric Moulines in the 1990s, as well as on MCMC methods.

ABC model choice by random forests [guest post]

Posted in pictures, R, Statistics, University life on August 11, 2014 by xi'an

[Dennis Prangle sent me his comments on our ABC model choice by random forests paper. Here they are! I very much appreciate contributors commenting on my papers or on others', so please feel free to join in.]

This paper proposes a new approach to likelihood-free model choice based on random forest classifiers. These are fit to simulated model/data pairs and then run on the observed data to produce a predicted model. A novel “posterior predictive error rate” is proposed to quantify the degree of uncertainty placed on this prediction. Another interesting use of this is to tune the threshold of the standard ABC rejection approach, which is outperformed by random forests.
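[To fix ideas, here is a toy R sketch of that pipeline using the randomForest package; the two candidate models, the summary statistics, the priors and the sample sizes below are invented for illustration and have nothing to do with the examples of the paper.]

# toy sketch: a random forest classifier fitted to simulated model/data pairs
library(randomForest)

n <- 50                      # size of each simulated dataset
N <- 1e4                     # size of the reference table
summaries <- function(y) c(s1 = mean(y), s2 = var(y), s3 = mad(y))

# reference table: a model index and the summaries of data simulated from it
ref <- t(replicate(N, {
  m <- sample(1:2, 1)                                     # uniform prior on models
  y <- if (m == 1) rnorm(n, rnorm(1)) else rexp(n, rexp(1))
  c(model = m, summaries(y))
}))
ref <- as.data.frame(ref)
ref$model <- factor(ref$model)

# fit the classifier to the simulated pairs...
rf <- randomForest(model ~ ., data = ref)

# ...and run it on the observed summaries to get a predicted model
yobs <- rexp(n, rate = 2)                                 # pseudo-observed data
predict(rf, newdata = as.data.frame(t(summaries(yobs))))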

The paper has lots of thought-provoking new ideas and was an enjoyable read, as well as giving me the encouragement I needed to read another chapter of the indispensable Elements of Statistical Learning. However, I'm not fully convinced by the approach yet, for a few reasons which are given below along with other comments.

Alternative schemes

The paper shows that random forests outperform rejection-based ABC. I'd like to see a comparison with more efficient ABC model choice algorithms, such as that of Toni et al. (2009). I'd also like to see whether the output of random forests could be used as summary statistics within ABC, rather than as a separate inference method.

Posterior predictive error rate (PPER)

This is proposed to quantify the performance of a classifier given a particular data set. The PPER is the proportion of times the classifier’s most favoured model is incorrect for simulated model/data pairs drawn from an approximation to the posterior predictive. The approximation is produced by a standard ABC analysis.

Misclassification could be due to (a) a poor classifier or (b) uninformative data, so the PPER aggregates these two sources of uncertainty. I think it is still very desirable to have an estimate of the uncertainty due to (b) only, i.e. a posterior weight estimate. However, the PPER is useful. Firstly, end users may sometimes only care about the aggregated uncertainty. Secondly, relative PPER values for a fixed dataset are a useful measure of the uncertainty due to (a), for example when tuning the ABC threshold. Finally, one drawback of the PPER is its dependence on an ABC estimate of the posterior: how robust are the results to the details of how this is obtained?
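[Continuing the toy sketch above, a crude stand-in for the PPER could be computed as follows, approximating the ABC posterior by the k reference-table entries closest to the observed summaries; it reuses ref, rf, summaries and yobs from the earlier snippet and is only meant to illustrate the definition, not to reproduce the implementation of the paper.]

# crude PPER sketch: approximate the ABC posterior by the k nearest table entries
k <- 200
sobs <- summaries(yobs)
dst <- sqrt(colSums((t(ref[, -1]) - sobs)^2))    # unscaled Euclidean distances
near <- order(dst)[1:k]                          # ABC-accepted model/data pairs
# predict(rf) returns out-of-bag predictions, avoiding trees trained on each pair
mean(predict(rf)[near] != ref$model[near])       # proportion of misclassified pairs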

Classification

This paper illustrates an important link between ABC and machine learning classification methods: model choice can be viewed as a classification problem. There are some other links: some classifiers make good model choice summary statistics (Prangle et al. 2014) or good estimates of ABC-MCMC acceptance ratios for parameter inference problems (Pham et al. 2014). So the good performance of random forests makes them seem a generally useful tool for ABC (indeed they are used in the Pham et al. paper).

Bangalore workshop [ಬೆಂಗಳೂರು ಕಾರ್ಯಾಗಾರ]

Posted in pictures, R, Running, Statistics, Travel, University life, Wines on July 31, 2014 by xi'an

Second day at the Indo-French Centre for Applied Mathematics and the workshop. Maybe not the most exciting day in terms of talks (as I missed the first two plenary sessions by (a) oversleeping and (b) running across the campus!). However, I had a neat talk with another conference participant that led to [what I think are] interesting questions… (And a very good meal in a local restaurant, as the guest house had not booked me in for dinner!)

To wit: given a target like

\lambda \exp(-\lambda) \prod_{i=1}^n \dfrac{1-\exp(-\lambda y_i)}{\lambda}\quad (*)

the simulation of λ can be demarginalised into the simulation of

\pi (\lambda,\mathbf{z})\propto \lambda \exp(-\lambda) \prod_{i=1}^n \exp(-\lambda z_i) \mathbb{I}(z_i\le y_i)

where z is a latent (and artificial) variable. This means a Gibbs sampler simulating λ given z and z given λ can produce an outcome from the target (*). Interestingly, another completion is to consider that the zi‘s are U(0,yi) and to see the quantity

\pi(\lambda,\mathbf{z}) \propto \lambda \exp(-\lambda) \prod_{i=1}^n \exp(-\lambda z_i) \mathbb{I}(z_i\le y_i)

as an unbiased estimator of the target. What's quite intriguing is that the quantity remains the same but with different motivations: (a) demarginalisation versus unbiasedness and (b) zi ∼ Exp(λ) versus zi ∼ U(0,yi). The stationary distribution is the same, as shown by the graph below; the core distributions are [formally] the same… but the reasoning deeply differs.

[graph comparing the output of the two versions of the sampler]

Obviously, since unbiased estimators of the likelihood can be justified by auxiliary variable arguments, this is not in fine a big surprise. Still, I had not thought of the analogy between demarginalisation and unbiased likelihood estimation previously.
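Here is a minimal R sketch of the Gibbs sampler corresponding to the first interpretation, on made-up data y (the data below are only for illustration): the full conditional of each zi is an Exp(λ) distribution truncated to (0,yi), simulated by inversion, while the full conditional of λ given z is a Gamma distribution with shape 2 and rate 1+∑zi.

# toy data and a basic Gibbs sampler for the completed target above
set.seed(42)
y <- rexp(20, rate = 1.5)          # made-up observations
n <- length(y)
niter <- 1e4
lam <- numeric(niter)
lam[1] <- 1
for (t in 2:niter) {
  # z_i | lambda: Exp(lambda) truncated to (0, y_i), simulated by inversion
  u <- runif(n)
  z <- -log(1 - u * (1 - exp(-lam[t - 1] * y))) / lam[t - 1]
  # lambda | z: Gamma(shape = 2, rate = 1 + sum(z))
  lam[t] <- rgamma(1, shape = 2, rate = 1 + sum(z))
}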

another R new trick [new for me!]

Posted in Books, Kids, R, Statistics, University life on July 16, 2014 by xi'an

While working with Andrew and a student from Dauphine on importance sampling, we wanted to assess the distribution of the resulting sample via the Kolmogorov-Smirnov measure

\max_x |\hat{F}_n(x)-F(x)|

where F is the target. This distance (times √n) has an asymptotic distribution that does not depend on n, called the Kolmogorov distribution. After searching for a little while, we could not figure out where this distribution was available in R. It had to be somewhere, since ks.test returns a p-value (hopefully a correct one!). So I looked into the ks.test function, which happens not to be entirely programmed in C, and found the line

PVAL <- 1 - if (alternative == "two.sided") 
                .Call(C_pKolmogorov2x, STATISTIC, n)

which means that the Kolmogorov distribution is coded as a C function C_pKolmogorov2x in R. However, I could not call the function myself.

> .Call(C_pKolmogorov2x,.3,4)
Error: object 'C_pKolmogorov2x' not found

Hence, as I did not want to recode the cdf of this distribution, I posted the question on Stack Overflow (long time no see!) and got a reply almost immediately, suggesting the package kolmim. This was followed by an extra comment from the same person pointing out that calling the C code only requires adding the namespace path to its name, as in

> .Call(stats:::C_pKolmogorov2x,STAT=.3,n=4)
[1] 0.2292
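Putting the pieces together, here is a small check on simulated data, assuming a standard normal target F and assuming the internal entry point stats:::C_pKolmogorov2x is still available under that name (it may move in later versions of R):

set.seed(1)
x <- rnorm(100)                          # sample to be tested against F = pnorm
n <- length(x)
Fx <- pnorm(sort(x))                     # target cdf at the order statistics
Dn <- max(pmax(seq_len(n)/n - Fx, Fx - (seq_len(n) - 1)/n))
c(Dn, ks.test(x, "pnorm")$statistic)     # the two versions of the KS statistic agree
1 - .Call(stats:::C_pKolmogorov2x, Dn, n)   # exact two-sided p-value, as inside ks.test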