Archive for C

simulated annealing for Sudokus [2]

Posted in Books, pictures, R, Statistics, University life on March 17, 2012 by xi'an

On Tuesday, Eric Chi and Kenneth Lange arXived a paper comparing numerical techniques for solving sudokus. (The very Kenneth Lange who wrote this fantastic book on numerical analysis.) One of these techniques is the simulated annealing approach I had played with a long while ago. They seem to use the same penalisation function as mine, i.e., the number of constraint violations, but the moves are different in that they pick pairs of cells without clues (i.e., not constrained) and swap their contents. The pairs are not picked at random but with probability proportional to exp(k), where k is the number of constraint violations. The temperature decreases geometrically and the simulated annealing program stops when zero cost is achieved or when a maximum of 10⁵ iterations is reached. The R program I wrote while visiting SAMSI had more options, but it was also horrendously slow! The CPU time reported by the authors is far, far lower, almost in the range of the backtracking solution that serves as their reference. (Of course, it is written in Fortran 95, not in R…) As in my case, the authors mention that they sometimes get stuck in a local minimum with only two cells violating the constraints.

So I reprogrammed an R code following (as much as possible) their scheme. However, I do not get a better behaviour than with my earlier code, and certainly no solution within seconds, if any. For instance, the temperature decrease in 100(0.99)^t seems too steep to manage 10⁵ steps. So, either I am missing a crucial element in the code, or my R code is very poor and clever Fortran programming does the trick! Here is my code:

target=function(s){
  # penalty: total number of duplicated entries in rows, columns and 3x3 blocks
  tar=sum(apply(s,1,duplicated)+apply(s,2,duplicated))
  for (r in 1:9){
    bloa=(1:3)+3*((r-1)%%3)
    blob=(1:3)+3*trunc((r-1)/3)
    tar=tar+sum(duplicated(as.vector(s[bloa,blob])))
  }
  return(tar)
}

cost=function(i,j,s){
  # number of constraint violations involving cell (i,j)
  cos=sum(s[,j]==s[i,j])+sum(s[i,]==s[i,j])
  boxa=3*trunc((i-1)/3)+1
  boxb=3*trunc((j-1)/3)+1
  cos+sum(s[boxa:(boxa+2),boxb:(boxb+2)]==s[i,j])
}

entry=function(){
  # fill the free cells with a random permutation of the missing digits
  # (uses the clue matrix con from the global environment)
  s=con
  pop=NULL
  for (i in 1:9)
    pop=c(pop,rep(i,9-sum(con==i)))
  s[s==0]=sample(pop)
  return(s)
}

move=function(tau,s,con){
  # swap two non-clue cells, picked with weights increasing in their violations
  pen=(1:81)
  for (i in pen[con==0])
    pen[i]=cost(((i-1)%%9)+1,trunc((i-1)/9)+1,s)
  wi=sample((1:81)[con==0],2,prob=exp(pen[(1:81)[con==0]]))
  prop=s
  prop[wi[1]]=s[wi[2]]
  prop[wi[2]]=s[wi[1]]
  # Metropolis acceptance step (note: the temperature belongs inside the exponential)
  if (runif(1)<exp((target(s)-target(prop))/tau))
    s=prop
  return(s)
}

#Example:
s=matrix(0,ncol=9,nrow=9)
s[1,c(1,6,7)]=c(8,1,2)
s[2,c(2:3)]=c(7,5)
s[3,c(5,8,9)]=c(5,6,4)
s[4,c(3,9)]=c(7,6)
s[5,c(1,4)]=c(9,7)
s[6,c(1,2,6,8,9)]=c(5,2,9,4,7)
s[7,c(1:3)]=c(2,3,1)
s[8,c(3,5,7,9)]=c(6,2,1,9)

con=s
tau=100
s=entry()
for (t in 1:10^4){
  for (v in 1:100) s=move(tau,s,con)
  tau=tau*.99
  if (target(s)==0) break
}
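
Once the loop exits, a basic check tells whether a legal grid was reached (plain base R, nothing specific to the above scheme):

if (target(s)==0){
  print(s)   # a completed, conflict-free grid
} else {
  cat("stopped with",target(s),"remaining constraint violations\n")
}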

speed of R, C, &tc.

Posted in R, Running, Statistics, University life on February 3, 2012 by xi'an

My Paris colleague (and fellow runner) Aurélien Garivier has produced an interesting comparison of 4 (or 6, if you count scilab and octave as different from matlab) computer languages in terms of speed for producing the MLE in a hidden Markov model, using the EM (Baum-Welch) algorithm. His conclusions are that

  • matlab is a lot faster than R and python, especially when vectorization is important: this is why the difference is spectacular on filtering/smoothing, not so much on the creation of the sample;
  • octave is a good matlab emulator, if no special attention is paid to execution speed…;
  • scilab appears as a credible, efficient alternative to matlab;
  • still, C is a lot faster; the inefficiency of matlab in loops is well-known, and clearly shown in the creation of the sample.

(In this implementation, R is “only” three times slower than matlab, so this is not so damning…) All the codes are available and you are free to make suggestions to improve the speed of your favourite language!
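
To make the vectorisation point concrete, here is a minimal sketch of mine (not one of Aurélien's codes) of the forward filtering recursion for a K-state hidden Markov model, first with explicit loops, then with one matrix cross-product per time step; both versions compute the same filter:

# illustration only: forward filter of a K-state HMM, loop vs vectorised
K=50; Tn=500
A=matrix(1/K,K,K)              # transition matrix (uniform, for illustration)
emis=matrix(runif(K*Tn),K,Tn)  # fake emission likelihoods, K x Tn
filt=matrix(0,K,Tn)
filt[,1]=emis[,1]/sum(emis[,1])
# loop version: nested loops over time and states
for (t in 2:Tn){
  pred=rep(0,K)
  for (j in 1:K) pred[j]=sum(filt[,t-1]*A[,j])
  filt[,t]=pred*emis[,t]/sum(pred*emis[,t])
}
# vectorised version: one crossprod() per time step
for (t in 2:Tn){
  f=crossprod(A,filt[,t-1])*emis[,t]
  filt[,t]=f/sum(f)
}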

the Art of R Programming [guest post]

Posted in Books, R, Statistics, University life on January 31, 2012 by xi'an

(This post is the preliminary version of a book review by Alessandra Iacobucci, to appear in CHANCE. Enjoy [both the review and the book]!)

As Rob J. Hyndman enthusiastically declares in his blog, “this is a gem of a book”. I would go even further and argue that The Art of R Programming is a whole mine of gems. The book is well constructed and has a very coherent structure.

After an introductory chapter, where the reader gets a quick overview of R basics that allows her to work through the examples in the following chapters, the rest of the book can be divided into three main parts. In the first part (Chapters 2 to 6), the reader is introduced to the main R objects and to the functions built to handle and operate on each of them. The second part (Chapters 7 to 13) focusses on general programming issues: R’s structures and object-oriented nature, I/O, string handling and manipulation, and graphics. Chapter 13 is entirely devoted to debugging. The third part deals with more advanced topics, such as speed of execution and performance issues (Chapter 14), mixing and matching functions written in R and C (or Python), and parallel processing with R. Even though this last part is intended for more experienced programmers, the overall programming skills of the intended reader “may range anywhere from those of a professional software developer to `I took a programming course in college’.” (p.xxii).

With a fluent style, Matloff is able to deal with a large number of topics in a relatively limited number of pages, resulting in an astonishingly complete yet handy guide. At almost every page we discover a new command, most likely the command we had always been looking for and had done without, by means of more or less cumbersome workarounds. As a matter of fact, there may well exist a ready-made and perfectly suited R function for nearly anything that comes to mind. Users coming from compiled programming languages may find it difficult to get used to this wealth of functions, just as they may feel uncomfortable not declaring variable types, not initializing vectors and arrays, or getting rid of loops. Nevertheless, through numerous examples and a precise knowledge of its strengths and limitations, Matloff masterfully introduces the reader to the flexibility of R. He repeatedly underlines the functional nature of R in every part of the book and stresses from the outset how this feature has to be exploited for effective programming.
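
To give a single illustration of that functional style (mine, not the book's), here is the replacement of an explicit loop by a vectorised call:

# loop-free column means, in the functional style the book advocates
x=matrix(rnorm(1e4),ncol=10)
apply(x,2,mean)   # or the dedicated (and faster) colMeans(x)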

Dennis Ritchie 1941-2011

Posted in Books, R, University life on October 29, 2011 by xi'an

I just got the “news” that Dennis Ritchie died, although this happened on October 12… The announcement was surprisingly missing from my information channels and certainly got little media coverage, compared with Steve Jobs‘ demise. (I did miss the obituaries in the New York Times and in the Guardian. The Economist has the most appropriate heading, printf(“goodbye, Dennis”); !!!) Still, Dennis Ritchie contributed to computer science to an extent comparable to Steve Jobs’, if on a lesser commercial plane: he is a founding father of both the C language and the Unix operating system. I remember spending many days poring over his reference book, The C Programming Language, co-written with Brian Kernighan. (I kept trying to program in C until Olivier Cappé kindly pointed out to me that I was merely translating my Pascal vision into C code, missing most of the appeal of the language!) And, of course, I also remember discovering Unix when arriving at Purdue as a logical and much more modern operating system: just four years after programming principal components on punched cards and in SAS, this was a real shock! I took a few evening classes at Purdue run by the Computer Department and I still carry around the Purdue University UNIX Pocket Guide. Although I hardly ever use it, it is there on the first shelf on top of my desk… As is The C Programming Language, even though I have not opened it in years!

So we (geeks, computer users, Linuxians, R users, …) owe a lot to Dennis Ritchie and it is quite sad both that he passed away alone and that his enormous contribution was not better acknowledged. Thus, indeed,

for (i=0; i<ULONG_LONG_MAX; i++)
    printf("thanks a lot, Dennis");

Julien on R shortcomings

Posted in Books, R, Statistics, University life on September 8, 2010 by xi'an

Julien Cornebise posted a rather detailed set of comments (from Jasper!) that I thought was interesting and thought-provoking enough (!) to promote to a guest post. Here it is, then, to keep the debate rolling (my only censoring being the removal of smileys!). (Please keep in mind that I do not endorse everything stated in this guest post! Especially the point on “Use R!“)

On C vs R
As a reply to Duncan: indeed, C (at least for the bottlenecks) will probably always be faster for the final, mainstream use of an algorithm [e.g. as a distributed R library, or a standalone program]: machine-level code, smart compilers, etc. The same goes for Matlab, and even for Python: e.g., Pierre Jacob (Xian’s great PhD student) uses Weave to inline C in his Python code for the bottlenecks — simple, and fast. Some hedge funds even hire coders to recode the Matlab code of their consulting academic statisticians.
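
For a concrete illustration of moving a bottleneck down to C from R (a minimal sketch of mine, not Julien's; the file sum.c and the function csum are hypothetical names), the standard route is the .C() interface, after compiling with R CMD SHLIB sum.c:

## the C side (file sum.c) could read:
##   void csum(double *x, int *n, double *res){
##     double s=0.0;
##     for (int i=0; i<*n; i++) s += x[i];
##     *res = s;
##   }
dyn.load(paste0("sum",.Platform$dynlib.ext))  # load the compiled library
x=rnorm(1e6)
.C("csum",as.double(x),as.integer(length(x)),res=double(1))$res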

Point taken. But, as Radford Neal points out, that doesn’t justify R being much slower than it could be:

  • When statisticians (cf Xian) want to develop/prototype new algorithms and methods while focussing on the math/stat/algo more than on the language-dependent implementation, it is still a shame to waste 50% (or even 25%) of the running time. The same goes for memory management, or even for some language features [1].
  • Even less computer-savvy users of R, willing to apply existing algorithms (not to develop new ones) to big/intricate real-case datasets, can be put off by slow speed — or even by memory failures.
  • And the library is BRILLIANT.

On Future Language vs R
Thanks, David and Martyn, for the link to Ihaka’s great paper on an R-like, Lisp-based language. It says things better than I could, and with an expertise on R that I lack. I also didn’t know about Robert Gentleman and his success at Harvard (but he *invented* the thing, not merely tuned it up).

Developing a whole new language and concept, as advocated in Ihaka’s paper and as suggested by gappy3000, would be a great leap forward and a needed breakthrough to change the way we do computational stats. I would *love* to see that, as I personally think (as Ihaka advocates in the paper you link to) that R, as a language, is a hell of a pain [2], and I am saddened to see a lot of “Use R” books that will root its inadequate use for needs where the language hardly fits the bill — although the library does.

But R is here and in everyday use, and the matter is more one of making it worth using, to its full potential. I have no special attachment to R, but any breakthrough language that would not be entirely compatible with the massive library contributed over the years would be doomed to fail to pick up the everyday statistician — and we are talking here about far-fetched, long-term moves. A salutary breakthrough, but harder to make happen when such an anchor is in place.
I would say that R has turned into the Fortran of statistics: here to stay, anchored by the inertia that stems from its intrinsic (and widely acknowledged) merits (I’ve been nice, I didn’t say Cobol.).

So until the great leap forward comes (or until we make it happen as a community), I second Radford Neal‘s call for optimization of the existing core of R.

Rejoinder
As a rejoinder to the comments here, I think we need to consider separately

  1. R’s brilliant library
  2. R’s not-so-brilliant language and/or interpreter.

It seems to me from this topic that the community needs/should push for, in chronological order:

  1. First, a speed-up of R’s existing interpreter, as called for by Radford Neal. An “easy” and short-term task, for well-intentioned amateur coders or, better, for solid CS people.
  2. Team up with CS experts interested in developing computational stat-related tools.
  3. With them, get out of the now dead-ended R language and embark on a new stat framework based on an *existing*, proven language. It *must* be able to reuse the brilliant R library/codes brought up by the community; failing that would fail to pick up the userbase = die in limbo. That’s more or less what is called for by Ihaka (except for his doubts on backward compatibility, see Section 7 of his paper). Much harder and longer term, but worth it.

From then on
Who knows the R community well enough to relay this call, and make it happen? I’m out of my league.

Uninteresting footnotes:
[1] I have twitched several times when trying R, feeling the coding was somewhat unnatural from a CS point of view. [Mind, I twitch all the same, although on other points, with Matlab.]
[2] Again, I speak only from the few tries I gave it; as I gave up using it for my everyday work, I am biased — and ignorant.


plagiarism exposed!

Posted in R, Statistics, University life on June 15, 2010 by xi'an

Last morn, I had the surprise of receiving the following email:

This is to inform you that the following abstract has been submitted to the 3rd International Conference of the ERCIM WG on COMPUTING & STATISTICS (ERCIM’10)

Ab#: 114
Title: Goodness of Fit Via Mixtures of Beta distributions
Keywords: nonparametric estimation, posterior conditional predictive p-value.
Abstract: We consider a Bayesian approach to goodness of fit, that is, to the problem of testing whether or not a given parametric model is compatible with the data at hand. Since we are concerned with a goodness of fit problem, it is more of interest to consider a functional distance d(F, F₀) to the tested model as the basis of our test, rather than the corresponding Bayes factor, since the latter puts more emphasis on the parameters. It is both of high interest and of strong difficulty to come up with a satisfactory notion of a Bayesian test for goodness of fit to a distribution or to a family of distributions.

The abstract is a plagiarism of your work.

I am informing you about this in case the author has tried to plagiarize the whole paper. The same author has submitted a second abstract plagiarizing another paper. The author uses bogus affiliations and I cannot trace his institution, in case he has one.

It is somehow comforting to see that such a gross example of plagiarism can get detected, despite the fact that our paper never got published. Although I am sure there must be conferences that do not apply any filter on submissions…

This paper with Judith Rousseau was once submitted to Series B, but I never managed to complete the requested revision, for programming reasons, the task of modifying the several thousand lines of C code driving the beta mixture estimation filling me with paralysing dread! This is actually when I stopped programming in C (whether I ever really programmed in C is actually debatable!). This is unfortunate, as the spirit of the paper was quite nice, using an idea borrowed from Verdinelli and Wasserman to build a genuine Bayesian goodness of fit test… I do not think there is much to salvage at this late stage, given the explosion of Bayesian non-parametrics.

Computational Statistics

Posted in Books, R, Statistics on May 10, 2010 by xi'an

Do not resort to Monte Carlo methods unnecessarily.

When I received this 2009 Springer-Verlag book, Computational Statistics by James Gentle, a while ago, I briefly took a look at the table of contents and decided to have a better look later… Now that I have gone through the whole book, I can write a short review of its scope and contents (to be submitted). Despite its title, the book aims at covering both computational statistics and statistical computing. (With 752 pages at his disposal, Gentle can indeed afford to do both!)

The book Computational Statistics is separated into four parts:

  • Part I: Mathematical and statistical preliminaries.
  • Part II: Statistical Computing (computer storage and arithmetic; algorithms and programming; approximation of functions and numerical quadrature; numerical linear algebra; solution of nonlinear equations and optimization; generation of random numbers).
  • Part III: Methods of Computational Statistics (graphical methods in computational statistics; tools for identification of structure in data; estimation of functions; Monte Carlo methods for statistical inference; data randomization, partitioning, and augmentation; bootstrap methods).
  • Part IV: Exploring Data Density and Relationship (estimation of probability density functions using parametric models; nonparametric estimation of probability density functions; statistical learning and data mining; statistical models of dependencies).

Computational inference, together with exact inference and asymptotic inference, is an important component of statistical methods.

The first part of Computational Statistics is indeed preliminary, containing essentials of mathematics, probability, and statistics. A reader unfamiliar with too many topics within this chapter should first consider improving his or her background in the corresponding area! It is a rather large chapter, at 82 pages, and its main use to readers, as noted above, is to signal deficiencies in their background. Given this purpose, I am not certain the selected exercises of this chapter are necessary (especially when considering that some involve tools introduced much later in the book).

The form of a mathematical expression and the way the expression should be evaluated in actual practice may be quite different.

The second part of Computational Statistics is truly about computing, meaning the theory of computation, i.e., of using computers for numerical approximation, with discussions about the representation of numbers in computers, approximation errors, and of course random number generators. While I judge George Fishman’s Monte Carlo to provide a deeper and more complete coverage of those topics, I appreciate the need for reminding our students of those hardware subtleties, as they often seem unaware of them despite their advanced computer skills. This second part is thus a crash course of 250 pages on numerical methods (like function approximations by basis functions and …) and on random generators, i.e., covering the same ground as Gentle’s earlier books, Random Number Generation and Monte Carlo Methods and Numerical Linear Algebra for Applications in Statistics, while the more recent Elements of Computational Statistics looks very much like a shorter entry on the same topics as those of Parts III and IV of Computational Statistics. This part could certainly sustain a whole-semester undergraduate course, while only advanced graduate students could be expected to gain from a self-study of those topics. It is nonetheless the most coherent and attractive part of the book, and it constitutes a must-read for all students and researchers engaging in any kind of serious programming. Obviously, some notions are introduced a bit too superficially, given the scope of this section (for instance Monte Carlo methods, in particular MCMC techniques, which are introduced in less than six pages), but I came to realise this is the point of the book, which provides an entry into “all” necessary topics, along with links to the relevant literature (if missing Monte Carlo Statistical Methods!). I however deplore that the important issue of Monte Carlo experiments, whose construction is often a hardship for students, is postponed till the 100-page-long appendix. (That students do not read appendices is, I suspect, another of those folk theorems!)
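
(As an illustration of the kind of elementary Monte Carlo experiment students struggle to set up, here is a sketch of mine, not one from the book's appendix: estimating the coverage of the usual 95% t-interval under skewed data.)

# Monte Carlo estimate of the coverage of the 95% t-interval for exponential data
set.seed(1)
M=1e4; n=20
cover=replicate(M,{
  x=rexp(n)                # true mean is 1
  ci=t.test(x)$conf.int
  (ci[1]<1)&&(1<ci[2])
})
mean(cover)                # empirical coverage, below the nominal 95%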

Monte Carlo methods differ from other methods of numerical analysis in yielding an estimate rather than an approximation.

The third and fourth parts of the book cover methods of computational statistics, including Monte Carlo methods, randomization and cross-validation, the bootstrap, probability density estimation, and statistical learning. Unfortunately, I find the level of Part III quite uneven: all chapters are short and rather superficial, because they try to be all-encompassing. (For instance, Chapter 8 includes two pages on the RGB colour coding.) Part IV does a better job of presenting machine learning techniques, if not with the thoroughness of Hastie et al.’s The Elements of Statistical Learning: Data Mining, Inference, and Prediction… It seems to me that the relevant sections of Part III would have fitted better where they belong in Part IV. For instance, Chapter 10 on estimation of functions only covers the evaluation of estimators of functions, postponing the construction of those estimators till Chapter 15. The jackknife is introduced on its own in Chapter 12 (not that I find this introduction ultimately necessary), separately from the bootstrap, which is covered in eight pages in Chapter 13 (the bootstrap for non-iid data being dismissed rather quickly, given the current research in the area). The first chapter of Part IV covers some (non-Bayesian) estimation approaches for parametric families, but I find this chapter somewhat superfluous, as it does not belong to the computational statistics domain (except as an approximation method, as stressed in Section 14.4). While Chapter 16 is a valuable entry on clustering and data-analysis tools like PCA, the final section on high dimensions feels out of context (and mentioning the curse of dimensionality only that close to the end of the book does not seem appropriate). Chapter 17 on dependent data is missing the rich literature on graphical models and their use in the determination of dependence structures.
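
(And the flavour of the basic bootstrap covered in Chapter 13, again via a sketch of mine rather than the book's code: a nonparametric bootstrap of the standard error of a sample median.)

set.seed(2)
x=rexp(50)
B=1e4
boot.med=replicate(B,median(sample(x,replace=TRUE)))
sd(boot.med)   # bootstrap standard error of the median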

Programming is the best way to learn programming (read that again).

In conclusion, Computational Statistics is a very diverse book that can be used at several levels as a textbook, as well as a reference for researchers (even if only as an entry towards further and deeper references). The book is well written, in a lively and personal style. (I however object to the reduction of the notion of Markov chains to discrete state-spaces!) There is no requirement for a specific programming language, although R is introduced in a somewhat dismissive way (“R's most serious flaw is usually lack of robustness since some [packages] are not of high quality”) and some exercises start with “Design and write either a C or a Fortran subroutine.” BUGS is not mentioned at all. The appendices of Computational Statistics also contain the solutions to some exercises, even though the level of detail is highly variable, from one word (“1”) to one page (see, e.g., Exercise 11.4). The 20-page list of references is preceded by a few pages on available journals and webpages, which could become obsolete rather quickly. Despite the reservations raised above about parts of Computational Statistics that would benefit from a deeper coverage, I think this is a reference book that should appear on the shortlist of any computational statistics/statistical computing graduate course, as well as on the shelves of any researcher supporting his or her statistical practice with a significant dose of computing backup.
