A week ago, I received a request to referee a paper for the Journal of Open Source Software, which I had never seen (or heard of) before. The concept is quite interesting, with a scope much broader than statistical computing (as I do not know anyone on the board, and no-one there seems affiliated with a Statistics department). Papers are very terse, describing the associated code in a page or two, and the purpose of refereeing is to check the code. (I was asked to evaluate an MCMC R package but declined for lack of time.) This is a pretty light task if the code is friendly enough to run right away and provides demos. Best of luck to this endeavour!
Archive for software
[A quite significant announcement last October from TOMACS that I had missed:]
To improve the reproducibility of modeling and simulation research, TOMACS is pursuing two strategies.
Number one: authors are encouraged to include sufficient information about the core steps of the scientific process leading to the presented research results and to make as many of these steps as transparent as possible, e.g., data, model, experiment settings, incl. methods and configurations, and/or software. Associate editors and reviewers will be asked to assess the paper also with respect to this information. Thus, although not required, submitted manuscripts which provide clear information on how to generate reproducible results, whenever possible, will be considered favorably in the decision process by reviewers and the editors.
Number two: we will form a new replicating computational results activity in modeling and simulation as part of the peer reviewing process (adopting the procedure RCR of ACM TOMS). Authors who are interested in taking part in the RCR activity should announce this in the cover letter. The associate editor and editor in chief will assign a RCR reviewer for this submission. This reviewer will contact the authors and will work together with the authors to replicate the research results presented. Accepted papers that successfully undergo this procedure will be advertised at the TOMACS web page and will be marked with an ACM reproducibility brand. The RCR activity will take place in parallel to the usual reviewing process. The reviewer will write a short report which will be published alongside the original publication. TOMACS also plans to publish short reports about lessons learned from non-successful RCR activities.
[And now the first paper reviewed according to this protocol has been accepted:]
The paper Automatic Moment-Closure Approximation of Spatially Distributed Collective Adaptive Systems is the first paper to take part in the new replicating computational results (RCR) activity of TOMACS. The paper successfully completed the additional reviewing, as documented in its RCR report. This reviewing is aimed at ensuring that the computational results presented in the paper are replicable. Digital artifacts like software, mechanized proofs, data sets, test suites, or models are evaluated with respect to ease of use, consistency, completeness, and quality of documentation.
Already the final day..! And still this frustration at being unable to attend three sessions at once… Andrew Gelman started the day with a non-computational talk that touched on themes familiar to readers of his blog, on the misuse of significance tests and on recommendations for better practice. I then picked the Scaling and optimisation of MCMC algorithms session organised by Gareth Roberts, with optimal scaling talks by Tony Lelièvre, Alex Théry and Chris Sherlock, while Jochen Voss spoke about the convergence rate of ABC, a paper I already discussed on the blog. A fairly exciting session, showing that MCMC’ory (the name of a workshop I ran in Paris in the late 90’s!) is still alive and well!
After the break (sadly without the ski race!), the software round-table session was one I was looking forward to. The four software packages covered by this round-table were BUGS, JAGS, STAN, and BiiPS, each presented according to the same pattern. I would have liked to see a “battle of the bands”, illustrating the pros & cons of each language on a couple of models & datasets. STAN got the unofficial prize for coolest tee-shirts (we should have asked the STAN team for poster-prize tee-shirts). And I had to skip the final session for a flu-related doctor appointment…
I called for a BayesComp meeting at 7:30, hoping for current and future members to show up and discuss the format of the future MCMski meetings, maybe even proposing new locations on other “sides of the Italian Alps”! But (workshop fatigue syndrome?!), no-one showed up. So anyone interested in discussing this issue is welcome to contact me or David van Dyk, the new BayesComp program chair.
(My colleague Jean-Louis Fouley, now at I3M, Montpellier, kindly agreed to write a review of the BUGS book for CHANCE. Here is the review, as a preview! Be warned, it is fairly long and thorough! References will be available in the published version. The additions of book covers with BUGS in the title and of the corresponding Amazon links are mine!)
If ever a book was eagerly awaited in the world of statistics, it is surely this one. Many people have been expecting it for more than 20 years, ever since the WinBUGS software came into use. Therefore, the tens of thousands of users of WinBUGS are indebted to the leading team of the BUGS project (D. Lunn, C. Jackson, N. Best, A. Thomas and D. Spiegelhalter) for having at last completed this book and for making sure that long-held expectations are not dashed.
As well explained in the Preface, the BUGS project, initiated at Cambridge, was a very ambitious one, at the forefront of the MCMC movement that revolutionized the development of Bayesian statistics in the early 90’s, following the pioneering publication of Gelfand and Smith on Gibbs sampling.
This book comes out after several textbooks have already been published in the area of computational Bayesian statistics using BUGS and/or R (Gelman and Hill, 2007; Marin and Robert, 2007; Ntzoufras, 2009; Congdon, 2003, 2005, 2006, 2010; Kéry, 2010; Kéry and Schaub, 2011; and others). It is neither a theoretical book on the foundations of Bayesian statistics (e.g., Bernardo and Smith, 1994; Robert, 2001) nor an academic textbook on Bayesian inference (Gelman et al., 2004; Carlin and Louis, 2008). Instead, it reflects very well the aims and spirit of the BUGS project and is meant to be a manual “for anyone who would like to apply Bayesian methods to real-world problems”.
In spite of its appearance, the book is not elementary. On the contrary, it addresses most of the critical issues faced by statisticians who want to apply Bayesian statistics in a clever and autonomous manner. Although very dense, its typically fluid British style of exposition, based on real examples and simple arguments, helps the reader digest without too much pain such ingredients as regression and hierarchical models, model checking and comparison, and all kinds of more sophisticated modelling approaches (spatial, mixture, time series, nonlinear with differential equations, nonparametric, etc.).
The book consists of twelve chapters and three appendices specifically devoted to BUGS (A: syntax; B: functions; C: distributions), which are very helpful for practitioners. The book is illustrated with numerous examples. The exercises are well presented and explained, and the corresponding code is made available on a web site.
(This post is the preliminary version of a book review by Alessandra Iacobucci, to appear in CHANCE. Enjoy [both the review and the book]!)
As Rob J. Hyndman enthusiastically declares on his blog, “this is a gem of a book”. I would go even further and argue that The Art of R Programming is a whole mine of gems. The book is well constructed and has a very coherent structure.
After an introductory chapter, where the reader gets a quick overview of R basics that allows her to work through the examples in the following chapters, the rest of the book can be divided into three main parts. In the first part (Chapters 2 to 6), the reader is introduced to the main R objects and to the functions built to handle and operate on each of them. The second part (Chapters 7 to 13) is focussed on general programming issues: R structures and R’s object-oriented nature, I/O, string handling and manipulation, and graphics. Chapter 13 is entirely devoted to the topic of debugging. The third part deals with more advanced topics, such as speed of execution and performance issues (Chapter 14), mixing functions written in R with C (or Python) code, and parallel processing with R. Even though this last part is intended for more experienced programmers, the overall programming skills of the intended reader “may range anywhere from those of a professional software developer to `I took a programming course in college’.” (p.xxii).
With a fluent style, Matloff is able to deal with a large number of topics in a relatively limited number of pages, resulting in an astonishingly complete yet handy guide. On almost every page we discover a new command, most likely the command we had always looked for and done without by means of more or less cumbersome workarounds. As a matter of fact, it is possible that there exists a ready-made and perfectly suited R function for nearly anything that comes to one’s mind. Users coming from compiled programming languages may find it difficult to get used to this wealth of functions, just as they may feel uncomfortable not declaring variable types, not initializing vectors and arrays, or getting rid of loops. Nevertheless, through numerous examples and a precise knowledge of its strengths and limitations, Matloff masterfully introduces the reader to the flexibility of R. He repeatedly underlines the functional nature of R in every part of the book and stresses from the outset how this feature has to be exploited for effective programming.
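To illustrate the point about getting rid of loops (a minimal sketch of my own, not an example taken from the book): the explicit loop a compiled-language programmer would instinctively write is usually already covered by a vectorized built-in.

```r
# Sum of squares, two ways: an explicit loop versus idiomatic,
# vectorized R. Both compute the same quantity.
x <- c(2.5, -1.0, 3.2, 0.4)

# Loop version, familiar to users of compiled languages
total <- 0
for (xi in x) {
  total <- total + xi^2
}

# Vectorized version: element-wise squaring plus a built-in reduction
total_vec <- sum(x^2)

stopifnot(isTRUE(all.equal(total, total_vec)))  # both give 17.65
```

The second form is both shorter and faster on long vectors, which is exactly the style Matloff keeps nudging the reader towards.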
I just got this email (yes, in French) looking for a Bayesian ready to work on algorithms:
Within the company Vekia, we are looking for a PhD in Bayesian statistics for a position in Lille, to be filled as soon as possible.
Vekia is a software publisher for the retail sector, founded in 2007 by two researchers (Pierre-Arnaud COQUELIN and Manuel DAVY), currently counting 17 people. We publish software with high scientific added value for retail, and carry out studies for clients outside the retail sector.
As part of the structuring of its R&D activities, Vekia is looking for a PhD in Bayesian statistics and/or machine learning, for the following missions:
— design of Bayesian models and prototyping of algorithms with a view to their validation
— design of optimal control algorithms in a stochastic setting, and prototyping
— carrying out one-off data studies
To apply, please contact Pierre-Arnaud Coquelin at 06 50 44 22 58 or email@example.com