Archive for testing

¡ya basta! [no more excuses]

Posted in Kids on August 23, 2021 by xi'an

pooling or not pooling

Posted in Books, Kids, pictures, Statistics, University life on September 22, 2020 by xi'an

A pro-pooling opinion tribune, followed a few days later by a collection of opinions on pooling PCR samples to absorb the overload in testing laboratories and produce results within a delay short enough to have a public health impact. Le Monde is taking sample pooling most seriously! This includes a link to an April article by Christian Gollier and Olivier Gossner (CREST) published in Covid Economics (!), recalling the basics of sample pooling and citing a Technion experiment to show the approach is feasible in practice. With a recommended pool of 80 patients if the prevalence is 2%. And of 30 for a prevalence of 5%. Despite the approach being implemented in other countries, and considerable delays in analysing the PCR samples, the conclusion of the French public health authorities is that it is urgent to wait…!

coronavirus counts do not count

Posted in Books, pictures, Statistics on April 8, 2020 by xi'an

Somewhat by chance I came across Nate Silver‘s tribune on FiveThirtyEight about the meaninglessness of COVID-19 case counts: these counts reflect sampling efforts and available resources rather than actual cases, and furthermore sampling efforts from at least a fortnight earlier.

“The data, at best, is highly incomplete, and often the tip of the iceberg for much larger problems. And data on tests and the number of reported cases is highly nonrandom. In many parts of the world today, health authorities are still trying to triage the situation with a limited number of tests available. Their goal in testing is often to allocate scarce medical care to the patients who most need it — rather than to create a comprehensive dataset for epidemiologists and statisticians to study.”

This article runs four different scenarios, with the same actual parameters for the epidemic, and highly different and mostly misleading perceptions based on the testing strategies. This is a highly relevant warning, but I am surprised Nate Silver does not draw the rather obvious conclusion that some form of official survey, for instance based on capture-recapture and representative samples, testing for present and past infections, should be implemented on a very regular basis; even with a limited number of tested persons, this would give a much more reliable vision of the status of the epidemic. Here, the French official institute of statistics, INSEE, would be best suited to implement such a scheme.
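As a toy illustration of the capture-recapture idea, with entirely hypothetical numbers and the Chapman-corrected Lincoln-Petersen estimator as the simplest possible choice:

```python
# Capture-recapture in its most basic form: two testing rounds on
# random samples, with the overlap between rounds used to estimate
# the total number N of infections.

def chapman_estimate(n1: int, n2: int, m: int) -> float:
    """Chapman estimator of population size from two samples:
    n1 cases found in round 1, n2 in round 2, m found in both."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical figures: 500 infections detected in each of two
# survey waves, 50 individuals appearing in both waves.
print(chapman_estimate(500, 500, 50))  # ≈ 4920.6
```

The same two-sample logic extends to several lists (hospital admissions, test registries, surveys) via log-linear models, which is where an institute like INSEE would come in.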

truth or truthiness [book review]

Posted in Books, Kids, pictures, Statistics, University life on March 21, 2017 by xi'an

This 2016 book by Howard Wainer has been sitting (!) on my desk for quite a while and it took a long visit to Warwick to find a free spot to quickly read it and write my impressions. The subtitle is, as shown on the picture, “Distinguishing fact from fiction by learning to think like a data scientist”. With all due respect to the book, which illustrates quite pleasantly the dangers of (pseudo-)data mis- or over- (or even under-)interpretation, and to the author, who has repeatedly emphasised those points in his books and opinion tribunes, including those in CHANCE, I do not think the book teaches how to think like a data scientist, in that a neophyte reader would not manage to handle a realistic data-centric situation without deeper training. But this collection of essays, some of which were tribunes, makes for nice reading nonetheless.

I presume that in this post-truth and alternative facts [dark] era, the notion of truthiness is familiar to most readers! It is often based on a misunderstanding or a misappropriation of data leading to dubious and unfounded conclusions. The book runs through dozens of examples (some of them quite short and mostly appealing to common sense) to show how this happens and, to some extent, how it can be countered, if not avoided altogether, as people will always try to bend the data, willingly or not, towards their conclusions.

There are several parts and several themes in Truth or Truthiness, with different degrees of depth and novelty. The more involved part is in my opinion the one about causality, with illustrations in educational testing, psychology, and medical trials. (The illustration about fracking and the resulting impact on Oklahoma earthquakes should not need to be in the book, except that there exist officials publicly denying the facts. The same remark applies to the testing cheat controversy, which would be laughable had someone not ended up the victim!) The section on graphical representation and data communication is less exciting, presumably because it comes after Tufte’s books and message. I also feel the 1854 cholera map of John Snow is somewhat over-exploited, since he only drew the map after the epidemic declined. The final chapter, Don’t Try this at Home, is quite anecdotal, and at the same time this may be the whole point, namely that on mundane questions thinking like a data scientist is feasible and leads to sometimes surprising conclusions!

“In the past a theory could get by on its beauty; in the modern world, a successful theory has to work for a living.” (p.40)

The book reads quite nicely, both as a whole and as a collection of pieces, from which class and talk illustrations can be borrowed. I like the “learned” tone of it, with plenty of citations and witticisms, some in Latin, Yiddish and even French. (Even though the latter is somewhat inaccurate! Si ça avait pu se produire, ça avait dû se produire [p.152] (if it could have happened, it must have happened) would have sounded more vernacular in my Gallic opinion!) I thus enjoyed unreservedly Truth or Truthiness, for its rich style and critical message, all the more needed in the current times, and, far from comparing it with a bag of potato chips as Andrew Gelman did, I would like to stress its classical tone, in the sense of being immersed in a broad and deep culture that seems to be receding fast.

testing R code [book review]

Posted in R, Statistics, Travel on March 1, 2017 by xi'an

When I saw this title among the CRC Press novelties, I immediately ordered it as I thought it fairly exciting. Now that I have gone through the book, the excitement has died, maybe faster than it needed to, as I read it while stuck in a soulless Schiphol airport, missing the only ice-climbing opportunity of the year!

Testing R Code was written by Richard Cotton and is quite short: once you take out the appendices and the answers to the exercises, it is about 130 pages long, with a significant proportion of code and output. It is about tools for testing the coherence of R code, in terms of inputs more than outputs: the assertive and testthat packages, the latter developed by Hadley Wickham at RStudio, intended for run-time versus development-time testing respectively, meaning checking that the inputs and outputs are what the author of the code intends them to be. The other chapters contain advice and heuristics about writing maintainable, testable code, and about incorporating a testing feature in an R package.
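The run-time versus development-time distinction can be sketched as follows (a loose Python analogue, not the actual assertive or testthat syntax):

```python
# Run-time vs development-time testing, illustrated on a toy function.

def normalise(weights):
    """Rescale a list of non-negative weights to sum to one."""
    # run-time testing (assertive style): validate inputs on every call
    assert len(weights) > 0, "empty weight vector"
    assert all(w >= 0 for w in weights), "negative weight"
    total = sum(weights)
    assert total > 0, "weights sum to zero"
    return [w / total for w in weights]

# development-time testing (testthat style): expectations on outputs,
# run while developing or building a package, not on every call
def test_normalise():
    assert normalise([1, 1, 2]) == [0.25, 0.25, 0.5]
    assert abs(sum(normalise([0.3, 0.7])) - 1) < 1e-12

test_normalise()
print("all tests passed")
```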

While I am definitely a poorly qualified reader for this type of R book, my disappointment stems from my expectation of a book about debugging R code, possibly due to a misunderstanding of the term testing. This is an unrealistic expectation, for sure, as testing that a code produces what it is supposed to requires some advanced knowledge of what the output should be, at least in some representative situations. Which means using an interface like RStudio is essential for spotting unsavoury behaviours of some variables, if not foolproof in any case.