Archive for severe testing

Monsieur le Président [reposted]

Posted in Books, Statistics, University life on April 11, 2020 by xi'an

Let us carry out screening campaigns on representative samples of the population!

Mr President of the Republic, as you rightly indicated, we are at war and everything must be done to combat the spread of COVID-19. You had the wisdom to surround yourself with a Scientific Council and an Analysis, Research and Expertise Committee, both competent, and, as you know, applied mathematicians and statisticians have a role to play in this battle. Yes, mathematical models at different scales are used to predict the evolution of the epidemic. They allow us to estimate the number of people infected in the coming weeks and months. We are at war and these predictions are essential to the development of the best control strategy. They inform political decisions. It is largely on the basis of this information that the confinement of the French population was decided and renewed.

Mr President, we are at war and these predictions must be as robust as possible. The more precise they are, the better the decisions they will guide. Mathematical models include a number of unknown parameters whose values should be set based on expert advice or on data. These include the transmission rate, the incubation time, the contagion time and, of course, to initialize dynamic mathematical models, the number of infected individuals. To obtain more reliable predictions, it is necessary to better estimate such crucial quantities. The proportion of healthy carriers appears to be a particularly critical parameter.

Mr President, we are at war and we must assess the proportion of healthy carriers by geographic area. We do not currently have the means to implement massive screening, but we can carry out surveys. This means, for a well-defined geographic area, running biological tests on samples of individuals drawn at random so as to be representative of the total population of that area. Such data would supplement those already available and would considerably reduce the uncertainty in model predictions.

Mr. President, we are at war, let us give ourselves the means to fight effectively against this scourge. Thanks to a significant effort, the number of individuals who can be tested daily is increasing markedly; let us devote some of these available tests to representative samples. For each individual drawn at random, we would perform a nasal swab and a blood test, and collect clinical data and other information on their compliance with barrier measures. This would provide important information on the percentage of immunized French people. These data would make it possible to feed the mathematical models wisely, and hence to make informed decisions about the different deconfinement strategies.

Mr. President, we are at war. This strategy, which could at first be deployed only in the most affected sectors, is, we believe, essential. It is doable: designing the survey and determining a representative sample is not an issue, and going to the homes of the people in the sample to take swabs and have them fill out a questionnaire is also perfectly achievable if we give ourselves the means to do so. You only have to decide that a few of the available PCR and serological tests will be devoted to these statistical studies. In Paris and in the Grand Est, for instance, a mere few thousand tests on a properly selected representative sample of individuals could better assess the situation and help in making informed decisions [a rough numerical illustration of the attainable precision follows the letter].

Mr. President, a proposal to this effect has been presented to the Scientific Council and to the Analysis, Research and Expertise Committee that you set up, by a group of mathematicians at École Polytechnique led by Professor Josselin Garnier. You will realise by reading this tribune that, as a statistician, I support it very strongly. I am in no way disputing the competence of the councils that support you, but you have to act quickly and, I repeat, dedicate only a few thousand tests to these statistical studies. The emergency is everywhere: assistance to patients and to people in intensive care must of course be the priority, but let us attempt to anticipate as well. We do not have the means to test the entire population massively, so let us run surveys.

Jean-Michel Marin
Professor at the Université de Montpellier
President of the Société Française de Statistique
Director of the Institut Montpelliérain Alexander Grothendieck
Vice-Dean of the Faculté des Sciences de Montpellier
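
As a rough illustration of the precision that "a mere few thousand tests" can buy, here is a minimal sketch assuming simple random sampling and an error-free test, both blunt simplifications; the prevalence value and sample sizes are purely hypothetical.

```python
from math import sqrt

def margin_of_error(p_hat, n, z=1.96):
    """Half-width of a 95% normal-approximation confidence interval for a
    prevalence p_hat estimated from n individuals sampled at random."""
    return z * sqrt(p_hat * (1.0 - p_hat) / n)

# hypothetical figures: 5% observed prevalence, a few thousand tests
for n in (1_000, 3_000, 5_000):
    print(n, round(margin_of_error(0.05, n), 4))
# with n = 3,000 the interval is roughly 5% ± 0.8%
```

Even under these idealised assumptions the point stands: a properly drawn sample of a few thousand individuals pins down the proportion of (healthy) carriers to within about one percentage point, which is the kind of precision the letter argues the models need.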

severe testing: beyond the Statistics wars?!

Posted in Books, pictures, Statistics, Travel, University life on January 7, 2019 by xi'an

A timely start to my reading of Deborah Mayo's [properly printed] Statistical Inference as Severe Testing (How to Get Beyond the Statistics Wars) on Armistice Day, as it seems to call for just this, an armistice! And the added opportunity of a long flight to Oaxaca… However, this was only the start and it took me several further weeks to peruse the book (SIST) seriously enough before writing the (light) comments below. (I received a free copy from CUP and then a second one directly from Deborah after I mentioned the severe sabotage!)

Indeed, I sort of expected different content when taking the subtitle How to Get Beyond the Statistics Wars at face value. But on the contrary the book very severely attacks anything not in line with the Cox-Mayo severe-testing approach. Mostly Bayesian approach(es) to the issue! For instance, Jim Berger's construction of a reconciliation between Fisher, Neyman, and Jeffreys is surgically deconstructed over five pages and exposed as a Bayesian ploy. Similarly, the warnings from Dennis Lindley and other Bayesians that the p-value attached to the Higgs boson experiment is not the probability that the particle does not exist are met with ridicule. (Another go at Jim's Objective Bayes credentials is found in the squared myth of objectivity chapter. Maybe more strongly than against staunch subjectivists like Jay Kadane. And yet another go when criticising the Berger and Sellke 1987 lower bound results. Which even extends to Valen Johnson's UMP-type Bayesian tests.)

“Inference should provide posterior probabilities, final degrees of support, belief, probability (…) not provided by Bayes factors.” (p.443)

Another subtitle of the book could have been testing in Flatland, given the limited scope of the models considered, with one or at best two parameters and almost always a Normal setting. I have no idea whatsoever how the severity principle would apply in more complex models, with e.g. numerous nuisance parameters. By sticking to the simplest possible models, the book can carry on with the optimality concepts of the early days, like sufficiency (p.147), monotonicity, and uniformly most powerful procedures, which only make sense in a tiny universe.
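
To make this concrete for readers who have not opened the book, here is a minimal sketch of how I understand the severity assessment in the simplest setting SIST keeps returning to, a one-sided test on a Normal mean with known variance; the function and the numerical values are mine and purely illustrative, not taken from the book.

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard Normal cdf."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def severity_mu_greater(mu1, xbar, sigma, n):
    """Severity for the claim 'mu > mu1' after observing the sample mean xbar,
    in the one-sided test of H0: mu <= mu0 vs H1: mu > mu0 for a Normal mean
    with known sigma: the probability of a less extreme outcome were mu = mu1."""
    return norm_cdf((xbar - mu1) / (sigma / sqrt(n)))

# purely illustrative numbers, not from the book
print(severity_mu_greater(0.2, xbar=0.4, sigma=1.0, n=100))  # ~0.977: well warranted
print(severity_mu_greater(0.4, xbar=0.4, sigma=1.0, n=100))  # 0.5: poorly warranted
```

And the point of the Flatland remark is that such closed-form severity curves evaporate as soon as nuisance parameters or non-Normal models enter the picture.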

“The estimate is really a hypothesis about the value of the parameter.  The same data warrant the hypothesis constructed!” (p.92)

There is an entire section on the lack of difference between confidence intervals and the dual acceptance regions, although the lack of uniqueness in defining either of them should come as a bother. Especially outside Flatland. Actually the following section, from p.193 onward, reminds me of fiducial arguments, all the more because Schweder and Hjort are cited there. (With a curve like Fig. 3.3 operating like a cdf on the parameter μ but with no dominating measure!)
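
For the record, the duality is most transparent in the Normal case the book works within: the 1−α confidence interval for μ is exactly the set of values μ₀ that the level-α two-sided test does not reject,

$$\{\mu_0 : |\bar x_n-\mu_0|\le z_{\alpha/2}\,\sigma/\sqrt{n}\}=\big[\bar x_n-z_{\alpha/2}\,\sigma/\sqrt{n},\;\bar x_n+z_{\alpha/2}\,\sigma/\sqrt{n}\big],$$

while the non-uniqueness comes from the many acceptance regions (equal-tailed, one-sided, shortest) that achieve the same level and invert to different intervals with the same nominal coverage.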

“The Fisher-Neyman dispute is pathological: there’s no disinterring the truth of the matter (…) Fisher grew to renounce performance goals he himself had held when it was found that fiducial solutions disagreed with them.” (p.390)

Similarly the chapter on the myth of “the myth of objectivity” (p.221) is mostly and predictably targeting Bayesian arguments. The dismissal of Frank Lad’s arguments for subjectivity ends up [or down] with the rather cheap shot that it “may actually reflect their inability to do the math” (p.228). [CoI: I once enjoyed a fantastic dinner cooked by Frank in Christchurch!] And the dismissal of loss function requirements in Ziliak and McCloskey is similarly terse, while reminding me of Aris Spanos’ own arguments against decision theory. (And of the arguments about the Jeffreys-Lindley paradox as well.)

“It’s not clear how much of the current Bayesian revolution is obviously Bayesian.” (p.405)

The section (Tour IV) on model uncertainty (or against “all models are wrong”) is somewhat limited in that it is unclear what constitutes an adequate (if wrong) model. And calling for the CLT cavalry as backup (p.299) is not particularly convincing.

It is not that everything is controversial in SIST (!) and I found agreement with many (isolated) statements. Especially in the early chapters. Another interesting point made in the book is to question whether or not the likelihood principle makes sense at all within a testing setting. When two models (rather than a point null hypothesis) are X-examined, it is a rare occurrence that the likelihood factorises any further than the invariance by permutation of iid observations. Which reminded me of our earlier warning on the dangers of running ABC for model choice based on (model specific) sufficient statistics. Plus a nice sprinkling of historical anecdotes, esp. about Neyman’s life, from Poland, to Britain, to California, with some time in Paris to attend Borel’s and Lebesgue’s lectures. Which is used as a background for a play involving Bertrand, Borel, Neyman and (Egon) Pearson, under the title “Les Miserables Citations” [pardon my French but it should be Les Misérables if Hugo is involved! Or maybe les gilets jaunes…]. I also enjoyed the sections on reuniting Neyman-Pearson with Fisher, while appreciating that Deborah Mayo wants to stay away from the “minefields” of fiducial inference. With, most interestingly, Neyman himself trying in 1956 to convince Fisher of the fallacy of the duality between frequentist and fiducial statements (p.390). Wisely quoting Nancy Reid at BFF4 stating the unclear state of affairs on confidence distributions. And the final pages reawakened an impression I had at an earlier stage of the book, namely that the ABC interpretation of Bayesian inference in Rubin (1984) could come closer to Deborah Mayo’s quest for comparative inference (p.441) than she thinks, in that producing parameters that generate pseudo-observations agreeing with the actual observations is an “ability to test accordance with a single model or hypothesis”.
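
Since that warning may be cryptic to readers outside the field, here is a toy sketch of the mechanism it refers to, in the spirit of a Poisson versus geometric comparison in which the sample mean is an excellent summary within each model; the priors and all numerical settings below are mine and purely illustrative, not the code behind the original paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_model_choice(y_obs, n_sims=50_000, eps=0.05):
    """Toy ABC model choice between a Poisson and a geometric model, using the
    sample mean as the only summary statistic. Priors are purely illustrative:
    model index ~ Bernoulli(1/2), Poisson rate ~ Exp(1), geometric mean ~ Exp(1)."""
    n, s_obs = len(y_obs), np.mean(y_obs)
    kept = []
    for _ in range(n_sims):
        m = rng.integers(2)                          # 0 = Poisson, 1 = geometric
        if m == 0:
            y = rng.poisson(rng.exponential(1.0), size=n)
        else:
            mu = rng.exponential(1.0)
            y = rng.geometric(1.0 / (1.0 + mu), size=n) - 1   # support {0, 1, ...}
        if abs(np.mean(y) - s_obs) < eps:            # keep simulations matching the summary
            kept.append(m)
    kept = np.asarray(kept)
    return (kept == 0).mean(), (kept == 1).mean()    # ABC posterior model probabilities

y_obs = rng.poisson(1.0, size=50)                    # pretend data from the Poisson model
print(abc_model_choice(y_obs))
```

The gist of the warning was that the resulting approximation of the posterior model probabilities need not converge to the exact answer as the tolerance shrinks, since a statistic sufficient within each model need not be sufficient for the model index.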

“Although most Bayesians these days disavow classic subjective Bayesian foundations, even the most hard-nosed, “we’re not squishy” Bayesians retain the view that a prior distribution is an important if not the best way to bring in background information.” (p.413)

A special mention to Einstein’s cafe (p.156), which reminded me of this picture of Einstein’s relative Cafe I took while staying in Melbourne in 2016… (Not to be confused with the Markov bar in the same city.) And a fairly minor concern that I find myself quoted in the sections priors: a gallimaufry (!) and… Bad faith Bayesianism (!!), with the above qualification. Although I later reappear as a pragmatic Bayesian (p.428), if a priori as a counter-example!

reading pile for X break

Posted in Books, pictures, Statistics, Travel, University life on December 28, 2018 by xi'an

severe testing or severe sabotage? [not a book review]

Posted in Books, pictures, Statistics, University life on October 16, 2018 by xi'an

Last week, I received this new book by Deborah Mayo, which I was looking forward to reading and annotating!, but thrice alas, the book had been sabotaged: except for the preface and acknowledgements, the entire book is printed upside down [a minor issue since the entire book is concerned] and with some of the text cut off on each side [a few letters each time, but enough to make reading a chore!]. I am thus waiting for a tested copy of the book to start reading it in earnest!

 
