Archive for uncertainty

scientific Americans abide by Joe

Posted in Books, Travel on September 28, 2020 by xi'an

5 ways to fix statistics?!

Posted in Books, Kids, pictures, Statistics, University life on December 4, 2017 by xi'an

In the last issue of Nature (Nov 30), the comment section contains a series of opinions on the reproducibility crisis by five [groups of] statisticians, including Blakeley McShane and Andrew Gelman, with whom [and others] I wrote a response to the seventy-author manifesto. The collection of comments is introduced with the curious sentence

“The problem is not our maths, but ourselves.”

Which I find problematic as (a) the problem is never with the maths, but possibly with the stats!, and (b) the problem lies in inadequate assumptions about the validity of “the” statistical model and in ignoring the resulting epistemic uncertainty. Jeff Leek‘s suggestion to improve the interface with users seems to fall short on that level, while David Colquhoun‘s Bayesian balance between p-values and false-positive rates only addresses well-specified models. Michèle Nuijten strikes closer to my perspective by arguing that rigorous rules are unlikely to help, due to the plethora of possible post-data modellings. And Steven Goodman’s putting the blame on the lack of statistical training of scientists (who “only want enough knowledge to run the statistical software that allows them to get their paper out quickly”) is wishful thinking: not every scientific study involving data [i.e., the overwhelming majority of them] can involve a statistical expert, and not every paper involving data analysis can be reviewed by one. I thus cannot but repeat the conclusion of Blakeley and Andrew:

“A crucial step is to move beyond the alchemy of binary statements about ‘an effect’ or ‘no effect’ with only a P value dividing them. Instead, researchers must accept uncertainty and embrace variation under different circumstances.”

Terry Tao on Bayes… and Trump

Posted in Books, Kids, Statistics, University life on June 13, 2016 by xi'an

“From the perspective of Bayesian probability, the grade given to a student can then be viewed as a measurement (in logarithmic scale) of how much the posterior probability that the student’s model was correct has improved over the prior probability.” T. Tao, what’s new, June 1

Jean-Michel Marin pointed out to me the recent post of Terry Tao on setting a subjective prior for allocating partial credit to multiple-answer questions. (Although I would argue that the main purpose of multiple-answer questions is to expedite grading!) The post considers only true-false questionnaires, in the case when the student produces a probabilistic assessment of her confidence in each answer, in the form of a probability p for each question. The goal is then to devise a grading principle, f, such that the score is f(p) for a right answer and f(1-p) for a wrong answer. This sounds very much like scoring weather forecasters and hence designing proper scoring rules. Which reminds me of the first time I heard a talk about this: it was in Purdue, circa 1988, and Morrie DeGroot gave a talk on scoring forecasters, based on a joint paper he had written with Susie Bayarri. The scoring rule is proper if maximising the expected reward leads the student to pick p=q, where p is the answer given by the student and q her true belief. Terry Tao reaches the well-known conclusion that the grading function f should be f(p)=log₂(2p), where log₂ denotes the base-2 logarithm. One property I was unaware of is that the total expected score writes as N+log₂(L), where L is the likelihood associated with the student’s subjective model. (This is the only true Bayesian aspect of the problem.)
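The propriety of this logarithmic rule is easy to check numerically; here is a minimal sketch (function names are mine, not Tao's), using f(p) = log₂(2p) = 1 + log₂(p):

```python
import math

def score(p: float, correct: bool) -> float:
    """Tao's grading rule f(p) = log2(2p) = 1 + log2(p):
    a right answer claimed with confidence p earns f(p),
    a wrong one earns f(1 - p)."""
    return 1 + math.log2(p if correct else 1 - p)

def expected_score(p: float, q: float) -> float:
    """Expected score when the reported confidence is p
    but the student's true belief is q."""
    return q * score(p, True) + (1 - q) * score(p, False)

# Propriety: for a fixed true belief q, the expectation is
# maximised by reporting p = q (checked on a grid).
q = 0.8
grid = [i / 100 for i in range(1, 100)]
best_p = max(grid, key=lambda p: expected_score(p, q))
```

Note that f(1/2) = 0, so claiming complete ignorance earns nothing, and f(1) = 1, the full credit for a certain and correct answer.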

An interesting and more Bayesian last question from Terry Tao is about what to do when the probabilities themselves are uncertain. More Bayesian because this is where I would introduce a prior model on this uncertainty, in a hierarchical fashion, in order to estimate the true probabilities. (A non-informative prior makes its way into the comments.) Of course, all this leads to a lot of work given the first incentive of asking multiple choice questions…
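For what it is worth, under the logarithmic rule the hierarchical layer collapses rather neatly: the expected score is linear in the chance q of being correct, so only the prior mean of q matters and the optimal report is that mean. A quick sketch (the Beta prior and its parameters are my own illustration, not part of Tao's post):

```python
import math

def expected_log_score(p: float, qbar: float) -> float:
    """Expected grade under f(p) = 1 + log2(p) when the chance of
    being right is uncertain with mean qbar: by linearity in q,
    only the mean of the prior on q enters the expectation."""
    return qbar * (1 + math.log2(p)) + (1 - qbar) * (1 + math.log2(1 - p))

# Hypothetical Beta(a, b) prior on the student's chance of being correct:
a, b = 8.0, 2.0
qbar = a / (a + b)                      # prior mean, here 0.8
grid = [i / 1000 for i in range(1, 1000)]
best = max(grid, key=lambda p: expected_log_score(p, qbar))
```

So the uncertain student should simply integrate out her uncertainty and report the resulting mean probability.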

One may wonder at the link with scary Donald and there is none! But the next post by Terry Tao is entitled “It ought to be common knowledge that Donald Trump is not fit for the presidency of the United States of America”. And unsurprisingly, as an opinion post, it attracted a large number of non-mathematical comments.

Bureau international des poids et mesures

Posted in Books, Statistics, University life on June 15, 2015 by xi'an

Today, I am taking part in a meeting in Paris, for an exotic change!, at the Bureau international des poids et mesures (BIPM), which looks after a universal reference for measurements. For instance, here is its definition of the kilogram:

The unit of mass, the kilogram, is the mass of the international prototype of the kilogram kept in air under three bell jars at the BIPM. It is a cylinder made of an alloy for which the mass fraction of platinum is 90 % and the mass fraction of iridium is 10 %.

And the BIPM is thus interested in the uncertainty associated with such measurements, hence the workshop on measurement uncertainties. Tony O’Hagan will also be giving a talk in a session that pits frequentist against Bayesian approaches, even though I decided to introduce ABC instead, as it seems to me a natural notion for measurement problems (as far as I can tell from my prior on measurement problems).

Structure and uncertainty, Bristol, Sept. 25

Posted in pictures, Running, Statistics, Travel, Uncategorized, University life on September 26, 2012 by xi'an

This was a fairly full day at the Structure and uncertainty: modelling, inference and computation in complex stochastic systems workshop! After a good one-hour run around the Clifton Down, the morning was organised around likelihood-free methods, mostly ABC, plus Arnaud Doucet’s study of methods based on unbiased estimators of the likelihood (à la Beaumont), with the novelty of assessing the inefficiency due to the estimation, really fascinating… The afternoon was dedicated to graphical models. Nicolas Chopin gave an updated version of his Kyoto talk on EP-ABC, where he resorted to composite likelihoods for hidden Markov models. (I then wondered about the parameterisation and the tolerance determination for this algorithm.) Oliver Ratmann presented some of the work he did on the flu while at Duke, then moved to a new approach for the ABC tolerance based on various kinds of testing (which I found clearer than in Kyoto, maybe because I was not jet-lagged!). And I gave my talk on ABC-EL.

I found the afternoon session harder to follow, mostly because I always have trouble understanding the motivations and the notations used for these models, albeit fascinating ones. I remained intrigued by the bidirectional dependence arrows in those graphs for the whole afternoon (even though I think I get it now!). After looking at the few posters presented this afternoon, I went for another short run in Leigh Woods, before joining a group of friends for an Indian dinner at the Brunel Raj. A very full day…!