Archive for George Box

Read Paper

Posted in Books, pictures, Statistics, Travel, University life on April 17, 2017 by xi'an

I had not attended a Read Paper session at the Royal Statistical Society in Errol Street for quite a while and hence it was quite a treat to be back there, especially as a seconder of the vote of thanks for the paper of Andrew Gelman and Christian Hennig. (I realised on this occasion that I had always been invited as a seconder, who in the tradition of the Read Papers is expected to be more critical of the paper. When I mentioned that to a friend, he replied they knew me well!) Listening to Andrew (with no slide) and Christian made me think further about the foundations of statistics and the reasons why we proceed as we do. In particular about the meaning and usages of a statistical model. Which is only useful (in the all models are wrong meme) if the purpose of the statistical analysis is completely defined. Searching for the truth does not sound good enough. And this brings us back full circle to decision theory in my opinion, which should be part of the whole picture, along with the virtues of openness, transparency and communication.

During his talk, Christian mentioned outliers as a delicate issue in modelling and I found this was a great example of a notion with no objective meaning, in that it is only defined in terms of or against a model, in that it addresses the case of observations not fitting a model instead of a model not fitting some observations, hence as much a case of incomplete (lazy?) modelling as an issue of difficult inference. And a discussant (whose Flemish name I alas do not remember) came with the slide below of an etymological reminder that originally (as in Aristotle) the meanings of objectivity and subjectivity were inverted, in that the latter referred to the intrinsic nature of the object, while the former was about the perception of this object. It is only in the modern (?) era that Immanuel Kant reversed the meanings… Last thing, I plan to arXiv my discussions, so feel free to send me yours to add to the arXiv document. And make sure to spread the word about this discussion paper to all O-Bayesians as they should feel concerned about this debate!

all models are wrong

Posted in Statistics, University life on September 27, 2014 by xi'an

“Using ABC to evaluate competing models has various hazards and comes with recommended precautions (Robert et al. 2011), and unsurprisingly, many if not most researchers have a healthy scepticism as these tools continue to mature.”

Michael Hickerson just published an open-access letter with the above title in Molecular Ecology. (As in several earlier papers, incl. the (in)famous ones by Templeton, Hickerson confuses running an ABC algorithm with conducting Bayesian model comparison, but this is not the main point of this post.)

“Rather than using ABC with weighted model averaging to obtain the three corresponding posterior model probabilities while allowing for the handful of model parameters (θ, τ, γ, Μ) to be estimated under each model conditioned on each model’s posterior probability, these three models are sliced up into 143 ‘submodels’ according to various parameter ranges.”

The letter is in fact a supporting argument for the earlier paper of Pelletier and Carstens (2014, Molecular Ecology), which conducted the above splitting experiment. I could not read this paper, so I cannot judge the relevance of splitting the parameter range in this way. From what I understand, it amounts to using mutually exclusive priors with different supports.

“Specifically, they demonstrate that as greater numbers of the 143 sub-models are evaluated, the inference from their ABC model choice procedure becomes increasingly.”

An interestingly cut sentence. Increasingly unreliable? mediocre? weak?

“…with greater numbers of models being compared, the most probable models are assigned diminishing levels of posterior probability. This is an expected result…”

True, if the number of models under consideration increases, under a uniform prior over model indices, the posterior probability of a given model mechanically decreases. But the pairwise Bayes factors should not be impacted by the number of models under comparison and the letter by Hickerson states that Pelletier and Carstens found the opposite:

“…pairwise Bayes factor[s] will always be more conservative except in cases when the posterior probabilities are equal for all models that are less probable than the most probable model.”

Which means that the “Bayes factor” in this study is computed as the ratio of a marginal likelihood to a compound (or super-marginal) likelihood, averaged over all models and hence incorporating the prior probabilities of the model indices as well. I had never encountered such a proposal before. Contrary to the letter's claim:

“…using the Bayes factor, incorporating all models is perhaps more consistent with the Bayesian approach of incorporating all uncertainty associated with the ABC model choice procedure.”

Besides the needless inclusion of ABC, this is a somewhat confusing sentence, as Bayes factors are not, stricto sensu, Bayesian procedures, since they remove the prior probabilities from the picture.
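To fix ideas about both points, the mechanical shrinkage of posterior model probabilities and the set-dependence of this super-marginal construction, here is a minimal numerical sketch of my own (the marginal likelihood values are made up, nothing is taken from the letter or from Pelletier and Carstens): as models are added under a uniform prior over model indices, the posterior probability of a fixed model drops, the pairwise Bayes factor between two fixed models stays put, while the compound ratio depends on the whole collection.

```python
import numpy as np

def posterior_probs(marglik, prior=None):
    """Posterior model probabilities from marginal likelihoods."""
    marglik = np.asarray(marglik, dtype=float)
    if prior is None:  # uniform prior over model indices
        prior = np.full(len(marglik), 1.0 / len(marglik))
    weights = prior * marglik
    return weights / weights.sum()

def pairwise_bayes_factor(marglik, i, j):
    """Standard pairwise Bayes factor: ratio of two marginal likelihoods."""
    return marglik[i] / marglik[j]

def compound_bayes_factor(marglik, i, prior=None):
    """Ratio of one marginal likelihood to the prior-weighted average of all
    marginal likelihoods (the 'super-marginal' construction discussed above)."""
    marglik = np.asarray(marglik, dtype=float)
    if prior is None:
        prior = np.full(len(marglik), 1.0 / len(marglik))
    return marglik[i] / np.sum(prior * marglik)

rng = np.random.default_rng(0)
base = np.array([2.0, 1.0])  # marginal likelihoods of models 0 and 1, kept fixed
for M in (2, 30, 143):
    extra = rng.uniform(0.5, 1.5, size=M - 2)  # marginal likelihoods of the added models
    marglik = np.concatenate((base, extra))
    print(f"M={M:3d}  P(M0|y)={posterior_probs(marglik)[0]:.3f}  "
          f"BF01={pairwise_bayes_factor(marglik, 0, 1):.2f}  "
          f"compound={compound_bayes_factor(marglik, 0):.2f}")
```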

“Although the outcome of model comparison with ABC or other similar likelihood-based methods will always be dependent on the composition of the model set, and parameter estimates will only be as good as the models that are used, model-based inference provides a number of benefits.”

All models are wrong but the very fact that they are models allows for producing pseudo-data from those models and for checking whether the pseudo-data is similar enough to the observed data, in the components that matter most for the experimenter. Hence a loss function of sorts…
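For instance, a minimal sketch of such a pseudo-data check (my own illustration, with a hypothetical Normal model and the sample mean and variance standing in for the components that matter to the experimenter):

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical observed data and the components that matter to the experimenter
y_obs = rng.normal(loc=0.3, scale=1.2, size=100)

def components(y):
    """Summaries the experimenter actually cares about."""
    return np.array([y.mean(), y.var(ddof=1)])

obs_components = components(y_obs)

def draw_pseudo_data(n):
    """Generate pseudo-data from a (wrong, but usable) Normal model whose
    parameters are roughly calibrated on the observed sample."""
    mu = rng.normal(y_obs.mean(), y_obs.std(ddof=1) / np.sqrt(len(y_obs)))
    return rng.normal(mu, y_obs.std(ddof=1), size=n)

# compare pseudo-data with the observations on the chosen components only
reps = np.array([components(draw_pseudo_data(len(y_obs))) for _ in range(1000)])
discrepancy = np.abs(reps - obs_components)  # a loss function of sorts
print("mean absolute discrepancy (mean, variance):", discrepancy.mean(axis=0))
```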

interesting mis-quote

Posted in Books, pictures, Statistics, Travel, University life on September 25, 2014 by xi'an

At a recent conference on Big Data, one speaker mentioned this quote from Peter Norvig, the director of research at Google:

“All models are wrong, and increasingly you can succeed without them.”

A quote that I found rather shocking, esp. when considering the amount of modelling behind Google tools. And coming from someone citing Kernel Methods for Pattern Analysis by Shawe-Taylor and Cristianini as one of his favourite books and Bayesian Data Analysis as another one… Or displaying Bayes [or his alleged portrait] and Turing on his book cover. So I went searching on the Web for more information about this surprising quote. And found the explanation, as given by Peter Norvig himself:

“To set the record straight: That’s a silly statement, I didn’t say it, and I disagree with it.”

Which means that weird quotes have a high probability of being misquotes. And used by others to (obviously) support their own agenda. In the current case, Chris Anderson and his End of Theory paradigm. Briefly and mildly discussed by Andrew a few years ago.

Error and Inference [on wrong models]

Posted in Books, Statistics, University life on December 6, 2011 by xi'an

In connection with my series of posts on the book Error and Inference, and my recent collation of those into an arXiv document, Deborah Mayo has started a series of informal seminars at the LSE on the philosophy of errors in statistics and the likelihood principle, and has also posted a long comment on my argument about only using wrong models. (The title is inspired by the Rolling Stones' "You can't always get what you want", very cool!) The discussion about the need or not to take into account all possible models (which is the meaning of the "catchall hypothesis" I had missed while reading the book) shows my point was not clear. I obviously do not claim in the review that all possible models should be accounted for at once; this was, on the contrary, my understanding of Mayo's criticism of the Bayesian approach (I thought the following sentence was clear enough: "According to Mayo, this alternative hypothesis should “include all possible rivals, including those not even thought of” (p.37)")!

So I see the Bayesian approach as a way to put on the table a collection of reasonable (if all wrong) models and to give those models posterior probabilities, so that improbable ones get eliminated. Therefore, I am in agreement with most of the comments in the post, esp. because this has little to do with Bayesian versus frequentist testing! Even rejecting the less likely models from a collection seems compatible with a Bayesian approach; model averaging is not always an appropriate solution, depending on the loss function!
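As a small illustration of that last point (with made-up posterior probabilities and predictions, not anything taken from the discussion itself): under a 0-1 loss on the model index the Bayesian answer is to report the most probable model, while under a squared-error loss on a prediction the answer is the model-averaged prediction, and the two can disagree.

```python
import numpy as np

# hypothetical posterior probabilities of three wrong-but-usable models
post_prob = np.array([0.45, 0.35, 0.20])
# hypothetical point predictions of a quantity of interest under each model
preds = np.array([1.0, 3.0, 3.2])

# 0-1 loss on the model index: report the most probable model and its prediction
best = int(np.argmax(post_prob))
print("selected model:", best, "with prediction:", preds[best])

# squared-error loss on the prediction: report the model-averaged prediction
print("model-averaged prediction:", float(post_prob @ preds))
```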
