Archive for Gottfried Leibniz

inflation, evidence and falsifiability

Posted in Books, pictures, Statistics, University life on July 27, 2015 by xi'an

[Ewan Cameron pointed this paper out to me and blogged about his impressions a few weeks ago. And then Peter Coles wrote a (properly) critical blog entry yesterday. Here are my quick impressions, as an add-on.]

“As the cosmological data continues to improve with its inevitable twists, it has become evident that whatever the observations turn out to be they will be lauded as ‘proof of inflation’.” G. Gubitosi et al.

In an arXiv paper with the above title, Gubitosi et al. embark upon a generic and critical [and astrostatistical] evaluation of Bayesian evidence and the Bayesian paradigm. Perfect topic and material for another blog post!

“Part of the problem stems from the widespread use of the concept of Bayesian evidence and the Bayes factor (…) The limitations of the existing formalism emerge, however, as soon as we insist on falsifiability as a pre-requisite for a scientific theory (….) the concept is more suited to playing the lottery than to enforcing falsifiability: winning is more important than being predictive.” G. Gubitosi et al.

It is somehow quite hard not to quote most of the paper, because prose such as the above abounds. Now, compared with the standard approach, the authors introduce a higher level than models, called paradigms, as collections of models. (I wonder what the next level is, monads? universes? paradises?) Each paradigm is associated with a marginal likelihood, obtained by integrating over models and model parameters. Which is also the evidence of or for the paradigm. And then, assuming a prior on the paradigms, one can compute the posterior over the paradigms… What is the novelty, then, that “forces” falsifiability upon Bayesian testing (or the reverse)?!
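To fix notation (mine, not the authors’), the construction sketched above reads as follows, with y the data, θ the parameters, m a model and P a paradigm:

```latex
% A sketch of the paradigm-level construction described above (my notation, not the authors'):
% evidence of model m within paradigm P, integrating out the model parameters
p(y \mid m) = \int p(y \mid \theta, m)\, p(\theta \mid m)\, \mathrm{d}\theta
% evidence of the paradigm P, integrating (here summing) over its models
p(y \mid P) = \sum_{m \in P} p(y \mid m)\, p(m \mid P)
% posterior over paradigms, once a prior p(P) is chosen
p(P \mid y) \propto p(y \mid P)\, p(P)
```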

“However, science is not about playing the lottery and winning, but falsifiability instead, that is, about winning given that you have bore the full brunt of potential loss, by taking full chances of not winning a priori. This is not well incorporated into the Bayesian evidence because the framework is designed for other ends, those of model selection rather than paradigm evaluation.” G. Gubitosi et al.

The paper starts with a criticism of the Bayes factor in the point null test of a Gaussian mean, as overly penalising the null against the alternative being only a power law. Not much new there, as it is well known that the Bayes factor does not converge at the same speed under the null and under the alternative… The first proposal of the authors is to consider the distribution of the marginal likelihood of the null model under the [or a] prior predictive encompassing both hypotheses or only the alternative [there is a lack of precision at this stage of the paper], in order to calibrate the observed value against the expected one. What is the connection with falsifiability? The notion that, under the prior predictive, most of the mass is on very low values of the evidence, leading one to conclude against the null. If the null is replaced with the alternative marginal likelihood, its mass then becomes concentrated on the largest values of the evidence, which is translated as an unfalsifiable theory. In simpler terms, it means you can never prove a mean θ is different from zero. Not a tremendously exciting item of news, all things considered…
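To see what such a calibration could look like in the Gaussian point-null case, here is a minimal toy sketch (my own, not the authors’ code, with arbitrary values for the scales σ and τ and for the observation): simulate data from the prior predictive of the alternative, compute the evidence of the null for each draw, and locate the observed evidence within that reference distribution.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

sigma = 1.0    # known sampling standard deviation (arbitrary toy value)
tau = 10.0     # prior scale on the mean under the alternative (arbitrary toy value)
n_sim = 10_000

# Prior predictive of the alternative: theta ~ N(0, tau^2) and y | theta ~ N(theta, sigma^2),
# hence marginally y ~ N(0, sigma^2 + tau^2).
y_rep = rng.normal(0.0, np.sqrt(sigma**2 + tau**2), size=n_sim)

# Evidence (marginal likelihood) of the point null H0: theta = 0, for each replicated y.
null_evidence = norm.pdf(y_rep, loc=0.0, scale=sigma)

# Calibration: where does the observed evidence sit in this reference distribution?
y_obs = 2.5  # hypothetical observation
obs_evidence = norm.pdf(y_obs, loc=0.0, scale=sigma)
tail_prob = np.mean(null_evidence <= obs_evidence)
print(f"P(null evidence <= observed value) under the prior predictive: {tail_prob:.3f}")
```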

“…we can measure the predictivity of a model (or paradigm) by examining the distribution of the Bayesian evidence assuming uniformly distributed data.” G. Gubitosi et al.

The alternative is to define a tail probability for the evidence, i.e. the probability of falling below an arbitrarily set bound. What remains unclear to me in this notion is the definition of a prior on the data, as it seems to be model dependent, which prohibits comparisons between models since these would involve incompatible priors. The paper goes further in that direction by penalising models according to their predictivity, P, as exp{-(1-P²)/P²}. And paradigms as well.
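Taken at face value, the penalty quoted above is a direct function of the predictivity P; a literal transcription (with P assumed to lie in (0,1], my reading of the formula) would be:

```python
import numpy as np

def predictivity_penalty(P):
    """Literal transcription of the penalty exp{-(1 - P^2) / P^2} applied to a model
    (or paradigm) with predictivity P, here assumed to lie in (0, 1]."""
    P = np.asarray(P, dtype=float)
    return np.exp(-(1.0 - P**2) / P**2)

# A fully predictive model (P = 1) is left unpenalised; the penalty drops off sharply as P decreases.
for P in (1.0, 0.8, 0.5, 0.2):
    print(P, predictivity_penalty(P))
```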

“(…) theoretical matters may end up being far more relevant than any probabilistic issues, of whatever nature. The fact that inflation is not an unavoidable part of any quantum gravity framework may prove to be its greatest undoing.” G. Gubitosi et al.

Establishing a principled way to weight models would certainly be a major step in the validation of posterior probabilities as a quantitative tool for Bayesian inference, as hinted at in my 1993 paper on the Lindley-Jeffreys paradox, but I do not see such a principle emerging from the paper. Not only because of the arbitrariness in constructing both the predictivity and the associated prior weight, but also because of the impossibility of defining a joint predictive, that is, a predictive across models, without including the weights of those models. This makes the prior probabilities appear on “both sides” of the defining equation… (And I will not mention the issues of constructing a prior distribution on a Bayes factor, which are related to Aitkin’s integrated likelihood. And I obviously won’t try to enter the cosmological debate about inflation.)
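To spell out the circularity I have in mind (again in my own notation): any joint predictive across models already involves the prior weights of those models,

```latex
% joint predictive across models: it already requires the prior weights w_m = p(m)
p(y) = \sum_{m} w_m \int p(y \mid \theta, m)\, p(\theta \mid m)\, \mathrm{d}\theta
% so calibrating the model evidences against p(y) puts the weights w_m
% on both sides of the defining equation.
```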

Seeing Further, &tc.

Posted in Books, Statistics, University life on May 23, 2012 by xi'an

“I can tell you at once that my favourite fellow of the Royal Society was the Reverend Thomas Bayes, from Tunbridge Wells in Kent, who lived from about 1701 to 1761. He was by all accounts a hopeless preacher, but a brilliant mathematician.” B. Bryson, Seeing Further, page 2.

After begging Harper and Collins for a copy (!), I eventually managed to get hold of Bill Bryson’s “Seeing Further: The Story of Science, Discovery, and the Genius of the Royal Society”. Now, a word of warning: Bill Bryson is the editor of the book, meaning he wrote the very first chapter, plus a paragraph of introduction to each of the following 21 chapters. If, like me, you are a fan of Bryson’s hilarious style and stories (and have been for the past twenty years, starting with “Mother Tongue” about the English language), you will find this distinction rather unfortunate, esp. because it is not particularly well-advertised… But, after opening the book, you should not remain cross for very long, and this for two reasons: the first one is that Bayes’s theorem appears on the very first page (written by Bryson, mind you!), with enough Greek letters to make sure we are talking about our Bayes rule! This reason is completed by the above sentence, which is in fact the very first in the book! Bryson surely took a strong liking to the Reverend Bayes to pick him as the epitome of an FRS! And he further avoids using this suspicious picture of the Reverend that plagues so many of our sites and talks… Bryson includes instead a letter from Thomas Bayes dated 1763, which must mean it was sent by Richard Price towards the publication of “An Essay towards solving a Problem in the Doctrine of Chances” in the Philosophical Transactions, as Bayes had by then been dead for two years.

What about my second reason? Well, the authors selected by Bryson to write this eulogy of the Royal Society are mostly science writers like Richard Dawkins and James Gleick, scientists like Martin Rees and many others, and even a cyberpunk writer like Neal Stephenson, a selection that should not come as a surprise given Stephenson’s monumental Baroque Cycle about Isaac Newton and friends. Now, Neal Stephenson gets to the next level of awesome by writing a chapter on the philosophical concepts of Leibniz, FRS, the monads, and the fact that they did not make sense until quantum mechanics was introduced (drawing inspiration from a recent book by Christia Mercer). Now, the chapters of the book are quite uneven, some being about points not much related to the Royal Society, or bringing little light upon it. But overall the feeling that transpires from the book is one of tremendous achievement by this conglomerate of men (and then women after 1945!) who started a Society for useful knowledge in 1660…

