## Archive for Imperial College London

## Suffrage Science awards in maths and computing

Posted in pictures, Statistics, University life with tags Ada Lovelace, Bletchley Park, Cambridge University, Euler's formula, Great-Britain, Imperial College London, jewellery, MRC Unit, Suffrage Science awards, Suffragettes, University of Warwick, WPSU on October 21, 2016 by xi'an

**O**n October 11, at Bletchley Park, the Suffrage Science awards in mathematics and computer sciences were presented for the first time, to 12 senior female researchers. Among them were three statisticians: Professor Christl Donnelly from Imperial College London, my colleague at Warwick, Jane Hutton, and my friend and co-author, Sylvia Richardson, from the MRC Unit at Cambridge University. This initiative was started by the Medical Research Council in 2011 with Suffrage Science awards for the life sciences, followed in 2013 by awards for engineering and physics, and this year by awards for maths and computing. The name of the award aims to connect with the Suffragette movement of the late 19th and early 20th Centuries, which was particularly active in Britain. One peculiar aspect of this award is that the recipients are given pieces of jewellery, created for each field, pieces that they will themselves pass on two years later to a new recipient of their choice, and so on in an infinite regress! (Which suggests a related puzzle, namely to figure out how many years it should take until all female scientists have received the award. But since the number of recipients only grows linearly with time, this is not going to happen unless the field proves particularly hostile to women scientists!) This jewellery award also relates to the history of the Suffragette movement, since the WSPU commissioned its own jewellery awards. A clever additional touch was that the awards were delivered on Ada Lovelace Day, October 11.

## inflation, evidence and falsifiability

Posted in Books, pictures, Statistics, University life with tags astrostatistics, Bayes factor, Bayesian model choice, Bayesian paradigm, Ewan Cameron, Gottfried Leibnitz, Imperial College London, inflation, Karl Popper, monad, paradigm shift, Peter Coles, quantum gravity on July 27, 2015 by xi'an

*[Ewan Cameron pointed this paper to me and blogged about his impressions a few weeks ago. And then Peter Coles wrote a (properly) critical blog entry yesterday. Here are my quick impressions, as an add-on.]*

“As the cosmological data continues to improve with its inevitable twists, it has become evident that whatever the observations turn out to be they will be lauded as ‘proof of inflation’.” G. Gubitosi et al.

**I**n an arXiv preprint with the above title, Gubitosi et al. embark upon a generic and critical [and astrostatistical] evaluation of Bayesian evidence and the Bayesian paradigm. A perfect topic and material for another blog post!

“Part of the problem stems from the widespread use of the concept of Bayesian evidence and the Bayes factor (…) The limitations of the existing formalism emerge, however, as soon as we insist on falsifiability as a pre-requisite for a scientific theory (…) the concept is more suited to playing the lottery than to enforcing falsifiability: winning is more important than being predictive.” G. Gubitosi et al.

It is somehow quite hard *not* to quote most of the paper, because prose such as the above abounds. Now, compared with standard approaches, the authors introduce a level higher than models, called *paradigms*, defined as collections of models. (I wonder what the next level is: monads? universes? paradises?) Each paradigm is associated with a marginal likelihood, obtained by integrating over both models and model parameters. Which is also the evidence of or for the paradigm. And then, assuming a prior on the paradigms, one can compute the posterior over the paradigms… What is the novelty, then, that “forces” falsifiability upon Bayesian testing (or the reverse)?!
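To make the hierarchy concrete, here is a minimal numerical sketch of the construction described above, with entirely made-up evidence and prior values: a paradigm's evidence is the within-paradigm prior-weighted average of its model evidences, and a prior over paradigms then yields a posterior over paradigms by Bayes' rule.

```python
import numpy as np

# Hypothetical numbers: three models inside one paradigm.
model_evidences = np.array([0.8, 0.3, 0.05])  # p(data | model_k), assumed
model_priors = np.array([0.5, 0.3, 0.2])      # p(model_k | paradigm), sums to 1

# Paradigm evidence: integrate (here, sum) over models and their parameters.
paradigm_evidence = np.sum(model_priors * model_evidences)  # p(data | paradigm)

# Posterior over paradigms, given a second (assumed) paradigm evidence of 0.2:
paradigm_evidences = np.array([paradigm_evidence, 0.2])
paradigm_prior = np.array([0.5, 0.5])
posterior = paradigm_prior * paradigm_evidences
posterior /= posterior.sum()  # p(paradigm | data)
```

With these toy numbers the first paradigm's evidence is 0.5, so its posterior probability is 0.25/0.35, i.e. about 0.71.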

“However, science is not about playing the lottery and winning, but falsifiability instead, that is, about winning given that you have bore the full brunt of potential loss, by taking full chances of not winning a priori. This is not well incorporated into the Bayesian evidence because the framework is designed for other ends, those of model selection rather than paradigm evaluation.” G. Gubitosi et al.

The paper starts with a criticism of the Bayes factor in the point-null test of a Gaussian mean, as overly penalising the null against the alternative, the penalisation being only a power law. Not much new there, it is well known that the Bayes factor does not converge at the same speed under the null and under the alternative… The first proposal of the authors is to consider the distribution of the marginal likelihood of the null model under the [or a] prior predictive encompassing both hypotheses or only the alternative *[there is a lack of precision at this stage of the paper]*, in order to calibrate the observed value against the expected one. What is the connection with falsifiability? The notion that, under the prior predictive, most of the mass is on very low values of the evidence, leading to a conclusion against the null. If the null is replaced with the alternative marginal likelihood, the mass then becomes concentrated on the largest values of the evidence, which is translated as an *unfalsifiable* theory. In simpler terms, it means one can never prove that a mean θ differs from zero. Not a tremendous item of news, all things considered…
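A rough simulation, not the paper's exact algorithm, of this calibration idea for the Gaussian point-null case: with x | θ ~ N(θ, 1), H0: θ = 0, and θ ~ N(0, τ²) under the alternative, one can simulate data from the alternative's prior predictive, x ~ N(0, 1 + τ²), and locate an observed null evidence within that reference distribution (the values of τ and x_obs below are made up).

```python
import numpy as np

def normal_pdf(x, scale=1.0):
    """Density of N(0, scale^2) at x."""
    return np.exp(-0.5 * (x / scale) ** 2) / (scale * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(42)
tau, n_sim = 3.0, 10_000

# Under the alternative's prior predictive, x ~ N(0, 1 + tau^2):
x_pred = rng.normal(0.0, np.sqrt(1.0 + tau ** 2), size=n_sim)

# Null-model evidence (marginal likelihood) for each simulated observation:
null_evidence = normal_pdf(x_pred)

# Most of this reference mass sits on very low evidence values; an observed
# dataset is then calibrated by a tail probability:
x_obs = 2.5  # assumed observation
tail_prob = np.mean(null_evidence <= normal_pdf(x_obs))
```

With a large τ the simulated null evidences concentrate near zero, which is exactly the "mass on very low values of the evidence" behaviour the paper points at.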

“…we can measure the predictivity of a model (or paradigm) by examining the distribution of the Bayesian evidence assuming uniformly distributed data.” G. Gubitosi et al.

The alternative is to define a tail probability for the evidence, i.e., the probability of falling below an arbitrarily set bound. What remains unclear to me in this notion is the definition of a prior on the data, as it seems to be model *dependent*, hence prohibiting comparisons between models, since these would involve incompatible priors. The paper goes further in that direction by penalising models according to their predictivity, P, as exp{-(1-P²)/P²}. And paradigms as well.
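For the record, the penalty quoted above, w(P) = exp{-(1-P²)/P²}, equals 1 for a fully predictive model (P = 1) and decays extremely fast as P shrinks; a minimal sketch, with illustrative P values of my own choosing:

```python
import numpy as np

def predictivity_weight(P):
    """Prior weight exp(-(1 - P^2)/P^2) for a model with predictivity P in (0, 1]."""
    P = np.asarray(P, dtype=float)
    return np.exp(-(1.0 - P ** 2) / P ** 2)

weights = predictivity_weight([1.0, 0.5, 0.1])
# w(1) = 1, w(0.5) = exp(-3) ~ 0.05, w(0.1) = exp(-99), essentially zero
```

The brutal decay for small P is one way to see the arbitrariness discussed below: the scale of the penalisation is imposed rather than derived.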

“(…) theoretical matters may end up being far more relevant than any probabilistic issues, of whatever nature. The fact that inflation is not an unavoidable part of any quantum gravity framework may prove to be its greatest undoing.” G. Gubitosi et al.

Establishing a principled way to weight models would certainly be a major step in the validation of posterior probabilities as a quantitative tool for Bayesian inference, as hinted at in my 1993 paper on the Lindley-Jeffreys paradox, but I do not see such a principle emerging from the paper. Not only because of the arbitrariness in constructing both the predictivity and the associated prior weight, but also because of the impossibility of defining a joint predictive, that is, a predictive across models, without including the weights of those models. This makes the prior probabilities appear on “both sides” of the defining equation… (And I will not mention the issues of constructing a prior distribution of a Bayes factor that are related to Aitkin's integrated likelihood. And I obviously won't try to enter the cosmological debate about inflation.)

## recent advances in Monte Carlo Methods

Posted in R, Statistics, Travel, University life with tags ABC, auxiliary variables, England, Imperial College London, London, MCMC, Monte Carlo Statistical Methods, particle methods, Read paper, simulation, target environment, warwick university on February 8, 2012 by xi'an

**N**ext Thursday *(Feb. 16)*, at the RSS, there will be a special half-day meeting (*afternoon, starting at 13:30*) on Recent Advances in Monte Carlo Methods organised by the General Application Section. The speakers are

- Richard Everitt, University of Oxford, *Missing data, and what to do about it*
- Anthony Lee, Warwick University, *Auxiliary variables and many-core computation*
- Nicolas Kantas, Imperial College London, *Particle methods for computing optimal control inputs*
- Nick Whitely, Bristol University, *Stability properties of some particle filters*
- Simon Maskell, QinetiQ & Imperial College London, *Using a Probabilistic Hypothesis Density filter to confirm tracks in a multi-target environment*

*(Note this is not a Read Paper meeting, so there is no paper nor discussion!)*

## ABC in London [quick recap’]

Posted in Statistics, Travel, University life with tags ABC, ABC in London, ABC-SMC, Imperial College London, Statistics and Computing on May 6, 2011 by xi'an

**T**he meeting yesterday went on very smoothly and nicely. Despite a tight schedule of 12 talks that made for a very full day (and a very early start from Paris), it did not feel that exhausting, as shown by the ensuing discussion in the Queens Arm after the talks. (The organisation of the meeting by Michael Stumpf and his group at Imperial was splendid, with plenty of tea and food to sustain the audience, and a very nice conference room.) It obviously helped that I had read a large portion of the papers related to the talks.

**T**he meeting started with David Balding recalling a few quotes from Alan Templeton to stress that ABC was not uniformly well-received in all circles, then Adam Powell gave a fascinating talk about an application of ABC to tracking the evolution of dairy farming in Europe. One amazing result in this work was that the whole of European cattle originated from a small herd of a few hundred domesticated aurochs in the Fertile Crescent! Simon Tavaré presented an equally fascinating study on the ancestral tree of primates that used a mix of ABC and MCMC, recently published in *Systematic Biology*, with the age of the common ancestor estimated to be between 80 and 90 million years ago (and an additional estimate placing the divergence between humans and chimpanzees closer to 8 million years than the 5 million years previously thought). Tina Toni talked about the application of ABC-SMC and ABC model choice to complex biochemical dynamics. Pierre Pudlo and Mohammed Sedki introduced the new ABC-SMC scheme for selecting the tolerance we are developing (with Jean-Michel Marin and Jean-Marie Cornuet), which builds on Del Moral, Doucet and Jasra's ABC-SMC (and hopefully to be completed soon and submitted to the *Statistics and Computing* special ABC issue). Oliver Ratmann showed an application of his model assessment to several epidemic datasets, including a superb influenza sequence. Ajay Jasra explained the main ideas in the ABC HMM paper I recently discussed (even mentioning the post during the talk!). Mark Beaumont started with a recollection of the developments on his GIMH algorithm and illustrated the use of particle MCMC with an ABC target in a dynamic admixture model with a sort of Dirichlet random walk on the admixture parameters. Michael Blum presented his study on the clear estimation error improvement brought by linear and non-linear adjustments to the raw ABC output.
Dennis Prangle then followed with a pedagogical introduction to the semi-automatic ABC discussed several times on the 'Og. In the final session, on ABC model choice, Xavier Didelot started the discussion by stating the problem of Bayes factor approximation and its resolution in the case of exponential families, and Chris Barnes showed us a new method for picking summary statistics by a Kullback-Leibler criterion (Michael Stumpf had sent me the draft of the paper a few days ago and I will comment on the approach once it is available on arXiv).

**A**gain, a very full but exhilarating day! Looking forward to the next edition in Roma!

## ABC in London

Posted in Statistics, Travel, University life with tags ABC, ABC in London, Hyde Park, Imperial College London, Nature Precedings on May 5, 2011 by xi'an

**T**oday is the day of the ABC in London meeting. As noted by Michael Stumpf in his last mail to the speakers, “because of the impressive list of excellent speakers we could have filled the venue three times over but had to severely limit places…” And indeed the waiting list for the meeting is quite long, at least long enough to cross the whole of Hyde Park once! Note that all talks will be videoed in order to set up a Collection on Nature Precedings, to keep a record of all presentations.

**H**ere are my slides, pilfered from my longer talk in Zurich and incorporating the latest updates: