Archive for Bristol

p-values, Bayes factors, and sufficiency

Posted in Books, pictures, Statistics on April 15, 2019 by xi'an

Among the many papers published in this special issue of TAS on statistical significance or lack thereof, there is one I had already read (besides ours!), namely the paper by Jonty Rougier (U of Bristol, hence the picture) connecting p-values, likelihood ratios, and Bayes factors. Jonty starts from the notion that the p-value is induced by a transform, or summary statistic, of the sample, t(x), such that the larger t(x), the less likely the null hypothesis with density f⁰(x). He then creates an embedding model by exponential tilting, namely the exponential family with dominating measure f⁰, natural statistic t(x), and a positive parameter θ. In this embedding model, a Bayes factor can be derived from any prior on θ, and the p-value satisfies an interesting double inequality: it is smaller than the likelihood ratio, itself smaller than any (other) Bayes factor. One novel aspect from my perspective is that I had thought up to now that this inequality only held for one-dimensional problems, but there is no constraint here on the dimension of the data x.

A remark I presumably made to Jonty on the first version of the paper is that the p-value itself remains invariant under any bijective increasing transform of the summary t(·). This means that there exists an infinity of such embedding families and that the bound remains true over all of them, although the value of the minimum over these families is beyond my reach (could it be the p-value itself?!). This point is also made clear in the justification of the analysis via the Pitman-Koopman lemma. Another remark is that the perspective can be inverted in a more realistic setting, when a genuine alternative model M¹ is considered and a genuine likelihood ratio is available. In that case the Bayes factor remains smaller than the likelihood ratio, itself larger than the p-value induced by the likelihood ratio statistic (or its log).
The induced embedded exponential tilting is then a geometric mixture of the null and of the locally optimal member of the alternative. I wonder if there is a parameterisation of this likelihood ratio into a p-value that would turn it into a uniform variate (under the null). Presumably not. While the approach remains firmly entrenched within the realm of p-values and Bayes factors, this exploration of a natural embedding of the original p-value is definitely worth mentioning in a class on the topic! (One typo, though: the Bayes factor is stated to be lower than one, which is incorrect.)
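A minimal numerical check of the double inequality, in the simplest tilted setting I can think of: a single observation with f⁰ = N(0,1) tilted by t(x) = x, so that the embedding family is N(θ,1) with θ > 0, and a (hypothetical, my choice, not Jonty's) Exp(1) prior on θ for the Bayes factor:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Exponential tilting of f0 = N(0,1) with natural statistic t(x) = x:
# f_theta(x) ∝ f0(x) exp(theta x), i.e. X ~ N(theta, 1), theta > 0.
x_obs = 2.0

# p-value of t(x) = x under the null f0
p_value = 1 - norm.cdf(x_obs)

# likelihood ratio f0(x) / sup_theta f_theta(x), with theta_hat = max(x, 0)
theta_hat = max(x_obs, 0.0)
lik_ratio = norm.pdf(x_obs, 0, 1) / norm.pdf(x_obs, theta_hat, 1)

# Bayes factor B01 = f0(x) / m1(x) under the assumed Exp(1) prior on theta
m1, _ = quad(lambda th: norm.pdf(x_obs, th, 1) * np.exp(-th), 0, np.inf)
bayes_factor = norm.pdf(x_obs, 0, 1) / m1

print(p_value, lik_ratio, bayes_factor)
# the double inequality: p-value <= likelihood ratio <= Bayes factor
assert p_value <= lik_ratio <= bayes_factor
```

With x = 2, the three quantities come out around 0.023, 0.135, and 0.29 respectively, in the stated order.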

hittin’ a Brexit wall

Posted in pictures, Travel on December 19, 2018 by xi'an

postdoc position in London plus Seattle

Posted in Statistics on March 21, 2018 by xi'an

Here is an announcement from Oliver Ratman for a postdoc position at Imperial College London, with partners in Seattle, on epidemiology and new Bayesian methods for estimating sources of transmission with phylogenetics. As stressed by Ollie, no prerequisites in phylogenetics are required: they are really looking for someone with solid foundations in Mathematics/Statistics, especially Bayesian Statistics, and good computing skills (R, GitHub, MCMC, Stan). The search is officially for a Postdoc in Statistics and Pathogen Phylodynamics, reference number NS2017189LH. The deadline is April 7, 2018.

more positions in the UK [postdoc & professor]

Posted in Statistics on October 13, 2017 by xi'an

I have received additional emails from England advertising positions in Bristol, Durham, and London, so here they are, with links to the full adverts!

  1. The University of Bristol is seeking to appoint a number of Chairs in any areas of Mathematics or Statistical Science, in support of a major strategic expansion of the School of Mathematics. Deadline is December 4.
  2. Durham University is opening a newly created position of Professor of Statistics, with research and teaching duties. Deadline is November 6.
  3. Oliver Ratman, in the Department of Mathematics at Imperial College London, is seeking a Research Associate in Statistics and Pathogen Phylodynamics. Deadline is October 30.

position in Bristol

Posted in pictures, Running, Statistics, Travel, University life on October 4, 2017 by xi'an

There is [also] an opening for a Lecturer, Senior Lecturer, or Reader at the University of Bristol, with a deadline of 27 November 2017. The School of Mathematics, and in particular the Institute for Statistical Science, is quite active in research, with top rankings and a wide range of areas of expertise, while [based on personal experience] the City of Bristol is a great place to live! (Details through the links.)

position in Bristol

Posted in Statistics, University life on July 19, 2016 by xi'an

Clifton Bridge, Bristol, Sept. 24, 2012

There is an opening for a Lecturer (i.e., assistant/associate professor) position in Statistical Science at the University of Bristol (School of Mathematics), with deadline August 7. Please contact Professor Christophe Andrieu for more details.

read paper [in Bristol]

Posted in Books, pictures, Statistics, Travel, University life on January 29, 2016 by xi'an

Clifton & Durdham Downs, Bristol, Sept. 25, 2012

I went to give a seminar in Bristol last Friday and chose to present the testing with mixture paper. As we are busy working on the revision, I was eagerly looking for comments and criticisms that could strengthen this new version. As it happened, the (Bristol) Bayesian Cake (Reading) Club had chosen our paper for discussion, two weeks in a row!, hence the title!, and I got invited to join the group on the morning prior to the seminar! This was, of course, most enjoyable and relaxed, including a home-made cake!, but also quite helpful in assessing our arguments in the paper. One point of contention, or at least of discussion, was the common parametrisation between the components of the mixture. Although all parametrisations are equivalent from a single-component point of view, I can [almost] see why using a mixture with the same parameter value across all components may impose some unsuspected constraint on that parameter, even when the parameter is the same moment for both components. This still sounds like a minor counterpoint in that the weight should converge to either zero or one and hence eventually favour the posterior on the parameter corresponding to the "true" model.

Another point raised during the discussion was the behaviour of the method under misspecification, or in an M-open framework: when neither model is correct, does the weight still converge to the boundary associated with the closest model (as I believe), or does a convexity argument produce a non-zero weight as its limit (as hinted by one example in the paper)? I had thought very little about this and hence had just as little to argue, though this does not sound to me like the primary reason for conducting tests, especially in a Bayesian framework. If one is uncertain about both models under comparison, one should have an alternative at the ready! Or use a non-parametric version, which is a direction we need to explore further before deciding it is coherent and convergent!

A third point of discussion was my argument that mixtures allow us to rely on the same parameter and hence the same prior, whether proper or not, while Bayes factors are less clearly open to this interpretation. This was not uniformly accepted!

Thinking afresh about this approach also led me to broaden my perspective on the use of the posterior distribution of the weight(s) α: while previously I had taken those weights mostly as a proxy for the posterior probabilities, to be calibrated by pseudo-data experiments, as for instance in Figure 9, I now perceive them primarily as the portion of the data in agreement with the corresponding model [or hypothesis], and more importantly as a way of staying away from a Neyman-Pearson-like decision, or error evaluation. Usually, when asked about the interpretation of the output, my answer is to compare the behaviour of the posterior on the weight(s) with the posterior associated with a sample from each model, which does sound somewhat similar to posterior predictives if the samples are simulated from the associated predictives. But the issue was not raised during the visit to Bristol, which possibly reflects on how unfrequentist the audience [the Statistics group] is, as it apparently accepted with no further ado the use of a posterior distribution as a soft assessment of the comparative fits of the different models, if not necessarily agreeing on the need to conduct hypothesis testing (especially in the case of the Pima Indian dataset!).
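As a toy illustration of this use of the weight (a minimal sketch under simplifying assumptions of my own choosing, not the algorithm of our paper): take a two-component mixture α N(μ,1²) + (1−α) N(μ,3²) with a common location μ, a Beta(½,½) prior on α, a flat prior on μ, and a standard Gibbs sampler on data simulated from the first component. The posterior on α then drifts towards the boundary at one.

```python
import numpy as np

rng = np.random.default_rng(0)

# data simulated from the first component, N(mu=0, sd=1)
n = 200
x = rng.normal(0.0, 1.0, n)

sd1, sd2 = 1.0, 3.0   # the two candidate scales, sharing the location mu
a0 = 0.5              # Beta(a0, a0) prior on the weight alpha

def normal_pdf(y, mu, sd):
    return np.exp(-0.5 * ((y - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

T = 2000
alphas = np.empty(T)
alpha, mu = 0.5, 0.0
for t in range(T):
    # 1. allocate each observation to one of the two components
    p1 = alpha * normal_pdf(x, mu, sd1)
    p2 = (1 - alpha) * normal_pdf(x, mu, sd2)
    z = rng.random(n) < p1 / (p1 + p2)   # True -> first component
    n1 = z.sum()
    # 2. update the weight from its Beta full conditional
    alpha = rng.beta(a0 + n1, a0 + n - n1)
    # 3. update the shared location (flat prior => Gaussian conditional)
    prec = n1 / sd1**2 + (n - n1) / sd2**2
    mean = (x[z].sum() / sd1**2 + x[~z].sum() / sd2**2) / prec
    mu = rng.normal(mean, 1.0 / np.sqrt(prec))
    alphas[t] = alpha

burn = T // 2
print("posterior mean of alpha:", alphas[burn:].mean())
```

Since the data truly come from the N(μ,1²) component, the posterior mass of α sits well above one half, and it would concentrate further near one as n grows.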