Archive for Saint Giles cemetery

improving bridge samplers by GANs

Posted in Books, pictures, Statistics on July 20, 2021 by xi'an

Hanwen Xing from Oxford recently posted a paper on arXiv about using GANs to improve the overlap between the densities in bridge sampling, bringing out new connections with noise contrastive estimation. The idea is to optimise a transform of one of the densities, h(·), to bring it closer to the other density, k(·), using for instance normalising flows. (The call to transforms for bridge sampling is not new, dating at least to Voter in 1985, the year I was starting my PhD!) Furthermore, using an f-divergence as a measure of functional distance allows for a reasonably straightforward update of the transform, which can be reformulated as a GAN target. This is somewhat natural in that the transform aims at confusing simulations from the transform of h with simulations from k. This is quite an interesting proposal, even though computing the optimal transform is time-consuming and subject to the curse of dimensionality. I also wonder whether iterating the optimisation, one density after the other, would bring further improvement.
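For intuition, here is a minimal sketch of the plain iterative optimal bridge estimator of Meng & Wong (1996) that the learned transform is meant to improve. This is not the paper's code: the example densities, sample sizes, and all names are illustrative choices of mine, and Xing's proposal would first push the h-draws through the GAN-trained transform before running this fixed point.

```python
# A minimal sketch (my own toy example, not Xing's code): iterative
# optimal bridge sampling for the ratio of normalising constants of two
# unnormalised densities h_tilde and k_tilde.
import numpy as np

rng = np.random.default_rng(0)

h_tilde = lambda x: np.exp(-0.5 * x**2)              # Z_h = sqrt(2*pi)
k_tilde = lambda x: 2.0 * np.exp(-0.5 * (x - 1)**2)  # Z_k = 2*sqrt(2*pi)

xs_h = rng.normal(0.0, 1.0, size=10_000)  # draws from normalised h
xs_k = rng.normal(1.0, 1.0, size=10_000)  # draws from normalised k

n_h, n_k = len(xs_h), len(xs_k)
s_h, s_k = n_h / (n_h + n_k), n_k / (n_h + n_k)

l_h = h_tilde(xs_h) / k_tilde(xs_h)  # density ratios at the h-draws
l_k = h_tilde(xs_k) / k_tilde(xs_k)  # density ratios at the k-draws

r = 1.0  # running estimate of Z_h / Z_k (true value 0.5 here)
for _ in range(100):  # Meng & Wong (1996) fixed-point iteration
    num = np.mean(l_k / (s_h * l_k + s_k * r))
    den = np.mean(1.0 / (s_h * l_h + s_k * r))
    r = num / den

print(r)  # close to 0.5; poor overlap between h and k inflates the variance
```

The variance of this estimator degrades as the two densities drift apart, which is exactly what transforming h (by a normalising flow trained against a GAN-style discriminator) is intended to cure.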

MCqMC2020 key dates

Posted in pictures, Statistics, Travel, University life on January 23, 2020 by xi'an

A reminder of the key dates for the upcoming MCqMC2020 conference this summer in Oxford:

Feb 28, Special sessions/minisymposia submission
Mar 13, Contributed abstracts submission
Mar 27, Acceptance notification
Mar 27, Registration starts
May 8, End of early bird registration
June 12, Speaker registration deadline
Aug 9-14, Conference

and of the list of plenary speakers:

Yves Atchadé (Boston University)
Jing Dong (Columbia University)
Pierre L’Ecuyer (Université de Montréal)
Mark Jerrum (Queen Mary University London)
Gerhard Larcher (JKU Linz)
Thomas Müller (NVIDIA)
David Pfau (Google DeepMind)
Claudia Schillings (University of Mannheim)
Mario Ullrich (JKU Linz)

Florence Nightingale Bicentennial Fellowship and Tutor in Statistics and Probability in Oxford [call]

Posted in Statistics, Travel, University life on July 29, 2019 by xi'an

Reposted: The Department of Statistics is recruiting a Florence Nightingale Bicentennial Fellowship and Tutor in Statistics and Probability with effect from October 2019, or as soon as possible thereafter. The post holder will join the dynamic and collaborative Department of Statistics.

The Department carries out world-leading research in applied statistics fields including statistical and population genetics and bioinformatics, as well as core theoretical statistics, computational statistics, machine learning and probability. This is an exciting time for the Department, which relocated to new premises on St Giles’ in the heart of the University of Oxford in 2015. Our newly-renovated building provides state-of-the-art teaching facilities and modern space to facilitate collaboration and integration, creating a highly visible centre for Statistics in Oxford.

The successful candidate will hold a doctorate in the field of Statistics, Mathematics or a related subject. They will be an outstanding individual who has the potential to become a leader in their field. The post holder will have the skills and enthusiasm to teach at undergraduate and graduate level, within the Department of Statistics, and to supervise student projects. They will carry out and publish original research within their area of specialisation. We particularly encourage candidates working in areas that link with existing research groups in the department to apply. The deadline for applications is September 30, 2019.

If you would like to discuss this post and find out more about joining the academic community in Oxford, please contact Professor Judith Rousseau or Professor Yee Whye Teh. All enquiries will be treated in strict confidence and will not form part of the selection decision.

Jeffreys priors for hypothesis testing [Bayesian reads #2]

Posted in Books, Statistics, University life on February 9, 2019 by xi'an

A second (re)visit to a reference paper I gave to my OxWaSP students for the last round of this joint CDT programme. Indeed, this may be my first complete read of Susie Bayarri and Gonzalo García-Donato's 2008 Series B paper, inspired by Jeffreys', Zellner's and Siow's proposals in the Normal case. (Disclaimer: I was not the JRSS B editor for this paper.) I had previously seen it as a talk at the O'Bayes 2009 meeting in Philadelphia.

The paper aims at constructing formal rules for objective proper priors when testing embedded hypotheses, in the spirit of the "hidden gem" Chapter 3 of Jeffreys' Theory of Probability. The proposal is based on a symmetrised version of the Kullback-Leibler divergence κ between null and alternative, used through a transform like an inverse power of 1+κ, with the power taken large enough to make the prior proper, and possibly multiplied by a reference measure (i.e., the arbitrary choice of a dominating measure). The construction can be generalised to any intrinsic loss (not to be confused with an intrinsic prior à la Berger and Pericchi!), and a Taylor expansion shows the resulting prior to be approximately Cauchy or Student's t. This is to be compared with Jeffreys' original prior, equal to the derivative of the arctan transform of the root divergence (!). The calibration by an effective sample size remains delicate, as this notion lacks a general definition.
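In schematic form, and in my own notation reconstructed from the description above rather than copied from the paper, the divergence-based prior reads something like

```latex
% Schematic rendering (my notation, not Bayarri & García-Donato's exact
% formulation): an inverse power of one plus the symmetrised KL divergence,
% rescaled by an effective sample size n*.
\pi^{D}(\theta) \,\propto\, \bigl[\, 1 + \bar\kappa(\theta) \,\bigr]^{-q},
\qquad
\bar\kappa(\theta) \,=\, \frac{1}{n^{\star}}
\Bigl\{ \mathrm{KL}\bigl(f_{\theta_0}\,\big\|\,f_{\theta}\bigr)
      + \mathrm{KL}\bigl(f_{\theta}\,\big\|\,f_{\theta_0}\bigr) \Bigr\},
```

with q chosen just large enough for propriety and n★ the effective sample size mentioned above. In the Normal mean case, this type of construction produces the heavy, Cauchy-like tails of Jeffreys', Zellner's and Siow's priors.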

At the start the authors rightly insist on letting the nuisance parameter ν differ between models but, as we all often do, they fall back on having the "same ν" in both models for integrability reasons. Nuisance parameters make the definition of the divergence prior somewhat harder, or somewhat arbitrary. Indeed, as in reference prior settings, the authors first work conditional on the nuisance parameter, then use a prior on ν that may be improper, by the "same ν" argument. (Although conditioning is not the proper term if the marginal prior on ν is improper.)

The paper also contains an interesting case of the translated Exponential, where the prior associated with the L¹ loss is a Student's t with 2 degrees of freedom, and another on mixture models, albeit in the simple case of a location parameter in one component only.

relativity is the keyword

Posted in Books, Statistics, University life on February 1, 2017 by xi'an

[St John's College, Oxford, Feb. 23, 2012]

As I was teaching my introduction to Bayesian Statistics this morning, ending with the chapter on tests of hypotheses, I found myself reflecting [out loud] on the relative nature of posterior quantities. Just as when I introduced the role of priors in Bayesian analysis the day before, I stressed the relativity of quantities coming out of the BBB [Big Bayesian Black Box], namely that whatever comes out of a Bayesian procedure is to be understood, scaled, and relativised against its prior equivalent, i.e., that the reference measure or gauge is the prior. This is sort of obvious, clearly, but bringing the argument forward from the start avoids all sorts of misunderstanding and disagreement, in that it excludes the claims of absoluteness and certainty that may come with the production of a posterior distribution. It also removes the endless debate about the determination of the prior, by making each prior a reference on its own, with the additional possibility of calibration by simulation under the assumed model, or under an alternative. Again nothing new there, but I got rather excited by this presentation choice, as it seems to clarify the path to Bayesian modelling and avoid misapprehensions.

Further, the curious case of the Bayes factor (or of the posterior probability) could be resolved most satisfactorily in this framework, as the [dreaded] dependence on the model prior probabilities then becomes a matter of relativity! Those posterior probabilities depend directly and almost linearly on the prior probabilities, but they should not be interpreted in an absolute sense as the ultimate and unique probability of the hypothesis (which anyway does not mean anything in terms of the observed experiment). In other words, this posterior probability need not be scaled against a U(0,1) distribution, or against the p-value if anyone wishes to do so. By the end of the lecture, I was even wondering [not so loudly] whether this perspective allowed for a resolution of the Lindley-Jeffreys paradox, as the resulting number could be set relative to the choice of the [arbitrary] normalising constant.
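To make the near-linear dependence concrete, here is a tiny illustration of mine (the Bayes factor value is arbitrary) of how the posterior probability of the null moves with its prior probability:

```python
# Illustration (my own, arbitrary numbers): posterior probability of H0
# as a function of its prior probability p, for a fixed Bayes factor B10
# of H1 against H0, via posterior odds = prior odds / B10.
import numpy as np

B10 = 3.0                        # arbitrary Bayes factor in favour of H1
p = np.linspace(0.01, 0.99, 5)   # prior probabilities of H0
post = 1.0 / (1.0 + (1.0 - p) / p * B10)  # P(H0 | x)
for prior, posterior in zip(p, post):
    print(f"prior P(H0) = {prior:.2f}  ->  posterior P(H0|x) = {posterior:.2f}")
```

Running it shows the posterior probability sweeping from near 0 to near 1 as the prior probability does, with no absolute scale attached to either end.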
