Archive for Bristol

more positions in the UK [postdoc & professor]

Posted in Statistics on October 13, 2017 by xi'an

I have received additional emails from England advertising positions in Bristol, Durham, and London, so here they are, with links to the complete advertisements!

  1. The University of Bristol is seeking to appoint a number of Chairs in any area of Mathematics or Statistical Science, in support of a major strategic expansion of the School of Mathematics. Deadline is December 4.
  2. Durham University is opening a newly created position of Professor of Statistics, with research and teaching duties. Deadline is November 6.
  3. Oliver Ratmann, in the Department of Mathematics at Imperial College London, is seeking a Research Associate in Statistics and Pathogen Phylodynamics. Deadline is October 30.

position in Bristol

Posted in pictures, Running, Statistics, Travel, University life on October 4, 2017 by xi'an

There is [also] an opening for a Lecturer, Senior Lecturer, or Reader at the University of Bristol, with deadline 27 November 2017. The School of Mathematics, and in particular the Institute for Statistical Science, is quite active in research, with top rankings and a wide range of expertise, while [based on personal experience] the City of Bristol is a great place to live! (Details through the links.)

position in Bristol

Posted in Statistics, University life on July 19, 2016 by xi'an

[Clifton Bridge, Bristol, Sept. 24, 2012]

There is an opening for a Lecturer (i.e., assistant/associate professor) position in Statistical Science at the University of Bristol (School of Mathematics), with deadline August 7. Please contact Professor Christophe Andrieu for more details.

read paper [in Bristol]

Posted in Books, pictures, Statistics, Travel, University life on January 29, 2016 by xi'an

Clifton & Durdham Downs, Bristol, Sept. 25, 2012I went to give a seminar in Bristol last Friday and I chose to present the testing with mixture paper. As we are busy working on the revision, I was eagerly looking for comments and criticisms that could strengthen this new version. As it happened, the (Bristol) Bayesian Cake (Reading) Club had chosen our paper for discussion, two weeks in a row!, hence the title!, and I got invited to join the group the morning prior to the seminar! This was, of course, most enjoyable and relaxed, including an home-made cake!, but also quite helpful in assessing our arguments in the paper. One point of contention or at least of discussion was the common parametrisation between the components of the mixture. Although all parametrisations are equivalent from a single component point of view, I can [almost] see why using a mixture with the same parameter value on all components may impose some unsuspected constraint on that parameter. Even when the parameter is the same moment for both components. This still sounds like a minor counterpoint in that the weight should converge to either zero or one and hence eventually favour the posterior on the parameter corresponding to the “true” model.

Another point raised during the discussion was the behaviour of the method under misspecification, i.e., in an M-open framework: when neither model is correct, does the weight still converge to the boundary associated with the closest model (as I believe), or does a convexity argument produce a non-zero weight as its limit (as hinted by one example in the paper)? I had thought very little about this and hence had just as little to argue, though, as this does not sound to me like the primary reason for conducting tests, especially in a Bayesian framework. If one is uncertain about both models to be compared, one should have an alternative at the ready! Or use a non-parametric version, which is a direction we need to explore further before deciding it is coherent and convergent!

A third point of discussion was my argument that mixtures allow us to rely on the same parameter and hence the same prior, whether proper or not, while Bayes factors are less clearly open to this interpretation. This was not uniformly accepted!

Thinking afresh about this approach also led me to broaden my perspective on the use of the posterior distribution of the weight(s) α: while previously I had taken those weights mostly as a proxy for the posterior probabilities, to be calibrated by pseudo-data experiments, as for instance in Figure 9, I now perceive them primarily as the portion of the data in agreement with the corresponding model [or hypothesis], and more importantly as a way of staying away from a Neyman-Pearson-like decision. Or error evaluation. Usually, when asked about the interpretation of the output, my answer is to compare the behaviour of the posterior on the weight(s) with the posteriors associated with samples from each model. Which does sound somewhat similar to posterior predictives if the samples are simulated from the associated predictives. But the issue was not raised during the visit to Bristol, which possibly reflects on how unfrequentist the audience was [and the Statistics group is], as it apparently accepted with no further ado the use of a posterior distribution as a soft assessment of the comparative fits of the different models. If not necessarily agreeing on the need to conduct hypothesis testing (especially in the case of the Pima Indian dataset!).

Sherlock [#3]

Posted in Books on March 14, 2015 by xi'an

After watching the first two seasons of the BBC TV series Sherlock while at the hospital, I found myself looking forward to further adventures of Holmes and Watson and eventually “bought” the third season. And watched it over the past weekends. I liked it very much, as this new season moved away from the sheer depiction of Sherlock’s amazing powers towards a quite ironic and self-parodic story, well in tune with a third season where the audience is by now utterly familiar with the main characters. They all put on weight (mostly figuratively!), from Sherlock’s acknowledgement of his psychological shortcomings, to Mrs. Hudson’s revealing her drug-trafficking past and expressing her dislike of Mycroft, to John Watson’s engagement and acceptance of Sherlock’s idiosyncrasies, making him the central character of the series as a sort of fatherly figure. Some new characters are also terrific, including Mary Morstan and the new archvillain, C.A. Magnussen. Paradoxically, this makes the detective part of the stories secondary, which is all for the best as, in my opinion, the plots are rather weak and the resolutions hardly rely on high intellectual powers, albeit always surprising ones. More sleuthing in the next season would be most welcome! As an aside, the wedding venue looked somewhat familiar to me, until I realised it was Goldney Hall, where the recent workshops I attended in Bristol took place.

high-dimensional stochastic simulation and optimisation in image processing [day #3]

Posted in pictures, Statistics, Travel, University life on August 31, 2014 by xi'an

Last and maybe most exciting day of the “High-dimensional Stochastic Simulation and Optimisation in Image Processing” workshop in Bristol, as it was exclusively about simulation (MCMC) methods. Except my own talk on ABC. And Peter Green’s on consistency of Bayesian inference in non-regular models. The talks today were indeed about using convex optimisation devices to speed up MCMC algorithms, with tools that were entirely new to me, like the Moreau transform discussed by Marcelo Pereyra. Or using auxiliary variables à la RJMCMC to bypass expensive Cholesky decompositions. Or optimisation steps from a dual space back to the original space, for the same reason. Or using pseudo-gradients on partly differentiable functions, in the talk by Sylvain Le Corff on a paper commented earlier in the ‘Og. I particularly liked the notion of Moreau regularisation, which leads to more efficient Langevin algorithms when the target is not regular enough: the discretised diffusion itself may then be geometrically ergodic, even without the corrective step of the Metropolis-Hastings acceptance. This obviously raises the question of an extension to Hamiltonian Monte Carlo. And to multimodal targets, possibly requiring as many normalisation factors as there are modes. So, in the end, a highly informative workshop, with the perfect size and the perfect crowd (which happened to be predominantly French, albeit from a community I had not had the opportunity to interact with previously). Massive kudos to Marcelo for putting this workshop together, especially in a week when major happy family events should have kept him at home!
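For the curious, here is a one-dimensional sketch of what a Moreau-regularised Langevin move looks like, in the spirit of the proximal MCMC talks if not any speaker's actual code: the non-smooth potential |x| of a standard Laplace target is replaced by its Moreau envelope, whose gradient only involves a proximal (soft-thresholding) operator and is Lipschitz, so the discretised diffusion behaves well even without a Metropolis correction. The step size and the regularisation parameter are illustrative choices of mine.

# Minimal sketch of a Moreau-regularised (unadjusted) Langevin sampler on the
# non-smooth target pi(x) ∝ exp(-|x|): |x| is replaced by its Moreau envelope,
# whose gradient (x - prox(x))/lam is Lipschitz. Step size delta and envelope
# parameter lam are illustrative, not taken from any of the talks.
import numpy as np

rng = np.random.default_rng(1)

def prox_abs(x, lam):
    """Proximal operator of lam*|.|, i.e. soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def moreau_langevin(n_iter=100_000, delta=0.05, lam=0.05, x0=0.0):
    """Unadjusted Langevin dynamics on the Moreau envelope of |x|."""
    x = x0
    chain = np.empty(n_iter)
    for k in range(n_iter):
        grad = (x - prox_abs(x, lam)) / lam   # gradient of the envelope
        x = x - delta * grad + np.sqrt(2 * delta) * rng.standard_normal()
        chain[k] = x
    return chain

chain = moreau_langevin()
# the Laplace(0,1) target has variance 2; the chain should come close to it,
# up to the discretisation and regularisation biases
print("empirical variance:", chain.var())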

As the workshop ended in mid-afternoon, I had plenty of time for a long run with Florence Forbes down to the Avon river and back up among the deer of Ashton Court, avoiding most of the rain, all of the mountain bikes on a bike trail that sounded like trail-running practice, and building enough of an appetite for the South Indian cooking of the nearby Thali Café. Brilliant!

high-dimensional stochastic simulation and optimisation in image processing [day #2]

Posted in pictures, Statistics, Travel, Uncategorized, University life, Wines on August 30, 2014 by xi'an

After a nice morning run down Leigh Woods and along the muddy banks of the Avon river, I attended a morning session on hyperspectral image non-linear modelling, a topic about which I knew nothing beforehand. Hyperspectral images are 3-D images recorded at many wavelengths, so that each pixel can be classified as a mixture of several pure elements. The non-linearity is due to multiple reflections from the ground as well as to imperfections in the data collection. I found this new setting of clear interest, from using mixtures to exploring Gaussian processes and Hamiltonian Monte Carlo techniques on constrained spaces… Not to mention the “debate” about using Bayesian inference versus optimisation. It was overall a day of discovery, as I was unfamiliar with the image processing community (being the outlier in this workshop!) and its techniques. The problems mostly qualify as partly linear, high-dimensional inverse problems, with rather standard, if sometimes hybrid, MCMC solutions. (The day ended even more nicely with another long run in the fields of Ashton Court and a conference dinner by the river…)
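For reference, here is a sketch of the linear mixing model that these non-linear approaches generalise: a pixel spectrum y is a convex combination of a few pure “endmember” spectra plus noise, with abundances constrained to the simplex (non-negative, summing to one). The endmembers, the noise level, and the crude least-squares-then-project estimator below are purely illustrative, not any speaker’s method.

# Minimal sketch of the LINEAR mixing model behind hyperspectral unmixing:
# y = M a + noise, with abundances a on the simplex. Endmember matrix M,
# noise level, and the estimator are made up for illustration.
import numpy as np

rng = np.random.default_rng(2)
n_bands, n_endmembers = 50, 3

M = rng.random((n_bands, n_endmembers))      # hypothetical endmember spectra
a_true = np.array([0.6, 0.3, 0.1])           # abundances on the simplex
y = M @ a_true + 0.01 * rng.standard_normal(n_bands)   # observed pixel

# unconstrained least squares, then Euclidean projection onto the simplex
a_ls, *_ = np.linalg.lstsq(M, y, rcond=None)

def project_simplex(v):
    """Euclidean projection onto {a : a >= 0, sum(a) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    tau = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + tau, 0.0)

a_hat = project_simplex(a_ls)
print("true:", a_true, "estimated:", a_hat.round(3))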