Archive for Adrian Smith

Adrian Smith on [lack of] Horizon Europe

Posted in pictures, University life on July 6, 2022 by xi'an

sunset on the horizon…

Posted in pictures, Travel, University life on June 6, 2022 by xi'an

“The window for association is closing fast, and we need to ensure that political issues do not get in the way of a sensible solution. We have always been very clear that association is the preferred outcome for protecting decades of collaborative research, and the benefits this has brought to people’s lives across the continent and beyond.” Adrian Smith

As reported in today’s Guardian, one dire impact of BoJo’s Vote Leave Brexit, and of the UK Government failing to implement the Northern Ireland protocol, is the threat it poses to ERC funding for UK scientists: associate membership of Horizon Europe was part of the Brexit negotiations, whose “deal” has been delayed as a result. About a hundred new ERC grant recipients currently located in the UK have to either relocate to (eager) EU universities or give up this most prestigious funding…

Bayesian sampling without tears

Posted in Books, Kids, R, Statistics on May 24, 2022 by xi'an

Following a question on Stack Overflow trying to replicate a figure from the 1990 paper by Alan Gelfand and Adrian Smith in The American Statistician, Bayesian sampling without tears, which precedes their historical MCMC papers, I looked at the R code produced by the OP and could not spot why their simulation did not fit the posterior produced in the paper. The paper proposes acceptance-rejection and sampling-importance-resampling as two solutions to approximately simulate from the posterior, the latter being illustrated by simulations from the prior weighted by the likelihood… The illustration involves 3 observations, each the sum of two Binomial variates with different success probabilities, θ¹ and θ², under a Uniform prior on both.

for (i in 1:N)                    # loop over the N prior draws (the1[i], the2[i])
  for (k in 1:3)                  # loop over the three observations
    for (j in max(0, y[k]-n2[k]):min(y[k], n1[k]))   # support of the convolution
      lik[i] <- lik[i] + dbinom(j, n1[k], the1[i]) * dbinom(y[k]-j, n2[k], the2[i])

To double-check, I also wrote a Gibbs version:

for (t in 1:(T-1)){    # Gibbs iterations
   for (j in 1:3){     # update the latent split of each observation

which did not show any difference from the above, nor from the likelihood surface.
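Putting the pieces together, here is a self-contained sketch of the sampling-importance-resampling step for this two-Binomial model; the data values y, n1, n2 below are made up for illustration and are not the paper's:

```r
set.seed(1)
N  <- 1e4
y  <- c(5, 6, 4)                   # hypothetical observations (sums of two Binomials)
n1 <- c(8, 8, 8); n2 <- c(6, 6, 6) # hypothetical Binomial sizes
the1 <- runif(N); the2 <- runif(N) # Uniform prior draws of (θ¹, θ²)
lik <- rep(1, N)
for (k in 1:3){                    # likelihood of y[k] = Bin(n1,θ¹) + Bin(n2,θ²)
  pk <- numeric(N)
  for (j in max(0, y[k]-n2[k]):min(y[k], n1[k]))
    pk <- pk + dbinom(j, n1[k], the1) * dbinom(y[k]-j, n2[k], the2)
  lik <- lik * pk
}
idx <- sample(N, N, replace=TRUE, prob=lik)  # resample proportionally to the likelihood
post1 <- the1[idx]; post2 <- the2[idx]       # approximate posterior sample
```

The resampled pairs (post1, post2) then approximate the posterior, which is what the figure in the paper displays.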

Adrian Smith to head British replacement of ERC

Posted in Books, pictures, Statistics, University life on April 14, 2019 by xi'an

Just read in Nature today that Adrian Smith (of MCMC fame!) was to head the search for a replacement of the ERC and Marie Curie research funding in the UK. Adrian, whom I first met in Sherbrooke, Québec, in June 1989, when he delivered one of his first talks on MCMC, is currently the director of the Alan Turing Institute in London, of which Warwick is a constituent. (Just for the record, Chris Skidmore is the current Science minister in Theresa May’s government and here is what he states, and maybe even thinks, about her Brexit deal: “It’s fantastic for science, it’s fantastic for universities, it’s fantastic for collaboration”.) I am actually surprised at the notion of building a local alternative to the ERC when the ERC includes many countries outside the European Union and even outside Europe…

recycling Gibbs auxiliaries

Posted in Books, pictures, Statistics, University life on December 6, 2016 by xi'an

wreck of the S.S. Dicky, Caloundra beach, Qld, Australia, Aug. 19, 2012

Luca Martino, Victor Elvira and Gustau Camps-Valls have arXived a paper on recycling for Gibbs sampling. The argument therein is to take advantage of all the simulations produced by the MCMC step for one full conditional, towards improving estimation if not convergence. The context is thus one where Metropolis-within-Gibbs operates, with several (M) iterations of the corresponding Metropolis run instead of only one (which remains valid from a theoretical perspective). While there are arguments for augmenting those iterations, as recalled in the paper, I am not a big fan of running a fixed number M of iterations, as this does not better approximate simulation from the exact full conditional, and even if this approximation were perfect, the goal remains simulating from the joint distribution. As such, multiplying the number of Metropolis iterations does not necessarily improve the convergence rate, it only brings it closer to the standard Gibbs rate. Moreover, the improvement varies with the chosen component, meaning that the different full conditionals have different characteristics that produce various levels of variance reduction:

  • if the targeted expectation only depends on one component of the Markov chain, multiplying the number of simulations for the other components has no clear impact, except for increasing computing time;
  • if the corresponding full conditional is very concentrated, repeated simulations produce quasi-repetitions, hence no gain.
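For concreteness, here is a minimal sketch of Metropolis-within-Gibbs with M inner steps per full conditional, on a toy bivariate Gaussian target; the target, proposal scale, and tuning are my own illustration, not the paper's:

```r
set.seed(2)
Tmc <- 5e3; M <- 5                 # chain length and number of inner Metropolis steps
rho <- 0.8                         # toy target: standard bivariate Normal, correlation rho
x <- matrix(0, Tmc, 2)
for (t in 2:Tmc){
  cur <- x[t-1, ]
  for (d in 1:2){                  # sweep over the two components
    mu <- rho * cur[3-d]           # full conditional: N(rho*other, 1-rho^2)
    for (m in 1:M){                # M random-walk Metropolis steps on this conditional
      prop <- cur[d] + rnorm(1, 0, .5)
      logr <- dnorm(prop,   mu, sqrt(1-rho^2), log=TRUE) -
              dnorm(cur[d], mu, sqrt(1-rho^2), log=TRUE)
      if (log(runif(1)) < logr) cur[d] <- prop
    }                              # only the last value is kept; the M-1 others
  }                                # are the auxiliaries the recycling paper reuses
  x[t, ] <- cur
}
```

Only the final value of each inner loop enters the chain, which is exactly the waste that the recycling argument addresses.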

The only advantage in computing time that I can see at this stage is when constructing the MCMC sampler for the full proposal is much more costly than repeating MCMC iterations, which are then almost free and contribute to the reduction of the variance of the estimator.

This analysis of MCMC-within-Gibbs strategies reminds me of a recent X validated question, which was about the proper degree of splitting simulations between a marginal and the corresponding conditional in the chain rule, the optimal balance being, in my opinion, dependent on the relative variances of the conditional expectations.

A last point is that recycling in the context of simulation and Monte Carlo methodology makes me immediately think of Rao-Blackwellisation, which is surprisingly absent from the current paper. Rao-Blackwellisation was introduced to the MCMC literature and to the MCMC community in the first papers of Alan Gelfand and Adrian Smith, in 1990. While it does not always produce a major gain in Monte Carlo variability, it remains a generic way of recycling auxiliary variables, as shown, e.g., in the recycling paper we wrote with George Casella in 1996, one of my favourite papers.
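As a reminder of what Rao-Blackwellisation buys, here is a toy sketch of my own (not from either paper): a two-stage Gibbs sampler on a bivariate Normal, where E[X¹] is estimated both by the raw average of the draws and by averaging the conditional expectations E[X¹|X²], the Rao-Blackwellised version:

```r
set.seed(3)
Tmc <- 1e4; rho <- 0.9
x1 <- x2 <- numeric(Tmc)
for (t in 2:Tmc){                # exact Gibbs on a toy bivariate Normal, correlation rho
  x1[t] <- rnorm(1, rho * x2[t-1], sqrt(1-rho^2))
  x2[t] <- rnorm(1, rho * x1[t],  sqrt(1-rho^2))
}
est_raw <- mean(x1)              # empirical average of the X¹ draws
est_rb  <- mean(rho * x2)        # Rao-Blackwellised: average of E[X¹|X²] = rho*X²
```

Both estimators target E[X¹]=0; the second replaces each draw by its conditional expectation, which is the generic recycling of auxiliary variables alluded to above (and which, as said, does not always reduce the Monte Carlo variance for dependent chains).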
