Archive for Gibbs sampling

fine-sliced Poisson [a.k.a. sashimi]

Posted in Books, Kids, pictures, R, Running, Statistics, University life on March 20, 2014 by xi'an

As my student Kévin Guimard had not mailed me his slice sampler code for the Poisson distribution, I could not tell why it was not working! My earlier post prompted him to do so, and a somewhat optimised version is given below:

nsim = 10^4    # chain length
lambda = 6     # Poisson mean

# largest k such that (x+1)(x+2)...k <= 1/u, i.e. the upper end of the
# slice induced by the 1/x! factor (built incrementally, to avoid
# factorial overflows)
max.factorial = function(x, u){
  k = x
  parf = 1
  while (parf * u < 1){
    k = k + 1
    parf = parf * k
  }
  k = k - (parf * u > 1)
  return(k)
}

x = rep(floor(lambda), nsim)
for (t in 2:nsim){
  # lower end of the slice, induced by the lambda^x factor
  v1 = ceiling((log(runif(1)) / log(lambda)) + x[t-1])
  ranj = max(0, v1):max.factorial(x[t-1], runif(1))
  x[t] = sample(ranj, size = 1)
}
# empirical frequencies (sienna) against the Poisson pmf (gold)
barplot(as.vector(rbind(
  table(x) / length(x),
  dpois(min(x):max(x), lambda))), col = c("sienna", "gold"))

As you can easily check by running the code, it does not work. My student actually came top of my MCMC class, and he spent quite a while pondering why the code was not working. I pondered as well for part of a morning in Warwick, removing possible causes of exponential or factorial overflows (hence the shape of the code), but without eliciting the issue… (This now sounds like lethal fugu sashimi!) Before reading any further, can you spot the problem?!

The corrected R code is as follows:

x = rep(lambda, nsim)
for (t in 2:nsim){
  v1 = ceiling((log(runif(1)) / log(lambda)) + x[t-1])
  ranj = max(0, v1):max.factorial(x[t-1], runif(1))
  # sample(n, 1) on a length-one numeric n draws from 1:n, so a
  # singleton range must be treated separately
  if (length(ranj) > 1){
    x[t] = sample(ranj, size = 1)
  } else {
    x[t] = ranj
  }
}
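An alternative to testing the length of ranj is the resample wrapper suggested in the Examples section of R's own help page for sample, which indexes into the vector instead of letting sample reinterpret a length-one numeric argument:

```r
# from the Examples of ?sample: indexing into x avoids the special rule
# by which sample(n, 1), for a single positive number n, draws from 1:n
resample <- function(x, ...) x[sample.int(length(x), ...)]

resample(150:150, 1)  # always returns 150
```

With this helper the corrected loop can simply use x[t] = resample(ranj, size = 1), without the branch.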

The culprit is thus the R function sample, which simply does not understand Dirac masses or the basics of probability! When running

> sample(150:150,1)
[1] 23

you can clearly see where the problem lies: when sample is given a single positive number n as its first argument, it draws from 1:n rather than from {n}. This is a well-documented quirk of sample that has already caused me woes… Another interesting feature of this slice sampler is that it is awfully slow at exploring the tails, and at returning to the centre from the tails. This is not very pronounced in the above graph, with a mean of 6; moving the mean to 50 makes it more apparent:

This is due to the poor mixing of the chain, as shown by the raw sequence below, which strives to achieve a single cycle out of 10⁵ iterations! In any case, thanks to Kévin for an interesting morning!


Bayesian multimodel inference by RJMCMC: A Gibbs sampling approach

Posted in Books, pictures, Statistics, Travel, University life on December 27, 2013 by xi'an

Barker (from the lovely city of Dunedin) and Link published a paper in The American Statistician last September that I missed, as I missed their earlier email about the paper since it arrived The Day After… The paper is about a new specification of RJMCMC, almost twenty years after Peter Green’s (1995) introduction of the method. The authors use the notion of a palette, “from which all model specific parameters can be calculated” (in a deterministic way). One can see the palette ψ as an intermediary step in the move between two models. This reduces the number of bijections, if not the construction of the dreaded Jacobians!, but forces the construction of pseudo-priors on the unessential parts of ψ for every model. Because the dimension of ψ is fixed, a Gibbs sampler interleaving the model index and the palette value is then implementable. The conditional of the model index given the palette is available, provided there are not too many models in competition, with the probabilities recyclable towards a Rao-Blackwell approximation of the model probability. I wonder whether another Rao-Blackwellisation is possible, namely drawing from all the simulated palettes a sample for the parameter of an arbitrarily chosen model.
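To fix ideas, here is a toy, Carlin-and-Chib-flavoured sketch of such an interleaving Gibbs sampler, with a one-dimensional palette ψ shared by M1: y ~ N(ψ, 1), ψ ~ N(0, 1), and M2: y ~ N(0, 1), where ψ is unused; the models, the pseudo-prior, and all numerical choices are mine, not the authors’:

```r
set.seed(1)
y <- rnorm(30, mean = 0.5)            # toy data
n <- length(y)
post_m <- n * mean(y) / (n + 1)       # posterior mean of psi under M1
post_s <- sqrt(1 / (n + 1))           # posterior sd of psi under M1
nsim <- 1e4
M <- integer(nsim); M[1] <- 1L        # model index
psi <- numeric(nsim)                  # one-dimensional palette
for (t in 2:nsim){
  # psi given model: exact conditional under M1, pseudo-prior under M2
  # (pseudo-prior deliberately set to the M1 posterior, so both coincide)
  psi[t] <- rnorm(1, post_m, post_s)
  # model given palette, with equal prior model probabilities
  a1 <- sum(dnorm(y, psi[t], 1, log = TRUE)) + dnorm(psi[t], 0, 1, log = TRUE)
  a2 <- sum(dnorm(y, 0, 1, log = TRUE)) + dnorm(psi[t], post_m, post_s, log = TRUE)
  M[t] <- if (runif(1) < 1 / (1 + exp(a2 - a1))) 1L else 2L
}
mean(M == 1)   # crude approximation of p(M1 | y)
```

Averaging the conditional probabilities 1/(1+exp(a2−a1)) across iterations gives the Rao-Blackwellised estimate of the model probability mentioned above.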

running MCMC for too long, and even longer…

Posted in Books, pictures, Running, Statistics, University life on October 23, 2013 by xi'an

Clifton observatory, Clifton, Sept. 24, 2012

Following my earlier post about the young astronomer who feared he was running his MCMC for too long, here is an update from his visit to my office this morning, which proved quite instructive for both of us. (Disclaimer: the picture of an observatory seen from across Brunel’s suspension bridge in Bristol is, as earlier, completely unrelated to the young astronomer!)

First, the reason why he thought the MCMC was running too long was that the acceptance rate was plummeting to zero, whatever the random walk scale. The reason for this behaviour is that he was actually running a standard simulated annealing algorithm, hence observing the stabilisation of the Markov chain in one of the (global) modes of the target function. In that sense, he was right that the MCMC was run for “too long”: there was nothing to expect once the mode had been reached and the temperature turned down to zero. So the algorithm was working correctly.

Second, the astronomy problem he considers has a rather complex likelihood, for which he substituted a distance between the (discretised) observed data and (discretised) simulated data, simulated conditional on the current parameter value. Now… does this ring a bell? If not, here is a three letter clue: ABC… Indeed, the trick he had found to get around this likelihood calculation issue was to re-invent a version of ABC-MCMC! Except that the distance was re-introduced into a regular MCMC scheme as a substitute for the log-likelihood, and compared with the distance at the previous MCMC iteration. This is quite clever, even though this substitution suffers from a normalisation issue (that I already mentioned in the post about Holmes’ and Walker’s idea to turn loss functions into pseudo-likelihoods). Regular ABC does not encounter this difficulty, obviously. I am still bemused by this reinvention of ABC from scratch!
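For the record, here is a minimal sketch of the move he rediscovered, written as standard ABC-MCMC (accept within a fixed tolerance) rather than his distance-for-log-likelihood substitution; the toy model, the summary statistic, the flat prior, and the tolerance are all illustrative choices of mine:

```r
set.seed(1)
y <- rnorm(50, mean = 2)     # toy "observed" data
eps <- 0.1                   # ABC tolerance (illustrative choice)
nsim <- 5000
theta <- numeric(nsim)
theta[1] <- mean(y)          # start at a value with non-negligible acceptance
for (t in 2:nsim){
  prop <- theta[t-1] + rnorm(1, sd = 0.5)  # symmetric random walk proposal
  z <- rnorm(50, mean = prop)              # pseudo-data given the proposal
  # flat prior and symmetric proposal: the MH ratio reduces to the indicator
  # that the simulated summary falls within eps of the observed one
  theta[t] <- if (abs(mean(z) - mean(y)) < eps) prop else theta[t-1]
}
```

The normalisation issue mentioned above disappears here because the distance is only compared with a fixed tolerance, rather than being exponentiated into a pseudo-likelihood.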

So we are now at a stage where my young friend will experiment with (hopefully) correct ABC steps, trying to derive the tolerance value from warmup simulations and use some of the accelerating tricks suggested by Umberto Picchini and Julie Forman to avoid simulating the characteristics of millions of stars for nothing. And we agreed to meet soon for an update. Indeed, a fairly profitable morning for both of us!

running MCMC for too long…

Posted in Books, pictures, Running, Statistics, University life on October 20, 2013 by xi'an

Clifton observatory, Clifton, Sept. 24, 2012

Here is an excerpt from an email I just received from a young astronomer with whom I have had many email exchanges about the nature and implementation of MCMC algorithms, apparently without getting my point across:

The acceptance ratio turn to be good if I used (imposed by me) smaller numbers of iterations. What I think I am doing wrong is the convergence criteria. I am not stopping when I should stop.

To which I replied that he should come (or Skype) and talk with me, as I could not get into enough detail to point out where his analysis went wrong… It may be the case that the MCMC algorithm finds a first mode, explores its neighbourhood (hence a good acceptance rate and green signals for convergence), then wanders away, attracted by other modes. It may also be the case that the code has mistakes. Anyway, you cannot stop a (basic) MCMC algorithm too late or let it run for too long! (Disclaimer: the picture of an observatory seen from across Brunel’s suspension bridge in Bristol is unrelated to the young astronomer!)

sticky Metropolis

Posted in Statistics, University life on September 6, 2013 by xi'an

My former student Roberto Casarin and his colleagues wrote (and arXived) a paper entitled Adaptive sticky generalized Metropolis algorithm. The basic idea is to use some of the rejected and past values of the chain to build an adaptive proposal, the criterion for choosing those values being related to the distance between the target and the proposal at the rejected point. In a sense, it gives a reward to surprising points, i.e. points where the proposal does poorly in approximating the target. On top of this, they include a multiple-try strategy where several values are generated from the current proposal and one of them is selected, to be accepted or rejected in a Metropolis step. The learning set may include several of the proposed (and rejected) values. This paper generalises Holden, Hauge and Holden (AoAP, 2009) and extends their proof of stationarity.

The authors explore at length (the paper is 63 pages long!) the construction of the adaptive proposal distribution. This construction appears to be quite similar to Gilks’ and Wild’s (1993) ARMS algorithm. Hence, unless I missed a generalisation, it seems to me that the solutions are restricted to unidimensional settings. For instance, the authors propose to implement their algorithm for each complex conditional in a Gibbs sampler, meaning starting from scratch and running a large enough number of iterations to “reach” convergence. I also wonder at the correspondence between this construction and the original assumption of a minorisation condition with respect to the target density in the event of an unbounded support. While this paper represents an interesting extension of the automated simulation algorithms of the ARMS type, and while the method is investigated thoroughly by several simulation experiments (in the second half of the paper), I remain somewhat circumspect about the possibility of using ASMTM in complex high-dimensional problems, as the learning cost soars with the dimension.
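As a reference point, the plain (non-adaptive, non-sticky) multiple-try Metropolis step underlying the paper can be sketched as follows, with a symmetric Gaussian proposal so that the trial weights reduce to the target density; all tuning values are illustrative, not the authors’:

```r
set.seed(2)
target <- function(x) dnorm(x)   # toy target: standard normal density
k <- 5                           # number of trials per step
sd_p <- 2                        # scale of the symmetric Gaussian proposal
nsim <- 5000
x <- numeric(nsim)
for (t in 2:nsim){
  ys <- rnorm(k, x[t-1], sd_p)       # k trial values from the proposal
  w <- target(ys)                    # weights reduce to the target density
  y <- sample(ys, 1, prob = w)       # select one trial, prob. proportional to w
  # k-1 reference points drawn around the selected trial, plus the current state
  xs <- c(rnorm(k - 1, y, sd_p), x[t-1])
  alpha <- min(1, sum(w) / sum(target(xs)))  # generalised MH ratio
  x[t] <- if (runif(1) < alpha) y else x[t-1]
}
```

The adaptive sticky version replaces the fixed Gaussian proposal with one built from past rejected values, which is where the learning cost discussed above comes in.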
