**F**ollowing a question on Stack Overflow trying to replicate a figure from the paper written by Adrian Smith and Alan Gelfand (1992) for *The American Statistician*, Bayesian statistics without tears, which followed their historical MCMC paper, I looked at the R code produced by the OP and could not spot why their simulation did not fit the posterior produced in the paper. The paper proposes acceptance-rejection and sampling-importance-resampling (SIR) as two solutions to approximately simulate from the posterior, the latter being illustrated by simulations from the prior being weighted by the likelihood… The illustration is made of 3 observations, each the sum of two Binomial variates with different success probabilities, θ₁ and θ₂, under a Uniform prior on both.

The core of the code computes the likelihood of each prior draw by summing the Binomial convolution over the latent first component:

```r
# likelihood weight of each prior draw theta[i,], where
# y[k] = x1 + x2 with x1 ~ Bin(n1[k], theta1) and x2 ~ Bin(n2[k], theta2)
l <- rep(1, N)
for (i in 1:N)
  for (k in 1:3){
    llh <- 0
    # support of the latent x1: max(0, y-n2) <= j <= min(y, n1)
    for (j in max(0, y[k]-n2[k]):min(y[k], n1[k]))
      llh <- llh + choose(n1[k], j)*choose(n2[k], y[k]-j)*
        theta[i,1]^j*(1-theta[i,1])^(n1[k]-j)*
        theta[i,2]^(y[k]-j)*(1-theta[i,2])^(n2[k]-y[k]+j)
    l[i] <- l[i]*llh
  }
```
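The SIR step then reduces to resampling the prior draws with probabilities proportional to these weights; a minimal sketch (the function name `sir_resample` is mine, assuming `theta` stores the prior draws row-wise and `l` the corresponding likelihood weights):

```r
# resampling step of sampling-importance-resampling: draw M rows of
# theta with replacement, with probabilities proportional to the weights l
sir_resample <- function(theta, l, M)
  theta[sample(nrow(theta), M, replace=TRUE, prob=l/sum(l)), , drop=FALSE]
```

With flat weights this is a plain bootstrap resample of the prior draws; the more the likelihood concentrates, the fewer distinct rows survive.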

To double-check, I also wrote a Gibbs version:

```r
# Gibbs sampler completing each y[j] into (x1[j], y[j]-x1[j])
theta <- matrix(runif(2), nrow=T, ncol=2)
x1 <- rep(NA, 3)
for (t in 1:(T-1)){
  for (j in 1:3){
    a <- max(0, y[j]-n2[j]):min(y[j], n1[j])
    # a[sample.int(...)] avoids sample()'s 1:a behaviour when a has length one
    x1[j] <- a[sample.int(length(a), 1,
      prob=choose(n1[j], a)*choose(n2[j], y[j]-a)*
        theta[t,1]^a*(1-theta[t,1])^(n1[j]-a)*
        theta[t,2]^(y[j]-a)*(1-theta[t,2])^(n2[j]-y[j]+a))]
  }
  theta[t+1,1] <- rbeta(1, sum(x1)+1, sum(n1)-sum(x1)+1)
  theta[t+1,2] <- rbeta(1, sum(y)-sum(x1)+1, sum(n2)-sum(y)+sum(x1)+1)
}
```
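For reference, the full conditionals used above follow from completing each yₖ into (x₁ₖ, yₖ−x₁ₖ); under the Uniform priors they are the standard

$$\mathbb{P}(x_{1k}=j\mid\theta,y_k)\propto\binom{n_{1k}}{j}\binom{n_{2k}}{y_k-j}\,\theta_1^{j}(1-\theta_1)^{n_{1k}-j}\,\theta_2^{y_k-j}(1-\theta_2)^{n_{2k}-y_k+j}$$

$$\theta_1\mid x \sim \mathcal{B}e\Big(1+\sum_k x_{1k},\,1+\sum_k (n_{1k}-x_{1k})\Big),\qquad
\theta_2\mid x \sim \mathcal{B}e\Big(1+\sum_k (y_k-x_{1k}),\,1+\sum_k (n_{2k}-y_k+x_{1k})\Big)$$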

which did not show any difference from the sampling-importance-resampling output, nor from the likelihood surface, which, under the Uniform prior, is proportional to the posterior.
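As a side note, the quality of the sampling-importance-resampling approximation hinges on the weights not being too uneven; a generic importance-sampling diagnostic (not in the paper) is the effective sample size of the weights `l`:

```r
# effective sample size of a vector of (unnormalised) importance weights:
# equals N when all weights are equal, 1 when a single draw dominates
ess <- function(l){
  w <- l/sum(l)
  1/sum(w^2)
}
```

An `ess(l)` much smaller than N signals that the Uniform prior is a poor proposal for this posterior and that more prior draws are needed.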