Archive for Gibbs for Kids

no country for old liars

Posted in Kids, R on March 30, 2019 by xi'an

A puzzle from the Riddler about a group of five persons, A,…,E, where all and only those strictly older than some age L are liars, each making statements about the others’ ages:

  1. A: B>20 and D>16
  2. B: C>18 and E<20
  3. C: D<22 and A=19
  4. D: E≠17 and B=20
  5. E: A>21 and C<18

The Riddler asks for the (integer) value of L and for the values or ranges of A,…,E. After thinking about this puzzle over a swimming session, I coded the (honest) constraints and their (lying) complements as binary matrices, restricting L to 8 values, from 0 (15) to 7 (22), and each of A,…,E to 7 values, from 1 (16) to 7 (22):

CA=CB=CC=CD=CE=A=B=C=D=E=matrix(1,5,7) #rows: persons A-E, columns: ages 16-22
#constraints
A[2,1:(20-15)]=A[4,1]=0 #A honest
CA[2,(21-15):7]=CA[4,2:7]=0 #A lying
B[3,1:(18-15)]=B[5,(20-15):7]=0 #B honest
CB[3,(19-15):7]=CB[5,1:(19-15)]=0 #B lying
C[1,-(19-15)]=C[4,7]=0 #C honest
CC[1,(19-15)]=CC[4,-7]=0 #C lying
D[5,(17-15)]=D[2,-(20-15)]=0 #D honest
CD[5,-(17-15)]=CD[2,(20-15)]=0 #D lying
E[1,1:(21-15)]=E[3,(18-15):7]=0 #E honest
CE[1,7]=CE[3,1:(17-15)]=0 #E lying

since the term-wise product of five such matrices, one (honest or lying) version per person, expresses all the constraints on the ages, as e.g.

ABCDE=A*CB*CC*D*CE

if A,D≤L and B,C,E>L. I then searched by uniform draws [with a slight Gibbs flavour] for values of the integers satisfying the constraints or their complements, the stopping rule being that the current collection of A,…,E,L produces an ABCDE binary matrix that agrees with all statements modulo the lying status of their authors:

#initialise the ages (codes 1-7 for 16-22) and the liar threshold L (codes 0-7 for 15-22)
yar=sample(1:7,5,replace=TRUE)
L=sample(0:7,1)
#pick the honest or lying version of each constraint matrix, depending on each age versus L
ABCDE=((yar[1]>L)*CA+(yar[1]<=L)*A)*
   ((yar[2]>L)*CB+(yar[2]<=L)*B)*
   ((yar[3]>L)*CC+(yar[3]<=L)*C)*
   ((yar[4]>L)*CD+(yar[4]<=L)*D)*
   ((yar[5]>L)*CE+(yar[5]<=L)*E)
#resample until every current age is allowed by the combined constraints
while (min(diag(ABCDE[,yar]))==0){
   L=sample(0:7,1);idx=sample(1:5,1)
   pos=which(ABCDE[idx,]>0) #ages still allowed for person idx
   if (length(pos)>0) yar[idx]=pos[sample(length(pos),1)] #safe even when pos has a single entry
   ABCDE=((yar[1]>L)*CA+(yar[1]<=L)*A)*
      ((yar[2]>L)*CB+(yar[2]<=L)*B)*
      ((yar[3]>L)*CC+(yar[3]<=L)*C)*
      ((yar[4]>L)*CD+(yar[4]<=L)*D)*
      ((yar[5]>L)*CE+(yar[5]<=L)*E)
    }

which always produces L=18, A=19, B=20, C=18, D=16, and E>19 as the unique solution (also reported by The Riddler).

> ABCDE
     [,1] [,2] [,3] [,4] [,5] [,6] [,7]
[1,]    0    0    0    1    0    0    0
[2,]    0    0    0    0    1    0    0
[3,]    0    0    1    0    0    0    0
[4,]    1    0    0    0    0    0    0
[5,]    0    0    0    0    1    1    1
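
For the record, the admissible ages can be read off this final matrix, column j standing for age 15+j, e.g. via the one-liner

apply(ABCDE,1,function(z) 15+which(z>0)) #admissible ages for A,...,E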

Gibbs for incompatible kids

Posted in Books, Statistics, University life on September 27, 2018 by xi'an

In continuation of my earlier post on Bayesian GANs, which resort to strongly incompatible conditionals, I read a 2015 paper by Chen and Ip that I had missed. (Published in the Journal of Statistical Computation and Simulation, which I first confused with JCGS and which I do not know at all. Actually, when looking at its editorial board, I recognised only one name.) But the study therein is quite disappointing and not that helpful, as it considers Markov chains on finite state spaces, meaning that the transition distributions are matrices, and hence that convergence is ensured whenever these matrices have no null probability term. And while the paper is motivated by realistic situations where incompatible conditionals can reasonably appear, it only produces illustrations on two- and three-state Markov chains. Not that helpful, in the end… The game is still afoot!
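
As a quick illustration of that finite-state point (my own toy sketch, with made-up conditional probabilities, not an example taken from the paper), a two-variable Gibbs sampler driven by two incompatible binary conditionals still settles on a unique limiting distribution, since the resulting transition matrix has no null entry:

#two incompatible conditionals on binary X and Y (hypothetical values, for illustration only)
pXgY=matrix(c(.7,.3,.4,.6),2,2,byrow=TRUE) #row y+1: P(X=0|y), P(X=1|y)
pYgX=matrix(c(.2,.8,.9,.1),2,2,byrow=TRUE) #row x+1: P(Y=0|x), P(Y=1|x)
T=1e5;x=y=integer(T) #chain started at x=y=0
for (t in 2:T){
  x[t]=sample(0:1,1,prob=pXgY[y[t-1]+1,])
  y[t]=sample(0:1,1,prob=pYgX[x[t]+1,])}
table(x,y)/T #empirical limiting distribution of the (incompatible) Gibbs chain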

Gibbs for kidds

Posted in Books, Kids, Statistics, University life on February 12, 2018 by xi'an

 

A chance (?) question on X validated brought me to re-read Gibbs for Kids, 25 years after it was written (by my close friends George and Ed). The originator of the question had difficulties with the implementation, apparently missing the cyclic pattern of the sampler, as in equations (2.3) and (2.4), and with the convergence, which is only processed for a finite support in the American Statistician paper. The paper [which did not appear in the American Statistician under this title!, but inspired an animal breeder, Dan Gianola, to write a “Gibbs for pigs” presentation in 1993 at the 44th Annual Meeting of the European Association for Animal Production, Aarhus, Denmark!!!] most appropriately only contains toy examples, since those can be processed and compared to known stationary measures. This is for instance the case for the auto-exponential model

f(x,y) \propto \exp(-xy)

which is only defined as a probability density on a compact support. (The paper does not identify the model as a special case of the auto-exponential model, which apparently made the originator of that model, Julian Besag in 1974, unhappy, as George and I found out many years later when visiting Bath, where Julian was spending the final year of his life.) I use the limiting case all the time in class to point out that a Gibbs sampler can be devised and operate without a stationary probability distribution. However, being picky!, I would like to point out that, contrary to a comment made in the paper, the Gibbs sampler does not “fail” but on the contrary still “converges” in this case, in the sense that a conditional ergodic theorem applies, i.e., the ratio of the frequencies of visits to two sets A and B with finite measure converges to the ratio of these measures. For instance, running the Gibbs sampler for 10⁶ steps and checking the relative frequencies of x’s in (1,2) and (1,3) gives 0.685, versus log(2)/log(3)=0.63, since 1/x is the stationary measure. One important and influential feature of the paper is to stress that proper conditionals do not imply proper joints. George would work much further on that topic, in particular with his PhD student at the time, my friend Jim Hobert.
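
For the curious, here is a minimal reconstruction of that experiment (my own sketch, with an arbitrary starting value and a systematic x-then-y scan, not the paper's code), exploiting the fact that both full conditionals are Exponential:

#Gibbs sampler for the improper target f(x,y) proportional to exp(-xy)
#full conditionals: X|y ~ Exp(y) and Y|x ~ Exp(x)
T=1e6;x=y=rep(1,T)
for (t in 2:T){
  x[t]=rexp(1,rate=y[t-1])
  y[t]=rexp(1,rate=x[t])}
sum(1<x&x<2)/sum(1<x&x<3) #to compare with log(2)/log(3)=0.63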

With regard to the convergence issue, Gibbs for Kids points to Schervish and Carlin (1990), which came quite early, considering that Gelfand and Smith published their initial paper the very same year, but which also adopts a functional approach to convergence, along the paper’s fixed-point perspective, somehow complicating the matter. Later papers by Tierney (1994), Besag (1995), and Mengersen and Tweedie (1996) considerably simplified the answer, which is that irreducibility is a necessary and sufficient condition for convergence. (Incidentally, the reference list includes a technical report of mine on latent variable model MCMC implementation that never got published.)
