A great Bayesian Analysis webinar this afternoon, with well-balanced presentations by Steve MacEachern and John Lewis, and original discussions by Bertrand Clarke and Fabrizio Ruggeri, which attracted 122 participants. I particularly enjoyed Bertrand’s point that likelihoods are more general than models [made in 6 different wordings!] and that this paper is closer to the M-open perspective. I think I eventually got why the approach can be seen as an ABC with ε=0, since the simulated y’s all reproduce the observed statistic, but the presentation did not bring a strong argument in favour of the restricted likelihood approach, given the methodological and computational effort it requires. The discussion also made me wonder whether tools like VAEs could be used to approximate the distribution of T(y) conditional on the parameter θ. This is also an opportunity to thank my friend Michele Guindani for his hard work as Editor of Bayesian Analysis and in particular for keeping the discussion tradition thriving!
Bayesians conditioning on sets of measure zero
Posted in Books, Kids, pictures, Statistics, University life with tags conditional probability, conditioning, cross validated, measure zero set, probability course on September 25, 2018 by xi'an

Although I have already discussed this point repeatedly on this ‘Og, I found myself replying to [yet] another question on X validated about the apparent paradox of conditioning on a set of measure zero, as for instance when computing
P(X=.5 | |X|=.5)
which actually has nothing to do with Bayesian inference or Bayes’ Theorem, but simply questions the definition of conditional probability distributions. The OP was correct in stating that
P(X=x | |X|=x)
was defined up to a set of measure zero. And even that
P(X=.5 | |X|=.5)
could be defined arbitrarily, prior to the observation of |X|. But once |X| is observed, say equal to 0.5, there is zero probability that this value belongs to the set of measure zero on which one defined
P(X=x | |X|=x)
arbitrarily. A point that always proves delicate to explain in class…!
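A minimal R sketch of the standard resolution, assuming for illustration that X is N(1,1) (an arbitrary asymmetric choice, not part of the original question): the natural version of P(X=.5 | |X|=.5) is f(.5)/{f(.5)+f(-.5)}, which a crude simulation conditioning on |X| falling within a small band around .5 recovers.

# assumed illustrative example: X ~ N(1,1), not from the original question
f=function(x) dnorm(x,mean=1)
# natural version of P(X=.5 | |X|=.5) for a continuous density f
exact=f(.5)/(f(.5)+f(-.5))
# crude check: condition on |X| landing in a small band around .5
set.seed(1)
x=rnorm(1e6,mean=1)
eps=1e-3
band=x[abs(abs(x)-.5)<eps]
c(exact=exact,approx=mean(band>0))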
MCMC on zero measure sets
Posted in R, Statistics with tags conditional density, Hastings-Metropolis sampler, Jacobian, MCMC, measure theory, measure zero set, projected measure, random walk on March 24, 2014 by xi'an

Simulating a bivariate normal under the constraint (or conditional on the fact) that x²-y²=1 (a non-linear zero measure curve in the 2-dimensional Euclidean space) is not that easy: if one runs a random walk along that curve (by running a random walk on y, deducing x from x²=y²+1, and accepting with a Metropolis-Hastings ratio based on the bivariate normal density), the outcome differs from the target predicted by a change of variable and the proper derivation of the conditional. The above graph, resulting from the R code below, illustrates the discrepancy!
# target density of y derived by a change of variable (used to check the fit)
targ=function(y){ exp(-y^2)/(1.52*sqrt(1+y^2))}
T=10^5
Eps=3
ys=xs=rep(runif(1),T)
xs[1]=sqrt(1+ys[1]^2)
for (t in 2:T){
  # random walk proposal on y, with x deduced from the constraint
  propy=runif(1,-Eps,Eps)+ys[t-1]
  propx=sqrt(1+propy^2)
  # Metropolis-Hastings acceptance based on the bivariate normal density alone
  ace=(runif(1)<(dnorm(propy)*dnorm(propx))/
    (dnorm(ys[t-1])*dnorm(xs[t-1])))
  if (ace){
    ys[t]=propy;xs[t]=propx
  }else{
    ys[t]=ys[t-1];xs[t]=xs[t-1]}}
If instead we add the proper Jacobian as in
ace=(runif(1)<(dnorm(propy)*dnorm(propx)/propx)/
  (dnorm(ys[t-1])*dnorm(xs[t-1])/xs[t-1]))
the fit is there. My open question is how to make this derivation generic, i.e. without requiring the (dreaded) computation of the (dreadful) Jacobian.
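As a quick check (a sketch, assuming the chain ys has been regenerated with the corrected acceptance ratio above), the histogram of the simulated y’s can be compared with the targ density defined at the top of the code:

# visual check of the fit against the change-of-variable target
hist(ys,breaks=100,prob=TRUE,xlab="y",main="")
curve(targ(x),add=TRUE,lwd=2,col="sienna")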