Archive for Cross Validated

linearity, reversed

Posted in Books, Kids on September 19, 2020 by xi'an

While answering a question on X validated about the posterior mean being a weighted sum of the prior mean and of the maximum likelihood estimator, with weights that do not depend on the data, which is true in conjugate natural exponential family settings, I re-read this wonderful 1979 paper of Diaconis & Ylvisaker establishing the converse, namely that when the linear combination holds, the prior must be conjugate! This holds within exponential families, and I cannot think of a reasonable case outside exponential families where the linearity holds with constant weights (without the constancy requirement, linearity always holds in dimension one, albeit with weights possibly outside [0,1]).
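
As a quick illustration (my own toy check in the conjugate Beta-Binomial case, not part of the original question), the weight attached to the prior mean is (a+b)/(a+b+n), free of the data:

a=3;b=7;n=20;x=13    #Beta(a,b) prior, x successes in n Binomial trials
w=(a+b)/(a+b+n)      #weight on the prior mean, free of the data x
(a+x)/(a+b+n)        #conjugate posterior mean
w*a/(a+b)+(1-w)*x/n  #identical: w times prior mean plus (1-w) times MLE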

how can a posterior be uniform?

Posted in Books, Statistics on September 1, 2020 by xi'an

A bemusing question from X validated:

How can we have a posterior distribution that is a uniform distribution?

With the underlying message that a uniform posterior cannot depend on the data, since it is uniform! While it is always possible to pick the parameterisation a posteriori so that the posterior is uniform, simply by using the inverse cdf transform, or to pick the prior a posteriori so that it cancels the likelihood function, there exist more authentic discrete examples of a data realisation leading to a uniform posterior, as e.g. in the Multinomial model. I deem the confusion to stem from the impression either that uniform means non-informative (what we could dub Laplace's daemon!) or that the posterior would need to remain uniform for all realisations of the sampled random variable.
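
For instance (a hypothetical toy of mine, in the Multinomial spirit of the above), two candidate cell-probability vectors that are permutations of one another lead to a uniform posterior for any realisation with matching symmetric counts:

p1=c(.2,.3,.5);p2=c(.5,.3,.2) #two candidate Multinomial parameters
x=c(4,2,4)                    #symmetric counts, hence equal likelihoods
lik=c(dmultinom(x,prob=p1),dmultinom(x,prob=p2))
lik/sum(lik)                  #uniform posterior under a uniform prior

while, e.g., x=c(5,2,3) tilts the posterior towards p2, showing the posterior does depend on the data.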

deterministic moves in Metropolis-Hastings

Posted in Books, Kids, R, Statistics on July 10, 2020 by xi'an

A curio on X validated, where a hybrid Metropolis-Hastings scheme involves a deterministic transform, once in a while. The idea is to flip the sample from one mode, ν, towards the other mode, μ, with a symmetry of the kind

μ-α(x+μ) and ν-α(x+ν)

with α a positive coefficient. Or the reciprocal,

-μ+(μ-x)/α and -ν+(ν-x)/α

for… reversibility reasons. In that case, the acceptance probability of a deterministic proposal y=T(x) multiplies the target ratio by the Jacobian of the transform, namely min{1, π(y)|T′(x)|/π(x)}, just as in reversible jump MCMC.

Why the (annoying) Jacobian? As explained in standard references on reversible jump, the Jacobian is there to account for the change of measure induced by the deterministic transform.
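
In code form (a minimal sketch under my own notation, assuming the inverse map f⁻¹ is proposed with the same probability as f), a deterministic move reads as

detstep=function(x,target,f,df){
  #y=f(x) accepted with probability min(1,target(y)|f'(x)|/target(x))
  y=f(x)
  if(runif(1)<target(y)*abs(df(x))/target(x)) y else x}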

Returning to the curio, the originator of the question had spotted some discrepancy between the target and the MCMC sample, as the moments did not match well enough. For a similar toy model, a balanced Normal mixture, with an artificial flip consisting of

x’=±1-x/2 or x’=±2-2x

implemented by

gnorm=function(x)dnorm(x,-2)+dnorm(x,2) #assumed target: balanced mixture, modes ±2
T=1e4;mh=rep(2,T)
for(t in 2:T){
  u=runif(5)
  if(u[1]<.5){ #uniform random walk proposal
    mhp=mh[t-1]+2*u[2]-1
    mh[t]=ifelse(u[3]<gnorm(mhp)/gnorm(mh[t-1]),mhp,mh[t-1])
  }else{ #deterministic flip, contraction (dx=1) or its inverse (dx=2)
    dx=1+(u[4]<.5)
    mhp=ifelse(dx==1,
               ifelse(mh[t-1]<0,1,-1)-mh[t-1]/2,
               2*ifelse(mh[t-1]<0,-1,1)-2*mh[t-1])
    #Jacobian dx/(3-dx), i.e., 1/2 for the contraction and 2 for its inverse
    mh[t]=ifelse(u[5]<dx*gnorm(mhp)/gnorm(mh[t-1])/(3-dx),mhp,mh[t-1])
  }
}

I could not spot said discrepancy beyond Monte Carlo variability.
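
For the record, a quick sanity check (mine, under the assumed ±2 mixture target above) compares the empirical moments to the exact ones, mean 0 and variance 1+2²=5:

c(mean(mh),var(mh)) #close to (0,5) up to Monte Carlo error
mean(mh>0)          #close to 1/2 when both modes are equally visited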

overlap, overstretched

Posted in Books, Kids, R, Statistics on June 15, 2020 by xi'an

An interesting challenge on The Riddler on the probability of seeing one random interval intersect with all other random intervals, when generating n intervals as sorted pairs of Uniform variates, that is, from a Dirichlet D(1,1,1). As it happens, the probability is always 2/3, whatever n>1, as shown by the R code below (where replicate cannot be replaced by rep!):

qro=function(n,T=1e3){
  quo=function(n){
    xyz=t(apply(matrix(runif(2*n),n),1,sort)) #n intervals as sorted Uniform pairs
    sum(xyz[,1]<min(xyz[,2])&xyz[,2]>max(xyz[,1]))>0} #one interval meets all others?
  mean(replicate(T,quo(n)))}

and discussed in more detail on X validated, as the event only depends on the ranks of the 2n endpoints, hence is a mere property of permutations and partitions.
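
As a sanity check (with the argument order and inequality corrected above), the estimates should hover around 2/3 for any number of intervals:

qro(2,1e4)  #about 2/3
qro(17,1e4) #still about 2/3, whatever n>1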

the strange occurrence of the one bump

Posted in Books, Kids, R, Statistics on June 8, 2020 by xi'an

When answering an X validated question on running an accept-reject algorithm for the Gamma distribution by using a mixture of a Beta and of a drifted (by 1) Exponential distribution, I came across a glitch in the fit of my 10⁷ simulated sample to the target, apparently displaying a wrong proportion of simulations above (or below) one.

a=.9
g<-function(T){
  x=rexp(T)
  v=rt(T,1)<0 #fair coin picking a mixture component
  x=c(1+x[v],exp(-x/a)[!v]) #drifted Exponential (x>1) or Beta(a,1) (x<1)
  x[runif(T)<x^a/x/exp(x)/((x>1)*exp(1-x)+a*(x<1)*x^a/x)*a]} #accept-reject step

It took me a while to spot the issue, namely that the output of

  z=g(T)
  while(sum(!!z)<T)z=c(z,g(T)) #sum(!!z) is simply length(z) here
  z[1:T]

was favouring simulations from the drifted Exponential by truncating: since g returns the accepted drifted Exponential draws first and the accepted Beta draws last, keeping the first T entries of z over-represents values above one from the final batch. Permuting the elements of z before returning solved the issue (as checked for a=½)!
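
A minimal sketch of the corrected wrapper (with my hypothetical rgam name), padding and then permuting before truncation:

rgam=function(T){
  z=g(T)
  while(length(z)<T)z=c(z,g(T))
  sample(z)[1:T]} #sample() permutes z, removing the ordering bias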