Archive for cross validated

an insufficient puzzle

Posted in Books, Kids, Statistics on January 12, 2022 by xi'an

A rather peculiar and challenging question on X validated, concerning whether it is at all possible for a conditional expectation given a non-sufficient statistic to still be a statistic (i.e., to be independent of the parameter θ). It was inspired by an excerpt from Hogg and Craig. Namely, could there exist a specific function φ(·) such that E[φ(Y₁)|Y₃] does not depend on the parameter θ? I could not find a satisfactory explanation right away (and the question remains unanswered!)

After posting this entry, I thought anew that cases where the unbiased estimator φ(Y₁) is not a bijective transform of Y₁ would provide a counter-example, since Y₃=φ(Y₁) is then not sufficient while E[φ(Y₁)|φ(Y₁)]=φ(Y₁) does not involve θ… And this case exhibits no paradox, in that the variance does not decrease any further.
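
Below is a minimal simulation sketch of this counter-example in R, in a toy setting of my own rather than the Hogg and Craig one: X₁,…,Xₙ ~ N(θ,1), Y₁ = X̄ is sufficient, and φ(Y₁) = Y₁² − 1/n is unbiased for θ² but not a bijective transform of Y₁.

```r
# toy counter-example: Y1 = mean of an N(theta,1) sample is sufficient, while
# phi(Y1) = Y1^2 - 1/n is unbiased for theta^2 but loses the sign of Y1
set.seed(101)
n <- 10; N <- 1e5
for (theta in c(.5, 2)) {
  Y1 <- rnorm(N, theta, 1 / sqrt(n))       # sampling distribution of the mean
  phi <- Y1^2 - 1 / n                      # unbiased estimator of theta^2
  cat("theta =", theta, ": mean(phi) =", round(mean(phi), 3),
      "vs theta^2 =", theta^2, "\n")
  # Y3 = phi(Y1) is not sufficient: given |Y1| (equivalently Y3), the sign of
  # Y1 still depends on theta...
  cat("  P(Y1 > 0 | |Y1| > .3) =", round(mean(Y1[abs(Y1) > .3] > 0), 3), "\n")
}
# ...and yet E[phi(Y1) | Y3] = phi(Y1) trivially involves no theta
```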

A discrete Bernoulli factory

Posted in Books, Kids, Statistics on October 18, 2021 by xi'an

A rather confusing (and now closed) question on X validated contained an interesting challenge: simulating an arbitrary discrete distribution using a single (standard) die. It indeed made me think of the (more challenging) Bernoulli factory problem of simulating B(f(p)) using a B(p) simulator (with p unknown). I still do not see what the optimal solution is, but the core challenge is to avoid simulating a U(0,1) variate by exploiting the discrete nature of the target. Which may be an issue if the probabilities of the target are irrational and one considers the cdf inversion approach. An alternative is an accept-reject approach, which also works for discrete distributions: first derive an instrumental distribution on the discrete support of the target from dice rolls, second find the maximum of the target-to-instrumental ratio, and third devise a discrete way of accepting a proposal with a probability that only takes a finite number of values. Which may prove quite costly. Finally, the least debatable approach is to turn the die into a Uniform generator by using each roll as a digit in the base 6 representation of this Uniform variate, up to the desired precision, and then apply the most efficient algorithm for the target distribution.
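
As a minimal R sketch of this last approach, emulating the fair die with sample(1:6, 1); the target probabilities and the 30-digit precision are illustrative choices of mine.

```r
# each roll of the die provides one base-6 digit of a U(0,1) variate, up to
# `digits` of precision, after which cdf inversion returns a discrete draw
die_unif <- function(digits = 30) {
  rolls <- sample(1:6, digits, replace = TRUE) - 1   # base-6 digits in {0,...,5}
  sum(rolls * 6^-(1:digits))                         # truncated base-6 expansion
}
rdiscrete <- function(p, digits = 30) {
  findInterval(die_unif(digits), cumsum(p)) + 1      # cdf inversion on 1..length(p)
}
p <- c(.1, .2, .3, .4)                               # an arbitrary discrete target
table(replicate(1e4, rdiscrete(p))) / 1e4            # compare frequencies with p
```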

ensemble Metropolis-Hastings

Posted in Books, Kids, Statistics on October 14, 2021 by xi'an

A question on X validated about ensemble MCMC samplers had me try twice to justify the Metropolis-Hastings ratio the authors used. To recap, ensemble sampling moves a cloud of points (just like our bouncy particle sampler) one point X at a time, by using another point Z as a pivot or origin and randomly moving X along the line [XZ]. In the paper, the distribution of the rescaling is symmetric in the sense that f(z)=f(1/z). I indeed started by perceiving the basic step of the sampler as a Metropolis-within-Gibbs step along a random direction. But it did not work, as the direction depends on the current X. I then wondered about a possible importance sampling interpretation compensating for the change of scale, but it led to the wrong power anyway. Before hitting the fact that this was actually a change of radius in the space centred at Z, leaving the angular coordinates invariant. Which explains the power (n−1) in the Metropolis ratio, in agreement with a switch to polar coordinates.
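
Since the post does not name the paper, here is a hedged R sketch of one such move, using the Goodman and Weare (2010) stretch move as a stand-in, with scale density g(z) ∝ 1/√z on [1/a, a]; the choice a = 2 and the standard normal toy target are mine.

```r
# one stretch move: rescale X along the line [XZ] by a random factor z, and
# accept with the radial z^(n-1) correction discussed above
stretch_move <- function(X, Z, target, a = 2) {
  n <- length(X)
  z <- ((a - 1) * runif(1) + 1)^2 / a     # draw from g(z) propto 1/sqrt(z) on [1/a,a]
  Y <- Z + z * (X - Z)                    # move X along the line through Z
  # change of radius in the space centred at Z explains the z^(n-1) factor
  if (runif(1) < z^(n - 1) * target(Y) / target(X)) Y else X
}
target <- function(x) exp(-sum(x^2) / 2)  # toy standard normal target (dim 3)
stretch_move(rnorm(3), rnorm(3), target)
```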

1 / duh?!

Posted in Books, R, Statistics, University life on September 28, 2021 by xi'an

An interesting case on X validated of someone puzzled by the simulation (and the variance) of the random variable 1/X when one is able to simulate X. And surprised at the variance of the ratio being way larger than the variances of both numerator and denominator.
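
A quick R check, with X ~ N(3,1) as my illustrative choice: because X puts (a little) mass near zero, 1/X actually has infinite variance, so the empirical variance of 1/X keeps drifting upward with the sample size while var(X) stays near 1.

```r
# the empirical variance of 1/X never stabilises: E[(1/X)^2] is infinite
set.seed(42)
for (N in 10^(3:6)) {
  x <- rnorm(N, mean = 3)
  cat("N =", N, " var(x) =", round(var(x), 3),
      " var(1/x) =", round(var(1/x), 3), "\n")
}
```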

mixed feelings

Posted in Books, Kids, Statistics on September 9, 2021 by xi'an

Two recent questions on X validated about mixtures:

  1. One on the expected log-likelihood in the EM algorithm potentially exploding to minus infinity for a mixture of components with differing supports: “I was hoping to use the EM algorithm to fit a mixture model in which the mixture components can have differing support. I’ve run into a problem during the M step because the expected log-likelihood can be [minus] infinite.” A mistake based on a confusion between the current parameter estimate, under which the expectation (and hence the component weights) is computed, and the free parameter to optimise: observations given zero weight by the current estimate contribute nothing, while candidate values of the free parameter producing a −∞ term are simply excluded by the M step.
  2. Another one on the Gibbs sampler apparently failing for a two-component mixture with only the weights unknown, when the components are close to one another: “The algorithm works fine if σ is far from 1 but it does not work anymore for σ close to 1.” The questioner did not consider a wide posterior as a perfectly valid posterior when both components are similar and hence delicate to distinguish from one another, as the sketch below illustrates.
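
Here is a minimal Gibbs sketch of my reconstruction of that second setting (xᵢ ~ p N(0,1) + (1−p) N(0,σ²), with a U(0,1) prior on the single unknown weight p and everything else known), showing the posterior on p flattening as σ approaches 1:

```r
# Gibbs sampler for the weight p of a two-component mixture, all else known
gibbs_weight <- function(x, sigma, T = 1e4) {
  n <- length(x); p <- runif(1); ps <- numeric(T)
  for (t in 1:T) {
    w1 <- p * dnorm(x)                        # component 1: N(0,1)
    w2 <- (1 - p) * dnorm(x, sd = sigma)      # component 2: N(0,sigma^2)
    z <- rbinom(n, 1, w1 / (w1 + w2))         # latent allocations
    p <- rbeta(1, 1 + sum(z), 1 + n - sum(z)) # conjugate Beta update
    ps[t] <- p
  }
  ps
}
run <- function(sigma)                        # data simulated with true p = .75
  gibbs_weight(c(rnorm(150), rnorm(50, sd = sigma)), sigma)
par(mfrow = c(1, 2))
hist(run(3), main = "sigma = 3")              # posterior concentrates near .75
hist(run(1.1), main = "sigma = 1.1")          # wide posterior: components overlap
```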

