Archive for cross validated

A discrete Bernoulli factory

Posted in Books, Kids, Statistics on October 18, 2021 by xi'an

A rather confusing (and now closed) question on X validated contained an interesting challenge: simulating an arbitrary discrete distribution using a single (standard) die. It indeed made me think of the (more challenging) Bernoulli factory problem of simulating B(f(p)) using a B(p) simulator (with p unknown). I still do not see what the optimal solution is, but the core challenge is to avoid simulating a U(0,1) variate by exploiting the discrete nature of the target. Which may be an issue if the probabilities of the target are irrational and one is considering the cdf inversion approach. An alternative is to use an accept-reject approach, which also works for discrete distributions: first derive an instrumental distribution on the discrete support of the target from die rolls, second find the maximum of the ratio of target to instrumental, and third devise a discrete way of accepting a proposal with a probability taking a finite number of values. Which may prove quite costly. Finally, the least debatable approach is to turn the die into a Uniform generator by using each draw as a digit in the base 6 representation of this Uniform variate, up to the precision desired for the resolution, and then apply the most efficient algorithm for the target distribution.
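To make this last strategy concrete, here is a minimal R sketch, with the target probabilities and the number of base 6 digits being my own hypothetical choices: die rolls become the base 6 digits of a Uniform variate, which is then fed to cdf inversion.

die2unif <- function(digits = 20) {
  rolls <- sample(1:6, digits, replace = TRUE)  # fair die rolls
  sum((rolls - 1) / 6^(1:digits))               # base 6 expansion, in [0,1)
}
rdiscrete <- function(p, digits = 20) {
  u <- die2unif(digits)                         # die-based Uniform variate
  sum(u > cumsum(p)) + 1                        # cdf inversion
}
p <- c(1 / pi, 1 / exp(1))                      # irrational probabilities
p <- c(p, 1 - sum(p))
table(replicate(1e4, rdiscrete(p))) / 1e4       # empirical frequencies, close to p

With 20 digits the Uniform is exact to within 6^(-20), ample for a three-atom target, at the fixed cost of 20 rolls per draw.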

ensemble Metropolis-Hastings

Posted in Books, Kids, Statistics on October 14, 2021 by xi'an

A question on X validated about ensemble MCMC samplers had me try twice to justify the Metropolis-Hastings ratio the authors used. To recap, ensemble sampling moves a cloud of points (just like our bouncy particle sampler), one point X at a time, by using another point Z as a pivot or origin and randomly rescaling X along the line through X and Z. In the paper, the distribution of the rescaling is symmetric in the sense that f(z)=f(1/z). I indeed started by perceiving the basic step of the sampler as a Metropolis-within-Gibbs step along a random direction. But it did not work, as the direction depends on the current X. I then wondered about a possible importance sampling interpretation compensating for the change of scale, but it led to the wrong power anyway. Before hitting the fact that this was actually a change of radius in the space with origin Z, leaving the angular coordinates invariant. Which explains the power (n-1) in the Metropolis ratio, in agreement with a switch to polar coordinates.
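For concreteness, here is a minimal R sketch of one such move in the Goodman and Weare stretch-move style, not necessarily the exact sampler of the paper behind the question, with the customary scaling density g(z) ∝ 1/√z on [1/a,a]; the (n-1) exponent in the acceptance ratio is the radial Jacobian discussed above.

stretch_move <- function(ens, i, logpi, a = 2) {
  n <- ncol(ens)                                # dimension of the points
  j <- sample(setdiff(1:nrow(ens), i), 1)       # pivot Z picked from the cloud
  z <- ((a - 1) * runif(1) + 1)^2 / a           # draw from g(z) ∝ 1/√z on [1/a,a]
  prop <- ens[j, ] + z * (ens[i, ] - ens[j, ])  # rescale X along the line through X and Z
  logr <- (n - 1) * log(z) + logpi(prop) - logpi(ens[i, ])
  if (log(runif(1)) < logr) ens[i, ] <- prop    # Metropolis step with the power (n-1)
  ens
}
logpi <- function(x) sum(dnorm(x, log = TRUE))  # hypothetical standard Normal target
ens <- matrix(rnorm(20), 10, 2)                 # cloud of ten points in dimension two
for (t in 1:1e3) for (i in 1:10) ens <- stretch_move(ens, i, logpi)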

1 / duh?!

Posted in Books, R, Statistics, University life on September 28, 2021 by xi'an

An interesting case on X validated of someone puzzled by the simulation (and the variance) of the random variable 1/X when one is able to simulate X. And surprised at the variance of the ratio being far larger than the variances of both numerator and denominator.
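A two-line R illustration, with my own hypothetical choice of distribution: for X distributed as Uniform(0,1), the variable 1/X has infinite mean and variance, so the empirical variance of the simulated ratios is both enormous and unstable, however large the sample.

x <- runif(1e6)
c(var(x), var(1 / x))  # about 1/12 for x, exploding (and run-dependent) for 1/x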

mixed feelings

Posted in Books, Kids, Statistics on September 9, 2021 by xi'an

Two recent questions on X validated about mixtures:

  1. One on the potential divergence to minus infinity of the E-step objective (the expected complete-data log-likelihood) in the EM algorithm, for a mixture of components with different supports: “I was hoping to use the EM algorithm to fit a mixture model in which the mixture components can have differing support. I’ve run into a problem during the M step because the expected log-likelihood can be [minus] infinite.” Which mistake stems from a confusion between the current parameter estimate and the free parameter to optimise.
  2. Another one on the Gibbs sampler apparently failing for a two-component mixture with only the weights unknown, when the components are close to one another: “The algorithm works fine if σ is far from 1 but it does not work anymore for σ close to 1.” Which overlooked a wide posterior as the natural outcome when both components are similar and hence delicate to distinguish from one another (see the sketch after this list).
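On the second question, here is a minimal Gibbs sketch under my own hypothetical parameterisation, a mixture p N(0,1) + (1-p) N(0,σ²) with only the weight p unknown: as σ approaches 1, the latent allocations carry almost no information and the Beta posterior on p legitimately widens towards its prior, rather than the sampler failing.

gibbs_weight <- function(x, sigma, niter = 1e3) {
  p <- runif(1)                                 # arbitrary starting weight
  keep <- numeric(niter)
  for (t in 1:niter) {
    d1 <- p * dnorm(x)                          # first component, N(0,1)
    d2 <- (1 - p) * dnorm(x, sd = sigma)        # second component, N(0,σ²)
    z <- rbinom(length(x), 1, d1 / (d1 + d2))   # latent allocations
    p <- rbeta(1, 1 + sum(z), 1 + sum(1 - z))   # conjugate Beta update
    keep[t] <- p
  }
  keep
}
x <- c(rnorm(500), 2 * rnorm(500))              # true weight 1/2, σ = 2
sd(gibbs_weight(x, sigma = 2))                  # concentrated posterior on p
sd(gibbs_weight(x, sigma = 1.1))                # much wider posterior, as expected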


empirically Bayesian [wISBApedia]

Posted in Statistics on August 9, 2021 by xi'an

Last week I was pointed to a puzzling entry in the “empirical Bayes” Wikipedia page. The introduction section indeed contains a description of an iterative simulation method that involves a hyperprior p(η), even though the empirical Bayes perspective involves no such hyperprior.

While the entry is vague and lacks formulae

These suggest an iterative scheme, qualitatively similar in structure to a Gibbs sampler, to evolve successively improved approximations to p(θ|y) and p(η|y). First, calculate an initial approximation to p(θ|y) ignoring the η dependence completely; then calculate an approximation to p(η|y) based upon the initial approximate distribution of p(θ|y); then use this p(η|y) to update the approximation for p(θ|y); then update p(η|y); and so on.

it sounds essentially equivalent to a Gibbs sampler, possibly a multiple-try Gibbs sampler (unless the author had another notion in mind, alas impossible to guess since no reference is included).
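To make concrete what the quoted scheme amounts to once written as an actual Gibbs sampler, here is a sketch under a hypothetical Normal hierarchy of my own choosing, y_i|θ_i ~ N(θ_i,1), θ_i|η ~ N(η,1), η ~ N(0,100): note that it requires precisely the hyperprior on η that empirical Bayes is meant to avoid.

y <- rnorm(50, mean = 3)                        # hypothetical data
n <- length(y); eta <- 0; theta <- y            # initial values
for (t in 1:1e3) {
  theta <- rnorm(n, (y + eta) / 2, sqrt(1 / 2)) # θ_i | y, η (conjugate Normal)
  eta <- rnorm(1, 100 * sum(theta) / (100 * n + 1),
               sqrt(100 / (100 * n + 1)))       # η | θ, using the N(0,100) hyperprior
}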

Beyond this specific case, where I think the entire paragraph should be erased from the “empirical Bayes” Wikipedia page, I discussed the general problem of poorly written Bayesian entries in Wikipedia with Robin Ryder, who came up with the neat idea of running (collective) Wikipedia editing labs at ISBA conferences. If we could further give an ISBA label to these entries, as a certificate of “Bayesian orthodoxy” (!), it would be terrific!
