Archive for the Statistics Category

folded Normals

Posted in Books, Kids, pictures, R, Running, Statistics on February 25, 2021 by xi'an

While having breakfast (after an early morn swim at the vintage La Butte aux Cailles pool, which let me in free!), I noticed a letter to the Editor in the Annals of Applied Statistics, a feature I was unaware existed. (The concept, not this specific letter!) The point of the letter was to indicate that finding the MLE for the mean and variance of a folded normal distribution is feasible without resorting to the EM algorithm. Since the folded normal distribution is a special case of mixture (with fixed weights), using EM is indeed quite natural, but the author, Iain MacDonald, remarked that an optimiser such as R's nlm() could be called instead, and the few lines of relevant R code were even included. While this is a correct if minor remark, I am a wee bit surprised to see it included in the journal, the more because the authors of the original paper using the EM approach were given the opportunity to respond, noting that EM is much faster than nlm in the cases they tested, and Iain MacDonald had a further rejoinder! The more because the Wikipedia page mentioned the use of optimisers much earlier (and pointed to the R package Rfast as producing MLEs for the distribution).
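
Without the letter's own code at hand, the nlm() route is easy to sketch: the minimal (and purely illustrative) lines below simulate a folded normal sample and minimise its negative log-likelihood in (μ, log σ), the log transform keeping the scale positive.

## minimal sketch (not the letter's code): folded normal MLE via nlm()
set.seed(1)
x <- abs(rnorm(100, mean = 1, sd = 2))   # simulated folded normal sample

## negative log-likelihood in theta = (mu, log sigma); for x > 0 the folded
## normal density is dnorm(x, mu, sigma) + dnorm(x, -mu, sigma)
nllk <- function(theta, x) {
  mu <- theta[1]; sig <- exp(theta[2])
  -sum(log(dnorm(x, mu, sig) + dnorm(x, -mu, sig)))
}

fit <- nlm(nllk, p = c(mean(x), log(sd(x))), x = x)  # crude starting point
c(mu = fit$estimate[1], sigma = exp(fit$estimate[2]))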

a year ago, a world away

Posted in Statistics on February 24, 2021 by xi'an

Metropolis-Hastings via classification

Posted in pictures, Statistics, Travel, University life on February 23, 2021 by xi'an

Veronika Rockova (from Chicago Booth) gave a talk on this theme at the Oxford Stats seminar this afternoon. She started with a survey of ABC, synthetic likelihoods, and pseudo-marginals to motivate her approach via GANs, learning an approximation of the likelihood from the GAN discriminator. Her explanation of the GAN-type estimate was crystal clear and made me wonder at the connection with Geyer's 1994 logistic estimator of the likelihood (a form of discriminator with a fixed generator). She also expressed the ABC approximation hence created as the actual posterior times an exponential tilt, which she proved is of order 1/n. And showed that a random variant of the algorithm (where the shift is averaged) is unbiased. Most interestingly, the method requires no calibration and no tolerance, except indirectly when building the discriminator, and no summary statistic. A noteworthy tension remains between correct shape and correct location.
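
To make the Geyer connection concrete, here is a small hedged illustration of my own (not Rockova's construction): a logistic regression classifying draws from two known normals acts as a discriminator, and with balanced samples its fitted log-odds estimate the log likelihood ratio between the two densities.

## hedged sketch of Geyer's (1994) logistic trick: a classifier between two
## samples recovers their log likelihood ratio
set.seed(2)
x0 <- rnorm(1e4, 0, 1)                 # draws from the "generator", density q
x1 <- rnorm(1e4, 1, 1)                 # draws from the "data" density p
dat <- data.frame(x = c(x0, x1), y = rep(0:1, each = 1e4))

## with equal sample sizes, the fitted log-odds estimate log p(x) - log q(x)
fit <- glm(y ~ x, family = binomial, data = dat)
c(fitted = as.numeric(predict(fit, data.frame(x = 0))),
  true = dnorm(0, 1, 1, log = TRUE) - dnorm(0, 0, 1, log = TRUE))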

MCMC for conditional Bernoullis

Posted in Books, Statistics, University life on February 22, 2021 by xi'an

Jeremy Heng, Pierre Jacob [currently in Paris!] and Nianqiao Ju consider in a recent arXival the simulation of a conditional Bernoulli, namely generating a vector of N Bernoullis with different probabilities under the constraint that their sum is fixed at I. Rather than going for a perfect simulator, with cost O(NI), they opt for the simplest of MCMC samplers, where a 0 entry and a 1 entry are exchanged at random. In connection with a recent spate of MCMC works using coupling, they establish convergence in O(N log N) steps, even when the probabilities are arbitrarily close to zero and one, including the case when they are uniformly generated. From a mundane perspective, I wonder at the appeal of using the probabilities to select the exchange pair. I realise sorting the probabilities is already of order O(N log N), but avoiding the selection of highly probable 1's and highly probable 0's should speed up convergence, unless the gain is negligible. And to link MCMC and exact simulation in this setting, what would the cost of perfect sampling by sampling from the past be? Presumably much higher, since there is little chance a total ordering can be found on the starting states.
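
For the record, a minimal hedged sketch of the plain swap sampler described above (with uniform selection of the exchanged pair, not the probability-weighted choice I wonder about): the sum constraint holds at every step and the swap is accepted with the usual Metropolis ratio.

## minimal sketch of the swap sampler for N conditional Bernoullis summing to I
set.seed(3)
N <- 20; I <- 5
p <- runif(N)                              # Bernoulli probabilities
x <- sample(rep(c(1, 0), c(I, N - I)))     # initial state with sum(x) == I

for (t in 1:1e4) {
  i <- sample(which(x == 1), 1)            # a 1 entry, picked uniformly
  j <- sample(which(x == 0), 1)            # a 0 entry, picked uniformly
  ## Metropolis ratio for swapping entries i and j under independent Bernoulli(p)
  ratio <- (1 - p[i]) * p[j] / (p[i] * (1 - p[j]))
  if (runif(1) < ratio) { x[i] <- 0; x[j] <- 1 }
}
sum(x)   # still equal to I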

why does rbinom(1,1) differ from sample(0:1,1) with the same seed?

Posted in Statistics on February 17, 2021 by xi'an
> set.seed(1)
> rbinom(10,1,0.5)
 [1] 0 0 1 1 0 1 1 1 1 0

> set.seed(1)
> sample(c(0,1), 10, replace = TRUE)
 [1] 0 1 0 1 0 0 1 0 1 1

This rather legitimate question was posted on X validated last week, the answer being that the C code behind the two functions does not exploit the underlying uniform pseudo-random generator in the same manner. For instance, rbinom only switches to a more involved algorithm beyond a mean value of 30 (and otherwise resorts to the inverse cdf approach). And following worries about biases in sample, the function was updated in 2019 (and also seems to resort to the inverse cdf when the mean is less than 200). However, when running the above code on my machine, still using the 2018 R version 3.4.4(!), I recover the same outcome:

> set.seed(1)
> rbinom(10,1,0.5)
 [1] 0 0 1 1 0 1 1 1 1 0

> set.seed(1)
> sample(c(0,1), 10, replace = TRUE)
 [1] 0 0 1 1 0 1 1 1 1 0

> set.seed(1)
> qbinom(runif(10),1,0.5)
 [1] 0 0 1 1 0 1 1 1 1 0

> set.seed(1)
> 1*(runif(10)>.5)
 [1] 0 0 1 1 0 1 1 1 1 0
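
Presumably (an untested aside on my part, not part of the original question), on a post-3.6.0 version of R the older behaviour, and hence the agreement with rbinom, can be recovered by reverting to the pre-2019 sampler via RNGkind():

> RNGkind(sample.kind = "Rounding")   # old sampler, with a warning about its bias
> set.seed(1)
> sample(c(0,1), 10, replace = TRUE)  # should reproduce the rbinom sequence above
> RNGkind(sample.kind = "Rejection")  # back to the current default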