Archive for pseudo-random generators

composition versus inversion

Posted in Books, Kids, R, Statistics on March 31, 2021 by xi'an

While trying to convey to an OP on X validated why the inversion method was not always the panacea in pseudo-random generation, I took the example of a mixture of K exponential distributions when K is very large, in order to impress (?) upon said OP that solving F(x)=u for such a closed-form cdf F was very costly even when using a state-of-the-art (?) inversion algorithm, like uniroot, since each step involves adding the K terms in the cdf. Selecting the component from the cumulative distribution function of the component weights proves to be quite fast instead, since using the rather crude

x = rexp(1, la[1 + sum(runif(1) > wes)]) # wes = cumsum(we), the cumulative weights

brings a 100-fold improvement over

F = function(x) sum(we * pexp(x, la)) # mixture cdf: adds K terms at each call
Q = function(u) uniroot(function(x) F(x) - u, lower = 0,
    upper = qexp(.999, rate = min(la)))$root # numerical tail quantile bound
x = Q(runif(1))

when K=10⁵, as shown by a benchmark call

         test elapsed
1       compo   0.057
2      Newton  45.736
3     uniroot   5.814

where Newton denotes a simple-minded Newton inversion. I wonder if there is a faster way to select the component in the mixture. Using a while loop starting from the most likely components (sketched below) proves to be much slower. And accept-reject solutions are invariably slow or fail to work with such a large number of components. Devroye’s Bible has a section (XIV.7.5) on simulating sums of variates from an infinite mixture distribution, but, for once, I found nothing really helpful there. And another section (IV.5) on series methods, where again I could not find a direct connection.
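For the record, here is a minimal sketch of that while loop alternative, under an assumed setup where the names we (normalised weights), wes (cumulative weights) and la (rates) match the snippets above, with arbitrary stand-in values:

K = 1e5
we = runif(K); we = we/sum(we)    # normalised mixture weights (stand-ins)
la = rexp(K, .1)                  # exponential rates of the K components
wes = cumsum(we)                  # cumulative weights used by the one-liner
o = order(we, decreasing = TRUE)  # visit the most likely components first
u = runif(1); k = 1; s = we[o[1]]
while (s < u) { k = k+1; s = s+we[o[k]] } # accumulate until u is covered
x = rexp(1, la[o[k]])             # draw from the selected component

Even when visiting the heaviest components first, near-uniform weights keep the expected number of iterations of order K, which presumably explains the slowdown.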

nested sampling X check

Posted in Books, Mountains, pictures, Statistics on September 18, 2020 by xi'an

Andrew Fowlie, Will Handley and Liangliang Su have recently arXived a new paper on checking the convergence of nested sampling by a uniformity test. The argument goes as follows: if the draw from the prior under the likelihood restriction (at the core of the nested sampling principle) is correctly generated, the rank of the realised value of the associated likelihood should be uniformly distributed among the remaining likelihoods. Obviously, the converse does not hold: a perfectly uniform distribution can happen even when the sampler misses a particularly well-hidden mode of the target distribution or when it systematically stops too early, using for instance a misspecified bound on the likelihood. One particular setting where uniformity fails is when the likelihood surface plateaus in a region of the parameter space. (As a French speaker, writing plateaus makes me cringe, since the plural of plateau is plateaux! Pardon my French!) When reaching the plateau, the algorithm starts accumulating at the limiting value (or else completely ignores the plateau and its prior mass). I actually wonder if the existence of plateaux is not a sufficient reason for invalidating nested sampling, at least in its original version, since it assumes a continuous distribution on the likelihood values…

If no plateau comes to hinder the algorithm, the rank test could be used to calibrate the exploration algorithm, as for instance in the determination of the number of MCMC steps, running T random walks in parallel until the rank test across these runs turns green. The authors of the paper suggest using a Kolmogorov-Smirnov test, which strikes me as not the most appropriate solution, given the discrete nature of the theoretical distribution and the existence of uniformity tests in the pseudo-random generation literature. At a conceptual level, I am also wondering at the sequential use of the test (as opposed to a parallel version at each iteration), since the target distribution is changing at every step (and so does the approximate method used to reproduce the prior simulation under the likelihood restriction).
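As a toy rendering of the rank check (my own sketch, not the authors' implementation), one can exploit the memorylessness of the exponential distribution to generate exact draws from the constrained prior, in which case the insertion ranks are provably uniform; the last line runs a randomised Kolmogorov-Smirnov test, one classical way around the discreteness issue raised above:

nlive = 500; niter = 1e4
liveL = rexp(nlive)            # stand-in live-point likelihoods, L ~ Exp(1)
ranks = integer(niter)
for (t in 1:niter) {
  dead = which.min(liveL)      # remove the lowest likelihood
  rest = liveL[-dead]
  newL = liveL[dead] + rexp(1) # exact constrained draw, by memorylessness
  ranks[t] = sum(rest < newL)  # insertion rank, uniform over 0:(nlive-1)
  liveL = c(rest, newL)
}
ks.test((ranks + runif(niter))/nlive, "punif") # randomised KS test

A sampler that stops exploring or sticks to one region would instead pile up extreme ranks, which the test is designed to detect.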

a new rule for adaptive importance sampling

Posted in Books, Statistics on March 5, 2019 by xi'an

Art Owen and Yi Zhou have arXived a short paper on the combination of importance sampling estimators. Which connects somehow with the talk about multiple estimators I gave at ESM last year in Helsinki. And our earlier AMIS combination. The paper however makes two important assumptions to reach optimal weighting, which is inversely proportional to the variance:

  1. the estimators are uncorrelated, if not independent;
  2. the variance of the k-th estimator is of order k^{-α}, i.e., a (negative) power of k.

The latter is puzzling when considering a series of estimators, in that k appears to act as a sample size (as in AMIS), while the power α is usually unknown and there is no reason for it to be the same for all estimators. The authors propose to use α=½ as the default, both because this is the standard Monte Carlo rate and because the loss in variance is then minimal, the variance being at worst 12% larger than the optimum.
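In other words, if the variance of the k-th estimator is proportional to k^{-α}, its optimal weight is proportional to k^α. A minimal sketch of the resulting combination (hypothetical names, not the authors' code), with α=½ as the default:

# muhat is a vector of unbiased importance sampling estimates with
# var(muhat[k]) assumed proportional to k^(-alpha)
combine = function(muhat, alpha = 1/2) {
  k = seq_along(muhat)
  w = k^alpha/sum(k^alpha)  # weights inversely proportional to the variances
  sum(w*muhat)              # later, presumably more accurate, stages dominate
}

so that, e.g., combine(muhat) weighs the K-th estimate about √K times more than the first one.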

As an aside, Art Owen also wrote an invited discussion, “the unreasonable effectiveness of Monte Carlo”, of “Probabilistic Integration: A Role in Statistical Computation?” by François-Xavier Briol, Chris Oates, Mark Girolami (Warwick), Michael Osborne and Deni Sejdinovic, to appear in Statistical Science, a discussion that contains a wealth of smart and enlightening remarks. Like the analogy between pseudo-random number generators [which work unreasonably well!] versus truly random numbers, and Bayesian numerical integration versus non-random functions. Or the role of advanced bootstrapping when assessing the variability of Monte Carlo estimates (citing a paper of his from 1992). Also pointing out an intriguing MCMC paper by Michael Lavine and Jim Hodges, to appear in The American Statistician.
