Archive for qMC

BayesComp²³ [aka MCMski⁶]

Posted in Books, Mountains, pictures, Running, Statistics, Travel, University life on March 20, 2023 by xi'an

The main BayesComp meeting started right after the ABC workshop and went on at a grueling pace, offering a constant conundrum as to which of the four parallel sessions to attend, the more so when trying to enjoy some outdoor activity during the lunch breaks. My overall feeling is that it went by too fast, too quickly! Here are some quick and haphazard notes from some of the talks I attended, as for instance the practical parallelisation of an SMC algorithm by Adrien Corenflos, the advances made by Giacomo Zanella on using Bayesian asymptotics to assess the robustness of Gibbs samplers to the dimension of the data (although with no assessment of the ensuing time requirements), a nice session on simulated annealing, from black holes to the Alps (if the wrong mountain chain for Levi), and the central role of contrastive learning à la Geyer (1994) in the GAN talks of Veronika Rockova and Éric Moulines. Victor Elvira delivered an enthusiastic talk on our on-going project on massively recycled importance sampling, which we need to complete asap!

While their earlier arXived paper was on my reading list, I was quite excited by Nicolas Chopin’s (along with Mathieu Gerber) work on a quadrature stabilisation that is not QMC (but not too far from it either), with stratification over the unit cube (after a possible reparameterisation) requiring more evaluations, plus a sort of pulled-by-its-own-bootstrap control variate, but beating regular Monte Carlo in terms of convergence rate and practical precision (if accepting a large simulation budget from the start). A difficulty common to all (?) stratification proposals is that they do not readily apply to highly concentrated functions.
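To fix ideas on the generic stratification idea (a toy sketch of mine, not the Chopin-Gerber construction, which I have not tried to reproduce), here is a minimal Python comparison of plain Monte Carlo with one-draw-per-cell stratification over the unit square, for an arbitrary test integrand:

import numpy as np

# Toy comparison of plain vs stratified Monte Carlo on the unit square: the domain
# is split into k x k equal cells and one uniform draw is taken per cell.
rng = np.random.default_rng(0)
f = lambda u: np.cos(2 * np.pi * u[..., 0]) * u[..., 1]   # arbitrary test integrand
k = 32                                                    # k^2 = 1024 evaluations
n = k * k

plain = f(rng.uniform(size=(n, 2))).mean()

# one point per cell: cell lower-left corners plus a uniform offset scaled by 1/k
corners = np.stack(np.meshgrid(np.arange(k), np.arange(k)), -1).reshape(-1, 2) / k
strat = f(corners + rng.uniform(size=(n, 2)) / k).mean()

print("plain MC   :", plain)
print("stratified :", strat, "(true value is 0)")

On such a smooth integrand the stratified estimate is typically much closer to the true value for the same budget, which is the basic effect the quadrature stabilisation builds upon.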

I chaired the lightning talks session, made of 3mn one-slide snapshots of upcoming posters selected by the scientific committee. While I appreciated this entry into the poster session, the more so because it was quite crowded and busy, if full of interesting results, and enjoyed the slide solely made of “0.234”, I regret that not all poster presenters were given the same opportunity (although I am unclear about which format would have permitted this) and that it did not attract more attendees, as it took place in parallel with other sessions.

In a not-solely-ABC session, I appreciated Sirio Legramanti speaking on comparing different distance measures via Rademacher complexity, highlighting that some distances are not robust, incl. for instance some (all?) Wasserstein distances that are not defined for heavy-tailed distributions like the Cauchy distribution. Using the mean as a summary statistic in such heavy-tail settings is also an issue, since the distance between simulated and observed means does not decrease in variance with the sample size, with the practical difficulty that the problem is hard to detect on real (misspecified) data since the true distribution behind (if any) is unknown. Would that imply that intrinsic distances like maximum mean discrepancy or Kolmogorov-Smirnov are the only reasonable choices in misspecified settings?! In the ABC session, Jeremiah went back to this role of distances for generalised Bayesian inference, replacing the likelihood by a scoring rule, with the attendant requirement of a Monte Carlo approximation (but is approximating an approximation that terrible a thing?!). I also discussed briefly with Alejandra Avalos her use of pseudo-likelihoods in Ising models, which, while not the original model, is nonetheless a model and therefore to be taken as such rather than as an approximation.
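As a quick sanity check of the heavy-tail issue (my own toy simulation, not the talk's example), the spread of Cauchy sample means does not shrink as the sample size grows, so a mean-based distance between simulated and observed data keeps a non-vanishing variability:

import numpy as np

# sample means of Cauchy data do not concentrate with n
rng = np.random.default_rng(2)
for n in (10**2, 10**3, 10**4):
    means = rng.standard_cauchy((500, n)).mean(axis=1)   # 500 replications
    print(n, "spread (sd) of sample means:", means.std())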

I also enjoyed Gregor Kastner’s work on Bayesian prediction for a city planning agent-based model (for Milano) relying on cell phone activities, which reminded me at a superficial level of a similar exploitation of cell usage in an attraction park in Singapore that Steve Fienberg told me about during his last sabbatical in Paris.

In conclusion, an exciting meeting that should have stretched a whole week (or taken place in a less congenial environment!). The call for organising BayesComp 2025 is still open, by the way.

 

living on the edge [of the canal]

Posted in Books, pictures, Statistics, Travel, University life on December 15, 2021 by xi'an

Last month, Roberto Casarin, Radu Craiu, Lorenzo Frattarolo and myself posted an arXiv paper on a unified approach to antithetic sampling, to which I mostly and modestly contributed while visiting Roberto in Venezia two years ago (although it feels much longer ago than that!). I have always found antithetic sampling fascinating, albeit mostly unachievable in realistic situations, except (and approximately) by quasi-random tools. The original approach dates back to Hammersley and Morton, circa 1956, who optimally couple X=F⁻¹(U) and Y=F⁻¹(1-U), with U Uniform, although there is no clear-cut extension beyond pairs or above dimension one. As the search for optimal and feasible antithetic plans dried up in the mid-1980’s, despite near successes by Rubinstein and others, the focus switched to Latin hypercube sampling.
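To recall how the basic pair coupling operates, here is a minimal Python sketch of the Hammersley-Morton antithetic pair for a one-dimensional expectation (a toy illustration only, not the graph-based construction of our paper):

import numpy as np
from scipy.stats import norm

# couple X = F^{-1}(U) and Y = F^{-1}(1-U) to estimate E[f(X)] for monotone f
rng = np.random.default_rng(1)
f = np.exp                          # estimate E[exp(Z)], Z ~ N(0,1), truth exp(1/2)
n = 10_000
u = rng.uniform(size=n)
x, y = norm.ppf(u), norm.ppf(1 - u)

iid  = f(norm.ppf(rng.uniform(size=2 * n))).mean()   # 2n independent draws, same budget
anti = 0.5 * (f(x) + f(y)).mean()                    # n antithetic pairs
print("iid       :", iid)
print("antithetic:", anti, "(truth =", np.exp(0.5), ")")

Since exp is monotone, the pair components are negatively correlated and the antithetic estimator has a smaller variance than the plain one at the same simulation budget.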

The construction of a general antithetic sampling scheme is based on sampling uniformly an edge within an undirected graph on the d-dimensional hypercube, under some (three) assumptions on the edges to achieve uniformity of the marginals. This construction achieves the smallest Kullback-Leibler divergence between the resulting joint and the product of uniforms. It can furthermore be constrained to be d-countermonotonic, i.e., such that a non-linear sum of the components is constant. We also show that the proposal leads to closed-form Kendall’s τ and Spearman’s ρ, which can be used to assess different d-countermonotonic schemes, incl. earlier ones found in the literature. The antithetic sampling proposal can be applied in Monte Carlo, Markov chain Monte Carlo, and sequential Monte Carlo settings. In a stochastic volatility example of the latter (SMC) we achieve performances similar to the quasi-Monte Carlo approach of Mathieu Gerber and Nicolas Chopin.

continuous herded Gibbs sampling

Posted in Books, pictures, Statistics on June 28, 2021 by xi'an

Read a short paper by Laura Wolf and Marcus Baum on Gibbs herding, where herding is a technique of “deterministic sampling”, for instance selecting points over the support of the distribution by matching exact and empirical (or “empirical”!) moments. Which reminds me of the principal points devised by my late friend Bernhard Flury. With an unclear argument as to why it could take over random sampling:

“random numbers are often generated by pseudo-random number generators, hence are not truly random”

Especially since the aim is to “draw samples from continuous multivariate probability densities.” The construction of such a sample proceeds sequentially by adding a new (T+1)-th point to the existing sample of y’s, by maximising in x the discrepancy

(T+1)\mathbb E^Y[k(x,Y)]-\sum_{t=1}^T k(x,y_t)

where k(·,·) is a kernel, e.g. a Gaussian density, hence a complexity that grows as O(T). The current paper suggests using Gibbs “sampling” to update one component of x at a time, using the conditional version of the above discrepancy, which makes the complexity grow as O(dT) in d dimensions.
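Here is a rough Python sketch of (one-dimensional, non-Gibbs) kernel herding along the above criterion, assuming a Gaussian kernel and a standard Normal target for which the expectation term is available in closed form; the bandwidth and the grid-search maximisation are arbitrary choices of mine:

import numpy as np

# greedily add points maximising (T+1) E^Y[k(x,Y)] - sum_t k(x, y_t)
h = 0.5                               # kernel bandwidth (assumed value)
sigma = 1.0                           # target N(0, sigma^2)
grid = np.linspace(-4, 4, 2001)       # crude 1d maximisation by grid search

def k(x, y):                          # Gaussian kernel
    return np.exp(-(x - y)**2 / (2 * h**2))

def Ek(x):                            # closed-form E^Y[k(x,Y)] for Y ~ N(0, sigma^2)
    s2 = h**2 + sigma**2
    return np.sqrt(h**2 / s2) * np.exp(-x**2 / (2 * s2))

points = []
for T in range(20):
    crit = (T + 1) * Ek(grid) - sum(k(grid, y) for y in points)
    points.append(grid[np.argmax(crit)])

print(np.round(points, 3))            # deterministic "sample" spreading over the target

Each iteration requires evaluating the kernel against all previously selected points, which is where the O(T) (or O(dT) for the coordinate-wise version) cost comes from.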

I remain puzzled by the whole thing as these samples cannot be used as regular random or quasi-random samples, and in particular do not produce unbiased estimators of anything. Obviously. The production of such samples being furthermore computationally costly, it is also unclear to me that they could even be used for quick & dirty approximations of a target sample.

QMC at CIRM

Posted in Mountains, pictures, Statistics, Travel, University life on October 21, 2020 by xi'an

dropping a point

Posted in Statistics, University life on September 8, 2020 by xi'an

“A discussion about whether to drop the initial point came up in the plenary tutorial of Fred Hickernell at MCQMC 2020 about QMCPy software for QMC. The issue has been discussed by the pytorch community, and the scipy community, which are both incorporating QMC methods.”

Art Owen recently arXived a paper entitled On dropping the first Sobol’ point, in which he examines the impact of the common practice of skipping the first point of a Sobol’ sequence when using quasi-Monte Carlo, by analogy with the burn-in practice for MCMC that aims at eliminating the bias due to the choice of the starting value. Art’s paper shows that skipping just this one point may degrade the rate of convergence of some QMC estimates, bringing the rate back to Monte Carlo values! As this applies to randomised scrambled Sobol’ sequences, this is quite amazing. The explanation centers on the suppression leaving one region of the hypercube unexplored, with an O(n⁻¹) error ensuing.

The above picture from the paper makes the case in a most obvious way: the mean squared error is not decreasing at the same rate for the no-drop and one-drop versions, since the rates are -3/2 and -1, respectively. The paper further “recommends against using round number sample sizes and thinning QMC points.” Conclusion: QMC is not MC!
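For the record, a quick toy comparison in Python (using scipy’s stats.qmc module, and definitely not the paper’s experiment; the exact gap depends on the integrand and the seeds) of scrambled Sobol’ estimates with and without the first point:

import numpy as np
from scipy.stats import qmc

# estimate the mean of f(u)=u^2 over [0,1] (true value 1/3) with scrambled Sobol'
# points, keeping vs dropping the first point of each sequence (scipy >= 1.7)
m = 10                                  # 2^10 = 1024 points per replication
keep, drop = [], []
for s in range(50):                     # 50 independent scramblings
    u = qmc.Sobol(d=1, scramble=True, seed=s).random_base2(m=m).ravel()
    keep.append(np.mean(u**2))          # all 1024 points
    drop.append(np.mean(u[1:]**2))      # same points minus the first one
print("RMSE keep :", np.sqrt(np.mean((np.array(keep) - 1/3)**2)))
print("RMSE drop :", np.sqrt(np.mean((np.array(drop) - 1/3)**2)))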
