Archive for Gibbs sampling

What the …?!

Posted in Books, Statistics on May 3, 2020 by xi'an


too many marginals

Posted in Kids, Statistics on February 3, 2020 by xi'an

This week, the CEREMADE coffee room puzzle was about finding a joint distribution for (X,Y) such that (marginally) X and Y are both U(0,1), while X+Y is U(½,3⁄2). Beyond the peculiarity of the question, there is a larger scale problem, as to how many (if any) compatible marginals h¹(X,Y), h²(X,Y), h³(X,Y), …, one needs to constrain before the joint distribution is fully determined. And whether any Gibbs-like scheme is available to simulate the joint.
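One candidate answer to the puzzle (a sketch of mine, not from the coffee room) is the deterministic coupling Y = (X + ½) mod 1, which keeps both marginals U(0,1) while forcing the sum into (½,3⁄2); a quick simulation checks the three constraints:

```python
import random

random.seed(42)
N = 100_000
xs = [random.random() for _ in range(N)]   # X ~ U(0,1)
ys = [(x + 0.5) % 1.0 for x in xs]         # Y = (X + 1/2) mod 1, also U(0,1)
ss = [x + y for x, y in zip(xs, ys)]       # X + Y, conjectured U(1/2, 3/2)

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((t - m) ** 2 for t in v) / len(v)

# U(1/2, 3/2) has mean 1 and variance 1/12
print(mean(ys), mean(ss), var(ss), min(ss), max(ss))
```

Indeed, for X < ½ the sum equals 2X + ½ and for X ≥ ½ it equals 2X − ½, two conditionally U(½,3⁄2) variables whose equal-weight mixture is again U(½,3⁄2).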

MCMC, with common misunderstandings

Posted in Books, pictures, R, Statistics, University life on January 27, 2020 by xi'an

As I was asked to write a chapter on MCMC methods for a forthcoming Handbook of Computational Statistics and Data Science, published by Wiley, rather than cautiously declining!, I decided to recycle the answers I wrote on X validated to what I considered to be the most characteristic misunderstandings about MCMC and other computing methods, using as background the introduction produced by Wu Changye in his PhD thesis. I am now waiting for the opinion of the editors of the Handbook on this Q&A style. The outcome is certainly lighter than other recent surveys, like the one we wrote with Peter Green, Krys Łatuszyński, and Marcelo Pereyra for Statistics and Computing, or the one with Victor Elvira, Nick Tawn, and Changye Wu.

Scott Sisson’s ABC seminar in Paris [All about that Bayes]

Posted in pictures, Statistics, Travel, University life on January 20, 2020 by xi'an

At the “All about that Bayes” seminar tomorrow (Tuesday 21, 3 p.m., room 42, AgroParisTech, 16 rue Claude Bernard, Paris 5ème), Scott Sisson, of the School of Mathematics and Statistics at UNSW and visiting Paris-Dauphine this month, will give a talk on

Approximate posteriors and data for Bayesian inference

For various reasons, including large datasets and complex models, approximate inference is becoming increasingly common. In this talk I will provide three vignettes of recent work. These cover a) approximate Bayesian computation for Gaussian process density estimation, b) likelihood-free Gibbs sampling, and c) MCMC for approximate (rounded) data.

stochastic magnetic bits, simulated annealing and Gibbs sampling

Posted in Statistics on October 17, 2019 by xi'an

A paper by Borders et al. in the 19 September issue of Nature offers an interesting mix of computing, electronics, and optimisation. With two preparatory opinion pieces! One [rather overdone] on Feynman’s quest, presenting stochastic magnetic bits as a possible alternative to quantum computers for creating probabilistic bits, and for making machine learning (as an optimisation program) more efficient. And another one explaining more clearly what is in the paper, as well as the practical advantage of the approach over quantum computing. As for the paper itself, the part I understood, factorising an integer F by simulated annealing on the squared difference between F and a product of two integers, sounded rather easy, while the part I did not, constructing a semi-conductor implementing this stochastic search, sounded too technical (especially in the métro during rush hour), even after checking the on-line supplementary material. Interestingly, the paper claims higher efficiency, thanks to asynchronicity, than a regular Gibbs simulation of Boltzmann machines, quoting Roberts and Sahu (1997) without further explanation and possibly out of context (as the latter is not concerned with optimisation).
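The “easy” part can be sketched in pure software (the function name, proposal, and naive cooling schedule below are my own choices, not from Borders et al.): minimise (p·q − F)² over integer pairs with a Metropolis acceptance rule and random restarts.

```python
import math
import random

def anneal_factor(F, iters=20_000, T0=None, restarts=100, seed=0):
    """Toy simulated annealing for factorising F by minimising (p*q - F)^2."""
    rng = random.Random(seed)
    T0 = float(F) ** 2 if T0 is None else T0
    best = (None, None, float("inf"))
    for _ in range(restarts):
        p, q = rng.randint(2, F), rng.randint(2, F)
        cost = (p * q - F) ** 2
        for k in range(iters):
            T = T0 / (1 + k)  # naive cooling schedule
            # propose a +/-1 move on one randomly chosen factor
            if rng.random() < 0.5:
                cand = (min(F, max(2, p + rng.choice((-1, 1)))), q)
            else:
                cand = (p, min(F, max(2, q + rng.choice((-1, 1)))))
            ncost = (cand[0] * cand[1] - F) ** 2
            # Metropolis acceptance: always downhill, occasionally uphill
            if ncost <= cost or rng.random() < math.exp(-(ncost - cost) / T):
                (p, q), cost = cand, ncost
            if cost == 0:
                return p, q, 0
        if cost < best[2]:
            best = (p, q, cost)
    return best

print(anneal_factor(35))
```

With these settings the restarts usually land on 5×7; the hardware contribution of the paper, as I understand it, is to replace such software coin flips with physical noise from the magnetic bits.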