Archive for CIRM

Bayesian Methods for the Social Sciences [and for the Parisians]

Posted in Statistics, Travel, University life on July 20, 2022 by xi'an

[Reposting the announcement of a Bayesian conference in Paris, next October, with a fabulous list of friends as speakers! Note that this is the week prior to our Fusion workshop at CIRM, Marseille. And to BNP.]

This three-day workshop will gather statisticians, mathematicians and social scientists around the theme of Bayesian statistical methods for the social sciences. This area has been growing rapidly in the past decade, and the speakers will include some of the leading researchers in the area from around the world.

The first day will consist of tutorial introductions to Bayesian inference, demography and social network analysis. Days 2 and 3 will consist of talks and posters on cutting-edge research in the area. The workshop is sponsored by the Fondation des Sciences Mathématiques de Paris (FSMP), and the tutorial sessions are organised jointly with the French Institute of Mathematics for Planet Earth.

It will be held in person at the Institut Henri Poincaré in Paris from October 19 to 21, 2022.

Kick-Kac teleportation

Posted in Books, pictures, Statistics on January 23, 2022 by xi'an

Randal Douc, Alain Durmus, Aurélien Enfroy, and Jimmy Olsson have arXived their Kick-Kac teleportation paper, which was presented by Randal at CIRM last semester. It is based on Kac's theorem, which states that, for a Markov chain with invariant distribution π, under π-stationarity, the sequence of tours between two successive visits to an accessible set C is also stationary. Which can be used for approximating π(h) when π(C) is known (or well estimated). Jim Hobert and I exploited this theorem in our 2004 perfect sampling paper. The current paper contains a novel proof of the theorem under weaker conditions. (Note that the only requirement on C is that it is accessible, rather than a small set, smallness becoming necessary for geometric ergodicity, see condition (A4).)
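The tour-based use of Kac's theorem can be sketched numerically. The following toy example (my own illustration, not code from the paper) runs a random-walk Metropolis chain on a standard Gaussian target, splits the run into tours between visits to C = [−1,1], and checks Kac's formula E[tour length] = 1/π(C) while estimating π(h) both with the known value of π(C) and with the π(C)-free ratio of tour averages:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def rwm_step(x):
    """One random-walk Metropolis step targeting the standard Gaussian."""
    prop = x + rng.normal()
    return prop if np.log(rng.uniform()) < 0.5 * (x * x - prop * prop) else x

in_C = lambda x: -1.0 <= x <= 1.0   # accessible set C = [-1, 1]
pi_C = erf(1 / sqrt(2))             # pi(C) ≈ 0.6827, known in this toy example

# Split one long run into tours: each tour starts at a state in C and ends
# just before the next visit to C.
h = lambda x: x * x                 # pi(h) = 1 for the standard Gaussian
sums, lens = [], []
x = 0.0                             # start inside C
for _ in range(20_000):             # number of tours
    s, n = h(x), 1
    while True:
        x = rwm_step(x)
        if in_C(x):
            break
        s += h(x)
        n += 1
    sums.append(s)
    lens.append(n)

# Kac's formula: E[tour length] = 1/pi(C), and pi(h) = pi(C) x E[tour sum].
print(np.mean(lens), 1 / pi_C)        # the two should roughly agree
print(pi_C * np.mean(sums))           # estimate of pi(h) using the known pi(C)
print(np.mean(sums) / np.mean(lens))  # ratio estimate, no pi(C) needed
```

The last line shows why the result is useful even when π(C) is only well estimated: the ratio of average tour sums to average tour lengths converges to π(h) on its own.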

What they define as the Kick-Kac teleportation (KKT) process is the collection of trajectories between two visits to C. Their memoryless version requires perfect simulations from π restricted to the set C. With a natural extension based on a Markov kernel keeping π restricted to the set C stationary. And a further generalisation allowing for lighter tails that also contains the 2005 paper by Brockwell and Kadane as a special case.

The ability to generate from a different kernel Q at each visit to C allows for different dynamics (as in other composite kernels). In their illustrations, the authors use lowest-density regions for C, which is rather surprising to me. Except that it allows for a better connection between the modes of the target π: the superior performance of the KKT algorithms against the considered alternatives apparently depends on the ability of the kernel Q to explore other modes with sufficient frequency.
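The mode-connecting effect of a low-density C can be illustrated with a minimal sketch of the memoryless version (my own toy rendering, not the paper's implementation): on a well-separated Gaussian mixture, a small-step random-walk Metropolis kernel essentially never switches modes on its own, but restarting each tour with a perfect draw from π restricted to C = [−1,1], the valley between the modes, lets the process visit both sides:

```python
import numpy as np

rng = np.random.default_rng(2)

# Bimodal target: equal mixture of N(-4, 1) and N(4, 1) (unnormalised log-density).
def log_pi(x):
    return np.logaddexp(-0.5 * (x - 4) ** 2, -0.5 * (x + 4) ** 2)

def rwm_step(x, scale=0.5):
    """Small-step random-walk Metropolis: alone, it almost never switches modes."""
    prop = x + scale * rng.normal()
    return prop if np.log(rng.uniform()) < log_pi(prop) - log_pi(x) else x

# C = [-1, 1], a lowest-density region between the two modes.
in_C = lambda x: -1.0 <= x <= 1.0

def draw_pi_restricted_C():
    """Perfect draw from pi restricted to C by rejection sampling."""
    bound = log_pi(1.0)  # on C, the density is maximal at the endpoints
    while True:
        x = rng.uniform(-1.0, 1.0)
        if np.log(rng.uniform()) < log_pi(x) - bound:
            return x

# Memoryless teleportation: independent tours, each started from pi restricted
# to C and run with the local kernel until the next visit to C.
samples = []
for _ in range(200):
    x = draw_pi_restricted_C()
    samples.append(x)
    while True:
        x = rwm_step(x)
        if in_C(x):
            break
        samples.append(x)

samples = np.array(samples)
# Both modes get visited, and the tour-based estimate of P(X > 0) is near 1/2.
print(samples.min(), samples.max(), np.mean(samples > 0))
```

Each teleportation draws afresh from the symmetric restriction π|C, so roughly half the tours climb into each mode, which is exactly the frequency-of-mode-switching requirement mentioned above.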

more of Sugiton at dawn [jatp]

Posted in Mountains, pictures, Running, Travel on November 7, 2021 by xi'an

control variates [seminar]

Posted in pictures, Statistics, Travel, University life on November 5, 2021 by xi'an

Today, Petros Dellaportas (whom I have known since the early days of MCMC, when we met at CIRM) spoke at the Warwick algorithms seminar about control variates for MCMC, reminding me of his 2012 JRSS paper. The approach is based on the Poisson equation and uses a second control variate to stabilise the Monte Carlo approximation of the first control variate. The difference with usual control variates lies in finding a first approximation G(x)−q(y|x)G(y) to F−πF. And the first Poisson equation uses α(x,y)q(y|x) rather than π. Then the second expands log α(x,y)q(y|x) to achieve a manageable term.

Abstract: We provide a general methodology to construct control variates for any discrete-time random walk Metropolis and Metropolis-adjusted Langevin algorithm Markov chains that can achieve, in a post-processing manner and with a negligible additional computational cost, impressive variance reduction when compared to the standard MCMC ergodic averages. Our proposed estimators are based on an approximate solution of the Poisson equation for multivariate Gaussian target densities of any dimension.
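The post-processing mechanism in the abstract can be sketched with a much simpler control variate than the authors' Poisson-equation construction, namely a gradient-based (zero-variance type) one, built here for a Gaussian toy target of my own choosing: run the chain, form a mean-zero function Z from a Stein-type integration-by-parts identity, then fit its coefficient by least squares after the fact.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target: standard Gaussian; estimate pi(F) for F(x) = x^4 (true value 3).
def rwm_chain(n, scale=1.0):
    """Random-walk Metropolis chain targeting the standard Gaussian."""
    xs = np.empty(n)
    x = 0.0
    for i in range(n):
        prop = x + scale * rng.normal()
        if np.log(rng.uniform()) < 0.5 * (x * x - prop * prop):
            x = prop
        xs[i] = x
    return xs

xs = rwm_chain(100_000)
F = xs**4

# Stein-type identity: E_pi[phi'(x) + phi(x) * d/dx log pi(x)] = 0.
# With phi(x) = x and log pi(x) = -x^2/2 + const, Z = 1 - x^2 has mean zero.
Z = 1 - xs**2

# Post-processing: fit the optimal coefficient by least squares on the output.
c = np.cov(F, Z)[0, 1] / np.var(Z)
plain = F.mean()
controlled = (F - c * Z).mean()
print(plain, controlled)  # both should be near 3, the second with smaller variance
```

The variance reduction is free in the sense emphasised by the abstract: the chain is untouched, and only cheap averages of the stored output are needed to build the corrected estimator.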

I wonder if there were a neural network version that would first build G from scratch and later optimise it towards solving the Poisson equation. As in this recent arXival I haven’t read (yet).

Sugiton at dawn [jatp]

Posted in Mountains, pictures, Running, Travel on October 28, 2021 by xi'an
