Archive for pinball sampler

MCMC with strings and branes

Posted in Books, pictures, Statistics on June 6, 2016 by xi'an

“Roughly speaking, the idea [of MCMC] is that the thermal fluctuations of a particle moving in an energy landscape provides a conceptually elegant way to sample from a target distribution.”

A short version of their earlier long paper was arXived last week by Heckman et al. The starting point of the paper is to consider M parallel samplers simultaneously by envisioning the M-tuple as a single object. This reminded me of the attempt Kerrie Mengersen and I had made in our 1995 pinball sampler paper, where we introduced a repulsive scheme to keep the particles apart while leaving each one stationary against the same target, the joint target being defined as the product of the individual targets. As a single Markov chain, the MCMC sampler can take advantage of the parallel chains to possibly improve efficiency when compared with running M parallel chains. Possibly only, because the target has moved to a space that is M times larger…
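
Only to make the dimension point concrete, here is a minimal Python sketch (a toy construction of mine, not the suburban algorithm of Heckman et al.) of the naive way of treating M parallel chains as a single object: stack the M particles into one state and run plain random-walk Metropolis on the product of M copies of the target. All names and tuning choices below are illustrative assumptions.

```python
import numpy as np

def product_target_rwm(logpi, M=10, dim=1, n_iter=5000, scale=0.5, rng=None):
    """Random-walk Metropolis on the product target pi(x_1)...pi(x_M):
    the M particles form a single (M, dim) state updated as one block."""
    rng = np.random.default_rng() if rng is None else rng
    X = rng.standard_normal((M, dim))          # current ensemble
    logp = logpi(X).sum()                      # joint log-density = sum over particles
    chain = np.empty((n_iter, M, dim))
    for it in range(n_iter):
        prop = X + scale * rng.standard_normal((M, dim))
        logp_prop = logpi(prop).sum()
        if np.log(rng.uniform()) < logp_prop - logp:   # Metropolis accept/reject
            X, logp = prop, logp_prop
        chain[it] = X
    return chain

# toy example: each particle targets a standard Gaussian
samples = product_target_rwm(lambda x: -0.5 * (x ** 2).sum(axis=1))
```

With a fixed proposal scale, the block acceptance rate collapses as M grows, which is exactly the above worry about the M-times-larger space.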

“…although the physics of point particles underlies much of our modern understanding of natural phenomena, it has proven fruitful to consider objects such as strings and branes with finite extent in p spatial dimensions.”

The details of the proposal are somewhat unclear in that the notion of brane remains a mystery to me. It sounds like a sort of random graph over the indices of the particles, but endowed with further (magical?) physical properties. As defined in the paper, the suburban algorithm picks a random (neighbourhood) graph at each iteration that is used in the proposal over the particle system (or ensemble). As justified by the authors, the fact that this choice is independent of the current state of the Markov chain implies stationarity. If not efficiency, when compared with the independent parallel MCMC scheme, if only because there are so many ways of taking the neighbours into account. (I did not see (11) as a particularly relevant implementation of the algorithm, mixing a random walk move with another random walk on the time differences.) Actually, I have somewhat of a worry about the term “nearest neighbour”, which may be defined by the graph (which is fine) or by the configuration of the particle system at time t (which is not fine).
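
For what it is worth, here is a hedged Python sketch of the graph-then-move pattern, with proposal choices of my own rather than the paper's: a random graph over the particle indices is drawn independently of the state at every sweep, and each particle then receives a single-site Metropolis update whose random-walk scale is read off its current neighbours. Because the graph is drawn independently of the current state (the point made by the authors) and each proposal scale only involves the other, momentarily fixed, particles, every single-site move leaves the product target invariant.

```python
import numpy as np

def random_graph_ensemble_mcmc(logpi, M=16, dim=1, n_sweeps=2000, p_edge=0.2,
                               base_scale=0.5, rng=None):
    """Toy ensemble sampler: state-independent random neighbourhood graph,
    then neighbour-informed Metropolis-within-Gibbs moves on the product target."""
    rng = np.random.default_rng() if rng is None else rng
    X = rng.standard_normal((M, dim))
    logp = np.array([logpi(x) for x in X])
    chain = np.empty((n_sweeps, M, dim))
    for s in range(n_sweeps):
        A = rng.random((M, M)) < p_edge        # random (Erdos-Renyi) neighbourhood graph
        A = np.triu(A, 1)
        A = A | A.T                            # symmetric, no self-loops
        for i in range(M):
            nbrs = np.flatnonzero(A[i])
            # proposal scale from the spread of the (fixed) neighbours only
            scale = base_scale if nbrs.size < 2 else X[nbrs].std() + 1e-3
            prop = X[i] + scale * rng.standard_normal(dim)
            lp = logpi(prop)
            if np.log(rng.uniform()) < lp - logp[i]:
                X[i], logp[i] = prop, lp
        chain[s] = X
    return chain

# each particle targets a standard Gaussian
out = random_graph_ensemble_mcmc(lambda x: -0.5 * float(x @ x))
```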

“The effective dimension rather than the overall topology of the grid plays the dominant role in the performance of the algorithm.”

The limited influence of the grid topology is quite understandable in that the chain targets an iid sample, so there is no reason one index value is more relevant than another. All particles in the vector are interchangeable in this respect and in the long run only the number of connected particles should matter. A more interesting feature is that the suburban sampler seems to perform best for strings, i.e. when the number of connected particles is larger than two. This somewhat agrees with my initial remark that dealing with the M particles as a single object should slow down convergence because of the dimension increase. This is one reason why we did not pursue our pinball sampler any further, as it seemed to converge quite slowly.

“To summarize: With too few friends one drifts into oblivion, but with too many friends one becomes a boring conformist.”

The notion of generating a sample all at once is quite appealing, especially because of the iid nature of the target, but I am not convinced the approach followed in this paper is sufficiently involved for this purpose. Alex Shestopaloff and Radford Neal’s recent work on ensemble MCMC is certainly pushing things further in the analysis of efficient moves. I also wonder if a repulsive component shouldn’t be added to the target as in pinball sampling, borrowing maybe from determinantal processes, towards a more thorough exploration of the target. Even though it remains unclear that this will be more efficient than running M parallel chains.

bouncy particle sampler

Posted in pictures, Statistics, Travel, University life on October 30, 2015 by xi'an

Alexandre Bouchard-Côté, Sebastian Vollmer and Arnaud Doucet just arXived a paper with the above title, which reminded me of a proposal Kerrie Mengersen and I made at Valencia 7, in Tenerife, the [short-lived!] pinball sampler. This sampler was a particle (MCMC) sampler where we used the locations of the other particles to steer each particle away from their neighbourhoods, bouncing away from them according to a delayed rejection principle, with an overall Gibbs justification since the resulting target was the product of copies of the target distribution. The difficulty in implementing the (neat!) idea was in figuring out the amount of bouncing or, in more physical terms, the energy allocated to the move.

In the current paper, inspired by an earlier paper in physics, the Markov chain (or single particle) evolves by linear moves, changing direction at the events of a Poisson process, with both the event intensity and the new direction depending on the target distribution. A local version takes advantage of a decomposition of the target into a product of terms, each involving only some components of the whole parameter to be simulated, hence allowing for moves in subspaces. An extension proposed by the authors is to bounce along the Hamiltonian isoclines. The method is demonstrably ergodic and irreducible. In practice, I wonder at the level of calibration or preliminary testing required to facilitate the exploration of the parameter space, particularly in the local version, which seems to multiply the items to be calibrated.
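
Since the global version is easy to state, here is a minimal Python sketch on a standard Gaussian target, for which the gradient of the potential is x itself, the bounce rate along a line is linear in time, and the first event time can be inverted exactly; velocity refreshments at an extra Poisson rate are included to guarantee irreducibility. Function and parameter names (bps_gaussian, lam_ref, dt) are mine, and a general target would require Poisson thinning rather than this exact inversion.

```python
import numpy as np

def bps_gaussian(T=1000.0, dt=0.1, lam_ref=1.0, dim=2, rng=None):
    """Global bouncy particle sampler for a standard Gaussian target,
    U(x) = ||x||^2/2: along x + s v the bounce rate is max(0, v.x + s v.v),
    linear in s, so the first event time is obtained by exact inversion."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(dim)
    v = rng.standard_normal(dim)              # velocity, refreshed from N(0, I)
    t, t_next, samples = 0.0, dt, []
    while t < T:
        a, b = v @ x, v @ v                   # rate(s) = max(0, a + b s)
        e = rng.exponential()
        if a >= 0:                            # solve a*tau + b*tau^2/2 = e
            tau_bounce = (-a + np.sqrt(a * a + 2.0 * b * e)) / b
        else:                                 # rate is zero until s0 = -a/b
            tau_bounce = -a / b + np.sqrt(2.0 * e / b)
        tau_ref = rng.exponential(1.0 / lam_ref)
        tau = min(tau_bounce, tau_ref)
        while t_next <= t + tau:              # record points along the linear segment
            samples.append(x + (t_next - t) * v)
            t_next += dt
        x = x + tau * v
        if tau_ref < tau_bounce:              # refreshment event
            v = rng.standard_normal(dim)
        else:                                 # bounce: reflect v against grad U(x) = x
            g = x
            v = v - 2.0 * (v @ g) / (g @ g) * g
        t += tau
    return np.array(samples)

# the recorded positions should be (correlated) draws from N(0, I_dim)
chain = bps_gaussian()
print(chain.mean(axis=0), chain.std(axis=0))
```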

simulating determinantal processes

Posted in Statistics, Travel on December 6, 2013 by xi'an

In the plane to Atlanta, I happened to read a paper called Efficient simulation of the Ginibre point process by Laurent Decreusefond, Ian Flint, and Anaïs Vergne (from Télécom ParisTech). “Happened to” as it was a conjunction of getting tipped about the paper by my new Dauphine colleague (and fellow blogger!) Djalil Chafaï, having downloaded it prior to departure, and being stuck in a plane (after watching the only Chinese [somewhat] fantasy movie onboard, Saving General Yang).

This is mostly a mathematics paper. While indeed a large chunk of it is concerned with the rigorous definition of this point process in an abstract space, the last part is about simulating such processes. They are called determinantal (and not detrimental as I was tempted to interpret on my first read!) because the density of an n-set (x_1, x_2, …, x_n) is given by a kind of generalised Vandermonde determinant

p(x_1,\ldots,x_n) = \dfrac{1}{n!} \text{det} \left( T(x_i,x_j) \right)

where T is defined in terms of an orthonormal family,

T(x,y) = \sum_{i=1}^n \psi_i(x) \overline{\psi_i(y)}.

(The number n of points can be simulated via an a.s. finite Bernoulli process.) Because of this representation, the sequence of conditional densities for the x_i‘s (i.e., the density of x_1, then of x_2 given x_1, etc.) can be found in closed form. In the special case of the Ginibre process, the ψ‘s are of the form

\psi_m(z) = z^m \exp\{-|z|^2/2\}/\sqrt{\pi m!}

and the process cannot be simulated for it has infinite mass, hence an a.s. infinite number of points. Somewhat surprisingly (as I thought this was the point of the paper), the authors then switch to a truncated version of the process that always has a fixed number N of points, whose density has the closed form

p(z_1,\ldots,z_N) = \dfrac{1}{\pi^N} \prod_{i=1}^N \frac{1}{i!}\, \exp\{-|z_i|^2\}\prod_{i<j} |z_i-z_j|^2

It has an interestingly repulsive quality in that points are discouraged from getting close to one another. (It reminded me of the pinball sampler proposed by Kerrie Mengersen and myself at one of the Valencia meetings and not pursued since.) The conclusion (of this section) is anticlimactic, though, in that it is known that this density also corresponds to the distribution of the eigenvalues of a (non-Hermitian) N×N matrix with standardized complex Gaussian entries, i.e., the Ginibre ensemble. The authors mention that the fact that the support is the whole complex space C^N is a difficulty, although I do not see why.
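
Given this equivalence, sampling the (unrestricted) truncated process boils down to one eigendecomposition; a minimal numpy sketch (function name mine), where a standard complex Gaussian entry has independent N(0, 1/2) real and imaginary parts:

```python
import numpy as np

def truncated_ginibre(N, rng=None):
    """N repulsive points as the eigenvalues of an N x N matrix
    with i.i.d. standard complex Gaussian entries."""
    rng = np.random.default_rng() if rng is None else rng
    G = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    return np.linalg.eigvals(G)

pts = truncated_ginibre(50)   # roughly fills the disk of radius sqrt(50)
```

This only covers the unrestricted truncated process; the compactly restricted versions of the next paragraph are where the paper's conditional algorithms come in.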

The following sections of the paper move to the Ginibre process restricted to a compact set, and then to the truncated Ginibre process restricted to a compact set, for which the authors develop corresponding simulation algorithms. There is however a drag in that the sequence of conditionals, while available in closed form, cannot be simulated efficiently and relies on uniform accept-reject steps instead. While I am certainly missing most of the points in the paper, I wonder if a Gibbs sampler would not be an interesting alternative, given that the full (last) conditional is a Gaussian density…

Dream on!

Posted in Statistics, University life on May 13, 2011 by xi'an

On Saturday, I was asked to referee Jasper Vrugt’s paper “DREAM(D): an adaptive Markov chain Monte Carlo simulation algorithm to solve discrete, noncontinuous, posterior parameter estimation problems” (I have added the proper upper-case letters!) for the journal Hydrology and Earth System Sciences. It may sound surprising that I advertise this refereeing, but Hydrology and Earth System Sciences has the fairly interesting feature that referees’ comments can be turned into a discussion of the paper, along with other spontaneous comments. The only drawbacks are that (a) the discussion remains open for only 8 days and (b) using LaTeX commands is not as straightforward as on WordPress. Here is (more or less) my entry:
