Archive for particle MCMC

invertible flow non equilibrium sampling (InFiNE)

Posted in Books, Statistics, University life on May 21, 2021 by xi'an

With Achille Thin and a few other coauthors [and friends], we just arXived a paper on a new form of importance sampling, motivated by a recent paper of Rotskoff and Vanden-Eijnden (2019) on non-equilibrium importance sampling. The central idea of this earlier paper is the introduction of conformal Hamiltonian dynamics, where a dissipative term is added to the ODE found in HMC, namely

\dfrac{\mathrm{d} p_t}{\mathrm{d} t} = -\dfrac{\partial}{\partial q}H(q_t,p_t) - \gamma p_t = -\nabla U(q_t) - \gamma p_t

which means that all orbits converge to fixed points satisfying ∇U(q) = 0, as the energy eventually vanishes. And the property that, were T a conformal Hamiltonian integrator associated with H, i.e. one preserving the invariant measure, averaging over orbits of T would improve the precision of unbiased Monte Carlo estimators, while remaining unbiased. The fact that Rotskoff and Vanden-Eijnden (2019) considered only continuous time makes their proposal hard to implement without adding approximation error, while our approach is directly set in discrete time and preserves unbiasedness. And since measure-preserving transforms are too difficult to come by, a change-of-variable correction, as in normalising flows, allows for an arbitrary choice of T, while keeping the estimator unbiased. The use of conformal maps makes for a natural choice of T in this context.
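
To fix ideas, here is a minimal discrete-time sketch of these dissipative dynamics, a leapfrog step sandwiched between exact exponential dampings of the momentum; the names (grad_U, gamma, step) are placeholders and this only illustrates conformal Hamiltonian dynamics, not the specific integrator used in the paper.

```python
import numpy as np

def conformal_leapfrog(q, p, grad_U, step=0.1, gamma=0.5, n_steps=20):
    """Integrate dq/dt = p, dp/dt = -grad U(q) - gamma * p in discrete time.

    A conformal variant of leapfrog: the usual half kicks and drift are
    sandwiched between exact exponential dampings of the momentum, so the
    energy decays along the orbit, which contracts towards a point where
    grad U vanishes.
    """
    orbit = [(q.copy(), p.copy())]
    for _ in range(n_steps):
        p = np.exp(-gamma * step / 2) * p      # exact friction, half step
        p = p - step / 2 * grad_U(q)           # leapfrog half kick
        q = q + step * p                       # drift
        p = p - step / 2 * grad_U(q)           # second half kick
        p = np.exp(-gamma * step / 2) * p      # exact friction, half step
        orbit.append((q.copy(), p.copy()))
    return orbit

# toy illustration with U(q) = ||q||²/2, i.e. grad U(q) = q
orbit = conformal_leapfrog(np.array([2.0]), np.array([1.0]), grad_U=lambda q: q)
```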

The resulting InFiNE algorithm is a particle MCMC algorithm which can be represented as a partially collapsed Gibbs sampler when using the right auxiliary variables, as in Andrieu, Doucet and Holenstein (2010) and their iSIR algorithm. The algorithm can be used for estimating normalising constants, comparing favourably with AIS, for sampling from complex targets, and for optimising variational autoencoders and their ELBO.
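
To give a flavour of the change-of-variable correction, here is a hedged sketch of the generic flow-based importance sampling identity behind such orbit-averaged estimates of a normalising constant; it is not the InFiNE estimator itself, whose auxiliary-variable Gibbs structure is richer, and all arguments (log_pi, sample_rho, log_rho, T, log_jac_T) are placeholders.

```python
import numpy as np

def flow_is_normalising_constant(log_pi, sample_rho, log_rho, T, log_jac_T,
                                 n_samples=1_000, n_iterates=5):
    """Generic flow-based importance sampling estimate of Z = ∫ exp(log_pi(y)) dy.

    Sketch only: draw x ~ rho, push it through iterates of an invertible map T,
    track the log-density of each iterate via the change-of-variable formula,
    and average the importance weights along the orbit. Each weight has
    expectation Z (assuming every pushforward dominates the target), so the
    orbit average remains unbiased while reusing the n_iterates evaluations.
    """
    estimates = np.empty(n_samples)
    for i in range(n_samples):
        y = sample_rho()
        log_q = log_rho(y)                 # log-density of the initial draw
        weights = []
        for _ in range(n_iterates):
            log_q -= log_jac_T(y)          # subtract log|det J_T| at the pre-image
            y = T(y)                       # move one step along the orbit
            weights.append(np.exp(log_pi(y) - log_q))
        estimates[i] = np.mean(weights)
    return estimates.mean()
```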

I really appreciated working on this project, with links to earlier notions like multiple importance sampling à la Owen and Zhou (2000), nested sampling, non-homogeneous normalising flows, measure estimation à la Kong et al. (2002), on which I worked in a more or less distant past.

statistical inference and uncertainty quantification for complex process-based models using multiple data sets [postdoc in Warwick]

Posted in pictures, Statistics, Travel, University life on February 13, 2020 by xi'an

Applications are invited for a postdoctoral research fellow position to work on the project “Statistical inference and uncertainty quantification for complex process-based models using multiple data sets”.

The position is part of a project to develop, implement and apply methods for parameter inference of models of environmental processes: particularly approaches based on approximate Bayesian computation (ABC) and particle MCMC.

The project is funded through the UKRI Strategic Priorities Fund programme on “Landscape Decisions: Towards a new framework for using land assets”. This programme will address the challenge of delivering better, evidence-based decisions within UK landscapes through research collaboration with policy, business and land management partners to deliver an interdisciplinary decision-making framework to inform how land is used. The post holder will become part of the world-renowned Department of Statistics at the University of Warwick.

Informal enquiries can be addressed to Dr Richard Everitt, with a closing date of 8 March 2020.

delayed-acceptance. ADA boosted

Posted in Statistics on August 11, 2019 by xi'an

Samuel Wiqvist and co-authors from Scandinavia have recently arXived a paper on a new version of delayed-acceptance MCMC. The ADA in the novel algorithm stands for approximate and accelerated, the approximation in the first stage being the replacement of the likelihood with a Gaussian process surrogate. In our own approach, we used subsets for partial likelihoods, ordering them so that the most varying sub-likelihoods were evaluated first. Furthermore, when a parameter reaches the second stage, the likelihood is not necessarily evaluated, the decision being based on the global probability that the second stage is accepted or rejected. Which of course creates an approximation, even when using a local predictor of this probability. The outcome of a comparison on two complex models is that the delayed approach does not necessarily do better than particle MCMC in terms of effective sample size per second, since it rejects significantly more. Using various types of surrogate likelihoods, and assessing the resulting approximation effect, could boost the appeal of the method. Maybe using ABC first could suggest another surrogate?
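
For readers unfamiliar with the two-stage mechanism, here is a minimal sketch of a generic delayed-acceptance Metropolis-Hastings kernel with a cheap surrogate screening stage; it is not the ADA algorithm of the paper, which adds the Gaussian process surrogate, the early-rejection rule, and the second-stage skipping discussed above, and all function names are placeholders.

```python
import numpy as np

def delayed_acceptance_mh(x0, log_prior, log_lik, log_surrogate, propose,
                          log_q_ratio=lambda y, x: 0.0, n_iter=10_000, seed=0):
    """Generic two-stage delayed-acceptance Metropolis-Hastings (sketch).

    Stage 1 screens each proposal with a cheap surrogate of the log-likelihood
    (e.g. a Gaussian process predictor); only proposals surviving stage 1 pay
    for the expensive log_lik call in stage 2, whose ratio corrects for the
    surrogate so that the exact posterior remains the stationary target.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    log_s_x, log_l_x = log_surrogate(x), log_lik(x)
    chain = []
    for _ in range(n_iter):
        y = propose(x)
        log_s_y = log_surrogate(y)
        # stage 1: surrogate posterior ratio plus proposal correction
        log_a1 = (log_prior(y) + log_s_y) - (log_prior(x) + log_s_x) + log_q_ratio(y, x)
        if np.log(rng.uniform()) < log_a1:
            # stage 2: expensive likelihood, correcting for the surrogate error
            log_l_y = log_lik(y)
            log_a2 = (log_l_y - log_s_y) - (log_l_x - log_s_x)
            if np.log(rng.uniform()) < log_a2:
                x, log_s_x, log_l_x = y, log_s_y, log_l_y
        chain.append(x.copy())
    return np.array(chain)
```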

IMS workshop [day 4]

Posted in pictures, Statistics, Travel, University life on August 31, 2018 by xi'an

While I did not repeat the mistake of yesterday morning (just as well, because the sun was unbearably strong!), I managed this time to board a bus headed in the wrong direction and as a result went through several remote NUS campi, missing the first talk of the day, by Youssef Marzouk, on a connection between sequential Monte Carlo and optimal transport. Transport for sampling, that is. The following talk by Tiangang Cui, with Marzouk as a co-author, was however related, as it aimed at finding linear transforms towards creating Normal approximations to the target, to be used as proposals in Metropolis algorithms. Which may sound like something already tried a zillion times in the MCMC literature, except that the setting was rather specific to some inverse problems, imposing a generalised Normal structure on the transform, then optimised by transport arguments. It is unclear to me [from just attending the talk] how complex this derivation is and how dimension steps in, but the illustrations produced were quite robust to an increase in dimension.

The remaining talks for the day were mostly particle-related, from Anthony Lee introducing a new and almost costless way of producing variance estimates in particle filters, exploiting only the ancestry of particles, to Mike Pitt discussing the correlated pseudo-marginal algorithm developed with George Deligiannidis and Arnaud Doucet, which somewhat paradoxically manages to fight the degeneracy [i.e., the need for a number of terms increasing like the time index T] found in independent pseudo-marginal resolutions, moving down to almost log(T)… With an interesting connection to the quasi-SMC approach of Mathieu and Nicolas. And Sebastian Reich also stressed the links with optimal transport, in a talk about data assimilation that was way beyond my reach. The day concluded with fireworks, through a magisterial lecture by Professeur Del Moral on a continuous-time version of PMCMC using the Feynman-Kac terminology. Pierre did a superb job, leading the whole room along to his conclusion.

parallelizable sampling method for parameter inference of large biochemical reaction models

Posted in Books, Statistics on June 18, 2018 by xi'an

I came across this older (2016) arXiv paper by Jan Mikelson and Mustafa Khammash [updated as of April 25, 2018] as another version of nested sampling. The novelty of the approach lies in applying nested sampling to approximate the likelihood function in the case of involved hidden Markov models (although the name itself does not appear in the paper). This is an interesting proposal, even though there is a fairly large and very active literature on computational approaches to such objects, from sequential Monte Carlo (SMC) to particle MCMC (pMCMC) to SMC².

“We found a way to efficiently sample parameter vectors (particles) from the super level set of the likelihood (sets of particles with a likelihood equal to or higher than some threshold) corresponding to an increasing sequence of thresholds” (p.2)

The approach here is an aggregate of nested sampling and particle filters (SMC), filters that are paradoxically employed in approximating the likelihood function itself, and thus called repeatedly as the value of the parameter θ changes, unless I am confused, when it seems to me that, having started with particle filters, the authors could have used them all the way to the upper level (through, again, SMC²). Instead, and this brings a further degree of (uncorrected) approximation to the procedure, a Dirichlet process prior is used to estimate Gaussian mixture approximations to the true posterior distribution(s) on the (super) level sets. Now, approximating a distribution that is zero outside a compact set [the prior restricted to the likelihood being larger than a given threshold] by a distribution with an infinite support does not a priori sound like a particularly enticing idea. Note also that there is no later correction for using the mixture approximation to the restricted prior. (The method also involves an approximation of the (Lebesgue) volume of the level sets that may be poor in higher dimensions.)
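
Generically, the increasing-threshold scheme of the quote can be sketched as below, with the likelihood replaced by an arbitrary (possibly particle-filter) estimate; this is only a hedged outline with placeholder functions, the refresh step merely resampling survivors where the paper fits a DP Gaussian mixture.

```python
import numpy as np

def nested_levels(sample_prior, loglik_hat, n_particles=500, n_levels=20,
                  quantile=0.5, seed=1):
    """Increasing-threshold (super level set) loop, as in the quote above.

    loglik_hat stands for any (possibly noisy) log-likelihood estimate, e.g. a
    particle-filter estimate for a hidden Markov model. The refresh step here
    simply resamples survivors; the paper instead fits a DP Gaussian mixture
    to the survivors and rejection-samples from it.
    """
    rng = np.random.default_rng(seed)
    theta = np.array([sample_prior() for _ in range(n_particles)])
    loglik = np.array([loglik_hat(t) for t in theta])
    thresholds = []
    for _ in range(n_levels):
        eps = np.quantile(loglik, quantile)        # raise the threshold
        thresholds.append(eps)
        keep = loglik >= eps                       # particles in the super level set
        idx = rng.choice(np.flatnonzero(keep), size=int((~keep).sum()))
        theta[~keep], loglik[~keep] = theta[idx], loglik[idx]
    return theta, thresholds
```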

“DP-GMM estimations work very well in high dimensional spaces and since we use rejection sampling to obtain samples from the level set by sampling from the DP-GMM estimation, the estimation error does not get propagated through iterations.” (p.13)

One aspect of the paper that puzzles me is the use of a rejection sampler to produce new parameter simulations from a given (super) level set, as this involves a lower bound M on the Gaussian mixture approximation over this level set. If a Gaussian mixture approximation is available, there is apparently no need for this step, as it can be sampled from directly and values below the threshold can be disposed of. It is also unclear why the error does not propagate from one level to the next, if only because of the connection between the successive particle approximations.
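
To spell out that remark, here is a hedged sketch of the direct alternative: simulate from the fitted mixture and keep only the draws above the threshold, which requires no bound M; all inputs are placeholders, and the output targets the mixture restricted to the level set, so the concern about the uncorrected mixture approximation of course remains.

```python
import numpy as np

def sample_level_set_from_gmm(weights, means, covs, loglik_hat, eps,
                              n_draws=1_000, seed=2):
    """Sample the super level set {θ : loglik_hat(θ) >= eps} by truncation.

    Once the fitted Gaussian mixture can be simulated directly, draws falling
    below the threshold are simply discarded, with no bound M or accept-reject
    step needed.
    """
    rng = np.random.default_rng(seed)
    accepted = []
    while len(accepted) < n_draws:
        k = rng.choice(len(weights), p=weights)             # pick a component
        theta = rng.multivariate_normal(means[k], covs[k])  # draw from it
        if loglik_hat(theta) >= eps:                        # keep if above the threshold
            accepted.append(theta)
    return np.array(accepted)
```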

 
