Archive for state space model

online approximate Bayesian learning

Posted in Statistics on September 25, 2020 by xi'an

My friends and coauthors Matthieu Gerber and Randal Douc have just arXived a massive paper on online approximate Bayesian learning, namely the handling of the posterior distribution on the parameters of a state-space model, which remains a challenge to this day… They start from the iterated batch importance sampling (IBIS) algorithm that Nicolas Chopin introduced in his PhD thesis (Chopin, 2002). The online method they construct ("by online we mean that the memory and computational requirement to process each observation is finite and bounded uniformly in t") guarantees that the approximate posterior converges to the (pseudo-)true value of the parameter as the sample size grows to infinity, where the sequence of approximations is a Cesàro mixture of initial approximations with Gaussian or t priors, AMIS-like. (I am somewhat uncertain about the notion of a sequence of priors used in this setup. Another funny feature is the necessity to consider a fat-tailed t prior from time to time in this sequence!) The sequence is in turn approximated by a particle filter. The computational cost of this IBIS is roughly O(NT), depending on the regeneration rate.
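For readers unfamiliar with IBIS, here is a minimal sketch of Chopin's original recursion on a toy model with tractable likelihood increments (the unknown mean of a Gaussian sample; all tuning choices and the toy model are mine, not the paper's). In a state-space model each increment would itself call for a particle filter, as in SMC², and note that the resample-move step below revisits the whole past sample, which is precisely why plain IBIS is not online in the above sense.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta, y):
    # N(0, 10) prior times N(theta, 1) likelihood for y, up to a constant
    return -theta ** 2 / 20.0 - 0.5 * ((y[:, None] - theta) ** 2).sum(axis=0)

def ibis(y, n=1000):
    theta = rng.normal(0.0, np.sqrt(10.0), size=n)  # particles drawn from the prior
    logw = np.zeros(n)
    for t in range(len(y)):
        logw += -0.5 * (y[t] - theta) ** 2          # incremental log-likelihood weight
        w = np.exp(logw - logw.max())
        ess = w.sum() ** 2 / (w ** 2).sum()         # effective sample size
        if ess < n / 2:                             # degeneracy: resample and move
            theta = rng.choice(theta, size=n, p=w / w.sum())
            prop = theta + 0.2 * rng.standard_normal(n)   # random-walk Metropolis move
            lp_new = log_post(prop, y[: t + 1])     # the move revisits all past data,
            lp_old = log_post(theta, y[: t + 1])    # hence plain IBIS is not online
            accept = np.log(rng.random(n)) < lp_new - lp_old
            theta = np.where(accept, prop, theta)
            logw = np.zeros(n)
    return theta, logw

y = rng.normal(1.0, 1.0, size=300)
theta, logw = ibis(y)
w = np.exp(logw - logw.max()); w /= w.sum()
print("weighted posterior mean:", float((w * theta).sum()))
```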

likelihood free nested sampling

Posted in Books, Statistics on April 26, 2019 by xi'an

A recent paper by Mikelson and Khammash found on bioRxiv considers the (paradoxical?) mixture of nested sampling and intractable likelihoods. They however cover only the case when a particle filter or another unbiased estimator of the likelihood function is available. Unless I am missing something in the paper, this seems a very costly and convoluted approach when pseudo-marginal MCMC is available. Or when the rather substantial literature on computational approaches to state-space models applies. Furthermore, simulating under the lower likelihood constraint gets even more intricate than for standard nested sampling, as the parameter space is augmented with the likelihood estimator as an extra variable. This makes the constrained simulation all the harder, to the point that the paper needs to resort to a Dirichlet process Gaussian mixture approximation of the constrained density. It thus sounds like quite an intricate approach to the problem. (For one of the realistic examples, the authors mention a 12 hour computation on a 48 core cluster, producing an approximation of the evidence that is not unarguably stabilised.) Once again, not being completely up-to-date in sequential Monte Carlo, I may miss a difficulty in analysing such models with other methods, but the proposal seems highly demanding with respect to the target.
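As a point of comparison, here is a minimal sketch of the pseudo-marginal Metropolis-Hastings step mentioned above, where a non-negative unbiased estimator of the likelihood (typically a particle filter output) replaces the exact value in the acceptance ratio. The toy "estimator" below is purely illustrative, not an actual particle filter.

```python
import numpy as np

rng = np.random.default_rng(1)

def pseudo_marginal_mh(loglik_hat, log_prior, theta0, n_iter=5000, step=0.5):
    """Pseudo-marginal Metropolis-Hastings: plugging a non-negative unbiased
    likelihood estimator into the acceptance ratio leaves the exact posterior
    invariant, provided the current estimate is stored and never refreshed."""
    theta, ll = theta0, loglik_hat(theta0)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal()
        ll_prop = loglik_hat(prop)                # one fresh noisy estimate
        if np.log(rng.random()) < ll_prop + log_prior(prop) - ll - log_prior(theta):
            theta, ll = prop, ll_prop             # the estimate travels with the state
        chain.append(theta)
    return np.array(chain)

# toy run: noise is added to a Gaussian log-likelihood for illustration only;
# a genuinely unbiased estimator would come from a particle filter run
chain = pseudo_marginal_mh(
    loglik_hat=lambda th: -0.5 * th ** 2 + 0.3 * rng.standard_normal(),
    log_prior=lambda th: 0.0,
    theta0=0.0)
print("chain mean:", chain.mean())
```

The key design point, visible in the code, is that the noisy estimate is recycled alongside the current parameter value rather than recomputed, which is what makes the algorithm exact despite the noise.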

down-under ABC paper accepted in JCGS!

Posted in Books, pictures, Statistics, University life on October 25, 2018 by xi'an

Great news! The ABC paper we had originally started in 2012 in Melbourne with Gael Martin and Brendan McCabe, before joining forces with David Frazier and Worapree Maneesoonthorn and expanding its scope to using auxiliary likelihoods to run ABC in state-space models, has just been accepted by the Journal of Computational and Graphical Statistics. A reason to celebrate with a Mornington Peninsula Pinot Gris next time I visit Monash!

ABC forecasts

Posted in Books, pictures, Statistics on January 9, 2018 by xi'an

My friends and co-authors David Frazier, Gael Martin, Brendan McCabe, and Worapree Maneesoonthorn arXived a paper on ABC forecasting at the turn of the year. ABC prediction is a natural extension of ABC inference in that, provided the full conditional of a future observation given past data and parameters is available while the posterior is not, ABC simulations of the parameters induce an approximation of the predictive. The paper considers the impact of this extension on the precision of the predictions, and argues that in some settings this approximation may even be preferable to running MCMC. A first interesting result is that using ABC, and hence conditioning on an insufficient summary statistic, has no asymptotic impact on the resulting prediction, provided Bayesian concentration of the corresponding posterior takes place as in our convergence paper under revision.

“…conditioning inference about θ on η(y) rather than y makes no difference to the probabilistic statements made about [future observations]”

The above result holds both in terms of convergence in total variation and for proper scoring rules, even though there is always a loss in accuracy in using ABC. Now, one may think this is a direct consequence of our (and others') earlier convergence results, but numerical experiments on standard time series show the distinct feature that, while the [MCMC] posterior and ABC posterior distributions on the parameters clearly differ, the predictives are more or less identical! With a potential speed gain in using ABC, although comparing parallel ABC versus non-parallel MCMC is rather delicate. For instance, a preliminary parallel ABC run could serve as a burn-in step for parallel MCMC, since all chains would then start roughly in the stationary regime. Another interesting outcome of these experiments is a case where the summary statistic produces an inconsistent ABC posterior, yet still leads to a very similar predictive, as shown on this graph. This unexpected accuracy in prediction may further be exploited in state-space models, towards producing greatly accelerated particle algorithms. Of course, an easy objection to this acceleration is that the impact of the approximation is unknown and un-assessed. However, such an acceleration leaves room for multiple implementations, possibly with different sets of summaries, to check for consistency over replicates.
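To make the mechanics concrete, here is a minimal sketch of ABC forecasting on a toy AR(1) model (my choice, not necessarily the paper's examples): rejection ABC based on an insufficient summary statistic, followed by pushing the accepted parameters through the exact one-step-ahead conditional to obtain the predictive sample.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ar1(rho, T):
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + rng.standard_normal()
    return y

def summary(y):                      # insufficient summary: lag-1 autocorrelation
    return np.corrcoef(y[:-1], y[1:])[0, 1]

T, n_sim, eps = 200, 10_000, 0.02
y_obs = simulate_ar1(0.7, T)
s_obs = summary(y_obs)

# rejection ABC step on rho ~ U(-1, 1)
rho = rng.uniform(-1, 1, n_sim)
keep = np.array([abs(summary(simulate_ar1(r, T)) - s_obs) < eps for r in rho])
rho_abc = rho[keep]

# ABC predictive: push accepted parameters through the one-step-ahead conditional
y_next = rho_abc * y_obs[-1] + rng.standard_normal(rho_abc.size)
print(f"{rho_abc.size} accepted draws, predictive mean {y_next.mean():.3f}")
```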

ABC in Stockholm [on-board again]

Posted in Kids, pictures, Statistics, Travel, University life on May 18, 2016 by xi'an

After a smooth cruise from Helsinki to Stockholm, a glorious sunrise over the Åland Islands, and a morning break for a hasty view of the city, ABC in Helsinki (a.k.a. ABCruise) resumed while still in Stockholm. The first talk was by Laurent Calvet about dynamic (state-space) models, when the likelihood is not available and is replaced with a proximity between the observed and the simulated observables, at each discrete time in the series. The authors use a proxy predictive for the incoming observable and derive an optimal (in a non-parametric sense) bandwidth based on this proxy. Michael Gutmann then gave a presentation that somewhat connected with his talk at ABC in Roma, and his poster at NIPS 2014, about using Bayesian optimisation to reduce the number of rejections in ABC algorithms. Which means building a model of a discrepancy or distance by Bayesian optimisation. I definitely like this perspective, as it reduces the simulation problem to one of modelling a discrepancy (after a learning step), and does not require a threshold. Aki Vehtari expanded on this idea with a series of illustrations. A difficulty I have with the approach is the construction of the acquisition function… The last session, while pretty late, was definitely exciting, with talks by Richard Wilkinson on surrogate or emulator models, which go very much in a direction I support, namely that approximate models should be accepted on their own; by Julien Stoehr, with clustering and machine learning tools to incorporate more summary statistics; and by Tim Meeds, who concluded with two (small) talks!, centred on the notion of deterministic algorithms that explicitly incorporate the random generators within the comparison, resulting in post-simulation recentering à la Beaumont et al. (2003), plus new advances with further incorporation of those random generators, turned into deterministic functions, within variational Bayes inference.
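As I understand Gutmann's approach, the discrepancy between observed and simulated summaries is modelled as a function of the parameter by a Gaussian process, and each new simulation is placed where an acquisition function is optimised. Here is a minimal sketch under those assumptions, using scikit-learn's GP regressor, a lower-confidence-bound acquisition, and a toy Gaussian-mean simulator, all illustrative choices of mine rather than the authors' exact construction.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)

def discrepancy(theta):
    """Distance between observed and simulated summaries; a toy
    Gaussian-mean model stands in for an intractable simulator."""
    y_sim = rng.normal(theta, 1.0, size=50)
    return abs(y_sim.mean() - 1.0)       # observed summary fixed at 1.0

# small initial design, then sequential acquisition on a grid
thetas = list(rng.uniform(-5, 5, 5))
ds = [discrepancy(t) for t in thetas]
grid = np.linspace(-5, 5, 200)[:, None]
gp = GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True)
for _ in range(20):
    gp.fit(np.array(thetas)[:, None], ds)
    mu, sd = gp.predict(grid, return_std=True)
    t_next = float(grid[np.argmin(mu - 1.96 * sd), 0])  # acquisition: lower confidence bound
    thetas.append(t_next)
    ds.append(discrepancy(t_next))

gp.fit(np.array(thetas)[:, None], ds)
mu = gp.predict(grid)
print("theta minimising the modelled discrepancy:", float(grid[np.argmin(mu), 0]))
```

The appeal, as noted above, is that after the learning step the expensive simulator is replaced by a cheap surrogate of the discrepancy, and no acceptance threshold is needed; the delicate part remains the choice of acquisition function.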

On Wednesday morning, we will land back in Helsinki and head back to our respective homes, after another exciting ABC in… workshop. I am terribly impressed by the way this workshop at sea operated, providing perfect opportunities for informal interactions and collaborations, without ever getting claustrophobic or dense. Enjoying very long days also helped. While it seems unlikely we can repeat this successful implementation, I hope we can aim at similar formats in future editions. Kiitos paljon to our Finnish hosts!