Archive for state space model

simulation based composite likelihood

Posted in Statistics on December 29, 2023 by xi'an

Lorenzo Rimella, Chris Jewell, and Paul Fearnhead have recently arXived a paper entitled Simulation Based Composite Likelihood, where they consider a composite likelihood approximation for running inference on HMM parameters in the specific scenario of HMMs on a finite state space X in high dimension N, for which computing the likelihood by the forward algorithm incurs a huge cost of order card(X)^{2N}:

“Inference for high-dimensional hidden Markov models is challenging due to the exponential-in-dimension computational cost of the forward algorithm.”

The authors make an assumption (2) of total factorisation across dimensions for both the current hidden and the current observed terms, given the previous hidden states, which is a very strong assumption, if not one amounting to a complete separation into independent component-wise HMMs. It helps, however, in deriving a Monte Carlo approximation of the likelihood of one component of the HMM sequence, the full likelihood being then approximated in a composite (likelihood) manner by the product of these component marginals. The remaining difficulty of computing the marginals of the component-wise observed (pseudo-) Markov chains is attenuated

“by fixing the state of all but one component n of the latent process, [since] we can leverage the factorisation and calculate probabilities related to the time-trajectory of the remaining [latent] state”

but it requires simulation of the hidden chain, at an overall cost of order O(PTN²card(X)²) where P is the number of MCMC simulations, which can be improved by a factor N by removing a feedback step through a further marginal likelihood approximation, interestingly falling into the prediction-correction pattern usual in sequential simulations. All this demonstrates craftsmanship of a high order, even though the issue of relying on an approximate composite likelihood does not seem to be addressed.
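For concreteness, here is a minimal numpy sketch of the composite-likelihood idea, where each component marginal is replaced by a Monte Carlo average over simulated latent trajectories; the toy binary HMM, the parameter values, and the function names are my own illustrative assumptions rather than the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy factorised HMM: N binary components with independent switching dynamics
# and Gaussian emissions (purely illustrative choices).
N, T, P = 5, 50, 200          # components, time steps, Monte Carlo replicates
p_stay, sigma = 0.9, 0.5      # hypothetical transition and emission parameters

def simulate_hidden(rng, T, N, p_stay):
    """Simulate one trajectory of the N-component binary hidden chain."""
    x = np.zeros((T, N), dtype=int)
    x[0] = rng.integers(0, 2, size=N)
    for t in range(1, T):
        stay = rng.random(N) < p_stay
        x[t] = np.where(stay, x[t - 1], 1 - x[t - 1])
    return x

def log_composite_likelihood(y, rng, P, p_stay, sigma):
    """Sum over components of the log Monte Carlo estimates of the marginals.

    Each component marginal p(y_{1:T}^{(n)}) is estimated by averaging the
    conditional likelihood over P simulated latent trajectories.
    """
    T, N = y.shape
    log_cl = 0.0
    for n in range(N):
        logliks = np.empty(P)
        for p in range(P):
            x = simulate_hidden(rng, T, N, p_stay)[:, n]
            logliks[p] = (-0.5 * np.sum((y[:, n] - x) ** 2) / sigma**2
                          - T * np.log(sigma * np.sqrt(2 * np.pi)))
        # log-mean-exp for a numerically stable Monte Carlo average
        m = logliks.max()
        log_cl += m + np.log(np.mean(np.exp(logliks - m)))
    return log_cl

# synthetic data from the same toy model, then evaluate the composite likelihood
x_true = simulate_hidden(rng, T, N, p_stay)
y = x_true + sigma * rng.normal(size=(T, N))
print(log_composite_likelihood(y, rng, P, p_stay, sigma))
```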

 

a versatile alternative to ABC

Posted in Books, Statistics on July 25, 2023 by xi'an

“We introduce the Fixed Landscape Inference MethOd, a new likelihood-free inference method for continuous state-space stochastic models. It applies deterministic gradient-based optimization algorithms to obtain a point estimate of the parameters, minimizing the difference between the data and some simulations according to some prescribed summary statistics. In this sense, it is analogous to Approximate Bayesian Computation (ABC). Like ABC, it can also provide an approximation of the distribution of the parameters.”

I quickly read this arXival by Monard et al. that is presented as an alternative to ABC, albeit outside a Bayesian setup. The central concept is that a deterministic gradient descent provides an optimal parameter value when replacing the likelihood with a distance between the observed data and simulated synthetic data indexed by the current value of the parameter (in the descent). In order to operate the descent, the synthetic data is assumed to be available as a deterministic transform of the parameter value and of a vector of basic random objects, e.g., Uniforms. In order to make the target function differentiable, the above Uniform vector is fixed for the entire gradient descent. A puzzling aspect of the paper is that it seems to compare the (empirical) distribution of the resulting estimator with a posterior distribution, unless the comparison is with the (empirical) distribution of the Bayes estimators. The variability due to the choice of the fixed vector of basic random objects does not seem to be taken into account either. Furthermore, the method is presented as able to handle several models at once, which I find difficult to fathom as (a) the random vectors behind each model necessarily vary and (b) there is no apparent penalisation for complexity.
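As a rough illustration of the fixed-landscape principle (a minimal sketch only, not the authors' implementation; the toy location-scale model, the summary statistics, and the optimiser choice are my own assumptions), one can freeze a vector of Uniforms once and run a deterministic optimiser on the resulting distance between observed and simulated summaries:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# Toy model: y_i = mu + sigma * Phi^{-1}(u_i), a deterministic transform of the
# parameter and of a vector of Uniforms (illustrative choice of model).
n = 500
u_fixed = rng.uniform(size=n)          # frozen once, for the whole descent
y_obs = rng.normal(loc=2.0, scale=1.5, size=n)

def simulate(theta, u):
    mu, log_sigma = theta
    return mu + np.exp(log_sigma) * norm.ppf(u)

def summaries(x):
    # hypothetical choice of summary statistics: mean and log standard deviation
    return np.array([x.mean(), np.log(x.std())])

s_obs = summaries(y_obs)

def objective(theta):
    # squared distance between observed and simulated summaries,
    # deterministic in theta because u_fixed is held fixed
    return np.sum((summaries(simulate(theta, u_fixed)) - s_obs) ** 2)

res = minimize(objective, x0=np.array([0.0, 0.0]), method="L-BFGS-B")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)
```

Repeating the optimisation over several frozen Uniform vectors would give an idea of the variability induced by that fixed choice.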

ABC in Lapland²

Posted in Mountains, pictures, Statistics, University life on March 16, 2023 by xi'an

On the second day of our workshop, Aki Vehtari gave a short talk about his recent works on speeding up post-processing by importance sampling a simulation of an imprecise version of the likelihood until the desired precision is attained, with the importance weights corrected by Pareto smoothing¹⁵. A very interesting foray into the meaning of practical models and the hard constraints of computer precision. Grégoire Clarté (formerly a PhD student of ours at Dauphine) stayed on similar ground, using sparse GP versions of the likelihood and post-processing by VB²³, then stir and repeat!

Riccardo Corradin did model-based clustering when the nonparametric mixture kernel is missing a normalizing constant, using ABC with a Wasserstein distance and an adaptive proposal, with some flavour of ABC-Gibbs (and no issue of label switching since this is clustering). Mixtures of g&k models, yay! Tommaso Rigon reconsidered clustering via a (generalised Bayes à la Bissiri et al.) discrepancy measure rather than a true model, summing over all clusters and observations a discrepancy between said observation and said cluster. Very neat, if possibly costly, since it involves distances to clusters or within clusters. Although she considered post-processing and the Bayesian bootstrap, Judith (formerly [?] Dauphine) acknowledged that she somewhat drifted from the theme of the workshop by considering BvM theorems for functionals of unknown functions, with a form of Laplace correction. (Enjoying Lapland so much that I thought “Lap” in Judith’s talk stood for Lapland rather than Laplace!!!) And applications to causality.

After the (X-country skiing) break, Lorenzo Pacchiardi presented his adversarial approach to ABC, differing from Ramesh et al. (2022) by the use of scoring rule minimisation, where unbiased estimators of gradients are available, Ayush Bharti argued for involving experts in selecting the summary statistics, especially for misspecified models, and Ulpu Remes presented a Jensen-Shannon divergence for selecting models likelihood-freely²², using a test statistic as summary statistic.

Sam Duffield made a case for generalised Bayesian inference in correcting errors in quantum computers, Joshua Bon went back to scoring rules for correcting the ABC approximation, with an importance step, while Trevor Campbell, Iuri Marocco and Hector McKimm nicely concluded the workshop with lightning-fast talks in place of the cancelled poster session. Great workshop, in my most objective opinion, with new directions!

ABC in Lapland

Posted in Mountains, pictures, Statistics, University life on March 15, 2023 by xi'an

Greetings from Levi, Lapland! Sonia Petrone beautifully started the ABC workshop with a (the!) plenary Sunday night talk on quasi-Bayes in the spirit of both Fortini & Petrone (2020) and the more recent Fong, Holmes, and Walker (2023). The talk left me puzzling over the nature of the convergence, in that it happens no matter what the underlying distribution (or lack thereof) of the data is: even without any exchangeability structure, the predictive converges. The quasi stems from a connection with the historical Smith and Makov (1978) sequential update approximation for the posterior attached to mixtures of distributions, which itself relates to both Dirichlet posterior updates and the Bayesian bootstrap à la Newton & Raftery. An appropriate link when the convergence seems to stem from the sequence of predictives rather than from the underlying distribution, if any, pulling Bayes up by its own bootstrap…! Chris Holmes also talked the next day about this approach, especially about a Bayesian approach to causality that does not require counterfactuals, in connection with a recent arXival of his (on my reading list).
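As a side illustration of this quasi-Bayes flavour of sequential updating, here is a minimal numpy sketch of a Smith-and-Makov-type recursion for a two-component mixture with known components and unknown weight; the specific mixture, prior pseudo-counts, and variable names are my own assumptions, not the content of the talk.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Two-component Gaussian mixture with known components and unknown weight pi.
# Quasi-Bayes sequential update: treat the soft component allocation of each
# new observation as if it were observed and update Beta pseudo-counts.
true_pi = 0.3
n = 2000
z = rng.random(n) < true_pi
y = np.where(z, rng.normal(-2.0, 1.0, n), rng.normal(2.0, 1.0, n))

alpha = np.array([1.0, 1.0])          # Beta(1,1) prior pseudo-counts (assumed)
for yi in y:
    pi_hat = alpha[0] / alpha.sum()   # current estimate of the mixture weight
    # posterior probability that yi comes from component 1, given pi_hat
    p1 = pi_hat * norm.pdf(yi, -2.0, 1.0)
    p2 = (1 - pi_hat) * norm.pdf(yi, 2.0, 1.0)
    w = p1 / (p1 + p2)
    alpha += np.array([w, 1.0 - w])   # quasi-Bayes pseudo-count update

print(alpha[0] / alpha.sum())          # should approach true_pi = 0.3
```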

Carlo Alberto presented both his 2014 SABC (simulated annealing) algorithm, with a neat idea of reducing waste in the tempering schedule, and a recent summary selection approach based on an auto-encoder function of both y and noise to reduce it to a sufficient statistic. A similar idea was found in Yannik Schälte’s talk (slide above), who returned to Richard Wilkinson’s exact ABC¹³ with an adaptive sequential generator, also linking to simulated annealing, with ABC-SMC¹² to the rescue. He further invoked the notion of amortized inference, seemingly approximating the data y with a NN and then learning the parameter by a normalising flow.

David Frazier talked on the Q-posterior²³ approach, based on Fisher’s identity for approximating the score function, which at first seemed to require some exponential family structure on a completed model (but does not, as I realised after discussing with David!), Jack Jewson on beta divergence priors²³ for uncertainty on likelihoods, better than the KL divergence in e-contamination situations (any impact on ABC?), and Masahiro Fujisawa went back to the impact of outliers on ABC, again with e-contaminations (with me wondering at the impact of outliers on NN estimation).

In the afternoon session (due to two last-minute cancellations, we skipped (or [MCMC] skied) one afternoon session, which coincided with a bright and crispy day, how convenient!), Massi Tamborino (U of Warwick) discussed the FitzHugh-Nagumo process, for which the inference problem can hardly be solved otherwise, since for instance Euler-Maruyama does not always work and numerical schemes induce a bias, hence the return to ABC and the hunt for a summary that gets rid of the noise, as in Carlo Alberto’s work. Yuexi Wang talked about her work on adversarial ABC inspired from GANs, another instance where noise is used as input (is the true data not used in training?). Imke Botha discussed an improvement to ensemble Kalman inversion which, while biased, gains over both regular SMC timewise and over ensemble Kalman inversion in precision, and Chaya Weerasinghe focussed on Bayesian forecasting in state space models under model misspecification, via approximate Bayesian computation, using an auxiliary model to produce summary statistics as in indirect inference.

Introduction to Sequential Monte Carlo [book review]

Posted in Books, Statistics on June 8, 2021 by xi'an

[Warning: Due to many CoI, from Nicolas being a former PhD student of mine, to his being a current colleague at CREST, to Omiros being co-deputy-editor for Biometrika, this review will not be part of my CHANCE book reviews.]

My friends Nicolas Chopin and Omiros Papaspiliopoulos wrote in 2020 An Introduction to Sequential Monte Carlo (Springer) that took several years to achieve and which I find remarkably coherent in its unified presentation. Particle filters and, more broadly, sequential Monte Carlo methods have expanded considerably in the last 25 years and I find it difficult to keep track of the main advances given the expansive and heterogeneous literature. The book is also quite careful in its mathematical treatment of the concepts and, while the Feynman-Kac formalism is somewhat scary, it provides a careful introduction to the sampling techniques relating to state-space models and to their asymptotic validation. As an introduction, it does not go to the same depths as Pierre Del Moral’s 2004 book or our 2005 book (Cappé et al.). But it also proposes a unified treatment of the most recent developments, including SMC² and ABC-SMC. There is even a chapter on sequential quasi-Monte Carlo, naturally connected to Mathieu Gerber’s and Nicolas Chopin’s 2015 Read Paper.

Another significant feature is the articulation of the practical part around a massive Python package called particles [what else?!]. While the book is intended as a textbook, and has been used as such at ENSAE and in other places, there are only a few exercises per chapter and they are not necessarily manageable (as with Exercise 7.1, the unique exercise for the very short Chapter 7). The style is highly pedagogical; take for instance Chapter 10 on the various particle filters, with a detailed and separate analysis of the input, algorithm, and output of each of these. Examples are only strategically used when comparing methods or illustrating convergence. While the MCMC chapter (Chapter 15) is surprisingly small, it actually serves as an introduction to the massive chapter on particle MCMC (and a teaser for an incoming Papaspiliopoulos, Roberts and Tweedie, a slow-cooking dish that has now been baking for quite a while!).
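Since the practical side of the book is organised around particle filtering, here is a minimal, self-contained numpy sketch of a bootstrap particle filter for a toy linear-Gaussian state-space model, in the spirit of what Chapter 10 covers; this is my own illustration and does not use the particles package API.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear-Gaussian state-space model (illustrative parameter values):
#   x_t = rho * x_{t-1} + sigma_x * eps_t,   y_t = x_t + sigma_y * eta_t
rho, sigma_x, sigma_y = 0.9, 1.0, 0.5
T, N = 100, 1000                       # time steps, number of particles

# simulate synthetic data
x = np.zeros(T)
for t in range(1, T):
    x[t] = rho * x[t - 1] + sigma_x * rng.normal()
y = x + sigma_y * rng.normal(size=T)

# bootstrap particle filter: propagate with the prior dynamics,
# weight by the observation density, resample multinomially
particles = rng.normal(scale=sigma_x / np.sqrt(1 - rho**2), size=N)
loglik = 0.0
filter_means = np.zeros(T)
for t in range(T):
    if t > 0:
        particles = rho * particles + sigma_x * rng.normal(size=N)
    logw = (-0.5 * ((y[t] - particles) / sigma_y) ** 2
            - np.log(sigma_y * np.sqrt(2 * np.pi)))
    m = logw.max()
    w = np.exp(logw - m)
    loglik += m + np.log(w.mean())     # incremental log-likelihood estimate
    w /= w.sum()
    filter_means[t] = np.sum(w * particles)
    particles = rng.choice(particles, size=N, p=w)  # multinomial resampling

print(loglik, filter_means[-1])
```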