## Archive for state space model

## Approximate Bayesian Computation in state space models

**W**hile it took quite a while (!), with several visits by three of us to our respective antipodes, including my exciting trip to Melbourne and Monash University two years ago, our paper on ABC for state space models was arXived yesterday! Thanks to my coauthors, Gael Martin, Brendan McCabe, and Worapree Maneesoonthorn, I am very glad of this outcome and of the new perspective on ABC it produces. For one thing, it concentrates on the selection of summary statistics from a more econometric point of view than usual, defining asymptotic sufficiency in this context and demonstrating that both asymptotic sufficiency and Bayes consistency can be achieved when using maximum likelihood estimators of the parameters of an auxiliary model as summary statistics. In addition, the proximity to (asymptotic) sufficiency yielded by the MLE is replicated by the score vector. Using the score instead of the MLE as a summary statistic allows for huge gains in terms of speed. The method is then applied to a continuous-time state space model, using as auxiliary model an augmented unscented Kalman filter. We also found in the various state space models tested therein that the ABC approach based on the marginal [likelihood] score was performing quite well, including wrt Fearnhead’s and Prangle’s (2012) approach… I like the idea of using such a generic object as the unscented Kalman filter for state space models, even when it is not a particularly accurate representation of the true model. Another appealing feature of the paper is in the connections made with indirect inference.
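As a toy illustration of the score-as-summary idea discussed in the post (not the paper's actual setup, which uses a state space model with an augmented unscented Kalman filter as auxiliary model), here is a minimal sketch: the auxiliary model is a plain Gaussian, its score evaluated at the observed-data auxiliary MLE serves as the ABC summary, and all numerical choices (true model, prior, tolerance, sample sizes) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Observed" data; the true model is a plain Gaussian here, purely for illustration
theta_true = 1.5
y_obs = rng.normal(theta_true, 1.0, size=200)

# Auxiliary model: N(mu, sigma^2); compute its MLE on the observed data
mu_hat, sig2_hat = y_obs.mean(), y_obs.var()

def aux_score(y, mu, sig2):
    """Average score of the Gaussian auxiliary likelihood at (mu, sig2)."""
    n = len(y)
    d_mu = np.sum(y - mu) / sig2
    d_sig2 = -n / (2 * sig2) + np.sum((y - mu) ** 2) / (2 * sig2 ** 2)
    return np.array([d_mu, d_sig2]) / n

# ABC: the observed-data score at its own MLE is zero by construction, so the
# distance reduces to the norm of the simulated-data score at (mu_hat, sig2_hat)
n_sim, eps = 20_000, 0.15
draws = []
for _ in range(n_sim):
    theta = rng.uniform(-5, 5)                       # flat prior, for illustration
    y_sim = rng.normal(theta, 1.0, size=len(y_obs))  # pseudo-data given theta
    if np.linalg.norm(aux_score(y_sim, mu_hat, sig2_hat)) < eps:
        draws.append(theta)

posterior = np.array(draws)  # accepted draws approximate the ABC posterior
```

The point of the score rather than the auxiliary MLE as summary is visible even in this sketch: no optimisation is run on the simulated data, each pseudo-sample only requiring the evaluation of two derivatives, which is where the speed gain mentioned above comes from.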

Posted in Statistics, Travel, University life with tags ABC, indirect inference, Kalman filter, marginal likelihood, Melbourne, Monash University, score function, state space model on October 2, 2014 by xi'an

## Statistical modeling and computation [apologies]

Posted in Books, R, Statistics, University life with tags apologies, Australia, Bayesian statistics, Dirk Kroese, introductory textbooks, Joshua Chan, Monte Carlo methods, Monte Carlo Statistical Methods, R, state space model, Statistical Modeling, typo on June 11, 2014 by xi'an

**I**n my book review of the recent book by Dirk Kroese and Joshua Chan, *Statistical Modeling and Computation*, I mistakenly and persistently typed the name of the second author as Joshua Chen. This typo alas made it to the printed and on-line versions of the subsequent CHANCE **27**(2) column. I am thus very sorry for this mistake of mine and most sincerely apologise to the authors. Indeed, it always annoys me to have my name mistyped (usually as Roberts!) in references. *[If nothing else, this typo signals it is high time for a change of my prescription glasses.]*

## Statistical modeling and computation [book review]

Posted in Books, R, Statistics, University life with tags ANU, Australia, Bayesian Essentials with R, Bayesian statistics, Brisbane, Dirk Kroese, introductory textbooks, Joshua Chan, Matlab, maximum likelihood estimation, Monte Carlo methods, Monte Carlo Statistical Methods, R, state space model on January 22, 2014 by xi'an

**D**irk Kroese (from UQ, Brisbane) and Joshua Chan (from ANU, Canberra) just published a book entitled *Statistical Modeling and Computation*, distributed by Springer-Verlag (I cannot tell which series it is part of from the cover or front pages…) The book is intended mostly for an undergraduate audience (or for graduate students with no probability or statistics background). Given that prerequisite, *Statistical Modeling and Computation* is fairly standard in that it recalls probability basics, the principles of statistical inference, and classical parametric models. In a third part, the authors cover “advanced models” like generalised linear models, time series and state-space models. The specificity of the book lies in the inclusion of simulation methods, in particular MCMC methods, and illustrations by Matlab code boxes. (Codes that are available on the companion website, along with R translations.) It thus has a lot in common with our *Bayesian Essentials with R*, meaning that I am not the most appropriate or least ~~un~~biased reviewer for this book. Continue reading

## particle efficient importance sampling

Posted in Statistics with tags efficient importance sampling, hidden Markov models, importance sampling, particle filters, sequential Monte Carlo, state space model, stochastic volatility on October 15, 2013 by xi'an

**M**arcel Scharth and Robert Kohn just arXived a new article entitled “particle efficient importance sampling”. What is the efficiency about?! The spectacular reduction in variance (the authors mention a factor of 6,000 when compared with regular particle filters!) in a stochastic volatility simulation study.

**I**f I got the details right, the improvement stems from a paper by Richard and Zhang (*Journal of Econometrics*, 2007). In a state-space/hidden Markov model setting, (non-sequential) importance sampling tries to approximate the smoothing distribution one term at a time, i.e., $p(x_t|x_{t-1},y_{1:n})$, but Richard and Zhang (2007) modify the target by looking at

$$p(y_t|x_t)\,p(x_t|x_{t-1})\,\chi(x_{t-1},y_{1:n}),$$

where the last term $\chi(x_{t-1},y_{1:n})$ is the *normalising constant* of the proposal kernel for the previous (in $t-1$) target, $k(x_{t-1}|x_{t-2},y_{1:n})$. This kernel is actually parameterised as $k(x_{t-1}|x_{t-2},a_t(y_{1:n}))$ and the EIS algorithm optimises those parameters, one term at a time. The current paper expands on Richard and Zhang (2007) by using particles to approximate the likelihood contribution and reduce the variance once the “optimal” EIS solution is obtained. (They also reproduce Richard’s and Zhang’s trick of relying on the same common random numbers.)
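To make the EIS fitting step above concrete, here is a minimal sketch of a Richard-and-Zhang-style iteration on a toy *static* target (not the sequential state-space version, and certainly not the particle extension of the paper): the Gaussian proposal parameters are refitted by an auxiliary least-squares regression of the log target, reusing the same common random numbers across passes. The target and all tuning constants are invented for illustration.

```python
import numpy as np

def log_target(x):
    # Unnormalised log target: a Gaussian tilted by a quartic term (toy choice)
    return -0.5 * x**2 - 0.1 * x**4

rng = np.random.default_rng(0)
z = rng.standard_normal(5000)   # common random numbers, kept fixed across passes

m, s = 0.0, 2.0                 # initial Gaussian proposal N(m, s^2)
for _ in range(10):
    x = m + s * z               # same z every pass: the CRN trick
    # Auxiliary regression of the log target on (1, x, x^2); the fitted
    # exp(b*x + c*x^2) defines the next Gaussian proposal
    X = np.column_stack([np.ones_like(x), x, x**2])
    _, b, c = np.linalg.lstsq(X, log_target(x), rcond=None)[0]
    prec = -2.0 * c             # fitted precision (c < 0 for a log-concave fit)
    m, s = b / prec, 1.0 / np.sqrt(prec)

# Importance-sampling estimate of the normalising constant under the EIS proposal
x = m + s * z
log_q = -0.5 * ((x - m) / s) ** 2 - np.log(s * np.sqrt(2 * np.pi))
w = np.exp(log_target(x) - log_q)
Z_hat = w.mean()                # unbiased for the target's normalising constant
```

In the sequential setting the same regression is run per time period on the modified target displayed above, which is exactly where the choice of $\chi$ enters.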

**T**his approach sounds like a “miracle” to me, in the sense(s) that (a) the “normalising constant” is far from being uniquely defined (and just as far from being constant in the parameter $a_t$) and (b) it is unrelated to the target distribution (except for the optimisation step). In the extreme case when the normalising constant is also constant… in $a_t$, this step clearly is useless. (This also opens the potential for an optimisation in the choice of $\chi(x_{t-1},y_{1:n})$…)

**T**he simulation study starts from a univariate stochastic volatility model relying on two hidden correlated AR(1) models. (There may be a typo in the definition in Section 4.1, i.e., a $\Phi_i$ missing.) In those simulations, EIS brings a significant variance reduction when compared with standard particle filters, and particle EIS further improves upon EIS by a factor of 2 to 20 (in the variance). I could not spot in the paper which choice had been made for $\chi(\cdot)$… which is annoying, as I gathered from my reading that it must have a strong impact on the efficiency that gives the method its name!
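For reference, the baseline being improved upon in those comparisons is the standard bootstrap particle filter. Here is a sketch of that baseline on the textbook univariate stochastic volatility model (not the paper's two-factor specification), with all parameter values chosen arbitrarily for illustration.

```python
import numpy as np

def simulate_sv(T, phi=0.95, sigma=0.2, rng=None):
    """Simulate x_t = phi*x_{t-1} + sigma*eta_t,  y_t = exp(x_t/2)*eps_t."""
    rng = rng or np.random.default_rng(0)
    x = np.zeros(T)
    x[0] = rng.normal(0.0, sigma / np.sqrt(1 - phi**2))  # stationary start
    for t in range(1, T):
        x[t] = phi * x[t - 1] + sigma * rng.standard_normal()
    y = np.exp(x / 2) * rng.standard_normal(T)
    return x, y

def bootstrap_loglik(y, phi, sigma, N=1000, rng=None):
    """Bootstrap particle filter estimate of the log-likelihood."""
    rng = rng or np.random.default_rng(1)
    x = rng.normal(0.0, sigma / np.sqrt(1 - phi**2), N)  # initial particles
    ll = 0.0
    for t in range(len(y)):
        if t > 0:
            x = phi * x + sigma * rng.standard_normal(N)  # AR(1) transition
        # log weights from the measurement density y_t | x_t ~ N(0, exp(x_t))
        logw = -0.5 * (np.log(2 * np.pi) + x + y[t] ** 2 * np.exp(-x))
        mx = logw.max()
        w = np.exp(logw - mx)
        ll += mx + np.log(w.mean())                 # log-likelihood increment
        x = x[rng.choice(N, N, p=w / w.sum())]      # multinomial resampling
    return ll

_, y = simulate_sv(500)
ll_hat = bootstrap_loglik(y, 0.95, 0.2)
```

The variance figures quoted in the post refer to the variability of precisely this kind of log-likelihood estimate across repeated runs, which is what EIS and particle EIS are reported to reduce so dramatically.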