ABC convergence for HMMs

Following my previous post on Paul Fearnhead's and Dennis Prangle's Semi-automatic ABC, Ajay Jasra pointed me to the paper he arXived with Thomas Dean, Sumeetpal Singh and Gareth Peters twenty days ago. I read it today. It is entitled Parameter Estimation for Hidden Markov Models with Intractable Likelihoods and it relates to Fearnhead's and Prangle's paper in that these authors also establish consistency for noisy ABC. The paper focuses on the HMM case and the authors construct an ABC scheme such that the ABC simulated sequence remains an HMM, the conditional distribution of the observables given the latent Markov chain being modified by the ABC acceptance ball. This means that conducting maximum likelihood (or Bayesian) estimation based on the ABC sample is equivalent to exact inference under the perturbed HMM scheme (see the sketch below). In this sense, this equivalence brings the paper close to Wilkinson's (2008) and Fearnhead's and Prangle's. While this equivalence implies an asymptotic bias for a fixed value of the tolerance ε, the paper also proves that an arbitrary accuracy can be attained with enough data and a small enough ε. The authors show in addition (as in Fearnhead's and Prangle's) that an ABC inference based on the noisy observations

$$\hat y_1+\epsilon z_1,\ldots,\hat y_n+\epsilon z_n$$

is equivalent to a regular inference based on the original data

$$\hat y_1,\ldots,\hat y_n$$

hence the asymptotic consistency of noisy ABC! Furthermore, the authors show that the asymptotic variance of the ABC version is always greater than the asymptotic variance of the standard MLE, but that the inflation decreases as ε². The paper also contains an illustration on an HMM with α-stable observables. (Of course, the restriction to summary statistics that preserve the HMM structure is paramount for the results in the paper to apply, hence preventing the use of truly summarising statistics that would not grow in dimension with the size of the HMM series.)
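To make the perturbed-HMM construction concrete, here is a minimal Python sketch, assuming a toy AR(1) latent chain with Gaussian emissions (my own stand-in, not the paper's α-stable model; all function and variable names are illustrative): the emission density is replaced by the probability that a simulated pseudo-observation falls within the ε-ball around the data point, and this probability is estimated inside a bootstrap particle filter.

import numpy as np

def abc_hmm_loglik(y, theta, eps, n_particles=2000, seed=0):
    """Particle estimate of the perturbed-HMM (ABC) log-likelihood:
    the emission density g(y_t | x_t) is replaced by the probability
    that a pseudo-observation simulated from g lands in the eps-ball
    around y_t, so the ABC target is itself an HMM and standard
    particle filtering applies.  Toy AR(1)-plus-Gaussian-noise model,
    not the paper's alpha-stable example."""
    rng = np.random.default_rng(seed)
    rho, sigma = theta                    # latent autocorrelation, emission scale
    x = rng.normal(size=n_particles)      # initial latent states
    loglik = 0.0
    for obs in y:
        x = rho * x + rng.normal(size=n_particles)        # propagate latent chain
        y_sim = x + sigma * rng.normal(size=n_particles)  # simulate pseudo-observations
        w = np.abs(y_sim - obs) <= eps                    # eps-ball acceptance indicator
        if not w.any():
            return -np.inf                                # every particle rejected
        loglik += np.log(w.mean())                        # ABC emission probability
        x = x[rng.choice(n_particles, size=n_particles, p=w / w.sum())]  # resample
    return loglik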
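The noisy ABC device itself is a one-line preprocessing step, sketched below under the same uniform-ball convention (an assumption on my part; other kernels would scale similarly): each observation is jittered once with uniform noise of scale ε before the ABC analysis is run on the perturbed series.

def noisy_data(y, eps, seed=1):
    """Noisy ABC preprocessing: jitter each observation once with
    uniform noise on the eps-ball before running the ABC analysis,
    e.g. abc_hmm_loglik(noisy_data(y_obs, 0.1), (0.9, 1.0), 0.1)."""
    rng = np.random.default_rng(seed)
    return np.asarray(y) + eps * rng.uniform(-1.0, 1.0, size=len(y))

The uniform kernel matches the acceptance-ball formulation; as ε shrinks, the indicator concentrates and the variance inflation vanishes at the ε² rate mentioned above.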

In conclusion, here comes a second paper that validates [noisy] ABC without non-parametric arguments. Both those recent papers make me appreciate even further the idea of noisy ABC: at first, I liked the concept but found the randomisation it involved rather counter-intuitive from a Bayesian perspective. Now, I rather perceive it as a duplication of the randomness in the data that brings the simulated model closer to the observed model.

6 Responses to “ABC convergence for HMMs”

  1. […] Sumeet Singh gave a talk mixing ABC with maximum likelihood estimation for HMMs, in connection with his earlier paper, and I got more convinced by the idea of using a sequence of balls for keeping pseudo-data close […]

  2. […] mentioning the convergence of ABC algorithms, in particular the very relevant paper by Dean et al. I had already discussed in an earlier post. (This is taking a larger chunk of my time than expected! I am glad I will use […]

  3. […] than an approximation to Bayesian inference is clearly appealing. (Fearnhead and Prangle, and Dean, Singh, Jasra and Peters could be quoted as […]

  4. […] currently missing (although an extension of the perspective adopted in Fearnhead and Prangle and in Dean et al., namely to see ABC as an inference method per se rather than an approximation to a Bayesian […]

  5. […] including a superb influenza sequence. Ajay Jasra explained the main ideas in the ABC HMM paper I recently discussed (even mentioning the post during the talk!). Mark Beaumont started with a recollection of the […]

  6. […] its criticisms in [2]. Re HMM, again it may be natural to build from the ABC, see C. Robert’s post, and a recent paper on the […]
