ABC convergence for HMMs
Following my previous post on Paul Fearnhead's and Dennis Prangle's Semi-automatic ABC, Ajay Jasra pointed me to the paper he arXived with Thomas Dean, Sumeetpal Singh and Gareth Peters twenty days ago. I read it today. It is entitled Parameter Estimation for Hidden Markov Models with Intractable Likelihoods and it relates to Fearnhead's and Prangle's paper in that those authors also establish the consistency of noisy ABC. The paper focuses on the HMM case: the authors construct an ABC scheme such that the ABC simulated sequence remains an HMM, the conditional distribution of the observables given the latent Markov chain being modified by the ABC acceptance ball. This means that conducting maximum likelihood (or Bayesian) estimation based on the ABC sample is equivalent to exact inference under the perturbed HMM scheme. In this sense, the equivalence brings the paper close to Wilkinson's (2008) and Fearnhead's and Prangle's. While the equivalence implies an asymptotic bias for a fixed value of the tolerance ε, the paper also proves that arbitrary accuracy can be attained with enough data and a small enough ε. The authors show in addition (as in Fearnhead's and Prangle's) that an ABC inference based on noisy observations
is equivalent to a regular inference based on the original data
hence the asymptotic consistency of noisy ABC! Furthermore, the authors show that the asymptotic variance of the ABC version is always greater than the asymptotic variance of the standard MLE, but that the difference decreases as ε². The paper also contains an illustration on an HMM with α-stable observables. (Of course, the restriction to summary statistics that preserve the HMM structure is paramount for the results in the paper to apply, hence preventing the use of truly summarising statistics that would not grow in dimension with the size of the HMM series.)
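To make the construction concrete, here is a minimal sketch of that perturbed-HMM likelihood: a made-up two-state Gaussian HMM whose transition matrix, means, tolerance values, and the assumption that only one mean is unknown are all purely illustrative (this is not the model of the paper). The point is that replacing the emission density g(y|x) with the ε-ball acceptance probability keeps the forward recursion exact, so maximising this function is exact inference in the perturbed HMM.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical two-state Gaussian HMM; all settings below are made up
# for illustration, not taken from Dean et al.
P = np.array([[0.9, 0.1], [0.2, 0.8]])  # latent transition matrix
mus = np.array([-1.0, 2.0])             # state-dependent emission means
sigma = 1.0
n = 500

# simulate the latent chain and the observed sequence
x = np.zeros(n, dtype=int)
for t in range(1, n):
    x[t] = rng.choice(2, p=P[x[t - 1]])
y = mus[x] + sigma * rng.standard_normal(n)

def abc_loglik(mu0, eps):
    """Forward recursion for the eps-perturbed HMM: the emission density
    g(y|x) is replaced by the acceptance probability P(|Y - y| <= eps | x),
    so this evaluates the exact likelihood of the perturbed model."""
    m = np.array([mu0, mus[1]])  # pretend only the first mean is unknown
    g = norm.cdf((y[:, None] + eps - m) / sigma) \
      - norm.cdf((y[:, None] - eps - m) / sigma)
    alpha = 0.5 * g[0]           # uniform initial distribution on states
    ll = 0.0
    for t in range(1, n):
        c = alpha.sum()
        ll += np.log(c)
        alpha = ((alpha / c) @ P) * g[t]
    return ll + np.log(alpha.sum())

# grid-search ABC-MLE: biased for a fixed eps, bias vanishing as eps -> 0
grid = np.linspace(-2.0, 0.0, 81)
for eps in (1.0, 0.1):
    mu_hat = grid[np.argmax([abc_loglik(m, eps) for m in grid])]
    print(f"eps = {eps}: ABC-MLE of mu0 = {mu_hat:.3f}")
```

Maximising abc_loglik over the grid gives the ABC-MLE; shrinking ε shrinks the bias, at the price of a flatter likelihood surface.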
In conclusion, here comes a second paper that validates [noisy] ABC without non-parametric arguments. Both of these recent papers make me appreciate the idea of noisy ABC even further: at first, I liked the concept but found the randomisation it involves rather counter-intuitive from a Bayesian perspective. Now, I rather perceive it as a duplication of the randomness in the data, one that brings the simulated model closer to the observed model.
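As a tiny illustration of that duplication, the noisy version simply jitters each observation over the same ε-ball used for acceptance before running the inference above; a hypothetical helper (the name and the reuse of eps are assumptions for this sketch):

```python
import numpy as np

rng = np.random.default_rng(1)

def noisify(y, eps, rng=rng):
    """Noisy ABC randomisation (sketch): perturb each observation
    uniformly over the eps-ball, so the observed data carry the same
    extra uniform noise as the accepted pseudo-data."""
    return y + rng.uniform(-eps, eps, size=y.shape)
```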