Archive for model misspecification

model misspecification in ABC

Posted in Statistics on August 21, 2017 by xi'an

With David Frazier and Judith Rousseau, we just arXived a paper studying the impact of a misspecified model on the outcome of an ABC run. This is a question that naturally arises when using ABC, but that has not been directly covered in the literature, apart from a recently arXived paper by James Ridgway [that was commented on the ‘Og earlier this month]. On the one hand, ABC can be seen as a robust method in that it focuses on the aspects of the assumed model that are translated by the [insufficient] summary statistics and their expectation. And nothing else. It is thus tolerant of departures from the hypothetical model that [almost] preserve those moments. On the other hand, ABC involves a degree of non-parametric estimation of the intractable likelihood, which may sound even more robust, except that the likelihood is estimated from pseudo-data simulated from the “wrong” model in case of misspecification.
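To fix ideas, here is a minimal sketch of plain accept/reject ABC in a deliberately misspecified toy setting: the data come from a Gamma distribution while the assumed model is a unit-variance Normal, and the summary statistic is the sample mean. All names and numerical choices below are purely illustrative and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_reject(y_obs, prior_sampler, simulator, summary, n_sims, tol):
    """Plain accept/reject ABC: keep parameter draws whose simulated
    summary falls within `tol` of the observed summary."""
    s_obs = summary(y_obs)
    accepted = []
    for _ in range(n_sims):
        theta = prior_sampler()
        z = simulator(theta, len(y_obs))        # pseudo-data from the *assumed* model
        if abs(summary(z) - s_obs) <= tol:      # distance on the summary statistic
            accepted.append(theta)
    return np.array(accepted)

# Illustrative setup: data generated from a Gamma(2,1), while the assumed
# model is N(theta, 1) -- a misspecified model for everything but the mean.
y_obs = rng.gamma(shape=2.0, scale=1.0, size=200)           # true generating process
prior = lambda: rng.normal(0.0, 5.0)                         # vague prior on theta
model = lambda theta, n: rng.normal(theta, 1.0, size=n)      # assumed (wrong) model
summary = np.mean                                            # insufficient summary

post = abc_reject(y_obs, prior, model, summary, n_sims=20_000, tol=0.1)
print(post.mean(), post.std())   # mass gathers near the pseudo-true value (about 2 here)
```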

In the paper, we examine how the pseudo-true value of the parameter [that is, the value of the parameter of the misspecified model that comes closest to the generating model in terms of Kullback-Leibler divergence] is asymptotically reached by some ABC algorithms, like the ABC accept/reject approach, but not by others, like the popular linear regression [post-simulation] adjustment, which surprisingly concentrates posterior mass on a completely different pseudo-true value. Exploiting our recent assessment of ABC convergence for well-specified models, we show the above convergence result for a tolerance sequence that decreases to the minimum possible distance [between the true expectation and the misspecified expectation] at a slow enough rate, or equivalently for a sequence of acceptance probabilities that goes to zero at the proper speed. In the case of the regression correction, the pseudo-true value is shifted by a quantity that does not converge to zero, because of the misspecification in the expectation of the summary statistics. This is not immensely surprising, but it paints a very different picture from the well-specified case, where regression corrections improve the asymptotic behaviour of the ABC estimators. This discrepancy between two versions of ABC can be exploited to derive misspecification diagnostics, e.g., through the behaviour of the acceptance rate as the tolerance level decreases, or via a comparison of the ABC approximations to the posterior expectations of quantities of interest, which should diverge at rate √n. In both cases, ABC reference tables/learning bases can be exploited to draw and calibrate a comparison with the well-specified case.
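The comparison diagnostic can be illustrated with a self-contained sketch contrasting accept/reject ABC with a Beaumont-style linear regression adjustment. The setup below (assumed N(theta,1) model, data with variance 2, a two-dimensional mean/variance summary whose variance component can never be matched) is my own toy choice, not the paper's experiment, and the specific numbers are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def summary(x):
    # Two-dimensional summary: sample mean and sample variance.
    return np.array([x.mean(), x.var()])

def abc_sample(y_obs, n_sims, tol):
    """Accept/reject ABC for an assumed N(theta, 1) model; keeps the
    accepted parameters together with their simulated summaries."""
    s_obs = summary(y_obs)
    thetas, sims = [], []
    for _ in range(n_sims):
        theta = rng.normal(0.0, 5.0)                            # prior draw
        s = summary(rng.normal(theta, 1.0, size=len(y_obs)))    # pseudo-data summary
        if np.linalg.norm(s - s_obs) <= tol:
            thetas.append(theta)
            sims.append(s)
    return np.array(thetas), np.array(sims), s_obs

def regression_adjust(thetas, sims, s_obs):
    """Linear regression [post-simulation] adjustment:
    theta_adj = theta - (s - s_obs) @ beta_hat."""
    X = np.column_stack([np.ones(len(sims)), sims - s_obs])
    coef, *_ = np.linalg.lstsq(X, thetas, rcond=None)
    return thetas - (sims - s_obs) @ coef[1:]

# Data with variance 2, while the assumed model has variance 1: the variance
# component of the summary cannot be matched, whatever the tolerance.
y_obs = rng.normal(2.0, np.sqrt(2.0), size=200)
thetas, sims, s_obs = abc_sample(y_obs, n_sims=20_000, tol=1.3)
adjusted = regression_adjust(thetas, sims, s_obs)
# A marked gap between the two posterior means is the kind of discrepancy
# the misspecification diagnostic looks for.
print("accept/reject:", thetas.mean(), "regression-adjusted:", adjusted.mean())
```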

Bayesian brittleness, again

Posted in Books, Statistics on September 11, 2013 by xi'an

“With the advent of high-performance computing, Bayesian methods are increasingly popular tools for the quantification of uncertainty throughout science and industry. Since these methods impact the making of sometimes critical decisions in increasingly complicated contexts, the sensitivity of their posterior conclusions with respect to the underlying models and prior beliefs is becoming a pressing question.”

A second paper by Owhadi, Scovel and Sullivan on Bayesian brittleness has just been arXived. This one has the dramatic title `When Bayesian inference shatters‘…! If you remember (or simply check) my earlier post, the topic of this work is the robustness of Bayesian inference under model misspecification, a robustness which, from the authors’ perspective, is completely lacking. This paper is much shorter than the earlier one (and reads like a commentary on it), but it concludes in a similar manner, namely that Bayesian inference suffers from `maximal brittleness under local mis-specification’ (p.6), which means that `the range of posterior predictions among all admissible priors is as wide as the deterministic range of the quantity of interest’ when the true model is not within the range of the parametric models covered by the prior distribution. The novelty in the paper appears to be the extension that, even when we consider only the first k moments of the unknown distribution, Bayesian inference is not robust (this is called the Brittleness Theorem, p.9). As stated earlier, while I appreciate this sort of theoretical derivation, I am somewhat dubious as to whether or not it impacts the practice of Bayesian statistics to the extent suggested in the above quote. In particular, I do not see how those results cast more doubt on the impact of the prior modelling on the posterior outcome. While we all (?) agree that “any given prior and model can be slightly perturbed to achieve any desired posterior conclusion”, the repeatability or falsifiability of the Bayesian experiment (change your prior and run the experiment afresh) allows for an assessment of the posterior outcome that prevents under-the-carpet effects.
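That re-run-with-another-prior check is easy to operationalise; here is a toy illustration on a conjugate Normal model, where the posterior mean is recomputed under a few perturbed priors and the conclusions are compared. The model and prior choices are mine and purely illustrative.

```python
import numpy as np

def posterior_mean_normal(y, prior_mean, prior_sd, sigma=1.0):
    """Posterior mean of theta for y_i ~ N(theta, sigma^2)
    under a conjugate N(prior_mean, prior_sd^2) prior."""
    n = len(y)
    precision = 1.0 / prior_sd**2 + n / sigma**2
    return (prior_mean / prior_sd**2 + y.sum() / sigma**2) / precision

rng = np.random.default_rng(2)
y = rng.normal(1.0, 1.0, size=50)

# Re-run the "experiment" under perturbed priors and compare conclusions.
for m, s in [(0.0, 1.0), (0.0, 10.0), (5.0, 1.0)]:
    print(f"prior N({m}, {s}^2) -> posterior mean {posterior_mean_normal(y, m, s):.3f}")
```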
