Archive for Approximate Bayesian computation

Relevant statistics for Bayesian model choice [hot off the press!]

Posted in Books, Statistics, University life on October 30, 2014 by xi'an

Our paper about evaluating statistics used for ABC model choice has just appeared in Series B! It is somewhat paradoxical that it comes out just a few days after we submitted our paper on using random forests for Bayesian model choice, thus bypassing the need for selecting those summary statistics by incorporating all statistics available and letting the trees automatically rank those statistics in terms of their discriminating power. Nonetheless, this paper remains an exciting piece of work (!) as it addresses the more general and pressing question of the validity of running a Bayesian analysis with only part of the information contained in the data. Quite useful in my (biased) opinion when considering the emergence of approximate inference already discussed on this ‘Og…

[As a trivial aside, I had first used fresh from the press(es) as the bracketed comment, before I realised the meaning was not necessarily the same in English and in French.]

accurate ABC: comments by Oliver Ratman [guest post]

Posted in R, Statistics, University life on May 31, 2013 by xi'an

Here are comments by Olli following my post:

I think we found a general means to obtain accurate ABC in the sense of matching the posterior mean or MAP exactly, and then minimising the KL distance between the true posterior and its ABC approximation subject to this condition. The construction works on an auxiliary probability space, much like indirect inference. Now, we construct this probability space empirically; this is where our approach first differs from indirect inference, and this is where we need the “summary values” (>1 data points on a summary level; see Figure 1 for clarification). Without replication, we cannot model the distribution of summary values, but doing so is essential to construct this space. Now, let's focus on the auxiliary space. We can fiddle with the tolerances (on a population level) and m so that on this space, the ABC approximation has the aforesaid properties. All the heavy technical work is in this part. Intuitively, as m increases, the power increases for sufficiently regular tests (see Figure 2) and consequently, for calibrated tolerances, the ABC approximation on the auxiliary space gets tighter. This offsets the broadening effect of the tolerances, so having non-identical lower and upper tolerances is fine and does not hurt the approximation. Now, we need to transport the close-to-exact ABC approximation on the auxiliary space back to the original space. We need some assumptions here, and given our time series example, it seems these are not unreasonable. We can reconstruct the link between the auxiliary space and the original parameter space as we accept/reject. This helps us understand (with the videos!) the behaviour of the transformation and judge if its properties satisfy the assumptions of Theorems 2-4. While we offer some tools to understand the behaviour of the link function, yes, we think more work could be done here to improve on our first attempt at accurate ABC.
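[To make the construction above a little more concrete, here is a minimal R sketch of a test-based accept/reject step with replicated summary values; the Gaussian toy model for the summary values, the flat prior, and the fixed tolerance are placeholders of mine, not the calibrated choices of the paper.]

```r
## Minimal sketch: ABC accept/reject driven by a location test on replicated
## summary values (toy Gaussian model, flat prior, fixed tolerance).
set.seed(1)
m      <- 50                              # number of summary values per dataset
x_obs  <- rnorm(m, mean = 2, sd = 1)      # observed summary values
n_sim  <- 1e4
tau    <- 0.3                             # tolerance on the test statistic

theta  <- runif(n_sim, -5, 5)             # prior draws for the mean parameter
accept <- logical(n_sim)
for (i in seq_len(n_sim)) {
  x_sim  <- rnorm(m, mean = theta[i], sd = 1)    # simulated summary values
  t_stat <- (mean(x_sim) - mean(x_obs)) /        # two-sample location statistic
            sqrt(var(x_sim) / m + var(x_obs) / m)
  accept[i] <- abs(t_stat) <= tau                # accept when the test does not reject
}
post <- theta[accept]                     # ABC sample
c(mean = mean(post), sd = sd(post))
```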

Now some more specific comments:
“The paper also insists over and over on sufficiency, which I fear is a lost cause.” To clarify, all we say is that on the simple auxiliary space, sufficient summaries are easily found. For example, if the summary values are normally distributed, the sample mean and the sample variance are sufficient statistics. Of course, this is not the original parameter space and we only transform the sufficiency problem into a change of variable problem. This is why we think that inspecting and understanding the link function is important.

“Another worry is that the … test(s) rel(y) on an elaborate calibration”. We provide some code here for everyone to try out. In our examples, this did not slow down ABC considerably. We generally suppose that the distribution of the summary values is simple, like Gaussian, Exponential, Gamma, Chi-square, or Lognormal. In these cases, the ABC approximation takes on an easy-enough-to-calibrate-fast functional form on the auxiliary space.
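[For illustration, assuming unit-variance Gaussian summary values and a two-sided level-α test on the mean, the calibrated tolerance is just a normal quantile, so the acceptance region shrinks at rate 1/√m as the number of summary values grows; this is my own toy rendering, not the authors' code.]

```r
## Toy calibration of the ABC tolerance for unit-variance Gaussian summary values:
## accept when the standardised statistic lies within the level-alpha test bounds.
alpha   <- 0.05
m       <- 50
tau     <- qnorm(1 - alpha / 2)   # tolerance on the standardised statistic
tau_raw <- tau / sqrt(m)          # equivalent tolerance on the raw sample mean
c(tau_standardised = tau, tau_on_mean_scale = tau_raw)
```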

“This Theorem 3 sounds fantastic but makes me uneasy: unbiasedness is a sparse property that is rarely found in statistical problems. … Witness the use of “essentially unbiased” in Fig. 4.” What Theorem 3 says is that if unbiasedness can be achieved on the simple auxiliary space, then there are regularity conditions under which these properties can be transported back to the original parameter space. We hope to illustrate these conditions with our examples, and to show that they hold in quite general cases such as the time series application. The thing in Figure 4 is that the sample autocorrelation is not an unbiased estimator of the population autocorrelation. So unbiasedness does not quite hold on the auxiliary space and the conditions of Theorem 3 are not satisfied. Nevertheless, we found this bias to be rather negligible in our example and the bigger concern was the effect of the link function.
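[The downward bias of the sample autocorrelation mentioned above is easy to check by simulation; here is a quick R illustration on an AR(1) series, unrelated to the specific time series application of the paper.]

```r
## Quick check that the lag-1 sample autocorrelation is biased downwards:
## many short AR(1) series with true rho = 0.8, averaging the estimates.
set.seed(2)
rho  <- 0.8
n    <- 50
rhat <- replicate(5000, {
  x <- arima.sim(model = list(ar = rho), n = n)
  acf(x, lag.max = 1, plot = FALSE)$acf[2]
})
c(true = rho, mean_estimate = mean(rhat))   # the average falls below 0.8
```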

And here are Olli’s slides:

accurate ABC?

Posted in Statistics, Travel, University life on May 29, 2013 by xi'an

As posted in the previous entry, Olli Ratman, Anton Camacho, Adam Meijer, and Gé Donker arXived their paper on accurate ABC. A paper which [not whose!] avatars I was privy to over the past six months! While I acknowledge the cleverness of the reformulation of the core ABC accept/reject step as a statistical test, and while we discussed the core ideas with Olli and Anton when I visited Gatsby, the paper still eludes me in some respects… Here is why. (Obviously, you should read this rich & challenging paper first for the comments to make any sense! And even then they may make little sense…)

The central idea of this accurate ABC [aABC? A²BC?] is that, if the distribution of the summary statistics is known and if replicas of those summary statistics are available for the true data (and less problematically for the generated data), then a classical statistical test can be turned into a natural distance measure for each statistic and even “natural” bounds can be found on that distance, to the point of recovering most properties of the original posterior distribution… A first worry is this notion that the statistical distribution of a collection of summary statistics is available in closed form: this sounds unrealistic even though it may not constitute a major contention issue. Indeed, replacing a tailored test with a distribution-free test of identical location parameter could not hurt that much. [Just the power. If that matters... See below.] The paper also insists over and over on sufficiency, which I fear is a lost cause. In my current understanding of ABC, the loss of some amount of information contained in the data should be acknowledged and given a write-off as a Big Data casualty. (See, e.g., Lemma 1.)
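To illustrate what I mean by a distribution-free replacement, the discrepancy in the accept/reject step could just as well be the p-value of a Wilcoxon rank-sum test between observed and simulated summary values, at the sole cost of power; a rough sketch of mine, not taken from the paper:

```r
## Sketch: a distribution-free location test as the ABC acceptance rule.
## Accept the proposed parameter when the Wilcoxon rank-sum test comparing
## observed and simulated summary values does not reject at level alpha.
abc_accept_wilcox <- function(x_obs, x_sim, alpha = 0.05) {
  wilcox.test(x_obs, x_sim)$p.value > alpha
}
## e.g., with toy summary values:
abc_accept_wilcox(rnorm(50, mean = 2), rnorm(50, mean = 2.1))
```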

Another worry is that the rephrasing of the acceptance distance as the maximal difference for a particular test relies on an elaborate calibration, incl. α, c⁺, τ⁺, &tc. (I am not particularly convinced by the calibration in terms of the power of the test being maximised at the point null value. Power?! See below, once again.) When cumulating tests and aiming at a nominal α level, the orthogonality of the test statistics in Theorem 1(iii) is puzzling and I think unrealistic.

The notion of accuracy that is central to the paper and its title corresponds to the power of every test being maximal at the true value of the parameter. And somehow to the ABC approximation being maximised at the true parameter, even though I am lost by then [i.e. around eqn (18)] about the meaning of ρ*… The major result in the paper is however that, under the collection of assumptions made therein, the ABC MLE and MAP versions are equal to their exact counterparts. And that these versions are also unbiased. This Theorem 3 sounds fantastic but makes me uneasy: unbiasedness is a sparse property that is rarely found in statistical problems. Change the parameterisation and you lose unbiasedness, and even the possibility of finding an unbiased estimator. Since this difficulty does not appear in the paper, I would conclude that either the assumptions are quite constraining or the result holds in a weaker sense… (Witness the use of “essentially unbiased” in Fig. 4.)
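As a standard illustration of this fragility (mine, not the paper's): the sample variance is unbiased for σ², but its square root is biased for σ, so a mere change of parameterisation destroys the property.

```r
## Unbiasedness does not survive reparameterisation: s^2 is unbiased for
## sigma^2, yet s = sqrt(s^2) underestimates sigma on average.
set.seed(3)
sigma <- 2
n     <- 10
s2    <- replicate(1e5, var(rnorm(n, sd = sigma)))
c(mean_s2 = mean(s2), sigma2 = sigma^2,      # close: unbiased for sigma^2
  mean_s  = mean(sqrt(s2)), sigma = sigma)   # mean_s < sigma: biased for sigma
```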

This may be a wee rude comment (even for a Frenchman) but I also felt the paper could be better written, in that notations pop up unannounced. For instance, on page 2, x [the data] becomes x_{1:n}, which becomes s^k_{1:n}. This seems to imply that the summary statistics are observed repeatedly over the true sample. Unless n=1, this does not seem realistic. (I do not understand everything in Example 1, in particular the complaint that the ABC solutions were biased for finite values of n. That sounds like an odd criticism of Bayesian estimators. Now, it seems the paper is very intent on achieving unbiasedness! So maybe it should be called the aAnsBC algorithm for “not-so-Bayes”!) I am also puzzled by the distinction between summary values and summary statistics. This sounds like insisting on having a large enough iid dataset. Or, on page 5, the discussion that the summary parameters are replaced by estimates seems out of context, because this adds an additional layer of notation to the existing summary “stuff”… With the additional difficulty that Lemma 1 assumes a reparameterisation of the model in terms of those summary parameters. I also object to the point null hypotheses being written in terms of a point estimate, i.e., of a quantity depending on the data x: it sounds like confusing the test [procedure] with the test [problem]. Another example: I read several times Lemma 5, about the calibration of the number of ABC simulations m, but cannot fathom what this m is calibrated against. It seems only a certain value of m achieves the accurate correspondence with the genuine posterior, which sounds counter-intuitive. Last counter-example: some pictures seemed to be missing from the Appendix, but as it happened, it was only my tablet being unable to process them! S2 is actually a movie about link functions, really cool!

In conclusion, this is indeed a rich and challenging paper. I am certain I will get a better understanding by listening to Olli’s talk in Roma. And discussing with him.

austerity in MCMC land

Posted in Statistics on April 28, 2013 by xi'an

Anoop Korattikara, Yutian Chen and Max Welling recently arXived a paper on the appeal of using only part of the data to speed up MCMC. This is different from the growing literature on unbiased estimators of the likelihood exemplified by Andrieu & Roberts (2009). Here, the approximation to the true target is akin to the approximation in ABC algorithms, in that a value of the parameter is accepted if the difference in the likelihoods is larger than a given bound. Expressing this perspective as a test on the mean of the log-likelihood differences leads the authors to use a subsample of the whole sample instead. (The approximation level ε is then a bound on the p-value.) While this idea only applies to iid settings, it is quite interesting and sounds a wee bit like a bootstrapped version of MCMC, especially since it could provide an auto-evaluation of its error.
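Here is a bare-bones sketch of the idea as I understand it, on a toy Gaussian-mean model with a flat prior and a symmetric proposal (my own simplification, not the authors' algorithm with its sequential refinements): at each Metropolis step, a t-test on a growing subsample of log-likelihood differences settles the acceptance as soon as it is conclusive at level ε.

```r
## Toy sketch of subsampled Metropolis acceptance via a t-test.
set.seed(4)
y <- rnorm(1000, mean = 1)                     # full (iid) dataset
loglik_terms <- function(theta) dnorm(y, mean = theta, log = TRUE)

accept_subsample <- function(theta_prop, theta_cur, u, eps = 0.05, batch = 100) {
  d   <- loglik_terms(theta_prop) - loglik_terms(theta_cur)  # per-observation differences
  mu0 <- log(u) / length(y)                    # threshold set by the uniform draw u
  idx <- sample(length(d))                     # random order of the observations
  n   <- 0
  repeat {
    n  <- min(n + batch, length(d))            # enlarge the subsample
    di <- d[idx[seq_len(n)]]
    tt <- t.test(di, mu = mu0)                 # is the mean difference away from mu0?
    if (tt$p.value < eps || n == length(d))    # conclusive test, or all data used
      return(mean(di) > mu0)                   # accept iff the mean exceeds the threshold
  }
}
## e.g., inside a Metropolis loop: accept_subsample(1.05, 0.95, u = runif(1))
```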

workshop a Venezia

Posted in pictures, Running, Statistics, Travel, University life on October 9, 2012 by xi'an

I came to Venezia today to give two talks on recent advances in ABC, within a workshop on Likelihood, approximate likelihood and nonparametric statistical methods for complex applications, organised by Ca’ Foscari University and marking the end of two 2008 research projects.

It is always the same pleasure to stay in Venezia, to roam the back streets away from the crowds, and to discover hidden passageways, magnificent buildings, and unsuspected trattorias with the most exquisite seafood (being with Italian friends obviously helping a lot!)… I actually did a short run this morning, spending more time waiting for the sun to rise behind San Giorgio Maggiore than running. Here are the slides of my tutorial, mostly for the benefit of the workshop attendees, as they replicate those I gave in Bristol last week. (Plus the two sections on ABC model choice.)

dimension reduction in ABC [a review's review]

Posted in Statistics, University life on February 27, 2012 by xi'an

What is very apparent from this study is that there is no single `best’ method of dimension reduction for ABC.

Michael Blum, Matt Nunes, Dennis Prangle and Scott Sisson just posted on arXiv a rather long review of dimension reduction methods in ABC, along with a comparison on three specific models. Given that the choice of the vector of summary statistics is presumably the most important single step in an ABC algorithm, and since selecting too large a vector is bound to fall victim to the curse of dimensionality, this is a fairly relevant review! Therein, the authors compare regression adjustments à la Beaumont et al. (2002), subset selection methods, as in Joyce and Marjoram (2008), and projection techniques, as in Fearnhead and Prangle (2012). They add to this impressive battery of methods the potential use of AIC and BIC. (Last year after ABC in London I reported here on the use of the alternative DIC by Francois and Laval, but the paper is not in the bibliography; I wonder why.) An argument (page 22) for using AIC/BIC is that either provides indirect information about the approximation of p(θ|y) by p(θ|s); this does not seem obvious to me.
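For readers less familiar with the first of these three approaches, here is a minimal R sketch of the local-linear regression adjustment of Beaumont et al. (2002) on a toy Gaussian-mean model; the summary, the 1% acceptance quantile, and the Epanechnikov-type weights are simplistic placeholders rather than the settings used in the review.

```r
## Minimal sketch of ABC with local-linear regression adjustment
## (in the spirit of Beaumont et al., 2002) on a toy Gaussian-mean model.
set.seed(5)
y_obs <- rnorm(100, mean = 2)
s_obs <- mean(y_obs)                                    # observed summary statistic
n_sim <- 1e4
theta <- runif(n_sim, -5, 5)                            # prior draws
s_sim <- sapply(theta, function(t) mean(rnorm(100, mean = t)))

d    <- abs(s_sim - s_obs)
h    <- quantile(d, 0.01)                               # keep the closest 1%
keep <- d <= h
w    <- 1 - (d[keep] / h)^2                             # Epanechnikov-type weights
fit  <- lm(theta[keep] ~ s_sim[keep], weights = w)      # weighted local-linear fit
theta_adj <- theta[keep] - coef(fit)[2] * (s_sim[keep] - s_obs)
c(raw_mean = mean(theta[keep]), adjusted_mean = mean(theta_adj))
```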

The paper also suggests a further regularisation of Beaumont et al. (2002) by ridge regression, although an L1 penalty à la Lasso would be more appropriate in my opinion for removing extraneous summary statistics. (I must acknowledge never being a big fan of ridge regression, esp. in the ad hoc version à la Hoerl and Kennard, i.e. in a non-decision-theoretic approach where the hyperparameter λ is derived from the data by cross-validation, since it then sounds like a poor man’s Bayes/Stein estimate, just like BIC is a first-order approximation to regular Bayes factors… Why pay for the copy when you can afford the original?!) Unsurprisingly, ridge regression does better than plain regression in the comparison experiment when there are many almost collinear summary statistics, but an alternative conclusion could be that regression analysis is not that appropriate with many summary statistics. Indeed, summary statistics are not quantities of interest but data-summarising tools towards a better approximation of the posterior at a given computational cost… (I do not get the final comment, page 36, about the relevance of summary statistics for MCMC or SMC algorithms: the criterion should be the best approximation of p(θ|y), which does not depend on the type of algorithm.)
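To make the ridge-versus-lasso point concrete, here is a sketch of how a penalised fit could replace the plain regression in the adjustment step, using the glmnet package (assuming it is installed): alpha = 0 gives ridge, while alpha = 1 gives the L1/Lasso penalty I would favour for discarding extraneous summaries. The simulated summaries and parameter draws are toy placeholders.

```r
## Sketch: penalised regression adjustment with many (possibly collinear)
## summary statistics; alpha = 0 is ridge, alpha = 1 is the Lasso.
library(glmnet)
set.seed(6)
n_keep <- 500; p <- 20
S      <- matrix(rnorm(n_keep * p), n_keep, p)   # retained simulated summaries
theta  <- S[, 1] + 0.5 * S[, 2] + rnorm(n_keep)  # retained parameter draws
s_obs  <- rnorm(p)                               # observed summaries

fit  <- cv.glmnet(S, theta, alpha = 1)           # lambda chosen by cross-validation
beta <- as.numeric(coef(fit, s = "lambda.min"))[-1]
theta_adj <- theta - as.vector(sweep(S, 2, s_obs) %*% beta)
c(raw_mean = mean(theta), adjusted_mean = mean(theta_adj))
```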

I find it quite exciting to see the development of a new range of ABC papers like this review, dedicated to a better derivation of summary statistics in ABC, each with different perspectives and desiderata, as it will help us understand where ABC works and where it fails, and how we could get beyond ABC…
