Archive for threshold determination

threshold schedules for ABC-SMC (reply from the authors)

Posted in Statistics on October 19, 2012 by xi'an

(Below is a reply from the authors of the threshold schedules for ABC-SMC paper I discussed two days ago.)

In our experience the problems we seek to address arise routinely when adaptive quantile schedules are employed, and they are not due to poor initial exploration of the parameter space. Rather, when sub-optimal quantiles are chosen we observe what has often been called the “survival of the fattest”: a finite population of particles easily gets accepted into regions that satisfy some less stringent cost function (as in simulated annealing at high temperature) but thereby drifts (in the genetic sense) away from better solutions that would also satisfy tighter thresholds. We do not claim to be the first to point this out, but we believe that, depending on the problem, any quantile method will be susceptible to this. A similar point has been made, perhaps more beautifully, by Calvet and Czellar (2012).

The toy example is maybe slightly misleading in the sense that parameter-space and data-space can be mapped bijectively. It is not, however, fanciful: it is an illustration of “survival of the fattest”. And it also demonstrates that reliance on a fixed quantile (say 10%) for any type of problem is likely to result in systematically biased, plainly incorrect “posteriors” in some if not most cases.

We do not claim that the optimal ε is zero. But clearly it is not the one dictated by some fixed computational cost you want to get away with (as is the case when people take the best n out of N simulations). What we try to argue is that the threshold should be set based on the problem under consideration. Therefore the choice of ε should make use of the hypothetical acceptance curve, ℵ.

Clearly we do not know ℵ, but we show that the unscented transform (UT) is one way of second-guessing its shape. That each guess depends on the present population is both obvious and necessary.

A CDF-based estimate for ℵ using the empirical distances from the previous population would ignore the role of the perturbation kernel that we employ (and which should also be problem-specific). This, we found (and in hindsight it should have been obvious), has a huge impact on efficiency, and in particular on the validity of ℵ. Of course, if we already had the CDF of the present target population (i.e. the whole set of perturbed particles) we would be fine; but knowing the answer makes any question look easy. For the types of problems that interest us, an answer is definitely obtained more quickly when we use the UT.
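To make the contrast concrete, here is a minimal sketch (in Python, with hypothetical user-supplied perturb, simulate and distance functions) of the brute-force route to the CDF of the perturbed population; the simulate calls are exactly the expensive step that the UT approximation is meant to sidestep.

    import numpy as np

    def acceptance_curve_bruteforce(particles, weights, perturb, simulate,
                                    distance, x_star, eps_grid, rng=None):
        """Brute-force estimate of the acceptance curve aleph_t(eps) for the
        perturbed population: resample from the weighted previous population,
        apply the perturbation kernel, simulate pseudo-data, record distances.
        The simulate() calls are the costly step avoided by the UT."""
        rng = np.random.default_rng() if rng is None else rng
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        idx = rng.choice(len(particles), size=len(particles), p=w)
        dists = []
        for i in idx:
            theta = perturb(particles[i], rng)   # perturbation kernel K_t
            x = simulate(theta, rng)             # expensive forward simulation
            dists.append(distance(x, x_star))
        dists = np.asarray(dists)
        # empirical CDF of the distances, evaluated on the tolerance grid
        return np.array([(dists <= eps).mean() for eps in eps_grid])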

Maybe the problems that we are dealing with in our “day-jobs” (in the analysis of biomolecular and non-linear dynamical systems) make the shortcomings of the “quantile-method” more readily apparent; against this background the proposed approach is perhaps more easily understood.

threshold schedules for ABC-SMC

Posted in Statistics, University life on October 17, 2012 by xi'an

Daniel Silk, Sarah Filippi and Michael Stumpf just posted on arXiv a new paper on ABC-SMC. They attack therein a typical problem with SMC (and PMC, and even MCMC!) methods, namely their tendency to miss (global) modes of the target due to a poor initial exploration. So, if a proposal is built on previous simulations and if those simulations have missed an important mode, it is quite likely that this proposal will concentrate on other parts of the space and miss the important mode even more. This is also why simulated annealing and other stochastic optimisation algorithms are so fickle: a wrong parameterisation (like the temperature schedule) induces the sequence to converge to a local optimum rather than to the global optimum. Since sequential ABC is a form of simulated annealing, with the decreasing tolerance (or threshold) playing the part of the temperature, it is no surprise that it suffers from this generic disease…

The method proposed in the paper aims at avoiding this difficulty by looking at sudden drops in the acceptance rate curve (as a function of the tolerance ε),

\aleph_t(\epsilon)=\int p_t(x)\mathbb{I}(\Delta(x,x^\star)\le \epsilon)\,\text{d}x,

suggesting for threshold the value of ε that maximises the second derivative of this curve. Now, before getting to the issue of turning this principle into a practical algorithm, let me linger on the motivation for it:

“To see this, one can imagine an ε-ball expanding about the true data; at first the ball only encompasses a small number of particles that were drawn from very close to the global maximum, corresponding to the low gradient at the foot of the shape. Once ε is large enough we are able to accept the relatively large number of particles sitting in the local maximum, which causes the increase in gradient.” (p.5)

The argument for looking at values of ε preceding fast increases in the acceptance rate ℵ is thus that we avoid flatter and lower regions of the posterior support corresponding to a local maximum. It clearly encompasses the case studied by the authors of a highly concentrated global mode surrounded by flatter, lower modes, but it seems to me that this is not necessarily the only possible reason for changes in the shape of the acceptance probability ℵ. First, we are looking at an ABC acceptance rate, not at a genuine likelihood. Second, this acceptance rate function depends on the current (and necessarily rough) approximation to the genuine predictive, p_t. Furthermore, when taking into account this rudimentary replacement of the true likelihood function, it is rather difficult to envision how it impacts the correspondence between a proximity in the data and a proximity in the parameter space. (The toy example is both illuminating and misleading, because it considers a setting where the data is a deterministic transform of the parameter.) I thus think the analysis should refer more strongly to the non-parametric literature, and in particular to the k-nearest-neighbour approach recently reformulated by Biau et al.: there is no reason to push the tolerance ε all the way down to zero, as this limit does not correspond to the optimal value of the tolerance. If one does not use a non-parametric determination of the “right” tolerance, the lingering question is when and why to stop the sequence of ABC simulations.
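As a point of comparison, the “best k out of N” device mentioned above simply reads the tolerance off the order statistics of the simulated distances; a minimal sketch follows, with the calibration of k (e.g. as a function of N, following the non-parametric literature) deliberately left open.

    import numpy as np

    def knn_tolerance(dists, k):
        """'Best k out of N' tolerance: accept the k simulations closest to
        the observed data, i.e. take eps equal to the k-th order statistic of
        the distances.  How k should grow with N (the non-parametric
        calibration discussed by Biau et al.) is left unspecified here."""
        dists = np.asarray(dists, dtype=float)
        eps = np.partition(dists, k - 1)[k - 1]   # k-th smallest distance
        accepted = np.flatnonzero(dists <= eps)   # indices of accepted draws
        return eps, accepted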

The acceptance rate function ℵ is approximated using an unscented transform. I understand neither the implementation of, nor the motivation for, this choice. Given that the function ℵ is a one-dimensional transform of the tolerance and that it actually corresponds to the cdf of the distance between the true data and the (current) pseudo-data, why couldn’t we use a smooth version of a cdf estimate based on the simulated distances, since these distances are already computed…? I must be missing something.
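For what it is worth, here is a rough sketch of this alternative (a guess of mine, not the authors' unscented-transform construction): smooth the weighted empirical cdf of the already-computed distances with a Gaussian kernel and take as next threshold the ε maximising its numerical second derivative.

    import numpy as np
    from scipy.stats import norm

    def threshold_from_distances(dists, weights=None, bandwidth=None, grid_size=200):
        """Smoothed-cdf estimate of aleph(eps) from already-computed distances,
        with the next tolerance taken at the maximum of its numerical second
        derivative (the criterion of the paper, but with a cdf estimate in
        place of the unscented-transform approximation)."""
        dists = np.asarray(dists, dtype=float)
        n = len(dists)
        w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights, dtype=float)
        w = w / w.sum()
        if bandwidth is None:                      # rule-of-thumb bandwidth
            bandwidth = 1.06 * dists.std() * n ** (-1 / 5)
        eps_grid = np.linspace(dists.min(), dists.max(), grid_size)
        # smoothed cdf: weighted mixture of Gaussian cdfs centred at the distances
        aleph = np.array([(w * norm.cdf((e - dists) / bandwidth)).sum()
                          for e in eps_grid])
        d2 = np.gradient(np.gradient(aleph, eps_grid), eps_grid)   # 2nd derivative
        return eps_grid[np.argmax(d2)]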