Incoherent inference

“The probability of the nested special case must be less than or equal to the probability of the general model within which the special case is nested. Any statistic that assigns greater probability to the special case is incoherent. An example of incoherence is shown for the ABC method.” Alan Templeton, PNAS, 2010

Alan Templeton has just published an article in PNAS about “coherent and incoherent inference”, with applications to phylogeography and human evolution. While his (misguided) arguments are mostly those found in an earlier paper of his, discussed in this post as well as in the defence of model-based inference twenty-two of us published in Molecular Ecology a few months ago, the paper offers a more general critical perspective on Bayesian model comparison, piling up argument after argument about the incoherence of the Bayesian approach (and not of ABC, despite the way it is presented there). The notion of coherence is borrowed from the 1991 (Bayesian) paper of Lavine and Schervish on Bayes factors, which shows that Bayes factors may be non-monotonic in the alternative hypothesis (but also that posterior probabilities are not!). Templeton’s first argument proceeds from the quote above, namely that larger models should have larger probabilities, or else logic and coherence are violated! The author presents the reader with a Venn diagram to explain why a larger set should have a larger measure. Obviously, he does not account for the fact that, in model choice, different models induce different parameter spaces and that those spaces are endowed with orthogonal measures, especially when the spaces are of different dimensions: in the larger space, P(\theta_1=0)=0. (This point does not even touch upon the issue of defining “the” probability over the collection of models, which Templeton seems to take for granted but which does not make sense outside a Bayesian framework.) Talking of nested models having a smaller probability than the encompassing model, or of “partially overlapping models”, therefore does not make sense from a measure-theoretic (hence mathematical) perspective. (The fifty-one occurrences of coherent/incoherent in the paper do not bring additional weight to the argument!)
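To make the measure-theoretic point explicit, here is a minimal sketch of the standard Bayesian formulation of a point null nested within a larger model (the notation \rho_1, \rho_2 for the prior model weights and \pi_2 for the within-model prior is mine, introduced for illustration, not Templeton’s): comparing M_1: \theta_1=0 with M_2: \theta_1\in\Theta relies on separate prior measures over the two parameter spaces, as in

P(M_1|x) = \dfrac{\rho_1\, f(x|0)}{\rho_1\, f(x|0) + \rho_2 \int f(x|\theta_1)\,\pi_2(\theta_1)\, d\theta_1}

so the point \theta_1=0 receives positive mass only through the model weight \rho_1, being a measure-zero event under the continuous prior \pi_2 of the encompassing model. Comparing P(M_1|x) with P(M_2|x) thus never amounts to comparing the measures of a set and of one of its subsets within a single Venn diagram.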

“Approximate Bayesian computation (ABC) is presented as allowing statistical comparisons among models. ABC assigns posterior probabilities to a finite set of simulated a priori models.” Alan Templeton, PNAS, 2010

An issue common to all of Templeton’s recent criticisms is the misleading (or misled) confusion between the ABC method and the resulting Bayesian inference. For instance, Templeton distinguishes the incoherence of the ABC model choice procedure from the incoherence of the Bayes factor, when ABC is merely used as a computational device to approximate the Bayes factor. There is no inferential aspect linked with ABC per se: it is simply a numerical tool to approximate Bayesian procedures and, with enough computing power, the approximation can be made as precise as one wishes. In this paper, Templeton also reiterates the earlier criticism that marginal likelihoods are not comparable across models, because they “are not adjusted for the dimensionality of the data or the models” (sic!). This point misses the whole purpose of using marginal likelihoods, namely that they account for the dimensionality of the parameter by providing a natural Ockham’s razor that penalises the larger model without the need to specify a penalty term. (This is incidentally why BIC is so successful: it provides an approximation to this penalty, as does the alternative DIC.) The second criticism of ABC (i.e., of the Bayesian approach) is that model choice requires a collection of models and cannot decide outside this collection. This is indeed the purpose of Bayesian model choice, and studies like Berger and Sellke (1987, JASA) have shown the difficulty of reasoning within a single model. Furthermore, Templeton advocates the use of a likelihood ratio test, which necessarily involves two models. Another Venn diagram purports to explain why Bayes’ formula, when used for model choice, is “mathematically and logically incorrect”, because marginal likelihoods cannot be added up when models “overlap”: according to him, “there can be no universal denominator, because a simple sum always violates the constraints of logic when logically overlapping models are tested”. Once more, this simply shows a poor understanding of the probabilistic modelling involved in model choice.
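As a reminder of where the automatic penalty mentioned above comes from (a standard asymptotic result, not specific to Templeton’s setting nor to ABC), Schwarz’s approximation relates the marginal likelihood m(x) of a regular model with k parameters fitted to n observations to a penalised maximum log-likelihood,

-2\log m(x) \approx -2\log f(x|\hat{\theta}) + k\log n = \text{BIC}

so the dimension penalty k\log n is already built into m(x), with no ad hoc correction term left to append.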

“The central equation of ABC is inherently incoherent for three separate reasons, two of which are applicable in every case that deals with overlapping hypotheses.” Alan Templeton, PNAS, 2010

This argument relies on the representation of the “ABC equation” (sic!)

P(H_i|H,S^*) = \dfrac{G_i(||S_i-S^*||) \Pi_i}{\sum_{j=1}^n G_j(||S_j-S^*||) \Pi_j}

where S^* is the observed summary statistic, S_i is “the vector of expected (simulated) summary statistics under model i” and “G_i is a goodness-of-fit measure”. Templeton states that this “fundamental equation is mathematically incorrect in every instance (..) of overlap”. This representation of the ABC approximation is again misleading or misled in that the ABC simulation algorithm produces an approximation to a posterior sample from \pi_i(\theta_i|S^*). The resulting approximation to the marginal likelihood under model M_i is a regular Monte Carlo step that replaces an integral with a weighted sum, not a “goodness-of-fit measure” (see the sketch below). Templeton’s subsequent argument about the goodness-of-fit measures being “not adjusted for the dimensionality of the data” (re-sic!) and the resulting incoherence is therefore devoid of substance. The following argument repeats an earlier misunderstanding of the probabilistic model involved in Bayesian model choice: the reasoning that, if the prior weights are constrained to satisfy

\sum_j \Pi_j = 1

then “the constraints of logic are violated [and] the prior probabilities used in the very first step of their Bayesian analysis are incoherent”, once again ignores the issue of defining measures over mutually exclusive spaces.
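To make the previous point concrete, here is a minimal rejection-ABC sketch for model choice on a made-up toy problem, a normal mean with M_1 fixing \theta=0 and M_2 drawing \theta from a standard normal prior. None of the settings below come from Templeton’s or Fagundes et al.’s analyses; the sketch only illustrates that the acceptance frequencies form a crude Monte Carlo approximation of the posterior model probabilities, not a “goodness-of-fit” calibration.

import numpy as np

rng = np.random.default_rng(0)

# Toy setting: the observed summary statistic is the mean of n unit-variance
# observations; M1 fixes theta = 0, M2 puts a N(0,1) prior on theta, and the
# two models receive equal prior weights.
n, sigma = 50, 1.0
data = rng.normal(0.3, sigma, size=n)   # pseudo-observed data
s_obs = data.mean()                     # observed summary statistic S*

def simulate_summary(model):
    """Draw a parameter from the prior of `model` and return a simulated summary."""
    theta = 0.0 if model == 1 else rng.normal(0.0, 1.0)
    return rng.normal(theta, sigma, size=n).mean()

# Plain rejection ABC across models: pick a model from its prior weight,
# simulate a summary, and keep the model index when the simulated summary
# falls within eps of the observed one.
N, eps = 100_000, 0.02
kept = []
for _ in range(N):
    m = 1 if rng.random() < 0.5 else 2
    if abs(simulate_summary(m) - s_obs) < eps:
        kept.append(m)
kept = np.array(kept)

# The posterior probability of each model is approximated by the frequency of
# accepted simulations coming from that model, i.e. a weighted Monte Carlo sum
# replacing the integral defining each marginal likelihood.
for m in (1, 2):
    print(f"P(M{m} | S*) is approximately {np.mean(kept == m):.3f}")

Shrinking the tolerance eps and increasing the number of simulations N makes the approximation as precise as one wishes, which is the sense in which ABC is a numerical device rather than an inferential method of its own.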

“ABC is used for parameter estimation in addition to hypothesis testing and another source of incoherence is suggested from the internal discrepancy between the posterior probabilities generated by ABC and the parameter estimates found by ABC.” Alan Templeton, PNAS, 2010

The point corresponding to the above quote is that, while the posterior probability that \theta_1=0 (model M_1) is much higher than the posterior probability of the opposite (model M_2), the Bayes estimate of \theta_1 under model M_2 is “significantly different from zero”. Again, this reflects both a misunderstanding of the probability model, namely that \theta_1=0 is impossible [has measure zero] under model M_2, and a confusion between confidence intervals (that are model specific) and posterior probabilities (that work across models). The concluding message that “ABC is a deeply flawed Bayesian procedure in which ignorance overwhelms data to create massive incoherence” is thus unsubstantiated.
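As a small numerical illustration of why the two statements can coexist, consider a hypothetical normal-mean example (all numbers below, n, \tau, \bar{x}, are invented for the illustration and have nothing to do with Templeton’s data): with \bar{x} two standard errors away from zero, the posterior probability of M_1:\theta_1=0 exceeds one half while, within M_2, the posterior mean of \theta_1 is close to 0.2 and its 95% credible interval excludes zero. The two quantities simply answer different questions under different measures.

from math import exp, pi, sqrt

def normal_pdf(x, var):
    """Density of a N(0, var) distribution evaluated at x."""
    return exp(-x * x / (2.0 * var)) / sqrt(2.0 * pi * var)

# Hypothetical setting: xbar is the mean of n unit-variance observations;
# M1 sets theta = 0, M2 puts a N(0, tau^2) prior on theta.
n, tau2 = 100, 1.0
xbar = 0.2                    # two standard errors (se = 0.1) away from zero
se2 = 1.0 / n                 # variance of xbar given theta

m1 = normal_pdf(xbar, se2)          # marginal likelihood under M1
m2 = normal_pdf(xbar, tau2 + se2)   # marginal likelihood under M2
post_m1 = m1 / (m1 + m2)            # equal prior weights on the two models

# Conjugate normal-normal update for theta within M2
post_mean = xbar * tau2 / (tau2 + se2)
post_sd = sqrt(tau2 * se2 / (tau2 + se2))

print(f"P(M1 | xbar) = {post_m1:.2f}")            # above one half here
print(f"E[theta | xbar, M2] = {post_mean:.2f}")   # close to 0.2
print(f"95% credible interval under M2 = "
      f"[{post_mean - 1.96 * post_sd:.3f}, {post_mean + 1.96 * post_sd:.3f}]")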

“Incoherent methods, such as ABC, Bayes factor, or any simulation approach that treats all hypotheses as mutually exclusive, should never be used with logically overlapping hypotheses.” Alan Templeton, PNAS, 2010

In conclusion, I am quite surprised that this controversial piece of work was published in PNAS: the mathematical and statistical arguments of Professor Templeton should have been assessed by referees who are mathematicians and statisticians, in which case they would have spotted the obvious inconsistencies!

23 Responses to “Incoherent inference”

  1. I am afraid you missed my most important point: it is impossible to have the same prior for small and large models as they cannot share the same space. This is a mathematical impossibility result, not a whim from the Bayesian statistician. I do not want to repeat earlier arguments, but simply consider the BIC criterion. It naturally incorporates a penalty for the dimension of the parameter space and it is a first order approximation to the Bayes factor.

    I am relatively unschooled in Bayesian methods and therefore may be missing something obvious. But Fagundes et al. (2007), which Templeton uses as his main example, did not claim to use a BIC criterion.

    Nor do I see any sense in which the priors for the “large” and “small” models are anything but equal. The paper took the “large” model with the highest posterior among “larges”, and the “small” model with the highest posterior among “smalls”, and decided to accept the “small” model without any discussion of priors at all — which amounts to the assumption that their prior probability is identical. Mathematically impossible it may be, but that’s what the paper assumed!

    • John: (a) I am mentioning BIC to stress that Bayes factors and posterior probabilities of models put a “natural” penalty on larger parameter spaces.
      (b) In Fagundes et al. (2007) the number of parameters varies among the eight models, so the prior distributions on those parameters cannot be the same. (A uniform prior on \mathbb{R}^6 is not the same as a uniform prior on \mathbb{R}^{10}.) Putting the same prior weights on all eight models is another thing (which amounts to using the Bayes factor).

  2. This point misses the whole purpose of using marginal likelihoods, namely that they account for the dimensionality of the parameter by providing a natural Ockham’s razor that penalises the larger model without the need to specify a penalty term.

    But if the priors are identical for “large” and “small” models, and the goodness-of-fit statistic is calculated identically for “large” and “small” models, there is no “natural Ockham’s razor” at all.

    • John: I am afraid you missed my most important point: it is impossible to have the same prior for small and large models as they cannot share the same space. This is a mathematical impossibility result, not a whim from the Bayesian statistician. I do not want to repeat earlier arguments, but simply consider the BIC criterion. It naturally incorporates a penalty for the dimension of the parameter space and it is a first order approximation to the Bayes factor.

  3. A cladistic nest of vipers Says:

    If you want a reason for his picking on ABC, this is a fight that has spread from phylogeography.

    Mark Beaumont (who did a lot of work on ABC) took down Templeton’s house of cards, “nested cladistic analysis”:

    “On the validity of nested clade phylogeographical analysis”, by Panchal and Beaumont. And this is an attempted fightback using his in-house publisher.

    Additional evidence – look at the papers that cite

    “Why does a method that fails continue to be used?” by Lacey Knowles, Evolution 62: 2713-2717.

    “In defence of model-based inference in phylogeography”, Molecular Ecology 19: 436-446.

    • The…vipers: Thank you: as a co-author of the last reference, I am indeed aware of this connection! But the paper about incoherent inference has a much wider scope and aims at Bayesian inference as a whole.

  4. I fear the debate will never end, as Templeton has an alternative theory for phylogeography, called nested clade analysis, which loosely relates to likelihood ratio tests (hence his defence of the “coherence” of the likelihood ratio test). Bayesian methods are therefore competitors, hence cannot give a correct answer.

    I think the question is more: why do they seem to give really different answers in some cases? One possibility is that one of the methods is incorrectly applied by its practitioners. I read Templeton’s paper not as a general critique of ABC but as pointing out specific cases where it has been misapplied.

    Perhaps I misunderstand your post, but it seems to me that “different models induce different parameter spaces” detracts from Templeton’s argument only if the models have not been constrained by their authors to the same parameters.

    • John: The “Incoherent inference” paper has a much wider scope than criticising ABC. It is a fully grown criticism of Bayesian inference, which mostly shows a lack of understanding of its mathematical basis. (A constrained model does not share the same parameter space as its unconstrained counterpart.) I repeat the point stressed in our recent Molecular Ecology paper: ABC has nothing to do with those criticisms, since it is simply a crude Monte Carlo method used to implement Bayesian inference in complicated settings.

  5. […] don’t know the statistical details well enough to comment with much knowledge, but I see that a statistician has responded to Templeton already, so I would recommend checking that out. I immediately went looking for responses because […]

  6. […] likelihood of the model(s). Although this seems to be an important issue, as illustrated by the controversy with Templeton, the opposition between likelihood inference and “cladistic” parsimony […]

  7. […] yesterday upon a section where Sober reproduces the error central to Templeton’s thesis and discussed on the Og a few days ago. He indeed states that “the simpler model cannot have the higher […]

  8. sonofoson Says:

    I suspect that he picks on ABC because the Fagundes et al. (2007) ABC analysis overturned his nested-clade, multi-regional inference of human evolution.

  9. This is crazy! Reading Templeton’s work, I just get the feeling that he is hopelessly out of his depth. There is nothing wrong with that by itself, but he mixes his confusion with a zealous overconfidence that could well confuse others.

    I assume that this paper was published only due to peculiarities of the PNAS review system under which Academy members (such as Templeton) can submit papers through an “open review” where authors choose their own referees and openly communicate with them.

  10. Will you reply to PNAS with all of this information? It’s shocking that this Templeton figure keeps being allowed to peddle his nonsense. And why has he decided to pick on ABC particularly? Have you been in touch with him at any point?

    • Mr Bayes: Yes, indeed, I am preparing a letter to PNAS to point out the sheer absurdities in this piece of “work”! The publication path at PNAS is somewhat obscure, with academicians allowed to submit their work in a way that seems to bypass a true peer evaluation, at least in this case, where a statistician of any creed would have spotted the inconsistencies… The “funniest” part is that every paper is addressed at ABC when the criticisms bear on regular Bayesian methodology! I fear the debate will never end, as Templeton has an alternative theory for phylogeography, called nested clade analysis, which loosely relates to likelihood ratio tests (hence his defence of the “coherence” of the likelihood ratio test). Bayesian methods are therefore competitors, hence cannot give a correct answer.
