Archive for coherence

marginal likelihood as exhaustive X validation

Posted in Statistics on October 9, 2020 by xi'an

In the June issue of Biometrika (for which I am deputy editor) Edwin Fong and Chris Holmes have a short paper (that I did not process!) on the validation of the marginal likelihood as the unique coherent updating rule. Marginal in the general sense of Bissiri et al. (2016). Coherent in the sense of being invariant to the order of input of exchangeable data, if in a somewhat self-defining version (Definition 1). As a consequence, marginal likelihood arises as the unique prequential scoring rule under coherent belief updating in the Bayesian framework. (It is unique given the prior or its generalisation, obviously.)
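To make the "prequential scoring rule" statement concrete, recall the standard predictive decomposition of the marginal likelihood (standard notation, not necessarily the paper's): the log marginal likelihood is the cumulative one-step-ahead log predictive score,

\log p(y_{1:n}) = \sum_{i=1}^{n} \log p(y_i \mid y_{1:i-1}), \qquad p(y_i \mid y_{1:i-1}) = \int f(y_i \mid \theta)\, \pi(\theta \mid y_{1:i-1})\, \text{d}\theta

and coherence in the above sense means that this sum does not depend on the order in which the exchangeable observations are processed.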

“…we see that 10% of terms contributing to the marginal likelihood come from out-of-sample predictions, using on average less than 5% of the available training data.”

The paper also contains the interesting remark that the log marginal likelihood decomposes as the sum, over all values of p, of average leave-p-out X-validation scores. Which shows that, provided the marginal can be approximated, the X-validation assessment is feasible. Which also puts a highly relevant (imho) spotlight on how this expresses the (deadly) impact of the prior selection on the numerical value of the marginal likelihood. Leaving out some of the least informative terms in the X-validation leads to exactly the log geometric intrinsic Bayes factor of Berger & Pericchi (1996). A most interesting connection with the Bayes factor community, but one that depends on the choice of the dismissed fraction of p's.
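Here is a minimal numerical check of this decomposition (my own sketch, with the leave-p-out score read as the average log posterior predictive of each held-out observation given the retained ones, which is how I understand Fong and Holmes's result), on a conjugate Normal-Normal model where everything is available in closed form:

import itertools
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(0)

# Conjugate model: y_i ~ N(theta, sigma2) with sigma2 known, theta ~ N(mu0, tau02).
mu0, tau02, sigma2 = 0.0, 2.0, 1.0
n = 6
y = rng.normal(1.0, np.sqrt(sigma2), size=n)

# Closed-form log marginal likelihood: y ~ N(mu0 * 1, sigma2 * I + tau02 * 11').
cov = sigma2 * np.eye(n) + tau02 * np.ones((n, n))
log_ml = multivariate_normal.logpdf(y, mean=np.full(n, mu0), cov=cov)

def log_predictive(y_new, y_train):
    # log posterior predictive density of y_new given the retained subsample
    m = len(y_train)
    prec = 1.0 / tau02 + m / sigma2                      # posterior precision of theta
    mu_post = (mu0 / tau02 + np.sum(y_train) / sigma2) / prec
    var_pred = sigma2 + 1.0 / prec                       # predictive variance
    return norm.logpdf(y_new, loc=mu_post, scale=np.sqrt(var_pred))

def s_cv(p):
    # leave-p-out score: average log predictive of each held-out point given the rest
    scores = []
    for held_out in itertools.combinations(range(n), p):
        retained = y[[i for i in range(n) if i not in held_out]]
        scores.append(np.mean([log_predictive(y[i], retained) for i in held_out]))
    return np.mean(scores)

print(log_ml, sum(s_cv(p) for p in range(1, n + 1)))     # the two values coincide

The agreement (up to floating-point error) also makes obvious where the prior sneaks in: the terms with large p are dominated by predictions made from very little retained data, hence essentially by the prior.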

on Dutch book arguments

Posted in Books, Kids, pictures, Statistics, Travel, University life on May 1, 2017 by xi'an

“Reality is not always probable, or likely.”― Jorge Luis Borges

As I am supposed to discuss Teddy Seidenfeld‘s talk at the Bayes, Fiducial and Frequentist conference in Harvard today [the snow happened last time!], I started last week [while driving to Wales] reading some related papers of his. Which is great as I had never managed to get through the Dutch book arguments, including those in Jim’s book.

The paper by Mark Schervish, Teddy Seidenfeld, and Jay Kadane defines coherence as the inability to bet against the predictive statements based on the procedure. A definition that sounds like a self-fulfilling prophecy to me, as it involves a probability measure over the parameter space. Furthermore, the notion of turning inference, which aims at scientific validation, into a leisure, no-added-value, and somewhat ethically dodgy activity like gambling does not agree with my notion of validation for a theory. That is, not as a compelling reason for adopting a Bayesian approach. Not that I have suddenly switched to the other [darker] side, but I do not feel those arguments help in any way, because of this dodgy image associated with gambling. (Pardon my French, but each time I read about escrows, I think of escrocs, or crooks, which reinforces this image! Actually, this name derives from the Old French escroue, but the modern meaning of écroué is sent to jail, which brings us back to the same feeling…)

Furthermore, it sounds like both a weak notion, since it implies an almost sure loss for the bookmaker and since coherency holds for any prior distribution, including Dirac masses!, and a frequentist one, in that it looks at all possible values of the parameter (in a statistical framework). It also turns errors into monetary losses, taking them at face value. Which also sounds very formal to me.

But the most fundamental problem I have with this approach is that, from a Bayesian perspective, it does not bring any evaluation or ranking of priors, and in particular does not help in selecting or eliminating some. By behaving like a minimax principle, it does not condition on the data and hence does not evaluate the predictive properties of the model in terms of the data, e.g. by comparing pseudo-data with real data.

While I see no reason to argue in favour of p-values or minimax decision rules, I am at a loss in understanding the examples in How to not gamble if you must. In the first case, i.e., when dismissing the α-level most powerful test in the simple vs. simple hypothesis testing case, the argument (in Example 4) starts from the classical (Neyman-Pearsonist) statistician favouring the 0.05-level test over others. Which sounds absurd, as this level corresponds to a given loss function, which cannot be compared with another loss function. Even though the authors chose to rephrase the dilemma in terms of a single 0-1 loss function and then turn the classical solution into the choice of an implicit variance-dependent prior. Plus force the poor Pearsonist to make a wager represented by the risk difference. The whole sequence of choices sounds both very convoluted and far away from the usual practice of a classical statistician…

Similarly, when attacking [in Section 5.2] the minimax estimator in the Bernoulli case (for the corresponding proper prior depending on the sample size n), this minimax estimator is admissible under quadratic loss and yet a Dutch book argument applies, which in my opinion definitely argues against the Dutch book reasoning. The way to produce such a domination result is to mix two Bernoulli estimation problems for two different sample sizes but the same parameter value, in which case there exist [other] choices of Beta priors and a convex combination of the risk functions that lead to this domination. But this example [Example 6] mostly exposes the artificial nature of the argument: when estimating the very same probability θ, what is the relevance of adding the risks or errors resulting from using two estimators for two different sample sizes? Of the very same probability θ. I insist on the very same because, when instead estimating two [independent] values of θ, there cannot be a Stein effect for the Bernoulli probability estimation problem, that is, any aggregation of admissible estimators remains admissible. (And yes, it definitely sounds like an exercise in frequentist decision theory!)
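For readers unfamiliar with this estimator, here is a minimal numerical sketch (my own check, not part of the paper under discussion) verifying that the Bayes estimator of a Bernoulli probability under the Beta(\sqrt{n}/2,\sqrt{n}/2) prior has constant quadratic risk, equal to 1/\{4(\sqrt{n}+1)^2\}, which is what makes it minimax (and admissible, being a proper-prior Bayes rule):

import numpy as np
from scipy.stats import binom

n = 10
a = np.sqrt(n) / 2.0              # Beta(a, a) prior, the classical minimax choice
thetas = np.linspace(0.01, 0.99, 9)

def risk(theta):
    # quadratic risk of the Bayes estimator (x + a) / (n + 2a) at a given theta
    x = np.arange(n + 1)
    estimate = (x + a) / (n + 2 * a)
    return np.sum(binom.pmf(x, n, theta) * (estimate - theta) ** 2)

print([round(risk(t), 6) for t in thetas])    # constant across theta
print(1.0 / (4 * (np.sqrt(n) + 1) ** 2))      # the minimax risk value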

Incoherent phylogeographic inference

Posted in Statistics, University life on June 22, 2010 by xi'an

“In statistics, coherent measures of fit of nested and overlapping composite hypotheses are technically those measures that are consistent with the constraints of formal logic. For example, the probability of the nested special case must be less than or equal to the probability of the general model within which the special case is nested. Any statistic that assigns greater probability to the special case is said to be incoherent. An example of incoherence is shown in human evolution, for which the approximate Bayesian computation (ABC) method assigned a probability to a model of human evolution that was a thousand-fold larger than a more general model within which the first model was fully nested. Possible causes of this incoherence are identified, and corrections and restrictions are suggested to make ABC and similar methods coherent.” Alan R. Templeton, PNAS, doi:10.1073/pnas.0910647107

Following the astounding publication of Templeton's pamphlet against Bayesian inference in PNAS last March, Jim Berger, Steve Fienberg, Adrian Raftery and myself polished, while in Benidorm, a reply focussing on the foundations of statistical testing and submitted a letter to the journal. Here are the (500 word) contents.

Templeton (2010, PNAS) makes a broad attack on the foundations of Bayesian statistical methods—rather than on the purely numerical technique called Approximate Bayesian Computation (ABC)—using incorrect arguments and selective references taken out of context. The most significant example is the argument “The probability of the nested special case must be less than or equal to the probability of the general model within which the special case is nested. Any statistic that assigns greater probability to the special case is incoherent. An example of incoherence is shown for the ABC (sic!) method.” This opposes both the basis and the practice of Bayesian testing.

The confusion seems to arise from misunderstanding the difference between scientific hypotheses and their mathematical representation. Consider vaccine testing, where in what follows we use VE to represent the vaccine efficacy measured on a scale from -\infty to 100. Exploratory vaccines may be efficacious or not. Thus a real biological model corresponds to the hypothesis “VE=0”, that the vaccine is not efficacious. The alternative biological possibility, that the vaccine has an effect, is often stated mathematically as the alternative model “any allowed value of VE is possible,” making it appear that it contains “VE=0.” But Bayesian analysis assigns each model prior distributions arising from the background science; a point mass (e.g. probability 1/2) is assigned to “VE=0” and the remaining probability mass (e.g. 1/2) is distributed continuously over values of VE in the alternative model. Elementary use of Bayes’ theorem (see, e.g., Berger, 1985, Statistical Decision Theory and Bayesian Analysis) then shows that the simpler model can indeed have a much higher posterior probability. Mathematically, this is explained by the probability distributions residing in different dimensional spaces, and is elementary probability theory for which use of Templeton’s “Venn diagram argument” is simply incorrect.

Templeton also argues that Bayes factors are mathematically incorrect, and he backs his claims with Lavine and Schervish’s (1999, American Statistician) notion of coherence. These authors do indeed criticize the use of Bayes factors as stand-alone criteria but point out that, when combined with prior probabilities of models (as illustrated in the vaccine example above), the result is fully coherent posterior probabilities. Further, Templeton directly attacks the ABC algorithm.  ABC is simply a numerical computational technique; attacking it as incoherent is similar to calling calculus incoherent if it is used to compute the wrong thing.

Finally, we note that Templeton has already published essentially identical if more guarded arguments in the ecology literature; we refer readers to a related rebuttal to Templeton’s (2008, Molecular Ecology) critique of the Bayesian approach by Beaumont et al. (2010, Molecular Ecology) that is broader in scope, since it also covers the phylogenetic aspects of nested clade versus a model-based approach.

The very first draft I had written on this paper, in conjunction with my post, has been posted on arXiv this morning.

Incoherent inference

Posted in Statistics, University life on March 28, 2010 by xi'an

“The probability of the nested special case must be less than or equal to the probability of the general model within which the special case is nested. Any statistic that assigns greater probability to the special case is incoherent. An example of incoherence is shown for the ABC method.” Alan Templeton, PNAS, 2010

Alan Templeton just published an article in PNAS about “coherent and incoherent inference” (with applications to phylogeography and human evolution). While his (misguided) arguments are mostly those found in an earlier paper of his and discussed in this post, as well as in the defence of model-based inference twenty-two of us published in Molecular Ecology a few months ago, the paper contains a more general critical perspective on Bayesian model comparison, aligning argument after argument about the incoherence of the Bayesian approach (and not of ABC, as presented there). The notion of coherence is borrowed from the 1999 (Bayesian) paper of Lavine and Schervish on Bayes factors, which shows that Bayes factors may be non-monotonic in the alternative hypothesis (but also that posterior probabilities aren’t!). Templeton’s first argument proceeds from the quote above, namely that larger models should have larger probabilities, or else this violates logic and coherence! The author presents the reader with a Venn diagram to explain why a larger set should have a larger measure. Obviously, he does not account for the fact that in model choice, different models induce different parameter spaces and that those spaces are endowed with orthogonal measures, especially if the spaces are of different dimensions. In the larger space, P(\theta_1=0)=0. (This point is not even touching the issue of defining “the” probability over the collection of models that Templeton seems to take for granted but that does not make sense outside a Bayesian framework.) Talking therefore of nested models having a smaller probability than the encompassing model or of “partially overlapping models” does not make sense from a measure-theoretic (hence mathematical) perspective. (The fifty-one occurrences of coherent/incoherent in the paper do not bring additional weight to the argument!)
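To see on a toy case why the nested “special case” can legitimately receive the larger posterior probability, here is a minimal sketch (my own numbers, not Templeton's nor the letter's): a point null θ=0 carrying prior mass 1/2 against a continuous alternative θ~N(0,τ²) carrying the remaining 1/2, for a single observation x~N(θ,1).

import numpy as np
from scipy.stats import norm

# Point-null testing: x ~ N(theta, 1); H0: theta = 0 has prior mass 1/2,
# H1: theta ~ N(0, tau2) spreads the remaining mass 1/2 over the real line.
tau2 = 4.0
x = 1.0   # an observation about one standard error away from the null

m0 = norm.pdf(x, loc=0.0, scale=1.0)                  # marginal likelihood under H0
m1 = norm.pdf(x, loc=0.0, scale=np.sqrt(1.0 + tau2))  # marginal likelihood under H1
post_h0 = m0 / (m0 + m1)                              # posterior probability of H0

print(post_h0)   # about 0.60: the nested point null is the more probable model

The point-mass prior and the continuous prior live on spaces of different dimensions, so there is no Venn-diagram contradiction in the nested model coming out ahead.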

“Approximate Bayesian computation (ABC) is presented as allowing statistical comparisons among models. ABC assigns posterior probabilities to a finite set of simulated a priori models.” Alan Templeton, PNAS, 2010

An issue common to all recent criticisms by Templeton is the misleading or misled confusion between the ABC method and the resulting Bayesian inference. For instance, Templeton distinguishes the incoherence in the ABC model choice procedure from the incoherence in the Bayes factor, when ABC is used as a computational device to approximate the Bayes factor. There is therefore no inferential aspect linked with ABC per se: it is simply a numerical tool to approximate Bayesian procedures and, with enough computer power, the approximation can get as precise as one wishes. In this paper, Templeton also reiterates the earlier criticism that marginal likelihoods are not comparable across models, because they “are not adjusted for the dimensionality of the data or the models” (sic!). This point misses the whole purpose of using marginal likelihoods, namely that they account for the dimensionality of the parameter by providing a natural Ockham’s razor penalising the larger model without requiring to specify a penalty term. (This is also why BIC, which provides an approximation to this penalty, is so successful, as is the alternative DIC.) The second criticism of ABC (i.e. of the Bayesian approach) is that model choice requires a collection of models and cannot decide outside this collection. This is indeed the purpose of Bayesian model choice, and studies like Berger and Sellke (1987, JASA) have shown the difficulty of reasoning within a single model. Furthermore, Templeton advocates the use of a likelihood ratio test, which necessarily implies using two models. Another Venn diagram also purports to explain why Bayes’ formula, when used for model choice, is “mathematically and logically incorrect”, because marginal likelihoods cannot be added up when models “overlap”: according to him, “there can be no universal denominator, because a simple sum always violates the constraints of logic when logically overlapping models are tested”. Once more, this simply shows a poor understanding of the probabilistic modelling involved in model choice.
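To make the “ABC is simply a numerical tool” point concrete, here is a minimal sketch of ABC model choice (my own toy example, not taken from any of the papers under discussion): a Poisson model versus a Geometric model for count data, with the sample sum as summary statistic (sufficient for both models), and posterior model probabilities approximated by the acceptance frequencies of the simulated model indices.

import numpy as np

rng = np.random.default_rng(1)

# Toy observed counts and their summary statistic (the sample sum).
y_obs = rng.poisson(3.0, size=20)
s_obs = y_obs.sum()
n = len(y_obs)

N = 100_000        # number of ABC simulations
eps = 2            # tolerance on the summary statistic

model_draws = rng.integers(0, 2, size=N)     # prior model probabilities 1/2, 1/2
accepted = []
for m in model_draws:
    if m == 0:                               # M0: Poisson(lambda), lambda ~ Exp(1)
        lam = rng.exponential(1.0)
        s = rng.poisson(lam, size=n).sum()
    else:                                    # M1: Geometric(p), p ~ Uniform(0,1)
        p = rng.uniform()
        s = rng.geometric(p, size=n).sum() - n   # shifted to counts starting at zero
    if abs(s - s_obs) <= eps:
        accepted.append(m)

accepted = np.array(accepted)
print("ABC estimate of P(M0 | data):", np.mean(accepted == 0))
print("acceptance rate:", len(accepted) / N)

The output is a plain Monte Carlo frequency; tightening the tolerance and increasing the number of simulations brings it as close as desired to the posterior model probability based on the summary statistic (here the exact one, the sum being sufficient), which is the only sense in which ABC enters the picture.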

“The central equation of ABC is inherently incoherent for three separate reasons, two of which are applicable in every case that deals with overlapping hypotheses.” Alan Templeton, PNAS, 2010

This argument relies on the representation of the “ABC equation” (sic!)

P(H_i|H,S^*) = \dfrac{G_i(||S_i-S^*||) \Pi_i}{\sum_{j=1}^n G_j(||S_j-S^*||) \Pi_j}

where S^* is the observed summary statistic, S_i is “the vector of expected (simulated) summary statistics under model i” and “G_i is a goodness-of-fit measure”. Templeton states that this “fundamental equation is mathematically incorrect in every instance (...) of overlap.” This representation of the ABC approximation is again misleading or misled, in that the ABC simulation algorithm produces an approximation to a posterior sample from \pi_i(\theta_i|S^*). The resulting approximation to the marginal likelihood under model M_i is a regular Monte Carlo step that replaces an integral with a weighted sum, not a “goodness-of-fit measure.” The subsequent argument of Templeton’s about the goodness-of-fit measures being “not adjusted for the dimensionality of the data” (re-sic!) and the resulting incoherence is therefore void of substance. The following argument repeats an earlier misunderstanding of the probabilistic model involved in Bayesian model choice: the reasoning that, if

\sum_j \Pi_j = 1

“the constraints of logic are violated [and] the prior probabilities used in the very first step of their Bayesian analysis are incoherent”, does not address the issue of defining measures over mutually exclusive parameter spaces.

“ABC is used for parameter estimation in addition to hypothesis testing and another source of incoherence is suggested from the internal discrepancy between the posterior probabilities generated by ABC and the parameter estimates found by ABC.” Alan Templeton, PNAS, 2010

The point corresponding to the above quote is that, while the posterior probability that \theta_1=0 (model M_1) is much higher than the posterior probability of the opposite (model M_2), the Bayes estimate of \theta_1 under model M_2 is “significantly different from zero“. Again, this reflects both a misunderstanding of the probability model, namely that \theta_1=0 is impossible [has measure zero] under model M_2, and a confusion between confidence intervals (that are model specific) and posterior probabilities (that work across models). The concluding message that “ABC is a deeply flawed Bayesian procedure in which ignorance overwhelms data to create massive incoherence” is thus unsubstantiated.

“Incoherent methods, such as ABC, Bayes factor, or any simulation approach that treats all hypotheses as mutually exclusive, should never be used with logically overlapping hypotheses.” Alan Templeton, PNAS, 2010

In conclusion, I am quite surprised at this controversial piece of work being published in PNAS, as the mathematical and statistical arguments of Professor Templeton should have been assessed by referees who are mathematicians and statisticians, in which case they would have spotted the obvious inconsistencies!
