**C**oming to Section III in Chapter Seven of *Error and Inference*, written by Deborah Mayo, I discovered that she considers that the likelihood principle does not hold (at least as a logical consequence of the combination of the sufficiency and conditionality principles), and thus that Allan Birnbaum was wrong… as well as the dozens of people who worked on the likelihood principle after him! Including Jim Berger and Robert Wolpert

*[whose book sells for $214 on amazon! I hope the authors get a hefty chunk of that rip-off!!! Esp. when it is available for free on Project Euclid…]*

I had not heard of (nor seen) this argument previously, even though it has apparently created a bit of a stir around the likelihood principle page on Wikipedia. The result does not seem to be published anywhere but in the book, and I doubt it would get past the review process of a statistics journal.

*[Judging from a serious conversation in Zürich this morning, I may however be wrong!]*

**T**he core of Birnbaum’s proof is relatively simple: given two experiments *E¹* and *E²* about the same parameter *θ*, with different sampling distributions *f¹* and *f²*, such that there exists a pair of outcomes *(y¹,y²)* from those experiments with proportional likelihoods, i.e., as functions of *θ*,

f¹(y¹|θ) = c f²(y²|θ),
one considers the mixture experiment *E⁰* where *E¹* and *E²* are each chosen with probability ½. Then it is possible to build a sufficient statistic *T* that is equal to the data *(j,x)*, except when *j=2* and *x=y²*, in which case *T(j,x)=(1,y¹)*. This statistic is sufficient since the distribution of *(j,x)* given *T(j,x)* is either a Dirac mass or a distribution on *{(1,y¹),(2,y²)}* that depends only on *c*, and hence not on the parameter *θ*. According to the weak conditionality principle, *statistical evidence*, meaning the whole range of inferences possible on *θ* and denoted by *Ev(E,z)*, should satisfy

Ev(E⁰,(j,x)) = Ev(Eʲ,x).
Because the sufficiency principle states that

Ev(E⁰,(j,x)) = Ev(E⁰,T(j,x)),

so that Ev(E⁰,(1,y¹)) = Ev(E⁰,(2,y²)),
this leads to the likelihood principle

Ev(E¹,y¹) = Ev(E²,y²).
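The sufficiency step in the construction above can be checked numerically. Here is a minimal sketch using the classic binomial / negative-binomial pair (my choice of illustration, not one made in the post): *E¹* observes *X* ~ Binomial(12, *θ*) with outcome *y¹ = 3*, while *E²* observes the number of Bernoulli(*θ*) trials needed for 3 successes, with outcome *y² = 12*. Both likelihoods equal a constant times *θ³(1-θ)⁹*, so they are proportional with *c* = 220/55 = 4, and the conditional probability of *(1,y¹)* given *T* comes out as *c/(c+1)*, free of *θ*.

```python
from math import comb

def lik1(theta):
    """f¹(y¹|θ): binomial probability of 3 successes in 12 trials."""
    return comb(12, 3) * theta**3 * (1 - theta)**9

def lik2(theta):
    """f²(y²|θ): negative-binomial probability of the 3rd success on trial 12."""
    return comb(11, 2) * theta**3 * (1 - theta)**9

def prob_j1_given_T(theta):
    """P(j=1 | T=(1,y¹)) in the ½/½ mixture E⁰: equals c/(c+1)."""
    return 0.5 * lik1(theta) / (0.5 * lik1(theta) + 0.5 * lik2(theta))

# The conditional distribution given T does not move with θ,
# which is exactly why T is sufficient for the mixture experiment.
for theta in (0.1, 0.25, 0.5, 0.9):
    print(theta, prob_j1_given_T(theta))   # 0.8 = 4/5 for every θ
```

Whatever the value of *θ*, the conditional distribution given *T* puts mass 4/5 on *(1,y¹)* and 1/5 on *(2,y²)*, depending only on *c*.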
(See, e.g., **The Bayesian Choice**, pp. 18-29.) Now, Mayo argues this is wrong because

“The inference from the outcome (Eʲ,yʲ) computed using the sampling distribution of [the mixed experiment] E⁰ is appropriately identified with an inference from outcome yʲ based on the sampling distribution of Eʲ, which is clearly false.” (p.310)

**T**his sounds to me like a direct rejection of the conditionality principle, so I do not understand the point. (A formal rendering in Section 5, using the logic formalism of A’s and Not-A’s, reinforces my feeling that the conditionality principle is the one criticised and misunderstood.) If Mayo’s frequentist stance leads her to take the sampling distribution into account at all times, this is fine within her framework. But I do not see how this argument contributes to invalidating Birnbaum’s proof. The following and last sentence of the argument may shed some light on why Mayo considers it does:

“The sampling distribution to arrive at Ev(E⁰,(j,yʲ)) would be the convex combination averaged over the two ways that yʲ could have occurred. This differs from the sampling distributions of both Ev(E¹,y¹) and Ev(E²,y²).” (p.310)
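The convex-combination point in this quote is easy to reproduce numerically. Here is a minimal sketch using Cox’s well-known two-instrument normal mixture as a stand-in (the book’s own counterexample may differ): *X* ~ N(*θ*, σⱼ²), where instrument *j=1* (σ=1) or *j=2* (σ=10) is picked with probability ½, and one-sided *p*-values for H₀: *θ=0* are computed at an observed *x*.

```python
from math import erf, sqrt

def p_value(x, sd):
    """One-sided p-value P(X >= x) under N(0, sd²)."""
    return 0.5 * (1.0 - erf(x / (sd * sqrt(2.0))))

x = 2.0
# Conditioning on the instrument actually used (j=1, σ=1):
p_conditional = p_value(x, 1.0)
# Averaging over the mixture E⁰, i.e. the convex combination of
# the two sampling distributions:
p_mixture = 0.5 * p_value(x, 1.0) + 0.5 * p_value(x, 10.0)

print(p_conditional, p_mixture)   # ≈ 0.023 vs ≈ 0.22
```

The two assessments indeed differ, as the quote says; the question raised in the next paragraph is whether such a difference in sampling distributions says anything about the inference itself.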

**I**ndeed, and rather obviously, the sampling distribution of the evidence *Ev(E\*,z\*)* will differ depending on the experiment. But this is not what the likelihood principle states, which is that the inference itself should be the same for *y¹* and *y²*, not the distribution of this inference. This confusion between an inference and its assessment is reproduced in the “Explicit Counterexample” section, where *p*-values are computed and found to differ for various conditional versions of a mixed experiment. Again, not a reason for invalidating the likelihood principle. So, in the end, I remain fully unconvinced by this demonstration that Birnbaum was wrong. (While agreeing, as a bystander, with the fact that frequentist inference can be built conditional on ancillary statistics.)