Archive for ancillary statistics

That the likelihood principle does not hold…

Posted in Statistics, University life with tags , , , , , , , , , , on October 6, 2011 by xi'an

Coming to Section III of Chapter Seven of Error and Inference, written by Deborah Mayo, I discovered that she considers that the likelihood principle does not hold (at least as a logical consequence of the combination of the sufficiency and conditionality principles), hence that Allan Birnbaum was wrong… As were the dozens of people who worked on the likelihood principle after him! Including Jim Berger and Robert Wolpert [whose book sells for $214 on amazon! I hope the authors get a hefty chunk of that rip-off!!! Esp. when it is available for free on Project Euclid…] I had not heard of (nor seen) this argument previously, even though it has apparently created a bit of a stir around the likelihood principle page on Wikipedia. The result does not seem to be published anywhere but in the book, and I doubt it would get past a review process in a statistics journal. [Judging from a serious conversation in Zürich this morning, I may however be wrong!]

The core of Birnbaum’s proof is relatively simple: given two experiments E¹ and E² about the same parameter θ, with different sampling distributions f¹ and f², such that there exists a pair of outcomes (y¹,y²) from those experiments with proportional likelihoods, i.e., as a function of θ,

f^1(y^1|\theta) = c f^2(y^2|\theta),

one considers the mixture experiment E⁰ in which E¹ and E² are each chosen with probability ½. Then it is possible to build a sufficient statistic T that is equal to the data (j,x), except when j=2 and x=y², in which case T(j,x)=(1,y¹). This statistic is sufficient since the distribution of (j,x) given T(j,x) is either a Dirac mass or a distribution on {(1,y¹),(2,y²)} that depends only on c, and thus not on the parameter θ. According to the weak conditionality principle, statistical evidence, meaning the whole range of inferences possible on θ and denoted by Ev(E,z), should satisfy

Ev(E^0, (j,x)) = Ev(E^j,x)

Because the sufficiency principle states that

Ev(E^0, (j,x)) = Ev(E^0,T(j,x))

this leads to the likelihood principle

Ev(E^1, y^1) = Ev(E^0, (j, y^j)) = Ev(E^2, y^2)
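The sufficiency claim in the construction can be checked numerically on a toy pair of experiments (my own illustration, not taken from the book or the blog): binomial versus negative binomial sampling, whose likelihoods at matched outcomes are proportional with c = 4. Whatever θ, the conditional probability that the mixture data was (1,y¹) given T = (1,y¹) is the constant c/(c+1), so T carries no information loss about θ.

```python
from math import comb

# E1 observes y1 = 3 successes in Binomial(12, theta);
# E2 observes y2 = 12 trials in a negative binomial experiment
# stopped at the 3rd success. Likelihoods proportional with c = 4.
def f1(theta):  # Binomial(12, theta) likelihood at y1 = 3
    return comb(12, 3) * theta**3 * (1 - theta)**9

def f2(theta):  # negative binomial likelihood (3rd success on trial 12)
    return comb(11, 2) * theta**3 * (1 - theta)**9

c = comb(12, 3) / comb(11, 2)  # proportionality constant, equals 4

# In the mixture E0 (each experiment picked with probability 1/2),
# P{data = (1, y1) | T = (1, y1)} = (1/2)f1 / ((1/2)f1 + (1/2)f2)
# = c / (c + 1) = 0.8, for every value of theta:
for theta in (0.1, 0.3, 0.5, 0.7, 0.9):
    p = f1(theta) / (f1(theta) + f2(theta))
    assert abs(p - c / (c + 1)) < 1e-12  # 0.8, free of theta
```

The conditional distribution given T involving only c, and not θ, is exactly what makes T sufficient for the mixture experiment.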

(See, e.g., The Bayesian Choice, pp. 18-29.) Now, Mayo argues this is wrong because

“The inference from the outcome (Ej,yj) computed using the sampling distribution of [the mixed experiment] E⁰ is appropriately identified with an inference from outcome yj based on the sampling distribution of Ej, which is clearly false.” (p.310)

This sounds to me like a direct rejection of the conditionality principle, so I do not understand the point. (A formal rendering in Section 5, using the logic formalism of A’s and Not-A’s, reinforces my feeling that it is the conditionality principle that is criticised and misunderstood.) If Mayo’s frequentist stance leads her to take the sampling distribution into account at all times, this is fine within her framework, but I do not see how this argument contributes to invalidating Birnbaum’s proof. The following and last sentence of the argument may shed some light on why Mayo considers it does:

“The sampling distribution to arrive at Ev(E⁰,(j,yj)) would be the convex combination averaged over the two ways that yj could have occurred. This differs from the sampling distributions of both Ev(E1,y1) and Ev(E2,y2).” (p.310)

Indeed, and rather obviously, the sampling distribution of the evidence Ev(E*,z*) will differ depending on the experiment. But this is not what the likelihood principle states: it states that the inference itself should be the same for (E¹,y¹) and (E²,y²), not that the distribution of this inference should be. This confusion between the inference and its assessment is reproduced in the “Explicit Counterexample” section, where p-values are computed and found to differ for various conditional versions of a mixed experiment. Again, not a reason for invalidating the likelihood principle. So, in the end, I remain fully unconvinced by this demonstration that Birnbaum was wrong. (Even though, as a bystander, I agree that frequentist inference can be built conditional on ancillary statistics.)
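That p-values computed under different sampling distributions disagree despite proportional likelihoods is easy to reproduce on the classic binomial versus negative binomial stopping-rule pair (my own toy numbers, not the book’s counterexample):

```python
from math import comb

# Same realised data under two stopping rules: 3 successes out of 12
# Bernoulli(theta) trials, testing theta = 1/2 against theta < 1/2.
# E1: fixed n = 12 trials, Y = number of successes, observed y1 = 3.
p1 = sum(comb(12, k) for k in range(4)) / 2**12   # P(Y <= 3 | theta = 1/2)
# E2: sample until the 3rd success, N = number of trials, observed
# y2 = 12, i.e. at most 2 successes within the first 11 trials.
p2 = sum(comb(11, k) for k in range(3)) / 2**11   # P(N >= 12 | theta = 1/2)

print(round(p1, 4), round(p2, 4))  # 0.073 0.0327
```

The two likelihoods are proportional in θ, so the likelihood principle says the inference should coincide; the p-values, being tail assessments under the respective sampling distributions, do not, which is precisely the distinction between the inference and its frequentist assessment made above.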

workshop in Columbia [talk]

Posted in Statistics, Travel, University life with tags , , , , , , , , , on September 25, 2011 by xi'an

Here are the slides of my talk yesterday at the Computational Methods in Applied Sciences workshop in Columbia:

The last section of the talk covers our new results with Jean-Michel Marin, Natesh Pillai and Judith Rousseau on necessary and sufficient conditions for a summary statistic to be used in ABC model choice. (The paper is about to be completed.) This obviously comes as the continuation of our reflexions on ABC model choice started last January. The major message of the paper is that the statistics used for running model choice cannot have a mean value common to both models, which strongly implies using ancillary statistics with different means under each model. (I am afraid that, thanks to the mixture of no-jetlag fatigue, of slide inflation [95 slides vs. 40mn], and of asymptotics technicalities in the last part, the talk was far from comprehensible. I started on the wrong foot by not getting an XL [Xiao-Li’s] comment on the measure-theoretic problem with the limit in ε going to zero. A pity, given the great debate we had in Banff with Jean-Michel, David Balding, and Mark Beaumont, years ago, and our more recent paper about the arbitrariness of the density value in the Savage-Dickey paradox. I then compounded the confusion by stating that the empirical mean was sufficient in the Laplace case… which is not even an exponential family. I hope I will be more articulate next week in Zürich, where at least I will not speak past my bedtime!)
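The common-mean message can be illustrated with a toy rejection-ABC run (a sketch of my own, not the paper’s actual experiment): normal versus Laplace models with the same location θ, so the empirical mean, whose expectation is θ under both models, cannot discriminate between them.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                   # size of each simulated dataset
# "Observed" data: a Laplace sample with unit variance, location 0
x_obs = rng.laplace(0.0, 1 / np.sqrt(2), n)
s_obs = x_obs.mean()                      # summary statistic: empirical mean

# Rejection ABC over the two models, theta ~ N(0,1) prior on the location
N, eps = 20_000, 0.02
models = rng.integers(0, 2, N)            # 0 = normal, 1 = Laplace
thetas = rng.normal(0.0, 1.0, N)
s_sim = np.empty(N)
for i in range(N):
    if models[i] == 0:
        data = rng.normal(thetas[i], 1.0, n)
    else:
        data = rng.laplace(thetas[i], 1 / np.sqrt(2), n)
    s_sim[i] = data.mean()

keep = np.abs(s_sim - s_obs) < eps        # accept close simulations
prob_laplace = models[keep].mean()        # hovers around 1/2, whatever the data
```

Since the mean carries the same information about θ under either model, the accepted simulations split roughly evenly between the two models even though the data is genuinely Laplace; a statistic with different means under the two models would be needed for the ABC approximation of the posterior model probability to be consistent.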

15 all-timers [back]

Posted in Statistics with tags , , , , , on November 26, 2010 by xi'an

Following an earlier post and poll, six of my graduate students took the Reading Classics seminar this year (plus two who dropped out). They chose

  1. W.K. Hastings (1970) Monte Carlo sampling methods using Markov chains and their applications, Biometrika
  2. G. Casella & W. Strawderman (1981) Estimation of a bounded mean, Annals of Statistics
  3. A.P. Dawid, M. Stone & J. Zidek (1973) Marginalisation paradoxes in Bayesian and structural inference, J. Royal Statistical Society
  4. C. Stein (1981) Estimation of the mean of a multivariate normal distribution, Annals of Statistics
  5. D.V. Lindley & A.F.M. Smith (1972) Bayes estimates for the linear model, J. Royal Statistical Society
  6. A. Birnbaum (1962) On the foundations of statistical inference, J. American Statistical Assoc.

in this order, and mostly managed to grasp the quintessentials of the papers and to give decent (Beamer) presentations. The hardest one was the exposition of the likelihood principle, and the student who chose this paper struggled to go past a mere repetition of the proofs. I enjoyed it nonetheless, because the presentation raised questions about this principle…
