Therefore, Birnbaum did not prove that CP+SP necessarily leads to the SLP. And Mayo did not provide a convincing disproof of Birnbaum’s “proof” in her first and second variations, even though her conclusion was correct.

To disprove Birnbaum’s “proof”, the following (very short) argument is sufficient:

“Without violating the CP and SP, we can report the results (make inferences) the way Birnbaum did (following the SLP) or report them as a frequentist would. Therefore, the SLP is not a necessary consequence of CP+SP.”

Dr Mayo, I have provided an email through your blog. Talk to you soon via email.

“Therefore, what Birnbaum actually did was use the SLP to prove the SLP – as simple as that!”

Yes, that is called a circular proof, and my arguments were intended to show that Birnbaum’s arguments fail because they are circular. Therefore, Chang was very mistaken to dismiss my argument in his book as he does! If he wants to write to me, we can converse further; I do not have his e-mail. We happen to be discussing the SLP on my blog at the moment.

The MLE being (almost always) biased is not an argument against or in favour of the likelihood principle. It is, if anything, a completely unrelated issue. (All Bayes estimates are biased as well.)
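To make the bias claim above concrete, here is a quick illustrative sketch (my own example, not from the discussion): the MLE of a normal variance divides the sum of squared deviations by n rather than n − 1, so its expectation is (n − 1)/n · σ² rather than σ².

```python
import random
import statistics

# Simulate the MLE of the variance of a N(0, 1) sample of size n.
# The MLE divides by n, not n - 1, so its expectation is
# (n - 1)/n * sigma^2 = 0.8 here, not 1.0.
random.seed(0)
n, reps = 5, 200_000
mles = []
for _ in range(reps):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    xbar = sum(xs) / n
    mles.append(sum((x - xbar) ** 2 for x in xs) / n)

print(round(statistics.fmean(mles), 2))  # close to 0.8
```

The point of the comment stands either way: this bias is a property of a particular estimator, not of the likelihood function itself.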

In Mayo’s paper (Mayo 2010, pp. 305–314), the symbol * is used to denote the result when the likelihoods of the two experiments are proportional.

My understanding is that Mayo’s disproof versions 1 and 2 are essentially based on the same contradiction: (the antecedent of) premise (1) is based on the unconditional formulation, while premise (2), or the antecedent of premise (2)’, is based on the conditional formulation.

In my view, premise (1) does not contradict premise (2) or (2)’ (Mayo 2010, p. 309), for the following reasons:

Premise (1) says that in the case of * results, the conditional and unconditional results should be the same. This does not contradict the statement “inference should be conditional on the experiment actually performed.” In other words, premises (1) and (2) can both be based on conditional formulations; premise (1) simply asserts that, in the case of * results, the conditional and unconditional results should coincide.

On the other hand, did Birnbaum prove anything meaningful? My answer is “No”. Even with the adoption of the conditionality principle (CP) and the sufficiency principle (SP), there is still plenty of room for one to choose a different inferential procedure. What Birnbaum did was to report the same result (TBB) whenever the two likelihoods were proportional (neither SP nor CP requires one to report the result this way!) – which is exactly what the SLP demands. Therefore, what Birnbaum actually did was use the SLP to prove the SLP – as simple as that!
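The * situation – proportional likelihoods arising from different experiments – can be made concrete with the textbook binomial/negative-binomial pair (my own illustration, not part of the exchange above): seeing 3 successes in 12 Bernoulli trials, whether the design fixed n = 12 or sampled until the 3rd success, yields likelihoods for θ that differ only by a constant factor.

```python
from math import comb

# Likelihoods for theta after 3 successes in 12 trials under two designs:
# binomial (n = 12 fixed) and negative binomial (stop at the 3rd success).
# Both are proportional to theta^3 * (1 - theta)^9; only a constant differs.
def binom_lik(theta):
    return comb(12, 3) * theta**3 * (1 - theta) ** 9

def negbinom_lik(theta):
    return comb(11, 2) * theta**3 * (1 - theta) ** 9

ratios = [binom_lik(t) / negbinom_lik(t) for t in (0.1, 0.25, 0.5, 0.9)]
print(ratios)  # the ratio is the same constant at every theta
```

The SLP says these two cases warrant identical inferences about θ; a frequentist analysis, by contrast, gives different p-values under the two designs, which is why nothing in SP or CP alone forces the SLP-style report.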

I will be happy to log onto Mayo’s blog for further discussion later.

Like the Lindley paradox, the strong likelihood principle (SLP) can be read in two ways: in a restricted sense and in an unrestricted sense.

(1) In the restricted sense, the parameter of interest in the two experiments concerns the same physical parameter.

(2) In the unrestricted sense, the parameter of interest concerns the statistical model parameter, which may concern the same or different physical parameters.

The restricted-sense SLP is the most intuitive and the most often discussed (see the familiar example on p. 115 of the book). However, on this reading, the SLP as a “general principle” cannot even apply to parameter inference in some simple cases, such as the mixed experiment mentioned on p. 136 of the book, because the mixed experiment concerns two different physical parameters.

If one takes the SLP in the unrestricted sense (considering it a general/fundamental principle; I would prefer to read it this way), we are confronted with the paradox raised on p. 115 (third paragraph) of my book.

Furthermore, whether one reads the SLP in the restricted or the unrestricted sense, I raised a controversy about its validity by pointing out the existence of a biased MLE (on p. 111 and in the previous discussion).

A less informal and clearer treatment may be found in a recent paper: http://www.phil.vt.edu/dmayo/conference_2010/9-18-12MayoBirnbaum.pdf.

I am inviting comments for posting (some time in January) as explained at this link: http://errorstatistics.com/2012/10/31/u-phil-blogging-the-likelihood-principle-new-summary/.

I invite Chang to contribute, perhaps with a newly clarified attempt to reject my disproof of Birnbaum.

Thanks, Manoel! The Lindley paradox indeed has two facets. One states that the p-value and the posterior probability can differ to the extreme (essentially 0 versus 1). The other, which I find more interesting, is that the Bayes factor may have no definite limit when the prior variance under the alternative goes to infinity …
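One common way to see that prior-variance behaviour numerically (a toy sketch of mine, assuming a point null μ = 0 against μ ~ N(0, τ²), with the sample mean fixed at the two-sided p ≈ 0.05 boundary): the Bayes factor in favour of the null keeps growing as τ² increases, so a “significant” result can end up supporting H0 arbitrarily strongly.

```python
import math

def norm_pdf(x, var):
    """Density of N(0, var) at x."""
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

n, sigma2 = 100, 1.0
se2 = sigma2 / n                 # variance of the sample mean
xbar = 1.96 * math.sqrt(se2)    # just "significant" at p ~ 0.05 under H0

# BF01 = m(xbar | H0) / m(xbar | H1); with mu ~ N(0, tau2) under H1,
# xbar is marginally N(0, tau2 + se2) under H1.
for tau2 in (1.0, 100.0, 10_000.0):
    bf01 = norm_pdf(xbar, se2) / norm_pdf(xbar, se2 + tau2)
    print(f"tau2={tau2:>8}: BF01={bf01:.1f}")  # grows without bound in tau2
```

This captures the divergence-as-τ²→∞ facet; the “no definite limit” phrasing above is more delicate, since the answer depends on how the diffuse prior is taken to its limit.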

I only took one class in Bayesian inference, and as far as I remember the issue was not discussed. So it’s hard for a reader like me, who has taken a few graduate-level classes in statistics but not many, to judge parts like that.

Maybe I can find something about it in your book? I’d have bought it earlier, but I still think the e-book is quite expensive (about US$45.00 on Amazon for an e-book, even though the marginal cost is almost zero!).
