P(H|D,K) = P(D|H,K) P(H|K) / P(D|K)

where D = data, H = parameters, K = all other background knowledge

If the likelihood principle holds in any particular instance, it will simply fall out of this equation. If it doesn’t hold in a given example, that will fall out of the equation as well. So I don’t see why a Bayesian need pay any attention to Birnbaum’s proof or to any version of the likelihood principle.
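As a minimal sketch of the point above (my own illustration, not from the thread): under Bayes’ rule the posterior depends on the data only through the likelihood, so two designs with proportional likelihoods, such as the textbook binomial vs. negative-binomial pair, give identical posteriors. The specific numbers (3 successes in 12 trials, flat prior) are assumptions for the demo.

```python
import numpy as np

# Two experiments yielding 3 successes and 9 failures:
#   A: Binomial, n = 12 trials fixed in advance
#   B: Negative binomial, sample until the 3rd success
# Their likelihoods differ only by a constant in theta
# (C(12,3) vs C(11,2)), which cancels on normalization.

theta = np.linspace(0.001, 0.999, 999)   # grid over the parameter
prior = np.ones_like(theta)              # flat prior for simplicity

lik_binom = theta**3 * (1 - theta)**9    # binomial likelihood kernel
lik_negbin = theta**3 * (1 - theta)**9   # negative-binomial likelihood kernel

post_binom = prior * lik_binom
post_binom /= post_binom.sum()
post_negbin = prior * lik_negbin
post_negbin /= post_negbin.sum()

print(np.allclose(post_binom, post_negbin))  # identical posteriors
```

So whether the likelihood principle “holds” in an example is just a question of whether the likelihood kernels are proportional; the posterior calculation itself never asks.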

It seems to be an issue that frequentists get hot and bothered about, but one that Bayesians can safely ignore, in principle and in practice.

Yes, that is the denial of the likelihood principle, and thus it’s important to see why Birnbaum’s attempt fails (as he realized).

http://www.phil.vt.edu/dmayo/conference_2010/9-18-12MayoBirnbaum.pdf

The premises, taken together, require that the evidential import of a result known to have arisen from experiment E’ both should and should not be influenced by an unperformed experiment E”. If you’re using sampling distributions in Bayesian inference, then you should be glad the Birnbaum argument is unsound. My criticism, I think, can be extended to apply to the Bayesian formulation, although I have not done so.

You could do that, but, even so, the sampling distribution can depend on information that is not included in the likelihood function.

A,

The predictive check could be restricted to a replica of the sufficient statistic, no?!

This seems related to our point in chapter 6 of BDA that you need to know the sampling distribution (not just the likelihood) to do a posterior predictive check. For example, the stopping rule is relevant even if it only depends on observed data.