http://www.bayesianphilosophy.com/the-data-can-change-the-prior/

What’s happening here is that Bayes is being equated, for historical reasons, with one particular application and context of the sum/product rules (Bayes’ theorem with no doubt about the model). In reality, Bayes is anything derivable from the sum/product rules in any given context.

When you start exploring more general consequences of the sum/product rules in other contexts, something funny happens. Many things which don’t seem “Bayesian” according to that limited historical understanding fall right out of the equations.

1. Using the maximum likelihood estimator (MLE) is not a frequentist move per se, since the MLE is a special case of a MAP estimate (the one corresponding to a flat prior). Whether it has nice or poor frequentist properties depends on the setting and on the properties one is interested in.
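To make the MLE-as-MAP point concrete, here is a minimal numerical sketch of my own (not from the post): for a Gaussian mean with known variance, the MAP estimate under a flat (improper uniform) prior coincides with the MLE, because the log-posterior is the log-likelihood plus a constant. The data values are arbitrary toy numbers.

```python
# Toy observations; sigma is assumed known and equal to 1.
data = [1.2, 0.8, 1.5, 1.1, 0.9]

def log_likelihood(mu):
    # Gaussian log-likelihood, up to an additive constant.
    return sum(-0.5 * (x - mu) ** 2 for x in data)

def log_posterior_flat_prior(mu):
    # A flat prior only adds a constant (here 0) to the log-likelihood,
    # so it cannot move the argmax.
    return log_likelihood(mu) + 0.0

# Crude grid search over candidate values of mu.
grid = [i / 1000 for i in range(-2000, 4001)]
mle = max(grid, key=log_likelihood)
map_flat = max(grid, key=log_posterior_flat_prior)

print(mle, map_flat, sum(data) / len(data))  # all three agree: the sample mean
```

The same argmax falls out because maximizing likelihood and maximizing a posterior with a constant prior are the same optimization problem; frequentist properties enter only when one asks how this estimator behaves over repeated sampling.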

2. For me, empirical Bayes is not Bayes.

3. The (i) likelihood principle and the (ii) prohibition of data-dependent priors seem to be orthogonal principles. Once I observe my data, I can set my prior according to this data regardless of my choice of likelihood, hence independently of (i), while still violating (ii). For the second part, I remain confused by the statement: the posterior expected loss is not the same for the MLE and for an arbitrary shrinkage estimator. Unless the Bayes risks are infinite for both, they also differ.

4. One can create shrinkage by using a different prior or by using a different loss. (I have mostly forgotten what I meant there!)
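One reading of “shrinkage by using a different prior” (my gloss; the commenter says they have forgotten their original meaning) is the standard conjugate-Gaussian case: with a N(0, τ²) prior on a Gaussian mean (known σ = 1), the MAP estimate is the sample mean shrunk toward 0, with more shrinkage for smaller τ². A sketch, using the same toy data as above:

```python
# Toy observations; sigma assumed known and equal to 1.
data = [1.2, 0.8, 1.5, 1.1, 0.9]
n = len(data)
xbar = sum(data) / n  # the MLE: no shrinkage

def map_estimate(tau2):
    # Conjugate Gaussian-Gaussian result: posterior precision is
    # n/sigma^2 + 1/tau^2, so the posterior mean (= MAP) is the
    # sample mean multiplied by n / (n + 1/tau^2) with sigma = 1.
    return (n * xbar) / (n + 1.0 / tau2)

print(xbar)               # MLE
print(map_estimate(1.0))  # some shrinkage toward 0
print(map_estimate(0.1))  # tighter prior, more shrinkage
```

Shrinkage via a different loss (e.g. replacing squared error with a loss that penalizes large estimates) would move the Bayes estimator away from the posterior mean instead; this sketch only illustrates the prior route.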

Nothing of substance to add about the following points!
