The criticism was rather about Michael’s radical dismissal of the posterior probability (resulting from Bayes’ theorem) as a conditional probability, because of its dependence on the model, and about the potential meaninglessness of P(B|A) when A proves to be wrong before B occurs. While I cannot say I completely followed Michael’s arguments, they do sound too radical: for one thing, I do not see how an expectation could save the day when, mathematically, there is an equivalence between the definition of a measure and the corresponding definition of expectations of measurable functions. For another, the fact that some entities are model dependent is a non-issue: in model choice settings, the encompassing measure takes care of this. Bayes linear is a different matter: it can be seen as a robust Bayes approach of sorts, requiring a smaller input at the prior modelling level.
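To make the “smaller input” point concrete: in the Bayes linear framework, the adjusted expectation of Y given X only calls for first- and second-order prior specifications (means, variances, covariances), not a full prior distribution. A minimal numerical sketch, with entirely made-up numbers for illustration:

```python
import numpy as np

def adjusted_expectation(e_y, e_x, cov_yx, var_x, x_obs):
    """Bayes linear adjusted expectation:
    E_X(Y) = E(Y) + Cov(Y, X) Var(X)^{-1} (X - E(X))."""
    return e_y + cov_yx @ np.linalg.solve(var_x, x_obs - e_x)

# Hypothetical second-order prior specification for a scalar Y
# and a bivariate X (no distributional assumption needed):
e_y = np.array([0.0])                         # prior mean of Y
e_x = np.array([1.0, 2.0])                    # prior mean of X
cov_yx = np.array([[0.5, 0.2]])               # Cov(Y, X)
var_x = np.array([[1.0, 0.3],
                  [0.3, 2.0]])                # Var(X)

x_obs = np.array([1.5, 1.0])                  # observed value of X
print(adjusted_expectation(e_y, e_x, cov_yx, var_x, x_obs))
```

The update coincides with the usual conditional expectation in the Gaussian case, but is defined without committing to any particular joint law for (Y, X), which is the sense in which the prior modelling input is smaller.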
