You mention that your personal preference (or bias) is to favor non-parametrics. Can you provide a reference?

Thanks, Christos, another paper worth exploring! And fodder for my O-Bayes tutorial, for sure.

http://m.pnas.org/content/early/2013/10/28/1313476110.full.pdf?with-ds=yes

One comment concerns the word “equipoise”, used to justify the equal priors on the alternative hypotheses. If one were to adhere to standard clinical-trial vernacular (and the second paper does in fact appeal to a medical/clinical audience), equipoise refers to an average, group-level state of uncertainty. So, to be consistent in both terminology and mathematical representation, the alternative hypothesis should be framed as an ensemble of hypotheses which, on average, are in a state of uncertainty.

Dan:

As Christian and I have discussed, there can definitely be such a thing as a “true prior.” All you need is a model in which the parameter theta has been drawn from some distribution or process. The true prior is the distribution of those thetas. This sort of thing occurs, for example, in genetics (where the mixing genes provide a sampling process) or, more generally, any time you are applying a statistical method repeatedly (for example, spell checking): the true prior is the distribution of true values under all the cases where the method is being applied.
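The "true prior" described above can be illustrated with a toy simulation. The Beta(2, 5) population process below is purely hypothetical (not from the thread); the point is only that when a method is applied repeatedly, the empirical distribution of the true parameter values across cases recovers the generating distribution, i.e. the true prior:

```python
import random

random.seed(0)

# Hypothetical population process: each case's true theta is
# drawn from Beta(2, 5). The "true prior" is exactly that
# generating distribution.
thetas = [random.betavariate(2, 5) for _ in range(100_000)]

# Across many applications of the method, the empirical
# distribution of true values approximates the true prior;
# e.g. its mean approaches the Beta(2, 5) mean of 2/7.
mean_theta = sum(thetas) / len(thetas)
```

Any summary of the true prior (its mean, quantiles, density) can thus be estimated from the collection of cases, which is what makes the notion operational in settings like genetics or spell checking.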

I definitely hope it will remain as such!

Deborah: Thanks. My criticism of this paper is not about the point null (which may be relevant in exceptional cases like testing for ESP or for the boundary value of the Hubble constant), but about not caring about the alternative. If the null is rejected, a new prior must be constructed, which is an incoherent requirement…

I also don’t understand what equation (4) means [which is as far as I’ve gotten]. Clearly it is algebraically true, but what does that expectation actually mean? It’s the expectation of the log-Bayes factor w.r.t. the “true marginal likelihood”, but why would that be important?

Obviously, if you took the expectation with respect to any marginal likelihood, the prior corresponding to that marginal likelihood would maximise the expected weight of evidence, so what’s so special about this one? It seems like circular logic (the true prior is the best because it maximises the expected WOE defined w.r.t. the true prior).
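For what it’s worth, that circularity can be made explicit. Assuming (as the comment suggests) that the expectation in (4) is taken under the true marginal $m_*(x) = \int f(x\mid\theta)\,\pi_*(\theta)\,d\theta$, then for any candidate prior $\pi$ with marginal $m_\pi$, Gibbs’ inequality gives

\[
\mathbb{E}_{m_*}\!\left[\log \frac{m_*(X)}{m_\pi(X)}\right]
= \mathrm{KL}(m_* \,\|\, m_\pi) \;\ge\; 0,
\]

and hence

\[
\mathbb{E}_{m_*}\!\left[\log \frac{m_\pi(X)}{m_0(X)}\right]
\;\le\;
\mathbb{E}_{m_*}\!\left[\log \frac{m_*(X)}{m_0(X)}\right],
\]

with equality iff $m_\pi = m_*$ almost everywhere. So the true prior maximises the expected weight of evidence precisely because the expectation is taken under its own marginal, which is the circularity being objected to.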
