However, when discussing connections with similar approaches, our results are often ignored in subsequent papers that nevertheless acknowledge other, perhaps less closely related, frameworks (it is hard to work south of the equator).

Once again in this blog, I would like to ask that current efforts acknowledge the previous work by Marcelo Lauretto on testing separate hypotheses via mixture models:

M. Lauretto, S. R. Faria, C. A. B. Pereira, J. M. Stern (2007). The Problem of Separate Hypotheses via Mixtures Models. AIP Conference Proceedings, 954, 268-275.

https://www.ime.usp.br/~jstern/papers/papersJS/Lauretto07.pdf

This approach is an evolution of Marcelo Lauretto's earlier work on the FBST for mixture model selection:

M. S. Lauretto, J. M. Stern (2005). FBST for Mixture Model Selection. AIP Conference Proceedings, 803, 121-128.

https://www.ime.usp.br/~jstern/papers/papersJS/Lauretto05.pdf

Instead of Bayes factors, these works used the FBST, the Full Bayesian Significance Test, an alternative that is much easier to implement and also has far better theoretical properties than Bayes factors; see for example:

J. M. Stern, C. A. B. Pereira (2014). Bayesian epistemic values: Focus on surprise, measure probability! Logic Journal of the IGPL, 22, 2, 236-254.

https://www.ime.usp.br/~jstern/papers/papersJS/IGPL13.pdf

W. Borges, J. M. Stern (2007). The Rules of Logic Composition for the Bayesian Epistemic e-Values. Logic Journal of the IGPL, 15, 5-6, 401-420.

https://www.ime.usp.br/~jstern/papers/papersJS/igpl07.pdf

C. A. B. Pereira, J. M. Stern, S. Wechsler (2008). Can a Significance Test be Genuinely Bayesian? Bayesian Analysis, 3, 79-100.

https://www.ime.usp.br/~jstern/papers/papersJS/jsbayan1.pdf
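For concreteness, here is a minimal sketch of the FBST e-value for a sharp null, in a toy conjugate normal model where the posterior is available in closed form. This is my own illustration, not code from the papers above; the model, prior, and Monte Carlo approach are all illustrative assumptions.

```python
import math
import random

random.seed(0)

# Toy conjugate model (illustrative): x_i ~ N(theta, 1), prior theta ~ N(0, 100).
x = [random.gauss(0.3, 1.0) for _ in range(50)]
n, prior_var, like_var = len(x), 100.0, 1.0
post_var = 1.0 / (1.0 / prior_var + n / like_var)
post_mean = post_var * sum(x) / like_var

def post_pdf(theta):
    # Closed-form posterior density N(post_mean, post_var).
    return math.exp(-(theta - post_mean) ** 2 / (2 * post_var)) / math.sqrt(2 * math.pi * post_var)

theta0 = 0.0  # sharp null H0: theta = theta0

# Tangential set T = {theta : p(theta | x) > p(theta0 | x)};
# estimate its posterior mass by Monte Carlo over posterior draws.
draws = [random.gauss(post_mean, math.sqrt(post_var)) for _ in range(100_000)]
ev_bar = sum(post_pdf(t) > post_pdf(theta0) for t in draws) / len(draws)

ev = 1.0 - ev_bar  # FBST e-value: evidence in favour of H0
print(round(ev, 3))
```

Note that no alternative model, no Bayes factor, and no prior mass on the null are needed: everything is computed from the single posterior, which is what makes the FBST so easy to bolt onto existing MCMC or closed-form output.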

Surely you marginalise in the usual way: just ignore the nuisance parameters. If you have two parameters and only the first is of interest, then the marginal for the parameter of interest is the histogram of the first component of the Markov chain, and the corresponding HPD region comes from that.
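As a concrete sketch of that postprocessing (with a toy stand-in for a real chain, and assuming a unimodal marginal so the HPD region is a single interval):

```python
import math
import random

random.seed(1)

# Stand-in for MCMC output over (theta, nuisance): independent draws here,
# purely to illustrate the postprocessing step.
chain = [(random.gauss(0.0, 1.0), random.gauss(2.0, 1.2)) for _ in range(20_000)]

# Marginalising out the nuisance parameter = dropping its column.
theta = sorted(t for t, _ in chain)

def hpd_interval(sorted_samples, mass=0.95):
    """Shortest interval containing `mass` of the (unimodal) marginal samples."""
    n = len(sorted_samples)
    k = math.ceil(mass * n)  # number of samples inside the interval
    best = min(range(n - k + 1),
               key=lambda i: sorted_samples[i + k - 1] - sorted_samples[i])
    return sorted_samples[best], sorted_samples[best + k - 1]

lo, hi = hpd_interval(theta)
print(round(lo, 2), round(hi, 2))
```

For a marginal close to N(0, 1), the 95% HPD interval comes out near (-1.96, 1.96), matching the analytic answer.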

As for the global perspective you propose, there is so little flavour in the output that one hardly remembers the Bayesian cook!

I don’t understand your point about computing the marginal HPD in the presence of “nuisance parameters” – isn’t this essentially trivial postprocessing for MCMC? Or am I missing something?

Similarly, if a Bayesian model is calibrated (via the prior choice or through matching priors) so that, under data generated from H0, the 95% credible interval contains the null value 95% of the time, then surely the resulting hypothesis test is valid. It may not be powerful, but it would be a valid Neyman-Pearson test.

This isn’t “Bayesian hypothesis testing” so much as “N-P hypothesis testing from Bayesian output”, but I’m not sure it’s less sensible than Bayesian hypothesis testing (at least BHT with point nulls, or 0-1 loss functions). It’s also infinitely more convenient computationally, and doesn’t require an alternative model but instead only requires a (sensible) null model for calibration. Alternative models only need to be posited when considering the sensitivity of the test (or the type-2 error rate, or the power, or whatever you’re going to call it).
