We define “objective” [testing] priors as the result of an information minimisation goal. The principle was laid out by Pérez & Berger (2002) and we follow it in this less manageable setting of binomial regression. I kind of like it because it allows for the ‘improper prior sin’ in testing, offering a way out, or rather a way in, for improper priors. The implementation issue is not part of this question.

Now, I agree with you [?] that we could have conducted experiments where we knew the “truth” and could then estimate the error rate of a model selection principle based on integral priors. A nice proposal for a summer project.
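The shape of such an experiment can be sketched quite simply. A minimal sketch, using a BIC comparison on binomial data as a stand-in for the integral-prior Bayes factor (which is not implemented here); `select_model` and `error_rate` are hypothetical names for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def select_model(y, n):
    """Toy stand-in for a model selection rule: compare H0: p = 0.5
    against H1: p free, for binomial data y ~ Bin(n, p), via BIC.
    The integral-prior Bayes factor would replace this in the real study."""
    p_hat = max(min(y / n, 1 - 1e-9), 1e-9)
    ll0 = y * np.log(0.5) + (n - y) * np.log(0.5)
    ll1 = y * np.log(p_hat) + (n - y) * np.log(1 - p_hat)
    bic0 = -2 * ll0                 # no free parameters under H0
    bic1 = -2 * ll1 + np.log(n)     # one free parameter under H1
    return 0 if bic0 <= bic1 else 1

def error_rate(true_p, n=100, reps=5000):
    """Frequency of selecting the wrong hypothesis when the truth is known."""
    truth = 0 if true_p == 0.5 else 1
    ys = rng.binomial(n, true_p, size=reps)
    wrong = sum(select_model(y, n) != truth for y in ys)
    return wrong / reps

print(error_rate(0.5))   # error rate when H0 is true
print(error_rate(0.7))   # error rate under a clear alternative
```

Replacing `select_model` with the actual integral-prior procedure is the (non-trivial) part that makes it a summer project.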

How far from a “bad” prior (aka a prior that gives bad results) are the integral priors?

And, to answer my own question, I think that they’re quite far away: you’re solving a well-posed problem (an integral equation of the second kind) to get the prior, so the solution depends continuously on the equation, and a “nearby” integral equation should yield a “nearby” prior.

It’s probably just an awkward way of framing a “prior sensitivity” question. The prior that you’re actually using is a perturbation of a theoretically motivated prior, so it’s worth checking how good/bad that is.

A different thing: drawing linearly independent columns isn’t, to my knowledge, trivial, especially in the big-data context. Isn’t that part of why g-priors exist? (The X^T X bit deals with the approximate collinearity.) Is there a similar trick here? I imagine drawing independent but almost collinear columns would be a bad thing…

Then I’d probably stick the resulting approximate priors into INLA (but that’s personal preference :p)

Did you look at how the MC error upsets the balance? (i.e. are the priors still neutral?) Because 10k chains will (if you’re lucky) give you one significant figure (maaaybe two).
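The significant-figures arithmetic is just the √N scaling: the Monte Carlo standard error goes as sd/√N, so with N = 10^4 a unit-scale quantity carries roughly 1% relative error. A minimal sketch on a toy integrand (E[exp(Z)] for standard normal Z, truth e^{1/2}), not the actual integral-prior computation:

```python
import numpy as np

rng = np.random.default_rng(3)

def mc_estimates(N, reps=200):
    """Replicate an MC estimate of E[exp(Z)], Z ~ N(0,1); truth is exp(0.5)."""
    draws = rng.standard_normal((reps, N))
    return np.exp(draws).mean(axis=1)

est = mc_estimates(10_000)
truth = np.exp(0.5)
rel_err = np.abs(est - truth) / truth
print(np.median(rel_err))   # around 1%: one sure digit, maybe two
```

Heavier-tailed integrands (Bayes factors are notorious for this) have a larger sd and do worse, which is why the “neutrality” of the priors deserves a direct check rather than trust in 10k draws.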

(NB – I’ve only read the start and the end… Apologies if this was addressed in the middle (pp 5-10)… I’m getting to it presently)

A more general question: is this the sort of thing scientists want? As opposed to designing objective priors on the whole of 2^X (X = set of covariates) and then leaping around the model space with gay abandon? Or is it more common/practical/useful to test given groupwise in/out hypotheses?
