Whereas I’ve just never seen a convincing argument for them!

I totally agree that you need to go outside pure Bayes theory for this. But I think it will be a refinement of the Bayesian argument rather than a new universe.

Ah, improper priors, “unique objet de mon ressentiment” (“sole object of my resentment”) for some! But I am still waiting for The Big One, the B argument that would Bayxit them from the scene…

When using the data twice (agreeing that this notion can be utterly confusing!), there must be a garde-fou, a guardrail of sorts, against over-fitting. Which forces us to seek it outside the standard B theory…

I’m less okay with using the data to inform the posteriors than Dan, although I am more and more begrudgingly accepting its practical reality. I mean, who doesn’t standardize the covariates before building a regression and specifying priors? That’s using the data twice in a way that largely has more positive effects than negative ones. What we can agree on is that we definitely need to understand it better.
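The standardization practice mentioned above can be made concrete with a minimal numpy sketch (all names illustrative): the data's own mean and scale are used before any prior is written down, which is precisely the mild "using the data twice" in question.

```python
import numpy as np

rng = np.random.default_rng(1)

# Raw covariate on an arbitrary scale (e.g. an income-like variable).
x_raw = rng.normal(loc=50_000, scale=12_000, size=200)

# Standardizing uses the observed data itself before the prior is chosen...
x_std = (x_raw - x_raw.mean()) / x_raw.std()

# ...which is what lets a generic "weakly informative" prior such as
# Normal(0, 1) on the regression coefficient make sense regardless of
# the covariate's original units.
print(round(x_std.mean(), 6), round(x_std.std(), 6))
```

The point of the sketch is only that the prior's scale becomes sensible *because* the data were consulted first; the prior was not chosen independently of the data.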

Also, I wonder what could be learned about prior/likelihood tensions by comparing the prior predictive and posterior predictive — is there a reason why that would be more interpretable than looking at prior/posterior comparisons?
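One way to look at the prior-predictive-versus-posterior-predictive comparison raised above is in a toy conjugate normal model, where both predictives are available in closed form and can be simulated directly (a sketch under those assumptions, all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed data from a Normal(mu, 1) model; prior mu ~ Normal(0, tau^2).
y = rng.normal(loc=3.0, scale=1.0, size=50)
tau2, sigma2, n = 4.0, 1.0, len(y)

# Conjugate posterior for mu (prior mean is 0).
post_var = 1.0 / (1.0 / tau2 + n / sigma2)
post_mean = post_var * (y.sum() / sigma2)

S = 10_000
# Prior predictive: draw mu from the prior, then a replicate observation.
mu_prior = rng.normal(0.0, np.sqrt(tau2), S)
y_prior_pred = rng.normal(mu_prior, np.sqrt(sigma2))

# Posterior predictive: draw mu from the posterior, then a replicate.
mu_post = rng.normal(post_mean, np.sqrt(post_var), S)
y_post_pred = rng.normal(mu_post, np.sqrt(sigma2))

# A large shift between the two predictive distributions is one crude,
# observable-scale signal of prior/likelihood tension.
print(y_prior_pred.mean(), y_post_pred.mean())
```

A possible reason this is more interpretable than a direct prior/posterior comparison is that both predictives live on the scale of the observables, so the shift between them can be judged in the units of the data rather than in parameter space.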

Dan@OB17: I am sure as well people will notice!!! See you in Austin.

Some responses in no particular order:

– I will be at O’Bayes, but I don’t think I’m presenting anything (this would make a very bad poster and I wasn’t asked to talk), but I’m sure people will notice I’m there.

– How to handle improper priors based on our recommendations: don’t use improper priors. They are completely incompatible with the idea of generative modelling.

– I really like the idea of using the posterior predictive to check your prior. There is probably a tighter link that can be made with the prior-data conflict literature. But even if you don’t like it, one way to see it is that the posterior predictive is the best that you can do with all of the information (data + model) at hand. If in that case you still can’t predict new data (or pseudo-new data), then something is wrong with your model [NB: Not necessarily the prior!].

– I don’t completely agree with you about priors being unable to depend on the data without disastrous consequences. I think it’s all about working out how to mitigate the problems. It also ignores the fact that often the *likelihood* is constructed with knowledge of the data (which should be just as bad). I think there’s a need to formalise this type of process and work out how to do it safely, and specifically what you can and cannot do. For instance, you obviously can’t use BvM-type arguments if you have a data-dependent prior. The other paper we wrote (nominally about visualisation) has a bit of this in it, as does the associated blog post [and its many comments](http://andrewgelman.com/2017/09/07/touch-want-feel-data/).

– I like your idea of relativity, but it really just moves the problem to a different place. Your posterior is interpreted relative to your prior, which is interpreted relative to reality. As you say, this really gives a much better use of the prior predictive than just computing marginal likelihoods. In the other paper (https://arxiv.org/abs/1709.01449) we argue briefly that you can use the prior predictive to get a notion of how informative your generative model is. This means you can talk about generative models being “weakly informative”.
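One crude way to operationalise "how informative is the generative model" via the prior predictive, as suggested above, is to compare the spread of prior predictive draws under different priors (a sketch with a toy normal model; the function name and numbers are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)
S = 20_000

def prior_predictive_sd(prior_sd):
    """Spread of y ~ Normal(mu, 1) with mu ~ Normal(0, prior_sd^2)."""
    mu = rng.normal(0.0, prior_sd, S)
    return rng.normal(mu, 1.0).std()

# A vague prior spreads prior predictive draws over absurd scales, while
# a tighter one keeps them in a plausible data range -- so the prior
# predictive spread is one measure of how (weakly) informative the whole
# generative model is, not just the prior in isolation.
print(prior_predictive_sd(100.0), prior_predictive_sd(1.0))
```

The useful feature is that the comparison happens on the observable scale, so "weakly informative" becomes a statement about plausible data rather than about parameters.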
