I am a devotee of the Bayesian approach, as you know. And yet, I have some misgivings about its lack of guaranteed frequentist coverage. I imagine two artificial intelligence agents, one of which holds a set of posterior predictive probability distributions and the other of which holds a set of exact predictive confidence distributions. When the observables of interest become known, each agent calculates the probability integral transforms of those observables relative to its own distributions and makes a histogram of the resulting deviates. Each agent asserts that the histogram will be uniform, but this is guaranteed only for the exact confidence distributions.
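The thought experiment can be sketched in a few lines of simulation. This is my own illustration, not anything from the text: I assume a Normal model with known variance, where the exact predictive confidence distribution for a new observation is N(x̄, σ²(1 + 1/n)), and I give the Bayesian agent a (deliberately informative, possibly misspecified) N(0, τ²) prior on the mean. All the numerical settings (μ, σ, τ, n) are arbitrary choices for the demonstration.

```python
# A minimal sketch of the two-agent PIT experiment. Assumptions (mine, not
# the author's): Normal(mu, sigma^2) data with known sigma; the frequentist
# agent uses the exact predictive confidence distribution
# N(xbar, sigma^2 * (1 + 1/n)); the Bayesian agent uses the posterior
# predictive under a N(0, tau^2) prior on mu. Repeating the experiment and
# transforming the realized observable through each predictive CDF, only
# the exact confidence distribution's PITs are guaranteed uniform for
# every fixed mu.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, tau, n, reps = 3.0, 1.0, 1.0, 5, 20_000  # illustrative values

pit_conf, pit_bayes = [], []
for _ in range(reps):
    x = rng.normal(mu, sigma, size=n)
    y = rng.normal(mu, sigma)  # the observable that later "becomes known"
    xbar = x.mean()

    # Exact predictive confidence distribution: N(xbar, sigma^2 (1 + 1/n)).
    # Since y - xbar ~ N(0, sigma^2 (1 + 1/n)), this PIT is exactly uniform.
    pit_conf.append(stats.norm.cdf(y, xbar, sigma * np.sqrt(1 + 1 / n)))

    # Bayesian posterior predictive under the N(0, tau^2) prior:
    # conjugate update, then add the observation noise variance.
    post_var = 1.0 / (n / sigma**2 + 1.0 / tau**2)
    post_mean = post_var * n * xbar / sigma**2
    pit_bayes.append(stats.norm.cdf(y, post_mean, np.sqrt(post_var + sigma**2)))

# Kolmogorov-Smirnov distance from Uniform(0, 1) as a uniformity summary.
d_conf = stats.kstest(pit_conf, "uniform").statistic
d_bayes = stats.kstest(pit_bayes, "uniform").statistic
print(f"exact confidence PITs: D = {d_conf:.3f}")
print(f"Bayesian PITs:         D = {d_bayes:.3f}")
```

Because the prior shrinks the posterior mean toward 0 while the true mean sits at 3, the Bayesian PITs pile up away from uniformity, while the confidence-distribution PITs stay flat up to Monte Carlo noise. With a prior centered on the truth (or in the flat-prior limit) the two agents would coincide, which is the sense in which this is a coverage caveat rather than a refutation.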
Exact confidence distributions are not nearly as flexible and useful as Bayesian posterior distributions, being available only in a limited set of circumstances, so this isn’t even close to a knock-down argument against the Bayesian approach. But when I read arguments like Fraser’s, the ideas I related above do niggle at me.