I’m about to see y = ( y_1, \dots, y_n ), each a real number, and (from the problem context) my uncertainty about the y_i is exchangeable. de Finetti proved that one logically-internally-consistent way to express my predictive distribution is

F ~ p( F )

( y_i | F ) ~ IID F

in which p( F ) is a prior on CDFs on \Re. Can I compute p( F | y ) using only Jaynes’s ‘sneaking up on infinity’ approach, with any contextually-suitably-rich p( F )? I’d like to know the answer to that.
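As a concrete stand-in (my own illustration, not from the original question), one contextually rich choice of p( F ) is a Dirichlet process, DP(alpha, G0). Its posterior given y is again a DP with base measure ( alpha·G0 + Σ_i δ_{y_i} ) / ( alpha + n ), and sampling it via *truncated* stick-breaking is itself a finite-n construction in the spirit of ‘sneaking up on infinity’. All names, the choice alpha = 1, and the base measure G0 = N(0, 1) below are assumptions for the sketch:

```python
# Sketch: one posterior draw F | y under a DP(alpha, N(0,1)) prior on CDFs,
# approximated by stick-breaking truncated at a finite number of atoms.
import numpy as np

def sample_posterior_cdf(y, alpha=1.0, trunc=500, grid=None, rng=None):
    """Draw one sample from p(F | y) and evaluate it on a grid."""
    rng = np.random.default_rng() if rng is None else rng
    y = np.asarray(y, dtype=float)
    n = len(y)
    if grid is None:
        grid = np.linspace(y.min() - 3, y.max() + 3, 200)
    # Stick-breaking weights: posterior concentration is alpha + n.
    betas = rng.beta(1.0, alpha + n, size=trunc)
    betas[-1] = 1.0  # close the stick so the truncated weights sum to 1
    weights = betas * np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    # Atom locations ~ posterior base measure: a data point with
    # probability n/(alpha+n), otherwise a fresh draw from G0 = N(0,1).
    from_data = rng.random(trunc) < n / (alpha + n)
    atoms = np.where(from_data, rng.choice(y, size=trunc),
                     rng.normal(size=trunc))
    # The sampled F is a step CDF: F(g) = sum of weights of atoms <= g.
    return np.array([weights[atoms <= g].sum() for g in grid]), grid

y = np.array([-0.5, 0.1, 0.3, 1.2])
F, grid = sample_posterior_cdf(y, rng=np.random.default_rng(0))
```

Letting the truncation level grow plays the role of Jaynes’s n → ∞; the interesting question above is whether that passage to the limit can be made to carry the whole construction.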

Let me say at the outset that I’m a tremendous fan of all of Jaynes’s work. However, it’s not obvious to me that his ‘finite sets policy’ induces complete rigor in his probability system in all cases in which he wants to quantify uncertainty about uncountably infinitely many propositions, simultaneously, in a logically-internally-consistent manner. His beautiful book is filled with examples of this type; for instance, in his section 4.5 he builds a continuous CDF G ( . ) on the unit interval and invites us to evaluate G ( f ) for any 0 < f < 1. At that point we are making uncountably infinitely many probability assertions without having ‘snuck up on infinity’ in the usual Jaynesian manner of evaluating A_n and then gently letting n get big.
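To make the finite-sets move concrete (a toy of my own, not Jaynes’s construction — I use a Beta(2, 5) CDF as a stand-in for his G): at stage n we assert only the n values G( k/n ), so a query G( f ) for arbitrary f must be answered through the finite approximations G_n and their limit as n grows:

```python
# Toy: a continuous CDF G on (0,1) -- here Beta(2,5), closed form via the
# regularized incomplete beta function -- approached through finite
# n-point discretizations G_n, in the 'evaluate at n, then let n grow' style.
from math import comb

def beta25_cdf(f):
    """CDF of Beta(2, 5): I_f(2, 5) = sum_{j=2}^{6} C(6,j) f^j (1-f)^(6-j)."""
    return sum(comb(6, j) * f**j * (1 - f)**(6 - j) for j in range(2, 7))

def G_n(f, n):
    """Stage-n assertion: only the grid values G(k/n) are available,
    so answer with the largest grid point at or below f."""
    k = int(f * n)
    return beta25_cdf(k / n)

f = 1 / 3
approx = [G_n(f, n) for n in (10, 100, 1000, 10000)]
exact = beta25_cdf(f)
```

The finite answers converge to G( f ), but only one f at a time; asserting G( f ) for *every* f in (0, 1) at once is exactly the uncountable leap the passage above worries about.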

Doing so instantly avoids any problems of the form: (1) assume an infinite limit already accomplished, but don’t specify how the limit was approached; (2) ask a question whose answer depends on how the limit was approached; (3) proclaim to the whole world that you’ve discovered a paradox in statistics and/or proved that Bayesian statistics is nonsense.
