I understand that a lot of past work has focussed on minimal training samples (e.g. intrinsic Bayes factors), but I would guess that one could make a more strategic decision: start with a minimal training sample and progressively enlarge it until some kind of utility function tells you, “well, we’ve learnt enough about the parameters of the competing models; it’s now a good idea to go ahead and compute Bayes factors with the remainder of the data”?
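As a toy sketch of what I mean (my own illustration, not from any particular reference): compare M1, x ~ N(mu, 1) with mu ~ N(0, 1), against M0, x ~ N(0, 1), and use a crude utility rule — keep enlarging the training sample while each extra observation still shrinks the posterior standard deviation of mu by more than some tolerance — before computing a partial Bayes factor on the remainder. The function names and the specific stopping rule are just placeholders for whatever utility one actually has in mind.

```python
import numpy as np

def norm_logpdf(x, mean, var):
    """Log density of N(mean, var) at x."""
    return -0.5 * np.log(2 * np.pi * var) - (x - mean) ** 2 / (2 * var)

def adaptive_training_size(n_total, tol=0.05, n_min=1):
    """Toy utility rule: grow the training sample while one more
    observation still shrinks the posterior sd of mu (under M1) by
    more than `tol`; stop once there is little left to learn."""
    n = n_min
    while n < n_total - 1:
        sd_now = (1.0 + n) ** -0.5    # posterior sd after n points
        sd_next = (2.0 + n) ** -0.5   # ...after one more point
        if sd_now - sd_next < tol:
            break
        n += 1
    return n

def partial_bayes_factor(x, n_train):
    """Log partial Bayes factor for M1: x ~ N(mu,1), mu ~ N(0,1)
    versus M0: x ~ N(0,1), training on the first n_train points."""
    train, rest = x[:n_train], x[n_train:]
    # Conjugate posterior for mu under M1 after the training sample.
    v = 1.0 / (1.0 + len(train))
    m = v * train.sum()
    # Exact marginal likelihood of the remainder under M1, computed
    # sequentially via the posterior predictive N(m, v + 1).
    log_m1 = 0.0
    for xi in rest:
        log_m1 += norm_logpdf(xi, m, v + 1.0)
        v_new = 1.0 / (1.0 / v + 1.0)
        m = v_new * (m / v + xi)
        v = v_new
    log_m0 = norm_logpdf(rest, 0.0, 1.0).sum()
    return log_m1 - log_m0

x = np.full(20, 3.0)                  # data far from M0's mean of 0
n = adaptive_training_size(len(x))    # -> 4 with the default tol
print(n, partial_bayes_factor(x, n))  # positive log BF: favours M1
```

Of course, in this conjugate toy case the posterior sd shrinks deterministically with n, so the stopping rule does not even look at the data; the interesting question is what the analogous utility would be for models where learning about the parameters genuinely depends on which observations arrive.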
