On using regressions for each pair of models, the argument is that this will increase robustness. If the Bayes factors between a few pairs of models are captured poorly, then the remaining pairs may still be enough to approximate sufficient statistics. For problems with many models (more than three, say) this would produce too many summaries, and a multinomial-regression-like approach would be preferable.
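To make the comparison concrete, here is a minimal sketch (not the paper's code) of the multinomial alternative: with K models, pairwise regressions give K(K-1)/2 summary statistics, whereas a single multinomial logistic regression on pilot simulations yields one linear predictor per model. The simulator and model shifts below are toy placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
K, n_per_model, d = 4, 200, 5  # 4 toy models, 5 raw summaries each

# Pilot-stage simulations: model k shifts the raw summaries by 2k (toy choice)
X = np.vstack([rng.normal(loc=2.0 * k, size=(n_per_model, d)) for k in range(K)])
y = np.repeat(np.arange(K), n_per_model)

# One multinomial fit replaces the K*(K-1)/2 = 6 pairwise fits
clf = LogisticRegression(max_iter=1000).fit(X, y)

# The fitted linear predictors serve as low-dimensional model-choice summaries
summaries = clf.decision_function(X)
print(summaries.shape)  # one column per model rather than per model pair
```

The summary dimension now grows linearly rather than quadratically in the number of models.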

You mention that the restricted regions alter the prior model weights. We attempt to correct for this in the “Truncation Correction” section on page 10.

On alternatives to hypercubes for restricted regions, I am toying with using an HPD region for a normal or mixture-of-normals approximation to the posterior. You raise a good point about the difficulties of extending these to high-dimensional problems.
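For the single-normal case the idea is simple enough to sketch: under a multivariate normal approximation to the posterior, the (1 - alpha) HPD region is the ellipsoid where the squared Mahalanobis distance falls below a chi-squared quantile. The mean and covariance below are arbitrary placeholders.

```python
import numpy as np
from scipy.stats import chi2

def in_hpd(theta, mean, cov, level=0.95):
    """True if theta lies in the `level` HPD ellipsoid of N(mean, cov)."""
    diff = np.asarray(theta) - np.asarray(mean)
    maha2 = diff @ np.linalg.solve(cov, diff)      # squared Mahalanobis distance
    return maha2 <= chi2.ppf(level, df=len(mean))  # chi-squared cut-off

mean = np.zeros(2)
cov = np.array([[1.0, 0.3], [0.3, 2.0]])
print(in_hpd([0.5, -0.5], mean, cov))  # near the mode: inside
print(in_hpd([5.0, 5.0], mean, cov))   # far from the mode: outside
```

For a mixture of normals the region is no longer an ellipsoid, which is one source of the high-dimensional difficulty noted above.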

You mention a couple of criticisms that also apply to the RSS B paper: the dependence of the method on how the pilot stage is performed, and the problem of (weakly) using the data twice. These are still very much valid criticisms, and the same heuristic justifications as before are used in this paper.

I’m completely in agreement with your closing comments on the potential problems of comparing models based on different vectors of statistics. I didn’t consider that the same issue could arise from using regularised regression. Perhaps L2 regularisation would be preferable here (some regularisation is needed in the main application to avoid ill-conditioning).
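The ill-conditioning point can be illustrated with a toy example: when two summaries are nearly collinear, the normal-equations matrix X^T X is close to singular, and an L2 (ridge) penalty lambda*I restores a stable solve. The design and the choice lambda = 0.1 below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 1e-8 * rng.normal(size=n)    # nearly identical second column
X = np.column_stack([x1, x2])
y = x1 + rng.normal(scale=0.1, size=n)

gram = X.T @ X
print(np.linalg.cond(gram))            # enormous: unpenalised solve is unstable

lam = 0.1
ridge = np.linalg.solve(gram + lam * np.eye(2), X.T @ y)
print(np.linalg.cond(gram + lam * np.eye(2)))  # modest after the L2 penalty
print(ridge)                           # weight is shared across the near-duplicates
```

Unlike L1 penalties, the ridge solution keeps both collinear summaries with shared weight rather than arbitrarily dropping one, which seems closer in spirit to retaining a common vector of statistics across models.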
