“Choosing the ranges has been criticized as introducing subjectivity; however, the key point is that the ranges are given quantitatively and should be justified”
On arXiv, I came across a paper by the physicists Dunstan, Crowne, and Drew on computing the Bayes factor by linear regression, a paper I found rather hard to read given that the method is never completely spelled out, but rather described through some examples (or the captions of figures)… The magical formula (for the marginal likelihood),
$$\widehat m(x)\;\approx\;\frac{(2\pi)^{n/2}\,\widehat L\,\sqrt{\det \mathrm{Cov}}}{V},$$
where n is the parameter dimension, Cov the covariance matrix of the estimated parameters (the inverse Fisher information), $\widehat L$ the maximised likelihood, and the denominator V the volume of a flat prior on a hypercube (!), seems to come from a Laplace approximation.
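To make the recipe concrete, here is a minimal sketch of such a Laplace-plus-flat-prior computation for a toy Gaussian-mean model; the model, the prior range (-10, 10), and every name in the code are my own illustrative choices, not anything taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# toy data: 50 draws from a N(1.2, 1) model with known unit variance
rng = np.random.default_rng(0)
x = rng.normal(loc=1.2, scale=1.0, size=50)

def neg_log_lik(theta):
    # negative log-likelihood of the N(theta, 1) model
    return -norm.logpdf(x, loc=theta[0], scale=1.0).sum()

fit = minimize(neg_log_lik, x0=np.array([0.0]))  # BFGS by default
n = fit.x.size              # parameter dimension
cov = fit.hess_inv          # BFGS inverse Hessian, a rough stand-in for Cov
log_L_max = -fit.fun        # maximised log-likelihood

# flat prior over a hypercube, here the arbitrary range (-10, 10) per component
V = 20.0 ** n

# Laplace approximation of the log marginal likelihood under that flat prior
log_evidence = (log_L_max
                + 0.5 * n * np.log(2.0 * np.pi)
                + 0.5 * np.linalg.slogdet(cov)[1]
                - np.log(V))
print(f"approximate log-evidence: {log_evidence:.3f}")
```

I rely on the BFGS inverse Hessian as a cheap proxy for the covariance matrix; a more careful version would use the exact Fisher information or a numerical Hessian at the mode.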
But it depends rather crucially (!) on the choice of this volume (a dependence quantified below), a severe drawback that the authors brush aside with the above quote… and by using a first example where the parameters have a similar meaning under both models. The following examples compare models of different parameter dimensions without justifying (enough) the supports of the corresponding priors. In addition, using a flat prior over the hypercube seems to clash with the existence of a (Fisher) correlation between the components. (To be completely open as to why I discuss this paper: I was asked to review it, an invitation I declined.)
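And to quantify the volume dependence flagged above: since the approximation scales as 1/V, rescaling every prior range by a common factor k multiplies the approximate evidence of an n-dimensional model by k^{-n}, hence the Bayes factor between two models of dimensions n₁ and n₂ becomes
$$\widehat B_{12}(k)\;=\;\frac{k^{-n_1}\,\widehat m_1(x)}{k^{-n_2}\,\widehat m_2(x)}\;=\;k^{\,n_2-n_1}\,\widehat B_{12}(1),$$
so that, with, say, three extra parameters in the second model, merely doubling all prior ranges multiplies the Bayes factor in favour of the smaller model by eight.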