Thank you, Dan! Blame Andrew for the quality of writing and excuse my French!!! I am glad you notice the “tension” [in the physical, not psychological, meaning of the word] in the review, as Andrew and I (and Judith) are not in total agreement about Bayes factors and the whole idea of testing. I completely agree that electronic journals should open up to new types of entries, from non-reviewed discussions of main papers (something I repeatedly suggested for Bayesian Analysis, but never found the time to start via an independent “Bayesian Analysis – Comments” blog) to, indeed, unsuccessful attempts and dead-ends. Maybe there is a fear among editors that this would hurt the readership or the impact factor, or maybe the [time] cost of running those additional threads is too daunting.

Sure, this is a serious possibility. (Are you truly Andrew Gelman?!)

An interesting thing about the likelihood vs prior dichotomy discussed in the paragraph that I quoted is the way in which the standard categorisation differs from reality. I doubt anyone actually using a statistical model in practice would suggest that the likelihood is objective, yet this implication seems to float unhindered through statistical courses, textbooks, monographs, and literature. In fact, I would suspect (although I could be horribly wrong) that the majority of effort when building (hierarchical) Bayesian models is devoted to building the prior (especially when there is a physical model that can partially explain the latent process), followed by the hyperparameters. The model for the actual observation process (the likelihood) rarely (in my rather limited and specific, and therefore probably not sufficiently general, experience) seems to receive the same amount of love.

‘In a nearly century-long tradition in statistics, any probability model is sharply divided into “likelihood” (which is considered to be objective and, in textbook presentations, is often simply given as part of the mathematical specification of the problem) and “prior” (a dangerously subjective entity to which the statistical researcher is encouraged to pour all of his or her pent-up skepticism). This may be a tradition but it has no logical basis. If writers such as Aitkin wish to consider their likelihoods as objective and consider their priors as subjective, that is their privilege. But we would prefer them to restrain themselves when characterizing the models of others. It would be polite to either tentatively accept the objectivity of others’ models or, contrariwise, to gallantly affirm the subjectivity of one’s own choices.’

is worth the price of admission alone. [It’s far too rare that the quality of writing in a statistics article is worth commenting on.] But I am not convinced that this fantastic paragraph—which, in practice, boils down to “don’t conflate ‘useful’ and ‘correct’”—is entirely consistent with the view on hypothesis testing in section 7 (which, incidentally, features another great one-liner). Surely nothing that we do in applied statistics can step beyond a cartoon, a caricature or a ‘treasured metaphor’!

I also think there is an interesting (implicit) challenge in this blog post: statistical journals have really not embraced the ‘electronic’ format. The difference between the structure of a statistics journal from 1991 and one from 2011 is disheartening. I really hope that the community as a whole works to rectify this – people need to write non-traditional pieces like long reviews, opinion pieces, open problems and interesting unsuccessful attempts.
