Sometimes, if not that often, I forget about submitted papers to the point of thinking they are already accepted. This happened with the critical analysis of Murray Aitkin’s book Statistical Inference, already debated on the ‘Og, written with Andrew Gelman and Judith Rousseau, and resubmitted to Statistics and Risk Modeling in November…2011. As I had received a response to our analysis from Murray a few months ago, I was under the impression it was published or about to be published. Earlier this week, I started looking for the reference in connection with the paper I was completing on the Jeffreys-Lindley paradox and could not find it. Checking emails on that topic, I then discovered the latest one was from November 2011 and the editor, when contacted, confirmed the paper was still under review! As it got accepted only a few hours later, my impression is that it had been misfiled and forgotten at some point, an impression reinforced by an earlier experience with the previous avatar of the journal, Statistics & Decisions. In the 1990s, George Casella and I had a paper submitted to this journal for a while, which eventually got accepted. Then nothing happened for a year and more, until we contacted the editor, who acknowledged the paper had been misfiled and forgotten! (This was before the electronic processing of papers, so it is quite plausible that the file corresponding to our accepted paper slipped under a drawer or into the wrong pile and that the editor was not keeping track of those accepted papers. After all, until Series B turned submission into an all-electronic experience, I was using a text file to keep track of daily submissions…) If you knew George, you can easily imagine his reaction when reading this reply… Anyway, all is well that ends well, in that our review and Murray’s reply will appear in Statistics and Risk Modeling, hopefully within a reasonable delay.
Archive for rejection
Following Nicolas’ guest-post on this ‘Og, plus Andrew’s and mine, we took advantage of Kerrie Mengersen visiting Paris to write a common piece on the future of the refereeing system and on our proposals to improve it from within, rather than tearing the whole thing down. In particular, one idea is to make writing referees’ reports part of academic vitae, by turning them into discussions of published papers. Another one is to achieve some training of referees, by setting refereeing codes and more formalised steps. Yet another one is to federate reports, rather than repeating the process one journal at a time for the unlucky ones… The resulting paper has now appeared on arXiv and has just been submitted. (I am rather uncertain about the publication chances of this paper, given it is an opinion column rather than a research paper…! It has already been rejected once, twice, three… five times!)
Following rejections of our discussion paper on Murray Aitkin’s book, Statistical Inference, written with Andrew Gelman and Judith Rousseau, by the journals Bayesian Analysis [where I think it truly belonged, being more than a book review, namely an assessment of the relevance of the approach from a Bayesian viewpoint!], JASA Book Reviews, and Electronic Journal of Statistics, we have decided to try yet another outlet for our discussion, Statistics and Decisions, to which I had not submitted a paper in about twenty years (since the loss of an accepted paper with George Casella by the S&D editor at the time!). More fundamentally, I completely understand and acknowledge the individual decision by each editorial board not to publish our piece in their respective journals, but I bemoan (once again) the lack of outlets for this type of opinion column, which should appeal to the community as a whole (again, because this is a book that aims at a complete shift in, or out of, Bayesian theory!) and which should be possible given current electronic communication tools. In other, more precise words, journals should start blogs or forums where readers could comment on published papers and where, why not?!, rejected authors could respond to reviews… This is why I liked the format of the review process in the journal Hydrology and Earth System Sciences, which allows for the publication of referees’ reports and of comments from the readership. In any case, I hope Statistics and Decisions will be interested in our piece, as we are about to run out of options and stamina! (I usually give up much earlier than that!)
“(X) narrowly focused and the readership is likely to be very limited.”

Such was the verdict I received this morning. Fine, I understand the point and appreciate the quick return (even though I now worry about the overall publishability of the review).
Our Savage-Dickey paper has been rejected by the Annals of Statistics, for being too obscure. I completely understand the Editor’s perspective that this resolution of ours has very little bearing on statistical practice and on the community as a whole (“the readership at large”, as I used to write for Series B). But I fear both screeners have missed the main point of our paper, which is that papers and books using the Savage-Dickey ratio all start with an assumption that does not make sense from a measure-theoretic point of view… One screener argued that our point is moot, given that everyone agrees on the same version of the density, as otherwise “would even a likelihood function be properly defined?” But this is not true: a likelihood is the value of the density at the observed value of the random variable. Since this observed value is by nature random, it is not possible to define a specific version of the density function at x… This may, alas, be related to the progressive disappearance of measure theory from Statistics programs: when my third-year exchange students go abroad, it is rarer and rarer to find a place that offers measure theory at a level lower than a PhD course.
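For readers unfamiliar with the identity at stake, here is a sketch of the standard Savage–Dickey representation, in my own notation (not necessarily the paper’s): when testing H0: θ = θ0 in a model with parameter (θ, ψ), under a prior π0(ψ) for the null model and π1(θ, ψ) for the alternative,

```latex
% Savage--Dickey representation of the Bayes factor for H0: theta = theta_0,
% stated under the (version-dependent!) density condition
%   pi_1(psi | theta_0) = pi_0(psi).
\[
B_{01}(x) \;=\; \frac{m_0(x)}{m_1(x)}
         \;=\; \frac{\pi_1(\theta_0 \mid x)}{\pi_1(\theta_0)}\,,
\]
% where pi_1(theta | x) and pi_1(theta) are the marginal posterior and
% prior densities of theta under the alternative. Both densities are
% defined only up to sets of measure zero, hence their values at the
% single point theta_0 depend on the versions chosen: this is the
% measure-theoretic difficulty the paper addresses.
```

The right-hand side looks like a well-defined ratio, but each density in it is an equivalence class, and picking values at the null set {θ0} requires choosing specific versions, which is precisely where the trouble starts.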