Archive for refereeing

running ABC when the likelihood is available

Posted in Statistics on September 19, 2017 by xi'an

Today I refereed a paper where the authors used ABC to bypass convergence (and implementation) difficulties with their MCMC algorithm. And I am still pondering whether or not this strategy makes sense. If only because ABC needs to handle the same complexity and the same number of parameters as an MCMC algorithm. While shooting “in the dark” by using the prior or a coarse substitute for the posterior. And I wonder at the relevance of simulating new data when the [true] likelihood value [at the observed data] can be computed. This would sound to me like the relevant and unique “statistic” worth considering…
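To make the contrast concrete, here is a toy normal-mean example (entirely my own sketch, not the setting of the paper under review) where both ABC rejection sampling and a likelihood-based Metropolis sampler target the same posterior. When the likelihood is computable, the second needs no pseudo-data at all:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: y_i ~ N(theta, 1), prior theta ~ N(0, 10^2); n = 50 observations
y_obs = rng.normal(2.0, 1.0, size=50)
s_obs = y_obs.mean()  # a sufficient summary statistic in this toy model

def abc_rejection(n_draws=50000, eps=0.05):
    """ABC: draw theta from the prior, simulate a pseudo sample mean,
    keep the draws whose summary falls within eps of the observed one."""
    theta = rng.normal(0.0, 10.0, size=n_draws)
    pseudo_means = rng.normal(theta, 1.0 / np.sqrt(len(y_obs)))
    return theta[np.abs(pseudo_means - s_obs) < eps]

def metropolis(n_iter=20000, scale=0.5):
    """Exact alternative: random-walk Metropolis on the actual posterior,
    using the likelihood directly instead of simulating pseudo-data."""
    def log_post(t):
        return -0.5 * np.sum((y_obs - t) ** 2) - 0.5 * t ** 2 / 100.0
    chain = np.empty(n_iter)
    t, lp = 0.0, log_post(0.0)
    for i in range(n_iter):
        prop = t + scale * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            t, lp = prop, lp_prop
        chain[i] = t
    return chain[n_iter // 2:]  # discard burn-in

abc_draws = abc_rejection()
mh_draws = metropolis()
print(s_obs, abc_draws.mean(), mh_draws.mean())  # all three close
```

Note the asymmetry: the ABC run wastes most of its 50,000 prior draws (shooting “in the dark”), while every Metropolis iteration exploits the likelihood value at the observed data.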

crowd-based peer review

Posted in Statistics on June 20, 2017 by xi'an

In clear connection with my earlier post on Peer Community In… and my visit this week to Montpellier towards starting a Peer Community In Computational Statistics, I read a tribune in Nature (1 June, p.9) by the editor of Synlett, Benjamin List, describing an experiment conducted by this journal in chemical synthesis. The approach was to post (volunteered) submitted papers on a platform accessible to a list of 100 reviewers, nominated by the editorial board, who could anonymously comment on the papers and read others’ equally anonymous comments. With a 72-hour deadline! According to Benjamin List (and based on a large dataset of… 10 papers!), the experiment returned reviews of better quality than traditional refereeing policies. While Peer Community In… does not work exactly this way, and does not aim at operating as a journal, it is exciting and encouraging to see such experiments unfold!

peer community in evolutionary biology

Posted in Statistics on May 18, 2017 by xi'an

My friends (and co-authors) from Montpellier pointed out the existence of PCI Evolutionary Biology, a preprint and postprint validation forum [so far only] in the field of Evolutionary Biology. Authors of a preprint or of a published paper request a recommendation from the forum. If someone from the board finds the paper of interest, this person initiates a quick refereeing process with one or two referees and returns a review to the authors, with possible requests for modification. If the lead reviewer is happy with the new version, the link to the paper and the reviews are published on PCI Evol Biol, which thus gives a stamp of validation to the contents of the paper. The paper can then be submitted for publication in any journal, as can be seen from the papers in the list.

This sounds like a great initiative and since PCI is calling for little brothers and sisters to PCI Evol Biol, I think we should try to build its equivalent in Statistics or maybe just Computational Statistics.

estimation versus testing [again!]

Posted in Books, Statistics, University life on March 30, 2017 by xi'an

The following text is a review I wrote of the paper “Parameter estimation and Bayes factors”, written by J. Rouder, J. Haaf, and J. Vandekerckhove. (The journal to which it was submitted gave me the option to sign my review.)

The opposition between estimation and testing as a matter of prior modelling rather than inferential goals is quite unusual in the Bayesian literature. In particular, if one follows Bayesian decision theory, as in Berger (1985), there is no such opposition, but rather the use of different loss functions for different inference purposes, while the Bayesian model remains single and unitary.

Following Jeffreys (1939), it sounds more congenial to the Bayesian spirit to return the posterior probability of an hypothesis H⁰ as an answer to the question whether this hypothesis holds or does not hold. This however proves impossible when the “null” hypothesis H⁰ has prior mass equal to zero (or is not measurable under the prior). In such a case the mathematical answer is a probability of zero, which may not satisfy the experimenter who asked the question. More fundamentally, the said prior proves inadequate to answer the question and hence to incorporate the information contained in this very question. This is how Jeffreys (1939) justifies the move from the original (and deficient) prior to one that puts some weight on the null (hypothesis) space. It is often argued that the move is unnatural and that the null space does not make sense, but this only applies when believing very strongly in the model itself. When considering the issue from a modelling perspective, accepting the null H⁰ means using a new model to represent the data, and hence testing becomes a model choice problem, namely whether or not one should use a complex or simplified model to represent the generation of the data. This is somehow the “unification” advanced in the current paper, although it originally appears in Jeffreys (1939) [and then numerous others] rather than in the relatively recent Mitchell & Beauchamp (1988), who may have launched the spike & slab denomination.
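Jeffreys’ move can be illustrated on a one-observation Gaussian toy example (my own illustration, not taken from the paper): under a purely continuous prior the posterior probability of the point null is identically zero, while granting H⁰ its own prior mass produces a usable answer.

```python
import math

# Toy example: x ~ N(theta, 1), testing H0: theta = 0.
# Under a continuous prior theta ~ N(0, tau^2) alone, P(theta = 0 | x) = 0.
# Jeffreys' fix: prior mass prior_null on H0, a N(0, tau^2) "slab" otherwise.

def posterior_prob_null(x, tau=1.0, prior_null=0.5):
    # marginal likelihood of x under H0: x ~ N(0, 1)
    m0 = math.exp(-0.5 * x ** 2) / math.sqrt(2 * math.pi)
    # marginal under H1, integrating theta out: x ~ N(0, 1 + tau^2)
    v1 = 1.0 + tau ** 2
    m1 = math.exp(-0.5 * x ** 2 / v1) / math.sqrt(2 * math.pi * v1)
    return prior_null * m0 / (prior_null * m0 + (1 - prior_null) * m1)

print(posterior_prob_null(0.5))  # observation near 0: the null is supported
print(posterior_prob_null(3.0))  # observation far from 0: the null fades
```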

I have trouble with the analogy drawn in the paper between the spike & slab estimate and the Stein effect. While the posterior mean derived from the spike & slab posterior is indeed a quantity drawn towards zero by the Dirac mass at zero, this is rarely the point of using a spike & slab prior, since the point estimate does not lead to a conclusion about the hypothesis: for one thing, it is never exactly zero (if zero corresponds to the null); for another, the construction of the spike & slab prior is both artificial and dependent on the weights given to the spike and to the slab, respectively, to borrow expressions from the paper. This approach thus leads to model averaging rather than hypothesis testing or model choice, and therefore fails to answer the (possibly absurd) question as to which model to choose. Or refuse to choose. But there are cases when a decision must be made, like continuing a clinical trial or putting a new product on the market. Or not.
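In the same toy Gaussian setting (again my own sketch, not the paper’s), the spike & slab posterior mean is indeed pulled towards zero by the Dirac mass yet never reaches it for a non-zero observation, which is why it cannot by itself settle the test:

```python
import math

# x ~ N(theta, 1); prior: theta = 0 with probability w (spike),
# theta ~ N(0, tau^2) with probability 1 - w (slab)

def spike_slab_posterior_mean(x, w=0.5, tau=1.0):
    m0 = math.exp(-0.5 * x ** 2)                       # likelihood at the spike
    v1 = 1.0 + tau ** 2
    m1 = math.exp(-0.5 * x ** 2 / v1) / math.sqrt(v1)  # slab marginal
    p_spike = w * m0 / (w * m0 + (1 - w) * m1)
    # conditional on the slab, the posterior mean is x shrunk by tau^2/(1+tau^2)
    return (1.0 - p_spike) * x * tau ** 2 / v1

for x in (0.5, 1.0, 2.0):
    print(x, spike_slab_posterior_mean(x))  # drawn towards, never onto, zero
```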

In conclusion, the paper surprisingly bypasses the decision-making aspect of testing and hence ends up in an inconclusive setting, staying midstream between Bayes factors and credible intervals, and failing to provide a tool for decision making. The paper also fails to acknowledge the strong dependence of the Bayes factor on the tail behaviour of the prior(s), which cannot be [completely] corrected by a finite sample, hence its relativity and the unreasonableness of a fixed scale like Jeffreys’ (1939).
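This dependence on the prior scale can be checked numerically in the same toy setting (once more a sketch of mine, not the paper’s analysis): for a fixed observation, inflating the slab scale τ drives the Bayes factor towards the null without bound, the Jeffreys-Lindley phenomenon.

```python
import math

# Bayes factor B01 of H0: theta = 0 against H1: theta ~ N(0, tau^2),
# for a single observation x ~ N(theta, 1)

def bayes_factor_01(x, tau):
    m0 = math.exp(-0.5 * x ** 2)                       # marginal under H0
    v1 = 1.0 + tau ** 2
    m1 = math.exp(-0.5 * x ** 2 / v1) / math.sqrt(v1)  # marginal under H1
    return m0 / m1

x = 2.0  # borderline "significant" at the usual 5% level
for tau in (1.0, 10.0, 100.0):
    print(tau, bayes_factor_01(x, tau))  # B01 grows with the prior scale
```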

Journal of Open Source Software

Posted in Books, R, Statistics, University life on October 4, 2016 by xi'an

A week ago, I received a request to referee a paper for the Journal of Open Source Software, which I had never seen (or heard of) before. The concept is quite interesting, with a scope much broader than statistical computing (as I do not know anyone on the board and no-one there seems affiliated with a Statistics department). Papers are very terse, describing the associated code in one page or two, and the purpose of refereeing is to check the code. (I was asked to evaluate an MCMC R package but declined for lack of time.) Which is a pretty light task if the code is friendly enough to operate right away and provide demos. Best of luck to this endeavour!

[Royal] Series B’log

Posted in Books, Statistics, University life, Wines on September 12, 2016 by xi'an

[Thanks to Ingmar for suggesting the additional Royal!]

Last week, I got an email from Piotr Fryzlewicz on behalf of the Publication Committee of the Royal Statistical Society enquiring about my interest in becoming a blog associate editor for Series B! Although it does not come exactly as a surprise, as I had previously heard about this interest in creating a dedicated blog, this is great news, as I think a lively blog can only enhance the visibility and impact of papers published in Series B, and hence the journal's influence. Being quite excited by this on-line and interactive extension to the journal, I have accepted the proposal and we are now working on designing the new blog (Series B’log!) to get it on track as quickly as possible.

Suggestions towards this experiment are most welcome! I am thinking of involving authors to write blog summaries of their papers, AEs and reviewers to voice their expert opinions about the papers, anonymously or not, and of course anyone interested in commenting on a paper. The idea is to turn (almost) all papers into on-line Read Papers, with hopefully the backup of authors through their interactions with the commentators. I certainly do not intend to launch discussions on each and every paper, betting instead on the AEs or referees to share their impressions. And if a paper ends up being undiscussed, this may prove enough of an incentive for some. (Someone asked me if we intended to discuss rejected papers as well. This is an interesting concept, but not to be considered at the moment!)

Florid’AISTATS

Posted in pictures, R, Statistics, Travel, University life on August 31, 2016 by xi'an

The next AISTATS conference is taking place in Fort Lauderdale, Florida, on April 20-22. (The website keeps the same address from one conference to the next, which means all my links to the AISTATS 2016 conference in Cadiz are no longer valid. And that the above sunset from Florida is named… cadiz.jpg!) The deadline for paper submission is October 13 and there are two novel features:

  1. Fast-track for Electronic Journal of Statistics: Authors of a small number of accepted papers will be invited to submit an extended version for fast-track publication in a special issue of the Electronic Journal of Statistics (EJS) after the AISTATS decisions are out. Details on how to prepare such extended journal paper submission will be announced after the AISTATS decisions.
  2. Review-sharing with NIPS: Papers previously submitted to NIPS 2016 are required to declare their previous NIPS paper ID, and optionally supply a one-page letter of revision (similar to a revision letter to journal editors; anonymized) in supplemental materials. AISTATS reviewers will have access to the previous anonymous NIPS reviews. Other than this, all submissions will be treated equally.

I find both initiatives worth applauding and replicating in other machine-learning conferences, particularly with regard to the recent debate we had at the Annals of Statistics.