Archive for refereeing

new reproducibility initiative in TOMACS

Posted in Books, Statistics, University life on April 12, 2016 by xi'an

[A quite significant announcement last October from TOMACS that I had missed:]

To improve the reproducibility of modeling and simulation research, TOMACS is pursuing two strategies.

Number one: authors are encouraged to include sufficient information about the core steps of the scientific process leading to the presented research results and to make as many of these steps as transparent as possible, e.g., data, model, experiment settings, incl. methods and configurations, and/or software. Associate editors and reviewers will be asked to assess the paper also with respect to this information. Thus, although not required, submitted manuscripts which provide clear information on how to generate reproducible results, whenever possible, will be considered favorably in the decision process by reviewers and the editors.

Number two: we will form a new replicating computational results activity in modeling and simulation as part of the peer reviewing process (adopting the procedure RCR of ACM TOMS). Authors who are interested in taking part in the RCR activity should announce this in the cover letter. The associate editor and editor in chief will assign a RCR reviewer for this submission. This reviewer will contact the authors and will work together with the authors to replicate the research results presented. Accepted papers that successfully undergo this procedure will be advertised at the TOMACS web page and will be marked with an ACM reproducibility brand. The RCR activity will take place in parallel to the usual reviewing process. The reviewer will write a short report which will be published alongside the original publication. TOMACS also plans to publish short reports about lessons learned from non-successful RCR activities.

[And now the first paper reviewed according to this protocol has been accepted:]

The paper Automatic Moment-Closure Approximation of Spatially Distributed Collective Adaptive Systems is the first paper that took part in the new replicating computational results (RCR) activity of TOMACS. The paper successfully completed the additional reviewing, as documented in its RCR report. This reviewing is aimed at ensuring that the computational results presented in the paper are replicable. Digital artifacts like software, mechanized proofs, data sets, test suites, or models are evaluated with respect to ease of use, consistency, completeness, and quality of documentation.

AISTATS 2016 [post-decisions]

Posted in Books, pictures, Statistics, Travel, University life on December 27, 2015 by xi'an

Now that the (extended) deadline for AISTATS 2016 decisions is gone, I can gladly report that out of 594 submissions, we accepted 165 papers, including 35 oral presentations. As reported in the previous blog post, I remain amazed at the gruesome efficiency of the processing machinery and at the overwhelmingly intense involvement of the various actors who handled those submissions. And at the help brought by the Toronto Paper Matching System, developed by Laurent Charlin and Richard Zemel. I clearly was not as active and responsive as many of those actors, and definitely not [and by far] as much as my co-program-chair, Arthur Gretton, who deserves all the praise for achieving a final decision by the end of the year. We have already received a few complaints from rejected authors, but this is to be expected with a rejection rate of 73%. (More annoying were the emails asking for our decisions in the very final days…) An amazing and humbling experience for me, truly! See you in Cadiz, hopefully.

grateful if you could give us your expert opinion [give as in gift]

Posted in Statistics on December 12, 2015 by xi'an

I received this mail today about refereeing a paper for yet another open access “publisher” and went and checked that the F1000Research business model was, as suspected, another of those websites charging large amounts for publishing. At least they ask real referees…

Dear Christian,

You have been recommended by so-and-so as being an expert referee for their article “dis-and-dat” published in F1000Research. Please would you provide a referee report for this article? The abstract is included at the end of this email and the full article is available here.

F1000Research is a unique open science publishing platform that was set up as part of Faculty of 1000 (by the same publisher who created BioMed Central and previously the Current Opinion journals). Our advisors include the Nobel Prize winners Randy Schekman and Sir Tim Hunt, Steve Hyman, Edward Benz, and many more.

F1000Research is aiming to reshape scientific publishing: articles are published rapidly after a careful editorial check, and formal peer review takes place openly after publication. Articles that pass peer review are indexed in PubMed and PubMed Central. Referees receive full credit for their contribution as their names, affiliations and comments are permanently attached to the article and each report is assigned a DOI and therefore easily citable.

We understand that you have a lot of other commitments, but we would be very grateful if you could give us your expert opinion on this article. We would of course be happy for a colleague (for example, someone in your group) to help prepare the report and be named as a co-referee with you.

AISTATS 2016 [post-submissions]

Posted in Books, pictures, Statistics, Travel, University life on October 22, 2015 by xi'an

Now that the deadline for AISTATS 2016 submissions is past, I can gladly report that we got the amazing number of 559 submissions, which is much more than what was submitted to previous AISTATS conferences. To the point that it made us fear for a little while [but no longer!] that the conference room would not be large enough. And hope that we would have to install video connections in the hotel bar!

This also means handling, within a single month, about the same number of papers as a year of JRSS Series B submissions, given the way submissions are processed for the AISTATS 2016 conference proceedings! The process is indeed [as in other machine-learning conferences] to allocate a bunch of papers to each associate editor [or meta-reviewer or area chair] and then have those AEs allocate papers to reviewers, all this within a few days, as the reviews have to be returned to authors within a month, by November 16 to be precise. This sounds like a daunting task but it proceeded rather smoothly, due to a high degree of automation in processing those papers (this is machine learning, after all!), thanks to (a) the prompt response of the large majority of AEs and reviewers involved, who bid on the papers that were of most interest to them, and (b) a computer program called the Toronto Paper Matching System, developed by Laurent Charlin and Richard Zemel, which tremendously helps with managing just about everything! Even when accounting for the more formatted entries in such proceedings (with an 8-page limit) and for the call to conference participants to review other papers, I remain amazed at the resulting difference in the time scales for handling papers in the fields of statistics and machine learning. (There was a short-lived attempt to replicate this type of processing for the Annals of Statistics, if I remember well.)
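For what it is worth, here is a crude toy sketch of the kind of affinity-based assignment such a system automates (my own simplistic illustration, with made-up scores and a greedy rule, and in no way the actual Toronto Paper Matching System, which builds its scores from reviewers' past publications and solves a proper constrained optimisation):

```python
import numpy as np

rng = np.random.default_rng(1)

# made-up affinity scores: rows are papers, columns are reviewers,
# standing in for a combination of bids and similarity to past publications
n_papers, n_reviewers = 12, 6
affinity = rng.random((n_papers, n_reviewers))

def greedy_assign(affinity, per_paper=3):
    """Greedily pick reviewers for each paper by decreasing affinity,
    under a per-reviewer load cap (not guaranteed to fill every paper)."""
    n_papers, n_reviewers = affinity.shape
    max_load = int(np.ceil(per_paper * n_papers / n_reviewers)) + 1
    load = np.zeros(n_reviewers, dtype=int)
    assignment = {p: [] for p in range(n_papers)}
    # scan all (paper, reviewer) pairs from highest to lowest affinity
    pairs = np.column_stack(
        np.unravel_index(np.argsort(-affinity, axis=None), affinity.shape)
    )
    for p, r in pairs:
        if len(assignment[int(p)]) < per_paper and load[r] < max_load:
            assignment[int(p)].append(int(r))
            load[r] += 1
    return assignment

print(greedy_assign(affinity))
```

A real system of course replaces the random scores with learned affinities and the greedy loop with a proper optimisation over reviewer loads and conflicts.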

beyond subjective and objective in Statistics

Posted in Books, Statistics, University life on August 28, 2015 by xi'an

“At the level of discourse, we would like to move beyond a subjective vs. objective shouting match.” (p.30)

This paper by Andrew Gelman and Christian Hennig calls for the abandonment of the terms objective and subjective in (not solely Bayesian) statistics, and argues that there is more than mere prior information and data to the construction of a statistical analysis. The paper is articulated as the authors’ proposal, followed by four application examples, then a survey of the philosophy-of-science perspectives on objectivity and subjectivity in statistics and other sciences, next a study of the subjective and objective aspects of the mainstream statistical approaches, and concluding with a discussion on the implementation of the proposed move.

improved approximate-Bayesian model-choice method for estimating shared evolutionary history

Posted in Books, Statistics, University life on May 14, 2014 by xi'an

“An appealing approach would be a comparative, Bayesian model-choice method for inferring the probability of competing divergence histories while integrating over uncertainty in mutational and ancestral processes via models of nucleotide substitution and lineage coalescence.” (p.2)

Jamie Oaks arXived (a few months ago now) a rather extensive Monte Carlo study on the impact of prior modelling on the performances of ABC model choice. (Of which I only became aware recently.) As in the earlier paper I commented on the ’Og, the issue here has much more to do with prior assessment and calibration than with ABC implementation per se. For instance, the above quote recaps the whole point of conducting Bayesian model choice. (As missed by Templeton.)

“This causes divergence models with more divergence-time parameters to integrate over a much greater parameter space with low likelihood yet high prior density, resulting in small marginal likelihoods relative to models with fewer divergence-time parameters.” (p.2)

This second quote is essentially restating the Occam’s razor argument. Which I deem [to be] a rather positive feature of Bayesian model choice. A reflection on the determination of the prior distribution, getting away from uniform priors, thus sounds most timely! The current paper takes place within a rather extensive exchange between Oaks’ group and Hickerson’s group on what makes Bayesian model choice (and the associated software msBayes) pick or not pick the correct model. Oaks and coauthors objected to the use of “narrow, empirically informed uniform priors”, arguing that this leads to a bias towards models with fewer parameters, a “statistical issue” in their words, while Hickerson et al. (2014) think this is due to the way msBayes selects models and their parameters at random. The current paper however refrains from reproducing earlier criticisms of, or replies to, Hickerson et al.
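For the record, the Occam effect described in this quote can be read off the very definition of the marginal likelihood (a generic reminder, not a formula taken from the paper):

```latex
m_k(x) \;=\; \int_{\Theta_k} f_k(x \mid \theta_k)\, \pi_k(\theta_k)\,\mathrm{d}\theta_k
```

A model whose prior spreads its mass over a large parameter space in which the likelihood is low thus ends up with a small marginal likelihood, hence a small posterior probability, relative to a more parsimonious competitor.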

The current paper claims to have reached a satisfactory prior modelling with “improved robustness, accuracy, and power” (p.3). If I understand correctly, the changes are in replacing a uniform distribution with a Gamma or a Dirichlet prior. Which means introducing a seriously large and potentially crippling number of hyperparameters into the picture. Having a lot of flexibility in the prior also means a lot of variability in the resulting inference… In other words, with more flexibility comes more responsibility, to paraphrase Voltaire.

“I have introduced a new approximate-Bayesian model choice method.” (p.21)

The ABC part is rather standard, except for the strange feature that the divergence times are used to construct summary statistics (p.10). Strange because these times are not observed for the actual data. So I must be missing something. (And I object to the above quote and to the title of the paper since there is no new ABC technique there, simply a different form of prior.)
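For readers less familiar with the “rather standard” scheme alluded to here, a minimal sketch of ABC model choice by rejection may help (a generic toy illustration with made-up models, summaries, and tolerance, and not the paper’s or msBayes’ implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def summary(x):
    # toy summary statistics: sample mean and standard deviation
    return np.array([x.mean(), x.std()])

def simulate_model(k, n=50):
    # two made-up competing models, purely for illustration
    if k == 0:
        rate = rng.gamma(2.0, 1.0)              # Gamma prior on the rate
        return rng.exponential(1.0 / rate, n)   # model 0: exponential data
    mu = rng.normal(0.0, 1.0)                   # normal prior on the log-scale mean
    return rng.lognormal(mu, 1.0, n)            # model 1: log-normal data

def abc_model_choice(x_obs, n_sim=100_000, keep=0.001):
    s_obs = summary(x_obs)
    draws = []
    for _ in range(n_sim):
        k = int(rng.integers(2))                # uniform prior over the two models
        dist = np.linalg.norm(summary(simulate_model(k)) - s_obs)
        draws.append((dist, k))
    draws.sort()                                # retain the simulations closest to the data
    kept = [k for _, k in draws[: int(n_sim * keep)]]
    return float(np.mean(kept))                 # ABC estimate of P(model 1 | data)

x_obs = simulate_model(1)                       # pretend data from model 1
print("estimated posterior probability of model 1:", abc_model_choice(x_obs))
```

The acceptance frequency of each model index among the retained simulations serves as the ABC approximation of its posterior probability, which is precisely where the choice of summary statistics matters.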

“ABC methods in general are known to be biased for model choice.” (p.21)

I do not quite understand the part about (reshuffling) introducing bias as detailed on p.11: every approximate method gives a “biased” answer in the sense that this answer is not the true and proper posterior distribution. Using a different (re-ordered) vector of statistics provides a different ABC outcome, hence a different approximate posterior, for which it seems truly impossible to check whether or not it increases the discrepancy from the true posterior, compared with the other version. I must admit I always find it annoying to see the word bias used in a vague meaning, esp. within a Bayesian setting. All Bayesian methods are biased. End of the story. Quoting our PNAS paper as concluding that ABC model choice is biased is equally misleading: the intended warning represented by the paper was that Bayes factors and posterior probabilities could be quite unrelated to those based on the whole dataset. That the proper choice of summary statistics leads to a consistent model choice shows ABC model choice is not necessarily “biased”… Furthermore, I also fail to understand why the posterior probability of model i should be distributed as a uniform (“If the method is unbiased, the points should fall near the identity line”) when the data is from model i: this is not a p-value but a posterior probability, and the posterior probability is not the frequentist coverage…
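To spell out the distinction being made (a generic statement, not taken from the paper): for a simple null hypothesis and a continuous test statistic, the p-value is uniform under the null by construction, whereas no such calibration applies to a posterior model probability, which under consistency concentrates near one rather than spreading uniformly:

```latex
\Pr\nolimits_{H_0}\{\, p(X) \le u \,\} = u \quad (0<u<1),
\qquad \text{i.e. } p(X) \sim \mathcal{U}(0,1) \text{ under } H_0 ,
\\[4pt]
\text{whereas, for } X_{1:n} \sim M_i , \qquad
\Pr(M_i \mid X_{1:n}) \;\xrightarrow{\;n\to\infty\;}\; 1 .
```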

My overall problem is that, all in all, this is a single if elaborate Monte Carlo study and, as such, it does not carry enough weight to validate an approach that remains highly subjective in the selection of its hyperparameters. Without raising any doubt about a hypothetical “fixing” of those hyperparameters, I think this remains a controlled experiment with simulated data where the true parameters are known and the prior is “true”. This obviously helps in getting better performances.

“With improving numerical methods (…), advances in Monte Carlo techniques and increasing efficiency of likelihood calculations, analyzing rich comparative phylo-geographical models in a full-likelihood Bayesian framework is becoming computationally feasible.” (p.21)

This conclusion of the paper sounds over-optimistic and rather premature. I do not know of any significant advance in computing the observed likelihood for the population genetics models ABC is currently handling. (The SMC algorithm of Bouchard-Côté, Sankararaman and Jordan, 2012, does not apply to Kingman’s coalescent, as far as I can tell.) This is certainly a goal worth pursuing, and borrowing strength from multiple techniques cannot hurt, but it remains so far a lofty goal, still beyond our reach… I thus think the major message of the paper is to reinforce our own and earlier calls for caution when interpreting the output of an ABC model choice (p.20), or even of a regular Bayesian analysis, agreeing that we should aim at seeing “a large amount of posterior uncertainty” rather than posterior probability values close to 0 and 1.

statistical significance as explained by The Economist

Posted in Books, Statistics, University life on November 7, 2013 by xi'an

There is a long article in The Economist this week (also making the front cover), which discusses how and why many published research papers have unreproducible and most often “wrong” results. Nothing immensely new there, esp. if you read Andrew’s blog on a regular basis, but the (anonymous) writer(s) take(s) pains to explain how this relates to statistics and in particular to statistical testing of hypotheses. The above is an illustration from this introduction to statistical tests (and their interpretation).

“First, the statistics, which if perhaps off-putting are quite crucial.”

It is not the first time I have spotted a statistics-backed article in this journal, so I assume it either has journalists with a statistics background or links with (UK?) statisticians. The description of why statistical tests can err is fairly classical (Type I versus Type II errors). Incidentally, the article reports a finding of Ioannidis that, when reporting a positive at level 0.05, the expectation of a false positive rate of one out of 20 is “highly optimistic”. An evaluation opposed to, e.g., Berger and Sellke (1987), who reported a too-early rejection in a large number of cases. More interestingly, the paper stresses that this classical approach ignores “the unlikeliness of the hypothesis being tested”, which I interpret as the prior probability of the hypothesis under test.
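To make the point concrete, here is the standard back-of-the-envelope calculation (with illustrative numbers of my own choosing, not necessarily those used in the article): if a proportion π of the hypotheses being tested are true, and tests are run at level α with power 1−β, then the proportion of false findings among the reported positives is

```latex
\Pr(\text{false} \mid \text{positive})
\;=\; \frac{\alpha\,(1-\pi)}{\alpha\,(1-\pi) + (1-\beta)\,\pi}
```

so that, e.g., π = 0.1, α = 0.05 and 1−β = 0.8 give 0.045/(0.045+0.08) ≈ 0.36, a far cry from the naïve one out of 20.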

“Statisticians have ways to deal with such problems. But most scientists are not statisticians.”

The paper also reports on the lack of power in most studies, a report that I find a bit bizarre and even meaningless in its claim to compute an overall power, across studies, researchers, and even fields. Even in a single study, the alternative to “no effect” is composite, hence has a power that depends on the unknown value of the parameter. Seeking a single value for the power requires some prior distribution on the alternative.
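In symbols (again a generic formulation, not the article’s): the power of a test against a composite alternative is a function of the unknown parameter, and a single summary number is only obtained by averaging it over some weight or prior distribution π on the alternative:

```latex
1-\beta(\theta) \;=\; \Pr\nolimits_{\theta}(\text{reject } H_0),
\qquad
\overline{1-\beta} \;=\; \int_{\Theta_1} \bigl(1-\beta(\theta)\bigr)\, \pi(\theta)\,\mathrm{d}\theta .
```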

“Peer review’s multiple failings would matter less if science’s self-correction mechanism—replication—was in working order.”

The next part of the paper covers the failings of peer review, which I discussed in the ISBA Bulletin, but it seems to me too easy to blame the referee for failing to spot statistical or experimental errors, when lacking access to the data or to the full experimental methodology and when under pressure to return (for free) a report within a short time window. The best that can be expected is that a referee detects the implausibility of a claim or an obvious methodological or statistical mistake. These are not math papers! And, as pointed out repeatedly, not all referees are statistically numerate…

“Budding scientists must be taught technical skills, including statistics.”

The last part discusses possible solutions for achieving reproducibility and hence higher confidence in experimental results. Paying for independent replication is the proposed solution, but it can obviously only apply to a small fraction of all published results. And having control bodies testing at random labs and teams following a major publication seems rather unrealistic, if only because of the difficulty of staffing such bodies with able controllers… An interesting if pessimistic debate, in fine. And fit for the International Year of Statistics.
