Archive for peer review

Nature Outlook on AI

Posted in Statistics on January 13, 2019 by xi'an

The 29 November 2018 issue of Nature had a series of papers on AIs (in its Outlook section), more at the general-public (awareness) level than in-depth machine-learning articles. Including one on the forecasted consequences of ever-growing automation on jobs, quoting from a 2013 paper by Carl Frey and Michael Osborne [of probabilistic numerics fame!] that up to 47% of US jobs could become automated. The paper is inconclusive on how taxation could help in, or deter from, transferring jobs to other branches, although it mentions the cascading effect of taxing labour and subsidizing capital. Another article covers the progress of digital government, with Estonia as a role model, including the risks of hacking (but not mentioning Russia’s state-driven attacks). Differential privacy is discussed as a way to keep data “secure” (but not cryptography à la Louis Aslett!). With the further surprising entry that COBOL is still in use in some administrative systems. Followed by a paper on the apparently limited impact of digital technologies on mental health, despite the advertising efforts of big tech companies being described as a “race to the bottom of the brain stem”! And another one on (overblown) public expectations of AIs, although the New York Times had an entry yesterday on people in Arizona attacking self-driving cars with stones and pipes… Plus a paper on the growing difficulties of saving online documents and culture for the future (although saving all tweets ever published does not sound like a major priority to me!).

Interesting (?) aside, the same issue contains a general-public article on the use of AIs for peer review (of submitted papers). The claim being that “peer review by artificial intelligence (AI) is promising to improve the process, boost the quality of published papers — and save reviewers time.” A wee bit over-optimistic, I would say, as the AIs developed so far can at best check “that statistics and methods in manuscripts are sound”. For instance, producing “key concepts to summarize what the paper is about” is not particularly useful; assessing the degree of innovation compared with the existing literature would be. Or an automated way to adapt the paper style to the strict and somewhat elusive Biometrika style!

a good start in Series B!

Posted in Books, pictures, Statistics, University life on January 5, 2019 by xi'an

Just received the great news, at the turn of the year, that our paper on ABC using the Wasserstein distance was accepted in Series B! Inference in generative models using the Wasserstein distance, written by Espen Bernton, Pierre Jacob, Mathieu Gerber, and myself, bypasses the (nasty) selection of summary statistics in ABC by considering the Wasserstein distance between the observed and simulated samples. It focuses in particular on non-iid cases like time series, in what I find fairly innovative ways. I am thus very glad the paper is going to appear in JRSS B, as it has methodological consequences that should appeal to the community at large.
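As a toy illustration of the principle (and only that: the Gamma model, flat prior, and acceptance budget below are made up for the example, and the paper develops far more refined samplers, including for multivariate and dependent data), a plain rejection-ABC sampler driven by the Wasserstein distance can be sketched in a few lines of Python:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Hypothetical toy model: Gamma observations with unknown shape
# parameter (model, prior, and sample sizes are illustrative only).
y_obs = rng.gamma(shape=3.0, scale=1.0, size=200)

def simulate(theta, n, rng):
    """Draw a synthetic sample from the generative model at parameter theta."""
    return rng.gamma(shape=theta, scale=1.0, size=n)

# Plain rejection ABC: simulate from the prior, keep the parameter draws
# whose synthetic samples are closest to the observations in
# 1-Wasserstein distance: no summary statistics involved.
n_sims, n_keep = 20_000, 200
thetas = rng.uniform(0.5, 10.0, size=n_sims)  # flat prior on the shape
dists = np.array([
    wasserstein_distance(y_obs, simulate(theta, y_obs.size, rng))
    for theta in thetas
])
accepted = thetas[np.argsort(dists)[:n_keep]]
print(f"ABC posterior mean ~ {accepted.mean():.2f}, sd ~ {accepted.std():.2f}")
```

For univariate samples of equal size, this 1-Wasserstein distance boils down to the average absolute difference between the sorted observed and sorted simulated values, which is part of what makes the distance computationally appealing in that setting.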

mixture modelling for testing hypotheses

Posted in Books, Statistics, University life on January 4, 2019 by xi'an

After a fairly long delay (since the first version was posted and submitted in December 2014), we eventually revised and resubmitted our paper with Kaniav Kamary [who has now graduated], Kerrie Mengersen, and Judith Rousseau on the final day of 2018. The main reason for this massive delay is mine, as I got fairly depressed by the general tone of the dozen reviews we received after submitting the paper as a Read Paper in the Journal of the Royal Statistical Society, despite a rather opposite reaction from the community (an admittedly biased sample!), including two dozen citations in other papers. (There seems to be a pattern in my submissions of Read Papers, witness our earlier and unsuccessful attempt with Christophe Andrieu in the early 2000’s with the paper on controlled MCMC, leading to 121 citations so far according to Google Scholar.) Anyway, thanks to my co-authors keeping up the fight, we started working on a revision including stronger convergence results, managing to show that the approach leads to an optimal separation rate, contrary to the Bayes factor, which carries an extra √log(n) factor. This may sound paradoxical since, while the Bayes factor converges to 0 under the alternative model exponentially quickly, the convergence rate of the mixture weight α to 1 is of order 1/√n; but this does not mean that the separation rate of the procedure based on the mixture model is worse than that of the Bayes factor. On the contrary, while it is well known that the Bayes factor leads to a separation rate of order √log(n)/√n in parametric models, we show that our approach can lead to a testing procedure with the better separation rate of order 1/√n. We also studied a non-parametric setting where the null is a specified family of distributions (e.g., Gaussians) and the alternative is a Dirichlet process mixture, establishing that the posterior distribution concentrates around the null at the rate √log(n)/√n. We thus resubmitted the paper for publication, although not as a Read Paper, with hopefully more luck this time!
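For readers unfamiliar with the paper, the core construction can be stated schematically (in generic notation, not necessarily the paper's exact one): each hypothesis corresponds to a component of an encompassing mixture, and testing is recast as estimating the mixture weight.

```latex
% Schematic statement of the testing-by-mixture idea: the test of
% M1 versus M2 is replaced by estimation of the weight alpha of the
% encompassing mixture
\[
  M_\alpha : \qquad x \;\sim\; \alpha\, m_1(x \mid \theta_1)
      \;+\; (1-\alpha)\, m_2(x \mid \theta_2),
  \qquad \alpha \in [0,1],
\]
% with the posterior on alpha concentrating near 1 under M1 (and near 0
% under M2) at the 1/sqrt(n) rate discussed above, to be contrasted with
% the sqrt(log n)/sqrt(n) separation rate of the Bayes factor.
```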

peer reviews on-line or peer community?

Posted in Statistics on September 20, 2018 by xi'an

Nature (or more precisely some researchers through Nature, associated with the UK Wellcome Trust, the US Howard Hughes Medical Institute (HHMI), and ASAPbio) has (have) launched a call for publishing reviews next to accepted papers, one way or another, which is something I (and many others) have supported for quite a while. Including for rejected papers, not only because making these reviews public diminishes in principle the time involved in re-reviewing re-submitted papers, but also because this should induce authors to revise papers with obvious flaws and missing references (?). Or abstain from re-submitting. Or publish a rejoinder addressing the criticisms. Anything that increases the communication between all parties, as well as the perspectives on a given paper. (This year, NIPS allows for the posting of reviews of rejected submissions, which I find a positive trend!)

In connection with this entry, I am still most sorry that I could not pursue the [superior, in my opinion] project of a Peer Community in computational statistics, as the time required by Biometrika editing is just too great [given my current stamina!] for me to handle another journal (or the better alternative to a journal!). I hope someone else can take over the project and create the editorial team needed to run it.

And yet again in connection with this post (!), Andrew posted an announcement about the launch of researchers.one, an on-line publication forum launched by Harry Crane and Ryan Martin, where the authors handle the peer-review process from A to Z, including choosing the reviewers, whose reviews may be public or not, taken into account or not. Once published, the papers are open to comments from users, which constitutes a form of post-publication peer review. Albeit a weak one in my opinion, as the weakness of all such open repositories is the potential lack of interest in, and reaction from, the community. Incidentally, there is a $10 fee per submission for maintenance. Contrary to Peer Community in…, the copyright is partly transferred to researchers.one, which apparently prevents further publication in another journal.

and here we go!

Posted in Books, Running, Statistics, University life on March 16, 2018 by xi'an

On March 1, I started handling papers for Biometrika as deputy editor, along with Omiros Papaspiliopoulos. With on average one paper a day to handle, this means a change in my schedule and presumably fewer blog posts about recent papers and arXivals, if I want to keep my daily morning runs!

stop the rot!

Posted in Statistics on September 26, 2017 by xi'an

Two entries in Nature this week about predatory journals, both from the Ottawa Hospital Research Institute. One emanates from the publication officer at the Institute, whose role is “dedicated to educating researchers and guiding them in their journal submission”, and tells the tale of a senior scientist finding out that a paper submitted to a predatory journal, and later rescinded, was nonetheless published by the said journal. Which reminded me of a similar misadventure that occurred to me a few years ago. After a discussion of an earlier paper was rejected from The American Statistician, my PhD student Kaniav Kamary and I resubmitted it to the Journal of Applied & Computational Mathematics, from which I had received an email a few weeks earlier asking me in flowery terms for a paper. When the paper got accepted as such two days after submission, I got alarmed and realised this was a predatory journal, whose title played on the quasi-homonymous Journal of Computational and Applied Mathematics (Elsevier) and International Journal of Applied and Computational Mathematics (Springer). Just like the authors in the above story, we wrote back to the editors, telling them we were rescinding our submission, but never got back any reply or request for copyright transfer. Instead, requests for (diminishing) payments were regularly sent to us, for almost a year, until they ceased. In the meantime, the paper had been posted on the “journal” website, and no further email of ours, including some from our University legal officer, induced a reply or action from the journal…

The second article in Nature is from a group of epidemiologists at the same institute, producing statistics about biomedical publications in predatory journals (characterised as such by the defunct Beall blacklist). They are much more vehement about the danger represented by these journals, whose “articles we examined were atrocious in terms of reporting”, and about the authors submitting to them, deemed unethical for wasting human and animal observations. The authors of this article identify thirteen characteristics for spotting predatory journals, the first one being “low article-processing fees”, our own misadventure being the opposite. And they ask for higher control and auditing from the funding institutions over their researchers… Besides adding an extra layer to the bureaucracy, I fear this is rather naïve, as if the boundary between predatory and non-predatory journals were crystal clear, rather than a murky continuum. And it puts the blame solely on the researchers, rather than sharing it with institutions always eager to push their bibliometrics towards more automation of the assessment of their researchers.

crowd-based peer review

Posted in Statistics on June 20, 2017 by xi'an

In clear connection with my earlier post on Peer Community In… and my visit this week to Montpellier towards starting a Peer Community In Computational Statistics, I read a column in Nature (1 June, p.9) by the editor of Synlett, Benjamin List, describing an experiment conducted by this journal in chemical synthesis. The approach was to post (volunteered) submitted papers on a platform accessible to a list of 100 reviewers, nominated by the editorial board, who could anonymously comment on the papers and read others’ equally anonymous comments. With a 72-hour deadline! According to Benjamin List (and based on a large dataset of … 10 papers!), the outcome of the experiment was of better quality than with traditional reviewing policies. While Peer Community In… does not work exactly this way, and does not aim at operating as a journal, it is exciting and encouraging to see such experiments unfold!