Archive for publish or perish

Nature [5 Jan issue]

Posted in Books, pictures, University life on February 17, 2023 by xi'an

Nature, in its 5 Jan issue, has an editorial by Daniël Lakens asking for statistical reviews before research is performed and data are collected, which sounds like a reasonable idea provided reviewers with the proper expertise and dedication can be found, an issue the editorial does not address. The main focus on sample size sounds overly simplistic… It also contains the following funny (?) jab:

“I do not propose that reviewers debate matters such as frequentist versus Bayesian philosophies of statistics.”

One could see a connexion with preregistered trials, with the sound argument that hypotheses should be clearly stated prior to getting data.

The issue also contains an open-access paper by WHO and University of Washington researchers (incl. Bayesian John Wakefield) on estimating the number of COVID-19 deaths from excess deaths. With the issue that data are missing for some countries. With a critical commentary from Enrique Acosta on not adjusting for avoided deaths. And apparently (and surprisingly) no accounting for the age structure in each country, esp. since regression is involved. The modelling is done via a Poisson count model, analysed by Bayesian methods. As often, I wonder why France does not feature in the picture, except for a mention that the ratio of excess deaths to COVID-19 deaths is less than one, and French Guiana is not on the maps… Unclear issues also arise about highly reliable countries like Germany and Sweden. And splines… instead of Gaussian processes. No attempt at capture-recapture?
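
As a rough illustration of this kind of analysis, and not of the WHO model itself, here is a minimal Bayesian sketch of an excess-death estimate from a Poisson count model with a conjugate Gamma prior on the ratio of pandemic to expected deaths; the numbers, the prior, and the crude pre-pandemic average standing in for the paper's spline regression are all made-up assumptions.

# minimal sketch (not the WHO model): Bayesian Poisson excess-death estimate
# with a conjugate Gamma prior on the rate ratio theta between pandemic-year
# and expected deaths; all data below are illustrative, made-up numbers
import numpy as np

rng = np.random.default_rng(0)

# hypothetical monthly all-cause deaths: five pre-pandemic years and 2020
baseline = rng.poisson(lam=5000, size=60)        # 2015-2019, 12 months each
observed_2020 = rng.poisson(lam=5600, size=12)   # pandemic year

# expected 2020 deaths: crude pre-pandemic monthly average
# (the Nature paper instead regresses on splines and covariates)
expected = baseline.reshape(5, 12).mean(axis=0)

# model: observed_t ~ Poisson(theta * expected_t), theta ~ Gamma(a, b)
a, b = 1.0, 1.0                                  # weak prior on the rate ratio
post_a = a + observed_2020.sum()
post_b = b + expected.sum()

theta_draws = rng.gamma(post_a, 1.0 / post_b, size=10_000)
excess_draws = (theta_draws - 1.0) * expected.sum()

print(f"posterior mean excess deaths: {excess_draws.mean():.0f}")
print("95% credible interval:", np.percentile(excess_draws, [2.5, 97.5]).round())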

And a somewhat puzzling paper [rewarded by the journal cover] on the diminishing disruptiveness of scientific papers over time. It is sort of obvious that, as the numbers explode, novelty and impact diminish. If only because an increasing number of papers never get cited. Based on a single CD index (with a typo in the formula!). Nothing about maths? As noted by the authors in their conclusion, the sheer number of disruptive papers has remained essentially constant…
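
For reference, here is a minimal sketch of the CD index as I understand its usual definition (a citing paper that ignores the focal paper's references counts as disruptive, one that cites both the focal paper and its references as consolidating); the function name and the toy citation sets are mine, and this is an illustration rather than the paper's exact formula or data pipeline.

# sketch of the CD (consolidation-disruption) index: among papers citing the
# focal work or its references, those citing only the focal work count +1,
# those citing both count -1, those citing only the references count 0 but
# still enlarge the denominator
def cd_index(citers_of_focal, citers_of_references):
    """citers_of_focal / citers_of_references: sets of paper identifiers."""
    forward = citers_of_focal | citers_of_references
    n = len(forward)
    if n == 0:
        return float("nan")
    score = 0
    for paper in forward:
        f = paper in citers_of_focal
        b = paper in citers_of_references
        score += -2 * f * b + f          # contributes +1, -1, or 0
    return score / n

# toy example: three papers cite only the focal work, one cites both,
# one cites only the references, giving (3 - 1) / 5 = 0.4
print(cd_index({"p1", "p2", "p3", "p4"}, {"p4", "p5"}))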

arbitrary non-constant function [nonsensical]

Posted in Statistics on November 6, 2020 by xi'an

When looking for properties of the negative exponential distribution, in connection with an X validated question, I came across this nonsensical paper, starting with a truncated and drifted exponential distribution being defined as a negative exponential distribution, including a nonsensical bound a in (1.1), followed by an equally nonsensical characterisation of the distribution, including a theorem with a useless function Φ… Unsurprisingly, the publisher of the Open Journal of Statistics (SCIRP) appears on Beall's list of [potential, possible, or probable] predatory publishers, a list now maintained by Scholarly Open Access.

peer reviews on-line or peer community?

Posted in Statistics on September 20, 2018 by xi'an

Nature (or more precisely some researchers through Nature, associated with the UK Wellcome Trust, the US Howard Hughes Medical Institute (HHMI), and ASAPbio) has (have) launched a call for publishing reviews next to accepted papers, one way or another, which is something I (and many others) have supported for quite a while. Including for rejected papers, not only because making these reviews public should in principle diminish the time involved in re-reviewing re-submitted papers, but also because this should induce authors to revise papers with obvious flaws and missing references (?). Or abstain from re-submitting. Or publish a rejoinder addressing the criticisms. Anything that increases the communication between all parties, as well as the perspectives on a given paper. (This year, NIPS allows for the posting of reviews of rejected submissions, which I find a positive trend!)

In connection with this entry, I am still most sorry that I could not pursue the [superior, in my opinion] project of a Peer Community in computational statistics, as the time required by editing Biometrika is just too great [given my current stamina!] for me to handle another journal (or this better alternative to a journal!). I hope someone else can take over the project and create the editorial team needed to run it.

And yet again in connection with this post (!), Andrew posted an announcement about the launch of researchers.one, an on-line publication forum launched by Harry Crane and Ryan Martin, where the authors handle the peer-review process from A to Z, including choosing the reviewers, whose reviews may be public or not, and taken into account or not. Once published, the papers are open to comments from users, which constitutes a form of post-publication peer review. Albeit a weak one in my opinion, as the weakness of all such open repositories is the potential lack of interest in, and reaction from, the community. Incidentally, there is a $10 fee per submission for maintenance. Contrary to Peer Community in…, the copyright is partly transferred to researchers.one, which apparently prevents further publication in another journal.

publish or perish [or move to .005]

Posted in Books, pictures, Statistics, University life on October 24, 2017 by xi'an

A series of articles in the Sciences et Médecine section of Le Monde reproduced coverage found elsewhere of the debates running within the scientific community on improving the quality of scientific papers. Through reproducible experiments and conclusions. And through using new bounds for the p-value, the solution to all woes! The article borrows a lot from the Nature proposal [discussed quite a lot here in the past weeks] and does not provide particularly insightful views. It does however contain a report (rightmost columns) on a peer-community approach called PubPeer, launched by two neuroscientists, Brandon Stell and Boris Barbour, both at CNRS, towards sharing comments on published papers. Mostly to criticise the methodology used in these papers. Or to point out multiple usages of the same graphs. Or doctoring of pictures. In the vast majority of cases, the papers are in biology and the comments are not addressed by the authors of the papers. (With the exception of a discussion of the Nature paper calling for new bounds on p-values, a paper that had the appealing feature of calling for an end to “one-size-fits-all” thresholds.) Creating a platform for discussing papers from a single journal is already hard enough (as shown by the closure of the Series B'log!), hence running a global discussion forum for all journals sounds hard to manage and foster. By which I mean it is difficult to fathom the impact of the discussions on the published papers and the journals where they appear, given the reticence of said journals to engage in reassessments of published papers…

stop the rot!

Posted in Statistics on September 26, 2017 by xi'an

Several entries in Nature this week about predatory journals, both from the Ottawa Hospital Research Institute. One emanates from the publication officer at the Institute, whose role is “dedicated to educating researchers and guiding them in their journal submission”, and tells the tale of a senior scientist finding out that a paper submitted to a predatory journal and later rescinded was nonetheless published by the said journal. Which reminded me of a similar misadventure that occurred to me a few years ago. After a discussion of an earlier paper published in The American Statistician was rejected by that very journal, my PhD student Kaniav Kamary and I resubmitted it to the Journal of Applied & Computational Mathematics, from which I had received an email a few weeks earlier asking me in flowery terms for a paper. When the paper got accepted as such two days after submission, I got alarmed and realised this was a predatory journal, whose title played on the quasi-homonymous Journal of Computational and Applied Mathematics (Elsevier) and International Journal of Applied and Computational Mathematics (Springer). Just like the authors in the above story, we wrote back to the editors, telling them we were rescinding our submission, but never got any reply or request for copyright transfer. Instead, requests for (diminishing) payments were regularly sent to us, for almost a year, until they ceased. In the meantime, the paper had been posted on the “journal” website and no further email of ours, including some from our university's legal officer, induced any reply or action from the journal…

The second article in Nature is from a group of epidemiologists at the same institute, producing statistics about biomedical publications in predatory journals (characterised as such by the defunct Beall blacklist). And being much more vehement about the danger represented by these journals, whose “articles we examined were atrocious in terms of reporting”, and about the authors submitting to them, deemed unethical for wasting human and animal observations. The authors of this article identify thirteen characteristics for spotting predatory journals, the first one being “low article-processing fees”, while our own misadventure points to the opposite. And they ask for tighter control and auditing of researchers by their funding institutions… Besides adding an extra layer of bureaucracy, I fear this is rather naïve, as if the boundary between predatory and non-predatory journals were crystal clear, rather than a murky continuum. And it puts the blame solely on the researchers rather than sharing it with institutions always eager to push bibliometrics towards ever more automated assessment of their researchers.
