Archive for non-reproducible research

COMPUTO, the journal for reproducible statistical research

Posted in Books, Statistics, University life with tags academic journals, Computo, logo, non-reproducible research, notebook, open and free access, PCI Comput Stats, reproducibility, Rmarkdown, SFDS, Société française de Statistique on February 15, 2022 by xi'an

Computo (Latin for calculate, compute, reckon) is a new journal launched by the French statistical society (SFDS) to promote reproducible research in statistics and machine learning by publishing papers with reproducible contributions. Towards this goal, Computo goes beyond classical static publications by incorporating technical advances in literate programming and scientific reporting. The reproducibility of numerical results is a necessary condition for publication in Computo. In particular, submissions must include all necessary data (e.g., via Zenodo repositories) and code. For contributions featuring the implementation of methods or algorithms, the quality of the provided code is assessed during the review process. In practice, this means accepting contributions in the form of notebooks (e.g., Rmarkdown or Jupyter). The journal is 100% free and open-access, thanks to the sponsorship of the SFDS. Once a manuscript is accepted, its reviews are made available on the Computo website, with reviewers choosing whether or not to remain anonymous. (Towards an even broader reach, we are now considering a partnership with the PCI, following an earlier attempt I did not pursue to completion…) Computo’s logo, designed by Loïc Schwaller, represents the letters of Computo in bytes. Submissions are now open!

Bayes, reproducibility, and the quest for truth
Posted in Books, Kids, Statistics, University life with tags accuracy, all models are wrong, Bayes(Pharma), expertise, frequentist coverage, L'Aquila, legal statistics, magnitude, non-reproducible research, Richter scale on September 2, 2016 by xi'an

“Avoid opinion priors, you could be held legally or otherwise responsible.”
Don Fraser, Mylène Bedard, Augustine Wong, Wei Lin, and Ailana Fraser wrote a paper, to appear in Statistical Science, with the above title. This paper is a continuation of Don’s assessment of Bayes procedures in earlier Statistical Science [which I discussed] and Science 2013 papers, which I would, with all due respect, qualify as a demolition enterprise [aimed at the Bayesian approach to statistics]… The argument here is similar, in that “reproducibility” is to be understood as providing frequentist confidence assessments. The authors also use “accuracy” in this sense. (As far as I know, no definition of reproducibility is to be found in the paper.) Some priors are matching priors, in the (restricted) sense that they give second-order accurate frequentist coverage. Most are not matching, and none is third-order accurate, a level that may be attained by alternative approaches. As far as the abstract goes, this seems to be the crux of the paper. Which is fine, but does not qualify in my opinion as a criticism of the Bayesian paradigm, given that (a) it makes no claim to frequentist coverage and (b) I see no reason why proper coverage should be connected with “truth” or “accuracy”. It truly makes no sense to me to attempt either to put a frequentist hat on posterior distributions or to check whether or not the posterior is “valid”, “true” or “actual”. I similarly consider that Efron’s “genuine priors” do not belong to the Bayesian paradigm but are, on the contrary, anti-Bayesian, in that they suggest all priors should stem from frequency modelling, to borrow the terms of the current paper. (This is also the position of the authors, who consider such priors to have “no Bayes content”.)
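To make the coverage notion concrete, here is a small simulation sketch (mine, not from the paper) of what checking the frequentist coverage of a Bayesian credible interval amounts to, using the binomial model with the Jeffreys Beta(1/2, 1/2) prior as a textbook (approximately) matching prior:

```python
# A minimal sketch, not from Fraser et al.: empirical frequentist coverage of
# equal-tailed credible intervals for a binomial proportion under the Jeffreys
# Beta(1/2, 1/2) prior, a classic probability-matching prior.
import numpy as np
from scipy import stats

def jeffreys_interval(x, n, level=0.95):
    """Equal-tailed posterior credible interval under the Jeffreys prior."""
    alpha = 1.0 - level
    lo = stats.beta.ppf(alpha / 2, x + 0.5, n - x + 0.5)
    hi = stats.beta.ppf(1 - alpha / 2, x + 0.5, n - x + 0.5)
    return lo, hi

def coverage(theta, n, level=0.95, n_rep=20_000, seed=0):
    """Long-run frequency with which the credible interval contains theta."""
    rng = np.random.default_rng(seed)
    x = rng.binomial(n, theta, size=n_rep)
    lo, hi = jeffreys_interval(x, n, level)
    return np.mean((lo <= theta) & (theta <= hi))

# For moderate n the empirical coverage sits close to the nominal 95%,
# which is the (frequentist) sense in which the Jeffreys prior "matches".
for n in (20, 100):
    print(n, coverage(0.3, n))
```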
Among their arguments, the authors refer to two tragic real cases: the earthquake at L’Aquila, where seismologists were charged with manslaughter (and later cleared) for asserting there was little risk of a major earthquake, and the indictment of the pharmaceutical company Merck for the deadly side effects of its drug Vioxx. The paper however never returns to those cases and fails to explain in what sense they are connected with the lack of reproducibility or truth(fulness) of Bayesian procedures. If anything, the moral of the L’Aquila story is that statisticians should not draw definitive conclusions, such as that there was no risk of a major earthquake or that one was improbable. There is a strange if human tendency for experts to reach definitive conclusions and to omit the many layers of uncertainty in their models and analyses. In the earthquake case, seismologists do not know how to predict major quakes from previous activity, and that should have been the [non-]conclusion of the experts. It could possibly have been reached by a Bayesian modelling that always includes uncertainty. But the current paper is not at all operating at this (epistemic?) level, as it never questions the impact of the choice of a likelihood function or of a statistical model within the reproducibility framework. First, third, or 47th order accuracy nonetheless operates strictly within the frame of reference of the chosen model, and providing the data to another group of scientists, experts, or statisticians will invariably produce a different statistical modelling. So much for reproducibility or truth.
Revised evidence for statistical standards
Posted in Kids, Statistics, University life with tags Andrew Gelman, Bayesian tests, False positive, letter, non-reproducible research, PNAS, Ronald Fisher, uniformly most powerful tests, Valen Johnson on December 19, 2013 by xi'anWe just submitted a letter to PNAS with Andrew Gelman last week, in reaction to Val Johnson’s recent paper “Revised standards for statistical evidence”, essentially summing up our earlier comments within 500 words. Actually, we wrote one draft each! In particular, Andrew came up with the (neat) rhetorical idea of alternative Ronald Fishers living in parallel universes who had each set a different significance reference level and for whom alternative Val Johnsons would rise and propose a modification of the corresponding Fisher’s level. For which I made the above graph, left out of the letter and its 500 words. It relates “the old z” and “the new z”, meaning the boundaries of the rejection zones when, for each golden dot, the “old z” is the previous “new z” and “the new z” is Johnson’s transform. We even figured out that Val’s transform was bringing the significance down by a factor of 10 in a large range of values. As an aside, we also wondered why most of the supplementary material was spent on deriving UMPBTs for specific (formal) problems when the goal of the paper sounded much more global…
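For the record, here is a rough numerical sketch, not the exact transform behind the graph: it only assumes the one-sided z-test correspondence appearing in Johnson’s paper (rejecting when the Bayes factor against the UMPBT alternative exceeds γ, i.e., when z > √(2 log γ), so that γ = exp(z²/2)) and an illustrative tenfold inflation of that evidence threshold:

```python
# A rough sketch, not the actual transform in our letter: under the one-sided
# z-test correspondence gamma = exp(z**2 / 2), multiplying the evidence
# threshold by a factor k (k = 10 is an assumption for illustration) sends an
# old rejection boundary z to a new boundary sqrt(z**2 + 2 log k).
import numpy as np
from scipy import stats

def new_z(old_z, k=10.0):
    """New rejection boundary when the evidence threshold gamma is scaled by k."""
    return np.sqrt(old_z**2 + 2 * np.log(k))

old_alpha = np.array([0.05, 0.01, 0.005])
old_z = stats.norm.isf(old_alpha)           # one-sided rejection boundaries
new_alpha = stats.norm.sf(new_z(old_z))     # implied new significance levels
print(np.round(old_alpha / new_alpha, 1))   # roughly an order-of-magnitude drop
```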
Since I am aware that we are not the only ones to have submitted a letter about Johnson’s proposal, I am quite curious about the reception we will get from the editor! (Although I have to point out that all of my earlier submissions of letters to PNAS got accepted.)