Archive for misuse of Statistics

ten recommendations from the RSS

Posted in Statistics, University life on March 21, 2021 by xi'an

‘Statistics have been crucial both to our understanding of the pandemic and to our efforts to fight it. While we hope we won’t see another pandemic on this scale, we need to see a culture change now – with more transparency around data and evidence, stronger mechanisms to challenge the misuse of statistics, and leaders with statistical skills.’

  • Invest in public health data – it should be regarded as critical national infrastructure, and a full review of health data should be conducted
  • Publish evidence – all evidence considered by governments and their advisers must be published in a timely and accessible manner
  • Be clear and open about data – government should invest in a central portal, from which the different sources of official data, analysis protocols and up-to-date results can be found
  • Challenge the misuse of statistics – the Office for Statistics Regulation should have its funding augmented so it can better hold the government to account
  • The media needs to step up its responsibilities – government should support media institutions that invest in specialist scientific and medical reporting
  • Build decision makers’ statistical skills – politicians and senior officials should seek out statistical training
  • Build an effective infectious disease surveillance system to monitor the spread of disease – the government should ensure that a real-time surveillance system is ready for future pandemics
  • Increase scrutiny and openness for new diagnostic tests – similar steps to those adopted for vaccine and pharmaceutical evaluation should be followed for diagnostic tests
  • Health data is incomplete without social care data – improving social care data should be a central part of any review of UK health data
  • Evaluation should be put at the heart of policy – efficient evaluations or experiments should be incorporated into any intervention from the start.

Testing and significance

Posted in R, Statistics, University life on September 13, 2011 by xi'an

Julien Cornebise pointed me to this Guardian article that itself summarises the findings of a Nature Neuroscience article I cannot access. The core of the paper is that a large proportion of comparative studies conclude that there is a significant difference between protocols when one protocol's result is significantly different from zero and the other(s) is (are) not… From a frequentist perspective (I am not even addressing the Bayesian aspects of using those tests!), under the null hypothesis that both protocols induce the same null effect, the probability of wrongly deriving a significant difference can be evaluated by

> x=rnorm(10^6)  # z-statistics for protocol 1 under the null
> y=rnorm(10^6)  # z-statistics for protocol 2 under the null
> sum((abs(x)<1.96)*(abs(y)>1.96)*(abs(x-y)<1.96*sqrt(2)))  # y significant, x not, yet no significant difference
[1] 31805
> sum((abs(x)>1.96)*(abs(y)<1.96)*(abs(x-y)<1.96*sqrt(2)))  # x significant, y not, yet no significant difference
[1] 31875
> (31805+31875)/10^6  # overall probability of the erroneous conclusion
[1] 0.06368

which rises to a 26% probability of error when x is shifted by 2! (The maximum error is just above 30%, when x is shifted by around 2.6…)
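For concreteness, here is a minimal sketch (not taken from the original code) of the same Monte Carlo evaluation as a function of the shift in the mean of x; the function name err and the default sample size are my own choices, and the values it returns should match the roughly 6.4% figure above at shift 0 and the 26% figure at shift 2, up to simulation noise.

err = function(delta, n=10^6){
  x = rnorm(n, mean=delta)  # protocol 1 z-statistics, true effect delta
  y = rnorm(n)              # protocol 2 z-statistics, true effect zero
  # exactly one test is significant (so the flawed rule declares a difference)
  # while the correct z-test on x-y is not significant
  mean(xor(abs(x)>1.96, abs(y)>1.96) & (abs(x-y)<1.96*sqrt(2)))
}
err(0)  # close to the 0.0637 obtained above
err(2)  # close to the 26% quoted in the text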

(This post was written before Super Andrew posted his own “difference between significant and not significant”! My own of course does not add much to the debate.)
