## the most important statistical ideas of the past 50 years

**A**ki and Andrew are celebrating the New Year in advance by composing a list of the most important statistical ideas to have occurred (roughly) since they were born (or since Fisher died)! Such as

- substitution of computing for mathematical analysis (incl. bootstrap)
- fitting a model with a large number of parameters, using some regularization procedure to get stable estimates and good predictions (e.g., Gaussian processes, neural networks, generative adversarial networks, variational autoencoders)
- multilevel or hierarchical modelling (incl. Bayesian inference)
- advances in statistical algorithms for efficient computing (with a long list of innovations since 1970, including ABC!), pointing out that a large fraction was of the divide & conquer flavour (in connection with large—if not necessarily Big—data)
- statistical decision analysis (e.g., Bayesian optimization and reinforcement learning, getting beyond classical experimental design)
- robustness (under partial specification, misspecification or in the M-open world)
- EDA à la Tukey and statistical graphics (and R!)
- causal inference (via counterfactuals)
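As a minimal sketch of the first item on the list, the bootstrap replaces a closed-form derivation of a sampling distribution with brute-force resampling. The example below (my own illustration, not from the paper; variable names and the exponential toy data are assumptions) estimates the standard error and a percentile interval for a sample median, a statistic with no convenient analytical standard error:

```python
import numpy as np

rng = np.random.default_rng(42)

# A toy sample whose median has no simple closed-form standard error
sample = rng.exponential(scale=2.0, size=50)

# Nonparametric bootstrap: resample with replacement, recompute the statistic
B = 5000
medians = np.array([
    np.median(rng.choice(sample, size=sample.size, replace=True))
    for _ in range(B)
])

se = medians.std(ddof=1)                  # bootstrap standard error
ci = np.percentile(medians, [2.5, 97.5])  # percentile 95% interval
print(f"median = {np.median(sample):.3f}, SE ~ {se:.3f}, 95% CI ~ {ci}")
```

The same resampling loop works for essentially any statistic, which is precisely the "substitution of computing for mathematical analysis" the authors highlight.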

Now, had I been painfully arm-bent into coming up with such a list, it would certainly have been shorter, for lack of opinion about some of these directions (even the Biometrika deputy editorship has certainly helped in reassessing the popularity of different branches!), and I would presumably have been biased towards Bayes as well as towards more mathematical flavours. Hence objecting to the witty comment that “theoretical statistics is the theory of applied statistics” (p.10) and including Ghosal and van der Vaart (2017) as a major reference. Also bemoaning the lack of long-term structure and theoretical support of a branch of the machine-learning literature.

Maybe more space and analysis could also have been devoted to “debates remain regarding appropriate use and interpretation of statistical methods” (p.11), in that a major difficulty with the latest in data science is not so much the method(s) as the data on which they are based, which in a large fraction of cases are not representative and are poorly, if at all, corrected for this bias. The “replication crisis” is thus only one (tiny) aspect of the challenge.

This entry was posted on January 10, 2020 at 12:21 am and is filed under Books, pictures, Statistics, Travel, with tags ABC, applied statistics, Bayesian Analysis, Bayesian nonparametrics, Bayesian optimization, big data, Biometrika, bootstrap, divide & conquer, EDA, Helsinki, list, machine learning, reinforcement learning, replication crisis, robustness, statistical computing, theory of statistics.
