Archive for Fisher lecture

a conversation about eugenics at JSM

Posted in Books, Kids, pictures, Statistics, University life on July 29, 2020 by xi'an

Following the recent debate on Fisher's involvement in eugenics (and the renaming of the R.A. Fisher Award and Lectureship into the COPSS Distinguished Achievement Award and Lectureship), the ASA is running a JSM round table on Eugenics and its connections with statistics, to which I was invited, along with Scarlett Bellamy, David Bellhouse, and David Cutler. The discussion is planned on 06 August at 3pm ET (i.e., 7pm GMT) and here is the abstract:

The development of eugenics and modern statistical theory are inextricably entwined in history.  Their evolution was guided by the culture and societal values of scholars (and the ruling class) of their time through and including today.  Motivated by current-day societal reckonings of systemic injustice and inequity, this roundtable panel explores the role of prominent statisticians and of statistics more broadly in the development of eugenics at its inception and over the past century.  Leveraging a diverse panel, the discussions seek to shed light on how eugenics and statistics – despite their entangled past — have now severed, continue to have presence in ways that affect our lives and aspirations.

It is actually rather unclear to me why I was invited to the table, apart from my amateur interest in the history of statistics. On a highly personal level, I remember being introduced to Galton's racial theories during my first course on probability, in 1982, by Prof Ogier, who always used historical anecdotes to enliven his lectures, like Galton trying to take women's body measurements during his South Africa expedition. These lectures took place in the INSEE building, boulevard Adolphe Pinard in Paris, said Adolphe Pinard being a founding member of the French Eugenics Society in 1913.

10 Little’s simple ideas

Posted in Books, Statistics, University life on July 17, 2013 by xi'an

“I still feel that too much of academic statistics values complex mathematics over elegant simplicity — it is necessary for a research paper to be complicated in order to be published.” Roderick Little, JASA, p.359

Roderick Little wrote his Fisher lecture, recently published in JASA, around ten simple ideas for statistics. Its title is "In praise of simplicity not mathematistry! Ten simple powerful ideas for the statistical scientist". While this title is rather antagonistic, blaming mathematical statistics for the rise of mathematistry in the field (a term borrowed from Fisher, who also invented the adjective 'Bayesian'), the paper focuses on those ten ideas and says very little about why there is (or would be) too much mathematics in statistics:

  1. Make outcomes univariate
  2. Bayes rule, for inference under an assumed model
  3. Calibrated Bayes, to keep inference honest
  4. Embrace well-designed simulation experiments (see the sketch after this list)
  5. Distinguish the model/estimand, the principles of estimation, and computational methods
  6. Parsimony — seek a good simple model, not the “right” model
  7. Model the Inclusion/Assignment and try to make it ignorable
  8. Consider dropping parts of the likelihood to reduce the modeling part
  9. Potential outcomes and principal stratification for causal inference
  10. Statistics is basically a missing data problem
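To put ideas 3 and 4 in concrete terms, here is a minimal simulation sketch (my own illustration, not taken from the paper; the conjugate normal model, sample size, and credible level are arbitrary choices): when the "true" parameter is drawn from the prior, a calibrated 90% credible interval should cover it roughly 90% of the time.

    import numpy as np

    rng = np.random.default_rng(0)

    # toy conjugate model: theta ~ N(0, tau2), x_i | theta ~ N(theta, 1), i = 1..n
    tau2, n, n_rep = 4.0, 20, 5000
    z90 = 1.6449   # normal quantile for a central 90% interval
    covered = 0

    for _ in range(n_rep):
        theta = rng.normal(0.0, np.sqrt(tau2))   # "true" parameter drawn from the prior
        x = rng.normal(theta, 1.0, size=n)       # data simulated under the model
        post_var = 1.0 / (1.0 / tau2 + n)        # posterior variance (data variance known, = 1)
        post_mean = post_var * x.sum()           # posterior mean (prior mean = 0)
        half = z90 * np.sqrt(post_var)           # half-width of the 90% credible interval
        covered += (post_mean - half <= theta <= post_mean + half)

    print(f"empirical coverage of the 90% credible interval: {covered / n_rep:.3f}")

Under the well-specified model above, the empirical coverage should sit close to 0.90; rerunning the same check with a misspecified prior or likelihood is one way to watch a Bayesian procedure lose its calibration.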

“The mathematics of problems with infinite parameters is interesting, but with finite sample sizes, I would rather have a parametric model. “Mathematistry” may eschew parametric models because the asymptotic theory is too simple, but they often work well in practice.” Roderick Little, JASA, p.365

Both those rules and the illustrations that abound in the paper reflect Little's research focus and obviously apply to his own work in a fairly coherent way. However, while a mostly parametric-model user myself, I fear the rejection of non-parametric techniques is far too radical. It is more and more my conviction that we cannot handle the full complexity of a realistic structure in a standard Bayesian manner and that we have to give up on the coherence and completeness goals at some point… Using non-parametrics and/or machine learning on some bits and pieces then makes sense, even though it hurts elegance and simplicity.

“However, fully Bayes inference requires detailed probability modeling, which is often a daunting task. It seems worth sacrificing some Bayesian inferential purity if the task can be simplified.” Roderick Little, JASA, p.366

I will not discuss those ideas in detail, as some of them make complete sense to me (like Bayesian statistics laying its assumptions in the open) and others remain obscure (e.g., causality) or of limited applicability. It is overall a commendable Fisher lecture that focuses on methodology and the practice of statistical science, rather than on theory. I however do not see why maths should be blamed for this state of the field. Nor why mathematical statistics journals like AoS should carry some responsibility for the lack of further applicability in other fields. Students of statistics do need a strong background in mathematics and I fear we are losing ground in this respect, at least judging by the growing difficulty in finding measure theory courses abroad for our exchange undergraduates from Paris-Dauphine. (I also find the model misspecification aspects mostly missing from this list.)