**I**n preparation for the JSM round table on eugenics and statistics organised by the COPSS Award Committee, I read Daniel Kevles' 1985 book, *In the Name of Eugenics: Genetics and the Uses of Human Heredity*, as recommended by Stephen Stigler. While a large part of the book was first published in The New Yorker, where Kevles wrote on a regular basis, and while he abstains from advanced methodological descriptions, focussing more on the actors of this first attempt at human genetics and on the societal consequences of biased interpretations and mistaken theories, his book is a scholarly accomplishment, with a massive section of notes and numerous references.

This is a comparative history of eugenics from the earliest days (Francis Galton, 1865) to the present (1984), since “modern eugenics” survived the exposure of the Nazi crimes (including imposed sterilizations that are still enforced to this day). Comparative between the UK and the US, that is, as the book hardly considers other countries, except for a few connections with Germany and the Soviet Union, and then only through Muller’s sojourn there and Haldane’s uneasy “open-minded” approach to Lysenkoism. (Japan is also mentioned in connection with Neel’s study of the genetic impact of the atomic bombs.)

While discussing the broader picture, the book mostly concentrates on the scientific aspects, on how the misguided attempts to reduce intelligence to IQ tests or to a single gene, and to improve humanity (or some of its subgroups) by State-imposed policies perceived as crude genetic engineering, simultaneously led to modern genetics and to a refutation of eugenic perspectives by most if not all. There is very little about statistical methodology per se, besides stories on the creation of Biometrika and the Annals of Eugenics, but much more on the accumulation of data by eugenic societies and the exploitation of these data for ideological purposes.
Galton and Pearson get the lion’s share of the book, while Fisher does not get more coverage than Haldane or Penrose. Overall, I found the book immensely informative as exposing the diversity of scientific and pseudo-scientific viewpoints within eugenism and its evolution towards human genetics as a scientific endeavour.

## Archive for Ronald Fisher

## in the name of eugenics [book review]

Posted in Statistics with tags Annals of Eugenics, Biometrika, book review, eugenics, Japan, John Burdon Sanderson Haldane, Lionel Penrose, Muller, Nazi State, Ronald Fisher, Soviet Union, The New Yorker, Trofim Lysenko on August 30, 2020 by xi'an

## a conversation about eugenism at JSM

Posted in Books, Kids, pictures, Statistics, University life with tags Adolphe Pinard, Caius & Gonville College, Cambridge University, Charles Darwin, eugenics, Fisher lecture, Flinders Petrie, Francis Galton, JSM, JSM 2020, Karl Pearson, Marie Stopes, Ronald Fisher on July 29, 2020 by xi'an

Following the recent debate on Fisher’s involvement in eugenics (and the renaming of the R.A. Fisher Award and Lectureship into the COPSS Distinguished Achievement Award and Lectureship), the ASA is running a JSM round table on eugenics and its connections with statistics, to which I had been invited, along with Scarlett Bellamy, David Bellhouse, and David Cutler. The discussion is planned on 06 August at 3pm ET (7pm GMT) and here is the abstract:

The development of eugenics and modern statistical theory are inextricably entwined in history. Their evolution was guided by the culture and societal values of scholars (and the ruling class) of their time through and including today. Motivated by current-day societal reckonings of systemic injustice and inequity, this roundtable panel explores the role of prominent statisticians and of statistics more broadly in the development of eugenics at its inception and over the past century. Leveraging a diverse panel, the discussions seek to shed light on how eugenics and statistics – despite their entangled past — have now severed, continue to have presence in ways that affect our lives and aspirations.

It is actually rather unclear to me why I was invited to the table, apart from my amateur interest in the history of statistics. On a highly personal level, I remember being introduced to Galton’s racial theories during my first course on probability, in 1982, by Prof Ogier, who always used historical anecdotes to enliven his lectures, like Galton trying to take women’s measurements during his South Africa expedition. Lectures that took place in the INSEE building, boulevard Adolphe Pinard in Paris, with said Adolphe Pinard being a founding member of the French Eugenics Society in 1913.

## stained glass to go

Posted in pictures, University life with tags Caius & Gonville College, Cambridge, Cambridge colleges, controversies, design of experiments, DNA helix, eugenics, Francis Crick, latin square, Ronald Fisher, stained glass, Venn diagram on July 6, 2020 by xi'an

## Monte Carlo Markov chains

Posted in Books, Statistics, University life with tags Andrei Kolmogorov, Bayesian Analysis, Bayesian model comparison, book review, CHANCE, Gregor Mendel, iris data, irreducibility, JASA, Jeffreys priors, Kolmogorov axioms, Kolmogorov-Smirnov distance, MCMC, physics, population genetics, pot-pourri, recurrence, Richard Price, Ronald Fisher, Springer-Verlag, textbook, Thomas Bayes, W. Gosset on May 12, 2020 by xi'an

**D**arren Wraith pointed out this (currently free access) Springer book by Massimiliano Bonamente [whose family name means *good spirit* in Italian] to me for its use of the unusual *Monte Carlo Markov chain* rendering of MCMC. (Google Trends seems to restrict its use to California!) This is a graduate text for physicists, but one could nonetheless expect more rigour in the processing of the topics, particularly the Bayesian ones. Here is a pot-pourri of memorable quotes:

*“Two major avenues are available for the assignment of probabilities. One is based on the repetition of the experiments a large number of times under the same conditions, and goes under the name of the frequentist or classical method. The other is based on a more theoretical knowledge of the experiment, but without the experimental requirement, and is referred to as the Bayesian approach.”*

*“The Bayesian probability is assigned based on a quantitative understanding of the nature of the experiment, and in accord with the Kolmogorov axioms. It is sometimes referred to as *empirical probability*, in recognition of the fact that sometimes the probability of an event is assigned based upon a practical knowledge of the experiment, although without the classical requirement of repeating the experiment for a large number of times. This method is named after the Rev. Thomas Bayes, who pioneered the development of the theory of probability.”*

*“The likelihood P(B/A) represents the probability of making the measurement B given that the model A is a correct description of the experiment.”*

*“…a uniform distribution is normally the logical assumption in the absence of other information.”*

*“The Gaussian distribution can be considered as a special case of the binomial, when the number of tries is sufficiently large.”*

*“This clearly does not mean that the Poisson distribution has no variance—in that case, it would not be a random variable!”*

*“The method of moments therefore returns unbiased estimates for the mean and variance of every distribution in the case of a large number of measurements.”*

*“The great advantage of the Gibbs sampler is the fact that the acceptance is 100 %, since there is no rejection of candidates for the Markov chain, unlike the case of the Metropolis–Hastings algorithm.”*

Let me then point out (or just whine about!) the book using “statistical independence” for plain independence, the use of / rather than Jeffreys’ | for conditioning (and sometimes forgetting \ in some LaTeX formulas), the confusion between events and random variables, esp. when computing the posterior distribution, between models and parameter values, the reliance on discrete probability for continuous settings, as in the Markov chain chapter, confusing density and probability, using Mendel’s pea data without mentioning the unlikely fit to the expected values (or, as put more subtly by Fisher (1936), “the data of most, if not all, of the experiments have been falsified so as to agree closely with Mendel’s expectations”), presenting Fisher’s and Anderson’s *Iris data* [a motive for rejection when George was JASA editor!] as a “new classic experiment”, mentioning Pearson but not Lee for the data in the 1903 Biometrika paper “On the laws of inheritance in man” (and woman!), and not accounting for the discrete nature of these data in the linear regression chapter, the three-page derivation of the Gaussian distribution from a Taylor expansion of the Binomial pmf obtained by differentiating in the integer argument, spending endless pages on deriving standard properties of classical distributions, this appalling mess of adding over the conditioning atoms with no normalisation in a Poisson experiment, botching the proof of the CLT, which is treated *before* the Law of Large Numbers, restricting maximum likelihood estimation to the Gaussian and Poisson cases and muddling its meaning by discussing unbiasedness, confusing a drifted Poisson random variable with a drift on its parameter, as well as using the pmf of the Poisson to define an area under the curve (Fig. 5.2), sweeping the impropriety of a constant prior under the carpet, defining a null hypothesis as a range of values for a summary statistic, no mention of Bayesian perspectives in the hypothesis testing, model comparison, and regression chapters, having one-dimensional case chapters followed by two-dimensional case chapters, reducing model comparison to the use of the Kolmogorov–Smirnov test, processing bootstrap and jackknife in the Monte Carlo chapter without a mention of importance sampling, stating recurrence results without assuming irreducibility, motivating MCMC by the intractability of the evidence, resorting to the term *link* to designate the current value of a Markov *chain*, incorporating the need for a prior distribution in a terrible description of the Metropolis–Hastings algorithm, including a discrete proof for its stationarity, spending many pages on early 1990s MCMC convergence tests rather than discussing the adaptive scaling of proposal distributions, the inclusion of numerical tables [in a 2017 book], and turning Bayes (1763) into Bayes and Price (1763), or Student (1908) into Gosset (1908).
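On the Metropolis–Hastings point, a correct description of the algorithm requires nothing beyond an unnormalised target density and a proposal: no separate prior, no evidence, no normalising constant. Here is a minimal sketch of my own (the standard Normal target and random-walk proposal are choices for illustration, not anything from the book) where the acceptance ratio involves the target alone:

```python
import math
import random

def metropolis_hastings(log_target, x0, n_iter=10_000, scale=1.0):
    """Random-walk Metropolis: only an unnormalised log-target is needed."""
    x, lp = x0, log_target(x0)
    chain, accepted = [], 0
    for _ in range(n_iter):
        y = x + random.gauss(0.0, scale)         # symmetric proposal
        lq = log_target(y)
        # acceptance probability min(1, target(y)/target(x)):
        # normalising constants (and any "evidence") cancel out
        if math.log(random.random()) < lq - lp:
            x, lp = y, lq
            accepted += 1
        chain.append(x)
    return chain, accepted / n_iter

# standard Normal target, known only up to its normalising constant
chain, rate = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0)
```

The acceptance rate here is strictly below one, unlike the Gibbs sampler's, but this is a tuning feature of the random-walk proposal, not a defect of the algorithm.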

*[Usual disclaimer about potential self-plagiarism: this post or an edited version of it could possibly appear later in my Books Review section in CHANCE. Unlikely, though!]*