Archive for Francis Galton

What are the chances of that?

Posted in Books, pictures, Statistics, University life on May 13, 2022 by xi'an

What are the chances that I review a book with this title, a few months after reviewing a book called What is luck?! This one is written by Andrew Elliott, whose Is that a big number? I reviewed a wee bit earlier… And that the cover of this book involves a particularly unlucky sequence of dice, as in my much earlier review of Krzysztof Burdzy's book? (About 10⁻⁶ times as likely as the likeliest draw!)

The (relative) specificity of this book is to try to convey the notions of chance and uncertainty to the general public, more by demonstrating through examples and simulations that our intuition is most often wrong than by delving into the psychological reasons for this, as in Barbara Blatchley's book. The author advances five dualities that underlie our (dysfunctional) relation to chance: individual vs. collective, randomness vs. meaning, foresight vs. insight, uniformity vs. variability, and disruption vs. opportunity.

“News programmes clearly understand that the testimonies of individuals draw better audiences than the summaries of statisticians.” (p. xvii)

Some of the nice features of the book are (a) the description of a probabilistic problem at the beginning of each chapter, to be solved at the end, (b) the use of simulation experiments, represented by coloured pixels over a grey band crossing the page, including a section on pseudorandom generators [which is less confusing than the quote below may indicate!], (c) taking full advantage of the quincunx apparatus (easily mimicked in R, as sketched below), and (d) very few apologies for getting into formulas. And even a relevant quote of Taleb's Black Swan about the ludic fallacy. On the other hand, the author spends quite a large component of the book on chance games, exhibiting a ludic tendency! And contemplates biased coins, while he should know better! The historical sections may prove too much for both informed and uninformed readers. (However, I learned that the UK Government had used a form of lottery to pay interest on premium bonds.) And the later parts are less numerical and quantified, even though the author brings in the micromort measurement [invented by Ronald Howard and] favoured by David Spiegelhalter. Who actually appears to have inspired several other sections, like the one on coincidences (which remains quite light in its investigation!). I finished the book rather quickly, browsing through what are mostly anecdotes, with a lesser feel of a unified discourse. I did not find the attempt to link with the COVID pandemic, which definitely resets our clocks on risk, particularly alluring…
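As an aside, the quincunx itself takes but a couple of lines of R to emulate, since the final slot of each ball is simply its number of rightward bounces, a Binomial outcome. (A minimal sketch of mine, not taken from the book, with an arbitrary choice of 12 rows of pegs:)

bins = rbinom(5e3, size = 12, prob = .5)  # one ball = 12 left/right bounces
barplot(table(bins))                      # the Binomial bell shape emerges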

“People go to a lot of trouble to generate truly random numbers—sequences that are impossible to predict.” (p.66)

The appearance of the Normal distribution is somewhat overdone and almost mystical, even if the tone gets more reasonable by the end of the corresponding chapter.

“…combining random numbers from distributions that really have no business being added together (…) ends up with a statistic that actually fits the normal distribution quite well.” (p.83)
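This is of course the Central Limit Theorem at work, which a few lines of R suffice to illustrate. (My own sketch, not the book's simulation, with arbitrary choices of the distributions being added together:)

x = replicate(1e4, sum(runif(20), rexp(20), rbinom(20, 1, .3)))  # mixed sums
hist(x, prob = TRUE, breaks = 50)  # histogram of the 10⁴ heterogeneous sums
m = mean(x); s = sd(x)
curve(dnorm(x, m, s), add = TRUE)  # a Normal fit is indeed quite good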

The part about Bayes and Bayesian reasoning does not include any inference, with a rather duh! criticism of prior modelling.

“If you are tempted to apply a group statistic derived from a broad analysis to a more narrow purpose, you run the risk of making an unfair judgement.” (p.263)

The section about Xenakis' musical creations as a Markov process was most interesting (and novel to me). I also enjoyed the shared cultural entries, esp. literary ones. Like citing the recent Chernobyl TV drama. Or Philip K. Dick's Do Androids Dream of Electric Sheep? Or Monty Python's Life of Brian. Overall, there is enough trivia and engagement to keep reading the book till its end!

extinction minus one

Posted in Books, Kids, pictures, R, Statistics, University life on March 14, 2022 by xi'an

The riddle from The Riddler of 19 Feb. is about the Bernoulli Galton-Watson process, where each individual in the population has one or zero descendants with equal probability: starting with a large population of size N, what is the probability that the size of the population on the brink of extinction is equal to one? While it is easy to show that the probability that the n-th generation is extinct is

\mathbb{P}(S_n=0) = (1-2^{-n})^N

(each of the N founding lineages independently surviving n generations with probability 2⁻ⁿ), I could not find a way to express the probability of hitting one and resorted to brute-force simulation, easily coded:

F = 0                                 # accumulates the terminal population sizes
for (t in 1:(T <- 1e8)) {
  Z = N = 1e4                         # start from a (large) population of size N
  while (Z > 1) Z = rbinom(1, Z, .5)  # Bernoulli Galton-Watson generations
  F = F + Z                           # Z is 0 or 1 on the brink of extinction
}
F / T                                 # proportion of runs hitting exactly one

which produces an approximate probability of 0.7213 or 0.714. The impact of N vanishes quickly, as expected, since the probability of reaching 1 in a single generation is negligible…

However, when returning to Dauphine after a two-week absence, I discussed the problem with my probabilist neighbour François Simenhaus, who immediately pointed out that this probability is more simply seen as the probability that the maximum of N independent geometric random variables is attained by a single one among the N. Later, searching for a reference for that probability, I came across the 1990 paper of Bruss and O'Cinneide, which shows that the probability of uniqueness of the maximum does not converge as N goes to infinity, but rather fluctuates around 0.72135 with logarithmic periodicity. It is only along the subsequence N=2ⁿ that it converges, to 0.721521… This probability actually writes down in closed form as

N\sum_{i=1}^\infty 2^{-i-1}(1-2^{-i})^{N-1}

(which is obvious in retrospect!, albeit with a typo in the original paper, where a ½ factor is missing from equation (17)) and its asymptotic behaviour is not obvious either, as noted by the authors.
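For what it is worth, the closed form is easily checked in R by truncating the series, and compared against a direct simulation of the unique-maximum event. (A sketch of mine, with puniq and the truncation point imax being my own arbitrary choices:)

puniq = function(N, imax = 200){
  i = 1:imax
  N * sum(2^(-i - 1) * (1 - 2^(-i))^(N - 1))  # truncated version of the series
}
puniq(1e4)                                    # close to 0.72135
mean(replicate(1e3, {g = rgeom(1e4, .5); sum(g == max(g)) == 1}))  # unique max?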

On the historical side, and in accordance with Stigler's law, the Galton-Watson process should have been called the Bienaymé process! (Bienaymé, a student of Laplace, successively lost positions for his political ideas, before eventually joining the Académie des Sciences, and later founding the Société Mathématique de France.)

another book on J.B.S. Haldane [review of a book review]

Posted in Books, pictures, Statistics, University life on August 24, 2020 by xi'an

As I noticed a NYT book review of a most recent book on J.B.S. Haldane, I realised several other books had already been written about him: an early 1985 biography, "Haldane: the life and work of J.B.S. Haldane with special references to India", followed by a 2016 biography, "Popularizing Science", along with a 2009 edited collection of some of Haldane's essays, "What I require from life", all three by Krishna R. Dronamraju; a 1969 biography with the cryptic title "J.B.S.", by Ronald Clark; and a sensational 2018 "Comrade Haldane Is Too Busy to Go on Holiday: The Genius Who Spied for Stalin" by Gavan Tredoux, depicting him as a spy for the Soviet Union during WW II. (The last author is working on a biography of Francis Galton, hopefully exonerating him of spying for the French! But a short text of his comparing Haldane and Darlington appears to support the latter's belief in racial differences in intelligence…) I also discovered that J.B.S. had written a children's book, "My Friend Mr Leakey", illustrated by Quentin Blake, Roald Dahl's illustrator. (Charlotte Franken Haldane, J.B.S.'s first wife, also wrote a considerable number of books.)

The NYT review is more a summary of Haldane's life than an analysis of the book itself, hard as it is not to get mesmerised by the larger-than-life stature of J.B.S. It does not dwell very long on the time it took Haldane to break from the Communist Party over its adherence to Lysenko's pseudo-science (while his wife Charlotte had realised the repressive nature of the Soviet regime much earlier, which may have led to their divorce). While the review makes no mention at all of Haldane's ideological move to the ISI in Kolkata, it concludes with "for all his failings, he was "deeply attractive during a time of shifting, murky moralities."" [The double quotes being the review quoting the book!]

a conversation about eugenics at JSM

Posted in Books, Kids, pictures, Statistics, University life on July 29, 2020 by xi'an

Following the recent debate on Fisher's involvement in eugenics (and the renaming of the R.A. Fisher Award and Lectureship into the COPSS Distinguished Achievement Award and Lectureship), the ASA is running a JSM round table on eugenics and its connections with statistics, to which I had been invited, along with Scarlett Bellamy, David Bellhouse, and David Cutler. The discussion is planned for 06 August at 3pm (ET, i.e., 7pm GMT) and here is the abstract:

The development of eugenics and modern statistical theory are inextricably entwined in history. Their evolution was guided by the culture and societal values of scholars (and the ruling class) of their time through and including today. Motivated by current-day societal reckonings of systemic injustice and inequity, this roundtable panel explores the role of prominent statisticians and of statistics more broadly in the development of eugenics at its inception and over the past century. Leveraging a diverse panel, the discussions seek to shed light on how eugenics and statistics – despite their entangled past – have now severed, continue to have presence in ways that affect our lives and aspirations.

It is actually rather unclear to me why I was invited to the table, apart from my amateur interest in the history of statistics. On a highly personal level, I remember being introduced to Galton's racial theories during my first course on probability, in 1982, by Prof Ogier, who always used historical anecdotes to enliven his lectures, like Galton trying to take the body measurements of women during his South Africa expedition. Lectures that took place in the INSEE building, boulevard Adolphe Pinard in Paris, with said Adolphe Pinard being a founding member of the French Eugenics Society in 1913.

Probability and Bayesian modeling [book review]

Posted in Books, Kids, R, Statistics, University life on March 26, 2020 by xi'an

Probability and Bayesian modeling is a textbook by Jim Albert [whose reply is included at the end of this entry] and Jingchen Hu that CRC Press sent me for review in CHANCE. (The book is also freely available in bookdown format.) The level of the textbook is definitely most introductory, as it dedicates its first half to probability concepts (with no measure theory involved), mostly focusing on counting and finite sample space models. The second half moves to Bayesian inference(s), with a strong reliance on JAGS for the processing of more realistic models, and R vignettes for the simplest cases (where I discovered R commands I did not know, like dplyr::mutate()!).
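For readers who, like me, had so far missed it, dplyr::mutate() simply adds or transforms columns of a data frame. (A toy illustration of mine, not from the book:)

library(dplyr)
df = data.frame(x = 1:3, y = c(2, 5, 8))
mutate(df, z = y / x)   # returns df augmented with a new column z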

As a preliminary warning about my biases, I am always reserved about mixing introductions to probability theory and to (Bayesian) statistics in the same book, as I feel they should be separated to avoid confusion, for instance between histograms and densities, or between (theoretical) expectation and (empirical) mean. I therefore fail to relate to the pace and tone adopted in the book which, in my opinion, seems to dally on overly simple examples [far too often concerned with food or baseball] while skipping over the concepts and background theory. For instance, introducing the concept of subjective probability as early as page 6 is laudable, but I doubt it will engage fresh readers when describing it as a measurement of one's "belief about the truth of an event", then stressing that to "make any kind of measurement, one needs a tool like a scale or ruler". Overall, I have no particularly focused criticism on the probability part, except for the discrete vs. continuous imbalance. (With the Poisson distribution not covered in the Discrete Distributions chapter. And the "bell curve" making a weird and unrigorous appearance there.) Galton's board (with no mention found of the quincunx) could have been better exploited towards the physical definition of a prior, following Steve Stigler's analysis, by adding a second level. Or turned into an R coding exercise. In the continuous distributions chapter, I would have had the cdf come before the pdf, rather than the opposite. And I disliked the notion that a Normal distribution was supported by a histogram of (marathon) running times, i.e., values lower bounded by 122 (at the moment). Or later (in Chapter 8) by Roger Federer's serving times. Incidentally, a fun typo on p.191, at least fun for LaTeX users, as

f_{Y\ mid X}

with an extra space between `\' and `mid'! (I also noticed several occurrences of the unavoidable "the the" typo in the last chapters.) The simulation from a bivariate Normal distribution is hidden behind a customised R function sim_binom(), when it could have been easily described as a two-stage hierarchy, as sketched below. And there is no comment on the fact that a sample from Y−1.5X can be directly derived from the joint sample. (Too unconscious a statistician?)
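Here is what I mean by a two-stage hierarchy: simulate the first coordinate from its marginal and the second from its conditional. (A generic sketch for a standard bivariate Normal, with a correlation of my own choosing, not the book's sim_binom():)

x = rnorm(1e4)                              # X ~ N(0, 1), the marginal
rho = .7                                    # an arbitrary correlation
y = rnorm(1e4, rho * x, sqrt(1 - rho^2))    # Y | X = x ~ N(rho x, 1 - rho²)
cor(x, y)                                   # close to rho
z = y - 1.5 * x                             # a sample from Y - 1.5X, for free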

When moving to Bayesian inference, a large section is spent on very simple models like estimating a proportion or a mean, covering both discrete and continuous priors, and strongly focusing on conjugate priors despite warnings that they do not necessarily reflect prior information or prior belief. With some debatable recommendations for "large" prior variances as weakly informative, or (worse) for Exp(1) as a reference prior for the sample precision in the linear model (p.415). But also covering Bayesian model checking, either via the prior predictive (hence Bayes factors) or via the posterior predictive (with no mention of using the data twice). A mere marginal note introduces a sufficient statistic for the Normal model. In the Normal model checking section, an estimate of the posterior density of the mean is used without (apparent) explanation.
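For estimating a proportion, the conjugate analysis indeed fits in a handful of R lines. (A generic sketch of mine, not the book's code, with arbitrary prior hyperparameters and hypothetical data:)

a = b = 1                              # Uniform, i.e., Beta(1, 1), prior
n = 30; x = 12                         # hypothetical data: 12 successes in 30 trials
theta = rbeta(1e4, a + x, b + n - x)   # draws from the Beta posterior
xrep = rbinom(1e4, n, theta)           # posterior predictive replicates
mean(xrep >= x)                        # a crude posterior predictive check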

“It is interesting to note the strong negative correlation in these parameters. If one assigned informative independent priors on and , these prior beliefs would be counter to the correlation between the two parameters observed in the data.”

For the same reasons of having to cut on mathematical validation and rigour, Chapter 9 on MCMC does not explain why MCMC algorithms converge outside of the finite state space case. The proposal in the algorithmic representation is chosen as a Uniform one, since larger dimension problems are handled by either Gibbs or JAGS. The recommendations about running MCMC do not include how many iterations one "should" run (or other common queries on Stack eXchange), although they do include the sensible advice of running multiple chains and of comparing simulated predictive samples with the actual data as a model check. However, the MCMC chapter very quickly and inevitably turns into commented JAGS code, which I presume would require more from the students than just reading the available code. Like the JAGS manual. Chapter 10 is mostly a series of examples of Bayesian hierarchical modeling, with illustrations of the shrinkage effect like the one on the book cover. Chapter 11 covers simple linear regression with some mentions of weakly informative priors, although in a BUGS spirit of using large [enough?!] variances: "If one has little information about the location of a regression parameter, then the choice of the prior guess is not that important and one chooses a large value for the prior standard deviation . So the regression intercept and slope are each assigned a Normal prior with a mean of 0 and standard deviation equal to the large value of 100." (p.415). Regardless of the scale of y? Standardisation is covered later in the chapter (with the use of the R function scale()) as part of constructing more informative priors, although this sounds more like data-dependent priors to me, in the sense that the scale and location are summarily estimated by empirical means from the data. The above quote also strikes me as potentially confusing to the students, as it does not spell out at all how to design a joint distribution on the linear regression coefficients that translates the concentration of these coefficients along ȳ=β₀+β₁x̄. Chapter 12 expands the setting to multiple regression and generalised linear models, mostly through examples. It however suggests using cross-validation for model checking, and then advocates DIC (the deviance information criterion) "to approximate a model's out-of-sample predictive performance" (p.463), if only because it is covered in JAGS, the definition of the criterion being relegated to the last page of the book. Chapter 13 concludes with two case studies, the (often used) Federalist Papers analysis and a baseball career hierarchical model, which may sound far-reaching considering the modest prerequisites the book started with.
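For the record, a random-walk Metropolis sampler with a Uniform proposal, as in the book's algorithmic representation, is itself barely a handful of R lines. (My own generic sketch, not the book's code, with the names metro and logf, the scale d, and the toy Normal target all being arbitrary choices:)

metro = function(logf, init, niter = 1e4, d = 1){
  th = numeric(niter); th[1] = init
  for (t in 2:niter){
    prop = th[t - 1] + runif(1, -d, d)   # symmetric Uniform(-d, d) proposal
    th[t] = if (log(runif(1)) < logf(prop) - logf(th[t - 1])) prop else th[t - 1]
  }
  th                                     # the Markov chain targeting exp(logf)
}
out = metro(function(x) dnorm(x, log = TRUE), init = 0)   # toy N(0,1) target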

In conclusion of this rambling [lazy Sunday] review, this is not a textbook I would have the opportunity to use in Paris-Dauphine, but I can easily conceive of its adoption for students with limited maths exposure. As such, it offers a decent entry to the use of Bayesian modelling, supported by a specific software (JAGS), and rightly stresses the need for model checking and for comparison with pseudo-observations. Provided the course is reinforced with a fair amount of computer labs and projects, the book can indeed properly introduce students to Bayesian thinking, hopefully leading them to seek more advanced courses on the topic.

Update: Jim Albert sent me the following precisions after this review got on-line:

Thanks for your review of our recent book.  We had a particular audience in mind, specifically undergraduate American students with some calculus background who are taking their first course in probability and statistics.  The traditional approach (which I took many years ago) teaches some probability one semester and then traditional inference (focusing on unbiasedness, sampling distributions, tests and confidence intervals) in the second semester.  There didn’t appear to be any Bayesian books at that calculus-based undergraduate level and that motivated the writing of this book.  Anyway, I think your comments were certainly fair and we’ve already made some additions to our errata list based on your comments.
[Disclaimer about potential self-plagiarism: this post or an edited version will eventually appear in my Books Review section in CHANCE. As appropriate for a book about Chance!]