**H**ere are the slides I built in support of my discussion, inspired by readings like Kevles’ *In the Name of Eugenics* and others listed on the final slide, as well as Wikipedia entries. Nothing original or new, to be sure.

## Archive for history of statistics

## Hastings 50 years later

Posted in Books, pictures, Statistics, University life with tags 1066, asynchronous algorithms, automation, Battle of Hastings, Bayesian statistics, BUGS, history of statistics, incompatible conditionals, Metropolis-Hastings algorithms, Normans, pseudo-marginal MCMC, STAN, Wilfred Keith Hastings on January 9, 2020 by xi'an

**W**hat is the exact impact of the Metropolis-Hastings algorithm on the field of Bayesian statistics? And what are the new tools of the trade? What I personally find the most relevant and attractive element in a review on the topic is the current role of this algorithm, rather than its past (his)story, since many such reviews have already appeared and will likely continue to appear. What matters most imho is how much the Metropolis-Hastings algorithm signifies for the community at large, especially beyond academia. Is the availability or unavailability of software like BUGS or Stan a help or a hindrance? Was Hastings’ paper the start of the era of approximate inference or the end of exact inference? Are the algorithm’s intrinsic features, like Markovianity, a fundamental cause of its eventual extinction, given the ensuing time constraints, the lack of practical convergence guarantees, and the illusion of a fully automated version? Or are emerging solutions like unbiased MCMC and asynchronous algorithms a beacon of hope?

In their recent Biometrika paper, Dunson and Johndrow (2019) wrote a celebration of Hastings’ 1970 paper in the same journal, where they cover adaptive Metropolis (Haario et al., 1999; Roberts and Rosenthal, 2005), the importance of gradient-based versions toward universal algorithms (Roberts and Tweedie, 1995; Neal, 2003), discussing the advantages of HMC over Langevin versions. They also recall the significant step represented by Peter Green’s (1995) reversible jump algorithm for multimodal and multidimensional targets, as well as tempering (Miasojedow et al., 2013; Woodard et al., 2009). They further cover intractable likelihood cases within MCMC (rather than ABC), with the use of auxiliary variables (Friel and Pettitt, 2008; Møller et al., 2006) and pseudo-marginal MCMC (Andrieu and Roberts, 2009; Andrieu and Vihola, 2016). They naturally insist upon the need to handle huge datasets, high-dimension parameter spaces, and other scalability issues, with links to unadjusted Langevin schemes (Bardenet et al., 2014; Durmus and Moulines, 2017; Welling and Teh, 2011). Similarly, Dunson and Johndrow (2019) discuss recent developments towards parallel MCMC and non-reversible schemes such as PDMPs as highly promising, with a concluding section on the challenges of further automating and robustifying the said procedures, if only to reach a wider range of applications. The paper is well-written and contains a wealth of directions and reflections, including those in my above introduction. Here are some mostly disconnected directions I would have liked to see covered, or covered more:

- convergence assessment today, e.g. the comparison of various approximation schemes
- Rao-Blackwellisation and other post-processing improvements
- other approximate inference tools than the pseudo-marginal MCMC
- importance of the parameterisation of the problem for convergence
- dimension issues and connection with quasi-Monte Carlo
- constrained spaces of measure zero, as for instance matrix distributions imposing zeros outside a diagonal band
- given the rise of the machine(-learners), are exploratory and intrinsically slow algorithms like MCMC doomed or can both fields feed one another? The section on optimisation could be expanded in that direction
- the wasteful nature of the random walk feature of MCMC algorithms, as opposed to non-reversible kernels like HMC and other PDMPs, missing from the gradient based methods section (and can we once again learn from physicists?)
- finer convergence issues and hence inference difficulties with complex MCMC algorithms like Gibbs samplers with incompatible conditionals
- use of the Hastings ratio in other algorithms like ABC or EP (in link with the section on generalised Bayes)
- adapting Metropolis-Hastings methods for emerging computing tools like GPUs and quantum computers

Or possibly covered less, namely data augmentation, which is put forward even though it is a special case of auxiliary variables, as in slice sampling and in the earlier physics literature. For instance, both probit and logistic regressions do not truly require data augmentation and are more toy examples than really challenging applications. The approach of Carlin & Chib (1995) is another illustration, one which has met with recent interest, despite requiring heavy calibration (just like RJMCMC). As well as a somewhat awkward opposition between Gibbs and Hastings, in that I am not convinced that Gibbs does not remain ultimately necessary to handle high-dimension problems, in the sense that alternative solutions like Langevin, HMC, PDMP, or…, rely on Euclidean assumptions for the entire vector, while a direct product of Euclidean structures may prove more adequate.
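Since the whole discussion revolves around the Metropolis-Hastings algorithm, a minimal random-walk Metropolis sketch may help place it; here the proposal is symmetric, so the Hastings ratio reduces to the ratio of target densities. The standard Normal target, the step size, and the function names are illustrative assumptions of mine, not anything taken from Hastings’ paper:

```python
import math
import random

def metropolis_hastings(log_target, x0, n_iter, step=1.0, seed=0):
    """Random-walk Metropolis sampler.

    With a symmetric Gaussian proposal, the Hastings correction
    q(x|y)/q(y|x) equals one, so the acceptance ratio is simply
    pi(proposal)/pi(current), computed here on the log scale.
    """
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n_iter):
        proposal = x + rng.gauss(0.0, step)
        log_alpha = log_target(proposal) - log_target(x)
        # accept with probability min(1, exp(log_alpha))
        if math.log(rng.random()) < log_alpha:
            x = proposal
        chain.append(x)
    return chain

# illustrative target: a standard Normal, via its log-density up to a constant
chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_iter=20000)
mean = sum(chain) / len(chain)
```

For an asymmetric proposal one would add the log-proposal-density difference to `log_alpha`, which is exactly Hastings’ (1970) generalisation of the original Metropolis scheme.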

## down with Galton (and Pearson and Fisher…)

Posted in Books, Statistics, University life with tags Annals of Eugenics, Biometrika, eugenics, Francis Galton, Genetics, history of statistics, honours, Karl Pearson, London, physiognomy, population genetics, R.A. Fisher, racism, Stephen Stigler, UCL, University College London on July 22, 2019 by xi'an

**I**n the last issue of Significance, which I read in Warwick prior to the conference, there is a most interesting article on Galton’s eugenics, his heritage at University College London (UCL), and the overall trouble with honouring prominent figures of the past with memorials like named buildings or lectures… The starting point of this debate is a protest from some UCL students and faculty about UCL having a lecture room named after the late Francis Galton, who was a professor there and who further donated, at his death, most of his fortune to the university towards creating a professorship in eugenics. The protests are about Galton’s involvement in the eugenics movement of the late 19th and early 20th century, as well as his professing racist opinions.

My first reaction after reading about these protests was *why not?!* Named places or lectures, as well as statues and other memorials, have a limited utility, especially when the named person is long dead, and they certainly do not contribute to making a scientific theory [associated with the said individual] more appealing or more valid. And since “humans are [only] humans”, to quote Stephen Stigler speaking in this article, it is unrealistic to expect great scientists to be perfect, all the more so if one multiplies the codes for ethical or acceptable behaviour across ages and cultures. It is also more rational to use amphitheatre MS.02 and lecture room AC.18 rather than associate them with one name chosen among many alumni and former professors.

Predictably, another reaction of mine was *why bother?!,* as removing Galton’s name from the items it is attached to is highly unlikely to change current views on eugenism or racism. On the contrary, it seems to detract from opposing the present versions of these ideologies, such as some recent proposals linking genes and some form of academic success. Another of my (multiple) reactions was that, as stated in the article, these views of Galton’s reflected the views and prejudices of his time, when the notions of races and of inequalities between races (as well as genders and social classes) were almost universally accepted, including in scientific publications like the proceedings of the Royal Society and Nature. Karl Pearson launched the Annals of Eugenics in 1925 (after he had started Biometrika) with the very purpose of establishing a scientific basis for eugenics. (An editorship that Ronald Fisher would later take over, along with his views on the differences between races, believing that “human groups differ profoundly in their innate capacity for intellectual and emotional development”.) Starting from these prejudiced views, Galton set up a scientific and statistical approach to support them, by accumulating data and possibly modifying some of these views, but without much empathy for the consequences, as shown in this terrible quote I found when looking for more material:

“I should feel but little compassion if I saw all the Damaras in the hand of a slave-owner, for they could hardly become more wretched than they are now…”

As it happens, my first exposure to Galton was in my first probability course at ENSAE, when a terrific professor peppered his lectures with historical anecdotes and used to mention Galton’s data-gathering trip to Namibia, literally measuring local inhabitants towards his physiognomical views, also reflected in the above attempt of his to superpose photographs to achieve the “ideal” thief…

## Bayes-250

Posted in Books, pictures, Statistics, University life with tags Bayes 250, Bayesian foundations, Duke University, history of statistics on December 18, 2013 by xi'an

**M**y fourth Bayes-250 and presumably the last one, as it starts sounding like groundhog day!

**S**tephen Stigler started the day with three facts or items of inference on Thomas Bayes: the first one was about The Essay and its true title, recent research I made use of in Budapest. As reported in his Statistical Science paper, Stigler found an off-print of Bayes’ Essay with an altogether different title, *“A Method of Calculating the Exact Probability of All Conclusions founded on Induction”*, which sounds much better than the title of the version published in the Proceedings of the Royal Society, *“An Essay towards solving a Problem in the Doctrine of Chances”*, and appears as part of a larger mathematical construct answering Hume’s dismissal of miracles… (Dennis Lindley, in a personal communication to Stephen, acknowledged the importance of the title and regretted “as an atheist” that the theorem was intended for religious usage!)

**S**tephen then discussed Bayes’ portrait, which (first?) appeared in June 1933 in *The American Conservationist*, where it was acknowledged as taken from the Wing collection of the Newberry Library in Chicago (in which Stephen has not yet unearthed the said volume!). My suggestion would be to use a genealogy algorithm to check whether or not paternity can be significantly rejected by comparing the two portraits. The more portraits from Bayes’ family, the better.

**S**teven Fienberg took over for another enjoyable historical talk about the neo-Bayesian revival of the 50s. In connection with his BA paper on the appearance of the term *Bayesian*. Giving appropriately a large place to Alan Turing. And Jimmy Savage (whose book does not use the term *Bayesian*). He also played great videos of Howard Raiffa explaining how he became a (closet) Bayesian. And of Jack Good being interviewed by Persi Diaconis. *(On a highly personal level, I wonder who in my hotel has named his or her network “Neo Bayesian Revival”!)*

**I**n a very unusual format, Adrian Smith and Alan Gelfand ran an exchange around a bottle of Scotch (and a whole amphitheatre), where Adrian recollected his youth at Cambridge and the slow growth of Bayesian statistics in the UK (“a very unorthodox form of inference” in Dennis’ words). I liked very much the way he explained how Dennis Lindley tried to build for statistics the equivalent of the system of axioms Kolmogorov had produced for probability. And even more how Dennis came to the Bayesian side for decision-theoretic reasons. (The end of the exchange was more predictable as being centred on the MCMC revolution.)

**M**ichael Jordan completed the day with a talk oriented much more towards the future. About the growing statistical perspective on document analysis. Document as data indeed. Starting with the bag of words representation. (A side remark was that his paper Latent Dirichlet allocation got more citations than classics like Jim Berger’s 1985 book or Efron’s 1984 book.) The central theme of the talk was that there is much work left to be done to address real problems. Really real problems with computational issues orders of magnitude away from what we can propose today. Michael took linguistics as a final example. Linking with Adrian’s conclusion in that respect.

## Bayesian introductions at IXXI

Posted in Mountains, Statistics, Travel, University life with tags ABC, Bayesian statistics, causality, Grenoble, history of statistics, neurosciences, population genetics on October 28, 2013 by xi'an

**T**en days ago I made a lightning-fast visit to Grenoble for a quick introduction to Bayesian notions during a Bayesian day organised by Michael Blum. It was supported by IXXI, the Rhône-Alpes Complex Systems Institute, a light structure that favours interdisciplinary research to model complex systems such as biological or social systems, technological networks… This was an opportunity to recycle my Budapest overview from Bayes 250th to Bayes 2.5.0. *(As I have changed my email signature initial from X to IX, I further enjoyed the name IXXI!)* More seriously, I appreciated (despite the too short time spent there!) the mix of perspectives and disciplines represented in this introduction, from Bayesian networks and causality in computer science and medical expert systems, to neurosciences and the Bayesian theory of mind, to Bayesian population genetics, and hence the mix of audiences. The part about neurosciences and social representations of others’ minds reminded me of the discussion Pierre Bessières and I had a year ago on France Culture. Again, I am quite sorry and apologetic for having missed part of the day and opportunities for discussions, simply because of a tight schedule this week…