Archive for Harvard University

EM gets the Nobel (of statistics)

Posted in Statistics on March 23, 2021 by xi'an

secrets of the surface

Posted in Statistics on October 30, 2020 by xi'an

 

if then [reading a book self-review]

Posted in Statistics on October 26, 2020 by xi'an

Nature of 17 September 2020 has a somewhat surprising comment section where an author, Jill Lepore from Harvard University, actually summarises her own book, If Then: How the Simulmatics Corporation Invented the Future. This book is the (hi)story of a precursor of Big Data analytics, Simulmatics, which, as early as 1959, used clustering and simulation to predict election results and, if possible, identify discriminant variables. Which apparently contributed to John F. Kennedy's victory over Richard Nixon in 1960. Rather than admiring the analytic abilities of such precursors (!), the author blames them for election interference. A criticism that could apply to any kind of polling, properly or improperly conducted. The article also describes how Simulmatics went into advertising, econometrics and counter-insurgency, vainly trying to predict the occurrence and location of riots (at home) and revolutions (abroad). And argues in an all-encompassing critique against any form of data analytics applied to human behaviour. And praises the wisdom of 1968 protesters over current Silicon Valley researchers (whose bosses may have been among these 1968 protesters!)… (Stressing again that my comments come from reading and reacting to the above Nature article, not the book itself!)

Nature tidbits [the Bayesian brain]

Posted in Statistics on March 8, 2020 by xi'an

In the latest Nature issue, a long cover story on Asimov's contributions to science and rationality. And a five-page article on the dopamine reward in the brain being represented as a probability distribution, interpreted as distributional reinforcement learning by researchers from DeepMind, UCL, and Harvard. Going as far as "testing" for this theory with a p-value of 0.008..! Which could as well be a signal of variability between neurons in their dopamine rewards (with a p-value of 10⁻¹⁴, whatever that means). Another article about deep learning for protein (3D) structure prediction. And another one about learning neural networks via specially designed devices called memristors. And yet another one on West Africa population genetics based on four individuals from the Stone to Metal Age (8,000 to 3,000 years ago), SNPs, PCA, and admixtures. With no ABC mentioned (I no longer have access to the journal, having missed renewal time for my subscription!). And the literal plague of a locust invasion in Eastern Africa. Making me wonder anew as to why proteins could not be recovered from the swarms of locusts to partly compensate for the damages. (Locusts eat their bodyweight in food every day.) And the latest news from NeurIPS about diversity and inclusion. And ethics, as in checking for responsibility and societal consequences of research papers. Reviewing the maths of a submitted paper or the reproducibility of an experiment is already challenging at times, but evaluating the biases in massive proprietary datasets or the long-term societal impact of a classification algorithm may prove beyond what is realistically feasible.

unbiased MCMC with couplings [4pm, 26 Feb., Paris]

Posted in Books, pictures, Statistics, University life on February 24, 2020 by xi'an

On Wednesday, 26 February, Pierre Jacob (Harvard U, currently visiting Paris-Dauphine) is giving a seminar on unbiased MCMC methods with couplings at AgroParisTech, boulevard Claude Bernard, Paris 5ème, Room 32, at 4pm, in the All about that Bayes seminar series.

MCMC methods yield estimators that converge to integrals of interest in the limit of the number of iterations. This iterative asymptotic justification is not ideal; first, it stands at odds with current trends in computing hardware, with increasingly parallel architectures; second, the choice of "burn-in" or "warm-up" is arduous. This talk will describe recently proposed estimators that are unbiased for the expectations of interest while having a finite computing cost and a finite variance. They can thus be generated independently in parallel and averaged over. The method also provides practical upper bounds on the distance (e.g. total variation) between the marginal distribution of the chain at a finite step and its invariant distribution. The key idea is to generate "faithful" couplings of Markov chains, whereby pairs of chains coalesce after a random number of iterations. The talk will provide an overview of this line of research.
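For readers curious about what such an estimator looks like in practice, here is a minimal sketch of my own (not code from the talk): a one-dimensional standard normal target, random-walk Metropolis-Hastings, proposals coupled through a maximal coupling, a lag of one between the two chains, and the estimator H_k = h(X_k) + Σ_{t=k+1}^{τ−1} [h(X_t) − h(Y_{t−1})], where τ is the meeting time. The target, proposal scale, burn-in k, and test function h are all illustrative choices.

import numpy as np

rng = np.random.default_rng(1)

log_target = lambda x: -0.5 * x**2   # log-density of N(0,1), up to a constant
h = lambda x: x**2                   # test function; E[h(X)] = 1 under the target
sigma = 1.0                          # random-walk proposal standard deviation

def max_coupling_normal(mu1, mu2, s):
    """Sample (X, Y) from a maximal coupling of N(mu1, s^2) and N(mu2, s^2)."""
    logq = lambda z, mu: -0.5 * ((z - mu) / s) ** 2
    x = rng.normal(mu1, s)
    if np.log(rng.uniform()) + logq(x, mu1) <= logq(x, mu2):
        return x, x                  # both coordinates coincide
    while True:                      # otherwise rejection-sample the second coordinate
        y = rng.normal(mu2, s)
        if np.log(rng.uniform()) + logq(y, mu2) > logq(y, mu1):
            return x, y

def mh_step(x):
    """One standard Metropolis-Hastings step."""
    prop = rng.normal(x, sigma)
    return prop if np.log(rng.uniform()) < log_target(prop) - log_target(x) else x

def coupled_mh_step(x, y):
    """One step of the coupled kernel; marginally each chain is plain MH and
    the coupling is faithful: once the two chains are equal, they stay equal."""
    px, py = max_coupling_normal(x, y, sigma)
    logu = np.log(rng.uniform())     # common uniform for both acceptance decisions
    x_new = px if logu < log_target(px) - log_target(x) else x
    y_new = py if logu < log_target(py) - log_target(y) else y
    return x_new, y_new

def unbiased_estimate(k=10, max_iter=10_000):
    """Return H_k, unbiased for the expectation of h under the target."""
    x, y = rng.normal(), rng.normal()    # X_0 and Y_0 from the initial distribution
    x = mh_step(x)                       # X_1, so that X leads Y by one step
    est, t = 0.0, 1
    while t <= max_iter:
        if t == k:
            est += h(x)                  # the h(X_k) term
        if x == y and t >= k:            # chains have met and X_k has been recorded
            break
        if t > k and x != y:             # bias-correction terms before the meeting time
            est += h(x) - h(y)
        x, y = coupled_mh_step(x, y)     # (X_{t+1}, Y_t) given (X_t, Y_{t-1})
        t += 1
    return est

reps = [unbiased_estimate() for _ in range(1000)]
print(np.mean(reps), np.std(reps) / np.sqrt(len(reps)))   # mean should be close to 1

Averaging many such independent replications gives the parallel-friendly estimate described above, and the distribution of the meeting times τ is what yields the bounds on the distance to the invariant distribution.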
