## Archive for the Books Category

## blade runner 2049

Posted in Books, Kids, pictures with tags Blade Runner, Blade Runner 2049, Denis Villeneuve, film noir, movie review, Nuit du Cinéma, Philip K. Dick, Ridley Scott, Vangelis on December 10, 2017 by xi'an

**A**s Blade Runner 2049 was shown at a local cinema during a Nuit du Cinéma special, my daughter and I took the opportunity to see the sequel to Blade Runner, despite the late hour. And we both came back quite enthusiastic about it! Maybe the plot runs a bit thin at times, with too many coincidences and villains who are too obviously evil, but the rendering of this future of the former future LA of the original Blade Runner is amazingly complex, opening many threads of potential explanations. And many more questions, which is great. With fascinating openings into almost philosophical issues, like the impossible frontier between humans and AIs, or the similarly impossible definition of self… Besides, the filming, with a multiplicity of (drone) views, the use of light, from blurred white to glaring yellow and back to snow white, the photography, and the musical track, almost overwhelming and more complex than Vangelis’ original, are all massively impressive. As for the quintessential question of how the sequel compares with the original film, I do not think it makes much sense: for one thing, the sequel would not exist without the original; for another, the filming has evolved with the era, from the claustrophobic and almost steam-punk film by Scott to this post-apocalyptic rendering by Villeneuve, the two movies relating to Philip K. Dick’s book in rather different ways (if fortunately avoiding sheep and goats!).

## off to Austin!

Posted in Books, Kids, Statistics, Travel, University life, Wines with tags Austin, conference, ISBA, O'Bayes17, objective Bayes, Objective Bayesian hypothesis testing, Robert Dedman, Texas, The University of Texas at Austin, USA on December 9, 2017 by xi'an

**T**oday I am flying to Austin, Texas, on the occasion of the O’Bayes 2017 conference, the 12th meeting in the series. In complete objectivity (I am a member of the scientific committee!), the scientific program looks quite exciting, with new themes and new faces. (And Peter Müller concocted a special social program as well!) As indicated above [with an innovative spelling of my first name!], I will give my “traditional” tutorial on O’Bayes testing and model choice tomorrow, flying back to Paris on Wednesday (and alas missing the final talks, including Better together by Pierre!). A nice pun is that the conference centre is located on Robert De[a]dman Drive, which I hope is not premonitory of a fatal ending to my talk there..!

## more random than random!

Posted in Books, Kids, pictures, Statistics with tags Charles Dickens, cross validated, Oliver Twist, random number generator, random variates, randomness, xkcd on December 8, 2017 by xi'an

**A** revealing question on X validated this past week was asking for a random generator that is “more random” than the outcome of a specific random generator, à la Oliver Twist. The question reveals a quite common misunderstanding of the nature of random variables (as deterministic measurable transforms of a fundamental alea) and of their maybe paradoxical ability to enjoy stability or predictable properties. And, to a lesser extent, of how this connects to the long-lasting debate about the very [elusive] nature of randomness. The title of the question is equally striking: “Random numbers without pre-specified distribution”, which could be given some modicum of meaning in a non-parametric setting, still depending on the choices made at the different levels of the model…
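As a side note on the “deterministic measurable transform” point, here is a minimal Python sketch (my own toy illustration, not taken from the question) of the fact that all the “randomness” of a variate lives in the underlying uniform stream, the generator itself being a deterministic map:

```python
import random
import math

# An exponential variate is a deterministic transform of a uniform:
# X = -log(1 - U) / lam, where U ~ Uniform(0,1) carries all the randomness.
def exponential_from_uniform(u, lam=1.0):
    return -math.log(1.0 - u) / lam

rng = random.Random(42)          # fixing the seed fixes the whole stream
us = [rng.random() for _ in range(5)]
xs = [exponential_from_uniform(u) for u in us]

# The same uniforms, pushed through another measurable map, follow another law:
ys = [u ** 2 for u in us]        # cdf F(y) = sqrt(y) on (0,1)

# Re-running with the same seed reproduces the exact same "random" numbers,
# which makes the point that no generator is "more random" than its input alea:
rng2 = random.Random(42)
assert [rng2.random() for _ in range(5)] == us
```

Any distribution one can invert the cdf of is reachable this way, so asking for output “more random” than the uniforms themselves is asking for something the transform cannot supply.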

## resampling methods

Posted in Books, pictures, Running, Statistics, Travel, University life with tags Book, Clifton, hidden Markov models, Hilbert curve, iterated importance sampling, resampling, sequential Monte Carlo, stratified resampling, systematic resampling, Université Paris Dauphine, University of Bristol on December 6, 2017 by xi'an

**A** paper that was arXived [and that I missed!] last summer is a work on resampling by Mathieu Gerber, Nicolas Chopin (CREST), and Nick Whiteley. Resampling is used to sample from a weighted empirical distribution and to correct for very small weights in a weighted sample that otherwise lead to degeneracy in sequential Monte Carlo (SMC). Since this step is based on random draws, it induces noise (while improving the estimation of the target), and reducing this noise is preferable, hence the appeal of replacing plain multinomial sampling with more advanced schemes. The initial motivation is sequential Monte Carlo, where resampling is rife and seemingly compulsory, but this also applies to importance sampling when considering several schemes at once. I remember discussing alternative schemes with Nicolas, then completing his PhD, as well as with Olivier Cappé, Randal Douc, and Eric Moulines at the time (circa 2004) we were working on the Hidden Markov book. And getting then only a somewhat vague idea as to why systematic resampling failed to converge.

In this paper, Mathieu, Nicolas and Nick show that stratified sampling (where a uniform is generated on every interval of length 1/n) enjoys some form of consistency, while systematic sampling (where the “same” uniform is generated on every interval of length 1/n) does not necessarily enjoy this consistency. There actually exist cases where convergence does not occur. However, a residual version of systematic sampling (where systematic sampling is applied to the residuals of the decimal parts of the n-enlarged weights) is itself consistent.
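For concreteness, here is a plain-Python sketch of the three schemes under discussion (a toy implementation under my own naming, not the authors' code): all three invert the weights' cdf, and differ only in how the n uniforms are produced.

```python
import random
from bisect import bisect_left
from itertools import accumulate

def _invert_cdf(weights, uniforms):
    """Map uniforms in [0,1) to particle indices via the weights' inverse cdf."""
    cdf = list(accumulate(weights))
    total = cdf[-1]
    return [bisect_left(cdf, u * total) for u in uniforms]

def multinomial_resample(weights, rng):
    # n i.i.d. uniforms: unbiased but the noisiest of the three schemes
    n = len(weights)
    return _invert_cdf(weights, [rng.random() for _ in range(n)])

def stratified_resample(weights, rng):
    # one independent uniform on each interval [i/n, (i+1)/n): consistent
    n = len(weights)
    return _invert_cdf(weights, [(i + rng.random()) / n for i in range(n)])

def systematic_resample(weights, rng):
    # the *same* uniform shifted across all intervals: cheapest to generate,
    # but, as the paper shows, not necessarily consistent
    n = len(weights)
    u = rng.random()
    return _invert_cdf(weights, [(i + u) / n for i in range(n)])
```

Note that the stratified and systematic uniforms are increasing in i, so both return sorted index lists, and systematic resampling consumes a single random number per generation, which explains its popularity in SMC implementations.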

The paper also studies the surprising feature uncovered by Kitagawa (1996) that stratified sampling applied to an ordered sample brings an error of O(1/n²) between the cdfs, rather than the usual O(1/n). It took me a while to even understand the distinction between the original and the ordered version (maybe because Nicolas used the empirical cdf during his SAD (Stochastic Algorithm Day!) talk, an ecdf that is the same for ordered and initial samples). And both systematic and deterministic sampling become consistent in this case. The result was shown in dimension one by Kitagawa (1996) but extends to larger dimensions via the magical trick of the Hilbert curve.
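The ordering effect is easy to observe by simulation. The sketch below (my own toy setup, with arbitrary positive weights correlated with the values) compares the mean-squared error of the stratified-resampled average with and without prior sorting of the sample:

```python
import random
from bisect import bisect_left
from itertools import accumulate

def stratified_indices(weights, rng):
    # one uniform per stratum [i/n, (i+1)/n), pushed through the inverse cdf
    cdf = list(accumulate(weights))
    n, total = len(weights), cdf[-1]
    return [bisect_left(cdf, (i + rng.random()) / n * total) for i in range(n)]

def mse_of_mean(values, weights, reps, rng, sort_first):
    # mean-squared error of the resampled average of `values` against the
    # exact weighted mean, over `reps` independent resampling rounds
    pairs = sorted(zip(values, weights)) if sort_first else list(zip(values, weights))
    vs = [v for v, _ in pairs]
    ws = [w for _, w in pairs]
    truth = sum(v * w for v, w in pairs) / sum(ws)
    err = 0.0
    for _ in range(reps):
        idx = stratified_indices(ws, rng)
        err += (sum(vs[i] for i in idx) / len(idx) - truth) ** 2
    return err / reps

rng = random.Random(1)
n = 100
values = [rng.random() for _ in range(n)]
weights = [v + 0.1 for v in values]   # arbitrary positive weights (an assumption)

plain = mse_of_mean(values, weights, 400, rng, sort_first=False)
ordered = mse_of_mean(values, weights, 400, rng, sort_first=True)
# sorting first makes values within each stratum nearly identical, so the
# resampling error collapses: `ordered` should sit far below `plain`
```

With the sample ordered, each stratum of the cdf picks among near-equal values, which is where the O(1/n²) gain comes from; in the unordered sample, each stratum still sees the full spread of values.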

## about paradoxes

Posted in Books, Kids, Statistics, University life with tags bias, book review, email, Jacobian, Mark Chang, MLE, paradoxes, reparameterisation, scientific inference, The Bayesian Choice, unbiasedness on December 5, 2017 by xi'an

**A**n email I received earlier today about statistical paradoxes:

I am a PhD student in biostatistics, and an avid reader of your work. I recently came across this blog post, where you review a text on statistical paradoxes, and I was struck by this section:

I found this section provocative, but I am unclear on the nature of these “paradoxes”. I reviewed my stat inference notes and came across the classic example that there is no unbiased estimator for 1/p w.r.t. a binomial distribution, but I believe you are getting at a much more general result. If it’s not too much trouble, I would sincerely appreciate it if you could point me in the direction of a reference or provide a bit more detail for these two “paradoxes”.

The text is Chang’s Paradoxes in Scientific Inference, which I indeed reviewed negatively. To answer about the bias “paradox”, it is indeed a neglected fact that, while the average of *any* transform of a sample obviously is an unbiased estimator of its mean (!), the converse does not hold, namely, an *arbitrary* transform of the model parameter θ does not necessarily enjoy an unbiased estimator. In Lehmann and Casella, Chapter 2, Section 4, this issue is (just slightly) discussed. But essentially, transforms that lead to unbiased estimators are mostly the polynomial transforms of the mean parameters… (This also somewhat connects to a recent X validated question as to why MLEs are not always unbiased. Although the simplest explanation is that the transform of the MLE is the MLE of the transform!) In exponential families, I would deem the range of transforms with unbiased estimators closely related to the collection of functions that allow for inverse Laplace transforms, although I cannot quote a specific result on this hunch.
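The binomial case mentioned in the email can be checked exactly: for X ~ Binomial(n, p), the expectation of any estimator δ(X) is a polynomial of degree at most n in p, so polynomial transforms of p admit unbiased estimators while 1/p cannot. A quick sketch (the `expectation` helper is mine, not from any reference):

```python
from math import comb

def expectation(delta, n, p):
    """Exact E_p[delta(X)] for X ~ Binomial(n, p): a polynomial of degree <= n in p."""
    q = 1.0 - p
    return sum(delta(k) * comb(n, k) * p**k * q**(n - k) for k in range(n + 1))

n = 10
for p in (0.1, 0.37, 0.8):
    # X/n is unbiased for p, and X(X-1)/(n(n-1)) is unbiased for p^2:
    assert abs(expectation(lambda k: k / n, n, p) - p) < 1e-9
    assert abs(expectation(lambda k: k * (k - 1) / (n * (n - 1)), n, p) - p**2) < 1e-9

# By contrast, no delta can be unbiased for 1/p: E_p[delta(X)] tends to delta(0),
# a finite value, as p goes to 0, while 1/p diverges. For instance, the plug-in
# n/(X+1) carries a visible bias at p = 0.1:
p = 0.1
bias = expectation(lambda k: n / (k + 1), n, p) - 1 / p   # clearly nonzero
```

The limiting argument is the standard one: since E_p[δ(X)] is bounded near p = 0 whereas 1/p is not, no unbiased estimator of 1/p exists, whatever δ.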

The other “paradox” is that, if h(X) is the MLE of the model parameter θ for the observable X, the distribution of h(X) has a density different from the density of X and, hence, its maximisation in the parameter θ may differ. An example (my favourite!) is the MLE of ||a||² based on x~N(a,I), which is ||x||², a poor estimate, and which (strongly) differs from the MLE of ||a||² based on ||x||², which is close to (1-p/||x||²)²||x||² and (nearly) admissible [as discussed in the Bayesian Choice].
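How poor the plug-in ||x||² is can be seen in a toy simulation. Below, I compare it with a simple bias-corrected stand-in, max(||x||²-p, 0), exploiting E||x||² = ||a||²+p (this truncated correction is my own illustrative choice, not the exact MLE based on ||x||² discussed above):

```python
import random

def mse_pair(a, reps, rng):
    """Compare the naive MLE ||x||^2 of ||a||^2, for x ~ N(a, I_p), with the
    truncated bias correction max(||x||^2 - p, 0), by Monte Carlo MSE."""
    p = len(a)
    truth = sum(ai * ai for ai in a)
    naive_err = corrected_err = 0.0
    for _ in range(reps):
        x = [ai + rng.gauss(0.0, 1.0) for ai in a]
        s = sum(xi * xi for xi in x)
        naive_err += (s - truth) ** 2
        corrected_err += (max(s - p, 0.0) - truth) ** 2
    return naive_err / reps, corrected_err / reps

rng = random.Random(2)
a = [1.0, 1.0] + [0.0] * 8          # ||a||^2 = 2, dimension p = 10
naive, corrected = mse_pair(a, 2000, rng)
# the plug-in carries a bias of p = 10 and is clearly dominated here
```

The domination is most blatant when ||a||² is small relative to p, exactly the regime where the bias p of ||x||² swamps the signal.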

## 5 ways to fix statistics?!

Posted in Books, Kids, pictures, Statistics, University life with tags cartoon, falsehood flies and truth comes limping after it, Nature, p-values, poor statistics, predictability, reproducible research, uncertainty on December 4, 2017 by xi'an

**I**n the last issue of Nature (Nov 30), the comment section contains a series of opinions on the reproducibility crisis, by five [groups of] statisticians, including Blakeley McShane and Andrew Gelman, with whom [and others] I wrote a response to the seventy-author manifesto. The collection of comments is introduced with the curious sentence

“The problem is not our maths, but ourselves.”

Which I find problematic as (a) the problem is *never* with the maths, but possibly with the stats!, and (b) the problem stands in inadequate assumptions on the validity of “the” statistical model and in ignoring the resulting epistemic uncertainty. Jeff Leek‘s suggestion to improve the interface with users seems to fall short on that level, while David Colquhoun‘s Bayesian balance between p-values and false-positive rates only addresses well-specified models. Michèle Nuijten strikes closer to my perspective by arguing that rigorous rules are unlikely to help, due to the plethora of possible post-data modellings. And Steven Goodman’s putting the blame on the lack of statistical training of scientists (who “only want enough knowledge to run the statistical software that allows them to get their paper out quickly”) is wishful thinking: every scientific study [i.e., the overwhelming majority] involving data cannot involve a statistical expert and every paper involving data analysis cannot be reviewed by a statistical expert. I thus cannot but repeat the conclusion of Blakeley and Andrew:

“A crucial step is to move beyond the alchemy of binary statements about ‘an effect’ or ‘no effect’ with only a P value dividing them. Instead, researchers must accept uncertainty and embrace variation under different circumstances.”

## a quincunx on NBC

Posted in Books, Kids, pictures, Statistics with tags central limit theorem, FiveThirtyEight, Francis Galton, Galton Board, Napoléon Bonaparte, NBC, Pierre Simon Laplace, quincunx, The God Delusion, The Wall, TV-show on December 3, 2017 by xi'an

**T**hrough Five-Thirty-Eight, I became aware of a TV game called The Wall [so appropriate for Trumpian times!] that is essentially based on Galton’s quincunx! A huge [15m high!] version of Galton’s quincunx, with seven possible starting positions instead of one, which defeats the whole point of the apparatus, namely to demonstrate by simulation the proximity of the Binomial distribution to the limiting Normal (density) curve.

But the TV game has obviously no interest in the CLT, or in the Beta-binomial posterior, only in a visible sequence of binary events that increase or decrease the money “earned” by the player, the highest sums being unsurprisingly less likely. The only decision made by the player is to pick one of the seven starting points (meaning the outcome should behave like a weighted sum of seven Normals with drifted means depending on the probabilities of choosing these starting points). I found one blog entry analysing an “idiot” strategy of playing the game, but not the entire game. (Except for this entry on the older Plinko.) And Five-Thirty-Eight surprisingly does not get into the optimal strategies to play this game (maybe because there is none!). Five-Thirty-Eight also reproduces the apocryphal quote of Laplace not requiring this [God] hypothesis.
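The mixture-of-drifted-Binomials behaviour is trivial to simulate. A minimal sketch (my own toy model of the board, with a uniform choice of the seven starting slots as a simplifying assumption, and an even number of peg rows to keep landings on integer slots):

```python
import random
from collections import Counter

def drop_ball(start, rows, rng):
    """One ball on a quincunx: from a starting slot, each row of pegs shifts
    the ball left or right by half a slot with equal probability."""
    pos = 2 * start            # work in half-slot units to stay integer
    for _ in range(rows):
        pos += rng.choice((-1, 1))
    return pos / 2             # integer-valued when `rows` is even

rng = random.Random(3)
rows, balls = 12, 20000
# seven possible starting slots, chosen uniformly here (an assumption):
landings = Counter(drop_ball(rng.randrange(7), rows, rng) for _ in range(balls))

# for a single fixed start, the landing law is a centred Binomial, close to a
# Normal(start, rows/4) bell by the CLT; mixing over starts gives a weighted
# superposition of seven such bells, which is what The Wall's player faces
```

With a single starting position the histogram of `landings` reproduces Galton's bell; with seven, it is a mixture whose shape depends entirely on how the player picks the start, which is indeed the only decision in the game.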

*[Note: When looking for a picture of the Quincunx, I also found this desktop version! Which “allows you to visualize the order embedded in the chaos of randomness”, nothing less. And has even obtained a patent for this “visual aid that demonstrates [sic] a random walk and generates [re-sic] a bell curve distribution”…]*