Archive for Nature

rage against the [Nature] Machine [Intelligence]

Posted in Books, Statistics, University life on May 15, 2018 by xi'an

Yesterday evening, my friend and colleague Pierre Alquier (CREST-ENSAE) got interviewed (for a few seconds on-line!, around minute 06) by the French national radio, France Culture, about the recent call to boycott the incoming Nature Machine Intelligence electronic journal. A call addressed to the machine learning community, based on the fact that none of the major machine learning journals, like JMLR, charge for access. Meaning that related conferences like AISTATS and NIPS also make their accepted papers available on-line for free. As noted in the call

“Machine learning has been at the forefront of the movement for free and open access to research. For example, in 2001 the Editorial Board of the Machine Learning Journal resigned en masse to form a new zero-cost open access journal, the Journal of Machine Learning Research (JMLR).”

certified randomness, 187m away…

Posted in Statistics on May 3, 2018 by xi'an

As rarely happens with Nature, I just read an article that directly relates to my research interests, about a secure physical random number generator (RNG). By Peter Bierhorst and co-authors, mostly physicists apparently. Security here means that the outcome of the RNG is unpredictable. This very peculiar RNG is based on two correlated photons sent to two measuring stations, separated by at least 187m, which have to display unpredictable outcomes in order to respect the impossibility of faster-than-light communications, otherwise known as Bell inequalities. This is hardly practical though, especially when mentioning that the authors managed to produce 2¹⁰ random bits over 10 minutes, post-processing “the measurement of 55 million photon pairs”. (I however fail to see why the two-arm apparatus would be needed for regular random generation, as it seems relevant solely for the demonstration of randomness.) I also checked the associated supplementary material, which is mostly about proving some total variation bound, and constructing a Bell function. What is most puzzling in this paper (and the associated supplementary material) is the (apparent) lack of guarantee of uniformity of the RNG. For instance, a sentence (Supplementary Material, p.11) about a distribution being “within TV distance of uniform” hints that the method is not provably uniform, which makes the whole exercise incomprehensible…
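For what it is worth, the total variation criterion invoked in the supplementary material is easy enough to probe empirically: here is a minimal sketch (the function name, and the choice of summarising the output as non-overlapping blocks of bits, are mine, not the paper's) of the empirical TV distance between a bit stream and the uniform distribution.

```python
from collections import Counter

def tv_distance_from_uniform(bits, block=8):
    """Empirical total variation distance between the distribution of
    non-overlapping `block`-bit words in `bits` and the uniform
    distribution over all 2^block words."""
    words = [tuple(bits[i:i + block])
             for i in range(0, len(bits) - block + 1, block)]
    counts = Counter(words)
    n = len(words)
    p_unif = 1.0 / 2 ** block
    # TV = (1/2) * sum of |empirical - uniform| over all 2^block words;
    # words never observed each contribute p_unif to the sum
    observed = sum(abs(c / n - p_unif) for c in counts.values())
    unobserved = (2 ** block - len(counts)) * p_unif
    return 0.5 * (observed + unobserved)
```

A balanced stream gives a distance near zero, a constant stream gives the maximal 1/2 (for block=1), which is of course only a sanity check, not the certified bound of the paper.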

Nature snapshots

Posted in Books, pictures, University life on April 2, 2018 by xi'an

In this 15 March issue of Nature, a rather puzzling article on altruism that seemed to rely on the Hamilton rule, linking fitness and number of offspring and benefits through kth-order moments, in a stochastic environment represented by a distribution π. Hard to fathom what comes from data and what follows from this (hypothetical) model. Plus a proposal for geoengineering meltdown delays on some Greenland and Antarctica glaciers. Scary. And a film review of Iceman (Der Mann aus dem Eis), retracing Ötzi’s life in an archaeologically acceptable rendering. Including a reconstituted local language, Rhaetic.

gender gaps

Posted in Statistics, University life on March 31, 2018 by xi'an

Two of my colleagues [and co-authors] at Dauphine, Elyès Jouini and Clotilde Napp, published a paper in Science last week (and an associated tribune in Le Monde which I spotted first) about explaining differences in national gender inequalities in maths (as measured by PISA) in terms of the degree of overall inequality in the respective countries, as reflected in gaps in the sex ratio among the highest maths performers. While I have no qualms about the dependency or the overall statistical cum machine learning analysis (supported by our common co-author Jean-Michel Marin), and while I obviously know nothing about the topic!, I leisurely wonder at the cultural factor (which may also partly account for the degree of inequality) when considering that the countries at the bottom of the above graphs are rather religious (and mostly catholic). I also find it most intriguing that the gender gap is consistently reversed when considering the higher performer sex ratio for reading, because mastering the language should be a strong factor in power structures and hence differences therein should also lead to inequalities…

Bayesian maps of Africa

Posted in pictures, Statistics on March 21, 2018 by xi'an

A rather special issue of Nature this week (1 March 2018) as it addresses Bayesian geo-cartography and mapping childhood growth failure and educational achievement (along with sex differences) all across Africa! Including the (nice) cover of the journal, a preface by Kofi Annan, a cover article by Brian Reich and Murali Haran, and the first two major articles of the journal, one of which includes Ewan Cameron as a co-author. As I was reading this issue of Nature in the train back from Brussels, I could not access the supplementary material, so could not look at the specifics of the statistics, but the maps look quite impressive, with a 5×5 km resolution. And the inclusion not only of uncertainty maps but also of predictive maps on the probability of achieving the WHO 2025 goals. Surprisingly close to one in some parts of Africa. In terms of education, there are strong oppositions between different regions, with the south of the continent, including Madagascar, showing a positive difference for women in terms of years of education. While there is no reason (from my train seat) to doubt the statistical analyses, I take quite seriously the reservation of the authors that the quality of the prediction cannot be better than the quality of the data, which is “determined by the volume and fidelity of nationally representative surveys”. Which relates to an earlier post of mine about a similar concern with the deaths in Congo.

AlphaGo [100 to] zero

Posted in Books, pictures, Statistics, Travel on December 12, 2017 by xi'an

While in Warwick last week, I read a few times through the Nature article on AlphaGo Zero, the new DeepMind program that learned to play Go by itself, through self-play, within a few clock days, and achieved massive superiority (100 to 0) over the earlier version of the program, which (who?!) was based on a massive data-base of human games. (A Nature paper I also read while in Warwick!) From my remote perspective, the neural network associated with AlphaGo Zero seems more straightforward than the double network of the earlier version. It is solely based on the board state and returns a probability vector p for all possible moves, as well as the probability of winning from the current position. There are still intermediary probabilities π produced by a Monte Carlo tree search, which drive the computation of a final board, the (reinforced) learning aiming at bringing p and π as close as possible, via a loss function like

(z − v)² − ⟨π, log p⟩ + c‖θ‖²

where z is the game winner and θ is the vector of parameters of the neural network. (Details obviously missing above!) The achievements of this new version are even more impressive than those of the earlier one (which managed to systematically beat top Go players) in that blind exploration of game moves repeated over some five million games produced a much better AI player. With a strategy at times remaining a mystery to Go players.
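For concreteness, the loss above can be sketched in a few lines; this is a toy per-position version (function name, calling convention, and the default value of c are mine, and the actual training of course optimises a single network jointly over mini-batches):

```python
import numpy as np

def alphago_zero_loss(z, v, pi, p, theta, c=1e-4):
    """Per-position loss as in the formula above:
    squared value error + cross-entropy between the MCTS visit
    distribution pi and the network move probabilities p,
    plus an L2 penalty on the network parameters theta.
    z  : game outcome from the current player's view (e.g. +1 or -1)
    v  : network value prediction for the position
    pi : MCTS-derived move probabilities (sums to 1)
    p  : network move probabilities (sums to 1, all entries > 0)
    """
    value_loss = (z - v) ** 2            # (z - v)^2
    policy_loss = -np.dot(pi, np.log(p)) # -<pi, log p>
    l2_penalty = c * np.dot(theta, theta)  # c * ||theta||^2
    return value_loss + policy_loss + l2_penalty
```

When p already matches π and v matches z, only the entropy of π and the regularisation term remain, which is why the policy term drives p towards the tree-search probabilities.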

Incidentally a two-page paper appeared on arXiv today with the title Demystifying AlphaGo Zero, by Don, Wu, and Zhou. Which sets AlphaGo Zero as a special generative adversarial network. And invoking Wasserstein distance as solving the convergence of the network. To conclude that “it’s not [sic] surprising that AlphaGo Zero show [sic] a good convergence property”… A most perplexing inclusion in arXiv, I would say.

5 ways to fix statistics?!

Posted in Books, Kids, pictures, Statistics, University life on December 4, 2017 by xi'an

In the last issue of Nature (Nov 30), the comment section contains a series of opinions on the reproducibility crisis, by five [groups of] statisticians. Including Blakeley McShane and Andrew Gelman, with whom [and others] I wrote a response to the seventy-author manifesto. The collection of comments is introduced with the curious sentence

“The problem is not our maths, but ourselves.”

Which I find problematic as (a) the problem is never with the maths, but possibly with the stats!, and (b) the problem stands in inadequate assumptions on the validity of “the” statistical model and in ignoring the resulting epistemic uncertainty. Jeff Leek‘s suggestion to improve the interface with users seems to come short on that level, while David Colquhoun‘s Bayesian balance between p-values and false-positive rates only addresses well-specified models. Michèle Nuijten strikes closer to my perspective by arguing that rigorous rules are unlikely to help, due to the plethora of possible post-data modellings. And Steven Goodman’s putting the blame on the lack of statistical training of scientists (who “only want enough knowledge to run the statistical software that allows them to get their paper out quickly”) is wishful thinking: every scientific study [i.e., the overwhelming majority] involving data cannot involve a statistical expert and every paper involving data analysis cannot be reviewed by a statistical expert. I thus cannot but repeat the conclusion of Blakeley and Andrew:

“A crucial step is to move beyond the alchemy of binary statements about ‘an effect’ or ‘no effect’ with only a P value dividing them. Instead, researchers must accept uncertainty and embrace variation under different circumstances.”