Archive for NeurIPS

Nature tidbits [the Bayesian brain]

Posted in Statistics with tags ABC, deep learning, DeepMind, desert locust, Harvard University, Human Genetics, Isaac Asimov, memristors, neural network, NeurIPS, p-values, SNPs, UCL, University College London, Vancouver on March 8, 2020 by xi'an

In the latest Nature issue, a long cover story on Asimov's contributions to science and rationality. And a five-page article on the dopamine reward signal in the brain being represented as a probability distribution, an interpretation dubbed distributional reinforcement learning by researchers from DeepMind, UCL, and Harvard. Going as far as "testing" for this theory with a p-value of 0.008..! Which could just as well be a signal of variability between neurons in their response to dopamine rewards (with a p-value of 10⁻¹⁴, whatever that means). Another article on deep learning for protein (3D) structure prediction. And another one on learning neural networks via specially designed devices called memristors. And yet another one on West African population genetics, based on four individuals from the Stone to Metal Age (between 8000 and 3000 years ago), SNPs, PCA, and admixtures. With no ABC mentioned (I no longer have access to the journal, having missed the renewal deadline for my subscription!). And the literal plague of a locust invasion in Eastern Africa. Making me wonder anew why proteins could not be recovered from the swarms of locusts to partly compensate for the damage. (Locusts eat their bodyweight in food every day.) And the latest news from NeurIPS about diversity and inclusion. And ethics, as in checking for the responsibility and societal consequences of research papers. Reviewing the maths of a submitted paper or the reproducibility of an experiment is already challenging at times, but evaluating the biases in massive proprietary datasets or the long-term societal impact of a classification algorithm may prove beyond what is realistic.
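A toy illustration of the distributional idea (my own sketch in Python, with a made-up bimodal reward distribution, not the actual model or data of the Nature paper): rather than a single estimate tracking the mean reward, a population of predictors is updated with asymmetric learning rates, quantile-regression style, so that each one settles on a different quantile of the reward distribution and the population as a whole encodes that distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bimodal reward distribution, standing in for a stochastic task.
def sample_reward():
    return rng.normal(1.0, 0.2) if rng.random() < 0.7 else rng.normal(5.0, 0.5)

n_units = 20                                   # value predictors, one per "neuron"
taus = (np.arange(n_units) + 0.5) / n_units    # each unit targets a different quantile level
values = np.zeros(n_units)                     # per-unit reward predictions
lr = 0.05

for _ in range(50_000):
    r = sample_reward()
    delta = r - values                         # per-unit reward prediction errors
    # Asymmetric update: positive errors weighted by tau, negative ones by tau - 1,
    # so unit i drifts towards the tau_i-quantile rather than the common mean.
    values += lr * (taus - (delta < 0))

print(np.round(values, 2))   # the spread of predictions approximates the reward quantiles
```

The point of the sketch is only that a population of otherwise identical learners, differing solely in how they weight positive versus negative prediction errors, ends up carrying distributional information about rewards.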
NeurIPS without visa

Posted in pictures, Statistics, Travel, University life with tags Addis Abeba, Africa, Canada, conferences, Ethiopia, ethiopian food, Human Rights, LGBT rights, lottery, NeurIPS, Synced, USA, Vancouver, visa on September 22, 2019 by xi'an
I came by chance upon this 2018 entry in Synced that NeurIPS now takes place in Canada, between Montréal and Vancouver, primarily because visas to Canada are easier to get than visas to the USA, even though some researchers still face difficulties in securing theirs. Especially researchers from some African countries, which the article exposes as one of the reasons the next ICLR takes place in Addis Ababa. Which I wish I could attend! In the meanwhile, I will be taking part in an ABC workshop in Vancouver on December 08, prior to NeurIPS 2019, before visiting the Department of Statistics at UBC the day after. (My previous visit there was in 1990, I believe!) Incidentally but interestingly, the lottery entries for NeurIPS 2019 are open till September 25 to the public (those not contributing to the conference or any of its affiliated groups). This is certainly better than having bots buy all entries within 12 minutes of the opening time!
More globally, this entry makes me wonder how learned societies could invest in ensuring that the locations of their (international) meetings allow for maximum inclusion in terms of these visa difficulties, while also ensuring freedom and safety for all members. Which may prove a de facto impossibility. For instance, Ethiopia has a rather poor record in terms of human rights and, in particular, homosexuality is criminalised there. An alternative would be to hold the conferences in parallel locations chosen to multiply the chances for such inclusion, but this could prove counter-productive [for inclusion] by creating groups that would never ever meet. An insolvable conundrum?
the Montréal declarAIon
Posted in University life with tags AI, Alan Turing, Alan Turing award, artificial intelligence, ethics, Europe, France, Le Monde, Montréal, NeurIPS, NIPS, plan Intelligence artificielle, Québec on April 27, 2019 by xi'an

In conjunction with Yoshua Bengio being one of the three recipients of the 2018 Alan Turing award, Nature ran an interview with him about the Montréal Déclaration for a responsible AI, which he launched at NeurIPS last December.
“Self-regulation is not going to work. Do you think that voluntary taxation works? It doesn’t.”
Reflecting on the dangers of abuse of and by AIs, from surveillance to discrimination, but being somewhat unclear on the means to implement the ten generic principles listed there. (I had missed the Declaration when it came out.) While I agree with the principles stressed by this list, well-being, autonomy, privacy, democracy, diversity, prudence, responsibility, and sustainability, it remains to be seen how they can be imposed upon corporations, whose own public image puts more restraint on them than ethics, or on governments that are all too ready to automatise justice, police, and the restriction of citizens' rights. Which makes the construction of a responsible AI institution difficult to imagine, if the current lack of outreach of extra-national institutions is any gauge. (A striking coincidence is that, when Yoshua Bengio pointed out that France was trying to make Europe an AI power, there was also an op-ed in Le Monde about the lack of practical impact of this call to arms, apart from more academics moving to half-time positions in private companies.) [Warning: the picture does not work well with the dark background theme of this blog.]