Archive for DeepMind
matrix multiplication [cover]
Posted in Books, pictures, Statistics, University life with tags algorithms, AlphaTensor, cover, deep learning, deep neural network, DeepMind, Google, London, matrix algebra, matrix multiplication, Monte Carlo algorithm, Nature, reinforcement learning, tensor, UK on December 15, 2022 by xi'an
Nature tidbits [the Bayesian brain]
Posted in Statistics with tags ABC, deep learning, DeepMind, desert locust, Harvard University, Human Genetics, Isaac Asimov, memristors, neural network, NeurIPS, p-values, SNPs, UCL, University College London, Vancouver on March 8, 2020 by xi'an
In the latest Nature issue, a long cover story on Asimov’s contributions to science and rationality. And a five-page article on the dopamine reward in the brain seen as a probability distribution, interpreted as distributional reinforcement learning by researchers from DeepMind, UCL, and Harvard. Going as far as “testing” for this theory with a p-value of 0.008..! Which could as well be a signal of variability between neurons in their response to dopamine rewards (with a p-value of 10⁻¹⁴, whatever that means). Another article about deep learning for protein (3D) structure prediction. And another one about training neural networks via specially designed devices called memristors. And yet another one on West Africa population genetics based on four individuals from the Stone to Metal age (8000 to 3000 years ago), SNPs, PCA, and admixtures. With no ABC mentioned (I no longer have access to the journal, having missed renewal time for my subscription!). And the literal plague of a locust invasion in Eastern Africa. Making me wonder anew as to why proteins could not be recovered from the swarms of locusts to partly compensate for the damage. (Locusts eat their bodyweight in food every day.) And the latest news from NeurIPS about diversity and inclusion. And ethics, as in checking for responsibility and societal consequences of research papers. Reviewing the maths of a submitted paper or the reproducibility of an experiment is already challenging at times, but evaluating the biases in massive proprietary datasets or the long-term societal impact of a classification algorithm may prove beyond the realistic.
AlphaGo [100 to] zero
Posted in Books, pictures, Statistics, Travel with tags DeepMind, Go, Nature, University of Warwick on December 12, 2017 by xi'an
While in Warwick last week, I read a few times through the Nature article on AlphaGo Zero, the new DeepMind program that learned to play Go by itself, through self-play, within a few clock days, and achieved massive superiority (100 to 0) over the earlier version of the program, which (who?!) was based on a massive database of human games. (A Nature paper I also read while in Warwick!) From my remote perspective, the neural network associated with AlphaGo Zero seems more straightforward than the double network of the earlier version. It is solely based on the board state and returns a probability vector p over all possible moves, as well as the probability v of winning from the current position. There are still intermediary probabilities π produced by a Monte Carlo tree search, which drive the computation of a final board, the (reinforced) learning aiming at bringing p and π as close as possible, via a loss function like
(z−v)² − ⟨π, log p⟩ + c‖θ‖²
where z is the game winner and θ is the vector of parameters of the neural network. (Details obviously missing above!) The achievements of this new version are even more impressive than those of the earlier one (which managed to systematically beat top Go players) in that blind exploration of game moves repeated over some five million games produced a much better AI player. With a strategy at times remaining a mystery to Go players.
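The three terms of this loss (value error, policy cross-entropy, L2 penalty) can be sketched numerically; this is a minimal illustration with NumPy, not DeepMind's implementation, and the function name and toy values are my own:

```python
import numpy as np

def alphago_zero_loss(z, v, pi, p, theta, c=1e-4):
    """Sketch of the AlphaGo Zero loss (z - v)^2 - <pi, log p> + c * ||theta||^2:
    squared error between game outcome z and predicted value v,
    cross-entropy between MCTS search probabilities pi and
    network move probabilities p, plus L2 regularisation on
    the network parameters theta (hypothetical helper)."""
    value_loss = (z - v) ** 2
    policy_loss = -np.dot(pi, np.log(p))   # -<pi, log p>
    l2_penalty = c * np.dot(theta, theta)  # c * ||theta||^2
    return value_loss + policy_loss + l2_penalty

# toy position with three legal moves
pi = np.array([0.7, 0.2, 0.1])   # MCTS visit probabilities
p = np.array([0.6, 0.3, 0.1])    # network move probabilities
loss = alphago_zero_loss(z=1.0, v=0.8, pi=pi, p=p, theta=np.zeros(10))
```

Minimising the cross-entropy term pulls p toward the search probabilities π, which is the "reinforced" learning step described above.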
Incidentally a two-page paper appeared on arXiv today with the title Demystifying AlphaGo Zero, by Don, Wu, and Zhou. Which sets AlphaGo Zero as a special generative adversarial network. And invoking Wasserstein distance as solving the convergence of the network. To conclude that “it’s not [sic] surprising that AlphaGo Zero show [sic] a good convergence property”… A most perplexing inclusion in arXiv, I would say.
the DeepMind debacle
Posted in Books, Statistics, Travel with tags data privacy, DeepMind, Monte Rosa, Nature, personalised medicine, Royal Statistical Society, RSS, topology, vacations on August 19, 2017 by xi'an
“I hope for a world where data is at the heart of understanding and decision making. To achieve this we need better public dialogue.” Hetan Shah
As I was reading one of the Nature issues I brought on vacations, while the rain was falling on an aborted hiking day on the fringes of Monte Rosa, I came across a 20 July tribune by Hetan Shah, executive director of the RSS. A rare occurrence of a statistician’s perspective in Nature. The event prompting this column is the ruling against the Royal Free London hospital group providing patient data to DeepMind for predicting kidney injury. Without the patients’ agreement. And with enough information to identify the patients. The issues raised by Hetan Shah are that data transfers should become open, and that they should be commensurate in volume and details to the intended goals. And that public approval should be sought. While I know nothing about this specific case, I find the article overly critical of DeepMind, whose interest in health-related problems is certainly not pure and disinterested but can nonetheless contribute advances in (personalised) care and prevention through its expertise in machine learning. (Disclaimer: I have neither connection nor conflict with the company!) And I do not see exactly how public approval or dialogue can help in making progress in handling data, unless I am mistaken in my understanding of “the public”. The article mentions the launch of a UK project on data ethics, involving several [public] institutions like the RSS: this is certainly commendable and may improve how personal data is handled by companies, but I would not call this conglomerate representative of the public, which most likely does not really trust these institutions either…