Archive for AI

Nature Outlook on AI

Posted in Statistics with tags , , , , , , , , , , , , , , , on January 13, 2019 by xi'an

The 29 November 2018 issue of Nature had a series of papers on AIs (in its Outlook section), pitched at the general-public (awareness) level rather than as in-depth machine-learning articles. Including one on the forecasted consequences of ever-growing automation on jobs, quoting from a 2013 paper by Carl Frey and Michael Osborne [of probabilistic numerics fame!] that up to 47% of US jobs could become automated. The paper is inconclusive on how taxation could help in or deter from transferring jobs to other branches, although mentioning the cascading effect of taxing labour and subsidizing capital. Another article covers the progress in digital government, with Estonia as a role model, including the risks of hacking (but not mentioning Russia’s state-driven attacks). Differential privacy is discussed as a way to keep data “secure” (but not cryptography à la Louis Aslett!). With another surprising entry that COBOL is still in use in some administrative systems. Followed by a paper on the apparently limited impact of digital technologies on mental health, despite the advertising efforts of big tech companies being described as a “race to the bottom of the brain stem”! And another one on (overblown) public expectations on AIs, although the New York Times had an entry yesterday on people in Arizona attacking self-driving cars with stones and pipes… Plus a paper on the growing difficulties of saving online documents and culture for the future (although saving all tweets ever published does not sound like a major priority to me!).
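
As an aside on the differential privacy mentioned above, the core mechanism is simple enough to sketch in a few lines. Below is a minimal toy version of the Laplace mechanism for a counting query (sensitivity 1), written from scratch for illustration only; the function names and the inverse-CDF sampler are my own, not from any of the Nature papers.

```python
import math
import random

def laplace(scale):
    """Draw one Laplace(0, scale) variate by inverse-CDF sampling."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(values, predicate, epsilon):
    """Noisy count of values satisfying predicate: a counting query has
    sensitivity 1, so adding Laplace(1/epsilon) noise gives epsilon-DP."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace(1.0 / epsilon)
```

The smaller the privacy budget epsilon, the larger the noise added to the released count, which is the whole trade-off between “secure” data and useful statistics.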

Interesting (?) aside, the same issue contains a general public article on the use of AIs for peer reviews (of submitted papers). The claim being that “peer review by artificial intelligence (AI) is promising to improve the process, boost the quality of published papers — and save reviewers time.” A wee bit over-optimistic, I would say, as the AIs developed so far at best check “that statistics and methods in manuscripts are sound”. For instance, producing “key concepts to summarize what the paper is about” is not particularly useful. Assessing a degree of innovation compared with the existing literature would be. Or an automated way to adapt the paper style to the strict and somewhat elusive Biometrika style!

AIQ [book review]

Posted in Books, Statistics with tags , , , , , , , , , , , , , , , , , , on January 11, 2019 by xi'an

AIQ was my Christmas day read, which I mostly read while the rest of the household was still sleeping. The book, written by two Bayesians, Nick Polson and James Scott, was published before the ISBA meeting last year, but I only bought it on my last trip to Warwick [as a Xmas present]. This is a pleasant book to read, especially while drinking tea by the fire!, well-written and full of facts and anecdotes I did not know or had forgotten (more below). Intended for a general audience, it is also quite light, from a technical side, rather obviously, but also from a philosophical side. While strongly positivist about the potential of AIs for the general good, it cannot be seen as an antidote to the doomlike Superintelligence by Nick Bostrom or the more factual Weapons of Math Destruction by Cathy O’Neil. (Both commented on the ‘Og.)

Indeed, I find the book quite benevolent and maybe a wee bit too rosy in its assessment of AIs, and the discussion on how Facebook and Russian intervention may have significantly helped to turn the White House orange is missing [imho] the viral nature of the game, when endless loops of highly targeted posts can cut people off from the most basic common sense. While the authors are “optimistic that, given the chance, people can be smart enough”, I do reflect on the sheer fact that the hoax that Hillary Clinton was involved in a child sex ring was ever considered seriously by people. To the point of someone shooting at the pizza restaurant. And I hence am much less optimistic about the ability of a large enough portion of the population, not even the majority, to keep a critical distance from the message carried by AI-driven media. Similarly, while Nick and James point out (rather late in the book) that big data (meaning large data) is not necessarily good data for being unrepresentative of the population at large, they do not propose (in the book) highly convincing solutions to battle bias in existing and incoming AIs. Leading to a global worry that AIs may do well for a majority of the population and discriminate against a minority by the same reasoning. As described in Cathy O’Neil‘s book, and elsewhere, proprietary software does not even have to explain why it discriminates. More globally, the business school environment of the authors may have prevented them from stating a worry on the massive power grab by the AI-based companies, which generically grow with little interest in democracy and states, as shown (again) by the recent election or their systematic fiscal optimisation. Or by the massive recourse to machine learning by Chinese authorities towards a social credit system grade for all citizens.

“La rage de vouloir conclure est une des manies les plus funestes et les plus stériles qui appartiennent à l’humanité. Chaque religion et chaque philosophie a prétendu avoir Dieu à elle, toiser l’infini et connaître la recette du bonheur.” Gustave Flaubert

I did not know about Henrietta Leavitt’s prediction rule for pulsating stars, behind Hubble’s discovery, which sounds like an astronomy dual to Rosalind Franklin’s DNA contribution. The use of Bayes’ rule for locating lost vessels is also found in The Theorem that would not die. Although I would have also mentioned its failure in locating Malaysia Airlines Flight 370. I had also never heard the great expression of “model rust”. Nor the above quote from Flaubert. I seem to have recently spotted the story on how a 180° switch in perspective on language understanding by machines brought the massive improvement that we witness today. But I cannot remember where. And I have also read about Newton missing the boat on the accuracy of the coinage (was it in Bryson’s book on the Royal Society?!), but with less neutral views on the role of Newton in the matter, as the Laplace of England would have benefited from keeping the lax measures of assessment.
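
The Bayesian search idea behind those vessel recoveries fits in a few lines: fail to find the wreck in the cell you searched, and Bayes’ rule moves posterior mass to the other cells. A toy sketch (my own illustrative function, not from the book):

```python
def update_after_miss(prior, searched, p_detect):
    """Posterior over grid cells after an unsuccessful search of one cell.
    prior: dict cell -> probability; p_detect: chance of finding the wreck
    given that we searched the correct cell."""
    norm = 1.0 - prior[searched] * p_detect   # P(search comes up empty)
    return {cell: p * ((1.0 - p_detect) if cell == searched else 1.0) / norm
            for cell, p in prior.items()}
```

Repeatedly searching the currently most probable cell and updating in this way is essentially the strategy used for the USS Scorpion and Air France 447 searches.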

Great to see friendly figures like Luke Bornn and Katherine Heller appearing in the pages. Luke for his work on the statistical analysis of basketball games, Katherine for her work on predictive analytics in medicine. Reflecting on the missed opportunities represented by the accumulation of data on any patient throughout their life, which is as grossly ignored nowadays as it was in Nightingale‘s time. The message of the chapter [on “The Lady with the Lamp”] may again be somewhat over-optimistic: while AI and health companies see clear incentives in developing more encompassing prediction and diagnostic techniques, this will only benefit patients who can afford the ensuing care. Which, given the state of health care systems in the most developed countries, is a decreasing proportion. Not to mention the less developed countries.

Overall, a nice read for the general public, de-dramatising the rise of the machines!, and mixing statistics and machine learning to explain the (human) intelligence behind the AIs. Nothing on the technical side, to be sure, but this was not the intention of the authors.

crowdsourcing, data science & machine learning to measure violence & abuse against women on twitter

Posted in Books, Statistics, University life with tags , , , , , , , , , on January 3, 2019 by xi'an

Amnesty International just released on December 18 a study on abuse and harassment on the twitter accounts of female politicians and journalists in the US and the UK. Realised through the collaboration of thousands of crowdsourced volunteers labelling tweets from the database and the machine-learning expertise of the London branch of ElementAI, a branch driven by my friend Julien Cornebise with the main purpose of producing AI for good (as he explained at the recent Bayes for good workshop). Including the development of an ML tool to detect abusive tweets, called Troll Patrol [whose pun is clear in French!]. The amount of abuse exposed by this study and the possibility to train AIs to spot [some of the] abuse on line are both arguments that support Amnesty International’s call for the accountability of social media companies like twitter on abuse and violence propagated through their platform. (Methodology is also made available there.)

Nature tidbits

Posted in Books, Statistics, University life with tags , , , , , , , , , , , on September 18, 2018 by xi'an

In the Nature issue of July 19 that I read in the plane to Singapore, there was a whole lot of interesting entries, from various calls expressing deep concern about the anti-scientific stance of the Trump administration, like cutting funds for environmental regulation and restricting freedom of communication (EPA) or naming a non-scientist at the head of NASA and other agencies, or again restricting the protection of species, to a testimony of an Argentinian biologist in front of a congressional committee about the legalisation of abortion (which failed at the level of the Argentinian senate later this month), to a DNA-like version of neural networks, to Louis Chen from NUS being mentioned in a career article about the importance of planning well in advance one’s retirement to preserve academic links and manage a new position or even career. Which is what happened to Louis as he stayed head of his institute at NUS after the mandatory retirement age and is now emeritus and still engaged in research. (The article made me wonder however how the cases therein had been selected.) It is actually most revealing to see how differently countries approach the question of retirement of academics: in France, for instance, one is essentially forced to retire and, while there exist emeritus positions, it is extremely difficult to find funding.

“Louis Chen was technically meant to retire in 2005. The mathematician at the National University of Singapore was turning 65, the university’s official retirement age. But he was only five years into his tenure as director of the university’s new Institute for Mathematical Sciences, and the university wanted him to stay on. So he remained for seven more years, stepping down in 2012. Over the next 18 months, he travelled and had knee surgery, before returning in summer 2014 to teach graduate courses for a year.”

And [yet] another piece on the biases of AIs, reproducing earlier papers discussed here. With one obvious reason being that the learning corpus is not representative of the whole population: maybe survey sampling should become compulsory in machine-learning training degrees. And yet another piece on why protectionism is (also) bad for the environment.
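
To make the survey-sampling point concrete, the standard fix when strata are over- or under-represented in a sample is post-stratification: average within each stratum, then recombine with the known population shares. A minimal sketch (illustrative names and toy data of my own):

```python
from collections import Counter, defaultdict

def poststratified_mean(sample, pop_shares):
    """Correct an unrepresentative sample: average each stratum separately,
    then recombine with the known population shares.
    sample: list of (stratum, y) pairs; pop_shares: stratum -> share."""
    totals, counts = defaultdict(float), Counter()
    for stratum, y in sample:
        totals[stratum] += y
        counts[stratum] += 1
    return sum(pop_shares[s] * totals[s] / counts[s] for s in counts)
```

With a sample that is 75% “urban” drawn from a 50/50 population, the naive mean is biased towards the urban stratum while the post-stratified estimate recovers the population-level quantity; the same logic applies to a learning corpus that over-represents one group.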

bitcoin and cryptography for statistical inference and AI

Posted in Books, Mountains, pictures, Running, Statistics, Travel, University life with tags , , , , , , , , , , , , on April 16, 2018 by xi'an

A recent news editorial in Nature (15 March issue) reminded me of the lectures Louis Aslett gave at the Gregynog Statistical Conference last week, on the advanced use of cryptography tools to analyse sensitive and private data. Lectures that reminded me of a graduate course I took on cryptography and coding, at Paris 6, and which led me to visit a lab at the Université de Limoges during my conscripted year in the French Navy. With no research outcome. Now, the notion of using encrypted data towards statistical analysis is fascinating in that it may allow for efficient inference and personal data protection at the same time. As opposed to earlier solutions of anonymisation that introduced noise and data degradation, not always providing sufficient protection of privacy. Encryption that is also the notion at the basis of the Nature editorial. An issue completely missing from the editorial, while stressed by Louis, is that this encryption (like Bitcoin’s) is computationally costly, in order to deter hacking, and hence energy inefficient. Or it limits the amount of data that can be used in such studies, which would turn the idea into a stillborn notion.
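
The flavour of computing a statistic without seeing the individual data can be illustrated without full homomorphic encryption, via additive masking (a much simpler stand-in for the cryptographic tools Louis discussed, written here purely for illustration): each party perturbs its value with pairwise random masks that cancel in the total, so an aggregator recovers the sum but no single contribution.

```python
import random

def masked_shares(values, modulus=2**31):
    """Secure-aggregation sketch: each pair (i, j) shares a random mask r,
    added by party i and subtracted by party j. The masks cancel in the
    total, so the sum is preserved (mod modulus) while each individual
    share looks random."""
    n = len(values)
    offsets = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            r = random.randrange(modulus)
            offsets[i] += r        # party i adds the shared mask
            offsets[j] -= r        # party j subtracts it
    return [(v + o) % modulus for v, o in zip(values, offsets)]
```

Genuinely homomorphic schemes go much further (arbitrary computation on ciphertexts), which is precisely where the computational cost deplored above comes from.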

A Closed and Common Orbit

Posted in Statistics with tags , , , , , , , on February 27, 2018 by xi'an

This book by Becky Chambers comes as a sequel of sorts to her first [science-fiction] book, A Long Way to a Small Angry Planet. Book that I liked a lot for its construction of relationships between the highly different team members of a spaceship. In this new book, the author pursues a similar elaboration of unlikely friendships between human and alien species, and AIs. If the first book felt homey, this one is even more so, with essentially two principal characters followed alternately throughout the book, until the stories predictably cross. It is fairly well-written, with again a beautiful cover, but I cannot say it is as magisterial as the first book. The book-long considerations on the nature of AI and of cloned humans are certainly interesting and deep enough, but the story tension ebbs at times, especially for the story set in the past, since we know from the beginning that its main character will reappear in the current time. Not reaching the superlatives of a Hugo or Clarke Award in my opinion (albeit nominated for these prizes). Still a most enjoyable read!

weapons of math destruction [fan]

Posted in Statistics with tags , , , , , , , , on September 20, 2017 by xi'an

As a [new] member of Parliament, Cédric Villani is now in charge of a committee on artificial intelligence, whose goal is to assess the positive and negative sides of AI. And he refers in the Le Monde interview below to Weapons of Math Destruction as impacting his views on the topic! Let us hope Superintelligence is not next on his reading list…