Archive for AI

space opera by John Scalzi [book review]

Posted in Books, pictures, Travel on June 15, 2019 by xi'an

John Scalzi, author of the memorable Old Man’s War, has started a trilogy that I only became aware of recently (or more precisely became re-aware of!), with the perk that two of the three books are already published and hence available without a one or two year wait. The first volume also won the 2018 Locus Award in the meantime. This new series is yet again a space opera, with space travel made possible by a fairly unclear Flow that even the mathematicians in the story have trouble understanding. The Flow is used by guilds to carry goods and people to planets whose environments are too hostile for the “local” inhabitants to survive on their own. The whole setup is both homely and old-fashioned: the different guilds are associated with families, despite being centuries old, and the empire of 48 planets is still governed by the same dominant family, which also controls a fairly bland religion, although the latter managed to become the de facto religion.

“I’m a Flow physicist.  It’s high-order math. You don’t have to go out into the field for that.”

This does not sound very exciting, even for a space opera, but things start to deteriorate as the novels open. Or more exactly, as hinted by the title, the Empire is about to collapse! (No spoiler, since this is the title!!!) However, the story-telling gets a wee bit lazy from that (early) point on, in that it fixates on a very few characters [among the millions of billions of inhabitants of this universe] who set the cogs spinning one way, then the other, then the earlier way… Dialogues are witty and often funny, those few characters are mostly well-drawn, albeit too one-dimensional, and cataclysmic events seem to be held at bay by the cleverness of one single person, double-crossing the bad guys. Mostly. While the second volume (unusually) reads better, with more action, more surprises, and an improvement in the plot itself, and while this makes for a pleasant travel read (I forgot The Collapsing Empire in a plane from B’ham!), I am indeed surprised at the book winning the 2018 Locus Award. It definitely lacks the scope and ambiguity of the two Ancillary novels, the convoluted philosophical construct and math background of Anathem, the historical background of Cryptonomicon and of the Baroque Cycle, or the singularity of the Hyperion universe. (But I was also unimpressed by The Three-Body Problem! And by Scalzi’s Hugo Award winner Redshirts!) The third volume is not yet out.

As a French aside, a former king turned AI is called Tomas Chenevert, on a space-ship called Auvergne, with an attempt at having him come from a French-speaking planet, Ponthieu, except that it should have been spelled Thomas Chênevert (green oak!). Incidentally, Ponthieu is a county in the Norman marches, north of Rouen, that is now part of Picardy, although I do not think this has anything to do with the current novel!

Garrigue administrative

Posted in Books, pictures, University life on May 20, 2019 by xi'an

A central page in Le Monde this week (May 08) deals with the conundrum of handling the tens of thousands of handwritten pages left by Alexandre Grothendieck, from trying to make sense of their contents to assessing the monetary value (!) of such documents. It mentions that the most reasonable solution would be to extend the digitisation of earlier documents supervised by Jean-Michel Marin. Given the difficulty of reading these pages, as suggested by Le Monde, training an AI to transcribe them into regular text would make sense, if not to help with evaluating their importance…

the Montréal declarAIon

Posted in University life on April 27, 2019 by xi'an

In conjunction with Yoshua Bengio being one of the three recipients of the 2018 Alan Turing Award, Nature ran an interview with him about the Montréal Déclaration for a responsible AI, which he launched at NeurIPS last December.

“Self-regulation is not going to work. Do you think that voluntary taxation works? It doesn’t.”

The interview reflects on the dangers of abuse of and by AIs, from surveillance to discrimination, but is somewhat unclear on the means to implement the ten generic principles listed there. (I had missed the Declaration when it came out.) While I agree with the principles stressed by this list (well-being, autonomy, privacy, democracy, diversity, prudence, responsibility, and sustainability), it remains to be seen how they can be imposed upon corporations, whose own public image puts more restraint on them than ethics, or on governments that are all too ready to automatise justice, police, and the restriction of citizens’ rights. Which makes the construction of a responsible AI institution difficult to imagine, if the current lack of outreach of extra-national institutions is any gauge. (A striking coincidence is that, when Yoshua Bengio pointed out that France was trying to make Europe an AI power, there was also a tribune in Le Monde about the lack of practical impact of this call to arms, apart from more academics moving to half-time positions in private companies.)

Bayesian intelligence in Warwick

Posted in pictures, Statistics, Travel, University life, Wines on February 18, 2019 by xi'an

This is an announcement for an exciting CRiSM Day in Warwick on 20 March 2019: with speakers

10:00-11:00 Xiao-Li Meng (Harvard): “Artificial Bayesian Monte Carlo Integration: A Practical Resolution to the Bayesian (Normalizing Constant) Paradox”

11:00-12:00 Julien Stoehr (Dauphine): “Gibbs sampling and ABC”

14:00-15:00 Arthur Ulysse Jacot-Guillarmod (École Polytechnique Fédérale de Lausanne): “Neural Tangent Kernel: Convergence and Generalization of Deep Neural Networks”

15:00-16:00 Antonietta Mira (Università della Svizzera italiana e Università degli studi dell’Insubria): “Bayesian identifications of the data intrinsic dimensions”

[whose abstracts are available on the workshop webpage], with free attendance. The workshop title mentions Bayesian Intelligence: this obviously includes human intelligence and not just AI!

Nature Outlook on AI

Posted in Statistics on January 13, 2019 by xi'an

The 29 November 2018 issue of Nature had a series of papers on AIs (in its Outlook section), pitched at the general public (awareness) level rather than as in-depth machine-learning articles. They include one on the forecast consequences of ever-growing automation on jobs, quoting from a 2013 paper by Carl Frey and Michael Osborne [of probabilistic numerics fame!] that up to 47% of US jobs could become automated. The paper is inconclusive on how taxation could help in or deter from transferring jobs to other branches, although it mentions the cascading effect of taxing labour and subsidizing capital. Another article covers the progress in digital government, with Estonia as a role model, including the risks of hacking (but not mentioning Russia’s state-driven attacks). Differential privacy is discussed as a way to keep data “secure” (but not cryptography à la Louis Aslett!). With another surprising entry that COBOL is still in use in some administrative systems. Followed by a paper on the apparently limited impact of digital technologies on mental health, despite the advertising efforts of big tech companies being described as a “race to the bottom of the brain stem”! And another one on (overblown) public expectations of AIs, although the New York Times had an entry yesterday on people in Arizona attacking self-driving cars with stones and pipes… Plus a paper on the growing difficulty of saving online documents and culture for the future (although saving all tweets ever published does not sound like a major priority to me!).
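For readers unfamiliar with differential privacy, its standard building block is the Laplace mechanism: release a query answer plus Laplace noise scaled to the query's sensitivity over the privacy budget ε. A minimal sketch (not taken from the Nature article; the function name and the numbers are mine):

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a query answer with epsilon-differential privacy by adding
    Laplace noise of scale sensitivity/epsilon: the smaller epsilon, the
    stronger the privacy guarantee and the noisier the released answer."""
    if rng is None:
        rng = np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# A counting query changes by at most 1 when one individual is added to
# or removed from the database, hence its sensitivity is 1.
noisy_count = laplace_mechanism(true_value=1000, sensitivity=1.0, epsilon=0.1)
```

With ε=0.1 the noise has scale 10, so a released count of about 1000 barely reveals whether any given individual is in the database.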

An interesting (?) aside: the same issue contains a general public article on the use of AIs for peer review (of submitted papers), the claim being that “peer review by artificial intelligence (AI) is promising to improve the process, boost the quality of published papers — and save reviewers time.” A wee bit over-optimistic, I would say, as the developed AIs at best check “that statistics and methods in manuscripts are sound”. For instance, producing “key concepts to summarize what the paper is about” is not particularly useful. Assessing the degree of innovation over the existing literature would be. Or an automated way to adapt the paper style to the strict and somewhat elusive Biometrika style!

AIQ [book review]

Posted in Books, Statistics on January 11, 2019 by xi'an

AIQ was my Christmas day read, which I mostly read while the rest of the household was still sleeping. The book, written by two Bayesians, Nick Polson and James Scott, was published before the ISBA meeting last year, but I only bought it on my last trip to Warwick [as a Xmas present]. This is a pleasant book to read, especially while drinking tea by the fire!, well-written and full of facts and anecdotes I did not know or had forgotten (more below). Intended for a general audience, it is also quite light, rather obviously on the technical side, but also on the philosophical side. While strongly positivist about the potential of AIs for the general good, it cannot be seen as an antidote to the doom-like Superintelligence by Nick Bostrom or the more factual Weapons of Math Destruction by Cathy O’Neil. (Both commented on the ‘Og.)

Indeed, I find the book quite benevolent and maybe a wee bit too rosy in its assessment of AIs, and its discussion of how Facebook and Russian intervention may have significantly helped to turn the White House orange misses [imho] the viral nature of the game, when endless loops of highly targeted posts can cut people off from the most basic common sense. While the authors are “optimistic that, given the chance, people can be smart enough”, I do reflect on the sheer fact that the hoax that Hillary Clinton was involved in a child sex ring was ever taken seriously by some people, to the point of one of them shooting at the pizza restaurant. And I hence am much less optimistic about the ability of a large enough portion of the population, not even the majority, to keep a critical distance from the messages carried by AI-driven media. Similarly, while Nick and James point out (rather late in the book) that big data (meaning large data) is not necessarily good data, for being unrepresentative of the population at large, they do not propose (in the book) highly convincing solutions to battle bias in existing and incoming AIs. Leading to a global worry that AIs may do well for a majority of the population and discriminate against a minority by the same reasoning. As described in Cathy O’Neil‘s book, and elsewhere, proprietary software does not even have to explain why it discriminates. More globally, the business school environment of the authors may have prevented them from stating a worry about the massive power grab by AI-based companies, which grow with little interest in democracy and states, as shown (again) by the recent election or their systematic fiscal optimisation. Or by the massive recourse to machine learning by Chinese authorities towards a social credit grade for all citizens.

“La rage de vouloir conclure est une des manies les plus funestes et les plus stériles qui appartiennent à l’humanité. Chaque religion et chaque philosophie a prétendu avoir Dieu à elle, toiser l’infini et connaître la recette du bonheur.” [The rage to conclude is one of the most deadly and most sterile manias belonging to humankind. Each religion and each philosophy has claimed to have God to itself, to take the measure of the infinite, and to know the recipe for happiness.] Gustave Flaubert

I did not know about Henrietta Leavitt’s prediction rule for pulsating stars, behind Hubble’s discovery, which sounds like an astronomy dual to Rosalind Franklin’s DNA contribution. The use of Bayes’ rule for locating lost vessels is also found in The Theorem That Would Not Die, although I would have also mentioned its failure in locating Malaysia Airlines Flight 370. I had also never heard the great expression of “model rust”. Nor the above quote from Flaubert. It seems I recently spotted the story of how a 180° switch in perspective on language understanding by machines brought the massive improvement we witness today, but I cannot remember where. And I have also read about Newton missing the boat on the precision of the coinage (was it in Bryson’s book on the Royal Society?!), but with less neutral views on Newton’s role in the matter, as the Laplace of England would have benefited from keeping the lax measures of assessment.
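The Bayesian search logic behind those vessel recoveries is a one-line application of Bayes’ rule: after an unsuccessful search of one cell, downweight that cell by the probability the search would have found the wreck there, then renormalise. A toy sketch (grid, prior, and detection probability all made up for illustration):

```python
import numpy as np

def update_after_failed_search(prior, cell, p_detect):
    """Bayes' rule after searching one grid cell without finding the wreck.

    prior:    array of probabilities that the wreck lies in each cell.
    cell:     index of the cell just searched.
    p_detect: probability of detecting the wreck if it is in that cell.
    """
    likelihood = np.ones_like(prior)
    likelihood[cell] = 1.0 - p_detect   # a failed search is unlikely there
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Four cells; a thorough (90% detection) but fruitless search of cell 0
# shifts belief toward the unsearched cells.
prior = np.array([0.4, 0.3, 0.2, 0.1])
posterior = update_after_failed_search(prior, cell=0, p_detect=0.9)
```

Iterating this update over successive sorties is essentially how the USS Scorpion and Air France 447 searches were prioritised.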

Great to see friendly figures like Luke Bornn and Katherine Heller appearing in the pages, Luke for his work on the statistical analysis of basketball games, Katherine for her work on predictive analytics in medicine. Reflecting on the missed opportunities represented by the accumulation of data on any patient throughout their life, data that is as grossly ignored nowadays as it was in Nightingale‘s time. The message of the chapter [on “The Lady with the Lamp”] may again be somewhat over-optimistic: while AI and health companies see clear incentives in developing more encompassing prediction and diagnostic techniques, this will only benefit patients who can afford the ensuing care. Which, given the state of health care systems in the most developed countries, is a decreasing proportion. Not to mention the less developed countries.

Overall, a nice read for the general public, de-dramatising the rise of the machines!, and mixing statistics and machine learning to explain the (human) intelligence behind the AIs. Nothing on the technical side, to be sure, but this was not the intention of the authors.

crowdsourcing, data science & machine learning to measure violence & abuse against women on twitter

Posted in Books, Statistics, University life on January 3, 2019 by xi'an

Amnesty International released on December 18 a study of abuse and harassment on the twitter accounts of female politicians and journalists in the US and the UK. It was realised through the collaboration of thousands of crowdsourced volunteers labeling tweets from the database and the machine-learning expertise of the London branch of ElementAI, a branch led by my friend Julien Cornebise with the main purpose of producing AI for good (as he explained at the recent Bayes for good workshop). This included the development of an ML tool to detect abusive tweets, called Troll Patrol [whose pun is clear in French!]. The amount of abuse exposed by this study and the possibility of training AIs to spot [some of the] abuse online are both arguments that support Amnesty International’s call for the accountability of social media companies like twitter for abuse and violence propagated through their platforms. (The methodology is also made available there.)
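The pipeline, crowdsourced labels feeding a supervised classifier, can be illustrated by the most basic text classifier of all, a Laplace-smoothed naive Bayes. This is only a toy sketch with made-up tweets and labels, and has nothing to do with ElementAI's actual Troll Patrol model:

```python
from collections import Counter
import math

def train_nb(labeled_tweets):
    """Count words per class from (text, label) pairs."""
    word_counts = {"abusive": Counter(), "ok": Counter()}
    class_counts = Counter()
    for text, label in labeled_tweets:
        class_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, class_counts

def classify(text, word_counts, class_counts):
    """Return the label maximising the Laplace-smoothed naive Bayes score."""
    vocab = {w for counts in word_counts.values() for w in counts}
    n_tweets = sum(class_counts.values())
    scores = {}
    for label, counts in word_counts.items():
        total = sum(counts.values())
        score = math.log(class_counts[label] / n_tweets)   # log prior
        for w in text.lower().split():
            # add-one smoothing keeps unseen words from zeroing the score
            score += math.log((counts[w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

tweets = [("you are an idiot", "abusive"), ("shut up idiot", "abusive"),
          ("great article thanks", "ok"), ("nice work thanks", "ok")]
word_counts, class_counts = train_nb(tweets)
```

The real system obviously uses far richer models and many more labels, but the division of labour is the same: humans supply the labels, the model generalises them to unlabelled tweets.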