Archive for AI

the Ramanujan machine

Posted in Books, Kids, pictures, University life on February 18, 2021 by xi'an

Nature of 4 Feb. 2021 offers a rather long (Nature-like) paper on creating Ramanujan-like expressions through an automated process, associated with a cover in the first pages. The purpose of the AI is to generate conjectures of Ramanujan-like formulas linking famous constants like π or e with algebraic expressions, like the novel polynomial continued fraction for 8/π²:

\frac{8}{\pi^2}=1-\cfrac{2\times 1^4-1^3}{7-\cfrac{2\times 2^4-2^3}{19-\cfrac{2\times 3^4-3^3}{37-\cfrac{2\times 4^4-4^3}{\ddots}}}}

which currently remains unproven. The authors of the “machine” provide Python code that one can run to try to uncover new conjectures, possibly named after their discoverer! The article spends a large proportion of its contents justifying the appeal of generating such conjectures, with several unsuspected formulas later proven for real, but I remain unconvinced of the deeper appeal of the machine (as well as unhappy about the association of Ramanujan with a machine, since S. Ramanujan had a mystical and unexplained relation to numbers, defeating Hardy’s logic, “a mathematician of the highest quality, a man of altogether exceptional originality and power”). The difficulty is in separating worthwhile from anecdotal (true) conjectures, not to mention wrong conjectures. This is certainly of much deeper interest than separating chihuahua faces from blueberry muffins, but does it really “help to create mathematical knowledge”?
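Although unproven, the conjecture can be checked numerically by truncating the continued fraction at a finite depth and evaluating it from the tail inward. The sketch below is only an illustration, not the authors' code; it assumes the partial numerators continue as 2n⁴−n³ and the partial denominators as 3n²+3n+1, a pattern inferred from the displayed terms 7, 19, 37:

```python
import math

def cf_value(depth):
    """Truncated evaluation of the conjectured continued fraction
    for 8/pi^2, assuming partial numerators a(n) = 2n^4 - n^3 and
    partial denominators b(n) = 3n^2 + 3n + 1 (inferred from the
    displayed terms 7, 19, 37)."""
    a = lambda n: 2 * n**4 - n**3
    b = lambda n: 3 * n**2 + 3 * n + 1
    tail = b(depth)
    # work backwards from the truncation point to the first level
    for n in range(depth - 1, 0, -1):
        tail = b(n) - a(n + 1) / tail
    return 1 - a(1) / tail

print(cf_value(200))     # compare with...
print(8 / math.pi**2)    # ...the conjectured limit, about 0.810569
```

A moderate truncation depth already matches 8/π² to double precision, which is the kind of numerical evidence on which the machine's conjectures rest.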

foundations of algorithmic fairness

Posted in Statistics, University life on February 14, 2021 by xi'an

Just forwarding the announcement of an online workshop on algorithmic fairness, on 12 and 16 March (a great idea to separate the two afternoons, GT-wise). It is organised by the Paris, Genoa and Saarbrücken ELLIS units, and, although a member of the Paris unit, I am not involved in this event. (The speakers are pictured above.)

[de]quarantined by slideshare

Posted in Books, pictures, Statistics, University life on January 11, 2021 by xi'an

A follow-up episode to the SlideShare m’a tuer [sic] saga: after the 20 November closure of my xianblog account and my request for an explanation, I was told by LinkedIn that a complaint had been made about one of my talks for violation of copyright. Most surprisingly, at least at first, it was about the slides for the graduate lectures I gave ten years ago at CREST on (re)reading Jaynes’ Probability Theory. While the slides contain a lot of short quotes from the Logic of Science, somewhat necessarily since I discuss the said book, there are also many quotes from Jeffreys’ Theory of Probability, and “t’is but a scratch” on the contents of this lengthy book… Plus, the pdf file appears to be accessible on several sites, including one with an INRIA domain. Since I had to fill in a “Counter-Notice of Copyright Infringement” to unlock the rest of the repository, I just hope no legal action is going to be taken about this lecture. But I remain puzzled at the reasoning behind the complaint, unwilling to blame radical Jaynesians for it! As an aside, here are the 736 registered views of the slides over the past year:

missing bit?

Posted in Books, Statistics, University life on January 9, 2021 by xi'an

Nature of 7 December 2020 has a Nature Index (a supplement made of a series of articles, more journalistic than scientific, with corporate backup, which “have no influence over the content”) on Artificial Intelligence, including the above graph representing “the top 200 collaborations among 146 institutions based between 2015 and 2019, sized according to each institution’s share in artificial intelligence”, with only the UK, Germany, Switzerland and Italy identified for Europe… Missing, e.g., the output from France and from its major computer science institute, INRIA. Maybe because “the articles picked up by [their] database search concern specific applications of AI in the life sciences, physical sciences, chemistry, and Earth and environmental sciences”. Or maybe because of the identification of INRIA as such.

“Access to massive data sets on which to train machine-learning systems is one advantage that both the US and China have. Europe, on the other hand, has stringent data laws, which protect people’s privacy, but limit its resources for training AI algorithms. So, it seems unlikely that Europe will produce very sophisticated AI as a consequence”

This comment sort of contradicts the attached articles calling for a more ethical AI, like making AI more transparent and robust. While unrestricted access to personal data helps with the social engineering and control favoured by dictatorships and corporate behemoths, a culture of data privacy may (and should) lead to the development of new methodology for working with protected data (as in an Alan Turing Institute project) and instil more trust from the public. Working with less data does not mean less sophistication in handling it, but quite the opposite! Another clash of events appears in one of the six trailblazers portrayed in the special supplement being Timnit Gebru, “former co-lead of the Ethical AI Team at Google”, who parted ways with Google at the time the issue was published. (See Andrew’s blog for a discussion of her firing, and the MIT Technology Review for an analysis of the paper potentially at the source of it.)

quarantined by slideshare

Posted in Books, pictures, Statistics, University life on November 26, 2020 by xi'an

Just found out that SlideShare has closed my account for “violating SlideShare Terms of Service”! As I have no further detail (and my xianblog account is inaccessible), I have contacted SlideShare to get an explanation for this inexplicable cancellation and hopefully (??) argue my case. If not, the hundred-plus slide presentations that were posted there and linked on the ‘Og will become unavailable. I seem to remember this happened to me once before, so maybe there is hope of reversing the decision, presumably made by an AI using the wrong prior or algorithm!