Archive for AI for good

Exhalation [book review]

Posted in Books, Kids, pictures on January 18, 2024 by xi'an

Exhalation is a diverse and original collection of (more or less) short stories by Ted Chiang, published between 2007 and 2015 in different journals, including Nature at the time it ran a weekly one-page short story (of varying quality). This is science fiction in a sense, given that all stories involve impossible situations and a connection with science at a low technical level (no space opera). However, it is much closer to philosophy in my opinion, in that the author builds upon an alternate reality to question ours, sometimes with remarkable prescience and with great kindness towards his characters.

So, there is the time-travel portal story that does not lead to paradoxes, even with temporal loops, but to a reflection on human nature. (Not my favourite, but this first story got the Nebula Award, the Hugo Award, and the Seiun Award!) There are doomed universes, either because the air pressure is slowly leaking away, or because a lack of communication and awareness pushes one species towards extinction (with the sad side coincidence of being associated with the soon-to-fall Arecibo dish!). There is social media killing free will while calling for resistance. And a novella on virtual entities (digients) turning more and more real, while the company that produced them goes bankrupt and their universe is no longer upgraded, which also brings a well-balanced discussion on whether or not they should become legal entities and be given freedom, resonating in opposition to the recent doomsaying warnings about AI taking over humanity, while also reflecting on the hardships of parenting and teaching. There is a Victorian story of building a difference-engine nanny, with terrible consequences for two successive generations. There is another story about the potential horror of having one's entire life stored online and accessible by sentient interfaces, negating the very notion of memory since both remembering and forgetting escape the individual. There is a pre-Copernican universe where everything literally fits the Biblical creation, until it does not. And there is the case of a quantum collection of universes where alternate paths could be consulted, to the point of interacting with alternate selves, maybe a bit fuzzy in the scientific details, but again leading to deeper reflections on human nature. I read the book around my trip to Melbourne and back, which may explain my highly positive reaction, as I had plenty of free time and sunny days to enjoy reading on the planes, on the trams (where I nearly lost my wallet, rescued by the kindness of another passenger!), and in my comfy St Kilda rental, but I do very much recommend it, especially for those with a scientific mind. Joyce Carol Oates wrote a much longer and better-argued analysis of Exhalation for The New Yorker when the book appeared.

[Disclaimer about potential self-plagiarism: this post or an edited version may eventually appear in my Books Review section in CHANCE.]

même pas peur [not afrAId]

Posted in Books, Kids, Travel, University life on March 30, 2023 by xi'an

Both the Beeb and The New York Times are reporting tonight about a call to pause AI experiments, issued by AI researchers and others, due to the danger such experiments could pose to humanity. While this reminds me of Superintelligence, a book by Nick Bostrom I found rather unconvincing, and although I agree that automated help-to-decision systems should not become automated decision systems, I am rather surprised at them setting the omnipresent ChatGPT as the reference not to be exceeded.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity (…) recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

The central (?) issue here is whether something like ChatGPT can claim anything like intelligence, when it pumps data from an (inevitably biased) database and produces mostly coherent sentences without any attention to facts. Which is useful when polishing a recommendation letter, at the same level as a spelling corrector (but requires checking for the inclusion of potential fake facts, like imaginary research prizes!).

“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

The increasingly doom-mongering tone of the above questions is rather over the top (civilization, nothing less?!) and again reminiscent of Superintelligence, while spreading propaganda and untruth need not wait for super-AIs to reach conspiracy theorists.

“Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable (…) Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in”

A six-month pause sounds inappropriate for an existential danger, while the belief that governments want to or can intervene sounds rather naïve, given for instance that they lack the ability to judge the dangerousness of the threat and the safety nets to be imposed on gigantic black-box systems. Who can judge the positive effects and risks of a billion- (trillion-?) parameter model? Why would being elected be any guarantee of fairness or acumen? Beyond dictatorships thriving on surveillance automata, more democratic countries are also happily engaging in problematic AI experiments, including AI surveillance of the upcoming Paris Olympics. (Another valuable reason to stay away from Paris over the games.)

“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal. In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems.”

While these are worthy goals at a conceptual level (with the practical issue of precisely defining each of these lofty adjectives), and although I am certainly missing a lot through my ignorance of the field, this call remains a mystery to me as it sounds unrealistic that it could achieve its goal.

Nature tidbits

Posted in Books, University life on September 7, 2019 by xi'an

Before returning a few older issues of Nature to the coffee room of the maths department, a quick look brought out the following few items of interest, besides the great cover above:

  • France showing the biggest decline in overall output among the top 10 countries in the Nature Index Annual Tables.
  • An opinion piece against the EU's Plan S, which moves towards funding (private) publishers directly from public (research) money. Why continue to support commercial journals one way or another?!
  • A short debate on geo-engineering towards climate control, with the dire warning that “little is known about the consequences” [which could further damage the chances of human survival on this planet].
  • Another call for the accountability of companies designing AI towards fairness and unbiasedness [provided all agree on the meaning of these terms].
  • A study arguing that the obesity epidemic is more prevalent in rural than in urban areas, due to a higher recourse to junk food in the former.
  • A data-mining venture in India to mine [not read] 73 million computerised journal articles, which is not yet clearly legal since the publishers object to it, although the EU (and the UK) have laws authorising mining for non-commercial goals. (And India has looser copyright regulations.)

crowdsourcing, data science & machine learning to measure violence & abuse against women on twitter

Posted in Books, Statistics, University life on January 3, 2019 by xi'an

Amnesty International just released, on December 18, a study on abuse and harassment on the twitter accounts of female politicians and journalists in the US and the UK. It was realised through the collaboration of thousands of crowdsourced volunteers labelling tweets from the database and the machine-learning expertise of the London branch of Element AI, a branch led by my friend Julien Cornebise with the main purpose of producing AI for good (as he explained at the recent Bayes for good workshop). This included the development of an ML tool to detect abusive tweets, called Troll Patrol [a pun that is clear in French!]. The amount of abuse exposed by this study and the possibility of training AIs to spot [some of the] abuse online are both arguments supporting Amnesty International's call for the accountability of social media companies like twitter for abuse and violence propagated through their platforms. (The methodology is also made available there.)
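To give a rough idea of the kind of supervised pipeline such a study rests on (crowdsourced labels feeding a text classifier), here is a minimal Python sketch; the tweets, labels, features, and model choice below are all hypothetical placeholders and certainly not the actual Troll Patrol methodology.

```python
# A minimal sketch (not the actual Troll Patrol code) of a supervised
# abusive-tweet classifier trained on crowdsourced labels. All tweets,
# labels, and model choices below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Toy stand-in for the crowdsourced dataset: each tweet received a label
# (1 = abusive, 0 = not abusive) aggregated from several volunteers.
tweets = [
    "you are a disgrace, quit politics and disappear",
    "great interview this morning, thank you",
    "nobody wants to hear from you, shut up",
    "looking forward to your next column",
    "go back to the kitchen where you belong",
    "thoughtful piece on the budget, well done",
    "you should be ashamed to show your face",
    "congratulations on the election result",
]
labels = [1, 0, 1, 0, 1, 0, 1, 0]

# Hold out a stratified test set to estimate out-of-sample performance.
X_train, X_test, y_train, y_test = train_test_split(
    tweets, labels, test_size=0.25, stratify=labels, random_state=0
)

# Character n-gram tf-idf features (which cope with deliberate
# misspellings) fed into a plain linear classifier.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

predictions = clf.predict(vectorizer.transform(X_test))
print(classification_report(y_test, predictions, zero_division=0))
```

A linear model on n-gram features is only the simplest baseline for abusive-language detection; the study's actual models were doubtless more elaborate, but the structure (labelled corpus in, held-out evaluation out) is the same.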

AI for good

Posted in pictures, Statistics, Travel on December 26, 2017 by xi'an

Last week, I had a quick chat in front of the Luxembourg Gardens with Julien Cornebise, who told me about the AI for Good Foundation, with which he was going to work through Element AI after doing volunteer work with Amnesty International. Great!