même pas peur [not afrAId]
Both the Beeb and The New York Times are reporting tonight on a call, signed by AI researchers and others, to pause AI experiments due to the danger they could pose to humanity. While this reminds me of Superintelligence, a book by Nick Bostrom I found rather unconvincing, and although I agree that automated decision-support systems should not turn into automated decision systems, I am rather surprised that the letter sets GPT-4, the model behind the omnipresent ChatGPT, as the reference not to be exceeded.
“AI systems with human-competitive intelligence can pose profound risks to society and humanity (…) recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
The central (?) issue here is whether something like ChatGPT can claim any form of intelligence, when it pumps data from an (inevitably biased) database and produces mostly coherent sentences with no attention to facts. This is useful for polishing a recommendation letter, at the same level as a spelling corrector, but it requires checking for fabricated facts, like imaginary research prizes!
“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”
The increasingly doom-mongering tone of the above questions is rather over the top (civilization, nothing less?!) and again reminiscent of Superintelligence, while propaganda and untruth did not need to wait for super-AIs to reach conspiracy theorists.
“Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable (…) Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in”
A six-month pause sounds inadequate for an existential danger, while the belief that governments want to or can intervene sounds rather naïve, given for instance that they lack the ability to judge the severity of the threat or the safety nets to be imposed on gigantic black-box systems. Who can judge the positive effects and risks of a billion- (trillion-?) parameter model? And why would being elected be any guarantee of fairness or acumen? Beyond dictatorships thriving on surveillance automata, more democratic countries are also happily engaging in problematic AI experiments, including AI surveillance at the upcoming Paris Olympics. (Another good reason to stay away from Paris during the Games.)
“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal. In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems.”
While these are worthy goals at a conceptual level (with the practical difficulty of precisely defining each of these lofty adjectives), and although I am certainly missing a lot through my ignorance of the field, this call remains a mystery to me, as it seems unrealistic that it could achieve its goal.
One Response to “même pas peur [not afrAId]”
March 30, 2023 at 11:48 am
Perhaps the journalists are trying to stop their own jobs from being automated away, i.e., they are latter-day Luddites fearmongering their audience. Automated news-article generation is already happening, and AI developments are increasing the pace. See, e.g., Diakopoulos, Nicholas. Automating the News: How Algorithms Are Rewriting the Media. Cambridge, MA: Harvard University Press, 2019.