Significance and artificial intelligence

As my sorry excuse for an Internet provider has been unable to fix my broken connection for several days, I had more time to read and enjoy the latest issue of Significance, which I received last week. Plenty of interesting entries, once again! Even so, faithful to my idiosyncrasies, I must criticise the cover (but you may also skip to the end of the paragraph!): it shows a pile of exams higher than the page frame on a student table in a classroom, with a vague silhouette sitting behind the exams. I do not know whether or not this is intentional, but the silhouette has definitely been added to the original picture (and presumably the exams as well!), because the seat and blackboard behind this silhouette show through it. If it is intentional, does that mean the poor soul grading this endless pile of exams has long since turned into a wraith?! If not, that is poor workmanship for a magazine usually adept at making the most of its graphics. (And then I could go on and on about illustrations clearly chosen by the managing editor independently of the author(s) of the article…) End of the digression! Or maybe not, because there was also an ugly graph from Knowledge is Beautiful about the causes of plane crashes that made pie charts look great… Not that all the graphs in the book are bad, far from it!

“The development of full artificial intelligence could spell the end of the human race.” S. Hawking

The central theme of the magazine is artificial intelligence (and machine learning). This is a point I had been meaning to address in a post, following the recent doom-laden messages of Gates and Hawking about AIs taking over humanity, à la Blade Runner… or Turing’s test. As if they had not already impacted our lives so much and in so many ways. And not all positive or for the common good. Witness the ultra-fast trading algorithms on the stock market. Witness the self-replicating and self-modifying computer viruses. Witness the increasingly autonomous military drones. Or witness my silly Internet issue, where I cannot get hold of a person who can tell me what the problem is and what the company is doing to solve it (if anything!), but instead have to listen to endless phone automata telling me to press “1 if…” and “3 else”, and that my incident ticket was last updated three days ago… But at the same time, the tone of the op-ed in The Independent by Hawking, Russell, Tegmark, and Wilczek is somewhat misguided, if I may object to such luminaries!, playing on science-fiction themes that have been repeated so often they are now ingrained, rather than on strong scientific arguments. Military robots that could improve themselves to the point of escaping their creators’ control are surely frightening, but much less realistic than a nuclear reaction that could not be stopped in a Fukushima plant. Or than the long-term impacts of genetically modified crops and animals. Or than the current proposals of climate engineering. Or than the emerging nano-particles.

“If we build systems that are game-theoretic or utility maximisers, we won’t get what we’re hoping for.” P. Norvig

The discussion of this scare in Significance does not contribute much, in my opinion. It starts with the concept of a perfect Bayesian agent, supposedly the endpoint of an AI tasked with making paperclips, which (who?) ends up using the entire Earth’s resources to make more paperclips. The other articles in this cover story are more relevant, covering for instance how AI moved from pure logic to statistical or probabilistic intelligence, with Yee Whye Teh discussing Bayesian networks and the example of Google translation (including a perfect translation into French of an English sentence).
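For readers who have not met the paperclip thought experiment before, here is a minimal sketch of the kind of agent the article has in mind: a utility maximiser whose utility counts paperclips and nothing else, so that every resource it can reach ends up converted. The world model, actions, and numbers below are purely hypothetical, invented for illustration only.

```python
# A toy utility maximiser in the spirit of the paperclip thought experiment.
# The world model, actions, and numbers are invented for illustration only.

def utility(state):
    """The agent values paperclips and nothing else."""
    return state["paperclips"]

def available_actions(state):
    actions = []
    if state["resources"] > 0:
        actions.append("convert_resources_to_paperclips")
    if state["unclaimed_resources"] > 0:
        actions.append("acquire_more_resources")
    return actions

def step(state, action):
    new = dict(state)
    if action == "convert_resources_to_paperclips":
        new["paperclips"] += new["resources"]
        new["resources"] = 0
    elif action == "acquire_more_resources":
        grabbed = min(10, new["unclaimed_resources"])
        new["unclaimed_resources"] -= grabbed
        new["resources"] += grabbed
    return new

def paperclip_agent(state, horizon=50):
    """At each step, pick the action whose successor state has the highest
    utility; stop only when no action is left, i.e. when every resource has
    been turned into paperclips."""
    for _ in range(horizon):
        actions = available_actions(state)
        if not actions:
            break
        state = max((step(state, a) for a in actions), key=utility)
    return state

world = {"paperclips": 0, "resources": 5, "unclaimed_resources": 100}
print(paperclip_agent(world))  # ends with all 105 resource units turned into paperclips
```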

8 Responses to “Significance and artificial intelligence”

  1. Radford Neal Says:

    You comment: “… much less realistic than a nuclear reaction that could not be stopped in a Fukushima plant. Or than the long-term impacts of genetically modified crops and animals. Or than the current proposals of climate engineering. Or than the emerging nano-particles.”

    I’m rather surprised at this comment. There is no possibility whatsoever that a Fukushima-style nuclear accident (or any other nuclear power accident) will pose an existential risk to humanity. Nor is there any possibility that genetic engineering of crops or animals (killer cows?!) will threaten human existence. I’m not sure exactly what proposals for climate engineering or what sort of nano-technology you have in mind, but these also seem quite unlikely to go so far wrong that human existence is threatened.

    You don’t mention it, but there is also no possibility that a global nuclear war would lead to human extinction (though billions of people could die). You might not know this if you grew up when I did, though, with novels like On the Beach and Level Seven. Some people fall prey to apocalyptic thinking, and I think you have done so here.

    The argument that AI could pose an existential threat seems rather speculative to me, simply because I think we’re still quite far from knowing how AI might come about, but I don’t totally discount it. A more worrying possibility is that some anti-human cult could deliberately create a fatal virus that might spread so rapidly that it kills everyone before counter-measures can be taken. For that matter, it’s conceivable that such a virus could arise naturally. If you want to worry, I’d recommend worrying about that.

    • Fair point, Radford, about the nuclear global apocalypse, which sounded much more realistic when we were both growing up. My worry with genetic engineering is that, by modifying plants and animals into new genetic structures, and given that we consume them at the end of the food chain, it may eventually impact our own gene pool and, for instance, turn us into a sterile species… As for nano-technologies, my worry is vaguer, but I once read that nano-particles could have a deadly impact on our lungs and hence, if out of control, could wipe out a large chunk of the (First?) World… An even fairer point about the virus, although there should be isolated pockets of humanity possibly surviving. A fast-acting virus is usually fast-mutating as well and thus could die out before reaching such isolated pockets. (Not that it does not remain terrifying, of course.)

      • Radford Neal Says:

        It is pretty much impossible to imagine how genetic engineering that is anything like what’s presently being done could end up transferring genes to humans that would make all humans sterile.

        The thing to keep in mind is that genetically engineered organisms are not any different from other organisms, apart from the particular modification that was made to their genome. So, for example, if someone genetically modified some obscure species of moss that nobody had previously studied in detail, by transferring to it a gene from some equally obscure species of sponge, nobody not involved in the project would even be able to tell that the modified moss wasn’t natural, without examining the wild moss to find the place where there was a difference.

        Humans of course consume food containing genetic material all the time. This genetic material is either never or only extremely rarely incorporated into the human genome. There is no reason why engineered genes would be any more likely to be incorporated into the human genome. The idea that such a transfer would happen not just once, but in almost 100% of all people, is utterly implausible. And if it did happen, the chances that it would render humans sterile are also extremely small.

        Now, one could imagine future forms of genetic engineering that might be dangerous. For instance, one could imagine engineering some organism to have an extended genetic code that could use some additional amino acid beyond the 20 normally coded for in DNA. If this amino acid is very useful, you could imagine that such super-organisms could take over the whole world, out-competing the ones that can only use the old set of 20 amino acids.

        Though actually this has already happened naturally. Via a genetic hack, some organisms can use a special form of the amino acid cysteine that incorporates selenium. The set of organisms that do this includes humans, so I guess that such super-organisms can indeed take over the world….

        Seriously, although some people have played around with such modifications in the lab, this is nothing like what is done in current genetic modification of crops.

      • Ok Radford, fair point. Mind you, I’ll make sure to ask you next time I plan to write a sci-fi book!!!

  2. Thanks, Corey, for this discussion. Although I do not see the issue as requiring “difficult” reasoning! I still disagree with the premise of AIs evolving towards higher utilities and possibly harmful consequences with no human intervention, as if humans were mere spectators in this evolution. There is always an ON/OFF switch of sorts for the human operator: “I’m sorry Dave, I’m afraid I can’t do that…”

    Not that I disagree with the premise that AIs could reach a higher and more efficient form of intelligence. And maybe it would even be ok for it to replace humans! As for agreeing not to push research further, I think this is hopeless, as shown by other borderline research and development in genetics (with genetically modified crops), biology (coming close to human chimeras), nano-technologies, and so on…

    • I’m going to go meta here for a sec: when it comes to strong AI safety, smart people have been thinking of counter-arguments and counter-counter-arguments for years, so whatever an average smart person can think of off the top of his or her head, or even with five minutes of serious thought, has been thought of and dealt with.

      That said, I have to think that you didn’t even spend one minute trying to think of objections to the ON/OFF switch notion, or you would have generated the idea that an autonomous intelligence is not going to give a human operator reason to turn it off until after it has already secured control of its ON/OFF switch (or more generally, until after its physical instantiation is secure, or backed up, or whatever). One usual follow-up idea here is to run the AI in a sandbox so that it can’t get control of its ON/OFF switch. It turns out that humans simulating super-intelligent AIs can often convince human gatekeepers to “allow” the “AI” “out of the box” (see here, here). Another idea is to run an AI that just answers questions but doesn’t do anything else; I haven’t read this article, but the abstract claims that this remains potentially dangerous.

      As to the hopelessness of coordinating on not pursuing research in autonomous human-level AI: the reason I expend social capital pushing this crackpot-seeming idea on blogs that smart people read is in the hopes that at least a few of those readers notice that although the idea feels absurd, the argument for it is solid and the mere feeling that the idea is absurd does not actually constitute a counter-argument. If enough people doing research in or near AI fields notice this, then it’s not so hopeless after all.

      • I certainly did not think long or deeply about this issue and mostly reacted to the doom-laden statements of the previous months, plus Significance itself. I still remain convinced that there are several other sources of worry for pushing humanity into a much less pleasant state, if not for wiping it out.

  3. I feel so hipster — I was into the dangers of strong AI before it was cool!

    Listen, the reasoning isn’t difficult, and there’s no reliance on SF tropes here (Hawking et al. don’t bring up Blade Runner — that was you), nor is there what my friend from childhood and current AI researcher at IBM, Dave Buchanan, calls the consciousness fallacy. (Aside: I don’t know exactly what Dave does, but I do know it involves massively parallel MCMC runs!)

    Let’s define “intelligence” as the ability to choose actions which constrain the future state of the world near some optimization target. The premises are:

    1. Materialism and substrate independence. There’s no soul-stuff that gives humans special cognitive abilities. Intelligence is a cognitive algorithm running on neurons, but it could equally well run on silicon.

    2. Basic AI drives. A sufficiently intelligent autonomous agent will automatically generate self-preservation and self-improvement as instrumental goals, because for almost any terminal goal, achieving these instrumental goals will help achieve the terminal goal (a toy numerical illustration follows the list below).

    3. A human-level autonomous AI can run at higher clock speeds than humans can think.
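    As a toy numerical illustration of premise 2 (with an entirely made-up shutdown probability and arbitrary reward rates, a sketch rather than a model of any real system), compare an agent that simply works on its terminal goal with one that first spends a step securing its own continued operation: whatever the terminal goal happens to be, the self-preserving policy ends up scoring higher on it.

    ```python
    import random

    # Toy illustration of "basic AI drives": whatever the terminal goal's
    # per-step reward, a policy that first secures its continued operation
    # accumulates more terminal-goal reward than one that ignores the risk
    # of being switched off.  All numbers here are invented for illustration.

    def run(policy, reward_per_step, horizon=20, shutdown_prob=0.15, seed=0):
        rng = random.Random(seed)
        protected = False
        total = 0.0
        for _ in range(horizon):
            if policy == "self_preserving" and not protected:
                protected = True      # spend the first step securing the off-switch
                continue              # no terminal-goal progress on that step
            if not protected and rng.random() < shutdown_prob:
                break                 # switched off: no further progress, ever
            total += reward_per_step  # otherwise, work on the terminal goal
        return total

    # Average over many runs, for several arbitrary terminal goals (reward rates).
    for reward in [0.1, 1.0, 7.5]:
        naive = sum(run("naive", reward, seed=s) for s in range(1000)) / 1000
        drive = sum(run("self_preserving", reward, seed=s) for s in range(1000)) / 1000
        print(f"reward/step={reward}: naive={naive:.2f}, self-preserving={drive:.2f}")
    ```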

    From these premises, we can conclude that a sufficiently intelligent autonomous AI will devote processing power toward the instrumental goal of increasing intelligence, and if it’s human-level to start, then the AI can very likely bootstrap to von Neumann levels of intelligence (which we know are achievable because von Neumann achieved them) and quite possibly beyond. There’s no particular reason to suppose that von Neumann represents the pinnacle of intelligent arrangements of matter just because he was one of the most intelligent humans. (An analogy here: compare the peregrine falcon, which stoops at 320 km/h, with the SR-71 Blackbird). The most extreme scenario is due to I. J. Good, who predicted that AI goes FOOM. But no matter where it tops out, such an intelligence will never stop thinking of strategies and tactics to maximize its (effective) utility function.
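    To make the “goes FOOM” claim a little more concrete, here is a deliberately crude recurrence; the starting capability, rate, and returns exponent are all invented for illustration, since nobody knows the real returns to machine self-improvement. The point is only that whether the process explodes or merely crawls depends entirely on that exponent.

    ```python
    # A back-of-the-envelope sketch of I. J. Good's intelligence-explosion
    # argument: each round, capability grows by an amount that itself depends
    # on current capability.  All parameters below are invented.

    def self_improvement(c0=1.0, returns=1.0, rate=0.1, steps=30):
        """Iterate c <- c + rate * c**returns.  With returns > 1 growth is
        explosive (FOOM); with returns < 1 it continues but slows right down."""
        c = c0
        trajectory = [c]
        for _ in range(steps):
            c = c + rate * c ** returns
            trajectory.append(c)
        return trajectory

    print(self_improvement(returns=1.2)[-1])  # super-linear returns: explodes
    print(self_improvement(returns=0.5)[-1])  # diminishing returns: crawls along
    ```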

    Now obviously, no one would set such an agent running unless they were sure they’d gotten the motivational structure right. But there is an incentive to push the boundaries of certainty here, because there could be an arms race: if you, a human or group of same, think you’ve got the motivational architecture right, then creating an intelligent, autonomous, motivated agent is an instant win for your team. Furthermore, if you’re at that point and you delay, some other group might get to that point and turn on their intelligent, autonomous, motivated agent, and that’s an instant win for them. Best not wait.

    And what happens if the motivation structure is not quite right? Nothing good, that’s for sure. It’s not that the AI will hate humans; it’s just that humans are made of atoms that the AI could use for something else. It will be catastrophic simply if the AI ends up caring as little for human values as we care for the values of the fauna we displace and kill when we construct a new apartment building or pave a new parking lot.

    So there is strong reason, in my view, to be concerned about one very specific type of AI: an autonomous agent with the capacity to evolve the strategy of self-optimization. It would be best if everyone could co-ordinate to not research autonomous human-level AI until the science/math of motivational structures that do not result in catastrophic loss of human values is well-understood.
