superintelligence [book review]

“The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” (I.J. Good)

I saw the nice cover of Superintelligence: paths, dangers, strategies by Nick Bostrom [owling at me!] at the OUP booth at JSM this summer—an owl cover that comes with a little philosophical fable about sparrows at the beginning—and, after reading an in-depth review [in English] by Olle Häggström on Häggström hävdar, asked OUP for a review copy. Which they sent immediately. The reason why I got (so) interested in the book is that I am quite surprised at the level of alertness about the dangers of artificial intelligence (or computer intelligence) taking over. As reported in an earlier blog post, and with no expertise whatsoever in the field, I was not and am not convinced that the uncontrolled and exponential rise of non-human or not-completely-human intelligences is the number one entry in doomsday scenarios. (As made clear by Radford Neal and Corey Yanovsky in their comments, I know nothing worth reporting about those issues, but remain, presumably irrationally, more concerned about climate change and/or a return to barbarity than about the incoming reign of the machines.) Thus, since I have no competence in the least in either intelligence (!), artificial or human, or in philosophy and ethics, the following comments on the book only reflect a neophyte’s reactions. Which means the following rant should be mostly ignored! Except maybe on a rainy day like today…

“The ideal is that of the perfect Bayesian agent, one that makes probabilistically optimal use of available information.  This idea is unattainable (…) Accordingly, one can view artificial intelligence as a quest to find shortcuts…” (p.9)
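For the non-Bayesian reader, a perfect Bayesian agent is simply one that turns its prior into a posterior via Bayes’ theorem every time new data arrives. Here is a minimal sketch of that update on a toy coin-bias problem of my own invention (nothing from the book), which also hints at why the exact version is unattainable for any realistic world model:

```python
import numpy as np

# Toy Bayesian agent: infer a coin's bias p from observed flips,
# using a grid approximation of the posterior. A "perfect" agent
# would run this exact update over a model of the entire world.
grid = np.linspace(0.0, 1.0, 101)        # candidate values of p
prior = np.ones_like(grid) / grid.size   # uniform prior over p

def update(belief, flip):
    """One Bayesian update: posterior is proportional to likelihood x prior."""
    likelihood = grid if flip == 1 else (1.0 - grid)
    posterior = likelihood * belief
    return posterior / posterior.sum()   # renormalise to a distribution

flips = [1, 0, 1, 1, 1, 0, 1]            # observed data (1 = heads)
belief = prior
for flip in flips:
    belief = update(belief, flip)

print("posterior mean of p:", round(float(np.sum(grid * belief)), 3))
```

Even this crude grid blows up exponentially once the unknown is not a single probability but a model of everything the agent might care about, hence the “quest to find shortcuts”.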

Overall, the book stands much more at a philosophical and exploratory level than at any attempt at engineering or technical assessment. The graphs found within are sketches rather than outputs of carefully estimated physical processes. There is thus hardly any indication of how those super AIs could be coded towards super abilities to produce paper clips (but why on Earth would we need paper clips in a world dominated by AIs?!) or to harness all the resources of an entire galaxy to explore even farther. The author envisions (mostly catastrophic) scenarios that require some suspension of disbelief, and after a while I decided to read the book mostly as a higher form of science fiction, from which a series of lower-form science fiction books could easily be constructed! Some passages reminded me quite forcibly of Philip K. Dick, less of electric sheep &tc. than of Ubik, where a superpowerful AI turns humans into jar brains satisfied (or ensnared) with simulated virtual realities. Much less of Asimov’s novels, as robots are hardly mentioned, and the three laws of robotics are dismissed as ridiculously simplistic (and too human).

“These occasions grace us with the opportunity to abandon a life of overconfidence and resolve to become better Bayesians.” (p.130)

Another level at which to read the book is as a deep reflection on the notions of intelligence, ethics, and morality. In the human sense. Indeed, before defining and maybe controlling such notions for machines, we should reflect on how they are defined or coded for humans. I do not find the book very successful at this level (but, again, I know nothing!), as even intelligence does not get a clear definition, maybe because it is simply impossible to produce one. The section on embryo selection towards more intelligent newborns made me cringe, not only because of its eugenic overtones, but also because I am not aware of any characterisation so far of gene mutations promoting intelligence. (So far, of course: the book generally considers that any technology or advance that is conceivable now will eventually be conceived. Presumably thanks to our own species’ intelligence.) And of course the arguments get much less clear when ethics and morality are concerned. Which brings me to one question I kept asking myself while going through the book, namely why we would be interested in replicating a human brain and its operation, or in creating a superintelligent and self-enhancing machine, except for the sake of proving we can do it. With a secondary question: why would a superintelligent AI necessarily and invariably want to take over the world, a running assumption throughout the book?

“Within a Bayesian framework, we can think of the epistemology as a prior probability function.” (p.224)

While it is an easy counter-argument (and thus can be easily countered itself), notions that we can control the hegemonic tendencies of a powerful AI by appealing to utility and game theory are difficult to accept. This formalism hardly works for us (irrational) humans, so I see no reason why an inhuman form of intelligence could be thus constrained, as it could just as well pick another form of utility or game theory as it evolves, following an inhuman logic that we cannot even fathom. Everything is possible, not even the sky is a limit… Even the conjunction of super AIs and of nano-technologies, from which we should be protected by the AI(s) if I follow the book (p.131). The difference between the two is actually a matter of perspective, as we can envision a swarm of nano-particles endowed with a collective super-intelligence…

“At a pre-set time, nanofactories producing nerve gas or target-seeking-mosquito-like robots might then burgeon forth simultaneously from every square metre of the globe.” (p.97)

Again, this is a leisurely review with no attempt at depth. If you want a deeper perspective, read for instance Olle Häggström’s review. Or ask Bill Gates, who “highly recommend[s] this book”, as indicated on the book cover. I found the book enjoyable in its systematic exploration of “all” possible scenarios and its connections with (Bayesian) decision theory and learning. It is also well-written, with a pleasant style, rich in references as well as theories, and scholarly in its inclusion of as many aspects as possible, if possibly lacking some backup from a scientific perspective, and somehow too tentative and exploratory. I cannot say I am now frightened by the emergence of amoral super AIs or, on the contrary, reassured that there could be ways of keeping them under human control. (A primary question I did not see addressed, and would have liked to see, is why we should fight this emergence at all. If AIs are much more intelligent than us, shouldn’t we defer to this intelligence? Just as we cannot fathom chickens resisting their (unpleasant) fate, except in films like Chicken Run… Thus completing the loop with the owl.)
