Archive for Bill Gates

“a rare blend of monster raving egomania and utter batshit insanity”

Posted in Books, pictures, University life on November 12, 2020 by xi'an

“I don’t object to speculation or radical proposals, even to radical, grandiose speculative proposals; I just want there to be arguments to back them up, reasons to take them seriously. I don’t object to scientists displaying personality in their work, or staking out positions in vigorous opposition to much of the opinion in their field, and engaging in heated debate; I do object to ignoring criticism and claiming credit for commonplaces, especially before popular audiences who won’t pick up on it.”

A recent post by Andrew on Stephen Wolfram's (mega) egomania led to a much older post by Cosma Shalizi reviewing the perfectly insane 5.57 pounds of A New Kind of Science. An exhilarating review, trashing the pretentious self-celebration of a void paradigm shift advanced by Wolfram and its abysmal lack of academic rigour, and showing anew that a book recommended by Bill Gates is not necessarily a great book. (Note that A New Kind of Science is available for free on-line.)

“Let me try to sum up. On the one hand, we have a large number of true but commonplace ideas, especially about how simple rules can lead to complex outcomes, and about the virtues of toy models. On the other hand, we have a large mass of dubious speculations (many of them also unoriginal). We have, finally, a single new result of mathematical importance, which is not actually the author’s. Everything is presented as the inspired fruit of a lonely genius, delivering startling insights in isolation from a blinkered and philistine scientific community.”

When I bought this monstrous book (eons before I started the 'Og!), I did not get much further into it than the first series of cellular automata screenshots that fill page after page. And quickly if carefully dropped it by my office door in the corridor. Where it stayed for a few days until one of my colleagues most politely asked me if he could borrow it. (This happens all the time: once I have read, or given up on, a book I do not imagine ever reopening, I put it in the coffee room or, for the least recommended books, on the floor by my door, and almost invariably whoever is interested will first ask me for permission. Which is very considerate and leads to pleasant discussions on the said books. Only recently did the library set shelves outside its doors for dropping books free for the taking, but even there I sometimes get colleagues wondering [rightly] if I was the one abandoning a particular book there.)

“I am going to keep my copy of A New Kind of Science, sitting on the same shelf as Atlantis in Wisconsin, The Cosmic Forces of Mu, Of Grammatology, and the people who think the golden ratio explains the universe.”

In case the review is not enough to lighten up your day, in these gloomy times, there is a wide collection of them from the 2000s, although most of the links have turned obsolete. (The Maths Reviews review has not.) As presumably will this very post about an eighteen-year-old non-event…

superintelligence [book review]

Posted in Books, Statistics, Travel, University life on November 28, 2015 by xi'an

“The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” I.J. Good

I saw the nice cover of Superintelligence: paths, dangers, strategies by Nick Bostrom [owling at me!] at the OUP booth at JSM this summer—a nice owl cover that comes with a little philosophical fable at the beginning about sparrows—and, after reading an in-depth review [in English] by Olle Häggström, on Häggström hävdar, asked OUP for a review copy. Which they sent immediately. The reason why I got (so) interested in the book is that I am quite surprised at the level of alertness about the dangers of artificial intelligence (or computer intelligence) taking over. As reported in an earlier blog, and with no expertise whatsoever in the field, I was not and am not convinced that the uncontrolled and exponential rise of non-human or non-completely human intelligences is the number one entry in doomsday scenarios. (As made clear by Radford Neal and Corey Yanovsky in their comments, I know nothing worth reporting about those issues, but remain, presumably irrationally, more concerned about climate change and/or a return to barbarity than about the incoming reign of the machines.) Thus, having no competence in the least in either intelligence (!), artificial or human, or in philosophy and ethics, the following comments on the book only reflect my neophyte's reactions. Which means the following rant should be mostly ignored! Except maybe on a rainy day like today…

“The ideal is that of the perfect Bayesian agent, one that makes probabilistically optimal use of available information.  This idea is unattainable (…) Accordingly, one can view artificial intelligence as a quest to find shortcuts…” (p.9)
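As an aside, and purely as my own toy illustration rather than anything found in the book, the quoted ideal of the "perfect Bayesian agent" boils down to exact posterior updating: trivial over a handful of hypotheses, hopeless over any realistic hypothesis space, hence the "quest to find shortcuts". A minimal sketch, with a made-up coin-bias example:

```python
# A minimal sketch (not from the book) of the "perfect Bayesian agent" idea:
# exact posterior updating over a tiny discrete hypothesis space.
# The three coin-bias hypotheses, the uniform prior and the observed
# sequence below are all made up purely for illustration.

import numpy as np

# Hypotheses: possible biases of a coin, with a uniform prior.
biases = np.array([0.2, 0.5, 0.8])
prior = np.full(len(biases), 1 / len(biases))

# Observed data: 1 = heads, 0 = tails.
data = [1, 1, 0, 1, 1]

posterior = prior.copy()
for x in data:
    likelihood = biases if x == 1 else 1 - biases
    posterior *= likelihood           # Bayes' rule, unnormalised
    posterior /= posterior.sum()      # renormalise

print(dict(zip(biases, posterior.round(3))))
# Exact updating is trivial here, but the same computation over realistic
# hypothesis spaces blows up combinatorially -- which is (roughly) why
# practical AI becomes "a quest to find shortcuts", i.e. approximations.
```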

Overall, the book stands much more at a philosophical and exploratory level than at attempting any engineering or technical assessment. The graphs found within are sketches rather than outputs of carefully estimated physical processes. There is thus hardly any indication of how those super AIs could be coded towards super abilities to produce paper clips (but why on Earth would we need paper clips in a world dominated by AIs?!) or to involve all resources from an entire galaxy to explore even farther. The author envisions (mostly catastrophic) scenarios that require some suspension of disbelief, and after a while I decided to read the book mostly as a higher form of science fiction, from which a series of lower-form science fiction books could easily be constructed! Some passages reminded me quite forcibly of Philip K. Dick, less of electric sheep &tc. than of Ubik, where a superpowerful AI turns humans into jar brains satisfied (or ensnared) with simulated virtual realities. Much less of Asimov's novels, as robots are hardly mentioned. And the three laws of robotics are dismissed as ridiculously simplistic (and too human).
