Archive for Philip K. Dick

blade runner 2049

Posted in Books, Kids, pictures on December 10, 2017 by xi'an

As Blade Runner 2049 was shown at a local cinema in a Nuit du Cinéma special, my daughter and I took the opportunity to see the sequel to Blade Runner, despite the late hour. And we both came back quite enthusiastic about it! Maybe the plot is a bit thin at times, with too many coincidences and the evil ones being too obviously evil, but the rendering of this future of the former future LA of the original Blade Runner is amazingly complex, opening many threads of potential explanations. And many more questions, which is great. With fascinating openings into almost philosophical questions like the impossible frontier between humans and AIs or the similarly impossible definition of self… Besides, the filming, with its multiplicity of (drone) views, the use of light, from blurred white to glaring yellow and back to snow white, the photography, and the musical track, almost overwhelming and more complex than Vangelis’ original, are all massively impressive. As for the quintessential question of how the sequel compares with the original film, I do not think it makes much sense: for one thing, the sequel would not exist without the original, and the filming has evolved with the era, from the claustrophobic, almost steampunk film by Scott to this post-apocalyptic rendering by Villeneuve, both movies relating to Philip K. Dick’s book in rather different ways (if fortunately avoiding sheep and goats!).

blade runner [book review]

Posted in Books, Kids on November 12, 2017 by xi'an

As the new Blade Runner 2049 film is now out, I realised I had never read the original Philip K. Dick novel, Do Androids Dream of Electric Sheep?… So, when I came by it in the wonderful Libreria Marcopolo in Venezia last month, with some time to kill waiting for a free dinner table nearby (and a delicious plate of spaghetti al nero di seppia!), I bought the book at last and read it within a couple of evenings. (Plus a trip back from the airport.) While the book is fascinating, both in its construction and in its connection with the first Blade Runner movie, I am somehow disappointed now that I have finished it, as I was expecting a somewhat deeper story. [Warning: spoilers to follow!] On the one hand, the post-nuclear California and the hopeless life of those who cannot emigrate to Mars are bleaker than in Ridley Scott’s film, with Deckard’s yearning for real animals (rather than his electric sheep) a major focus of the book. And only of the book. For a reason that remains unclear to me, especially because Deckard grows more and more empathic towards androids, and not only towards the ambiguous and fascinating Rachael, while being less and less convinced of his ability to “retire” rogue androids… And of distinguishing between humans and androids. And also because he ends up nurturing a toad he spotted in a deserted location, believing it to be a real animal. The background of the society, its reliance on brainless reality shows and on a religion involving augmented reality, are all great components of the novel, although they feel a bit outdated fifty years later. (And later than the date the story is supposed to take place.) The human sheltering and helping the fugitive androids is a “chickenhead”, the term used in the book for the challenged humans unable to pass the tests for emigrating to Mars. Rather than a robot designer and geek as in the film.

On the other hand, the quasi- or near-humanity of the androids hunted by Deckard is much better rendered in the film. (Maybe simply because it is a film and hence effortlessly conveys this humanity of actors playing androids. Just like C3PO in Star Wars!) Its connections with expressionism à la Fritz Lang and the noir movies of the 1950s are almost enough to make it a masterpiece. In the book, the androids are much more inconsistent, with repeated hints that they miss some parts of the human experience. There is no lengthy fight between Deckard and the superior (android) Roy. No final existentialist message from the latter. And no rescuing of Deckard that makes the android stand ethically (and literally) above him. The only android with some depth is Rachael, albeit with confusing scenes. (If not as confusing as the sequence at the alternative police station, which just does not make sense. Unless Deckard himself is an android, a possibility hardly envisioned in the book.) While Scott’s Blade Runner may seem to hammer its message a wee bit too heavily, it does much better at preserving ambiguity about who is human and who is not, and at exploring the murky moral ground of humans versus androids. In fine, I remain more impacted by the multiple dimensions, perceptions, and uncertainties in Blade Runner. Than in Philip K. Dick’s novel. Still worth reading or re-reading against watching or re-watching these movies…

[Some book covers on this page are taken from a webpage with 23 alternative covers for Do Androids Dream of Electric Sheep?.]

sex, lies, & brain scans [not a book review]

Posted in Statistics on February 11, 2017 by xi'an

“Sahakian and Gottwald discuss the problem of “reverse inference” regrettably late in the book.”

In the book review section of Nature [Jan 12, 2017 issue], there was a long coverage of the book sex, lies, & brain scans: How fMRI Reveals What Really Goes on in our Minds, by Barbara J. Sahakian and Julia Gottwald. While I have not read the book (which is not even out on Amazon yet), I found some mentions of associating brain patterns with criminal behaviour quite puzzling: “neuroimaging will probably be an imperfect predictor of criminal behaviour”. Actually, much more than puzzling: both frightening, with its Minority Report prospects [once again quoted as a movie rather than Philip K. Dick’s novel!], and bordering on the irrational, for associating rule-breaking with a brain pattern. Of course, this is just an impression from reading a book review, and the attempts may be restricted to psychological diseases rather than social engineering and brain policing, but if it is the latter, as suggested by the review, it is downright scary!

To predict and serve?

Posted in Books, pictures, Statistics on October 25, 2016 by xi'an

Kristian Lum and William Isaac published a paper in Significance last week [with the above title] about predictive policing systems used in the USA, and presumably in other countries, to predict future crimes [and therefore prevent them]. This sounds like a good idea for a science-fiction plot, à la Philip K. Dick [in his short story The Minority Report], but that it is used in real life definitely sounds frightening, especially when the civil rights of the targeted individuals are impacted. (Although some politicians in different democratic countries show increasing contempt for keeping everyone’s rights equal…) I also feel terrified by the social determinism behind the very concept of predicting crime from socio-economic data (and possibly genetic characteristics in a near future, bringing us back to the dark days of physiognomy!).

“…crimes that occur in locations frequented by police are more likely to appear in the database simply because that is where the police are patrolling.”

Kristian and William examine in this paper one statistical aspect of police forces relying on crime-prediction software, namely the bias in the data exploited by the software and in the resulting policing. (While the accountability of police actions induced by such software is not explored, this is obviously related to last week’s Nature editorial, “Algorithm and blues“, which [in short] calls for watchdogs on AIs and decision algorithms.) When the data is gathered from police and justice records, any bias in checks, arrests, and condemnations will be reproduced in the data and hence will repeat the bias in targeting potential criminals. As aptly put by the authors, the resulting machine-learning algorithm will be “predicting future policing, not future crime.” Worse, by having no reservation about over-fitting [the more predicted crimes the better], it will increase the bias in the same direction. In the Oakland drug-use example analysed in the article, the police concentrate almost exclusively on a few grid squares of the city, resulting in the above self-predicting fallacy. However, I do not see much hope in using other surveys and datasets to eliminate this bias, as they also carry their own shortcomings. Even without biases, predicting crimes at the individual level just seems a bad idea, for statistical and ethical reasons.
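To make this self-reinforcing mechanism concrete, here is a toy sketch of my own (not from the paper, with made-up names and numbers): two districts with identical true crime rates, a record starting with a single extra historical entry for district 0, and all patrols sent each day to the district with the most recorded crimes. Since only patrolled crimes enter the database, the initial imbalance snowballs.

```python
def simulate_feedback(true_rates=(0.5, 0.5), patrols=10, steps=20):
    # Two districts with the SAME true crime rate; the record starts
    # with one extra historical entry for district 0.
    recorded = [2.0, 1.0]
    for _ in range(steps):
        # "hot spot" policing: all patrols go to the district with
        # the largest recorded count...
        target = recorded.index(max(recorded))
        # ...and only crimes in the patrolled district are recorded
        recorded[target] += patrols * true_rates[target]
    return recorded

print(simulate_feedback())  # [102.0, 1.0]
```

Despite identical underlying rates, the database ends up showing district 0 as a hundred times more criminal, i.e. the software is indeed predicting future policing, not future crime.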

superintelligence [book review]

Posted in Books, Statistics, Travel, University life on November 28, 2015 by xi'an

“The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” I.J. Good

I saw the nice owl cover of Superintelligence: paths, dangers, strategies by Nick Bostrom [owling at me!] at the OUP booth at JSM this summer (a cover that comes with a little philosophical fable about sparrows at the beginning of the book) and, after reading an in-depth review [in English] by Olle Häggström on Häggström hävdar, asked OUP for a review copy. Which they sent immediately. The reason why I got (so) interested in the book is that I am quite surprised at the level of alertness about the dangers of artificial intelligence (or computer intelligence) taking over. As reported in an earlier post, and with no expertise whatsoever in the field, I was not and am not convinced that the uncontrolled and exponential rise of non-human or not-completely-human intelligences is the number one entry in doomsday scenarios. (As made clear by Radford Neal and Corey Yanovsky in their comments, I know nothing worth reporting about those issues, but remain, presumably irrationally, more concerned about climate change and/or a return to barbarity than by the incoming reign of the machines.) Thus, having no competence in the least in either intelligence (!), artificial or human, or in philosophy and ethics, the following comments on the book only reflect my neophyte’s reactions. Which means the following rant should be mostly ignored! Except maybe on a rainy day like today…

“The ideal is that of the perfect Bayesian agent, one that makes probabilistically optimal use of available information. This ideal is unattainable (…) Accordingly, one can view artificial intelligence as a quest to find shortcuts…” (p.9)
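Bostrom’s unattainable ideal can at least be written down in miniature. Here is a sketch of my own (an illustration, not anything from the book): exact Bayesian updating over a tiny discrete hypothesis set, the kind of computation whose cost explodes with the size of the hypothesis space, hence the “shortcuts”.

```python
def bayes_update(prior, likelihood, data):
    # prior: dict mapping hypothesis -> prior probability
    # likelihood: function (hypothesis, datum) -> P(datum | hypothesis)
    post = dict(prior)
    for x in data:
        # multiply by the likelihood of each datum...
        post = {h: p * likelihood(h, x) for h, p in post.items()}
        # ...and renormalise so the posterior sums to one
        z = sum(post.values())
        post = {h: p / z for h, p in post.items()}
    return post

# A coin with unknown heads probability in {0.3, 0.5, 0.7}, uniform
# prior, after observing heads, heads, tails, heads:
posterior = bayes_update(
    {0.3: 1 / 3, 0.5: 1 / 3, 0.7: 1 / 3},
    lambda h, x: h if x == "H" else 1 - h,
    ["H", "H", "T", "H"],
)
print(posterior)
```

Exact updating needs one likelihood evaluation per hypothesis per datum, which is hopeless once the hypothesis space is anything like a space of world models; approximate inference is one family of the shortcuts the quote alludes to.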

Overall, the book stands much more at a philosophical and exploratory level than at any attempt at an engineering or technical assessment. The graphs found within are sketches rather than outputs of carefully estimated physical processes. There is thus hardly any indication of how those super-AIs could be coded towards super-abilities to produce paper clips (but why on Earth would we need paper clips in a world dominated by AIs?!) or to harness all resources of an entire galaxy to explore even farther. The author envisions (mostly catastrophic) scenarios that require some suspension of disbelief, and after a while I decided to read the book mostly as a higher form of science fiction, from which a series of lower-form science-fiction books could easily be constructed! Some passages reminded me quite forcibly of Philip K. Dick, less of electric sheep &tc. than of Ubik, where a superpowerful AI turns humans into brains in jars, satisfied (or ensnared) with simulated virtual realities. Much less of Asimov’s novels, as robots are hardly mentioned. And the three laws of robotics are dismissed as ridiculously simplistic (and too human).