Archive for AIs

weapons of math destruction [book review]

Posted in Books, Kids, pictures, Statistics, University life on December 15, 2016 by xi'an

As I had read many comments and reviews about this book, including one by Arthur Charpentier on Freakonometrics, I eventually decided to buy it with my Amazon Associate savings (!). I approached it with a strong a priori bias, I am afraid, gathered from reading excerpts, comments, and the overall advertising about it. And also because the book reminded me of another quantic swan. Not to mention the title. After reading it, I am afraid I cannot say my assessment has changed much.

“Models are opinions embedded in mathematics.” (p.21)

The core message of this book is that the use of algorithms and AI methods to evaluate and rank people is unsatisfactory and unfair. From predicting recidivism to firing high school teachers, from rejecting loan applications to enticing the most vulnerable categories to enroll in for-profit colleges. Which is indeed unsatisfactory and unfair. Just like using the h-index and citation rankings for promotion or hiring. (The book mentions the controversial hiring of many adjunct faculty by KAU to boost its ranking.) But this conclusion is not enough of an argument to write a whole book. Or even to blame mathematics for the unfairness: as far as I can tell, mathematics has nothing to do with unfairness. Some analysts crunch numbers and produce a score, and then managers make poor decisions. The use of the term mathematics throughout the book is thus quite inappropriate, when the author really means statistics, machine learning, data mining, predictive algorithms, neural networks, &tc. (OK, there is a small section on Operations Research on p.127, but I figure deep learning can bypass the maths.)

machines learning but not teaching…

Posted in Books, pictures on October 28, 2016 by xi'an

A few weeks after the editorial “Algorithm and blues“, Nature offers another (general public) entry on AIs and their impact on society, entitled “The Black Box of AI“. The call is less for open-source AIs and more for accountability, namely that decisions produced by AIs and impacting people one way or another should be accountable, rather than excused by the cop-out “the computer said so”. What the article exposes is how (close to) impossible this is when the algorithms are based on black-box structures like neural networks and other deep-learning methods. While optimised to predict one outcome as accurately as possible given a vector of inputs, hence learning how the inputs impact this output [within the same range of values], these methods do not learn in a more profound way, in that they very rarely explain why the output occurs given the inputs. Hence, given a neural network that predicts go moves or operates a self-driving car, there is a priori no knowledge to be gathered from this network about the general rules of how humans play go or drive cars. This rather obvious feature means that algorithms determining the severity of a sentence cannot be defended as rational, and hence should not be used per se (or else the judicial system exploiting them should be sued). The article is not particularly deep (learning), but it mentions a few machine-learning players like Pierre Baldi, Zoubin Ghahramani and Stéphane Mallat, who comments on the distance between those networks and true (and transparent) explanations. And on the fact that the human brain itself goes mostly unexplained. [I did not know I could include such dynamic images on WordPress!]

To predict and serve?

Posted in Books, pictures, Statistics on October 25, 2016 by xi'an

Kristian Lum and William Isaac published a paper in Significance last week [with the above title] about predictive policing systems, used in the USA and presumably in other countries to predict future crimes [and therefore prevent them]. This sounds like a good premise for a science fiction plot, à la Philip K. Dick [in his short story The Minority Report], but that it is used in real life definitely sounds frightening, especially when the civil rights of the targeted individuals are impacted. (Although some politicians in various democratic countries show increasing contempt for keeping everyone's rights equal…) I also feel terrified by the social determinism behind the very concept of predicting crime from socio-economic data (and possibly genetic characteristics in a near future, bringing us back to the dark days of physiognomy!)

“…crimes that occur in locations frequented by police are more likely to appear in the database simply because that is where the police are patrolling.”

Kristian and William examine in this paper one statistical aspect of police forces relying on crime-prediction software, namely the bias in the data exploited by the software and hence in the resulting policing. (While the accountability of police actions induced by such software is not explored, this is obviously related to last week's Nature editorial, “Algorithm and blues“, which [in short] calls for watchdogs on AIs and decision algorithms.) When the data is gathered from police and justice records, any bias in checks, arrests, and convictions will be reproduced in the data, and hence the software will repeat the bias when targeting potential criminals. As aptly put by the authors, the resulting machine-learning algorithm will be “predicting future policing, not future crime.” Worse, by having no qualms about over-fitting [the more predicted crimes the better], it will increase the bias in the same direction. In the Oakland drug-use example analysed in the article, the police concentrate almost exclusively on a few grid squares of the city, resulting in the above self-predicting fallacy, as illustrated by the toy simulation below. However, I do not see much hope in using other surveys and datasets to eliminate this bias, as they carry their own shortcomings. Even without biases, predicting crimes at the individual level just seems a bad idea, for statistical and ethical reasons.
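
To see how the self-predicting fallacy can arise, here is a toy sketch in R (my own illustration, not Lum and Isaac's actual simulation, with made-up numbers): two districts with identical true crime rates, where patrols are sent to the district showing the most recorded crime, and crime is mostly recorded where patrols are present.

## toy feedback loop: identical districts, patrols follow records,
## records follow patrols (all numbers invented for illustration)
set.seed(1)
true_rate <- c(10, 10)   # same underlying crime intensity in both districts
records   <- c(0, 0)     # cumulative recorded crimes

for (t in 1:50) {
  hot      <- which.max(records)             # "predicted" hot spot
  patrol   <- if (hot == 1) c(.8, .2) else c(.2, .8)
  observed <- rpois(2, true_rate * patrol)   # recording follows patrolling
  records  <- records + observed
}
records / sum(records)   # heavily skewed towards the early "hot" district

Whichever district gets an early (random) lead in recorded crime attracts more patrols, hence more records, and the algorithm ends up predicting policing rather than crime.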

superintelligence [book review]

Posted in Books, Statistics, Travel, University life on November 28, 2015 by xi'an

“The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” I.J. Good

I saw the nice cover of Superintelligence: paths, dangers, strategies by Nick Bostrom [owling at me!] at the OUP booth at JSM this summer—an owl cover that comes with a little philosophical fable about sparrows at the beginning—and, after reading an in-depth review [in English] by Olle Häggström, on Häggström hävdar, asked OUP for a review copy. Which they sent immediately. The reason why I got (so) interested in the book is that I am quite surprised at the level of alarm about the dangers of artificial intelligence (or computer intelligence) taking over. As reported in an earlier blog post, and with no expertise whatsoever in the field, I was not and am not convinced that the uncontrolled and exponential rise of non-human or not-completely-human intelligences is the number one entry in Doomsday scenarios. (As made clear by Radford Neal and Corey Yanofsky in their comments, I know nothing worth reporting about those issues, but remain, presumably irrationally, more concerned about climate change and/or a return to barbarity than by the incoming reign of the machines.) Thus, having not the least competence in either intelligence (!), artificial or human, or in philosophy and ethics, the following comments on the book only reflect my neophyte's reactions. Which means the following rant should be mostly ignored! Except maybe on a rainy day like today…

“The ideal is that of the perfect Bayesian agent, one that makes probabilistically optimal use of available information. This ideal is unattainable (…) Accordingly, one can view artificial intelligence as a quest to find shortcuts…” (p.9)

Overall, the book stands much more at a philosophical and exploratory level than at attempting any engineering or technical assessment. The graphs found within are sketches rather than outputs of carefully estimated physical processes. There is thus hardly any indication of how those super-AIs could be coded towards super-abilities to produce paper clips (but why on Earth would we need paper clips in a world dominated by AIs?!) or to harness all the resources of an entire galaxy to explore even farther. The author envisions (mostly catastrophic) scenarios that require some suspension of disbelief, and after a while I decided to read the book mostly as a higher form of science fiction, from which a series of lower-form science fiction books could easily be constructed! Some passages reminded me quite forcibly of Philip K. Dick, less of electric sheep &tc. than of Ubik, where a superpowerful AI turns humans into jar-bound brains satisfied (or ensnared) with simulated virtual realities. Much less of Asimov's novels, as robots are hardly mentioned, and the three laws of robotics are dismissed as ridiculously simplistic (and too human).

Norbert Wiener on robots

Posted in Books, University life on May 25, 2013 by xi'an

The New York Times published a paper by Norbert Wiener that should have appeared in…1949! In this short paper, Wiener gives his views on the future of computing. For instance, he foresaw a “factory substantially without employees”. He also envisioned learning:

“The possibility of learning may be built in by allowing the taping to be re-established in a new way by the performance of the machine and the external impulses coming into it, rather than having it determined by a closed and rigid setup, to be imposed on the apparatus from the beginning.”

The last part reflects upon the impact automated factories would have on the job market, although Wiener proposes no clear solution. I also note the final paragraph warning to “beware of machines taking over humans”:

“Moreover, if we move in the direction of making machines which learn and whose behavior is modified by experience, we must face the fact that every degree of independence we give the machine is a degree of possible defiance of our wishes. The genie in the bottle will not willingly go back in the bottle, nor have we any reason to expect them to be well disposed to us.”

Coincidentally, Wiener had then just written “The human use of human beings” (1950), which reminded me of the most human human…

the most human human

Posted in Books, University life on May 24, 2013 by xi'an

“…the story of Homo sapiens trying to stake a claim on shifting ground, flanked on both sides by beast and machine, pinned between meat and math.” (p.13)

No typo in the title: this is truly what this book by Brian Christian is called. It was kindly sent to me by my friends from BUY and I realised I could still write with my right hand when commenting in the margins. (I also found the most marvellous proof of a major theorem, but the margin was just too small…) “The most human human: What artificial intelligence teaches us about being alive” is about the Turing test, designed to test whether an unknown interlocutor is a human or a machine. And eventually doomed to fail.

“The final test, for me, was to give the most uniquely human performance I could in Brighton, to attempt a successful defense against the machines.” (p.15)

What I had not realised earlier is that there is a competition run every year on this test, pitting a few AIs against a small group of humans, with the judges (blindly) voting on each entity and thereby selecting the most human computer. And also the most human… human! This competition, called the Loebner Prize, took place in Brighton, this most English of English seaside towns, in 2009, when Brian Christian took part in it (as a human, obviously!).

“Though both [sides] have made progress, the `algorithmic’ side of the field [of computer science] has, from Turing on, completely dominated the more `statistical’ side. That is, until recently.” (p.65)

I enjoyed the book, much more for the questions it brought out than for the answers it proposed, as the latter sounded unnecessarily conflictual to me, i.e., adopting an “us vs. 'em” posture and whining about humanity not fighting hard enough to keep ahead of AIs… I dislike this idea of AIs being the enemy, and of declaring “humanity lost” the year AIs fool the judges. While I enjoy the sci-fi literature where this antagonism is exacerbated, from Blade Runner to Hyperion to Neuromancer, I do not extrapolate those fantasised settings to the real world. For one thing, AIs are designed by humans, so having them win this test (or win against chess grandmasters) is a celebration of the human spirit, not a defeat! For another, we are talking about a fairly limited aspect of “humanity”, namely the ability to sustain a limited discussion with a set of judges on a restricted number of topics. I would be more worried if a humanoid robot managed to fool me by chatting with me for a whole transatlantic flight. For yet another thing, I do not see how this could reflect on the human race as a whole and indicate that it is regressing in any way. At most, it shows the judges were not trying hard enough (the questions reported in The most human human were not that exciting!) and maybe that the human competitors had not intended to be perceived as humans.

“Does this suggest, I wonder, that entropy may be fractal?” (p.239)

Another issue that irked me in the author's perspective is that he trained for and elaborated a complex strategy to win the prize (sorry for the mini-spoiler: in case you did not know, Brian did finish as the most human human). I do not know if this fear of appearing less human than an AI was genuine or if it provided a convenient canvas for writing the book around the philosophical question of what makes us human(s). But it mostly highlights the artificial nature of the test, namely that one has to think in advance about the way conversations will be conducted, rather than engage in a genuine conversation with a stranger. This deserves the least human human label, in retrospect!

“So even if you’ve never heard of [Shannon entropy] before, something in your head intuits [it] every time you open your mouth.” (p.232)
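
For those who have indeed never heard of it before, a quick sketch of my own (not from the book): the Shannon entropy H = −Σᵢ pᵢ log₂ pᵢ measures the average surprise of a source, and the skewed letter frequencies of English make it noticeably more predictable than a uniform alphabet. The frequencies below are rough textbook values, renormalised to sum to one.

## Shannon entropy, in bits
entropy <- function(p) -sum(p * log2(p), na.rm = TRUE)
entropy(rep(1/26, 26))   # uniform alphabet: log2(26) ≈ 4.70 bits per letter
## approximate English letter frequencies, from e down to z
english <- c(.127,.091,.082,.075,.070,.067,.063,.061,.060,.043,.040,.028,
             .028,.024,.024,.022,.020,.020,.019,.015,.010,.008,.002,.002,
             .001,.001)
entropy(english / sum(english))   # ≈ 4.2 bits: English is more predictable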

The book spends a large amount of text/time on the victory of Deep Blue over Garry Kasparov (or, rather, on the defeat of Kasparov by Deep Blue), bemoaning it as the end of a golden age. I do not see the problem (and preferred Nate Silver's approach). The design of the Deep Blue software was a monument to the human mind, the victory did not diminish Kasparov, who remains one of the greatest chess players ever, and I am not aware it changed chess playing (except when some players started cheating with the help of hidden computers!). The fact that players started learning more and more chess openings was a trend long predating this match. As noted in The most human human, checkers had to change its rules once a complete analysis of the game had led to a status quo in play. And this was before the computer era: in Glasgow, Scotland, in 1863. Just to draw another comparison: I like playing Sudoku, and the fact that I designed a poor R code to solve Sudokus (a minimal version of such a solver is sketched below) does not prevent me from playing, while my playing sometimes leads to improving the R code. The game of go could have been mentioned as well, since it has proven harder for AIs to master. But there is no reason this should not happen in a more or less near future…
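
For illustration, here is what a bare-bones version of such a solver can look like in R; this is a minimal backtracking sketch of my own, not the “poor R code” mentioned above.

## minimal backtracking Sudoku solver: the grid is a 9x9 matrix
## with 0 for empty cells; returns a completed grid, or NULL if
## the puzzle has no solution
solve_sudoku <- function(grid) {
  empty <- which(grid == 0, arr.ind = TRUE)
  if (nrow(empty) == 0) return(grid)        # no empty cell left: solved
  i <- empty[1, 1]; j <- empty[1, 2]
  box_r <- 3 * ((i - 1) %/% 3) + 1:3        # rows of the 3x3 box
  box_c <- 3 * ((j - 1) %/% 3) + 1:3        # columns of the 3x3 box
  used  <- c(grid[i, ], grid[, j], grid[box_r, box_c])
  for (v in setdiff(1:9, used)) {           # try each legal value
    grid[i, j] <- v
    sol <- solve_sudoku(grid)               # recurse on the rest
    if (!is.null(sol)) return(sol)
  }
  NULL                                      # dead end: backtrack
}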

“…we are ordering appetizers and saying something about Wikipedia, something about Thomas Bayes, something about vegetarian dining…” (p.266)

While the author produces an interesting range of arguments about language, intelligence, and humanity, he misses a part about the statistical modelling of languages, apart from a very brief mention of a Markov dependence, which would have connected with the AI perspective (see the sketch below). The overall flow is nice but somewhat meandering and lacking in substance, especially in the last chapters. On a more minor level, I also find that there are too many quotes from Hofstadter's Gödel, Escher, Bach, as well as too many references to pop culture. I was surprised to find Thomas Bayes mentioned in the above quote, as he did not appear earlier, except in a back-note.
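
As a flavour of what such a part could have covered, here is the simplest statistical model of language, a first-order Markov chain on characters (a sketch of mine, not taken from the book, with a toy training text):

## estimate character transition probabilities from a text, then
## generate new "language" from the fitted chain
text  <- tolower("the most human human what artificial intelligence teaches us about being alive")
chars <- strsplit(text, "")[[1]]
trans <- table(head(chars, -1), tail(chars, -1))   # bigram counts
trans <- trans / rowSums(trans)                    # transition probabilities

generate <- function(n, start = "t") {
  out <- start
  for (k in 2:n) {
    prev <- out[k - 1]
    if (!prev %in% rownames(trans)) break          # unseen state: stop
    out[k] <- sample(colnames(trans), 1, prob = trans[prev, ])
  }
  paste(out, collapse = "")
}
generate(50)   # gibberish with English-like local statistics

Trained on a large corpus rather than a single sentence, the same estimation step already produces the eerily English-sounding gibberish Shannon exhibited in his 1948 paper.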

“A girl on the stairs listens to her father / Beat up her mother” C.D. Wright, Tours

As a side note to Andrew, no mention was made of Alan Turing's chess rules in the book, even though both Turing and chess were central themes. I actually wondered if a Turing test could apply to AIs playing Turing's chess: they would have to be carried by a computer small enough that the robot could run around the house in a reasonable time. (I do not think chess-boxing should be considered in this case!)