Archive for artificial intelligence

minibatch acceptance for Metropolis-Hastings

Posted in Books, Statistics on January 12, 2018 by xi'an

An arXival that appeared last July by Seita, Pan, Chen, and Canny, and that relates to my current interest in speeding up MCMC. And to 2014 papers by Korattikara et al., and Bardenet et al. It has since been published in Uncertainty in AI. The authors claim that their method requires less data per iteration than earlier ones…

“Our test is applicable when the variance (over data samples) of the log probability ratio between the proposal and the current state is less than one.”

By test, the authors mean a mini-batch formulation of the Metropolis-Hastings acceptance ratio in the (special) setting of iid data. First they use Barker’s version of the acceptance probability instead of Metropolis’. Second, they use a Gaussian approximation to the distribution of the logarithm of the Metropolis ratio for the minibatch, while the Barker acceptance step corresponds to comparing a logistic perturbation of the logarithm of the Metropolis ratio against zero. Which amounts to comparing the logarithm of the Metropolis ratio for the minibatch, perturbed by a logistic minus Normal variate. (The cancellation of the Normal in eqn (13) is a form of fiducial fallacy, where the Normal variate has two different meanings. In other words, the difference of two Normal variates is not equal to zero.) However, the next step escapes me, as the authors seek to optimise the distribution of this logistic minus Normal variate, which I thought was uniquely defined as such a difference. Another constraint is that the estimated variance of the log-likelihood ratio must fall below one. (Why one?) The argument is that the average of the individual log-likelihoods is approximately Normal by virtue of the Central Limit Theorem. Even when randomised. While the illustrations on a Gaussian mixture and on a logistic regression demonstrate huge gains in computational time, it is unclear to me to what extent one can trust the approximation for a given model and sample size…

Children of Time [book review]

Posted in Books, pictures, Travel on October 8, 2017 by xi'an

I came by this book in the common room of the mathematics department of the University of Warwick, which I visit regularly during my stays there, for it enjoys a book sharing box where I leave the books I’ve read (and do not want to carry back to Paris) and where I check for potential catches… One of these books was Tchaikovsky’s children of time, a great space-opera novel à la Arthur C Clarke, which got the 2016 Arthur C Clarke award, deservedly so (even though I very much enjoyed the long way to a small angry planet, Tchaikovsky’s book is much more of an epic cliffhanger where the survival of an entire race is at stake). The children of time are indeed the last remnants of the human race, surviving in an artificial sleep aboard an ancient spaceship that irremediably deteriorates. Until there is no solution but landing on a terraformed planet created eons ago. And defended by an AI spanned (or spammed) by the scientist in charge of the terra-formation, who created a virus that speeds up evolution, with unintended consequences. Given that the strength of the book relies on these consequences, I cannot go into much detail about the alternative pathway to technology (incl. artificial intelligence) followed by the inhabitants of this new world, and even less about the conclusive chapters that make up for a rather slow progression towards this final confrontation. An admirable and deep book I will most likely bring back to the common room on my next trip to Warwick! (As an aside, I wonder if the title was chosen in connection with Goya’s picture of Chronus [Time] devouring his children…)

weapons of math destruction [fan]

Posted in Statistics on September 20, 2017 by xi'an

As a [new] member of Parliament, Cédric Villani is now in charge of a committee on artificial intelligence, whose goal is to assess the positive and negative sides of AI. And he refers in the Le Monde interview below to Weapons of Math Destruction as impacting his views on the topic! Let us hope Superintelligence is not next on his reading list…

the incomprehensible challenge of poker

Posted in Statistics on April 6, 2017 by xi'an

When reading in Nature about two deep learning algorithms winning at a version of poker within a few weeks of each other, I came back to my “usual” wonder about poker, as I cannot understand it as a game. (Although I can see the point, albeit dubious, in playing to win money.) And [definitely] correlatively do not understand the difficulty in building an AI that plays the game. [I know, I know nothing!]

career advice by Cédric Villani

Posted in Kids, pictures, Travel, University life on January 26, 2017 by xi'an

Le Monde has launched a series of tribunes proposing career advice from 35 personalities, among whom this week (Jan. 4, 2017) Cédric Villani. His suggestion for younger generations is to invest in artificial intelligence and machine learning, while acknowledging this is still a research topic, then switching to robotics [although this is mostly a separate field]. The most powerful advice in this interview is to start with a specialisation when aiming at a large spectrum of professional opportunities, gaining openness from exchanges with people and places. And cultures. Concluding with a federalist statement I fully share.

weapons of math destruction [book review]

Posted in Books, Kids, pictures, Statistics, University life on December 15, 2016 by xi'an

As I had read many comments and reviews about this book, including one by Arthur Charpentier, on Freakonometrics, I eventually decided to buy it from my Amazon Associate savings (!). With a strong a priori bias, I am afraid, gathered from reading some excerpts, comments, and the overall advertising about it. And also because the book reminded me of another quantic swan. Not to mention the title. After reading it, I am afraid I cannot say my assessment has changed much.

“Models are opinions embedded in mathematics.” (p.21)

The core message of this book is that the use of algorithms and AI methods to evaluate and rank people is unsatisfactory and unfair. From predicting recidivism to firing high school teachers, from rejecting loan applications to enticing the most challenged categories to enroll in for-profit colleges. Which is indeed unsatisfactory and unfair. Just like using the h index and citation ranking for promotion or hiring. (The book mentions the controversial hiring of many adjunct faculty by KAU to boost its ranking.) But this conclusion is not enough of an argument to write a whole book. Or even to blame mathematics for the unfairness: as far as I can tell, mathematics has nothing to do with unfairness. Some analysts crunch numbers, produce a score, and then managers make poor decisions. The use of mathematics throughout the book is thus completely inappropriate, when the author means statistics, machine learning, data mining, predictive algorithms, neural networks, &tc. (OK, there is a small section on Operations Research on p.127, but I figure deep learning can bypass the maths.)

machines learning but not teaching…

Posted in Books, pictures on October 28, 2016 by xi'an

A few weeks after the editorial “Algorithms and Blues“, Nature offers another (general public) entry on AIs and their impact on society, entitled “The Black Box of AI“. The call is less on open source AIs and more on accountability, namely the fact that decisions produced by AIs and impacting people one way or another should be accountable. Rather than excused by the way out “the computer said so”. What the article exposes is how (close to) impossible this is when the algorithms are based on black-box structures like neural networks and other deep-learning algorithms. While optimised to predict as accurately as possible one outcome given a vector of inputs, hence learning in that way how the inputs impact this output [in the same range of values], these methods do not learn in a more profound way, in that they very rarely explain why the output occurs given the inputs. Hence, given a neural network that predicts Go moves or operates a self-driving car, there is a priori no knowledge to be gathered from this network about the general rules of how humans play Go or drive cars. This rather obvious feature means that algorithms that determine the severity of a sentence cannot be argued as being rational and hence should not be used per se (or that the judicial system exploiting them should be sued). The article is not particularly deep (learning), but it mentions a few machine-learning players like Pierre Baldi, Zoubin Ghahramani and Stéphane Mallat, who comments on the distance between those networks and true (and transparent) explanations. And on the fact that the human brain itself goes mostly unexplained. [I did not know I could include such dynamic images on WordPress!]